image_url
stringlengths
113
131
tags
list
discussion
list
title
stringlengths
8
254
created_at
stringlengths
24
24
fancy_title
stringlengths
8
396
views
int64
73
422k
null
[ "replication" ]
[ { "code": "2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] **Starting rollback due to OplogStartMissing** : Our last optime fetched: { ts: Timestamp(1663318803, 4), t: 95 }. source's GTE: { ts: Timestamp(1663318861, 1), t: 96 }2022-09-19T20:30:54.807+0000 I REPL [rsBackgroundSync] Choosing primary as sync source candidate: mongodb-2.mongodb.amazon.svc.cluster.local:27017\n2022-09-19T20:30:54.808+0000 I CONNPOOL [RS] Connecting to mongodb-2.mongodb.amazon.svc.cluster.local:27017\n2022-09-19T20:30:54.827+0000 I REPL [rsBackgroundSync] Changed sync source from empty to mongodb-2.mongodb.amazon.svc.cluster.local:27017\n2022-09-19T20:30:54.919+0000 I NETWORK [conn249] received client metadata from 127.0.0.1:44348 conn249: { application: { name: \"MongoDB Shell\" }, driver: { name: \"MongoDB Internal Client\", version: \"4.2.11\" }, os: { type: \"Linux\", name: \"Ubuntu\", architecture: \"x86_64\", version: \"16.04\" } }\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] Starting rollback due to OplogStartMissing: Our last optime fetched: { ts: Timestamp(1663318803, 4), t: 95 }. source's GTE: { ts: Timestamp(1663318861, 1), t: 96 }\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] Replication commit point: { ts: Timestamp(0, 0), t: -1 }\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] Rollback using 'recoverToStableTimestamp' method.\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] Scheduling rollback (sync source: mongodb-2.mongodb.amazon.svc.cluster.local:27017)\n2022-09-19T20:30:54.921+0000 I ROLLBACK [rsBackgroundSync] transition to ROLLBACK\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] State transition ops metrics: { lastStateTransition: \"rollback\", userOpsKilled: 0, userOpsRunning: 127 }\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] transition to ROLLBACK from SECONDARY\n2022-09-19T20:30:54.924+0000 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 30\n2022-09-19T20:30:54.924+0000 I NETWORK [rsBackgroundSync] Skip closing connection for connection # 15\n2022-09-19T20:30:54.924+0000 I ROLLBACK [rsBackgroundSync] Waiting for all background operations to complete before starting rollback\n2022-09-19T20:30:54.924+0000 I ROLLBACK [rsBackgroundSync] Finished waiting for background operations to complete before rollback\n2022-09-19T20:30:54.924+0000 I ROLLBACK [rsBackgroundSync] finding common point\n2022-08-01T02:30:21.915+0000 I REPL [replication-0] We are too stale to use mongodb-0.mongodb.amazon.svc.cluster.local:27017 as a sync source. Blacklisting this sync source because our last fetched timestamp: Timestamp(1659314292, 1) is before their earliest timestamp: Timestamp(1659320029, 301) for 1min until: 2022-08-01T02:31:21.915+0000\n", "text": "We use MongoDB 4.2 as a replica-set on three different nodes . We have one primary and two secondary nodes. We are trying to perform Chaos testing (as in randomly rebooting nodes) to see how the data pipelines are performing well or not, as in with two nodes if rest of 2 MongoDB nodes are able to form cluster cluster or not .On two different MongoDB replica-sets we see different issues. And I am trying to understand the differences between the two.Issue 12022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] **Starting rollback due to OplogStartMissing** : Our last optime fetched: { ts: Timestamp(1663318803, 4), t: 95 }. source's GTE: { ts: Timestamp(1663318861, 1), t: 96 }To give more context around this here are error logs before and after that.Issue2There are two different classes of errors. I am trying to understand the difference.", "username": "Vineel_Yalamarthi" }, { "code": "", "text": "Hi @Vineel_Yalamarthi and welcome to the MongoDB community. In the first case it sounds like you shut down a primary member that had writes that had not been replicated to the secondary member. This node then came online and got repromoted to primary status and MongoDB rolled those write back to keep data consistent with the writes that happened while this member was not primary. You can learn more about rollbacks in the documentation.In the second case it seems like the secondary member was down longer than what the other members oplogs could hold for data. When the secondary came back online it could not reconcile its local oplog with where the other members were and therefore could not catch itself up. If that’s the case, you would need to resync the member.I am not sure how chaotic you’re being in your testing, what type of resources these machines have, how active the machines are, etc, but rollbacks can happen and are expected if the primary node goes down. As for stale members, if the secondary is down longer than the other member’s oplog window, then it will not be able to manually catch up. You can resize your oplog if you find the default to be too small. Note that the oplog is a capped collection of a given size. How long of a window the oplog stores is dependent on the amount of writes you do on the system. If you have a heavy write system, you might want to size bigger than the default to hold operations for a longer period of time.", "username": "Doug_Duncan" }, { "code": "", "text": "@Doug_Duncan I understood your explanation for the second case. Thanks a lot.Can you elaborate how and why roll backs lead to the error we are getting and how can we avoid that and what should we do to recover from that. I am more interesting in understanding the root cause.Let’s say at time t1 we have X, Y, Z which are mongo instances and X is primary. Now when X is shutdown and lets say Y took over the role which is PRIMARY.After a long time, lets say X came back online and found that it used to be PRIMARY and that it has writes that are yet to be written to the SECONDARY.From here how does it lead to the error “OplogStartMissing” I am confused. Thanks in advance.", "username": "Vineel_Yalamarthi" }, { "code": "{ ts: Timestamp(0, 0), t: -1 }storage.oplogMinRetentionHours", "text": "Welcome to the MongoDB Community @Vineel_Yalamarthi!As @Doug_Duncan mentioned, whatever chaos testing you are doing has made this replica set member too stale to rejoin the replica set.Starting rollback due to OplogStartMissing : Our last optime fetched: { ts: Timestamp(1663318803, 4), t: 95 }. source’s GTE: { ts: Timestamp(1663318861, 1), t: 96 }\n…\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] Replication commit point: { ts: Timestamp(0, 0), t: -1 }\n2022-09-19T20:30:54.921+0000 I REPL [rsBackgroundSync] Rollback using ‘recoverToStableTimestamp’ method.This member was restarted before writing a stable replication commit point ({ ts: Timestamp(0, 0), t: -1 } is a sentinel value) and does not have an oplog entry in common with the sync source. There is an attempt to Rollback by recovering to a stable timestamp.We are too stale to use X:27017 as a sync source.The latest oplog entry on this member is older than the oldest oplog entry for the sync source. Unless you happen to have a sync source with a larger oplog available (perhaps another member down during your chaos testing), you will have to Resync this member.To try to avoid members becoming stale while offline you can:increase your oplog sizesuse majority write concern (the default in MongoDB 5.0+ compatible drivers) to ensure commits propagate to a majority of replica set membersupgrade to MongoDB 4.4 or newer and configure storage.oplogMinRetentionHours to guarantee an oplog window (with the caveat that the oplog will grow without constraint to meet this configuration setting)MongoDB 4.2 (first released in Aug 2019) is currently the oldest non-EOL server release series. I recommend planning an upgrade to a newer server release series to take advantage of improvements and features that have been added in the last 3 years.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X The second one is clear. Thanks so much for taking time to answer my question.", "username": "Vineel_Yalamarthi" } ]
Chaos testing MongoDB Replica-set and Oplog issues
2022-09-20T02:20:30.361Z
Chaos testing MongoDB Replica-set and Oplog issues
2,448
null
[ "compass", "data-api" ]
[ { "code": "{ \"date\": { $gt:ISODate('2022-06-05') } }{\n \"dataSource\": \"{{DATA_SOURCE}}\",\n \"database\": \"{{DATABASE}}\",\n \"collection\": \"{{COLLECTION}}\",\n \"filter\": {\n \"date\": {\"$gt\": ('2022-06-05')}\n }\n}\n", "text": "Hello everyone,\nI have a collection that I want to query on dates, and I need to pass themUnder mongo compass in filter I pass { \"date\": { $gt:ISODate('2022-06-05') } } and it works, but I can’t convert it to work with postman", "username": "Cedric_95fr" }, { "code": "filter\"filter\": {\n \"date\": \n {\"$gt\": {\"$date\": \"2022-06-05T00:00:00Z\"}}\n }\n", "text": "Hi @Cedric_95fr - Welcome to the community Do the details and examples on this post help? In saying so, I believe your filter should be something like the following example:Please test this out on a test environment and see if it helps in your case.If not, could you provide sample documents within your database and your expected output?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hi Jason , do you know how to query with a regular expresion?\nI tried this { “area” : /. no se que. / } , it works in Compas .\nbut when I tried with the data api I needed to put quotes\nas this : {“area” : “/. no se que. /”} and this didn’t workRegards", "username": "Brahian_Velazquez" }, { "code": "", "text": "Hi,I’ve signed up because I ran into the same issue.The filter works for regular collections, but time series collections force you to store the time index as a BSON format. I can’t really replicate what JS is doing with the ISODate()-function by hand. I’ve tried all varaiations i could find, timestamps, EJSON formats, regular JSON formats (both found here: ), the suggestion from Jason, nothing works.I couldn’t find a way to replicate what the ISODate() function does to a string either (I’m no Web / JS dev). Is this a limitation of the data API or am I doing something wrong here?Tl;dr: I can filter regular collections just by string input, time series collections on the other hand seem to demand BSON timestamps that I can’t replicate with the given data API. Pls Help. ", "username": "Jon_Stau" }, { "code": "timeFielddate \"filter\": {\"date\": {\"$gte\": { \"$date\": \"2022-08-01T00:00:00Z\" } \n }\n }\n}'\n", "text": "Hi @Jon_Stau the above linked solution from @Jason_Tran should work for you. Time-series collections require the timeField to be stored as a BSONDate type. When using the Data API to query a time-series collection or any other collection the syntax must be valid MongoDB Extended JSON so the ISODate function is not recognized and the DataAPI query should the $date operator to product the same result. For example for a timeField using the field name date:", "username": "Michael_Gargiulo" } ]
Date query in Postman
2022-05-26T08:11:03.492Z
Date query in Postman
9,353
null
[ "java", "swift", "transactions" ]
[ { "code": "", "text": "Hi folks.I know in the Swift SDK, there’s two ways to write data to the realm:However, I’m looking for the latter in the realm-java SDK. I have a scenario where I’m writing data to the realm in a transaction that’s spread across a bunch of methods, so it’s hard to coalesce them all into a single “executeTransactionAsync”.", "username": "Alex_Tang1" }, { "code": "Realm.beginTransactionRealm.commitTransaction", "text": "Yes, there is Realm.beginTransaction and Realm.commitTransaction: Realm (Realm 10.10.1) and Realm (Realm 10.10.1)", "username": "ChristanMelchior" }, { "code": "", "text": "Thanks! I thought I looked through the docs for a “begin*” but apparently I missed it! ", "username": "Alex_Tang1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
In swift, there is Realm.beginWrite() and commitWrite(). Is there an equivalent for realm-java?
2022-09-20T00:04:18.312Z
In swift, there is Realm.beginWrite() and commitWrite(). Is there an equivalent for realm-java?
1,545
null
[ "kafka-connector" ]
[ { "code": "MongoSourceConnectoroutput.format.value = jsonvalue.converter = org.apache.kafka.connect.storage.StringConverter", "text": "I’m trying to apply different configurations on a MongoSourceConnector to have data on a kafka topic in json format, but with some transormations (replaceField, time conversion): seems that the only way to have messages as json is with output.format.value = json and value.converter = org.apache.kafka.connect.storage.StringConverter, but in this way I’m not able to apply transformations (SMT), with error: ‘transformation supports only Struct objects’.\nIs there a way to not use schema, having json format as output, and use SMT ?", "username": "Francesco_Fiorentino" }, { "code": "", "text": "Kafka SMTs require a schema to be defined on the source. Thus, you’ll need to define one", "username": "Robert_Walters" }, { "code": "output.format.value: schemaschemapayload", "text": "Thanks for your prompt support! Do you mean that I need to specify output.format.value: schema to use SMT or I’m losting somewhere? Is necessary that any message on topic include schema and payload subobjects?\nMy goal is to have a message on json format with standard changeStream fields (_id, operationType, fullDocument,…) but with SMTs: can you help to understand if is this possible or not?", "username": "Francesco_Fiorentino" }, { "code": "", "text": "if its just replacing a field is all you need SMTs for, why not just use the pipeline ? https://www.mongodb.com/docs/kafka-connector/current/source-connector/usage-examples/custom-pipeline/#customize-a-pipeline-to-filter-change-events.then you dont need to define the schema or use SMTs at all and it will be the best performance. (SMTs can be slow)", "username": "Robert_Walters" }, { "code": "", "text": "Sorry to coming back here after a time: for field replacing pipeline is enough, but what if I need something more? For example it could be helpful to move some values on header, what is the suggested approach?", "username": "Francesco_Fiorentino" } ]
Json with SMT on SourceConnector
2022-08-09T14:04:52.388Z
Json with SMT on SourceConnector
3,083
null
[]
[ { "code": "", "text": "I’ve read over the MongoDB Manual GridFS section, and I’m left wondering how to determine what an adequate chunk size is?I understand some of the detriments of choosing a small chunk size, and I experienced the worst detriment of all when I tried to upload a file with a 1KB chunk size. I know that in addition to that (now) obvious issue, there are index & document count concerns.I understand that ifyou want to access information from portions of large files without having to load whole files into memory, you can use GridFS to recall sections of files without reading the entire file into memory.What if I’m with certainty always going to read the whole file in? Is there any reason not to choose near the max chunk size? I assume that the max chunk size is the same as the max document size: 16 MB - affording room for the additional document data.Or, if my app has a file size restriction of 5 MB, is it more appropriate to set that as the chunk size? Is there much of a difference in setting the chunk size to 5 MB vs 15 MB if all of my files will be, at most, 5 MB?Thanks for any advice or general aspects of determining chunk size that I’m missing.", "username": "Michael_Jay1" }, { "code": "", "text": "Hi @Michael_Jay1,If you are always going to read the whole file and all of your files will be less than the 16MB document size limit, you could serialise each file as binary data in a single document instead of using GridFS.GridFS is more useful when you want to store files larger than 16MB or access portions of a large binary file without having to read the entire file into memory.See When to Use GridFS for more background, including similar advice:Furthermore, if your files are all smaller than the 16 MB BSON Document Size limit, consider storing each file in a single document instead of using GridFS. You may use the BinData data type to store the binary data. See your drivers documentation for details on using BinData.Regards,\nStennie", "username": "Stennie_X" } ]
I'm looking for pointers & heuristics in determining the right GridFS chunk size
2022-09-13T03:16:30.598Z
I’m looking for pointers & heuristics in determining the right GridFS chunk size
938
https://www.mongodb.com/…406a74caf8e4.png
[ "100daysofcode" ]
[ { "code": "day-01 go mod init yourModuleName or go mod init example.com/moduleName\n Example: go mod init hello\npackage main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"Hello folks, Happy to Join the 100daysofcode!\")\n}\npackage mainimport “fmt”fmt.Printlnmain.gogo run main.go", "text": "Hello folks, happy to join the #100DaysofCode inspired by @henna.s \nIn this amazing journey, I’ll be sharing with you my daily progress learning Golang (moving from Node JS, Python). Starting from a hello world program at day-01 till finishing the adventure at day-100 with a complete API integrated with MongoDB. What is Golang? Golang, an open-source, compiled, and statically typed programming language designed by Google. Built to be simple, high-performing, readable, efficient, and concurrent nature. Used for server-side programming, game development, cloud-based programming, command line tools and more.Why Golang? The above command creates a go.mod file to track your code’s dependencies. So far, the file includes only the name of your module and the Go version your code supports. But as you add dependencies, the go. mod file will list the versions your code depends on.Let’s quickly analyze our simple first hello world go program package main: tells the Go compiler that the package should compile as an executable program instead of a shared library. The main function in the package “main” will be the entry point of our executable programimport “fmt”: here we are importing the fmt package, that allows us to format basic strings, values, or anything and print them or collect user input from the console or write into a file using a writer or even print customized fancy error messages.Finally we are defining our main function using the func keyword, that print for us a normal String using the fmt.Println function.And now let’s run our main.go file using the following command to see the function result on the console \ngo run main.go\nimage1508×248 44.5 KB\nThat was everything for our Day-01 Hello world in Golang. Stay tuned in the coming days for some amazing and advanced topics, going from zero To hero in GO ", "username": "eliehannouch" }, { "code": "", "text": "Hello @eliehannouch,Super glad to see you being part of the #100Days of Journey… and happy that I played a part in inspiring you.We should spread some more noise around this. I can’t wait to see your progress in these 100 days. Wish you all the best Cheers, ", "username": "henna.s" }, { "code": "(int8, int16, int, int32, int64uint8, byte, uint16, uint, uint32, uint64, uintptrbool, float32, float64, complex64, complex128int32 str:= 'B'\n fmt.Println(rune(str))\n // Output: 66\nstrExample := \"Hello everyone, this a string\" len([]rune(strExample)), or len(strExample)rune() strExample := \"Hello GoLang\"\n fmt.Println([]rune(strExample))\n\n Output: [72 101 108 108 111 32 71 111 76 97 110 103]\n type productID int type productPrice float64", "text": "Hello folks, Here we go, new day Day-02 and some new knowledge gained in Golang. Today we will dive in the GoLang data types.Go is a statically typed language. Which means that variable types are known at compile timeGo use type inference where the compiler can infer the type of a certain variable without explicitly specify it by the developersSigned Integer Types (int8, int16, int, int32, int64)Unsigned Integer Types (uint8, byte, uint16, uint, uint32, uint64, uintptr)Others (bool, float32, float64, complex64, complex128)\nimage600×600 52.7 KB\nString is a data type that represent a slice of bytes that are UTF-8 encoded. Strings are enclosed with double quotes.\nstrExample := \"Hello everyone, this a string\"We can get the String length by typing the following\n len([]rune(strExample)), or len(strExample)We can use the rune() function to convert string to an array of runesMoving from String to rune", "username": "eliehannouch" }, { "code": " import s \"strings\"s_ “fmt” import “fmt”fmtfmtPrint()Println() Scanf():fmtimport “strings”Compare()Join():ToLower():ToUpper():// tells the Go compiler that the package should compile as\n// an executable program instead of a shared library\npackage main\n \n// here we are importing multiple packages in a single import\nimport (\n \"fmt\"\n \"strings\"\n)\n \nfunc main() {\n \n // transforming the string content to lower case\n lower := strings.ToLower(\"THE COMMUNITY IS THE BEST PLACE TO GROW\")\n fmt.Println(lower)\n \n checkTxt := \"Hello everyone, this is a test\"\n \n // checking if the checkTxt string contains a substring called everyone\n doesItContain := strings.Contains(checkTxt, \"everyone\")\n fmt.Println(doesItContain)\n // Output: true\n \n // making an array of strings\n arrayOfStrings := []string{\"I love\", \"Go Lang\", \"and\", \"MongoDB\"}\n fmt.Println(arrayOfStrings)\n //output: [I love Go Lang and MongoDB]\n \n // joining the array content into one string, separated by a space\n fromArrToString := strings.Join(arrayOfStrings, \" \")\n fmt.Println(fromArrToString)\n // output: I love Go Lang and MongoDB\n}\n\n//First we need to declare the package using the following piece of code: \npackage simplecalculator\n\n// create a simple addTwoNumbers function\nfunc addTwoNumbers(num1, num2 int) int {\n\treturn num1 + num2\n }\n\npackagesimplecalculatoraddTwoNumbers", "text": "It’s Day03 \nHello folks glad to share with you a new day dealing with golang and exploring it alongside my daily work.\nToday we will talk more about the package system in go, its advantages, and how to deal with them and explore some of them in an example.What is a Package in simple terms ?In simple terms a package is a container or a box that contains multiple functions to perform specific tasks.When using packages, we can organize our Go code into reusable units.To use name aliasing in Go packages we can simply say:\n import s \"strings\", and here we can use s when calling the package instead of stringsTo solve a common problem of importing unused packages we can simply put a blank identifier before its definition\n_ “fmt”, and in this case no error will appearHow to import a package in Go ?To import a package, simply we can use the import keyword as following:\n import “fmt”In the above import statement we have imported the fmt package, that is used to format basic strings, values, inputs, and outputs. And now after importing it all the available functions can be used on the fly inside our go file.Let’s explore the fmt package quickly and some of its basic functionsAll these functions and much more can be used on the fly after importing the fmt packageExample to showcase how to deal with these packagesHow to create a custom package in Golang ?", "username": "eliehannouch" }, { "code": "go. mod file module helloworld\n go 1.17\n module example.com/my/thing\n go 1.17\n\n require example.com/other/thing v1.0.2\n require example.com/new/thing/v2 v2.3.4\n\nor\n module example.com/my/thing\n go 1.17\n\n require (\n\t example.com/other/thing v1.0.2\n\t example.com/new/thing/v2 v2.3.4\n\t)\n", "text": "Hello folks, every day is a new challenge. And today marks the 4th day in our amazing journey where we will make a quick intro about the modules system in Golang. What is a module? What is the benefit of using a module in Golang? **How to initialize a new module in Go? **What does this command do? ", "username": "eliehannouch" }, { "code": "truefalse package main\n \n import \"fmt\"\n \n func main() {\n \n // If condition without an else clause\n var i int = 300\n if i%6 == 0 {\n fmt.Println(i, \"is divisible by 6\")\n }\n // simple and without parenthesis 🥰 \n if 20%2 == 0 {\n fmt.Println(\"20 is even\")\n } else {\n fmt.Println(\"20 is odd\")\n }\n \n // Statement initialization with If condition\n if n := 50; n < 0 {\n fmt.Println(n, \"is negative\")\n } else if n < 10 {\n fmt.Println(n, \"has 1 digit\")\n } else {\n fmt.Println(n, \"has multiple digits\")\n }\n }\n package main\n \n import (\n \"fmt\"\n )\n \n func main() {\n i := 0\n for i < 50 {\n fmt.Println(\"i =\", i)\n i += 2\n \n }\n package main\n \n import \"fmt\"\n \n func main() {\n \n // The for loop that replace while in Golang\n i := -2\n for i <= 3 {\n fmt.Println(i)\n i++\n }\n //A classical for loop.\n for j := 0; j <= 10; j++ {\n fmt.Println(j)\n }\n \n // We can use the continue keyword to jump to the next iteration of the loop\n for n := 0; n <= 5; n++ {\n if n%2 == 0 {\n continue\n }\n fmt.Println(n)\n }\n \n // A for without condition will loop repeatedly until you break out of it.\n for {\n fmt.Println(\"loop\")\n break\n }\n }\n", "text": "Hello friends, the 5th day is done , a new day, and some new knowledge in Golang. In today post we will review the conditionals in Golang, how they differ from other programming languages. We will also talk about loops and how to deal with them. I hope you will enjoy reading the post First, let make a quick refresher on what conditionals are all about in any programming language?What is an if statement and how it works in Golang? In simple terms we can define them a decision-making statement that guide a program to take decisions based on a specific condition, where it execute on a set of code it the specified condition is met (true) or on the other if it does not met (false)In Go, we don’t use brackets around the condition itselfIf statement can be used without an else A statement can precede the conditionals and any declared var in the statement is available in all other branches In Golang there is no ternary operator , so we need to write a full if else statement to achieve the same result.Example illustrating how the if statement work in GoLangWhat is a while loop? Does it exist in Golang? And how can we replace it Loops In Go First of all let’s remember, what does loops are all about in programming before diving in the Go editionIn computer programming, a loop is a sequence of instructions that is continually repeated until a certain condition is reached. Example for loops, while, do-whileIn Go, the only looping construct is for, so let’s explore the different ways we can deal withExample:", "username": "eliehannouch" }, { "code": " Arrays Example package main\n \n import \"fmt\"\n \n func main() {\n \n var a [5]int\n fmt.Println(\"Initial values of a:\", a) // Output: [0 0 0 0 0]\n \n // Here we are assigning the 3rd and 5th element of the array to the values -3 and 20 respectively.\n a[2] = -3\n a[4] = 20\n fmt.Println(\"setting data:\", a)\n fmt.Println(\"retrieving:\", a[3])\n \n // Array length\n fmt.Println(\"array length:\", len(a))\n \n // declaring an integer array with 5 elements\n b := [5]int{1, 2, 3, 4, 5}\n fmt.Println(b)\n \n // Creating a 2D array and filling it with values\n var twoD [2][3]int\n for i := 0; i < 2; i++ {\n for j := 0; j < 3; j++ {\n twoD[i][j] = i + j\n }\n }\n fmt.Println(\"Two dimensional array:\", twoD) // Output: [[0 1 2] [1 2 3]]\n }\n\ncaplenmakeappendcopySlices Example package main\n \n import (\n \"fmt\"\n )\n \n func main() {\n // Delcare empty slice of type int or string or any other type\n var sliceinteger []int\n \n // Create a new slice by filling it with data\n //var filledslice = []int{1, 2, 3, 4}\n \n // Checking the capacity of a slice using the built in cap\n fmt.Println(\"Capacity: \", cap(sliceinteger))\n \n // Checking the length of a slice using the built in function len\n fmt.Println(\"Length: \", len(sliceinteger))\n \n // Create a new slice using the make keyword, with a length of 10 and type string\n var slicestring = make([]string, 10)\n \n // Add element to the end of a slice using the append method\n slicestring = append(slicestring, \"MongoDB\", \"GoLang\", \"NodeJS\", \"Python\")\n \n // copy a slice to another\n a := []int{1, 2, 3}\n b := make([]int, 5, 10)\n copy(b, a)\n b[3] = -4\n b[4] = 5\n fmt.Println(b) // Output: [1 2 3 -4 5]\n \n // Slicing tips\n slice := []int{1, 2, 3, 4}\n fmt.Println(slice[:]) // Output: [1 2 3 4]\n fmt.Println(slice[1:]) // Output: [2 3 4]\n fmt.Println(slice[:1]) // Output: [1]\n fmt.Println(slice[:2]) // Output: [1 2]\n fmt.Println(slice[1:4]) // Output: [2 3 4]\n }\n", "text": "Hey folks, it’s the 6th day / 100, let’s explore some new topics in Golang, and make this journey a funny one .In today post, we will talk about some of the basic data structures in Golang, starting with arrays, and slices.Arrays in Golang What is an array in Golang?Slices in GolangWhat are Slices in Golang? ", "username": "eliehannouch" }, { "code": "rangedictshashesGolang Maps Example package main\n \n import (\n \"fmt\"\n )\n \n func main() {\n \n // First we are declaring a map with string keys and string values.\n var programmingLanguages map[string]string\n \n // Now we are initializing the map using the make function\n programmingLanguages = make(map[string]string)\n \n // Adding Data to the map\n programmingLanguages[\"p1\"] = \"GoLang\"\n programmingLanguages[\"p2\"] = \"JavaScript\"\n programmingLanguages[\"p3\"] = \"Python\"\n programmingLanguages[\"p4\"] = \"Rust\"\n \n fmt.Println(programmingLanguages) // Output: map[p1:GoLang p2:JavaScript p3:Python p4:Rust]\n \n // Access a certain value from the map\n var c1Value = programmingLanguages[\"p2\"]\n fmt.Println(c1Value) // Output: JavaScript\n \n // Delete a value (Rust) from the map\n delete(programmingLanguages, \"p4\") // Output: map[p1:GoLang p2:JavaScript p3:Python]\n \n // Iterate over the map using the for loop\n for language, key := range programmingLanguages {\n fmt.Println(language, key)\n }\n \n // Get the map length\n mapLength := len(programmingLanguages)\n fmt.Println(mapLength) // Output: 3\n }\n\nrangeGolang Range example package main\n \n import \"fmt\"\n \n func main() {\n \n numbers := []int{1, 2, 3, 4}\n sum := 0\n \n // Iterating through the array of numbers and calculating the sum\n // We are passing _ as first argument, as the index in our case does not matter\n for _, num := range numbers {\n sum += num\n }\n fmt.Println(\"sum:\", sum)\n \n // Here we are iterating through the array and both the index and the numbers matter\n // The first argument i --> represent the index\n // The second argument num --> represent the number\n for i, num := range numbers {\n if num == 3 {\n fmt.Println(\"index:\", i)\n }\n }\n \n // Here we are iterating through the map\n // key reperent the available keys in the map (g,m)\n // value represent the value associated with each key\n techStack := map[string]string{\"g\": \"Golang\", \"m\": \"MongoDB\"}\n for key, value := range techStack {\n fmt.Printf(\"%s -> %s\\n\", key, value)\n // Output: g -> Golang\n // m -> MongoDB\n }\n \n // In maps , using range we can iterate only over keys\n for key := range techStack {\n fmt.Println(\"key:\", key)\n // Output: key: g\n // key: m\n }\n \n // In Golang we can use the range on strings over Unicode code points\n // i represent the index\n // r represent the rune\n for i, r := range \"Golang\" {\n fmt.Println(i, r)\n // Output; \n /*\n i r\n ------\n 0 71\n 1 111\n 2 108\n 3 97\n 4 110\n 5 103\n */\n }\n }\n\n", "text": "Hello folks, today marks our 7th day / 100 , so let’s learn some amazing new topics in Golang, moving everyday toward our goal the DAY 100. Today, we will continue our tour with a new data structure the Golang map and then we will discover the range and learn how to deal with it in go. Maps in Golang What is a Map in Golang and how we can use it? ❶ A built-in associative data type, that maps keys to values❷ Used for fast lookups, deletion, and data retrieval based on keys❸ Store data in key-value pairs style❹ A map cannot have identical keys❺ Called dicts or hashes in other languages❻ Maps are unordered, So each time we iterate through them , we can get a new different order of elements❼ Now let’s move to an example, where we will illustrate how we can create maps in different ways, initialize them with data, change these data, delete them and much more…Golang Maps ExampleRange in Golang What is a range in Golang, and how it works? ", "username": "eliehannouch" }, { "code": "func\tfunc ourFirstFunction (parameter1, parameter2 ,parameter3 …) datatype {\n\t\t// The function logic written here …\n\t}\n package main\n \n import \"fmt\"\n \n // A function that takes 2 integers and return the result as an int\n func addition(a int, b int) int {\n return a + b\n }\n \n // If we have multiple arguments of the same type, we can pass the type\n // to the last argument among them, like a,b,c int\n func addition2(a, b, c int) int {\n return a + b + c\n }\n \n // A golang function without any return type\n func subtract(num1 int, num2 int) {\n result := 0\n result = num1 - num2\n fmt.Println(result)\n }\n \n func main() {\n \n res1 := addition(1, 2)\n fmt.Println(\"1+2 =\", res1)\n \n res2 := addition2(1, 2, 3)\n fmt.Println(\"1+2+3 =\", res2)\n \n subtract(5, 3)\n \n }\n\n package main\n \n import \"fmt\"\n \n func values() (int, int) {\n return 10, 70\n }\n \n func main() {\n \n a, b := values()\n fmt.Println(a) // Output: 10\n fmt.Println(b) // Output: 70\n \n // When we use the blank identifier on the first argument\n // only the second one get returned and vice versa\n _, c := values()\n fmt.Println(c) // Output: 70\n }\n\n package main\n \n import \"fmt\"\n \n // A function called addition that take a variable number of arguments\n func addition(nums ...int) {\n // 1st: We are printing the numbers\n fmt.Print(nums, \" \")\n // 2nd: Iterating through the numbers and adding them\n total := 0\n for _, num := range nums {\n total += num\n }\n // 3rd: printing the total\n fmt.Println(total)\n }\n \n func main() {\n \n // Calling the function with 2 args\n addition(1, 2) // Output: [1 2] 3\n // Calling the same function with 3 args\n addition(1, 2, 3) // Output: [1 2 3] 6\n \n // And finally here we are passing an array of numbers as an argument to the function\n nums := []int{1, 2, 3, 4}\n addition(nums...) // Output: [1 2 3 4] 10\n }\n", "text": "Hello, Hello folks how it’s going on, today marks our DAY / 100 . GOLANG is going very smoothly till now . I hope you are enjoying my posts, and getting to learn more about this new in demand technology.In today post, we will talk about functions, their types, and how we can use them in Golang What is a function ? How to declare a normal function in Golang ?Multiple Return functions in GolangVariadic Functions in Golang Coming from its name, a variadic function is simply a one that takes a variable number of args. And each time we call the function we can pass a varying number of arguments of the same typeA variadic function can be simply created by adding an ellipsis “…” to the final parameter , as the example shown belowExample", "username": "eliehannouch" }, { "code": " package main\n \n import \"fmt\"\n \n func main() {\n \n // anonymous function\n var helloCommunity = func() {\n fmt.Println(\"Hello, MongoDB community 🥰 🚀\")\n }\n \n // calling the function\n helloCommunity()\n \n // anonymous function with arguments\n var multiplication = func(a, b int) int {\n return a * b\n }\n // passing arguments to the anonymous function\n fmt.Println(\"Multiplication result: \",multiplication(5, 4))\n }\n \n package main\n \n import (\n \"fmt\"\n str \"strings\"\n )\n // The outer function\n func hello(name string) {\n \n // The inner function using the args name of its enclosing function\n var displayName = func() {\n updatedName := str.ToUpper(name)\n fmt.Println(\"Hello\", updatedName)\n }\n // call the inner function\n displayName()\n }\n func main() {\n // Call the parent outer function\n hello(\"elie\")\n // Output: Hello ELIE\n }\n \n package main\n import \"fmt\"\n \n func helloWorld() func() {\n \n return func() {\n fmt.Println(\"Hello World\")\n }\n \n }\n \n func main() {\n \n functionInsideFunction := helloWorld()\n functionInsideFunction()\n }\n \n package main\n \n import (\n \"fmt\"\n \"strings\"\n )\n // The outer function that return another anonymous function\n func displayNameInLoweCase(name string) func() string {\n greeting := \"Hello,\"\n // The anonymous inner function that return\n // the name passed from the outer function concatenated with a string in lower case\n return func() string {\n transformedName := strings.ToLower(name)\n return greeting + transformedName\n }\n }\n func main() {\n \n lower := displayNameInLoweCase(\"JOHN\")\n fmt.Println(lower())\n }\n \n", "text": "Hello folks, another day, and some new progress in this amazing journey. Today marks our 9th day in the 100days code challenge. And in today’s post, we will continue our tour on some new concepts in Golang. Where we will talk about anonymous functions, nested functions, and closures What is an anonymous function in Golang ?An anonymous function can be assigned to a variable.ExampleWhat is a nested function ? And does Golang support such functions ? A nested function is a function which is defined within another function, the enclosing function.A nested function is it self invisible outside of its immediately enclosing function, but can see (access) all local objects (data, functions, types, etc.) of its immediately enclosing function.Golang allow us to deal with such functions, on the fly.ExampleA function that returns a function ??Yeh, in Golang we can simply create a function that returns a function, let us understand how we can do that, in the following example.ExampleWhat is a closure in Golang ??A function that references variables outside its body or scope, or in other words we can define it as an inner function that has an access to the variables in the scope in which it was created.Example:", "username": "eliehannouch" }, { "code": "structs package main\n \n import \"fmt\"\n \n func factorial(n int) int {\n if n == 0 {\n return 1\n }\n // the factorial function calling itself, until solving the problem\n return n * factorial(n-1)\n }\n \n func fibonacci(n int) int {\n if n < 2 {\n return n\n }\n // fibonacci calling it self\n return fibonacci(n-1) + fibonacci(n-2)\n }\n \n func main() {\n fmt.Println(factorial(4)) // Output: 24\n \n fmt.Println(fibonacci(6)) // Output: 8\n }\n\nStructstypestruct namestruct type StructureName struct {\n // structure definition \n }\n package main\n \n import (\n \"fmt\"\n )\n \n func main() {\n \n // A basic car struct\n type Car struct {\n carModel string\n productionYear int\n carColor string\n }\n \n // declaring a struct that represent a User\n type User struct {\n firstName string\n lastName string\n phoneNumber string\n age int\n jobTitle string\n // use the car struct as a user defined data type\n car Car\n }\n \n // To define a struct instance\n var car Car\n \n // assign value to car struct\n car = Car{\n carModel: \"BMW\",\n productionYear: 2019,\n carColor: \"Black\",\n }\n \n // Here we are creating a new User based on the defined struct\n // and adding a predefined car to it as a nested struct\n user1 := User{\"John\", \"Doe\", \"000000\", 25, \"Software Engineer\", car}\n fmt.Println(user1)\n // Output: {John Doe 000000 25 Software Engineer {BMW 2019 Black}}\n \n // To access a certain field from user1 we use the dot\n fmt.Println(user1.firstName, \"-\", user1.jobTitle)\n // Output: John - Software Engineer\n \n // To access a nested struct\n fmt.Println(user1.car.carModel) // Output: BMW\n \n }\n", "text": "Hello community , hope you are enjoying the journey and learning some amazing things in Golang. \nA day means some new progress and time to share what I learned to this amazing​:heart_eyes: community. So everything stays from the community to the community. In today’s post we will talk about recursion, how we can create recursive functions❓ and deal with them, then we will move to discover structs and their role in Golang.What does recursion mean ? When should I use recursion The recursion approach is used mainly when we need to solve problems that can be broken down into smaller and repetitive problems. The recursion approach win when the problem to be solved have many possible branches and these branches are too complex to iterate through themExample: Searching in a file systemAnd now let’s move to learn a new amazing concept in Golang. Structs !!  What is a struct in Golang And how can we create and manipulate them ??", "username": "eliehannouch" }, { "code": "\tfunc (t Type) nameofMethod (param1, param2, …) {\n\t\t// Method logic define here in the body\n\t}\n package main\n \n import \"fmt\"\n import \"math\"\n \n // Defining a Rectangle struct with 2 attributes height and width\n type Rectangle struct {\n height int\n width int\n }\n \n // Defining a Circle with a radius of type float\n type Circle struct {\n radius float64\n }\n \n // Here we are defining an Area method that deal with the rectangle struct\n func (r Rectangle) Area() int {\n return r.height * r.width\n }\n \n // Defining a method that deal with the rectangle struct and compute the perimeter\n func (r Rectangle) perimeter() int {\n return 2*r.width + 2*r.height\n }\n \n // As we can see here we are defining the same method Area\n // defined above, with a different receiver which is Circle\n func (c Circle) Area() float64 {\n return math.Pi * c.radius * c.radius\n }\n \n func main() {\n rec := Rectangle{\n height: 12,\n width: 4,\n }\n fmt.Printf(\"The Area of the rectangle %d\\n\", rec.Area())\n //Output: The Area of the rectangle 48\n\n fmt.Printf(\"The Perimeter of the rectangle %d\\n\", rec.perimeter())\n // Output: The Perimeter of the rectangle 32\n\n circ := Circle{\n radius: 4.2,\n }\n fmt.Printf(\"The area of the circle %f\", circ.Area())\n // Output: The area of the circle 55.417694\n }\n\n", "text": "Hello folks, a day is here, let’s wrap it up with some new content and a lot of fun . To mark this amazing day as completed we should dive into a new interesting topic in Golang !! In today’s post , we will make a gentle intro on Golang methods, how we can deal with them and the difference between them and functions. What is a method in Golang ? It’s simply a normal function or we can say that a method behaves like a function but with a special receiver type. The receiver in the Golang method can be a struct or a non struct type.We can define the same method on different struct types , for example if we have a Rectangle ▭ and a Square struct. We can simply define an Area method on both of them.Why methods in Golang ? Golang is not​:no_entry: a complete object oriented programming language and hence it does not support the classes architecture. Methods were created with an aim to represent the same behavior done in classes. Methods allow the grouping of all functions that belong to the same struct type together. How to define a method in Golang ? To define a method in Golang we use the same function style with one extra parameter which is the receiver type between the func keyword and the method name Syntax Example:Methods Example And we reached the end of our gentle intro defining what methods are all about, and how we can deal with them. In tomorrow’s post we will dive more into methods and make some more advanced operations on them after introducing the pointers in Go. Stay tuned for the coming post, and let’s make this journey an amazing one, where all of us can learn and take some new skills in a funny way ", "username": "eliehannouch" }, { "code": "intstringboolarraystructinterface*&var pointerName *TypepointerNameTypeintstring package main\n \n import \"fmt\"\n \n func main() {\n // A variable of type int\n var integerNumber int\n integerNumber = 34 // A number of type int\n \n // A pointer of type string\n var stringPointer *string\n \n // A pointer of type int\n var integerPointer *int\n \n // integerPointer pointing to integerNumber\n integerPointer = &integerNumber\n \n fmt.Println(\"The address on which the pointer is pointing\", integerPointer)\n // Output: output in hex is 0x1400001c070\n \n fmt.Println(\"The value of the variable to which the pointer is pointing\", *integerPointer)\n // Output: The value is: 34\n \n // To change the value of the variable via the pointer we use:\n \n *integerPointer = 45\n \n fmt.Println(\"The new value of the variable integerNumber is: \", integerNumber)\n // Output: 45\n }\n \n package main\n \n import \"fmt\"\n \n func main() {\n \n // create a pointer using new()\n var newPointer = new(string)\n \n *newPointer = \"elie hannouch\"\n \n fmt.Println(newPointer) // 0x14000010230\n fmt.Println(*newPointer) // elie hannouch\n \n }\n\n package main\n \n import \"fmt\"\n \n func updateName(name *string) {\n // Using the dereference operator we change\n // value of the name variable\n *name = \"Elie Hannouch\"\n \n }\n \n func main() {\n // Name with initial value\n var name string = \"John Doe\"\n fmt.Println(\"The original name is\", name) // John Doe\n \n // Passing the address of the name variable to the updateName function\n updateName(&name)\n fmt.Println(\"The original name is now changed to\", name) // Elie Hannouch\n }\n\nstructsmethods", "text": "Day 12 is already here . Hope you are doing well and enjoying your journeys to the maximum. So guys, it’s time to continue and dive more into some interesting topics in Golang, to wrap up the day successfully. In today’s post, we will talk about pointers in Go, their role and where to use them. Moving from some theoretical definitions to an interactive funny example that illustrate the targeted topics in a clear way What is a pointer in Golang ? A pointer is simply a normal 🥹 variable that stores the address of a certain object stored in the memory.A pointer points to different variables with different data types like int , string , bool and more and only stores their memory address in hexadecimal format. In Golang, pointers are called special variables.Why do we use pointers in Go ? Pass by reference Pass by value What is a dereference operator in Golang pointer ? What is the address operator in Golang ?How to declare a pointer in Golang ?Create a pointer using the keyword Functions with pointers ", "username": "eliehannouch" }, { "code": " package main\n \n import \"fmt\"\n \n func main() {\n \n // Here we are defining a user struct with some specific attributes.\n type User struct {\n firstName string\n lastName string\n age int\n email string\n phoneNumber string\n }\n \n // And now we are gonna initialize a user instance\n user1 := User{\n firstName: \"John\",\n lastName: \"Doe\",\n age: 30,\n email: \"[email protected]\",\n phoneNumber: \"555-555-5555\",\n }\n \n // create a pointer with struct type\n // to save the user address\n \n var pointer1 *User\n \n // Save the person1 struct address in the defined pointer\n pointer1 = &user1\n \n // Print the pointer struct\n fmt.Println(pointer1)\n // Output: &{John Doe 30 [email protected] 555-555-5555}\n \n // change the struct attributes values via the created pointer\n pointer1.firstName = \"Jonas\"\n pointer1.phoneNumber = \"111-111-1111\"\n fmt.Println(pointer1)\n // Updated Output: &{Jonas Doe 30 [email protected] 111-111-1111}\n }\n\n package main\n \n import (\n \"fmt\"\n )\n \n type Car struct {\n carModel string\n price int\n }\n \n func ValueReceiver(c Car) {\n c.carModel = \"BMW\"\n fmt.Println(\"Inside ValueReceiver method: \", c.carModel)\n }\n \n func PointerReceiver(c *Car) {\n c.price = 10000\n fmt.Println(\"Inside PointerReceiver method: \", c.price)\n }\n func main() {\n car1 := Car{\"Nissan\", 8500}\n car2 := &Car{\"Mercedes\", 12800}\n ValueReceiver(car1)\n fmt.Println(\"Inside Main after calling the value receiver : \", car1.carModel)\n PointerReceiver(car2)\n fmt.Println(\"Inside Main after calling the pointer receiver : \", car2.price)\n }\n \n // output:\n // Inside ValueReceiver method: BMW\n // Inside Main after calling the value receiver : Nissan\n // Inside PointerReceiver method: 10000\n // Inside Main after calling the pointer receiver : 10000\n\n", "text": "Hello, Hello, The 13th day is here. Okay folks let’s keep the amazing journey going on. \nBefore marking our day as completed, let’s dive more into some new and amazing topics in Golang. In yesterday post , we talked about pointers, how to use them and their benefits in Go, we also made a gentle intro on how we can utilize the power of pointers, when dealing with functions where we explored the concept of pass by reference vs the pass by value and how each of them deal with the code in different way, resulting in different results. Today we will dive more with pointers to explore how we can use them with structs and methods, and how they give us some extra power , when utilizing them correctly. In this post we will move from the theoretical definitions and explanations and get into an interactive example that shows us how we can add pointers to structs and methods. Pointers with structsMethods and PointersMethods in Golang ?: It’s simply a normal function or we can say that a method behaves like a function but with a special receiver type In yesterday’s post we explored the difference between pass by value and pass by reference in normal functions. Today we will explore Value vs Pointer receiver in methods.Value Receiver Pointer receiverA value receiver, take a copy of the type and simply pass it to the method, and in this case our method holds an object equal to the original one but each of them sit in a different memory location. And now any change happening on the passed object only affects the local one. Where the original object stay untouchedOn the other hand, A pointer receiver: take the address of the original object and pass it to the method. And now the method has a reference to the original object location and any modification will affect this original one directlyExampleAnd now to analyze our code, using the value receiver which takes a copy of the object, the carModel name changed only inside the function but the change is not reflected in the main function On the other hand by using the pointer receiver which takes the address of the object, when we changed the price to 10000 this price affected the car2 object found in the main function after calling the PointerReceiver method on it.So now we can demonstrate that when we use a pointer receiver , any occurring change will affect the original object and when using the value receiver the changes only affect the copied object locally.", "username": "eliehannouch" }, { "code": "", "text": "Hello family , yeah family the best word we can say in the world . Today I’m here to share with you our big mongo DB family some amazing news. Okay, let me wrap this special day here, in this special place with our amazing community. In today post I’m not going to talk about some technical topics, no it’s not the target now.In this special day, me and @darine_tleiss the sister that God gave me, we closed a chapter, and we are officially graduated as computer scientist. My friends, I’m here today to tell you all, keep your friends by your side . They are the God grace and the life gift . 𝟯 years passed during which I learned a lot in this wonderful major. I got to know many classmates from whom I learned many lessons . And today I’m telling you all, and the joy fills my heart​:heart:. We marked the old chapter as completed and we opened a one towards a better future and a challenging adventure . Together as best friends , we will fight till the end to craft a future we deserve. In front ➬ of you all I would like to thank my best and closest sister Darine , for everything you did for me. We defied the odds and conquered the problems 🆇. You were the one who heard my problems, accepted my good and bad . Thanks for all your sacrifices . And today I’m asking God to keep you by my side until infinity. ∞∞ ", "username": "eliehannouch" }, { "code": " package main\n \n import (\n \"fmt\"\n \"math\"\n )\n \n // Here we are defining the interface geometry\n // That define 2 signatures the area and perimeter in an abstract way\n type geometry interface {\n area() float64\n perim() float64\n }\n \n // Here we have a rectangle struct with 2 attributes width and height\n type rectangle struct {\n width float64\n height float64\n }\n \n // The circle struct with the radius attribute\n type circle struct {\n radius float64\n }\n \n // With the rectangle struct we are implementing the interface\n // with all the signatures available inside of it\n func (r rectangle) area() float64 {\n return r.width * r.height\n }\n func (r rectangle) perim() float64 {\n return 2*r.width + 2*r.height\n }\n \n // Same here for the circle functions that implement the geometry interface\n func (c circle) area() float64 {\n return math.Pi * c.radius * c.radius\n }\n \n func (c circle) perim() float64 {\n return 2 * math.Pi * c.radius\n }\n // a generic measure function\n func measure(g geometry) {\n fmt.Println(g)\n fmt.Println(g.area())\n fmt.Println(g.perim())\n }\n \n func main() {\n r := rectangle{width: 3, height: 4}\n c := circle{radius: 5}\n \n measure(r)\n measure(c)\n }\n\n", "text": "Hello folks, a new day and some new progress. In today’s post we will resume the technical content to dive more in Golang and explore some new amazing concepts. In our last Golang post, we talked about pointers and their role and how we can use them with structs, functions, methods and more. In today’s post we will talk about interfaces, their role and how we can utilize them. Starting from some theory information and wrapping up with an example that shows us how we can use interfaces in a real example. What is an interface in Golang ?What happens if a struct does not implement all the interface methods ? Now we reached the post end. In tomorrow’s post we will dive more into mixins and then we will introduce the functions concept, to utilize more the power of this amazing technology. ", "username": "eliehannouch" }, { "code": " package main\n \n import (\n \"fmt\"\n s \"strings\"\n )\n \n var p = fmt.Println\n \n func main() {\n \n p(\"Contains: \", s.Contains(\"test\", \"es\"))\n p(\"Count: \", s.Count(\"test\", \"t\"))\n p(\"HasPrefix: \", s.HasPrefix(\"test\", \"te\"))\n p(\"HasSuffix: \", s.HasSuffix(\"test\", \"st\"))\n p(\"Index: \", s.Index(\"test\", \"e\"))\n p(\"Join: \", s.Join([]string{\"a\", \"b\"}, \"-\"))\n p(\"Repeat: \", s.Repeat(\"a\", 5))\n p(\"Replace: \", s.Replace(\"foo\", \"o\", \"0\", -1))\n p(\"Replace: \", s.Replace(\"foo\", \"o\", \"0\", 1))\n p(\"Split: \", s.Split(\"a-b-c-d-e\", \"-\"))\n p(\"ToLower: \", s.ToLower(\"TEST\"))\n p(\"ToUpper: \", s.ToUpper(\"test\"))\n \n \n // Output\n /*\n Contains: true\n Count: 2\n HasPrefix: true\n HasSuffix: true\n Index: 1\n Join: a-b\n Repeat: aaaaa\n Replace: f00\n Replace: f0o\n Split: [a b c d e]\n ToLower: test\n ToUpper: TEST\n */\n }\n\n", "text": "Hello friends, here we go, the 16th day is already here . Let’s finish it successfully by taking some new information, to extend our existing knowledge in the Golang world. In today’s post, I’ll move a little bit out of track and share with you some interesting information in String functions, that help us manipulate and deal with strings easily. So in this post we will explore the strings package , which gives us a lot of useful string-related functions to be used out of the box. ", "username": "eliehannouch" }, { "code": " package main\n \n import (\n \"fmt\"\n )\n \n type point struct {\n x, y int\n }\n \n func main() {\n \n p := point{5, 7}\n \n fmt.Printf(\"struct: %v\\n\", p)\n // The %v print the struct values as they are in the\n // Output: struct: {5 7}\n \n fmt.Printf(\"struct: %+v\\n\", p)\n // The %+v print the struct values with the struct field names\n // Output: struct: {x:5 y:7}\n \n fmt.Printf(\"struct: %#v\\n\", p)\n // The %#v print the struct values, field names and the syntax representation\n // Output: struct: main.point{x:5, y:7}\n \n fmt.Printf(\"type: %T\\n\", p)\n // The %T print the type of the variable\n // Output: type: main.point\n \n fmt.Printf(\"int: %d\\n\", 123)\n // The %d print the value in base 10 representation\n // Output: 123\n \n fmt.Printf(\"bin: %b\\n\", 14)\n // The %b print the value in binary\n // Output: 1110\n \n fmt.Printf(\"char: %c\\n\", 33)\n // The %c print the char corresponding to the integer\n // char: !\n \n fmt.Printf(\"hex: %x\\n\", 456)\n // The %x print the value in hexadecimal\n // Output: 1c8\n \n fmt.Printf(\"float1: %f\\n\", 78.9)\n // The %f print the float number in it's basic form\n // Output: 78.900000\n \n fmt.Printf(\"float2: %e\\n\", 123400000.0)\n fmt.Printf(\"float3: %E\\n\", 123400000.0)\n // The %e and %E print the float value in different way\n // Output: 1.234000e+08 or 1.234000E+08\n \n fmt.Printf(\"str: %s\\n\", \"\\\"string\\\"\")\n // The %s is used to print the string\n // Output: \"string\"\n \n fmt.Printf(\"str: %q\\n\", \"\\\"string\\\"\")\n // The %q is used to double quote strings\n // Output: \"\\\"string\\\"\"\n \n fmt.Printf(\"str: %x\\n\", \"hex this\")\n // The %x render the string in base 16\n // Output: 6865782074686973\n \n fmt.Printf(\"pointer: %p\\n\", &p)\n // The %p is used to print the representation of the pointer\n // Output: 0x1400012a010\n \n }\n\n", "text": "Hello folks, a new day is here, let’s wrap it successfully by learning some new techniques to format a string in Golang. In yesterday post we talked about the string functions and how we can use them to manipulate strings in Go In today’s post we will learn the different ways to format a string, by moving from the boring theory part, to directly moving on directly to an interactive example to learn from it. Exercise Let’s end this amazing post now, I hope you can learn some amazing stuff from it, and stay tuned for tomorrow’s post where we will explore new interesting topics. ", "username": "eliehannouch" }, { "code": "regexregexpfindfind and replace package main\n \n import (\n \"fmt\"\n \"regexp\"\n )\n \n func main() {\n \n match, _ := regexp.MatchString(\"p([a-z]+)ch\", \"peach\")\n fmt.Println(match)\n \n r, _ := regexp.Compile(\"p([a-z]+)ch\")\n \n fmt.Println(r.MatchString(\"peach\"))\n \n fmt.Println(r.FindString(\"peach punch\"))\n \n fmt.Println(\"idx:\", r.FindStringIndex(\"peach punch\"))\n \n fmt.Println(r.FindStringSubmatch(\"peach punch\"))\n \n fmt.Println(r.FindStringSubmatchIndex(\"peach punch\"))\n \n fmt.Println(r.FindAllString(\"peach punch pinch\", -1))\n \n fmt.Println(\"all:\", r.FindAllStringSubmatchIndex(\n \"peach punch pinch\", -1))\n \n fmt.Println(r.FindAllString(\"peach punch pinch\", 2))\n \n fmt.Println(r.Match([]byte(\"peach\")))\n \n r = regexp.MustCompile(\"p([a-z]+)ch\")\n fmt.Println(\"regexp:\", r)\n \n fmt.Println(r.ReplaceAllString(\"a peach\", \"<fruit>\"))\n \n /*\n true\n true\n peach\n idx: [0 5]\n [peach ea]\n [0 5 1 3]\n [peach punch pinch]\n all: [[0 5 1 3] [6 11 7 9] [12 17 13 15]]\n [peach punch]\n true\n regexp: p([a-z]+)ch\n a <fruit>\n a PEACH\n */\n \n }\n\n", "text": "Hello folks, hope you are doing well, and you are learning some amazing things in this amazing tech world. In yesterday’s post, we talked about string formatting in Golang. And in today’s post we will define the regular expressions in Golang. How to use them and to do so ? What is a regular expression ?Application of regular expression ? Simple parsing.The production of syntax highlighting systems. Data Validation. Data Scraping (especially web scraping)Data wrangling.Example", "username": "eliehannouch" }, { "code": " package main\n \n import (\n \"log\"\n \"os\"\n \"text/template\"\n )\n \n func main() {\n template, err := template.ParseFiles(\"template.txt\")\n // Capture any error\n if err != nil {\n log.Fatalln(err)\n }\n // Print out the template to std\n template.Execute(os.Stdout, nil)\n }\ntemplate.Musttemplate.Execute package main\n \n import (\n \"log\"\n \"os\"\n \"text/template\"\n )\n \n // Declare a pointer for the template.\n var temp *template.Template\n \n // we use the init() function, to make sure that the template is parsed once\n func init() {\n // The Must function take the ParseFiles response and perform error checking\n // template.Must takes the response of template.ParseFiles and does error checking\n temp = template.Must(template.ParseFiles(\"template.txt\"))\n }\n func main() {\n // Execute myName into the template and print to Stdout\n myName := \"Elie Hannouch\"\n err := temp.Execute(os.Stdout, myName)\n if err != nil {\n log.Fatalln(err)\n }\n }\n\n", "text": "Hello folks, a new day and some new knowledge are required. First of all I hope that your journeys are going well. In yesterday post talked about regular expressions and their roles in Golang, how they are used to validate input, events to restrict it and much more. In today’s post we will dive into a new amazing topic in Golang, the text templates feature that helps developers create dynamic content and show customized output to the user based on their needs. First of all let’s explore what the Go Template package is and how we can use it to render some dynamic content based on the user and developer needs. 🥹:desktop_computer:What is a template in Golang ? In simple terms a template is a file that defines a specific pattern, where the developer can dynamically change the input to show different output based on the embedded rules and logic. A template simply helps developers transform their static files into more dynamic and interactive ones.ParseFiles in Go Must & Execute Method in Golang template.Must: Must is a helper that wraps a call to a function returning (*Template, error ) and panics if the error is non-nil.template.Execute: Execute applies a parsed template to the specified data object, and writes the output to wr. If an error occurs executing the template or writing its output, execution stops, but partial results may already have been written to the output writer.Example:And now let’s wrap this amazing post up, and stay tuned for tomorrow post where we will talk more about variables and loops inside the template in Golang 🫡", "username": "eliehannouch" } ]
The Journey of #100DaysofCode (@eliehannouch)
2022-06-08T21:06:19.994Z
The Journey of #100DaysofCode (@eliehannouch)
16,368
null
[ "data-modeling" ]
[ { "code": "", "text": "We are collecting a that when translated into a JSON object, this will contain more than 20K key and value pairs. Is this something that MongoDB can handle?", "username": "Rommel_Oli" }, { "code": "", "text": "There is a document size limit of 16 MB.\nDocuments — MongoDB ManualIf every key has its double value, then your document size will approximately be 350 kB. I recommend setting indexes as queries with such large documents will run very slowly.", "username": "Simon_Bieri" }, { "code": "", "text": "Welcome to the MongoDB Community @Rommel_Oli !If you are inserting JSON documents as-is, the limitations for individual documents are:Depending on your use case, there may be better ways to model this data in MongoDB. For example, 20k fields which are infrequently used could lead to anti-patterns like Bloated Documents.Some recommended references areRegards,\nStennie", "username": "Stennie_X" } ]
More than 20K key and value pair
2022-09-08T05:59:32.396Z
More than 20K key and value pair
1,268
https://www.mongodb.com/…cc1454144f46.png
[ "100daysofcode" ]
[ { "code": "", "text": "Hello everyone , I am Darine from Lebanon MUG happy to join the #100DaysOfCode inspired by @henna.s, where I’ll share my daily progress learning some amazing JS stuff.For today, the #Day1 day in this amazing journey, I’ll be sharing some amazing information about the Document Object Model in JavaScript.What is DOM ? The Document Object Model is a programming interface for web documents. It represents these documents as nodes and objects, so in this case the programming language like JS can interact with the page to change the content, the style or even the structure.DOM !== JAVASCRIPT DOM is not a programming language, neither a part of the JavaScript language, but instead it’s a Web API used to build websites, and JS without it wouldn’t have any model of the web pages, and we as JS developers cannot make those cool animations that we see around in most of today websites. Where all of these animations are made simply by manipulating the DOM.Dom Manipulation It’s the interaction process with the JS DOM API to create, modify, style, or delete elements without a refresh. It also promotes user interactivity with browsers.DOM Tree\n950×1024 56.6 KB\n\nLet’s explore the above tree, and get to know some of it’s important parts:", "username": "Darine_Tleiss" }, { "code": "", "text": "Hello @Darine_Tleiss ,Welcome to MongoDB Community Absolutely Delighted to know that I inspired you This has been an amazing journey for me and I am thinking of getting back to it sometime in July now I wish you all the best on your self-discovery and please feel free to reach out anytime when you need Cheers, ", "username": "henna.s" }, { "code": "document.documentElementdocument.querySelector('p')document.querySelectorAll('p')NodeList(6) [p, p, p, p, p, p]\n0: p\n1: p\n2: p\n3: p\n4: p\n5: p\nlength: 6\n[[Prototype]]: NodeList\ndocument.getElementById(\"section1\");document.getElementsByTagName('tagName');document.getElementsByClassName('btn');const message = document.createElement('div');message.classList.add('greeting-message');message.innerHTML = ‘Hello dear community, I’m so happy joining this amazing journey and sharing my daily progress’const body = document.querySelector('body’);body.append(message);", "text": "Today we will explore some of the document object model functions that help us accessing different parts of our web page, or even create a new one from scratch.First let’s explore some of the basic DOM methods that help, retrieving and selecting certain sections of the document starting with:document.documentElement\nReturn the root element of the document, for example the html element for the HTML documents.document.querySelector('p')\nA method that returns the first element that matches a CSS selector.document.querySelectorAll('p')\nA method that returns a static NodeList representing a list of the document’s elements that match the specified group of selectors.\nOutput:document.getElementById(\"section1\");\nA method that returns an element with a specified value or returns null if the element does not exist.document.getElementsByTagName('tagName');\nA method that returns an HTMLCollection of elements with the given tag, and the returned collection is live, meaning that it updates itself automatically And if we pass * as a tagName the method returns all the elements of the document.document.getElementsByClassName('btn');\nA method that returns an array-like object of all child elements which have all of the given class name(s).And Now it’s time, to learn more on how we can create a new element using DOM, and add some content to it : const message = document.createElement('div');\nHere we are creating a div using the .createElement method and saving it to a message const.message.classList.add('greeting-message');\nIn the above code, we are adding a className (greeting-message) to our newly created div with the help of classList.add method.message.innerHTML = ‘Hello dear community, I’m so happy joining this amazing journey and sharing my daily progress’ \nHere by using the .innerHTML method we are embedding some content to the created div.const body = document.querySelector('body’);\nbody.append(message);\nAnd finally we append it to the body section of our HTML document using the following piece of code.", "username": "Darine_Tleiss" }, { "code": "", "text": "Hello friends , a new day == a new progress \nToday we will learn how to create documents using DOM, add elements to it, prepend, append and manipulate them Lets create a new document so we can add, remove and manipulate their elements\nlet newDoc = new Document();\nHere we are initialising a new document called newDoc.Let’s create some html elements to add them to our newly created documentlet html = document.createElement(“html”);\nHere using the document.createElement function we are creating a new HTML element to be appended to our documentnewDoc.append(html)\nThe .append method, add the html element to the document and the doc structure become as following:\nNow let’s expand the html element and add some new elements to it\nlet head = document.createElement(“head”);\nlet body = document.createElement(“body”);html.append(body)Let’s anaylze our code and see the generated document\nAfter creating the html document, we have created 2 new elements, the body and the head.\nAnd using the .append method, we’ve added the body to the html and the structure became as following:\nAnd now how can add the head element, before the body one ??Easy we can simply use the .before method that inserts a set of Node or string objects in the children list of this Element’s parent, just before this ElementSo we execute it as following:body.before(head);After preparing the main document structure, let’s create some elements to be added to our body. So we can perform some experiments on them let h1 = document.createElement(“h1”);\nlet p = document.createElement(“p”);\nlet div = document.createElement(“div”);\nbody.append(p,div)So finally let add the missing h1 element as a first child to the body.How ? Using the body.prepend() method that inserts a set of Node objects or string objects before the first child of the Element.body.prepend(h1)Finally for today let’s add some text with a className ‘content’ to our paragraph, get rid of the unnecessary created h1 and move the paragraph to become the child of the empty div.p.innerHTML = “Hello everyone, here we are inserting some text to the p element using the write method”p.className = “content”\nbody.removeChild(h1)\ndiv.appendChild(p)\n1600×440 81.3 KB\n", "username": "Darine_Tleiss" }, { "code": "const r = document.getElementById('root')r.hasChildNodes()\ntrue\n const r = document.getElementById('root')\n r.childNodes\n \n NodeList(3) [ul#nav-access.a11y-nav, div.page-wrapper.category-api.document-page, footer#nav-footer.page-footer]\n 0: ul#nav-access.a11y-nav\n 1: div.page-wrapper.category-api.document-page\n 2: footer#nav-footer.page-footer\n length: 3\n [[Prototype]]: NodeList\n const r = document.getElementById('root')\n r.firstChild\n <ul id=​\"nav-access\" class=​\"a11y-nav\"> ​… <​/ul>\nconst r = document.getElementById('root')\n r.lastChild\n <footer id=​\"nav-footer\" class=​\"page-footer\"> …​ </footer>\nconst r = document.getElementById('root')\nr.children[1]\n<div class=​\"page-wrapper category-api document-page\"> ​…​ </div​>\nconst r = document.getElementById('root')\nr.removeChild(r.children[0])\n<ul id=​\"nav-access\" class=​\"a11y-nav\"> ​… </ul> ​\n HTMLCollection(2) [div.page-wrapper.category-api.document-page, footer#nav-footer.page-footer, nav-footer: footer#nav-footer.page-footer]\n 0: div.page-wrapper.category-api.document-page\n 1: footer#nav-footer.page-footer\n nav-footer: footer#nav-footer.page-footer\n length: 2\n [[Prototype]]: HTMLCollection\nconst r = document.getElementById('root')\nr.parentNode\n<body> …​ </body>​\nconst element = document.getElementById('p2')\nelement.previousSibling\n<div class=​\"page-footer-nav-col-1\" id=​\"p1\"> ​… </div​>\nelement.nextSibling\n<div class=​\"page-footer-nav-col-3\" id=​\"p3\"> ​…​ </div​>\n", "text": "Hello friends , hope you are excited to learn something new in the amazing world of JavaScript. Today marks the 4th day in this challenging and fun journey. And Now it’s time to continue the exploration process of the DOM.First of all we are gonna start exploring the different elements children, their parents & siblings, how we can access them and what operations we can perform on.How can we check if a certain element has some children ? Select the targeted element by its ID, or className as following\nconst r = document.getElementById('root')After selecting it, we can easily run the following command which returns either true or false, where true means the element has some childrens and vice versa.How can we list the childNodes of a certain element if they exist ? Using the Element.childNodes function that returns a live NodeList of child nodes, where the first child node is assigned with index 0, and so on.Now let’s dive into our elements and learn more about how we can access a first child, a last one, how we can change, delete them and more …To access the first child , using the Node.firstChild property that returns the first child in the tree.To access the last child , using the Node.lastChild that returns the last child of the nodeTo access an element at a certain position we can access it simply by specifying it index, using the Element.children[index]To delete a certain child , we can use the removeChild() method that returns the specified child if it exists from the DOM then returns it.\nExample:And here the first child with the index 0 is removed from the tree and the new set is:Moving from childrens to parents & siblings. Let’s now learn more about them, how to access some of them and more …To access the parent of a certain element we can simply use the following property Element.parentNode, which return the parent of the targeted element as following:To access the siblings of a certain element, we have to use the basic properties of the Node.nextSibling that return the next sibling in the tree and the Node.previousSibling that return the previous one.\nExample:", "username": "Darine_Tleiss" }, { "code": "window.addEventListener('load', (event) => {\nconsole.log('The entire page and its dependencies is fully loaded');\n});\nwindow.onunload = (event) => {\n console.log('The page is unloaded successfully');\n };\n// A function to count the user message length\nfunction countUserMessageLength(e) {\nconsole.log(e.target.value.length)\n}\n// Select the input element\nlet input = document.querySelector('input');\n// Assign the onchange eventHandler to it\ninput.onchange = handleChange;\n<ul id=\"list\">\n <li>Javascript</li>\n <li>MongoDB</li>\n</ul>\n\nlet list = document.getElementById(\"list\");\n\nlist.addEventListener(\"mouseover\", function( event ) {\n\t// on mouseover change the li color\n\tevent.target.style.color = \"orange\";\n \n\t// reset the color after 1000 ms to the original\n\tsetTimeout(function() {\n\t event.target.style.color = \"\";\n\t}, 1000);\n }, false);\n<div class=\"source\" contenteditable=\"true\">Copy from here ... </div>\n<div class=\"target\" contenteditable=\"true\">...and pasting it here</div>\n\nconst target = document.getElementByClassName(\"target\")\n\ntarget.addEventListener('paste', (event) => {\n let paste = (event.clipboardData || window.clipboardData).getData('text');\n paste = paste.toUpperCase();\n \n const selection = window.getSelection();\n if (!selection.rangeCount) return false;\n selection.deleteFromDocument();\n selection.getRangeAt(0).insertNode(document.createTextNode(paste));\n \n event.preventDefault();\n});\n", "text": "Hey everyone , what is going on ??. Today marks the 5th day in our coding challenge.\nIn Today post we will make a gentle dive in the document object model events, their types and how we can deal with them.What are the HTML DOM events ? \nEvents that allow the JavaScript programming language to add different event handlers on HTML elements. They are a combined with callbacks functions where these functions will only execute if the specified event occur.Examples of HTML DOM events:And now let’s explore some of the basic events in JavaScript.The load and onload event, that gets fired when the whole page loads, including all the dependencies resources like the external images, stylesheets …\nExample:The unloaded event is fired when the document or one of its child gets unloaded, it typically execute when the user navigate from one page to another, and this type of events can be used to clean up the references to avoid memory leaks\nExample:The onchange event, that get fired when a user commit a value change to a form control\nExample:The mouseover event, that gets fired at an Element when a pointing device (such as a mouse) is used to move the cursor onto the element or one of its child elements.\nExample:The onpaste event, gets fired when the user has initiated a “paste” action through the browser’s user interface.\nExample:", "username": "Darine_Tleiss" }, { "code": "let positionTracker = 0;\nlet ticking = false;\n \nfunction doSomething(scrollPos) {\n // Do something with the scroll position\n}\n \ndocument.addEventListener('scroll', function(e) {\n positionTracker = window.scrollY;\n \n if(positionTracker > 1500) {\n alert(\"OH we reached the second half of our website\")\n }\n \n if (!ticking) {\n window.requestAnimationFrame(function() {\n ticking = false;\n });\n \n ticking = true;\n }\n});\n<details><aside id=\"log\">\n <b>Open chapters:</b>\n <div data-id=\"ch1\" hidden>I</div>\n <div data-id=\"ch2\" hidden>II</div>\n</aside>\n<section id=\"summaries\">\n <b>Chapter summaries:</b>\n <details id=\"ch1\">\n <summary>Chapter I</summary>\n MongoDB is the powerful NoSQL database\n </details>\n <details id=\"ch2\">\n <summary>Chapter II</summary>\n Javascript is one of the most popular programming languages.\n </details>\n</section>\n<script>\nfunction logItem(e) {\n const item = document.querySelector(`[data-id=${e.target.id}]`);\n item.toggleAttribute('hidden');\n }\n const chapters = document.querySelectorAll('details');\n chapters.forEach((chapter) => {\n chapter.addEventListener('toggle', logItem);\n });\n</script>\nwindow.addEventListener('offline', (event) => {\n console.log(\"The network connection has been lost.\");\n});\n \n// onoffline version\nwindow.onoffline = (event) => {\n console.log(\"The network connection has been lost.\");\n};\n<div>Lets scale with the mouse wheel.</div>\n \nconst element = document.querySelector('div');\nlet scale = 1;\nfunction zoom(event) {\n event.preventDefault();\n scale += event.deltaY * -0.01;\n element.style.transform = `scale(${scale})`;\n }\n element.addEventListener('wheel', zoom);\nconst form = document.getElementById('form');\nconst log = document.getElementById('log');\n \nfunction logReset(event) {\n log.textContent = `The Form has been reseted! Time stamp: ${event.timeStamp}`;\n }\n \nform.addEventListener('reset', logReset);\n", "text": "Hello friends , hope your journeys are going smoothly. Today marks the 6th / 100 day. Let’s learn something new in Javascript and make our journey a successful one .Today, we will continue discovering some new HTML DOM events, and how we can use them to manipulate our document and make some amazing effects.The scroll event, that fires when the document view has been scrolled.\nExample:The toggle event, or ontoggle event that gets fired when the open / closed state of a <details> element is toggled.\nExample:The offline event, that gets fired when the browser has lost access to the network and the value of Navigator.onLine switches to false\nExample:The wheel event fires when the user rotates a wheel button on a pointing device\nExampleThe reset event fires when the html element is resetted\nExample:", "username": "Darine_Tleiss" }, { "code": "// index.html file\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Css External file</title>\n <link rel=\"stylesheet\" href=\"styles.css\">\n </head>\n <body>\n <h1>Hello World!</h1>\n </body>\n</html>\n// style.css file\nh1 {\n color: blue;\n background-color: yellow;\n border: 1px solid black;\n }\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Internal StyleSheets</title>\n <style>\n p {\n color: blue;\n font-size: 12px;\n }\n </style>\n </head>\n <body>\n \n <p>Hello folks, in today post we are exploring css basics,\n by doing a gentle tour on its basics\n </p>\n </body>\n</html>\n<!DOCTYPE html>\n<html>\n <head>\n <meta charset=\"utf-8\">\n <title>Css Inline Style</title>\n </head>\n <body>\n <p style=\"color:blue;\">I love learning CSS</p>\n </body>\n</html>\n", "text": "Hey friends, Today is the 7th day of our amazing journey. I hope everyone enjoying their journey and learning some amazing things .In Today’s post, we will take a break from Javascript , and move on to learn and discover CSS, and its role in building stunning Web interfaces .So, starting with a quick introductory question , What does CSS stands for ?? C → CascadingS → Style S → Sheets\nsta-je-css1000×600 86.1 KB\nWhat CSS is all about in simple terms ?A standard that describes the formatting of markup languages pages, CSS describes how elements should be rendered on screen, on paper, in speech, or on other media. It define the formatting rules for the following document types:The C in CSS\nThe process of combining different stylesheets and resolving conflict between different CSS rules and declarations, when more than one rule applies to a certain element.A gentle look to CSS behind the scene How can we apply CSS rules to an HTML document ?In CSS we have three different methods to apply css to an html document.External stylesheets\nAn external stylesheet containing the css rules separated in a file with a .css extension. This most common and useful method of adding CSS to a document, where we can link a single css file to multiple web pages, styling all of them with the same stylesheet. And we link the HTML page with the CSS sheet simply by using the link element, and specifying the path of the css file in the href attribute.\nExample:File 1: index.htmlFile 2: styles.cssInternal Stylesheets\nThe internal stylesheet live within the HTML document, and we can simply creating it by placing our css inside a element in the head of our HTML document.\nExample:Inline Style\nCSS declarations that affect a single HTML element, contained with a style attribute.\nExample:", "username": "Darine_Tleiss" }, { "code": "h1 {\n\ncolor: blue;\n\ntext-align: center;\n\n}\n\nIn the above css rule, the selector is simply the h1 element, and the selector can be an HTML element, a class, an ID …\n\nWe can also combine more than selector in the same css rule as follows.\n\nh1, .container {\n\ncolor: blue;\n\n}\n* {\n\ncolor: red;\n\nfont-size: 14px;\n\n}\n<p class=\"cssclass\"></p>\n\n<div class=\"cssclass\"></div>\n.cssclass {\n\ncolor: blue;\n\n}\n<p id=\"cssID\"></p>#cssID {\n\ncolor: blue;\n\nfont-size: 2rem;\n\n}\n<div data-type=\"primary\"></div>[data-type='primary'] {\n\ncolor: red;\n\n}\nh1,\n\n#cssID,\n\n.my-class,\n\n[lang] {\n\ncolor: red;\n\n}\n", "text": "Hello family, hope you’re enjoying my daily posts . Let’s continue, and celebrate our daily progress . Today marks the 8th day / 100. And in today’s post, we will dive more in CSS and learn some amazing new things. First of all, we will discover what selectors are and how we can use them in our CSS document.What is a CSS rule set ? A CSS rule set contains one or more selectors and one or more declarations. The selector(s), which in this example is the class .my-css-rule , points to an HTML element. The declaration(s), which in this example are background: red; color:beige; font-size: 1.2rem\n760×406 9.33 KB\nWhat is a CSS selector ?A CSS selector is the first part of a CSS Rule. It is a pattern of elements and other terms that tell the browser which HTML elements should be selected to have the CSS property values inside the rule applied to them.Example:In the above example, the css rule contains 2 different selectors, an h1 element, and a class called container.Type of selectors:Universal selector: Known as wildcard, that matches any elementThe defined selector, below will affect all html elements, to have a color of red and a font size of 14 pixelsExample:Class selector: each HTML element can have one or more items defined in their attribute. The class selector matches any element that has the class applied to it.Classes in css start with a dot , example: .cssclassindex.htmlstyles.cssID selector: each HTML element can have an ID, and this ID should be unique to only one element in the pageThe ID in the css, is represented with a #index.html<p id=\"cssID\"></p>styles.cssAttribute selectorThe CSS attribute selector matches elements based on the presence or value of a given attribute. Instruct CSS to look for attributes by wrapping the selector with square brackets ([ ]).Example:index.html<div data-type=\"primary\"></div>styles.cssGrouping selectorA selector doesn’t have to match only a single element. You can group multiple selectors by separating them with commasExample:", "username": "Darine_Tleiss" }, { "code": "", "text": "Yes, we are enjoying your posts! ", "username": "Harshit" }, { "code": "", "text": "Hi @Darine_Tleiss,Thank you for continuing to share such informative posts – definitely enjoying the daily read and following your learning journey Regards,\nStennie", "username": "Stennie_X" }, { "code": "user-agentauthoruser stylesheetsUser-agent stylesheets, Author stylesheets, User stylesheets<p style=”color:red”;> Hello </p>#paragraphStyle {color: red}.paragraphStyle, :hover, [href]p, h1, :before, ::after!important A: p {color: red; }\n \n B: p#content {color: yellow; }\n \n C: <p id=\"content\" class=”paragraphContent” style=\"color: blue;\">Hello World</p>\n \n D: p.paragraphContent { color: green; }\n", "text": "Hello community, hope everyone is doing well . The 9th day is here, and some knowledge are required before marking the day as completed In today’s post, we will dive more into CSS, exploring some amazing features , and expanding our existing knowledge. What is the Cascade in CSS ? Cascade is an algorithm that defines how the user agent combines properties from different sources. And how it solves the conflict when multiple CSS rules apply to an HTML element.It lies at the core of CSS, and simply when a selector matches an element , the property that comes from the origin with high precedence gets applied .How does the Cascade algorithm work ?   The defined algorithm, is splitted into different stages Position and order of appearance: the order in which your CSS rules appearSpecificity: the algorithm that determines which CSS selector has the strongest matchOrigin ⌱: the order of when CSS appears and from where it comes from ( user-agent, author, user stylesheets)Importance : the css rule, with a heavy weight than others (especially the one marked with !important)What are the origin types in CSS ?    User-agent stylesheets     Author stylesheets     User stylesheets What is CSS Specificity ?Inline styles\nExample <p style=”color:red”;> Hello </p>ID’s\nExample: #paragraphStyle {color: red}Classes, pseudo-classes, and attribute selectors\nExample: .paragraphStyle, :hover, [href]HTML elements, and pseudo elements\nExample: p, h1, :before, ::afterHow can we calculate the CSS Specificity ?We give the ID’s a specificity value of We give the Classes a specificity value of We give the HTML elements a specificity value of ❶We give the inline styles a specificity value of 1000Example:For A, we have 1 (for the HTML element) For B, we have 101 (1 for the HTML element and 100 for the ID ) 🥹For C, we have 1000 (for the inline style) For D, we have 11 (1 for the HTML element, and 10 for the Class) If we compute the results quickly we can see that the winning specificity is the one related to the inline style, and in this case the browser applies the css defined in the inline rule.", "username": "Darine_Tleiss" }, { "code": "inheritinitial <html>\n <head>\n <title>Inheritance in CSS</title>\n <style>\n #parent{\n color: blue;\n font-weight: bold;\n }\n </style>\n </head>\n <body>\n <div id=\"parent\">\n This is the parent\n <div id=\"child1\">\n This is the first child\n </div>\n <div id=\"child2\">\n This is the second child.\n </div>\n </div>\n \n </body>\n </html>\ncolor: bluefont-weight: bolduser-agentfont-* (like font-size, font-family, font-weight …)text-align , text-indent, text-transformcolorborder-spacing … <html>\n <head>\n <title>Using the inherit keyword</title>\n <style>\n #parent{\n border: 3px solid black;\n margin-bottom: 10px;\n }\n #child2{\n border: inherit;\n }\n \n #child1{\n margin-bottom: inherit;\n }\n \n </style>\n </head>\n <body>\n <div id=\"parent\">\n This is the parent\n <div id=\"child1\">\n This is the first children\n </div>\n <div id=\"child2\">\n This is the second children.\n </div>\n </div>\n \n </body>\n </html>\nchild1margin-bottominherit <html>\n <head>\n <title>Using the initial keyword</title>\n <style>\n #parent{\n color: red;\n }\n h1 {\n color: initial\n }\n </style>\n </head>\n <body>\n <div id=\"parent\">\n <h1> Hello from the h1 tag</h1>\n Here some normal lorem ipsum text.\n </div>\n </body>\n </html>\n", "text": "Day , is already here , Hello folks, how are your journeys going ? Let’s continue our learning journey and dive more into some new topics in the amazing world of CSS. *In today Post, we will talk more about inheritance , an important topic in CSS​:flushed:, then we will move to discover the inherit, initial, keyword and how we can use them to control the inheritance process.*So What does the term inheritance mean ? What does the CSS inheritance mean ? ResultIn the previous example we have ⒉ child div’s, wrapped inside a parent div, each of them have a unique ID, and as we can see in our example, when the parent div got some CSS rules like color: blue, and font-weight: bold, these css rules affected the child divs directly. And with a quick inspection to our webpage we can see clearly that the css rules get inherited from the parent, as demonstrated in the below image.\n1056×378 61 KB\nCan all CSS properties get inherited ?? How can we inherit a non-inheritable property in CSS ?Result\n1134×224 29.9 KB\nThe initial value ? How it works, and how we can use it ? ⁇Result:\n1134×224 31.8 KB\nAs we can see in our example , the div with the id parent get the color:red applied to its lorem ipsum content, but the h1 tag was not affected at all. Why? Simply because we are using the initial keyword on it. Which give it the initial color property from the user-agent style and not the inherited one from it’s parent.", "username": "Darine_Tleiss" }, { "code": "<html>\n <head>\n <title>Using Current Color</title>\n <style>\ndiv {\n padding: 10px;\n color: royalblue;\n border: 4px solid;\n border-color: currentColor;\n}\n </style>\n</head>\n <body>\n <div>\n Hello world\n </div>\n </body>\n</html>\n", "text": "Hello friends , a new day is here, and today marks the 11th day from my amazing learning journey. So let’s dive more into CSS and discover all the amazing features that CSS provides us on the fly to build such amazing web interfaces .Starting our post, we will talk about colors , their types and how CSS deal with them to apply the corresponding value added to each property.Where and how the CSS colors are applied ? Color keywords:In CSS we have a list of around 148 named colors. These colors are written in plain english and have meaningful descriptions so they can be used in the correct place to give the correct result.\nExample: blue , red , white , salmon​:two_hearts:, royalblue​:blue_heart:, lightgreen​:green_heart:.Alongside to these named colors we have other special keywords that can be used, like:Example:ResultAnd here we can see that the border-color currentcolor is set automatically to royal blue which is the text color that we defined previously.Numeric colors:The basic building blocks of a color in CSS, where any developer can deal with colors using their numeric values, like the hexadecimal colors, the hexadecimal colors with alpha, the RGB and finally the HSL colors.Hexadecimal colors.A color hex code is a hexadecimal way to represent a color in RGB format by combining three values – the amounts of red, green and blue.Hexadecimal numbers have a range of 0-9 and A-F , where 10 is A, 11 is B, 12 is C …, and when we use the six digit sequence they got translated to RGB numerical ranges, where the six digits are divided sequentially on the 3 colors (2 → Red, 2 → Green, 2 → Blue)Example (the first 2 values represent the amount of red, the second 2 the green and the last 2 the blue) where this combination give us the shown color below.Hexadecimal colors with alpha.The hexadecimal part, stay the same and the alpha value is simply a numeric value that define the percentage of transparency.When adding a transparency percentage the sequence becomes eight digits instead of 6, where the last 2 digits represent this transparency level.O alpha is represented with 00.50% alpha is represented with 80.75% alpha is represented with BF.Example:RGB (Red, Green, Blue) colors.The RGB color model is an additive color model in which the red, green, and blue primary colors of light are added together in various ways to reproduce a broad array of colors.Colors combinations can be set as numbers in a range of 0-255 inside the rgb() function.Example: rgb(183, 21, 64);Example: rgb(25%, 31%, 39%);\n796×346 43 KB\nExample: to add a 50% transparency we can set the color values in the rgba functions as followrgba(0, 0, 0, 50%) or rgba(0, 0, 0, 0.5).HSL (Hue, Saturation, Lightness) colors.Hue: is a degree on the color wheel from 0 to 360, where 0 is red, 120 is green and 240 is the blue.Saturation: can be described as the intensity of a color, where 100% means that we are dealing with pure color, without any shades of gray. 50% have 50 shades of gray. And 0 a complete gray.Lightness: specify the amount of light we want to add to the color. Where 0% means no light, and 100% means full light.\nExample:\n1502×480 46.3 KB\n", "username": "Darine_Tleiss" }, { "code": "", "text": "Hello everyone , hope your day was a productive and a fun one . Today marks our 12th day from this amazing journey . And now let’s wrap it up with some new and important topics.In today’s post, we will discover what SASS / SCSS is all about, how it differs from the normal CSS and what are the extra capabilities that SCSS gives us out of the box.Be ready to learn some new and in demand information, and please enjoy your day to the maximum. What is SASS ? How does SASS extend CSS ? What is SCSS ? SCSS vs. SASS – How they differ in terms of syntax.How does SCSS/SASS work ? SCSS Architecture ? How is SCSS organized behind the scenes ?? \nFirst of all let’s start by understanding the Think-Build-Architect mindset.Think ? : we should think about the layout of the webpage, or web app before writing the codeModular building blocks that make up interfaces.Held together by the layout of the page.Reusable across a project, and between different projects.Independent, allowing us to use them anywhere on the page.Build ? : we should build the HTML layout, alongside with the CSS following a consistent structure for classes naming.\nHere we should use the BEM methodology.Block ?: a standalone component that is meaningful on its own.Element?: a part of the block, that has no standalone meaning.Modifier?: a different version of a block or an element.And now we have reached the end of this long and informative post , I hope you will enjoy exploring these amazing new topics in the world of CSS / SCSS and SASS. Stay tuned for tomorrow’s post where we will dive together in the 7-1 pattern which plays an important role in architecting every production SCSS project out there .", "username": "Darine_Tleiss" }, { "code": " \tbase/\n _animations.scss\n \t\t\t _typography.scss\n \t\t _ base.scss(contains the html and body components)\n\t\t components/\n\t\t\t\t_button.scss\n\t\t\t\t_form.scss\n _popup.scss\n\t layout/\n\t\t_footer.scss\n\t\t_navigation.scss\n\t\t_header.scss\n \t pages/\n _home.scss\n\t themes/\n\t\t _theme.scss\n\t\t _admin.scss\n abstracts/\n\t _functions.scs\n\t _mixins.scss\n\t _variables.scss\n\t vendors/\n\t \t_bootstrap.scss \n\t \t_jquery-ui.scss\n \n @import \"abstracts/functions\";\n @import \"abstracts/mixins\";\n @import \"abstracts/variables\"; \n \n @import \"base/animations\"; \n @import \"base/base\"; \n @import \"base/typography\"; \n @import \"base/utilities\"; \n \n @import \"components/bg-video\" ; \n @import \"components/button\"; \n @import \"components/card\";\n @import \"components/composition\";\n @import \"components/feature-box\";\n @import \"components/form\";\n @import \"components/popup\";\n @import \"components/story\";\n \n @import \"layout/footer\";\n @import \"layout/grid\"; \n @import \"layout/header\";\n @import \"layout/navigation\";\n \n @import \"pages/home\";\n", "text": "Hello friends, How it’s going on , wrapping up our 13th day successfully with some amazing content. And a deep dive into SASS and SCSS.In yesterday’s post, we defined SCSS and SASS, how they extend the CSS capabilities giving us a lot of features out of box , making the front end developers life a smooth and productive one . We also mentioned the SCSS architecture , with a gentle intro about the thinking and building phase. And now it’s time to resume our process and talk more about the 3 and the most important phase. The (Think Build Architect) 3rd parameter.YEAH, You guessed it !!. The Architect phase   Architect ?: In this phase we should create a logical architecture to organize our CSS files\n and folders.\n  And here we follows the famous 7-1 pattern How does the seven/one pattern organize the folders and files in SCSS ?   Starting with folders, we will explore each one of them in deep details, starting with:Base: the folder that hold the boilerplate code of the project, including all standard styles, like the typographic rules, which they are used commonly throughout the project \nExample:Components: the folder that hold the style of the sliders, buttons, and all similar components like the reusable widgets in React JS \nExample:Layout: the folder that contains the styles files that control the project layout , like the styles of the navigation bar, the header, the footer …\nExample:Pages: the folder that contains the specific style of an individual page, like the home page style. Which is not required in other pages and at the same time does not 🆇 require any style from other pages.\nExample:Themes: a folder that contains the files of specific themes, and it’s not used in every project out there , as it only required if we have some sections the contain alternate schemes \nExample:Abstracts: the folder that contains the helper files, mixins functions, SASS tools and any other config file , and this files are just helpers which don’t output when they get compiled \nExample:Vendors: the folder that contains all third party code and all external libraries​:books:, like bootstrap, jQuery, tailwind …\nExample:  So yeah, now we explored our folders successfully , so let’s move on and explore the single file that exists in the amazing architecture. main.scss: the file that only contains the required imports from all other files.\nExample:And now, we have reached the end of our post. I hope you enjoyed reading it and benefited from it. Stay tuned for my ⎘ next posts where we will learn together how we can install SASS and SCSS, via the node package manager and start exploring it in an interactive way ", "username": "Darine_Tleiss" }, { "code": " {\n \"name\": \"hellofromscss\",\n \"version\": \"1.0.0\",\n \"description\": \"\",\n \"main\": \"index.js\",\n \"scripts\": {\n },\n \"keywords\": [],\n \"author\": \"\",\n \"license\": \"ISC\"\n }\n{\n \"name\": \"hellofromscss\",\n \"version\": \"1.0.0\",\n \"description\": \"\",\n \"main\": \"index.js\",\n \"scripts\": {\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\n },\n \"keywords\": [],\n \"author\": \"\",\n \"license\": \"ISC\",\n \"dependencies\": {\n \"node-sass\": \"^7.0.1\"\n }\n}\n", "text": "Hello everyone, day 14 is already here, let’s move on and make some daily progress toward the success goal. Yesterday we discovered the / architecture that makes our work with such kinds of front end apps a smooth one.\nIn today’s post, we will move on from the theory part, to learn more about how to install SCSS in your next front end project. What are the requirements and how can we get the project running ?SCSS on your hello world project \nIn the modern web development stacks, the node package manager (npm), plays an important role in making our life an easy one. With a simple command we can install one of a million available packages that give us the required features out of the box and for free.How can we install NPM ? \nTo install the node package manager and use it in your next project, first of all you should visit the official documentation of nodejs (which runs JS on the server on simple terms). And install it based on your machine type.\nNodejs link: Download | Node.jsWhat should we do after installing node js locally on our machine ? \nFirst of all to make sure that everything in your setup, is completed correctly and to move on smoothly to the next phase, please run the following 2 commands on your terminal, to test if node js and npm gets installed correctlyTest NodeJS\nnode --version\nWhich should return a certain version like v17.0.1 in my caseTest npm\nnpm --version\nWhich should return a certain version like v8.1.3\nin my caseHow to install SCSS in a new clean project ? First of all please visit your favorite location, in the local machine and create a new directory under the name helloFromScss.Navigate to the newly created folder, and open it with your favorite editor (in my case i’m using vs-code).Open the terminal inside your project, and at the root level perform the following command : npm init --y.The npm init command initializes a project and creates the package.json file. And when adding the --y, we are simply skipping the interactive questions process that ask for example for the author name and other information.\nOutput: package.jsonLet’s analyze the package.json file : in simple terms this generated file plays an important role in building the project where it contains some metadata about it. Like the name of the project and it’s description and other functional requirements like the list of dependencies and installed npm packages which are required by our application to run properly.Now our empty project is ready , the package.json file is generated and all the required setup is done. So it’s time to install sass in our project using this simple command: npm install node-sass or npm i node-sass.The npm install installs all modules that are listed on package. json file and their dependencies.\nAnd now the package.json file becomes as follow:", "username": "Darine_Tleiss" }, { "code": " // Here we are defining some color variables\n $color-primary: #c69963;\n $color-primary-dark: #b28451;\n $color-secondary: #101d2c;\n $color-grey-light-1: #f9f7f6;\n $color-grey-dark: #54483a;\n \n // And now some font variables gets defined\n $font-primary: \"Nunito\", sans-serif;\n $font-display: \"Josefin Sans\", sans-serif;\n \n // For example, here in the body rule instead of adding\n // the font family and color values again\n // we just call them using their names\n body {\n font-family: $font-primary;\n color: $color-grey-dark;\n font-weight: 300;\n line-height: 1.6;\n }\n /* import the math library using the @use syntax */\n @use \"sass:math\" as math; \n /* use the available variables in the math library for example */\n math.$pi // Output: 3.1415926536\n math.$e; // Output: 2.7182818285\n $global-variable: global value;\n \n .content {\n $local-variable: local value;\n global: $global-variable;\n local: $local-variable;\n }\n \n .sidebar {\n global: $global-variable;\n \n // This would fail, because $local-variable isn't in scope:\n // local: $local-variable;\n }\n**!global** $variable: first global value;\n \n .content {\n $variable: second global value !global;\n value: $variable;\n }\n \n .sidebar {\n value: $variable;\n }\n\n @use \"sass:map\";\n \n $theme-colors: (\n \n \"success\": #28a745,\n \n \"warning\": #ffc107,\n \n );\n \n .alert {\n \n // Instead of $theme-color-#{warning}\n \n background-color: map.get($theme-colors, \"warning\");\n \n }\n@mixin at-rule@include @mixin horizontal-list {\n \n li {\n display: inline-block;\n margin: {\n left: -4px;\n right: 3em;\n }\n }\n }\n nav ul {\n @include horizontal-list;\n }\n\n", "text": "Hello friends , a new day is here. Let’s wrap the 15th day from this amazing 100 day journey. In yesterday’s post, we explored how to install nodeJS, with the node package manager that helps us starting a project and install all the requirements to use and manipulate sass code. In today’s post, we will start diving in some amazing topics that show us clearly the power of sass and how this powerful extension to the normal css helps us write efficient code, reusable and non redundant one. We will start by exploring the variables concept in Sass, how we can define a new variable, set a value for it and use it in the entire file.What are sass variables ? In simple terms we can say that a variable is a container 🫙 that holds a certain value in a specific scope . In Sass we can simply create a new variable by adding a $ sign followed by the var name, and then we can refer to the defined name instead of repeating the same value in different places.Variables are one of the most useful tools that help developers reduce repetition, do complex math, and even configure libraries.Sass variables are all compiled away by Sass Sass treats variables in an imperative way, which means if we use a variable then we change its value later on in the project. The earlier use will stay the same . Which differ from the normal css approach which treat it’s variables in a declarative way where any change to the var value affect all uses In Sass, we have some built in variables which get imported from 3rd parties modules. And such variables cannot be modified in terms of value.In Sass, all underscores and hyphens are treated the same, for example if we have a variable called (font_size) and another one called (font-size). The Sass compiler treat them as one same variable\nExample : 1 Example : 2 - Use a built in variable is Sass Variables Scope in SassFirst of all let’s start by understanding what a scope means in general. We can define the scope in simple terms as the part of the program where the association of the name to the entity is valid. Scope helps prevent name collisions; it allows the same name to refer to ≄ different objects - as long as the names have separate scopes In Sass the variables declared at the top of the stylesheet are global , which means that they can be accessed anywhere in their module after the declaration phase. On the other hand the var’s defined in blocks (inside curly braces in SCSS) are local and they can only be accessed within the block in which they were declared. \nExample : 3Variables shadowing in SASS ❏💭When a local variable is defined with the same name as a already defined one. In this we have 2 identical variables in terms of their name. One local and one global. If we need to set a global variable value from the local scope like a mixin, we can use the **!global** flag , where any variable assigned with such flag gets its value from the global scope.\nExample : 4Variables functions in SassAnd now after a deep dive in the variables concept in SASS. It’s time to make a gentle intro to mixins another amazing concept in SASS to be continued in our next posts.Mixins in SASSIn Sass, a mixins allows us to define styles that can be reused in the entire stylesheet. It allows developers to avoid the usage of non semantic classes like .float–right or .float-left.To define a mixins, we use the @mixin at-rule followed by the name, where this name can be any sass identifier. To include a mixins inside a context, we use the @include keyword followed by the name of the rule. \nExample :Now we reached the post end. In tomorrow’s post we will dive more into mixins and then we will introduce the functions concept, to utilize more the power of this amazing technology. ", "username": "Darine_Tleiss" }, { "code": "@mixin rtl($property, $ltr-value, $rtl-value) {\n #{$property}: $ltr-value;\n \n [dir=rtl] & {\n #{$property}: $rtl-value;\n }\n}\n \n.sidebar {\n @include rtl(float, left, right);\n}\n@mixin replace-text($image, $x: 50%, $y: 50%) {\n text-indent: -99999em;\n overflow: hidden;\n text-align: left;\n\n background: {\n image: $image;\n repeat: no-repeat;\n position: $x $y;\n }\n}\n\n.mail-icon {\n @include replace-text(url(\"/images/mail.svg\"), 0);\n}\n@mixin order($height, $selectors...) {\n @for $i from 0 to length($selectors) {\n #{nth($selectors, $i + 1)} {\n position: absolute;\n height: $height;\n margin-top: $i * $height;\n }\n }\n}\n\n@include order(150px, \"input.name\", \"input.address\", \"input.zip\");\n", "text": "Hello friends , a new day is here and some new information is required to wrap up this day successfully .In yesterday’s post, we talked about some amazing concepts in SASS, starting with a deep dive into the variable concept, how we use variables and all the related information about them. Then we moved on to a new interesting topic, the mixins where we made a gentle intro about it.In today’s post , we will continue our tour to learn more about mixins , and how to use them to write some reusable pieces of code .First of all let’s review quickly what mixins are in SASS ? Arguments in mixins ? A Sass mixin can take arguments which allow the developer to change the behavior of the defined mixin each time it gets called. Any argument can be added after the mixin name, as a list of variables surrounded by parentheses.\nExample :Optional arguments in mixins ? Every argument declared in a mixin must be passed when this mixin is included. But there is a way to make this argument an optional one by giving it a default value which will be used if the targeted argument isn’t passed.\nExample :Arbitrary arguments in Sass mixinsA very powerful option, can extend the functionality of a mixin in Sass, if we can pass any number of arguments to it.Can this happen ? : yeah, in a sass mixin if the last variable end with 3 dots …, then all extra arguments to that mixin are passed to that argument as a list, and this argument is called the argument list.\nExample :And now after exploring the different ways to deal with a mixin in Sass. It’s time to wrap up our post, stay tuned for tomorrow’s post which will talk about a new interesting topic in Sass which is the functions .", "username": "Darine_Tleiss" }, { "code": "@function pow($base, $exponent) {\n $result: 1;\n @for $_ from 1 through $exponent {\n $result: $result * $base;\n }\n @return $result;\n}\n\n.sidebar {\n float: left;\n margin-left: pow(4, 3) * 1px;\n}\n@function invert($color, $amount: 100%) {\n $inverse: change-color($color, $hue: hue($color) + 180);\n @return mix($inverse, $color, $amount);\n}\n\n$primary-color: #036;\n.header {\n background-color: invert($primary-color, 80%);\n}\n@function sum($numbers...) {\n $sum: 0;\n @each $number in $numbers {\n $sum: $sum + $number;\n }\n @return $sum;\n}\n\n.micro {\n width: sum(50px, 30px, 100px);\n}\n@use \"sass:string\";\n\n@function str-insert($string, $insert, $index) {\n // Avoid making new strings if we don't need to.\n @if string.length($string) == 0 {\n @return $insert;\n }\n\n $before: string.slice($string, 0, $index);\n $after: string.slice($string, $index);\n @return $before + $insert + $after;\n}\n\n#div1 {\n width: calc(100% - 100px);\n heigh: calc(100% / 3);\n padding: 5px;\n text-align: center;\n}\n", "text": "Hello friends , the 17/100 day is already here, and a lot of new information are required to crash this amazing day and wrap it successfully .In yesterday’s post, we discovered the mixins concepts, which allow developers to write reusable pieces of code, to be used anywhere in the stylesheets.In today’s post, we will explore together the functions concepts, which add more power to our CSS stylesheet.Let’s start by defining a function in SASS ? A Sass function can simply define complex operations on SassScript values to be re-used throughout the stylesheet.Functions are defined using the @functions at-rule, in the following way @function followed by its name and finally the list of arguments.In Sass functions the @return at-rule indicates the value to use as the result of the function call.\nExample :How does Sass treat hyphens and underscores while defining a new function ? Functions with arguments in Sass ? A function in sass can take a list of arguments, taking into consideration that any defined argument must be passed when calling the function.If we need to pass an optional argument to the function, we must add a default value to the targeted argument, so it gets used if no data are passed.\nExample :Arbitrary Arguments in Sass functions It’s so important to create a reusable function, where we can pass any number of arguments to it every time we call it, and this can be done simply by passing an arbitrary argument to the function.We can pass an arbitrary list of keywords where we use the meta.keywords() function to extract them.Any argument that ends with the three dots … is called an argument list.\nExample :Return in Sass functions The @return at-rule indicates the result of the function to be called.The return keyword is only allowed within the function body, where each function must end with a @returnWhen a @return is encountered, it immediately ends the function and returns its result.\nExample :calc function in Sass The calc() function is a function that takes a single expression as its parameter and takes the expression’s value as the result. The expression can be any simple expression including the basic arithmetic operators like addition, subtraction, multiplication and division.", "username": "Darine_Tleiss" } ]
The journey of #100DaysOfCode (@Darine_Tleiss)
2022-06-08T10:36:18.095Z
The journey of #100DaysOfCode (@Darine_Tleiss)
18,754
null
[ "api" ]
[ { "code": "", "text": "Hi all,Currently, I am trying to build a dashboard where-in I can show the current price my organization is paying for using the MongoDB service.Is there any API, which I can use to get such data??", "username": "bhagyashree_sarode2" }, { "code": "", "text": "Welcome to the MongoDB community @bhagyashree_sarode2 !You can collect data via the Atlas Billing API.There’s a great reference with an example MongoDB Charts dashboard from @clark: Optimize Atlas Spending with the Billing Dashboard in Atlas Charts.The accompanying project repo is GitHub - mongodb/atlas-billing.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Rest API to get billing related data (clusters which is being used)
2022-09-16T10:42:32.189Z
Rest API to get billing related data (clusters which is being used)
2,207
null
[ "node-js" ]
[ { "code": "", "text": "Hello,I have a Node.js Express application, which I believe can’t be deployed on the same infrastructure as my MongoDB Atlas database. What are people doing to keep their application(s) as close to their MongoDB Atlas database as possible (in order to reduce latency)?Many thanks in advance.", "username": "_simon" }, { "code": "", "text": "Hi @_simon - Welcome to the community In general, if you keep your application deployment within the same region as your Atlas deployment, it would be closer (in network/region terms) to the Atlas deployment compared to an application in a different region or cloud provider.which I believe can’t be deployed on the same infrastructure as my MongoDB Atlas database.If you are referring to the same hardware/VM’s in which the Atlas clusters are hosted, you are correct as Atlas is a managed service.What are people doing to keep their application(s) as close to their MongoDB Atlas database as possible (in order to reduce latency)?More details regarding cloud provider availability and regions for Atlas can be found here. However, in saying so, if you’re already hosting your application on a particular cloud provider on a certain region, you should generally be choosing the same region and cloud provider for the Atlas cluster. Of course this will depend on each use case / requirement(s).If your application spans across the globe, I would perhaps go over the Manage Global Clusters documentation as well.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Keeping your application close to your MongoDB Atlas database
2022-09-14T13:05:29.405Z
Keeping your application close to your MongoDB Atlas database
1,394
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const UserSchema = new mongoose.Schema({\n profile: {\n // ...other stuff\n age: {\n type: Number,\n min: [18, \"Min age is 18\"],\n max: [99, \"Max age is 99\"],\n default: 0,\n },\n/users?profile.age[lte]=100users?profile.age=36export const getUsers = asyncHandler(async (req, res, next) => {\n let query;\n let queryStr = JSON.stringify(req.query);\n\n queryStr = queryStr.replace(\n /\\b(gt|gte|lt|lte|in|elemMatch|eq)\\b/g,\n (match) => `$${match}`\n );\n\n query = JSON.parse(queryStr);\n\n const users = await User.find(query);\n\n if (!users) {\n next(new ErrorResponse(`Unable to get users.`, 500));\n } else {\n res.status(200).json({\n success: true,\n count: users.length,\n data: users,\n });\n }\n});\n{ 'profile.age': { '$lte': '36' } }ltltegtgteCastError: Cast to Number failed for value \"{ '$lte': '36' }\" (type Object) at path \"profile.age\" for model \"User\"query-to-mongo", "text": "So I have a mongo schema, which looks something like this:And Im trying to query it through postman with the following: /users?profile.age[lte]=100Other queries work, such as users?profile.age=36. This returns the correct number of results.In my controller I have:logging the query here gives me { 'profile.age': { '$lte': '36' } } which looks right to meSo basically every time I use lt lte gt gte it throws the following:CastError: Cast to Number failed for value \"{ '$lte': '36' }\" (type Object) at path \"profile.age\" for model \"User\"Any help much appreciated.Thanks!Edit: I also tried query-to-mongo in case I was handling the query incorrectly but it returns the same error.", "username": "Mo_Ali" }, { "code": "query\"profile.age\"{ '$lte': '36' }", "text": "Hi @Mo_Ali - Welcome to the community CastError: Cast to Number failed for value “{ ‘$lte’: ‘36’ }” (type Object) at path “profile.age” for model “User”Is it possible that there may be a mismatch in what mongoose is expecting for the query and what was actually passed through for the variable query? The error seems indicates \"profile.age\" expects a Number but an attempt to cast the object { '$lte': '36' } to a Number fails.However, I attempted to reproduce this error but so far have not been successful. Would you be able to provide the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "const users = await User.find({\n \"profile.age\": { $eq: 36 },\n});\nconst users = await User.find({\n \"profile.age\": { $lt: 100 },\n});\n6.5.4", "text": "Hey @Jason_Tran thanks for respondingOkay so to give some more context, this isn’t just when pulling from the query string:This does work:But this doesn’t:Mongoose version 6.5.4. Sorry, how do I check my mongo version?", "username": "Mo_Ali" }, { "code": "gt gte lt ltefind const users = await User.aggregate([\n {\n $match: { \"profile.age\": { $lt: 36 } },\n },\n ]);\nfindselect", "text": "So yeah, i dont know why, but gt gte lt lte don’t work with find.This does give the desired result however:But I’ve read that find is more performant, and also Im not sure if you can chain other operators with aggregate such as selectThanks", "username": "Mo_Ali" }, { "code": "DB>db.users.find()\n[\n { _id: ObjectId(\"6323d1916b10eece4870dbff\"), profile: { age: 20 } },\n { _id: ObjectId(\"6323d1c634a3c771ab1a8b28\"), profile: { age: 15 } },\n { _id: ObjectId(\"6323d1dc34a3c771ab1a9dc9\"), profile: { age: 50 } }\n]\nimport mongoose from 'mongoose';\nconst mongouri = \"CONNECTION_STRING\";\nconst User = mongoose.model('User', mongoose.Schema({\n profile: {\n age: {\n type: Number,\n min: [18, \"Min age is 18\"],\n max: [99, \"Max age is 99\"],\n default: 0\n }\n }\n}));\n\nconst run = async function() {\n await mongoose.connect(mongouri, {useNewUrlParser: true, useUnifiedTopology: true})\n\n const result = await User.find({\n \"profile.age\": {$lte: 36}\n });\n console.log(result);\n\n await mongoose.connection.close()\n}()\nmongod --version\n", "text": "Hi @Mo_Ali - The behaviour described does sound odd.On a test environment (you may need to insert the documents i’ve provided), would you be able to run the following code? I have inserted 3 documents to test it against.Test documents to be inserted:Code to test:Please replace `“CONNECTION_STRING” with the test environment connection string you are going to test against and advise of the output.Note: I’ve reduced the schema to only contain a “profile” field which is an object containing a field called “age”Sorry, how do I check my mongo version?If you’re running on Atlas, the version should be stated under “Version” in the Database Deployments screen. Otherwise you can try run:Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_Tran do you mean set up a new project locally and try that code? I tried in mongoplayground and what you wrote there works as expected", "username": "Mo_Ali" }, { "code": "", "text": "do you mean set up a new project locally and try that code?Yes, on the same environment in which you were receiving the error previously. Since the code works elsewhere (including on my machine), I was thinking that the error may be related to the project / code in which the original issue occurred. If you set up a new project and the code works, it may indicate that there may be some incorrect settings in the previous project.Have you also tried updating to the latest version of mongoose to see if the same error occurs? I believe it is version 6.6.1 as of the time of this message.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
"Cast to Number failed for value \"{ '$lte': '36' }\" (type Object) at path \"profile.age\" for model \"User\""
2022-09-05T19:47:17.308Z
&ldquo;Cast to Number failed for value \&rdquo;{ &lsquo;$lte&rsquo;: &lsquo;36&rsquo; }\&rdquo; (type Object) at path \&rdquo;profile.age\&rdquo; for model \&rdquo;User\&rdquo;&rdquo;
7,050
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "I’m using http://localhost:8000 for local development with the Realm SDK. Some research indicated the title error above may be CORS related so I input http://localhost:8000 in Allowed Request Origins in Realm App Services | App Settings, but it hasn’t made a difference and I continue to get the same problem.What else can I try to get past this error?\nShould I try the ‘hosting’ option provided and continue my development via Realm App Services (this would involve uploading every change I make, adding overhead)?\nthanks …p.s. I can, of course, add the relevant .html .js here code if that would be helpful(?) …", "username": "freeross" }, { "code": "<div id=\"loginDiv\" style=\"color:rgb(39, 93, 4); text-align: center\">\n <label for=\"email\"><b>Email</b></label>\n <input type=\"text\" placeholder=\"Enter Email\" id=\"email\" required><br>\n <br>\n <label for=\"psw\"><b>Password</b></label>\n <input type=\"password\" placeholder=\"Enter Password\" id=\"psw\" required><br>\n <br>\n <input type=\"button\" value=\"Login\" onclick=\"login()\"/><br>\n <br>\n <input type=\"button\" value=\"Register\" onclick=\"register()\"/><br>\n <br>\n <label>\n <input type=\"checkbox\" checked=\"checked\" name=\"remember\"> Remember me\n </label><br>\n </div>\n// Add your App ID\nconst realmapp = new Realm.App({ id: \"<app id>\" });\n\nasync function loginEmailPassword(email, password) {\n const credentials = Realm.Credentials.emailPassword(email, password);\n try {\n // Authenticate the user\n const user = await realmapp.logIn(credentials);\n // `App.currentUser` updates to match the logged in user - only logs in console if false\n console.assert(user.id === realmapp.currentUser.id);\n // HIDE login\n document.getElementById(\"loginDiv\").style.display = \"none\";\n\n // HIDE previous error messages\n document.getElementById(\"authError\").style.display = \"none\";\n \n // SHOW Elm app\n document.getElementById(\"elmappDiv\").style.display = \"block\";\n return user;\n } catch (err) {\n console.error(\"Failed to log in\", err);\n // Still show login\n document.getElementById(\"loginDiv\").style.display = \"block\";\n\n // Show error\n document.getElementById(\"authError\").innerHTML = '<br>' + \"Authentication error. \" + '<br>' + \"Please register or try again.\";\n document.getElementById(\"authError\").style.color = \"rgb(225, 22, 11)\";\n\n document.getElementById(\"authError\").style.display = \"block\";\n \n // HIDE Elm app\n document.getElementById(\"elmappDiv\").style.display = \"none\";\n }\n }\n\n function login () {\n\n // Get input values\n const email = document.getElementById(\"email\").value;\n const password = document.getElementById(\"psw\").value;\n\n //invoke loginEmailPassword from authentication.js to login to mongodb realm\n (async function() {\n const user = await loginEmailPassword(email, password);\n console.log(\"Successfully logged in!\", user.id);\n })();\n }\n", "text": "SOLVED. I was getting network errors that may/may not have been directly CORS related as a result of trying different solutions to getting a simple login page working. The async code suggested by mongodb was close but didn’t work with a submit button the way I expected/required without some important tweaks. This is what I needed on the index.html page:and this is what I needed in a separate authenticate.js file:I did something similar for registration (password re-set still remains to be done).", "username": "freeross" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Failed to log in TypeError: NetworkError when attempting to fetch resource (CORS?)
2022-09-14T06:47:47.098Z
Failed to log in TypeError: NetworkError when attempting to fetch resource (CORS?)
6,070
null
[]
[ { "code": "org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1@1bfdd8d6]; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Exception sending message; nested exception is com.mongodb.MongoSocketWriteException", "text": "We have set Mongo 6(Community edition) version in our Production environment.we notice frequently primary instance is getting down due to which we faceDescription: MongDB Write Exception in PROD: org.springframework.messaging.MessageHandlingException: error occurred in message handler [org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1@1bfdd8d6]; nested exception is org.springframework.data.mongodb.UncategorizedMongoDbException: Exception sending message; nested exception is com.mongodb.MongoSocketWriteExceptionThis get vanished once we restart the Mongo instances seprately.Can anyone help me in thisThis is in Production so need immediate response", "username": "Arunkumar_s" }, { "code": "", "text": "Hi @Arunkumar_sThere could be many reasons why a primary goes down: network partition, hardware failures, deliberate stepdown command, among other things. Is there anything in the primary’s logs that could shed a light into why it steps down? What was in the server logs immediately before you observe an error in the application?Most modern official MongoDB drivers support retryable writes so a topology change should not affect writes too much as it will automatically retry the write once a new primary is elected. Whether the Spring framework supports this feature is perhaps a question for them This is in Production so need immediate responseUnfortunately the Community Forum does not provide an SLA and might not be the right venue for mission-critical questions needing immediate troubleshooting. For these purposes, you might want to engage MongoDB support that can help you in production situations.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hi Kevin,\nMany thanks for the reply . Please find the below logs available and in particular path/var/log/mongodb/mongodb I can see logs for only backdated dates i.e (2-aug-2022)\nserver_logs.txt (110.7 KB)\nserver_logs_mongodlog.txt (5.0 KB)Thanks\nArunkumar.S", "username": "Arunkumar_s" } ]
Mongo 6 -community version Primary instance going down frequently
2022-09-19T07:23:38.956Z
Mongo 6 -community version Primary instance going down frequently
1,472
null
[]
[ { "code": "", "text": "Hi there, I would like to know if MongoDB open-source version can handle large-scale apps where there are hundreds of thousands of reads and writes. Currently, we are using PostgreSQL as our database but we are looking for a document-oriented database so, we are considering MongoDB.\nLately, we faced some issues that couldn’t be solved in our backend because of a huge increase in users so we are rewriting our backend with different technologies and MongoDB is among them, however, we only considering the open-source version of MongoDB because we have a lot of users up to 150K active user regularly read and write mostly read and we can’t afford the enterprise version.\nIn two years we are planning to launch a paid service and then we will be able to afford the enterprise version but for now, we want to stick with the open source version, so my question is:\nCan MongoDB open-source version handle 150K active users? Our main concern is performance and latency.", "username": "Amir_Osman" }, { "code": "", "text": "Hi @Amir_OsmanIt’s been a while since you posted this. Have you reached a conclusion yet?In terms of capabilities, there’s not much difference between Community edition & Enterprise edition; they both use the same WiredTiger storage engine, and both are capable to be used as a main database in large applications (when properly done). The main difference is that the Enterprise edition contains more “Enterprise”-needs features such as auditing, Kerberos, encrypted storage engine, and others. See MongoDB Enterprise Advanced for more information.In terms of “can it handle a lot of users”, there are many success stories that you might want to peruse.Alternatively you can also use Atlas to have the ops side of things managed for you. Notably, Atlas uses MongoDB Enterprise edition.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB open source for large scale app
2022-08-29T19:20:59.538Z
MongoDB open source for large scale app
2,481
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi everyone,\nNew to Mongo and had a question about structuring data. Our organization is trying to decide how to partition data into collections. Each piece of data we insert has a property called the “symbol”, and our original approach was to put each “symbol” into its own collection. However after reading a previous thread on the topic (Maximum number of Collections) and an article (https://www.mongodb.com/article/schema-design-anti-pattern-massive-number-collections/) the sense I got was that this isn’t a great idea, even though Mongo no longer has a hard limit on the number of collections, because there’s no limit to the number of “symbols” we may have, and thus the number of collections may grow unbounded which seems likely to cause performance problems eventually (if my understanding of those links is accurate).Given that, I have two questions:Thanks!\nSam", "username": "Sam_Conrad" }, { "code": "", "text": "Hi, In the end, what did you decided? I have the same doubt. Thanks", "username": "OSCAR_GOMEZ1" } ]
Downsides of using a single collection vs multiple
2021-05-28T16:38:36.853Z
Downsides of using a single collection vs multiple
3,741
null
[ "python", "atlas-cluster" ]
[ { "code": "Traceback (most recent call last):\n File \"/Users/aarik/python/test.py\", line 4, in <module>\n cluster = MongoClient(\"mongodb+srv://aarik:<i already put in a password here>@cluster0.uyuc29f.mongodb.net/?retryWrites=true&w=majority\")\n File \"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 677, in __init__\n res = uri_parser.parse_uri(\n File \"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pymongo/uri_parser.py\", line 455, in parse_uri\n raise ConfigurationError(\npymongo.errors.ConfigurationError: The \"dnspython\" module must be installed to use mongodb+srv:// URIs. To fix this error install pymongo with the srv extra:\n /usr/local/bin/python3.10 -m pip install \"pymongo[srv]\"\n➜ python -m pip install \"pymongo[srv]\"\nzsh: command not found: -m\n➜ python pip install \"pymongo[srv]\"\nRequirement already satisfied: pymongo[srv] in /Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages (4.1.1)\nCollecting dnspython<3.0.0,>=1.16.0\n Downloading dnspython-2.2.1-py3-none-any.whl (269 kB)\n |████████████████████████████████| 269 kB 2.1 MB/s \nInstalling collected packages: dnspython\nSuccessfully installed dnspython-2.2.1\n➜ python /usr/local/bin/python3.10 /Users/aarik/python/test.py\nTraceback (most recent call last):\n File \"/Users/aarik/python/test.py\", line 4, in <module>\n cluster = MongoClient(\"mongodb+srv://aarik:<already put in a password>@cluster0.uyuc29f.mongodb.net/?retryWrites=true&w=majority\")\n File \"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 677, in __init__\n res = uri_parser.parse_uri(\n File \"/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/site-packages/pymongo/uri_parser.py\", line 455, in parse_uri\n raise ConfigurationError(\npymongo.errors.ConfigurationError: The \"dnspython\" module must be installed to use mongodb+srv:// URIs. To fix this error install pymongo with the srv extra:\n /usr/local/bin/python3.10 -m pip install \"pymongo[srv]\"\n➜ python /usr/local/bin/python3.6 /Users/aarik/python/test.py\n/usr/local/bin/python3.6 /Users/aarik/python/test.py\n/usr/local/bin/python3.6 /Users/aarik/python/test.py\nTraceback (most recent call last):\n File \"/Users/aarik/python/test.py\", line 10, in <module>\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/collection.py\", line 613, in insert_one\n comment=comment,\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/collection.py\", line 547, in _insert_one\n self.__database.client._retryable_write(acknowledged, _insert_command, session)\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1398, in _retryable_write\n with self._tmp_session(session) as s:\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/contextlib.py\", line 81, in __enter__\n return next(self.gen)\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1676, in _tmp_session\n s = self._ensure_session(session)\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1663, in _ensure_session\n return self.__start_session(True, causal_consistency=False)\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/mongo_client.py\", line 1608, in __start_session\n self._topology._check_implicit_session_support()\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/topology.py\", line 519, in _check_implicit_session_support\n self._check_session_support()\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/topology.py\", line 536, in _check_session_support\n readable_server_selector, self._settings.server_selection_timeout, None\n File \"/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pymongo/topology.py\", line 229, in _select_servers_loop\n % (self._error_message(selector), timeout, self.description)\npymongo.errors.ServerSelectionTimeoutError: SSL handshake failed: ac-oucby4l-shard-00-02.uyuc29f.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852),SSL handshake failed: ac-oucby4l-shard-00-01.uyuc29f.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852),SSL handshake failed: ac-oucby4l-shard-00-00.uyuc29f.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852), Timeout: 30s, Topology Description: <TopologyDescription id: 631e0275062c9e52d5465ad0, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('ac-oucby4l-shard-00-00.uyuc29f.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('SSL handshake failed: ac-oucby4l-shard-00-00.uyuc29f.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)',)>, <ServerDescription ('ac-oucby4l-shard-00-01.uyuc29f.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('SSL handshake failed: ac-oucby4l-shard-00-01.uyuc29f.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)',)>, <ServerDescription ('ac-oucby4l-shard-00-02.uyuc29f.mongodb.net', 27017) server_type: Unknown, rtt: None, error=AutoReconnect('SSL handshake failed: ac-oucby4l-shard-00-02.uyuc29f.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)',)>]>\n", "text": "how can i fix this", "username": "Aarik_Ghosh" }, { "code": "test.py", "text": "Hi @Aarik_Ghosh welcome to the community!Are you still seeing this issue? If yes, could you please follow the troubleshooting steps described here Keep getting ServerSelectionTimeoutError and see if it fixes it?If not, could you specify your OS version, and the code in the file test.pyBest regards\nKevin", "username": "kevinadi" }, { "code": "import pymongo\nfrom pymongo import MongoClient\n\ncluster = MongoClient(\"mongodb+srv://aarik:<i entered the correct password here>@cluster0.uyuc29f.mongodb.net/?retryWrites=true&w=majority\")\ndb = cluster[\"testDatabase\"]\ncollection = db[\"testName\"]\n\npost = {\"_id\": 0, \"name\": \"aarik\", \"score\": 5}\n\ncollection.insert_one(post)\n", "text": "I am using macOS Monterey Version 12.4. The code in test.py is:", "username": "Aarik_Ghosh" }, { "code": "test.py", "text": "The test.py looks straightforward enough, thanks for confirming. The log messages you posted earlier seem to point to a connection issue, though. Could you confirm if the troubleshooting steps in the previously linked topic helped with the issue?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "I read through it, but I’m not sure about what to change.", "username": "Aarik_Ghosh" }, { "code": "", "text": "I think before that, you should try the suggestions in Troubleshoot Connection Issues.Typical connection issues are:Please note that these are just off the top of my head and is not an exhaustive list of possible causes.Sometimes if it’s a company-issued laptop/PC, there are restrictions placed by the company’s IT department that just cannot be worked around without consulting with them (this is typically OS-level or network-policy level restrictions).I think when faced with connection issues, the first step is to determine which of the four points above is the culprit.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Just a question, does Atlas work on MacOS?", "username": "Aarik_Ghosh" }, { "code": "", "text": "MongoDB Atlas is a managed service, so it works with any OS; as long as you have a way to communicate with the servers on port 27017.Best regards\nKevin", "username": "kevinadi" }, { "code": "Unable to connect: connection <monitor> to 52.200.166.55:27017 closed\n", "text": "I used the option that let’s you connect to a database using Visual Studio Code extension, but when I do everything correctly and press Connect, I get", "username": "Aarik_Ghosh" }, { "code": "Unable to connect: connection <monitor> to 52.200.166.55:27017 closed", "text": "Unable to connect: connection <monitor> to 52.200.166.55:27017 closedNow I was able to conncet using the Visual Studio Code option, but the original code still doesn’t work", "username": "Aarik_Ghosh" }, { "code": "", "text": "Hi @Aarik_GhoshI’m not sure I understand. Can you connect to Atlas now, but the code doesn’t work? Is there any error you’re seeing?Best regards\nKevin", "username": "kevinadi" } ]
I cant add a post to my db
2022-09-11T15:51:04.067Z
I cant add a post to my db
3,226
null
[ "swift" ]
[ { "code": "realm.write { ... }realm.write {}", "text": "So I’m using realm.write { ... } to perform changes in the database but I want to know when they are persisted in a file. I assume file write is async and may happen some time after realm.write {} is performed. Am I right? If so is there a callback for that operation?", "username": "Anton_P" }, { "code": "realm.writelet p = PersonClass()\np.name = \"Jay\"\nrealm.writeAsync({\n realm.add(p)\n}, onComplete: { error in\n if let err = error {\n print(err.localizedDescription)\n return\n }\n\n print(\"data was saved\")\n})\n", "text": "realm.write as synchronous. It’s essentially a wrapper around realm.beginWrite and realm.commitWrite. Very important that to avoid blocking the UI, you do these on a background thread. But…Realm, 10.25 and above support sawait async formatted calls but I think you’re most interested in asynchronous writes. See writeAsyncUnfortunately, the documentation is a bit thin when it comes to async calls (Realm team???), so a lot of discovery is by trial and error.example use", "username": "Jay" }, { "code": "", "text": "Well, my assumption was that file write might be async even for a sync write and continue after write is committed. Though, I checked it in the playground project and it looks like it performed sync so my assumption was wrong", "username": "Anton_P" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is Realm write to a file is synchronous?
2022-09-19T01:09:33.991Z
Is Realm write to a file is synchronous?
2,021
null
[]
[ { "code": "Reading package lists... Done\nBuilding dependency tree... Done\nReading state information... Done\nE: Unable to locate package mongodb-org```", "text": "Hi all.Does anybody know when the Ubuntu 22 apt will be able to install Mongo Jammy 4.4 and 6.0?I was already able to Install Mongo Jammy 6.0 manually using a deb file but I want to install it through apt in my ansible.", "username": "Carsten" }, { "code": "sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 656408e390cfb1f5\necho \"deb [arch=amd64] http://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/4.4 multiverse\" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.4.list\nsudo apt update\nsudo apt install mongodb-org\n> Reading package lists... Done\n> Building dependency tree... Done\n> Reading state information... Done\n> E: Unable to locate package mongodb-org\n", "text": "There are already files in MongoDB Repositories.", "username": "Carsten" } ]
Installing Mongo 4.4 and 6.0 through apt on Ubuntu 22
2022-09-19T12:12:59.965Z
Installing Mongo 4.4 and 6.0 through apt on Ubuntu 22
3,762
null
[ "flexible-sync" ]
[ { "code": "{\n \"rules\": {},\n \"defaultRoles\": [\n {\n \"name\": \"admin\",\n \"applyWhen\": {\n \"%%user.custom_data.isAdmin\": true\n },\n \"read\": true,\n \"write\": true\n },\n {\n \"name\": \"user\",\n \"applyWhen\": {},\n \"read\": {\n \"$or\": [\n {\n \"userId\": \"%%user.id\"\n },\n {\n \"sync\": \"PUBLIC\"\n }\n ]\n },\n \"write\": {\n \"userId\": \"%%user.id\"\n }\n }\n ]\n}\n", "text": "I am struggling to set up permission for realm flexible sync. I have four Realms. Two Realms sync only on the “userId”. The other two Realms sync if the “userIid” matches or the “sync” field is set to PUBLIC.This permission set causes errors since most of the collections don’t have the “sync” field. What is the best way to set up permission for this and are there any good guides out available? I haven’t found many useful docs or guides.", "username": "Matthew_Brimmer" }, { "code": "", "text": "Hi, when you say you have 4 realms, do you mean you have 4 collections? You are using “defaultRoles”, but there are also “collection-roles” (see here: https://www.mongodb.com/docs/atlas/app-services/sync/data-access-patterns/permissions/#type-specific-and-default-roles)Therefore, you can set different roles for different collections. Please let me know if I misunderstood you though.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "I have four Realms and each one has different collections and they each have different sync queries in the app. I think this is what I need. I can have userId rules in the default and then each collection that needs additional sync permission will have a collection based rule?", "username": "Matthew_Brimmer" } ]
Realm Flexible Sync Permission
2022-09-19T17:24:48.957Z
Realm Flexible Sync Permission
2,058
null
[]
[ { "code": "", "text": "Hi, guys!\nI installed Garuda Linux in my quite old machine, and I am quite happy with it but mongo is not working.\nI keep getting the ECONNREFUSED status when trying to connect.\nI tried to open mongod and got this:\nfish: Job 1, ‘mongod’ terminated by signal SIGILL (Illegal instruction)\nThat might mean that my machine is too old so I tried to uninstall mongo, but it keeps telling me that neither: mongo, mongod nor mongodb are installed.\nCan anybody guide me?\nBy the way: I am unabled to find out which distribution is Garuda based on, so I can’t go deeper", "username": "Rafa_Gomez" }, { "code": "mongod", "text": "Hello @Rafa_Gomez and welcome to the MongoDB Community forums! By the way: I am unabled to find out which distribution is Garuda based on, so I can’t go deeperA quick search shows that Garuda is based on Arch Linux.I keep getting the ECONNREFUSED status when trying to connect.\nI tried to open mongod and got this:\nfish: Job 1, ‘mongod’ terminated by signal SIGILL (Illegal instruction)Can you share the mongod log files? Having just a single line does not help.I tried to uninstall mongo, but it keeps telling me that neither: mongo, mongod nor mongodb are installed.Can you show a screenshot of how you’re trying to uninstall the packages? Again seeing everything that you see will make things easier for us to help you out.", "username": "Doug_Duncan" } ]
ECONNREFUSSED on Garuda
2022-09-19T15:24:43.323Z
ECONNREFUSSED on Garuda
1,078
null
[ "node-js" ]
[ { "code": "asyncawaitrequire('util').callbackify(() => collection.findOne())(callback)require('mongodb')require('mongodb-legacy')", "text": "The MongoDB Node.js team is pleased to announce version 4.10.0 of the mongodb package!Looking to improve our API’s consistency and handling of errors we are planning to remove callback support in the next major release of the driver. Today marks the notice of their removal. Migrating to a promise only API allows us to offer uniform error handling and better native support for automatic promise construction. In this release you will notice deprecation warnings in doc comments for all our callback overloads and if you are working in VSCode you should notice ~strikethroughs~ on these APIs. We encourage you to migrate to promises where possible:While the 4.10.0 version only deprecates our support of callbacks, there will be a major version that removes the support altogether. In order to keep using callbacks after v5 is released, we recommend migrating your driver version to mongodb-legacy (github link). This package wraps every single async API our driver offers and is designed to provide the exact behavior of the MongoDB 4.10.0 release (both callbacks and promises are supported). Any new features added to MongoDB will be automatically inherited but will only support promises. This package is fully tested against our current suite and adoption should be confined to changing an import require('mongodb') → require('mongodb-legacy'). If this package is useful to you and your use case we encourage you to adopt it before v5 to ensure it continues to work as expected.Read more about it on the package’s readme here:We invite you to try the mongodb library immediately, and report any issues to the NODE project.", "username": "neal" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Node.js Driver 4.10.0 Released
2022-09-19T15:21:32.154Z
MongoDB Node.js Driver 4.10.0 Released
2,390
null
[ "node-js", "mongoose-odm" ]
[ { "code": "Promise must be a function, got undefined\n\nMongoError@http://localhost:3000/node_modules/.vite/mongodb.js?v=5d182f1b:101:11\nMongoDriverError@http://localhost:3000/node_modules/.vite/mongodb.js?v=5d182f1b:145:9\nimport { MongoClient } from 'mongodb';\nimport dotenv from 'dotenv';\ndotenv.config();\n\nexport const MONGODB_URI = process.env['MONGODB_URI'];\nexport const MONGODB_DB = \"ttrack\";\n\nif (!MONGODB_URI) {\n throw new Error('Please define the mongoURI property inside config/default.json');\n}\n\nif (!MONGODB_DB) {\n throw new Error('Please define the mongoDB property inside config/default.json');\n}\n\n/**\n * Global is used here to maintain a cached connection across hot reloads\n * in development. This prevents connections growing exponentially\n * during API Route usage.\n */\nlet cached = global.mongo;\n\nif (!cached) {\n cached = global.mongo = { conn: null, promise: null };\n}\n\nexport const connectToDatabase = async () => {\n if (cached.conn) {\n return cached.conn;\n }\n\n if (!cached.promise) {\n const opts = {\n useNewUrlParser: true,\n useUnifiedTopology: true\n };\n\n cached.promise = MongoClient.connect(MONGODB_URI, opts).then((client) => {\n return {\n client,\n db: client.db(MONGODB_DB)\n };\n });\n }\n cached.conn = await cached.promise;\n return cached.conn;\n}\nrequire_promise_provider", "text": "I currently have a SvelteKit/NodeJS project that needs to connect to MongoDB. I’ve made a file to handle caching an instance of a MongoDB client to use, but when I run the code, it returns the error:Here is the file that I am using to connect to my DB:I’m unsure how to proceed with this, as I haven’t been able to find any other documentation on this error for MongoDB (I have found some for mongoose, but they do not apply to my project). Could it possibly be a localhost error? I tracked it down to a function called require_promise_provider, but I have no clue where to go from there. Any help would be awesome!", "username": "Ali_Mosallaei" }, { "code": "", "text": "Hi @Ali_Mosallaei,\nYour code looks fine to me.\nIn my opinion, the problem is with the code not attached to your message.\nIf you create git repository and upload there the part of your code that is enough to reproduce your problem it may be easier to help you then.\nThanks,\nRafael,", "username": "Rafael_Green" }, { "code": "", "text": "Did you ever solve this issue? I’m running into it (along with quite a few other headaches with mongodb + sveltekit)", "username": "Zack_T" }, { "code": "MongoInvalidArgumentError: Promise must be a function, got undefined\n MongoError error.ts:129\n MongoDriverError error.ts:200\n MongoAPIError error.ts:220\n MongoInvalidArgumentError error.ts:601\n validate promise_provider.ts:22\n set promise_provider.ts:28\n mongodb mongodb.js:6310\n __require2 chunk-OROXOI2D.js:16\n mongodb mongodb.js:6786\n __require2 chunk-OROXOI2D.js:16\n mongodb mongodb.js:7946\n __require2 chunk-OROXOI2D.js:16\n mongodb mongodb.js:8304\n __require2 chunk-OROXOI2D.js:16\n mongodb mongodb.js:29300\n __require2 chunk-OROXOI2D.js:16\n <anonymous> mongodb:1\nclient-manifest.js:26:44\n\n", "text": "Any news here?\nimport {MongoClient} from “mongodb”;\nleads to below error. im about to cry", "username": "Philipp_Steinke" } ]
Promise must be a function, got undefined
2022-02-14T01:29:09.636Z
Promise must be a function, got undefined
4,107
null
[ "installation" ]
[ { "code": "Job for mongod.service failed because the control process exited with error code.\nSee \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n\nsystemctl status mongod.service\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\n Active: failed (Result: exit-code) since Tue 2021-10-05 00:09:52 AEST; 1min 5s ago\n Docs: https://docs.mongodb.org/manual\n Process: 3665 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=1/FAILURE)\n Process: 3662 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 3660 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\n Process: 3657 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\n\nOct 05 00:09:52 ava-tran systemd[1]: Starting MongoDB Database Server...\nOct 05 00:09:52 ava-tran mongod[3665]: about to fork child process, waiting until server is ready for connections.\nOct 05 00:09:52 ava-tran mongod[3665]: forked process: 3668\nOct 05 00:09:52 ava-tran mongod[3665]: ERROR: child process failed, exited with 1\nOct 05 00:09:52 ava-tran mongod[3665]: To see additional information in this output, start without the \"--fork\" option.\nOct 05 00:09:52 ava-tran systemd[1]: mongod.service: Control process exited, code=exited status=1\nOct 05 00:09:52 ava-tran systemd[1]: mongod.service: Failed with result 'exit-code'.\nOct 05 00:09:52 ava-tran systemd[1]: Failed to start MongoDB Database Server.\n", "text": "Hi all,i have installed mongodb 5.03 on RHEL 8 but i cant start with systemctl start mongodPlease advice as I am new to MongoDBDavid", "username": "David_Tran" }, { "code": "", "text": "Does your mongod.log shows any additional details on error number 1\nYou may check your config file for the log path", "username": "Ramachandra_Tummala" } ]
Mongod with REHL 8
2021-10-04T14:12:26.516Z
Mongod with REHL 8
3,278
null
[ "android", "flexible-sync" ]
[ { "code": "{Realms.Sync.Exceptions.AppException: code 998: An unexpected error occurred while sending the request: Authentication failed, see inner exception.: Ssl error:1000007d:SSL routines:OPENSSL_internal:CERTIFICATE_VERIFY_FAILED\n at /Users/builder/jenkins/workspace/archive-mono/2020-02/android/release/external/boringssl/ssl/handshake_client.c:1132\n at Realms.Sync.AppHandle.LogInAsync (Realms.Sync.Native.Credentials credentials) [0x00055] in D:\\a\\realm-dotnet\\realm-dotnet\\Realm\\Realm\\Handles\\AppHandle.cs:294 \n at Realms.Sync.App.LogInAsync (Realms.Sync.Credentials credentials) [0x0003d] in D:\\a\\realm-dotnet\\realm-dotnet\\Realm\\Realm\\Sync\\App.cs:214\nif(this._dbService == null)\n {\n _dbService = Ioc.Default.GetService<IDbService>();\n }\n\n if (_dbService != null)\n {\n var user = await _dbService.LoginAsync(email, password);\n if (user != null)\n {\n\n return ResultType.SUCCESS;\n\n }\n else\n {\n return ResultType.UNKNOWN;\n\n }\n }\n else\n {\n return ResultType.UNKNOWN;\n }\n", "text": "I am trying to login using realm-dotnet version: 10.15.1.\nWhile using MAUI, it logged in fine.\nBut when I use the same code for login in xamarin i get the following error:Here is the code I used to login:", "username": "Ahmad_Pasha" }, { "code": "", "text": "What http client are you using? Because our cloud certificates are issued by Let’s Encrypt, those will work well with the native android client, but not with the BoringSSL implementation that Xamarin ships with.The reason is a bit convoluted, but it boils down to a bug in the Android http client that doesn’t validate expiration of root certificates, which allows Let’s Encrypt to cross-sign certificates with their old root CA and support older Android versions.My suggestion would be to try and enable the native AndroidClientHandler from your project settings and see if that solves the issue.", "username": "nirinchev" }, { "code": "", "text": "Thanks,\nI will be back after trying this.", "username": "Ahmad_Pasha" }, { "code": "", "text": "@nirinchev Its working now,\nThank you very much.", "username": "Ahmad_Pasha" } ]
Certificate_verify_failed
2022-09-16T12:55:07.591Z
Certificate_verify_failed
2,416
null
[ "aggregation", "mongodb-shell" ]
[ { "code": "", "text": "I noticed when you download the newest version of mongodb (6.0) mongo.exe is not anymore in the bin folder.\nThe solution: Download mongo shell as zip, uncompress and copy-paste the files that appear in the bin folder of the mongo shell into the bin folder of mongodb. Now, instead of typing mongo everytime you have to type this command, you’ll have to replace it by mongosh.\nHope it helps!Ana", "username": "Ana_Escobar_Llamazares" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
New version MongoDB 6.0
2022-09-11T19:54:23.735Z
New version MongoDB 6.0
1,419
null
[ "replication" ]
[ { "code": "", "text": "Hi\nI want to configure replica set with vip\nPlease anyone tell me how to doThanks", "username": "kIL_Yoon" }, { "code": "", "text": "Hi @kIL_Yoon ,Can you provide more details on the “vip” you are referring to and the replica set configuration you are trying to create?Are you asking about Virtual IP addresses?Regards,\nStennie", "username": "Stennie_X" } ]
How to implement mongodb replica set with vip
2022-09-18T14:23:28.178Z
How to implement mongodb replica set with vip
1,701
null
[ "aggregation", "queries", "node-js", "data-modeling" ]
[ { "code": "", "text": "I want to build a user notification system for my website. Is the right way to go about this to have one “notifications” collection where all notifications of all users are in?I would prefer to never delete an old notification. This means that over time there will be millions, maybe even billions of documents in this notifications collection. Will this cause any problems when it comes to retrieving these notifications? Will I be able to retrieve the latest notifications for a specific user quickly enough (within seconds)?", "username": "Florian_Walther" }, { "code": "", "text": "Hi @Florian_Walther,Will this cause any problems when it comes to retrieving these notifications? Will I be able to retrieve the latest notifications for a specific user quickly enough (within seconds)?Regarding the “specific user”, you could create an index for the associated field containing the user’s unique identifier (and any additional specifications e.g. date, notification messages, etc) to assist with performance. However, this is based off my interpretation of what fields the document(s) in the notification collection have.In terms of the latest notification, I interpret this as a newly-inserted notification document. I believe when the document in question is still within your working set, it should be fast to retrieve, the total size of the collection notwithstanding. However if your goal is to never delete any document, you might be interested in taking a look at Atlas Online Archive which will automatically archive old documents, while still having the capability of querying them.In saying so, for optimal performance you will essentially want your working set and commonly used indexes to fit in memory to prevent minimal reads from disk.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you for the tips! It seems like I can implement all these techniques (archiving, optimized indexes) in hindsight, right? So I might just go with a single collection now and implement these once the queries start to become slow.It seems that by archiving old notification documents, the total amount of old notifications shouldn’t play a role anymore (for performance). It makes a lot of sense.", "username": "Florian_Walther" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Building a notification system - How will it handle millions of documents?
2022-09-18T09:09:18.547Z
Building a notification system - How will it handle millions of documents?
3,523
https://www.mongodb.com/…680239e84693.png
[ "punjab-mug" ]
[ { "code": "Co-Founder, LuxstekSoftware Engineer, MongoDBSoftware Engineer, MongoDB", "text": "\nMongoDB User Group Punjab960×540 109 KB\nEnd your week by joining Punjab MongoDB User Group and GDSC LPU for a MongoDB User Group workshop filled with interactive sessions, games, and fun at the School of CSE, Lovely Professional University, Jalandhar on Sept 16th.We have a fun-filled day planned with sessions and hands-on that include learning about MongoDB basics, a mini hackathon, speaker sessions as well as an introduction to the world of MongoDB Community!Not to forget there will also be, Trivia Swags , and Lunch !The sessions being planned are focused on beginner database operations. If you are a beginner or have some experience with MongoDB already, there is something for all of you! Based on your hackathon performance and participation in Hands On, you may be nominated to be our next Punjab MUG Leader! *Gates open at 9:30 AM ISTEvent Type: In-Person\nLocation: School of CSE, Lovely Professional University, JalandharTo RSVP - Please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.\nSamson720×960 58.1 KB\nCo-Founder, Luxstek–\n\nRaman420×540 27 KB\n–\n\nimage738×738 75 KB\nSoftware Engineer, MongoDB–\nSoftware Engineer, MongoDBJoin the Punjab Group to stay updated with upcoming meetups and discussions.", "username": "Satyam" }, { "code": "", "text": "Hi everybody, We are so excited to meet you all this Friday, and we need your help in making the workshop fun and interactive.It would be great if you could follow the steps mentioned below before joining the session and make yourself prepared to see the Data In Action.\nPS: We will be covering this part at the beginning of the day as well so fear not in case you face any issues :\n1508×628 111 KB\nIf you are unable to deploy a cluster on MongoDB Atlas, please use the connection string mentioned below to connect your Compass and play with the sample dataset:mongodb+srv://global:[email protected]/Also, here’s a list of all the useful things that we will also be talking about:That’s it, you are all set to rock. See you all at the event! Thanks & Regards,\nSatyam Gupta", "username": "Satyam" }, { "code": "", "text": "mongodb+srv://global:[email protected]/Spain or portugal82", "username": "Varun_Bansal" }, { "code": "", "text": "633 in spain\ndfklughdflgrsffsdxclkghds", "username": "kartikay_patni" }, { "code": "", "text": "633ldsjbkvbijasdvbijasblvkjasbvlkjasbvljasblvkjsb", "username": "Doctor_Arrow" }, { "code": "", "text": "mongodb+srv://global:[email protected]/spain or portugal count\n1188", "username": "Hanna_Gupta" }, { "code": "", "text": "mongodb+srv://global:[email protected]/The answer is: 1188 records", "username": "Arihant_Robotics" }, { "code": "", "text": "ques for spain and portugal the answer is 633", "username": "Sukhman_Singh" }, { "code": "", "text": "633\nkdja; flaskjflslasjfaslfkjafdasfd", "username": "Sahil_Chauhan" }, { "code": "", "text": "82 is the right answer", "username": "Nitish_Raj" }, { "code": "", "text": "the answer is of question i s 555", "username": "sandeep_mallina" }, { "code": "", "text": "661 Spain\n551 Portugal", "username": "Sayan_Roy2" }, { "code": "", "text": "1288\n…bdshbvjjhdsbvhjdbshjvds", "username": "SAURABH_TIWARI1" }, { "code": "", "text": "555 are in portugal sadjhas", "username": "kartikay_patni" }, { "code": "", "text": "1188 is the answer for the quesiton\n\\", "username": "Bysani_abhinay_kumar" }, { "code": "", "text": "82 is the answer for the second question", "username": "Vasudeva_Kilaru" }, { "code": "", "text": "mongodb+srv://global:[email protected]/82 spain and portugal …", "username": "RAJA_SAMPATH_KUMAR" }, { "code": "", "text": "ques for spain and portugal the answer is 1163", "username": "akanksha_verma" }, { "code": "", "text": "1188 in Portugal or Spain", "username": "Ba3a" }, { "code": "", "text": "this is for housespain or porygal 1188", "username": "Varun_Bansal" } ]
Punjab MUG: MongoDB Workshop, Startup Journey, Hackathon and Fun!
2022-09-02T09:58:56.827Z
Punjab MUG: MongoDB Workshop, Startup Journey, Hackathon and Fun!
14,234
null
[ "aggregation" ]
[ { "code": "{\n CREATEDBY:\"Lorraine Clarkin\"\n CREATIONDATE:\"03-MAR-21\"\n BARCODEID:\"A1429857\"\n RECORDDESCRIPTION:\"Putnam N1A review 10 31 2020 funds1 of 4\"\n CHECKINDATE:\"01-JAN-01\"\n RECORDTYPE:\"Assurance Workpapers\"\n RECORDSTATE:\"Recorded\"\n}\n{\n \"BARCODEID\":\"A1429857\",\n \"RECORDCONTENTS\":\"Signed rep letter\"\n},\n{\n \"BARCODEID\":\"A1429857\",\n \"RECORDCONTENTS\":\"Reference letter\"\n}\n{\n CREATEDBY:\"Lorraine Clarkin\"\n CREATIONDATE:\"03-MAR-21\"\n BARCODEID:\"A1429857\"\n RECORDDESCRIPTION:\"Putnam N1A review 10 31 2020 funds1 of 4\"\n CHECKINDATE:\"01-JAN-01\"\n RECORDTYPE:\"Assurance Workpapers\"\n RECORDSTATE:\"Recorded\"\n \"RECORDCONTENTS\": [\"Signed rep letter\",\"Reference letter\"]\n}\n", "text": "I have 2 collections as below\ncollection 1:collection 2:And I want RECORDCONTENTS from 2 nd collection to be merged with 1 collection as comma seperated like belowFinal Result :can someone help", "username": "Rehan_Raza_Khatib_US" }, { "code": "$lookup$mapdb.coll_1.aggregate([\n {\n \"$lookup\": {\n \"from\": \"coll_2\",\n \"localField\": \"BARCODEID\",\n \"foreignField\": \"BARCODEID\",\n \"as\": \"RECORDCONTENTS\"\n }\n },\n {\n \"$set\": {\n \"RECORDCONTENTS\": {\n \"$map\": {\n \"input\": \"$RECORDCONTENTS\",\n \"in\": \"$$this.RECORDCONTENTS\"\n }\n }\n }\n }\n])\n", "text": "You can do it like this:Working example", "username": "NeNaD" }, { "code": "", "text": "Thank you,Script is working fine", "username": "Rehan_Raza_Khatib_US" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ADD Data from 2 collection as comma seperated value
2022-09-16T12:30:06.393Z
ADD Data from 2 collection as comma seperated value
862
null
[ "cxx" ]
[ { "code": "mongocxx::uriconst std::stringbsoncxx::string::view_or_valueurimongocxx::pool", "text": "Hi,after facing an issue with the driver crashing during an iterative read/write process, I wanted to try the upgrade from 3.5.0 to 3.6.0. Compilation was no big deal after figuring out howto do it at the last time. I’m still using Boost (1.72.0) for the Polyfill even if I use VS2017 for my application.After switching to 3.6.0 I was not able to connect to the DB nor even create an mongocxx::uri by passing a const std::string. Also taking the detour by creating a bsoncxx::string::view_or_value first does not help. After passing the c-String from my string object, the uri constructor was not throwing any longer but then creating a mongocxx::pool was causing the trouble. From then, the passed string was no longer a undefined object. There I stopped further investigations and went back to 3.5.0.I’m actually also struggeling by catching exceptions, but I’m sure this is another topic. I’m also pretty sure, that the crashing behaviour is linked to some weird unmatching VS compiling setting or sth.Very thankful for all kind of hints", "username": "Uwe_Dittus" }, { "code": "mongocxx::uriconst std::stringmongocxx::instance instance{};\nmongocxx::uri uri(\"mongodb://MONGODB_URI/\");\nauto client = mongocxx::client{uri};\nmongocxx::pool", "text": "Hi @Uwe_Dittus,After switching to 3.6.0 I was not able to connect to the DB nor even create an mongocxx::uri by passing a const std::string .Unfortunately I’m unable to reproduce the issue that you’re seeing. The following code snippet works without an issue in both v3.5.x and v3.6.xIf you’re able to reproduce the issue consistently, could you provide the following:Also could you elaborate what do you mean by mongocxx::pool causing trouble ?Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Hi @wan,thanks for the answer. I’ll try it again next week, just too busy.Cheers", "username": "Uwe_Dittus" }, { "code": "", "text": "Just stumbled over this again.\nFor those who wondering. It was a simple VisualStudio Project config issue. The debug version of my application was not compiled with the /MDd flag.", "username": "Uwe_Dittus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[Mongo-cxx-driver.r3.6.0] using std::string on mongocxx::uri leading to crash
2020-09-24T07:15:35.068Z
[Mongo-cxx-driver.r3.6.0] using std::string on mongocxx::uri leading to crash
4,482
null
[ "performance" ]
[ { "code": "", "text": "Hi Experts,We are finding Mongo Response Time is high in our software release which is using Mongo 4.2.20 compared to our earlier software release that has Mongo 4.0.27, although same storage engine (Wired Tiger) is used in either.In 4.0.27, Mongo response time is around 600 ms, whereas in 4.2.20 it is 1-2 seconds in some occasions, which seems to be a bigger jump.Is this any known issue/behavior. Kindly confirm.Thanks,\nKiran", "username": "Kiran_Pamula" }, { "code": "mongoddb.getCmdLineOpts()mongostat", "text": "Hey @Kiran_Pamula,Welcome to the MongoDB Community Forums! In 4.0.27, Mongo response time is around 600 ms, whereas in 4.2.20 it is 1-2 seconds in some occasions, which seems to be a bigger jump.How are you calculating the response time? It would be great to share details of your deployment topology, and are you running mongod along with some other processes? Kindly also share the output of db.getCmdLineOpts() and mongostat output from both 4.0.27 & 4.2.20 to make any definitive conclusion.\nResponse time usually depends on a number of factors like your hardware, number of processes running, your memory, etc. Also, starting MongoDB 4.2 retryable writes are default set to true and MongoDB also removed the 16MB total size limit for a transaction. This may be one of the reasons why your application is having a higher response rate than the previous version.Regards,\nSatyam", "username": "Satyam" } ]
Mongo Response Time is high in 4.2.20 compared to 4.0.27
2022-09-13T19:47:14.736Z
Mongo Response Time is high in 4.2.20 compared to 4.0.27
2,191
null
[ "monitoring" ]
[ { "code": "", "text": "Hi Experts,wanted to know if there are any way we can check number of DML happening in collections.Any help will be appriciated.Regards\nPrince.", "username": "Prince_Das" }, { "code": "{\n \"_id\": { <Resume Token> },\n \"operationType\": \"insert\",\n ...\n \"ns\": {\n \"db\": \"example\",\n \"coll\": \"testing\"\n },\n \"fullDocument\": {\n \"_id\": ObjectId(\"599af247bb69cd81261c345f\"),\n ...\n }\n}\n", "text": "Hey @Prince_Das,There are a number of ways you can use to check DML happening in a collection. You can try using mongostat which provides a quick overview of running instances and mongotop which provides statistics on a per-collection level.You can also try using change streams in MongoDB which allows applications to access real-time data changes. With change streams, you can track data changes in a single collection, a database, or even an entire deployment. For the insert operation, the change event will give an output like this:Here, the ns field shows the namespace (database and or collection) affected by the event.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Wanted to check number of insert happening to a collection in daily basis
2022-09-08T12:31:19.762Z
Wanted to check number of insert happening to a collection in daily basis
2,594
null
[ "aggregation", "queries", "crud" ]
[ { "code": "{\n \"foo\": [\n {\n \"userId\": \"e736f24f-164e-46d1-8a8b-6ac2f69237a8\",\n \"params\": {\n \"location\": {\n \"coordinates\": {\n \"first\": -80.514124,\n \"second\": 45.509854\n }\n },\n \"withinDistance\": 30,\n ... many other fields ...\n },\n },\n {\n \"userId\": \"b71d2067-9dee-45cc-9903-7dea371ea685\",\n \"params\": {\n \"location\": {\n \"coordinates\": {\n \"first\": -82.923723,\n \"second\": 47.210577\n }\n },\n \"withinDistance\": 30,\n ... many other fields ...\n },\n },\n ]\n}\nfoo.params.location.coordinates{\n \"foo\": [\n {\n \"userId\": \"e736f24f-164e-46d1-8a8b-6ac2f69237a8\",\n \"params\": {\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -80.514124,\n 45.509854\n ]\n },\n \"withinDistance\": 30,\n ... many other fields ...\n },\n },\n {\n \"userId\": \"b71d2067-9dee-45cc-9903-7dea371ea685\",\n \"params\": {\n \"location\": {\n \"type\": \"Point\",\n \"coordinates\": [\n -82.923723,\n 47.210577\n ]\n },\n \"withinDistance\": 30,\n ... many other fields ...\n },\n },\n ]\n}\nwithinDistanceparamsdb.collection.updateMany({}, [\n {\n \"$set\": {\n \"foo\": {\n \"$map\": {\n \"input\": \"$foo\",\n \"as\": \"f\",\n \"in\": {\n $mergeObjects: [\n \"$$f\",\n { \n \"params\": {\n \"location\" : {\n \"type\": \"Point\",\n \"coordinates\": [\n \"$$f.params.location.coordinates.first\",\n \"$$f.params.location.coordinates.second\"\n ]\n }\n }\n }\n ]\n }\n }\n }\n }\n }\n])\n", "text": "This seems like it should be easy but it seems to be annoyingly hard.Input collection contains documents like this:And I want to transform the non-standard foo.params.location.coordinates object into the standard GeoJSON format, so the output should look like this:The closest I’ve come is this but I lose the withinDistance and other fields in the params structure:", "username": "Raman_Gupta" }, { "code": "", "text": "I think you are simply missing a extra $mergeObjects from the old params with the modified params.", "username": "steevej" }, { "code": "location$mergeParamsclone(t={}){const r=t.loc||{};return e({loc:new Position(\"line\"in r?r.line:this.loc.line,\"column\"in r?r.column:...<omitted>...)} could not be cloned.\ndb.collection.updateMany({}, [\n {\n \"$set\": {\n \"foo\": {\n \"$map\": {\n \"input\": \"$foo\",\n \"as\": \"f\",\n \"in\": {\n $mergeObjects: [\n \"$$f\",\n { \n \"params\": {\n $mergeObjects: [\n \"$$f.params\",\n {\n \"location\" : {\n \"type\": \"Point\",\n \"coordinates\": [\n \"$$f.params.location.coordinates.first\",\n \"$$f.params.location.coordinates.second\"\n ]\n }\n }\n ]\n }\n }\n ]\n }\n }\n }\n }\n }\n])\n", "text": "Thank you, that worked. I had actually tried that before, but now I realize I had missed the braces around the location object in the $mergeParams array, and MongoDB gave me a very strange error, which didn’t lead me to my error:After your post I reviewed what I had done and found the missing braces.For posterity, the solution looks like this:Thanks!", "username": "Raman_Gupta" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update all elements of an array, reading data from existing elements
2022-09-17T23:38:03.633Z
Update all elements of an array, reading data from existing elements
1,097
null
[ "atlas-cluster", "php" ]
[ { "code": "<?php\n\nrequire_once dirname( __FILE__ ) . '/vendor/autoload.php';\n\n$client = new MongoDB\\Client(\n 'mongodb+srv://<my-database>:<my-password>@<my-cluster>.lkffbtl.mongodb.net/?retryWrites=true&w=majority');\n\nvar_dump( $client );\n public function __construct($uri = 'mongodb://127.0.0.1/', array $uriOptions = [], array $driverOptions = [])\n {\n $driverOptions += ['typeMap' => self::$defaultTypeMap];\n\n if (! is_array($driverOptions['typeMap'])) {\n throw InvalidArgumentException::invalidType('\"typeMap\" driver option', $driverOptions['typeMap'], 'array');\n }\n\n if (isset($driverOptions['autoEncryption']['keyVaultClient'])) {\n if ($driverOptions['autoEncryption']['keyVaultClient'] instanceof self) {\n $driverOptions['autoEncryption']['keyVaultClient'] = $driverOptions['autoEncryption']['keyVaultClient']->manager;\n } elseif (! $driverOptions['autoEncryption']['keyVaultClient'] instanceof Manager) {\n throw InvalidArgumentException::invalidType('\"keyVaultClient\" autoEncryption option', $driverOptions['autoEncryption']['keyVaultClient'], [self::class, Manager::class]);\n }\n }\n\n $driverOptions['driver'] = $this->mergeDriverInfo($driverOptions['driver'] ?? []);\n\n $this->uri = (string) $uri;\n $this->typeMap = $driverOptions['typeMap'] ?? null;\n\n unset($driverOptions['typeMap']);\n\n $this->manager = new Manager($uri, $uriOptions, $driverOptions);\n $this->readConcern = $this->manager->getReadConcern();\n $this->readPreference = $this->manager->getReadPreference();\n $this->writeConcern = $this->manager->getWriteConcern();\n }\n$this->manager = new Manager($uri, $uriOptions, $driverOptions);", "text": "Hello, I am trying to connect via the PHP driver. I have followed the installation procedures from MongoDB’s PHP Driver documentation regarding pecl install for the extension and composer install of mongodb into the root directory.I have used the below code to launch the php driver and establish the connection to the database (note that I did change the inputs for db, pass, and cluster in the actual code).However, when I run this, I receive the following fatal error:Fatal error : Uncaught Error: Class ‘MongoDB\\Driver\\Manager’ not found in /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/vendor/mongodb/mongodb/src/Client.php:124 Stack trace: #0 /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/conf.php(6): MongoDB\\Client->__construct(‘mongodb+srv://c…’) #1 /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/mongodb.php(28): require_once(’/home3/coradase…’) #2 /home3/coradase/public_html/cora-staging/wp-includes/class-wp-hook.php(307): cora_mongodb_admin_page(’’) #3 /home3/coradase/public_html/cora-staging/wp-includes/class-wp-hook.php(331): WP_Hook->apply_filters(’’, Array) #4 /home3/coradase/public_html/cora-staging/wp-includes/plugin.php(476): WP_Hook->do_action(Array) #5 /home3/coradase/public_html/cora-staging/wp-admin/admin.php(259): do_action(‘toplevel_page_c…’) #6 {main} thrown in /home3/coradase/public_html/cora-staging/wp-content/plugins/MongoDB/vendor/mongodb/mongodb/src/Client.php on line 124Client.php (the file referenced in the error code) is a default file that came with the Composer installation. I have not edited the file. The function in Client.php that contains line 124 (referenced in the error) is shown below:For reference, line 124 is:\n$this->manager = new Manager($uri, $uriOptions, $driverOptions);Again, this code comes directly from the composer installation of mongodb, and has not been edited at all.I appreciate any insight form the team or anyone who has had this same problem in trying to debug so that the code will properly establish the connection to the database.Thank you!", "username": "michael_demiceli" }, { "code": "extension=mongodb.so/etc/php/8.1/cli/php.ini", "text": "Did you remember to add the line extension=mongodb.so to the end of your /etc/php/8.1/cli/php.ini or whatever ini file is appropriate to your php installation?", "username": "Jack_Woehr" }, { "code": "", "text": "Hi Jack - yes, I did add the extension line to php.ini.For reference, I am trying to connect MongoDB to a web app running on Wordpress.", "username": "michael_demiceli" }, { "code": "$ php\n<?php phpinfo(); ?>\nmongodb\n\nMongoDB support => enabled\nMongoDB extension version => 1.14.0\nMongoDB extension stability => stable\nlibbson bundled version => 1.22.0\nlibmongoc bundled version => 1.22.0\nlibmongoc SSL => enabled\nlibmongoc SSL library => OpenSSL\nlibmongoc crypto => enabled\nlibmongoc crypto library => libcrypto\nlibmongoc crypto system profile => disabled\nlibmongoc SASL => disabled\nlibmongoc ICU => enabled\nlibmongoc compression => enabled\nlibmongoc compression snappy => disabled\nlibmongoc compression zlib => enabled\nlibmongoc compression zstd => disabled\nlibmongocrypt bundled version => 1.5.0\nlibmongocrypt crypto => enabled\nlibmongocrypt crypto library => libcrypto\n", "text": "Well, what’s happening is that the classes built into the mongodb.so are not being found. Whatever the reason. Try loading php at the command line …Then ctl-D to exit and PHP should spew a lot of configuration info.\nLook for lines like:and if you don’t find them, then the extension is not being loaded.", "username": "Jack_Woehr" }, { "code": "", "text": "Hi\nMy extension have loaded, but get the same error. Also, I want to have connect in local db (it seems runing on 127.0.1.1) - what connection string should I use?", "username": "New_Triangle" }, { "code": "mongodb://user:password", "text": "what connection string should I use?mongodb://user:password should be good enough assuming you really mean 127.0.0.1", "username": "Jack_Woehr" }, { "code": "s connecting without prompt and also located in 127.0.1.1 (I use command mongodb --host 127.0.1.1) And in php file it", "text": "Thanks for reply, but problem is still here\nI use mongodb from another application and its connecting without prompt and also located in 127.0.1.1 (I use command mongodb --host 127.0.1.1) And in php file its:\n$client = new MongoDB\\Driver\\Manager(‘mongodb://127.0.1.1’);\nI try to change file Client.php with manually adding “127.0.1.1”, but it doesn`t work too", "username": "New_Triangle" }, { "code": "$serverApi = new ServerApi(ServerApi::V1); $client = new MongoDB\\Client( 'mongodb+srv://user:<password>@<url>.mongodb.net/?retryWrites=true&w=majority', [], ['serverApi' => $serverApi]); $db = $client->test;", "text": "I’ve just tried to create a free DB on Atlas and I’ve used the string from docs$serverApi = new ServerApi(ServerApi::V1); $client = new MongoDB\\Client( 'mongodb+srv://user:<password>@<url>.mongodb.net/?retryWrites=true&w=majority', [], ['serverApi' => $serverApi]); $db = $client->test;I have the same problem, so it’s something with mongodb/compose deployment", "username": "New_Triangle" }, { "code": "", "text": "Just try to reboot server - it was a solution to me", "username": "New_Triangle" } ]
Fatal error: Uncaught Error: Class 'MongoDB\Driver\Manager'
2022-09-18T00:43:48.128Z
Fatal error: Uncaught Error: Class &lsquo;MongoDB\Driver\Manager&rsquo;
3,314
null
[ "queries" ]
[ { "code": "", "text": "I’m on the M2 cluster tier and have a single 3MB document that I am querying every minute. The queries have gone from taking less than a second to pull the data to consistently taking over 30 seconds to pull the single document. I noticed that my oplog is over 3.5GB and assuming this is why the queries have become so slow. I’ve been researching and it looks like I have no control of the oplog through Atlas. Is there anything I can do to reduce the size of the oplog and make the query faster again?", "username": "Levi_Ackerman" }, { "code": "", "text": "Hi @Levi_Ackerman - Welcome to the community I was not aware one of the Survey Corps members used Atlas! The queries have gone from taking less than a second to pull the data to consistently taking over 30 seconds to pull the single document.Regarding your issue with the >30 second query time, it is possible you’ve hit the Data Transfer Limits as part of the free/shared tier limitations. As per the same docs:Atlas handles clusters that exceed the rate limit as follows:This may explain the delay if you have exceeded the limit(s). If you’re still experiencing the delay, I would contact the Atlas support team via the in-app chat and see if the support members can check if this limit has been exceeded for your particular M2 cluster in question.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Atlas oplog is slowing down queries
2022-09-18T06:33:12.069Z
MongoDB Atlas oplog is slowing down queries
1,614
null
[ "aggregation" ]
[ { "code": "dailyReportdailyReportdailyReport{\n date: '2021-06-06 22:00:00.000Z',\n customerId: '559687785574',\n customerName: 'Test A Customer',\n campaigns: [ # contains an indefinite number of campaigns\n {\n _id: '62c5810cbccef7125b81870b',\n name: 'newsletter-2021',\n requestCount: 4, # sum of countries.requestCount\n bookingCount: 2, # sum of countries.bookingCount\n revenue: 1500, # sum of countries.revenue\n countries: [ # contains an indefinite number of countries\n {\n countryCode: 'DE',\n requestCount: 4, # sum of regions.requestCount\n bookingCount: 2, # sum of regions.bookingCount\n revenue: 1500, # sum of regions.revenue\n regions: [ # contains an indefinite number of regions\n {\n countryRegion: 'Bayern',\n requestCount: 1,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n }\n ]\n}\n[\n # limit docs by customer id and requested date range\n { $match: {\n customerId: '559687785574',\n date: {\n $gt: new Date('2021-05-31 22:00:00.000Z'),\n $lt: new Date('2021-07-01 22:00:00.000Z'),\n }\n }},\n # group by customer id and push all campaign data in one array\n { $group: {\n _id: '$customerId',\n customerName: { $first: '$customerName' },\n campaigns: { $push: '$campaigns' }\n }},\n# 'transform' $campaigns to single level array\n { $project: {\n _id: 1,\n customerName: 1,\n campaigns: { $reduce: {\n input: '$campaigns',\n initialValue: [],\n in: { $concatArrays: ['$value', '$this'] },\n }}\n }}\n]\n[\n # ... shorten for readability\n {\n _id: '62c5810cbccef7125b81870b',\n name: 'newsletter-2021',\n requestCount: 340,\n bookingCount: 157,\n revenue: 20358,\n countries: [\n # ... shorten for readability\n {\n countryCode: 'DE',\n requestCount: 123,\n bookingCount: 67,\n revenue: 10564,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 74,\n bookingCount: 37,\n revenue: 7854,\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 74,\n bookingCount: 37,\n revenue: 7854,\n },\n # ... shorten for readability\n ],\n # ... shorten for readability\n },\n # ... shorten for readability\n ],\n },\n # ... shorten for readability\n]\n", "text": "Hello there!There is a daily cronjob which creates a daily report (model name: dailyReport) per customer and its campaigns based on request and booking numbers per country & region. These values are nested in the docs and I need to summarize these values. Up front: I have no influence to the structure and model of dailyReport so I can’t change how the daily cronjob creates those.Example doc for dailyReport:So these docs are created for every customer for every day. Now I need to aggregate/sum the data for a single customer within a date range.How can i aggregate the dailyReports for a single customer grouped by campaigns and sum the nested values for requestCount, bookingCount & revenue for every region, country and campaign?I tried the following to aggregate the data for may 2021:But now I’m stuck because every doc in $campaigns holds its own data for countries and regions, but I need to somehow sum them based on same campaign, country, region data but keep the nesting. So the final array from aggregation should look like this:I hope I explained myself clear enough and someone can help me.cheers\nGeorg", "username": "Georg_Bote" }, { "code": "\"campaigns\"\"countries\"\"regions\"\"requestCount\"\"bookingCount\"\"revenue\"\"campaigns\"null\"customerId\"\"campaign\"db.campaigns.find()\n[\n/// Document 1\n {\n _id: ObjectId(\"6327c2b3b56990d3451c53cb\"),\n date: '2021-06-06 22:00:00.000Z',\n customerId: '559687785574',\n customerName: 'Test A Customer',\n campaigns: [\n {\n _id: '62c5810cbccef7125b81870b',\n name: 'newsletter-2021',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n countries: [\n {\n countryCode: 'DE',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 1,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n }\n ]\n },\n/// Document 2\n {\n _id: ObjectId(\"6327c2b3b56990d3451c53cc\"),\n date: '2021-06-06 22:00:00.000Z',\n customerId: '559687785574',\n customerName: 'Test A Customer',\n campaigns: [\n {\n _id: '62c5810cbccef7125b81870b v2',\n name: 'newsletter-2021 v2',\n requestCount: 5,\n bookingCount: 2,\n revenue: 1500,\n countries: [\n {\n countryCode: 'AU',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 2,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n },\n {\n _id: '62c5810cbccef7125b81870b v3',\n name: 'newsletter-2021 v3',\n requestCount: 6,\n bookingCount: 2,\n revenue: 1500,\n countries: [\n {\n countryCode: 'US',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 3,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n }\n ]\n }\n]\ndb.campaigns.aggregate(\n{ $match: {customerId: '559687785574'}},\n{ $unwind:\"$campaigns\"},\n{ $group: { \"_id\" : '$customerId', \"customerName\":{\"$first\":'$customerName'},campaigns: { $push: '$campaigns' }}}\n)\n[\n {\n _id: '559687785574',\n customerName: 'Test A Customer',\n campaigns: [\n {\n _id: '62c5810cbccef7125b81870b',\n name: 'newsletter-2021',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n countries: [\n {\n countryCode: 'DE',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 1,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n },\n {\n _id: '62c5810cbccef7125b81870b v2',\n name: 'newsletter-2021 v2',\n requestCount: 5,\n bookingCount: 2,\n revenue: 1500,\n countries: [\n {\n countryCode: 'AU',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 2,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n },\n {\n _id: '62c5810cbccef7125b81870b v3',\n name: 'newsletter-2021 v3',\n requestCount: 6,\n bookingCount: 2,\n revenue: 1500,\n countries: [\n {\n countryCode: 'US',\n requestCount: 4,\n bookingCount: 2,\n revenue: 1500,\n regions: [\n {\n countryRegion: 'Bayern',\n requestCount: 3,\n bookingCount: 1,\n revenue: 1000\n },\n {\n countryRegion: 'Sachsen',\n requestCount: 3,\n bookingCount: 1,\n revenue: 500\n }\n ]\n }\n ]\n }\n ]\n }\n]\n$unwind", "text": "Hi @Georg_Bote,Thanks for providing a really detailed explanation including what you’ve attempted so far campaigns: [ # contains an indefinite number of campaigns\ncountries: [ # contains an indefinite number of countries\nregions: [ # contains an indefinite number of regionsRegarding the \"campaigns\" , \"countries\" and \"regions\" array fields, you have advised these can contain an indefinite number of values / documents within. If you’re planning to add to these arrays forever, please take note of the BSON Document Size limit as it may cause issues in future if you hit this limit.But now I’m stuck because every doc in $campaigns holds its own data for countries and regions, but I need to somehow sum them based on same campaign, country, region data but keep the nesting. So the final array from aggregation should look like this:Based off your current pipeline and expected output, I presume you wish to add some further stages to do the summing to achieve the top level \"requestCount\", \"bookingCount\" and \"revenue\" values (in your expected output document). Is this correct?I tried to use the sample document and your current pipeline but received a \"campaigns\" value of null.In my test environment, I have created 2 documents for the same \"customerId\". The below is the pipeline used (I have removed the date range filter just for demonstration purposes for the output).2 sample documents inside the test environment (same customerId, values within the \"campaign\" array differing. E.g. altered country codes):PIpeline stages used:Output:Note: The main difference between the pipeline you have provided and above is that the above uses $unwind to achieve a single level array.I understand you have redacted some details for readability however if you require further assistance could send 3-4 sample documents for the same customerId and advise of the expected output?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Group documents and sum values of deeply nested arrays
2022-09-08T11:03:42.190Z
Group documents and sum values of deeply nested arrays
3,239
null
[ "queries" ]
[ { "code": "[{\n \"productCategory\": \"Electronics\",\n \"price\": \"20\",\n \"priceCondition\": \"Fixed\",\n \"adCategory\": \"Sale\",\n \"productCondition\": \"New\",\n \"addDescription\": \"Lorem Ipsum Dolor Sit Amet Consectetur Adipisicing Elit Maxime Ab Nesciunt Dignissimos.\",\n \"city\": \"Los Angeles\",\n \"rating\": {\n \"oneStar\": 1,\n \"twoStar\": 32,\n \"threeStar\": 13,\n \"fourStar\": 44,\n \"fiveStar\": 1\n },\n \"click\": 12,\n \"views\": 3\n },\n {\n\n \"productCategory\": \"Automobiles\",\n \"price\": \"1500\",\n \"priceCondition\": \"Negotiable\",\n \"adCategory\": \"Rent\",\n \"productCondition\": \"New\",\n \"addDescription\": \"Lorem Ipsum Dolor Sit Amet Consectetur Adipisicing Elit \n Maxime Ab Nesciunt Dignissimos.\",\n \"city\": \"California\",\n \"rating\": {\n \"oneStar\": 2,\n \"twoStar\": 13,\n \"threeStar\": 10,\n \"fourStar\": 50,\n \"fiveStar\": 4\n },\n \"click\": 22,}\n\n},\n{\n \"productCategory\": \"Hospitality\",\n \"price\": \"500\",\n \"priceCondition\": \"Yearly\",\n \"adCategory\": \"Booking\",\n \"productCondition\": \"New\",\n \"addDescription\": \"Lorem Ipsum Dolor Sit Amet Consectetur Adipisicing Elit Maxime Ab Nesciunt Dignissimos.\",\n \"city\": \"Houston\",\n \"rating\": {\n \"oneStar\": 16,\n \"twoStar\": 19,\n \"threeStar\": 28,\n \"fourStar\": 16,\n \"fiveStar\": 17\n },\n \"click\": 102,\n \"views\": 47\n}\n]\nhttp://localhost:8080/api/v1/filter?productCondition=New&price=100&productCategory=Hospitality db\n .collection(Index.Add)\n .find({\n $or: [\n { productCategory },\n { price },\n { adCategory },\n { priceCondition },\n { productCondition },\n { city },\n ],\n })\n .limit(pageSize)\n .skip(pageSize * parsePage)\n .toArray();\n\n db\n .collection(Index.Add)\n .find({\n $and: [\n { productCategory },\n { price },\n { adCategory },\n { priceCondition },\n { productCondition },\n { city },\n ],\n })\n .limit(pageSize)\n .skip(pageSize * parsePage)\n .toArray();\n", "text": "I have Product collection. In this collection each document has same keys and different values. Several documents are shown in the example below.I would like to search each document with one or more matching search queries to match the document with my search.\nIn the example below, I will show url query: http://localhost:8080/api/v1/filter?productCondition=New&price=100&productCategory=HospitalitySo far I have tried to solve the filtration using the $or and $and operators but here there is a problem because with the first operator $or when I do a query only the first condition is searched, the rest is omitted, with the second operator $and I have to add all the condition and the problem arises because I will not always have all querys.I am using the find method to achieve my filtering methods.", "username": "Bartlomiej_Figatowski" }, { "code": "queries = []\nif ( typeof productCategory !== undefined ) queries.push( { productCategory } )\nif ( typeof price !== undefined ) queries.push( { price } )\n/* ... */\nif ( typeof city !== undefined ) queries.push( { city } )\nc.find( { \"$and\" : queries } )\n", "text": "Using $or and $and leads to completely different results.It is not clear if you want $or or you want $and.A query is simply a json object and you may use any conditional javascript statement to build your query.You simply do:", "username": "steevej" }, { "code": "undefined const check = (object: IAdds) => {\n for (let key in {\n productCategory,\n price,\n priceCondition,\n adCategory,\n productCondition,\n city,\n }) {\n if (object[key] !== undefined) return req.query;\n }\n };\n", "text": "undefined Thank you for your response. I think I wrote the question too quickly. I have already fixed the problem with the filtration. You’re right in saying that I should check to see if my query is undefined. Actually, I should start with that. I solved the filtration problem like this. And I used $and operator in my case.", "username": "Bartlomiej_Figatowski" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb filter by categories
2022-09-17T17:01:36.133Z
Mongodb filter by categories
3,897
https://www.mongodb.com/…9_2_1024x208.png
[ "queries" ]
[ { "code": "", "text": "db.solarSystem.count({type:{$ne:“Star”}}) //8\ndb.solarSystem.countDocuments({type:{$ne:“Star”}}) //8\ndb.solarSystem.estimatedDocumentCount({type:{$ne:“Star”}}) //9\ndb.solarSystem.find({type:{$ne:“Star”}}).count() //8\n\ncount-countDocuments-estimatedDocumentCount1036×211 12.6 KB\n", "username": "Jin_Xin_Chen" }, { "code": "estimatedDocumentCount()db.collection.estimatedDocumentCount()", "text": "Hi @Jin_Xin_Chen and welcome to the MongoDB Community forums. Taking a look at the estimatedDocumentCount() documentation, we see the following: db.collection.estimatedDocumentCount() does not take a query filter and instead uses metadata to return the count for a collection.Since it’s using metadata the count may not be correct. This also explains the estimated portion of the name. It’s returning a best guess answer without running the actual filtered query.", "username": "Doug_Duncan" }, { "code": "estimatedDocumentCount()typeoptionsestimatedDocumentCount()db.solarSystem.countDocuments()", "text": "Hi @Jin_Xin_Chen,The estimatedDocumentCount() result is based on metadata for the count of all documents in a collection.As noted in the documentation @Doug_Duncan quoted, this method does not take a query filter so it is not equivalent to your other counts which filter on type values. Your query document is being interpreted as the options parameter for estimatedDocumentCount() and ignored.You could compare with db.solarSystem.countDocuments(), which would be an accurate count of all documents (versus a fast but possibly inaccurate estimate).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Why does "estimatedDocumentCount()" get a different result than "count()" and "countDocuments()"?
2022-09-17T02:13:11.497Z
Why does &ldquo;estimatedDocumentCount()&rdquo; get a different result than &ldquo;count()&rdquo; and &ldquo;countDocuments()&rdquo;?
4,011
null
[ "aggregation", "queries" ]
[ { "code": "[{\n \"id\": 1,\n \"phase\": \"phase1\",\n \"address\": \"address1\",\n \"grades\": [{\n \"grade\":80, \"mean\": 75, \"std\": 6\n },{\n \"grade\":85, \"mean\": 90, \"std\": 4\n },{\n \"grade\": 91, \"mean\": 85, \"std\": 4\n }\n ]\n },\n {\n \"id\": 1_1,\n \"parent\": 1,\n \"grades\": [{\n \"grade\":90, \"mean\": 75, \"std\": 6\n },{\n \"grade\":87, \"mean\": 90, \"std\": 4\n },{\n \"grade\": 91, \"mean\": 85, \"std\": 4\n }\n ]\n },{\n \"id\": 1_2,\n \"parent\": 1,\n \"grades\": [{\n \"grade\":82, \"mean\": 75, \"std\": 6\n },{\n \"grade\":83, \"mean\": 81, \"std\": 4\n },{\n \"grade\": 99, \"mean\": 85, \"std\": 4\n }\n ]\n }\n]\n[{\n \"id\": 1,\n \"phase\": \"phase1\",\n \"address\": \"address1\",\n \"grades\": [{\n \"grade\":80, \"mean\": 75, \"std\": 6\n },{\n \"grade\":85, \"mean\": 90, \"std\": 4\n },{\n \"grade\": 91, \"mean\": 85, \"std\": 4\n },{\n \"grade\":90, \"mean\": 75, \"std\": 6\n },{\n \"grade\":87, \"mean\": 90, \"std\": 4\n },{\n \"grade\": 91, \"mean\": 85, \"std\": 4\n },{\n \"grade\":82, \"mean\": 75, \"std\": 6\n },{\n \"grade\":83, \"mean\": 81, \"std\": 4\n },{\n \"grade\": 99, \"mean\": 85, \"std\": 4\n }\n ]\n }\n]\n", "text": "Hi,I have a document with an attribute of type array, and other attributes. I applied outlier pattern, so that the array overflow data added into a new document of the same type. The result is we have master/parent document (id=1 in below dataset) contains all attributes including the attribute of type array, and the overflow documents having only the attribute of type an array (id=2 and 3 in below dataset), and parent attribute points to the id of the parent document, as following:Is there a way to aggregate the data all documents to return grades greater than 80 and address = “address1”? as following expected result:Thanks", "username": "Rami_Khal" }, { "code": "db.collection.aggregate([\n {\n $unwind: \"$grades\"\n },\n {\n $match: {\n \"grades.grade\": {\n $gte: 80\n },\n \"address\": \"address1\"\n }\n },\n {\n $group: {\n _id: {\n id: {\n $ifNull: [\n \"$parent\",\n \"$id\"\n ]\n }\n },\n grades: {\n $push: \"$grades\"\n }\n }\n }\n])\n", "text": "Hi,I started with something like this:But the issue is that matching on address, returns only first document. As other documents not having address attribute.\nAny idea how can handle these documents as one document in aggregating data, since all have the same parent which have the information?Thanks", "username": "Rami_Khal" }, { "code": "", "text": "you should match before unwind. match is smart enough to match within arrays. you have a higher probability to leverage your indexes if you match firstyou only get the grades on the parent because you are matching address. I see 2 solutionsyou then use $filter to get the grades subdocumentssince you do not unwind you do not need to group", "username": "steevej" }, { "code": "db.collection.aggregate([\n {\n \"$match\": {\n \"address\": \"address1\"\n }\n },\n {\n \"$project\": {\n \"grades\": {\n \"$filter\": {\n \"input\": \"$grades\",\n \"cond\": {\n \"$gte\": [\n \"$$this.grade\",\n 80\n ]\n }\n }\n }\n }\n }\n])\n", "text": "Thanks a lot Steeve for the help!Ended up as follows:I was trying to find a way to connect document and subdocuments based of the “id” and “parent” fields. But didn’t find such a way. As you suggested, either have to duplicate the data (address) in subdocuments. Or using lookup.\nI would use duplicate data (address) in subdocuments (solution #1), as I’m under impression that the lookup is not a proper solution for large dataset from performance perspective.Thanks", "username": "Rami_Khal" }, { "code": "", "text": "yes, lookup will be slower, but your data schema is defined in such a way that this is the solution. make sure you have proper indexesfor your lookup, use localField:id with foreignField:parent and it should worknote that when you project you loose id, phase and address fields", "username": "steevej" }, { "code": "", "text": "Thanks a lot Steeve!This design for the data schema was to avoid unbounded arrays, so I used outlier pattern, by created new subdocuments for overflow data in arrays. But with this approach, I had to handle all CRUD APIs to take into consideration the subdocuments.\nBecause of the performance impact for lookup, I wont use it. Instead, going to copy all fields needed in retrieve queries in subdocuments.Thanks a lot", "username": "Rami_Khal" } ]
Aggregate data from master document and it's details
2022-09-16T14:09:49.710Z
Aggregate data from master document and it&rsquo;s details
1,909
null
[ "graphql", "app-services-data-access" ]
[ { "code": "{\n \"title\": \"patient\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"user\": {\n \"bsonType\": \"objectId\"\n }\n }\n}\ncurl --location --request POST 'https://realm.mongodb.com/api/client/v2.0/app/aaaa/graphql' \\\n--header 'email: [email protected]' \\\n--header 'password: aaaa' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\"query\":\"mutation {\\r\\n insertOnePatient(data:{\\r\\n name: \\\"heeeeee\\\"\\r\\n }) {\\r\\n _id\\r\\n name\\r\\n }\\r\\n}\",\"variables\":{}}'\n", "text": "I’m using mongodb atlas with mongodb realm graphql api. I’m justing use realm ui and postman.1 - I made a simple colletion called patient on atlas.\n2 - I made a simple realm app.\n3 - I made a rule with “Users can only read and write their own data” template.\n4 - I put {“user.id”: “%%user.id”} on Apply When field.\n5 - I made a schema like that:6 - I’m not using sync.\n7 - i turn on email/password auth.\n8 - i create a email/password on realm ui.\n9 - I review and deploy.When i try to insert patient by postman, the api returns:reason=“no matching role found for document with _id: ObjectID(\\“606dfbeef6c4be4cb6d91831\\”)”; code=“NoMatchingRuleFound”; untrusted=“insert not permitted”; details=map[]This is my request on curl:it’s work when i try to insert using graphql realm ui. some ideia?", "username": "Bob_Dylan" }, { "code": "", "text": "this is my app url https://realm.mongodb.com/api/client/v2.0/app/nicetry-mdjhm/graphql", "username": "Bob_Dylan" }, { "code": "", "text": "Hi @Bob_Dylan, welcome to the community.Could you please share the full rule definition for this collection (e.g., click on the “ADVANCED MODE” button and copy the JSON)?Also, is there an error shown in the Realm logs?", "username": "Andrew_Morgan" }, { "code": "{\n \"roles\": [\n {\n \"name\": \"owner\",\n \"apply_when\": {\n \"user\": \"%%user.id\"\n },\n \"insert\": true,\n \"delete\": true,\n \"search\": true,\n \"write\": true,\n \"fields\": {},\n \"additional_fields\": {}\n }\n ],\n \"filters\": [],\n \"schema\": {\n \"title\": \"patient\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"name\": {\n \"bsonType\": \"string\"\n },\n \"user\": {\n \"bsonType\": \"string\"\n }\n }\n }\n}\n", "text": "Hi Andrew,i made a request today and i dont saw any error, but i saw a error two days ago with this message: “no authentication methods were specified”.", "username": "Bob_Dylan" }, { "code": "", "text": "I am also getting the same issue, any solution to this?", "username": "Vishal_Shetty" }, { "code": "", "text": "Hello, did you ever find the solution? I’m triying to implement quite the same role.", "username": "Guido_Lopez" }, { "code": "{\n \"_id\": {\n \"%stringToOid\": \"%%user.id\"\n }\n}\n", "text": "Just if anyone stops here. I’ve found the solution, the problem was that I were creating a rule where the user in order to access its own data would need his own _id but I wasn’t matching this _id quite correctly, because I was matching an string with an object and it should’ve been a match between an object with an object. So, the rule I created was this:Regards.", "username": "Guido_Lopez" } ]
Permission error NoMatchingRuleFound
2021-04-07T18:43:10.057Z
Permission error NoMatchingRuleFound
6,634
null
[ "dot-net", "android" ]
[ { "code": "public class StringAndInt : EmbeddedObject\n{\n [MapTo(\"name\")]\n public string Name { get; set; }\n\n [MapTo(\"number\")]\n public int Number { get; set; }\n\n public StringAndInt()\n {\n }\n\n public StringAndInt(string name, int number)\n {\n Name = name;\n DayNumber = number;\n }\n}\npublic class App_Dictionaries : RealmObject\n{\n [PrimaryKey]\n [MapTo(\"_id\")]\n public ObjectId Id { get; set; } = ObjectId.GenerateNewId();\n\n [MapTo(Constants.Partition)]\n [Required]\n public string Partition { get; set; } = \"a_partition\"; //just an example\n\n [MapTo(\"string_and_int_dictionary\")]\n public IDictionary<string, StringAndInt> StringAndInt { get; }\n", "text": "Hi,I’m currently using a MongoDB Atlas Realm with a Xamarin App for mobile phones (iOS and Android) in Development Mode. I created a new RealmObject that contains several dictionaries that use a string as a key and the value contains an object similary to the classes below…}In my code, I created a new instance of the App_Dictionaries, a new instance of StringAndInt, and added it successfully to the Realm. This worked fine for the first (to make sure it worked) time. However, when I subsequently added another IDictionary (say, an object with a string, an int, and another string) to the RealmObject above, instantiating it worked, but writing to the Realm did not. Turns out that even tho development mode was ON, creating a Realm Object with IDictionary like the above code created Schemas for all the collections in my database. I was curious if this was a feature or a problem. It took a bit of poking around before I realized these Schemas had been created even tho the app was set to Development Mode on. As far as I was able to test, even deleting the schema wouldn’t let me do any subsequent changes to the RealmObject without getting rid of it, deleting the schemas, and building it all again in one shot with no subsequent alterations. Might be a Fody Weave problem? Thanks", "username": "Josh_Whitehouse" }, { "code": "App_Dictionaries", "text": "Hi @Josh_Whitehouse, thanks for your message.I am a little bit confused about what is the issue here. If I understood correctly is that you added another dictionary property to App_Dictionaries and the schema got changed on Atlas, am I correct? If that is so, then this is by design with development mode. The main idea with development mode is that you don’t need to create the schema yourself in Atlas, but it’s synchronized automatically from the app. This doesn’t work for all changes, as some are incompatible, but it would work for an additive change like adding a new property.If this was not your doubt maybe you can show some pieces of code of what your’re trying to do and what you’re getting and we can take a look at it.", "username": "papafe" }, { "code": "", "text": "Hi Ferdinando, it wasn’t clear to me that development mode created a schema. One hadn’t been created by my app until I added the collection containing embedded objects, each a dictionary. So, when working with these embedded dictionaries and not seeing the updates to them in the app, the creation of the schema (for all root collections did confuse me).It looks like with that schema generated by the app, I can’t add any additional dictionaries as embedded objects to the collection I store them in. I tried from my app, but the new dictionary didn’t show in the atlas. So, in my example above, I tested and successfully created an “App_Dictionaries” RealmObject, added a IDictionary<string, StringAndInt> StringAndInt, and saved it to the Realm. However, if I tried to add an additional IDictionary as an embedded object (say an IDictioary<int, StringAndInt>) to the App_Dictionaries object, it didn’t make it into the Realm. I could only successfully load ALL the dictionaries into the App_Dictionaries object at one time, not incrementally.Another related issue (as I was continuing to work with this) - it seems like there’s no way to get change notifications for these dictionaries that are embedded objects, other than the PropertyChanged for the collection holding the dictionaries (guessing because they are embedded?). So I can’t register for changes at the dictionary level (which would be more granular), only the collection containing the dictionaries. Is there a way to register for changes to a collection that are sub documents?Thanks for your help here.\nJosh", "username": "Josh_Whitehouse" }, { "code": "IDictionary<int, StringAndInt>AsRealmCollectionIDictionaryINotifyCollectionChanged/INotifyPropertyChangedvar appDict = new App_Dictionaries();\nappDict.StringAndIntDict.AsRealmCollection().SubscribeForNotifications(callback);\n//Alternatively\nappDict.StringAndIntDict.AsRealmCollection().CollectionChanged += handler;\n\n", "text": "Hi @Josh_Whitehouse, I understand the source of confusion. Development mode actually was created exactly to infer the schema directly from the objects that are synced. You can find more info in the docs. In the same section you can also find information about how to update your schema, and also what kind of changes are considered breaking changes. If you didn’t do it yet, I would suggest giving those pages a look, as they explain how all of this works in practice.Regarding why adding a new dictionary didn’t update the schema, I think there could be different things happening here. What could be happening is that you introduced a breaking change and so you need a client reset. You should check the logs in Atlas to verify that.\nAnother thing is that only dictionaries with strings as keys are supported, so if you’re using a property like IDictionary<int, StringAndInt> this will not work. Actually this should give a Fody error after compilation, and please let us know if it does not.Regarding the notifications, you can actually use AsRealmCollection to get the underlying realm collection of a IDictionary property. This allows you to subscribe for notifications or even subscribe to INotifyCollectionChanged/INotifyPropertyChanged events.\nSo you can do something like:I hope this helps some of your doubts.", "username": "papafe" }, { "code": "", "text": "Thanks, Ferdinand, for your response, it is helpful. I’ll do a test and look for a breaking change in the Atlas log and report if I see it. Likely it was, and I’d need to make sure of a client reset under these conditions.As to the IDictionary<int, StringAndInt> example, I’m not actually using an int as a key, it was a quick and sloppy example on my part, I didn’t mean to introduce confusion with it.I’ll try AsRealmCollection() for notifications. I wasn’ t receiving notifications when I did prior, but I also wasn’t getting notifications for the collection containing the dictionaries. I believe this was a threading context issue (where I was registering for the notifications, is this possible to do?) which I changed, and now I receive notifications for the collection containing the dictionaries fine. I’ll test AsRealmCollection() and let you know.Thanks again for your help,Josh", "username": "Josh_Whitehouse" }, { "code": "", "text": "Regarding notifications, you should register for them on the main thread of your application, otherwise the realm doesn’t “advance” his internal version and notifications aren’t calculated. So yes, it could be a threading issue if you’re registering for them in another thread.", "username": "papafe" }, { "code": "", "text": "This was resolved by making sure I registered for notifications on the main thread.", "username": "Josh_Whitehouse" } ]
Xamarin C# Realm Dictionary behavior
2022-09-09T20:53:37.703Z
Xamarin C# Realm Dictionary behavior
2,484
null
[ "swift", "atlas-device-sync" ]
[ { "code": "", "text": "I’m wondering if anyone has tried using the new Swift 5.5 concurrency features with Realm Swift.I’ve read https://docs.mongodb.com/realm/sdk/ios/advanced-guides/threading/I’m a little tired of the ThreadSafeReference / DispatchQueue / autoreleasepool / do-catch / newRealm / newRealm.resolve / write block boilerplate.I understand the rule: “Avoid writes on the UI thread if you write on a background thread:”. However, I’m wondering ow @MainActor shown at WWDC21 can work with Realm-Swift?", "username": "Adam_Ek" }, { "code": "", "text": "Hi @Adam_Ek we definitely believe that there are features announced at WWDC that we can exploit for the Swift SDK. We’ll share more here ASAP", "username": "Andrew_Morgan" }, { "code": "", "text": "I’m trying this approach right now.", "username": "Adam_Ek" }, { "code": "", "text": "Did you find a satisfying approach? It’s a bit hard to find great modern examples", "username": "Alex_Ehlke" } ]
Realm Swift and new Switft 5.5 Concurrency Model?
2021-06-19T13:49:49.776Z
Realm Swift and new Switft 5.5 Concurrency Model?
3,915
null
[ "queries" ]
[ { "code": " exports = async function (payload, response) {\n\n const collection = context.services.get(\"mongodb-atlas\")\n .db(\"mydb\")\n .collection(\"my collection\");\n\nconst foundWantedCards = await collection.find(\n { card_id: { $in: [ \"61ddf66c68ebbba1a4084e65\", \"61ddf5b768ebbba1a4fd2c33\" ] }, forTrade:\"Yes\" }\n);\n\nreturn foundWantedCards\n}\n> result: \n[\n {\n \"_id\": {\n \"$oid\": \"61e995ff2738954edae011ec\"\n },\n \"card_id\": \"61ddf5b768ebbba1a4fd2c33\",\n...\nFoundWantedCardsEJSON.parse(payload.body.text())", "text": "I’m trying to build out a function that will accept a payload of IDs, query the DB and return related documents.At the moment I have these values hardcoded in my function, but the client calling this API is getting an empty object.My function is:When I run this function within the editor I get a resultwhich is correct, but the value of FoundWantedCards is empty. How should I return the result in the reponse?Also, how should I access the payload that is sent in the POST request? I’ve tried via EJSON.parse(payload.body.text()) but again that is empty.Any help much appreciated!", "username": "Stuart_Brown" }, { "code": "payloadpayload.bodycontext{\n \"name\": \"wantedCards\",\n \"arguments\": [\n {\n \"query\": {},\n \"headers\": {\n \"Content-Length\": [\n \"55\"\n ],\n \"Sec-Ch-Ua-Mobile\": [\n \"?0\"\n ],\n \"User-Agent\": [\n \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/99.0.4844.83 Safari/537.36\"\n ],\n \"Origin\": [\n \"http://localhost:3000\"\n ],\n \"Sec-Fetch-Mode\": [\n \"cors\"\n ],\n \"Sec-Fetch-Dest\": [\n \"empty\"\n ],\n \"X-Envoy-External-Address\": [\n \"51.194.154.211\"\n ],\n \"Sec-Ch-Ua\": [\n \"\\\" Not A;Brand\\\";v=\\\"99\\\", \\\"Chromium\\\";v=\\\"99\\\", \\\"Google Chrome\\\";v=\\\"99\\\"\"\n ],\n \"Content-Type\": [\n \"application/json\"\n ],\n \"Referer\": [\n \"http://localhost:3000/\"\n ],\n \"Accept-Encoding\": [\n \"gzip, deflate, br\"\n ],\n \"X-Cluster-Client-Ip\": [\n \"51.194.154.211\"\n ],\n \"X-Request-Id\": [\n \"afc2ddf7-883a-4287-83f7-0b96e45bb098\"\n ],\n \"Sec-Ch-Ua-Platform\": [\n \"\\\"macOS\\\"\"\n ],\n \"Accept\": [\n \"*/*\"\n ],\n \"Sec-Fetch-Site\": [\n \"cross-site\"\n ],\n \"Accept-Language\": [\n \"en-GB,en-US;q=0.9,en;q=0.8\"\n ],\n \"X-Forwarded-For\": [\n \"51.194.154.211\"\n ],\n \"X-Forwarded-Proto\": [\n \"https\"\n ]\n },\n \"body\": \"eyI2MWRkZjY2YzY4ZWJiYmExYTQwODRlNjUiOiI2MWRkZjViNzY4ZWJiYmExYTRmZDJjMzMifQ==\"\n },\n {}\n ]\n}\nbody", "text": "OK, I am successfully getting a response back now, but I still can’t figure out how to access the payload in the POST request. payload is truncated in my logs so I can’t see all the values, but payload.body doesn’t show anything. I’ve tried looking in context but can’t find anything either.In my logs I do see an objectan I assume that the body property here is the encoded payload, but how do I access this in my function?", "username": "Stuart_Brown" }, { "code": "", "text": "@Stuart_Brown Did you solve that issue? I am having the same problem.", "username": "Thomas_Anderl" }, { "code": "exports = async function (payload, response) {\n\n const collection = context.services.get(\"mongodb-atlas\")\n .db(\"cards\")\n .collection(\"my collection\");\n const data = JSON.parse(payload.body.text())\n", "text": "It’s been a while, but I think it was along the lines of:", "username": "Stuart_Brown" } ]
POST request Help
2022-03-28T11:20:36.812Z
POST request Help
5,493
null
[ "app-services-data-access" ]
[ { "code": " {\n \"%or\": [\n { \"uid\": \"%%user.id\" },\n {\n \"%%root.status\": \"new\",\n \"%%args.body.uid\": \"%%user.id\"\n }\n ]\n}\n", "text": "New to Realm, trying to create a rule that states: “if the user ID of the account equals the “uid” field, OR if the document does not yet exist AND the uid argument is equal to the user ID, allow a write”. Not totally sure if the expressions I wrote correctly execute that rule, but I am getting an error:error validating rule: do not know how to expand ‘%%root’for the expressions:root.status is a valid expansion according to the documentation, so I’m not sure what the issue is.", "username": "Jeremy_N_A" }, { "code": "%%root{ \"uid\": \"%%user.id\" } // This part is good!\n%%root%%args%%root%%root%%root.statusstatus%%prevRoot%%argstwilio.send({ to: \"+15558675309\", from: \"+15551234567\", body: \"Hello from Realm!\" });%%argsfromtobody%%prevRoot{ \"%%prevRoot\": { \"$exists\": false } }", "text": "Hey Jeremy! Welcome to the forums I think you’re on the right track for your rule definition but I definitely see some issues. One thing that would be very helpful to know - what are you writing this rule expression for? Is this the partition-level read/write rules for sync? Or is it collection-level roles on a non-synced cluster?If you’re defining rules for a non-synced collection, see below. Based on your error though I don’t think this is the case.If you’re defining sync rules, then %%root and other document-level expansions aren’t available because sync does not support document-level rules. Instead, you’ll want to design a partition strategy for your app and then define read/write rules for your partitions.For a non-synced collection, I assume this is an Apply When expression for a role.The first user id part looks correct to me! i.e. “if the user ID of the account equals the “uid” field”For the second part I think you’re confused about how %%root and %%args work. These are only available in some rule expressions depending on where you’re using them. Specifically -%%root is only available in MongoDB rule expressions (i.e. the apply when for a role) and represents a changed (or inserted) document after the change has been applied. The properties on %%root are the same as in your document, so %%root.status only means something if the document you’re evaluating rules for has a status field. There’s also a %%prevRoot expansion that represents the document before the change is applied.%%args is only available in third party service rules and represents the named arguments that you pass to a service action. For example, if I called twilio.send({ to: \"+15558675309\", from: \"+15551234567\", body: \"Hello from Realm!\" }); then %%args would let me access the values of from, to, and body in my rule.I’m a little unsure what you’re referring to when you say “the uid argument” - what function are you calling with this argument where you expect the rule to run?To represent “the document does not yet exist” you can use %%prevRoot like so:", "username": "nlarew" }, { "code": "", "text": "Hello, nice to meet you Nick.May I ask, what is difference between by example query with %%root._id and just query by the field _id? I see virtually no difference between the two.", "username": "Guido_Lopez" } ]
Realm Data Access Rules Error ( error validating rule: do not know how to expand '%%root')
2021-09-16T09:34:09.992Z
Realm Data Access Rules Error ( error validating rule: do not know how to expand &lsquo;%%root&rsquo;)
4,028
null
[ "compass", "atlas-cluster", "connector-for-bi" ]
[ { "code": "", "text": "Hi,I have been trying to establish successful ODBC connection using mongoDB BI connector. But I am not successful. Here are the steps I have done:systemLog:\nlogAppend: false\npath: ‘c:\\logs\\jg-mongosql.log’\nverbosity: 2security:\nenabled: truemongodb:\nnet:\nuri: ‘cluster0.99ltrf9.mongodb.net:27017’\nauth:\nusername: ‘username’\npassword: ‘password’\nssl:\nenabled: true\nPEMKeyFile: ‘C:\\opt\\certs\\mdb.pem’\nCAFile: ‘C:\\opt\\certs\\mdbca.crt’net:\nbindIp: 127.0.0.1\nport: 3307\nssl:\nmode: ‘allowSSL’\nPEMKeyFile: ‘C:\\opt\\certs\\mdb.pem’\nCAFile: ‘C:\\opt\\certs\\mdbca.crt’schema:path: ‘C:\\Program Files\\MongoDB\\Connector for BI\\2.14\\bin\\jgschema.drdl’processManagement:\nservice:\nname: jg-mongosqld\ndisplayName: jg-mongosqld\ndescription: “BI Connector SQL proxy server”But I keep getting following error in the log to establish the odbc connection:2022-09-17T19:12:58.762+0530 I NETWORK [initandlisten] waiting for connections at 127.0.0.1:3307\n2022-09-17T19:13:03.765+0530 E NETWORK [initandlisten] unable to load MongoDB information: failed to create admin session for loading server cluster information: unable to execute command: server selection error: context deadline exceeded, current topology: { Type: Unknown, Servers: [{ Addr: <protected - mongodb cluster address>, Type: Unknown, Average RTT: 0, Last error: connection() error occured during connection handshake: dial tcp: lookup cluster0.99ltrf9.mongodb.net: no such host }, ] }Looking forward for the support to resolve the issue. Appreciate your support.Best Rgards\nJG", "username": "J_G" }, { "code": "dig$ dig cluster0.99ltrf9.mongodb.net SRV\n\n; <<>> DiG 9.10.6 <<>> cluster0.99ltrf9.mongodb.net SRV\n;; global options: +cmd\n;; Got answer:\n;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 56311\n;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1\n\n;; OPT PSEUDOSECTION:\n; EDNS: version: 0, flags:; udp: 512\n;; QUESTION SECTION:\n;cluster0.99ltrf9.mongodb.net. IN SRV\n\n;; ANSWER SECTION:\ncluster0.99ltrf9.mongodb.net. 60 IN SRV 0 0 27017 ac-ozsmqgm-shard-00-00.99ltrf9.mongodb.net.\ncluster0.99ltrf9.mongodb.net. 60 IN SRV 0 0 27017 ac-ozsmqgm-shard-00-01.99ltrf9.mongodb.net.\ncluster0.99ltrf9.mongodb.net. 60 IN SRV 0 0 27017 ac-ozsmqgm-shard-00-02.99ltrf9.mongodb.net.\n\n;; Query time: 106 msec\n;; SERVER: 192.168.1.1#53(192.168.1.1)\n;; WHEN: Sat Sep 17 10:37:15 MDT 2022\n;; MSG SIZE rcvd: 243\n", "text": "Hi @J_G and welcome to the MongoDB Community forums. The URI you’re using (cluster0.99ltlrf9.mongodb.net) is an SRV record which eases connections to a replica set. There error is stating that this host does not exist, which would make sense as it’s a DSN record.We can see this by using a tool such as dig:Here we can see that SRV record mentions three hosts. I would try putting one of those hosts names into your ODBC connection to see if that works for you.", "username": "Doug_Duncan" } ]
Unable to establish ODBC connection via mongoDB BI connector for Tableau reporting
2022-09-17T13:51:57.856Z
Unable to establish ODBC connection via mongoDB BI connector for Tableau reporting
2,748
null
[ "vscode" ]
[ { "code": "", "text": "(Reference to this forum: Inputting MondoDB Databases into VScode )\nI’m wondering if I can import the database into VSCode without MongoDB Atlas installed since I’m kind of worried that it could slow my hardware down (Using a Windows 10 atm)\nUsing VSCode’s Mongo databases, is it even possible?\n(I’ve also checked YT, not much resources are helping)Thanks for any advice!", "username": "Ali_Codes" }, { "code": "mongoshmongoimportmongorestoremongodump", "text": "VSCode has plugins for access the data in MongoDB, but it is just a frontend similar to MongoDB Compass, Studio3T or even the plain mongosh shell.MongoDB Atlas is a DBaaS and is not installable on your system. This service hosts your MongoDB databases in the cloud so you don’t have to worry about managing them on your own.To import data into MongoDB, you could use mongoimport to import data in JSON, CSV or TSV formatted plain text files, or mongorestore to restore BSON based database files that were created by mongodump. You could also write applications in your favorite language to bulk load data if you wanted to.There is a lecture in the M001: MongoDB Basics in Chapter 2 that goes over the MongoDB based import and export tools. I would highly recommend this course if you haven’t taken it yet. Also read through the documentation linked above.I think that some of this is covered in your linked post, but let us know if you have more questions.", "username": "Doug_Duncan" }, { "code": "", "text": "So would I have to get some kind of operating system like macOS in order to set this data up?", "username": "Ali_Codes" }, { "code": "mongoimportmongorestore", "text": "Nope, the MongoDB tools run on all operating systems. You just need to run the mongoimport or mongorestore command from a terminal window/command prompt/powershell window. If those tools didn’t get placed on your system when you installed MongoDB, you can download them separately.", "username": "Doug_Duncan" }, { "code": "", "text": "Got it, thank you so much!", "username": "Ali_Codes" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Map component import, requirements, and VSCode
2022-09-16T21:14:06.403Z
Map component import, requirements, and VSCode
2,588
https://www.mongodb.com/…4_2_1024x576.png
[]
[ { "code": "", "text": "\nimage1280×720 48.1 KB\nLearn how to build your first iOS mobile application using SwiftUI to build the UI and Realm to manage your data. This talk is aimed at experienced developers who are new to mobile. I’ll start with a brief tour of the anatomy of a mobile app—covering both the backend and frontend components. The bulk of the session will be devoted to a hands-on demonstration of building a simple problem reporting app. (Pre-installing Xcode on your Mac will speed things up.)By the end of the session, you’ll have the skills to build this simple app for yourself, extend it to add new features, and use it as the starting point to build your own mobile apps.", "username": "Shane_McAllister" }, { "code": "", "text": "Hi Andrew, Shane,Many thanks for setting up this introductory talk.I followed along (using the recording on YouTube) and am trying to solve the challenge of getting product names directly from Realm. I’m somewhat stuck though with how to do this.My idea is I should be able to run a query using the reference to realm from the environment object and then do realm.objects(Ticket.self).product.distinct() - however this doesn’t seem to work. I’m also somewhat unclear why.Could you kindly give me a few pointers here?Many thanks,\nMat", "username": "Mat_W" }, { "code": "", "text": "Would someone kindly have an idea, please?", "username": "Mat_W" } ]
World Workshop: From 0 to Mobile Developer in 2 Hours with Realm and SwiftUI
2022-06-02T15:18:11.242Z
World Workshop: From 0 to Mobile Developer in 2 Hours with Realm and SwiftUI
1,693
null
[ "mongodb-shell", "storage" ]
[ { "code": "", "text": "Got this error on MongoDB connection.WiredTiger error (24) [1663392808:381846][2181:0x7f650de7aa80], file:collection-21134–4134575628632678244.wt, WT_SESSION.open_cursor: _posix_open_file, 715: /var/lib/mongodb/collection-21134–4134575628632678244.wt: handle-open: open: Too many open files Raw: [1663392808:381846][2181:0x7f650de7aa80], file:collection-21134–4134575628632678244.wt, WT_SESSION.open_cursor: _posix_open_file, 715: /var/lib/mongodb/collection-21134–4134575628632678244.wt: handle-open: open: Too many open files", "username": "Swati_Sharma" }, { "code": "ulimit", "text": "Hello @Swati_Sharma and welcome to the MongoDB Community forums! It sounds like your ulimit settings are too low. Check out the documentation around this…", "username": "Doug_Duncan" } ]
MongoDB start gives error: Failed to open a WiredTiger cursor
2022-09-17T07:13:52.910Z
MongoDB start gives error: Failed to open a WiredTiger cursor
2,188
https://www.mongodb.com/…8_2_1024x744.png
[ "node-js" ]
[ { "code": "", "text": "I went through many other posts but none of the solutions worked for me. I asked the following question in reply to someone’s post but couldn’t sort out the issue so now I am posting it as a separate topic.\nSo, without going into other details of my project, I have a collection containing multiple documents. In each document, there is an array named “team” and inside this array, I have objects. Each individual object contain information for a particular player like name, points, _id and isCaptain (which tells that whether this particular player is a captain of this team or not in the form of ‘true’ and ‘false’ i.e isCaptain: true means that this player is captain of this team and isCaptain: false means that this player is not a captain of this team). From the frontend, I am sending two values; id and points. Now, what I am doing is that I’m fetching/reaching out to all the players(objects) whose _id matches with the input ‘_id’ (coming from the frontend). After this, I want to check if this player is the captain of this team or not. If he is, then I want to $set the points of this player as double points by doing points 2x. And if that player is not a captain, then I just want to $set points to the input points without making it double.**I was able to do everything but I’m stuck on isCaptain. I want my code to be modelled in such a way that it checks the matching _id and also check that if that player is captain or not. If he is captain than double the points (points 2x) otherwise go to the other update operation. I have attached the code snippet plus the Screenshots of my documents.\n\nScreen Shot 2022-05-21 at 4.23.42 PM1330×967 81.3 KB\nLet me know if I was able to make it clear enough for you to understand. If you need more details, I can provide you with that.", "username": "Najam_Ul_Sehar" }, { "code": "update$set$cond$eq$multiplydb.collection.update({\n _id: 1\n},\n[\n {\n \"$set\": {\n \"points\": {\n \"$cond\": {\n \"if\": {\n \"$eq\": [\n \"$isCaptain\",\n true\n ]\n },\n \"then\": {\n \"$multiply\": [\n 100,\n 2\n ]\n },\n \"else\": 100\n }\n }\n }\n }\n])\n", "text": "Hi @Najam_Ul_Sehar,You can do it with single update query like this:Working example", "username": "NeNaD" }, { "code": "\n MyTeam.updateMany({ matchid: matchid, 'team._id': id },\n {\n \"$cond\": {\n \"if\": {\n \"$eq\": [\n \"team.$.isCaptain\",\n true\n ]\n },\n \"then\": {\n \"$set\": {\n \"team.$.points\": points*2\n }\n // \"$multiply\": [\n // points,\n // 2\n // ]\n },\n \"else\": {\n \"$set\": {\n \"team.$.points\": points\n }\n }\n }\n },(err, numAffected) => {\n if(err) throw err;\n res.send(numAffected);\n }\n)\n", "text": "Hi NeNaD,\nFirst of all, thank you for reaching out and helping out.\nI copied your code and tested it with my collection but it didn’t work.\nI have documents like following:\n\nScreen Shot 2022-09-17 at 11.50.54 AM1276×721 70.3 KB\nAnd my team array look like following:\n\nScreen Shot 2022-09-17 at 11.52.58 AM1257×749 73.6 KB\nSo, my goal is to update the “points” property of individual matching objects (players).\nFirstly, does it really matter that we put “” marks around $set or $cond etc? Secondly, I have an array with multiple objects (objects means each individual players) and hence I am doing “team.$.isCaptain” to access each player’s captain property, that’s why I modified the code in the following way but it didn’t work either.Let me know what I’m doing wrong here.", "username": "Najam_Ul_Sehar" } ]
If-else conditional in $set operator
2022-09-17T03:38:16.388Z
If-else conditional in $set operator
11,708
null
[ "queries" ]
[ { "code": "_id: {{ a: 1, b: 3 }}\n_id: {{ a: 2, b: 2 }}\n_id: {{ a: 1, b: 2 }}\n\ndb.inventory.find( { $in: [ {_id: {a: 1, b: 1}, {a: 1,b: 1} }] } )\n", "text": "Hey I have groups with 2 properties which together create 1 unique ID. We can’t have the same a and b repeated.I have two problemsSomething like thisWhat is the best approach to accomplish this?", "username": "dimitar_vasilev" }, { "code": " { _id : { $in: [ {a: 1, b: 1}, {a: 2,b: 3} ] }\n{ $in: [ {_id: {a: 1, b: 1}, {a: 1,b: 1} }] }", "text": "The order of the properties shouldn’t be important. {a:1, b:2} === {b:2,a:1}The above will be hard to enforce as in most drivers and languages the 2 objects differs. Your best bet is to make sure you do your inserts with an API. You may use triggers or change stream to make sure you are notified of the invalid inserts.Try the followingrather than{ $in: [ {_id: {a: 1, b: 1}, {a: 1,b: 1} }] }", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search multiple _id created by the combination of two fields
2022-09-14T09:51:02.511Z
Search multiple _id created by the combination of two fields
1,699
null
[ "queries", "swift" ]
[ { "code": "&&key == \"rooms\"value == 4 4class Property: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) var _id: ObjectId\n @Persisted var name: String\n @Persisted var specs: List<PropertySpec>\n\n // ...\n}\nclass PropertySpec: EmbeddedObject, ObjectKeyIdentifiable {\n @Persisted var key: String\n @Persisted var value: Double?\n\n // ...\n}\nstruct SearchResultView: View {\n @ObservedResults(Property.self) var properties\n \n var filteredProperties: Slice<Results<Property>> {\n var results = properties\n .where {\n // It ignores && operator\n // Actually it seems to consider any value == 4, not only \"rooms\"\n $0.specs.key == \"rooms\" && $0.specs.value == 4 \n }\n\n return results.prefix(50)\n }\n\n}\nspec.value4.where {\n $0.specs.contains(PropertySpec(key: \"rooms\", value: 4))\n}\n", "text": "It seem realm does not respect && operator when filtering a list of embedded objects.In this specific case it filter correctly key == \"rooms\" but second statement value == 4 seems to be ignored , once results brings rows with values different from 4 .I am suspecting it will consider any spec.value with 4 .Sample Data:I also tried this, but it does not work. The list will be empty.", "username": "Robson_Tenorio" }, { "code": ".wherelet results = realm.objects(Property.self).where { $0.specs.rooms == 4 }", "text": "A question (to which the answer may be obvious).Realm is more type safe with the .where clause, however that implementation uses a dynamic lookup which may be related to your issue (or may not)“because we use dynamic lookup it means you can use any property name which may not be declared in the object”The point of adding the .where was to make querying type safe. By using key and value that kinda goes back the other not-type-safe direction.Is there a reason you don’t just use this to query for properties that only have 4 rooms?let results = realm.objects(Property.self).where { $0.specs.rooms == 4 }", "username": "Jay" }, { "code": "PropertySpecificationsPropertyProperty4 roomsmore than 2 bathrooms.where{\n // Property with 4 rooms\n ($0.specs.key == \"rooms\" && $0.specs.value == 4).count == 1\n}\n.where{\n // Property with more than 2 bathrooms\n ($0.specs.key == \"bathrooms\" && $0.specs.value >= 2).count == 1\n}\n\n", "text": "@Jay I am looking for something dynamic.Because each Property can have different Specifications. In that case there are dozen of specs to be chosen when creating a new PropertySo customers can look for : What Property has 4 rooms and more than 2 bathrooms ?Anyway, for future reference, here is my workaround. I don’t think is intuitive, but it works.", "username": "Robson_Tenorio" }, { "code": "roomsbathroomslet results = realm.objects(Property.self).where { $0.specs.rooms == 4 && $0.specs.bathrooms > 2 }class Property: Object {\n @Persisted var name: String\n @Persisted var propertySpecs: Map<String, Int>\n}\nroomsbathroomslet results = realm.objects(Property.self).where { $0.propertySpecs[\"rooms\"] == 4 && $0.propertySpecs[\"bathrooms\"] > 2 }", "text": "Ah. Interesting - yeah, I can see where that would be more dynamic.Obviously there are some standard specs; rooms, bathrooms etc that could just be model properties rooms and bathrooms in the model which leads to;let results = realm.objects(Property.self).where { $0.specs.rooms == 4 && $0.specs.bathrooms > 2 }I still think the .key property is causing the issue but one other thing to consider (which you may have already considered) is to leverage the Realm Map property as it really seems well suited for this use case and isn’t a workaround.So your Model would look like thisAnd then the Map property could contain any spec needed rooms, bathrooms, etc and could be queried like thislet results = realm.objects(Property.self).where { $0.propertySpecs[\"rooms\"] == 4 && $0.propertySpecs[\"bathrooms\"] > 2 }Just a thought.", "username": "Jay" }, { "code": "", "text": "@Jay Thanks for pointing another alternative ", "username": "Robson_Tenorio" } ]
It does not respect `&&` operator when filtering a list of embedded objects
2022-09-16T14:14:50.811Z
It does not respect `&amp;&amp;` operator when filtering a list of embedded objects
1,589
https://www.mongodb.com/…4_2_1024x833.png
[]
[ { "code": "", "text": "I linked my account through github so the name of this account has been a random one.So i wanted to change that name but whenever i am trying to do that some kind of error is arising.I tried on multiple browsers but the its still not fixed.Can anyone help me out in this?\nThe screenshots are attached below–>\n\nimage1067×868 19.4 KB\n\nConsole screenshot—>\n\nimage1003×736 51.3 KB\n", "username": "Shubham_N_A" }, { "code": "", "text": "Contact [email protected]\nSearch our forum threads for similar issue faced by other users", "username": "Ramachandra_Tummala" } ]
Profile name change
2022-09-16T18:55:05.972Z
Profile name change
1,005
null
[]
[ { "code": "", "text": "mongod command run in cmd but port no did not generate", "username": "Pritam_Kumar_Mani" }, { "code": "", "text": "If you ran mongod without any parameters it will come up on default port 27017", "username": "Ramachandra_Tummala" } ]
Port no did not generate
2022-09-17T13:50:08.625Z
Port no did not generate
953
https://www.mongodb.com/…1_2_1024x466.png
[ "monitoring" ]
[ { "code": "", "text": "we have mongodb 4.2, 1 master and 2 slave replicaset. And application reads from secondaries.when i reach high read qps (15k read qps while using 2 secondaries and 9k when using 1 secondary) with small number of writes (~500 qps) the read latencies shoot up.Here are some other details\ncpu stats_pmm2962×1350 454 KB\n\ni have uploaded rest related metric hereThe reads are not random hence i don’t think it is a memory pressure issue (read iops are also not much supplement this argument).Primarily it seems like mongo application limitation but i am not sure how it can be concluded.can someone give pointers about how it can be debugged next?This is not an intermittant issue for sure, since i was consistently able to reproduce this.", "username": "maneesh" }, { "code": "", "text": "Hi @maneeshWelcome to our forums, reading your post I think these questions might be better asked in our [Ops and Admin - MongoDB Developer Community Forums](https://Ops and Admin category) as they don’t appear to be directly related to the M201 MongoDB Performance course.If I’m mistaken, can you clarify what chapter and lesson in M201 you are having difficulties with.Hope this helps,\nEoin", "username": "Eoin_Brazil" }, { "code": "", "text": "@Eoin_Brazil apologies, I have updated the tag. Thanks for correcting.", "username": "maneesh" }, { "code": "", "text": "Hi Team,Any update here? we are still facing this issue.@kevinadi can you please help here?", "username": "maneesh" }, { "code": "", "text": "what we are seeing is that there is a spike in number of forked process. when does mongodb create more processes?", "username": "maneesh" }, { "code": "", "text": "I have found the issue. It was related to https://jira.mongodb.org/browse/SERVER-54805", "username": "maneesh" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Spike in read latencies on high throughput 15k qps
2021-03-11T13:37:46.460Z
Spike in read latencies on high throughput 15k qps
4,977
https://www.mongodb.com/…223507d9be08.png
[ "crud" ]
[ { "code": "", "text": "I have a collection of Master Inventory Lists that includes arrays of objects of items. Is there a way to copy that all over to an Inventory document in a different collection including the array of objects?\nmongosnippet395×533 28.3 KB\n", "username": "Jordan_Hill" }, { "code": "", "text": "you could run an aggregation with a $out stage", "username": "steevej" } ]
Make copy of document including array of objects
2022-09-15T18:22:30.168Z
Make copy of document including array of objects
1,248
null
[ "compass" ]
[ { "code": "", "text": "If you’re a developer on the cutting edge — and you use an Apple laptop for your personal machine — there’s a good chance you are using Apple’s powerful M1 chip. MongoDB Compass has added enhanced support for M1 chips for an even faster experience exploring and querying your data in MongoDB. Make sure you upgrade to Compass 1.33 to take advantage of this new experience. Learn how to upgrade your Compass version here. ", "username": "Shelby_Carpenter" }, { "code": "", "text": "MongoDB Compass has added enhanced support for M1 chips for an even faster experience exploring and querying your data in MongoDB.I wonder if I can use this as a way to get a new MacBook? Congrats to the team for making these improvements.", "username": "Doug_Duncan" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
Enhanced Experience in MongoDB Compass With Apple M1
2022-09-16T20:16:39.739Z
Enhanced Experience in MongoDB Compass With Apple M1
3,816
https://www.mongodb.com/…b_2_1024x639.png
[]
[ { "code": "", "text": "I have a Realm app successfully setup on github such that whenever I make changes locally and commit/push them, they get deployed automatically to Realm.When I try to make changes through the web UI, a draft is created and I can attempt to deploy and push the changes to github.\nimage1194×746 30.9 KB\nWhen I attempt to do that I get an error that says “Failed to Deploy: error processing request”.The deploy api endpoint (…/deploy/github/push) that gets called when I click on deploy returns a status code 500 with the error message being “error processing request” and “InternalServerError”.Any help or ideas would be appreciated!", "username": "Charlie_Mattox" }, { "code": "", "text": "Hi Charlie,Thanks for posting and welcome to the community!How often does this happen and is it consistently occurring?In the screenshot it appears you’re changing a value, does the error happen on any other types of changes?If you can, it would be best to raise a support case for this as we will need more context about your app to investigate.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Thank you for the reply!It happens pretty consistently, or at least when it does, the only way I’ve found to fix it is to create a new Realm project.It also happens not only for changing values, but everything else.I don’t think I can raise a support case because I’m on the free version, so when I go to https://support.mongodb.com/welcome and login I get a status 500 error.\nimage1313×579 17.3 KB\nI’d be happy to write down all of my steps to reproduce and describe my project setup if that would help.", "username": "Charlie_Mattox" }, { "code": "", "text": "Hi Charlie,Yes please outline the steps you’re taking and provide your apps hexadecimal Id the error is happening on.", "username": "Mansoor_Omar" }, { "code": "", "text": "Same here but the error is Unauthorized access despite the fact I installed and granted authorization to the github app. It gets weird to know that pushes from the repository to the realm app using the GH app worked well. The problem is only when going backwards, on changes from the Realm App UI itself.I made a ticket for this.", "username": "Demian_Caldelas" } ]
Failure to deploy changes via UI when using GitHub automatic deployment
2022-05-30T22:58:14.774Z
Failure to deploy changes via UI when using GitHub automatic deployment
1,750
null
[]
[ { "code": "ssh_user@server# mongo\n> show dbs\n> exit\nMongoDB shell version: 2.0.7", "text": "I was wondering how can I create a cron task that runs a command on a mongoshell connection.Basically I would like to be able to connect to the mongoshell once a month an run a specific command.\nAs an example, It would look something like this:The version of mongodb that I’m running is MongoDB shell version: 2.0.7Thanks", "username": "Fabio_Perez" }, { "code": "", "text": "Yes this is possible!Here is how you would do it for the above example:\nmongosh --eval “printjson(db.adminCommand(‘listDatabases’))”This one switches to to another DB (user db command) and gets the stats:\nmongosh --eval “printjson(db=db.getSiblingDB(‘inventory’)); printjson(db.stats())”", "username": "tapiocaPENGUIN" } ]
Crontask to run a mongoshell command
2022-09-16T18:11:12.335Z
Crontask to run a mongoshell command
1,264
null
[ "queries", "data-modeling" ]
[ { "code": "{\n\t_id: ObjectId\n\tvariety: string\n\tcolors: [\n\t\t{\n\t\t\tcolor: string,\n owner: [\n {\n friendName: string,\n sizes: string[]\n }\n ]\n }\n\t]\n}\n[\n{\n\t_id: ...\n\tvariety: 'mcintosh'\n\tcolors: [\n\t\t{\n\t\t\tcolor: 'light red',\n owner: [\n {\n friendName: 'jennifer',\n sizes: [1, 2 , 6]\n }\n ]\n },\n \t\t{\n\t\t\tcolor: 'dark red',\n owner: [\n {\n friendName: 'steve',\n sizes: [6, 7]\n }\n ]\n }\n\t]\n},\n{\n\t_id: ...\n\tvariety: 'granny smith'\n\tcolors: [\n\t\t{\n\t\t\tcolor: 'light green',\n owner: [\n {\n friendName: 'jennifer',\n sizes: [2 , 4]\n }\n ]\n },\n \t\t{\n\t\t\tcolor: 'speckled dark green',\n owner: [\n {\n friendName: 'jonathan',\n sizes: [3, 7]\n }\n ]\n }\n\t]\n}\n]\n", "text": "Hi there, hope you’re doing well. I come from relational databases, but I’d like to try out MongoDB.\nHowever, I’m stuck on how to query the following scenario:I have 10 friends and they all have apple trees. Apples belong to different varieties, and have different colors and sizes.\nI want to store all data on the apples, and I want to be able to query for\n- Apple size\n- Apple color\n- Apple variety\n- Any combination of these properties\nIf a friend has a certain color of a certain apple, (s)he can have no, one or multiple sizes of it (hence the “sizes” array).Currently, my data is modelled as follows:Here is a scenario to illustrate my problem:Now I want to get all Apples of size 3 (in this case only the speckled dark green one of jonathan).\nWhich query should I form so that it returns the granny smith object?\nI try to get something like “select all varieties where there is at least one color where the owner has a size 3”.There is probably a better way to model this in the database, would you suggest a different way to structure this?", "username": "Ujlm" }, { "code": "", "text": "I hope I did understand your problem correctly as there is a quite easy way of doing this.Try this query:db = db.getSiblingDB(\"<your DB>\");\ndb.getCollection(\">your Collection>\").find(\n{\n“colors.owner.sizes” : 3.0\n}\n);This should return your document.", "username": "Simon_Bieri" }, { "code": "", "text": "Thank you very much! This solves the problem indeed.\nI knew the dot notation existed, but I thought it only worked for exact equalities.", "username": "Ujlm" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to get all elements for which at least one deeply nested object has an array which contains a certain value?
2022-09-14T16:13:35.432Z
How to get all elements for which at least one deeply nested object has an array which contains a certain value?
2,085
null
[ "dot-net" ]
[ { "code": "", "text": "I’m trying to stream a large byte array into a new collection in MongoDb, but I don’t see any methods that accept a stream. Is there a way to do this?Posted the details of my question on stack overflow here.", "username": "Ryan_Langton1" }, { "code": "while (stream.Position < stream.Length)\n{\n var rawBson = BsonSerializer.Deserialize<RawBsonDocument>(stream);\n await rawRef.InsertOneAsync(rawBson);\n}\n", "text": "solved on the SO question", "username": "Ryan_Langton1" } ]
How do I stream a large byte[] into MongoDb?
2022-09-16T14:14:48.277Z
How do I stream a large byte[] into MongoDb?
1,496
null
[]
[ { "code": "", "text": "Does anyone know if it’s possible to detect the featureCompatibilityVersion of a database simply by looking at a file within the data directory? I would like to be able to (programmatically) inspect a mongo data folder and determine the correct mongod binary to use.I do not have any mongod with which to open the database - I’d be pulling down a version appropriate for the existing database.", "username": "Nick_Judson" }, { "code": "wtPyWTpython3 PyWT.py --dbpath /path/to/data/files --list\npython3 PyWT.py --dbpath /path/to/data/files --table collection-value-from-above --pretty\n", "text": "Hi @Nick_Judson, welcome back to the community! In the past I have used the wt tool to dump out the data from the binary files stored on your system. I’ve followed along with this Percona blog post series. I don’t remember building from source so I probably pulled it down from GitHub. I would recommend making a copy of the files that you’re working with “just in case”.I have however just found PyWT which is written by @kevinadi (a MongoDB employee and regular here on the community). This makes things much easier to work with as you don’t have to remember long Linux pipelines to get the data. Note however that this tool has not been worked on in 5 years so there is no guarantee of usability or future compatibility should things change in the future. I will let Kevin address if this tool is safe to use or not, but it does seem to work on data files created by MongoDB 6.0.Here are the commands that I ran against a newly started database (no user data added), and I was able to get the data that you are interested in.Just replace with your data path and collection name.Here’s a screenshot showing the results:\nimage855×623 92.2 KB\n", "username": "Doug_Duncan" }, { "code": "", "text": "Thanks for the reply Doug. I’ll spend some time trying to figure out how it all goes together.", "username": "Nick_Judson" } ]
Detect FeatureCompatibilityVersion by inspecting database file(s)?
2022-09-15T21:25:54.236Z
Detect FeatureCompatibilityVersion by inspecting database file(s)?
1,433
null
[ "queries" ]
[ { "code": "{\n\t\"_id\" : ObjectId(\"5ff5f7625aa45b9dd4846794\"),\n\t\"id\" : \"4_35763098\",\n\t\"sync_details\" : {\n\t\t\"2058456\" : {\n\t\t\t\"tenant\" : {\n\t\t\t\t\"id\" : \"2058456\",\n\t\t\t\t\"name\" : \"MWH\"\n\t\t\t},\n\t\t\t\"last_updated\" : 1626440150,\n\t\t\t\"test_company\" : \"19322\",\n\t\t\t\"agreement\" : \"3\",\n\t\t\t\"sync_status\" : true\n\t\t},\n\t\t\"2058457\" : {\n\t\t\t\"tenant\" : {\n\t\t\t\t\"id\" : \"2058457\",\n\t\t\t\t\"name\" : \"SFC\"\n\t\t\t},\n\t\t\t\"last_updated\" : 1626440158,\n\t\t\t\"test_company\" : \"19319\",\n\t\t\t\"agreement\" : \"1\",\n\t\t\t\"sync_status\" : true\n\t\t},\n\t\t\"2058459\" : {\n\t\t\t\"tenant\" : {\n\t\t\t\t\"id\" : \"2058459\",\n\t\t\t\t\"name\" : \"MCS -EPS\"\n\t\t\t},\n\t\t\t\"last_updated\" : 1626440165,\n\t\t\t\"test_company\" : \"19305\",\n\t\t\t\"agreement\" : \"8\",\n\t\t\t\"sync_status\" : true\n\t\t}\n\t}\n}\n", "text": "I have a nested documents something as below:Here I need to rename “test_company” key to “comopany” in all the nested dictionary. How to do the smae?", "username": "Shantha_Dodmane" }, { "code": "// Requires official MongoShell 3.6+\ndb = db.getSiblingDB(\"<yourDB>\");\ndb.getCollection(\"<yourCollection>\").aggregate(\n [\n {\n \"$addFields\" : {\n \"sync_details_array\" : {\n \"$objectToArray\" : \"$sync_details\"\n }\n }\n }, \n {\n \"$unwind\" : {\n \"path\" : \"$sync_details_array\",\n \"preserveNullAndEmptyArrays\" : true\n }\n }, \n {\n \"$addFields\" : {\n \"sync_details_array.v.comopany\" : \"$sync_details_array.v.test_company\"\n }\n }, \n {\n \"$unset\" : [\n \"sync_details_array.v.test_company\"\n ]\n }, \n {\n \"$group\" : {\n \"_id\" : \"$_id\",\n \"id\" : {\n \"$first\" : \"$_id\"\n },\n \"sync_details_array\" : {\n \"$push\" : \"$sync_details_array\"\n }\n }\n }, \n {\n \"$addFields\" : {\n \"sync_details\" : {\n \"$arrayToObject\" : \"$sync_details_array\"\n }\n }\n }, \n {\n \"$unset\" : [\n \"sync_details_array\"\n ]\n }\n ]\n);\n", "text": "In my oppinion such an operation requires a recreation of the collection.I would run the following aggregation pipeline and insert the results in another collection. You can then replace your old collection with the new one if everything worked as you wanted.", "username": "Simon_Bieri" }, { "code": "db.getCollection(\"<yourCollection>\").aggregate(\n\n // Pipeline\n [\n // Stage 1\n {\n $addFields: {\n \"sync_details_array\": {\"$objectToArray\": \"$sync_details\"}\n }\n },\n\n // Stage 2\n {\n $unwind: {\n path: \"$sync_details_array\",\n preserveNullAndEmptyArrays: true\n }\n },\n\n // Stage 3\n {\n $addFields: {\n \"sync_details_array.v.comopany\": \"$sync_details_array.v.test_company\"\n }\n },\n\n // Stage 4\n {\n $unset: [\"sync_details_array.v.test_company\"]\n },\n\n // Stage 5\n {\n $group: {\n _id: \"$_id\",\n id: { \"$first\" : \"$_id\" },\n sync_details_array: {\"$push\": \"$sync_details_array\"}\n }\n },\n\n // Stage 6\n {\n $addFields: {\n \"sync_details\": {\"$arrayToObject\": \"$sync_details_array\"}\n }\n },\n\n // Stage 7\n {\n $unset: [\"sync_details_array\"]\n }\n ],\n\n // Options\n {\n\n }\n\n // Created with Studio 3T, the IDE for MongoDB - https://studio3t.com/\n\n);\n", "text": "Here is the .js code to import in your system.", "username": "Simon_Bieri" } ]
Nested documnt querring
2022-09-13T08:14:03.837Z
Nested documnt querring
1,149
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const {_id, name, email, role, account, subUsers } = await User.findById(req.user.id)\n .populate({\n path : 'account', // need specific values from this accountModel (_id & name)\n populate: [\n { path: 'providers' }, // need specific values from this model (_id & name) \n ]\n })\n .populate('subUsers', {_id: 1, name: 1});\n", "text": "", "username": "Faslu_Rahman_K" }, { "code": "selectconst {_id, name, email, role, account, subUsers } = await User.findById(req.user.id)\n .populate([ \n { path: 'account', select: '_id name', populate: [ { path: 'providers', select: '_id name' } ] },\n { path: 'subUsers', select: '_id name' }\n ])\n", "text": "You can pass additional config select and specify what fields you want to be returned:", "username": "NeNaD" }, { "code": "", "text": "Thank you so much!!!", "username": "Faslu_Rahman_K" } ]
How to query specific values from a two level mongoose populate?
2022-09-14T23:15:57.390Z
How to query specific values from a two level mongoose populate?
5,630
null
[ "queries", "crud" ]
[ { "code": "[[ 'a', 'b' ], [ 'c', 'd' ]][[ 'e', 'b' ], [ 'c', 'd' ]]", "text": "hello I have an issue with using updateMany and $set operator to update an object in a double array for example I want to do that :\n[[ 'a', 'b' ], [ 'c', 'd' ]] → [[ 'e', 'b' ], [ 'c', 'd' ]]\ntransform all the a into e like that with a positional operator.\nI haven’t find anyone having that issue as they use key but I don’t have any array named until I reach the object I want to update. What is the solution?Thanks in advance for the help", "username": "VictorB" }, { "code": "$map$map$conddb.collection.update({},\n[\n {\n \"$set\": {\n \"data\": {\n \"$map\": {\n \"input\": \"$data\",\n \"as\": \"item\",\n \"in\": {\n \"$map\": {\n \"input\": \"$$item\",\n \"as\": \"childItem\",\n \"in\": {\n \"$cond\": {\n \"if\": {\n \"$eq\": [\n \"$$childItem\",\n \"a\"\n ]\n },\n \"then\": \"e\",\n \"else\": \"$$childItem\"\n }\n }\n }\n }\n }\n }\n }\n }\n])\n", "text": "You can do it like this:Working example", "username": "NeNaD" } ]
How to target object in a double array with positional operator
2022-09-16T09:32:01.279Z
How to target object in a double array with positional operator
1,081
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "mongodump --host=“rs0/localhost:27017,localhost:27018,localhost:27019” --readPreference=secondary -d local -c oplog.rs --query=\"{“ts”:{\"$gte\":\"$Timestamp(0000000000,50)\"}}\" -vvv -o /home/anupama/backupec2I am getting outputwriting local.oplog.rs to\n2020-03-29T08:37:08.134+0530 not counting query on local.oplog.rs\n2020-03-29T08:37:08.328+0530 done dumping local.oplog.rs (0 documents)\n2020-03-29T08:37:08.328+0530 ending dump routine with id=0, no more work to do\n2020-03-29T08:37:08.328+0530 dump phase III: the oplog\n2020-03-29T08:37:08.328+0530 finishing dumpBut when try to dump without using query it dumps the file", "username": "raushan_sharma" }, { "code": "", "text": "I think you have to escape $ sign\nor use single quote for json query", "username": "Ramachandra_Tummala" }, { "code": "\"{ \\\"ts\\\":{ \\\"$gte\\\": { \\\"$timestamp\\\": { \\\"t\\\": 1565545664, \\\"i\\\": 1 } } } }\"\\\"", "text": "mongodump --host=“rs0/localhost:27017,localhost:27018,localhost:27019” --readPreference=secondary -d local -c oplog.rs --query=“{“ts”:{”$gte\":“$Timestamp(0000000000,50)”}}\" -vvv -o /home/anupama/backupec2The query is to be formatted according to the mongodump’s Extended JSON v2. This will work with Windows system (e.g.,):\"{ \\\"ts\\\":{ \\\"$gte\\\": { \\\"$timestamp\\\": { \\\"t\\\": 1565545664, \\\"i\\\": 1 } } } }\"The backslash (\\) character is to escape the double-quote (\") within the outer quotes.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks for the update but when i tried it on startup_log give belowdone dumping local.startup_log (0 documents)\n2020-03-30T10:00:32.595+0530 finishing dumpTried lt,gt etc Gives same result\nIn your case did it dump rows?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Please post couple of sample input documents.", "username": "Prasad_Saya" }, { "code": "> db.getCollection('startup_log').find().limit(2).pretty()\n{\n \"_id\" : \"DESKTOP-RIP97MG-1547099484533\",\n \"hostname\" : \"DESKTOP-RIP97MG\",\n \"startTime\" : ISODate(\"2019-01-10T05:51:24Z\"),\n \"startTimeLocal\" : \"Thu Jan 10 11:21:24.533\",\n \"cmdLine\" : {\n \"config\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.0\\\\bin\\\\mongod.cfg\",\n \"net\" : {\n \"bindIp\" : \"127.0.0.1\",\n \"port\" : 27017\n },\n \"service\" : true,\n \"storage\" : {\n \"dbPath\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.0\\\\data\",\n \"journal\" : {\n \"enabled\" : true\n }\n },\n \"systemLog\" : {\n \"destination\" : \"file\",\n \"logAppend\" : true,\n \"path\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.0\\\\log\\\\mongod.log\"\n }\n },\n \"pid\" : NumberLong(13372),\n \"buildinfo\" : {\n \"version\" : \"4.0.5\",\n \"gitVersion\" : \"3739429dd92b92d1b0ab120911a23d50bf03c412\",\n \"targetMinOS\" : \"Windows 7/Windows Server 2008 R2\",\n \"modules\" : [ ],\n \"allocator\" : \"tcmalloc\",\n \"javascriptEngine\" : \"mozjs\",\n \"sysInfo\" : \"deprecated\",\n \"versionArray\" : [\n 4,\n 0,\n 5,\n 0\n ],\n \"openssl\" : {\n \"running\" : \"Windows SChannel\"\n },\n \"buildEnvironment\" : {\n \"distmod\" : \"2008plus-ssl\",\n \"distarch\" : \"x86_64\",\n \"cc\" : \"cl: Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24223 for x64\",\n \"ccflags\" : \"/nologo /EHsc /W3 /wd4355 /wd4800 /wd4267 /wd4244 /wd4290 /wd4068 /wd4351 /wd4373 /we4013 /we4099 /we4930 /WX /errorReport:none /MD /O2 /Oy- /bigobj /utf-8 /Zc:rvalueCast /Zc:strictStrings /volatile:iso /Gw /Gy /Zc:inline\",\n \"cxx\" : \"cl: Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24223 for x64\",\n \"cxxflags\" : \"/TP\",\n \"linkflags\" : \"/nologo /DEBUG /INCREMENTAL:NO /LARGEADDRESSAWARE /OPT:REF\",\n \"target_arch\" : \"x86_64\",\n \"target_os\" : \"windows\"\n },\n \"bits\" : 64,\n \"debug\" : false,\n \"maxBsonObjectSize\" : 16777216,\n \"storageEngines\" : [\n \"devnull\",\n \"ephemeralForTest\",\n \"mmapv1\",\n \"wiredTiger\"\n ]\n }\n}\n{\n \"_id\" : \"DESKTOP-RIP97MG-1547362502494\",\n \"hostname\" : \"DESKTOP-RIP97MG\",\n \"startTime\" : ISODate(\"2019-01-13T06:55:02Z\"),\n \"startTimeLocal\" : \"Sun Jan 13 12:25:02.494\",\n \"cmdLine\" : {\n \"config\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.0\\\\bin\\\\mongod.cfg\",\n \"net\" : {\n \"bindIp\" : \"127.0.0.1\",\n \"port\" : 27017\n },\n \"service\" : true,\n \"storage\" : {\n \"dbPath\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.0\\\\data\",\n \"journal\" : {\n \"enabled\" : true\n }\n },\n \"systemLog\" : {\n \"destination\" : \"file\",\n \"logAppend\" : true,\n \"path\" : \"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\4.0\\\\log\\\\mongod.log\"\n }\n },\n \"pid\" : NumberLong(3796),\n \"buildinfo\" : {\n \"version\" : \"4.0.5\",\n \"gitVersion\" : \"3739429dd92b92d1b0ab120911a23d50bf03c412\",\n \"targetMinOS\" : \"Windows 7/Windows Server 2008 R2\",\n \"modules\" : [ ],\n \"allocator\" : \"tcmalloc\",\n \"javascriptEngine\" : \"mozjs\",\n \"sysInfo\" : \"deprecated\",\n \"versionArray\" : [\n 4,\n 0,\n 5,\n 0\n ],\n \"openssl\" : {\n \"running\" : \"Windows SChannel\"\n },\n \"buildEnvironment\" : {\n \"distmod\" : \"2008plus-ssl\",\n \"distarch\" : \"x86_64\",\n \"cc\" : \"cl: Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24223 for x64\",\n \"ccflags\" : \"/nologo /EHsc /W3 /wd4355 /wd4800 /wd4267 /wd4244 /wd4290 /wd4068 /wd4351 /wd4373 /we4013 /we4099 /we4930 /WX /errorReport:none /MD /O2 /Oy- /bigobj /utf-8 /Zc:rvalueCast /Zc:strictStrings /volatile:iso /Gw /Gy /Zc:inline\",\n \"cxx\" : \"cl: Microsoft (R) C/C++ Optimizing Compiler Version 19.00.24223 for x64\",\n \"cxxflags\" : \"/TP\",\n \"linkflags\" : \"/nologo /DEBUG /INCREMENTAL:NO /LARGEADDRESSAWARE /OPT:REF\",\n \"target_arch\" : \"x86_64\",\n \"target_os\" : \"windows\"\n },\n \"bits\" : 64,\n \"debug\" : false,\n \"maxBsonObjectSize\" : 16777216,\n \"storageEngines\" : [\n \"devnull\",\n \"ephemeralForTest\",\n \"mmapv1\",\n \"wiredTiger\"\n ]\n }\n}\n>\n", "text": "Will this help?", "username": "Ramachandra_Tummala" }, { "code": "ts", "text": "–query=\"{“ts”:{\"$gte\":\"$Timestamp(0000000000,50)\"}}So, where is the field ts in the documents? Are you querying on some other field?Please check the documents and the field you are querying upon - the field name must be same.", "username": "Prasad_Saya" }, { "code": "", "text": "I tried with startTime but still same result", "username": "Ramachandra_Tummala" }, { "code": "“startTime” : ISODate(“2019-01-13T06:55:02Z”)timestampDate--query \"{ \\\"startTime\\\":{ \\\"$gte\\\": { \\\"$date\\\": \\\"2020-02-14T04:07:34Z\\\" } } }\"", "text": "“startTime” : ISODate(“2019-01-13T06:55:02Z”) is not is timestamp data type; it is Date data type.For your case, use this format with the correct date string:--query \"{ \\\"startTime\\\":{ \\\"$gte\\\": { \\\"$date\\\": \\\"2020-02-14T04:07:34Z\\\" } } }\"", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you very muchIt worked with date format2020-03-30T11:55:05.934+0530 done dumping local.startup_log (4 documents)\n2020-03-30T11:55:05.934+0530 dump phase III: the oplog\n2020-03-30T11:55:05.936+0530 finishing dump", "username": "Ramachandra_Tummala" }, { "code": "", "text": "worked thanks for replying", "username": "raushan_sharma" }, { "code": "", "text": "Now this has a solved answer, when would you want to use a timestamp over a normal date/time ?Thanks for any enlightenment ", "username": "NeilM" }, { "code": "Date", "text": "Hello @NeilM, the timestamp data type is mostly for MongoDB internal usage. The Date data type is what is to be used for application data. Also see: BSON Types - Timestamp.", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_SayaThank you for the reply. I had read about it being for internal use/oplog etc, I was was just wondering if there was a situation where the timestamp field would be useful.I wasn’t aware of it as a field type before, that’s all.Thanks.", "username": "NeilM" }, { "code": "", "text": "@NeilM, I haven’t come across such timestamp use case, but here is a discussion about to use or not to use:", "username": "Prasad_Saya" }, { "code": "", "text": "@Prasad_SayaThanks, the link helps highlighting the limited situation where it would be useful.Cheers.", "username": "NeilM" }, { "code": "", "text": "Hi Ramachandra,I have also tried the given Syntax\n–query “{“ts”:{ “$gte”: { “$date”: “2022-04-21T04:07:34Z”}}}” But i am getting 0 results2022-04-22T03:48:21.818-0500 writing local.oplog.rs to /ank/data/oplog_rs_1/local/oplog.rs.bson\n2022-04-22T03:48:24.764-0500 local.oplog.rs 0\n2022-04-22T03:48:25.678-0500 local.oplog.rs 0\n2022-04-22T03:48:25.678-0500 done dumping local.oplog.rs (0 documents)", "username": "Ankur_Varshney1" }, { "code": "", "text": "You have to use $timestamp\nMy query was for startup_log not oplogPlease check Prasad_Saya reply above\nWhat is your os?\nIf it is windows try the query he has given above", "username": "Ramachandra_Tummala" }, { "code": "date -v-1H +\"%Y-%m-%dT%H:%M:%S%z\"", "text": "A bit late but managed to query using shell date like this–query “{“TABLENAME”:{”$gt\":{\"$date\":\"date -v-1H +\"%Y-%m-%dT%H:%M:%S%z\"\"}}}\"where “1H” stands for an hour, “1d” for days", "username": "Mark_Rozovsky" }, { "code": "", "text": "@Prasad_Saya @Ramachandra_TummalaI have tried this but 0 records. did i miss something ?mongoexport.exe --port=27018 --username=user --password=%PS% --db=AlteryxGallery --collection=auditEvents --query=\"{ “startTime”:{ “$gte”: { “$date”: “2022-09-01T00:00:00.000Z” } } }\"\n2022-09-15T17:09:58.121+0100 connected to: mongodb://localhost:27018/\n2022-09-15T17:09:58.127+0100 exported 0 records", "username": "k.latha_4003" } ]
Mongodump --query not able to filter using timestamp
2020-03-29T04:43:27.645Z
Mongodump &ndash;query not able to filter using timestamp
14,894
null
[ "mongodb-shell" ]
[ { "code": "", "text": "I am new to Mongo and MongoSH… I am simply trying to figure out why I am unable to paste in mongo shell. I cannot right click there is no ‘mark’ command or option, it simply will not paste. I have no problem copying FROM the shell. Google no help, forum search no help. I have a possible workaround, but its not clean especially when trying to learn it from scratch. any help out there?", "username": "Kieth_Biasillo" }, { "code": "", "text": "It depends on type of terminal you are using\nWhat terminal are you using?Windows cmd or putty or Unix?\nShift+copy,shift+insert or mouse button combination\nThere are some threads in our forum.Most of them may be related to IDE", "username": "Ramachandra_Tummala" }, { "code": "", "text": "To make it easier to paste you can also consider setting up an external editor: https://www.mongodb.com/docs/mongodb-shell/reference/editor-mode/", "username": "Massimiliano_Marcon" } ]
Paste in mongosh not working
2022-09-16T04:18:29.784Z
Paste in mongosh not working
3,543
null
[ "node-js" ]
[ { "code": "", "text": "Hello,I would like to ask if it is better to use directly the mongo native pool instead of using some external package like generic-pool. Is there any drawbacks or benefits?Thanks", "username": "Anton_Tonchev" }, { "code": "", "text": "Hi @Anton_TonchevMost official drivers implement a connection pool that is codified in this document: specifications/connection-monitoring-and-pooling.rst at master · mongodb/specifications · GitHub and explain here: Connection Pool OverviewSince this feature is already built-in, I don’t think you’ll have more value by connecting an external pooling library (I’m guessing you’ll end up with a pool within a pool). Especially when the official driver’s pooling behaviour was designed specifically with MongoDB in mind.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank you for the fast reply!", "username": "Anton_Tonchev" } ]
Is it better to use the mongodb driver pool instead of generic-pool for nodejs
2022-09-15T19:48:39.037Z
Is it better to use the mongodb driver pool instead of generic-pool for nodejs
1,666
null
[ "replication", "containers", "installation", "devops" ]
[ { "code": "image:\n debug: false\nauth:\n existingSecret: mongodb-secret # Existing secret with MongoDB credentials (keys: mongodb-password, mongodb-root-password, mongodb-replica-set-key)\n rootUser: user\narchitecture: replicaset\nreplicaCount: 3\npersistence:\n enabled: true\nvolumePermissions:\n enabled: true\n", "text": "Hi all,We recently tried installing MongoDB in our GKE cluster using the Bitnami Helm chart. We deployed a replicaset with 4 replicas using the configuration below:When switching from our current Atlas production cluster (M30) to the new self-hosted one, we experienced that the new cluster was very slow compared to the old one (a factor 20). What can we do to imporve this?\nThings we talked about:We have about 22k users minimum 40 active at the same time and maximum 500 - we want this to scale in the future. What can we do? Any improvements to a production enviroment using the Bitnami Helm chart?", "username": "sinbad_io" }, { "code": "", "text": "Hi @sinbad_io,I expect you’ve found more information by now, but for tuning a self-hosted deployment the following docs could be useful to review for general information:If you are considering upgrading resources like storage or CPU, it would be helpful to start by capturing some monitoring baseline metrics to understand which resources might be limiting your performance. I would also review slow queries to see if there are any easy wins for common queries that could be supported by more optimal indexes.Regards,\nStennie", "username": "Stennie_X" } ]
Installing MongoDB in production using Helm
2022-05-11T08:45:08.659Z
Installing MongoDB in production using Helm
4,689
null
[ "queries", "node-js", "dot-net", "mongoose-odm", "serverless" ]
[ { "code": "", "text": "I am using mongodb atlas serverless product. I found that when I was developing locally, if I did not request the database to fetch data for a period of time (such as 10 minutes), the next request to query the database will take a very long time, up to It takes 60~70s to return data.\nIt feels like the connection to the database was unilaterally closed by you When no data is queried for a period of time. because that The same code, I tried your free version (shared), it is normal, there is no request for more than 10 minutes, the query data can be returned quickly.\nThis problem can be reproduced on nestjs/mongoose and mongodb.", "username": "zeling_guo" }, { "code": "", "text": "This problem can be reproduced on nestjs/mongoose and mongo shell.", "username": "zeling_guo" }, { "code": "mongosh", "text": "Hi @zeling_guo - Welcome the community.This problem can be reproduced on nestjs/mongoose and mongo shell.Would you be able to detail the exact steps on how to reproduce this issue via mongosh? I will attempt these against on my own serverless environment.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "first, i exec ‘mongosh “mongodb+srv://bt-dev.2tt6e.mongodb.net/bt” --apiVersion 1 --username bt’ ,Then I try to query the data, which is normal (eg: db.users.find({})).\nWait for 10 minutes and execute the query statement again. There will be a problem here. It often takes 1 minute to return the result, and sometimes it is directly stuck and nothing is returned.\nimage1750×1276 130 KB\n", "username": "zeling_guo" }, { "code": "mongosh", "text": "Hi @zeling_guo - Sorry for the delay.I wasn’t able to reproduce this issue. I tried a few things:All time periods noted in 2. did not produce a delayed response. However, I did on one occasion get a “connection closed” error but was not able to reproduce this again in future.One thing I did note that was different was the MongoDB version in use (my testing was done using 6.0.1). I was not able to test using 6.0.0 noted in your screenshot.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
After mongodb serverless does not query data for a period of time, it takes 1 minute for the next query to return data
2022-08-17T15:17:59.413Z
After mongodb serverless does not query data for a period of time, it takes 1 minute for the next query to return data
2,923
null
[ "replication" ]
[ { "code": "", "text": "Hi all,\nI have a question on Arbiter Network configuration, Can I have the example of Arbiter Node IP confiuration that used differrent IP Subnet when I deploy in on the 3rd DC or onCloud. I just want to know among Arbiter and replica set can be deployed in deferrent IP subnet? Please kindly advise.Thank you,Deft.", "username": "Chumnan_Pearpiw" }, { "code": "w:1", "text": "Hi @Chumnan_Pearpiw welcome to the community!There’s no special requirements for an arbiter; they’re basically a node that helps with election voting, but do not store data like a normal secondary. Therefore all the requirements & recommendations put forth in the replica set connectivity page applies equally to secondaries as well as arbiters. Basically it boils down to: as long as all parts of a replica set can contact each other, it’s a valid configuration.Having said that, if this is your first MongoDB deployment, I would bring your attention to possible performance issues with arbiters. MongoDB 5.0 changes the default write concern setting to “majority” from the old default of w:1 in the past. While considerations are still being made with the presence of arbiters, in many cases it’s best to use a normal secondary node since it provides you with much better consistency guarantees and availability.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank you Kevin, I’m new for MongoDB. What you said means now the arbiter is not reommended, right?", "username": "Chumnan_Pearpiw" }, { "code": "", "text": "Yes. In most cases, arbiters are not recommended. It’s there if you absolutely need them, know what their drawbacks are, and can plan around any issues that may crop up with their use. A fully functioning secondary is much preferred, since it gives you much more flexibility, integrity, and high availability.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Good to know and thank you. I will explore more on this.", "username": "Chumnan_Pearpiw" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Arbiter Network configuration
2022-09-14T09:20:28.935Z
Arbiter Network configuration
1,723
null
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "db.users.insert([\n { _id: \"1\", name: \"koko\", age: 15 },\n { _id: \"2\", name: \"momo\", age: 32 },\n { _id: \"3\", name: \"charles\", age: 73 }\n]);\n[\n { _id: \"1\", name: \"koko sho\", age: 17 }, // updates name **and** age\n { _id: \"2\", name: \"momo\", age: 39 }, // updates only age\n { _id: \"3\", name: \"charles\", age: 73 }, // same\n { _id: \"4\", name: \"dian\", age: 43 }, // new\n { _id: \"5\", name: \"joe\", age: 33 } // new\n]\n[\n { _id: \"1\", name: \"koko sho\", age: 17 } // age and name were updated because the upsert had { _id: \"1\", age: 17 },\n { _id: \"2\", name: \"momo\", age: 39 }, // only age was updated\n { _id: \"3\", name: \"charles\", age: 73 }, // same\n { _id: \"4\", name: \"dian\", age: 43 }, // new\n { _id: \"5\", name: \"joe\", age: 33 } // new\n]\n\n", "text": "Hi, what is the best way to do a batch upsert using Mongoose or Nodejs driver?\nAssuming this data:We want to do a batch upsert that will result creating 2 more documents and updating some of the existing ones:\nInput data for batch upsert:which will result:How to achieve it with a 1 upsert operation?", "username": "Benny_Kachanovsky1" }, { "code": "{upsert: true} let newUsers = [\n { _id: \"1\", name: \"koko sho\", age: 17 }, // updates name **and** age\n { _id: \"2\", name: \"momo\", age: 39 }, // updates only age\n { _id: \"3\", name: \"charles\", age: 73 }, // same\n { _id: \"4\", name: \"dian\", age: 43 }, // new\n { _id: \"5\", name: \"joe\", age: 33 } // new\n ]\n\n for (let i in newUsers) {\n console.log(newUsers[i])\n newUsers[i] = {\n updateOne: {\n filter: {_id: newUsers[i]._id},\n update: newUsers[i],\n upsert: true\n }\n }\n }\n\n const res = await User.bulkWrite(newUsers)\nforbulkWrite_id: \"3\"", "text": "hi @Benny_Kachanovsky1I think the most straightforward way to do this is to use mongoose’s bulkWrite operator, specifying multiple updateOne operations to be executed in bulk. To update or insert a new document, specify the {upsert: true} option.For example:In the for loop in the example above, all I did was wrap the documents into a format that bulkWrite requires.MongoDB would not modify a document if the update operation will not change anything, so the document with _id: \"3\" in your example will not be touched.Best regards\nKevin", "username": "kevinadi" } ]
Best way to do a batch upsert?
2022-09-14T07:11:27.542Z
Best way to do a batch upsert?
4,104
null
[ "java" ]
[ { "code": "", "text": "I’m working on a web project in Java, and use mongodb-driver-sync 4.3.\nSince there’s only one database in use, can I keep one MongoDatabase instance (as Singleton) for use in the whole web application while still utilizing the connection pool feature of the MongoClient ?\nThanks.", "username": "Sun_Yu" }, { "code": "", "text": "Yes, you can, though there is virtually no cost in creating a new MongoDatabase instance. So long as you cache the MongoClient you should be good.Jeff", "username": "Jeffrey_Yemin" }, { "code": "", "text": "Thank you for your reply.", "username": "Sun_Yu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is it OK to use one MongoDatabase instance throughout the web application?
2022-09-15T12:15:45.269Z
Is it OK to use one MongoDatabase instance throughout the web application?
1,794
null
[ "connecting", "configuration" ]
[ { "code": "mongodb://host.example.com/?readPreference=secondaryPreferred&readPreferenceTags=region:uswest;readPreferenceTags=\n.WithReadPreference(ReadPreference.Primary)", "text": "I have a connection URINow, if I use .WithReadPreference(ReadPreference.Primary) , I understand that it overrides the read preference mentioned in the URI, but does it use the same tag preferences or it just tries to use default server selection algorithm?If it is default behaviour, how can we make sure we can set readPreferenceTags for different read preferences.Thanks in advance.", "username": "Vicky" }, { "code": "primarynearestmaxStalenessSecondsprimary", "text": "Hello @Vicky ,Welcome back to the MongoDB forum, hope you are doing well! As per this documentation on Tag Set List and Read Preference ModesTags are not compatible with mode primary , and in general, only apply when selecting a secondary member of a set for a read operation. However, the nearest read mode, when combined with a tag set list, selects the matching member with the lowest network latency. This member may be a primary or secondary.Also, if you specify tag set lists or a maxStalenessSeconds value with primary , the driver will produce below error.MongoInvalidArgumentError : Primary read preference cannot be combined with tagsPlease check this doc for reference.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB connection URI with different readPreferences with readPreferenceTags
2022-09-08T07:23:33.047Z
MongoDB connection URI with different readPreferences with readPreferenceTags
2,323
null
[ "node-js", "mongoose-odm" ]
[ { "code": " userRole:{\n type: mongoose.Schema.ObjectId,\n ref: 'userRoles',\n required: [true, 'Please Select a Role']\n }\n", "text": "Hello everyone.I have two collections, one is for the user data, with firstName, lastName, etc and theres a field called userRole:This field will be responsible to save the role id, it’s a join field.The question is, how may I join the collection users with the collection usersRole, and also I would like to show all the fields for the user Collection but bring from the usersRole only the roleDescription, actually is a user level query, I’m trying to show the user info. inside a table and I need the user name, user address and the user Role.I would like to thank everyone for the help.Best Regards,\nRicardo Scarpim", "username": "ricardo_scarpim" }, { "code": "$lookup", "text": "Hi @ricardo_scarpim - Welcome to the community Sounds like $lookup might suit your use case. However, do you have some sample documents from both collections you could provide here? Please also advise the expected output / what you are attempting to “join” on based off the sample documents.Regards,\nJason", "username": "Jason_Tran" }, { "code": "onst mongoose = require('mongoose')\nconst bcrypt = require('bcryptjs')\nconst jwt = require('jsonwebtoken')\n\nconst userSchema = mongoose.Schema(\n {\n firstName: {\n type: String,\n },\n lastName: {\n type: String,\n },\n emailAddress: {\n type: String,\n },\n phoneNumber: {\n type: String,\n },\n country: {\n type: String,\n },\n streetAddress: {\n type: String,\n },\n city: {\n type: String,\n },\n region: {\n type: String,\n },\n postalCode: {\n type: String,\n },\n userName: {\n type: String,\n required: [true, 'Please add a name.'],\n },\n userPassword: {\n type: String,\n required: [true, 'Please add a password.'],\n },\n userRole:{\n type: mongoose.Schema.ObjectId,\n ref: 'userRoles',\n required: [true, 'Please Select a Role']\n }\n },\n {\n timestamps: true,\n }\n)\n\n/** Encrypting the Password. */\nuserSchema.pre('save', async function(next){\n \n const salt = await bcrypt.genSalt(10)\n\n this.userPassword = await bcrypt.hash(this.userPassword, salt)\n})\n\n/** Sign JWT and Return */\nuserSchema.methods.getSignedJwtToken = function(){\n return jwt.sign({ id: this._id}, process.env.JWT_SECRET, {\n expiresIn: process.env.JWT_EXPIRE\n })\n}\n\nmodule.exports = mongoose.model('userMD', userSchema, 'Users');\n\nconst mongoose = require('mongoose')\n\nconst userRolesSchema = mongoose.Schema(\n {\n\n roleDescription: {\n type: String,\n required:[true, 'Please Add a Valid User Role']\n }\n },\n {\n timestamps: true\n }\n)\n\nmodule.exports = mongoose.model('userRoles', userRolesSchema, 'UserRoles')\n", "text": "And here are the mongoose schema for users and usersroles:", "username": "ricardo_scarpim" }, { "code": "city: \"Hillside\"\ncountry: \"\"\ncreatedAt: \"2022-09-13T21:13:37.173Z\"\nemailAddress: \"[email protected]\"\nfirstName: \"maria\"\nlastName: \"\"\nphoneNumber: \"\"\npostalCode: \"\"\nregion: \"\"\nroleDescription: []\nstreetAddress: \"\nupdatedAt: \"2022-09-13T21:13:37.173Z\"\nuserName: \"[email protected]\"\nuserPassword: \"$2a$10$8Nz9HWmh8x6aO8vYVtu80u.M3nEqH.iXvW9TBuK.PBfmi8IlIVsf2\"\nuserRole: \"6320af5955d957de42c77319\"\n__v: 0\n_id: \"6320f2817618d1af1b17fe65\"\n", "text": "The output I’m getting is the following:", "username": "ricardo_scarpim" }, { "code": " const Users = await User.aggregate([\n {\n $lookup: {\n from: \"userRoles\",\n pipeline: [\n { $project: {roleDescription: 1}}\n ],\n as: \"roleDescription\"\n }\n }\n ])\n", "text": "The method that I used to call the $lookup is the following:Thank you for your help.", "username": "ricardo_scarpim" }, { "code": "const Users = await User.aggregate([\n {\n $lookup: {\n from: \"userRoles\",\n pipeline: [\n { $project: {roleDescription: 1}}\n ],\n as: \"roleDescription\"\n }\n }\n ])\nuserrolesuserRolesmongoshshow collections\nDB>show collections\nuserRoles\nusers\n", "text": "Hi @ricardo_scarpim,I’m not too familiar with mongoose but would it be possible the collection name is userroles instead of userRoles (this is based off my interpretation from the following documentation)? Can you try verify this by connecting to the deployment via mongosh, switching to the database where the two collections in question exist and then running the following:Please advise the output from the above.Example output:Regards,\nJason", "username": "Jason_Tran" }, { "code": "city: \"bla bla\"\ncountry: \"\"\ncreatedAt: \"2022-09-15T20:49:13.325Z\"\nemailAddress: \"[email protected]\"\nfirstName: \"bla\"\nlastName: \"bla\"\nphoneNumber: \"\"\npostalCode: \"00000\"\nregion: \"NN\"\nroleDescription: Array(0)\nlength: 0\n[[Prototype]]: Array(0)\nstreetAddress: \"bla bla\"\nupdatedAt: \"2022-09-15T20:49:13.325Z\"\nuserName: \"[email protected]\"\nuserPassword: \"$2a$10$YzmjTXnYhaDnJXV5H4G0dO3ybl7tajaWaYet9PPksUBb4A2YaBSGC\"\nuserRole: \"630d94884b41135bfa488036\"\n__v: 0\n_id: \"63238fc9ef26265f5d65cb9d\"\n", "text": "Hey Jason, once again thank you so much for the help, still not working lol, here’s the output", "username": "ricardo_scarpim" }, { "code": "show collectionsmongoshmongosh", "text": "Thanks for confirming. Can you advise the output from show collections from mongosh as advised in my previous reply?If connected via mongosh, I would also try the pipeline you wanted to see if the documents return in shell.Regards,\nJason", "username": "Jason_Tran" } ]
How to join 2 tables or more
2022-09-14T22:52:30.947Z
How to join 2 tables or more
11,105
null
[ "swift", "kotlin" ]
[ { "code": "", "text": "Currently in the process of migrating a local app (custom sync) to MongoDb Realm. As a part of this I am doing database migrations to make the current schema better suited for sync, which makes it possible to migrate gradually.In the old schemas, all primary keys are strings. In MongoDb Realm it is allowed to use Strings, Int or ObjectId. Is there any difference between them? Do I loose anything if I keep using strings instead of ObjectIId?", "username": "Simon_Persson" }, { "code": "", "text": "Hi @Simon_Persson – I think that the only thing you’d lose is the ability to generate unique keys (but then you can also create unique UUIDs and convert them to a string).", "username": "Andrew_Morgan" }, { "code": "", "text": "Would there be any performance differences between String (24 character) vs. an actual ObjectId vs an Integer (hex to integer) ? For example with queries like $elemMatch - would MongoDB be more performant when iterating over all documents and looking for matches?", "username": "King_Khan" } ]
ObjectID, String or Int as primary key. Does it matter?
2021-05-03T07:22:12.286Z
ObjectID, String or Int as primary key. Does it matter?
6,980
null
[ "charts" ]
[ { "code": "", "text": "Hi, we’ve been building a dashboard via mongo charts and the highlighting and filtering functionality works as desired. When we embed this dashboard in our application none of the filtering or highlighting is available. Is there anyway to persist this functionally when embedding?I’ve searched the documentation. It appears filtering and highlighting is possible when embedding single charts. I could implement a control to apply a filter to a range of single charts but this still means we would have to embed each one and manage their size and positioning. Do you have any recommend solution for this?Additionally, why does calling getCharts on a dashboard return instances of charts that we are not able to apply a filter to?I really appreciate any help!", "username": "Elliot_Wilkinson" }, { "code": "", "text": "Hey Elliot,Thanks a lot for raising this ticket. We have a beta version released last week which will filter and highlight Charts that are referenced via embedded Dashboard context. You can find the documentation in this list https://www.npmjs.com/package/@mongodb-js/charts-embed-dom/v/2.3.0-beta.2We have planning to move this into a stable version late next week.", "username": "Muthukrishnan_Krishnamurthy" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering embedded dashboard
2022-09-15T21:38:06.000Z
Filtering embedded dashboard
2,245
null
[]
[ { "code": "Project1Project2{\n\t\"rules\": {\n\t\t\"Project1\": [{\n\t\t\t\"name\": \"ProjectOwner1\",\n\t\t\t\"applyWhen\": {},\n\t\t\t\"read\": {\n\t\t\t\t\"ownerId\": \"%%user.id\"\n\t\t\t},\n\t\t\t\"write\": {\n\t\t\t\t\"ownerId\": \"%%user.id\"\n\t\t\t}\n\t\t}],\n\t\t\"Project2\": [{\n\t\t\t\"name\": \"ProjectOwner2\",\n\t\t\t\"applyWhen\": {\n\t\t\t\t\"ownerId\": \"%%user.id\"\n\t\t\t},\n\t\t\t\"read\": true,\n\t\t\t\"write\": true\n\t\t}]\n\t}\n}\n", "text": "Hey there!We are struggling to bring the correct permission for Flexible Sync to live. So, just one question:Why does the first one work, but the second don’t (Project1 and Project2 are representing the same collection. They just have different names for demonstration)?Okay, a second question. This question is perhaps more directed at the Realm/MongoDB staff:\nFor me, managing permssions/roles/rules by editing a JSON is not only cumbersome, but also extremely confusing and error-prone. Can we expect a GUI (web frontend) to administer this in the future? That would be a huge relief. Thanks,\nFrank", "username": "phranck" }, { "code": "", "text": "Hello Frank,A sync role is applied per sync session (more specifically, at the start of the session, using the “applyWhen” expression), not per document. Thus, the “applyWhen” expression can’t reference fields in a document (in this case, “ownerId”), which is why the role under “Project2” is invalid.Regarding your question about having a UI to edit sync rules - that is a project that the team is currently working on this quarter!Best,\nJonathan", "username": "Jonathan_Lee" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Help Understanding the Permissions for Flexible Sync
2022-09-13T15:06:34.548Z
Help Understanding the Permissions for Flexible Sync
1,550
null
[ "database-tools" ]
[ { "code": "", "text": "MongoExport is working without query. when i try the below query getting erros\nShell version 4.2.8-q=’{ “CreatedOnUtc”: { “$lt”: { “$date”: “2016-01-01T00:00:00.000Z” } } }’Error message: error parsing command line options: too many positional arguments: [CreatedOnUtc: { $lt: { $date: 2016-01-01T00:00:00.000Z } } }’]", "username": "Ezi" }, { "code": "", "text": "post screenshot of the whole command. your quotes are not shown correctly.", "username": "steevej" }, { "code": "", "text": "\nimage1365×252 14.9 KB\n", "username": "Ezi" }, { "code": "", "text": "\nimage1363×207 9.11 KB\n\nAlso tried this", "username": "Ezi" }, { "code": "cmd-q=\"{ 'CreatedOnUtc': { '$lt': { '$date': '2016-01-01T00:00:00.000Z' } } }\"\n", "text": "Hi @Ezi welcome to the community!This is a common issue in Windows, since cmd doesn’t recognize single quotes as delimiters, unlike Linux.Could you try switching your quotes around (single quotes to double quotes and vice versa) like this and see if it works?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks @kevinadi . When i try , still getting error\nimage1055×136 5.76 KB\n", "username": "Ezi" }, { "code": "", "text": "When I tried this query on my Windows it did not give any errors. Gave the message exported zero rows since I don’t have that collection nor field\nCan you add some sample docs\nSometimes spaces cause issues and interpreted differently.Remove extra space after date:\nor\nPut your query in a json file and use queryfile option of mongoexport", "username": "Ramachandra_Tummala" }, { "code": "", "text": "When collection has zero row. No error. once have the error result have at least one doc.", "username": "Ezi" }, { "code": "", "text": "@Ramachandra_TummalaSET MONGOEXPORT=\"%ProgramFiles%\\Alteryx\\bin\\mongoexport.exe\"\nFOR /f %%a IN (‘WMIC OS GET LocalDateTime ^| FIND “.”’) DO SET DTS=%%a\nSET Date=%DTS:~0,4%-%DTS:~4,2%-%DTS:~6,2%\nSET Time=%DTS:~8,2%:%DTS:~10,2%:%DTS:~12,2%%DTS:~14,4%Z\nSET /a tztemp=%DTS:~21%/60\nSET tzone=UTC%tztemp%\nSET DateTime=%DTS:~0,4%%DTS:~4,2%%DTS:~6,2%_%DTS:~8,2%%DTS:~10,2%%DTS:~12,2%SET JSON=\"{‘Timestamp’:{’$gte’:{’$date’:’%Date%T00:00:00.000Z’}}}\"%MONGOEXPORT% --port=27018 --username=user --password=%PS% --db=AlteryxGallery --collection=auditEvents --query=%JSON% >>%OUTPUT%AuditEvents%datetime%.csvI was using this batch script to export audit events in Mongodb4.0. now my environment is migrated to 4.2 and 4.4 . If i run the query now it s giving below error.2022-09-14T12:33:33.389+0100 connected to: mongodb://localhost:27018/\n2022-09-14T12:33:33.390+0100 Failed: error parsing query as Extended JSON: invalid JSON inputCan you clarify why and what modification should i do", "username": "k.latha_4003" }, { "code": "", "text": "Your set JSON variable on timestamp seems to be the issue\nFor extended json you have to enclose all fields,functions in double quotes and single quote outside the flower bracket\nCheck documentation for exact syntax/example", "username": "Ramachandra_Tummala" }, { "code": "", "text": "@Ramachandra_TummalaSET JSON=’{“Timestamp”:{\"$gte\":{\"$date\":\"%Date%T00:00:00.000Z\"}}}’I have tried this as well but no luck2022-09-15T09:52:01.804+0100 query ‘[39 123 84 105 109 101 115 116 97 109 112 58 123 36 103 116 101 58 123 36 100 97 116 101 58 50 48 50 50 45 48 57 45 49 53 84 48 48 58 48 48 58 48 48 46 48 48 48 90 125 125 125 39]’ is not valid JSON: json: cannot unmarshal string into Go value of type map[string]interface {}", "username": "k.latha_4003" }, { "code": "", "text": "What is your os?\nIf it is Windows format will be different\nAlso your quotes around Timestamp look different.Use double quotes\nSingle quote outside flower bracket also look different\nCheck this thread for Windows query format\nMongodump --query not able to filter using timestamp", "username": "Ramachandra_Tummala" } ]
MongoExport with query
2022-05-03T23:52:12.764Z
MongoExport with query
5,185
null
[ "replication" ]
[ { "code": "", "text": "HI all,I deployed a mongoDB statefulset on a Kubernetes cluster and I associated a persistent volume of 20GB on each instance of my database. I expected a Oplog size of 1GB (as default is 5% of free disk space), but I observed the Oplog Size is equal to 25GB which is equal to 5% of the complete free disk space (500GB). Do you know if there is a solution to use the PV size instead of the total partition size for the Oplog size caculation? If not I must manually modified the oplog size to be coherent with my PV size (or increase the PV size).\nRegards", "username": "Jacques" }, { "code": "db.getReplicationInfo()", "text": "Hi @Jacques and welcome to the MongoDB community!!but I observed the Oplog Size is equal to 25GB which is equal to 5% of the complete free disk space (500GB).If the dbPath for the oplog resides in a persistentVolume, then the database size including the oplog should be bound by that volume, so I believe the oplog size should take into account the volume’s size.\nAlso, Starting in MongoDB 4.0, unlike other capped collections, the oplog can grow past its configured size limit to avoid deleting the majority commit point.Do you know if there is a solution to use the PV size instead of the total partition size for the Oplog size caculation?By default it should do this, but if it doesn’t, could you please provide more details:Let us know if you have any further queries.Best Regards\nAasawari", "username": "Aasawari" } ]
Oplog size and Kubernetes Persitent Volume
2022-08-26T09:15:01.865Z
Oplog size and Kubernetes Persitent Volume
1,555
null
[ "swift" ]
[ { "code": "", "text": "I’m looking at the M10 dedicated tier and it says the pricing includes “3 replica sets”. I see MongoDB docs about replica sets and I think I understand the basic concept: it’s like a fancy load balancer.However, I’m unclear how replica sets integrate with Realm and specifically Realm Sync. Are replica sets something I can take advantage of when using Realm Sync via the Swift SDK? If so, are there any changes I need to make to the client code?Is the M10 tier cheaper if I don’t use replica sets at all? Thanks.", "username": "Bryan_Jones" }, { "code": "", "text": "Hi @Bryan_Jones,I understand the basic concept: it’s like a fancy load balancer.Not really, at least not as their main purpose: Replication is a way to increase resilience and ensure your data is available and consistent at any time, no matter if a server needs to step down. While writing can only happen on the primary, it is possible to offload tasks to secondaries: you can find a lot of examples in the courses available on MongoDB University.Are replica sets something I can take advantage of when using Realm Sync via the Swift SDK?Not directly through the Realm SDK, as that’s a client framework: you can however configure the backend to perform better by tweaking the Data Source Configuration, according to your use case. This is especially true if you’re running backend-side logic, like Triggers or Functions, that may well work by reading data from secondaries, leaving the primary with more resources to operate Device Sync.Is the M10 tier cheaper if I don’t use replica sets at all?At this time, this is not an option, and would be unadvisable to do so, for the reasons outlined above: Three Members Replica Sets are also the bare minimum for the functionality to work, large production systems have usually more nodes.", "username": "Paolo_Manna" }, { "code": "", "text": "Ah, I see. I have no functions or triggers running for this client. Is it the case that device sync does not benefit from replica sets? That is, sync must be run by the Primary node only? If so, the benefit of replica sets is reduced to fault tolerance, correct?Does the pricing in M10 that includes 3 replica sets cover spreading the sets over different physical data centers?", "username": "Bryan_Jones" }, { "code": "", "text": "If so, the benefit of replica sets is reduced to fault tolerance, correct?I wouldn’t say it’s irrelevant, but yes, resilience is the main factor when only Device Sync is involved.Does the pricing in M10 that includes 3 replica sets cover spreading the sets over different physical data centers?Having multi-cloud and multi-region clusters is an option, but it introduces additional costs.One factor to keep in mind, however, is that the App Services app (that includes services like Device Sync) doesn’t run on your cluster, but on specific VM(s), deployed on the same range of providers, but not necessarily the same region: more info in the docs.To maximise performance, then, it’s important to keep your cluster in a region as close as possible (the same, ideally) to the one you choose for the app. While it’s possible to have the app and the cluster run on different providers, it’s a common misconfiguration, as it introduces lag and inter-provider traffic costs.", "username": "Paolo_Manna" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Replica Sets + Realm Sync
2022-09-14T05:48:49.874Z
Atlas Replica Sets + Realm Sync
1,441
null
[]
[ { "code": "", "text": "Hi,\nWe running on MongoDB Community 4.0.12. We are trying to create a new secondary node in existing 3 Node Cluster where we have 1 Primary and 2 Secondary. Our DB Size is 4TB.Steps we took to Add new secondary node:Initial, lag after adding new node to primary was 0.76 hours (and new node state is Secondary)but lag keeps growing and grows till 15 hours and finally changes from Secondary to Recovering. Oplog Size on all Nodes ar 150 GB which is good hold data of 3.7 hours. We are running theses server on EC2 machine with io2 Storage and PIOPs 50K in different Availability Zone.We have tried this 4 times but no luck. We are not able to understand why Secondary node not able to catch up just lag of less than hour? Is there any better way to create and attach new secondary to existing cluster? Is there any replication or any other parameter which can help to fix this?\nThanks in advance.", "username": "Prasun_Pandey" }, { "code": "\"MongoDB v4.0 is already out of support since April 2022\"", "text": "Welcome to The MongoDB Community Forums @Prasun_Pandey ! We are not able to understand why Secondary node not able to catch up just lag of less than hour?I think the most likely cause is that the secondary cannot keep up with the write workload of primary node. The default write concern of w:1 requires that only the primary replica set member acknowledge the write before returning write concern acknowledgment, which means that it’s up to the secondary to keep up with the primary. If it cannot, it will get left behind and eventually fall off the oplog.Is there any replication or any other parameter which can help to fix this?You can check if your secondary has an identical hardware specification to the primary, which it should as in case of primary goes offline due to maintainance or other issues, this could be your next primary node. If it has similar hardware specifications and still cannot catch up then you can check the network. It is possible that the network link between them is too slow for the workload. Some other possible causes of replication lag includesOne possible solution if a secondary cannot keep up is to use w:majority instead of w:1 in the application. This could be set using the connection string URI to set it as default connection-wide. Note that this is also the default write concern starting from MongoDB 5.0.Also, \"MongoDB v4.0 is already out of support since April 2022\" so it is recommended to upgrade to a supported version, which is at least the 4.2 series. However I would recommend you to check if you can upgrade to the latest 6.0 series if possible.For a more complete troubleshooting information regarding replica set, please see: Troubleshoot Replica Sets.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Replica Lag Doesn't Catchup
2022-09-08T23:57:47.080Z
MongoDB Replica Lag Doesn&rsquo;t Catchup
1,525
null
[ "queries" ]
[ { "code": "find({id: {$in: [array of ids, average size 1000]}}).sort({score: -1, id: -1}).limit(10)\nfind({\n $and: [\n {id: {$in: [array of ids, average size 1000]}},\n {\n $or: [\n { score: { $lt: 100 } },\n { score: 100, id: { $lt: \"some id\" } },\n ],\n },\n ],\n}).sort({score: -1, id: -1}).limit(10)\nfind({id: {$in: [\"id 1\",\"id 2\",\"id 3\",\"id 4\",\"id 5\"]}}).sort({score: -1, id: -1}).limit(2)find({\n $and: [\n {id: {$in: [\"id 1\",\"id 2\",\"id 3\",\"id 4\",\"id 5\"]}},\n {\n $or: [\n { score: { $lt: 4 } },\n { score: 4, id: { $lt: \"id 4\" } },\n ],\n },\n ],\n}).sort({score: -1, id: -1}).limit(10)\n$in", "text": "I already know about the Equality-Sort-Range rule when designing optimal indices for MongoDB queries, however, recently I’ve came across some queries that I am not sure which index would be the best fix.The collection has tens of millions of documents.The fields are:The queries are:So basically what I am trying to do, is seek based pagination, based on the result of the first query, I can obtain some anchors to look for data in the next page. Say:I have 5 documents, and the page size to return data is 2.\n[id: “id 1”, score: 2]\n[id: “id 2”, score: 4]\n[id: “id 3”, score: 5]\n[id: “id 4”, score: 4]\n[id: “id 5”, score: 1]First page will be:\n[id: “id 3”, score: 5]\n[id: “id 4”, score: 4]\nwith query find({id: {$in: [\"id 1\",\"id 2\",\"id 3\",\"id 4\",\"id 5\"]}}).sort({score: -1, id: -1}).limit(2)Second page will be:\n[id: “id 2”, score: 4]\n[id: “id 1”, score: 2]\nwith queryAccording to the ESR rule, the index should probably be something like {filterId: -1, score: -1}, so that the $in equality rule is satisfied first. But there would still be in memory sorts. Is there a better index available?", "username": "v_b" }, { "code": "$infilterIdId{\"score\": -1, \"id\": -1}{ \"id\": -1, \"score\": -1}{ \"score\": -1, \"id\": -1}{ \"score\": -1, \"id\": -1}{ \"score\": -1, \"id\": -1}winningPlandb.collection. find({id: {$in: [\"id 1\",\"id 2\",\"id 3\",\"id 4\",\"id 5\"]}}).sort({score: -1, id: -1})winningPlan: {\n stage: 'FETCH',\n filter: { id: { '$in': [ 'id 1', 'id 2', 'id 3', 'id 4', 'id 5' ] } },\n inputStage: {\n stage: 'IXSCAN',\n keyPattern: { score: -1, id: -1 },\n indexName: 'score_-1_id_-1',\n isMultiKey: false,\n multiKeyPaths: { score: [], id: [] },\n isUnique: false,\n isSparse: false,\n isPartial: false,\n indexVersion: 2,\n direction: 'forward',\n indexBounds: { score: [ '[MaxKey, MinKey]' ], id: [ '[MaxKey, MinKey]' ] }\n }\n }\n“id”find()$orIXSCANFETCHLIMIT{score:-1, id:-1}IXSCANSORTFETCH{id:-1, score:-1}$orIXSCANIXSCANSORT_MERGEFETCHLIMIT{score:-1, id:-1}IXSCANFETCHSORT{id:-1, score:-1}IXSCANIXSCANSORTFETCH{id:-1, score:-1}{score:1,id:1}{score:-1,id:-1}{id:-1, score:-1}sort(){score:-1,id:-1}$or‘SORT_MERGE’{score:1, id:1}{id:-1, score:-1}$or$orSORTdb.collection.explain(\"executionStats\").find()", "text": "Hi @v_b - Welcome to the community.According to the ESR rule, the index should probably be something like {filterId: -1, score: -1}, so that the $in equality rule is satisfied first. But there would still be in memory sorts. Is there a better index available?Note, I believe filterId mentioned above should be Id but correct me if I am wrong here.In saying so, based off the sample documents and details you’ve provided I am curious if you have attempted to create an index definition of {\"score\": -1, \"id\": -1}?I compared both the following:The query planner in my test environment which only contains the sample documents you’ve provided selects the index 2. { \"score\": -1, \"id\": -1} to use for both of the queries you had advised.In terms of the { \"score\": -1, \"id\": -1} index, here are some details of the winningPlan for the query:\ndb.collection. find({id: {$in: [\"id 1\",\"id 2\",\"id 3\",\"id 4\",\"id 5\"]}}).sort({score: -1, id: -1})We can see that the index is used before a filter is done for the specified “id” values.Additionally, here’s some further details regarding some of the testing with both indexes (from my test environment with only 5 of the sample documents you have provided):find() without $or:\nIXSCAN → FETCH → LIMIT using {score:-1, id:-1} index\nIXSCAN → SORT → FETCH using {id:-1, score:-1} indexWith $or:\n( IXSCAN + IXSCAN ) → SORT_MERGE → FETCH → LIMIT all using {score:-1, id:-1}\nIXSCAN → FETCH (with conditions) → SORT using {id:-1, score:-1}\nRejected plan: ( IXSCAN + IXSCAN ) → SORT → FETCH using {id:-1, score:-1}Note: {score:1,id:1} would work in the same manner as {score:-1,id:-1} (just in reverse) for this test.Based off the 5 sample documents you’ve provided in my test environment, we can see in both instances that the index {id:-1, score:-1} will have an in memory sort. This is due to the sort() condition you have specified. Where as compared to the index {score:-1,id:-1} , the documents are already sorted and just need to be fetched.For your secondary query, the winning plan differs slightly as there is use of the $or operator with some range conditions in which a ‘SORT_MERGE’ is performed. Although this index may work for your use case, there isn’t a single “silver bullet” index for all queries. You’ll need to consider it on a query-by-query basis. You could possibly consider {score:1, id:1} and {id:-1, score:-1} indexes for the query using $or so that the query planner may decide which index to use for each branch of the $or operator.In saying the above, I believe your main concern is the in-memory sort (correct me if I am wrong here). The SORT stage is also less desirable due to the 100MB limit of memory use (in which the operation will result in an error if it exceeds this amount and allowDiskUse() is not used).It’s important to note that the ESR rule is used as a general rule of thumb and is a good starting place applicable to most use cases (i.e. not all). I would recommend going over the MongoDB .local Toronto 2019: Tips and Tricks for Effective Indexing slides which also has an example of an exception to the ESR rule. Additionally, perhaps the Sort of Multiple Fields documentation may be of use as well.If you’re on Atlas using an M10+ tier cluster, you should have access to the performance advisor which may be able to help you optimize your indexes.It is highly recommended you test this index first against a test environment with more data to verify it meets all your use case / requirements before creating this in production. You can verify the query execution in more detail using db.collection.explain(\"executionStats\").find() .Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Index recommendation for a queries that does not tick all boxes of the ESR rule
2022-09-09T03:28:50.535Z
Index recommendation for a queries that does not tick all boxes of the ESR rule
1,408
https://www.mongodb.com/…9900d3918de3.png
[]
[ { "code": "", "text": "\nScreenshot from 2022-09-10 13-44-25840×303 12.3 KB\n", "username": "Mohit_Jain1" }, { "code": "monogsh", "text": "Hi @Mohit_Jain1,Based off the title, it sounds like there are 2 issues here (possibly related):Regarding 1., could you provide some more information regarding connection failure? I.e.:Regarding 2. I would contact the Atlas support team via the in-app chat to investigate any operational issues related to your Atlas account. You can additionally raise a support case if you have a support subscription. You could possibly try clearing cache or another browser / machine to see if the issue persists.However, if “unable to connect to my cluster” is referring to the error in the screenshot you’ve provided / trying to access Data Explorer, then please contact the Atlas support team as mentioned above.Regards,\nJason", "username": "Jason_Tran" } ]
I am unable to connect to my cluster. getting an error when connecting via atlus dashboard
2022-09-10T08:45:56.887Z
I am unable to connect to my cluster. getting an error when connecting via atlus dashboard
1,113
null
[ "aggregation", "java" ]
[ { "code": "{\n from: \"profile_recommendation_info\",\n localField: \"_id\",\n foreignField: \"profile_id\",\n as: \"result\",\n pipeline : [\n {\n $match : {\n \"new_recommendations\" : {\n $eq : null\n }\n }\n }\n ]\n}\n\tLookupOperation lookupOperation = Aggregation.lookup(CommonConstants.PROFILE_RECOMMENDATION_INFO, CommonConstants.ID, CommonConstants.PROFILE_UNDERSCORE_ID, CommonConstants.RESULT);\n", "text": "below is my $lookup codeBelow code is java equivalent code for lookup without in javaHow to achieve the same in java but with pipeline condition in lookup(As shown in abovw $lookup aggregation query ) in java?", "username": "Sanjay_Naik" }, { "code": "$lookup", "text": "Hi @Sanjay_Naik,My interpretation is that you’re after the Java equivalent of the $lookup you have provided. Please correct me if I am wrong here.If that is the case, you may wish to try getting this using MongoDB Compass’s Export Pipeline to Specific Language feature.Hope this helps.Regards,\nJason", "username": "Jason_Tran" } ]
How to use to use pipeline within a $lookup in mongo aggregation in java
2022-09-04T12:12:52.830Z
How to use to use pipeline within a $lookup in mongo aggregation in java
1,722
https://www.mongodb.com/…9c15a442dca.jpeg
[]
[ { "code": "", "text": "With some no regular email domains, are impossible to access the site admin:\n\nLog_in___MongoDB981×879 93.6 KB\n", "username": "Paulino_R_e_Silva" }, { "code": "", "text": "Hi @Paulino_R_e_Silva - Welcome to the community.I am not sure if you have resolved this yet but I would contact the Atlas support team via the in-app chat for this issue. The community forums are for public discussion and we cannot help with service or account / billing enquiries.Best Regards,\nJason Tran", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Problem with some emails with Chrome
2022-09-08T14:36:21.445Z
Problem with some emails with Chrome
1,070
null
[ "queries", "golang", "transactions" ]
[ { "code": "", "text": "hello, I use mongo-go-driver to connect mongo,an error was encountered while reading in a transaction.\nerror message:\n(SnapshotUnavailable) Unable to read from a snapshot due to pending collection catalog changes; please retry the operation. Snapshot timestamp is Timestamp(1660749105, 13). Collection minimum is Timestamp(1660749106, 16)mongodb 4.2.3", "username": "zheng_connor" }, { "code": "", "text": "Hey @zheng_connor, thanks for the question!Searching for that error message, I found a seemingly relevant ticket SERVER-41532, which suggests that the error can happen when trying to read data in a transaction with read concern “Snapshot” from a MongoDB sharded cluster.A few questions to help me understand your use case better:Unfortunately the above ticket has no resolution, so it’s not clear if updating to a newer server version would resolve the problem. The fix suggested in the ticket is to retry the transaction, which will likely resolve the issue.", "username": "Matt_Dale" }, { "code": "", "text": "· mongo three node replica-set, version: 4.2.3\n· there are many request to add data, each request to do the following:when doing request operations concurrently, errors in the problem may occur when reading collection Anotes:\n· collection A and B already exist\n· when the above request is executed, there is no data change in collection A", "username": "zheng_connor" }, { "code": "", "text": "@Matt_Dale hello,What are the solutions to the above problems?", "username": "zheng_connor" } ]
A SnapshotUnavailable error was encountered while reading in a transaction
2022-08-26T08:48:35.511Z
A SnapshotUnavailable error was encountered while reading in a transaction
2,771
null
[ "atlas-cli" ]
[ { "code": "--profile default--profile default", "text": "I have successfully setup the atlas cli to auth and work with my atlas cluster. However it only works if I specify --profile default at the end of every command I issue.\nIs there a way to set default profile to where i do not have to specify --profile default for every command? I could not find that I info.I could create shell alias but thought if there is a way in CLI itself, it would be better.Thanks", "username": "Abdullah_Rafiq_Paracha" }, { "code": "", "text": "How many profiles you have?\nIs default profile included in the config file?\nCheck this link", "username": "Ramachandra_Tummala" } ]
How to set default profile in atlas CLI
2022-09-14T19:25:30.831Z
How to set default profile in atlas CLI
2,457
null
[ "aggregation" ]
[ { "code": "", "text": "MongoDB 4.4.15When updating a $out (aggregation) on a primary node it runs successful.\nWhen running the same $out aggregation using a secondary node we get this error:BSONObj size: 16953146 (0x102AF3A) is invalid.\nSize must be between 0 and 16793600(16MB) First element:\ninsert: \" tmp.agg_out.55125f65 - 0ef0 - 4c1f - a7ef - dec311c99612 \"According the 4.4 manual https://www.mongodb.com/docs/v4.4/reference/operator/aggregation/out/#behaviors I was under the impression this was possible but I cannot find any information regarding a possible size restriction causing this error.Is this expected behaviour?Thanks,\nL", "username": "Lars_Van_Casteren" }, { "code": "$out", "text": "Hi @Lars_Van_Casteren,The issue you’re experiencing is likely the result of SERVER-66289, which was fixed in MongoDB 6.1.TL;DR of this issue is that though $out can target secondary nodes in 4.4, on occasion when more than 16MB of documents need to be written (the BSON Document Max Size) even with buffering of writes there is a small amount of overhead that can push a batch over this threshold. This overhead is not present when targeting a PRIMARY node, which is why the issue only exists when run on a SECONDARY.We’ve requested this fix be backported to 6.0, 5.0 and 4.4 however this has not yet been completed.", "username": "alexbevi" }, { "code": "", "text": "Hello @alexbevi ,Thanks for your reply!We’re unable to move to 5.x or 6.x for the moment so I guess we’ll have to wait for the backport to 4.4.Gt,\nL", "username": "Lars_Van_Casteren" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.15 - BSONObjectTooLarge - aggregation $out update
2022-09-14T15:55:58.507Z
MongoDB 4.4.15 - BSONObjectTooLarge - aggregation $out update
3,370
null
[ "graphql" ]
[ { "code": "", "text": "I have a mutation set up to update a document.\nI have a query in place on the same document.What is currently happening:Why does the query come in with expired data after I’ve sent a mutation?\nI’m 80% sure this was not happening a month ago. Did something change?\nMy UI updates based on the query signal, so every time I save a document, the UI is updating to the expired data, which is not consistent with the DB.", "username": "Ryan_Goodwin" }, { "code": "", "text": "Hello @Ryan_Goodwin,Thank you for raising your concern.Could you please confirm my understanding of your issue-If my understanding is correct, it’s possible the query is returning cached data. Could you let us know how are you running the mutation/query? Are you using a client like Apollo?I look forward to your response.Cheers, \nHenna", "username": "henna.s" }, { "code": "", "text": "Yes, I edit the document via a GQL mutation. I’m using @apollo/client.\nI then see the correct update on Atlas.My UI awaits the return of the update mutation.Then, when the query fires again, it returns with old data (the state of the document just before the update mutation was run).If I then reload the page, thereby triggering the query again, the correct data is returned.Caching seems a reasonable cause to look into.", "username": "Ryan_Goodwin" }, { "code": "const UpdateDocumentMutation = gql`\n mutation UpdateDocument(\n $documentId: String!,\n $updates: DocumentUpdateInput!\n ){\n updatedDocument: updateOneDocument(query: { _id: $documentId }, set: $updates) {\n _id\n }\n }\n`;\n\nfunction useUpdateDocument(document) {\n const [updateDocumentMutation] = useMutation(UpdateDocumentMutation);\n\n const updateDocument = async (document) => {\n const { updatedDocument } = await updateDocumentMutation({\n variables: {\n recipeId: document._id,\n updates: document.updates\n },\n refetchQueries: [\n 'Document'\n ]\n });\n return updatedDocument;\n };\n\n return updateDocument;\n}\n", "text": "Caching was definitely part of the problem.The key to my solution ended up being here in the apollo docs.In case it’s helpful for someone in the future, this is basically my code. I had to add a refetchQueries key when passing options to the mutation. Refer to the apollo docs for configuration details. I found it to be a little confusing figuring out how to reference the query that you want to be reloaded.", "username": "Ryan_Goodwin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
GraphQL query responding with old data
2022-06-14T10:51:10.340Z
GraphQL query responding with old data
4,157
null
[ "node-js" ]
[ { "code": "import Koa from 'koa'\nconst app = new Koa();\n// ...\nconst MONGO_URI = `mongodb+srv://${ATLAS_USER}:${ATLAS_PASS}@${ATLAS_HOST}/${ATLAS_DB}?authSource=admin`\nconst MONGO_CONFIG = { useNewUrlParser: true, useUnifiedTopology: true, maxPoolSize: 30, useFindAndModify: false }\n\nMongoose.Promise = global.Promise;\nMongoose.set('useCreateIndex', true)\ntry {\n\tMongoose.connect(MONGO_URI, MONGO_CONFIG)\n\t\t.then((db) => {\n\t\t\tconsole.log('Mongoose connection Established')\n\t\t\tdb.connection.on('error', (err) => { console.error(err) }) // <- print nothing\n\t\t\tdb.connection.on('disconnected', () => { console.log('disconnected') }) // <- print once\n\t\t\tdb.connection.on('reconnected', () => { console.log('reconnected') }) // <- never printed\n\t\t})\n} catch (error) {\n\tconsole.error(error.message)\n\tconsole.log('Mongoose connection Failed')\n\tprocess.exit(1)\n}\n\n// ...\nrouter.get('/_online', ctx => {\n\tconst mAvailable = mongoose.connection?.readyState === 1\n\tif (!mAvailable) console.error('mongoose readyState : ' + mongoose.connection?.readyState)\n\tctx.body = mAvailable ? 'OK' : 'FAILED'\n\tctx.status = mAvailable ? 200 : 503\n});\n <-- GET /_online\n --> GET /_online 200 1ms 2b\n <-- GET /_online\n --> GET /_online 200 2ms 2b\n <-- GET /_online\n --> GET /_online 200 1ms 2b\n <-- GET /_online\n --> GET /_online 200 1ms 2b\n <-- GET /_online\n --> GET /_online 200 1ms 2b\n <-- GET /_online\n --> GET /_online 200 1ms 2b\n <-- GET /_online\n --> GET /_online 200 1ms 2b\ndisconnected\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 2ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 2ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 2ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 2ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 3ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n <-- GET /_online\n --> GET /_online 503 1ms 6b\nmongoose readyState : 0\n2021-12-20T00:39:23: PM2 log: Stopping app:mm_api id:0\n2021-12-20T00:39:23: PM2 log: App name:mm_api id:0 disconnected\n2021-12-20T00:39:23: PM2 log: App [mm_api:0] exited with code [0] via signal [SIGINT]\n2021-12-20T00:39:23: PM2 log: pid=18 msg=process killed\n2021-12-20T00:39:23: PM2 log: PM2 successfully stopped\n", "text": "My server lost connection to M0 Cluster every few weeks(or days). The server implemented health check api is running on a docker container.Because the API also checks DB connection, the container stops after a number of retries specified in docker-compose.yml but then I re-execute a last CI/CD job, the server runs well again as if nothing had happened.I wanted to find even the smallest hint, but I couldn’t get any error messages.These are What I find out:index.js snippethealthcheck APIdocker logs -n 100 ", "username": "setTimeout" }, { "code": "", "text": "My server lost connection to M0 ClusterSame problemhere, did you solve?", "username": "ocielgp" } ]
Mongoose connections are disconnected without errors
2021-12-20T13:00:16.430Z
Mongoose connections are disconnected without errors
6,838
https://www.mongodb.com/…ff33dbb6f4d4.png
[]
[ { "code": "", "text": "\nimage991×537 30 KB\nGetting this error when trying to connect to Atlas", "username": "Ananthan_Rajasekharan" }, { "code": ";QUESTION\ncluster0.apq2bmf.mongodb.net. IN ANY\n;ANSWER\ncluster0.apq2bmf.mongodb.net. 60 IN TXT \"authSource=admin&replicaSet=atlas-geo4a7-shard-0\"\ncluster0.apq2bmf.mongodb.net. 60 IN SRV 0 0 27017 ac-zx4ig6e-shard-00-00.apq2bmf.mongodb.net.\ncluster0.apq2bmf.mongodb.net. 60 IN SRV 0 0 27017 ac-zx4ig6e-shard-00-01.apq2bmf.mongodb.net.\ncluster0.apq2bmf.mongodb.net. 60 IN SRV 0 0 27017 ac-zx4ig6e-shard-00-02.apq2bmf.mongodb.net.\n", "text": "Make sure you replace <password> with your real password.The error Multiple text records not allowed seems to indicate erroneous DNS information. However, according to:it looks fine.", "username": "steevej" }, { "code": "", "text": "@Ananthan_Rajasekharan have you been able to resolve the issue? If not, could you please let me know which version of Compass you are using and what you are connecting to (sharded cluster, M0, etc.)?Feel free to message me directly if you do not feel comfortable commenting these details publicly.", "username": "Julia_Oppenheim" }, { "code": "", "text": "Sorry for the late reply.I was not able to resolve the issue. The version I used is 1.32.6-win32-x64 and also 1.32.6-beta.4-win32-x64. Both showed the same error.I kind of feels like its my internet connections problem (not sure though), I will tell after testing", "username": "Ananthan_Rajasekharan" }, { "code": "", "text": "Try using Google DNS 8.8.8.8", "username": "steevej" }, { "code": "", "text": "Tried the Google DNS, same result.I tried using MongoDB Shell to connect via one of the instances I have on digital ocean, and when I tried there it works. Its not working on my computer. I have no clue as to why.", "username": "Ananthan_Rajasekharan" }, { "code": "", "text": "Make sure you allow access from anywhere in network access.", "username": "steevej" }, { "code": "", "text": "facing same issue, using google DNS 8888 and 8844, and facing this issue on sharded cluster, M0", "username": "Jeevan_Kumar" } ]
Multiple text records not allowed
2022-08-08T16:35:39.461Z
Multiple text records not allowed
2,257
null
[ "aggregation" ]
[ { "code": "[\n {\n \"RunInfo\": {\n \"Errors:\": [\n 0,\n 0,\n 0\n ],\n \"load\": [\n 108422760,\n 259103136,\n 220934960\n ],\n \"timestamp\": [\n \"2022-09-07T01:51:32Z\",\n \"2022-09-07T01:52:31Z\",\n \"2022-09-07T01:53:31Z\"\n ],\n \"Mem\": [\n 1335040,\n 1335040,\n 1335040\n ]\n },\n \n }\n]\ndb.collection.aggregate([\n {\n \"$unwind\": \"$RunInfo\"\n },\n {\n \"$set\": {\n \"RunInfo.timestamp\": {\n \"$arrayElemAt\": [\n \"$values\"\n ]\n }\n }\n },\n {\n \"$group\": {\n }\n }\n])\n", "text": "I’ve got the following data:Can I query this so that I get it returned as one array of objects (each with timestamp,Error,load,mem) and then have it sorted by timestamp?I’ve been having a hack around with the following:However I’m not following how to get through several collections at once and then re-create as new objects.Any help would be appreciated.", "username": "Rafe" }, { "code": "", "text": "Please provide a sample result document based on the input document you shared.It might be good to have different input data for errors and mem.", "username": "steevej" }, { "code": "", "text": "Look $range and $map as they are probably needed in the solution.", "username": "steevej" }, { "code": "", "text": "Two more things.If consolidating the 4 arrays is a frequent use-case, you should consider doing when you store the document.Personally, I prefer doing this type of data cosmetic in the application side rather than the server. Imagine you have 1000 different users doing this aggregation, they all impact the server. But they could do a simple for-loop in the application and the load of doing the consolidation will be distributed among the users.", "username": "steevej" }, { "code": "$zip db.arrays.find({},\n {combined:{$zip:{inputs:[\"$RunInfo.Errors:\", \"$RunInfo.load\", \"$RunInfo.timestamp\", \"$RunInfo.Mem\"]}}})\n{\n\"_id\" : ObjectId(\"6321e5278a58698cf4bc2758\"),\n\"combined\" : [\n\t[\n\t\t0,\n\t\t108422760,\n\t\t\"2022-09-07T01:51:32Z\",\n\t\t1335040\n\t],\n\t[\n\t\t0,\n\t\t259103136,\n\t\t\"2022-09-07T01:52:31Z\",\n\t\t1335040\n\t],\n\t[\n\t\t0,\n\t\t220934960,\n\t\t\"2022-09-07T01:53:31Z\",\n\t\t1335040\n\t]\n]\n}\ndb.arrays.find({},{combined:{$map:{\n input:{$range:[0,{$size:\"$RunInfo.Errors:\"}]}, \n as:\"idx\", \n in:{ \n \"Errors:\" : { \"$arrayElemAt\" : [ \"$RunInfo.Errors:\", \"$$idx\" ] }, \n \"load\" : { \"$arrayElemAt\" : [ \"$RunInfo.load\", \"$$idx\" ] }, \n \"timestamp\" : { \"$arrayElemAt\" : [ \"$RunInfo.timestamp\", \"$$idx\" ] }, \n \"Mem\" : { \"$arrayElemAt\" : [ \"$RunInfo.Mem\", \"$$idx\" ] } \n } \n }}})\n{\n\"_id\" : ObjectId(\"6321e5278a58698cf4bc2758\"),\n\"combined\" : [\n\t{\n\t\t\"Errors:\" : 0,\n\t\t\"load\" : 108422760,\n\t\t\"timestamp\" : \"2022-09-07T01:51:32Z\",\n\t\t\"Mem\" : 1335040\n\t},\n\t{\n\t\t\"Errors:\" : 0,\n\t\t\"load\" : 259103136,\n\t\t\"timestamp\" : \"2022-09-07T01:52:31Z\",\n\t\t\"Mem\" : 1335040\n\t},\n\t{\n\t\t\"Errors:\" : 0,\n\t\t\"load\" : 220934960,\n\t\t\"timestamp\" : \"2022-09-07T01:53:31Z\",\n\t\t\"Mem\" : 1335040\n\t}\n]\n}\n$sortArrayErrorsErrors:", "text": "Can I query this so that I get it returned as one array of objects (each with timestamp,Error,load,mem) and then have it sorted by timestamp?Yes you can. There are a few ways to do it, one is using $zip:As you can see, the field name are not preserved this way, so you might prefer to do something like:If you’re on the latest version, you can then $sortArray on timestamp field.(Note your sample document has Errors field as Errors: - extra semicolon - which might cause issues with testing)", "username": "Asya_Kamsky" } ]
How do I iterate over several arrays and create a new object array
2022-09-12T07:48:13.562Z
How do I iterate over several arrays and create a new object array
5,824
null
[ "aggregation", "queries", "dot-net" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"631ed2b5e54db93b2196ccc7\"\n },\n \"customerId\": {\n \"$numberLong\": \"10003014\"\n },\n \"vehicleId\": {\n \"$numberLong\": \"1006\"\n },\n \"isPlateNumber\": false,\n \"plateNo\": \"MH43AJ411\",\n \"tag\": {\n \"tagAccountNumber\": 20000046,\n \"tagExceptions\": [\n {\n \"reasonCode\": \"TAGINACTV\",\n \"excCode\": \"03\",\n \"exceptionDate\": \"2015-09-16 14:58:34.000 +05:30\"\n }\n ]\n }\n}\n", "text": "Hello Everyone,Below is my document. Here a field by name exceptionDate is available inside the tag and tagExceptions. I want to update the datatype of the exceptionDate from String to Date.Reason for Updating: Using Zappysys tool we are migrating data from sql server to mongodb. For Datetime fields even though in sql server is date time, once after migration it is coming as string. So, we are trying to update specific field in the document to Date data type from String.Now I am looking for a query to update the Data type in mongodb or using C# to update the Data type from String to Date.Please let me how to fix this issue. Thank you in Advance.", "username": "Amarendra_Krishna" }, { "code": "db.uptype.update(\n {\"tag.tagExceptions.exceptionDate\":{$type:\"string\"}}, \n [{$set:{\"tag.tagExceptions\":{$map:{\n input:\"$tag.tagExceptions\", \n in:{$cond:{\n if:{$eq:[\"string\",{$type:\"$$this.exceptionDate\"}]}, \n then:{$mergeObjects:[\"$$this\", {exceptionDate:{$toDate:\"$$this.exceptionDate\"}}]}, \n else:\"$$this\"\n }}\n }}}}])\n", "text": "Hi there,It’s possible to do this using pipeline updates, I’ll show what it looks like in JS/mongo shell and you can translate it to C# then:Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Update the DataType for a field inside the nested array embedded document
2022-09-12T10:13:25.952Z
Update the DataType for a field inside the nested array embedded document
2,776
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "let logs = await Promise.all(\n tpes.map(async (item) => {\n console.log(item);\n for (let index = 0; index <= item.length; index++) {\n return await this.logModel.aggregate([\n //find({ terminalId: item }).model.\n { $match: { terminalId: item[index] } },\n {\n $group: {\n _id: '$outcome',\n value: {\n $sum: 1,\n },\n },\n },\n {\n $project: {\n name: '$_id',\n value: 1,\n _id: 0,\n },\n },\n ]);\n }\n }),\n );\n console.log('logs', logs);\n return logs.flat();\n", "text": "I want to match data with specific Id each time\nNB: item = [‘id1’,‘id2’]\nI just got the first iteration means it matchs the id1 and it stops any solution or alternative to have loop through the array and get all matchs ?\nI know that I can use find() method but i need the aggregation to make group by and sum after the opertation", "username": "skander_lassoued" }, { "code": "", "text": "You get only the first one because you call return.", "username": "steevej" }, { "code": "", "text": "How can I fix it please ??", "username": "skander_lassoued" }, { "code": "", "text": "If you want to process all the elements of your array with your for loop do not call return inside your for loop.See return - JavaScript | MDN", "username": "steevej" }, { "code": "$matchfind", "text": "$match uses the same syntax as find so I’m a bit confused by your comment that you can use find for this but want to use aggregation.Can you explain exactly what documents you want to match (and then group)? It’s not really clear to me from this code.Asya", "username": "Asya_Kamsky" }, { "code": " {\n $match: {\n terminalId: { $in: item },\n \n },\n },\n", "text": "SOLVED !\nI want to use aggregation for reason to use $group, find() doesn’t allows me to use $group\nSolution : In case any one wants to match item in array just use $inMy fault was the $match once find the matched data will stop automatically but i want to match all item in the array , so mongoose offer to us $in that alows us to match every item in array and stops when it matchs all data in array ", "username": "skander_lassoued" }, { "code": "", "text": "mongoose offer to us $inActually that’s a standard MongoDB query operator, nothing mongoose specific.Glad you were able to work it out.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to use index in $match aggregation
2022-08-07T11:58:38.484Z
How to use index in $match aggregation
2,900
null
[ "aggregation", "queries", "node-js", "atlas-triggers" ]
[ { "code": "{\n \"_id\": 1,\n \"data\": {\n \"key1\": \"value1\",\n \"key2\": \"value2\"\n },\n \"updatedAt\": \"2022-08-16T09:00:09.681+00:00\"\n}\n{\n \"_id\": 1,\n \"data\": \"{\\\"key1\\\": \\\"value1\\\", \\\"key2\\\": \\\"value2\\\"}\",\n \"updatedAt\": \"2022-08-16T09:00:09.681+00:00\"\n}\nexports = function () {\n const pipeline = [\n {\n $match: {\n \"updatedAt\": {\n $gt: new Date(Date.now() - 300 * 1000),\n $lt: new Date(Date.now())\n }\n }\n },\n {\n $addFields: {\n \"test\": {\n $cond: {\n if: {\n $eq: [{$type: \"$data\"}, 'object']\n },\n then: JSON.stringify($data),\n else: \"$data\"\n }\n }\n }\n },\n {\n $project: {\n \"_id\": 1, \n \"test\": 1\n }\n }, {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"bucket\",\n \"region\": \"region\",\n \"filename\":\"filename\",\n \"format\": {\n \"name\": \"parquet\",\n \"maxFileSize\": \"10GB\",\n \"maxRowGroupSize\": \"100MB\" \n }\n }\n }\n }\n ];\n return events.aggregate(pipeline);\n}; \n", "text": "Hello, I am currently trying to export a collection to S3 with a Trigger function.\nI would like to know if it’s possible to create a string from an object (dictionary) field.Example of document:Example of outputThis is some sample of code I tried (JSON.stringify($data) is not working)", "username": "Adrien_Philippe" }, { "code": "$addFields$function", "text": "You can’t use a JS function inside $addFields because it expects aggregation expressions (you should be able to do it using $function though running JS inside the server is usually inefficient).Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert a JSON field to JSON string
2022-09-13T16:59:05.900Z
Convert a JSON field to JSON string
5,750
null
[]
[ { "code": "date +%Y-%m-%d -d \"yesterday\"", "text": "#!/bin/bash\nlast_date=date +%Y-%m-%d -d \"yesterday\"\nmongoexport --host=10.234.242.29 --port=27018 --username=dba --password=MRKLg2p2+rByvcRrVmXRTIYk3HVO68AkMHys+ojaRYeqVzOQSdc56M4= --authenticationDatabase=admin --collection=Purchases --db=ProdSettlementCT --out=out.csv --query ‘{\"$and\" : [{ “CreationDate” : {\"$gte\": {\"$date\":\"$last_date\"}}}, {“Customer.SiteId” : {\"$in\" : [20017,297]}}]}’ --verbose=10 --fields=CreationDate,SQLTicketId,Customer.SiteId --type=csvI need to export these fields (CreationDate,SQLTicketId,Customer.SiteId) for last calendar day but wil no success. Please help!!!", "username": "Teodor_Chakalov" }, { "code": "", "text": "Could you please give me any update?", "username": "Teodor_Chakalov" }, { "code": "", "text": "{\"$and\" : [{“CreationDate”:{ “$gte” :{\"$date\":“2021-10-06T00:00:00Z”}}},{“CreationDate”:{\"$lt\" : {\"$date\":“2021-10-07T00:00:00Z”}}},{“Customer.SiteId” : {\"$in\" : [20017,297]}}]this is working but I have to change the date every day. I need to take only the last calendar date. Please assist?", "username": "Teodor_Chakalov" }, { "code": "\"$last_date\"\"'$last_date'\"", "text": "Now I understand the issue.It is all about bash variables. The tricky part is to have $last_date to be a bash variable and to NOT have all others like $gte, $date to be variables. You have to play with the quotes. I am pretty sure that if you put single quotes around $last_date it will work. But you still want the double quotes so the current double quotes around $last_date have to stay. To be clear you want to go from\n\"$last_date\" to \"'$last_date'\" with no space between the double and single quotes (because you want --eval to be a single argument). The single quote just before $last_date will terminate the first single quote you have at the beginning of the query. So $last_date will be substituted by the shell. The single quote after $last_date will start a new string but still part of the single --eval argument.", "username": "steevej" }, { "code": "", "text": "I am also working for the same case,Mongoexport with –query option to export for last calendar day data automatically is not working.mongoexport --host= --port=27018 --username=user --password= --db=Alteryx --collection=auditEvents --query=\"{‘Timestamp’:{’$gte’:{’$date’:’$last_date’}}}\" --out=D:\\data1.csvMongoexport with –query option to export for last calendar day data automatically is not working, Could you please help us with the correct format with automatic querying option.Can anybody help me for the issue ?", "username": "Latha_Karthikeyan" }, { "code": "", "text": "I am also working for the same casePlease apply the same solution. Make sure $last_date is interpreted as a shell variable and that $gte and $date are not. This being written. I notice that you are using Windows. This means that you might want to use %last_date or %last_date% as it is how, I think, Windows identifies variable.", "username": "steevej" }, { "code": "", "text": "@Latha_Karthikeyan, any update on that for Windows?@Teodor_Chakalov, if my post helped pleased mark it as the solution so that the thread can be closed and others know they can follow the suggestion.", "username": "steevej" }, { "code": "", "text": "Hi @steevej,No it didnt word, i am using batch script and i tried environment variable %DATE% but it throws error.\n2021-11-25T08:52:19.210+0000 error validating settings: parsing time “Thu 11/25/2021” as “2006-01-02T15:04:05Z07:00”: cannot parse “Thu 11/25/2021” as “2006”\n2021-11-25T08:52:19.211+0000 try ‘mongoexport --help’ for more informationHope it is not taking the date as the required format. But not sure how to resolve this. If you could help for this it would be great.", "username": "Latha_Karthikeyan" }, { "code": "", "text": "You cannot used Windows default %DATE% since the format, as you have notice, is not appropriate.You need to initialize your own variable with the date in the appropriate format.You may use Get-Date (Microsoft.PowerShell.Utility) - PowerShell | Microsoft Learn to get the date in the appropriate format.", "username": "steevej" }, { "code": "", "text": "Hi Steeve,Somehow i found batch command to fetch current date, but not i have issue is if i give query like --query=\"{‘Timestamp’:{’$gte’:{’$date’:’$PresentData&Time’}}}\" here it is $gte so there will not be any entry in database and if i give $lte it will fetch all the data from scratch.So my requirement is like it should fetch the data from 12:01AM to 11.59PM, hope you understand.Kindly help me how i can i accomplish this.", "username": "Latha_Karthikeyan" }, { "code": "", "text": "As already mentioned the $Variable is not the way to access variables in Windows. See", "username": "steevej" }, { "code": "", "text": "@Latha_Karthikeyan, we you able to pass your variable using Windows syntax?If you did, mark on the post as the solution so that others know they can following same advice.", "username": "steevej" }, { "code": "", "text": "@steevejSET MONGOEXPORT=\"%ProgramFiles%\\Alteryx\\bin\\mongoexport.exe\"\nFOR /f %%a IN (‘WMIC OS GET LocalDateTime ^| FIND “.”’) DO SET DTS=%%a\nSET Date=%DTS:~0,4%-%DTS:~4,2%-%DTS:~6,2%\nSET Time=%DTS:~8,2%:%DTS:~10,2%:%DTS:~12,2%%DTS:~14,4%Z\nSET /a tztemp=%DTS:~21%/60\nSET tzone=UTC%tztemp%\nSET DateTime=%DTS:~0,4%%DTS:~4,2%%DTS:~6,2%_%DTS:~8,2%%DTS:~10,2%%DTS:~12,2%SET JSON=\"{‘Timestamp’:{’$gte’:{’$date’:’%Date%T00:00:00.000Z’}}}\"%MONGOEXPORT% --port=27018 --username=user --password=%PS% --db=AlteryxGallery --collection=auditEvents --query=%JSON% >>%OUTPUT%AuditEvents%datetime%.csvI was using this batch script to export audit events in Mongodb4.0. now my environment is migrated to 4.2 and 4.4 . If i run the query now it s giving below error.2022-09-14T12:33:33.389+0100 connected to: mongodb://localhost:27018/\n2022-09-14T12:33:33.390+0100 Failed: error parsing query as Extended JSON: invalid JSON inputCan you clarify why and what modification should i do", "username": "k.latha_4003" } ]
Mongoexport with --query option to export some columned for last calendar day
2021-10-05T13:25:25.631Z
Mongoexport with &ndash;query option to export some columned for last calendar day
6,148
null
[ "replication" ]
[ { "code": "", "text": "I’ve tried this already 4 times and from what I can gather, it’s the same 30 collections that are not being migrated over.The source MongoDB database is a single node replica set in AWS and I have a M30 setup in Atlas. I’ve run through the Live Migration wizard and everything works. It gets to the cut off point of 00:00 for me to do the turnover. However, when I check the collections in the new database on Atlas, I noticed that 30 collections are entirely missing.I don’t know what to do at this point or what might be causing certain collections to not be migrated over. Any help on this would be most appreciated.", "username": "Dennis_Choi" }, { "code": "", "text": "Hi @Dennis_Choi - Welcome to the communityPlease contact the Atlas support team via the in-app chat to investigate any operational and billing issues related to your Atlas account. You can additionally raise a support case if you have a support subscription. The community forums are for public discussion and we cannot help with service or account / billing enquiries.Some examples of when to contact the Atlas support team:In the case of the live migration, the Atlas support team should have more insight to your Atlas destination cluster and Atlas project.Best Regards,\nJason Tran", "username": "Jason_Tran" }, { "code": "", "text": "So, I did create a case and was able to get a resolution for this. The issue was that with my source MongoDB version being 2.6.12, there is a known bug with the Live Migration with this version where only a max of 100 collections get migrated over. Knowing this, I was able to find a solution to my problem.", "username": "Dennis_Choi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Live Migration finishes but is missing 30 collections out of 130 total
2022-09-13T16:19:39.647Z
Live Migration finishes but is missing 30 collections out of 130 total
1,052
null
[ "compass" ]
[ { "code": "", "text": "Good morning i have this problem since yesterday. My codes alerts me that i am connected to the cluster i created. In my compass i passed in some data i am triyng to display on my dashboard using postman but it returns an empty array please i need some help", "username": "Chi_Samuel" }, { "code": "", "text": "Even when i created a new cluster and did every linking it stills returns an empty array when i try to get the data", "username": "Chi_Samuel" }, { "code": "", "text": "Chi unfortunately there’s not enough information here to understand what exactly happened or to encourage next steps. I recommend you open a support case with mongodb", "username": "Andrew_Davidson" } ]
Information not displayed on postman
2022-09-14T08:26:45.261Z
Information not displayed on postman
1,466
null
[]
[ { "code": "", "text": "approach to migrate mongodb using mongomirror: Setting up mongodb instace on GCE and then migrate to another on GCE onlyThis error has occured has during executing above command:\nmongomirror version: 0.12.2 git version: 9b30f320e44f90afec5c0fa792d4c003b71ead6b Go version: go1.16.7 os: linux arch: amd64 compiler: gc 2021-10-27T07:57:03.252+0000 Error initializing mongomirror: could not initialize source connection: could not connect to server: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: 27017:27017, Type: Unknown, Last error: connection() error occured during connection handshake: dial tcp 0.0.105.137:27017: i/o timeout }, ] }", "username": "Mohammed_Sahil" }, { "code": "", "text": "Could you please help us with this issue?", "username": "Mahammad_Fayiz" }, { "code": "mongomirrormongomirror", "text": "Welcome to the MongoDB Developer Community Forums @Mohammed_Sahil !mongomirror is a tool for migrating an existing replica set to a MongoDB Atlas replica set so it has some assumptions about the destination (for example, that this will be a replica set).I recommend using mongomirror if your destination is an Altas cluster, but for migrating to (or from) a more general destination I suggest using MongoPush.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks a lot Stennie Do you have any solution for the belowWhat is the best solution with No down time or less down time?Do you have any doceumetation for the same", "username": "Mahammad_Fayiz" }, { "code": "mongomirror", "text": "Hi @Mahammad_Fayiz,Your question is about a variation of non-Atlas migration, so I suggest looking into MongoPush:I recommend using mongomirror if your destination is an Altas cluster, but for migrating to (or from) a more general destination I suggest using MongoPush .MongoPush’s developer, @ken.chen, has written some great blog posts and documentation which are linked from the MongoPush download on DockerHub. Definitely test your migration in a staging environment before production.Note: this is not a commercially supported too and per the disclaimer in the Readme any usage is at your own risk.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Stennie_X We have large database (more than 10TB) on azure replica set. We want it to move to another azure replica set, can we use mongomirror. If yes, how? We tried but its not working.", "username": "Suneel_Kumar" } ]
Approach to migrate mongodb using mongomirror using 2 GCE only
2021-10-27T09:20:39.845Z
Approach to migrate mongodb using mongomirror using 2 GCE only
5,001
null
[ "queries", "node-js" ]
[ { "code": "const timeline = async (req, res) => {\n const currentUser = await users.findById(req.params.id);\n const currentUserPost = await posts.find({ userId: currentUser._id });\n\n const friendsPost = await Promise.all(\n currentUser.followings.map((f) => {\n return posts.find({ userId: f });\n })\n );\n\n const finalpost = currentUserPost.concat(...friendsPost);\n //How do I sort the finalpost array with its time timestamps?\n};\n\n", "text": "", "username": "Anthony_Ezeh" }, { "code": "", "text": "Hi @Anthony_Ezeh welcome to the community!I assume you’re using Mongoose? Could you post the schema or some example documents? It’s hard to answer your question without all the information at hand.Best regards\nKevin", "username": "kevinadi" } ]
How do I sort the final result based on the timestamps after concatenating 2 arrays?
2022-09-09T15:47:11.876Z
How do I sort the final result based on the timestamps after concatenating 2 arrays?
1,072