image_url: string (lengths 113-131)
tags: sequence
discussion: list
title: string (lengths 8-254)
created_at: string (lengths 24-24)
fancy_title: string (lengths 8-396)
views: int64 (73-422k)
https://www.mongodb.com/…c148863d8ef4.png
[ "aggregation", "queries" ]
[ { "code": "start_dateend_date{start_date: \"2022-11-18 18:00\", end_date: \"2022-11-18 19:00\"}{start_date: \"2022-11-18 09:00\", end_date: \"2022-11-18 9:45\"}", "text": "I have a booking event schema with fields start_date and end_date.\nI need to find available time slots for the specified date grouped by 30 minutes like in the screenshot.\nSo if we have the booked events with {start_date: \"2022-11-18 18:00\", end_date: \"2022-11-18 19:00\"} and {start_date: \"2022-11-18 09:00\", end_date: \"2022-11-18 9:45\"}\n9:00, 9:30, 6:00 and 6:30 slots should not be available\nimage838×338 12.7 KB\nAny ideas, guys?", "username": "Bogdan_Vovchuck" }, { "code": "", "text": "See a related thread.Your date and time should use real date and time. With strings like you have you might need to implement all the nice date library like $dateToParts, $dateAdd. $dateDiff. And string are slower to compare, take more space to store and more bandwidth to transmit.", "username": "steevej" }, { "code": "", "text": "Those fields are Date type", "username": "Bogdan_Vovchuck" }, { "code": "{start_date: \"2022-11-18 18:00\", end_date: \"2022-11-18 19:00\"}{start_date: \"2022-11-18 09:00\", end_date: \"2022-11-18 9:45\"}", "text": "The following examples are not Date type{start_date: \"2022-11-18 18:00\", end_date: \"2022-11-18 19:00\"} and {start_date: \"2022-11-18 09:00\", end_date: \"2022-11-18 9:45\"}", "username": "steevej" }, { "code": "", "text": "I know. This is just for example, the real values are Dates\n\nScreenshot 2022-10-18 at 15.41.531428×452 106 KB\n", "username": "Bogdan_Vovchuck" }, { "code": "", "text": "So, the only solution is to keep booked and not booked time slots all together", "username": "Bogdan_Vovchuck" }, { "code": "", "text": "Probably notthe only solutionbut it is the one I use because it is the simplest I could find. May be someone else can enlighten us with something else.Please share if you come out with something different.", "username": "steevej" }, { "code": "", "text": "I am working on the same kind of website, where i have to book a room, only on working days and each day with 30 minute time period from 9 am to 9 pm. user can book any timeslot but reset should be bookable by other user. I am new developer. so its tough for me.", "username": "Joshua_the_unkown_universe" } ]
Find available time spans for specified date
2022-10-18T10:15:02.666Z
Find available time spans for specified date
2,364
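The gist of steevej's advice above is to store real Date values and let the aggregation date operators do the slot math. The sketch below is an illustration only, not the poster's actual pipeline: it assumes a `bookings` collection with Date-typed `start_date`/`end_date`, generates the 30-minute slots of one day, and keeps only those that no booked event overlaps. `$dateAdd`/`$dateDiff` need MongoDB 5.0+, the collection and field names are assumptions, and it returns nothing when the day has no bookings at all, so that edge case needs separate handling.

```javascript
// mongosh sketch: free 30-minute slots for 2022-11-18 (UTC) under an assumed schema
const dayStart = ISODate("2022-11-18T00:00:00Z");
const dayEnd = ISODate("2022-11-19T00:00:00Z");

db.bookings.aggregate([
  // only events that touch the requested day
  { $match: { start_date: { $lt: dayEnd }, end_date: { $gt: dayStart } } },
  { $group: { _id: null, events: { $push: { s: "$start_date", e: "$end_date" } } } },
  { $project: {
      _id: 0,
      freeSlots: {
        $filter: {
          // every 30-minute slot start between dayStart and dayEnd
          input: {
            $map: {
              input: { $range: [0, { $toInt: { $dateDiff: { startDate: dayStart, endDate: dayEnd, unit: "minute" } } }, 30] },
              as: "m",
              in: { $dateAdd: { startDate: dayStart, unit: "minute", amount: "$$m" } }
            }
          },
          as: "slot",
          // keep a slot only when no booked event overlaps [slot, slot + 30 min)
          cond: {
            $not: [ {
              $anyElementTrue: [ {
                $map: {
                  input: "$events",
                  as: "ev",
                  in: { $and: [
                    { $lt: [ "$$ev.s", { $dateAdd: { startDate: "$$slot", unit: "minute", amount: 30 } } ] },
                    { $gt: [ "$$ev.e", "$$slot" ] }
                  ] }
                }
              } ]
            } ]
          }
        }
      }
  } }
])
```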
null
[ "graphql" ]
[ { "code": "jwtTokenStringjwtTokenString", "text": "Hi everyone,First off - Kudos to the Realm team for building a great product. I am having a CORS issue when working with the GraphQL endpoint from the browser and hoping someone can lead me in the right direction. My issue is this: (apologies for the somewhat long explanation but I wanted to be as detailed as I could)I am using a custom JWT Authentication solution, so I currently maintain all tokens for identity etc already in my app. I am able to setup a Custom JWT Authentication Authentication provider and input the corresponding JWK URI successfully in the Realm dashboard. It works fine after testing.The problem I have is then trying to pass in the JWT in the header of the post request to Realm GraphQL endpoint https://realm.mongodb.com/api/client/v2.0/app//graphql from the browser using fetch. When I do so I get the below CORS error:Access to fetch at ‘https://realm.mongodb.com/api/client/v2.0/app//graphql’ from origin ‘http://localhost:3000’ has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.Please note that I am not currently using the realm-web package as I do not want to add an additional JWT token layer (realm-web maintains their own JWT token system in local storage) to my app that already has its own. I simply want to include the JWT token in the header of a fetch request to then have Realm verify against the JWK URI setup in the Realm dashboard. The docs here Authenticate GraphQL Requests make it seem like this is entirely possible as it notes that you can pass the custom JWT trough as a jwtTokenString header using a normal http request. However, I still get the CORS error when attempting this request.I have also tried whitelisting my http://localhost:3000 in the Realm dashboard under Manage > Settings > Allowed Request Origins. Still not able to get it to work.I was able to get several less than ideal workarounds working:Using another server as a proxy - since a node.js environment doesn’t have these CORS restrictions I was able to pass the token through to another endpoint which fetched the Realm GraphQL endpoint from a server environment successfully. (Not great as that essentially creates 2 servers)I was able to use both the realm-web package and my own system to make a more complicated auth system in which the realm-web package gave me a Bearer token to use without any CORS issues. 
(Not ideal as that requires 2 token systems)Potential Solutions:\nI’m not positive on why the error is happening but I believe it may have something to do with the following that someone on the Realm team would have more context on:Are the URLS whitelisted in Allowed Request Origins mapped to the approved URLS for CORS requests?\nIs the Access-Control-Allow-Headers option on the wherever the Realm server is hosted allowing a header of jwtTokenString ?Really appreciate anyone who can help lead me in the right direction on this.Thank you!", "username": "Matt_Cunningham" }, { "code": "", "text": "Hi @Matt_Cunningham,Are you certain you use a PSOT method to run a qraphql query?Using an HTTPs API to run the query should be done via a POST method as defined here:Also you can use the Bearer method with an Access token you get from your JWT provider.I have noticed those CORS issues when trying to use other HTTP methods like “GET”.Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "jwtTokenString const result = await fetch(\n `https://myrealmgraphqlendpoint`,\n {\n method: \"POST\",\n headers: {\n jwtTokenString: user.token,\n },\n body: JSON.stringify({ query: FIND_MOVIES }),\n }\n );\n const json = await result.json();\n console.log(json);\n", "text": "Hi @Pavel_Duchovny thanks for the reply, yes I am using a POST request and I am confident I am running the right query because it works in a server environment. When I switch to the browser with the same query it gives CORS issues. I am using something like this, passing in the JWT as jwtTokenString shown here Custom JWTAnother interesting note, I found these docs Authenticate HTTP Client Requests which statesMongoDB Realm enforces non-sync rules and sync rules for all client operations. This means that requests must be made by a logged in user of your Realm app.And those docs also offer the ability to get a Client API Access Token through the login endpoint.But I am curious what that means - does that mean I must use the realm-web package or ping the Realm login endpoint to receive a Realm generated Bearer token?In my case I am still wondering if I am able to just provide my custom JWT along with all of my requests to the endpoint and not have to generate another Bearer token through the login endpoint or realm-web package. Does that make sense?", "username": "Matt_Cunningham" }, { "code": "", "text": "Hi @Matt_Cunningham,I think it should work both ways otherwise it might be a bug.You should be able to provide credentials to do both auth + query. It can be email/password but also jwt token.Out of curiosity does a bearer token with access token doesn’t yield cors errors?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": " const result = await fetch(\n `https://myrealmgraphqlendpoint`,\n {\n method: \"POST\",\n headers: {\n Authorization: \"Bearer \" + user.token, \n },\n body: JSON.stringify({ query: FIND_MOVIES }),\n }\n );\njwtTokenString{\n \"error\": \"value of 'kid' has invalid format\",\n \"link\": \"https://realm.mongodb.com/groups/5f3ab9628951c83aa903a0b0/apps/5f5d28bcdda1ce73d48eaa42/logs?co_id=5f5f9cb6a93317dab797e984\"\n}\n", "text": "@Pavel_Duchovny Yeah I think it may be a bug. 
So I tried using Bearer with my JWT from my provider (not one assigned from Realm) like so:But now I get the below error in the console, which is probably because the Bearer strategy on the Realm endpoint is expecting a Realm assigned token format, not custom JWT format which maybe is why the jwtTokenString header exists in the first place.If it is a bug, I am happy to write out any further steps to reproduce. Let me know!", "username": "Matt_Cunningham" }, { "code": "", "text": "@Matt_Cunningham,The bearer token can’t be your custom one but only realms.I asked you to test it in 2 steps get a realm access token from custom-jwt/login endpoint and use it in the bearer.Pavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,The 2 steps process works when I use the login endpoint - the login endpoint responds with an access_token and refresh_token etc. Using the provided access_token I am able to to query using the Bearer method successfully. While that process works and gives no CORS errors, it would mean I have to deal with handling refreshes of my own custom JWTs as well as the access_token that is provided back in the payload from the Realm login endpoint. Sorry if I am being repetitive here, but as you mentioned in your earlier replyYou should be able to provide credentials to do both auth + query. It can be email/password but also jwt token.Which means there should be a scenario where I wouldn’t have to use the login endpoint at all correct?", "username": "Matt_Cunningham" }, { "code": "", "text": "Hi @Pavel_Duchovny,I’m facing the same problem. I’m using an apiKey header and I’m getting the same error @Matt_Cunningham is getting.The thing is when I try the endpoint using Postman it works but when I use Apollo client with Angular it does not!!I hope that you have any idea about whats happening. ", "username": "Hadi_Albinsaad" }, { "code": "", "text": "Hey Hadi -I’m assuming you’re running into this error because you’re calling it from the browser. Please consider using authorization headers https://docs.mongodb.com/realm/graphql/authenticate/#credential-headers", "username": "Sumedha_Mehta1" }, { "code": "", "text": "me too, having the same error\nfrom browser\nwith apiKey or authorization, getting CORS error\ni added on App Settings > Allowed request origin: localhost but not worked too", "username": "Royal_Advice" }, { "code": "", "text": "HiIs there an update for solution on this? Perhaps an example of how to update the apiKey header?Many thanks!", "username": "Hayden_Foote" }, { "code": "", "text": "The API key shouldn’t be used directly in the browser. Please authenticate with the API token like so.and then use the access token to authenticate to GQL as Pavel mentioned", "username": "Sumedha_Mehta1" }, { "code": "", "text": "I’m unable to get past CORS error (Access to fetch at … from origin … has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present…) using the browser to query the GQL endpoint. I am passing the bearer token issued by the login endpoint and the POST request works fine in Postman, including receiving the required ‘Access-Control-Allow-Origin’.However, in the browser, the request first sends the OPTIONS request to verify CORS, and the response from the server does not contain the requisite “Access-Control-Allow-Origin”. 
I can duplicate this result in Postman as well by setting the request type to OPTIONS and including all of the necessary header variables (access-control-request-headers, access-control-request-method, etc.). Any thoughts on what I might be missing?", "username": "Eric_Stimpson" }, { "code": "", "text": "Eric, can you paste the full request that you’re sending to Realm (including all the headers, body, etc)?We display an example of how to use Apollo Client with GraphQL here if you’re using GraphQL in the browser - realm-graphql-apollo-react/index.js at master · mongodb-university/realm-graphql-apollo-react · GitHub", "username": "Sumedha_Mehta1" }, { "code": "", "text": "\nUnacceptable.\nAWS, here we come", "username": "Jason_Steele" }, { "code": "", "text": "G’Day, @Jason_Steele,I acknowledge your frustration Is there something we can help you with? Could you please share your use case and what you are looking for?Could we discuss a win-win solution for you and us? Cheers, ", "username": "henna.s" }, { "code": "", "text": "I’m having the same issue, but with Vue 3 not React and trying to use a apiKey. I’m trying to find code examples of how to get it to work but i’m not finding anything. Is there any examples for realm-graphql-apollo-vue3?", "username": "Mason_Combes" }, { "code": "eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCIsImtpZCI6IldvTmpxMlR5dzZJWWhlR0FhblFHTyJ9.eyJpc3MiOiJodHRwczovL2JlZXotbW9uaXRvci5ldS5hdXRoMC5jb20vIiwic3ViIjoiYXV0aDB8NjQ2Y2MyYWI2NjdkY2JjODk3ZjYyNzEyIiwiYXVkIjoiYmVlei1hcHAtbG9laHgiLCJpYXQiOjE2ODUxODY2OTUsImV4cCI6MTY4NTI3MzA5NSwiYXpwIjoiMXBuT1o3OFRybXlkSU1yMENMZGE2Q01GS1FQVkpqZ0QifQ\n", "text": "Hi guys! I am currently considering a Mongodb Atlas stack, but this issue looks like a deal breaker:This is a token example:This is the JWKs URI: https://beez-monitor.eu.auth0.com/.well-known/jwks.jsonMy app is https://eu-central-1.aws.data.mongodb-api.com/app/beez-app-loehxIs there a workaround, or am I doing something wrong?", "username": "Valentin_Raduti" }, { "code": "", "text": "Never mind, figured it out: as @Pavel_Duchovny mentioned, I cannot use my custom JWT from Auth0, I need to exchange it first for a Mongodb Realm token, by making a request to https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/app-name-here/auth/providers/custom-token/login with the payload {“token”:} – this is sort of documented here: https://www.mongodb.com/docs/atlas/app-services/users/sessions/#authenticate-a-userYou guys could save the world a lot of time spent troubleshooting this if you would:(either of them would do )", "username": "Valentin_Raduti" }, { "code": "curl https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-xxxxx/auth/providers/anon-user/loginfetch(\"https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-xxxxxx/graphql\", {\n \"headers\": {\n \"accept\": \"*/*\",\n \"authorization\": \"Bearer eyJhbGciOiJI .... rest of the token ... 
_XbIM\",\n \"content-type\": \"application/json\",\n \"sec-ch-ua\": \"\\\"Chromium\\\";v=\\\"116\\\", \\\"Not)A;Brand\\\";v=\\\"24\\\", \\\"Google Chrome\\\";v=\\\"116\\\"\",\n \"sec-ch-ua-mobile\": \"?0\",\n \"sec-ch-ua-platform\": \"\\\"Windows\\\"\"\n },\n \"referrer\": \"http://localhost:5173/\",\n \"referrerPolicy\": \"strict-origin-when-cross-origin\",\n \"body\": \"the long introspection query\",\n \"method\": \"POST\",\n \"mode\": \"cors\",\n \"credentials\": \"include\"\n});\nfetch(\"https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-xxxxx/graphql\", {\n \"headers\": {\n \"accept\": \"*/*\",\n \"accept-language\": \"en-US,en;q=0.9\",\n \"sec-fetch-dest\": \"empty\",\n \"sec-fetch-mode\": \"cors\",\n \"sec-fetch-site\": \"cross-site\"\n },\n \"referrer\": \"http://localhost:5173/\",\n \"referrerPolicy\": \"strict-origin-when-cross-origin\",\n \"body\": null,\n \"method\": \"OPTIONS\",\n \"mode\": \"cors\",\n \"credentials\": \"omit\"\n});\nContent-Encoding: gzip\nDate: Sun, 03 Sep 2023 03:03:20 GMT\nServer: mdbws\nVary: Origin,Access-Control-Request-Method,Access-Control-Request-Headers\nX-Appservices-Request-Id: 64f3f7789874d3b8cb2bfa88\n\nX-Envoy-Decorator-Operation: baas-main.baas-prod.svc.cluster.local:8086/*\nX-Envoy-Upstream-Service-Time: 1\nX-Frame-Options: DENY\n", "text": "Has anything changed since Pawel’s answer, or is it a special case for localhost dev environment ?If I get the access_token from curl https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-xxxxx/auth/providers/anon-user/login,and then use it in graphql query:It fails with this error:Access to fetch at ‘https://us-east-1.aws.realm.mongodb.com/api/client/v2.0/app/application-0-xxxxx/graphql’ from origin ‘http://localhost:5173’ has been blocked by CORS policy: Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. If an opaque response serves your needs, set the request’s mode to ‘no-cors’ to fetch the resource with CORS disabled.The preflight request:returned 204 with following headers:", "username": "Alex_Blex" } ]
CORS issue with client side Realm GraphQL Endpoint
2020-09-13T20:37:57.965Z
CORS issue with client side Realm GraphQL Endpoint
13,200
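For anyone hitting the same wall, the flow that Pavel and Valentin converge on above boils down to two requests: first exchange your own credential (custom JWT, API key, anonymous) for a Realm access token at the app's auth provider login endpoint, then call the GraphQL endpoint with that token as a Bearer header. A browser-side sketch under those assumptions; the app ID, region, `myCustomJwt` and `FIND_MOVIES` are placeholders, not values from the thread:

```javascript
// Step 1: trade the custom JWT for a Realm access token (endpoint shape as in
// the posts above; substitute your own region and app id).
const loginRes = await fetch(
  "https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/<your-app-id>/auth/providers/custom-token/login",
  {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ token: myCustomJwt }), // JWT issued by your own provider
  }
);
const { access_token, refresh_token } = await loginRes.json();

// Step 2: query GraphQL with the Realm-issued token as a Bearer header.
const result = await fetch(
  "https://eu-central-1.aws.realm.mongodb.com/api/client/v2.0/app/<your-app-id>/graphql",
  {
    method: "POST",
    headers: {
      "content-type": "application/json",
      Authorization: `Bearer ${access_token}`,
    },
    body: JSON.stringify({ query: FIND_MOVIES }),
  }
);
console.log(await result.json());
```

As Matt notes, the access token is short-lived, so the refresh_token returned alongside it has to be used to renew the session; that extra bookkeeping is the cost of this approach compared to sending the custom JWT directly.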
null
[ "queries" ]
[ { "code": "", "text": "root@mongodb:/app# db.accounts.insert({account_id: 111333,limit: 12000, products: [“Commodity”,“Brokerage”],“last_updated”: new Date()})\nbash: syntax error near unexpected token `{account_id:’**This error is on MongoDB lab which is online lab environment **", "username": "Brijesh_Shekhda" }, { "code": "", "text": "If the labs is using a modern shell try insertOne or insertMany as insert is deprecated:", "username": "John_Sewell" }, { "code": "", "text": "From the prompt stringroot@mongodb:/app#which looks like a bash shell prompt, I assume that you are not connected to the database with mongosh.Try to connect with mongosh as instructed.", "username": "steevej" }, { "code": "", "text": "That makes a lot more sense, not sure how I missed that…but the insert comment still stands once you connect as that call will probably give a warning.", "username": "John_Sewell" }, { "code": "", "text": "not sure how I missed thatMy favorite quote since I often am a monkey.Saru mo ki kara ochiru", "username": "steevej" }, { "code": "", "text": "In reality I know how I missed that…apparently reading is indeed hard ", "username": "John_Sewell" }, { "code": "", "text": "In case you need more proof that reading is hard.", "username": "steevej" } ]
I was getting an error during the MongoDB University courses; I tried all the things but it gave the same error every time
2023-09-02T06:51:52.856Z
I was getting an error during the MongoDB University courses; I tried all the things but it gave the same error every time
480
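Putting both replies together: the command has to be typed inside mongosh rather than at the bash prompt, and insertOne is the non-deprecated form. A minimal sketch of the corrected call, with straight quotes, run after connecting with mongosh:

```javascript
// Inside mongosh, not in the bash shell
db.accounts.insertOne({
  account_id: 111333,
  limit: 12000,
  products: ["Commodity", "Brokerage"],
  last_updated: new Date()
})
```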
null
[]
[ { "code": "", "text": "Hi, an index exists here at the .deb files! However, access is denied here at the archives (.tar.gz)!It would be better if the last index had the same structure and was open like the first one", "username": "Justman10000" }, { "code": "", "text": "Hi @Justman10000These links might be what you are looking for, it will list downloads for package managers and tarballs.Current release for each major version:\nhttps://downloads.mongodb.org/current.jsonAll releases:\nhttps://downloads.mongodb.org/full.json", "username": "chris" }, { "code": "", "text": "The index would be better", "username": "Justman10000" } ]
Providing index of binary files for scripts
2023-09-01T10:35:38.227Z
Providing index of binary files for scripts
235
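Until a browsable index exists for the archives, the JSON manifests above can be flattened into a plain list of tarball URLs. A rough sketch, assuming Node 18+ for the built-in fetch; it simply scans for .tgz URL strings rather than relying on the manifest's exact JSON layout:

```javascript
// List every tarball URL mentioned in the full release manifest
const manifest = await (await fetch("https://downloads.mongodb.org/full.json")).text();
const urls = [...new Set(manifest.match(/https:\/\/[^"]+\.tgz/g) ?? [])];
console.log(urls.join("\n"));
```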
null
[]
[ { "code": "", "text": "Hi Community,we do own a germany based server hosted by Hetzner which is timing out when trying to establish a connection to our atlas instance. The connection string is correct since it works over a GCP VM and localy on our machines. I know that some of Hetzner IPs were blacklisted in the last months. Hetzner told us the problem is not on their side, is any from the Atlas team present here to check if maybe the Firewall is blocking any IPS in the range 5.75.128.0 - 5.75.255.255.Ips are whitelisted in the Network access category!Kind regards", "username": "PVSNP_N_A" }, { "code": "", "text": "If you are a commercial user of Atlas, contact the Support Portal.", "username": "Jack_Woehr" } ]
Server is not able to establish connection to mongodb atlas
2023-09-01T19:02:16.519Z
Server is not able to establish connection to mongodb atlas
252
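Before escalating to support, two quick checks from the Hetzner host can narrow this down; neither command is from the thread, they are just common sanity checks. The allowed address must be the server's public egress IP, and raw reachability of the cluster hosts on 27017 rules out an outbound block:

```bash
# Public egress IP as Atlas sees it (must appear in Network Access)
curl -s https://ifconfig.me

# TCP reachability to one of the cluster's hosts (replace the placeholder)
nc -vz <cluster-shard-host>.mongodb.net 27017
```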
null
[]
[ { "code": "", "text": "How to prune, and rename log file to a format.\nWhenever we restart mongo it is creating a new file and old one is renamed.\nwe want to rename those logs in particular format. Is there any way we can do that ?", "username": "Sri_Sai_Ram_Akam" }, { "code": "", "text": "Hi @Sri_Sai_Ram_AkamIf you you’re interested in doing custom naming of your rotated logs then you will need another tool like logrotate to manage that or send it to syslog and configure that logging how you want to.https://www.mongodb.com/docs/manual/tutorial/rotate-log-files/#log-rotation-with—logrotate-reopenExample logrotate config:", "username": "chris" } ]
How to prune and rename log files to a format
2023-08-29T08:13:25.484Z
How to prune and rename log files to a format
341
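To make the rotated-file naming concrete, the usual pattern behind chris's answer is to let logrotate own the file names while mongod only reopens its log on signal: set `systemLog.logRotate: reopen` and `systemLog.logAppend: true` in the mongod config, then hand rotation to a rule like the sketch below. The path, retention and date format are assumptions to adapt, not values from the thread.

```
# /etc/logrotate.d/mongod (sketch)
/var/log/mongodb/mongod.log {
    daily
    rotate 7
    # rotated files get a date suffix such as mongod.log-20230829
    dateext
    dateformat -%Y%m%d
    compress
    missingok
    notifempty
    sharedscripts
    postrotate
        # ask mongod to close and reopen its log file at the original path
        /bin/kill -SIGUSR1 $(pidof mongod) 2>/dev/null || true
    endscript
}
```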
null
[ "queries", "java", "kotlin" ]
[ { "code": "data class MyData(\n val myfield: String?\n)\nException in thread \"main\" org.bson.codecs.configuration.CodecConfigurationException: Unable to decode myfield for MyData data class.\n\tat org.bson.codecs.kotlin.DataClassCodec.decode(DataClassCodec.kt:92)\n\tat com.mongodb.internal.operation.CommandResultArrayCodec.decode(CommandResultArrayCodec.java:52)\n\tat com.mongodb.internal.operation.CommandResultDocumentCodec.readValue(CommandResultDocumentCodec.java:60)\n\tat org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:87)\n\tat org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:42)\n\tat org.bson.internal.LazyCodec.decode(LazyCodec.java:53)\n\tat org.bson.codecs.BsonDocumentCodec.readValue(BsonDocumentCodec.java:104)\n\tat com.mongodb.internal.operation.CommandResultDocumentCodec.readValue(CommandResultDocumentCodec.java:63)\n\tat org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:87)\n\tat org.bson.codecs.BsonDocumentCodec.decode(BsonDocumentCodec.java:42)\n\tat com.mongodb.internal.connection.ReplyMessage.<init>(ReplyMessage.java:48)\n\tat com.mongodb.internal.connection.InternalStreamConnection.getCommandResult(InternalStreamConnection.java:565)\n\tat com.mongodb.internal.connection.InternalStreamConnection.receiveCommandMessageResponse(InternalStreamConnection.java:455)\n\tat com.mongodb.internal.connection.InternalStreamConnection.sendAndReceive(InternalStreamConnection.java:370)\n\tat com.mongodb.internal.connection.UsageTrackingInternalConnection.sendAndReceive(UsageTrackingInternalConnection.java:114)\n\tat com.mongodb.internal.connection.DefaultConnectionPool$PooledConnection.sendAndReceive(DefaultConnectionPool.java:719)\n\tat com.mongodb.internal.connection.CommandProtocolImpl.execute(CommandProtocolImpl.java:76)\n\tat com.mongodb.internal.connection.DefaultServer$DefaultServerProtocolExecutor.execute(DefaultServer.java:203)\n\tat com.mongodb.internal.connection.DefaultServerConnection.executeProtocol(DefaultServerConnection.java:115)\n\tat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:83)\n\tat com.mongodb.internal.connection.DefaultServerConnection.command(DefaultServerConnection.java:74)\n\tat com.mongodb.internal.connection.DefaultServer$OperationCountTrackingConnection.command(DefaultServer.java:287)\n\tat com.mongodb.internal.operation.CommandOperationHelper.createReadCommandAndExecute(CommandOperationHelper.java:245)\n\tat com.mongodb.internal.operation.FindOperation.lambda$execute$1(FindOperation.java:324)\n\tat com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$0(OperationHelper.java:345)\n\tat com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:370)\n\tat com.mongodb.internal.operation.OperationHelper.lambda$withSourceAndConnection$1(OperationHelper.java:344)\n\tat com.mongodb.internal.operation.OperationHelper.withSuppliedResource(OperationHelper.java:370)\n\tat com.mongodb.internal.operation.OperationHelper.withSourceAndConnection(OperationHelper.java:343)\n\tat com.mongodb.internal.operation.FindOperation.lambda$execute$2(FindOperation.java:321)\n\tat com.mongodb.internal.operation.CommandOperationHelper.lambda$decorateReadWithRetries$3(CommandOperationHelper.java:192)\n\tat com.mongodb.internal.async.function.RetryingSyncSupplier.get(RetryingSyncSupplier.java:67)\n\tat com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:332)\n\tat 
com.mongodb.internal.operation.FindOperation.execute(FindOperation.java:72)\n\tat com.mongodb.client.internal.MongoClientDelegate$DelegateOperationExecutor.execute(MongoClientDelegate.java:153)\n\tat com.mongodb.client.internal.FindIterableImpl.first(FindIterableImpl.java:213)\n\tat com.mongodb.kotlin.client.MongoIterable.firstOrNull(MongoIterable.kt:38)\n\tat MainKt.main(Main.kt:20)\nCaused by: org.bson.BsonInvalidOperationException: readString can only be called when CurrentBSONType is STRING, not when CurrentBSONType is NULL.\n\tat org.bson.AbstractBsonReader.verifyBSONType(AbstractBsonReader.java:689)\n\tat org.bson.AbstractBsonReader.checkPreconditions(AbstractBsonReader.java:721)\n\tat org.bson.AbstractBsonReader.readString(AbstractBsonReader.java:456)\n\tat org.bson.codecs.StringCodec.decode(StringCodec.java:80)\n\tat org.bson.codecs.StringCodec.decode(StringCodec.java:31)\n\tat org.bson.codecs.DecoderContext.decodeWithChildContext(DecoderContext.java:96)\n\tat org.bson.codecs.kotlin.DataClassCodec.decode(DataClassCodec.kt:90)\n\t... 37 more\ndata class MyData(\n val myfield: String?\n)\n\nfun main(args: Array<String>) {\n val uri = \"XXXXXXXXX\"\n val databaseName = \"YYYYYYYY\"\n val collectionName = \"ZZZZZZZ\"\n\n val mongoClient = MongoClient.create(uri)\n val db = mongoClient.getDatabase(databaseName)\n\n val collection = db.getCollection<MyData>(collectionName)\n val doc = collection.find().firstOrNull()\n if (doc != null) {\n println(doc)\n } else {\n println(\"No matching documents found.\")\n }\n\n mongoClient.close()\n}\nplugins {\n kotlin(\"jvm\") version \"1.9.10\"\n application\n}\n\ngroup = \"org.example\"\nversion = \"1.0-SNAPSHOT\"\n\nrepositories {\n mavenCentral()\n}\n\ndependencies {\n testImplementation(kotlin(\"test\"))\n implementation(\"org.mongodb:mongodb-driver-kotlin-sync:4.10.2\")\n //implementation(\"org.mongodb:mongodb-driver-kotlin-coroutine:4.10.1\")\n}\n\ntasks.test {\n useJUnitPlatform()\n}\n\nkotlin {\n jvmToolchain(8)\n}\n\napplication {\n mainClass.set(\"MainKt\")\n}\n", "text": "Hello,I’m using Kotlin driver => org.mongodb:mongodb-driver-kotlin-sync:4.10.2\n(the same issue exists in org.mongodb:mongodb-driver-kotlin-coroutine:4.10.1 too)I have a data class with a nullable String fieldWhen I read the value from the DB, here are what is printed on console :if myfield does NOT exist in the DB => it successfully prints “MyData(myfield=null)”if myfield exists and contains “myvalue” => it successfully prints “MyData(myfield=myvalue)”if myfield exists and contains null value (BSONType is NULL) => I would expect that it prints “MyData(myfield=null)” but it is not!? It throws the BsonInvalidOperationException “readString can only be called when CurrentBSONType is STRING, not when CurrentBSONType is NULL.”\nCould you please tell me what’s wrong? Thanks!Here is a very simple code to reproduce (replace XXX, YYY, ZZZ with your values)If needed, here is my build.gradle.ktsThanks for your help!", "username": "Sebastien_Perochon" }, { "code": "", "text": "I have created the corresponding issue in JIRA:\nhttps://jira.mongodb.org/browse/JAVA-5134", "username": "Sebastien_Perochon" } ]
Data class with nullable type throw BsonInvalidOperationException when value is explicitly null in the DB
2023-08-30T07:38:22.401Z
Data class with nullable type throw BsonInvalidOperationException when value is explicitly null in the DB
464
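Until JAVA-5134 is resolved, one possible data-side workaround (an assumption on my part, not something suggested in the thread) follows from the observation above that a missing field decodes to null fine while an explicit BSON null throws: unsetting the explicit nulls lets the data class codec work. For example, in mongosh, reusing the placeholder collection name from the post:

```javascript
// Replace explicit nulls with absent fields so the Kotlin data class codec
// decodes myfield as null instead of throwing
db.ZZZZZZZ.updateMany(
  { myfield: { $type: "null" } },
  { $unset: { myfield: "" } }
)
```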
null
[ "compass" ]
[ { "code": "", "text": "When connecting to monodb compass this error: connect ETIMEDOUT 3.126.58.56:27017\nLooked at the previous topics and did not help. I checked the current IP and it’s ok.\nPlease help", "username": "Gred_Gredjin" }, { "code": "", "text": "Try to change the Atlas network setting to allow from anywhere.checked the current IP and it’s ok.The IP that you have to allow access is not necessarily the IP of the machine you are using. With VPN, NAT routers you need to specify the public IP. Start by allowing from anywhere then if it works you may make it more secure by allowing the IP address you get with https://www.whatismyip.com/", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connect with mongodb compass
2023-09-02T12:37:14.945Z
Connect with mongodb compass
307
https://www.mongodb.com/…4_2_1024x121.png
[ "serverless" ]
[ { "code": "", "text": "Can you please provide the workflow for getting to realm.\nI’m on this video, but I can’t get to Realm\nI also forgot how to get back in to the database to run queries.\nimage1298×154 31.9 KB\n", "username": "David_Brook" }, { "code": "", "text": "Hi @David_Brook welcome to the community!“Realm” was renamed to Atlas App Services recently, with basically identical functionality at this point. The video was created pre-renaming, so “Atlas” is now called “Data Services” in the interface.Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "\nimage1206×799 127 KB\nThe interface has changed a lot. It would be much more user-friendly if your company can provide an updated guide.", "username": "Ka_Lok_Tam" } ]
I can't find REALM anywhere?
2023-01-26T23:37:58.548Z
I can&rsquo;t find REALM anywhere?
1,293
null
[ "flutter" ]
[ { "code": "@RealmModel()\nclass $ProductEntity {\n @PrimaryKey()\n late int id;\n String name = '';\n\n late List<$ProductCommentEntity> comments;\n}\n\n@RealmModel()\nclass $ProductCommentEntity {\n @PrimaryKey()\n late String id;\n int productId = 0;\n int userId = 0;\n String comment = '';\n\n\n @Backlink(#comments)\n late Iterable<$ProductEntity> product;\n}\nvar products = realm\n\t.query<ProductEntity>('comments.@count > 0 SORT(name ASC)')\n\t.skip(index)\n\t.take(size)\n\t.toList();\nvar realm\n\t.query<ProductEntity>('comments.@count > 0 SORT(name ASC)')\n\t.skip(index)\n\t.take(size)\n\t.toList()\n\t.map((x) {\n\t final product = x.toModel();\n\t return ProductExperienceFull(\n\t\tproduct: product,\n\t\tcommentedByMe: x.comments.any((y) => y.userId == myUserId),\n\t );\n\t})\n\t.toList();\n\nvar realm\n\t.query<ProductEntity>('comments.@count > 0 SORT(name ASC)')\n\t.skip(index)\n\t.take(size)\n\t.toList()\n\t.map((x) {\n\t final product = x.toModel();\n\t return ProductExperienceFull(\n\t\tproduct: product,\n\t\tcommentedByMe: realm.query<ProductCommentEntity>('productId = \\$0 && userId = \\$1', [x.id, myUserId]).isNotEmpty,\n\t );\n\t})\n\t.toList();\n", "text": "Do relationship properties get populated during .toList()?Hi,We’ve recently discovered Realm for Flutter and are finding it really awesome!We are currently migrating to Realm and during the first stage we are injecting Realm db into the Data Access layer instead of http services. It means that we cannot currently use the live data feature (but plan to migrate to it during the subsequent stages) and we have to query and map all data before returning it to presentation level.Maybe you can help us understand how to query data from Realm in more optimal way.Example models:or", "username": "Andrew_M1" }, { "code": ".toListskip(index)indexuserIdproductIdskip", "text": "The .toList will create native handles for all elements in the results, and fix which object is at what index. It is one of the things I typically warn against. It won’t actually hydrate the object though. Whenever you access a property on realm object, it will read lazily from the database, so the objects are still live… that is until you map them later, which brings me to your second questionI’m a bit worried by the skip(index) in both examples. As is, it will create a handle for the first index elements, and while that is not super expensive, it is a total waste in this case. 
We should probably improve on that.Disregarding that…Well, it depends … on the length of the comment list fx.In the first your reading the userId into dart for each element in the list, but the list is already available, and the overhead of retrieving an int is very low.In the second you avoid the traversal on the dart side, but on the other hand since you have no index on productId the query engine is forced to iterate over all products.However in your example I would expect the first to be fastest, but it never hurts to meassure.That said, creating a window abstraction, that avoids skip will be a good idea… At least for now.", "username": "Kasper_Nielsen1" }, { "code": "skipRealmResultsrealm:mainrealm:kn/results-skip-performance", "text": "I have a PR in review to make skip on RealmResults efficient.- Fetch handle in _RealmResultsIterator.current lazily (it may never be called f…or a given index)\n- Support efficient skip on RealmResults\n\nThis came about due to https://www.mongodb.com/community/forums/t/do-relationship-properties-get-populated-during-tolist/242260.\n\n`skip` is useful if you wan't to do pagination for various reasons, even though realm doesn't require it. In this case to support an existing data access layer previously build on another DB.", "username": "Kasper_Nielsen1" }, { "code": "final firstComment = product.comments[0]; // <-- all the fields of firstComment are fully fetched from db\nfinal firstComment = product.comments[0]; // <-- firstComment not hydrated\nvar userId = firstComment.userId; // <-- firstComment is fully hydrated now\nproduct.comments\n .toList()\n .map(\n\t(x) => ProductCommentEntity(\n\t\tid: x.id, \t\t\t\t// <-- fetch id\n\t\tproductId: x.productId, // <-- fetch productId\n\t\tuserId: x.userId, \t\t// <-- fetch userId\n\t\tcomment: x.comment, \t// <-- fetch comment\n\t),\n )\n .toList()\n", "text": "Hi @Kasper_Nielsen1,Wow! Thanks for the detailed response and for the .skip() fix!One more question regarding the hydration.\nI thought that objects’ non-relationship/non-ListResult properties are hydrated fully when you access it by index:or if you access at least one prop:but after reading you answer I’m getting the feeling that the properties values are fetched one by one only when accessed:Can you please tell which case is true?\nI mean it does not really matter if it works fast, but it is good to understand how the technology you work with works P.S. It would be helpful to have some kind of logs when there is an actual fetch request to db backend - then it would be easier to debug and optimize such things.", "username": "Andrew_M1" }, { "code": "StringsRealm.logging.logLevel = RealmLogLevel.trace; // or what you prefer\nRealm.logger = Logger.detached(\"custom logger\")..level = RealmLogLevel.detail;\n", "text": "They are fetched property by property. That is one reason mapping from realm objects to another is not the best architecture for realm. Not only do you loose the liveness, but you also pay for the hydration of properties you may never need.The only thing that is actually cached is the exact position of the object in the database in a given version. Hence once the native handle is created, we know exactly what memory addresses to fetch all props from.Realm uses mmap, so it is really as efficient as following a native pointer + the overhead of moving the value into Dart memory. 
In general Dart FFI is very efficient, but for Strings there is slightly larger overhead since they are converted from UTF8 to Dart’s UTF16 based format.But for anything sane we are talking single-digit nanoseconds for property access.Wrt. logging you can increase the verbosity with:You can also set your own logger:You can enable very detailed logging.", "username": "Kasper_Nielsen1" }, { "code": "RealmResults.elementAtìtemExtentListView.builder", "text": "I can also recommend this issue comment from github:### What happened?\n\nI decided to write separate topic continuing (https://gith…ub.com/realm/realm-dart/issues/1133#issuecomment-1529449307) comment.\n\nSo I have been watching on why Realm was in my application stuttering. At start I was thinking it was because I converted all results from RealmResults to List, then I converted all my List to Iterable and even then I didn't get significant performance increase. Then I thought maybe it was because I converted local RealmModels to AppModels, but even after testing that was not the case. So I took most similar available NoSQL database solutions in Flutter **Realm, Isar, ObjectBox** and compared each other, I noticed interesting results. \n\nViews in files are practically the same, only Realm has no drivers.lenght because I removed it for performance testing purposes.\n\nHere are screens of app:\n\nRealm with drivers:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876242-1f51f3ea-4f54-416c-9fac-2700a0446510.png\" alt=\"screenshot\" width=\"300px\"> \nThe insert part of Realm pretty with drivers compared to others DBs, but this is not the main concern of performance.\n\nRealm without drivers:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876296-a1724a97-3d7c-413f-a836-495594a8b990.png\" alt=\"screenshot\" width=\"300px\"> \nThe insert part was logically faster, but as you can see the `all()` part on initState is pretty fast.\n\nIsar:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876424-120efcb9-7533-4627-9cdc-bafcde321eff.png\" alt=\"screenshot\" width=\"300px\"> \nOn Isar I was able to insert 100000 with drivers much faster than in Realm. But as you can see first list drivers start with 1,2,3... and so on, because you have to insert in table record first then link it with parent. Which is a huge stepdown compared to Realm at given moment. Also` isar.cars.where().findAllSync()` but thankfully you can use Future `.findFirst()` if fetching is longer than expected and you can show loading indicator for example. And 0.3s for 100000 records with drivers is reasonable performance.\n\nObjectBox:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876394-526158b7-324b-4395-9aaa-5f779fd11c02.png\" alt=\"screenshot\" width=\"300px\"> \nIt acted similar as Isar but now all drivers was loaded with `carBox.getAll()`. And also you can also you `getAllAsync()`, if you want to show loading indicator, so that UI doesn't stuck.\n\n\nSo if I compare all 3 DB solution you would think that Realm performs the best, not quite as I had stuttering initially at loading and on scrolling (Reason why I made this post). 
The real surprise was in DevTools performance Tab.\n\nHere is performance Devtools Tab while scrolling fast:\n\nRealm with drivers:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 10 07 09\" src=\"https://user-images.githubusercontent.com/25333580/235877759-9bcf447b-cf3f-433a-b959-a91ce1639c3b.png\">\nSo for 1 flutter frame it took 250ms to build listview on scroll, and those long flutter frames where consistent on scroll. As you can see in CPU Flame Chart Realm does the heaviest work while building.\n\n\n\nRealm without drivers:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 12 41 06\" src=\"https://user-images.githubusercontent.com/25333580/235882862-87ec5954-da74-4039-ba44-348dd28cccd8.png\">\nWell I thought maybe it's because of 50 drivers, so I got rid of them and... no. Still I had around 250ms. And again Realm did the heaviest work.\n\nIsar:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 11 14 01\" src=\"https://user-images.githubusercontent.com/25333580/235877843-d8de1fe8-a0c2-4675-9885-d7a43e069c4c.png\">\nLooking at Isar it took only 10ms compared to Realm 250ms. And the smoothness of scrolling was so much more better, as you can see in performance tab. You can also see that Isar doesn't do heavy work while building widgets, and I could easily even convert to AppModel.\n\nObjectBox:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 11 17 20\" src=\"https://user-images.githubusercontent.com/25333580/235877877-a03be26a-4c6d-4671-9066-46e932be06b3.png\">\nObjectBox build widgets in similar fashion as Isar, and frames were also consistently low ms. \n\n### Repro steps\n\nSo here are my observation. While I do like Realm from API standpoint, it suffers in performance compared to other NoSQLs greatly. And there is no way to do async get all, if it freezes UI. I want to point out I would not spend time on observation and writing post if I would not like Realm as DB solution. I have used Realm in Unity project before. I like Realm but this is no go for my next project where client states that performance is mandatory, and yet I have it in my pet project and it stutters, because of Realm (looking in DevTools). I don't know why, maybe because link target system for references works better, maybe I'm doing something wrong, but I would love to hear feedback.\n\n### Version\n\nFlutter 3.7.11\n\n### What Atlas Services are you using?\n\nLocal Database only\n\n### What type of application is this?\n\nFlutter Application\n\n### Client OS and version\n\nGoogle Pixel 6a Android 13\n\n### Code snippets\n\nHere is code that I used:\n\nRealm:\n```dart\nimport 'dart:io';\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:realm/realm.dart';\n\npart 'main.g.dart';\n\n///-------------\n\n// class CarApp {\n// String make;\n// String? model;\n// int? kilometers;\n// PersonApp owner;\n// Iterable<PersonApp> drivers;\n//\n// CarApp(this.make, this.model, this.kilometers, this.owner, this.drivers);\n// }\n//\n// class PersonApp {\n// String name;\n// int age;\n//\n// PersonApp(this.name, this.age);\n// }\n//\n// ///-------------\n//\n// CarApp toCarApp(Car l) {\n// var car = CarApp(l.make, l.model, l.kilometers, toPersonApp(l.owner!), l.drivers.map((d) => toPersonApp(d)).toList());\n// return car;\n// }\n//\n// PersonApp toPersonApp(Person l) {\n// var person = PersonApp(l.name, l.age);\n// return person;\n// }\n//\n// ///-------------\n\n@RealmModel()\nclass _Car {\n late String make;\n String? model;\n int? kilometers = 500;\n _Person? 
owner;\n // late List<_Person> drivers;\n}\n\n@RealmModel()\nclass _Person {\n late String name;\n int age = 1;\n}\n\n///-------------\n\nvoid main() {\n print(\"Current PID $pid\");\n runApp(MyApp());\n}\n\nclass MyApp extends StatefulWidget {\n @override\n _MyAppState createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n late Realm realm;\n\n _MyAppState() {\n final config = Configuration.local([Car.schema, Person.schema]);\n realm = Realm(config);\n }\n\n final timer = Stopwatch();\n late final Duration initStateDuration;\n\n late Iterable<Car> cars;\n\n @override\n void initState() {\n timer.start();\n cars = realm.all<Car>();\n initStateDuration = timer.elapsed;\n\n // for (var i = 0; i <= 15000; i++) {\n // realm.write(() {\n // var drivers = List.generate(50, (index) => Person(index.toString(), age: 20));\n // print('Adding a Car to Realm.');\n // var car = realm.add(Car(\"Tesla\", owner: Person(\"John\")));\n // print(\"Updating the car's model and kilometers\");\n // car.model = \"Model 3\";\n // car.kilometers = 5000;\n //\n // print('Adding another Car to Realm.');\n // realm.add(car);\n // });\n // }\n\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('Plugin example app'),\n ),\n body: Center(\n child: Column(\n children: [\n Container(\n width: 100,\n height: 50,\n color: Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0).withOpacity(1.0),\n ),\n Text('Running initState() $initStateDuration on: ${Platform.operatingSystem}'),\n Text('\\nThere are ${cars.length} cars in the Realm.\\n'),\n Expanded(\n child: ListView.builder(\n itemCount: cars.length,\n itemBuilder: (context, i) {\n final car = cars.elementAt(i);\n final textWidget = Text('Car model \"${car.model}\" has owner ${car.owner!.name} ');\n\n return textWidget;\n },\n ),\n ),\n ElevatedButton(\n onPressed: () => setState(() {}),\n child: Text(\"Press\"),\n ),\n ],\n ),\n ),\n ),\n );\n }\n}\n```\n\nIsar:\n```dart\nimport 'dart:io';\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:isar/isar.dart';\nimport 'package:path_provider/path_provider.dart';\n\npart 'main_isar.g.dart';\n\n///-------------\n\nclass CarApp {\n String make;\n String? model;\n int? kilometers;\n PersonApp owner;\n List<PersonApp> drivers;\n\n CarApp(this.make, this.model, this.kilometers, this.owner, this.drivers);\n}\n\nclass PersonApp {\n String name;\n int age;\n\n PersonApp(this.name, this.age);\n}\n\n///-------------\n\nCarApp toCarApp(Car l) {\n var car =\n CarApp(l.make, l.model, l.kilometers, toPersonApp(l.owner.value!), l.drivers.map((d) => toPersonApp(d)).toList());\n return car;\n}\n\nPersonApp toPersonApp(Person l) {\n var person = PersonApp(l.name, l.age);\n return person;\n}\n\n///-------------\n\n@collection\nclass Car {\n Id id = Isar.autoIncrement;\n\n late String make;\n String? model;\n int? kilometers = 500;\n\n final owner = IsarLink<Person>();\n final drivers = IsarLinks<Person>();\n\n Car(this.make, this.model, this.kilometers);\n}\n\n@collection\nclass Person {\n Id id = Isar.autoIncrement;\n\n late String name;\n int age = 1;\n\n Person(this.name, this.age);\n}\n\n@collection\nclass User {\n Id id = Isar.autoIncrement; // you can also use id = null to auto increment\n\n String? name;\n\n int? 
age;\n}\n\n///-------------\n\nlate Isar isar;\n\nFuture<void> main() async {\n WidgetsFlutterBinding.ensureInitialized();\n\n isar = await IsarService.create();\n\n runApp(MyApp());\n}\n\nclass MyApp extends StatefulWidget {\n @override\n _MyAppState createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n final timer = Stopwatch();\n late final Duration initStateDuration;\n\n late Iterable<CarApp> cars;\n\n @override\n void initState() {\n timer.start();\n\n // isar.writeTxnSync(() {\n // for (var i = 0; i <= 50000; i++) {\n // var drivers = List.generate(50, (index) {\n // var person = Person(index.toString(), 20);\n // person.id = index;\n // return person;\n // });\n // var car = Car(\"Tesla\", \"Model 3\", 5000);\n // car.owner.value = Person(\"John\", 15);\n // car.drivers.addAll(drivers);\n //\n // isar.cars.putSync(car);\n // print('Adding a Car to Isar.');\n // }\n // });\n\n cars = isar.cars.where().findAllSync().map((c) => toCarApp(c));\n initStateDuration = timer.elapsed;\n\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('Plugin example app'),\n ),\n body: Center(\n child: Column(\n children: [\n Container(\n width: 100,\n height: 50,\n color: Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0).withOpacity(1.0),\n ),\n Text('Running initState() $initStateDuration on: ${Platform.operatingSystem}'),\n Text('\\nThere are ${cars.length} cars in the Isar.\\n'),\n Expanded(\n child: ListView.builder(\n itemCount: cars.length,\n itemBuilder: (context, i) {\n final car = cars.elementAt(i);\n final textWidget =\n Text('Car model \"${car.model}\" has owner ${car.owner.name} with ${car.drivers.length} drivers');\n return textWidget;\n },\n ),\n ),\n ElevatedButton(\n onPressed: () => setState(() {}),\n child: Text(\"Press\"),\n ),\n ],\n ),\n ),\n ),\n );\n }\n}\n\nclass IsarService {\n static Future<Isar> create() async {\n final dir = await getApplicationDocumentsDirectory();\n final isar = await Isar.open(\n [CarSchema, PersonSchema],\n directory: dir.path,\n );\n\n return isar;\n }\n}\n\n```\n\nObjectBox:\n```dart\nimport 'dart:io';\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:path/path.dart' as p;\nimport 'package:path_provider/path_provider.dart';\n\nimport 'objectbox.g.dart';\n\n///-------------\n\nclass CarApp {\n String make;\n String? model;\n int? kilometers;\n PersonApp owner;\n Iterable<PersonApp> drivers;\n\n CarApp(this.make, this.model, this.kilometers, this.owner, this.drivers);\n}\n\nclass PersonApp {\n String name;\n int age;\n\n PersonApp(this.name, this.age);\n}\n\n///-------------\n\nCarApp toCarApp(Car l) {\n var car = CarApp(\n l.make, l.model, l.kilometers, toPersonApp(l.owner.target!), l.drivers.map((d) => toPersonApp(d)).toList());\n return car;\n}\n\nPersonApp toPersonApp(Person l) {\n var person = PersonApp(l.name, l.age);\n return person;\n}\n\n///-------------\n\n@Entity()\nclass Car {\n @Id()\n int id = 0;\n\n Color? color;\n late String make;\n String? model;\n int? 
kilometers = 500;\n\n final owner = ToOne<Person>();\n final drivers = ToMany<Person>();\n\n Car(this.make, this.model, this.kilometers);\n}\n\n@Entity()\nclass Person {\n @Id()\n int id = 0;\n\n late String name;\n int age = 1;\n\n Person(this.name, this.age);\n}\n\n///-------------\n\nlate ObjectBox objectBox;\n\nFuture<void> main() async {\n // This is required so ObjectBox can get the application directory\n // to store the database in.\n WidgetsFlutterBinding.ensureInitialized();\n\n objectBox = await ObjectBox.create();\n\n runApp(MyApp());\n}\n\nclass MyApp extends StatefulWidget {\n @override\n _MyAppState createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n late final Box<Car> carBox;\n late final Box<Person> personBox;\n\n final timer = Stopwatch();\n late final Duration initStateDuration;\n\n late Iterable<CarApp> cars;\n\n @override\n void initState() {\n carBox = objectBox.store.box<Car>();\n personBox = objectBox.store.box<Person>();\n timer.start();\n\n // for (var i = 0; i <= 50000; i++) {\n // var drivers = List.generate(50, (index) {\n // var person = Person(index.toString(), 20);\n // return person;\n // });\n // var car = Car(\"Tesla\", \"Model 3\", 5000);\n // car.owner.target = Person(\"John\", 15);\n //\n // car.drivers.addAll(drivers);\n //\n // carBox.put(car);\n // print('Adding a Car to ObjectBox.');\n // }\n\n cars = carBox.getAll().map((c) => toCarApp(c));\n initStateDuration = timer.elapsed;\n\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('Plugin example app'),\n ),\n body: Center(\n child: Column(\n children: [\n Container(\n width: 100,\n height: 50,\n color: Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0).withOpacity(1.0),\n ),\n Text('Running initState() $initStateDuration on: ${Platform.operatingSystem}'),\n Text('\\nThere are ${cars.length} cars in the ObjectBox.\\n'),\n Expanded(\n child: ListView.builder(\n itemCount: cars.length,\n itemBuilder: (context, i) {\n final car = cars.elementAt(i);\n final textWidget =\n Text('Car model \"${car.model}\" has owner ${car.owner.name} with ${car.drivers.length} drivers');\n return textWidget;\n },\n ),\n ),\n ElevatedButton(\n onPressed: () => setState(() {}),\n child: Text(\"Press\"),\n ),\n ],\n ),\n ),\n ),\n );\n }\n}\n\nclass ObjectBox {\n /// The Store of this app.\n late final Store store;\n late final Admin admin;\n\n ObjectBox._create(this.store) {\n if (Admin.isAvailable()) {\n admin = Admin(store);\n }\n // Add any additional setup code, e.g. build queries.\n }\n\n /// Create an instance of ObjectBox to use throughout the app.\n static Future<ObjectBox> create() async {\n final docsDir = await getApplicationDocumentsDirectory();\n final path = p.join(docsDir.path, \"obx-example\");\n final store = await openStore(directory: path);\n\n return ObjectBox._create(store);\n }\n}\n```\n\n### Stacktrace of the exception/crash you're getting\n\n_No response_\n\n### Relevant log output\n\n_No response_and### What happened?\n\nI decided to write separate topic continuing (https://gith…ub.com/realm/realm-dart/issues/1133#issuecomment-1529449307) comment.\n\nSo I have been watching on why Realm was in my application stuttering. At start I was thinking it was because I converted all results from RealmResults to List, then I converted all my List to Iterable and even then I didn't get significant performance increase. 
Then I thought maybe it was because I converted local RealmModels to AppModels, but even after testing that was not the case. So I took most similar available NoSQL database solutions in Flutter **Realm, Isar, ObjectBox** and compared each other, I noticed interesting results. \n\nViews in files are practically the same, only Realm has no drivers.lenght because I removed it for performance testing purposes.\n\nHere are screens of app:\n\nRealm with drivers:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876242-1f51f3ea-4f54-416c-9fac-2700a0446510.png\" alt=\"screenshot\" width=\"300px\"> \nThe insert part of Realm pretty with drivers compared to others DBs, but this is not the main concern of performance.\n\nRealm without drivers:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876296-a1724a97-3d7c-413f-a836-495594a8b990.png\" alt=\"screenshot\" width=\"300px\"> \nThe insert part was logically faster, but as you can see the `all()` part on initState is pretty fast.\n\nIsar:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876424-120efcb9-7533-4627-9cdc-bafcde321eff.png\" alt=\"screenshot\" width=\"300px\"> \nOn Isar I was able to insert 100000 with drivers much faster than in Realm. But as you can see first list drivers start with 1,2,3... and so on, because you have to insert in table record first then link it with parent. Which is a huge stepdown compared to Realm at given moment. Also` isar.cars.where().findAllSync()` but thankfully you can use Future `.findFirst()` if fetching is longer than expected and you can show loading indicator for example. And 0.3s for 100000 records with drivers is reasonable performance.\n\nObjectBox:\n<img src=\"https://user-images.githubusercontent.com/25333580/235876394-526158b7-324b-4395-9aaa-5f779fd11c02.png\" alt=\"screenshot\" width=\"300px\"> \nIt acted similar as Isar but now all drivers was loaded with `carBox.getAll()`. And also you can also you `getAllAsync()`, if you want to show loading indicator, so that UI doesn't stuck.\n\n\nSo if I compare all 3 DB solution you would think that Realm performs the best, not quite as I had stuttering initially at loading and on scrolling (Reason why I made this post). The real surprise was in DevTools performance Tab.\n\nHere is performance Devtools Tab while scrolling fast:\n\nRealm with drivers:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 10 07 09\" src=\"https://user-images.githubusercontent.com/25333580/235877759-9bcf447b-cf3f-433a-b959-a91ce1639c3b.png\">\nSo for 1 flutter frame it took 250ms to build listview on scroll, and those long flutter frames where consistent on scroll. As you can see in CPU Flame Chart Realm does the heaviest work while building.\n\n\n\nRealm without drivers:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 12 41 06\" src=\"https://user-images.githubusercontent.com/25333580/235882862-87ec5954-da74-4039-ba44-348dd28cccd8.png\">\nWell I thought maybe it's because of 50 drivers, so I got rid of them and... no. Still I had around 250ms. And again Realm did the heaviest work.\n\nIsar:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 11 14 01\" src=\"https://user-images.githubusercontent.com/25333580/235877843-d8de1fe8-a0c2-4675-9885-d7a43e069c4c.png\">\nLooking at Isar it took only 10ms compared to Realm 250ms. And the smoothness of scrolling was so much more better, as you can see in performance tab. 
You can also see that Isar doesn't do heavy work while building widgets, and I could easily even convert to AppModel.\n\nObjectBox:\n<img width=\"500\" alt=\"Screenshot 2023-05-03 at 11 17 20\" src=\"https://user-images.githubusercontent.com/25333580/235877877-a03be26a-4c6d-4671-9066-46e932be06b3.png\">\nObjectBox build widgets in similar fashion as Isar, and frames were also consistently low ms. \n\n### Repro steps\n\nSo here are my observation. While I do like Realm from API standpoint, it suffers in performance compared to other NoSQLs greatly. And there is no way to do async get all, if it freezes UI. I want to point out I would not spend time on observation and writing post if I would not like Realm as DB solution. I have used Realm in Unity project before. I like Realm but this is no go for my next project where client states that performance is mandatory, and yet I have it in my pet project and it stutters, because of Realm (looking in DevTools). I don't know why, maybe because link target system for references works better, maybe I'm doing something wrong, but I would love to hear feedback.\n\n### Version\n\nFlutter 3.7.11\n\n### What Atlas Services are you using?\n\nLocal Database only\n\n### What type of application is this?\n\nFlutter Application\n\n### Client OS and version\n\nGoogle Pixel 6a Android 13\n\n### Code snippets\n\nHere is code that I used:\n\nRealm:\n```dart\nimport 'dart:io';\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:realm/realm.dart';\n\npart 'main.g.dart';\n\n///-------------\n\n// class CarApp {\n// String make;\n// String? model;\n// int? kilometers;\n// PersonApp owner;\n// Iterable<PersonApp> drivers;\n//\n// CarApp(this.make, this.model, this.kilometers, this.owner, this.drivers);\n// }\n//\n// class PersonApp {\n// String name;\n// int age;\n//\n// PersonApp(this.name, this.age);\n// }\n//\n// ///-------------\n//\n// CarApp toCarApp(Car l) {\n// var car = CarApp(l.make, l.model, l.kilometers, toPersonApp(l.owner!), l.drivers.map((d) => toPersonApp(d)).toList());\n// return car;\n// }\n//\n// PersonApp toPersonApp(Person l) {\n// var person = PersonApp(l.name, l.age);\n// return person;\n// }\n//\n// ///-------------\n\n@RealmModel()\nclass _Car {\n late String make;\n String? model;\n int? kilometers = 500;\n _Person? 
owner;\n // late List<_Person> drivers;\n}\n\n@RealmModel()\nclass _Person {\n late String name;\n int age = 1;\n}\n\n///-------------\n\nvoid main() {\n print(\"Current PID $pid\");\n runApp(MyApp());\n}\n\nclass MyApp extends StatefulWidget {\n @override\n _MyAppState createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n late Realm realm;\n\n _MyAppState() {\n final config = Configuration.local([Car.schema, Person.schema]);\n realm = Realm(config);\n }\n\n final timer = Stopwatch();\n late final Duration initStateDuration;\n\n late Iterable<Car> cars;\n\n @override\n void initState() {\n timer.start();\n cars = realm.all<Car>();\n initStateDuration = timer.elapsed;\n\n // for (var i = 0; i <= 15000; i++) {\n // realm.write(() {\n // var drivers = List.generate(50, (index) => Person(index.toString(), age: 20));\n // print('Adding a Car to Realm.');\n // var car = realm.add(Car(\"Tesla\", owner: Person(\"John\")));\n // print(\"Updating the car's model and kilometers\");\n // car.model = \"Model 3\";\n // car.kilometers = 5000;\n //\n // print('Adding another Car to Realm.');\n // realm.add(car);\n // });\n // }\n\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('Plugin example app'),\n ),\n body: Center(\n child: Column(\n children: [\n Container(\n width: 100,\n height: 50,\n color: Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0).withOpacity(1.0),\n ),\n Text('Running initState() $initStateDuration on: ${Platform.operatingSystem}'),\n Text('\\nThere are ${cars.length} cars in the Realm.\\n'),\n Expanded(\n child: ListView.builder(\n itemCount: cars.length,\n itemBuilder: (context, i) {\n final car = cars.elementAt(i);\n final textWidget = Text('Car model \"${car.model}\" has owner ${car.owner!.name} ');\n\n return textWidget;\n },\n ),\n ),\n ElevatedButton(\n onPressed: () => setState(() {}),\n child: Text(\"Press\"),\n ),\n ],\n ),\n ),\n ),\n );\n }\n}\n```\n\nIsar:\n```dart\nimport 'dart:io';\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:isar/isar.dart';\nimport 'package:path_provider/path_provider.dart';\n\npart 'main_isar.g.dart';\n\n///-------------\n\nclass CarApp {\n String make;\n String? model;\n int? kilometers;\n PersonApp owner;\n List<PersonApp> drivers;\n\n CarApp(this.make, this.model, this.kilometers, this.owner, this.drivers);\n}\n\nclass PersonApp {\n String name;\n int age;\n\n PersonApp(this.name, this.age);\n}\n\n///-------------\n\nCarApp toCarApp(Car l) {\n var car =\n CarApp(l.make, l.model, l.kilometers, toPersonApp(l.owner.value!), l.drivers.map((d) => toPersonApp(d)).toList());\n return car;\n}\n\nPersonApp toPersonApp(Person l) {\n var person = PersonApp(l.name, l.age);\n return person;\n}\n\n///-------------\n\n@collection\nclass Car {\n Id id = Isar.autoIncrement;\n\n late String make;\n String? model;\n int? kilometers = 500;\n\n final owner = IsarLink<Person>();\n final drivers = IsarLinks<Person>();\n\n Car(this.make, this.model, this.kilometers);\n}\n\n@collection\nclass Person {\n Id id = Isar.autoIncrement;\n\n late String name;\n int age = 1;\n\n Person(this.name, this.age);\n}\n\n@collection\nclass User {\n Id id = Isar.autoIncrement; // you can also use id = null to auto increment\n\n String? name;\n\n int? 
age;\n}\n\n///-------------\n\nlate Isar isar;\n\nFuture<void> main() async {\n WidgetsFlutterBinding.ensureInitialized();\n\n isar = await IsarService.create();\n\n runApp(MyApp());\n}\n\nclass MyApp extends StatefulWidget {\n @override\n _MyAppState createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n final timer = Stopwatch();\n late final Duration initStateDuration;\n\n late Iterable<CarApp> cars;\n\n @override\n void initState() {\n timer.start();\n\n // isar.writeTxnSync(() {\n // for (var i = 0; i <= 50000; i++) {\n // var drivers = List.generate(50, (index) {\n // var person = Person(index.toString(), 20);\n // person.id = index;\n // return person;\n // });\n // var car = Car(\"Tesla\", \"Model 3\", 5000);\n // car.owner.value = Person(\"John\", 15);\n // car.drivers.addAll(drivers);\n //\n // isar.cars.putSync(car);\n // print('Adding a Car to Isar.');\n // }\n // });\n\n cars = isar.cars.where().findAllSync().map((c) => toCarApp(c));\n initStateDuration = timer.elapsed;\n\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('Plugin example app'),\n ),\n body: Center(\n child: Column(\n children: [\n Container(\n width: 100,\n height: 50,\n color: Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0).withOpacity(1.0),\n ),\n Text('Running initState() $initStateDuration on: ${Platform.operatingSystem}'),\n Text('\\nThere are ${cars.length} cars in the Isar.\\n'),\n Expanded(\n child: ListView.builder(\n itemCount: cars.length,\n itemBuilder: (context, i) {\n final car = cars.elementAt(i);\n final textWidget =\n Text('Car model \"${car.model}\" has owner ${car.owner.name} with ${car.drivers.length} drivers');\n return textWidget;\n },\n ),\n ),\n ElevatedButton(\n onPressed: () => setState(() {}),\n child: Text(\"Press\"),\n ),\n ],\n ),\n ),\n ),\n );\n }\n}\n\nclass IsarService {\n static Future<Isar> create() async {\n final dir = await getApplicationDocumentsDirectory();\n final isar = await Isar.open(\n [CarSchema, PersonSchema],\n directory: dir.path,\n );\n\n return isar;\n }\n}\n\n```\n\nObjectBox:\n```dart\nimport 'dart:io';\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:path/path.dart' as p;\nimport 'package:path_provider/path_provider.dart';\n\nimport 'objectbox.g.dart';\n\n///-------------\n\nclass CarApp {\n String make;\n String? model;\n int? kilometers;\n PersonApp owner;\n Iterable<PersonApp> drivers;\n\n CarApp(this.make, this.model, this.kilometers, this.owner, this.drivers);\n}\n\nclass PersonApp {\n String name;\n int age;\n\n PersonApp(this.name, this.age);\n}\n\n///-------------\n\nCarApp toCarApp(Car l) {\n var car = CarApp(\n l.make, l.model, l.kilometers, toPersonApp(l.owner.target!), l.drivers.map((d) => toPersonApp(d)).toList());\n return car;\n}\n\nPersonApp toPersonApp(Person l) {\n var person = PersonApp(l.name, l.age);\n return person;\n}\n\n///-------------\n\n@Entity()\nclass Car {\n @Id()\n int id = 0;\n\n Color? color;\n late String make;\n String? model;\n int? 
kilometers = 500;\n\n final owner = ToOne<Person>();\n final drivers = ToMany<Person>();\n\n Car(this.make, this.model, this.kilometers);\n}\n\n@Entity()\nclass Person {\n @Id()\n int id = 0;\n\n late String name;\n int age = 1;\n\n Person(this.name, this.age);\n}\n\n///-------------\n\nlate ObjectBox objectBox;\n\nFuture<void> main() async {\n // This is required so ObjectBox can get the application directory\n // to store the database in.\n WidgetsFlutterBinding.ensureInitialized();\n\n objectBox = await ObjectBox.create();\n\n runApp(MyApp());\n}\n\nclass MyApp extends StatefulWidget {\n @override\n _MyAppState createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n late final Box<Car> carBox;\n late final Box<Person> personBox;\n\n final timer = Stopwatch();\n late final Duration initStateDuration;\n\n late Iterable<CarApp> cars;\n\n @override\n void initState() {\n carBox = objectBox.store.box<Car>();\n personBox = objectBox.store.box<Person>();\n timer.start();\n\n // for (var i = 0; i <= 50000; i++) {\n // var drivers = List.generate(50, (index) {\n // var person = Person(index.toString(), 20);\n // return person;\n // });\n // var car = Car(\"Tesla\", \"Model 3\", 5000);\n // car.owner.target = Person(\"John\", 15);\n //\n // car.drivers.addAll(drivers);\n //\n // carBox.put(car);\n // print('Adding a Car to ObjectBox.');\n // }\n\n cars = carBox.getAll().map((c) => toCarApp(c));\n initStateDuration = timer.elapsed;\n\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n home: Scaffold(\n appBar: AppBar(\n title: const Text('Plugin example app'),\n ),\n body: Center(\n child: Column(\n children: [\n Container(\n width: 100,\n height: 50,\n color: Color((Random().nextDouble() * 0xFFFFFF).toInt() << 0).withOpacity(1.0),\n ),\n Text('Running initState() $initStateDuration on: ${Platform.operatingSystem}'),\n Text('\\nThere are ${cars.length} cars in the ObjectBox.\\n'),\n Expanded(\n child: ListView.builder(\n itemCount: cars.length,\n itemBuilder: (context, i) {\n final car = cars.elementAt(i);\n final textWidget =\n Text('Car model \"${car.model}\" has owner ${car.owner.name} with ${car.drivers.length} drivers');\n return textWidget;\n },\n ),\n ),\n ElevatedButton(\n onPressed: () => setState(() {}),\n child: Text(\"Press\"),\n ),\n ],\n ),\n ),\n ),\n );\n }\n}\n\nclass ObjectBox {\n /// The Store of this app.\n late final Store store;\n late final Admin admin;\n\n ObjectBox._create(this.store) {\n if (Admin.isAvailable()) {\n admin = Admin(store);\n }\n // Add any additional setup code, e.g. build queries.\n }\n\n /// Create an instance of ObjectBox to use throughout the app.\n static Future<ObjectBox> create() async {\n final docsDir = await getApplicationDocumentsDirectory();\n final path = p.join(docsDir.path, \"obx-example\");\n final store = await openStore(directory: path);\n\n return ObjectBox._create(store);\n }\n}\n```\n\n### Stacktrace of the exception/crash you're getting\n\n_No response_\n\n### Relevant log output\n\n_No response_However, note that you no longer needs to do these tricks, since RealmResults.elementAt has since been implemented efficiently. 
Still, the discussion is illuminating, I think. There is also this issue, which I think does a good job of explaining why you don’t need to do pagination with Realm, and why you should be aware of itemExtent in ListView.builder when working with large lists.### Description\n\nWe have checked the Flutter docs at https://www.mongodb.com/d…ocs/realm/sdk/flutter and cannot see anything around pagination.\n\nWe have Realm collections/models where some individual collections/models have 100,000+ documents. Obviously when displaying the documents in this collection/model in the Flutter UI, there is no option but to paginate for memory/performance reasons. Is there a recommended approach for pagination in Realm? Cursor-based pagination is the preferred option (one that preserves the current Realm query/sort/etc.) if possible, as this is likely the most memory-efficient way.\n\n### How important is this improvement for you?\n\nDealbreaker", "username": "Kasper_Nielsen1" }, { "code": "Iterable.elementAtRealmResultstoListelementAt", "text": "I would also like to point out that Iterable.elementAt is implemented efficiently for RealmResults. Think about this before calling toList. Even if you need to use indexing later, you can rest assured that elementAt is O(1) instead of O(n).", "username": "Kasper_Nielsen1" }, { "code": ".query().where().toList()", "text": "Hi @Kasper_Nielsen1,Thank you for the explanation, it’s clearer now.I also appreciate the tip on log levels - it’s quite helpful!Regarding the attached issues - I actually went through them before starting the implementation. They, along with your YouTube streams on Realm, were instrumental in helping me grasp the proper use of Realm for Flutter. This was the main reason I chose Realm over other NoSQL database implementations. Live data Iterables are truly an amazing and superior concept!Our initial goal is to quickly transition our app to use offline storage by replacing the http DAL with a local database. If all goes well in production, and we find that Realm is stable enough for such a strong coupling, we can gradually rewrite the app to use Realm the right way.However, it would be immensely helpful to have more elaborate documentation on best practices, given that the core concept differs from the typical “query and map everything” or “data streams” approach that everyone is used to. It’s also not immediately clear when and why one should or shouldn’t use .query() as opposed to .where(), those .toList() gotchas, etc.In any case, thank you very much for your dedication!", "username": "Andrew_M1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Do relationship properties get populated during .toList()?
2023-08-31T15:55:06.070Z
Do relationship properties get populated during .toList()?
539
null
[ "aggregation", "queries", "crud" ]
[ { "code": "{\n \"_id\": \"F9F39JQH\",\n \"field\": {\n \"array\": [\n {\n \"itemId\": 1\n },\n {\n \"itemId\": 2\n }\n ]\n }\n}\narray{\n \"_id\": \"F9F39JQH\",\n \"field\": {\n \"array\": [\n {\n \"itemId\": 1,\n \"newField\": \"val\" <- add this field & value.\n },\n {\n \"itemId\": 2\n }\n ]\n }\n}\nconst pipelineUpdate = [\n { $addFields: { \"field.array.0\": { newField: \"val\" } } }\n];\nconst pipelineUpdate2 = [\n { $addFields: { \"field.array.0.newField\": \"val\" } }\n];\ndb.users.updateOne({ _id: \"F9F39JQH\" }, pipelineUpdate);\n{\n \"_id\": \"F9F39JQH\",\n \"field\": {\n \"array\": [\n {\n \"0\": {\n \"newField\": \"val\"\n },\n \"itemId\": 1\n },\n {\n \"0\": {\n \"newField\": \"val\"\n },\n \"itemId\": 2\n }\n ]\n }\n}\n$addFieldsTo add a field or fields to embedded documents (including documents in arrays) use the dot notation", "text": "Please consider the following data:I’m trying to add a single field inside the first element or the array.I tried executing either of the following pipeline updates:But the result is quite unexpected:$addFields creates a field “0” in all array elements instead of using the “.0” as path selector to identify the target.The documentation states: To add a field or fields to embedded documents (including documents in arrays) use the dot notationHowever I found it not to be the case here. What am I missing ?", "username": "Benjamin_Hallion" }, { "code": "db.getCollection(\"Test\").deleteMany({})\n\ndb.getCollection(\"Test\").insertMany([\n{\n _id:0,\n languages:[\n {\n itemID:1,\n itemVal:'A'\n },\n {\n itemID:2,\n itemVal:'B'\n }\n ]\n},\n{\n _id:1,\n languages:[\n {\n itemID:3,\n itemVal:'C'\n },\n {\n itemID:4,\n itemVal:'D'\n }\n ]\n},\n])\n\n\ndb.getCollection(\"Test\").updateMany(\n{\n _id:0\n},\n {\n $set:{\n 'languages.0.newVal2':'test'\n }\n }\n)\n\ndb.getCollection(\"Test\").find({})\n", "text": "Don’t pass the update statement in this case as an array but single object:Before:\nAfter:\n", "username": "John_Sewell" }, { "code": "", "text": "I’m sorry if this was not clear but my question about $addField.\nI already know about the $set instruction.\n$addField is supposed to handle this kind of update in a different manner that better fit my use case and the documentation states that I should be able to use it for my case.", "username": "Benjamin_Hallion" }, { "code": "", "text": "You could then either get the aggregate pipeline to merge back in for the update or this is the documentation for using aggregation stages in the update operation:The following page provides examples of updates with aggregation pipelines.I’ll have a play tomorrow with addFields instead see if I can get it working, unless someone else has an example to hand…", "username": "John_Sewell" }, { "code": "\"field.array.0.newField\"$mapfield.array$arrayElemAtfield.array.itemId$cond$eqitemIditemId$$this$mergeObjectsnewField$$thisconst pipelineUpdate = [\n {\n $addFields: {\n \"field.array\": {\n $map: {\n input: \"$field.array\",\n in: {\n $cond: [\n {\n $eq: [\n { $arrayElemAt: [\"$field.array.itemId\", 0] },\n \"$$this.itemId\"\n ]\n },\n { $mergeObjects: [{ newField: \"val\" }, \"$$this\"] },\n \"$$this\"\n ]\n }\n }\n }\n }\n }\n];\ndb.users.updateMany({ _id: \"F9F39JQH\" }, pipelineUpdate);\n", "text": "Hello @Benjamin_Hallion,As I can see you are using update with aggregation pipeline and it is for to handle some exceptional cases that are not handled by normal update query.So \"field.array.0.newField\" is the syntax of a normal update query, the update with 
aggregation pipeline doesn’t have the privilege of using any of the syntax from normal update query syntax.You can do something like this in an update with aggregation pipeline syntax,", "username": "turivishal" }, { "code": "dot notation", "text": "Thanks you @John_Sewell and @turivishal for your responses and your time.@turivishal wrote “the update with aggregation pipeline doesn’t have the privilege of using any of the syntax from normal update query syntax”.\nDo you have an official statement or documentation about specific dot notation differences between the aggregation and normal update pipeline ? From my point of view, this is either a bug or the documentation is not telling the whole story.", "username": "Benjamin_Hallion" }, { "code": "", "text": "Hi @Benjamin_Hallion,I appreciate your follow-up question. While I don’t find any explicit documentation that highlights the differences in dot notation between the aggregation pipeline and the normal update query syntax, it’s understandable that this can be a bit confusing.The aggregation pipeline and the normal update query syntax are indeed separate mechanisms in MongoDB, each with its own set of rules and behaviors.If you know what is the aggregation pipeline then it is easy to understand that an update with an aggregation pipeline supports an aggregation pipeline in an update query.I have already provided the required resource links in my above post.", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
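The distinction drawn in this thread fits in a short mongosh sketch. The collection name `users` and the `field.array` / `itemId` names below are taken from the example documents in the question; treat the snippet as a sketch rather than a drop-in fix. With a classic update document, as opposed to an update pipeline, dot notation and the filtered positional operator do apply to a single array element:

```javascript
// Classic (non-pipeline) update: dot notation targets one array element.
db.users.updateOne(
  { _id: "F9F39JQH" },
  { $set: { "field.array.0.newField": "val" } }
);

// Or target the element by a field value instead of a fixed index.
db.users.updateOne(
  { _id: "F9F39JQH" },
  { $set: { "field.array.$[el].newField": "val" } },
  { arrayFilters: [ { "el.itemId": 1 } ] }
);
```

The update-with-aggregation-pipeline form shown in the answer above remains the tool to reach for when the new value has to be computed from other fields of the same document.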
$addFields inside an array of objects
2023-09-01T15:54:14.416Z
$addFields inside an array of objects
525
https://www.mongodb.com/…2_2_1024x318.png
[ "unity" ]
[ { "code": "\n{\n \"collection\": \"player\",\n \"database\": \"herofishing\",\n \"roles\": [\n {\n \"name\": \"PlayerSelf\",\n \"apply_when\": {\n \"_id\": \"%%user.id\"\n },\n \"document_filters\": {\n \"write\": false,\n \"read\": true\n },\n \"fields\": {\n \"deviceUID\": {\n \"write\": true,\n \"read\": true\n },\n \"onlineState\": {\n \"write\": true,\n \"read\": true\n }\n },\n \"additional_fields\": {\n \"write\": false,\n \"read\": true\n },\n \"insert\": false,\n \"delete\": false,\n \"search\": true\n },\n {\n \"name\": \"OtherPlayer\",\n \"apply_when\": {\n \"%%user.custom_data.role\": \"OtherPlayer10\"\n },\n \"document_filters\": {\n \"write\": false,\n \"read\": true\n },\n \"read\": true,\n \"write\": false,\n \"insert\": false,\n \"delete\": false,\n \"search\": true\n },\n {\n \"name\": \"Unknown\",\n \"apply_when\": {},\n \"document_filters\": {\n \"write\": false,\n \"read\": false\n },\n \"read\": false,\n \"write\": false,\n \"insert\": false,\n \"delete\": false,\n \"search\": false\n }\n ]\n}\n\n", "text": "In Client(unity), I write document to realm and got permission error because of wrong role.\nThis is my rule\nimage1419×442 39.2 KB\n", "username": "Scoz_Auro" }, { "code": "_id\"document_filters.write\"", "text": "Hello,Based on the screenshot, it seems like you are wondering the reason that the first role wasn’t being applied to that user’s write.An important thing to note here is that roles are applied at the beginning of a sync session, before any documents have been seen. Hence, it is necessary that a role’s apply_when expression cannot reference fields in a document in order to be used in Flexible Sync. Please see the docs (Permissions with Device Sync, Sync Compatible Expressions) for more information.It looks like your first role is referencing a document field (_id); thus, this role will fail to match during role evaluation. Consequently, due to the nature of role order evaluation, the next applicable role will match and determine the set of permissions to be applied during the session. From the logs, this appears to be “OtherPlayer10” in this case. Since, that role has a value of \"document_filters.write\" set to false, then writes will be disallowed during this session.Let me know if you have any other questions,\nJonathan", "username": "Jonathan_Lee" }, { "code": "", "text": "Thank you for your reply. Just make sure. Is it correct to say that there is no way to allow players to modify their own documents directly from the client using Flexible Sync?", "username": "Scoz_Auro" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Data Access Role Rule writing field value is not working
2023-09-01T10:54:15.563Z
Data Access Role Rule writing field value is not working
415
https://www.mongodb.com/…5_2_1024x576.png
[ "student-developer-pack" ]
[ { "code": "", "text": "I have Github account with my college id (which is a outlook mail) as a primary id. I have gotten access to student developer pack and chose to get free MongoDB certification.I am unable to signup for MongoDB account to continue my learning path. I have made my outlook mail public by checking off private option in Github, but issue still persists. I am posting this inquiry in this forum with the help of a MongoDB account created with another personal gmail. Look into the error and resolve the issue.\nScreenshot (1)1920×1080 224 KB\n", "username": "Siva_Ganesh2" }, { "code": "", "text": "Hi Siva and welcome to the forums! Sorry you encountered this issue.Are you trying to sign into MongoDB for Students (MongoDB Student Pack) or MongoDB University (learn.mongodb.com)? Be aware that these accounts are separate and can have separate email addresses.If you email us at [email protected], we can take a closer look and try to resolve your issue.", "username": "Aiyana_McConnell" }, { "code": "", "text": "I am trying to sign into MongoDB University, I have already signed into MongoDB for Students from Github.\nHow to finish learning path and get free certification voucher ?", "username": "Siva_Ganesh2" }, { "code": "", "text": "Hi Siva, thanks for the additional context.I think I see the issue. If you signed into MongoDB for Students, you still need to create a MongoDB University account.A step-by-step guide to receiving your free certification voucher:If you do not have a MongoDB University (learn.mongodb.com) accountSign into MongoDB for Students (MongoDB Student Pack) with your GitHub info (you’ve done this)Create a MongoDB University account using the same email address you used to sign into MongoDB for StudentsRegister for and complete a MongoDB University learning pathYour free certification voucher will be sent to you immediately after completing a learning pathIf you already have a MongoDB University accountLog into MongoDB for Students (MongoDB Student Pack) with your GitHub infoRegister for and complete a MongoDB University learning pathIf the email addresses used for MongoDB University and MongoDB for Students (GitHub email address) are the same, your free certification voucher will be sent to you automatically upon completing a learning pathIf the email addresses for MongoDB University and MongoDB for Students (GitHub email address) are different, sign into MongoDB for Students and follow the instructions on the post-login page to receive your free certification voucherSorry if this is a bit confusing! Please follow-up if you have any additional questions.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Thanks for being patient and reaching out. Let me summarizeProblem : Unable to sign-up for MongoDB university, using Github account.Solution : So, instead of singing up using github account, i should just use the mail id ( which is in my Github account ) to create a new MongoDB university account and I will be sent free voucher.But the problem is, the mail id in my Github account is student mail id(which doesn’t end with ‘gmail.com’), it is not gmail. We can only create new MongoDB account using gmail.Let’s say i have proceeded with my personal gmail id which is different from my Github’s one to finish learning path. Would you be kind enough to tell me which ‘post login page’ were you taking about ? 
On MongoDB students pack web page, I have seen some instructions to fill a google form incase someone has not recieved voucher after they finished learning path(look the attached picture). Are you talking about the same instructions. ?Please correct me if i am wrong. Again thanks for being patient.\nScreenshot (2)1920×1080 219 KB\n", "username": "Siva_Ganesh2" }, { "code": "", "text": "Hi Siva, thank you for the additional context and the helpful video!You can absolutely create a MongoDB University account using your school email (unless your school has some sort of rule against that). You do not need to use a gmail account to create a MongoDB University account.And yes! That form is precisely what I’m referring to. It is only available to those who are successfully logged into MongoDB for Students. So yes, if you have proceeded to signup for MongoDB University with a personal gmail account you would fill that form out after you complete a MongoDB University learning path.Remember that regardless of whether or not you successfully create a MongoDB University account with your gmail email address or your GitHub email address, you must complete a MongoDB University learning path to receive the free certification voucher.", "username": "Aiyana_McConnell" }, { "code": "", "text": "Thank you for clarifying it", "username": "Siva_Ganesh2" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to sign up for MongoDB Account using GitHub Account which has student developer pack
2023-08-29T11:34:16.952Z
Unable to sign up for MongoDB Account using GitHub Account which has student developer pack
531
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "Hi Team,We have mongodb version of 6.0.9 in Cloud Atlas. We need to migrated the data to onpremies.\nInstalled same version and dowload the snapshot backup from cloud, then try to restore it using mongorestore command in Onpremise newly installed mongodb 6.0.9 version. But unable to restore the tar backup from cloud. I am struck in this point . Please advise.", "username": "Kiran_Joshy" }, { "code": "", "text": "Hi @Kiran_Joshy,\nWith the mongorestore, you have to use the mongodump necessarly.From the documentation:Or you can use this new tool:Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Backup can be logical or physical, you need to make sure the way you back up on atlas is compatible with the way to restore on on-premise server. (e.g. binary format? disk format?)", "username": "Kobe_W" }, { "code": "", "text": "Can you help on this", "username": "Kiran_Joshy" } ]
Unable to restore MongoDB Database On-Premises
2023-09-01T10:09:51.597Z
Unable to restore MongoDB Database On-Premises
357
null
[ "java", "cxx", "c-driver" ]
[ { "code": "PersonCustomerPerson person;\nMongoCollection<Person> col = db.getCollection(\"MachineData\", Person.class);\ncol.insertOne(person);\n // Creating customer object\n Customer *cust = new Customer(\"Apple\", \"California\", \"Los Angeles\", \"1 Apple Way\", \"00000\", \"Customer Name\", \"012-345-6789\", \"[email protected]\");\n\n // Grabbing the collection\n mongocxx::collection collection = db[\"Quotes\"];\n\n // Make the document to insert\n auto doc_value = bsoncxx::builder::basic::make_document(\n bsoncxx::builder::basic::kvp(\"name\", \"Name\"),\n bsoncxx::builder::basic::kvp(\"type\", \"database\"),\n bsoncxx::builder::basic::kvp(\"count\", 1),\n bsoncxx::builder::basic::kvp(\"versions\", bsoncxx::builder::basic::make_array(\"v6.0\", \"v5.0\", \"v4.4\", \"v4.2\", \"v4.0\", \"v3.6\")),\n bsoncxx::builder::basic::kvp(\"info\", bsoncxx::builder::basic::make_document(bsoncxx::builder::basic::kvp(\"x\", 203), bsoncxx::builder::basic::kvp(\"y\", 102))));\n auto doc_view = doc_value.view();\n\n collection.insert_one(doc_view);\nbsoncxx::builder::basic::kvp(\"info\", bsoncxx::builder::basic::make_document(bsoncxx::builder::basic::kvp(\"x\", 203), bsoncxx::builder::basic::kvp(\"y\", 102))));\n", "text": "Not sure if the tags, or even the topic is right but if it isn’t please let me know and I will change it.Otherwise, I have a pretty simple question. In Java, I am able to insert custom class objects such as Person, Customer, etc. into the collection by using the following codeAlthough, I cannot seem to figure out how to accomplish this within C++. Would this be more of a question for a C++ forum? Is this not how it is intended to be used being Mongo isn’t an ORDB?Currently, I have thisWhich I generally received from this link, following along with the sections. If this is possible, I’d love to be lead into the direction of how to go about it. Thanks a lot!Edit:So I should have realized this prior to posting, but I noticed thatcreates an embedded document, which is somewhat of the object I am looking for because I believe? 
Is this the only way to accomplish how I did it in Java?", "username": "bigorca54" }, { "code": "void Quote::objectifyDocument(std::optional<bsoncxx::document::value> jsonData) {\n setQuoteNumber(jsonData->view().find(\"quoteNumber\")->get_string().value);\n rep.setRepName(jsonData->view().find(\"Rep\")->get_document().view().find(\"repName\")->get_string().value);\n rep.setRepPhone(jsonData->view().find(\"Rep\")->get_document().view().find(\"repPhone\")->get_string().value);\n rep.setRepEmail(jsonData->view().find(\"Rep\")->get_document().view().find(\"repEmail\")->get_string().value);\n specs.setMachineType(jsonData->view().find(\"Specifications\")->get_document().view().find(\"type\")->get_string().value);\n specs.setTableSize(jsonData->view().find(\"Specifications\")->get_document().view().find(\"size\")->get_string().value);\n specs.setBeginningDeliveryTime(jsonData->view().find(\"Specifications\")->get_document().view().find(\"beginningDeliveryTime\")->get_string().value);\n specs.setEndingDeliveryTime(jsonData->view().find(\"Specifications\")->get_document().view().find(\"endingDeliveryTime\")->get_string().value);\n customerInfo.setCompanyName(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"companyName\")->get_string().value);\n customerInfo.setCustomerState(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerState\")->get_string().value);\n customerInfo.setCustomerCity(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerCity\")->get_string().value);\n customerInfo.setCustomerAddress(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerAddress\")->get_string().value);\n customerInfo.setCustomerZipCode(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerZipCode\")->get_string().value);\n customerInfo.setCustomerName(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerName\")->get_string().value);\n customerInfo.setCustomerPhone(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerPhone\")->get_string().value);\n customerInfo.setCustomerEmail(jsonData->view().find(\"Customer Information\")->get_document().view().find(\"customerEmail\")->get_string().value);\n}\n", "text": "Got it.I just had to make a docifyObject method within the class.Basically taking every field, and making it into an embedded document within the document itself, being there were three objects to one document in my case. Then when receiving this object from a find method, you have to put them into the class variables one by one, I called this objectifyDocument (sure there is a better name for that lol).Then with the docifyObject you basically just make a document with your object name as the field of the embedded document, and use getters to set the values of the document fields.", "username": "bigorca54" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Inserting C++ object into MongoDB collection
2023-08-29T20:21:06.823Z
Inserting C++ object into MongoDB collection
454
null
[ "aggregation" ]
[ { "code": "", "text": "The past two days I’ve been having issues with my cluster giving me “An unexpected error occurred. Please try again in a few minutes.” error, the last one prevented me from connecting to the database for over 24 hours. As soon as I was able to connect again, I ran an aggregation query which caused the issue once again, preventing me from connecting to the database.I’ve been trying to tweak and optimize my aggregation query since it’s been giving me timeout errors and out of memory errors. It would see that running these queries frequently may have caused it, but otherwise there is no additional information on the exact cause of the error nor how to prevent it from happening again.", "username": "Bouhm_N_A" }, { "code": "", "text": "i am aslo facing the same problem. did you find any solution?", "username": "Iftekhar_Salmin" } ]
MongoDB cluster becomes unavailable after aggregation query
2021-11-01T20:26:28.684Z
MongoDB cluster becomes unavailable after aggregation query
1,844
https://www.mongodb.com/…4fd9e829c30a.png
[ "data-modeling", "mumbai-mug" ]
[ { "code": "Co-Founder @ PetavueConsulting Engineer @ MongoDBProduct Head @ DronaHQTech Consultant @ Deloitte", "text": "\nMUG slide edited960×540 127 KB\nMongoDB User Group Mumbai is excited to announce its second meetup on Sep 2nd at DevX, Mumbai in association with DronaHQ. The gathering will feature three engaging presentations complete with demonstrations, a collaborative fun exercise, lunch , an opportunity to meet fellow MongoDB enthusiasts and win some exciting swag! The event aims to provide you with an overview of MongoDB’s Data Modelling, Schema Design , Vector Search and Building GUIs with MongoDBWe invite you to join us for a day filled with learning and networking! To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Event Type: In-Person\nLocation: 10th Floor, 215 Atrium, B Wing, Vijay Nagar Colony, J B Nagar, Andheri East, Mumbai, Maharashtra 400047\nijas pic800×800 186 KB\nCo-Founder @ Petavue\nroshan pic1920×2433 174 KB\nConsulting Engineer @ MongoDB\nfenil pic1920×3015 429 KB\nProduct Head @ DronaHQ\nimage640×641 66.2 KB\nTech Consultant @ Deloitte", "username": "Nilesh_32704" }, { "code": "", "text": "I have RSVPed to this event .Do we get some sort of confirmation mail ,didn’t get any email as such", "username": "Shweta_Kadam" }, { "code": "", "text": "Hey Anuj,\nWe will get back to you on this!", "username": "Nilesh_32704" }, { "code": "", "text": "Yes, Confirmed attendees will receive a separate confirmation email.", "username": "Nilesh_32704" }, { "code": "", "text": "By when will we get the mail?", "username": "Manthan_Singh" }, { "code": "", "text": "Along with MongoDB basics, what should be prerequisites for this event?", "username": "Om_Bhojane" }, { "code": "", "text": "2-3 days before the event", "username": "Nilesh_32704" }, { "code": "", "text": "havent received confirmation yet", "username": "Suraj_Chilgar" }, { "code": "", "text": "We will be rolling out confirmations in sometime", "username": "Nilesh_32704" }, { "code": "", "text": "All slots have been filled, indicating that the event is fully booked. Is there any possibility of securing a registration spot?", "username": "Vinit_Upadhyay" }, { "code": "", "text": "Hello Vinit,\nWe have already sent out the confirmation emails and registrations are closed for the event.\nStay tuned for the upcoming events.", "username": "Nilesh_32704" }, { "code": "", "text": "I have RSVPed to this event .Do we get some sort of confirmation mail ,didn’t get any email as such", "username": "Avinash_Kumar8" }, { "code": "", "text": "Hello Nilesh,\nI have done RSVP on 26th August but haven’t received any confirmation. Is there any selection criteria happened ? or there is some glitch?", "username": "Saurav_Singh10" }, { "code": "", "text": "Hello @Saurav_Singh10\nWe got a total of 130 registrations through this event page.\nWe did not find your registration in that.\nPlease check if it shows RSVPd on your end.", "username": "Nilesh_32704" }, { "code": "", "text": "Hello,\nAs I have checked my email on that day I have mistakenly done RSVP with my Sister email name Bhavika Singh with her email Id from her laptop and confirmation email was recieved on my sister email. can I attend the session with that if possible? please look into that and allow me to attend.\nThanking you!", "username": "Saurav_Singh10" }, { "code": "", "text": "@Nilesh_32704 I had rsvpd into the event. 
There was a green tick too. But I haven’t received any confirmation mail. When I registered there were 5 more seats remaining, yet I haven’t received the email.", "username": "Bhavya_Desai" }, { "code": "", "text": "Hey @Bhavya_Desai\nWe haven’t received your registrations.\nAlso, we have already sent the confirmation emails to attendees.Please stay tuned and try for future events.", "username": "Nilesh_32704" } ]
Mumbai MUG: Data Modeling, Vector Search and Building GUIs with MongoDB
2023-08-23T07:00:10.680Z
Mumbai MUG: Data Modeling, Vector Search and Building GUIs with MongoDB
2,609
null
[ "dot-net" ]
[ { "code": "", "text": "I completed the “MongoDB C# Developer Path” learning path and purchased the exam\nbut I realized that my name on certifications and exam is my social name “moment nemrat” and it is different than my formal name “Moumen Alnemrat”I emailed [email protected]‬ and [email protected] on 28-8 but didn’t get any response back!where I can get a proper help\nthanks in advance", "username": "momen_nemrat" }, { "code": "", "text": "Hello Moumen,We apologize for the delay. I was not able to locate your email request in our records.\nWe have updated your name in Examity to show as Moumen Alnemrat. Your certifications will now show your formal name as well.\nPlease reach out to [email protected] if you need further assistance.Thank you!\n~Certification Operations Team", "username": "Heather_Davis" }, { "code": "", "text": "My certificates and Examity profile are showing the right name now\nThank you", "username": "momen_nemrat" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Certificate Name Issue
2023-08-31T17:30:19.541Z
Certificate Name Issue
423
null
[ "connector-for-bi" ]
[ { "code": "", "text": "Hi,I have been testing the power bi SQL connector. It works great on a dataset level, however when I try to connect in a Power BI Dataflow or Datamart I get this error everytime;An exception occurred: The given data source kind is not supported. Data source kind: MongoDBAtlasODBC.Is this a issue with the connector? It would be good to get dataflows working with the connector as we could then use incremental refreshes, as currently powerbi with ODBC connection does a full COLSCAN on every refresh.Thanks", "username": "Colin_Mallon" }, { "code": "", "text": "Hi there @Colin_Mallon welcome to the community and thanks for posting. My name is Alexi Antonino and I am the Product Manager for Atlas SQL and the new customer Power BI Connector. I have not tried the connector with DataFlows or Datamart yet. I will try it out and see what I can figure out. I will see if I can get some time today or tomorrow to explore.Based on the error you have reported, it would indicate that Dataflow does not work with our ODBC driver. But it might just be that it can’t be accessed or something to that nature. I will let you know what I find.Best,\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "Hi Alexi, thanks for the reply. That would be greatly appreciated, lets hope it is something that is fixable ", "username": "Colin_Mallon" }, { "code": "", "text": "Hi Alexi, have you any update on this?", "username": "Colin_Mallon" }, { "code": "", "text": "@Alexi_Antonino Any ideas on why the connector isn’t working?", "username": "Alexander_Najem" }, { "code": "", "text": "Hi There - I am working on getting this to work. I have a message into MS to ask if the data in my collection already needs to be flattened as I keep getting an error: An exception occurred: We cannot convert a value of type Record to type Text.\nIt is my expectation that our connector works with Dataflows, but I need to see if it can accept our nested data as is, or some upstream transformation is necessary (based on the error I am getting, this is what I am thinking).\nI will let you know what what I find out.", "username": "Alexi_Antonino" }, { "code": "", "text": "Good News! I was able to get this to work. Let me document the steps I took and add them here for you. It might take me a bit to recreate this as I was trying a lot of different things, so give me a few hours and I will let you know how I got this working.", "username": "Alexi_Antonino" }, { "code": "", "text": "@Alexander_Najem and @Colin_Mallon\nAs mentioned in my last post, I was able to get Dataflows working with our new MongoDB Atlas SQL Connector. I will discuss with Microsoft if this approach is correct, because while it’s working, the steps I took were a bit out of the ordinary. I got it working with “Blank Query” as opposed to connecting to the database and selecting a table from the navigation list. And it might be that the navigation list (list of databases and tables) is just not supported with our connector just yet (more to come on this). Here are the steps for connection:Power BI DataFlowsRequirements:Instructionslet\nSource = MongoDBAtlasODBC.Query(“mongodb://atlassqlsandbox-rotpc.a.query.mongodb.net/Supplies?ssl=true&authSource=admin”, “Supplies”, “select * from Sales”, null)\nin\nSource", "username": "Alexi_Antonino" }, { "code": "The given data source kind is not supported. Data source kind: MongoDBAtlasODBC\n", "text": "Hi thanks for your help.I tested this out, however data still wont load. 
I get this errorI double checked that the gateway server has the correct drivers installed but still no joy.", "username": "Colin_Mallon" }, { "code": "", "text": "Hi @Colin_Mallon Have you gotten the gateway to work prior to trying it with Dataflows? I want to understand if this error is coming from the on-premise gateway or is Dataflows specific. Also, if you can tell me when you get this error (list your steps or based on the ones I provided above tell me at which point this error occurs).And finally, you may want to make sure you are running the most up to date versions of the connector and ODBC driver - you can check our download center to verify.If you’d like to share any screenshots or need to provide information that you don’t want on the public forum, feel free to email me as well: [email protected],\nAlexi", "username": "Alexi_Antonino" }, { "code": "", "text": "Hi Alexi,I have now got it working thank you!\nI had the most up to date ODBC connector but had 1.0.0 custom powerbi connecter and not 1.0.1.\nUpdating solved this.Thanks!", "username": "Colin_Mallon" }, { "code": "", "text": "@Alexi_Antonino Amazing that you responded and got this working!!! One additional question, can this connector be used with on premise MongoDB Installations?", "username": "Alexander_Najem" }, { "code": "", "text": "@Alexander_Najem This SQL Interface connector and the driver only support Atlas at this point in time. Our existing on-premise BI Connector does work with Power BI, but I don’t know if it would work with DataFlows - I can check. Our on-premise BI Connector does support the on-premise gateway though (so that is half of the equation).", "username": "Alexi_Antonino" }, { "code": "", "text": "Thanks @Alexi_Antonino We already use the existing connector for on-premise (via data flows and via on-prem gateway) it works fine, but its very kludgy, and difficult to use and maintain, we were hoping for a more modern solution (which this new connector seems to be). It would be nice if there was a unified way to work with hybrid environments so we didn’t need to learn 2 solutions, but we’ll take whatever we can get.", "username": "Alexander_Najem" }, { "code": "", "text": "@Alexander_Najem thanks for the feedback. I will definitely thank this into consideration when deciding what the eventual replacement of on-prem BI Connector looks like.", "username": "Alexi_Antonino" }, { "code": "", "text": "Thanks @Alexi_Antonino for this information, it helped me solve my issue.After finally getting the gateway setup properly with the connector, I was still getting the error message “We cannot convert a value of type Record to type Text” every time I tried to to refresh my published dataset on PowerBI cloud/service. I converted all my Power Query tables to use your method instead of using the navigation list. After re-publishing, it refreshed with no errors!", "username": "Rob_Shuter" }, { "code": "", "text": "@Rob_Shuter I’m so glad this was helpful. I need to understand why the typical connection to the db using the navigation list doesn’t work and the blank query does. Hoping to get that answered soon.Cheers!", "username": "Alexi_Antonino" } ]
Power BI Dataflow setup
2023-07-17T15:11:24.398Z
Power BI Dataflow setup
940
https://www.mongodb.com/…2_2_1024x405.png
[ "queries", "node-js", "mongoose-odm" ]
[ { "code": "const userSchema = new mongoose.Schema(\n {\n _id: mongoose.Schema.Types.ObjectId,\n createdAt: {type: Date},\n updatedAt: {type: Date},\n deletedAt: {type: Date},\n createdBy: {type: mongoose.Schema.Types.ObjectId},\n updatedBy: {type: mongoose.Schema.Types.ObjectId},\n deletedBy: {type: mongoose.Schema.Types.ObjectId},\n name: {\n type: String,\n trim: true,\n required: 'Please enter user name'\n },\n email: {\n type: String,\n trim: true,\n required: 'Please enter user email',\n lowercase: true,\n unique: true,\n match: [/^\\w+([\\.-]?\\w+)*@\\w+([\\.-]?\\w+)*(\\.\\w{2,3})+$/, 'Please fill a valid email address']\n },\n role: {\n type: mongoose.Schema.Types.ObjectId,\n required: true,\n ref: 'roles'\n }\n }\n);\nconst roleSchema = mongoose.Schema({\n _id: mongoose.Schema.Types.ObjectId,\n createdAt: {type: Date},\n updatedAt: {type: Date},\n deletedAt: {type: Date},\n createdBy: {type: mongoose.Schema.Types.ObjectId},\n updatedBy: {type: mongoose.Schema.Types.ObjectId},\n deletedBy: {type: mongoose.Schema.Types.ObjectId},\n name: {\n type: String,\n trim: true,\n required: 'Please enter a valid role name'\n }\n});\nUsers\n.find()\n.populate(\"role\")\n", "text": "Here we are, I show you my case\nI’m working with nodejs + express + mongose\nI have a Users model which has a role field which is the id of the roles collectionHere is the Roles modelTo view the users including role I used the statementThe goal is that by writing “Editor” in the search input only users who have the Editor role will be filtered and displayed.\nAny idea?\nscreen 11174×465 22.2 KB", "username": "Giorgio_Brugnone" }, { "code": ".aggregate(\n [\n {\n $lookup: \n {\n from: \"roles\",\n localField: \"role_id\",\n foreignField: \"_id\",\n as: \"role_obj\",\n pipeline: [\n {\n $project: {\n name:1\n }\n }\n ]\n }\n },\n {\n $set: \n {\n role_name: { $arrayElemAt: [ \"$role_obj.name\", 0 ] }\n }\n }\n ]\n)\n", "text": "Ok I found it an elegant solution:\nInstead using find().populate(“roel”) i had to use aggregate() with $lookup and $set", "username": "Giorgio_Brugnone" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Search and filter documents on main document fields and populated fields too
2023-08-31T13:41:14.114Z
Search and filter documents on main document fields and populated fields too
343
null
[ "queries" ]
[ { "code": "", "text": "Dear all,\nsuppose we have books collection like this\n{\n_id: “54353445kjh4j5h34j5h4”\nname: “IT”,\npages: “1200”,\nauthor: “648b162444af170e04de1020”\n},\n{\n_id: “234544jke908897sd787s”\nname: “Congo”,\npages: “1000”,\nauthor: “985urue3423osi2334ii”\n}And Author collection like this\n{\n_id: “648b162444af170e04de1020”,\nname: “Stephen King”,\n},\n{\n_id: “985urue3423osi2334ii”,\nname: \"Michael Crichton\n\",\n}I need to filter and show only Stephen King’s books\nHow can i do this?Tnx", "username": "Giorgio_Brugnone" }, { "code": "", "text": "You can use a $lookup to get the author of a book and then $match on that:/Edit to add that if this is a common query then you may want to think about embedding the author name in the books, after all how often does an author of a book change…", "username": "John_Sewell" }, { "code": "", "text": "I forgot to specify i’m working on Nodejs + Express\n$lookup and $match seem not workingI’ve tryed tu use match inside .populate but i retrive even all books at least only field author populated with Stephen King and blank on other booksi.e.\n.populate({\npath: ‘author’,\nmatch : {\nname: 'Stephen King ,\n}\n})", "username": "Giorgio_Brugnone" }, { "code": "", "text": "I assume Mongoose is also in your stack given the fragment, I don’t have much experience with Mongoose and use the raw driver instead so won’t be able to help with that.\nYou could share the source files so someone who does know can take a look.", "username": "John_Sewell" }, { "code": "", "text": "Ok tnx, i will close this topic and create a new one with the real case and some print screens\nThank you", "username": "Giorgio_Brugnone" }, { "code": "{\n $lookup: \n {\n from: \"roles\",\n localField: \"role_id\",\n foreignField: \"_id\",\n as: \"role_obj\",\n pipeline: [\n {\n $project: {\n name:1\n }\n }\n ]\n }\n },\n {\n $set: \n {\n role_name: { $arrayElemAt: [ \"$role_obj.name\", 0 ] }\n }\n },\n", "text": "After 3 days lost on web without success I found it\nI must only define a $lookup for a join than a $set to extract the filed i need for queries.\nThat’s all", "username": "Giorgio_Brugnone" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query documents by text into populated fields
2023-08-31T10:10:06.490Z
Query documents by text into populated fields
252
null
[]
[ { "code": "sudo service mongod startJob for mongod.service failed because the control process exited with error code. See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.\n", "text": "i am on centos 7 when i trying to run sudo service mongod start i got an error like this", "username": "Brijesh_Kalkani" }, { "code": "See \"systemctl status mongod.service\" and \"journalctl -xe\" for details.", "text": "The first thing to do when you get an error message that saysSee \"systemctl status mongod.service\" and \"journalctl -xe\" for details.is to see “systemctl status mongod.service” and “journalctl -xe” for details.Even us, we cannot help you without the details.", "username": "steevej" }, { "code": "", "text": "Any news on the details? If you were able to fix your issue by following the advice, please mark my post as the solution so that others follow your steps.", "username": "steevej" }, { "code": "", "text": "@Brijesh_Kalkani, please follow up on your post. Did you follow the tip I provided in my post? Did it help you find a solution? If so please mark it as the solution. This will help this forum to be efficient.", "username": "steevej" }, { "code": "", "text": "Hi,\nyou can see what are the errors, by typing in the shell:\nsystemctl status mongod.service\njournactl -xe\ntail -f /path/to/log_file/mongod.logRegards", "username": "Fabio_Ramohitaj" }, { "code": "journactl -xeJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU5: Core temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU1: Core temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU1: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU5: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU0: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU6: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU4: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU2: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU7: Package temperature/speed normal\nJan 18 15:31:33 rylan-ThinkPad-E590 kernel: CPU3: Package temperature/speed normal\nJan 18 15:31:38 rylan-ThinkPad-E590 slack.desktop[14549]: [01/18/22, 15:31:38:424] info: [RTM] (T2BRHD5EC) Processed 1 user_typing event(s) in channel(s) D02F5BU9B18 over 6.70ms\nJan 18 15:31:43 rylan-ThinkPad-E590 slack.desktop[14549]: [01/18/22, 15:31:43:710] info: [MSG-SHARED-RTM] (TC98Y5AQ3) dispatch to single store 1 msgs\nJan 18 15:31:43 rylan-ThinkPad-E590 slack.desktop[14549]: [01/18/22, 15:31:43:719] warn: (TC98Y5AQ3) Notification (message) suppressed because:\nJan 18 15:31:43 rylan-ThinkPad-E590 slack.desktop[14549]: Channel is muted\nJan 18 15:31:43 rylan-ThinkPad-E590 slack.desktop[14549]: [01/18/22, 15:31:43:720] info: [RTM] (TC98Y5AQ3) Processed 1 message:bot_message event(s) in channel(s) D02D5RQ95LL over 21.60ms\nJan 18 15:31:43 rylan-ThinkPad-E590 slack.desktop[14549]: [01/18/22, 15:31:43:780] info: [COUNTS] (TC98Y5AQ3) Updated unread_cnt for D02D5RQ95LL: 894\nJan 18 15:31:43 rylan-ThinkPad-E590 slack.desktop[14549]: [01/18/22, 15:31:43:943] info: [RTM] (T2BRHD5EC) Processed 1 user_typing event(s) in channel(s) D02F5BU9B18 over 7.30ms\nJan 18 15:31:46 rylan-ThinkPad-E590 sudo[13424]: rylan : TTY=pts/1 ; PWD=/home/rylan/Documents/physics-benchmarking-neurips2021/experiments/dominoes_redyellow_pilot ; USER=root ; COMMAND=/bin/systemctl start \nJan 18 15:31:46 
rylan-ThinkPad-E590 sudo[13424]: pam_unix(sudo:session): session opened for user root by (uid=0)\nJan 18 15:31:46 rylan-ThinkPad-E590 systemd[1]: Starting LSB: An object/document-oriented database...\n-- Subject: Unit mongodb.service has begun start-up\n-- Defined-By: systemd\n-- Support: http://www.ubuntu.com/support\n-- \n-- Unit mongodb.service has begun starting up.\nJan 18 15:31:46 rylan-ThinkPad-E590 mongodb[13427]: * Starting database mongodb\nJan 18 15:31:47 rylan-ThinkPad-E590 mongodb[13427]: ...fail!\nJan 18 15:31:47 rylan-ThinkPad-E590 systemd[1]: mongodb.service: Control process exited, code=exited status=1\nJan 18 15:31:47 rylan-ThinkPad-E590 systemd[1]: mongodb.service: Failed with result 'exit-code'.\nJan 18 15:31:47 rylan-ThinkPad-E590 systemd[1]: Failed to start LSB: An object/document-oriented database.\n-- Subject: Unit mongodb.service has failed\n-- Defined-By: systemd\n-- Support: http://www.ubuntu.com/support\n-- \n-- Unit mongodb.service has failed.\n\n", "text": "I had the same error as OP and this is the output of journactl -xe:", "username": "Rylan_Schaeffer" }, { "code": "/var/log/mongodb/mongodb.log2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] MongoDB starting : pid=6332 port=27017 dbpath=/var/lib/mongodb 64-bit host=rylan-ThinkPad-E590\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] db version v3.6.3\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] git version: 9586e557d54ef70f9ca4b43c26892cd55257e1a5\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] OpenSSL version: OpenSSL 1.1.1 11 Sep 2018\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] allocator: tcmalloc\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] modules: none\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] build environment:\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] distarch: x86_64\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] target_arch: x86_64\n2022-01-18T15:21:05.419-0800 I CONTROL [initandlisten] options: { config: \"/etc/mongodb.conf\", net: { bindIp: \"127.0.0.1\", unixDomainSocket: { pathPrefix: \"/run/mongodb\" } }, storage: { dbPath: \"/var/lib/mongodb\", journal: { enabled: true } }, systemLog: { destination: \"file\", logAppend: true, path: \"/var/log/mongodb/mongodb.log\" } }\n2022-01-18T15:21:05.420-0800 I STORAGE [initandlisten]\n2022-01-18T15:21:05.420-0800 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2022-01-18T15:21:05.420-0800 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2022-01-18T15:21:05.420-0800 I STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7429M,session_max=20000,eviction=(threads_min=4,threads_max=4),config_base=false,statistics=(fast),log=(enabled=true,archive=true,path=journal,compressor=snappy),file_manager=(close_idle_time=100000),statistics_log=(wait=0),verbose=(recovery_progress),\n2022-01-18T15:21:06.112-0800 I CONTROL [initandlisten]\n2022-01-18T15:21:06.112-0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2022-01-18T15:21:06.112-0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2022-01-18T15:21:06.112-0800 I CONTROL [initandlisten]\n2022-01-18T15:21:06.113-0800 I STORAGE [initandlisten] createCollection: admin.system.version with provided UUID: 59618846-6a71-4606-b435-ec583269d327\n2022-01-18T15:21:06.120-0800 I COMMAND [initandlisten] setting 
featureCompatibilityVersion to 3.6\n2022-01-18T15:21:06.124-0800 I STORAGE [initandlisten] createCollection: local.startup_log with generated UUID: c5e16804-4ad9-4bf1-93e7-0282622efc94\n2022-01-18T15:21:06.130-0800 I FTDC [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'\n2022-01-18T15:21:06.130-0800 I NETWORK [initandlisten] waiting for connections on port 27017\n2022-01-18T15:22:27.230-0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends\n2022-01-18T15:22:27.231-0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...\n2022-01-18T15:22:27.231-0800 I NETWORK [signalProcessingThread] removing socket file: /run/mongodb/mongodb-27017.sock\n2022-01-18T15:22:27.231-0800 I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n2022-01-18T15:22:27.232-0800 I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n2022-01-18T15:22:27.340-0800 I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n2022-01-18T15:22:27.340-0800 I CONTROL [signalProcessingThread] now exiting\n2022-01-18T15:22:27.340-0800 I CONTROL [signalProcessingThread] shutting down with code:0\n", "text": "This is my output from looking at the first few lines of /var/log/mongodb/mongodb.log:", "username": "Rylan_Schaeffer" }, { "code": "", "text": "Hey there!\nThis is what i got. Could you help me out here?\nimage919×153 19 KB\n", "username": "Mirna_Daniel" }, { "code": "", "text": "The first thing to do is to look at the log.", "username": "steevej" }, { "code": "", "text": "I could see that the logs folder from /var/log was deleted and this caused the db to shut down.\nThe below steps resolved the issue -\n- switch to root user “sudo su”\n- cd /var/log/\n- mkdir mongodb\n- chown -R mongod:mongod mongodb\n- systemctl start mongod.service", "username": "Mirna_Daniel" }, { "code": "", "text": "Hi Steeve,I’m again facing issues with MongoDB. This time its a different error. I think its the space issue.\nimage1364×670 174 KB\n", "username": "Mirna_Daniel" }, { "code": "", "text": "Check free space on your /var/log mount point\ndf -k /var/log\ndf -i /var/log\nClean unnecessary files", "username": "Ramachandra_Tummala" }, { "code": "/tmp/mongodb-27017.sockrm -rf /tmp/mongodb-27017.socksystemctl restart mongod", "text": "For me its was the permission conflict on the sock file /tmp/mongodb-27017.sock/tmp/mongodb-27017.sock was owned by root:root while mongod was not which cause permission denied error.\na quick fix was to\nrm -rf /tmp/mongodb-27017.sock\nand the rum\nsystemctl restart mongod", "username": "Christian_Augustine" } ]
Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details
2021-12-03T04:56:26.546Z
Job for mongod.service failed because the control process exited with error code. See &ldquo;systemctl status mongod.service&rdquo; and &ldquo;journalctl -xe&rdquo; for details
22,797
null
[]
[ { "code": "", "text": "I can’t start mongodb. Here are some log and config files that I could trace.\nAny help would be much appriciated. Thanks.\n\nUntitled1869×710 105 KB\n", "username": "Trung_Le_Dinh" }, { "code": "journalctl -xejournalctl -u mongod/var/log/mongo/mongod.log", "text": "14\tReturned by MongoDB applications which encounter an unrecoverable error, an uncaught exception or uncaught signal. The system exits without performing a clean shutdown.You’ll have more helpful output in journalctl journalctl -xe (as per the output from systemctl start/restart) or journalctl -u mongod possibly even in /var/log/mongo/mongod.log", "username": "chris" }, { "code": "/var/log/mongo/mongod.log", "text": "even in /var/log/mongo/mongod.logThis is definitely the first place I would check.", "username": "Doug_Duncan" } ]
MongoDB won't start!
2022-06-16T19:48:13.765Z
MongoDB won&rsquo;t start!
3,124
null
[ "queries", "compass", "mongodb-shell", "golang" ]
[ { "code": "update := bson.A{\n\tbson.D{{\"$set\", bson.D{\n\t\t{\"profile\", bson.D{\n\t\t\t{\"fieldName\", bson.D{}},\n\t\t}},\n\t}}},\n}\nInvalid $set :: caused by :: an empty object is not a valid value. Found empty object at path profile.fieldName", "text": "I’m trying to write a $set query which contain empty values.\nThe golang bson command is pretty simple:The error => Invalid $set :: caused by :: an empty object is not a valid value. Found empty object at path profile.fieldNameI’v search the internet for a solution without luck. I’ve discovered that this should work on any mongodb server since version 5.0. My server is at version 6.0The exact same query works fine when using the mongosh console inside Mongo Compass on the same server.Did I miss something ?", "username": "Benjamin_Hallion" }, { "code": "", "text": "Ok ! I found my issue, I need to remove the bson.A wrapped around my bson.D object. The bson.A cause the $set to be treated as an aggregation command, which does not support empty objects.\nSee: MongoServerError: Invalid $set :: caused by :: an empty object is not a valid value - #4 by Jason_Tran", "username": "Benjamin_Hallion" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Golang driver: $set: an empty object is not a valid value
2023-09-01T08:45:34.500Z
Golang driver: $set: an empty object is not a valid value
463
null
[ "unity" ]
[ { "code": "WaitForDownloadAsync public static async Task<Dictionary<string, object>> CallAtlasFunc(AtlasFunc _func, bool _waitSync, Dictionary<string, object> _data) {\n\n string jsonResult = null;\n if (_data == null) jsonResult = await MyApp.CurrentUser.Functions.CallAsync<string>(_func.ToString());\n else jsonResult = await MyApp.CurrentUser.Functions.CallAsync<string>(_func.ToString(), _data);\n try {\n //WriteLog.LogColorFormat(\"jsonResult: {0}\", WriteLog.LogType.Realm, jsonResult);\n var dataDic = HandleReplyData(jsonResult);\n if (_waitSync) await MyRealm.SyncSession.WaitForDownloadAsync(); //<----- I use WaitForDownloadAsync to make sure downalod finished but it's not working.\n //dataDic.Log();\n return dataDic;\n } catch (Exception _e) {\n WriteLog.LogError(\"CallAtlasFunc Error: \" + _e);\n return null;\n }\n\n } \n", "text": "In client(Unity), I am calling an Atlas function to create a player’s document. However, the Unity client is not able to retrieve the newly created document immediately after the Atlas function finishes writing it. I have tried using WaitForDownloadAsync to ensure that the download is complete, but it doesn’t seem to be working. Here is the code snippet for my Atlas functionThis is my atlas functionI use WaitForDownloadAsync to make sure downalod finished but it’s not working.\nSo I can’t make sure I got the newest player data.\nvar player = MyRealm.Find(MyApp.CurrentUser.Id);// <–player’s data may be old", "username": "Scoz_Auro" }, { "code": "WaitForDownloadAsyncasync Task InsertSomeData(Dictionary<string, object> data)\n{\n // Assuming the function returns an id here, otherwise you can read it from the data or\n // somewhere else depending on your app needs\n var insertedId = await MyApp.CurrentUser.Functions.CallAsync<string>(\"insertFunc\", data);\n\n var tcs = new TaskCompletionSource();\n using var token = realm.All<UserData>().Where(d => d.Id == instertedId).SubscribeForNotifications((sender, changes) =>\n {\n // When we see the item inserted, we resolve the task completion source\n // and return from the function.\n if (sender.Count > 0)\n {\n tcs.TrySetResult();\n }\n });\n\n await tcs.Task;\n}\nUpdatedAtvar updatedAt = DateTimeOffset.UtcNow;\ndata[\"updatedAt\"] = updatedAt;\nvar updatedId = await MyApp.CurrentUser.Functions.CallAsync<string>(\"insertFunc\", data);\nusing var token = realm.All<UserData>().Where(d => d.Id == updatedId && d.UpdateAt >= updatedAt).SubscribeForNotifications(...);\n", "text": "The issue here is caused by the fact that data from Atlas is asynchronously picked up by Atlas Device Sync, so there’s a small window between inserting a document in Atlas and it being propagated to sync clients. Calling WaitForDownloadAsync is not going to solve this here because at the time the request is issues, Sync doesn’t know about the new documents in Atlas. Instead, you could register a notification listener on the collection to be notified when it changes. I don’t have enough details to write an example using your entities, but it would look something like this:Of course, this will only work for insertions as we’re relying on having an empty initial collection, which then gets populated with the item we just inserted. 
If you have updates, you’ll probably want to have some other property on the object you use as a marker that you’ve seen your update - for example an UpdatedAt field, making the query something like:", "username": "nirinchev" }, { "code": "", "text": "Got it, many thanks.", "username": "Scoz_Auro" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Get up-to-date atlas database
2023-09-01T07:52:23.074Z
Get up-to-date atlas database
342
null
[]
[ { "code": "", "text": "Hello Everyone My name is Jose ManuelI am trying to test a RH9 Enterprise Linux Server with graylog server, graylog is a program to take logs and use MongoDB and openseach, the problem is that graylog 5.0 version needs mongodb 5.0 or 6.0 (with redhat9 is 6.0) and that versions of MongoDB use Virtualization Procesor (AVX if my mind don´t fail), so the question is can I compile MongoDB 6.0 to not need AVX?I hope you can understandme, if somebody have some question please ask to me.Best regards. Thanks a lot.", "username": "Jose_Manuel" }, { "code": "", "text": "Hello BrosI made a new installation and all works but I don´t know what ever made wrong a few months ago, really sorry.Best regards.", "username": "Jose_Manuel" } ]
Compile MongoDB 6.0 to not need the AVX processor extensions
2023-08-24T07:44:25.080Z
Compile MongoDB 6.0 to not need the AVX processor extensions
593
null
[ "compass" ]
[ { "code": "", "text": "Hi everyone!\nI’m the new Mug leader here in sunny Stockholm. Very excited and honoured to be involved in this fantastic community!\nI’m a full-stack Javascript dev and my latest role was as an integration consultant at Julius Baer in Zurich. I have used mongo technologies such as Atlas and Compass and am looking forward to delving more into other Mongo technologies.\nI am also looking forward to building an inclusive, welcoming and fun community of devs here in Stockholm.\nYou can see more about me on my linked in:\nhttps://www.linkedin.com/in/claire-hardman-dev/\nAnd here is my portfolio:\nchardma3.github.io/hardman-dev\nLooking forward to connecting with you all!\nKind Regards,\nClaire", "username": "Claire_Hardman" }, { "code": "", "text": "Welcome Stockholm <3", "username": "David_Onoh" } ]
Introducing myself as the new MUG Stockholm Leader!
2023-07-31T07:21:23.105Z
Introducing myself as the new MUG Stockholm Leader!
628
null
[ "queries", "atlas-functions", "atlas-search" ]
[ { "code": "/my/restful/path/item/(?P<id>[0-9]+) /my/restful/path/item/123/my/restful/path/item?id=123", "text": "I’m new to MongoDB Atlas. Having been a WordPress user for many years, I’m wondering if it’s possible to mimic the WordPress REST API inside Atlas as it pertains to custom HTTPS endpoints?For example, when creating a custom API endpoint in WordPress, pathnames can be defined as follows: /my/restful/path/item/(?P<id>[0-9]+). So when you call the endpoint /my/restful/path/item/123, the result is the same as if you called /my/restful/path/item?id=123.Ultimately, I trying to use endpoint paths as the means of querying data instead of having to rely on query parameters. Does anyone know if this is possible? If it’s not, how are query parameters passed to the endpoint functions? Any advice would be greatly appreciated. Cheers ", "username": "DaveyJake21" }, { "code": "", "text": "Is this possible?I am also stuck with the same problem.", "username": "Shyjal_Raazi" }, { "code": "", "text": "Still trying to figure it out. No luck yet.", "username": "DaveyJake21" }, { "code": "", "text": "Did you solve it? If so how ?", "username": "Christopher_Eavestone" }, { "code": "", "text": "Does anyone know of any plans to add this? Would be a very nice feature to allow bracket notation to indicate a variable inside a path e.g. /foo/{id}/bar", "username": "Paul_David_Utesch" } ]
Using Endpoint Pathnames Instead Of Query Parameters
2022-12-07T18:45:56.981Z
Using Endpoint Pathnames Instead Of Query Parameters
2,269
null
[ "queries", "mongodb-shell", "database-tools" ]
[ { "code": "", "text": "I am trying to export data updated in mongodb in last 7 days with mongoexport command from linux terminal I am using the following commandmongoexport --db your_database --collection your_collection --query ‘{ “updatedAt”: { “$gt”: new Date(new Date().getTime() - (7 * 24 * 60 * 60 * 1000)) } }’ --out output.jsonbut it’s throwing error not is not a validjson invalid character ‘.’ after constructor argument", "username": "Teo_Thomas" }, { "code": "", "text": "I found this which looks like a similar issue, basically use $expr instead of the date constructor in the query filter:https://www.reddit.com/r/mongodb/comments/12qtbu4/constructor_error_when_using_mongoexport/", "username": "John_Sewell" }, { "code": "", "text": "I have edited the time and run the command with expr from rediff bt still throwing error too many postional arguments", "username": "Teo_Thomas" }, { "code": "", "text": "I got that when playing as well, I swapped to using /query:“xxxxx”What’s your exact command you’re running now?", "username": "John_Sewell" }, { "code": "", "text": "@John_Sewell the command i am running is\nmongoexport --db your_database --collection your_collection --fields field1 --query ‘{ “$expr”: { “$gt”: [ “$createdAt”, { “$dateSubtract”: { “startDate”: new Date(), “unit”: “second”, “amount”: 604800 } } ] } }’ --out output.csvI am trying to get last 7 days data", "username": "Teo_Thomas" }, { "code": "mongoexport --db Cinema --collection Cinemas --fields field1 --query \"{ \\\"$expr\\\": {\\\"$gt\\\": [ \\\"$createdAt\\\" , {\\\"$dateSubtract\\\": {\\\"startDate\\\": \\\"$$NOW\\\", \\\"unit\\\": \\\"day\\\", \\\"amount\\\": 1}}]}}\" --out output.csv\n", "text": "If you look at the example I linked to, they use the $$NOW variable instead, I tested this locally and it seemed to work:Obviously you’ll need to update the query to suit your needs as I just took 1 day off the current date", "username": "John_Sewell" }, { "code": "mongoexport --db Cinema --collection Cinemas --fields field1 --query \"{ \\\"$expr\\\": {\\\"$gt\\\": [ \\\"$createdAt\\\" , {\\\"$dateSubtract\\\": {\\\"startDate\\\": \\\"$NOW\\\", \\\"unit\\\": \\\"day\\\", \\\"amount\\\": 1}}]}}\" --out output.csv\n", "text": "@John_Sewell command you provided seems to be working but it shows 0 reports exported.", "username": "Teo_Thomas" }, { "code": "", "text": "So what are you running now? How did you update what i gave as an example to suit your needs?", "username": "John_Sewell" }, { "code": "", "text": "@John_Sewell I just updated the collection fields and other attributes query part i haven’t made any change it ran without errors", "username": "Teo_Thomas" }, { "code": "mongoexport --db Export --collection Test --fields myDate --query \"{\\\"$expr\\\": {\\\"$gt\\\": [ \\\"$myDate\\\" , {\\\"$dateSubtract\\\": {\\\"startDate\\\": \\\"$$NOW\\\", \\\"unit\\\": \\\"day\\\", \\\"amount\\\": 3}}]}}\"\n", "text": "Think I had a typo, use $$NOW and not $NOW.This worked locally for some data I put into a test collection:", "username": "John_Sewell" }, { "code": "--query \"{\\\"$expr\\\": {\\\"$gt\\\": [ \\\"$myDate\\\" , {\\\"$dateSubtract\\\": {\\\"startDate\\\": \\\"$NOW\\\", \\\"unit\\\": \\\"day\\\", \\\"amount\\\": 3}}]}}\"", "text": "--query \"{\\\"$expr\\\": {\\\"$gt\\\": [ \\\"$myDate\\\" , {\\\"$dateSubtract\\\": {\\\"startDate\\\": \\\"$NOW\\\", \\\"unit\\\": \\\"day\\\", \\\"amount\\\": 3}}]}}\"yeah tried with that too but still 0 reports exported . 
are you running this in linux termnal @John_Sewell", "username": "Teo_Thomas" }, { "code": "mongoexport --db Export --collection Test --fields myDate --query \"{\\\"\\$expr\\\": {\\\"\\$gt\\\": [ \\\"\\$myDate\\\" , {\\\"\\$dateSubtract\\\": {\\\"startDate\\\": \\\"\\$\\$NOW\\\", \\\"unit\\\": \\\"day\\\", \\\"amount\\\": 3}}]}}\"\n", "text": "Nope, running on a windows command, you’ll need to use appropriate escaping for a *nix shell.I swapped to cygwin I had installed and this worked:Note the escaped $ symbols as I think using double quotes, the shell is interpreting them as replacement variables or some such.", "username": "John_Sewell" }, { "code": "", "text": "Anyway thanks @John_Sewell still not able to proceed it shows feature not supported in putty terminal linux", "username": "Teo_Thomas" }, { "code": "", "text": "That’s strange, can you share a screenshot of the error, blanking out any sensitive information?", "username": "John_Sewell" }, { "code": "", "text": "@John_Sewell after executing the command it shows connected to mongodb and then shows Failed: feature not supported.", "username": "Teo_Thomas" }, { "code": "", "text": "What version of the server / client tools are you using?", "username": "John_Sewell" }, { "code": "", "text": "I am not sure of this but it looks like one of the $ is dropped whenever the command is cut-n-pasted.What is published by John contains 2 $ but what is shared by Teo has only one $.Teo, make sure you manually add the second $ so that you execute with $$NOW rather than $NOW.", "username": "steevej" }, { "code": "", "text": "server 3.x and mongoexport client running on 4.2.24", "username": "Teo_Thomas" }, { "code": " mongoexport --db Export --collection Test --fields myDate --query \"{\\\"myDate\\\":{\\\"\\$gt\\\":{\\\"\\$date\\\":\\\"2023-08-28T00:00:00Z\\\"}}}\"\n", "text": "Ahhh, that’s a pretty ancient server! If you embed the date/time in the query as opposed to trying to use the $$now stuff to recalculate the date range does it work?Can you try running the query you’re passing in against the server directly via mongo shell or compass etc?You could try this:And have the calling script create the date/time string and embed in the query as a variable?", "username": "John_Sewell" }, { "code": ":", "text": ":No still it shows no reports exported and in compass when i give find it doesn’t return anything @John_Sewell", "username": "Teo_Thomas" } ]
Mongodb export data updated in last 7 days with mongoexport
2023-08-30T09:40:38.753Z
Mongodb export data updated in last 7 days with mongoexport
769
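A minimal sketch of the "updated in the last 7 days" filter from the thread above, built client-side with PyMongo so it also works on servers that predate $dateSubtract (which was added in MongoDB 5.0) and avoids the shell-escaping problems entirely; the connection string, database, collection and field names are placeholders, not values from the thread:

```python
from datetime import datetime, timedelta, timezone

from pymongo import MongoClient

# Placeholders: adjust the URI, database, collection and field names to your deployment.
client = MongoClient("mongodb://localhost:27017")
coll = client["your_database"]["your_collection"]

# Compute the 7-day boundary on the client and pass it as a real BSON date,
# so the server only has to do a plain range comparison (works on old servers too).
cutoff = datetime.now(timezone.utc) - timedelta(days=7)

for doc in coll.find({"updatedAt": {"$gt": cutoff}}, {"field1": 1}):
    print(doc)
```

The same computed date can also be interpolated into a mongoexport --query string by the calling script, which is essentially the embedded-date approach suggested near the end of the thread.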
null
[]
[ { "code": "", "text": "There was an unexpected error (type=Internal Server Error, status=500).\nCommand failed with error 2 (BadValue): 'Field ‘locale’ is invalid in: { locale: “movies” }", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "", "text": "What command were you executing to result in this error?", "username": "John_Sewell" }, { "code": "", "text": "I was running spring boot application", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "", "text": "And what was the code you executed to throw this error? Do you have schema validation enabled, and if so is it set to strict?What is the exact code you’re running, if you give more details it’s easier for someone to see what could be the issue.", "username": "John_Sewell" }, { "code": "package dev.harsh.movies;\n\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\n\n@SpringBootApplication\npublic class MoviesApplication {\n\n\tpublic static void main(String[] args) {\n\t\tSpringApplication.run(MoviesApplication.class, args);\n\t}\n\n\n}\n", "text": "MoviesApplication file:", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "package dev.harsh.movies;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.stereotype.Service;\n\nimport java.util.List;\n\n@Service\npublic class MovieService {\n\n @Autowired\n private MovieRepository movieRepository;\n public List<Movie> allMovies(){\n return movieRepository.findAll();\n }\n}\n", "text": "MovieService file:", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "package dev.harsh.movies;\n\nimport org.bson.codecs.ObjectIdCodec;\nimport org.springframework.data.mongodb.repository.MongoRepository;\nimport org.springframework.stereotype.Repository;\n\n@Repository\npublic interface MovieRepository extends MongoRepository<Movie, ObjectIdCodec> {\n}\n\n", "text": "MovieRepository file:", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "package dev.harsh.movies;\n\nimport org.springframework.beans.factory.annotation.Autowired;\nimport org.springframework.http.HttpStatus;\nimport org.springframework.http.ResponseEntity;\nimport org.springframework.web.bind.annotation.GetMapping;\nimport org.springframework.web.bind.annotation.Mapping;\nimport org.springframework.web.bind.annotation.RequestMapping;\nimport org.springframework.web.bind.annotation.RestController;\nimport org.springframework.web.servlet.mvc.method.annotation.ResponseEntityExceptionHandler;\n\nimport java.util.List;\n\n@RestController\n@RequestMapping(\"api/v1/movies\")\npublic class MovieController {\n\n @Autowired\n private MovieService movieService;\n @GetMapping\n public ResponseEntity<List<Movie>> allMovies(){\n return new ResponseEntity<List<Movie>>(movieService.allMovies(), HttpStatus.OK);\n }\n}\n", "text": "MovieController", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "package dev.harsh.movies;\n\nimport lombok.AllArgsConstructor;\nimport lombok.Data;\nimport lombok.NoArgsConstructor;\nimport org.springframework.data.mongodb.core.mapping.Document;\nimport org.springframework.data.annotation.Id;\nimport org.bson.types.ObjectId;\nimport org.springframework.data.mongodb.core.mapping.DocumentReference;\n\nimport java.util.List;\n\n@Document(collation = \"movies\")\n@Data\n@AllArgsConstructor\n@NoArgsConstructor\npublic class Movie {\n private ObjectId id;\n private String imdbId;\n private String title;\n private String releaseDate;\n private String trailerLink;\n private String poster;\n 
private List<String> genres;\n private List<String> backdrops;\n @DocumentReference\n private List<Review> reviewIds;\n}\n\n", "text": "Movie", "username": "THAKKAR_HARSH_SHAILESHBHAI" }, { "code": "", "text": "Spring / Java is not my speciality but do the documents in the collection have the locale field but the class definition it’s trying to map the data to does not?", "username": "John_Sewell" }, { "code": "", "text": "Use @Document(collection = “movies”) instead @Document(collation = “movies”)\nI got the same ", "username": "BodyakoV_N_A" }, { "code": "", "text": "Good spot, I missed that!", "username": "John_Sewell" }, { "code": "", "text": "Brother, you are a lifesaver, I am new to everything, and I have no clue of what I am doing. I cannot believe there was a person with the exact specific problem as me and someone else who figured it out too. Thanks so much for the solution, really appreciate it.", "username": "shoaib_hossain" }, { "code": "", "text": "Take it easy bro ", "username": "BodyakoV_N_A" }, { "code": "", "text": "Bro Thanks a million for your amazing support", "username": "Bhanuka_Swarnajith" }, { "code": "", "text": "thanks a lot, you save me", "username": "Pablo_Gallegos_Gonzalez" } ]
Getting this error while accessing collection
2023-07-10T14:10:45.293Z
Getting this error while accessing collection
1,040
null
[]
[ { "code": "", "text": "I would like to pause the Mongodb M10 cluster", "username": "PEDA_RAYUDU_DOLA" }, { "code": "", "text": "Check this link\nhttps://www.mongodb.com/docs/atlas/pause-terminate-cluster/#:~:text=If%20there%20is%20no%20activity,email%20after%20pausing%20the%20cluster.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @PEDA_RAYUDU_DOLA,\nkmdudamyvojt8jn9q-mongodb-atlas-maincluster-created~2797×426 38.6 KB\nHere you can pause your cluster.Regards", "username": "Fabio_Ramohitaj" } ]
How to pause or resume the MongoDB M10 cluster?
2023-08-30T10:21:53.845Z
How to pause or resume the MongoDB M10 cluster?
256
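Besides the UI button shown in the thread above, a paid-tier cluster can also be paused programmatically through the Atlas Administration API ("Modify One Cluster"), which is handy for scheduled pausing. A rough Python sketch, assuming a programmatic API key pair with sufficient project access; all identifiers below are placeholders:

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholders - substitute your own project (group) ID, cluster name and API keys.
GROUP_ID = "<project-id>"
CLUSTER_NAME = "<cluster-name>"
PUBLIC_KEY = "<public-api-key>"
PRIVATE_KEY = "<private-api-key>"

url = f"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP_ID}/clusters/{CLUSTER_NAME}"

# Setting "paused" to true pauses the cluster; false resumes it.
resp = requests.patch(
    url,
    auth=HTTPDigestAuth(PUBLIC_KEY, PRIVATE_KEY),
    json={"paused": True},
)
resp.raise_for_status()
print("Cluster state:", resp.json().get("stateName"))
```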
null
[ "queries" ]
[ { "code": "db.my_collection.find({_id: ObjectId(\"123abc815c237fcd9ad50744\")})\n", "text": "Hi all,I currently have a collection, with multiple 15MB JSON documents stored on an M5 database.Using the following command, It’s takes 2.5 seconds to fetch a document in dev, from my M2 primary replica (this is good).The same command however in my mock production (M5, same document, but about 10x the number of documents), is taking 2 minutes to fetch a single document!In summary:I’m fetching the document by _id, so there shouldn’t be a delay in the time it takes to query the document.I’m the only user currently on the system.I can’t tell what else could be the cause.Why am I getting this slow performance for downloading a single document in my primary replica only?", "username": "Nick_Grealy" }, { "code": ".explain(\"executionStats\"){\n explainVersion: '1',\n queryPlanner: {\n namespace: 'mydb.my_collection',\n indexFilterSet: false,\n parsedQuery: {\n _id: {\n '$eq': ObjectId(\"123abc815c237fcd9ad50744\")\n }\n },\n queryHash: '740C02B0',\n planCacheKey: 'E351FFEC',\n maxIndexedOrSolutionsReached: false,\n maxIndexedAndSolutionsReached: false,\n maxScansToExplodeReached: false,\n winningPlan: {\n stage: 'IDHACK'\n },\n rejectedPlans: []\n },\n executionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 1,\n totalKeysExamined: 1,\n totalDocsExamined: 1,\n executionStages: {\n stage: 'IDHACK',\n nReturned: 1,\n executionTimeMillisEstimate: 0,\n works: 2,\n advanced: 1,\n needTime: 0,\n needYield: 0,\n saveState: 0,\n restoreState: 0,\n isEOF: 1,\n keysExamined: 1,\n docsExamined: 1\n }\n },\n command: {\n find: 'my_collection',\n filter: {\n _id: ObjectId(\"123abc815c237fcd9ad50744\")\n },\n '$db': 'mydb'\n },\n serverInfo: {\n host: 'mydb-shard-00-01.abcde.mongodb.net',\n port: 27017,\n version: '6.0.9',\n gitVersion: '90c65f9cc8fc4e6664a5848230abaa9b3f3b02f7'\n },\n serverParameters: {\n internalQueryFacetBufferSizeBytes: 104857600,\n internalQueryFacetMaxOutputDocSizeBytes: 104857600,\n internalLookupStageIntermediateDocumentMaxSizeBytes: 16793600,\n internalDocumentSourceGroupMaxMemoryBytes: 104857600,\n internalQueryMaxBlockingSortMemoryUsageBytes: 33554432,\n internalQueryProhibitBlockingMergeOnMongoS: 0,\n internalQueryMaxAddToSetBytes: 104857600,\n internalDocumentSourceSetWindowFieldsMaxMemoryBytes: 104857600\n },\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1693412063, i: 5 }),\n signature: {\n hash: Binary(Buffer.from(\"abcdefa76c986469f16ba2c5ae5f348475bc8743\", \"hex\"), 0),\n keyId: 1234894767884730000\n }\n },\n operationTime: Timestamp({ t: 1693412063, i: 5 })\n}\n", "text": "Attaching the .explain(\"executionStats\"), in case it sheds light.", "username": "Nick_Grealy" }, { "code": "_id executionStats: {\n executionSuccess: true,\n nReturned: 1,\n executionTimeMillis: 1\n\"executionStats\"mongosh", "text": "Hi @Nick_Grealy,The same command however in my mock production (M5, same document, but about 10x the number of documents), is taking 2 minutes to fetch a single document!Firstly - thanks for providing the detailed summary It’s definitely interesting (just from an initial glance) for the 120 second document fetch using _id on the M5 tier cluster.Based off the \"executionStats\", it shows 1ms. Just to provide some extra context so I can better understand the issue, can you advise how you are measuring the 120 seconds / 2 minutes from the M5 tier cluster? 
Additionally, are you seeing this 120 seconds query time when you are running the same find command when connecting to the M5 tier cluster via mongosh shell?Lastly, has this behaviour always been experienced on this M5 tier cluster? Or is this something more recent? I am thinking that perhaps exceeding the data transfer limitation may be a possibility here.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "{_id: ObjectId(\"123abc815c237fcd9ad50744\")}mongoshdata transfer limitation", "text": "Hi @Jason_Tran ,I measure the time time in Compass, from the moment I execute the query ({_id: ObjectId(\"123abc815c237fcd9ad50744\")}) to the moment it returns the document.I experience the same behaviour in my application (NodeJS), and in a second DB client (NoSQL).I haven’t tried mongosh… I can try it, but I don’t think it’ll make much difference.I think you might’ve hit the nail on the head with the data transfer limitation. How do I find out:Kind regards,\nNick", "username": "Nick_Grealy" }, { "code": "{_id: ObjectId(\"123abc815c237fcd9ad50744\")}data transfer limitation", "text": "I measure the time time in Compass, from the moment I execute the query ({_id: ObjectId(\"123abc815c237fcd9ad50744\")}) to the moment it returns the document.Sorry, just to clarify here, is this using a particular feature in compass or more so just counting seconds (general)?I think you might’ve hit the nail on the head with the data transfer limitation. How do I find out:You’ll need to contact the atlas in-app chat support team since they’ll have more insight into your Atlas project / cluster. Provide them with the cluster name / link.Regards,\nJason", "username": "Jason_Tran" }, { "code": "primaryreadPreference=secondary", "text": "Compass - didn’t show execution time so I’m “counting seconds”.NoSQL - shows an execution time of 112.581 sec\nNodeJS - performance logging shows 112,437 ms(primary - 112,437ms)\n(readPreference=secondary = 2,115ms)Support haven’t been helpful - they seem to only be interested in funnelling me to purchasing a higher tier db, despite already being a paying customer.How do I find out if I’m being rate limited?", "username": "Nick_Grealy" }, { "code": "", "text": "UpdateI upgraded to M10, and somehow the performance is worse. Waiting for support to get back to me…", "username": "Nick_Grealy" }, { "code": "How do I see on the dashboard, whether I have hit my rate limit?", "text": "Shout out to Seemi on support ( @Seemi_Hasan ?) - who was actually able to help confirm…Hi Nick,\nI apologise for the delay in my response as this required some in depth log analysis. To address your > original concern before your upgrade to M10 cluster tier:\nI database has become extremely slow to return documents. I am on the M5 tier.\nI believe I have hit the Data Transfer Limits, as decribed here, and am being rate limited.https://www.> mongodb.com/docs/atlas/reference/free-shared-limitations/#operational-limitationsHow do I see on the dashboard, whether I have hit my rate limit?I have confirmed from the internal logs that your M5 cluster was throttled due to the Network Limit.[2023/08/31 01:14:38.577] [ProxySession(_,mydb-shard-00-01.abcde.mongodb.net,139.> 59.100.19:43514).info] [commands.go:InterceptMongoToClient:1376] Network transfer limit (inbound: 50.000 GB/> week, outbound: 50.000 GB/week) exceeded on mydb-shard-00-01.abcde.mongodb.net. Weekly > (past 7 days) inbound usage = 0.053 GB, weekly outbound usage = 189.617 GB. 
Throttling down to 100000 bytes/> week by sleeping 0.002 secs.I hope that this provides a clearer understanding of the earlier instance of slow cluster response.", "username": "Nick_Grealy" }, { "code": "", "text": "Thanks for the update Nick! Glad to hear you were helped out by Seemi on the chat support team.As you are working with the in-app chat support team would it be okay to close this particular post? I presume the original M5 issue was due to the throttling you’ve mentioned in the most recent reply.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_Tran - before you close this off, I’d like to know how to see the following from the dashboard? → Weekly (past 7 days) inbound usage / weekly outbound usageThat way I can monitor the limits, and determine whether I’m going to be rate limited (proactively) in the future.\n(Perhaps even setup an alert! I hate finding out Production is down, after the fact.)", "username": "Nick_Grealy" }, { "code": "Network", "text": "I’d like to know how to see the following from the dashboard?Weekly (past 7 days) inbound usage / weekly outbound usageUnfortunately there are no alerts available for this limit. In a previous post about whether this could be verified, I had advised checking with the in-app chat support as well since they have more insight about it per Atlas account.To my knowledge, the only way currently is to manually approximate using calculations from the Network chart for shared tier clusters. You could change the zoom to 1 week and adjust the granularity. I believe the limit is for all nodes (i.e. You need to add up the network usage for all 3 and then if that total value exceeds the limit, you will be throttled). Hope this makes sense.", "username": "Jason_Tran" }, { "code": "", "text": "Feature requested… hopefully this saves someone else from a sleepless night!Suggestion: When a database is being rate limited, please indicate this on the dashboard.\n\nAnd/or send an email to the developer, so they can take action, like upgrading the DB tier.\n(Preferably before they're locked out of the database for 7...", "username": "Nick_Grealy" }, { "code": "", "text": "@Jason_Tran - while I have you. Does M10 have any rate limits?", "username": "Nick_Grealy" }, { "code": "", "text": "Hi @Nick_GrealyGlad to hear that your issue was solved to your satisfaction.However I’d like to circle back to one of your earlier comments:Support haven’t been helpful - they seem to only be interested in funnelling me to purchasing a higher tier db, despite already being a paying customer.Sorry if you feel that way, but for the record, let me assure you that no one in support is incentivized to sell you anything. They are not sales. Their goal is to help you be successful with MongoDB. However if it was determined that your workload is too large for your current deployment size, then support is obliged to tell you that fact.As with your question:Does M10 have any rate limits?If you deploy on AWS, as per MongoDB on AWS Cloud Pricing, you pay for data transfer costs instead of being throttled. There are more details on the page Atlas Cluster Sizing and Tier Selection that may be of interest as well.Thanks for your patience and welcome to the community!Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Primary replica slow to download a single document
2023-08-30T16:30:53.432Z
Primary replica slow to download a single document
633
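A small sketch of how the primary-versus-secondary timing comparison from the thread above could be reproduced from a script instead of counting seconds in Compass; the URI, namespace and document _id are placeholders:

```python
import time

from bson import ObjectId
from pymongo import MongoClient, ReadPreference

client = MongoClient("mongodb+srv://<user>:<password>@<cluster>/")  # placeholder URI
query = {"_id": ObjectId("123abc815c237fcd9ad50744")}  # placeholder _id from the thread

for name, pref in [("primary", ReadPreference.PRIMARY),
                   ("secondary", ReadPreference.SECONDARY_PREFERRED)]:
    coll = client["mydb"].get_collection("my_collection", read_preference=pref)
    start = time.perf_counter()
    coll.find_one(query)
    print(f"{name}: {time.perf_counter() - start:.3f}s")
```

A large gap between the two timings, as in the thread, points at something node-specific (such as the shared-tier network throttling that turned out to be the cause here) rather than at the query plan.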
null
[]
[ { "code": "{\"t\":{\"$date\":\"2023-08-30T21:33:42.603-04:00\"},\"s\":\"E\", \"c\":\"CONTROL\", \"id\":20557, \"ctx\":\"initandlisten\",\"msg\":\"DBException in initAndListen, terminating\",\"attr\":{\"error\":\"Location18656: Cannot start server with an unknown storage engine: ephemeralForTest\"}}\n", "text": "A month or so ago, I refreshed the jaraco.mongodb package and confirmed it was working. Today, I tried to test another package that relies on it to start up an ephemeral MongoDB instance, but it appears with the release of MongoDB 7.0, the ephemeralForTest engine is no longer available. The error I see is:I don’t see anything in the changelog referencing the removal of this engine. Is it no longer supported? What is the best replacement for running lightweight tests?", "username": "jaraco" }, { "code": "", "text": "I found https://jira.mongodb.org/browse/SERVER-65151. It was intentionally removed. This change should probably be added to the changelog.", "username": "jaraco" } ]
Unknown storage engine: ephemeralForTest since 7.0.0 release
2023-08-31T02:09:36.220Z
Unknown storage engine: ephemeralForTest since 7.0.0 release
311
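Since the in-memory ephemeralForTest engine was removed (SERVER-65151), one lightweight replacement for test suites is simply a throwaway mongod on a temporary dbpath that is deleted after the run. A rough sketch of that idea, assuming a mongod binary on PATH and a free local port (both assumptions):

```python
import shutil
import subprocess
import tempfile
import time

from pymongo import MongoClient

# Assumptions: `mongod` is on PATH and port 27027 is free on this machine.
dbpath = tempfile.mkdtemp(prefix="mongo-test-")
proc = subprocess.Popen(
    ["mongod", "--dbpath", dbpath, "--port", "27027", "--bind_ip", "127.0.0.1"],
    stdout=subprocess.DEVNULL,
    stderr=subprocess.DEVNULL,
)
try:
    time.sleep(2)  # crude startup wait; a real harness would poll until the port answers
    client = MongoClient("mongodb://127.0.0.1:27027", serverSelectionTimeoutMS=5000)
    client.testdb.items.insert_one({"ok": True})
    assert client.testdb.items.count_documents({}) == 1
finally:
    proc.terminate()
    proc.wait()
    shutil.rmtree(dbpath, ignore_errors=True)  # the "ephemeral" part: drop all data
```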
https://www.mongodb.com/…2955221e6e65.png
[ "ahmedabad-mug" ]
[ { "code": "DevOps EngineerData EngineerTechpreneur | Founder | Ahmedabad MUG LeaderSoftware Engineer | Ahmedabad MUG Leader", "text": "\nPosters960×540 139 KB\nThe MongoDB User Group in Ahmedabad is thrilled to announce its very first meetup on 26th Aug 2023 in Ahmedabad .This gathering promises to be a delightful event, featuring two engaging presentations with live demonstrations. Attendees will also have the opportunity to participate in a collaborative and enjoyable exercise . It’s the perfect occasion to connect with fellow MongoDB enthusiasts and stand a chance to win exciting swag ! We look forward to seeing you there! To RSVP - Kindly click on the “✓ RSVP” link located at the top of this event page if you’re planning to attend. The link should change to a green button once you’ve successfully RSVPd. Make sure you’re signed in to access the button and secure your spot for the event! See you there! Please remember to bring a Government-Issued ID Card for identification purposes.\n We kindly request that you stay within the designated event premises and maintain a respectful atmosphere in the office.\n After use, please ensure that you dispose of any used plates in the designated bins. Let’s keep the space tidy and enjoyable for everyone.\n Let’s create a positive and friendly environment by being respectful and kind to each other.\n Secure your spot now to be part of an unforgettable experience! Don’t miss out! Please take note that joining the waitlist is the initial step to secure your registration. To finalise your registration, you will receive a separate confirmation email. Kindly present this email at the event entrance for smooth access.To ensure a seamless and well-organised event, we kindly request that you refrain from relying on on-spot registrations. Your understanding and cooperation are greatly appreciated. We eagerly await the pleasure of welcoming you to the event! Event Type: In-Person\nLocation: AHMEDABAD MANAGEMENT ASSOCIATION\nATIRA CAMPUS, DR. VIKRAM SARABHAI MARG,\nVASTRAPUR, AHMEDABAD, GUJARAT, 380009.DevOps Engineer\n\n1jWUZuoHiAsmMdrhZpt2NF4DH2C-8UEShY42alc7uCuc800×800 18.2 KB\n LinkedInData Engineer\n\n1jWUZuoHiAsmMdrhZpt2NF4DH2C-8UEShY42alc7uCuc800×800 18.2 KB\n LinkedInTechpreneur | Founder | Ahmedabad MUG Leader\n\n1jWUZuoHiAsmMdrhZpt2NF4DH2C-8UEShY42alc7uCuc800×800 18.2 KB\n LinkedInSoftware Engineer | Ahmedabad MUG Leader\n\n1jWUZuoHiAsmMdrhZpt2NF4DH2C-8UEShY42alc7uCuc800×800 18.2 KB\n LinkedIn", "username": "turivishal" }, { "code": "", "text": "Hey I want to contribute how can I contribute?", "username": "Anjesh_Agrawal" }, { "code": "", "text": "Hello @Anjesh_Agrawal, Welcome to the MongoDB community forum Thank you for your interest,\nCan you share the context of how you want to contribute?Thanks,\nVishal", "username": "turivishal" }, { "code": "", "text": "@viraj_thakrar @turivishal can you please share the meetup PPT here ?", "username": "Sanjay_Makwana" }, { "code": "", "text": "Hi @Sanjay_Makwana ,We have sent you links to the PPTs.Thanks,\nViraj", "username": "viraj_thakrar" } ]
Ahmedabad MUG Inaugural Meetup
2023-07-27T07:26:10.459Z
Ahmedabad MUG Inaugural Meetup
2,120
null
[ "compass", "connecting" ]
[ { "code": "", "text": "Hello, i encountered a problem with installing MongoDB Compass:I think I know what the issue is and I have it for a while, but I don’t know how to solve it.I cannot find the MongoDB service in my services list, so I cannot start the service and thats why I get the error in the Compass app: connect ECONNREFUSED 127.0.0.1:27017How can I be able to add the service to the services list?", "username": "bram_van_overveld" }, { "code": "--install", "text": "You may have skipped or deselected the service option when installing.You can use the --install option to create the service after initial install.", "username": "chris" }, { "code": "", "text": "Sorry for the late reply, but yes it works now!\nI did a reinstallation and then i selected the services option and now its installed with the service mongodb!\nThanks for helping!", "username": "bram_van_overveld" } ]
Cannot find MongoDB in services list on Windows laptop
2023-08-19T17:56:43.058Z
Cannot find MongoDB in services list on Windows laptop
535
https://www.mongodb.com/…1_2_1024x512.png
[ "java", "atlas-cluster", "kotlin" ]
[ { "code": "import com.mongodb.MongoException\nimport com.mongodb.kotlin.client.coroutine.MongoClient\nimport com.mongodb.kotlin.client.coroutine.MongoDatabase\nimport kotlinx.coroutines.flow.count\nimport kotlinx.coroutines.runBlocking\nimport org.bson.BsonInt64\nimport org.bson.Document\nimport java.util.*\n\nfun main() {\n val databaseName = \"sample_restaurants\"\n\n runBlocking {\n\n val database = setupConnection(databaseName = databaseName, \"MONGODB_URI\")\n\n if (database != null) {\n listAllCollection(database = database)\n\n dropCollection(database = database)\nlistAllCollection(database)dropCollection(database)suspend fun dropCollection(database: MongoDatabase) {\n database.getCollection<Objects>(collectionName = \"collectionName\").drop()\n}\n10:58:12.611 [AsyncGetter-7-thread-1] DEBUG org.mongodb.driver.protocol.command - Command \"drop\" started on database sample_restaurants using a connection with driver-generated ID 7 and server-generated ID 9902 to ac-hvvz70k-shard-00-02.vgkddrs.mongodb.net:27017. The request ID is 16 and the operation ID is 13. Command: {\"drop\": \"collectionName\", \"writeConcern\": {\"w\": \"majority\"}, \"$db\": \"sample_restaurants\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1693472292, \"i\": 5}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"Md/e/RXCwkBoZYHkPCV8mtxVsXI=\", \"subType\": \"00\"}}, \"keyId\": 7230892126479843330}}, \"lsid\": {\"id\": {\"$binary\": {\"base64\": \"RU6FhcJ5TJmbsxeZpSxHtA==\", \"subType\": \"04\"}}}}\n10:58:12.642 [async-channel-group-0-handler-executor] DEBUG org.mongodb.driver.protocol.command - Command \"drop\" failed in 18.6061 ms using a connection with driver-generated ID 7 and server-generated ID 9902 to ac-hvvz70k-shard-00-02.vgkddrs.mongodb.net:27017. The request ID is 16 and the operation ID is 13.\ncom.mongodb.MongoCommandException: Command failed with error 26 (NamespaceNotFound): 'ns not found' on server ac-hvvz70k-shard-00-02.vgkddrs.mongodb.net:27017. 
The full response is {\"ok\": 0.0, \"errmsg\": \"ns not found\", \"code\": 26, \"codeName\": \"NamespaceNotFound\", \"$clusterTime\": {\"clusterTime\": {\"$timestamp\": {\"t\": 1693472292, \"i\": 5}}, \"signature\": {\"hash\": {\"$binary\": {\"base64\": \"Md/e/RXCwkBoZYHkPCV8mtxVsXI=\", \"subType\": \"00\"}}, \"keyId\": 7230892126479843330}}, \"operationTime\": {\"$timestamp\": {\"t\": 1693472292, \"i\": 5}}}\n\tat com.mongodb.internal.connection.ProtocolHelper.getCommandFailureException(ProtocolHelper.java:205)\n\tat com.mongodb.internal.connection.InternalStreamConnection.lambda$sendCommandMessageAsync$0(InternalStreamConnection.java:544)\n\tat com.mongodb.internal.connection.InternalStreamConnection$MessageHeaderCallback$MessageCallback.onResult(InternalStreamConnection.java:847)\n\tat com.mongodb.internal.connection.InternalStreamConnection$MessageHeaderCallback$MessageCallback.onResult(InternalStreamConnection.java:810)\n\tat com.mongodb.internal.connection.InternalStreamConnection$3.completed(InternalStreamConnection.java:669)\n\tat com.mongodb.internal.connection.InternalStreamConnection$3.completed(InternalStreamConnection.java:666)\n\tat com.mongodb.internal.connection.AsynchronousChannelStream$BasicCompletionHandler.completed(AsynchronousChannelStream.java:251)\n\tat com.mongodb.internal.connection.AsynchronousChannelStream$BasicCompletionHandler.completed(AsynchronousChannelStream.java:234)\n\tat com.mongodb.internal.connection.tlschannel.async.AsynchronousTlsChannel.lambda$read$4(AsynchronousTlsChannel.java:122)\n\tat java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n\tat java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n", "text": "I’m following this tutorial on Kotlin Multi-platform (KMM) with MongoDB Atlas.This is an introductory article on how to build an application in Kotlin using MongoDB Atlas and the MongoDB Kotlin driver, the latest addition to our list of official drivers.\nwhich points to\nI managed to listAllCollection(database)\n\nimage1598×102 21.7 KB\n\nbut the function dropCollection(database) didn’t work.I’m gettingEnvironment", "username": "Ka_Lok_Tam" }, { "code": "", "text": "i solved the problem myself. it’s due to nonexistent collection.\n\nimage1737×142 37.8 KB\n", "username": "Ka_Lok_Tam" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
[SOLVED] Can't drop collections in Kotlin
2023-08-31T09:07:00.255Z
[SOLVED] Can&rsquo;t drop collections in Kotlin
406
https://www.mongodb.com/…e_2_1024x544.png
[]
[ { "code": " var config = new FlexibleSyncConfiguration(MyApp.CurrentUser) {\n PopulateInitialSubscriptions = (realm) => {\n var players = realm.All<DBPlayer>().Where(a => a.ID.ToString() == MyApp.CurrentUser.Id);\n realm.Subscriptions.Add(players, new SubscriptionOptions() { Name = \"player\" });\n }\n };\n MyRealm = await Realm.GetInstanceAsync(config);\n", "text": "When I starting GetInstanceAsync, I got an error below:Realms.Exceptions.RealmException: The following changes cannot be made in additive-only schema mode:This is my FlexibleSyncConfigurationI don’t know where is different from my schema and atlas schema. They are both set to required field.\nimage1532×815 98.2 KB\n", "username": "Scoz_Auro" }, { "code": "", "text": "I found a generated ‘DBPlayer_generated’ by unity realm sdk, it may be the problem. It set some field to nullable is true, but my player.cs is not nullable field. Now, I am finding the document about how to make it correct.\n\nimage1362×408 60.3 KB\n", "username": "Scoz_Auro" }, { "code": "", "text": "Solved it.\nI had to add the ‘required’ property for the string type. Although I eventually found this information in the documentation, it was easy to overlook. I suggest that the auto-generated schema example in the web UI should include it.\nimage1626×855 102 KB\n", "username": "Scoz_Auro" }, { "code": "", "text": "The issue is that the generated models assume that nullable reference types are enabled in your project, which is the default for regular .NET projects, but it isn’t for Unity. We have an item in the backlog to allow customizing that, but haven’t gotten to it yet.", "username": "nirinchev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unity C# Schema Sync error
2023-08-31T05:32:03.413Z
Unity C# Schema Sync error
398
null
[]
[ { "code": "ObservedRealmObject@ObservedResults(Item.self) var items\n\nvar body: some View {\n\tList(selection: $navigationModel.selectedItem) {\n\t ForEach(items, id: \\.self) { item in\n\t NavigationLink(value: item) {\n\t ItemRow(item: item)\n\t }\n\t }\n\t .onDelete(perform: $items.remove)\n\t}\n}\nObservedRealmObject@ObservedRealmObject var item: Item\n \nvar body: some View {\n\tText(item.name)\n}\nItemItemonChangeitem.name", "text": "After years of working with RealmSwift, I’m trying to get my head around using Realm with SwiftUI. So far everything is going well except I have an issue whereby I can’t detect when an ObservedRealmObject has been deleted.My setup is fairly straightforward in that I have a list in a sidebar which is populated from an @ObservedResults and then later a ForEach loop:I then have a detail view which uses an ObservedRealmObject to display an item:If I change the name of the Item within my sidebar, then the item is instantly updated in the detail view.However, if I delete the Item from the sidebar, there doesn’t seem to be a way to detect that within the detail view and thus navigate the user back. I assumed I’d be able to check if the object was invalidated (perhaps with an onChange modifier) but the object insists it is still valid even after deletion. I can also read the properties (i.e. checking item.name on a button press) without any crashes. If I try to unthaw the item, then it returns nil and I can go back but it doesn’t seem optimal to have to unthaw the item on every render to check if it still exists.Am I missing something obvious? I’m sure there must be a way for SwiftUI to be notified when an ObservedRealmObject has been deleted…", "username": "Ben_Dodson" }, { "code": "", "text": "Yes, you can check the IsInvalidated property of the object. Please check this question: Procedure when object is invalidated - #3 by Joao_Serra", "username": "Sandeepani_Senevirathna" } ]
Detecting when an ObservedRealmObject is deleted
2023-05-05T15:09:05.895Z
Detecting when an ObservedRealmObject is deleted
441
null
[ "aggregation", "dot-net" ]
[ { "code": "$match$facet{\n $match: {\n $or: [\n {\n $and: [\n {\n PropertyAddressState: {\n $eq: \"OH\",\n },\n PropertyAddressCity: {\n $eq: \"DAYTON\",\n },\n },\n ],\n },\n {\n $and: [\n {\n PropertyAddressState: {\n $eq: \"CA\",\n },\n PropertyAddressCity: {\n $eq: \"LOS ANGELES\",\n },\n },\n ],\n },\n ],\n },\n },\n{\n $facet: {\n results: [\n {\n $group: {\n _id: null,\n tax_delinquent: {\n $sum: {\n $toInt: \"$IsTaxDelinquent\",\n },\n },\n absentee_owners: {\n $sum: {\n $toInt: \"$IsAbsenteeOwners\",\n },\n },\n out_of_state_owners: {\n $sum: {\n $toInt: \"$IsOutOfStateOwners\",\n },\n },\n purchased_2010_2012: {\n $sum: {\n $toInt: \"$IsPurchasedin20102012\",\n },\n },\n multi_family_owners: {\n $sum: {\n $toInt: \"$IsMultiFamilyOwners\",\n },\n },\n properties_in_a_trust: {\n $sum: {\n $toInt: \"$IsPropertiesInATrust\",\n },\n },\n vacant: {\n $sum: {\n $toInt: \"$IsVacant\",\n },\n },\n land_residential: {\n $sum: {\n $toInt: \"$IsLand\",\n },\n },\n pre_foreclosure: {\n $sum: {\n $toInt: \"$IsPreforecloure\",\n },\n },\n pre_foreclosure_purchased_2010_2012: {\n $sum: {\n $toInt: {\n $and: [\n \"$IsPreforecloure\",\n \"$IsPurchasedin20102012\",\n ],\n },\n },\n },\n pre_foreclosure_out_of_state: {\n $sum: {\n $toInt: {\n $and: [\n \"$IsPreforecloure\",\n \"$IsOutOfStateOwners\",\n ],\n },\n },\n },\n pre_foreclosure_vacant: {\n $sum: {\n $toInt: {\n $and: [\n \"$IsPreforecloure\",\n \"$IsVacant\",\n ],\n },\n },\n },\n absentee_owners_purchased_2010_2012: {\n $sum: {\n $toInt: {\n $and: [\n \"$IsAbsenteeOwners\",\n \"$IsPurchasedin20102012\",\n ],\n },\n },\n },\n out_of_state_purchased_2010_2012: {\n $sum: {\n $toInt: {\n $and: [\n \"$IsOutOfStateOwners\",\n \"$IsPurchasedin20102012\",\n ],\n },\n },\n },\n },\n },\n ],\n absentee_owners_with_multiple_properties_vacant:\n [\n {\n $match: {\n IsAbsenteeOwners: true,\n IsVacant: true,\n },\n },\n {\n $group: {\n _id: \"$PartyOwner1NameFull\",\n count: {\n $sum: 1,\n },\n },\n },\n {\n $match: {\n _id: {\n $ne: null,\n },\n count: {\n $gte: 2,\n $lte: 10000,\n },\n },\n },\n {\n $count: \"count\",\n },\n ],\n absentee_owners_multiple_properties: [\n {\n $match: {\n IsAbsenteeOwners: true,\n },\n },\n {\n $group: {\n _id: \"$PartyOwner1NameFull\",\n count: {\n $sum: 1,\n },\n },\n },\n {\n $match: {\n _id: {\n $ne: null,\n },\n count: {\n $gte: 2,\n $lte: 10000,\n },\n },\n },\n {\n $count: \"count\",\n },\n ],\n },\n }\n\"executionStats\" : {\n\t\t\t\t\t\"executionSuccess\" : true,\n\t\t\t\t\t\"nReturned\" : 518601,\n\t\t\t\t\t\"executionTimeMillis\" : 34743,\n\t\t\t\t\t\"totalKeysExamined\" : 582076,\n\t\t\t\t\t\"totalDocsExamined\" : 518601,\nresults: \n Array (1) \n Object _id: null \ntax_delinquent: 2619 \nabsentee owners: 32681\nout_of_state_owners: 2415 \npurchased_2010_2012: 11543 \nmulti_family_owners: 20944\nproperties_in_a_trust: 10484\nvacant: 1230\nland residential: 4173\npre_foreclosure: 38854\npre_foreclosure_purchased_201: 4893\nabsentee_owners_with_multiple: \n Array (1) \n Object count: 18\nabsentee_owners_multiple_prop:\n Array (1) \n Object count: 1801\n$group$facet$groupIsTaxDelinquent: Array (1)\n • 0: Object count: 2619 \nIsAbsenteeOwners: Array (1)\n • 0: Object count: 32681\nIsOut0f5tateOwners: Array (1) \n • 0: Object count: 2415 \nIsPurchasedin20102012: Array (1) \n • 0: Object count: 11543 \nIsMultiFamilyOwners: Array (1) \nIsPropertiesInATrust: Array (1)\nIsVacant: Array (1)\nIsLand: Array (1) \nIsPreforecloure: Array (1) \npre_foreclosure_purchased_2010_2012: Array (1) 
\npre_foreclosure_out_of_state: Array (1) \npre_foreclosure_vacant: Array (1) \nabsentee_owners_vacant: Array (1) \nout_of_state_vacant: Array (1) \nvacant_purchased_2010_2012: Array (1) absentee_owners_with_multiple_properties_vacant: Array (1)\n{\n $facet: {\n IsTaxDelinquent: [\n {\n $match: {\n IsTaxDelinquent: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsAbsenteeOwners: [\n {\n $match: {\n IsAbsenteeOwners: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsOutOfStateOwners: [\n {\n $match: {\n IsOutOfStateOwners: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsPurchasedin20102012: [\n {\n $match: {\n IsPurchasedin20102012: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsMultiFamilyOwners: [\n {\n $match: {\n IsMultiFamilyOwners: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsPropertiesInATrust: [\n {\n $match: {\n IsPropertiesInATrust: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsVacant: [\n {\n $match: {\n IsVacant: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsLand: [\n {\n $match: {\n IsLand: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n IsPreforecloure: [\n {\n $match: {\n IsPreforecloure: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n pre_foreclosure_purchased_2010_2012: [\n {\n $match: {\n IsOutOfStateOwners: true,\n },\n },\n {\n $count: \"count\",\n },\n ],\n pre_foreclosure_purchased_2010_2012: [\n {\n $match: {\n $and: [\n {\n IsPreforecloure: true,\n },\n {\n IsPurchasedin20102012: true,\n },\n ],\n },\n },\n {\n $count: \"count\",\n },\n ],\n absentee_owners_with_multiple_properties_vacant:\n [\n {\n $match: {\n IsAbsenteeOwners: true,\n IsVacant: true,\n },\n },\n {\n $group: {\n _id: \"$PartyOwner1NameFull\",\n count: {\n $sum: 1,\n },\n },\n },\n {\n $match: {\n _id: {\n $ne: null,\n },\n count: {\n $gte: 2,\n $lte: 10000,\n },\n },\n },\n {\n $count: \"count\",\n },\n ],\n absentee_owners_multiple_properties: [\n {\n $match: {\n IsAbsenteeOwners: true,\n },\n },\n {\n $group: {\n _id: \"$PartyOwner1NameFull\",\n count: {\n $sum: 1,\n },\n },\n },\n {\n $match: {\n _id: {\n $ne: null,\n },\n count: {\n $gte: 2,\n $lte: 10000,\n },\n },\n },\n {\n $count: \"count\",\n },\n ],\n },\n }\n$facet", "text": "I had written a MongoDB aggregation pipeline in C#. I was able to obtain the correct result initially. However, after a couple of months, the dataset grew larger, and the aggregation started to perform very slowly. While the $match stage, benefiting from an index, continued to deliver fast results, the issue was observed with the $facet stage.This is the execution stats for the above aggregation.The response for the above aggregation is like this.In the provided aggregation, I employed a $group stage within the $facet. Consequently, I removed the $group stage and attempted parallel processing by separating each category.Despite producing accurate results, this approach still resulted in slow loading times. 
I’m inquiring whether there exists a more effective method to achieve results using the $facet stage.I’m using MongoDB version 4.4.Below I have set up a sample data set with the necessary fields for testing purposestemp_tax_data2", "username": "Shehan_Vanderputt" }, { "code": "COLLSCAN", "text": "Hi @Shehan_Vanderputt and welcome to MongoDB community forums!!Thank you for sharing the details but it would be helpful if you share the index definition as well which would help me analyse the slow query in more detail.\nHowever, in saying so, the query performance depends on various factors.Warm regards\nAasawari", "username": "Aasawari" } ]
How to improve the performance of $facet stage in Aggregation Pipeline in MongoDb C#
2023-08-20T16:01:23.612Z
How to improve the performance of $facet stage in Aggregation Pipeline in MongoDb C#
608
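For the flag counts in the thread above, the conditional-sum pattern from the first pipeline (one $group over the $match output) is generally cheaper than one $facet sub-pipeline per flag, since every extra facet re-processes the matched documents. A small PyMongo sketch of that pattern, using field names taken from the thread; the connection string and namespace are placeholders:

```python
from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["temp_tax_data2"]  # placeholder namespace


def flag_count(field: str) -> dict:
    # Counts documents where the boolean flag is true.
    return {"$sum": {"$cond": [{"$eq": [f"${field}", True]}, 1, 0]}}


pipeline = [
    {"$match": {"PropertyAddressState": "OH", "PropertyAddressCity": "DAYTON"}},
    {"$group": {
        "_id": None,
        "tax_delinquent": flag_count("IsTaxDelinquent"),
        "absentee_owners": flag_count("IsAbsenteeOwners"),
        "vacant": flag_count("IsVacant"),
        # Combined flags work the same way, e.g. pre-foreclosure AND vacant:
        "pre_foreclosure_vacant": {"$sum": {"$cond": [
            {"$and": [{"$eq": ["$IsPreforecloure", True]},
                      {"$eq": ["$IsVacant", True]}]}, 1, 0]}},
    }},
]

print(list(coll.aggregate(pipeline)))
```

The per-owner counts (the "multiple properties" facets) still need their own $group on the owner name, but they can be run as separate aggregations rather than extra facets, so each one benefits from the indexed $match on its own.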
null
[ "app-services-cli" ]
[ { "code": "", "text": "Hi, does Realm allow pushing a single function to Realm? I’m accustomed to writing code in VS Code, but it seems that I can only push the entire app to Realm. This could potentially overwrite functions that have been modified by other team members.", "username": "Scoz_Auro" }, { "code": "realm-cli pull <your_realm_app_id>realm-cli push --remote=<your_realm_app_id> --", "text": "Seeing your previous question, on this forum it appears you may be confusing the Atlas UI with the Atlas CLI, when in reality both provide the ability to modify the same underlying Atlas App which is essentially a series of special directories/files that Realm uses for its various roles. The Atlas UI will show the Realm App in a browser-viewable interface, but it is also possible to also modify the filesystem of the Realm App and push the directory up to Atlas using the Realm CLI. You view steps for downloading the CLI onto your machine here.Follow the steps below toCheck out the following page from the Atlas App Documentation for more details on updating Realm Apps from the CLI.", "username": "Cyrus_Ordoubadian" }, { "code": "", "text": "Thank you for your response, and I apologize for the confusion earlier. My concern is that I seem to only have the option to push all the functions in a batch, rather than updating a single function individually. While pushing all functions is not a problem in itself, there are scenarios where I would prefer to push just one function. Additionally, my colleagues might be working on different functions, so it would be more convenient if I could update a single function at a time.\nBut for now, push all the functions is enough for me.", "username": "Scoz_Auro" }, { "code": "", "text": "Hey Scot,\nJust following up here. It looks like selective pushes to realm is an open feature request according to this thread. However, if you would really like to have stronger version control, you may want to use git on a remote repository in order to reconcile changes between your team, and then pull/push the repo using the Realm CLI when ready for deployment. That would also give you more control over the specific files in each commit, which appears to be what you are interested in.", "username": "Cyrus_Ordoubadian" }, { "code": "", "text": "Just following up here. It looks like selective pushes to realm is an open feature request according to this thread. However, if you would really like to have stronger version control, you may want to use git on a remote repository in order to reconcile changes between your team, and then pull/push the repo using the Realm CLI when ready for deployment. That would also give you more control over the specific files in each commit, which appears to be what you are interested in.Got it, many thanks.", "username": "Scoz_Auro" }, { "code": "realm-clirealm-clirealm-clirealm-clirealm-clinpm install -g mongodb-realm-cli\nrealm-cli login\nrealm-cli select\nrealm-cli apps create\nrealm-clirealm-cli push --path /path/to/your/function\n/path/to/your/functionrealm-clirealm-clirealm-cli", "text": "As of my last knowledge update in September 2021, MongoDB Realm allows you to use the realm-cli to deploy various components of your application, including functions. 
However, the realm-cli primarily deals with deployment tasks and doesn’t have the capability to analyze the content or keywords within your functions. To deploy a single function using the realm-cli, you generally follow the steps shown in the commands above. If you’re creating a new app, you can use realm-cli apps create. Replace /path/to/your/function with the actual path to your function’s source code. Remember that the realm-cli doesn’t analyze the content of your function or look for specific keywords like “online earning.” It’s designed to manage deployment tasks and configurations for your MongoDB Realm app. If you’re looking to implement a function related to “online earning,” you would need to write the function code yourself and make sure it adheres to MongoDB Realm’s function requirements. The realm-cli will then help you deploy this function to your Realm app. Keep in mind that the tools and features provided by platforms like MongoDB Realm might have evolved since my last update in September 2021. I recommend checking the official MongoDB Realm documentation or resources for the most up-to-date information on using the realm-cli and its capabilities.", "username": "Hasan_Wajid" }, { "code": "--pathpush", "text": "\nThank you for your reply. I’ve just checked the document, and I didn’t find the --path option. It seems it’s not a valid option for the push command.", "username": "Scoz_Auro" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is that possible using realm-cli to push a single function to Realm?
2023-08-29T04:52:42.667Z
Is that possible using realm-cli to push a single function to Realm?
742
null
[ "server" ]
[ { "code": "", "text": "Is there any way to enable mongodb services, if the db services is running in the fork mode. There may be chance if the server reboot. so need to start manually in the fork mode", "username": "Samrat_Mehta" }, { "code": "", "text": "Hi @Samrat_MehtaWhat OS are you using and what installation method did you use?In general the answer is yes, the specifics are dependent on the answers to the above questions.", "username": "chris" }, { "code": "", "text": "NAME=“CentOS Linux”\nVERSION=“7 (Core)”\nID=“centos”\nID_LIKE=“rhel fedora”\nVERSION_ID=“7”\nPRETTY_NAME=“CentOS Linux 7 (Core)”\nANSI_COLOR=“0;31”\nCPE_NAME=“cpe:/o:centos:centos:7”\nHOME_URL=“https://www.centos.org/”\nBUG_REPORT_URL=“https://bugs.centos.org/”MONGO DB VERSION\n[root@localhost ~]# mongod --version\ndb version v5.0.20\nBuild Info: {\n“version”: “5.0.20”,\n“gitVersion”: “2cd626d8148120319d7dca5824e760fe220cb0de”,\n“openSSLVersion”: “OpenSSL 1.0.1e-fips 11 Feb 2013”,\n“modules”: ,\n“allocator”: “tcmalloc”,\n“environment”: {it was there since long back before I joined the organsation\nand normal method was used to install the mongodb and systemctl start mongod was not working . so they started using fork mode", "username": "Samrat_Mehta" }, { "code": "rpmverify mongodb-org-server", "text": "There was an issue with one release that affected the systemd unit. That was resolved in 5.0.16 so it should work fine now.Perhaps run rpmverify mongodb-org-server to check the packaged version of the systemd units.", "username": "chris" } ]
Is there any way to enable mongodb services, if the db services is running in the fork mode. There may be chance if the server reboot. so need to start manually in the fork mode
2023-08-28T13:49:29.656Z
Is there any way to enable mongodb services, if the db services is running in the fork mode. There may be chance if the server reboot. so need to start manually in the fork mode
404
null
[ "python", "spark-connector" ]
[ { "code": "", "text": "I run data pipelines in Airflow on GCP using PySpark. It was running perfectly before, but suddenly the error py4j.protocol.Py4JJavaError: An error occurred while calling o29.load.\n: org.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find the data source: mongodb appears. Are there any new updates about the MongoDB Spark Connector? What confuses me more is that when I run it in Dataproc, it runs perfectly.", "username": "Gigih_Haryo_Yudhanto" }, { "code": "", "text": "Which version of the MongoDB Spark Connector are you using? This error seems to be stemming from the missing Spark connector in the environment. How are you including the Spark connector - is it installed in the environment or something you are passing when invoking the Spark execution? Since you mentioned that the pipeline runs perfectly on Dataproc, but not on other environments, there might be a difference in the cluster configurations. Compare the configuration of your Dataproc cluster to the other Spark environment where you are encountering issues. Pay attention to the Spark version, the installed libraries, and the classpath configurations.", "username": "Prakul_Agarwal" } ]
py4j.protocol.Py4JJavaError: An error occurred while calling o29.load. : org.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find the data source: mongodb
2023-07-22T19:51:39.033Z
py4j.protocol.Py4JJavaError: An error occurred while calling o29.load. : org.apache.spark.SparkClassNotFoundException: [DATA_SOURCE_NOT_FOUND] Failed to find the data source: mongodb
897
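Following up on the DATA_SOURCE_NOT_FOUND thread above: that error typically means the MongoDB Spark Connector jar is not on the classpath of the environment that actually runs the job. A rough PySpark sketch of one way to pull the connector in when the session is created — the connection string is a placeholder, and the artifact suffix/version must match the Spark/Scala build of your cluster (in Airflow the package is often passed via spark-submit --packages instead, since spark.jars.packages only takes effect for a freshly created session):

from pyspark.sql import SparkSession

# Assumed coordinates: Spark built against Scala 2.12 and connector 10.x -- verify against your cluster.
spark = (
    SparkSession.builder
    .appName("mongo-read")
    .config("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.12:10.2.0")
    .config("spark.mongodb.read.connection.uri",
            "mongodb+srv://user:pass@cluster.example.mongodb.net/mydb.mycoll")
    .getOrCreate()
)

df = (
    spark.read.format("mongodb")
    .option("database", "mydb")
    .option("collection", "mycoll")
    .load()
)
df.printSchema()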
null
[ "java", "spark-connector", "scala" ]
[ { "code": "23/07/27 13:50:54 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1) (10.0.7.9 executor 0): java.lang.NoSuchMethodError: 'scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()'\n at com.mongodb.spark.sql.connector.schema.InternalRowToRowFunction.<init>(InternalRowToRowFunction.java:46)\n at com.mongodb.spark.sql.connector.schema.RowToBsonDocumentConverter.<init>(RowToBsonDocumentConverter.java:84)\n at com.mongodb.spark.sql.connector.write.MongoDataWriter.<init>(MongoDataWriter.java:74)\n at com.mongodb.spark.sql.connector.write.MongoDataWriterFactory.createWriter(MongoDataWriterFactory.java:53)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:459)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:448)\n at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:514)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:411)\n at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)\n at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)\n at org.apache.spark.scheduler.Task.run(Task.scala:139)\n at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)\n at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)\n at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)\n at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n at java.base/java.lang.Thread.run(Thread.java:833)\n\n23/07/27 13:50:54 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 2) (10.0.7.9, executor 0, partition 0, PROCESS_LOCAL, 7213 bytes)\n23/07/27 13:50:54 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 2) on 10.0.7.9, executor 0: java.lang.NoSuchMethodError ('scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()') [duplicate 1]\n23/07/27 13:50:54 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 3) (10.0.7.9, executor 0, partition 0, PROCESS_LOCAL, 7213 bytes)\n23/07/27 13:50:54 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 3) on 10.0.7.9, executor 0: java.lang.NoSuchMethodError ('scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()') [duplicate 2]\n23/07/27 13:50:54 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 4) (10.0.7.3, executor 1, partition 0, PROCESS_LOCAL, 7213 bytes)\n23/07/27 13:50:55 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 10.0.7.3:35857 (size: 6.4 KiB, free: 413.9 MiB)\n23/07/27 13:51:03 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 4) on 10.0.7.3, executor 1: java.lang.NoSuchMethodError ('scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()') [duplicate 3]\n23/07/27 13:51:03 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job\n23/07/27 13:51:03 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool\n23/07/27 13:51:03 INFO TaskSchedulerImpl: Cancelling stage 1\n23/07/27 13:51:03 INFO TaskSchedulerImpl: Killing all running tasks in stage 1: Stage cancelled\n23/07/27 13:51:03 INFO DAGScheduler: ResultStage 1 (save at Main.java:25) failed in 12.753 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 
times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4) (10.0.7.3 executor 1): java.lang.NoSuchMethodError: 'scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()'\n at com.mongodb.spark.sql.connector.schema.InternalRowToRowFunction.<init>(InternalRowToRowFunction.java:46)\n at com.mongodb.spark.sql.connector.schema.RowToBsonDocumentConverter.<init>(RowToBsonDocumentConverter.java:84)\n at com.mongodb.spark.sql.connector.write.MongoDataWriter.<init>(MongoDataWriter.java:74)\n at com.mongodb.spark.sql.connector.write.MongoDataWriterFactory.createWriter(MongoDataWriterFactory.java:53)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:459)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:448)\n at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:514)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:411)\n at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)\n at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)\n at org.apache.spark.scheduler.Task.run(Task.scala:139)\n at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)\n at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)\n at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)\n at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n at java.base/java.lang.Thread.run(Thread.java:833)\n\nDriver stacktrace:\n23/07/27 13:51:03 INFO DAGScheduler: Job 1 failed: save at Main.java:25, took 12.760984 s\n23/07/27 13:51:03 ERROR AppendDataExec: Data source write support com.mongodb.spark.sql.connector.write.MongoBatchWrite@52539624 is aborting.\n23/07/27 13:51:03 ERROR AppendDataExec: Data source write support com.mongodb.spark.sql.connector.write.MongoBatchWrite@52539624 failed to abort.\nException in thread \"main\" org.apache.spark.SparkException: Writing job failed.\n at org.apache.spark.sql.errors.QueryExecutionErrors$.writingJobFailedError(QueryExecutionErrors.scala:916)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:434)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:382)\n at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:248)\n at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:360)\n at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run$(WriteToDataSourceV2Exec.scala:359)\n at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.run(WriteToDataSourceV2Exec.scala:248)\n at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result$lzycompute(V2CommandExec.scala:43)\n at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.result(V2CommandExec.scala:43)\n at org.apache.spark.sql.execution.datasources.v2.V2CommandExec.executeCollect(V2CommandExec.scala:49)\n at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)\n at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)\n at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)\n at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)\n at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)\n at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)\n at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:98)\n at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:94)\n at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)\n at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)\n at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)\n at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)\n at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)\n at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)\n at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)\n at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)\n at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)\n at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:94)\n at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:81)\n at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:79)\n at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:133)\n at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:856)\n at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:311)\n at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:247)\n at org.example.Main.main(Main.java:25)\n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\n at java.base/java.lang.reflect.Method.invoke(Method.java:566)\n at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)\n at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1020)\n at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:192)\n at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:215)\n at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)\n at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1111)\n at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1120)\n at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)\nCaused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4) (10.0.7.3 executor 1): 
java.lang.NoSuchMethodError: 'scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()'\n at com.mongodb.spark.sql.connector.schema.InternalRowToRowFunction.<init>(InternalRowToRowFunction.java:46)\n at com.mongodb.spark.sql.connector.schema.RowToBsonDocumentConverter.<init>(RowToBsonDocumentConverter.java:84)\n at com.mongodb.spark.sql.connector.write.MongoDataWriter.<init>(MongoDataWriter.java:74)\n at com.mongodb.spark.sql.connector.write.MongoDataWriterFactory.createWriter(MongoDataWriterFactory.java:53)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:459)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:448)\n at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:514)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:411)\n at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)\n at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)\n at org.apache.spark.scheduler.Task.run(Task.scala:139)\n at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)\n at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)\n at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)\n at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n at java.base/java.lang.Thread.run(Thread.java:833)\n\nDriver stacktrace:\n at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2785)\n at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2721)\n at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2720)\n at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)\n at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)\n at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)\n at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2720)\n at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1206)\n at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1206)\n at scala.Option.foreach(Option.scala:407)\n at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1206)\n at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2984)\n at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2923)\n at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2912)\n at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)\n at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:971)\n at org.apache.spark.SparkContext.runJob(SparkContext.scala:2263)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:408)\n ... 45 more\n Suppressed: com.mongodb.spark.sql.connector.exceptions.DataException: Write aborted for: db33b953-e190-4af1-bfc8-2a30545a0967. 
0/1 tasks completed.\n at com.mongodb.spark.sql.connector.write.MongoBatchWrite.abort(MongoBatchWrite.java:91)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:429)\n ... 45 more\nCaused by: java.lang.NoSuchMethodError: 'scala.collection.immutable.Seq org.apache.spark.sql.types.StructType.toAttributes()'\n at com.mongodb.spark.sql.connector.schema.InternalRowToRowFunction.<init>(InternalRowToRowFunction.java:46)\n at com.mongodb.spark.sql.connector.schema.RowToBsonDocumentConverter.<init>(RowToBsonDocumentConverter.java:84)\n at com.mongodb.spark.sql.connector.write.MongoDataWriter.<init>(MongoDataWriter.java:74)\n at com.mongodb.spark.sql.connector.write.MongoDataWriterFactory.createWriter(MongoDataWriterFactory.java:53)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run(WriteToDataSourceV2Exec.scala:459)\n at org.apache.spark.sql.execution.datasources.v2.WritingSparkTask.run$(WriteToDataSourceV2Exec.scala:448)\n at org.apache.spark.sql.execution.datasources.v2.DataWritingSparkTask$.run(WriteToDataSourceV2Exec.scala:514)\n at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.$anonfun$writeWithV2$2(WriteToDataSourceV2Exec.scala:411)\n at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)\n at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)\n at org.apache.spark.scheduler.Task.run(Task.scala:139)\n at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)\n at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)\n at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)\n at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)\n at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)\n at java.base/java.lang.Thread.run(Thread.java:833)\nspark-submit --class org.example.Main --master spark://spark:7077 --packages org.mongodb.spark:mongo-spark-connector_2.13:10.2.0 app.jar\npackage org.example;\n\nimport org.apache.spark.sql.Dataset;\nimport org.apache.spark.sql.Row;\nimport org.apache.spark.sql.SparkSession;\n\nimport java.io.Serializable;\nimport java.util.Properties;\npublic class Main implements Serializable {\n public static void main(String[] args) {\n SparkSession spark = SparkSession.builder()\n .appName(\"PostgresToMongoDB\")\n .config(\"spark.mongodb.read.connection.uri\", \"mongodb://spark:spark@HOST:PORT/spark.users?authSource=admin\")\n .config(\"spark.mongodb.write.connection.uri\", \"mongodb://spark:spark@HOST:PORT/spark.users?authSource=admin\")\n .getOrCreate();\n\n Dataset<Row> usersDF = getAllUsers2(spark);\n usersDF.show();\n // Save data to MongoDB\n usersDF\n .write()\n .format(\"mongodb\")\n .option(\"uri\",\"mongodb://spark:spark@HOST:PORT/spark.users?authSource=admin\")\n .mode(\"append\")\n .save();\n spark.stop();\n }\n\n private static Dataset<Row> getAllUsers2(SparkSession sparkSession){\n return sparkSession\n .read()\n .format(\"jdbc\")\n .option(\"url\", \"jdbc:postgresql://PHOST:PPORT/spark\")\n .option(\"dbtable\", \"users\")\n .option(\"user\", \"spark\")\n .option(\"password\", \"spark\")\n .load();\n }\n}\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n 
<modelVersion>4.0.0</modelVersion>\n\n <groupId>org.example</groupId>\n <artifactId>sparksql</artifactId>\n <version>1.0-SNAPSHOT</version>\n\n <properties>\n <maven.compiler.source>11</maven.compiler.source>\n <maven.compiler.target>11</maven.compiler.target>\n <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n </properties>\n <build>\n <finalName>app</finalName>\n <plugins>\n <!-- Maven shade plug-in that creates uber JARs -->\n <plugin>\n <groupId>org.apache.maven.plugins</groupId>\n <artifactId>maven-shade-plugin</artifactId>\n <version>3.5.0</version>\n <executions>\n <execution>\n <phase>package</phase>\n <goals>\n <goal>shade</goal>\n </goals>\n </execution>\n </executions>\n </plugin>\n </plugins>\n </build>\n\n <dependencies>\n <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-core -->\n <dependency>\n <groupId>org.apache.spark</groupId>\n <artifactId>spark-core_2.13</artifactId>\n <version>3.4.1</version>\n </dependency>\n <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->\n <dependency>\n <groupId>org.apache.spark</groupId>\n <artifactId>spark-sql_2.13</artifactId>\n <version>3.4.1</version>\n <scope>provided</scope>\n </dependency>\n <!-- https://mvnrepository.com/artifact/org.postgresql/postgresql -->\n <dependency>\n <groupId>org.postgresql</groupId>\n <artifactId>postgresql</artifactId>\n <version>42.6.0</version>\n </dependency>\n <!-- https://mvnrepository.com/artifact/org.mongodb.spark/mongo-spark-connector -->\n <dependency>\n <groupId>org.mongodb.spark</groupId>\n <artifactId>mongo-spark-connector_2.13</artifactId>\n <version>10.2.0</version>\n </dependency>\n </dependencies>\n</project>\n", "text": "I am trying to submit a job to my standalone spark cluster and I am getting these errorsMy Spark submit commandMy java codemy pom.xml", "username": "Jaideep_C" }, { "code": "", "text": "This seems to be stemming from the Spark version not matching the version supported by of the MongoDB Spark Connector. Can you try with Spark version 3.2.4 please (instead of 3.4.1 that you seem to be using)?", "username": "Prakul_Agarwal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Getting java.lang.NoSuchMethodError: when trying to write to a collection via spark mongo connector
2023-07-27T13:58:24.256Z
Getting java.lang.NoSuchMethodError: when trying to write to a collection via spark mongo connector
1,058
null
[ "java", "python", "spark-connector", "scala" ]
[ { "code": "from Trademe_MongoDB.Credentials.Credentials import uri\nfrom datetime import datetime\n# from motor.motor_asyncio import AsyncIOMotorClient\nfrom Trademe_MongoDB.logger_config import Logger_config\nfrom pyspark.sql import SparkSession\n\n\n\nlogger = Logger_config().get_logger()\n\nspark = SparkSession.\\\nbuilder.\\\nappName(\"pyspark-notebook2\").\\\nconfig(\"spark.executor.memory\", \"1g\").\\\nconfig(\"spark.mongodb.read.connection.uri\", uri).\\\nconfig(\"spark.jars.packages\", \"org.mongodb.spark:mongo-spark-connector:10.0.3\").\\\ngetOrCreate()\n\ndf = spark.read.format(\"mongodb\").option('database', '1').option('collection', '2').load()\n\ndf.show()\n\n\n\nThe system cannot find the path specified.\nError: Missing application resource.\n\nUsage: spark-submit [options] <app jar | python file | R file> [app arguments]\nUsage: spark-submit --kill [submission ID] --master [spark://...]\nUsage: spark-submit --status [submission ID] --master [spark://...]\nUsage: spark-submit run-example [options] example-class [example args]\n\nOptions:\n --master MASTER_URL spark://host:port, mesos://host:port, yarn,\n k8s://https://host:port, or local (Default: local[*]).\n --deploy-mode DEPLOY_MODE Whether to launch the driver program locally (\"client\") or\n on one of the worker machines inside the cluster (\"cluster\")\n (Default: client).\n --class CLASS_NAME Your application's main class (for Java / Scala apps).\n --name NAME A name of your application.\n --jars JARS Comma-separated list of jars to include on the driver\n and executor classpaths.\n --packages Comma-separated list of maven coordinates of jars to include\n on the driver and executor classpaths. Will search the local\n maven repo, then maven central and any additional remote\n repositories given by --repositories. The format for the\n coordinates should be groupId:artifactId:version.\n --exclude-packages Comma-separated list of groupId:artifactId, to exclude while\n resolving the dependencies provided in --packages to avoid\n dependency conflicts.\n --repositories Comma-separated list of additional remote repositories to\n search for the maven coordinates given with --packages.\n --py-files PY_FILES Comma-separated list of .zip, .egg, or .py files to place\n on the PYTHONPATH for Python apps.\n --files FILES Comma-separated list of files to be placed in the working\n directory of each executor. File paths of these files\n in executors can be accessed via SparkFiles.get(fileName).\n --archives ARCHIVES Comma-separated list of archives to be extracted into the\n working directory of each executor.\n\n --conf, -c PROP=VALUE Arbitrary Spark configuration property.\n --properties-file FILE Path to a file from which to load extra properties. If not\n specified, this will look for conf/spark-defaults.conf.\n\n --driver-memory MEM Memory for driver (e.g. 1000M, 2G) (Default: 1024M).\n --driver-java-options Extra Java options to pass to the driver.\n --driver-library-path Extra library path entries to pass to the driver.\n --driver-class-path Extra class path entries to pass to the driver. Note that\n jars added with --jars are automatically included in the\n classpath.\n\n --executor-memory MEM Memory per executor (e.g. 
1000M, 2G) (Default: 1G).\n\n --proxy-user NAME User to impersonate when submitting the application.\n This argument does not work with --principal / --keytab.\n\n --help, -h Show this help message and exit.\n --verbose, -v Print additional debug output.\n --version, Print the version of current Spark.\n\n Spark Connect only:\n --remote CONNECT_URL URL to connect to the server for Spark Connect, e.g.,\n sc://host:port. --master and --deploy-mode cannot be set\n together with this option. This option is experimental, and\n might change between minor releases.\n\n Cluster deploy mode only:\n --driver-cores NUM Number of cores used by the driver, only in cluster mode\n (Default: 1).\n\n Spark standalone or Mesos with cluster deploy mode only:\n --supervise If given, restarts the driver on failure.\n\n Spark standalone, Mesos or K8s with cluster deploy mode only:\n --kill SUBMISSION_ID If given, kills the driver specified.\n --status SUBMISSION_ID If given, requests the status of the driver specified.\n\n Spark standalone, Mesos and Kubernetes only:\n --total-executor-cores NUM Total cores for all executors.\n\n Spark standalone, YARN and Kubernetes only:\n --executor-cores NUM Number of cores used by each executor. (Default: 1 in\n YARN and K8S modes, or all available cores on the worker\n in standalone mode).\n\n Spark on YARN and Kubernetes only:\n --num-executors NUM Number of executors to launch (Default: 2).\n If dynamic allocation is enabled, the initial number of\n executors will be at least NUM.\n --principal PRINCIPAL Principal to be used to login to KDC.\n --keytab KEYTAB The full path to the file that contains the keytab for the\n principal specified above.\n\n Spark on YARN only:\n --queue QUEUE_NAME The YARN queue to submit to (Default: \"default\").\n \n'w' is not recognized as an internal or external command,\noperable program or batch file.\nTraceback (most recent call last):\n File \"C:\\Projects\\Web projects\\Trademe_MongoDB\\Data analysis\\Load From Spark.py\", line 11, in <module>\n spark = SparkSession.\\\n File \"C:\\Projects\\Web projects\\venv\\lib\\site-packages\\pyspark\\sql\\session.py\", line 477, in getOrCreate\n sc = SparkContext.getOrCreate(sparkConf)\n File \"C:\\Projects\\Web projects\\venv\\lib\\site-packages\\pyspark\\context.py\", line 512, in getOrCreate\n SparkContext(conf=conf or SparkConf())\n File \"C:\\Projects\\Web projects\\venv\\lib\\site-packages\\pyspark\\context.py\", line 198, in __init__\n SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)\n File \"C:\\Projects\\Web projects\\venv\\lib\\site-packages\\pyspark\\context.py\", line 432, in _ensure_initialized\n SparkContext._gateway = gateway or launch_gateway(conf)\n File \"C:\\Projects\\Web projects\\venv\\lib\\site-packages\\pyspark\\java_gateway.py\", line 106, in launch_gateway\n raise RuntimeError(\"Java gateway process exited before sending its port number\")\nRuntimeError: Java gateway process exited before sending its port number\n\nProcess finished with exit code 1\n\n", "text": "Hi there, want to read from MongoDB Atlas data to Pyspark.The read script is from this forum:Throwing errors:Background: Download Pyspark not hadoop. Spark_home envrioments variabls set up, Java envrionment variables set up. Both can read from system path.", "username": "JJ_J" }, { "code": "", "text": "Hi @JJ_J ,How is the Spark cluster Hosted. 
Is this a self-hosted cluster or something like Databricks?\n“Java gateway process exited before sending its port number” - makes me think of a possible configuration issue with the Spark env. Can you verify that network access exists by connecting to the Atlas cluster directly from the Spark worker node?\nAlso, can you verify that you have pymongo installed in the environment?", "username": "Prakul_Agarwal" } ]
Troubleshoot MongoDB Altas To PySpark
2023-08-11T06:35:13.401Z
Troubleshoot MongoDB Altas To PySpark
525
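A small follow-up to the troubleshooting questions in the thread above: the two suggested checks — network access to Atlas, and the local Java/Spark setup behind the “Java gateway process exited” error — can be verified separately before involving Spark at all. A minimal sketch, assuming pymongo is installed and using a placeholder connection string:

import os
from pymongo import MongoClient

# 1) Confirm the machine can reach the Atlas cluster directly, independent of Spark.
uri = "mongodb+srv://user:pass@cluster.example.mongodb.net/?retryWrites=true"
client = MongoClient(uri, serverSelectionTimeoutMS=5000)
print(client.admin.command("ping"))   # raises an exception if Atlas is unreachable

# 2) Confirm the JVM that PySpark will try to launch is resolvable from this environment.
print("JAVA_HOME  =", os.environ.get("JAVA_HOME"))
print("SPARK_HOME =", os.environ.get("SPARK_HOME"))

If the ping succeeds but creating the SparkSession still fails with the Java gateway error, the problem is almost always the local Java/Spark installation rather than Atlas.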
null
[ "node-js" ]
[ { "code": "", "text": "Hi\nI am preparing to take my Associate Developer Exam.\nI have completed the NodeJS developer path, including the optional parts.\nI took the practice exam & got 64%, but the passing criteria is 76%.\nWhat I’m looking for is more practice resources based on CRUD operations & Indexes, as these two make up 51% & 17% of the question criteria. I’ve seen the topic suggestions for CRUD & everything else, but it would be nice to have more practice options.\nPlease suggest any resources for more practice questions.", "username": "Shahriar_Shatil" }, { "code": "", "text": "I asked chatGPT, providing it with examples from the practice exam. Passed the certification exam with ~85%", "username": "Emanuel_N_A1" }, { "code": "", "text": "Great suggestion man!\nThanks!", "username": "Shahriar_Shatil" }, { "code": "", "text": "The tricky thing is to validate that chatGPT is giving the correct answer…\nThere are options to hint chatGPT with docs to make sure that it uses the correct resources … but when you are that far then you most likely passed the needed level anyway \nSince chatGPT seemed to be helpful, I do not want to drive you away from using it – just use it with care\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Practice Suggestion for Associate Developer Exam
2023-08-08T11:49:41.534Z
Practice Suggestion for Associate Developer Exam
699
null
[ "aggregation" ]
[ { "code": "", "text": "Hi! I am using dashboard filters for my charts. I am curious whether the dashboard filtering happens before or after the aggregation pipeline I specified in the query bar of my charts. Can I get the aggregation pipeline with dashboard filtering included? Thanks!", "username": "Xinying_Hu" }, { "code": "", "text": "Hello @Xinying_Hu. Dashboard filters are processed before the query bar. More details here:\nBacking Aggregation Pipeline — MongoDB Charts", "username": "Avinash_Prasad" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does dashboard filter applies before or after the aggregation?
2023-08-30T00:19:06.652Z
Does dashboard filter applies before or after the aggregation?
368
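To make the ordering in the answer above concrete: because dashboard filters are processed before the query bar, the backing pipeline behaves roughly as if the filter were prepended as an extra $match in front of the chart’s query-bar stages (the linked Backing Aggregation Pipeline page shows the full composed pipeline). An illustrative sketch with made-up field names:

# What a dashboard filter on "region" effectively contributes:
dashboard_filter = {"$match": {"region": "EMEA"}}

# What was typed into the chart's query bar:
query_bar_pipeline = [
    {"$group": {"_id": "$product", "total": {"$sum": "$amount"}}},
]

# Approximate effective order: dashboard filter first, then the query-bar stages;
# the chart's own encoding stages are appended after these by Charts.
effective_pipeline = [dashboard_filter] + query_bar_pipeline
print(effective_pipeline)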
null
[ "backup", "field-encryption" ]
[ { "code": "", "text": "Hi Team,I am encountering errors while trying to restore encrypted data. Is there a specific reason for this issue, and could you provide guidance on how to successfully restore it? plz help2023-08-29T22:38:40.805+0530\tFailed: medicalRecord.patients: error restoring from dump/medicalRecord/patients.bson.gz: bulk write exception: write errors: [Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, +965 more errors…]Mongo version: 7.0we got while testing queryable encryption.", "username": "ram_Kumar3" }, { "code": "", "text": "Hello Ram, Can you give me more detail on what you are trying to do? For example, it looks like you had a collection with encrypted fields but then somehow lost the collection and now you are trying to rebuild that collection from a copy that you have stored somewhere. What method are you using? That error you are seeing is a server-side error that is preventing encrypted data from being written via a method that is not Queryable Encryption compatible. The list of Queryable Encryption compatible drivers is here.Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "Hi Cynthia_Braund,I have enabled queryable encryption on the cluster following the doc https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/shared-library/#std-label-qe-reference-shared-library. I have created queryable encryption for medicalRecord. patients collection for two fields. 
I have took the full dump and tried to restore it in anther cluster while doing restore I am getting this error.Sample doc:Enterprise red [direct: secondary] medicalRecord> db.patients.findOne()\n{\n_id: ObjectId(“64ecb389756b6a79f89983c1”),\npatientName: Binary(Buffer.from(“10bcd609889f8845e686820b9013d7962f021c64a50635131f55fc7f49d7de11a05fd0fb3743150dd8c7552c98832fd5c7c4320b4f22b70024f4fe10616f782c00d7a22d08812698551954bd9c71ebe5125f”, “hex”), 6),\npatientId: Binary(Buffer.from(“0e534ec25ecd744507aa0e9e7e204c5f1002a1cd16ef5f8b873caa2ab4acc7e67077442bb2a5b171c5d71790793350a71f3a8ff5dbc48538d18785465798e2340baa1e3c5c479a319835f101a97a1137e599c8d0b712f61c2f07a522d1a9755a632d76a7961dfe0d60681bf579f66fa0ccc46373bfbafb8a776c3da207af6119af2268a1ba43e01fa30bd49e544ce42e4bc5d0327f7383afb3d00c55e98dcb36a50e71a951f16e2f2b11df06d1ec788fbc38d8470b63f2d46615f82e777b51459b068232abe3dff51ad4248684765a1bb08a”, “hex”), 6),\npatientRecord: ‘yes’,\nsafeContent: [\nBinary(Buffer.from(“d0327f7383afb3d00c55e98dcb36a50e71a951f16e2f2b11df06d1ec788fbc38”, “hex”), 0)\n]\n}Mongo dump command:\nmongodump --host 127.0.0.1 --port 27040 --gzip --out ./dumpRestore command\nmongorestore --port 27043 --gzip --dir=dump --gzipRestoration error:2023-08-30T04:38:07.591+0530 using --dir flag instead of arguments2023-08-30T04:38:07.591+0530 using write concern: &{majority false 0}2023-08-30T04:38:07.607+0530 checking options2023-08-30T04:38:07.607+0530 dumping with object check disabled2023-08-30T04:38:07.607+0530 will listen for SIGTERM, SIGINT, and SIGKILL2023-08-30T04:38:07.608+0530 connected to node type: replset2023-08-30T04:38:07.608+0530 mongorestore target is a directory, not a file2023-08-30T04:38:07.608+0530 preparing collections to restore from2023-08-30T04:38:07.608+0530 using dump as dump root directory2023-08-30T04:38:07.608+0530 reading collections for database admin in admin2023-08-30T04:38:07.608+0530 found collection admin.system.version bson to restore to admin.system.version2023-08-30T04:38:07.608+0530 found collection metadata from admin.system.version to restore to admin.system.version2023-08-30T04:38:07.608+0530 adding intent for admin.system.version2023-08-30T04:38:07.608+0530 reading collections for database encryption in encryption2023-08-30T04:38:07.608+0530 found collection encryption.__keyVault bson to restore to encryption.__keyVault2023-08-30T04:38:07.608+0530 found collection metadata from encryption.__keyVault to restore to encryption.__keyVault2023-08-30T04:38:07.608+0530 adding intent for encryption.__keyVault2023-08-30T04:38:07.608+0530 reading collections for database medicalRecord in medicalRecord2023-08-30T04:38:07.608+0530 found collection medicalRecord.enxcol_.patients.ecoc bson to restore to medicalRecord.enxcol_.patients.ecoc2023-08-30T04:38:07.608+0530 found collection metadata from medicalRecord.enxcol_.patients.ecoc to restore to medicalRecord.enxcol_.patients.ecoc2023-08-30T04:38:07.608+0530 adding intent for medicalRecord.enxcol_.patients.ecoc2023-08-30T04:38:07.608+0530 found collection medicalRecord.enxcol_.patients.esc bson to restore to medicalRecord.enxcol_.patients.esc2023-08-30T04:38:07.608+0530 found collection metadata from medicalRecord.enxcol_.patients.esc to restore to medicalRecord.enxcol_.patients.esc2023-08-30T04:38:07.608+0530 adding intent for medicalRecord.enxcol_.patients.esc2023-08-30T04:38:07.608+0530 found collection medicalRecord.patients bson to restore to medicalRecord.patients2023-08-30T04:38:07.608+0530 found collection metadata from 
medicalRecord.patients to restore to medicalRecord.patients2023-08-30T04:38:07.608+0530 adding intent for medicalRecord.patients2023-08-30T04:38:07.608+0530 reading collections for database test in test2023-08-30T04:38:07.609+0530 found collection test.sample bson to restore to test.sample2023-08-30T04:38:07.609+0530 found collection metadata from test.sample to restore to test.sample2023-08-30T04:38:07.609+0530 adding intent for test.sample2023-08-30T04:38:07.609+0530 reading metadata for encryption.__keyVault from dump/encryption/__keyVault.metadata.json.gz2023-08-30T04:38:07.610+0530 reading metadata for medicalRecord.enxcol_.patients.ecoc from dump/medicalRecord/enxcol_.patients.ecoc.metadata.json.gz2023-08-30T04:38:07.610+0530 reading metadata for medicalRecord.enxcol_.patients.esc from dump/medicalRecord/enxcol_.patients.esc.metadata.json.gz2023-08-30T04:38:07.610+0530 reading metadata for medicalRecord.patients from dump/medicalRecord/patients.metadata.json.gz2023-08-30T04:38:07.610+0530 reading metadata for test.sample from dump/test/sample.metadata.json.gz2023-08-30T04:38:07.610+0530 finalizing intent manager with longest task first prioritizer2023-08-30T04:38:07.610+0530 restoring up to 4 collections in parallel2023-08-30T04:38:07.611+0530 starting restore routine with id=32023-08-30T04:38:07.611+0530 starting restore routine with id=12023-08-30T04:38:07.611+0530 starting restore routine with id=02023-08-30T04:38:07.611+0530 starting restore routine with id=22023-08-30T04:38:07.616+0530 restoring to existing collection medicalRecord.patients without dropping2023-08-30T04:38:07.616+0530 collection medicalRecord.patients already exists - skipping collection create2023-08-30T04:38:07.616+0530 restoring to existing collection medicalRecord.enxcol_.patients.ecoc without dropping2023-08-30T04:38:07.616+0530 collection medicalRecord.enxcol_.patients.ecoc already exists - skipping collection create2023-08-30T04:38:07.616+0530 restoring to existing collection medicalRecord.enxcol_.patients.esc without dropping2023-08-30T04:38:07.616+0530 collection medicalRecord.enxcol_.patients.esc already exists - skipping collection create2023-08-30T04:38:07.616+0530 restoring medicalRecord.patients from dump/medicalRecord/patients.bson.gz2023-08-30T04:38:07.617+0530 restoring to existing collection test.sample without dropping2023-08-30T04:38:07.617+0530 collection test.sample already exists - skipping collection create2023-08-30T04:38:07.617+0530 restoring medicalRecord.enxcol_.patients.esc from dump/medicalRecord/enxcol_.patients.esc.bson.gz2023-08-30T04:38:07.617+0530 restoring medicalRecord.enxcol_.patients.ecoc from dump/medicalRecord/enxcol_.patients.ecoc.bson.gz2023-08-30T04:38:07.617+0530 using 1 insertion workers2023-08-30T04:38:07.617+0530 restoring test.sample from dump/test/sample.bson.gz2023-08-30T04:38:07.617+0530 using 1 insertion workers2023-08-30T04:38:07.618+0530 using 1 insertion workers2023-08-30T04:38:07.618+0530 using 1 insertion workers2023-08-30T04:38:07.682+0530 finished restoring medicalRecord.patients (0 documents, 1000 failures)2023-08-30T04:38:07.682+0530 Failed: medicalRecord.patients: error restoring from dump/medicalRecord/patients.bson.gz: bulk write exception: write errors: [Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document 
with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, Cannot insert a document with field name safeContent, +965 more errors…]2023-08-30T04:38:07.682+0530 0 document(s) restored successfully. 1000 document(s) failed to restore.", "username": "ram_Kumar3" }, { "code": "", "text": "Hi Cynthia_Braund,Plz help", "username": "ram_Kumar3" }, { "code": "", "text": "Hi Ram,mongoDump/Restore and mongoImport/Export are not currently Queryable Encryption compatible. If you need to move data from one cluster to another you can use Compass or mongosh (shell) in encrypting mode to export and then import the data. We are investigating Queryable Encryption support for mongoDump/Restore and mongo Import/Export.Thanks,Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "Here are instructions on setting up Compass so that it is in encrypting mode - https://www.mongodb.com/docs/compass/current/connect/advanced-connection-options/in-use-encryption/", "username": "Cynthia_Braund" }, { "code": "", "text": "Hi Cynthia_Braund,Thanks for the info.", "username": "ram_Kumar3" } ]
Restoration for encrypted data is not happening
2023-08-29T17:27:15.901Z
Restoration for encrypted data is not happening
429
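On the restore thread above: since mongodump/mongorestore are not Queryable Encryption compatible, the data has to be moved through a client that is configured for Queryable Encryption on both ends (Compass or mongosh in encrypting mode, as suggested). For completeness, a very rough PyMongo sketch of the same idea — every setting below is a placeholder, and it assumes the destination already has access to the same key vault contents and a collection created with the same encryptedFields as the source:

from pymongo import MongoClient
from pymongo.encryption_options import AutoEncryptionOpts

# Placeholder KMS/key-vault settings -- these must match how the collection was originally encrypted.
kms_providers = {"local": {"key": b"\x00" * 96}}
key_vault_namespace = "encryption.__keyVault"

def encrypted_client(uri):
    opts = AutoEncryptionOpts(
        kms_providers,
        key_vault_namespace,
        crypt_shared_lib_path="/path/to/mongo_crypt_v1.so",  # Automatic Encryption Shared Library (placeholder path)
    )
    return MongoClient(uri, auto_encryption_opts=opts)

src = encrypted_client("mongodb://127.0.0.1:27040")
dst = encrypted_client("mongodb://127.0.0.1:27043")

# Reads decrypt on the source; inserts re-encrypt and rebuild QE metadata on the destination.
for doc in src["medicalRecord"]["patients"].find({}):
    doc.pop("__safeContent__", None)  # drop server-maintained metadata if it is present in the read
    dst["medicalRecord"]["patients"].insert_one(doc)

This is only a sketch of the approach, not a tested migration script; in particular the destination also needs the same wrapped data keys available in its key vault collection.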
null
[]
[ { "code": "Server permissions for this file ident have changed since the last time it was used (IDENT)A fatal error occured during client reset: 'Requested index 2 calling get() on set 'anotherschema.arrayfield' when max is 1'\"document_filters\": { \"read\": { \"roomId\": { \"$in\": \"%%user.custom_data.rooms\" } }, ... }anotherschema.arrayfieldinitiateClientReset...\nonError: async (session, error) => {\n if (error.name === \"ClientReset\" && realm) {\n const realmPath = realm.path;\n session.pause();\n Realm.App.Sync.initiateClientReset(app, realmPath);\n session.resume();\n }\n}\n...\n", "text": "Hi,I’m facing this error Server permissions for this file ident have changed since the last time it was used (IDENT) (1) when a usercustom data that is used in the permission roles is updated, which should handled by a ClientReset. But the problem is when client reset is triggered, I get an error when realm tries to perform the client reset. The error is A fatal error occured during client reset: 'Requested index 2 calling get() on set 'anotherschema.arrayfield' when max is 1' (2).The permission rule that uses the usercustom data is \"document_filters\": { \"read\": { \"roomId\": { \"$in\": \"%%user.custom_data.rooms\" } }, ... }The first error itself shouldn’t be a problem, since it will only cause a ClientReset, which is expected, but then while doing the client reset, the second error that doesn’t seem to make any sense to me. The error is in another schema that has nothing to do with any of the changes done, which is why I don’t really understand the error. I tried to find any part of the code trying to access anotherschema.arrayfield by index, but I’m guessing this is an internal code of realm initiateClientReset.This is my client reset handler:I would appreciate any help on this matter, I did find similar people with this error, but could only find people with the first error, but not the second.", "username": "Rossicler_Junior" }, { "code": "anotherschema.arrayfieldA fatal error occurred during client reset: Relationship object not found at schema.field", "text": "After some more debugging I found out that the issue it’s because anotherschema.arrayfield is a relationship field, and the array has an ObjectId from a object that doesn’t exist anymore.It’s still weird how this error is shown, since login and sync works fine with this, but once it tries to perform a ClientReset it will throw this error. Ideally realm would handle non-existing relationships, or at least throw an error when this happens with a more meaningful message. I would imagine something like A fatal error occurred during client reset: Relationship object not found at schema.field. Either way, thanks for all the support .", "username": "Rossicler_Junior" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Solution for A fatal error occured during client reset: 'Requested index 2 calling get() on set 'myschema.arrayfield' when max is 1'
2023-08-28T17:26:19.743Z
Solution for A fatal error occured during client reset: &lsquo;Requested index 2 calling get() on set &lsquo;myschema.arrayfield&rsquo; when max is 1&rsquo;
435
https://www.mongodb.com/…0_2_1024x513.png
[ "python" ]
[ { "code": "", "text": "\nimage1131×567 33.5 KB\nThe system is unable to highlight the correct answer. It claims both are correct (in spite of the ‘choose 1’ condition); however, the correct answer is A.", "username": "Prnam" }, { "code": "", "text": "I am having the same problem as you. The correct answer should be A, as you mentioned.", "username": "Alvaro_R" }, { "code": "", "text": "Same issue for me too.\nimage1199×601 17.7 KB\n", "username": "ajinkya_shidhore1" }, { "code": "", "text": "Hi all,\nThanks for highlighting this. We have updated the question to reflect the right option.\nIn case of any further questions, feel free to reach out.\nRegards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Incorrect highlight of correct answer in practice test
2023-07-25T14:21:47.615Z
Incorrect highlight of correct answer in practice test
822
null
[]
[ { "code": "name[\n {\n _id: \"ABC\",\n name: [\n \"foo\",\n \"bar\",\n \"lorem\",\n \"ipsum\"\n ]\n },\n {\n _id: \"DEF\",\n name: [\n \"bar\",\n \"foo\",\n \"something\"\n ]\n }\n]\ndb.collection.update({\n name: {\n $type: \"array\"\n }\n},\n[\n {\n $set: {\n \"name\": \"new name here\"\n }\n }\n],\n{\n multi: true\n})\n", "text": "I want to replace the field name with the first element of its current value. Below is an example database. The name for id ABC, for example, should have its name field changed to “foo”, and DEF should become “bar”. Below is the query I have so far.", "username": "Curtis_L" }, { "code": "db.getCollection(\"Test\").updateMany(\n{},\n[\n {\n $set:{\n name:{\n $arrayElemAt:['$name', 0]\n }\n }\n }\n]\n)\n", "text": "You can make use of $arrayElemAt. Something like this?", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
$set field to first element of its previous array value
2023-08-30T13:34:32.150Z
$set field to first element of its previous array value
196
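The same pipeline-style update from the answer above can be issued from PyMongo; on MongoDB 4.4+ the $first operator is a convenient shorthand for $arrayElemAt with index 0. A small sketch assuming the example documents from the post and a local test collection:

from pymongo import MongoClient

coll = MongoClient("mongodb://localhost:27017")["test"]["collection"]

# Pipeline-form update (MongoDB 4.2+): only touch documents where `name` is still an array,
# and overwrite it with its first element.
coll.update_many(
    {"name": {"$type": "array"}},
    [{"$set": {"name": {"$first": "$name"}}}],
)

print(coll.find_one({"_id": "ABC"}))   # -> {'_id': 'ABC', 'name': 'foo'}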
null
[ "dot-net", "storage" ]
[ { "code": "", "text": "Hello! We have a problem with our MongoDB service, that crashes constantly after running a short or very short period. It is running on windows server with 16Gb of Ram (which does not seem to be saturated). The error is the following :{“t”:{“$date”:“2023-07-20T13:17:49.845-03:00”},“s”:“E”, “c”:“WT”, “id”:22435, “ctx”:“conn20”,“msg”:“WiredTiger error message”,“attr”:{“error”:12,“message”:{“ts_sec”:1689869869,“ts_usec”:845107,“thread”:“128:140718687146976”,“session_dhandle_name”:“file:collection-7–7905082630262738936.wt”,“session_name”:“WT_CURSOR.next”,“category”:“WT_VERB_DEFAULT”,“category_id”:9,“verbose_level”:“ERROR”,“verbose_level_id”:-3,“msg”:“int __cdecl __realloc_func(struct __wt_session_impl *,unsigned __int64 *,unsigned __int64,bool,void *):134:memory allocation of 662780 bytes failed”,“error_str”:“Not enough space”,“error_code”:12}}}We have tried to change the configuration of WireTiger to limit memory, but no success\nwiredTiger:\nengineConfig:\ncacheSizeGB: 2Anyone having an idea of the source of the problem and how to fix it?Thanks a lot!", "username": "Nico_R1" }, { "code": "", "text": "memory allocation of 662780 bytes failedFailed to allocate only 1MB of mem?\nwhat’s your memory usage rate?", "username": "Kobe_W" }, { "code": "", "text": "Hello\nRate is around 70-75% of 16Gb.", "username": "Nico_R1" }, { "code": "", "text": "Hi @Nico_R1 welcome to the community!Anecdotally, I’ve seen a similar error message when MongoDB is running inside a Docker environment. There was an issue where older versions of MongoDB would detect the system’s RAM instead of the container’s RAM, and would be stopped from acquiring memory. Since this situation is outside of WiredTiger’s control (requesting memory but refused by the OS), it will raise an exception similar to this one. This was fixed in SERVER-16571.If you’re not using the latest supported version of MongoDB, could you try again with the latest version and see if this is still happening?If this still happens, could you post your MongoDB version, your deployment environment, and steps that can reliably produce this error?Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Hello! If it can help others, here is what we did to avoid this MongoDB service crash. In the advanced system settings, we have allowed the system to create a paging file for the virtual memory when needed. It seems that the crash were happening when no more Virtual Memory was available for the system. In that case, the system was not allowed to swap on the disk, and Mongo DB was stopped when failing to allocate memory. I must say that we are not 100% sure whether it is the true reason why the problem happened, neither if this solution will fix it definately… But, since that change in the settings, it works fine, without any impact on the system performances. No more issue for the last month (when it used to happen several times a day before the fix). Here is a screenshot to explain what has been done :\nimage1334×824 189 KB\nPS: If our conclusion is right (crash due to lack of virtual memory) this behaviour of MongoDB would be worth being changed in a future release. It would be better to downgrade a bit the performances the time virtual memory is released, rather than crashing the system.", "username": "Nico_R1" } ]
MongoDB services crashes unexpectedly / WiredTiger error message
2023-07-20T16:36:53.052Z
MongoDB service crashes unexpectedly / WiredTiger error message
766
null
[ "queries", "python", "compass" ]
[ { "code": "", "text": "hi!\nI want to get all the documents which occured after 8th august 2023 format saved in my document is “mm/dd/yyyy hh:mm:ss AM” in mongodb collection orderstatusExample:\ndate_time: 12/31/2023 07:58:32 PMi am trying ISO and newDate query but they are not working. Please recommend me a solution.\nI want to get these documents in mongo compass and python as well.Your help will be really appreciated.", "username": "Bisma_Nazir" }, { "code": "", "text": "Can you paste in exacly what a document looks like and what you have tried?", "username": "John_Sewell" }, { "code": "", "text": "The major issue is that you stored your dates in the format“mm/dd/yyyy hh:mm:ss AM”With this format you cannot compared 2 dates and determine directly if one is before the other. You need to convert the stored date into the ISO format before you are able to compare, sort, … That will be slow since you will compare on a computed field.Please recommend me a solution.The only smart thing to do is to migrate your data to ISO date.", "username": "steevej" }, { "code": "", "text": "I want to save time as my dataset is so large and it will take alot og time to convert the date format and then find my required data out of my converted data.", "username": "Bisma_Nazir" }, { "code": "", "text": "This is my document\n{\n“_id”: {\n“$oid”: “63c933d4cf7b48bfa00875g1”\n},“date_time”: “01/18/2023 03:16:02 PM”, // date format is mm/dd/yyyy hh/mm/ss AM\n“order_no”: “63942369”,\n}I want to get all the documents that occured after 8th august 2023i tried ISODate and new Date query with various styles and formats but didn’t get the right answer.", "username": "Bisma_Nazir" }, { "code": "", "text": "I think you’re out of luck using that date_time field as stevej pointed out, it’s in a horrible format for any date comparrison.Going forward I think the best option would be to convert them all to dates and store it as an actual date going forward. This will be searchable and also require less space for storage.An alternative may be to use the object id, this has the date time embedded within it, so assuming you’re just pushing data into the server without doing anything funky with the objectIds (i.e. letting the driver or server generate them as opposed to hand crafting them) you could find the document ID that was created on that date, and get anything with an ID higher.\nSee:", "username": "John_Sewell" }, { "code": "", "text": "I want to save time as my dataset is so large and it will take alot og time to convert the date formatYour date format is so bad for your use case that you cannot save time.You have 2 choices1 - permanently convert and migrate your data to an appropriate date format and waste time once for the migration2 - dynamically convert your data to an appropriate date format and waste time every time you query your dataWHY because your date format does not have a natural chronological order so there is not way to say find date_time greater than 8th august 2023. Because in your date format 9th august 2022 is greater than 8th august 2023.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error querying data on a string date field like date_time: 12/31/2023 07:58:32 PM
2023-08-29T18:27:03.556Z
Error querying data on a string date field like date_time: 12/31/2023 07:58:32 PM
406
null
[]
[ { "code": "", "text": "Hi currently i am working on a personal project where i want to implement mongodb, i am trying to create a collection with a document which has a role field (customer, seller, administrator) but all these tree users have extra fields that make them different so i am trying to know what should i do maybe creating 3 different collections? but if that so i am also having an issue where i have a collection that has movement of the items sold in which it has the customer and the seller so i dont know what should i do i can do", "username": "Jurgen_Ruegenberg_Buezo" }, { "code": "", "text": "I just need some help please", "username": "Jurgen_Ruegenberg_Buezo" }, { "code": "", "text": "Please be a little bit more patient.Most of us are not in the same timezone as you.Most of us are voluntary and are not here full time because we are here during lunch breaks, before work or after work.With MongoDB and its flexible schema, you can certainly have all your users within the same collection despite the fact that they have different fields. With MongoDB you have the choice. You could created 3 different collections. My preference is to start with 1 collection for simplicity. Most likely I would implement the attribute pattern for the extra fields.Learn about the Attribute Schema Design pattern in MongoDB. This pattern is used to target similar fields in a document and reducing the number of indexes.", "username": "steevej" }, { "code": "", "text": "ok i understand sorry.But as i read the blog that attribute pattern is more likely to use to sort data or i didnt understand?, and how about if i implement the Polymorphic Patterns", "username": "Jurgen_Ruegenberg_Buezo" }, { "code": "", "text": "if i implement the Polymorphic PatternsVery good choice. More appropriate in certain situations.As for the attribute pattern, for sorting, for indexing and searching. It is more or less finer-grain polymorphic. For example, an inventory: metal, color, size, dimensions, textile could be attributes, so items of the inventory can have some of the attributes. Shoes will have color=Red, size=46, textile=leather. A wrench will have metal=vanadium and size=12, …I find, polymorphic easier to implement (no array manipulations) but it is less flexible.You can have a mix of the two and that would be legitimate.", "username": "steevej" }, { "code": "", "text": "Sorry i have an extra question maybe in my users how should i validate the roles and use the fields that should use i cant figure that out i am still learning, the blog does not specify how to differentiate this, like maybe the logic would be role: “customer”, and besides the common fields like name, how should i change the pre schema to use only the fields that i want and not all that it has", "username": "Jurgen_Ruegenberg_Buezo" }, { "code": "{\n role : \"customer\" ,\n name : \"steevej\" ,\n extra : {\n credit_card : \"xxxx xxxx xxxx\" ,\n interest : [ \"mongodb\" , \"sailing\" ]\n }\n}\n{\n role : \"seller\" ,\n name : \"jurgen_r_b\" ,\n extra : {\n products : [ \"shoes\" , \"phone\" , ... ]\n }\n}\n", "text": "I do not know what is the pre schema?role: “customer”Yes you really need a field that specify which type of data that is stored in the given document.You can put extra fields in a sub-document. 
It could be named extra like", "username": "steevej" }, { "code": "", "text": "sorry i say as pre schema as predefined, but like maybe i put a new user with role: “customer”, that subdocument i just pass maybe the credit_car and interest field but what about the others field if i dont pass the other fields are they gone be null?", "username": "Jurgen_Ruegenberg_Buezo" }, { "code": "", "text": "If using native mongo, the extra fields will not exist if you do not specify it.If using an abstraction layer like mongoose, I do not know.", "username": "steevej" }, { "code": "", "text": "Dear @Shadow_Fight,You might wonder why I started following you. Simple. You are a spammer. I flagged your last post as spam and all the others I could find in this forum. You have been good in hiding your spamming activity so far. But by following you I will get noticed as soon as you write.", "username": "steevej" } ]
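A rough mongosh illustration of the single-collection, role-discriminator approach described above; the collection name users is assumed:

```javascript
// Documents share a "role" discriminator and keep role-specific data in an
// "extra" sub-document. Fields you do not supply simply do not exist on that
// document - native MongoDB does not add implicit nulls.
db.users.insertMany([
  { role: "customer", name: "alice", extra: { interest: ["mongodb", "sailing"] } },
  { role: "seller",   name: "bob",   extra: { products: ["shoes", "phone"] } }
]);

// Filtering on the discriminator keeps each role's queries simple:
db.users.find({ role: "customer" });
```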
I want to create a user collection with a role field
2022-07-11T19:13:21.669Z
I want to create a user collection with a role field
2,618
null
[]
[ { "code": "", "text": "What are the best practices to configure storage class in cloud platfroms for mongodb enterprise ?", "username": "Sandeep_Mutkule" }, { "code": "", "text": "Looking for some suggestions on this ", "username": "Sandeep_Mutkule" } ]
What are the best practices to configure storage class in cloud platforms for MongoDB Enterprise
2023-08-24T05:43:13.128Z
What are the best practices to configure storage class in cloud platforms for MongoDB Enterprise
411
null
[ "unity" ]
[ { "code": "", "text": "I’m developing a game using Unity and have several specific requirements related to data access:Can these requirements be met using the MongoDB Realm Unity SDK? Or is there a more appropriate approach to achieve this?", "username": "Scoz_Auro" }, { "code": "", "text": "Yes, this is entirely possible - you can define fine grained permissions that will be applied whenever a write comes from the client allowing you to flat our reject all writes via Atlas Device Sync or allow only some of them based on rule expressions. Invoking functions can also be done directly via the Unity SDK.", "username": "nirinchev" }, { "code": "", "text": "Many thanks, I will give it a try.", "username": "Scoz_Auro" } ]
Configuring MongoDB Realm for Unity with Specific Access Requirements
2023-08-30T05:07:25.679Z
Configuring MongoDB Realm for Unity with Specific Access Requirements
335
null
[ "cxx", "c-driver" ]
[ { "code": "A breakpoint instruction Error Critical error detected c0000374\nA breakpoint instruction (__debugbreak() statement or a similar call) was executed in mongoDBReleaseTesting.exe.\n\nThe program '[20728] mongoDBReleaseTesting.exe' has exited with code 0 (0x0).\nFile: C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.35.32215\\include\\xmemory\nLine: 255\n", "text": "Building application in Release mode in Visual Studio 2022 fails.\nFollowed this tutorial Getting Started with MongoDB and C++ | MongoDBWorks great for Debug x64 mode, but switching it to Release mode fails(i have changed the project settings for release mode, just like it was done for Debug mode in that tutorial)\nRunning gives an A breakpoint instruction Error \nError:Visual Studio shows error in xmemory file", "username": "Abhay_More" }, { "code": "", "text": "Hi Abhay,If you have updated all the config for release mode, I reckon the code in the shared tutorial should work fine.\nHave you changed the code being executed? Can you share the callstack & code?", "username": "Rishabh_Bisht" }, { "code": "No suitable servers found (`serverSelectionTryOnce` set): [Server closed connection. [Server closed connection. calling hello on ac-3***]: genetic server error.\n#include <mongocxx/client.hpp>\n#include <bsoncxx/builder/stream/document.hpp>\n#include <bsoncxx/json.hpp>\n#include <mongocxx/uri.hpp>\n#include <mongocxx/instance.hpp>\n#include <algorithm>\n#include <iostream>\n#include <vector>\n#include <mongocxx/exception/operation_exception.hpp>\n\nusing namespace std;\n\nstatic const mongocxx::uri mongoURI(\"mongodb+srv://<userName>:<password>@<dbName>.kobcgfr.mongodb.net/?retryWrites=true&w=majority\");\n// Get all the databases from a given client.\nvector<string> getDatabases(mongocxx::client& client)\n{\n\t\tvector<string> cldn = client.list_database_names();\n\t\treturn cldn;\n\t}\nint main()\n{\n\t// Create an instance.\n\n\tmongocxx::instance inst{};\n\tmongocxx::options::client client_options;\n\tauto api = mongocxx::options::server_api{ mongocxx::options::server_api::version::k_version_1 };\n\tclient_options.server_api_opts(api);\n\tmongocxx::client conn{ mongoURI, client_options };\n\n\tcout << \"password: \" << conn.uri().password() << std::endl;\n\tcout << \"username: \" << conn.uri().username() << std::endl;\n\tcout << \"auth_source: \" << conn.uri().auth_source() << std::endl;\n\n\tauto dbs = getDatabases(conn);\n\tfor (auto& db : dbs)\n\t{\n\t\tcout << db << endl;\n\t}\n\tcin.get();\n\treturn 0;\n}\nntdll.dll!00007ffc38aef3d2()\nntdll.dll!00007ffc38af8192()\nntdll.dll!00007ffc38af847a()\nntdll.dll!00007ffc38afe101()\nntdll.dll!00007ffc38a97482()\nntdll.dll!00007ffc38a147b1()\nucrtbase.dll!00007ffc3663f05b()\n[Inline Frame] mongoDBReleaseTesting.exe!std::_Deallocate(void * _Ptr, unsigned __int64 _Bytes) Line 255\n\tat C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.35.32215\\include\\xmemory(255)\n[Inline Frame] mongoDBReleaseTesting.exe!std::allocator<char>::deallocate(char * const _Count, const unsigned __int64) Line 829\n\tat C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.35.32215\\include\\xmemory(829)\n[Inline Frame] mongoDBReleaseTesting.exe!std::string::_Tidy_deallocate() Line 5019\n\tat C:\\Program Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.35.32215\\include\\xstring(5019)\n[Inline Frame] mongoDBReleaseTesting.exe!std::string::{dtor}() Line 3270\n\tat C:\\Program 
Files\\Microsoft Visual Studio\\2022\\Community\\VC\\Tools\\MSVC\\14.35.32215\\include\\xstring(3270)\nmongoDBReleaseTesting.exe!main() Line 49\n\tat D:\\WorkSpace\\VisualStudio2017\\Projects\\c++ playground\\mongoDBReleaseTesting\\mongoDBReleaseTesting.cpp(49)\n[External Code]\n", "text": "No i havent changed the code.\nAlthough at first the provided code(connection to database) didnt even ran for Debug as well, error wasFollowed this Fatal error building C++ drivers with VS 2022 under Win10 - are errors OK? - #2 by david_d\nDid some tinkering and it worked.Heres the code that currently works in Debug x64 mode.Heres the call stack :", "username": "Abhay_More" }, { "code": "", "text": "This seems like a config issue, given that debug mode works for you. This code should work (I tried it locally).Can you try comparing the config you set for debug and release? You should be able to find the config file if you go to the folder where your solution is saved - and look for .vcxproj file. You can open it in a text editor and compare the release vs debug config.\nAlso ensure that you’re rebuilding the code.", "username": "Rishabh_Bisht" }, { "code": "conn = mongocxx::client(mongoURI, client_options);1. A breakpoint instruction (__debugbreak() statement or a similar call) was executed in Project.exe.2. Exception thrown at 0x00007FFCBF916F95 (ntdll.dll) in Project.exe: 0xC0000005: Access violation reading location 0x0000000000000000.3. Unhandled exception at 0x00007FFCBF96F449 (ntdll.dll) in Project.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFCBF9D97F0).#pragma once\n\n#include <cstdint>\n#include <string>\n#include <iostream>\n#include \"../Header Files/Config.h\"\n\n#include \"bsoncxx/builder/stream/document.hpp\"\n#include \"bsoncxx/json.hpp\"\n#include \"bsoncxx/oid.hpp\"\n#include \"mongocxx/client.hpp\"\n#include \"mongocxx/database.hpp\"\n#include \"mongocxx/uri.hpp\"\n#include <mongocxx/exception/operation_exception.hpp>\n\nnamespace learning {\n\tconst mongocxx::uri mongoURI(mongodb_uri);\n\tconst std::string dbName = databaseName;\n\tconst std::string collName = colectionName;\n\tclass MongoDB {\n\tpublic:\n\t\tMongoDB();\n\t\tvoid insertDocument(const bsoncxx::document::value document);\t\t\n\t\tstd::tuple<std::string, std::string, std::string> findDocument(const std::string& value);\n\t\tbool isDataPresent(const std::string& key, const std::string& value);\n\t\tint findScore(const std::string& value);\n\t\tvoid updateDocument(const std::string& key, const int& value, const std::string& newKey, const int& newValue);\n\t\tstd::vector<std::pair<std::string, int>> getTopScores(int limit);\t\t\t\t\n\tprivate:\n\t\tmongocxx::options::client client_options;\n\t\tmongocxx::options::server_api api;\n\t\tmongocxx::client conn;\n\t\tmongocxx::v_noabi::database ammpedUPDB;\n\t\tmongocxx::v_noabi::collection loginInfoCollection;\n\t};\n}\n#include \"../Header Files/MongoDB.h\"\n\nlearning::MongoDB::MongoDB() :\n\tapi(mongocxx::options::server_api::version::k_version_1)\n{\n\tclient_options.server_api_opts(api);\n\tconn = mongocxx::client(mongoURI, client_options);\n\tammpedUPDB = conn.database(dbName);\n\tloginInfoCollection = ammpedUPDB.collection(collName);\n}\n\nvoid learning::MongoDB::insertDocument(const bsoncxx::document::value document)\n{\n\tloginInfoCollection.insert_one(document.view());\n}\n\nstd::tuple<std::string, std::string, std::string> learning::MongoDB::findDocument(const std::string& value)\n{\n\tstd::string key;\n\tif (value.find('@') != std::string::npos) 
{\n\t\t// Contains '@' symbol, so it looks like an email\n\t\tkey = \"email\";\n\t}\n\telse {\n\t\t// Doesn't contain '@', so it looks like a username\n\t\tkey = \"username\";\n\t}\n\tauto filter = bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize;\n\t// Add query filter argument in find\n\tauto cursor = loginInfoCollection.find({ filter });\n\n\tif (cursor.begin() == cursor.end()) {\n\t\treturn { \"\", \"\", \"\" };; // No data found\n\t}\n\n\t// Extract the first document from the cursor\n\tauto document = *cursor.begin();\n\n\t// Extract the individual components of the retrieved data\n\tstd::string retrievedUsername = std::string(document[\"username\"].get_string().value);\n\tstd::string retrievedEmail = std::string(document[\"email\"].get_string().value);\n\tstd::string retrievedPassword = std::string(document[\"pwd\"].get_string().value);\n\n\treturn { retrievedUsername, retrievedEmail, retrievedPassword };\n}\n\nbool learning::MongoDB::isDataPresent(const std::string& key, const std::string& value)\n{\n\t// Create the query filter\n\tauto filter = bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize;\n\t// Add query filter argument in find\n\tauto cursor = loginInfoCollection.find({ filter });\n\treturn (cursor.begin() != cursor.end());\n}\n\nint learning::MongoDB::findScore(const std::string& value)\n{\n\tauto filter = bsoncxx::builder::stream::document{} << \"username\" << value << bsoncxx::builder::stream::finalize;\n\tmongocxx::options::find opts;\n\topts.projection(bsoncxx::builder::basic::make_document(bsoncxx::builder::basic::kvp(\"score\", 1)));\n\tauto cursor = loginInfoCollection.find({ filter }, opts);\n\tfor (auto&& doc : cursor)\n\t{\n\t\tbsoncxx::document::element curvename = doc[\"score\"];\n\t\tint score = curvename.get_int32().value;\n\t\treturn score;\n\t}\n}\n\nvoid learning::MongoDB::updateDocument(const std::string& key, const int& value, const std::string& newKey, const int& newValue)\n{\n\tloginInfoCollection.update_one(bsoncxx::builder::stream::document{} << key << value << bsoncxx::builder::stream::finalize,\n\t\tbsoncxx::builder::stream::document{} << \"$set\" << bsoncxx::builder::stream::open_document << newKey << newValue << bsoncxx::builder::stream::close_document << bsoncxx::builder::stream::finalize);\n}\n\nstd::vector<std::pair<std::string, int>> learning::MongoDB::getTopScores(int limit)\n{\n\tstd::vector<std::pair<std::string, int>> topScores;\n\tmongocxx::options::find opts;\n\topts.sort(bsoncxx::builder::basic::make_document(\n\t\tbsoncxx::builder::basic::kvp(\"score\", -1)));\n\topts.limit(limit);\n\tauto cursor = loginInfoCollection.find({}, opts);\n\tfor (auto&& doc : cursor) {\n\t\tstd::string username = std::string(doc[\"username\"].get_string().value);\n\t\tint score = doc[\"score\"].get_int32().value;\n\n\t\ttopScores.emplace_back(username, score);\n\t}\n\treturn topScores;\n}\n", "text": "Hey sorry couldn’t get back to you early.\nAlthough i tried what you said(checked again, did the whole process again)…but it didnt work atleast for my project which i made just to test the release mode.\nFor my project where i actually want to use mongoDB. I did check the config files of it, did the process again, and it worked, atleast the project ran in the release mode…but only for few seconds. It failed in my mongoDB handler class. 
The instance of mongoDB is being created using a singleton class(as i want to use this handler class in various different places and want to see if the instance is already created or not).The error happens on line 7 of MongoDB.cpp file\nconn = mongocxx::client(mongoURI, client_options);\nError says :\n1. A breakpoint instruction (__debugbreak() statement or a similar call) was executed in Project.exe.\n2. Exception thrown at 0x00007FFCBF916F95 (ntdll.dll) in Project.exe: 0xC0000005: Access violation reading location 0x0000000000000000.\n3. Unhandled exception at 0x00007FFCBF96F449 (ntdll.dll) in Project.exe: 0xC0000374: A heap has been corrupted (parameters: 0x00007FFCBF9D97F0).MongoDB.h fileMongoDB.cpp file", "username": "Abhay_More" }, { "code": "", "text": "I could reproduce the problem. I suspect the singleton implementation may not be working correctly.Hypothesis:Suggestions", "username": "Rishabh_Bisht" }, { "code": " \tvcruntime140d.dll!00007ff84ba51340()\tUnknown\n \tbsoncxx.dll!00007ff83c351798()\tUnknown\n \tbsoncxx.dll!00007ff83c349077()\tUnknown\n \tbsoncxx.dll!00007ff83c34aefc()\tUnknown\n \tbsoncxx.dll!00007ff83c34a873()\tUnknown\n \tbsoncxx.dll!00007ff83c354e17()\tUnknown\n \tmongocxx.dll!00007ffff0a1f7aa()\tUnknown\n \tmongocxx.dll!00007ffff0a6ed53()\tUnknown\n>\tproject.exe!learning::MongoDB::MongoDB() Line 23\tC++\n \tproject.exe!LoginPageState::LoginPageState(std::shared_ptr<Context> & context) Line 7\tC++\n \tproject.exe!LoginState::update(sf::Time deltaTime) Line 168\tC++\n \tproject.exe!main() Line 6\tC++\n \t[External Code]\t\n\n#pragma once\n#include <mongocxx/instance.hpp>\nclass MongoInstance {\nprivate:\n static MongoInstance* m_instance;\n mongocxx::instance instance;\n // Private constructor\n MongoInstance();\npublic:\n // Deleted copy constructor and assignment operator to enforce singleton\n MongoInstance(const MongoInstance&) = delete;\n MongoInstance& operator=(const MongoInstance&) = delete;\n ~MongoInstance();\n static MongoInstance* getInstance();\n};\n#include \"../Header Files/MongoInstanceManager.h\"\nMongoInstance* MongoInstance::m_instance = nullptr;\nMongoInstance::MongoInstance() : instance() {}\nMongoInstance::~MongoInstance() {}\nMongoInstance* MongoInstance::getInstance() {\n if (!m_instance) {\n if (!m_instance) {\n m_instance = new MongoInstance;\n }\n }\n return m_instance;\n}\n", "text": "Ok i think i had some problem with the singleton class. I fixed it.\nAnd i added the try and catch block as per shown in the link attached.\nNow the error appears on lineammpedUPDB = conn.database(dbName);\nerror :Here’s the call stack if you need itFor reference here is the singleton class\nMongoInstanceManager.hMongoInstanceManager.cpp", "username": "Abhay_More" }, { "code": "const std::string_view dbName = \"StudentRecords\";\nconst std::string_view collName = \"StudentRecords\";\nconn = mongocxx::client(mongoURI, client_options);\ndatabase = conn.database(bsoncxx::string::view_or_value(dbName));\ncollection = database.collection(bsoncxx::string::view_or_value(collName));\n", "text": "For people stumbling across this.\nThe tutorial has be updated with the additional cmake flags for the c++ driver build\nwhich fixes the debug and release error. 
(Tutorial)\nFor the database connection error, make sure to use the std::string_view for the database string name\nand also use the appropriate constructor explicitly.Doing this fixed the errors i got.", "username": "Abhay_More" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error building MongoDB C++ driver application in Release mode
2023-06-01T21:30:04.820Z
Error building MongoDB C++ driver application in Release mode
1,383
https://www.mongodb.com/…1630261cd20f.png
[ "documentation" ]
[ { "code": "", "text": "G’day folks,Ever wondered if (or how) you can contribute to the MongoDB documentation?There are several paths for contributing feedback or even directly to MongoDB documentation via GitHub PRs. I’ve outlined these paths below in increasing order of effort for both yourself and the documentation team.The bottom right of most MongoDB documentation pages will have a “Share Feedback” button pinned near the bottom right of the visible page:The feedback widget is designed to capture quick directional feedback as well as more detailed suggestions.The first step of feedback asks about helpfulness:The second (and final) step prompts for a comment:The button to the left of “Send” allows you to take a screenshot, which can be useful if there is a problem with how the page is rendered in your browser.You can Report an issue or make a change request by creating a DOCS issue in the MongoDB Jira Issue Tracker.This path doesn’t capture as much context as the feedback widget embedded in the online documentation and may take longer for the team to review.A great DOCS issue should include enough details to be actionable such as:MongoDB server and driver documentation source is available on GitHub and you can contribute to the documentation by signing the contributor agreement and making a pull request on GitHub.With this path I recommend first creating a DOCS issue so there is context for why you are making a change as well as an issue for the documentation team to track via their Jira dashboards and workflow.Any significant changes may require some discussion to agree on approach, and there are expectations around writing, style, and process to help create a consistent high quality experience as per The MongoDB Documentation Project. The Practices and Processes section provides a good overview for contributors.We appreciate any feedback and contributions provided, including suggestions on how to improve those processes!Regards.\nStennie", "username": "system" }, { "code": "", "text": "Is it possible to get employed to review documentation, since we all want documentation??", "username": "David_Onoh" } ]
How to contribute to the MongoDB documentation
2023-01-10T00:43:43.219Z
How to contribute to the MongoDB documentation
1,708
https://www.mongodb.com/…_2_1024x388.jpeg
[ "sharding" ]
[ { "code": "", "text": "Hi all, I am new to MongoDB and I am trying to set up a Sharded MongoDB cluster. I Have configured the following clusters\nCluster 1 - 3 MongoDB Config Servers (1 Primary 2 replicas) ReplicaSets configured\nCluster 2 - 1 Mongos Node\nCluster 3 - 3 MongoDB servers (1 Primary 2 Replica) ReplicaSets ConfiguredSteps followed -The data is not getting distributed between the newly added shard.I read somewhere that chunk distribution happens when the chunk limit is reached. So I set up my chunk size as 64MB.I entered 1 Million more documents and still, the data remains the same.\n\nimage2554×968 163 KB\n", "username": "Dhruv_Kansal" }, { "code": "[direct: mongos] test> sh.status()\nshardingVersion\n{ _id: 1, clusterId: ObjectId(\"64ec6faa9a535bc69658da8e\") }\n---\nshards\n[\n {\n _id: 'shard1rs',\n host: 'shard1rs/host.docker.internal:50001,host.docker.internal:50002,host.docker.internal:50003',\n state: 1,\n topologyTime: Timestamp({ t: 1693216965, i: 3 })\n },\n {\n _id: 'shard2rs',\n host: 'shard2rs/host.docker.internal:50004,host.docker.internal:50005,host.docker.internal:50006',\n state: 1,\n topologyTime: Timestamp({ t: 1693217686, i: 1 })\n }\n]\n---\nactive mongoses\n[ { '7.0.0': 1 } ]\n---\nautosplit\n{ 'Currently enabled': 'yes' }\n---\nbalancer\n{ 'Currently enabled': 'yes', 'Currently running': 'no' }\n---\ndatabases\n[\n {\n database: { _id: 'config', primary: 'config', partitioned: true },\n collections: {\n 'config.system.sessions': {\n shardKey: { _id: 1 },\n unique: false,\n balancing: true,\n chunkMetadata: [ { shard: 'shard1rs', nChunks: 1024 } ],\n chunks: [\n 'too many chunks to print, use verbose if you want to force print'\n ],\n tags: []\n }\n }\n },\n {\n database: {\n _id: 'test',\n primary: 'shard1rs',\n partitioned: false,\n version: {\n uuid: new UUID(\"72d1d845-9d3a-4cf1-a690-6a626c253c14\"),\n timestamp: Timestamp({ t: 1693217049, i: 1 }),\n lastMod: 1\n }\n },\n collections: {\n 'test.test_collection': {\n shardKey: { client: 'hashed' },\n unique: false,\n balancing: true,\n chunkMetadata: [ { shard: 'shard1rs', nChunks: 1 } ],\n chunks: [\n { min: { client: MinKey() }, max: { client: MaxKey() }, 'on shard': 'shard1rs', 'last modified': Timestamp({ t: 1, i: 0 }) }\n ],\n tags: []\n }\n }\n }\n]\n\n", "text": "", "username": "Dhruv_Kansal" }, { "code": "128MB384MB", "text": "You’ll have to add a few more documents, then you will see some movement to the other shard.A collection is considered balanced if the difference in data between shards (for that collection) is less than three times the configured range size for the collection. For the default range size of 128MB, two shards must have a data size difference for a given collection of at least 384MB for a migration to occur.https://www.mongodb.com/docs/manual/core/sharding-balancer-administration/#migration-thresholds", "username": "chris" } ]
Unable to Distribute Data among newly added shards
2023-08-28T10:46:11.557Z
Unable to Distribute Data among newly added shards
410
https://www.mongodb.com/…2_2_1024x389.png
[]
[ { "code": "", "text": "After creating admin user and changing Authorization to enabled, mongodb service does not start with the message:\n\nimage1545×587 47.3 KB\nCan someone help?", "username": "Antonis_Apostolidis" }, { "code": "", "text": "Check your config file again\nDo not use tab but use space bar\nMost likely Yaml indentation issue", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I had already read and applied what you said, but no change…", "username": "Antonis_Apostolidis" }, { "code": "", "text": "Check two places for errors:", "username": "chris" } ]
Mongod 7 Windows service refuses to start after Authorization enabled
2023-08-25T18:05:51.985Z
Mongod 7 Windows service refuses to start after Authorization enabled
303
null
[ "queries", "flutter" ]
[ { "code": "", "text": "So here is the scenario,\nI initialize realm with flexiSync and sync the data, That works fine. Now new requirement is to get data from a collection which is huge around 50K objects. I cannot add the whole collection to flexiSync because it will take forever for initial sync.\nI tried Updating subscriptions with query with .Update and .add described in Managing subscriptions documentation. But it gets Stuck.\nHow I can Sync the data after once the flexiSync is done?\nUsing flutter SDK.", "username": "Shashank_mathur" }, { "code": "", "text": "What do you mean by “it gets stuck”? How do you observe this and are there any errors on the server?", "username": "nirinchev" } ]
Unable to add more subscriptions once the initial sync has already completed
2023-08-29T21:25:49.625Z
Unable to add more subscriptions once the initial sync has already completed
338
null
[ "aggregation", "dot-net" ]
[ { "code": ".Call System.Linq.Queryable.Take(\n .Call System.Linq.Queryable.Skip(\n .Call System.Linq.Queryable.OrderByDescending(\n .Call System.Linq.Queryable.Where(\n .Constant<MongoDB.Driver.Linq.Linq3Implementation.MongoQuery`2[Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log,Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log]>(edocs.edocs_padrao_Log.Aggregate([])),\n '(.Lambda #Lambda1<System.Func`2[Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log,System.Boolean]>)),\n '(.Lambda #Lambda2<System.Func`2[Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log,System.DateTimeOffset]>)),\n 40),\n 10)\n\n.Lambda #Lambda1<System.Func`2[Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log,System.Boolean]>(Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log $log)\n{\n .Call System.Linq.Enumerable.Contains(\n .Constant<Senior.SapiensNfe.DataAccess.DaoMongo.Sistema.LogDao+<>c__DisplayClass2_2>(Senior.SapiensNfe.DataAccess.DaoMongo.Sistema.LogDao+<>c__DisplayClass2_2).valores,\n ($log.Mensagem).TipoLog)\n}\n\n.Lambda #Lambda2<System.Func`2[Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log,System.DateTimeOffset]>(Senior.SapiensNfe.DataAccess.DadosMongo.Sistema.Log $var1)\n{\n ($var1.Mensagem).DataHora\n}\n", "text": "Hey guys,I’m having a problem where I’m setting up a Queryable and setting some conditions on it (with .Where) and at the end I apply an ordering in a DateTimeOffset field and perform a pagination of 10 in 10 items.The problem itself is that this guy’s result is “sorted” but it looks like it’s somehow grouped by the Where they were applied.For example:A, 1, 2023/08/29 12:00\nA, 1, 2023/08/29 12:01\nA, 1, 2023/08/29 12:02\nA, 1, 2023/08/29 12:03\nA, 2, 2023/08/29 12:07\nA, 2, 2023/08/29 12:07\nA, 2, 2023/08/29 12:08\nA, 1, 2023/08/29 12:04\nA, 1, 2023/08/29 12:05\nA, 2, 2023/08/29 12:10\nA, 2, 2023/08/29 12:11\nA, 2, 2023/08/29 12:11\nA, 1, 2023/08/29 12:06\nA, 1, 2023/08/29 12:09And so it goes through the entire query.Can you give me a light at the end of the tunnel for me?Here is the expression extracted via DebugPS: I am using Driver version 2.20.0", "username": "Lucas_Silva5" }, { "code": "", "text": "One new piece of information, I found that the Index in the collection has an impact on the problem, why I don’t know.I also updated to 2.21, something different happened, if I filter only one field and without any index, it works correctly, but if I change the ordering from desc to asc, it happens again", "username": "Lucas_Silva5" } ]
Problem with multiple .Where clauses in a query using OrderBy (C#)
2023-08-29T16:34:43.902Z
Problem with multiple .Where clauses in a query using OrderBy (C#)
351
null
[ "dot-net" ]
[ { "code": "", "text": "I’ve been playing around with atlas search and wrote a routine that tests a number of different methodologies for it’s use case. This has spawned a few questions that I either cannot find answers for or which have some documentation that I may not understand fullyWhat I am ultimately trying to solve with #2 is avoiding the multi-stage pipeline of .Search + .Match, and of course Match cannot go first in the pipeline.My use case is that I 100% know I need to filter based on moderately complex set of natural filters. Let’s say something like “belongs to this group, is not flagged for quarantine, i have access to it, etc” and then I want to take that subset and search based on user input which is where the power of the Atlas Search comes in. However I want to artificially limit the possible subset they are searching on based on a dynamic set of other field restrictions.FYI .Search + .Match works perfectly but it is slow, as your documentation indicates it would be. What sort of options do I have here other than finding a way to completely redo all my FilterDefinitions as a collection of .Must(something’s)? I also noticed not all of my filters would be supported via that style, though I think it would cover most.I hope these questions make sense and I would be happy to provide examples upon request, but it is somewhat theoretical at the moment.", "username": "Mark_Mann" }, { "code": "", "text": "Replying to my own post!I re-read the documentation and discovered that I can set returnStoredSource=true in combination with Stored Source Fields = ALL to match the .Search().Match().Sort() extremely fast and produce the same results. So that works at the cost of a larger index. I think that is the only downside. So this will work as a solution for me until our database reaches whatever size that I need to revisit this.However I do still have the following questions:It seems compound is basically adding “things” to my search beyond whatever text I am trying to search for. While I think this works and I probably could have converted my normal filters to a Compound.Must syntax, it is not really what I am looking for. What I am really looking for is to “pre-filter” my collection prior to doing the search.", "username": "Mark_Mann" }, { "code": "compoundcompoundautocompletefilter$matchfilter", "text": "Hi @Mark_Mann,I know it may not be ideal since you have mentioned the complexity of compound in your second question but have you considered doing it within a compound operator? More information / examples on the Search Across Multiple Fields using autocomplete documentation.Again, I do mention this in case it has not been investigated but would filter work for you? Theres an example at the start of the same documentation which demonstrates the $match stage being replaced with the use of filter.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "1.) Thank you. I will research this further2.) I build my filter statements dynamically, so I cannot say with certainty that all of them would work with the compound $filter, but I believe most will. The primary issue I have is that most of my filters are in the format of LINQ or FilterDefinition, so I would have to write some type of translation script to get them in the format the $filter for Search wants. 
I know this is not the end of the world, but was hoping search would take the other format for $filter.", "username": "Mark_Mann" }, { "code": "$match$search", "text": "The primary issue I have is that most of my filters are in the format of LINQ or FilterDefinition, so I would have to write some type of translation script to get them in the format the $filter for Search wants.Hey @Mark_Mann,Thanks for getting back to me regarding my previous responses. In response to another part of your 2nd question - to my knowledge and at the moment $match isn’t possible prior to the $search stage.I know this is not the end of the world, but was hoping search would take the other format for $filter.Just to clarify here, do you have a link to what “other format for $filter” (mentioned above) appears like? I just want to confirm for better understanding Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you for responding Jason. I made another specific thread for this piece of the question as well so this single thread does not get cluttered with my thoughts.I did read that Search must be the first stage in any pipeline. I also read the documentation of how any Filter/Sort should be included in the search stage or it will have significant performance implications. I was also able to confirm this via empirical testing.I am specifically looking to pass one of the following to the C# driver for search, and it does not seem possible. Thus I would have to create what I need in the BsonDocument format in order for it to be recognized and executed.FilterDefinition\nor\nc# linq: Expression<Func<T, bool>>", "username": "Mark_Mann" }, { "code": "filter()$match$sortfiltercompound$search", "text": "A builder for compound search definitions.Apologies as I am not too familiar with the C# driver for search but does either filter() method mentioned in the above link work for you? This is not the same as an additional $match or $sort stage in the pipeline but is the filter clause within the compound operator which works within the single $search stage.Thank you for responding Jason. I made another specific thread for this piece of the question as well so this single thread does not get cluttered with my thoughts.Thanks for raising this thread. I believe a colleague of mine will respond to you here Regards,\nJason", "username": "Jason_Tran" }, { "code": "\"filter\" :[\n {\n \"in\":{\n \"path\":\"CreatedByUserID\",\n \"value\":[ObjectID('A'),\n ObjectID('B')]\n }\n },\n {\n \"in\":{\n \"path\":\"CreatedByCompanyID\",\n \"value\":[ObjectID('C'),\n ObjectID('D')]\n }\n }\n ]\n{\n \"compound\": {\n \"minimumShouldMatch\": 1,\n \"should\": [\n {\n \"regex\": {\n \"allowAnalyzedField\": true,\n \"path\": \"Fields.Value\",\n \"query\": [\n \".*0001.*\"\n ]\n }\n }\n ]\n },\n \"highlight\": {\n \"path\": [\n \"Fields.Value\"\n ]\n },\n \"index\": \"TextSearch\",\n \"returnStoredSource\": true,\n \"sort\": {\n \"Created\": -1\n }\n }\n", "text": "Jason,I am starting to notice the performance difference between Search + Match and using “Filter”(or MUST/MUST NOT, knowing the affect the score).I was able to get some basic concepts to work, but I still cannot figure out how to make our situation work. What I am really looking for, all code and semantics aside, is a “pre-filter” prior to sorting. I could then make it my own responsibility to properly index whatever I am pre-filter on, then only use the text search for those records.However I understand I cannot do that now(yet?) 
so I have tried to accomplish my task with the tools available to me.I am struggling with an “OR” clause against two different fields when attempt to MATCH within a Search pipeline stage.Example below.Ignore the fact that it is compound with only one “Should”, I removed a bunch of them for readability and they do not impact this conversation. They only serve to use the mongo search to match other data against other fields. This works just fine and this particular discussion is not about the accuracy or style of the Mongo Atlas Search.As I’ve mentioned before, my “dream world” would be allow for me to pass my existing FilterDefinition to the Atlas Search routine somehow, which I have already made because prior to implementing atlas search I would take my FilterDefintion and add a “$regex” match to it. However since I cannot do that, I am trying to convert my FilterDefintion to what Atlas Search wants.Let’s say my filter is something fairly basic like:\nCreatedByUserID == myUserID OR CreatedByCompanyID == myCompanyID. I cannot figure out how to do this.I tried something like the below, but obviously it only works if BOTH are true. I also experimented with “QueryString”, which was less effective and only allowed for OR within a single Field. How would I go about something very basic like this? I don’t see any way to put and OR clause between two filters or put two “paths” in a single filter(it claims it must be a string not an array).My filter has a few other components to it, but the concept is similar. I am trying to take a giant collection and limit the users search to things in their “bucket”. The “bucket” scope can vary depending on the intentions and thus I build my “FilterDefintion” accordinglyBasic Pipeline:", "username": "Mark_Mann" }, { "code": "ORfiltercompoundfilter", "text": "Hey Mark,I am struggling with an “OR” clause against two different fields when attempt to MATCH within a Search pipeline stage.If you could provide me with the following to try simplify my understanding of the scenario - hopefully I can try work something out that suits your use case.I have some idea of what you’re after and the issue of using the filter with both of the conditions but I think with some sample documents I can try recreate something at least for troubleshooting.Please redact any personal or sensitive information before posting here. 
Feel free to DM me the sample documents if you believe they are too large to post here.Wondering if even a nested compound might work here… Is this something you tried with regards to the filter?Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "filterOR{\n\t\"name\": \"jason\",\n\t\"company_id\": 1 /// <--- Doc to be returned\n},\n{\n\t\"name\": \"test\", /// <--- Doc to be returned\n\t\"company_id\": 2\n},\n{\n\t\"name\": \"jason\",\n\t\"company_id\": 2\n},\n{\n\t\"name\": \"jason\",\n\t\"company_id\": 3\n},\n{\n\t\"name\": \"test\", /// <--- Doc to be returned\n\t\"company_id\": 1\n}\nfilter{\"name\":\"test\"}{\"company_id\":1}db.search.aggregate({\n\t$search: {\n\t\tcompound: {\n\t\t\tshould: [{\n\t\t\t\tcompound: {\n\t\t\t\t\tfilter: {\n\t\t\t\t\t\ttext: {\n\t\t\t\t\t\t\tpath: \"name\",\n\t\t\t\t\t\t\tquery: \"test\"\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t},\n\t\t\t{\n\t\t\t\tcompound: {\n\t\t\t\t\tfilter: {\n\t\t\t\t\t\tequals: {\n\t\t\t\t\t\t\tpath: \"company_id\",\n\t\t\t\t\t\t\tvalue: 1\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}]\n\t\t}\n\t}\n})\n[\n {\n _id: ObjectId(\"64ed469c820dce241360b7ac\"),\n name: 'jason',\n company_id: 1\n },\n {\n _id: ObjectId(\"64ed469c820dce241360b7ad\"),\n name: 'test',\n company_id: 2\n },\n {\n _id: ObjectId(\"64ed469c820dce241360b7b0\"),\n name: 'test',\n company_id: 1\n }\n]\nfilterOR", "text": "Mark - I created a very basic example using the following sample documents to maybe try understand what you’re after regarding filter and use of an OR boolean.Let’s say we have these 5 test documents:I want to filter for documents with - {\"name\":\"test\"} OR {\"company_id\":1}. I do so using the following:This returns the 3 documents:Note: I’m using the default index definition in the above testsWondering if this is something you were after with specific regards to filter and OR?If not, i’ll await for the specifics regarding my previous reply.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "\n", "text": "Jason,I think you are on to something here with nested compounds(did not know I could do that).What I have now is:\nCompound\n-Filter\n–Compound\n—Must(things that are mandatory here, like status = good status\n—Should(min should match of 1, essentially making this an OR)\n----My checks here: made by companyid made by userid, granted read access to user, public flag == true, etc\n-Should(this is my generic “regex” text search, min match of 1)\n–search properties A/B/C for the text stringSo far so good. It was a but confusing at first since when I added the above Filter it did not work until I specifically created a field mapping for the involved fields(example below). I am not sure if this is specifically because I was comparing object IDs or I would need to do this always…My hunch is that it was because they were ObjectID’s and that is how I was using the “in” operator.Anyway, it seems like progress has been made. Now I need to go through my entire logic for where I build a FilterDefintion and write a matching function to build an above compound filter BSON. 
Not terrible, but less than ideal.Feature Request at least for the C# driver: A way to pass \"FilterDefinition into the Search Operator to filter in addition to the desired text search or just a better way to pre-filter prior to search in general{\n“mappings”: {\n“dynamic”: true,\n“fields”: {\n“CreatedByCompany”: {\n“fields”: {\n“CompanyID”: {\n“type”: “objectId”\n},", "username": "Mark_Mann" }, { "code": "CompanyIDCreatedByCompany", "text": "Glad to hear progress has been made So far so good. It was a but confusing at first since when I added the above Filter it did not work until I specifically created a field mapping for the involved fields(example below).Not 100% sure since I am unaware of the index definition before you made the changes but its possible that it is due to CompanyID existing within the CreatedByCompany field. However, I am not certain since I am not sure of what the index definition was beforehand. Perhaps the Static and Dynamic mappings documentation might be of use in this case.Feature Request at least for the C# driver: A way to pass \"FilterDefinition into the Search Operator to filter in addition to the desired text search or just a better way to pre-filter prior to search in generalYou could raise this as a feedback request here.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Jason,It was due to me doing an equals/in with ObjectID without specifying the field mapping as an ObjectID.I actually think I have this working, but I did have to completely rewrite my logic that builds a dynamic LINQ statement to instead build a dynamic BsonDocument matching the specific format required for Atlas Search. This is unfortunate as it is somewhat hardcoded and I just lost a huge benefit of the C# driver, which is using linq to create a “more natural” filter…at least more natural in the sense of matching C# code.I’ll take you up on that feedback request.", "username": "Mark_Mann" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
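Pulling the thread together, a condensed sketch of the shape Mark describes: a filter clause restricts the candidate documents (the user's "bucket") with an OR across two ObjectId fields, while the outer should carries the text search. The index name and paths are the ones used earlier in the thread; the ObjectId values are placeholders:

```javascript
db.collection.aggregate([
  {
    $search: {
      index: "TextSearch",
      compound: {
        filter: [
          {
            compound: {
              minimumShouldMatch: 1,
              should: [
                { in: { path: "CreatedByUserID",    value: [ObjectId("64ed469c820dce241360b7ac")] } },
                { in: { path: "CreatedByCompanyID", value: [ObjectId("64ed469c820dce241360b7ad")] } }
              ]
            }
          }
        ],
        minimumShouldMatch: 1,
        should: [
          { regex: { path: "Fields.Value", query: ".*0001.*", allowAnalyzedField: true } }
        ]
      }
    }
  }
]);
// Note: using "in" (or "equals") against ObjectId paths requires those fields
// to be mapped as type "objectId" in the index definition - the mapping issue
// Mark hit above.
```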
C# Atlas Search: General Questions
2023-08-10T18:05:32.864Z
C# Atlas Search: General Questions
973
null
[ "aggregation" ]
[ { "code": "", "text": "So, I’d like to store permanent collection queries in another collection to later populate these with $lookup. Unfortunately, when I try to $match $$ROOT it fails no matter how I try to work this around.Here’s what I expected to work, in a nutshell: Mongo playgroundI ended up with $function to match the documents JS-way but I’m still interested if there’s a way to make the use of $match while storing the query in an object. Maybe there’s an option to “unpack” the variable into an object?", "username": "Alex_Smith2" }, { "code": "", "text": "Ended up using $switch with multiple branches that cover my queries that sets a flag and then matching by this flag. This approach has an issue that I have to cover all possible queries either by a huge tree or by a series of match cases. Still wondering if there’s a better approach to this issue.\nTake a look at the current result: Mongo playground", "username": "Alex_Smith2" } ]
$lookup with $match inside pipeline
2023-08-27T05:55:26.011Z
$lookup with $match inside pipeline
344
null
[ "node-js", "crud" ]
[ { "code": "const quoteRanking = await quoteRankingSchema.findOne({});\n\n if (!quoteRanking) {\n return await new quoteRankingSchema({\n guilds: [\n { \n id: guild.id,\n quotes: [\n { \n messageId: msg.id,\n stars: 0 \n }\n ]\n }\n ]\n }).save();\n };\n\n await quoteRankingSchema.findOneAndUpdate(\n {\n 'guilds.id': guild.id\n },\n {\n $push: {\n 'guilds.quotes': { 'messageId': msg.id, 'stars': 0 }\n }\n } \n );\n\n/*\nquoteRankingSchema: {\n guilds: [\n { \n id: '12345',\n quotes: [\n {\n messageId: '12345',\n stars: 0\n },\n ...\n ]\n },\n ...\n ]\n}\n*/\n", "text": "So I have this code hereAnd I get this err: “err: MongoServerError: Plan executor error during findAndModify :: caused by :: Cannot create field ‘quotes’ in element {guilds: [ { id: “1066352952983433317”, quotes: [ { messageId: “1146123009380331640”, stars: 0 } ] } ]}”.I searched on google how to do it an in every example they do the same thing, but mine ain’t working", "username": "Klyde_eee" }, { "code": "", "text": "Does the query run if you run it directly from a shell as opposed to via Mongoose?", "username": "John_Sewell" }, { "code": "", "text": "How? I don’t know how to do that", "username": "Klyde_eee" }, { "code": "use myDataBaseName\ndb.getCollection('myCollectionName').findOneAndUpdate(\n {\n 'guilds.id': guild.id\n },\n {\n $push: {\n 'guilds.quotes': { 'messageId': msg.id, 'stars': 0 }\n }\n } \n );\n\n", "text": "If you open a shall prompt onto the sever (mongosh, or compass where you can pop up a shell from the bottom of the screen) or use a 3rd party tool like studio3T you can run a script on its own.So if you connect it’ll be something like this:When looking into issues like this I always find it easier to isolate moving parts, in your case you’re executing the query through a wrapper, so try and get the query working first and then put it into the wrapper and test again.Out of interest I did try and put the query straight into a shell and it seemed to work. Can you show the complete document that you’re trying to update based on the ID in your post?", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't use $push
2023-08-29T17:12:06.262Z
Can&rsquo;t use $push
341
null
[ "java", "kafka-connector" ]
[ { "code": "testDBinit{\n \"name\": \"mongo-source-connector\",\n \"config\": {\n \"connector.class\": \"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"connection.uri\": \"mongodb://mongo:27017\",\n \"database\": \"testDB\",\n \"collection\": \"init\"\n }\n}\n2023-07-05 18:43:27 [2023-07-05 13:13:27,026] ERROR WorkerSourceTask{id=mongo-source-connector-0} Task threw an uncaught and unrecoverable exception. Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask)\n2023-07-05 18:43:27 org.apache.kafka.connect.errors.ConnectException: Unexpected error: null\n2023-07-05 18:43:27 at com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:597)\n2023-07-05 18:43:27 at com.mongodb.kafka.connect.source.StartedMongoSourceTask.pollInternal(StartedMongoSourceTask.java:211)\n2023-07-05 18:43:27 at com.mongodb.kafka.connect.source.StartedMongoSourceTask.poll(StartedMongoSourceTask.java:188)\n2023-07-05 18:43:27 at com.mongodb.kafka.connect.source.MongoSourceTask.poll(MongoSourceTask.java:173)\n2023-07-05 18:43:27 at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.poll(AbstractWorkerSourceTask.java:462)\n2023-07-05 18:43:27 at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.execute(AbstractWorkerSourceTask.java:351)\n2023-07-05 18:43:27 at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:202)\n2023-07-05 18:43:27 at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:257)\n2023-07-05 18:43:27 at org.apache.kafka.connect.runtime.AbstractWorkerSourceTask.run(AbstractWorkerSourceTask.java:75)\n2023-07-05 18:43:27 at org.apache.kafka.connect.runtime.isolation.Plugins.lambda$withClassLoader$1(Plugins.java:181)\n2023-07-05 18:43:27 at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)\n2023-07-05 18:43:27 at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)\n2023-07-05 18:43:27 at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n2023-07-05 18:43:27 at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n2023-07-05 18:43:27 at java.base/java.lang.Thread.run(Thread.java:829)\n2023-07-05 18:43:27 Caused by: java.lang.NullPointerException\n2023-07-05 18:43:27 at com.mongodb.kafka.connect.source.StartedMongoSourceTask.getNextBatch(StartedMongoSourceTask.java:579)\n2023-07-05 18:43:27 ... 
14 more\nconfluentinc/cp-zookeeperlatestconfluentinc/cp-kafkalatestconfluentinc/cp-schema-registrylatestconfluentinc/cp-kafka-connectlatestmongolatestmongodb/kafka-connect-mongodb1.10.1", "text": "I am trying to use the MongoDB Source connector -I have created the DB with name testDB & a collection named initFollowing is the connector config -I have tried using multiple different properties which are mentioned in MongoDB connector doc but always got some error.Following is the error -Following is the version details - confluentinc/cp-zookeeper: latestconfluentinc/cp-kafka: latestconfluentinc/cp-schema-registry: latestconfluentinc/cp-kafka-connect:latestmongo: latestPlugin:mongodb/kafka-connect-mongodb: 1.10.1Just a note that the sink connector works fine", "username": "Siya_Sosibo" }, { "code": "", "text": "Can you try this with the latest version of the connector ?A null pointer issue was identified https://jira.mongodb.org/browse/KAFKA-383 and it was addressed.Note, the confluent hub is currently being updated so it might still show 1.10.1 as the latest, if it does you can grab the latest from https://github.com/mongodb/mongo-kafka/releases/tag/r.11.0", "username": "Robert_Walters" }, { "code": "If the resume token is no longer available then there is the potential for data loss.\nSaved resume tokens are managed by Kafka and stored with the offset data.\n\nTo restart the change stream with no resume token either: \n * Create a new partition name using the `offset.partition.name` configuration.\n * Set `errors.tolerance=all` and ignore the erroring resume token. \n * Manually remove the old offset from its configured storage.\n\nResetting the offset will allow for the connector to be resume from the latest resume\ntoken. Using `startup.mode = copy_existing` ensures that all data will be outputted by the\nconnector but it will duplicate existing data.\n=====================================================================================\n (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,471] INFO Unable to recreate the cursor (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,477] INFO Watching for collection changes on 'avd.vehicles' (com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,478] INFO New change stream cursor created without offset. 
(com.mongodb.kafka.connect.source.MongoSourceTask)\n[2023-08-24 08:38:18,480] WARN Failed to resume change stream: The $changeStream stage is only supported on replica sets 40573\n\n", "text": "@Robert_Walters I have updated to version 1.11.0 and I’m no longer getting the nullpointer but I notice an error “Unable to recreate the cursor” this error keeps getting printed in a continuous look in the logs non-stop", "username": "Siya_Sosibo" }, { "code": "", "text": "Are you using MongoDB or a third party MongoDB API like CosmosDB, DocumentDB, etc?If MongoDB, do you have it running as a replica set or a single stand alone instance?", "username": "Robert_Walters" }, { "code": " documentdb:\n platform: ${PLATFORM:-linux/amd64}\n image: mongo:5.0.15\n restart: \"unless-stopped\"\n ports:\n - '27017:27017'\n environment:\n MONGO_INITDB_ROOT_USERNAME: <username>\n MONGO_INITDB_ROOT_PASSWORD: <password>\n healthcheck:\n test: echo 'db.runCommand(\"ping\").ok' | mongo localhost:27017/productiondb --quiet\n interval: 10s\n timeout: 10s\n retries: 5\n start_period: 40s\n", "text": "@Robert_Walters Locally I’m running mongo:5.0.15 as a Docker container, should be running as a single stand alone instanceBelow is the snippet of our docker-compose.yaml file", "username": "Siya_Sosibo" }, { "code": "command: --replSet rs0 _id: \"rs0\",\n members: [{ _id: 0, host: \"mongo1:27017\", priority: 1.0 }],\n};\nrs.initiate(rsconf);\nrs.status();\n", "text": "that is why it doesn’t work, you need to run Mongodb as a replica set because single node MongoDBs do not have change streams. When you connect from the MongoDB Connector for Apache Kafka it opens a change stream on the collection you specify.if you are using this just as a test scenario you can run a single node replica set justin your dockerfile under documentdb: (odd name) add this\ncommand: --replSet rs0 then once it is up, connect and run this script", "username": "Robert_Walters" }, { "code": "", "text": "@Robert_Walters thank you, will try thisThe service name is documentDB because we running AWS DocumentDB in Production, will this be an issue?", "username": "Siya_Sosibo" }, { "code": "", "text": "No idea, DocumentDB is not MongoDB. Why not use MongoDB Atlas? It is available in the AWS Marketplace and you can use VPC Peering just like with DocumentDB.", "username": "Robert_Walters" } ]
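For completeness, the replica-set initialisation referenced above with its truncated opening line restored; run it in mongosh after starting the container with command: --replSet rs0 (the host name is the one from Robert's example):

```javascript
var rsconf = {
  _id: "rs0",
  members: [ { _id: 0, host: "mongo1:27017", priority: 1.0 } ]
};
rs.initiate(rsconf);
rs.status();
```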
MongoDB Kafka Source Connector Nullpointer
2023-08-24T09:14:23.276Z
MongoDB Kafka Source Connector Nullpointer
535
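The thread above resolves the "$changeStream stage is only supported on replica sets" error by running the container as a single-node replica set. To make the dependency concrete, here is a minimal sketch of the change stream the Kafka source connector opens under the hood, written with the MongoDB Node.js driver; the connection string, database and collection names are placeholders taken from the thread, not verified configuration.

// Open a change stream on the same namespace the source connector watches.
// This only works against a replica set (a single-node one is enough), which is
// why the standalone container had to be restarted with --replSet rs0.
const { MongoClient } = require('mongodb');

async function watchVehicles() {
  const client = new MongoClient('mongodb://localhost:27017/?replicaSet=rs0');
  await client.connect();
  const changeStream = client.db('avd').collection('vehicles').watch();
  changeStream.on('change', (event) => {
    console.log('change event:', event.operationType, event.documentKey);
  });
}

watchVehicles().catch(console.error);

If this script fails with the same "only supported on replica sets" error, the connector will fail too, which makes it a quick local check before redeploying Kafka Connect.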
null
[ "atlas-cluster", "serverless" ]
[ { "code": "mongodb+srv://<AWS access key>:<AWS secret key>@devcluster.6k6ngpc.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:<session token (for AWS IAM Roles)>MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelist", "text": "I have an AWS API Gateway, it’s a node serverless api, i have set the MONGO_URL like this:mongodb+srv://<AWS access key>:<AWS secret key>@devcluster.6k6ngpc.mongodb.net/?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority&authMechanismProperties=AWS_SESSION_TOKEN:<session token (for AWS IAM Roles)>However, I get this error when trying to access it from the api:MongooseServerSelectionError: Could not connect to any servers in your MongoDB Atlas cluster. One common reason is that you're trying to access the database from an IP that isn't whitelisted. Make sure your current IP address is on your Atlas cluster's IP whitelistIs there a way to bypass this without allowing access from anywhere?", "username": "Rafael_Tessarollo" }, { "code": "", "text": "Hi there and welcome to the MongoDB forums!One thing I might consider is utilizing Elastic IP addresses to create a static outbound IP address for your lambda functions. You will then be able to allowlist only these IP addresses for your Atlas project.Another option is utilizing Private Endpoints in Atlas, which will result in network traffic transiting only within AWS’s network and therefore not require you to allowlist all addresses.", "username": "Charlie_Xu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't access my private database from API Gateway
2023-08-29T17:19:20.824Z
Can't access my private database from API Gateway

405
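For readers following the thread above, here is a hedged sketch of the same MONGODB-AWS connection expressed with the Node.js driver inside a Lambda handler. The cluster host is the placeholder from the question; the credentials are assumed to come from the Lambda execution role's environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN), which, as far as I know, the driver can pick up when they are not embedded in the URI. Network access still has to be allowed separately (static egress IP or a private endpoint, as suggested in the reply).

// Minimal sketch: connect from a Lambda handler using IAM (MONGODB-AWS) auth.
const { MongoClient } = require('mongodb');

const uri = 'mongodb+srv://devcluster.6k6ngpc.mongodb.net/' +
  '?authSource=%24external&authMechanism=MONGODB-AWS&retryWrites=true&w=majority';

const client = new MongoClient(uri); // reuse across invocations to avoid reconnect cost

exports.handler = async () => {
  await client.connect();
  // 'test' / 'items' are illustrative names, not taken from the thread
  const doc = await client.db('test').collection('items').findOne({});
  return { statusCode: 200, body: JSON.stringify(doc) };
};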
https://www.mongodb.com/…2_2_1024x576.png
[ "serverless", "lebanon-mug", "conference", "mug-virtual-mena" ]
[ { "code": "\nMeeting ID: 988 6019 4213\n\nPasscode: 059606\n\nSoftware Engineering Manager, Chain Reaction | MongoDB Champion Engineering Manager, TECHlarious | Founder & CEO. Two Of Us L.L.C | MongoDB Champion", "text": "MUG-MENA1920×1080 213 KBJoin us for our first virtual MongoDB meetup tailored for the Middle East and North Africa region in Arabic! Get ready to dive into the intricacies of MongoDB and supercharge your skills in building event-driven applications and learning all about the best practices for optimizing indexing and query performance. Session 1: Crafting Event-Driven Applications with MongoDB Atlas\n@eliehannouch, our MongoDB champion, will guide you through building event-driven serverless applications using MongoDB Atlas. Discover the power of event-driven architecture and learn how to leverage AWS Lambda Functions, MongoDB Triggers, and MongoDB Change Streams to create scalable and responsive applications. Session 2: Best Practices - Indexing and Query Performance\nGain from the expertise of MongoDB Champion @MalakMSAH as she delves into proven techniques and practices for optimizing indexing and query performance, ensuring your applications run at peak efficiency.Interactive Wrap-up: Challenges, Trivia, and Networking\nChallenge yourself with stimulating problems, test your knowledge in a lively trivia session, and expand your network through engaging interactions.Event Type: Online Join Zoom Meeting (passcode is encoded in the link) Find your local number (phone dial-in)Software Engineering Manager, Chain Reaction | MongoDB Champion Engineering Manager, TECHlarious | Founder & CEO. Two Of Us L.L.C | MongoDB Champion", "username": "Harshit" }, { "code": "", "text": "Hey Everyone!\nGentle Reminder, We will be starting in 30 mins, join us on the zoom link below:Hope to see most of you join the event!", "username": "Harshit" }, { "code": "", "text": "is this recorded? if so where can I watch the recording?", "username": "Mohammed_Alhila" }, { "code": "", "text": "Hello everyone,Thank you for attending the event. If you were unable to attend or would like to revisit something, you can find the recording attached here.We are currently editing and will publish the event on YouTube at a later time. In the meantime, please use the link below to access the recording of the event.Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars across mobile, desktop, and room systems. Zoom Rooms is the original software-based conference...Passcode: UM+B5?*q", "username": "Harshit" } ]
MENA vMUG: (in Arabic): Building Event Driven Applications, Indexing and Query Performance
2023-08-03T17:29:27.053Z
MENA vMUG: (in Arabic): Building Event Driven Applications, Indexing and Query Performance
2,017
null
[ "queries" ]
[ { "code": "", "text": "Hello MongoDB Community and Treehouse members,I’ve recently been working on the website for my massage service in Busan called “부산출장마사지” which you can visit here.Our goal is to provide the best in-house massage services to the residents and tourists in Busan, and we’re trying to enhance the user experience on our website. We’ve chosen MongoDB to handle our database needs, especially to store customer reviews, bookings, and therapist details.I’m reaching out to the community for advice and tips on:If anyone has prior experience with similar businesses or websites and has leveraged MongoDB for the same, I’d love to hear your insights and suggestions.Thank you in advance for all your help!", "username": "Ali_Hayyan" }, { "code": "", "text": "Surprisingly, this used to be a crazy idea to ponder quite while after brainstorms. It’s also very awkward to disregard this as a proper [most pronounced] regular session to keep dataframe sessions benchmarked Here’s a piece of advice from our friends at twilio for broader suggestions… Hope it helps: https://www.twilio.com/blog/automatically-trigger-twilio-sms-mongodb-mongodb-functions/", "username": "David_Onoh" } ]
Looking for MongoDB Implementation Tips for my Massage Service Website
2023-08-29T08:47:44.919Z
Looking for MongoDB Implementation Tips for my Massage Service Website
367
null
[]
[ { "code": "", "text": "Hi!\nI have an HTTP Endpoint in mongo Atlas App service name: /movies which is calling a function (moviesfunc) work on a database.\nI have Set API Key authentication.\nI am trying to add function authentication for type User Id.\nIn Function settings I am trying to add Authorization function to match the API Key of the user.\nI am trying to run the HTTP request in Postman.\nI am selecting authentication type as API Key and setting the name and API key same as used in mongo Atlas.\nthe request gives error {\n“error”: “rule not matched for function \"moviefunc\"”,\n“error_code”: “FunctionExecutionError”,} or {\n“error”: “cannot compare to undefined”,\n“error_code”: “FunctionExecutionError”}I have tried different Json Expresssions in Function authentication settings like this (Json Expression)\n{\n“$and”: [\n{ “%%request.authentication.username”: “%%user.name” },\n{ “%%request.authentication.password”: “%%user.id” }\n]\n}\nBut I failed. how can I add function level authentication using API Key.\nOr authentication using User Id method.", "username": "Bisma_Nazir" }, { "code": "rule not matched for function", "text": "Are you sure you are using the API Key provider from App Services -This is separate from the Atlas API Key.rule not matched for function is kind of a poorly worded error, this is actually referring to the can evaluate field on a function, its not referring to rules + roles + filters https://www.mongodb.com/docs/atlas/app-services/functions/#specify-an-authorization-expression", "username": "Ian_Ward" }, { "code": "", "text": "HI!\nI checked that and resolved the issue, i was making wrong function to fetch the query.\nthank you for your reply", "username": "Bisma_Nazir" } ]
HTTP Endpoints and Function Authentication in Mongo Atlas App Service
2023-06-09T08:39:44.680Z
HTTP Endpoints and Function Authentication in Mongo Atlas App Service
566
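As the accepted answer points out, the "rule not matched for function" error in this thread comes from the function's Can Evaluate / authorization expression, not from collection rules. A minimal sketch of the two pieces is below; the user id value is a placeholder to be replaced with a real App Services user id, and the function body is only illustrative.

// Authorization (Can Evaluate) expression set on the function in the App Services UI,
// allowing only one specific authenticated user to call it:
//   { "%%user.id": "64a0...your-user-id" }

// Inside the App Services function itself, the authenticated caller is available too:
exports = function (payload) {
  const caller = context.user;            // populated by the API Key / Custom JWT provider
  console.log('called by', caller.id);
  return { ok: true, userId: caller.id };
};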
null
[ "swift" ]
[ { "code": "", "text": "Working with a legacy project that still has a big mix of ObjC and Swift. Using RealmSwift, how should you access your Realm instance from ObjC? Should you create a separate RLMRealm instance with an identical configuration? Is there a bridge between a Swift Realm struct and an ObjC RLMRealm instance?", "username": "Brian_Grimal" }, { "code": "", "text": "@Brian_Grimal Welcome to the forums!If the project has both ObjC as well as Swift code, the Realm file itself doesn’t care and is platform agnostic so why try to access Realm using legacy ObjC calls? How about accessing it using Swift?", "username": "Jay" }, { "code": " @objc func rlmRealm() -> RLMRealm? {\n guard let realm = self.realm else { return nil }\n return ObjectiveCSupport.convert(object: realm)\n }\n\n", "text": "More of a thought experiment than a code question. So yes, writing a Swift wrapper for the legacy ObjC classes is easy enough, but I did find you can do something like this too. Probably not the smartest thing, but it would probably work.Swift vs. ObjC properties need a little attention too (e.g. List vs NSArray, Double vs NSNumber). Still figuring out best practices here.", "username": "Brian_Grimal" }, { "code": "ListList ListDoubleDoubleDouble@Persisted var doubleName: Double@objc dynamic var doubleName: Double = 0.0@Persisted var listName: RealmSwift.List<SomeRealmObjectTypeOrPrimative>let someList = List<Type>()", "text": "Let me attempt to add some clarity but it’s been a while since I’ve used ObjC so anyone can feel free to correct me.If you write a Realm app in pure ObjC, and then write a Realm App in Swift, both apps can access the same underlying Realm Objects and data. Just like your Android app can access the same data using Android Objects.If your project is already both, you may not need to write a wrapper for ObjC classes - they will work side by side with Swift Classes as the underlying data is the same.Swift vs. ObjC properties need a little attention too (e.g. List vs NSArray, Double vs NSNumber). Still figuring out best practices here.Hmm. Those data types do not tie to Realm objects in that way. For example, there is no List vs NSArray. NSArray is purely an ObjC construct and unrelated to Realm. An NSArray more corresponds to Swift Array. In Realm, we’ve always had a List object.A RealmSwift List is tied to an Realm ObjC List, and a Double is well… a Double, with the note that a optional Double is a special case.The datatypes can be be compared here Supported Types and there’s a tab so you can look between the Swift and ObjC Pre 10.10 types.I other words in Swift a Double property is managed like this:@Persisted var doubleName: Doublein ObjC (pre 10.0) it’s this@objc dynamic var doubleName: Double = 0.0likewise, RealmSwift List is this:@Persisted var listName: RealmSwift.List<SomeRealmObjectTypeOrPrimative>and in Realm ObjC:let someList = List<Type>()Feel free to chime in if I stated anything incorrectly.", "username": "Jay" }, { "code": "@objc class SomeObject: Object {\n @Persisted(primaryKey: true) var _id: ObjectId?\n\n @Persisted @objc var aBoolean: Bool\n @Persisted var anOptionalBoolean: Bool?\n\n @Persisted @objc var aDouble: Double\n @Persisted var anOptionalDouble: Double?\n\n @Persisted @objc aString: String?\n\n @Persisted var aListOfStrings: List<String>\n}\n\n@objc extension SomeObject {\n @objc var objC_anOptionalBoolean: ObjCBool {\n get { return ObjCBool(self.anOptionalBoolean ?? 
false) }\n set { self.anOptionalBoolean = newValue.boolValue }\n }\n @objc var objC_anOptionalDouble: NSNumber {\n get { return NSNumber(floatLiteral: self.anOptionalDouble ?? 0.0) }\n set { self.anOptionalDouble = newValue.doubleValue }\n }\n @objc var objC_aListOfStrings: NSArray {\n get {\n let ret = NSMutableArray()\n for idx in 0..<self.aListOfStrings.count { ret[idx] = self.details[idx] }\n return ret as NSArray\n }\n set {\n self.aListOfStrings = List<String>()\n for item in newValue { self.aListOfStrings.append(item as! String) }\n }\n }\n}\nSomeObject *yourObj = [SomeObject new]; \n\nyourObj.aBoolean = true;\nbool aBoolean = yourObj.aBoolean;\n\nyourObj.objC_anOptionalBoolean = true;\nbool anOptionalBoolean = yourObj.objC_anOptionalBoolean;\n\nyourObj.aDouble = 1.23;\ndouble aDouble = yourObj.aDouble;\n\nyourObj.objC_anOptionalDouble = 1.23;\ndouble anOptionalDouble = yourObj.objC_anOptionalDouble;\n\nNSString *aString = yourObj.aString;\nyourObj.objC_aListOfStrings = @[@\"Hello\", @\"World\"];\n\nNSArray<NSString *> *aListOfStrings = yourObj.objC_aListOfStrings;\n", "text": "Thanks for the link to the supported data types, that helps.If I export the model as a Swift class, there’s by default nothing exposed to ObjC. You can annotate the class and some of the properties with @objc, except for certain optionals and lists that can’t be represented. For those it looks like you would have to create your own setters and getters. Is that correct? Something perhaps like this, or is there a better way?In objC, you can then do:", "username": "Brian_Grimal" }, { "code": "@objc class SomeObject: Object {\n @Persisted(primaryKey: true) var _id: ObjectId?\n @Persisted @objc var aBoolean: Bool\n @Persisted var anOptionalBoolean: Bool?\n @Persisted @objc var aDouble: Double\n @Persisted var anOptionalDouble: Double?\n @Persisted var aListOfStrings: List<String>\n}\nlet x = SomeObject()\nx.aBoolean = true\nx.anOptionalBoolean = true\nx.aDouble = 3.14\nx.anOptionalDouble = 3.14\nx.aListOfStrings.append(objectsIn: [\"a\", \"b\", \"c\"])\n\ntry! realm.write {\n realm.add(x)\n}\nlet myObject = realm.objects(SomeObject.self).first! //assuming it was written successfully\nprint(myObject.aBoolean)\nprint(myObject.anOptionalBoolean)\nprint(myObject.aDouble)\nprint(myObject.anOptionalDouble)\nprint(myObject.aListOfStrings)\ntrue\nOptional(true)\n3.14\nOptional(3.14)\nList<string> <0x600002f0de40> (\n\t[0] a,\n\t[1] b,\n\t[2] c\n)\n@Persisted @objc aString: String?class SomeObject: Object {\n @Persisted(primaryKey: true) var _id: ObjectId?\n @Persisted var aBoolean: Bool\n @Persisted var anOptionalBoolean: Bool?\n @Persisted var aDouble: Double\n @Persisted var anOptionalDouble: Double?\n @Persisted var aListOfStrings: List<String>\n}\n", "text": "I am not clear on the use case of that code, but my gut feeling is it’s more complicated that it needs to be.Given a SomeObject model which contains a mix of ObjC and SwiftHere’s how it’s populated and writtenand then to read and work with that objectand the outputAs you can see, you don’t need a wrapper, an extension, or really anything else to work with the properties of the object.Note I removed @Persisted @objc aString: String? as it’s not valid without a ‘var’So now the important bit. 
If the @ObjC object is completely commented out in code, it could simply be replaced with a pure Swift versionAnd everything works as expected.Another thing to note is that Realm was/is actually ObjC under the hood - the ObjectiveCSupport class provides the interoperability so as projects are moved from ObjC to Swift, in many cases it’s transparent. e.g. Results in Swift is RLMResults in ObjC, a Swift List is actually a RLMArray in ObjC.However, you’ll never need to know that. Just write Swift code and Realm will take care of the underlying assignments.", "username": "Jay" }, { "code": "", "text": "Everything you’re saying about Swift is all fine and good. You haven’t posted any Objective C code though, which is where my questions lie.What I’m looking for is the best practices for integrating Realm into a project where a substantial portion of the code is still Objective C. New work being in Swift. The goal being a stable and supportable integration to maintain that book of work still in ObjC, providing the data via Realm.Maybe the simple solution is to step back and use an Objective C Realm model class instead of Swift, and expose them to Swift via the bridging header.", "username": "Brian_Grimal" }, { "code": "@objc var objC_anOptionalBoolean: ObjCBool {\n get { return ObjCBool(self.anOptionalBoolean ?? false) }\n set { self.anOptionalBoolean = newValue.boolValue }\n }\n@Persisted var optBoolName: Bool?let value = RealmProperty<Bool?>()", "text": "No ObjC code was provided as it’s not clear (to me) what it’s needed for since everything new is Swift and everything legacy is ObjC.New work being in Swift.The old code and models will continue to work as-is even after adding new Swift code and new models.So, you can create your new Swift models and interact with them in a Swift way. If desired, the existing legacy ObjC models could be replaced over time with Swift models and refactor the code to access them in a Swift way (maybe not necessary).Remember - the underlying data is the same so making an identical model in Swift can access that same data per my above example.We’re kinda talking at a 10,000’ level here - IMO, there really isn’t a best practice since ObjC can live within the same project as Swift and the underlying data is the same and can be accessed by either.Using your above example:What is the purpose of that code? You can have an optional boolean in both Swift@Persisted var optBoolName: Bool?and ObjClet value = RealmProperty<Bool?>()so why add an extension since that functionality already exists?I think it would provide clarity and help us (me, lol) understand what the use case is if you can provide a specific example of what kind of ObjC code you need for a specific task. Just need to clarify on what that is.", "username": "Jay" } ]
Swift / ObjC interoperability
2023-08-28T13:39:47.407Z
Swift / ObjC interoperability
438
null
[]
[ { "code": "{\n \"arr\": [\n {\n \"id\": 1,\n \"title\": \"title1\"\n },\n {\n \"id\": 2,\n \"title\": \"title2\"\n }\n ]\n}\narrid{id: 1, title: \"whatever\"}\n{id: 1, title: \"doesn't matter\"}\n", "text": "Hi everyone.\nHave the following document:I need to add a new object to arr field but only if it’s not present there.\nI need to be able to specify what “present” means. In most languages that’s done by using “comparators”. How do I provide some sort of comparator to mongo’s $addToSet?In my case I want to compare objects by id field\nSo the following objects should be equal:The official documentation of $addToSet provides only trivial examples when array consists of simple elements like strings or numbers, which obviously don’t need any comparators.", "username": "Mykola_Ilminsky" }, { "code": "db.comparators.insertMany([\n {\n _id: 'A',\n arr: [\n {\n id: 1,\n title: 'title 1'\n },\n {\n id: 2,\n title: 'title 2'\n }\n ]\n },\n {\n _id: 'B',\n arr: [\n {\n id: 1,\n title: 'title 1'\n },\n {\n id: 2,\n title: 'title 2'\n },\n {\n id: 1,\n title: 'title 3'\n }\n ]\n }\n]);\narrAbrandNewItemAid=2idconst brandNewItem = {\n id: 2,\n title: 'title 2',\n};\ndb.comparators.aggregate([\n {\n $match: {\n _id: 'A', // match specific document to update\n }\n },\n {\n // check if array already contains object with the same id\n $addFields: {\n totalFound: {\n $reduce: {\n input: '$arr',\n initialValue: 0,\n in: {\n $cond: [\n // brandNewItem variable is used\n { $eq: ['$$this.id', brandNewItem.id] }, \n { $sum: ['$$value', 1] },\n { $sum: ['$$value', 0] }\n ]\n },\n }\n }\n }\n },\n {\n $project: {\n arr: {\n $cond: [\n { $eq: ['$totalFound', 0 ] },\n { \n // brandNewItem variable is used\n $concatArrays: ['$arr', [brandNewItem]]\n },\n '$arr', // reuturn 'arr' array as it was initally\n ],\n }\n }\n }\n]);\nid=23idarrdb.comparators.aggregate([\n {\n $match: {\n _id: 'B', // match specific document\n }\n },\n {\n $unwind: '$arr'\n },\n {\n $group: {\n _id: {\n docId: '$_id',\n arrItemId: '$arr.id'\n },\n arr: {\n $addToSet: {\n id: '$arr.id',\n title: '$arr.title'\n }\n }\n }\n },\n {\n $group: {\n _id: '$_id.docId',\n arr: {\n $push: \n { $arrayElemAt: ['$arr', 0] }\n }\n }\n }\n]);\n$accumulatordb.comparators.aggregate([\n {\n $match: {\n _id: 'B', // match specific document\n }\n },\n {\n $unwind: '$arr'\n },\n {\n $group: {\n _id: '$_id',\n arr: {\n $accumulator: {\n init: function () {\n return {};\n },\n accumulate: function (state, arrItem) {\n state[arrItem.id] = arrItem;\n return state;\n },\n accumulateArgs: ['$arr'],\n merge: function () {\n return {};\n },\n finalize: function (state) {\n return Object.keys(state).map(function (key) {\n return state[key];\n });\n },\n lang: 'js'\n }\n }\n }\n }\n]);\n", "text": "Hello, @Mykola_Ilminsky ! Welcome to the MongoDB community! How do I provide some sort of comparator to mongo’s $addToSet?You can’t do that. $addToSet treats object like a big single value. So, from the $addToSet operator’s point of view, if two objects have same field set and exact same values for those fields - those two objects are considered equal. To compare only selected fields in objects, you need to find other solutions.Let me demonstrate how it can be done without $addToSet.First, I will create some sample dataset to work with:Use case 1. Update document arr field with new, but “unique” object.Let’s suppose we want to update document A by adding brandNewItem into the array of document A. 
Check with the dataset: item with id=2 already in array, so we should check for its existence and do not insert item with duplicated id.This is how it can be done in the aggregation pipeline:The aggregation pipeline above won’t add item with id=2, but if you change it to 3 - item will be added. You can persist the result by adding a $merge stage in the end of the pipeline.Use case 2: inside $group stage\nWhat if you want to remove items with duplicated id from your arr array?\nIt can be done like this:OR\nYou can go crazy and use $accumulator for the same purpose. Note, that solution with $accumulator may be slower, than the previous one, as it contains custom js-code. Only use the $accumulator operator if the provided pipeline operators cannot fulfill your application’s needs.", "username": "slava" }, { "code": "{ \"arr.id\" : { \"$ne\" : item.id } } \n{ _id: 101,\n arr: [\n { id: 1, title: 'title1' },\n { id: 2, title: 'title2' }\n ]\n}\n{\n _id: 102,\n arr: [\n { id: 1, title: 'title1' },\n { id: 3, title: 'title3' }\n ]\n}\nconst brandNewItem = {\n id: 2,\n title: 'title 2',\n};\nc.updateOne( { \"_id\" : 101 , \"arr.id\" : { \"$ne\" : brandNewItem.id } } , { \"$push\" : { \"arr\" : brandNewItem } } )\nc.updateOne( { \"_id\" : 102 , \"arr.id\" : { \"$ne\" : brandNewItem.id } } , { \"$push\" : { \"arr\" : brandNewItem } } )\n", "text": "An alternative for very simple cases is that rather than using $addToSet use $push but add the following to your query.Using the collection:Using the same:The following update will not succeed:while the following will succeed", "username": "steevej" } ]
$addToSet with custom comparator
2023-08-25T13:17:56.879Z
$addToSet with custom comparator
464
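The conditional append shown in the aggregation above can also be written as a single update with an aggregation pipeline (available since MongoDB 4.2), which avoids a separate $merge step. This is only a sketch built from the thread's own sample data; brandNewItem is the same placeholder object used there, and the comparison is on the id field only.

const brandNewItem = { id: 2, title: 'title 2' };

db.comparators.updateOne(
  { _id: 'A' },
  [
    {
      $set: {
        arr: {
          $cond: [
            { $in: [brandNewItem.id, '$arr.id'] },       // an item with this id already exists?
            '$arr',                                       // yes: keep the array unchanged
            { $concatArrays: ['$arr', [brandNewItem]] }   // no: append the new item
          ]
        }
      }
    }
  ]
);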
null
[ "node-js", "crud" ]
[ { "code": "async function reply(collection: Collection, body: MyBody) {\n const {reply} = body\n if (!reply) return\n await collection.findOneAndUpdate({\n _id: new ObjectId(reply)\n }, {\n $inc: { subCount: 1},\n $push: { children: reply }\n })\n}\nTS2322: Type { children: string; } is not assignable to type PushOperator<Document> \nType { children: string; } is not assignable to type NotAcceptedFields<Document, readonly any[]> \nProperty children is incompatible with index signature.\nType string is not assignable to type undefined \nmongodb.d.ts(6361, 5): The expected type comes from property $push which is declared here on type UpdateFilter<Document>\n$pushnodejs [email protected]", "text": "My code is as follows:But the compiler throws an error when I try to compile it to JS:The compiler says my $push is written incorrectly, but I don’t seem to be doing anything differently than the example in the documentation.The dependency I’m using is nodejs [email protected].", "username": "Dreams_Empty" }, { "code": "pushObejectId(reply)push", "text": "I am not 100% certain, but I think the wrong element is being pushed into the push operator. I think the ObejectId(reply) is being passing in the wrong value to push operator.It would be better if you could provide some more detail or a link to your full code. Cause I can’t determine what MyBody is here…", "username": "Shahriar_Shatil" }, { "code": "// @ts-ignorefindOneAndUpdateupdateOneupdateManyreply", "text": "The code itself should be fine, because if I use // @ts-ignore to block this compile error, the runtime does not report an error and I get the results I expect.Also if I change findOneAndUpdate to updateOne in the code I gave above, the compile error goes away, but if I change it to updateMany the error reappears.I’ll add one more detail, reply is a string of length 24 storing hexadecimal numbers.", "username": "Dreams_Empty" }, { "code": "updateOneupdateOnefindOneAndUpdateupdateMany// @ts-ignore", "text": "Now, the error no longer exists when I use updateOne, but it still exists when I replace updateOne with other actions that have an update attached to them (e.g., findOneAndUpdate updateMany, etc.), and I have to use // @ts-ignore to block the error.", "username": "Dreams_Empty" } ]
I'm getting a TS2322 error when using `$push` with `updateOne`
2023-08-25T13:30:38.280Z
I'm getting a TS2322 error when using `$push` with `updateOne`
497
https://www.mongodb.com/…3_2_1024x532.png
[ "node-js", "atlas-cluster" ]
[ { "code": "minion:\n image:\"none\"\n name: \"Minion\"\n preferredTarget: \"Everything\"\n attackType: \"Ranged\"\n housingSpace: 2\n movementSpeed: 32\n attackSpeed:\n value: 1\n unit: \"s\"\n darkBarracklvlReq: 5\n attackRange:\n range: 2.75\n value: \"tiles\"\n superTroop: true\n superTroopReqLvl: 8\n levels: (contain a document per level with info like above)\n levelOne:\n levelTwo\n//it keeps going\nconst express = require(\"express\");\nconst { MongoClient } = require(\"mongodb\");\nconst uri = require(\"./db/connection.js\");\nconst app = express();\n// const port = 3001;\n// const routes = require(\"./Routes\");\n// app.use(\"/\", routes);\n\nconsole.log(\"connectDB:\", uri);\nconst client = new MongoClient(uri);\nconst dbName = \"ClashOfClans\";\nconst collName = \"elexirTroop\";\nconst wholeColl = client.db(dbName).collection(collName);\n\nconst connectDB = async () => {\n try {\n await client.connect();\n console.log(\"Connected: \", dbName);\n } catch (err) {\n console.error(`Error connecting: ${err}`);\n }\n};\nconst documentToFind = { elexirTroop: \"barbarian\" };\n\nconst main = async () => {\n try {\n await connectDB();\n let result = await wholeColl.findOne(documentToFind);\n console.log(result);\n } catch (err) {\n console.error(`Error DID COME UPP:${err}`);\n }\n};\nmain();\n\napp.listen(3000, async () => {\n await connectDB();\n console.log(\"App is running\");\n});\n\nmodule.exports = app;\nmodule.exports = uri =\n \"mongodb+srv://oscarThroedsson:[email protected]/?retryWrites=true&w=majority\";\nconst wholeColl = client.db(dbName).collection(collName);\nconst connectDB = async () => {\n try {\n await client.connect();\n console.log(\"Connected: \", dbName);\n } catch (err) {\n console.error(`Error connecting: ${err}`);\n }\n};\nconst documentToFind = { elexirTroop: \"barbarian\" };\n\nconst main = async () => {\n try {\n await connectDB();\n let result = await wholeColl.findOne(documentToFind);\n console.log(result);\n } catch (err) {\n console.error(`Error DID COME UPP:${err}`);\n }\n};\nmain();\n ~/Documents/MyOwnProjects/clashOfClanStats/server   main  node index.js\nconnectDB: mongodb+srv://oscarThroedsson:[email protected]/?retryWrites=true&w=majority\nConnected: ClashOfClans\nConnected: ClashOfClans\nApp is running\nnull\n", "text": "PROBLEM: Cant figure out the syntax to get out specific data from my documents\nGOAL: I want to have access to my collection and use dot notation to get in to an document and get its information.I have a database that looks like this:\nDatabaseName\nCollection1\nCollection2\nCollection3\n\nSkärmavbild 2023-08-24 kl. 13.43.472620×1362 331 KB\nIn every Collection do i have multiply objects and in that i have data about the object.I have tried to extract data for my database in over 10 hours and haven´t find the solution. I may have forgotten something from the introduction course or have understand. That is why that i would appriciate som visuals with your answers, code examples and so on. Down below can you see my database and code in VS.MongoDB\n ClashOfClans (database)\n… darkelexirTroop (collection)\n… elexirTroop (collection)\n… superTroops (collection)Inside darkElexirTroop img is attached.My code in VS:\nI have two files that has to do with mongoDB. I have followed the node.js guide. 
I have installed Express, what i know do this not provide any problems or change the way to get the data from the documents?!FileOne: index.js\nCode:File two: connection.jsExplanation to the code:This part is just russion roulette.I have also connected the message i get in terminal but the msg is below:I dont know what more information that you would want to be able to help me. If can provide any code, please do. But also any docs would be appreciated.", "username": "Oscar_Throedsson" }, { "code": "let result = await wholeColl.findOne({});\nconsole.log(result);\nlet result = await wholeColl.findOne({\n 'minion.name': 'Minion'\n});\nconsole.log(result);\n", "text": "Hello, @Oscar_Throedsson ! Welcome to the MongoDB community! I tried your code and it should work fine, if you specify proper conditions in the .findOne() method.Try to change your query to something like this:ORI would suggest you to read more about CRUD operations using MongoDB Node.js driver. You can start with reading every chapter about read operations and write operations.", "username": "slava" }, { "code": "const express = require(\"express\");\nconst { MongoClient } = require(\"mongodb\");\nconst uri = require(\"./db/connection.js\");\nconst app = express();\n// const port = 3001;\n// const routes = require(\"./Routes\");\n// app.use(\"/\", routes);\n\nconsole.log(\"connectDB:\", uri);\nconst client = new MongoClient(uri);\nconst dbName = \"ClashOfClans\";\nconst collName = \"elexirTroop\";\nconst wholeColl = client.db(dbName).collection(collName);\n\nconst connectDB = async () => {\n try {\n await client.connect();\n console.log(\"Connected: \", dbName);\n } catch (err) {\n console.error(`Error connecting: ${err}`);\n }\n};\nconst documentToFind = { barbarian: { name: \"barbarian\" } };\n\nconst main = async () => {\n try {\n await connectDB();\n let result = await wholeColl.find({}).toArray(); // Take out the whole doc\n let barbarianData = result[0].barbarian; // looking at the first place\n console.log(\"barbarianData\", barbarianData);\n console.log(\"modify: \", barbarianData.name);\n\n // console.log(\"result\", result);\n } catch (err) {\n console.error(`Error DID COME UPP:${err}`);\n }\n};\nmain();\n\napp.listen(3000, async () => {\n await connectDB();\n console.log(\"App is running\");\n});\n\nmodule.exports = app;\nconst dbName = \"ClashOfClans\";\nconst collName = \"elexirTroop\";\nconst dataDoc = client.db(dbName).collection(collName); //Want to catch the doc data here. \n\nconsole.log(dataDoc.name) \nconst dataDoc = client.db(dbName).collection(collName); //Want to catch the doc data here. \nconsole.log(´Your firstname is ${dataDoc.name.firstName} and your lastname is ${dataDoc.name.lastName}`)\n", "text": "Hey Slava!Thank you for your response.I figure it out and this is the code i came up with.What I am trying to do is:\nI want to find a collection with a certain name. Then go to a doc with a certain name. Then I want all the keys and values to be declared/returned in to a variable so i just can access the data by writingvarible.keyI would like it to be this easy.I just want the data from a doc to be saved in the varible. 
Think of handling object from a API…in this row am i Thinking the followingI am fetching the data from the database ClashOfClans (client.db(dbName) ang wan the following data in this document returned (.collection(collName);This would also make it easier to fetch nested docs.Maybe we can make it work like that but i", "username": "Oscar_Throedsson" }, { "code": "const client = new MongoClient(uri);\nconst dbName = \"ClashOfClans\";\nconst collName = \"elexirTroop\";\nconst wholeColl = client.db(dbName).collection(collName);\nconst retriveTroopDoc = { name: \"barbarian\" };\nconst main = async () => {\n try {\n await connectDB();\n // let result = await wholeColl.find(documentToFind).toArray();\n // troop = result[0].barbarian;\n troop = await wholeColl.findOne(retriveTroopDoc);\n\n console.log(\"retriveTroopDoc\", retriveTroopDoc);\n console.log(\"troopData: \", troop);\n\n // console.log(\"result\", result);\n } catch (err) {\n console.error(`Error DID COME UPP:${err}`);\n }\n};\nmain();\n", "text": "I tried this:Then in the method I tried this:It is really important I get an object… The front-end will be third world war for my head to code if i have an array.My database look like this:ClashOfClans - (database)darkElexirTroop (collection)_id:“randome id”\nminion: (object/document)\nhogRider: (object/document)\nvalkyrie: (object/document)\ngolem: (object/document)\nwitch: (object/document)\nlavaHound: (object/document)\nbowler: (object/document)\niceGolem: (object/document)\nheadhunter: (object/document)\napprenticeWarden: (object/document)elexirTroop (collection)_id:“randome id”\nbarbarian: (object/document)\narcher: (object/document)\ngiant: (object/document)\ngoblin: (object/document)\nwallBreaker: (object/document)\nballoon: (object/document)\nwizard: (object/document)\nhealer: (object/document)\ndragon: (object/document)\npekka: (object/document)\nbabyDragon: (object/document)\nminer: (object/document)\nelectroDragon: (object/document)\nyeti: (object/document)\ndragonRider: (object/document)\nelectroTitan: (object/document)I have read everything you linked, but I dont find it helpfule. I dont understand what I am doing wrong. Everything i read it looks like I am doing the right thing to get the data.", "username": "Oscar_Throedsson" }, { "code": "{ name: \"barbarian\" }", "text": "what I am doing wrongYou are querying a field called name for the value barbarian as in{ name: \"barbarian\" }While the little amount of data you shared does not even have a field called name.I strongly recommend that you take a look atDiscover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. Start training with MongoDB University for free today.", "username": "steevej" }, { "code": "", "text": "Hey Steve.\nEverything is on 3000 lines, so i didnt share everything. But here you go.Here can you see the barbarian doc with the name of Barbarian.\n\nSkärmavbild 2023-08-25 kl. 
15.46.512604×1400 375 KB\nI want all the data in barbarian be returned as an object so i can do dot notation in VS.", "username": "Oscar_Throedsson" }, { "code": "{ \"barbarian.name\" : \"Barbarian\" }\n", "text": "so your query must be", "username": "steevej" }, { "code": "const express = require(\"express\");\nconst { MongoClient } = require(\"mongodb\");\nconst uri = require(\"./db/connection.js\");\nconst app = express();\n// const port = 3001;\n// const routes = require(\"./Routes\");\n// app.use(\"/\", routes);\n\nconsole.log(\"connectDB:\", uri);\nconst client = new MongoClient(uri);\nconst dbName = \"ClashOfClans\";\nconst collName = \"elexirTroop\";\nconst wholeColl = client.db(dbName).collection(collName);\nconst retriveTroopDoc = { barbarian: { name: \"Barbarian\" } };//here i tried your suggestion\n\nconst connectDB = async () => {\n try {\n await client.connect();\n console.log(\"Connected: \", dbName);\n } catch (err) {\n console.error(`Error connecting: ${err}`);\n }\n};\n// const documentToFind = { barbarian: { name: \"barbarian\" } }; / i had tried it before\n\nconst main = async () => {\n try {\n await connectDB();\n // let result = await wholeColl.find(documentToFind).toArray();\n // troop = result[0].barbarian;\n\n troop = await wholeColl.findOne(retriveTroopDoc);\n console.log(\"URI: \", uri);\n console.log(\"troop: \", troop);\n\n // console.log(\"result\", result);\n // console.log(\"troop: \", troop.name);\n } catch (err) {\n console.error(`Error DID COME UPP:${err}`);\n }\n};\nmain();\n\napp.listen(3000, async () => {\n await connectDB();\n console.log(\"App is running\");\n});\n\nmodule.exports = app;\nmodule.exports = uri =\n \"mongodb+srv://oscarThroedsson:[email protected]/?retryWrites=true&w=majority\";\n", "text": "I have tried that as well. I have done the courses, I have read the documentation, i have watched a lot of youtube videos. I have asked in the discord. I don´t understand what I´am doing wrong!\nfile: index.jsNot to get confused. Code that is commented out is code i have tried before and have giving me null or undefined.The code above game me null.connectDB: mongodb+srv://oscarThroedsson:[email protected]/?retryWrites=true&w=majority\nConnected: ClashOfClans\nConnected: ClashOfClans\nApp is running\nURI: mongodb+srv://oscarThroedsson:[email protected]/?retryWrites=true&w=majority\ntroop: null ← Here is console.log(\"troop: \", troop.name);This is how i have set up my VS fil.Folders:\ndb… contain js file: “connection.js”This string is console.log above client and in method main.index.js- All the code is aboveI really don´t know what to show you or what information to give you. As my understanding, null means it cant find what I am searching for.", "username": "Oscar_Throedsson" }, { "code": "{ barbarian: { name: \"Barbarian\" } };//here i tried your suggestion{ \"barbarian.name\" : \"Barbarian\" }", "text": "{ barbarian: { name: \"Barbarian\" } };//here i tried your suggestionthe above is not my suggestion. 
my suggestion is{ \"barbarian.name\" : \"Barbarian\" }", "username": "steevej" }, { "code": "console.log(troop);\nconsole.log(troop.barbarian);\nconsole.log(troop.barbarian.name);\n", "text": "Oh, thanks!!if II get all the documents inside the collection elexirtroop with this code.\nAnd if i doI get the document barbarian.If i want a value i need to do.Which will give me the value on the key “name”.I thought that find returned a document not a whole collection, or have I misunderstand how it works!?If you look at my mongoDB database:\nClashOfClans is the Database.\nelexirTroop is the Collection\nBarbarian is a DocumentI interpret that the troop variable returns a collection that contains the key value barbarian, and not the document.If you want, can you please help me clarify it, if I have misunderstood something.Thank you so much for your help.", "username": "Oscar_Throedsson" }, { "code": "QUERY RESULTS:1-1 OF 1\n", "text": "You seem to be confused between what is a document in a collection and what is an object in a document.If you look at your own screenshots you will see that your elexirTroop collection as 1 document. It is indicatedJust beside the field barbarian you will see that it is an Object within the single document, the single _id shown. As such, the fields archer, giant, goblin, … all refer to Object within the same unique document from your collection.", "username": "steevej" }, { "code": "", "text": "Hey Steve.Have tried to read a littlebit before I answer you.I interperate your text that a collection is a document? But that must be wrong, becuase a collection contains document, right?.I thought all of the data inside elexirtroops was one document, but when i have read more i would say the following.Everything between the curlybrackets is a single document. So every troop is a document in elexirTroops.elexirtroops: Collection\n-barbarian: Document\n-archer: DocumentInside every Document, we have fields av values. What i learned is key: value. But that doesn´t matter right now. Correct?\nSkärmavbild 2023-08-29 kl. 10.27.32883×622 53 KB\nDoes that mean i should have curlybrackets {} efter barbarian: and in the end to separate the document to its fields and values?", "username": "Oscar_Throedsson" }, { "code": "", "text": "I interperate your text that a collection is a document? But that must be wrong, becuase a collection contains document, right?.This interpretation is wrong. A collection contains one or more documents. Your elexirTroop contains only one document and its _id is ObjectId( “64e…d22” ).The fields barbarian and archer within the only document of your elixirTroop are Object. If you look closely beside barbarian you will see a colon and the word Object.Does that mean i should have curlybrackets {} efter barbarian: and in the end to separate the document to its fields and values?Yes if you want barbarian be a separate document from archer.", "username": "steevej" }, { "code": "", "text": "I got it!I understand the structure now. I sorted it out!\n\nSkärmavbild 2023-08-29 kl. 15.37.221912×1058 198 KB\nThank for your patience and advises!", "username": "Oscar_Throedsson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How do I get my data?!
2023-08-24T11:50:56.522Z
How do I get my data?!
630
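To make the resolution of the thread above concrete, here is a small sketch of what the Node.js driver calls look like once each troop is stored as its own document (the collection and database names follow the thread; the seed field names are illustrative only).

// One document per troop, instead of one giant document holding every troop as a nested object.
async function getBarbarian(client) {
  const troops = client.db('ClashOfClans').collection('elexirTroop');

  // Illustrative seed data:
  // await troops.insertMany([
  //   { name: 'Barbarian', housingSpace: 1 },
  //   { name: 'Archer', housingSpace: 1 },
  // ]);

  const barbarian = await troops.findOne({ name: 'Barbarian' });
  console.log(barbarian.name, barbarian.housingSpace); // plain dot notation on a single document
  return barbarian;
}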
null
[]
[ { "code": "", "text": "db.getCollection(‘students’).save( { “timeStamp”: “$NOW”,“test”:“fdsfds”})", "username": "kishan_agarwal" }, { "code": "const date = ISODate();\ndb.test.insertOne({ d: date });\nconst date = new Date();\ndb.test.insertOne({ d: date });\nconst date = ObjectId().getTimestamp();\ndb.test.insertOne({ d: date });\ndb.test.updateOne({ _id: <ID> }, [{ $set: { d: '$$NOW' } }]);\ndb.test.updateOne({ _id: <ID> }, { $currentDate: { d: { $type: 'date' } } });\ndb.test.updateOne({ _id: { $lt: 0 } }, [{ $set: { d: '$$NOW' } }], { upsert: true });\n", "text": "Hello, @kishan_agarwal! Welcome to the community!$$NOW is an aggregation variable, which means that it can by used only within aggregation pipeline.Due to the documentation, save() method is deprecated. use insertOne() instead.To insert current date you need to provide date value on the client side.\nFor example, ISODate() function:Or using javascript Date constructor:You can extract current date from newly constructed ObjectId value:It is possible to set date value on the MongoDB server’s side, using update operations.So, you can use $$NOW when you update one or more documents using pipeline:Or, by updating documents using $currentDate operator:It is possible, to insert documents using update operations using workaround:Although the above workaround will work, avoid doing so, becauseSo, prefer insertOne with dates, generated on the app (client) side.", "username": "slava" }, { "code": "", "text": "Hi Slava!Is it possible in never versions to insert a document with a server side timestamp without using an upsert based workaround?", "username": "Balazs_Piszkor" } ]
Not able to use the "$$NOW" variable to get a date-time value on MongoDB 4.2
2020-06-26T19:28:41.031Z
Not able to use the "$$NOW" variable to get a date-time value on MongoDB 4.2
7,388
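Regarding the last, unanswered question in the thread: as far as I know there is still no insert-time operator equivalent to $$NOW, so the usual options remain a client-generated date or the creation time already embedded in the _id. A small mongosh-style sketch of both (the collection name is arbitrary):

// Option 1: client-generated timestamp at insert time
db.test.insertOne({ d: new Date() });

// Option 2: no extra field at all – recover the creation time from the ObjectId
const doc = db.test.findOne();
print(doc._id.getTimestamp()); // creation time with ~1 second resolution

// Server-side time still requires an update pipeline, e.g. touching a document later:
db.test.updateOne({ _id: doc._id }, [{ $set: { touchedAt: '$$NOW' } }]);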
null
[]
[ { "code": "", "text": "As indicated in the title, is there a way to update Atlas functions using the CLI? Atlas functions are excellent, but I’ve created an npm module as an external module for Atlas functions. After updating my npm module, I need to also update the version of this external module. Does the official documentation offer a way to update via CLI?", "username": "Scoz_Auro" }, { "code": "package.jsonnpm uninstall <your_package_name>npm install <your_package_name>--include-package-json", "text": "Hi Scoz_Auro,Please view my reply to your other posting here, as it may clarify concepts relevant to this answer.While it’s possible to adjust the dependency version in the Dependencies section of the Functions in Atlas UI, you can also opt to make the change locally and push back to Atlas via the CLI. You will need to edit the dependency version within the package.json of the functions directory. Once you’ve done this, run npm uninstall <your_package_name>, followed by npm install <your_package_name> in the functions directory to get the new version in your directory.To push the new version to Atlas, make sure to use the --include-package-json in your push.", "username": "Cyrus_Ordoubadian" }, { "code": "", "text": "It works, many thanks!", "username": "Scoz_Auro" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there a way to update an Atlas function using the CLI?
2023-08-29T04:43:52.764Z
Is there a way to update an Atlas function using the CLI?
509
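A possible end-to-end sequence for the dependency bump described above, assuming the App Services CLI is already authenticated and the app has been pulled locally. The package name is a placeholder; older setups use realm-cli instead of appservices, but the --include-package-json flag mentioned in the answer is the key part either way.

cd ./my-app/functions                       # the functions directory of the pulled app
npm uninstall my-external-module
npm install [email protected]
cd ..
appservices push --include-package-json     # or: realm-cli push --include-package-json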
null
[ "storage" ]
[ { "code": "", "text": "Looking at mongo’s code, I see that that it still allows partial modifications of documents via RecordDamageEvent on the document instance.\nWhich user operations trigger mongo to use those?Thanks,\nRoey.", "username": "Roey_Maor" }, { "code": "", "text": "bumping question, trying my luck.", "username": "Roey_Maor" }, { "code": "", "text": "Hi @Roey_MaorYes you are correct. I think this is internally called “updates with damages”. The ticket that implemented this feature is WT-2972, and this feature has been in place since MongoDB 3.6 series. I believe updates to a document will trigger this code (see SERVER-29250).Having said that, this is an internal feature, the spirit of which is to increase the performance of WiredTiger for general workloads. This is not user-tunable or even user-visible, so other than a curiosity, I don’t think there’s anything you can see here Best regards\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks Kevin, much appreciated!So I guess a continuing question would be - which updateOne/updateMany operations use it?\nLooking at wiredtiger_record_store.cpp the answer seems to be that upon every update call we fetch the old value, and then only if:Then we use the partial modification, otherwise just insert the entire combined value as the update.Roey.", "username": "Roey_Maor" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
In-place/partial updates
2023-08-21T12:06:26.736Z
In-place/partial updates
589
null
[ "python", "motor-driver" ]
[ { "code": "", "text": "We are pleased to announce the 3.3.0 release of Motor - MongoDB’s Asynchronous Python Driver. This release brings support for PyMongo 4.5 and Python 3.12.See the changelog for a high-level summary of what is in this release or see the Motor 3.3.0 release notes in JIRA for the complete list of resolved issues.Documentation: Motor: Asynchronous Python driver for MongoDB — Motor 3.3.0 documentationChangelog: Changelog — Motor 3.3.0 documentationSource: GitHub - mongodb/motor at 3.3.0Thank you to everyone who contributed to this release!", "username": "Steve_Silvester" }, { "code": "def mongo_client() -> AsyncIOMotorClient: # <- Variable \"motor.motor_asyncio.AsyncIOMotorClient\" is not valid as a type [valid-type]mypy(error)\n return AsyncIOMotorClient(os.environ[\"MONGO_URI\"])\nmotor.core.AgnosticClient", "text": "Is there any documentation on how to use the type hints? This fails:I can type it as a motor.core.AgnosticClient, but that just leads to other type errors.", "username": "Mark_Edwards" }, { "code": "AgnosticCursor.to_list()AgnosticConnection.find_one()", "text": "I more or less figured it out. From what I can tell AgnosticCursor.to_list() is not fully typed, and also AgnosticConnection.find_one() typing fails for me with mypy 1.5.1.Some documentation here would be nice, because I had to dig through code to figure this out.", "username": "Mark_Edwards" }, { "code": "", "text": "Thanks for the feedback @Mark_Edwards, I’ve opened https://jira.mongodb.org/browse/MOTOR-1177.", "username": "Steve_Silvester" }, { "code": " File \"/home/insiderinternal/.local/lib/python3.11/site-packages/motor/motor_asyncio.py\", line 53, in <module>\n AsyncIOMotorCollection = create_asyncio_class(core.AgnosticCollection)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/insiderinternal/.local/lib/python3.11/site-packages/motor/motor_asyncio.py\", line 41, in create_asyncio_class\n return create_class_with_framework(cls, asyncio_framework, \"motor.motor_asyncio\")\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/insiderinternal/.local/lib/python3.11/site-packages/motor/metaprogramming.py\", line 289, in create_class_with_framework\n new_class_attr = attr.create_attribute(new_class, name)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n File \"/home/insiderinternal/.local/lib/python3.11/site-packages/motor/metaprogramming.py\", line 153, in create_attribute\n method = getattr(cls.__delegate_class__, name)\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\nAttributeError: type object 'Collection' has no attribute 'create_search_index'\n", "text": "Hi,Upgrading motor from 3.2.0 to 3.3.0 causes our process to crash with following error stackWe are using mongodb 6.0.3, and motor as python’s asynchronous library interface.", "username": "Shashank_Nigam" }, { "code": "pymongo>=4.5,<5pymongopip install --upgrade pymongo", "text": "Hi @Shashank_Nigam, it looks like your upgrade didn’t pick up the new requirement of pymongo>=4.5,<5. What command(s) did you use to upgrade? In the mean time, you can install the latest pymongo manually as pip install --upgrade pymongo.", "username": "Steve_Silvester" }, { "code": "", "text": "Thanks for pointing that out. After updating pymongo, library works fine", "username": "Shashank_Nigam" } ]
Motor 3.3.0 Released
2023-08-24T20:49:39.101Z
Motor 3.3.0 Released
1,149
null
[ "react-native" ]
[ { "code": "Exception Type: EXC_BAD_ACCESS (SIGSEGV)\nException Subtype: KERN_INVALID_ADDRESS at 0x754665766974617e -> 0x000065766974617e (possible pointer authentication failure)\nException Codes: 0x0000000000000001, 0x754665766974617e\nVM Region Info: 0x65766974617e is not in any region. Bytes after previous region: 6005596643711 \n REGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL\n MALLOC_NANO (reserved) 600018000000-600020000000 [128.0M] rw-/rwx SM=NUL ...(unallocated)\n---> \n UNUSED SPACE AT END\nTermination Reason: SIGNAL 11 Segmentation fault: 11\nTerminating Process: exc handler [5039]\n\"react-native\": \"0.70.6\", \"realm\": \"^11.2.0\"", "text": "After installing realmjs the app crashes / quit unexpectedly when opening. There following error is shownStacktrace & log outputReproduction Steps\non a clean app w/:\n\"react-native\": \"0.70.6\",\nbuild ios - successful/\nadd\n \"realm\": \"^11.2.0\"\nbuild ios - launches the app and it crashes immediately", "username": "Siso_Ngqolosi" }, { "code": "", "text": "@Siso_Ngqolosi Is it possible to provide a larger stacktrace? Also some sample code that produces this crash?", "username": "Andrew_Meyer" }, { "code": "", "text": "Cause was incompatible realm and react native version", "username": "Siso_Ngqolosi" } ]
App is crashing at launch after installing Realm into a React Native project (iOS only)
2023-07-27T17:49:16.993Z
App is crashing at launch after installing Realm into a React Native project (iOS only)
971
null
[ "node-js" ]
[ { "code": "Error: ENOENT: no such file or directory, open '/var/lib/jenkins/workspace/xx-yy-zz/backend/node_modules/mongodb-connection-string-url/src/index.ts'\nconst config: Config.InitialOptions = {\n preset: 'ts-jest',\n testEnvironment: 'node',\n verbose: true,\n coveragePathIgnorePatterns: [ \"\\\\\\\\node_modules\\\\\\\\\"],\n coverageProvider: \"v8\",\n testMatch: ['**/__tests__/**/*.ts?(x)', '**/?(*.)+(spec|test).ts?(x)'],\n setupFilesAfterEnv: ['./jest.setup.ts'],\n collectCoverage: true,\n coverageDirectory: \"<rootDir>/coverage\",\n}\nexport default config;\n", "text": "When I run jest test cases for nodejs application. I am facing this error after tests execution succeed.However this error doesn’t come on my local system. This file doesn;t exist neither on local system nor on jenkin workspace but still it give this error. I am not getting the reason. Due to which coverage folder is not generated on jenkin workspace however on local it works.This is my jest.config.js", "username": "Mayank_Sharma1" }, { "code": "Error: ENOENT: no such file or directory, \nopen '/var/lib/jenkins/workspace/xx-yy-zz/backend/node_modules/mongodb-connection-string-url/src/index.ts'\n", "text": "Hey @Mayank_Sharma1,Welcome to the MongoDB Community!I suspect there may be a few things that could be causing this error:Could you confirm that Jenkins and your local machine have the same environments such as Node.js/npm versions, dependencies installed, etc?Could you also double check your Jest config on Jenkins matches your local config, especially around test matches and module name mapper which controls what files get processed?The caching issue on jest can sometimes cache files between runs which could cause unexpected behavior. Try clearing Jest’s cache folder on Jenkins between builds.Also the error indicates an absolute path that may not resolve correctly on Jenkins. Try using relative paths in your config.The root cause is probably an environmental difference between the two systems. Doing some troubleshooting can help identify the specifics of where the issue is occurring.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi @Kushagra_Kesav thanks for your comments. To further find out the issue, I run the jest test cases using docker on local machine and able to replicate issue. When I run unit-test.sh script on local bash terminal.docker build -f DockerfileUnitTest -t my-node-app-back .docker run --rm my-node-app-backif [[ $? -ne 0 ]]; then\necho “Tests failed. Exiting pipeline.”\nexit 1\nfiI got error \"Error: ENOENT: no such file or directory, open ‘/usr/src/app/node_modules/mongodb-connection-string-url/src/index.ts’This was my simple docker file.FROM node:18.14-alpine\nWORKDIR /usr/src/app\nCOPY ./studio-backend/package*.json ./\nRUN npm install\nCOPY ./studio-backend .\nCMD [“npm”, “test”]If I run npm test on window terminal then no such error came", "username": "Mayank_Sharma1" } ]
Error executing Jest test cases for a Node.js application with MongoDB
2023-08-18T12:04:37.129Z
Error executing Jest test cases for a Node.js application with MongoDB
835
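One concrete thing worth checking in the Jest thread above: the posted coveragePathIgnorePatterns uses Windows-style back-slashes, which will not match paths inside the Linux-based Jenkins/Docker containers, so node_modules (including mongodb-connection-string-url) may end up in the coverage pass there while being correctly skipped on Windows. A hedged tweak of the config, keeping everything else as in the original, would be:

// jest.config.js – portable ignore pattern that matches on both Windows and Linux runners
module.exports = {
  preset: 'ts-jest',
  testEnvironment: 'node',
  collectCoverage: true,
  coverageDirectory: '<rootDir>/coverage',
  // forward slashes instead of "\\\\node_modules\\\\" so the pattern also matches POSIX paths
  coveragePathIgnorePatterns: ['/node_modules/'],
};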
https://www.mongodb.com/…7_2_1024x365.png
[ "flutter" ]
[ { "code": "", "text": "\nScreenshot 2566-08-28 at 16.36.361104×394 30.6 KB\n\nRealm in Flutter : I want to define object or any type in field of realmmodel. Can do it ??\nin line 31\nlate Object referredType; so Object is not a realm model type\ni want to how to implement this\nwhen has a object such as Catalog Category Product", "username": "zmonx_gg" }, { "code": "RealmValueObject", "text": "We have the type RealmValue that can hold many different realm types, but not a generic dart Object. Would that be useful to you?", "username": "Kasper_Nielsen1" }, { "code": "", "text": "Have an example for me ?", "username": "zmonx_gg" }, { "code": "", "text": "A place to start RealmValue class - realm library - Dart API", "username": "Kasper_Nielsen1" }, { "code": "", "text": "Thank you so much.\nI have another question.\n\nScreenshot 2566-08-29 at 11.31.27732×453 54 KB\nin line 42 - 50\nhave you idea refactor logic or have a altenative way when type it moreThank you in advance", "username": "zmonx_gg" } ]
Realm in Flutter: I want to define an Object or any type in a field of a RealmModel. Can it be done?
2023-08-28T09:48:06.989Z
Realm in Flutter: I want to define an Object or any type in a field of a RealmModel. Can it be done?
426
null
[ "atlas-cluster", "schema-validation" ]
[ { "code": "npx prisma db push Environment variables loaded from .env Prisma schema loaded from prisma/schema.prisma Datasource \"db\": MongoDB database\n\nError: Prisma schema validation - (get-config wasm) Error code: P1012 error: Environment variable not found: DATABASE_URL. --> schema.prisma:10 | 9 | provider = \"mongodb\" 10 | url = env(\"DATABASE_URL\") |\n\nValidation Error Count: 1 [Context: getConfig]\n\nPrisma CLI Version : 5.2.0\n\nDATABASE_URL=\"mongodb+srv://username:[email protected]/test\" NEXTAUTH_JWT_SECRET= NEXTAUTH_SECRET=\n datasource db { provider = \"mongodb\" url = env(\"DATABASE_URL\") }\n", "text": "i am new to mongoDB and trying to connect to mongoDB server.the error:.env fileschema.prisma fileI’ve tried", "username": "Nicholas_Nelson" }, { "code": "", "text": "Hi @Nicholas_Nelson,I believe the error is for Prisma and not for any connection failures to a MongoDB instance based off my interpretation of the error:Error code: P1012 error: Environment variable not found: DATABASE_URLHave you tried raising this with prisma support regarding this? My guess is there should be some guides on how to connect to an Atlas instance using prisma.An additional note, for troubleshooting purposes, you can try connecting without environment variables as a to see if you’re able to connect to a MongoDB instance.Regards,\nJason", "username": "Jason_Tran" }, { "code": "process.envprocess.env.MONGODB_URLosos.environ['MONGODB_URL']SystemSystem.getenv(\"MONGODB_URL\")", "text": "If you’re encountering the issue where the MongoDB database URL environment variable is not found, it’s likely due to an issue with how the environment variables are being set or accessed in your application. Here are some steps you can take to troubleshoot and resolve the issue:Remember that the exact steps might vary based on your application’s architecture and the programming language/framework you’re using. By following these general steps and paying attention to your application’s specific setup, you should be able to resolve the issue of the MongoDB database URL environment variable not being found.", "username": "Hasan_Wajid" }, { "code": "", "text": "Thanks to ChatGPT for this very generic answer.", "username": "steevej" } ]
mongoDB database url environment variable not found
2023-08-26T17:43:45.968Z
mongoDB database url environment variable not found
1,074
null
[ "aggregation", "queries", "mongodb-shell", "data-api" ]
[ { "code": "db.getCollection(\n 'core.usersAttributes'\n).aggregate(\n [\n { $unwind: { path: '$attributes' } },\n { $unwind: { path: '$attributes.values' } },\n {\n $sort: { 'attributes.values.createdAt': -1 }\n },\n {\n $group: {\n _id: {\n id: '$_id',\n attribute: '$attributes.attribute'\n },\n user: { $first: '$user' },\n attribute: {\n $first: '$attributes.attribute'\n },\n value: {\n $first: '$attributes.values.value'\n },\n createdAt: {\n $first: '$attributes.values.createdAt'\n }\n }\n },\n {\n $lookup: {\n from: 'attributes',\n localField: 'attribute',\n foreignField: '_id',\n as: 'attributeInfo'\n }\n },\n {\n $project: {\n _id: '$_id.id',\n user: 1,\n attribute: {\n id: '$attribute',\n title: {\n $arrayElemAt: [\n '$attributeInfo.title.en',\n 0\n ]\n },\n value: '$value',\n createdAt: '$createdAt'\n }\n }\n },\n {\n $group: {\n _id: '$_id',\n user: { $first: '$user' },\n attributes: { $push: '$attribute' }\n }\n }\n ],\n { maxTimeMS: 60000, allowDiskUse: true }\n);\n{\n \"dataSource\": \"{{dataSource}}\",\n \"database\": \"{{dataBase}}\",\n \"collection\": \"core.userAttributes\",\n \"pipeline\": [\n { \"$unwind\": \"$attributes\" },\n { \"$unwind\": \"$attributes.values\" },\n { \"$sort\": { \"attributes.values.createdAt\": -1 } },\n {\n \"$group\": {\n \"_id\": {\n \"id\": \"$_id\",\n \"attribute\": \"$attributes.attribute\"\n },\n \"user\": { \"$first\": \"$user\" },\n \"attribute\": { \"$first\": \"$attributes.attribute\" },\n \"value\": { \"$first\": \"$attributes.values.value\" },\n \"createdAt\": { \"$first\": \"$attributes.values.createdAt\" }\n }\n },\n {\n \"$lookup\": {\n \"from\": \"attributes\",\n \"localField\": \"attribute\",\n \"foreignField\": \"_id\", \n \"as\": \"attributeInfo\"\n }\n },\n {\n \"$project\": {\n \"_id\": \"$_id.id\",\n \"user\": 1,\n \"attribute\": {\n \"id\": \"$attribute\",\n \"title\": { \"$arrayElemAt\": [\"$attributeInfo.title.en\", 0] },\n \"value\": \"$value\",\n \"createdAt\": \"$createdAt\"\n }\n }\n },\n {\n \"$group\": {\n \"_id\": \"$_id\",\n \"user\": { \"$first\": \"$user\" },\n \"attributes\": { \"$push\": \"$attribute\" }\n }\n }\n ]\n}\n", "text": "Hello there,My first topic here So be patient, LoL.I need this query to work on the data API. So I converted it to:The endpoint I’m sending is working normally for other aggregation queries.\nWith this query, no error (200 response) is returned, as well as no document.Could you help me find out what is wrong with my data API call?Thanks in advance!", "username": "Uelinton_Santos" }, { "code": "$unwind", "text": "Hi @Uelinton_Santos - Welcome to the community With this query, no error (200 response) is returned, as well as no document.Could you help me find out what is wrong with my data API call?You could try doing the request with just 1 pipeline stage at a time (starting with the first $unwind only). If documents are being returned, add more pipeline stages on until you encounter nothing being returned. This may at least identify which stage in the pipeline the issue could be created from.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hey @Jason_Tran ,Thanks for your suggestion.\nI ended up discovering by myself the dumb error I made.\nAs you can see in my original post, I put in the data API call the incorrect name of the collection ‘core.userAttributes’ instead of ‘core.usersAttributes’. 
\nThank you anyway.Best,", "username": "Uelinton_Santos" }, { "code": "", "text": "Thanks for the update / marking the solution and good catch on the small typo ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Aggregate query in data api not working like in mongosh
2023-08-22T21:13:49.292Z
Aggregate query in data api not working like in mongosh
476
https://www.mongodb.com/…5_2_1024x590.png
[ "flutter" ]
[ { "code": "Future<void> addTracker() async {\n var realm = Realm(realmConfig);\n\n Items item = Items(id: 3, name: 't3');\n realm.write(() {\n final wo = get();\n wo?.trackers.add(item);\n });\n\n realm.close();\n}\n@RealmModel()\nclass _Orders {\n int? id;\n String? woName;\n late List<$Items> trackers;\n}\n\n@RealmModel()\nclass $Items {\n int? id;\n String? name;\n}\nOrders? get() {\n var realm = Realm(realmConfig);\n var result = realm\n .all<Orders>()\n .map((e) => Orders(\n id: e.id,\n trackers: e.trackers.map((e) => Items(id: e.id, name: e.name)),\n woName: e.woName,\n ))\n .firstOrNull;\n realm.close();\n return result;\n}\n", "text": "Hi All I need a quick solution or update on the following. I am trying to add / update / delete items from realm and the changes are not getting populated in the database. Check below code that can help you all to understand.Error : Realm object is not managedIf I print the list length after adding, I can get it correctly, but the changes are not reflected in the realm.PS: Just ignore naming conventions this is just my sample app.\nimage1414×816 66.3 KB\nTrackers are getting added but these changes are not reflecting and showing isManaged: false", "username": "Rohit_Daftari" }, { "code": "import 'package:realm_dart/realm.dart';\n\npart 'forum241674.g.dart';\n\n@RealmModel()\nclass _Orders {\n @PrimaryKey()\n late int id;\n late String woName;\n late List<$Items> trackers;\n\n @override\n String toString() => 'Orders{id: $id, woName: $woName, trackers: $trackers}';\n}\n\n@RealmModel()\nclass $Items {\n // Prefer pseudo-random classes like ObjectId or Uuid over int for primary keys.\n // It is hard to ensure uniqueness with int in a distributed system.\n @PrimaryKey()\n late ObjectId id;\n late String name;\n\n @override\n String toString() => 'Items{id: $id, name: $name}';\n}\n\nfinal realmConfig = Configuration.local(\n [Orders.schema, Items.schema],\n shouldDeleteIfMigrationNeeded: true, // only for testing\n);\n// Don't open an close realms all the time. One per isolate will serve you well.\nfinal realm = Realm(realmConfig);\n\nOrders get() {\n // This is a bit contrieved, but I'm just trying to follow your sample.\n // It finds the first order, or create a new one if none exists.\n // I assume you want more orders eventually.\n return realm.all<Orders>().firstOrNull ?? realm.add(Orders(1, 'wo1'));\n}\n\nFuture<void> addTracker() async {\n final item = Items(ObjectId(), 'an item');\n realm.write(() {\n final wo = get();\n wo.trackers.add(item);\n });\n}\n\nFuture<void> main(List<String> args) async {\n await addTracker();\n print(realm.all<Orders>());\n // Don't close until you are done with all the objects served by the realm.\n // In general you don't need to close realms explicitly\n realm.close();\n Realm.shutdown(); // only needed for non-Flutter apps, due to Dart VM issue.\n}\n(Orders{id: 1, woName: wo1, trackers: [Items{id: 64eced0fbda108e828d1dc82, name: an item}, Items{id: 64eced13abd6bb7b17ed0372, name: an item}, Items{id: 64eced18b0019cf0772a7e8b, name: an item}, Items{id: 64eced309533aea59065fec8, name: an item}, Items{id: 64eced3881db05fb23ff246a, name: an item}, Items{id: 64eced5e75f7626e497739f3, name: an item}, Items{id: 64ecee3e0601449d4414c33a, name: an item}]})\n\nExited.\n", "text": "Don’t close the realm until you are done using the objects it serve. 
Also I’m not sure what you are trying to do with the map call?I have tried re-writing your code the best of my understanding, in a way that works:It adds one new item to the one-and-only order on each run.Sample output after a few runs:", "username": "Kasper_Nielsen1" }, { "code": "", "text": "Hi, Thanks for the super quick reply.It’s working, Realm Initialization & closing was the only issue. Moreover, the map I did was throwing invalidated or deleted errors but the same was fixed when only single initialization & close calls were carried out.Thanks Again.", "username": "Rohit_Daftari" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Flutter Add/Update/Delete in List
2023-08-28T17:58:04.856Z
Realm Flutter Add/Update/Delete in List
490
null
[ "golang" ]
[ { "code": "SetHTTPClient", "text": "I’m writing integration tests for a Go program that queries a MongoDB cluster using Mongo Go Driver. I’d like to record those requests and later replay them in the tests.Is there a library that I could use? I’m currently familiar with 2 similar libraries, but those are used for different purposes:I’ve already tried overriding HTTP client using SetHTTPClient option, but it seems like the driver doesn’t use the HTTP client at all (I’ve set the transport to log a string to see if that gets used, but it doesn’t).Any help would be appreciated. Thank you.", "username": "Aleksandar_Jelic" }, { "code": "net.ConnContextDialertype myConn struct {\n\tnet.Conn\n}\n\nfunc (mc *myConn) Read(b []byte) (n int, err error) {\n\t// Capture data read here.\n\treturn mc.Conn.Read(b)\n}\n\nfunc (mc *myConn) Write(b []byte) (n int, err error) {\n\t// Capture data written here.\n\treturn mc.Conn.Write(b)\n}\n\ntype myDialer struct {\n\tdialer *net.Dialer\n}\n\nfunc (md *myDialer) DialContext(ctx context.Context, net, addr string) (net.Conn, error) {\n\tconn, err := md.dialer.DialContext(ctx, network, address)\n\tif err != nil {\n\t\treturn nil, err\n\t}\n\treturn &myConn{Conn: conn}, nil\n}\n\nfunc main() {\n\tclient, err := mongo.Connect(\n\t\tcontext.Background(),\n\t\toptions.Client().\n\t\t\tApplyURI(\"my URI\").\n\t\t\tSetDialer(myDialer{dialer: &net.Dialer{}}))\n\t// ...\n", "text": "@Aleksandar_Jelic thanks for the question! AFAIK there’s no plug-and-play library that can record and/or replay requests with the MongoDB Go Driver. The best way to capture all data sent/received is by implementing a net.Conn that captures data and then providing a custom connection dialer that returns the capturing conn via SetDialer.Example of how to implement net.Conn and ContextDialer wrappers:P.S. As you discovered, the HTTP client that you can configure with SetHTTPClient is not used for database communication. It is only used for OCSP TLS certificate revocation checks.", "username": "Matt_Dale" } ]
How can I record and replay MongoDB requests in Go?
2023-08-28T09:44:04.406Z
How can I record and replay MongoDB requests in Go?
423
null
[ "node-js", "cxx" ]
[ { "code": "", "text": "I’m considering using Realm for a SAAS web application which also has a desktop application/client as well. But, is Realm really an appropriate choice for commercial software? The ability to use the Realm app object and then invoke queries, setup watches on backend data, etc is great and convenient. But, from a security perspective how is this a best practice to expose directly in my web applications where any casual View Source will reveal my backend app name, my backend document names and the exact field names as well?Am I misinpreting the use case for Realm? Is it more designed for mobile apps where viewing source isn’t possible or for internal corporate apps that live behind a firewall, inside a private network?I’m building a commercial app that will be run in a web browser (I’m using JS Realm SDK) and on desktop (I’m using electron and soon will replace that with a C++ based host but currently I’m using the Node JS SDK) and I’ve always, like everyone else, used an API for my app’s to keep backend details out of the client code/hard-coded string values for document queries, etc. I don’t feel comfortable exposing this much detail about my backend.Obviously, I could put my own API in front of every query to Mongo or I could use Realm App Functions potentially as well and put all document names / fields in those Node functions. But, I’m just doing a quick sanity check on security architecture and expected use cases for Realm.Anyone else have any concerns or public facing apps with Realm?", "username": "d33p" }, { "code": "", "text": "Some good thoughts in your question. There are a bunch of technical answers but let me provide our take on it (we are not MongoDB Realm employees)is Realm really an appropriate choice for commercial softwareWhy would you think it’s not?expose directly in my web applicationsThat’s pretty much how browsers / web apps work. The code has to go somewhere but the bigger picture is; who cares if an end user sees a field name is ‘user_name’…Is it more designed for mobile appsIt’s designed for all kinds of apps from desktop to mobile apps to web apps and more. If it was only designed for mobile there wouldn’t be all the flavors of SDKs; Swift, .net, web, flutter, node etcI’m building a commercial app…I don’t feel comfortable exposing this much detail about my backend.Sounds exciting! You can expose as much or as little as you want. You could craft an app made entirely of calls to Atlas using Application Services only if the use case fits (I am not recommending that)Anyone else have any concerns or public facing apps with Realm?We don’t have any concerns but it’s a absolutely valid question. Do you have a specific example of a security issue? Or perhaps some example code of where you feel a bad actor could/would hijack your app or access sensitive data?I think looking at a specific use case may reveal more about potential security issues.", "username": "Jay" } ]
Security concerns for Realm
2023-08-28T16:07:03.444Z
Security concerns for Realm
393
null
[ "java", "transactions" ]
[ { "code": "", "text": "This is using the reactive java driver: On my local dev machine I just saw the following error:`com.mongodb.MongoCommandException: Command failed with error 112 (WriteConflict): ‘WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.’ on server 127.0.0.1:27017.The full response is:{“errorLabels”: [“TransientTransactionError”], “operationTime”: {\"$timestamp\": {“t”: 1602354840, “i”: 5}}, “ok”: 0.0, “errmsg”: “WriteConflict error: this operation conflicted with another operation. Please retry your operation or multi-document transaction.”, “code”: 112, “codeName”: “WriteConflict”, “$clusterTime”: {“clusterTime”: {\"$timestamp\": {“t”: 1602354840, “i”: 5}}, “signature”: {“hash”: {\"$binary\": “AAAAAAAAAAAAAAAAAAAAAAAAAAA=”, “$type”: “00”}, “keyId”: {\"$numberLong\": “0”}}}}`I also checked the MongoDB log, but there wasn’t a single entry at the timestamp when the transaction failed. Is there any way to find out which query has caused the transaction to fail?", "username": "st-h" }, { "code": "", "text": "What is the write concern for your write operation(s)?", "username": "Jack_Woehr" }, { "code": "", "text": "write concernWriteConcern is the default - w: 1 afaik", "username": "st-h" }, { "code": "", "text": "Just because it’s easier than thinking, can you try “majority”?", "username": "Jack_Woehr" }, { "code": "", "text": "umm, just out of curiosity: why would that change anything if the driver only connects to a single node replica set?However, I can unfortunately not try that easily, because I haven’t seen that exception ever since. Tried to replicate, doing the same things, but it never showed up again so far.Our web clients (browser) connect to a web socket, which modifies the current user. There is a chance that this operation took place when a transaction (which also modifies the user document) was still running. Just a vague guess, but I haven’t found anything how I could check if this really is what has happened.", "username": "st-h" }, { "code": "", "text": "Hi @st-h …I was looking for easy first debugging steps, thinking along the lines of “If this happens a lot maybe it’s the volume of writes and there’s an inconsistency …”But what you say … that there exists the possibility of a transaction interfering with another write … well, if you know that’s a possibility in your application, it certainly sounds like the first thing to look at.Perhaps your code should handle exceptions and retry writes a few times in these instances.", "username": "Jack_Woehr" }, { "code": "", "text": "thanks for your reply. It’s not happening a lot. I observed it running the full stack on my local machine, with me being the only active user (that’s why I am worried about it). I am used to being able to lookup what the cause of an aborted transaction is due to using mysql before. It’s quite a surprise to find out this seems to not be possible at all with MongoDB.Do you by any chance know more details about how the java drivers handle transactions? 
I asked a related question quite a while ago (without any replies), as there is nothing mentioned in the docs: How to efficiently handle transactions with the reactive java driver / clarifications on apiHowever, the main issue here is that I would like to confirm if my suspicion is accurate - however, it seems that’s currently not possible at all - as there seems to be no way to find out why exactly a transaction failed?", "username": "st-h" }, { "code": "", "text": "@st-h I do not know more details about the Java drivers or how to find the cause of an aborted transaction.It’s quite a surprise to find out this seems to not be possible at all with MongoDB.Yes, I ran into a similar frustration. I had 3 tables to which I added validation. 2 worked; 1 did not. I searched and determined definitively there’s no way to cause MongoDB to log why a validation failed. There’s an open issue to add that feature.\nEdit: I did figure out my problem after exhaustive trial-and-error … a misspelling in one place. My takeaway is that, as good as MongoDB is, and I like it and use it daily, it’s not really “enterprise-ready”. It doesn’t have the completeness and stability of the mature relational products.", "username": "Jack_Woehr" }, { "code": "", "text": "This might give a better insight:Transactions are new in MongoDB but have existed in SQL databases for more than 30 years. Transactions are used to maintain…\nReading time: 5 min read\n", "username": "svision" }, { "code": "", "text": "Hi, may I know if you found a solution to the WriteConflict error?", "username": "Shi_Qi_Low" }, { "code": "", "text": "Hello,\nI have seen this error before.\nHere is what I found as the root cause; maybe your scenario is different, but I would still like to share. The document we were trying to update was under a transactional block, so any document under a transactional block, if you try to update it, would give you this exact error.", "username": "Vidyadhar_V_Bhutaki" } ]
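Since the error carries the TransientTransactionError label, the usual remedy is to retry the whole transaction. A minimal sketch with the synchronous Java driver's ClientSession.withTransaction helper, which retries transient errors such as WriteConflict automatically (database, collection, and field names are made up; on the reactive driver the same error label can drive a manual retry loop):

```java
import com.mongodb.client.ClientSession;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

import static com.mongodb.client.model.Filters.eq;
import static com.mongodb.client.model.Updates.set;

public class RetryTransactionSketch {
    public static void main(String[] args) {
        // Transactions require a replica set, e.g. the single-node replica set from the thread.
        try (MongoClient client = MongoClients.create("mongodb://127.0.0.1:27017")) {
            MongoCollection<Document> users =
                    client.getDatabase("appdb").getCollection("users");
            try (ClientSession session = client.startSession()) {
                // withTransaction re-runs the body on TransientTransactionError
                // (such as WriteConflict) and retries the commit on
                // UnknownTransactionCommitResult, within a built-in time limit.
                String result = session.withTransaction(() -> {
                    users.updateOne(session, eq("_id", 1), set("status", "active"));
                    return "done";
                });
                System.out.println(result);
            }
        }
    }
}
```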
Find cause of write conflict
2020-10-10T18:48:15.809Z
Find cause of write conflict
17,621
null
[ "aggregation", "indexes" ]
[ { "code": "db().collection('Webhookalerts').aggregate([\n {\n $addFields: {\n timestamp: {\n $toDate: \"$timestamp\"\n }\n }\n },\n {\n $match: {\n timestamp: {\n $gte: new Date();,\n $lte: new Date()\n }\n }},\n {$sort: {timestamp: -1}},\n {\n $facet: {\n \n TotalCount: [\n { $count: \"Totalcount\" }\n ],\n event_log: [\n { $skip: 0 },\n { $limit: 500000 }\n ]\n }\n }\n ],{ allowDiskUse: true }).toArray() \n", "text": "this query does not fetch 5lakh data so how can fetch and optimize", "username": "Deepak_Tak" }, { "code": "", "text": "If you need to do$addFields: {\ntimestamp: {\n$toDate: “$timestamp”\n}\n}in most of your use-cases, you should be storing your timestamp as date rather than string. Anyway dates as Date are more efficient to store and compare and provide a richer date oriented API.Since you $match and $sort on the computed timestamp your indexes are useless.Finally, returning 500_000 document is not very efficient. You should leverage more all the power of the aggregation pipeline. I am pretty sure, it is not a human that looks at the 500_000 documents so what ever computation or filtering you do after could be done in the server.", "username": "steevej" }, { "code": "", "text": "ok thanks for your suggestion", "username": "Deepak_Tak" }, { "code": " db.alerts.aggregate([ {\n $addFields: {\n timestamp: {\n $toDate: \"$timestamp\"\n }\n }\n }, \n {\n $match: {\n timestamp: {\n $gte: new Date(Date.now() - 24* 60 * 60 * 1000),\n $lte: new Date() \n },\n }\n },{\n $facet: {\n Total_log: [\n { $count: \"Total_log\" }\n ] } }] ,{ allowDiskUse: true }).toArray();\n", "text": "In this query I want to fetch data last 24 hours but it getting huge time to fetch in my collection data approx 300 million so i use an index also but it taking so much time", "username": "Deepak_Tak" }, { "code": "", "text": "thanks for your suggestionYou are thanking me for the suggestion but you do not apply it. You still find things slow but you still do things with $addFields. I wroteSince you $match and $sort on the computed timestamp your indexes are useless.and you still think thatso i use indexI will repeat your indexes are useless because you $match on a computed field. One thing you can do despite your unwillingness to store your timestamp in the appropriate format is to match using the string version of your 2 new Date(). It will still be much slower than using the appropriate format for timestamp. Because string comparison of date is much slower than comparing timestamp in the appropriate format.See", "username": "steevej" } ]
Optimizing Data Retrieval for the MongoDB Queries
2023-06-27T09:30:32.829Z
Optimizing Data Retrieval for the MongoDB Queries
573
https://www.mongodb.com/…a1275f3cb84d.png
[ "python", "php", "weekly-update", "thailand-mug", "pune-mug" ]
[ { "code": "", "text": "It’s FRIDAY! You know what that means…Each week, we bring you the latest and greatest from our Developer Relations team — from blog posts and YouTube videos to meet-ups and conferences — so you don’t miss a thing.Everything you see on Developer Center is by developers, for developers. This is where we publish articles, tutorials, and beyond. How to Build a Laravel and MongoDB Back-End Service by @Hubert_Nguyen1This Laravel MongoDB tutorial addresses prospective and existing Laravel developers considering using MongoDB as a database.Every month, all across the globe, we organize, attend, speak at, and sponsor events, meetups, and shindigs to bring the DevRel community together. Here’s what we’ve got cooking:Ahmedabad MUG: August 25th 2023, 9:30pm – August 26th 2023, 12:00am, (GMT-07:00) Pacific Time\nMENA vMUG: August 26th 2023, 12:00am – 2:30am, (GMT-07:00) Pacific Time\nNYC AI Hackathon with Modal: August 26th 2023, 6:30am – 7:00pm, (GMT-07:00) Pacific Time\nDeveloper Day Chicago: Aug 29, 2023 | 5:00 AM - 2:00 PM PDT\nGoogle Cloud Next: Aug 29, 2023 - Aug 31, 2023\nMumbai MUG: September 1st 2023, 10:00pm – September 2nd 2023, 1:30am, (GMT-07:00) Pacific Time\nFrankfurt MUG: September 5th 2023, 9:00am – 11:00am, (GMT-07:00) Pacific Time\nToronto Meetup: September 5th 2023, 3:00pm – 5:00pm, (GMT-07:00) Pacific TimeMongoDB is heading out on a world tour to bring the best, most relevant content directly to you! Join us to connect with MongoDB experts, meet fellow users building the next big thing, and be among the first to hear the latest announcements. Register now.\n.local1200×627 78.3 KB\nUse code DEVELOPERFAM50 to secure 50% off your ticket!Thank you to everyone who joined us at the Americas West Virtual MUG.In the first session, @Darshana_Paithankar and @Zuhair_Ahmed discussed alternative methods of utilizing MongoDB Atlas beyond the UI. They spoke on powerful approaches like the MongoDB Atlas Terraform Provider, CloudFormation Resources, CDK, Quick Start Partner Solution Deployments, and the Atlas Kubernetes Operator. They also gave a live demo on quickly creating your first MongoDB Atlas cluster using the Atlas Terraform Provider.In the next session, Vijay Tolani from Grafana Labs showed how to connect MongoDB data and other sources to a unified dashboard. He demonstrated how to visualize metrics with charts, gauges, geo-maps, and more. He also showed how to receive real-time alerts and query MongoDB and MongoDB Atlas data without migration or ingestion.In the final lightning session, MongoDB Community Champion @Roman_Right discussed race conditions. He gave a comprehensive overview of race conditions and explored strategies to mitigate them.\nAmericas West vMUG2048×1145 269 KB\n\nAmericas West vMUG2048×1063 325 KB\nWe also met up in Thailand to talk about PyMongo on Cloud Native! Co-chair @Kanin_Kearpimy spoke about PyMongo LangChain with Atlas Vector Search for AI assistance in the Monitoring system.Our MUG leader @Piti.Champeethong discussed how to customize GitHub dev containers for FastAPI and PyMongo and run them on GitHub Codespaces.Many thanks to all the folks who showed up for our Pune inaugural meetup.The event kicked off with a welcome from event leaders @Faizan_Akhtar and @Vishal_Alhat. This was followed by a talk by Vishal on how to use MongoDB on AWS. Vishal gave a comprehensive overview of the topic and shared some helpful tips.The next talk was by Yogini on how MongoDB is being used at Peerlist. 
Yogini gave a real-world example of how MongoDB can be used to solve business problems.\nScreen Shot 2023-08-25 at 8.37.08 AM896×666 111 KB\n\nScreen Shot 2023-08-25 at 8.37.23 AM1191×669 114 KB\nWe’ve got even more from our community.Community Champion and Sāo Paulo MUG Leader @Leandro_Domingues shared tips for upgrading your MongoDB version.Community Enthusiast @Justin_Jenkins has started a series of posts on MongoDB Arrays, including the basis, sorting, and removing elements.Big congrats to the folks behind the Dallas MUG. They celebrated the one-year anniversary of their meetup. They even brought a cake!\nMUG cake813×873 75.4 KB\nMongoDB Principal Developer Advocate Karen Huaulme spoke about adding AI and machine learning to your data application.\nKaren Huaulme668×891 61.7 KB\nConfluent’s Britton LaRoche spoke about data migration practices to keep your data in sync while you move from Oracle to MongoDB.And MUG leader Allen Cordrey spoke about why you would move your data from a relational database to a NoSQL database.If reading’s not your jam, you might love catching up on our podcast episodes with Michael Lynn and Shane McAllister.In this episode, we chat with Sean Korten, the head of solution engineering at Akeyless Security, a cloud-based solution that focuses on managing secrets and enhancing security.As we continue our discussions on security, Michael meets with Sean Korten, Head of Solution Engineering at Akeyless Security, a cloud-based solution provider specializing in Secrets Management. Secrets refer to confidential information such as...Not listening on Spotify? We got you! We’re also on Apple Podcasts, PlayerFM, Podtail, and Listen Notes. (We’d be forever grateful if you left us a review.)Have you visited our YouTube channel lately? We have some exciting news about the MongoDB for VS Code extension!Don’t forget that you can watch the video version of The Index, with yours truly!Remember to view what live streams we’ve got coming up. You can click “Notify me” and YouTube will ping you when we’re about to go live.Be sure you subscribe so you never miss an update.That’ll do it for now, folks! Like what you see? Help us spread the love by tweeting this update or sharing it on LinkedIn.", "username": "Megan_Grant" }, { "code": "", "text": "So much in this article! Thanks for sharing ", "username": "Faizan_Akhtar" }, { "code": "", "text": "Great work at the meetup! ", "username": "Megan_Grant" } ]
The Index #129 (August 25, 2023): Thailand, Pune, and Virtual Meetups!
2023-08-25T16:09:06.700Z
The Index #129 (August 25, 2023): Thailand, Pune, and Virtual Meetups!
674
null
[ "aggregation" ]
[ { "code": "\"_id\" : ObjectId(\"63ecb0ba9726636b1ffc7611\"),\n\"Ippr\" : \"54203\",\n\"DateDerniereInteraction\" : ISODate(\"2019-07-09T11:21:16.043+0000\"),\n\"IdTechPs\" : \"01d63ab8ab054b8fbf45203a0b2279\"\ndb.InteractionPatient.aggregate(\n [\n // On considère unique les dates d'interaction plus grand que \n {\n $match: {DateDerniereInteraction:{$gte:ISODate(\"2020-01-01T13:21:32.692+0000\")}}\n },\n //On compte le nombre d'intéraction par IPPR \n {\n $group :\n {\n _id : \"$Ippr\",\n NBinteraction: {$sum:1}\n }\n },\n // On ne compte que les IPPR ayant au moins un certain nombre d'intéraction \n {\n $match: { \"NBinteraction\": { $gte: 3 } }\n },\n {\n // On groupe par nombre d'intéraction \n $group :\n {\n _id : \"$NBinteraction\",\n Nb_patient: {$sum:1}\n }\n }\n \n ]\n )\n", "text": "HelloMy collection have file 50 000 000 and i try to do this aggregate but\nI would like optimising my aggregate because Mongodb can’t run itOne file :The aggregate :", "username": "Marion_Bresson1" }, { "code": "", "text": "What indexes are there?", "username": "John_Sewell" }, { "code": "", "text": "Mongodb can’t run itWhat do you mean by that?no results?wrong results?errors? please share if any", "username": "steevej" } ]
Aggregation optimising
2023-08-25T15:49:39.912Z
Aggregation optimising
309
null
[ "queries", "node-js" ]
[ { "code": "{\n MonthYear: \"05-2023\",\n ...\n}\n\n{\n MonthYear: \"06-2023\",\n ...\n}\n\n{\n MonthYear: \"07-2023\",\n ...\n}\ndb.collection.find({\"MonthYear\": \"06-2023\"}).explain(\"executionStats\")\n\"executionStats\": {\n \"executionStages\": {\n ...\n \"docsExamined\": 3,\n }\n...\n}\nMonthYear_idMonthYearMonthYear", "text": "I’m new to mongoDB and I’m trying to understand how to fetch data from the databaseI have a collection containing month wise documents, an example of the collection would be:When I search a collection as such:It searches through all the available documents (which may be a huge quantity) till it finds the date, which seems like a waste when the MonthYear is always unique and hence should be known exactly where to look i.e. with no looping searchSo my question is should IMake a custom _id from MonthYear since it is always unique?or should I index MonthYear and if I should, will it lead to any performance loss?Or should I be taking a completely different approach?", "username": "MRM" }, { "code": "", "text": "Searching on a field that is not indexed will always be slow, rather then overwrite the default document IDs I’d just add an index to the field you want to to search on.\nYes, it will add a slight overhead as it maintains the index on new inserts etc, but index update should be pretty fast so not add that much overhead to insert or updates.Worst case try it out with your workload, insert 100K documents and check timings, create the index and do the same.\nYou could also profile searches with and without the index but it’s pretty pointless as it’ll be so much faster with it!", "username": "John_Sewell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Avoid searching all documents when query field is unique?
2023-08-28T09:33:08.848Z
Avoid searching all documents when query field is unique?
209
null
[ "student-developer-pack" ]
[ { "code": "", "text": "Hi Everyone, My name is JamesBrook. I have recently started my job as Marketing Manager at IT Empire. Feel free to connect with me on the Website: https://itempire.ae/.\nIf you want any help, then tell me. How can I assist you?", "username": "James_Brook" }, { "code": "", "text": "Hello, I’m Isabel Debra, an experienced Marketing Manager with a passion for driving business growth through innovative strategies. With a strong background in e-commerce and outsourcing services, I’ve successfully leveraged platforms like Workerman to optimize operations and enhance customer experiences. My track record includes developing data-driven campaigns, optimizing conversion funnels, and collaborating cross-functionally to achieve outstanding results. I’m dedicated to delivering impactful solutions and fostering lasting partnerships in the ever-evolving world of e-commerce.", "username": "Isabel_Debra" } ]
Introduction of my Self
2023-03-29T06:51:58.349Z
Introduction of my Self
1,144
null
[ "queries", "java", "spring-data-odm" ]
[ { "code": "[\n {\n \"_id\": 1,\n \"favorite\": {\n \"color\": \"red\",\n \"foods\": {\n \"fruits\": \"banana\",\n \"fastfood\": [\n \"burger\",\n \"sandwich\"\n ]\n }\n }\n },\n {\n \"_id\": 2,\n \"favorite\": {\n \"color\": \"green\",\n \"foods\": {\n \"noodles\": \"ramen\",\n \"fastfood\": [\n \"fries\",\n \"burger\",\n \"corn dog\"\n ]\n }\n }\n },\n {\n \"_id\": 3,\n \"favorite\": {\n \"color\": \"red\",\n \"foods\": {\n \"soup\": \"cream soup\"\n }\n }\n }\n]\ndb.collection.find({\n $expr: {\n $eq: [\n [\n \"burger\",\n \"sandwich\"\n ],\n \"$favorite.foods.fastfood\"\n ]\n }\n})\n", "text": "Following is my collection:I am getting the desired result using the following mongo shell query:But, I need this same result in mongoTemplate. I am unable to do that.", "username": "Abhijit_Mondal_Abhi" }, { "code": "package com.example.demo;\nimport org.springframework.boot.SpringApplication;\nimport org.springframework.boot.autoconfigure.SpringBootApplication;\nimport java.util.Arrays;\nimport java.util.List;\n\nimport org.bson.Document;\nimport org.springframework.boot.CommandLineRunner;\nimport org.springframework.context.annotation.Bean;\nimport org.springframework.data.mongodb.core.MongoTemplate;\nimport org.springframework.data.mongodb.core.query.Criteria;\nimport org.springframework.data.mongodb.core.query.Query;\n\n@SpringBootApplication\npublic class DemoApplication {\n\n public static void main(String[] args) {\n SpringApplication.run(DemoApplication.class, args);\n }\n @Bean\n public CommandLineRunner demo(MongoTemplate mongoTemplate) {\n return (args) -> {\n Query query = new Query(Criteria.where(\"favorite.foods.fastfood\").in(Arrays.asList(\"burger\", \"sandwich\")));\n List<Document> result = mongoTemplate.find(query, Document.class, \"post240337\");\n\n for (Document document : result) {\n System.out.println(document);\n }\n };\n }\n}\nDocument{{_id=1, favorite=Document{{color=red, foods=Document{{fruits=banana, fastfood=[burger, sandwich]}}}}}}\nDocument{{_id=2, favorite=Document{{color=green, foods=Document{{noodles=ramen, fastfood=[fries, burger, corn dog]}}}}}}\napplication.properties", "text": "Hi @Abhijit_Mondal_Abhi and welcome to MongoDB community forums!!Based on the sample data and the query provided, you can use the code given below using mongoTemplate:Output:Place your configuration for the connection url in the application.properties file.Reach out in case of any further questions.Warm Regards\nAasawari", "username": "Aasawari" } ]
What is the equivalent to this mongo shell query in mongoTemplate?
2023-08-20T10:27:18.927Z
What is the equivalent to this mongo shell query in mongoTemplate?
451
null
[]
[ { "code": "", "text": "yesterday I had dba examination there were 66 questions instead of 60 and and the examination duration was the same and that question too were hard and there were lots of questions that were not covered in the course", "username": "Zinkal_Desai" }, { "code": "", "text": "Hey @Zinkal_Desai,Apologies for the late reply. As per exam details on DBA Exam page, the MongoDB DBA exam is supposed to be 66 questions, multiple choices.If you found the exam tough, I would suggest completing the DBA Learning Path and then giving the Practice Exam to be better prepared for the exam. Also, kindly refer to the exam guide to know what topics can a test taker expect during the test.Hope this helps. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
In DBA examination there were 66 questions instead of 60
2023-07-24T07:27:51.937Z
In DBA examination there were 66 questions instead of 60
747
null
[]
[ { "code": "", "text": "What is the difference between the slow log recorded by mongod.log and the slow log recorded by db.setProfilingLevel?\ndb.setProfilingLevel(0, 20) – there are still slow log records in mongod.log", "username": "xinbo_qiu" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
The difference between the slow log recorded by mongod.log and the slow log recorded by db.setProfilingLevel
2023-08-18T08:07:04.150Z
The difference between the slow log recorded by mongod.log and the slow log recorded by db.setProfilingLevel
233
null
[ "python", "containers" ]
[ { "code": "http://0.0.0.0:9999/NNNNN/mongodb/", "text": "I am facing challenges in setting up and connecting my application (which is running in a local Docker container) to MongoDB Atlas. I have detailed my issues below:Localhost URL with MongoDB Atlas:Atlas Administration API:MongoDB Driver Operations:Questions:Any guidance or solutions to address the above issues would be greatly appreciated.", "username": "Wang_Michael" }, { "code": "Data Serviceslocalhost", "text": "Hi @Wang_Michael,I encountered an error stating, “Endpoint route can only include letters, underscores, numbers, and /.”Not sure where you are seeing this under Data Services in Atlas - Do you have a screenshot or exact steps to get here? However, in saying so and based off Q3 as well, it sounds like you’re attempting to connect a pymongo application to MongoDB Atlas. If this is the case, what’s the reason for adding this local docker url? Trying to understand the context here.For your question regarding Atlas accepting a locally hosted URL as an endpoint, is this so that your pymongo driver application hosted in docker can connect? Just trying to clear a few things up here before I’m able to provide a more informative response.The following Optional: Require an IP Access List for the Atlas Administration API documentation should help you here although I’m not too sure what the relevance of the localhost you mention here.There is limited information on what the application is trying to achieve or which particular requests from the atlas administration api you’re thinking of utilising so it’s a bit difficult to answer. If you can provide some more context to what you’re attempting to perform / your use case then we can try help further with any suggestions / documentation.Look forward to hearing from you.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "", "username": "Jason_Tran" } ]
Issues with MongoDB Atlas Connectivity and API Endpoint Configuration
2023-08-25T20:03:00.245Z
Issues with MongoDB Atlas Connectivity and API Endpoint Configuration
448
https://www.mongodb.com/…1974db0bdcbf.png
[]
[ { "code": "", "text": "Hi, while participating in the schwarzitlearnathon, I can’t proceed with the unit training “Getting Started with MongoDB Atlas”\nWhen the Integrated Developer Environment is loaded the terminal present the following error:\nSince the learnathon has an expiration date, Could you please help in this issue?", "username": "Tiago_Mortagua" }, { "code": "", "text": "Hey @Tiago_Mortagua,Welcome to the MongoDB Community Forums! Can you try to clear the cache or cookies or refresh the page? This might help. If the issue still persists, kindly mail our team at [email protected], they will be better able to help you.Hoping this helps. Feel free to reach out for anything else as well.Regards,\nSatyam", "username": "Satyam" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't use terminal in training "Getting Started with MongoDB Atlas"
2023-08-23T16:19:02.886Z
Can’t use terminal in training “Getting Started with MongoDB Atlas”
502
null
[ "queries" ]
[ { "code": "$filter{\n $and: [\n {\n \"transports.events.category\": \"Temperatures\",\n \"transports.events.event\": \"technicalFailure\"\n },\n {\n $expr: {\n $eq: [\n {\n $arrayElemAt: [\n {\n $filter: {\n input: \"$transports[0].events\",\n cond: {\n $and: [\n { $eq: [\"$$this.category\", \"Temperatures\"] },\n { $eq: [\"$$this.event\", \"technicalFailure\"] }\n ]\n }\n }\n },\n -1\n ]\n },\n {\n category: \"Temperatures\",\n event: \"technicalFailure\"\n }\n ]\n }\n }\n ]\n}\n[\n {\n transports: [\n {\n _id: \"123\",\n events: [\n {\n category: \"Payment\",\n event: \"ok\",\n date: \"2023-02-21T16:32:07.740Z\"\n },\n {\n category: \"Temperatures\",\n event: \"technicalFailure\",\n date: \"2023-02-21T16:36:07.740Z\"\n },\n {\n category: \"Temperatures\",\n event: \"auto\",\n date: \"2023-02-21T16:55:07.740Z\"\n }\n ]\n }\n ]\n },\n {\n transports: [\n {\n _id: \"456\",\n events: [\n {\n category: \"Payment\",\n event: \"ok\",\n date: \"2023-02-21T16:28:07.740Z\"\n },\n {\n category: \"Temperatures\",\n event: \"auto\",\n date: \"2023-02-21T16:29:07.740Z\"\n },\n {\n category: \"Payment\",\n event: \"failed\",\n date: \"2023-02-21T17:01:07.740Z\"\n }\n ]\n }\n ]\n },\n {\n transports: [\n {\n _id: \"127\",\n events: [\n {\n category: \"Payment\",\n event: \"ok\",\n date: \"2023-02-21T16:29:07.740Z\"\n },\n {\n category: \"Temperatures\",\n event: \"auto\",\n date: \"2023-02-21T17:18:07.740Z\"\n },\n {\n category: \"Temperatures\",\n event: \"technicalFailure\",\n date: \"2023-02-21T18:53:07.740Z\"\n }\n ]\n }\n ]\n }\n]\n", "text": "I would like to get documents according to 2 conditions (see below a sample of documents) :I am trying to use the operator $filterlike below, but it is not working expectedlyAnyone can help for this query ?Thanks !A sample of documents :", "username": "Theo_Bollecker" }, { "code": "db.test.aggregate([\n // initial document filter\n {\n $match: {\n \"transports.events.category\": \"Temperatures\",\n \"transports.events.event\": \"technicalFailure\"\n },\n },\n // add temporary shortcut field for last event object\n {\n $addFields: {\n tmp: {\n $firstN: {\n n: 1,\n input: '$transports'\n }\n }\n }\n },\n {\n $unwind: '$tmp'\n },\n {\n $addFields: {\n tmp: {\n $lastN: {\n n: 1,\n input: '$tmp.events'\n }\n }\n }\n },\n {\n $unwind: '$tmp'\n },\n // query your event\n {\n $match: {\n 'tmp.category': 'Temperatures',\n 'tmp.event': 'technicalFailure'\n }\n },\n // clean up\n {\n $project: {\n tmp: false,\n }\n }\n]);\n", "text": "Hello, @Theo_Bollecker ! Welcome to the MongoDB community You can do it like this:", "username": "slava" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Query filter in subarray
2023-08-27T13:59:39.871Z
Query filter in subarray
289
null
[]
[ { "code": "", "text": "I am wondering whether people are using any prevention at MongoDB side for any applications that may have the risk of making stupid queries that may cause MongoDB ram/cpu usage increase and cause other apps fail or slow down at their daily routines?", "username": "Oguz_Yarimtepe" }, { "code": "", "text": "I don’t recall mongodb has built-in support for such prevention (e.g. user quota), you may have to create your own.", "username": "Kobe_W" }, { "code": "", "text": "Is there any documentation for it or some tips?", "username": "Oguz_Yarimtepe" } ]
Any quota or limit definitions per database usage at MongoDB or related solution?
2023-08-26T20:09:30.527Z
Any quota or limit definitions per database usage at MongoDB or related solution?
342
null
[ "replication", "sharding" ]
[ { "code": "addShardenableShardingshardCollectionsh.status the[direct: mongos]> sh.status()\nshardingVersion\n{\n _id: 1,\n minCompatibleVersion: 5,\n currentVersion: 6,\n clusterId: ObjectId(\"64d62ba7e9d199311e7be6b5\")\n}\n---\nshards\n[\n {\n _id: 'myshard1',\n host: 'myshard1/node3:27018',\n state: 1,\n topologyTime: Timestamp(6, 1691758010)\n },\n {\n _id: 'myshard2',\n host: 'myshard2/node2:27018',\n state: 1,\n topologyTime: Timestamp(206, 1691773056)\n },\n {\n _id: 'myshard3',\n host: 'myshard3/node1:27018',\n state: 1,\n topologyTime: Timestamp(51, 1691773082)\n }\n]\n---\nactive mongoses\n[ { '6.0.4': 1 } ]\n---\nautosplit\n{ 'Currently enabled': 'yes' }\n---\nbalancer\n{\n 'Currently enabled': 'yes',\n 'Currently running': 'no',\n 'Failed balancer rounds in last 5 attempts': 0,\n 'Migration Results for the last 24 hours': {\n '3': \"Failed with error 'aborted', from myshard3 to myshard1\",\n '30': \"Failed with error 'aborted', from myshard1 to myshard2\",\n '157': \"Failed with error 'aborted', from myshard1 to myshard3\",\n '2669': 'Success'\n }\n}\n---\ndatabases\n[\n {\n database: {\n _id: 'mydb',\n primary: 'myshard1',\n partitioned: false,\n version: {\n uuid: UUID(\"2dcac9aa-595c-4515-8033-27298507f20c\"),\n timestamp: Timestamp(1, 1691758436),\n lastMod: 1\n }\n },\n collections: {\n 'mydb.user': {\n shardKey: { _id: 1 },\n unique: false,\n balancing: true,\n chunks: [],\n tags: []\n }\n }\n },\n {\n database: { _id: 'config', primary: 'config', partitioned: true },\n collections: {\n 'config.system.sessions': {\n shardKey: { _id: 1 },\n unique: false,\n balancing: true,\n chunks: [],\n tags: []\n }\n }\n }\n]\n[direct: mongos]> db.user.getShardDistribution()\nShard myshard1 at myshard1/node3:27018\n{\n data: '248.01GiB',\n docs: 210646,\n chunks: 0,\n 'estimated data per chunk': '0B',\n 'estimated docs per chunk': 0\n}\n---\nShard myshard3 at myshard3/node1:27018\n{\n data: '249.23GiB',\n docs: 44758,\n chunks: 0,\n 'estimated data per chunk': '0B',\n 'estimated docs per chunk': 0\n}\n---\nShard myshard2 at myshard2/node2:27018\n{\n data: '248.63GiB',\n docs: 65768,\n chunks: 0,\n 'estimated data per chunk': '0B',\n 'estimated docs per chunk': 0\n}\n---\nTotals\n{\n data: '745.88GiB',\n docs: 321172,\n chunks: 0,\n 'Shard myshard1': [\n '33.25 % data',\n '65.58 % docs in cluster',\n '1.2MiB avg obj size on shard'\n ],\n 'Shard myshard3': [\n '33.41 % data',\n '13.93 % docs in cluster',\n '5.7MiB avg obj size on shard'\n ],\n 'Shard myshard2': [\n '33.33 % data',\n '20.47 % docs in cluster',\n '3.87MiB avg obj size on shard'\n ]\n}\n14G\tnode1\n15G\tnode2\n33G\tnode3\n", "text": "I had a mongodb instance which was up as a replicaset and was a single node in the replicaset. Now I have added to other replicasets each with one node and connected them to clutser through mongos using addShard command, enabled sharding on a db using enableSharding command and sharded a collection using shardCollection . However when I run sh.status the output says that I have no chunk and also my data did not get balanced. Is there anything wrong?\nmongodb version: 6.0.2\nmongodb sh.status() results:getShardDistribution on sharded collection:also my data size is the following which is not balanced:Can anyone please help me with this?", "username": "Sobhan_Safdarian" }, { "code": "user", "text": "Hi @Sobhan_Safdarian and welcome to MongoDB community forums!!I had a mongodb instance which was up as a replicaset and was a single node in the replicaset. 
Now I have added to other replicasets each with one node and connected them to clutser through mongosAs per my understanding from the above statements, you are trying to connect multiple single node shard servers together. Please correct me if my understanding is wrong here.\nHowever, could you confirm, if the config server is also a single node replica set ?\nIf my understanding is correct, as mentioned in the official documentation for Sharded cluster, for the production environment, you should use a replica set with three members.\nIt would be helpful, if you could convert all the replica set with three node replica set and connect to form a sharded cluster and let us know if you are still facing the similar issue.Also, it would be helpful to assist you further if you could help me with the below mentioned info:Let me know if you have any further questions.Warm Regards\nAasawari", "username": "Aasawari" } ]
Why mongodb sharded cluster does not have any chunk
2023-08-11T20:29:20.508Z
Why mongodb sharded cluster does not have any chunk
506
null
[ "storage" ]
[ { "code": "", "text": "Hi,I need to add more storage to our mongodb server as the disk is getting full.\nI am using mongodb community edition version 4.2 with WiredTiger Storage EngineWhen I check my large collection size I see there is a large difference between the storageSize and the data size, where the Storage size is much smaller then the size (about a quarter).I read that WiredTiger Storage Engine is compressing the data, hence the difference.My question is:\nWhen I add more disk space, shouId base my calculation on the storageSize or the size?\nAlthough I understand that at the end the data will be compressed by WiredTiger Storage Engine, I wonder if Mongo need the full size capacity for doing its internals?Thanks,\nTamar", "username": "Tamar_Nirenberg" }, { "code": "", "text": "Hi @Tamar_Nirenberg and welcome to MongoDB community forums!!When I add more disk space, shouId base my calculation on the storageSize or the size?In practice, it’s advisable to evaluate the compressibility of your documents. This assessment should guide the decision of how much disk space to add, taking into account the anticipated growth of your data.As mentioned in the Wired Tiger documentation, the data on the wired tiger cache is uncompressed while the data on the disk will be in the compressed format. And if I understand the question correctly, I believe you should be basing their calculation on storageSize (compressed versions of their documents)Let us know if you have any further questions.Warm Regards\nAasawari", "username": "Aasawari" } ]
Storage estimation based on existing collections
2023-08-21T11:55:36.988Z
Storage estimation based on existing collections
405