Columns: image_url (string, length 113–131) · tags (sequence) · discussion (list) · title (string, length 8–254) · created_at (string, length 24) · fancy_title (string, length 8–396) · views (int64, 73–422k)
null
[ "aggregation", "atlas-search", "scala" ]
[ { "code": "", "text": "I am using mongodb’s scala driver 2.9[ Mongo Scala Driver - org.mongodb.scala.model.Aggregates ].I wish to use the $search stage after a $lookup stage (as mentioned here: https://www.mongodb.com/docs/atlas/atlas-search/tutorial/lookup-with-search/).However, according to the mongodb’s scala driver 2.9 (link above), I am unable to find a $lookup api which allows me to perform a lookup on a specific field, as well as use the sub pipeline in which I can use the $search stage.Any help would be appreciated!", "username": "Nemin_Shah" }, { "code": "", "text": "Hi @Nemin_Shah and welcome to the MongoDB community forum!!I believe the link below would be the right Api to be used to create the pipeline inside the $lookupMongo Scala Driver - org.mongodb.scala.model.AggregatesLet us know if you have any other questions .Regards\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hello @Aasawari,I dont seem to find a lookup API with the following fields as depicted here\n$lookup:{\n“from”:\n“localField”:\n“foreignField”:\n“as”:\n“pipeline”:\n}Either it has the first four fields, or the last one. Im looking for the lookup API with all these five fields.Thank you!", "username": "Nemin_Shah" } ]
How to use $search after $lookup
2023-04-11T07:18:59.609Z
How to use $search after $lookup
908
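For readers who just want the target pipeline shape, here is a mongosh sketch (the collection names, search index, and query are all made up, and the exact $lookup form supported depends on your MongoDB / Atlas Search version). In the Scala driver, a stage that has no typed builder can be supplied as a raw Document in the pipeline:

db.orders.aggregate([
  {
    $lookup: {
      from: "customers",            // hypothetical joined collection
      localField: "customer_id",
      foreignField: "_id",
      as: "matched",
      pipeline: [
        // $search must be the first stage of the sub-pipeline
        { $search: { index: "default", text: { query: "john", path: "name" } } }
      ]
    }
  }
])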
null
[ "queries", "data-modeling", "atlas-data-lake" ]
[ { "code": "", "text": "Hello!For starters, I am new to MongoDB. I started using MongoDB for my capstone, as I needed to work with a client. The client requested that I use MongoDB for this project. He also is requesting a lot of specifics about the cluster, and one of them being that he wants to be able to query across multiple databases.After doing some research, I see that I will most likely need to make use of the Data Lake feature. But, all of the documentation about this feature was for querying across multiple databases in different clusters.So, my question is can I query across multiple databases in the same cluster using a data lake? Or is there a better route that I should take?Thank you!", "username": "Thomas_Hanley" }, { "code": "mongosh", "text": "Hi @Thomas_Hanley and welcome to the MongoDB community forum!!To query the data from different collection in different database, you can make use of db.getSiblingsDB() to access the other database. However, this command is a mongosh method and cannot be run in the Atlas Data Explorer UI.If it is not a compulsory requirement, you can move the collection inside the same database and make use of $lookup.\nYou can visit the $lookup documentation for more information.I see that I will most likely need to make use of the Data Lake feature.Just for further clarification, Atlas Data Lake and Atlas Data Federation (previously called Atlas Data Lake) are different. Would you be able to confirm if Atlas Data Lake is the feature you were looking into? In saying so:The Data Lake are storage repositories to store the data in its original format.The MongoDB Atlas Data Federation allows you to query among different MongoDB systems, including Clusters, Databases, and even AWS S3 buckets.\nYou can read the blog post on Data Lake federation for more information.Let us know if you have further questions.Regards\nAasawari", "username": "Aasawari" } ]
Query across collections in multiple databases in the same cluster
2023-04-10T22:46:04.101Z
Query across collections in multiple databases in the same cluster
1,153
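A short mongosh sketch of the sibling-database approach (database and collection names are invented); note that $lookup itself only joins collections within a single database, which is why the reply suggests co-locating collections:

// run while connected to any database on the same cluster
const sales = db.getSiblingDB("salesDB");
const inventory = db.getSiblingDB("inventoryDB");

// read from one database and write a result into the other
const lowStock = inventory.items.find({ qty: { $lt: 10 } }).toArray();
sales.reports.insertOne({ generatedAt: new Date(), lowStock });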
null
[ "atlas-online-archive" ]
[ { "code": "", "text": "I need to create an online archive in a collection and the date field stores a Epoch Millis.I selected the option for this kind o date. However, it doesn’t work because the field that stores this data is Double, not Long. So there is this error:Error starting archive: Invalid criteria field type field name: event.expiration_at_ms, detected field type: doubleHow can I fix it?", "username": "Rafael_Martins" }, { "code": "dateintlongobjectIdstringuuid", "text": "Hi @Rafael_Martins,Not sure if it works with your use case but perhaps you can try use the custom criteria filter for the date field but I do believe a different partition field has to be used as Atlas currently supports the following partition attribute types:Regards,\nJason", "username": "Jason_Tran" } ]
Online Archive: Epoch Millis in a field type double
2023-04-03T14:54:33.447Z
Online Archive: Epoch Millis in a field type double
915
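If rewriting the field is acceptable, a pipeline update along these lines coerces the doubles to longs so the field can satisfy the archive's date criteria (a sketch; the collection name here is assumed):

db.events.updateMany(
  { "event.expiration_at_ms": { $type: "double" } },   // only touch mistyped values
  [ { $set: { "event.expiration_at_ms": { $toLong: "$event.expiration_at_ms" } } } ]
)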
null
[ "time-series" ]
[ { "code": "", "text": "Hi, I understand the benefits that the MongoDB time-series collections provide in that they store time-series data in an efficient way (due to the bucketing mechanism and improved enhanced compression) and therefore storage and index size is reduced significantly and I/O for read operations is reduced.I was wondering if there were any use-cases where time-series collections for time-series data would not be the best fit? For example would the number of documents that need to be stored have an impact on the decision to prefer time-series collections over regular collections?Thanks,\nPeter", "username": "Peter_B1" }, { "code": "", "text": "Hello @Peter_B1 ,Time series data, which reflects measurements taken at regular time intervals, plays a critical role in a wide variety of use cases for a diverse range of industries and are recommended based on particular requirements. For example, a stock day trader constantly looking at feeds of stock prices over time and running algorithms to analyze trends to identify opportunities. They are looking at data over a time interval with hourly or daily ranges. Another example could be how a connected weather measurement device might obtain telemetry such as humidity levels and temperature change to forecast weather. Additionally, it could monitor air pollution to produce alerts or analysis before a crisis occurs. The gathered information can be looked at over a time range to calculate trends over time.While time-series collections in MongoDB provide many benefits for storing and querying time-series data, they may not be the best fit for all use cases. It is important to carefully consider your application requirements and use case before deciding whether to use a time-series collection or a traditional collection to store your data.For example would the number of documents that need to be stored have an impact on the decision to prefer time-series collections over regular collections?It could be a factor in deciding whether to use a time-series collection or not. For example, if your dataset is small, then the benefits of using a time-series collection may be less pronounced. Time-series collections are most effective when dealing with large volumes of time-series data, where the reduced storage and I/O requirements can have a significant impact on performance.I would recommend you go through below links to get a better understanding around Time series collection and it’s use-cases.Time series, IOT, time series analysis, time series data, time series dbMongoDB allows you to store and process time series data at scale. Learn how to store and analyze your time series data using a MongoDB cluster.Code, content, tutorials, programs and community to enable developers of all skill levels on the MongoDB Data Platform. Join or follow us here to learn more!Use MongoDB’s new time series collections to organize and query this unique type of data. Take advantage of the MongoDB developer data platform to analyze and extract insights from time series data.Time Series, IOTRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Are MongoDB Time-Series Collections always the best choice for storing Time-Series data?
2023-04-04T14:24:34.121Z
Are MongoDB Time-Series Collections always the best choice for storing Time-Series data?
865
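For context on what opting in looks like, creating a time-series collection is a single mongosh call; this sketch assumes a hypothetical weather-telemetry workload:

db.createCollection("weather", {
  timeseries: {
    timeField: "ts",        // required: timestamp of each measurement
    metaField: "sensor",    // optional: identifies the data source
    granularity: "minutes"  // match to your ingest rate for better bucketing
  }
})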
null
[ "java", "spark-connector" ]
[ { "code": "java.lang.UnsupportedOperationException: Data source mongodb does not support microbatch processing.query=(spark.readStream.format(\"mongodb\")\n.option('spark.mongodb.connection.uri', 'mongodb+srv://<<mongo-connection>>')\n \t.option('spark.mongodb.database', 'xxx') \\\n \t.option('spark.mongodb.collection', 'xxx') \\\n .option('spark.mongodb.change.stream.publish.full.document.only','true') \\\n \t.option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n \t.load())\n\nquery.printSchema()\n\nquery.writeStream \\\n .outputMode(\"append\") \\\n .format(\"parquet\") \\\n .option(\"checkpointLocation\", \"s3://xxxx/checkpoint\") \\\n .option(\"path\", \"s3://xxx\") \\\n .start()\n", "text": "Hi - I am currently trying to read the change data from MongoDB and persisting the results to a file sink but getting a java.lang.UnsupportedOperationException: Data source mongodb does not support microbatch processing. errorHere is my code snippet:Environment details :\nMongoDB Atlas : 5.0.8\nSpark : 3.2.1\nMongoDB-Spark connector : 10.0.2Does this connector support writing to file sinks? Any suggestions?", "username": "Yh_G" }, { "code": "java.lang.UnsupportedOperationException: Data source mongodb does not support microbatch processing.\n\tat org.apache.spark.sql.errors.QueryExecutionErrors$.microBatchUnsupportedByDataSourceError(QueryExecutionErrors.scala:1579)\n\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:123)\n\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$1.applyOrElse(MicroBatchExecution.scala:97)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:575)\n\tat org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:167)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:575)\n\tat org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)\n\tat org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:268)\n\tat org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:264)\n\tat org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)\n\tat org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:551)\n\tat org.apache.spark.sql.catalyst.trees.TreeNode.transform(TreeNode.scala:519)\n\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.planQuery(MicroBatchExecution.scala:97)\n\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan$lzycompute(MicroBatchExecution.scala:194)\n\tat org.apache.spark.sql.execution.streaming.MicroBatchExecution.logicalPlan(MicroBatchExecution.scala:194)\n\tat org.apache.spark.sql.execution.streaming.StreamExecution.$anonfun$runStream$1(StreamExecution.scala:342)\n\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:968)\n\tat org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:323)\n\tat 
org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:250)\n", "text": "full error log:", "username": "Yh_G" }, { "code": "query.writeStream \\\n .format(\"mongodb\") \\\n .option(\"checkpointLocation\", \"s3://xxxx\") \\\n .option(\"forceDeleteTempCheckpointLocation\", \"true\") \\\n .option('spark.mongodb.connection.uri', 'mongodb+srv://xxx') \\\n .option('spark.mongodb.database', 'xxx') \\\n .option('spark.mongodb.collection', 'xxx') \\\n .outputMode(\"append\") \\\n .start()\n", "text": "I have also tried writing the change data back to mongoDB using the example given here but also giving the same error. is this connector working?", "username": "Yh_G" }, { "code": "", "text": "At this time the Spark Connector only support continuous processing not microbatch. The file sink supports microbatch which is why it doesn’t work.", "username": "Robert_Walters" }, { "code": "", "text": "so any idea if the connector would eventually support microbatch?", "username": "Yh_G" }, { "code": "Data source mongodb does not support microbatch processing.", "text": "I’m here at the same stage any solution for the . Data source mongodb does not support microbatch processing. error", "username": "Krishna_Kumar_Sahu" }, { "code": "Data source mongodb does not support microbatch processing.", "text": "Did you got any solution. for Data source mongodb does not support microbatch processing. error", "username": "Krishna_Kumar_Sahu" }, { "code": "", "text": "@Krishna_Kumar_Sahu Can you add your requirements to this ticket? Specifically the use case and what destinations you are writing to that require microbatch?https://jira.mongodb.org/browse/SPARK-368We are considering this for next quarter as it has come up a few times.If anyone else has this issue please comment on the ticket.", "username": "Robert_Walters" }, { "code": "", "text": "Hi @Robert_Walters as per my requirement I’m added the details into that ticket https://jira.mongodb.org/browse/SPARK-368 please Give me solution as soon as posible. Thanks", "username": "Krishna_Kumar_Sahu" }, { "code": "", "text": "no i did not find any soluition - saw you have created a ticket, thanks for helping to create a ticket on this!", "username": "Yh_G" }, { "code": "", "text": "According to ticket https://jira.mongodb.org/browse/SPARK-368 the seems it’s resolved in fixed Version 10.1.0 of mongo spark connector but I can’t find the updated mongo spark connector in Maven please provide the link. or makes available that fixed version of it.", "username": "Krishna_Kumar_Sahu" }, { "code": "", "text": "10.1 is not released yet, if you’d like to try it out now you can download the build herehttps://oss.sonatype.org/content/repositories/snapshots/org/mongodb/spark/mongo-spark-connector_2.12/10.1.0-SNAPSHOT/for scala 2.12or Index of /repositories/snapshots/org/mongodb/spark/mongo-spark-connector_2.13/10.1.0-SNAPSHOTfor scala 2.13", "username": "Robert_Walters" }, { "code": "", "text": "Hi @Robert_Walters,to try version 10.1 on databricks, Do I have to download all these files and add to the cluster? or which files from that list are needed? how can I reference in my spark session? 
Thank you very much.", "username": "Thiago_Aniceto" }, { "code": "query=(slidingWindows.writeStream.format(\"mongodb\").option('spark.mongodb.connection.uri', 'mongodb://mongo1:27017,mongo2:27018,mongo3:27019/Stocks.Source?replicaSet=rs0')\\\n .option('spark.mongodb.database', 'Pricing') \\\n .option('spark.mongodb.collection', 'NaturalGas') \\\n .option(\"checkpointLocation\", \"/tmp\") \\\n .outputMode(\"complete\") \\\n .start())\n", "text": "I do not have the Databricks setup available to verify but I think you would download the latest “-all.jar” file like this one for 2.12mongo-spark-connector_2.12-10.1.0-20221215.104635-18-all.jarThen in the Databricks once you create your cluster, under Libraries import the library and download the -all\nimage1192×982 77.3 KB\nThat is from a blog I wrote ^^ you might find helpful Exploring Data with MongoDB Atlas, Databricks, and Google Cloud | MongoDB Blog.Just remember that in Spark 3.x we used\"mongo\" in Spark 10.x version of the connector its “mongodb” such as", "username": "Robert_Walters" }, { "code": "mongodbwriteStreammongodbforeachBatchforeachBatchdef write_mongodb(df, epoch_id):\n df.write \\\n .format(\"mongo\") \\\n .option(\"uri\", \"mongodb+srv://<<mongo-connection>>\") \\\n .option(\"database\", \"xxx\") \\\n .option(\"collection\", \"xxx\") \\\n .mode(\"append\") \\\n .save()\n\nquery = (spark.readStream\n .format(\"mongodb\")\n .option(\"uri\", \"mongodb+srv://<<mongo-connection>>\")\n .option(\"database\", \"xxx\")\n .option(\"collection\", \"xxx\")\n .option(\"changeStream.fullDocument\", \"updateLookup\")\n .load())\n\nstream = (query.writeStream\n .foreachBatch(write_mongodb)\n .option(\"checkpointLocation\", \"s3://xxxx/checkpoint\")\n .outputMode(\"append\")\n .start())\n\nstream.awaitTermination()\nquerywrite_mongodbforeachBatchwrite_mongodb", "text": "The error message indicates that the MongoDB Spark Connector does not support microbatch processing for the mongodb data source. This means that you cannot use the writeStream API with the mongodb data source. Instead, you may want to consider using the foreachBatch API to write the streaming data to MongoDB.Here’s an example of how you can use foreachBatch to write streaming data to MongoDB:In this example, the query variable reads data from the MongoDB collection, and the write_mongodb function writes the data to MongoDB. The foreachBatch API is used to write the data in batches to MongoDB. You can modify the write_mongodb function to write the data to a file instead of MongoDB.I hope this helps! Let me know if you have any further questions.", "username": "Brock" } ]
MongoDB Spark Connector v10 - Failing to write to a file sink
2022-06-13T02:15:34.553Z
MongoDB Spark Connector v10 - Failing to write to a file sink
7,999
https://www.mongodb.com/…3_2_1023x281.png
[ "graphql" ]
[ { "code": "", "text": "Hi everyone!I’m using MongoDB Realm and Angular as a framework. I can create, read and delete documents with queries/mutations of GraphQL and using the API of MongoDB Data Access but I cannot update the documents.I tried with the GraphQL endpoint and I can use the upsertOneUser but not the updateOneUser (and the others update functions). It says that it is not allowed. Also using the collections it does not work.Whare is the problem? I have all the permission for write/read/insert documents.\nImmagine 2021-05-04 1033021315×362 11.8 KB\n", "username": "Giacomo_Ondesca" }, { "code": "", "text": "Did you ever figure this out?", "username": "Try_Catch_Do_Nothing" }, { "code": "mutation {\n updateOneUser(input: {filter: { _id: \"user123\" }, set: { name: \"New Name\" }}) {\n _id\n name\n }\n}\n", "text": "@Try_Catch_Do_Nothing @Giacomo_Ondesca\nThere could be several reasons why you are not able to update documents in MongoDB Realm using GraphQL or the Data Access API. Here are a few things to check:Make sure that your MongoDB Realm app’s rules allow updates. You can check this in the MongoDB Realm UI by going to your app, selecting “Rules” from the left-hand menu, and checking the “Update” permission for the relevant collection.Make sure that you are passing the correct parameters to the updateOneUser mutation. The mutation requires an “input” parameter that specifies the filter to identify the document to update, and a “set” parameter that specifies the fields to update. Here is an example:This would update the “name” field of the user with ID “user123” to “New Name”.If none of these solutions work, you may want to reach out to MongoDB support for further assistance.", "username": "Brock" } ]
Update not allowed in MongoDB Realm
2021-05-04T08:52:16.647Z
Update not allowed in MongoDB Realm
2,708
null
[ "graphql", "next-js", "react-js" ]
[ { "code": "GraphQLProvider/componentsimport * as Realm from \"realm-web\";\nimport {\n ApolloClient,\n ApolloProvider,\n HttpLink,\n InMemoryCache\n} from \"@apollo/client\";\n\n// import { useApp } from \"./useApp\";\n\n// Connect to MongoDB Realm app\nconst app = new Realm.App(process.env.NEXT_PUBLIC_APP_ID);\n\n// Gets a valid Realm user access token to authenticate requests\nasync function getValidAccessToken() {\n // Guarantee that there's a logged in user with a valid access token\n if (!app.currentUser) {\n // If no user is logged in, log in an anonymous user. The logged in user will have a valid\n // access token.\n await app.logIn(Realm.Credentials.anonymous());\n } else {\n // An already logged in user's access token might be stale. Tokens must be refreshed after \n // 30 minutes. To guarantee that the token is valid, we refresh the user's access token.\n await app.currentUser.refreshAccessToken();\n }\n // console.log(app.currentUser.accessToken);\n return app.currentUser.accessToken;\n }\n\n\n// Add GraphQL client provider\nfunction GraphQLProvider({ children }) {\n\n // const app = useApp();\n\n console.log()\n \n const client = new ApolloClient({\n link: new HttpLink({\n uri: process.env.NEXT_PUBLIC_GRAPHQL_API_ENDPOINT,\n // Get the latest access token on each request\n fetch: async (uri, options) => {\n const accessToken = await getValidAccessToken();\n options.headers.Authorization = `Bearer ${accessToken}`;\n return fetch(uri, options);\n },\n }),\n cache: new InMemoryCache(),\n });\n\n return (\n <ApolloProvider client={client}>\n {children}\n </ApolloProvider>\n );\n}\n\nexport default GraphQLProvider;\npages/_app.jsimport GraphQLProvider from '@/components/GraphQLProvider';\n\nexport default function App({ Component, pageProps }) {\n return (\n // Wraps Apollo Provider and provides authentication.\n <GraphQLProvider>\n <Component {...pageProps} />\n </GraphQLProvider>\n );\n}\n", "text": "I’ve managed to get anonymous user authentication working from my Next.js app to MongoDB GraphQL API using Apollo Client. However, I’m not quite sure if this is a recommended approach or if there are any issues to be aware of using this method. I’d also appreciate any feedback as to how to improve things if anyone can suggest anything.First of all I create a GraphQLProvider component in /components directory which wraps the Apollo Provider. It retrieves a valid access token on every request and adds it to the headers:Then in pages/_app.js I wrap my app with the new provider (which includes the Apollo Provider):I then just make the GraphQL query in any components where I need to make an external request for data.It should be noted that at this point I’m only using cient-side rendering and I don’t have any user accounts on my site and I have enabled anonymous authentication in App Services. 
It’s purely just to be able to make the connection to MongoDB Atlas App Services and retrieve generic data.Thanks for any feedback.", "username": "Ian" }, { "code": "import * as Realm from \"realm-web\";\nimport {\n ApolloClient,\n ApolloProvider,\n HttpLink,\n InMemoryCache\n} from \"@apollo/client\";\nimport { useState, useEffect } from \"react\";\n\nconst app = new Realm.App(process.env.NEXT_PUBLIC_APP_ID);\n\nasync function getValidAccessToken() {\n if (!app.currentUser) {\n await app.logIn(Realm.Credentials.anonymous());\n } else {\n await app.currentUser.refreshAccessToken();\n }\n return app.currentUser.accessToken;\n}\n\nfunction useAccessToken() {\n const [accessToken, setAccessToken] = useState(null);\n\n useEffect(() => {\n async function fetchAccessToken() {\n const token = await getValidAccessToken();\n setAccessToken(token);\n }\n\n fetchAccessToken();\n const interval = setInterval(fetchAccessToken, 29 * 60 * 1000);\n\n return () => clearInterval(interval);\n }, []);\n\n return accessToken;\n}\n\nfunction GraphQLProvider({ children }) {\n const accessToken = useAccessToken();\n\n if (!accessToken) {\n return <div>Loading...</div>;\n }\n\n const client = new ApolloClient({\n link: new HttpLink({\n uri: process.env.NEXT_PUBLIC_GRAPHQL_API_ENDPOINT,\n headers: {\n Authorization: `Bearer ${accessToken}`,\n },\n }),\n cache: new InMemoryCache(),\n });\n\n return (\n <ApolloProvider client={client}>\n {children}\n </ApolloProvider>\n );\n}\n\nexport default GraphQLProvider;\nuseStateuseEffect", "text": "@IanYour approach seems reasonable for anonymous authentication in a Next.js app using Apollo Client to connect to a MongoDB Realm app. However, there are a few things you could improve:Caching the access token: You could improve performance by caching the access token instead of fetching it on every request. You can use a state management library like Redux to store the token in the store and update it as needed.Error handling: You should add error handling to your GraphQLProvider component to handle cases where the user is not authenticated or the access token is invalid.Proper server-side rendering: If you plan to implement server-side rendering in the future, you should modify your code to handle authentication on the server-side as well.Here’s an updated version of your GraphQL Provider component that addresses some of these concerns:In this updated version, we’re using the useState and useEffect hooks to fetch the access token once and then update it periodically. We also added some error handling in case the access token is not available.", "username": "Brock" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Authenticating with MongoDB GraphQL API in Next.js
2023-03-06T10:47:22.741Z
Authenticating with MongoDB GraphQL API in Next.js
1,729
null
[ "aggregation", "node-js" ]
[ { "code": "const EntrySchema = {\n name: \"Entry\",\n properties: {\n _id: { type: \"objectId\", indexed: true },\n cells: \"Entry_cells[]\",\n },\n primaryKey: \"_id\",\n};\n\nconst Entry_cellsSchema = {\n name: \"Entry_cells\",\n embedded: true,\n properties: {\n _schema: { type: \"int\", optional: true, default: 0 },\n body: { type: \"string\", optional: true, default: \"\" },\n columnId: { type: \"string\", optional: true, default: \"\" },\n },\n};\n.objects(\"entries\")\n.filtered(\"cells.columnId == $0\", columnId)\n.sorted([[\"cells.body\", true][\"_id\", true]]);\n", "text": "Hello,\nI have an Electron app with a Realm database using flexible sync.\nI have this schema:I would like to be able to sort entries by cells.body. I know this is possible with an aggregation.\nBut I’m offline so I can’t use aggregate.I would like to do something like the following but I’m afraid it is not supported:Does anyone know if it can be achieved? Thanks.", "username": "Benoit_Werner" }, { "code": "Entrycells.bodyEntry_cells.sorted()const EntrySchema = {\n name: \"Entry\",\n properties: {\n _id: { type: \"objectId\", indexed: true },\n cells: \"Entry_cells[]\",\n cellsBody: { type: \"string\", indexed: true }\n },\n primaryKey: \"_id\",\n};\n\nconst Entry_cellsSchema = {\n name: \"Entry_cells\",\n embedded: true,\n properties: {\n _schema: { type: \"int\", optional: true, default: 0 },\n body: { type: \"string\", optional: true, default: \"\" },\n columnId: { type: \"string\", optional: true, default: \"\" },\n },\n};\n\n// Whenever a new Entry or Entry_cells object is added or modified, update cellsBody\nrealm.write(() => {\n const entries = realm.objects(\"Entry\");\n for (const entry of entries) {\n let cellsBody = \"\";\n for (const cell of entry.cells) {\n cellsBody += cell.body;\n }\n entry.cellsBody = cellsBody;\n }\n});\n\n// Query and sort by cellsBody\nconst entries = realm.objects(\"Entry\")\n .filtered(\"cells.columnId == $0\", columnId)\n .sorted(\"cellsBody\", true);\ncells.bodycellsBody", "text": "Unfortunately, sorting by a property of an embedded object is not currently supported in Realm. As you mentioned, one way to achieve this would be to use aggregation, but since you are offline, this is not possible.One possible workaround is to denormalize your data by adding a new property to the Entry object that stores the value of cells.body. You can update this property whenever a new Entry_cells object is added or modified.With this approach, you can then sort by the new property using the .sorted() method. Here is an example:This approach does have the disadvantage of increased storage space and potential inconsistency if cells.body and cellsBody are not kept in sync. However, it may be a feasible workaround for your use case.", "username": "Brock" } ]
Sort by array field, Realm Flex sync offline
2023-03-15T10:13:59.074Z
Sort by array field, Realm Flex sync offline
829
null
[]
[ { "code": "{$add: [ {$count: [ \"$NAMES\" ]}, 40000]}", "text": "Hey, I have a field called names that I’m counting, I have 20,000 values.When I display the count it say 20,000. No problem.However, I’m trying to add the number 40,000 to that count in order to have: 40,000 + count($names) displayed.How can I do that in the calculated fields section? Nothing works.I tried {$add: [ {$count: [ \"$NAMES\" ]}, 40000]}", "username": "Lior_S" }, { "code": "[\n { \n $group: {\n _id: {},\n rawCount: { $sum: 1 }\n }\n },\n { \n $set: { \n modifiedCount: { $sum: [ \"$rawCount\", 20000 ]}\n }\n } \n]\n", "text": "Hi @Lior_S -You can’t use calculated fields for this, since a calculated field only acts on a single document at a time. To accomplish what you’re after, you’ll need to use the query bar to pre-group the data (i.e. to count it), after which you can apply further transformations. Here’s one way to do it:\nimage1315×716 31.9 KB\n", "username": "tomhollander" } ]
How to add an int to a $count and display it?
2023-04-11T20:33:42.868Z
How to add an int to a $count and display it?
786
null
[ "aggregation", "queries" ]
[ { "code": "exports = function()\n\n{\n\n const datalake = context.services.get(\"v3ProdCluster-us-east-1\");\n const db = datalake.db(\"v3StagingDB\");\n const events = db.collection(\"work_sessions\");\n\n const pipeline = [\n {\n $match: {\n \"time\": {\n $gte: new Date(Date.now() - 60 * 60 * 10000000000000000),\n $lt: new Date(Date.now())\n }\n }\n }, {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"mongodb-s3-staging\",\n \"region\": \"us-east-1\",\n \"filename\":\n { \"$concat\": [\n \"work_sessions/\",\n \n \"$_id\"\n ]\n },\n \"format\": {\n \"name\": \"json\",\n \"maxFileSize\": \"10GB\",\n \n }\n }\n }\n }\n ];\n\n return events.aggregate(pipeline);\n};\n", "text": "I am trying to copy data from Mongo DB to s3 bucket. I followed this tutorial : How to Automate Continuous Data Copying from MongoDB to S3 | MongoDBSteps :Created s3 bucket and IAM role with all the required permissions (including access policy)\nCreated a data lake in mongo DB\nConnected the data lake with S3\nWhile Creating the Trigger I am facing this issue.", "username": "Nirmal_Patil" }, { "code": "", "text": "i am having the same problem", "username": "Marina_Stolet" }, { "code": "M_", "text": "Welcome to the MongoDB Community @Marina_Stolet !Can you confirm the MongoDB Atlas cluster tier you are using (M_)? Are you following the same tutorial as the original poster?Regards,\nStennie", "username": "Stennie_X" }, { "code": "exports = function () {\n\n const datalake = context.services.get(\"FederatedDatabaseInstance-analytics\");\n const db = datalake.db(\"analytics\");\n const coll = db.collection(\"assessments\");\n\n const pipeline = [\n {\n \"$out\": {\n \"s3\": {\n \"bucket\": \"322104163088-mongodb-data-ingestion\",\n \"region\": \"eu-west-2\",\n \"filename\": \"analytics/\",\n \"format\": {\n \"name\": \"json\",\n \"maxFileSize\": \"100GB\"\n }\n }\n }\n }\n ];\n\n return coll.aggregate(pipeline);\n};\n", "text": "I am using M10 and yes, the same tutorial, although I now adapted it. What I did was I created a federated database using my cluster and a “analytics” db and assessments coll. I am not getting that same error anymore, but no data comes into my s3 bucket. That’s the code:", "username": "Marina_Stolet" }, { "code": "const datalake = context.services.get(\"v3ProdCluster-us-east-1\");\n", "text": "Hi All,I am not getting that same error anymore, but no data comes into my s3 bucket. That’s the code:I would possibly raise a new topic as it may be a completely separate issue since you are not getting the same error message.If an object is passed to $out it must have exactly 2 fields: ‘db’ and ‘coll’Regarding the original topic title and error message, one thing you may wish to check is that you are specifying the Federated Database Instance service rather than your Atlas cluster. 
As per the blog page: You must connect to your Federated Database Instance to use $out to S3. Regards,\nJason", "username": "Jason_Tran" }, { "code": "exports = function() {\n const datalake = context.services.get(\"v3ProdCluster-us-east-1\");\n const db = datalake.db(\"v3StagingDB\");\n const events = db.collection(\"work_sessions\");\n\n const pipeline = [\n {\n $match: {\n \"time\": {\n $gte: new Date(Date.now() - 60 * 60 * 1000),\n $lt: new Date(Date.now())\n }\n }\n },\n {\n $out: {\n s3: {\n bucket: \"mongodb-s3-staging\",\n region: \"us-east-1\",\n filename: { $concat: [\"work_sessions/\", \"$_id\"] },\n format: {\n name: \"json\",\n maxFileSize: \"10GB\"\n }\n }\n }\n }\n ];\n\n return events.aggregate(pipeline).toArray();\n};\nexports = function () {\n const datalake = context.services.get(\"FederatedDatabaseInstance-analytics\");\n const db = datalake.db(\"analytics\");\n const coll = db.collection(\"assessments\");\n\n const pipeline = [\n {\n $out: {\n s3: {\n bucket: \"322104163088-mongodb-data-ingestion\",\n region: \"eu-west-2\",\n filename: \"analytics/\",\n format: {\n name: \"json\",\n maxFileSize: \"100GB\"\n }\n }\n }\n }\n ];\n\n return coll.aggregate(pipeline).toArray();\n};\n", "text": "@Nirmal_Patil\nCode fixed; please review the changes/differences. I also made it easier to read.@Marina_Stolet I corrected yours as well. The issues with both are addressed in the corrected snippets above. I hope this helps.", "username": "Brock" } ]
If an object is passed to $out it must have exactly 2 fields: 'db' and 'coll'
2022-06-04T20:29:45.164Z
If an object is passed to $out it must have exactly 2 fields: ‘db’ and ‘coll’
3,453
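The error in the title is the regular cluster's parser talking: against a normal cluster connection, the document form of $out accepts only db and coll, while the s3 form is recognized only when the pipeline runs through a Federated Database Instance connection, as Jason notes above. A side-by-side sketch (names are placeholders):

// valid against a regular cluster connection
{ $out: { db: "reportingDB", coll: "dailyRollup" } }

// valid only via a Federated Database Instance connection
{ $out: { s3: { bucket: "my-bucket", region: "us-east-1", filename: "out/", format: { name: "json" } } } }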
null
[ "crud" ]
[ { "code": "const match = { \"gm_id\": 11 };\nconst update = {\n '$set': { \"nxt_plyr_clr\" : 100 }\n }; \nconst options = { returnNewDocument: true };\n\nreturn await context.services\n .get(\"mongodb-atlas\")\n .db(\"db1\")\n .collection(\"coll1\")\n .findOneAndUpdate(match, update , options)\n .then(updatedDocument => {\n if(updatedDocument) {\n console.log(`Successfully updated document: ${updatedDocument}.`)\n } else {\n console.log(\"No document matches the provided query.\")\n }\n return updatedDocument\n })\n .catch(err => console.error(`Failed to find and update document: ${err}`))\n \n};\n const update = {\n '$set': { \"nxt_plyr_clr\" : \"$gm_mv_cnt\" }\n };\n\"nxt_plyr_clr\": 5,\n\"nxt_plyr_clr\": \"$gm_mv_cnt\",\n", "text": "Function code - this works and updates field nxt_plyr_clr = 100:This does not work the way I want it to so I think my syntax is wrong:gm_mv_cnt is an existing field in this document with a value of 5, instead of the update producing:I am getting:", "username": "Mic_Cross" }, { "code": "$set$set$setfindOneAndUpdatefindOneAndUpdateupdateOnefindOneconst coll = context.services\n .get(\"mongodb-atlas\")\n .db(\"db1\")\n .collection(\"coll1\")\n\nawait coll.updateOne(match, [update]) // note the \"[]\"; this is now a pipeline update\nawait coll.findOne(match)\n", "text": "Hi @Mic_Cross,Your $set expression is relying on a field path expression, which is supported in the aggregation $set operator, but not the update $set operator. To trigger this behavior, you need to perform a pipeline update.Atlas Functions do not support pipeline updates in findOneAndUpdate calls at the moment. If you don’t need the atomic guarantees of findOneAndUpdate, you can instead perform an updateOne with a pipeline update followed by a findOne, like so:Let me know if that works for you!", "username": "Kiro_Morkos" }, { "code": "", "text": "Thank you for this great explanation - exactly what I needed to know.", "username": "Mic_Cross" }, { "code": "const match = { \"gm_id\": 11 };\nconst update = {\n '$set': { \"nxt_plyr_clr\": 100 }\n};\nconst options = { returnNewDocument: true };\n\ntry {\n const updatedDocument = await context.services\n .get(\"mongodb-atlas\")\n .db(\"db1\")\n .collection(\"coll1\")\n .findOneAndUpdate(match, update, options);\n if (updatedDocument) {\n console.log(`Successfully updated document: ${updatedDocument}.`);\n return updatedDocument;\n } else {\n console.log(\"No document matches the provided query.\");\n return null;\n }\n} catch (err) {\n console.error(`Failed to find and update document: ${err}`);\n return null;\n}\n\nconst match = { \"gm_id\": 11 };\nconst update = {\n '$set': { \"nxt_plyr_clr\": 100 }\n};\nconst options = { returnNewDocument: true };\n\ntry {\n const collection = context.services\n .get(\"mongodb-atlas\")\n .db(\"db1\")\n .collection(\"coll1\");\n if (!collection) {\n console.log(\"Collection not found.\");\n return null;\n }\n const updatedDocument = await collection.findOneAndUpdate(match, update, options);\n if (updatedDocument) {\n console.log(`Successfully updated document: ${updatedDocument}.`);\n return updatedDocument;\n } else {\n console.log(\"No document matches the provided query.\");\n return null;\n }\n} catch (err) {\n console.error(`Failed to find and update document: ${err}`);\n return null;\n}\n\n", "text": "Hello Mic:Some corrections to your code:This is for better error handling and detection if there’s issues with the collection as well.", "username": "Brock" }, { "code": "", "text": "This topic was 
automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Syntax for pipeline code in function?
2023-04-05T04:53:16.308Z
Syntax for pipeline code in function?
828
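Putting the accepted answer together with the original goal (copying one field's value into another), the function body would look roughly like this, reusing the thread's field names:

const coll = context.services.get("mongodb-atlas").db("db1").collection("coll1");

// the surrounding [] makes this a pipeline update, so "$gm_mv_cnt"
// is evaluated as a field path instead of stored as a literal string
await coll.updateOne({ gm_id: 11 }, [ { $set: { nxt_plyr_clr: "$gm_mv_cnt" } } ]);
const updated = await coll.findOne({ gm_id: 11 });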
https://www.mongodb.com/…49d9d89bfaf2.png
[ "dot-net" ]
[ { "code": "Builders<MeasurementDetails>.Filter.Lt(field, myValue);System.InvalidCastException : Unable to cast object of type 'System.String' to type 'System.DateTime'.\n at MongoDB.Bson.Serialization.Serializers.DowncastingSerializer`2.Serialize(BsonSerializationContext context, BsonSerializationArgs args, TBase value)\n at MongoDB.Bson.Serialization.IBsonSerializerExtensions.Serialize[TValue](IBsonSerializer`1 serializer, BsonSerializationContext context, TValue value)\n at MongoDB.Driver.OperatorFilterDefinition`2.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.OrFilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.AndFilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.AndFilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.AndFilterDefinition`1.Render(IBsonSerializer`1 documentSerializer, IBsonSerializerRegistry serializerRegistry, LinqProvider linqProvider)\n at MongoDB.Driver.MongoCollectionImpl`1.CreateFindOperation[TProjection](FilterDefinition`1 filter, FindOptions`2 options)\n at MongoDB.Driver.MongoCollectionImpl`1.FindAsync[TProjection](IClientSessionHandle session, FilterDefinition`1 filter, FindOptions`2 options, CancellationToken cancellationToken)\n at MongoDB.Driver.MongoCollectionImpl`1.<>c__DisplayClass48_0`1.<FindAsync>b__0(IClientSessionHandle session)\n at MongoDB.Driver.MongoCollectionImpl`1.UsingImplicitSessionAsync[TResult](Func`2 funcAsync, CancellationToken cancellationToken)\n at MongoDB.Driver.IAsyncCursorSourceExtensions.ToListAsync[TDocument](IAsyncCursorSource`1 source, CancellationToken cancellationToken)\n", "text": "Hello,when adding filter like this\nBuilders<MeasurementDetails>.Filter.Lt(field, myValue);\nwhere field is Expression<Func<MyObjectType, object>> field = x => x.CreationDate\nand CreationDate property type is DateTime, but myValue is string (because we filter dynamically depending on user request)LinqProvider V2 works fine, but V3 version throws exception\nimage1019×167 8.25 KB\nSo it seems on this line in DowncastingSerializer it throws exception because it simply tries to cast string to DateTime. Not sure if that is intended behavior in new Linq version or a bug?", "username": "Laurynas" }, { "code": "", "text": "Hi, @Laurynas,Welcome to the MongoDB Community Forums. I have reproduced the issue that you encountered when switching from the LINQ2 to LINQ3 provider. I created CSHARP-4609 to track this bug. Thank you for reporting this issue. Please follow CSHARP-4609 for updates.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
InvalidCastException with LinqProvider V3 when passing string as DateTime
2023-04-11T08:03:48.722Z
InvalidCastException with LinqProvider V3 when passing string as DateTime
671
null
[ "node-js" ]
[ { "code": "async function database(req, res, next) {\n // original code\n // if (!client.isConnected()) await client.connect();\n // modified code: to not check for client.isConnected\n await client.connect();\n req.dbClient = client;\n req.db = client.db('MCT');\n return next();\n}\n", "text": "I was following the tutorial Building Modern Applications with Next.js and MongoDB | MongoDB but get an internal server error 500 when calling the api to mongoDB. Upon further investigation, I found that the response from the api call was ‘client.isConnected is not a function’.After doing some searching, I think the most likely reason is that the code in the tutorial is outdated hence it gives an error. I modified this part of the code according to something I saw from my searching and it seems to work.I removed the condition checking for client.isConnected as I read it may not be needed anymore, but I’m not sure about this so I was wondering if this is the correct approach and if not, what can I do to solve this issue? Thanks in advance.", "username": "Yi_Jiun_Tay" }, { "code": "MongoClient.isConnected", "text": "Hi @Yi_Jiun_Tay,The legacy MongoClient.isConnected method was removed in version 4 of the MongoDB Node.js Driver (see NODE-2317).Good catch!Update - We’ve updated the article at Building Modern Applications with Next.js and MongoDB | MongoDB to remove the incorrect line of code ", "username": "alexbevi" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
client.isConnected is not a function error
2023-04-11T18:37:12.269Z
client.isConnected is not a function error
1,011
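For reference, a common replacement pattern (a sketch, not the tutorial's exact code) caches the connection promise at module scope, so there is nothing to poll with isConnected and every request reuses one connection pool:

// lib/mongodb.js
const { MongoClient } = require("mongodb");

const client = new MongoClient(process.env.MONGODB_URI);
const clientPromise = client.connect(); // created once, shared by all requests

async function database(req, res, next) {
  req.dbClient = await clientPromise;
  req.db = req.dbClient.db("MCT");
  return next();
}

module.exports = { clientPromise, database };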
null
[ "queries", "crud" ]
[ { "code": "db.channels.updateMany({}, { $addToSet: { guests: '$user' } })", "text": "i want to get the userId of a document and store it in an array of guests. but this syntax isn’t working, it just hard-codes the $user string in an array in the document:db.channels.updateMany({}, { $addToSet: { guests: '$user' } })", "username": "Gagandeep_Suie" }, { "code": "", "text": "You needThe following page provides examples of updates with aggregation pipelines.", "username": "steevej" }, { "code": "db.channels.aggregate({\n $addFields: {\n guests: [{ $toString: \"$user\" }]\n }\n})\n", "text": "great! i have this and its returning the correct results but its not actually updating any documents.", "username": "Gagandeep_Suie" }, { "code": "", "text": "The link I provided does not indicate to call the aggregate() method.If you look at the updateMany example you will see that you still call updateMany as usual with the exception of the extra brackets around your update operation.", "username": "steevej" } ]
Update array with document's own user property
2023-04-11T06:30:11.679Z
Update array with document’s own user property
596
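For completeness, a sketch of the pipeline update steevej is pointing at; $addToSet is not available inside an update pipeline, so $setUnion provides the set semantics:

db.channels.updateMany({}, [
  {
    $set: {
      // append this document's own user value, keeping guests duplicate-free
      guests: { $setUnion: [ { $ifNull: [ "$guests", [] ] }, [ "$user" ] ] }
    }
  }
])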
https://www.mongodb.com/…_2_1024x567.jpeg
[ "node-js", "mongoose-odm" ]
[ { "code": "// Export mongoose\nconst mongoose = require(\"mongoose\");\n\nrequire('dotenv-flow').config();\n\n//Assign MongoDB connection string to Uri and declare options settings\nvar uri = `mongodb+srv://${process.env.MONGO_URL_NAME}:${process.env.MONGO_PASSWORD}@${process.env.MONGO_URL_CLUSTER}/${process.env.MONGO_DATABASE}?retryWrites=true&w=majority`\nconsole.log(`url: `+uri)\nconst db = require(\"../models\");\nconst User = db.user;\n\ndb.mongoose\n .connect(uri, {\n useNewUrlParser: true,\n useUnifiedTopology: true\n })\n .then(() => {\n console.log(\"Successfully connect to MongoDB.\");\n initial();\n })\n .catch(err => {\n console.error(\"Connection error\", err);\n process.exit();\n });\n\nfunction initial() {\n\tUser.estimatedDocumentCount((err, count) => {\n\t if (!err && count === 0) {\n\t\tnew User({\n\t\t email: \"[email protected]\",\n\t\t password: \"$2a$08$hVnfdemp6cpovhm0uOvDeOqPcwiO7Ek0SWcGqLwlTTytFRBg7C.TW\", // KeepingHumanSafe101\n\t\t accountType: \"admin\",\n\t\t fname: \"Admin\",\n\t\t lname: \"\",\n\t\t accountType: \"admin\",\n\t\t plan: \"Ultimate\",\n\t\t status: \"active\",\n\t\t credits: 10000,\n\t\t}).save(err => {\n\t\t if (err) {\n\t\t\tconsole.log(\"error\", err);\n\t\t }\n\t\t console.log(\"admin user added\");\n\t\t});\n\n\t\tnew User({\n\t\t\temail: \"[email protected]\",\n\t\t\tpassword: \"$2a$08$hVnfdemp6cpovhm0uOvDeOqPcwiO7Ek0SWcGqLwlTTytFRBg7C.TW\", // KeepingHumanSafe101\n\t\t\taccountType: \"user\",\n\t\t\tfname: \"OpenAI\",\n\t\t\tlname: \"Support\",\n\t\t\tplan: \"Ultimate\",\n\t\t\tstatus: \"active\",\n\t\t\tcredits: 1000,\n\t\t }).save(err => {\n\t\t\tif (err) {\n\t\t\t console.log(\"error\", err);\n\t\t\t}\n\t\t\tconsole.log(\"admin user added\");\n\t\t });\n\n\t }\n\t});\n}\n", "text": "Screen Shot 2566-01-07 at 01.48.231920×1064 305 KB\nI am trying to connect VScode program to my cluster but it keeps giving me “URI does not have hostname, domain name and tld” error. I have searched multiple forums with the same problem, all of which are solved with encoding special characters in the password. But my password is auto-generated on Mongodb and doesn’t have any special characters.This is the code I use to connect to the Mongo database", "username": "popp_yu" }, { "code": "mongodb+srv://", "text": "Hi @popp_yuAre you using MongoDB Atlas? And every time?Creating the uri with mongodb+srv:// is always going to restrict you to the DNS Seed style connection string(Primarily used by Atlas) and would not permit use of standard connection strings. See Connection String URI FormatThis could complicate unit testing and any local development against a local mongodb.", "username": "chris" }, { "code": "mongodb://tlsssl", "text": "Yes Atlas, and yes every time. I have tried using just mongodb:// for standard connection as well but still doesn’t work. Setting the tls and ssl to false in the query string doesn’t seem to work either.P.S. I am trying to connect to a shared cluster.", "username": "popp_yu" }, { "code": "urimongoshuri", "text": "It is an error from the parser, so I would definitely suspect the building of the uri string. 
Without seeing the connection string(sans credentials) it is difficult to diagnose.Have you used the URI with mongosh or compass, copied directly from the Atlas connect screen?If that works I would assign the uri directly without using the environment variable interpolation.", "username": "chris" }, { "code": "", "text": "I too have this same issue.I’m guessing we both purchased the same template.\n\nScreenshot 2023-01-16 at 12.46.46 AM1083×763 317 KB\n", "username": "Chris_Celaya" }, { "code": "", "text": "Make sure all of your passwords and users are correct. And be sure to command + S to save the inputs. That worked for me.", "username": "Jay_Harnkul" }, { "code": "url: mongodb+srv://MONGO_URL_NAME:MONGO_PASSWORD@MONGO_URL_CLUSTER/...\n", "text": "Your code literally output the lineThis means one of 3 things:1 - your .env is not configured correctly\n2 - your .env is not processed correctly, the lines that starts with dotenv-flow: might give some clues\nOR\n3 - your code does not use the content of your .env correctlyYou will need to share your .env and the code that uses your .env.But first read Formatting code and log snippets in posts.", "username": "steevej" }, { "code": "", "text": "All values are correct. I was able to confirm by connecting through the terminal.", "username": "Chris_Celaya" }, { "code": "MONGO_URL_CLUSTER", "text": "I found the problem with this error.The issue was having a forward slash in MONGO_URL_CLUSTER.Originally it was: MONGO_URL_CLUSTER=openai.02ppfro.mongodb.net/myFirstDatabase.\n/myFirstDatabase should not be there.After changes: MONGO_URL_CLUSTER=openai.02ppfro.mongodb.netThis allowed me to connect successfully. But now I have an issue with deploying the frontend.Thank you for your help.", "username": "Chris_Celaya" }, { "code": "", "text": "Hi Chris! Are you able to help me solve this issue please? I am in the setup phase of the project. My MONGO_URL_CLUSTER is set as this:MONGO_URL_CLUSTER=cluster0.vk9amrl.mongodb.netSo I do not see the ‘/’ issue you are talking about. Was there something else you did to solve this issue? Thanks.", "username": "Angela_Zhou" } ]
URI does not have hostname, domain name and tld
2023-01-06T18:48:47.230Z
URI does not have hostname, domain name and tld
5,134
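The root cause found in this thread generalizes: the cluster-host portion of an SRV URI must be a bare hostname with no path segment. A sketch of a sane split (all values are placeholders):

// .env (placeholders)
// MONGO_URL_NAME=appUser
// MONGO_PASSWORD=secretPassword
// MONGO_URL_CLUSTER=cluster0.xxxxx.mongodb.net   <- host only, no /database suffix
// MONGO_DATABASE=myFirstDatabase

const uri = `mongodb+srv://${process.env.MONGO_URL_NAME}:${process.env.MONGO_PASSWORD}` +
  `@${process.env.MONGO_URL_CLUSTER}/${process.env.MONGO_DATABASE}?retryWrites=true&w=majority`;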
null
[]
[ { "code": "non-recoverable error processing event: dropping namespace ns='db.collection' being synchronized with Device Sync is not supported. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning.", "text": "This is the errors we’re facing all of a sudden:Synchronization between Atlas and Device Sync has been stopped, due to error:non-recoverable error processing event: dropping namespace ns='db.collection' being synchronized with Device Sync is not supported. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning.Now I know from other threads that this happens when you drop a collection. But how do I fix it? I’ve terminated the sync and re-enabled it twice but the same error persists.", "username": "Renie_Ravin" }, { "code": "", "text": "Hi @Renie_Ravin,Thanks for posting! Can you confirm that you followed these instructions to terminate sync? Note that if you have drafts enabled, you’ll need to deploy the draft in between terminating and re-enabling for the termination to go through.", "username": "Kiro_Morkos" }, { "code": "", "text": "@Renie_RavinYou need to establish CLIENT RESET LOGIC.As Realm/Device Sync does not automatically do this, so when something happens to disconnect Device Sync from the clients, you’ll never recover and your app will be in a pit of doom. But luckily, there is a super easy fix that isn’t in the QuickStart guides but should be so people are aware you need this functionality.You can find this by going to Mongodb.com going to search, type in Language + Client Reset Logic.This is an example for C# https://www.mongodb.com/docs/realm/sdk/dotnet/sync/client-reset/", "username": "Brock" }, { "code": "", "text": "This was it, thank you so much - I had not deployed the change in between termination and re-syncing.", "username": "Renie_Ravin" }, { "code": "", "text": "Thank you, we had not implemented this, we’re building in React Native, so we’re using that SDK now.", "username": "Renie_Ravin" }, { "code": "", "text": "This was it, thank you so much - I had not deployed the change in between termination and re-syncing.I knew immediately that’s what the issue was, I used to support Realm/Device Sync/App Services for 2 years working for MBD.The reason Kiro didn’t mention this, is because Client Reset Logic is literally unknown to people in other areas of MBD, and those inside Realm have no idea it exists unless someone more senior to them mentions it like I mentioned it to you.Where Client Reset Logic for you, is right here hidden in the docs where you’d never find it if you didn’t take a lot of time reading for it, or knew exactly what you were looking for. https://www.mongodb.com/docs/realm/sdk/react-native/sync-data/handle-sync-errors/#handle-client-reset-errorsWhy Client Reset Logic is so important.Client Reset Logic is important because it allows your app to recover after issues like this, anytime the app client disconnects from Sync for literally any reason, even a period of time offline, it will fail to ever sync again if this isn’t in place.Client Reset Logic if it’s not in place can cause global app outages should a major event take place.This is MongoDB Device Syncs/Realm’s most critical recovery feature, and I know you were not aware it existed, which is why I’m clarifying this and bringing it to your attention.", "username": "Brock" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" }, { "code": "", "text": "", "username": "Dave_Nielsen" }, { "code": "", "text": "", "username": "Dave_Nielsen" } ]
Synchronization between Atlas and Device Sync has been stopped. Terminating doesn't help
2023-03-31T07:58:33.327Z
Synchronization between Atlas and Device Sync has been stopped. Terminating doesn’t help
881
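For anyone wiring this up in the React Native / Node SDK, a minimal sketch of the client reset configuration Brock describes; the mode and callback names follow the Realm JS docs, so double-check them against your SDK version:

const config = {
  schema: [/* your object models */],
  sync: {
    user: app.currentUser,
    flexible: true,
    clientReset: {
      // automatic recovery, falling back to discarding local changes
      mode: "recoverOrDiscardUnsyncedChanges",
      onBefore: (localRealm) => console.log("client reset starting"),
      onAfter: (beforeRealm, afterRealm) => console.log("client reset finished"),
    },
  },
};
const realm = await Realm.open(config);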
null
[ "charts" ]
[ { "code": "", "text": "Charts are awesome, but it would be nice if the dashboard would be responsive. New charts should overlfow to next row if there isn’t available width. More over some charts should shrink accordingly to the viewports width. A simple flexbox wrap would solve the dashboard responsiveness. However for each charts that may be a bit more technical.", "username": "Rishi_uttam" }, { "code": "", "text": "I am having the same issue. Is there a solution to it?", "username": "Ariana_Daka" }, { "code": "", "text": "me too… i have the same issue here… anyone found any solution?", "username": "Subin_Viju" }, { "code": "", "text": "are there solutions about this?", "username": "Orlando_Herrera" }, { "code": "", "text": "@Subin_Viju @Ariana_Daka did you find a solution?", "username": "Orlando_Herrera" }, { "code": "", "text": "Any updates on this?", "username": "Servicios_Yu-Track" } ]
Make charts responsive
2022-05-18T04:45:29.706Z
Make charts responsive
3,563
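Until dashboards reflow natively, one workaround people use is embedding individual charts with the Charts Embedding SDK and letting CSS control the container size (a sketch; the base URL and chart ID are placeholders):

import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({ baseUrl: "https://charts.mongodb.com/charts-project-xxxxx" });
const chart = sdk.createChart({
  chartId: "your-chart-id",
  height: "100%", // fill a container that flexbox / media queries resize
  width: "100%",
});
chart.render(document.getElementById("chart"));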
null
[]
[ { "code": "", "text": "I’d like to know if there is a way to find a document that has an array that includes all and only the elements passed in the query’s array. I know i could use {members: [id1, id2]} or (with the $all operator) but what if i the elements order is not the same?Basically if the document has {members: [1, 2]} i want to be able to find it passing [2, 1] to the query, but it shouldn’t return the document if the query’s array includes any other number. The arrays should be the same except for the order.", "username": "iKingNinja" }, { "code": "", "text": "Test if the $size of $setIntersection is the same as members.", "username": "steevej" }, { "code": "", "text": "That seems a good approach but how would i combine the $all and $size operators? I think i need to use aggregations pipelines but I might be wrong as I tried and it didn’t work.", "username": "iKingNinja" }, { "code": "array_variable = [ 2 , 1 ]\ncollection.find( { \"members\" : { \"$all\" : array_variable , \"$size\" : array_variable.size() } }\n", "text": "It is not$all and $sizesince I mentioned$size of $setIntersectionI think i need to use aggregations pipelinesI believe too since $setIntersection seems to be only available in aggregation.I tried and it didn’t work.Shared what you tried. You might be closer to a solution than you think.While I was looking for $setIntersection examples to share, I discover $setEquals which would be a more direct way to do that compared to $size of $setIntersection.And while I was writing I thought a way you can do it with $all and $size.", "username": "steevej" }, { "code": "array_variable = [ 2 , 1 ]\ncollection.find( { \"members\" : { \"$all\" : array_variable , \"$size\" : array_variable.size() } }\n", "text": "While I was looking for $setIntersection examples to share, I discover $setEquals which would be a more direct way to do that compared to $size of $setIntersection.$setIntersection and $setEquals are not what i’m looking for, since the first one just returns documents with arrays that have elements in common and the second one returns arrays that have the same elements in other fields.Shared what you tried. You might be closer to a solution than you think.This is exactly what i tried", "username": "iKingNinja" }, { "code": "$setEqualstruefalse$setEquals{ $setEquals: [ <expression1>, <expression2>, ... ] }array_variable = [ 2 , 1 ]\ncollection.find( { \"members\" : { \"$all\" : array_variable , \"$size\" : array_variable.length } } )\nc.find()\n{ _id:0, members: [ 1, 2 ] }\n{ _id:1, members: [ 1, 2, 3 ] }\narray_variable = [ 2 , 1 ]\n/* note that in JS it is array_variable.length rather than array_variable.size() like I originally wrote */\nc.find( { \"members\" : { \"$all\" : array_variable , \"$size\" : array_variable.length } } )\n{ _id:0, members: [ 1, 2 ] }\n", "text": "the first one just returns documents with arrays that have elements in commonThis is way I wrote$size of $setIntersectionIf the $size of $setIntersection is the same as the size of the original array then the intersection contains all the elements.As for $setEquals, according to the documentation I linked:$setEquals Compares two or more arrays and returns true if they have the same distinct elements and false otherwise.$setEquals has the following syntax:{ $setEquals: [ <expression1>, <expression2>, ... ] }The arguments can be any valid expression as long as they each resolve to an array. 
For more information on expressions, see Expressions.So I really do not understand what you mean byreturns arrays that have the same elements in other fieldsIfis really what you tried, then I should have work because it does:", "username": "steevej" } ]
Find document with array that has all and only the elements passed in the query array but in a shuffled order
2023-04-10T20:26:21.546Z
Find document with array that has all and only the elements passed in the query array but in a shuffled order
285
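A minimal mongosh sketch of the set-comparison approach discussed in this thread — the collection name is a placeholder, and note that $setEquals compares sets, so duplicate elements are ignored:

// Match documents whose "members" array contains exactly these elements,
// regardless of order.
const wanted = [2, 1];
db.collection.aggregate([
  { $match: { $expr: { $setEquals: ["$members", wanted] } } }
]);
// If duplicate elements must also match exactly, the $all + $size form from
// the thread is the safer choice:
db.collection.find({ members: { $all: wanted, $size: wanted.length } });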
null
[ "connecting", "atlas-cluster" ]
[ { "code": "", "text": "hi everyone, hope u are doing fine. I tried connecting to mongo atlas on nextjs, i followed the tutorial from How to Integrate MongoDB Into Your Next.js App | MongoDB\nafter putting the env variables, and running yarn dev, i get the following error:Error: querySrv ETIMEOUT _mongodb._tcp.djamelcluster.mqvig3i.mongodb.net\nat QueryReqWrap.onresolve [as oncomplete] (node:internal/dns/promises:251:17) {\nerrno: undefined,\ncode: ‘ETIMEOUT’,\nsyscall: ‘querySrv’,\nhostname: ‘_mongodb._tcp.djamelcluster.mqvig3i.mongodb.net’\n}as for the env variable, i did set in the correct username and password as well as put in the database name.\nand i also added my ip address to the network access section of the mongo atlas (i added 0.0.0.0/0 so that it can be accessed from anywhere)i did try to delete the cluster and initialize another one, but the problem still persists, it’s seems to be an issue with my internet or something like that.any suggestions on how i can fix this issue?", "username": "Djamel_Abbou" }, { "code": "", "text": "Hi @Djamel_Abbou and welcome to the MongoDB community forum!!The error mentioned looks like an srv lookup failure.Could you try using the connection string from the connection modal that specifies all 3 hostnames instead of the SRV record?\nYou can follow the steps below to get the connection URI.You can follow the documentation on Set up your Atlas cluster to make sure you are using the right steps to set up the cluster.You can also follow the documentation on Atlas troubleshoot Connection issue documentation for more information.Lastly, can you confirm, if you are able to connect to the Atlas cluster outside the application using Shell or Compass.?Regards\nAasawari", "username": "Aasawari" } ]
Unable to connect to mongo atlas on my local environment
2023-04-10T13:22:50.965Z
Unable to connect to mongo atlas on my local environment
897
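For anyone hitting the same querySrv timeout, this is roughly the shape of the long-form (non-SRV) connection string suggested above. Every host name, the replica-set name, and the credentials here are placeholders — the real string comes from the Atlas Connect dialog when an older driver version is selected:

// Placeholder values throughout — substitute the hosts shown in your own
// Atlas Connect dialog.
const uri =
  "mongodb://<user>:<password>@host-shard-00-00.example.mongodb.net:27017," +
  "host-shard-00-01.example.mongodb.net:27017," +
  "host-shard-00-02.example.mongodb.net:27017" +
  "/<dbname>?ssl=true&replicaSet=<replicaSetName>&authSource=admin";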
null
[ "serverless", "migration" ]
[ { "code": "", "text": "Hi everyone,I’m looking for a way to transfer my database from my serverless instance to a shared cluster. Is there a simple way to do it? If not, how can i export my whole database (data, tables & indexes) from my serverless instance and then import in my shared cluster ?Thanks in advance for your answers.", "username": "Valentin_CORSAIN" }, { "code": "Target cluster dropdownrestore", "text": "Hi @Valentin_CORSAIN,Welcome back to the MongoDB Community forums Is there a simple way to do it? If not, how can I export my whole database (data, tables & indexes) from my serverless instance and then import it into my shared cluster?Currently, Atlas does not support live migrating from Serverless instances to Shared/Dedicated Clusters at the moment, however, we expect to add this functionality in the future. Please refer to the documentation for further reference into the current Atlas Serverless limitations.Also, if the serverless instance runs on a rapid-release version of MongoDB, you can’t migrate to a shared cluster. To learn more, please refer to the Operational Considerations.In the interim, the easiest way would be to restore your Serverless instance to a Shared/Dedicated Cluster. Please follow the procedure below:Create a new Dedicated cluster by following our Guide.Note: Please make sure you select the MongoDB version of your dedicated cluster as Latest Rapid Release which would deploy in the same version as the Serverless instance.Once the new cluster is created it will appear in the ‘Target cluster dropdown’ and you can restore the two most recent snapshots from your Serverless instance to the Dedicated cluster by choosing from the Target cluster dropdown.You can also use the MongoDB tools available such as mongodump/mongorestore to migrate the data from Serverless to Shared/Dedicated Cluster, which would include downtime.Here you can use the MongoDB tool mongodump to create a binary export of the data. With the binary export (mongodump file) you could create a new database or add data to an existing database with mongorestore.Meanwhile, you can also refer to the FAQ - Atlas Serverless Instances to learn more about the difference between serverless instances and shared/dedicated clusters.I hope this helps. Please let me know if you have any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" } ]
Migrating from serverless cluster to shared cluster
2023-04-08T18:24:29.469Z
Migrating from serverless cluster to shared cluster
1,664
null
[ "node-js", "mongoose-odm" ]
[ { "code": "C:\\Users\\DELL\\Desktop\\MERN Ecommerce\\node_modules\\mongodb\\lib\\operations\\add_user.js:16\n this.options = options ?? {};\n ^\nSyntaxError: Unexpected token '?'\n at wrapSafe (internal/modules/cjs/loader.js:1070:16)\n at Module._compile (internal/modules/cjs/loader.js:1120:27)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1176:10)\n at Module.load (internal/modules/cjs/loader.js:1000:32)\n at Function.Module._load (internal/modules/cjs/loader.js:899:14)\n at Module.require (internal/modules/cjs/loader.js:1042:19)\n at require (internal/modules/cjs/helpers.js:77:18)\n at Object.<anonymous> (C:\\Users\\DELL\\Desktop\\MERN Ecommerce\\node_modules\\mongodb\\lib\\admin.js:4:20)\n at Module._compile (internal/modules/cjs/loader.js:1156:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1176:10)\nconst mongoose = require(\"mongoose\");\n\nconst connectDatabase = async () => {\n try {\n await mongoose.connect(`${process.env.DB_URI}`, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n });\n console.log('MongoDB connected');\n } catch (error) {\n console.log(error.message);\n process.exit(1);\n }\n};\n\nmodule.exports = connectDatabase;\n", "text": "Not able to connect with database getting this errorANd this is the code", "username": "shrikant_kumar1" }, { "code": "C:\\Users\\DELL\\Desktop\\MERN Ecommerce\\node_modules\\mongodb\\lib\\operations\\add_user.js:16\n this.options = options ?? {};\n ^\nSyntaxError: Unexpected token '?'\nnullundefinednode_modules[10:37:31] ➜ ~ node --version \nv12.13.0\n[10:38:35] ➜ ~ node test.js \n/Users/kushagra/node_modules/mongodb/lib/operations/add_user.js:16\n this.options = options ?? {};\n ^\n\nSyntaxError: Unexpected token '?'\n at Module._compile (internal/modules/cjs/loader.js:892:18)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:973:10)\n at Module.load (internal/modules/cjs/loader.js:812:32)\n at Function.Module._load (internal/modules/cjs/loader.js:724:14)\n at Module.require (internal/modules/cjs/loader.js:849:19)\n at require (internal/modules/cjs/helpers.js:74:18)\n at Object.<anonymous> (/Users/kushagra/node_modules/mongodb/lib/admin.js:4:20)\n at Module._compile (internal/modules/cjs/loader.js:956:30)\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:973:10)\n at Module.load (internal/modules/cjs/loader.js:812:32)\n[10:38:42] ➜ ~ node --version\nv14.20.0\n[10:39:20] ➜ ~ node test.js \nConnected to MongoDB\n", "text": "Hi @shrikant_kumar1,Welcome to the MongoDB Community forums The following error is occurring due to the compatibility issue with the Nullish coalescing operator (??) which is introduced in node version 14.0.0 and above. Please refer to mdn web docs for version compatibility.Here, the internal MongoDB node module package is utilizing this operator, resulting in the error.Here, Nullish coalescing operator (??) is a logical operator that returns its right-hand side operand when its left-hand side operand is null or undefined, and otherwise returns its left-hand side operand.Although, I presume that you are currently using a node version below 14.0.0. I suggest upgrading at least to version 14.20.1 and above which is also the new minimum supported Node.js version by MongoDB Node.js Driver v5 and then reinstalling the node_modules packages. 
This should hopefully resolve any issues you are experiencing.As a point of reference, I attempted to replicate your error on my own environment by running the provided code with a node version below 14.0.0 and encountered a similar error.After upgrading the node version to 14.0.0 or higher, it works as expected.I hope it helps. Let us know if you have any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to connect to database from the VS Code
2023-04-11T04:23:33.887Z
Not able to connect to database from the VS Code
3,970
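A two-line illustration of the nullish coalescing operator discussed above, runnable in Node.js 14 or newer:

const options = undefined;
console.log(options ?? {}); // {} — falls back only when null/undefined
console.log(0 ?? 42);       // 0  — unlike `0 || 42`, which yields 42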
null
[ "time-series" ]
[ { "code": "", "text": "Can the time field in the timeseries collection be in milliseconds versus an ISODATE?", "username": "Allan_Chase" }, { "code": "timeFieldtest> var mydate1 = new Date()\ntest> mydate1\nISODate(\"2023-04-10T06:59:40.755Z\")\n\ntest> var mydate2 = ISODate()\ntest> mydate2\nISODate(\"2023-04-10T06:59:53.231Z\")\ntstruets> db.sensor.insert({_id: ObjectId(\"642e640ebba9b652048e9be3\"), timestamp: new Date()})\n{\n acknowledged: true,\n insertedIds: { '0': ObjectId(\"642e640ebba9b652048e9be3\") }\n}\nmillisecondts> db.sensor.insert({_id: ObjectId(\"642e640ebba9b652048e9be3\"), timestamp: new Date().getTime()})\nUncaught:\nMongoBulkWriteError: 'timestamp' must be present and contain a valid BSON UTC datetime value\nResult: BulkWriteResult {\n insertedCount: 1,\n matchedCount: 0,\n modifiedCount: 0,\n deletedCount: 0,\n upsertedCount: 0,\n upsertedIds: {},\n insertedIds: { '0': ObjectId(\"642e640ebba9b652048e9be3\") }\n}\nWrite Errors: [\n WriteError {\n err: {\n index: 0,\n code: 2,\n errmsg: \"'timestamp' must be present and contain a valid BSON UTC datetime value\",\n errInfo: undefined,\n op: {\n _id: ObjectId(\"642e640ebba9b652048e9be3\"),\n timestamp: 1681110368277\n }\n }\n }\n]\ntest> timestamp.getTime();\n1681110368277\n", "text": "Hi @Allan_Chase,Welcome to the MongoDB Community Forums Can the time field in the time-series collection be in milliseconds versus an ISODATE?The timeField is a required field, that references the name of the field for the date in each document and it has to be of BSON type Date. It is a 64-bit integer that represents the number of milliseconds since the Unix epoch (Jan 1, 1970).For example, if you try to insert the timestamp with the ISODate, it will get inserted into the ts collection and will return an acknowledged true:However, if you try to insert the data after converting it into a millisecond, it will throw an error:To work around this you can convert it into a millisecond for your use case at the application level, like as follows:I hope it helps. Let us know if you have any further queries.Best,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Thank you Kushagra, that was a perfect example and explanation. I might, however, think about duplicating the column and create one to please the timeseries requirements and keep our original ts (so really they would be the same thing, different format).", "username": "Allan_Chase" }, { "code": "tstimeField", "text": "Hi @Allan_Chase,Thanks for your kind words. Yes, that could be a workaround of creating an additional field in the ts collection to store the timestamp in milliseconds apart from the default timeField field.~ Kushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Timeseries date support for millisecond format
2023-04-07T17:20:43.472Z
Timeseries date support for millisecond format
1,392
null
[ "php" ]
[ { "code": "", "text": "Hello - I have been following along the article series for PHP drier setup and am running into problems with the PHP driver extension installation. I have successfully installed all prereqs (including the libbson and libmongoc packages which I saw in another thread from August 2022 were the root of someone’s similar issues). While searching through the multiple php.ini files, I saw that in each one the “extension=mongodb.so” line already existed? And so I made no changes, restarted Apache, but when I run:php -i | grep mongodbI receive the following output:mongodb\nmongodb.debug => no value => no valueI haven’t been able to find a solution in any online documentation or prior threads about this topic.I am operating on macOS Ventura 13.3. Any suggestions would be much appreciated.Thank you in advance.", "username": "K_Bonner" }, { "code": "php --ri mongodbdebug", "text": "Hi @K_Bonner,the output you receive is expected: it lists the available config options for the mongodb extension, of which there is only one. Given the output it looks like the extension is correctly installed for the CLI SAPI. Another way to check this and get more information about the build environment is php --ri mongodb. This lists information about the various libmongoc and libmongocrypt options, along with the debug option you’ve already seen above.Could you elaborate what problems you are running into specifically?", "username": "Andreas_Braun" } ]
PHP driver extension installation issues
2023-04-03T13:47:31.849Z
PHP driver extension installation issues
879
null
[]
[ { "code": "", "text": "I am experiencing issues after several attempts.", "username": "Abdoulaye_DIALLO" }, { "code": "", "text": "Hi @Abdoulaye_DIALLO,Welcome to the MongoDB Community forums Can you please share the link to the lab and elaborate on what specific problem you are facing while attempting it?Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lab: Managing Databases, Collections, and Documents in Atlas Data Explorer issues
2023-04-10T21:45:16.061Z
Lab: Managing Databases, Collections, and Documents in Atlas Data Explorer issues
929
null
[]
[ { "code": "{\n \"hostname1\": {\n \"Port 80\": {\n \"Protocol\": \"tcp\",\n \"State\": \"\",\n \"Service\": \"open\"\n },\n \"Port 443\": {\n \"Protocol\": \"tcp\",\n \"State\": \"open\",\n \"Service\": \"\"\n }\n },\n \"hostname2\": {\n \"Port 80\": {\n \"Protocol\": \"tcp\",\n \"State\": \"\",\n \"Service\": \"open\"\n },\n \"hostname3\": {\n \"Protocol\": \"tcp\",\n \"State\": \"open\",\n \"Service\": \"\"\n }\n },\n \"hostname4\": {\n \"Port 80\": {\n \"Protocol\": \"tcp\",\n \"State\": \"\",\n \"Service\": \"open\"\n },\n \"Port 443\": {\n \"Protocol\": \"tcp\",\n \"State\": \"open\",\n \"Service\": \"\"\n }\n },\n \"hostname5\": {\n \"Port 80\": {\n \"Protocol\": \"tcp\",\n \"State\": \"\",\n \"Service\": \"open\"\n },\n \"Port 443\": {\n \"Protocol\": \"tcp\",\n \"State\": \"open\",\n \"Service\": \"\"\n }\n }\n}\n_id\n64345263495f463da1b28bbb\n\nPort 80\nObject\nProtocol\n\"tcp\"\nState\n\"\"\nService\n\"open\"\n\nPort 443\nObject\nProtocol\n\"tcp\"\nState\n\"open\"\nService\n\"\"\n_id\n64345263495f463da1b28bbc\n\nPort 80\nObject\nProtocol\n\"tcp\"\nState\n\"\"\nService\n\"open\"\n\nPort 443\nObject\nProtocol\n\"tcp\"\nState\n\"open\"\nService\n\"\"\n_id\n64345263495f463da1b28bbd\n\nPort 80\nObject\nProtocol\n\"tcp\"\nState\n\"\"\nService\n\"open\"\n\nPort 443\nObject\nProtocol\n\"tcp\"\nState\n\"open\"\nService\n\"\"\n_id\n64345263495f463da1b28bbe\n\nPort 80\nObject\nProtocol\n\"tcp\"\nState\n\"\"\nService\n\"open\"\n\nPort 443\nObject\nProtocol\n\"tcp\"\nState\n\"open\"\nService\n\"\"\n", "text": "I am trying to import a json file to an existing MongoDB collection. Here is the format to my json file:I want the structure of this collection to have the hostname as the index to each output scan. But for some reason the import does not include each hostname, and places a _ID where I would like to have the hostname. Here is what the structure of the collection looks like after the import:", "username": "Thomas_Hanley" }, { "code": "", "text": "You may pre process the file with jq to set the _id field to whatever value you want before importing to MongoDB.", "username": "steevej" } ]
Complications on importing a json file to an existing MongoDB collection
2023-04-10T18:34:50.484Z
Complications on importing a json file to an existing MongoDB collection
309
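As an alternative to the jq preprocessing suggested above, here is a hedged Node.js sketch of the same idea — reshape the hostname-keyed object into one document per host, with the hostname as _id, before running mongoimport. The file names are placeholders:

const fs = require("fs");

// Read the original hostname-keyed scan output.
const scans = JSON.parse(fs.readFileSync("scan.json", "utf8"));

// One document per host, keyed by _id so each hostname stays unique.
const docs = Object.entries(scans).map(([hostname, ports]) => ({
  _id: hostname,
  ...ports,
}));

fs.writeFileSync("scan.flat.json", JSON.stringify(docs, null, 2));
// Then import the array of documents with:
//   mongoimport --db <db> --collection <coll> --jsonArray --file scan.flat.json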
null
[ "atlas-cluster", "flutter" ]
[ { "code": "", "text": "Greetings, I’ve recently started developing a flutter app which needs to be able to work offline and sync data and figured I’d try realm but I am struggling to sync about 511 documents to the client app from the server.Below error constantly shows up on App Service Logs.connection(ac-xfugm72-shard-00-02.bke6qnv.mesh.mongodb.net:30444[-15871]) incomplete read of message header: context canceled; connection(ac-xfugm72-shard-00-02.bke6qnv.mesh.mongodb.net:30444[-15871]) incomplete read of message header: context canceled (ProtocolErrorCode=201)", "username": "Mfundo_Sydwell" }, { "code": "", "text": "Hi, all of those errors are transient and will actually no longer be logged in our next release. The log indicates that the client terminated the sync connection. This can happen for a variety of reasons including networking issues, closing the application, it transitioning to the background, etc.What other issues are you seeing/is there any bad behavior here or is it just the error messages worrying you?Everything seems to be fine in the logs outside of these errors (which I admit are confusing).Another thing I noticed is that you have an M0, which should be fine for nowm but we recommend trying to test with at least an M10 wen testing anything performance related.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "If I update the subscription query to only sync student collection that has a particular payment method for example I can see the students being synced to the client app (This syncs down a few documents compared to all the students of course ) but if the subscription query is for all the students which is about 511 documents, It doesn’t sync any students. Below is all it shows on the IDE logs and nothing ever happens. I left it overnight thinking that maybe it syncs in the background but nothing.I/flutter ( 3327): [INFO] Realm: Connection[1]: Session[1]: client_reset_config = false, Realm exists = true, client reset = falseI/flutter ( 3327): [INFO] Realm: Connected to endpoint ‘3.9.230.118:443’ (from ‘192.168.232.2:41213’)I/flutter ( 3327): [INFO] Realm: Verifying server SSL certificate using 155 root certificatesI/flutter ( 3327): [INFO] Realm: Connection[1]: Connected to app services with request id: “6432fec6d270bb39fb5a7e27”", "username": "Mfundo_Sydwell" }, { "code": "", "text": "Hi, I am taking a deeper look into this and I think the error is just that you are on an M0 which is not very powerful and has a lot of rate limiting occuring. 
Can you possibly upgrade to an M10 and try again to see if the error still exists?Please note that upgrading from the Shared Tier to the Dedicated Tier involves terminating sync first.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "I’ve upgraded to M10 and the following are errors i see on the IDE logsI/flutter (10087): [ERROR] Realm: Failed to connect to endpoint ‘35.177.238.136:443’: Connection refusedI/flutter (10087): [ERROR] Realm: Failed to connect to ‘ws.eu-west-2.aws.realm.mongodb.com:443’: All endpoints failedI/flutter (10087): [INFO] Realm: Connection[4]: Closing the websocket with status=‘SystemError: Connection refused’, was_clean=‘false’I/flutter (10087): [ERROR] Realm: SyncWebSocketError message: WebSocket: Connection Failed category: SyncErrorCategory.webSocket code: SyncWebSocketErrorCode.websocketConnectionClosedClientI/flutter (10087): [ERROR] Realm: Failed to connect to endpoint ‘35.177.238.136:443’: Connection refusedI/flutter (10087): [ERROR] Realm: Failed to connect to ‘ws.eu-west-2.aws.realm.mongodb.com:443’: All endpoints failedI/flutter (10087): [INFO] Realm: Connection[4]: Closing the websocket with status=‘SystemError: Connection refused’, was_clean=‘false’I/flutter (10087): [ERROR] Realm: SyncWebSocketError message: WebSocket: Connection Failed category: SyncErrorCategory.webSocket code: SyncWebSocketErrorCode.websocketConnectionClosedClient\n/flutter (10087): [ERROR] Realm: Failed to connect to endpoint ‘35.177.238.136:443’: Connection refusedI/flutter (10087): [ERROR] Realm: Failed to connect to ‘ws.eu-west-2.aws.realm.mongodb.com:443’: All endpoints failedI/flutter (10087): [INFO] Realm: Connection[4]: Closing the websocket with status=‘SystemError: Connection refused’, was_clean=‘false’I/flutter (10087): [ERROR] Realm: SyncWebSocketError message: WebSocket: Connection Failed category: SyncErrorCategory.webSocket code: SyncWebSocketErrorCode.websocketConnectionClosedClient", "username": "Mfundo_Sydwell" }, { "code": "", "text": "Hi, I believe those errors are transient and due to a deploy doing on. It seems like your app is now back to connecting but running into a different issue. Do you know how large these objects are? We seem to have some indications in the logs that they are all on the order of 3 megabytes and that is what is causing the problem. Going to toggle a setting for you that should hopefully unblock you.", "username": "Tyler_Kaye" }, { "code": "", "text": "I am not sure how big each document is but It has a base64 image in it which shouldn’t be that big", "username": "Mfundo_Sydwell" }, { "code": "", "text": "Got it. Our metrics show that the average document size is 3MB. We applied a setting and think we have confirmed you should be working now. Let me know if you are still seeing issues.Thanks,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "It is syncing now. I can see the documents come up on the client. just the to confirm, the issue was a setting on your side that needed to be changed ?", "username": "Mfundo_Sydwell" }, { "code": "", "text": "This is a setting on our side. We also just pulled in some work to prevent this from happening for anyone else. The TLDR is that the initial bootstrap was generating too large of entries due to the large average size of documents you have.", "username": "Tyler_Kaye" } ]
Struggling to sync about 511 documents to the client app using flexible sync
2023-04-09T16:07:33.710Z
Struggling to sync about 511 documents to the client app using flexible sync
886
https://www.mongodb.com/…ec8885977aa.jpeg
[ "dallas-mug" ]
[ { "code": "Staff Solution Engineer for Confluent Director of Presales Engineering at Eliassen Group", "text": "MongoDB and Confluent : Better Together!\n\nGraphic for February MongoDB Meet Up (1)960×540 68.3 KB\nConfluent Senior Architect Britton LaRoche will be presenting how to create super scalable event driven applications using MongoDB and Confluent.Event runs from 6 pm to 7 pm with dinner provided. Vegan options available.More details at Dallas MongoDB Meet Up - February 2023 - YouTubeEvent Type: In-Person\nLocation: 222 Betchan Dr, Lake Dallas, TX 75065Staff Solution Engineer for Confluent\nhttps://www.linkedin.com/in/brittonlaroche/Britton recently authored a brilliant LinkedIn article, Effective Digital Transformation with the New \"Kafcongo\"​ Tech Stack: Kafka, Confluent & MongoDB Director of Presales Engineering at Eliassen Group\nhttps://www.linkedin.com/in/allen-cordrey-9533a0b/ To RSVP - Please click on the “ ✓ RSVP ” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going. You need to be signed in to access the button.Do you use Eventbrite? You can also RSVP on Eventbrite here.", "username": "Allen_Cordrey" }, { "code": "", "text": "Hi, when is the next Dallas meet-up? Thanks!", "username": "Christina_Chao" }, { "code": "", "text": "Hi Christina!It will be held on May 17th. You can find additional details here: May Dallas MongoDB User Meet Up Tickets, Wed, May 17, 2023 at 5:30 PM | EventbriteThis event will be posted soon in the Dallas group within this community platform as well.", "username": "Veronica_Cooley-Perry" }, { "code": "", "text": "Thanks, Veronica! Are there any other Texas cities that host MUGs?", "username": "Christina_Chao" }, { "code": "", "text": "Not currently - if you know of new location and people who are interested in starting MUGs let me know!", "username": "Veronica_Cooley-Perry" } ]
Dallas MongoDB February Meet-up
2023-01-17T16:00:50.945Z
Dallas MongoDB February Meet-up
2,411
https://www.mongodb.com/…_2_1024x575.jpeg
[ "charts" ]
[ { "code": "", "text": "I am getting the error \"Error loading data for this chart \" when embedding one of my charts. The chart data loads fine when viewing it through mongodb Atlas dashboards. Is it possible to get more details as to why the data cannot be retrieved when embedding?The error i am receiving is considered an unknown error in the embedded charts error docs https://www.mongodb.com/docs/charts/embedded-chart-error-codes/\n\nimage1526×858 50.4 KB\n", "username": "Paula_farrugia" }, { "code": "", "text": "Hi @Paula_farrugia -Sorry to hear you are having problems. Can you help me with a bit more info:Tom", "username": "tomhollander" }, { "code": "", "text": "Hi @tomhollander , Thanks for your reply!I am using the charts-embed-dom SDK and i am running an aggregation to filter the data before showing it in the chart. The chart IDs are as follows:\n7e45fd46-b026-424a-93ce-96a631a071c3\n638e4800-6b6e-4263-8bd2-43649d886979\n5eeaf218-c8ca-45ba-b805-084be3b2c8e0The same aggregation is also being used in a similar chart (638e4505-6b6e-491c-8f53-43649d83c00f) but the data is loading, which is why i was curious to see the errors as the other charts were also loading fine up till a few days ago.My guess was that the only difference between them is the one that’s loading has to filter a lot less records, but there’s no way for me to confirm this as when loading through the charts dashboard itself, everything loads fine.", "username": "Paula_farrugia" }, { "code": "", "text": "It doesn’t look like these charts are enabled for embedding? Did you check this setting?\nimage898×508 21.9 KB\n", "username": "tomhollander" }, { "code": "", "text": "I am actually using authenticated access and it is enabled on both dashboard and the chart itself. The data was loading up till a few days ago and the settings have not changed", "username": "Paula_farrugia" }, { "code": "", "text": "Oh right, sorry I forgot to check the dashboard settings. Is it possible to briefly enable unauthenticated embedding and see if that renders without the token? If so that would imply that issue is to do with the token verification (or if not it will give other clues).", "username": "tomhollander" }, { "code": " {\n $addFields: {\n lastMonth: {\n $dateToParts: {\n date: {\n $dateSubtract: {\n startDate: \"$$NOW\",\n unit: \"month\",\n amount: 1\n }\n }\n }\n }\n }\n },\n", "text": "Hi Tom, i enabled it on both dashboard and chart but still got the same error.After doing some debugging it seems that after removing this part of the aggregation, it works:Is there a reason you could think of why this works fine on atlas but not when embedding?", "username": "Paula_farrugia" }, { "code": "", "text": "That’s really strange. The aggregation stage is valid, and I just tested it on one of my embedded charts and it works fine. Can you email me at tom.hollander @ mongodb.com so we can troubleshoot further?", "username": "tomhollander" }, { "code": "", "text": "Just to close off the loop here. As discussed offline, in this particular case, the authenticated embedding uses App Service as authentication provider with the “Fetch data using Atlas App Services” enabled. When rules configured in App Services, it will only support aggregation stages/expressions supported by MongoDB 4.4 for user functions. 
$dateSubtract is an aggregation expression that has been added with MongoDB 5.0, hence the App Service raised an error which causes the “Cannot retrieve data” error in Charts.Until the 5.0 aggregation expression is supported, the workaround for this issue is to disable the “Fetch data using Atlas App Services” option from the authentication provider setting.", "username": "James_Wang1" }, { "code": "", "text": "@Paula_farrugia was this issue ever resolved? im having exactly same behavior for new users, they get the “Cannot retrieve data” message, even when they have same permissions as old users…", "username": "Agustin_Osorio" }, { "code": "", "text": "You can have a look at James’ comment above which explains the temporary solution we found MongoDB Chart \"Cannot retrieve data\" - #9 by James_Wang1", "username": "Paula_farrugia" } ]
MongoDB Chart "Cannot retrieve data"
2023-01-12T17:40:52.556Z
MongoDB Chart “Cannot retrieve data”
2,646
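For anyone who needs to keep “Fetch data using Atlas App Services” enabled, a possible 4.4-compatible rewrite of the failing stage: derive last month from $$NOW with $dateToParts plus plain arithmetic instead of $dateSubtract. This is a sketch, untested against App Services rules, and it only reproduces the year and month parts (the original stage also carried day, hour, etc.):

{
  $addFields: {
    lastMonth: {
      $let: {
        vars: { now: { $dateToParts: { date: "$$NOW" } } },
        in: {
          // January rolls back to December of the previous year.
          year: {
            $cond: [{ $eq: ["$$now.month", 1] }, { $subtract: ["$$now.year", 1] }, "$$now.year"]
          },
          month: {
            $cond: [{ $eq: ["$$now.month", 1] }, 12, { $subtract: ["$$now.month", 1] }]
          }
        }
      }
    }
  }
}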
null
[ "sharding", "ruby" ]
[ { "code": " \"_configsvrShardCollection\"=>\n {\"help\"=>\n \"Internal command, which is exported by the sharding config server. Do not call directly. Shards a collection. Requires key. Optional unique. Sharding must already be enabled for the database\",\n \"requiresAuth\"=>true,\n \"slaveOk\"=>false,\n \"adminOnly\"=>true},\n", "text": "Is there a way to run shardCollection using the ruby driver? I did a listCommands and the closest thing seems to be an internal command:", "username": "Neil_Ongkingco" }, { "code": "shardCollectionenableShardingadminrequire 'mongo'\n# connect to local cluster's test database\nclient = Mongo::Client.new('mongodb://localhost:27017/test')\n# connect to local cluster's admin database\nadmin = client.use(:admin)\n# enable sharding of the \"test\" database\nadmin.database.command(enableSharding: \"test\")\n# create an index on test.foo\nclient[:foo].indexes.create_one(_id: \"hashed\")\n# shard the test.foo collection on the index we just created\nadmin.database.command(shardCollection: \"test.foo\", key: { _id: \"hashed\" })\n", "text": "The shardCollection command can be run via the Ruby driver directly. Note that if sharding hasn’t been enabled for the database yet you’d need to run enableSharding first.Both commands above are admin commands, so they’d need to be run against the admin database. For example:", "username": "alexbevi" }, { "code": "", "text": "Oh I wasn’t using the admin db. Thanks, I’ll try this out", "username": "Neil_Ongkingco" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
shardCollection in ruby
2023-04-10T16:24:36.213Z
shardCollection in ruby
834
null
[ "field-encryption" ]
[ { "code": "", "text": "I have try to install libmongocrypt using apt-get on Unbuntu jammy distribution in according to the guide: https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/libmongocrypt.\nI run the following command to set the respository:\necho “deb https://libmongocrypt.s3.amazonaws.com/apt/ubuntu jammy/libmongocrypt/1.7 universe” | sudo tee /etc/apt/sources.list.d/libmongocrypt.listAnd\nsudo apt-get install -y libmongocrypt\noutput the error: Unable to locate package libmongocryptI check the repository url by Chrome: https://libmongocrypt.s3.amazonaws.com/apt/ubuntu/dists/jammy/libmongocrypt/1.7 It just shows an error xml document.Is the repository url wrong or there any other problem?", "username": "kurt_zhu" }, { "code": "libmongocrypt-devsudo apt-get update", "text": "Try libmongocrypt-dev also make sure you run sudo apt-get update first too.", "username": "chris" }, { "code": "", "text": "Hello kurt_zhu and welcome,Aside from a few languages, you shouldn’t need to install libmongocrypt yourself as it is packaged with most drivers. What language driver are you using? And please make sure to check either the Compatibility table - https://www.mongodb.com/docs/manual/core/queryable-encryption/reference/compatibility/#std-label-qe-compatibility-reference for Queryable Encryption or the Compatibility table here - https://www.mongodb.com/docs/manual/core/csfle/reference/compatibility/#std-label-csfle-compatibility-reference for Client-Side Field Level Encryption to ensure you are using a supported driver.Cynthia", "username": "Cynthia_Braund" }, { "code": "", "text": "Thanks. I have run “apt-get install libmongocrypt-dev”. And I have found it is installed at “/usr/lib/x86_64-linux-gnu/libmongocrypt.so.0.0.0”,But i have a doubt still: why can not i enter the directory? https://libmongocrypt.s3.amazonaws.com/apt/ubuntu/dists/jammy/libmongocrypt/1.7For other repositories, such as\nIndex of /ubuntu/dists/jammy/multiverse,\nI can enter always.", "username": "kurt_zhu" }, { "code": "", "text": "Hi Cynthia,\nI use nodejs and i have installed the mongodb-client-encryption for nodejs. Now I know i needn’t install libmongocrypt.\nAnd the test has success.\nThanks,\nKurt.", "username": "kurt_zhu" }, { "code": "", "text": "That is great to hear. I’m glad it is working!Cynthia", "username": "Cynthia_Braund" } ]
Unable to locate package libmongocrypt using apt-get
2023-04-07T09:29:02.697Z
Unable to locate package libmongocrypt using apt-get
1,216
null
[ "crud" ]
[ { "code": "{\n _id: new ObjectId(\"64301d2402240ae566748671\"),\n user_id: new ObjectId(\"6426fae9ff0a6b4366bfb471\"),\n queries: [\n {\n text: 'ovo je rtestset',\n _id: new ObjectId(\"64301f1402240ae5667486c5\")\n },\n {\n text: 'ads fasdf asd f',\n _id: new ObjectId(\"64314dad9078b3cbf3bb583c\")\n },\n {\n text: 'sadf asdf asdf ',\n _id: new ObjectId(\"6431518c9078b3cbf3bb58d7\")\n },\n {\n text: 'asdf asdf asdf asdf ',\n _id: new ObjectId(\"6431518f9078b3cbf3bb58e0\")\n },\n {\n text: 'asdf asdf asdf asdf ',\n _id: new ObjectId(\"643151929078b3cbf3bb58ea\")\n }\n ],\n __v: 0\n}\n1. Queries.updateOne({\"user_id\": user.id}, {$pull: {\"queries.$._id\": query_id}})\n2. Queries.updateOne({\"user_id\": user.id}, {$pull: {\"queries\": {\"_id\": query_id}}})\n", "text": "I just can’t seen to get this thing to work. I have this data set:I’m try to delete one specific query from the nested array based on id. I get the user_id form the session and the query id from req.query. My attempts were as follows:I’ve tried multiple other options but none with the result I wanted. If I did get the req to get to the db, but it just deletes the last object from the array for some reason.Any suggestions?", "username": "Mo_F" }, { "code": "2. Queries.updateOne({\"user_id\": user.id}, {$pull: {\"queries\": {\"_id\": query_id}}})Queries.updateOne({\"user_id\": user.id}, {$pull: {\"queries\": {\"_id\": new ObjectId(query_id)}}})`\n", "text": "Given the data you shared and assuming that query_id is an ObjectId, the correct form would be2. Queries.updateOne({\"user_id\": user.id}, {$pull: {\"queries\": {\"_id\": query_id}}})But what I suspect is that query_id is a string rather than an ObjectId while your data is correctly stored as an ObjectId. I would try", "username": "steevej" }, { "code": "", "text": "Yup that’s it, ty man, have a good one.", "username": "Mo_F" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Delete object from nested array by id
2023-04-08T14:01:10.808Z
Delete object from nested array by id
975
null
[ "queries" ]
[ { "code": "", "text": "error connecting to host: could not connect to server: connection() error occurred during connection handshake: auth error: sasl conversation error: unable to authenticate using mechanism “SCRAM-SHA-1”: (AtlasError) bad auth : authentication failed.", "username": "Naga_Tere" }, { "code": "", "text": "Can you connect to your cluster by shell?\nHave you whitelisted your IP?\nAre you giving correct user id/PWD?", "username": "Ramachandra_Tummala" }, { "code": "mongoimport -h $host --port 27117 -d <dbname> -c <coll_name> --type json --file collection.date.json\nmongo_uri=\"mongodb://$user:$pass@$host:27117/$dbname?authSource=$authdb\"\n\nmongoimport --uri=$mongo_uri -c <collection_name> --type json --file collection.data.json\n", "text": "i was facing the same issue when i try to import using, look like some Security config enabled while hardening the password Encryption.worked with the below approachnot sure what the magic is. give a try** Does your password contain special char?", "username": "psram" }, { "code": "", "text": "Thanks for the reply, but i solved it on my own by recreating the different cluster.", "username": "Naga_Tere" }, { "code": "", "text": "@Syed_Sheraz, your clueless answer seems to be coming straight out of ChatGPT’s mouth.The issue is about is authentication failed.Only ChatGPT would point to a formatting error of the JSON file.Only ChatGPT would recommend to review the error message which is shared in the original post.Only ChatGPT would recommend to consult the community forums in the community forums. Most likely because you did not tell ChatGPT that you were already in the community forums.Anyway I am flagging you post since you trying to hide SPAM in it. I will also flag your other post for the same reason.", "username": "steevej" } ]
When I am trying to import JSON file data into MongoDB Atlas I am getting the following error
2023-04-08T18:11:31.699Z
When I am trying to import JSON file data into MongoDB Atlas I am getting the following error
667
null
[ "queries" ]
[ { "code": "", "text": "what is wildcard in mongoDB ? Give me details of wildcard index or search with example…", "username": "Ajay_Moladiya" }, { "code": "?*\\productMetadata{ \"productMetadata\": { \"category\": \"electronics\", \"brand\": \"Apple\" } } \n{ \"productMetadata\": { \"category\": \"clothing\", \"brand\": \"Nike\" } } \n{ \"productMetadata\": { \"category\" : \"electronics\", \"manufacturer\": \"Sony\" } } \n{ \"productMetadata\": \"out of stock\" }\nproductMetadataproductMetadataproductMetadataproductMetadata.categoryproductMetadata.brandproductMetadata.manufacturerdb.products.createIndex( { \"productMetadata.$**\" : 1 } )\ndb.products.find({ \"productMetadata.category\" : \"electronics\" })\ndb.products.find({ \"productMetadata.brand\" : \"Nike\" })\ndb.products.find({ \"productMetadata.manufacturer\" : \"Sony\" })\ndb.products.find({ \"productMetadata\" : \"out of stock\" })\nproductMetadata.$**$**productMetadataproductMetadata", "text": "Hi @Ajay_Moladiya,Welcome to the MongoDB Community forums The wildcard operator enables queries that use special characters in the search string that can match any character.To learn more about wildcard operators, please refer to this linkUse a wildcard operator in an Atlas Search query to match any character.Wildcard indexing is an index that can filter and automatically matches any field, sub-document, or array in a collection and then index those matches.Consider an application that captures product data under the productMetadata field and supports querying against that data:Now, the user wants to create indexes to support queries on any subfield of productMetadata. A wildcard index on productMetadata can support single-field queries on productMetadata, productMetadata.category, productMetadata.brand, and productMetadata.manufacturer:The index can support the following queries:In all of these queries, the index on productMetadata.$** would be used to speed up the query. The $** wildcard specifier matches all subfields of the productMetadata field, so the index can be used to satisfy any query that involves productMetadata.To learn more, please refer to this link:I hope it helps. Let us know if you have any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What is Wildcard in mongoDB
2023-04-09T15:26:55.933Z
What is Wildcard in mongoDB
2,610
null
[ "aggregation" ]
[ { "code": "$lookuplocalFieldforeignField$lookupdb.mycollection.aggregate([\n {\n $lookup: {\n from: 'mycollection',\n localField: 'symbol',\n foreignField: 'index_id',\n as: 'foo',\n },\n },\n]);\n", "text": "I have two collection and need to performs a left outer join on it. With $lookup is works good but the problem is that the collections come from some API where the value of localField is in lower case and foreignField is in upper case.How to use $lookup to ignore or to lowerCase the value(s)? I try something but I have no chance. I found a solution but is for a older version of mongodb. If you have any experience please help ", "username": "Duracell" }, { "code": "//Case Insensitive ID collection \n\ndb.createCollection(\"caseinsensitive_up2\", {\n collation: {\n \"locale\": \"en\",\n \"strength\": 1\n }\n})\n\ndb.caseinsensitive_up2.find({})\n\ndb.caseinsensitive_up2.save({ _id: \"RAMKUMAR\", name: \"ramkumar\" })\ndb.caseinsensitive_up2.save({ _id: \"ramkumar\", name: \"ramkumar232\" })\ndb.caseinsensitive_up2.save({ _id: \"rAmKumar\", name: \"ramkumar\" })\n\ndb.caseinsensitive_up2.find({ _id: 'rAmKumar' })\n\ndb.caseinsensitive_up2.update(\n { _id: 'RAmKuMaR' },\n { name: \"ramkumar232\" },\n { upsert: true }\n)\n\n", "text": "this can be achieved automated way supporting both case with out touching the data.\nCreate Case Insensitive index and then try to match it.", "username": "psram" }, { "code": "", "text": "The solution:\nhttps://stackoverflow.com/questions/75961851/how-to-lookup-with-case-sensitive-values-localfield-in-mongodb-v6/75961969#75961969", "username": "Duracell" }, { "code": "", "text": "pretty over complicated ", "username": "Duracell" } ]
How to $lookup with case sensitive values localField, in mongoDB V6
2023-04-07T21:04:20.228Z
How to $lookup with case sensitive values localField, in mongoDB V6
534
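One way to express the case-insensitive join in plain MQL — not necessarily the Stack Overflow answer linked above — is to normalise both sides with $toLower inside a $lookup sub-pipeline; the field and collection names follow the question. Note that this form cannot use an index on index_id, so psram’s case-insensitive collation is the better fit for large collections:

db.mycollection.aggregate([
  {
    $lookup: {
      from: "mycollection",
      let: { sym: { $toLower: "$symbol" } },
      pipeline: [
        // Compare both values in lower case.
        { $match: { $expr: { $eq: [{ $toLower: "$index_id" }, "$$sym"] } } }
      ],
      as: "foo"
    }
  }
]);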
null
[ "java", "database-tools" ]
[ { "code": "", "text": "I want to get the output of printReplicationInfo or getReplicationInfo using Java but I could not get can any one help me out.I am using mongo reactive streams 4.8.2 driver.", "username": "viveka_ps" }, { "code": "DB db = mongoClient.getDatabase(\"admin\");\nDocument documentA = db.runCommand(new Document(\"getReplicationInfo\",1));\n", "text": "getReplicationInfo**make sure you have admin rights for the user.\nthe below snippet will provide you the required data", "username": "psram" } ]
How to get data for printReplicationInfo() using java
2023-04-08T12:44:15.216Z
How to get data for printReplicationInfo() using java
637
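One caveat worth checking before relying on the snippet above: printReplicationInfo()/getReplicationInfo() are mongosh shell helpers rather than server commands, so runCommand may report an unknown command. The server-side command that carries equivalent member and oplog timing information is replSetGetStatus — here is a mongosh sketch; from the Java driver the same document comes back from runCommand on the admin database:

// Requires a user with the clusterMonitor (or equivalent) privilege.
const status = db.adminCommand({ replSetGetStatus: 1 });
printjson(status.members.map(m => ({
  name: m.name,
  state: m.stateStr,
  optimeDate: m.optimeDate, // feeds the lag figures printReplicationInfo() derives
})));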
null
[]
[ { "code": "", "text": "Hello,I am more familiar with Relational databases and have a design question for a set of Collections.I am working on a database that has Event Types, Categories, Sub Categories and Products.For each Event Type, Category and SubCategory there can be many Products listed under each combination.Do I just need to create one more collection that references one Event Type, Category, and SubCategory with multiple products?The idea is that on a site a customer would select the event type, category and subCategory and a list of products will show up.The example is a Wedding is the Event Type, Reception is the Category and Goblet is the SubCategory. I would get a list of goblets as the products.Thank you in advance", "username": "Chad_Elofson" }, { "code": "product_collectionvar product_collection = {\n Id: UUID(),\n name: \"name of the product\",\n event: \"WEDDING\",\n category: \"RECEPTION\",\n sub_category: \"GOBLET\"\n}\n\nvar event_collection = [{\n Id: \"WEDDING\",\n name: \"name\",\n description: \"Wedding Event\"\n}]\nvar category_collection = [{\n Id: \"RECEPTION\",\n name: \"name\",\n description: \"description\"\n}]\nvar sub_category_collection = [{\n Id: \"GOBLET\",\n name: \"name\",\n description: \"sub_category\"\n}]\n", "text": "Gobletyou can do this in many ways.\nwhen it comes to Mongo DB you can Embed any level of Sub Documents.\nif your sub document contain many fields then better to use reference collection. SpringData support with annotation @Dbref, as below data model.if your sub document is simple just to do categorise just product_collection should work for you.when you come from RDBMS to NoSQL this is some thing we need to consider how it is consumed.", "username": "psram" } ]
Design Question for multiple relationships
2023-04-09T18:43:50.238Z
Design Question for multiple relationships
420
null
[ "aggregation" ]
[ { "code": " \"operationsHistory\": [\n {\n \"operationName\": \"buying\",\n \"amount\": 5,\n },\n {\n \"operationName\": \"deleting\",\n },\n {\n \"operationName\": \"buying\",\n \"amount\": 5,\n },\n {\n \"operationName\": \"buying\",\n \"amount\": 2,\n },\n {\n \"operationName\": \"deleting\",\n },\n {\n \"operationName\": \"deleting\",\n },\n {\n \"operationName\": \"buying\",\n \"amount\": 5,\n },\n {\n \"operationName\": \"deleting\",\n },\n {\n \"operationName\": \"buying\",\n \"amount\": 3,\n }\n ]\n \"operationsCount\": {\n \"buying\": 5,\n \"selling\": 0,\n \"deleting\": 4\n },\n", "text": "I have an array with objects with operations.\nI need to count the number of all operations and display them in a separate object.\nThank you in advance!Array with objects with operationsWhat do I want to get", "username": "Gleb_Ivanov" }, { "code": "", "text": "You simply $unwind the array operationsHistory, then $group: by _id:operationName using the $sum accumulator.", "username": "steevej" }, { "code": "db.sample.aggregate()\n.unwind(\"$operationsHistory\")\n.project({amount: \"$operationsHistory.amount\", operationName: \"$operationsHistory.operationName\"})\n.group({ _id: \"$operationName\", count: {$sum: \"$amount\"}})\n", "text": "use this below MQL to do the above request", "username": "psram" } ]
How to count the number of certain elements in an array with objects?
2023-04-09T04:08:46.621Z
How to count the number of certain elements in an array with objects?
421
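Two notes for readers: the fluent .unwind()/.group() chaining above is Mongoose’s aggregate builder, not plain mongosh, and summing amount gives totals rather than the operation counts the question asks for. A mongosh sketch that produces the requested operationsCount shape — the collection name is a placeholder, and operations absent from the array (such as selling) simply won’t appear as keys:

db.operations.aggregate([
  { $unwind: "$operationsHistory" },
  // Count occurrences of each operation name.
  { $group: { _id: "$operationsHistory.operationName", n: { $sum: 1 } } },
  // Fold the per-operation counts into a single object.
  { $group: { _id: null, pairs: { $push: { k: "$_id", v: "$n" } } } },
  { $project: { _id: 0, operationsCount: { $arrayToObject: "$pairs" } } }
]);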
null
[ "queries" ]
[ { "code": "", "text": "I have troubles with my evaluation of the Managing Databases, Collections, and Documents in Atlas Data Explorer I did all the step but the instruqt send the next message “The users collection document count was incorrect. Please try again”", "username": "Gerardo_Zavala" }, { "code": "", "text": "I am having the exact same issue which is extremly frustrating. I guess it’s an issue on the product …", "username": "Clement_Enjolras" }, { "code": "", "text": "Same exact problem, can’t complete the exercise - “The users collection document count was incorrect. Please try again”", "username": "Toms_Feierabends" }, { "code": "", "text": "Same issue here for both my partner and I. Support email link doesn’t work either.", "username": "Steven_Panfil" }, { "code": "", "text": "This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Experiencing issues after several attempts
2023-03-08T03:09:32.634Z
Experiencing issues after several attempts
619
null
[ "aggregation", "queries", "java" ]
[ { "code": "", "text": "Here is my sample Collection data :{\n“_id”: “63f52a5240c6bf8fa4dd9a79”,\n“Protection_Level”: “1”,\n“Stock_Count”: “10”,\n“Product_ID”: “11468752”,\n“LAST_UPDATED_DTTM”: “20230216134424”,\n“Store_ID”: “100”,\n“ABC_Protection_Level”: “4”\n}I got like these 1.7Million records with product/store and its stock. How would I create aggregation as below ?I am exploring $Group / $bucket / $bucketauto - couldn’t find easy way to write the pipeline according to my need. Any help would be appreciated!", "username": "Rajavel_Selvaraj_Ganesan" }, { "code": "", "text": "Please share what you tried so far. This way we avoid losing time working on a solution you already know it is not working. This could also save us by simply adapting or improving on a partially working solution.Stock count by store for each productsPlease share the sample result you would like.Bucket them into chunks to process the collection data easily for posting via API ?The above requirement is not clear enough to propose anything meaningful.", "username": "steevej" }, { "code": "", "text": "I got 3 Million records within the collection.10K Unique products200+ StoresI was thinking to create aggregation pipeline as below to utilize within my springboot API to post into a vendor api.Stock/Inventory count for each product across all the stores (so, it will be 10K API Calls instead of 3M API Calls) [OR]Bucket them into chunks to process the collection data easily for posting into vendor api from the pipelineHere are the options I am trying ://Mongo Community\n//Group Inventory data\n{\n$group: {\n“_id”: {\nProduct_id: “$Product_ID”,\nStore_id: “$Store_ID”,\nStock_cnt : “$Stock_Count”\n}\n}\n}// Bucket Auto\n//Bucket Auto Inventory Example ; I couldn’t think through how to then create subsequent stagesdb.artwork.aggregate( [\n{\n$bucketAuto: {\ngroupBy: “$Product_ID”,\nbuckets: 10\n}\n}] )//Another Bucket approach//Bucket Stock Data ; But it defaults them into “OTHER” as overall SUM INVENTORY DATAdb.artists.aggregate( [\n// First Stage\n{\n$bucket: {\ngroupBy: “$Stock_Count”, // Field to group by\nboundaries: [ 10, 25, 35, 45, 55, 80, 100 ], // Boundaries for the buckets\ndefault: “Other”, // Bucket ID for documents which do not fall into a bucket\noutput: { // Output for each bucket\n“count”: { $sum: 1 },\n“artists” :\n{\n$push: {\n“Product_ID”: $Product_ID ,\n“Store_ID”: $Store_ID ,\t\t\n“Stock_Count”: “$Stock_Count”\n}\n}\n}\n}\n}Please let me know if this helps.", "username": "Rajavel_Selvaraj_Ganesan" }, { "code": "\n[\n {\n $group:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n _id: {\n Product_ID: \"$Product_ID\",\n Store_ID: \"$Store_ID\",\n Stock_Count: \"$Stock_Count\",\n },\n },\n },\n {\n $group:\n /**\n * _id: The id of the group.\n * fieldN: The first field name.\n */\n {\n _id: \"$_id.Product_ID\",\n Subset: {\n $push: {\n Store_ID: \"$_id.Store_ID\",\n Stock_Count:\n \"$_id.Stock_Count\",\n },\n },\n },\n },\n]\n", "text": "I was able to find the solution by myself - thanks to MongoDB University videos & Stackoverflow Discover our MongoDB Database Management courses and begin improving your CV with MongoDB certificates. 
Sharing the working code below for anyone coming across a similar scenario.//Getting Products and their stores + inventory count using a nested group scenario", "username": "Rajavel_Selvaraj_Ganesan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Group By Specific Column within a Collection
2023-04-09T00:15:36.255Z
Group By Specific Column within a Collection
779
null
[]
[ { "code": "classroomrollnumrollnum", "text": "Okay so this is my first time in mongoDB.\nConsider that I have a classroom database which has 5 collections, history, geography, science, mathematics and politics. In each collection, every entry represents a student’s perfomance in that subject.\nThe rollnum field uniquely identifies each student in a collection.Now is there a way to get grades of the given rollnum from all the 5 collections, from a single query?Also, I am working with Node.js if that helps.", "username": "Akash_Shanmugaraj" }, { "code": "[\n {\n rollnum: 1,\n grades: {\n history: A,\n geography: B,\n science: A,\n mathematics: C,\n politics: B\n }\n },\n {\n rollnum: 2,\n grades: {\n history: B,\n geography: C,\n science: A,\n mathematics: A,\n politics: C\n }\n },\n]\n", "text": "Hello @Akash_Shanmugaraj, Welcome to the MongoDB community forum!I want to suggest you make one collection with students and each will have an object with grades like:And when you want to change the grades of a certain student, you just refer to his rollnum", "username": "Gleb_Ivanov" }, { "code": "", "text": "Thanks for the reply @Gleb_Ivanov!Actually, there are extra fields like percentile, date of completion, etc.\nAnd I am talking about a scale of over 80 databases, 3000 collections and about 4,000 to 6,000 pings a day, within a short span.Would this be, say time efficient and query efficient, rather than having 5 separate find query for each collection?", "username": "Akash_Shanmugaraj" }, { "code": "", "text": "I can’t say anything about this, because unfortunately I haven’t worked with projects of this size yet.", "username": "Gleb_Ivanov" }, { "code": "{\n rollnum: 1,\n grades: [\n { class:history, grade:A },\n { class:geography, grade: B },\n { class:science, grade: A },\n { class:mathematics, grade: C },\n { class:politics, grade: B }\n ]\n },\n {\n rollnum: 2,\n grades: [\n { class:history: grade:B } ,\n /* ... */\n ]\n },\n", "text": "I would recommend a slight variation of this that uses the attribute pattern.", "username": "steevej" }, { "code": "", "text": "Would this be, say time efficient and query efficient, rather than having 5 separate find query for each collection?Yes it will.Withover 80 databases, 3000 collections and about 4,000 to 6,000 pings a dayyou are about to implement the massive number of collection anti-pattern.", "username": "steevej" }, { "code": "{\n rollnum: 1,\n grades: [\n { class:history, grade: A, percentile: x, completion_date: y } ,\n { class:geography, grade: B , /* ... */ } ,\n /* ... */\n ]\n }\n", "text": "Actually, there are extra fields like percentile, date of completion, etc.With the attribute pattern you may easily have an object rather than a single grade field such as:", "username": "steevej" } ]
Querying data with a common key across multiple collections under same database
2023-04-09T08:34:25.505Z
Querying data with a common key across multiple collections under same database
529
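If the attribute-pattern document suggested above is adopted, a single multikey compound index covers the per-student, per-class lookups — a sketch with an assumed collection name of students:

db.students.createIndex({ rollnum: 1, "grades.class": 1 });

// Grades for one student across all five subjects, in one query:
db.students.findOne({ rollnum: 1 }, { _id: 0, grades: 1 });

// Just the history grade, via the positional projection:
db.students.find(
  { rollnum: 1, "grades.class": "history" },
  { _id: 0, "grades.$": 1 }
);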
null
[ "replication", "python" ]
[ { "code": "", "text": "Hi, while I am connecting to database via Python script by using Pymongo facing the below error on LDAP-enabled replica set servers (2 data nodes & 1 Arbiter)Error connecting to MongoDB: Private key doesn’t match certificate: [(‘SSL routines’, ‘’, ‘ca md too weak’)Mongodb Version: Percona 5.0.14\nPython Version:3.7.5\nOS: CentOS 7", "username": "Jagan_62817" }, { "code": "", "text": "The settings on the CA that issued the certificate are too weak. The CA needs to be recreated and certificates reissued.", "username": "chris" }, { "code": "", "text": "Could you share step by step instructions to create certs with openssl using strong algorithm.", "username": "Kalyan_Kumar_A" }, { "code": "", "text": "This is really out of scope of the community forums. While not too difficult to do, creating and managing a CA is also easy to get wrong.I would not recommend using openssl to manage a certificate authority. Two that I would recommend are below.Hashicorp Vault:\nhttps://developer.hashicorp.com/vault/tutorials/secrets-management/pki-engineStep CA:\nhttps://smallstep.com/docs/step-ca/basic-certificate-authority-operations/#table-of-contents", "username": "chris" } ]
Mongodb connect time ldap replica set server facing -->SSL routines', '', 'ca md too weak issue
2023-04-06T22:53:03.038Z
Mongodb connect time ldap replica set server facing –>SSL routines’, ”, ‘ca md too weak issue
959
null
[ "python" ]
[ { "code": "", "text": "I have a collection that has a field such as “publicationDate”: “2023-03-30T00:01:51”, and another with a dd/mm/YYYY string such as “articledatepub”: “24/03/2023”. as I am really at my first steps with MongoDB I would greatly appreciate an hint on how to identify “missing days” between the oldest and most recent date, especially in the first case. Around 100K records on a MongoDB 4.4 self hosted collections. Using python and pymongo. Thanks a lot.", "username": "Robert_Alexander" }, { "code": "", "text": "The first think you should do is to use real dates rather thandd/mm/YYYY stringReal dates, take less space, are faster to compare and are ordered correctly. With dd/mm/YYYY you would need to convert to date anyway for any calculation, sorting and order comparison. And because of the need to convert all the time, you won’t be able to use indexes with the field.", "username": "steevej" }, { "code": "", "text": "Thanks Steeve. Very good point. Alas that date format is pre-existing in the original data. Would it be worth to do a one shot replacement with a proper date value?", "username": "Robert_Alexander" }, { "code": "", "text": "Would it be worth to do a one shot replacement with a proper date value?I mentioned it because it is.", "username": "steevej" } ]
Identify gaps in date fields of a collection
2023-04-07T14:54:47.435Z
Identify gaps in date fields of a collection
582
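The thread never settles on actual gap-detection code, so here is one sketch in mongosh syntax that works on the 4.4 server mentioned above (no $densify needed). It assumes publicationDate has already been converted to a real Date, as steevej recommends; the `articles` collection name is illustrative:

```javascript
// Collect the distinct UTC calendar days that have at least one document.
const days = db.articles.aggregate([
  { $group: { _id: { $dateToString: { format: "%Y-%m-%d", date: "$publicationDate" } } } },
  { $sort: { _id: 1 } }
]).toArray().map(d => d._id);

// Walk day by day from oldest to newest and report the missing ones.
const have = new Set(days), missing = [];
let cur = new Date(days[0]);
const last = new Date(days[days.length - 1]);
while (cur <= last) {
  const key = cur.toISOString().slice(0, 10); // note: UTC day boundaries
  if (!have.has(key)) missing.push(key);
  cur.setUTCDate(cur.getUTCDate() + 1);
}
print(missing.join("\n"));
```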
null
[]
[ { "code": "", "text": "Hi,I’m trying to use the Realm Administration API and I can call the base endpoints like create an access token and get a list of applications. But when I take any of the application ids (from the _id field) they all return 404 Not Found for all endpoints.I gave the API key I set up all the permissions available.What am I doing wrong? ", "username": "Max_Karlsson" }, { "code": "", "text": "Hi @Max_Karlsson,I’m having the same issue, all calls to the API return 404 page not found.Did you find a solution to this?Thanks\nWill", "username": "varyamereon" }, { "code": "", "text": "Hey @varyamereon,Unfortunately, I didn’t. Haven’t heard anything from Realm about it either.I’m honestly pretty close to giving up on Realm altogether and replace it with a competitor.", "username": "Max_Karlsson" }, { "code": "", "text": "@Max_Karlsson Can you give an example code snippet of how you are making this admin API call that always returns a 404?", "username": "Ian_Ward" }, { "code": "", "text": "Hi All – In addition to the code snippet that Ian mentioned, it would also be helpful to confirm that you’re following the steps for an Atlas Programmatic API key and that the key has project owner permissions for the project where your applications are. Sounds like this is the case, but sometimes we see confusion between Atlas API keys and Realm’s API key authentication.", "username": "Drew_DiPalma" }, { "code": "", "text": "Hi @Ian_Ward and @Drew_DiPalma, if someone would have reached out to this ticket 4 months ago when I posted it I might have been able to oblige, but I can’t even remember why I needed to use that endpoint now, so unfortunately I can’t help.", "username": "Max_Karlsson" }, { "code": "", "text": "Sorry about that @Max_Karlsson - we don’t really have a SLA governing our community forums but in the future you can always open a support ticket which does have a SLA tracking system in place.For this specific issue, we don’t see any reason why this wouldn’t work so likely there is just some confusion in the implementation. Would love to clean this up in the docs but we would need to know what is not clear in order to fix it.", "username": "Ian_Ward" }, { "code": "", "text": "Sorry, @Ian_Ward I didn’t mean to come across as entitled. I realise that this is just the community support forum. I just wanted to point out that the time that’s passed since I posted this question is too long for me to remember what I was trying to achieve or how. I don’t even remember which documentation I used, but I know I followed it to a tee Perhaps @varyamereon will be better equipped to answer since he’s worked on it more recently.", "username": "Max_Karlsson" }, { "code": "access_token", "text": "@Drew_DiPalma re-reading the instructions I have found out what I was doing wrong. The call to get an access_token was always successful. I was then making calls with Postman and adding the token to the Authorization like so:image901×256 24.3 KBThis was always returning 404 Page not Found. This morning I copies the curl request and replaced with my access token. 
It’s added to the headers in a different way, and I was able to successfully make requests this way: [image] So it’s all in the details! Thanks for your help anyway, @Max_Karlsson. I hope you manage to get it sorted.\nWill", "username": "varyamereon" }, { "code": "private string _authUrl = \"https://realm.mongodb.com/api/admin/v3.0/auth/providers/mongodb-cloud/\";\nprivate string _baseUrl = \"https://realm.mongodb.com/api/admin/v3.0/\";\n\nstring body = JsonConvert.SerializeObject(new { username = _publicApiKey, apiKey = _privateApiKey });\nStringContent requestContent = new StringContent(body, Encoding.UTF8, \"application/json\");\n\nusing (HttpClient client = new HttpClient { BaseAddress = new Uri($\"{_authUrl}\") })\n{\n\tclient.DefaultRequestHeaders.Clear();\n\tclient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(\"application/json\"));\n\n\tHttpResponseMessage response = await client.PostAsync(\"login\", requestContent).ConfigureAwait(true);\n\n\tif (response.IsSuccessStatusCode)\n\t{\n\t\tstring content = await response.Content.ReadAsStringAsync();\n\t\t_authResponse = JsonConvert.DeserializeObject<AuthResponse>(content);\n\t\t_authResponse.Expires = DateTime.Now.AddMinutes(25);\n\t}\n}\nusing (HttpClient client = new HttpClient { BaseAddress = new Uri($\"{_baseUrl}\") })\n{\n\tclient.DefaultRequestHeaders.Clear();\n\tclient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(\"application/json\"));\n\tclient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(\"Bearer\", _authResponse.AccessToken);\n\n\tHttpResponseMessage response = await client.GetAsync($\"groups/{_projectId}/apps\").ConfigureAwait(true);\n}\n\tstring body = JsonConvert.SerializeObject(new { email = userName, password = $\"{pin}\" });\n\tStringContent requestContent = new StringContent(body, Encoding.UTF8, \"application/json\");\n\n\tusing (HttpClient client = new HttpClient { BaseAddress = new Uri($\"{_baseUrl}\") })\n\t{\n\t\tclient.DefaultRequestHeaders.Clear();\n\t\tclient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue(\"application/json\"));\n\t\tclient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(\"Bearer\", _authResponse.AccessToken);\n\n\t\tHttpResponseMessage response = await client.PostAsync($\"groups/{_projectId}/apps/{_appId}/users\", requestContent).ConfigureAwait(true);\n\t}\n", "text": "Hi Drew, I’m experiencing a similar problem. I authenticate successfully using the following code. Then I try getting the app IDs using the next snippet; this returns a “403 Forbidden” response. If I attempt to create a user with the following, I get a “404 Not Found”. I have followed the instructions at Authentication API, Find a Project ID,\nApplication ID and User APIs, but have not managed to resolve the issues.", "username": "Raymond_Brack" }, { "code": "", "text": "Hi All,\nThe 404 error can happen if you copied the API path by highlighting the string from the documentation instead of using the copy icon. Doing so can lead to copying across hidden UTF characters, which will cause this issue. Please use the copy icon as shown below to avoid this issue.\n[screenshot: Realm Administration API — MongoDB Realm]\nRegards\nManny", "username": "Mansoor_Omar" }, { "code": "/logscurl", "text": "I, too, am running into this issue. @Drew_DiPalma, I see that you mentioned following the steps for an Atlas Programmatic API key. The only thing that I found different was the lack of a configured access list.
I went ahead and configured that and was still met with a 404, “app not found” message. It’s also worth noting that the docs for the access list mention the following: “An empty API access list grants access to all API endpoints except those that explicitly require an API access list.” When viewing the Admin API docs page, I see no reference to anything about the access list. I therefore assume that there are no endpoints that require said access list. The API key has organization member and organization read only roles assigned. At the project level, the key is a project owner. I’m simply trying to hit the /logs endpoint to see what data it actually returns. I’m using Postman for these requests. I’ve also tried with curl and was met with the same message. Would love to know what I’m missing here.", "username": "Justin_Jarae" }, { "code": "", "text": "It’s saying I posted too long ago to make edits. So, minor updates:", "username": "Justin_Jarae" }, { "code": "", "text": "Hi Justin – Would you mind posting/messaging the API request that you’re making (minus any sensitive information)? This could help debug here, as the above doesn’t point to any specific issue.", "username": "Drew_DiPalma" }, { "code": "curl --request GET \\ --header 'Authorization: Bearer <access_token>' \\ 'https://realm.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/logs'curl --request GET \\ --header 'Authorization: Bearer <access_token>' \\ https://realm.mongodb.com/api/admin/v3.0/groups/{groupId}/apps", "text": "Sure thing. There’s not really anything crazy. It’s really just the standard API endpoint:\ncurl --request GET \\ --header 'Authorization: Bearer <access_token>' \\ 'https://realm.mongodb.com/api/admin/v3.0/groups/{groupId}/apps/{appId}/logs'\nAs others have said, I, too, can get a valid response from:\ncurl --request GET \\ --header 'Authorization: Bearer <access_token>' \\ https://realm.mongodb.com/api/admin/v3.0/groups/{groupId}/apps\nSame key being used for both.", "username": "Justin_Jarae" }, { "code": "{appId}client_app_id_idhttps://realm.mongodb.com/api/admin/v3.0/groups/{groupId}/apps", "text": "My issue has been resolved. (Thanks, Drew!) It’s important to note that the {appId} in the API path is NOT the client App ID. The docs specifically say this in this section. In my case, I was passing client_app_id and not the _id that was returned from the call to https://realm.mongodb.com/api/admin/v3.0/groups/{groupId}/apps.", "username": "Justin_Jarae" } ]
Call Realm Administration API with Application ID results in 404 Not Found
2020-12-02T12:42:27.609Z
Call Realm Administration API with Application ID results in 404 Not Found
5,652
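The resolution above comes down to two details: send the token as a real Bearer header, and use the app's _id (not client_app_id) in the path. A minimal Node sketch of that flow, using the same endpoints quoted in the thread (assumes Node 18+ for the built-in fetch):

```javascript
const base = "https://realm.mongodb.com/api/admin/v3.0";

// Exchange an Atlas programmatic API key for an admin-API access token,
// then list apps; each returned _id is what the {appId} path segment expects.
async function listApps(publicKey, privateKey, groupId) {
  const login = await fetch(`${base}/auth/providers/mongodb-cloud/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username: publicKey, apiKey: privateKey }),
  });
  const { access_token } = await login.json();

  const res = await fetch(`${base}/groups/${groupId}/apps`, {
    headers: { Authorization: `Bearer ${access_token}` },
  });
  return res.json();
}
```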
null
[ "migration", "cluster-to-cluster-sync" ]
[ { "code": "./mongosync --verbosity=DEBUG \\\n --cluster0 'mongodb://USER:PASS@HOST:27017/admin?tls=true' \\\n --cluster1'mongodb://USER:PASS@HOST2:27017/admin?tls=true' ```\n\nBest Regards,\n\nPhilippe Oliveira", "text": "Hi everyone, I’ve been testing mongosync and it works great.I’m facing a problem, one of our databases needs to be connected by passing a --sslCAFileBut I couldn’t how I can pass this parameter with mongosync , does anyone know if it’s possible?when database and without certificate it works fine and I use below command:", "username": "Philippe_Oliveira" }, { "code": "", "text": "Script at the bottom will make this much easier.The main issue with MongoSync is it doesn’t have the ability to pass the --sslCAFile, at least in 2022 it didn’t, and in fact a lot of methods attempted to do so caused extreme slowdowns or grinded MongoDB 4.4, 5.0, and 6.0 to a halt.No fix has been implemented, just fyi to help save you from causing a potential outage in your prod environment.You’re better off setting up a script to sync your DBs in batches via BSON/JSON pushes as it’s safer, and you can script in anything and everything you want and set it up to automatically send the batches, or just listen to changes and send the new files. General template is at the end in the script, the middle script segment can be used to help as well, but that’s what I’d do.", "username": "Brock" }, { "code": "", "text": "Thanks for the answer o/For example, I can configure it to sync everything after syncing everything the next day and only sync the last 24 hours…And 10 minutes before the migration, just the previous 30 minutes. Do a kind of incremental import and almost continuous synchronization!I want to make a full backup and import it and then just send the differential changes in my database.can i do this with mongoexport/mongoimport?Best Regards,", "username": "Philippe_Oliveira" }, { "code": "", "text": "Easily, you will have to modify the script a bit like is provided at the end, but you can implement replace.one for what’s changed, etc. or add new documents etc. The key part is just establishing a listener for when changes have been made, or just setting it up to just send timed batches either or.Also, this method in tests back in August of 2022, indicated less resource strain on MongoDB’s operation.You can also set a time/date for what changes occurred and so on, you can also create a network drive and send back-up copies of the JSON documents or BSON files to, in addition to the MongoDB instances all at the same time.The other option is to build and install an Apollo GraphQL server, and have it route data between both MongoDB instances and it’ll sync everything and manage all of it.Apollo GraphQL Server is a huge plus though, in long-term scalability, you can stack and orchestrate data between numerous databases irrelevant of what they are.Redis, MongoDB, MySQL, etc. All of them can be connected to Apollo, and synced together so they all have the same data whether as caches or data stores.", "username": "Brock" }, { "code": "", "text": "The single largest concern I have though, is by using MongoSync to pass CAs, in a lot of tests it either shut down connections to MongoDB, froze MongoDB, or broke MongoDB Encrypted builds. 
In each use case and experiment, the results in test environments were catastrophic. This is the basis of why I suggest not using MongoSync if you’re trying to launch an --sslCAFile, because it can quite literally bring your entire prod down.", "username": "Brock" }, { "code": "mongodb://$USER:$PASS@$HOST:27017/?tls=true&tlsCAFile=tmp/my-ca-bundle.pem\n", "text": "Mongosync supports a CA file via the connection URI: https://www.mongodb.com/docs/upcoming/reference/connection-string/#mongodb-urioption-urioption.tlsCAFile\nFor example, something like this should work:", "username": "Alexander_Komyagin" }, { "code": "", "text": "I’ll try this out; I’ve always been unsuccessful with this so far. Will see how it goes.", "username": "Brock" } ]
Mongosync for a zero-downtime migration
2023-03-28T19:05:25.809Z
Mongosync for a zero-downtime migration
1,828
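For the listener-based fallback Brock describes (superseded for the original question by the tlsCAFile URI option above), a minimal Node change-stream sketch; the database/collection names and the upsert-everything strategy are illustrative assumptions:

```javascript
const { MongoClient } = require("mongodb");

// Tail changes on the source cluster and replay them on the destination.
async function tailAndReplay(srcUri, dstUri) {
  const src = await MongoClient.connect(srcUri);
  const dst = await MongoClient.connect(dstUri);
  const target = dst.db("mydb").collection("mycoll");

  const stream = src.db("mydb").collection("mycoll")
    .watch([], { fullDocument: "updateLookup" }); // full doc on updates too

  for await (const change of stream) {
    if (change.operationType === "delete") {
      await target.deleteOne({ _id: change.documentKey._id });
    } else if (change.fullDocument) {
      // insert/update/replace: upsert the current state of the document
      await target.replaceOne({ _id: change.documentKey._id },
                              change.fullDocument, { upsert: true });
    }
  }
}
```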
https://www.mongodb.com/…7fc22cb500c.jpeg
[]
[ { "code": "", "text": "Hello. Recently I wrote a post at another website about books I read. I see that some of you might like browsing the list:I recently noted couple of posts on this website about books (both technical and non-technical). I...", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks for sharing @Prasad_Saya! That’s quite a comprehensive list.Would love to know which books are you reading for the topics Refactoring and Clean Code? ", "username": "Harshit" }, { "code": "", "text": "Thanks for reading my post. Those books are by Kent Beck and Robert Martin, respectively. Those two topics are part of good programming practices.", "username": "Prasad_Saya" }, { "code": "", "text": "Wow! An AMAZING list! It’s always so interesting to see what sorts of things inspire people, especially across a varied scope of fiction, non-fiction, and poetry.You’re inspiring me to make more time for reading this year. My own similar list would be pitifully short atm. ", "username": "webchick" }, { "code": "", "text": "OMG!\nYou are scratching a nerv here. Time to just read and relax is best for charging my battery. Think of complete non tech stuff and suddenly ideas just fly in… So why do I sit at my desk at 8 pm?? I’ll grab the book I am reading and close the day. Thanks for the reminder best Michael@Prasad_Saya an amazing list!", "username": "michael_hoeller" }, { "code": "", "text": "So why do I sit at my desk at 8 pm?? I’ll grab the book I am reading and close the day.Thinking about it, I read most of the books after dinner. Read one chapter a day (I am a reader). That way I have a chance to absorb what I am reading. It also allows me re-visit a random chapter in any book at anytime (I do pick a book from my shelf randomly and read a few paragraphs once in a while).But, there were times I read early in the morning before starting my day ", "username": "Prasad_Saya" }, { "code": "", "text": "When my work upgraded to 6.0 this book was a great help! Mastering MongoDB 6.x: Expert techniques to run high-volume and fault-tolerant database solutions using MongoDB 6.x, 3rd Edition https://a.co/d/5dbyETt", "username": "UrDataGirl" }, { "code": "", "text": "This topic was automatically closed after 180 days. New replies are no longer allowed.", "username": "system" } ]
About books I read
2023-01-24T16:58:24.906Z
About books I read
1,655
null
[ "java", "react-native", "android" ]
[ { "code": "", "text": "I want to install MongoDB Realm app with Expo in an Expo Development client (expo-dev-client) on my Android device (I am on Windows 10).I followed the “Bootstrap with Expo - React Native SDK” procedure at https://www.mongodb.com/docs/realm/sdk/react-native/bootstrap-with-expo/ step by step.The Android Bundling completes successfully apparently but when I select the app in the “development servers” screen of the dev-client on my Android device I have the following error:ERROR Error: Exception in HostObject::get(propName:Realm): java.lang.UnsatisfiedLinkError: couldn’t find DSO to load: librealm.so caused by: dlopen failed: cannot locate symbol “__emutls_get_address” referenced by “/data/app/~~RByWYhVyWSSKeF6nBOxNCg==/com.anonymous.realm11-yQ4HV5E7diRX4gos2lNp9w==/lib/arm64/librealm.so”… result: 0ERROR Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Registered callable JavaScript modules (n = 11): Systrace, JSTimers, HeapCapture, SamplingProfiler, RCTLog, RCTDeviceEventEmitter, RCTNativeAppEventEmitter, GlobalPerformanceLogger, JSDevSupportModule, HMRClient, RCTEventEmitter.*What annoys me the most, apart from the fact that I lost hours on this problem already, is that I did not change anything in the @realm/expo-template-ts template. I may have missed something but it seems that the template and the procedure provided by MongoDB does not work today.Any ideas? Thanks in advance", "username": "Gilles_Jack" }, { "code": "", "text": "I haven’t been able to build an Expo + Realm app in the past month. Not a single template/repo/tutorial works. Truly disappointing. Our development has been stopped pretty much and it doesn’t look like they are gonna fix it anytime soon.", "username": "Damian_Danev" }, { "code": "", "text": "I made it work by using “realm”: “11.0.0” and getting rid of the babel stuff.", "username": "Gilles_Jack" }, { "code": "package.json", "text": "@Gilles_Jack , could you please share a repo or a package.json file?", "username": "Damian_Danev" }, { "code": "", "text": "Contribute to u2gilles/realm1 development by creating an account on GitHub.I had to change Task.ts (without babel) and I added some nice traces.I tested it on my Android device with USB cable and I used the local Expo CLI recomended by expo not the global one recommended by mondoDb. That’s another proof that MondoDB does not maintain and test Realm on React Native at the moment.npm install\nnpx expo run:android (first time)\nnpx expo start --dev-client", "username": "Gilles_Jack" }, { "code": "npx expo start --dev-clientError: Missing Realm constructor. Did you run \"pod install\"? 
Please see https://docs.mongodb.com/realm/sdk/react-native/install/ for troubleshooting\nat node_modules\react-native\Libraries\Core\ExceptionsManager.js:null in reportException\nat node_modules\react-native\Libraries\Core\ExceptionsManager.js:null in handleException\nat node_modules\react-native\Libraries\Core\setUpErrorHandling.js:null in handleError\nat node_modules\expo-dev-launcher\build\DevLauncherErrorManager.js:null in errorHandler\nat node_modules\expo-dev-launcher\build\DevLauncherErrorManager.js:null in <anonymous>\nat node_modules\expo\build\errors\ExpoErrorManager.js:null in errorHandler\nat node_modules\expo\build\errors\ExpoErrorManager.js:null in <anonymous>\nat node_modules\@react-native\polyfills\error-guard.js:null in ErrorUtils.reportFatalError\nat node_modules\metro-runtime\src\polyfills\require.js:null in guardedLoadModule\nat http://192.168.100.7:19000/index.bundle?platform=android&dev=true&hot=false&strict=false&minify=false:null in global code\n\nInvariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Registered callable JavaScript modules (n = 11): Systrace, JSTimers, HeapCapture, SamplingProfiler, RCTLog, RCTDeviceEventEmitter, RCTNativeAppEventEmitter, GlobalPerformanceLogger, JSDevSupportModule, HMRClient, RCTEventEmitter.\nA frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.\nat node_modules\react-native\Libraries\Core\ExceptionsManager.js:null in reportException\nat node_modules\react-native\Libraries\Core\ExceptionsManager.js:null in handleException\nat node_modules\react-native\Libraries\Core\setUpErrorHandling.js:null in handleError\nat node_modules\expo-dev-launcher\build\DevLauncherErrorManager.js:null in errorHandler\nat node_modules\expo-dev-launcher\build\DevLauncherErrorManager.js:null in <anonymous>\nat node_modules\expo\build\errors\ExpoErrorManager.js:null in errorHandler\nat node_modules\expo\build\errors\ExpoErrorManager.js:null in <anonymous>\nat node_modules\@react-native\polyfills\error-guard.js:null in ErrorUtils.reportFatalError\nat node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in __guard\nat node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in callFunctionReturnFlushedQueue\n\nInvariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Registered callable JavaScript modules (n = 11): Systrace, JSTimers, HeapCapture, SamplingProfiler, RCTLog, RCTDeviceEventEmitter, RCTNativeAppEventEmitter, GlobalPerformanceLogger, JSDevSupportModule, HMRClient, RCTEventEmitter.\nA frequent cause of the error is that the application entry file path is incorrect.
This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.\nat node_modules\react-native\Libraries\Core\ExceptionsManager.js:null in reportException\nat node_modules\react-native\Libraries\Core\ExceptionsManager.js:null in handleException\nat node_modules\react-native\Libraries\Core\setUpErrorHandling.js:null in handleError\nat node_modules\expo-dev-launcher\build\DevLauncherErrorManager.js:null in errorHandler\nat node_modules\expo-dev-launcher\build\DevLauncherErrorManager.js:null in <anonymous>\nat node_modules\expo\build\errors\ExpoErrorManager.js:null in errorHandler\nat node_modules\expo\build\errors\ExpoErrorManager.js:null in <anonymous>\nat node_modules\@react-native\polyfills\error-guard.js:null in ErrorUtils.reportFatalError\nat node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in __guard\nat node_modules\react-native\Libraries\BatchedBridge\MessageQueue.js:null in callFunctionReturnFlushedQueue\n", "text": "Thank you for the help. I was able to run the app successfully in the Android emulator, but not in Expo Go. I uninstalled expo-cli from global and installed it locally, but am still getting a warning that the command npx expo start --dev-client is getting executed from the global package. I am unable to get the app to run with Expo Go on my Android phone, even with a USB cable (I am also running on Windows 10):", "username": "Damian_Danev" }, { "code": "", "text": "Because “npx expo run:android” called “npx expo prebuild”, the app escapes from the “managed workflow”.\nSo you can’t use Expo Go anymore. But that’s perfectly fine, because you now have an “Expo Development Client” that can contain the native Android code of Realm and that can be used in the same manner as Expo Go.", "username": "Gilles_Jack" }, { "code": "", "text": "Okay, thanks. I think I am understanding what you are explaining. I will have to do a bit of reading; I’m new to mobile development. Just one final question that is the most important to me: will I be able to use EAS (Expo Application Services) to publish and update my app?\nPS: you are a lifesaver, thanks again!", "username": "Damian_Danev" }, { "code": "", "text": "In fact, I am new to React (and the JavaScript world) too. I did the reading last week.\nAnd I have not tried EAS yet.", "username": "Gilles_Jack" }, { "code": "", "text": "I wish that we could just focus on developing apps and not spend our time setting up projects that should come ready out of the box…", "username": "Damian_Danev" }, { "code": "realmreact-nativerealmdev-client", "text": "Hey all (@Damian_Danev @Gilles_Jack)! It seems our templates did not have versions for realm and react-native pinned, so they ended up causing issues when new releases came out. We just made a release of the expo template that should now work out of the box. Let us know if you run into any issues. Thanks!\nPS - in regards to Expo Go, this app is unfortunately not compatible with realm or any other third-party library that isn’t part of the base Expo SDK. Although Expo Go is a great way to get going and familiarize yourself with React Native, it’s unfortunately not built for adding third-party libraries to it on the fly (unless they are pure JS libraries). But through the dev-client, one is able to build their own custom “Expo Go”-ish app which one can use to update code live to whatever device is running it.
More information", "username": "Andrew_Meyer" }, { "code": "\nAndroid Bundling complete 11692ms\n ERROR Error: Exception in HostObject::get(propName:Realm): java.lang.UnsatisfiedLinkError: couldn't find DSO to load: librealm.so caused by: dlopen failed: cannot locate symbol \"__emutls_get_address\" referenced by \"/data/app/~~03La_fqEioklXzthzhDdmA==/com.podcastcutter1-Q7Dmqzp7e-cL_yoxCHSZWQ==/lib/arm64/librealm.so\"... result: 0\n ERROR Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Registered callable JavaScript modules (n = 11): Systrace, JSTimers, HeapCapture, SamplingProfiler, RCTLog, RCTDeviceEventEmitter, RCTNativeAppEventEmitter, GlobalPerformanceLogger, JSDevSupportModule, HMRClient, RCTEventEmitter.\n A frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.\n ERROR Invariant Violation: Failed to call into JavaScript module method AppRegistry.runApplication(). Module has not been registered as callable. Registered callable JavaScript modules (n = 11): Systrace, JSTimers, HeapCapture, SamplingProfiler, RCTLog, RCTDeviceEventEmitter, RCTNativeAppEventEmitter, GlobalPerformanceLogger, JSDevSupportModule, HMRClient, RCTEventEmitter.\n A frequent cause of the error is that the application entry file path is incorrect. This can also happen when the JS bundle is corrupt or there is an early initialization error when loading React Native.\n", "text": "Hi Andrew,I won’t test the new template myself because this procedure still requires to install expo-cli globally which is not recommanded by Expo and can cause problems.Moreoverver, realm 11.7.0 still does not work in my react native project (“react”: “18.1.0”, “react-native”: “0.70.8”). I get the following error at launch :Only realm 11.0.0 work for me.\nRegards", "username": "Gilles_Jack" }, { "code": "", "text": "Correction, realm 11.7.0 now works with “react”: “18.2.0” and “react-native”: “0.71.6”.\nThanks.", "username": "Gilles_Jack" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Fail to Bootstrap a MongoDB Realm app with Expo (@realm/expo-template-ts )
2023-03-07T03:52:44.857Z
Fail to Bootstrap a MongoDB Realm app with Expo (@realm/expo-template-ts )
2,572
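For reference, the dependency combination reported working at the end of the thread, expressed as a package.json fragment (the version numbers are the ones posted above, not an officially tested matrix):

```json
{
  "dependencies": {
    "react": "18.2.0",
    "react-native": "0.71.6",
    "realm": "11.7.0"
  }
}
```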
null
[ "queries" ]
[ { "code": "for( i=1;i<=1000;i++ ){\n db.w1000.insertOne({\"name\":\"ajay\"+i})\n}\n\"name\":\"ajay1\"\n\"age\":21\n\n\"name\":\"ajay2\"\n\"age\":22\nAnd \nSo\nOn\n. \n. \n", "text": "I inserted 1000 data using for loop.for example :Now what if I want to insert a new column with the same collection likewhat we do?", "username": "Ajay_Moladiya" }, { "code": "agefor (i = 1; i <= 1000; i++) { \ndb.w1000.updateMany( \n { name: \"ajay\" + i }, { $set: { age: i + 20 } }\n)}\nage{\n \"_id\": ObjectID(\"643179077d77041fb1eaea8a\"),\n \"name\": \"ajay1\",\n \"age\": 21\n},\n{\n \"_id\": ObjectID(\"643179077d77041fb1eaea8b\"),\n \"name\": \"ajay2\",\n \"age\": 22\n}\n... so on\n", "text": "Hi @Ajay_Moladiya,Welcome to the MongoDB Community forums insert a new column with the same collection likeI assume you want to add a new field named “age” to the existing collection and have its value automatically increment starting from 21.If yes, you can use db.collection.updateMany() and $set:It will add an additional field age to your document, and the collection will appear as follows:I hope it answers your question. Let us know if you have any further questions.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Okay sir , Thank you !!", "username": "Ajay_Moladiya" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Solve this problem
2023-04-08T11:56:03.122Z
Solve this problem
438
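Running updateMany once per document round-trips to the server 1,000 times; a single bulkWrite sends the same updates in one batch. A sketch in mongosh syntax, using the same ajay-N naming as the answer above:

```javascript
// Build all 1000 updates client-side, then submit them as one batch.
const ops = [];
for (let i = 1; i <= 1000; i++) {
  ops.push({
    updateOne: {
      filter: { name: "ajay" + i },
      update: { $set: { age: i + 20 } },
    },
  });
}
db.w1000.bulkWrite(ops);
```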
null
[ "dot-net" ]
[ { "code": "Database.GetCollection<T>(_collectionName).AsQueryable().OrderBy(\"Name\")\nSystem.ArgumentException: Value cannot be empty. (Parameter 'name')\n at MongoDB.Driver.Core.Misc.Ensure.IsNotNullOrEmpty(String value, String paramName)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToAggregationExpressionTranslators.ExpressionToAggregationExpressionTranslator.TranslateLambdaBody(TranslationContext context, LambdaExpression lambdaExpression, IBsonSerializer parameterSerializer, Boolean asRoot)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToPipelineTranslators.OrderByMethodToPipelineTranslator.Translate(TranslationContext context, MethodCallExpression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToExecutableQueryTranslators.ExpressionToExecutableQueryTranslator.Translate[TDocument,TOutput](MongoQueryProvider`1 provider, Expression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.MongoQuery`2.Execute()\n at MongoDB.Driver.Linq.Linq3Implementation.MongoQuery`2.GetEnumerator()\n at F4e.Database.QueryWrapper`1.System.Collections.Generic.IEnumerable<T>.GetEnumerator() in D:\\F4e\\Api\\F4e\\F4e.EntityContext\\QueryWrapper.cs:line 60\n at System.Collections.Generic.LargeArrayBuilder`1.AddRange(IEnumerable`1 items)\n at System.Collections.Generic.EnumerableHelpers.ToArray[T](IEnumerable`1 source)\n at System.Linq.Enumerable.ToArray[TSource](IEnumerable`1 source)\n at F4e.Controllers.EntityWithTenantControllerBase`5.List(TFilter filter) in D:\\F4e\\Api\\F4e\\F4e\\Controllers\\EntityWithTenantControllerBase.cs:line 114\n at lambda_method2(Closure , Object , Object[] )\n at Microsoft.AspNetCore.Mvc.Infrastructure.ActionMethodExecutor.SyncObjectResultExecutor.Execute(IActionResultTypeMapper mapper, ObjectMethodExecutor executor, Object controller, Object[] arguments)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeActionMethodAsync>g__Logged|12_1(ControllerActionInvoker invoker)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.<InvokeNextActionFilterAsync>g__Awaited|10_0(ControllerActionInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Rethrow(ActionExecutedContextSealed context)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ControllerActionInvoker.InvokeInnerFilterAsync()\n--- End of stack trace from previous location ---\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeNextResourceFilter>g__Awaited|25_0(ResourceInvoker invoker, Task lastTask, State next, Scope scope, Object state, Boolean isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Rethrow(ResourceExecutedContextSealed context)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.Next(State& next, Scope& scope, Object& state, Boolean& isCompleted)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.InvokeFilterPipelineAsync()\n--- End of stack trace from previous location ---\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Logged|17_1(ResourceInvoker invoker)\n at Microsoft.AspNetCore.Mvc.Infrastructure.ResourceInvoker.<InvokeAsync>g__Logged|17_1(ResourceInvoker invoker)\n at Microsoft.AspNetCore.Routing.EndpointMiddleware.<Invoke>g__AwaitRequestTask|6_0(Endpoint endpoint, Task 
requestTask, ILogger logger)\n at Swashbuckle.AspNetCore.SwaggerUI.SwaggerUIMiddleware.Invoke(HttpContext httpContext)\n at Swashbuckle.AspNetCore.Swagger.SwaggerMiddleware.Invoke(HttpContext httpContext, ISwaggerProvider swaggerProvider)\n at Microsoft.AspNetCore.Diagnostics.DeveloperExceptionPageMiddleware.Invoke(HttpContext context)\n\n", "text": "I’m using MongoDB C# Driver v2.19.0.\nWhen I try an OrderBy dynamic query with the System.Linq.Dynamic.Core library, I get the following error. Code: This code works perfectly with the v2.18.0 driver, but throws the following error with the v2.19.0 driver. Also, OrderBy(x => x.Name) works with both drivers. Error:", "username": "Emrah_Dogru" }, { "code": "using System.Linq.Dynamic.Core;\nvar result = users.WhereInterpolated($\"{fieldToQueryOn} == {ValueToSearch} and IsEnabled==true\").FirstOrDefault();\nSystem.ArgumentException: Value cannot be empty. (Parameter 'name')\n at MongoDB.Driver.Core.Misc.Ensure.IsNotNullOrEmpty(String value, String paramName)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToFilterTranslators.ExpressionToFilterTranslator.TranslateLambda(TranslationContext context, LambdaExpression lambdaExpression, IBsonSerializer parameterSerializer, Boolean asRoot)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToPipelineTranslators.WhereMethodToPipelineTranslator.Translate(TranslationContext context, MethodCallExpression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.Translators.ExpressionToExecutableQueryTranslators.FirstMethodToExecutableQueryTranslator`1.Translate[TDocument](MongoQueryProvider`1 provider, TranslationContext context, MethodCallExpression expression)\n at MongoDB.Driver.Linq.Linq3Implementation.MongoQueryProvider`1.Execute[TResult](Expression expression)\n at System.Linq.Queryable.FirstOrDefault[TSource](IQueryable`1 source)\n", "text": "Hi, I have the same issue after updating to version 2.19.", "username": "stefano_de_simone" }, { "code": "", "text": "This has been fixed in 2.19.1.", "username": "Stefan_Tataran" } ]
MongoDB C# 2.19 Driver Linq.Dynamic.Core OrderBy Error
2023-02-22T19:32:55.694Z
MongoDB C# 2.19 Driver Linq.Dynamic.Core OrderBy Error
1,506
null
[ "atlas", "serverless" ]
[ { "code": "", "text": "We’re considering switching to serverless clusters, and as a step to evaluate the feasibility of this, I’ve created a new serverless cluster and pointed the staging environment of our SaaS product to it, in order to compare load times etc with our production environment (which uses a normal M10 cluster).One weird thing I’ve noticed is that if I leave a page open and go grab a coffee and come back, the next request times out. If I try a few different endpoints, they all time out, until after half a minute or so, it comes back to life and starts working again.For some reason I can’t see any traces of this in our telemetry so I haven’t been able to pinpoint the root cause of this behaviour, but once I pointed the staging environment back to our normal M10 cluster, I stopped seeing these timeouts.So I’m wondering, based on this loose description, if anyone knows whether or not Atlas Serverless clusters might be the cause of this? Like if it gets into sleep mode or something after some idle time, and then is a bit slow to warm up again?", "username": "John_Knoop" }, { "code": "MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.IOException: Unable to read data from the transport connection: Connection timed out.\n ---> System.Net.Sockets.SocketException (110): Connection timed out\n --- End of inner exception stack trace ---\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource<System.Int32>.GetResult(Int16 token)\n at System.Net.Security.SslStream.EnsureFullTlsFrameAsync[TIOAdapter](TIOAdapter adapter)\n at System.Net.Security.SslStream.ReadAsyncInternal[TIOAdapter](TIOAdapter adapter, Memory`1 buffer)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\n", "text": "Update: I was able to find some telemetry for these requests after all.Apparently there was an exception after 15.7 minutes (!!) with this stack trace:Are these timeouts a known bug/feature of Serverless Clusters or might I have misconfigured it somehow?", "username": "John_Knoop" }, { "code": "", "text": "Hi John,\nThe behavior you are experiencing is unexpected. Serverless database should not behave differently in this the example you posted. 
I will be reaching out to you directly to further debug the issue.\nSincerely,\nServerless PM team", "username": "Vishal_Dhiman" }, { "code": "", "text": "Hi @Vishal_Dhiman, feel free to DM me or e-mail at any time.\nJohn", "username": "John_Knoop" }, { "code": "ConnectorError(ConnectorError { user_facing_error: None, kind: RawDatabaseError { code: \"unknown\", message: \"Operation timed out (os error 110)\" } })\n", "text": "I have a similar issue, with the error shown here. Can someone help?", "username": "Karishma_Bothara" }, { "code": " Operation timed out (os error 110)", "text": "I have the same issue with Prisma and MongoDB; the message is Operation timed out (os error 110).\n@Karishma_Bothara, how did you fix that?", "username": "Trieu_Boo" } ]
Does Atlas Serverless clusters suffer from really slow cold starts?
2022-05-09T18:10:31.796Z
Does Atlas Serverless clusters suffer from really slow cold starts?
4,909
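The thread closes without a confirmed fix, but a common mitigation for idle-socket drops like the ~15-minute timeout above is to let the driver retire idle pooled connections before the network path silently kills them. A hedged sketch for the Node driver (the posters' apps use C# and Prisma, and the 60-second value is an illustrative assumption):

```javascript
const { MongoClient } = require("mongodb");

// Retire pooled connections that sit idle, so a stale socket is never
// handed out after a long quiet period; retry reads/writes on blips.
const client = new MongoClient(process.env.MONGODB_URI, {
  maxIdleTimeMS: 60_000,            // close connections idle > 60 s
  serverSelectionTimeoutMS: 10_000,
  retryReads: true,
  retryWrites: true,
});
```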
null
[ "sharding" ]
[ { "code": "", "text": "Hello my question is: At what point is it advisable to use a shared cluster instead of a single mongodb server.Are there any recommendations for that?Thanks in advance.", "username": "Roland_Bole1" }, { "code": "", "text": "Hello @Roland_Bole1 ,Welcome to The MongoDB Community Forums! It is advisable to use a sharded cluster instead of a single MongoDB server when your application requires high availability, scalability, and fault tolerance. Sharded clusters can provide you with the ability to distribute your data across multiple nodes, enabling you to handle larger volumes of data and higher numbers of concurrency.Some specific situations where a sharded cluster may be beneficial include:When considering a sharded cluster, it is important to consider your application’s specific needs and requirements. MongoDB provides guidance and best practices for designing, deploying, and operating sharded clusters, including recommendations for hardware configurations, network settings, and security settings. It is recommended to consult the official MongoDB documentation and seek advice from experienced MongoDB developers before setting up a sharded cluster. You can contact us at MongoDB Get-in-touch.For more details regarding Sharding, I would recommend you to visit below linkRegards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "Thank you very much for you explaining.", "username": "Roland_Bole1" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sharded Cluster
2023-04-02T08:56:57.277Z
Sharded Cluster
791
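For concreteness, the mongosh steps that actually shard a collection once a sharded cluster exists (a sketch; the database, collection, and hashed-key choice are illustrative):

```javascript
// Run against a mongos of the sharded cluster.
sh.enableSharding("mydb");
// A hashed shard key spreads monotonically increasing values evenly:
sh.shardCollection("mydb.events", { _id: "hashed" });
sh.status(); // verify chunks are distributed across shards
```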
null
[ "mongoose-odm", "connecting" ]
[ { "code": "testversion: '3.9'\nservices:\n api:\n build: .\n container_name: api-mongodb\n restart: unless-stopped\n environment:\n - DB_URL=mongodb://myself:pass123@mongo:27017\n ports:\n - '3131:3131'\n depends_on:\n - mongo\n mongo:\n container_name: mongo_container\n image: mongo:latest\n restart: always\n volumes:\n - mongo_dbv:/data/db/\n environment:\n - MONGO_INITDB_ROOT_USERNAME=myself\n - MONGO_INITDB_ROOT_PASSWORD=pass123\n - MONGO_INITDB_DATABASE=api\nvolumes:\n mongo_dbv: {}\nmongoose.connect(process.env.DB_URL)MONGO_INITDB_DATABASEdocker exec -it mongo_container mongosh -u myself -p pass123\nshow dbsapiapiapi", "text": "Hi, I am a newbie to both mongo (and Docker too), so please bear with me Started a REST api example project express/mongo/mongoose/typescript.\nThe API works fine, does what it is supposed to do, persists data, all nice and dandy, except - all the collections it creates are placed into the implicit database named test. Even if this database were to be the only one in the container, I would like to have more control and be more explicit about the db name I am using.Below is my docker-compose.ymlI connect via mongoose.connect(process.env.DB_URL).To be honest, I don’t see the effect of MONGO_INITDB_DATABASE anywhere.\nWhen I cli into the container viaand then show dbs, I don’t see a database named api anywhere. Tried appending api to the DB_URL string - this resulted in auth error.So how can I force mongo into creating the database called api?\n(Let me know if I can provide more info).Thanks\nND", "username": "NutonDev" }, { "code": "", "text": "One year later and now I’m in the same situation. Did you find any solution by now?", "username": "Rafael_Jordao" }, { "code": "testmongoshuse <new_database_name>import { MongoClient } from \"mongodb\";\n\nconst client = new MongoClient(uri);\nasync function run() {\n try {\n const database = client.db(\"sample_mflix\");\n const movies = database.collection(\"movies\");\n...\nsample_mflixdatabasemoviesmovies", "text": "Hi @Rafael_Jordao welcome to the community!Are you in the exact same situation where you would like to use a database other than test?If you’re using the mongosh shell, you can switch database with use <new_database_name>If you’re using a driver, there are documentations on how to do this for each driver. For example in the node driver findOne page:This will use the database called sample_mflix and put it in the database variable, and collection called movies in the movies variable.There will be similar constructs in other drivers. Please consult the relevant example in your driver’s documentation (e.g. findOne) and you should be able to see similar examples to the above.If this is not what you’re after, please open a new thread detailing what you need and any relevant background information (MongoDB versions, what you have tried, what’s the goal, etc.)Best regards\nKevin", "username": "kevinadi" }, { "code": "MONGO_INITDB_DATABASEmongosh", "text": "Actually the MONGO_INITDB_DATABASE is working fine for me.When I use mongosh I’m specifying its value.", "username": "mohamad_khubayb" } ]
Explicit mongo database name in Docker app
2021-12-17T16:10:54.484Z
Explicit mongo database name in Docker app
6,955
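The auth error the original poster hit when appending api to the URL is usually down to the default authSource: the root user created by MONGO_INITDB_ROOT_USERNAME lives in admin, so naming another database in the path means authentication must still point at admin. A sketch with Mongoose, using the same credentials as the compose file above:

```javascript
const mongoose = require("mongoose");

// Authenticate against "admin" (where the root user lives) while
// storing collections in the "api" database named in the path:
await mongoose.connect("mongodb://myself:pass123@mongo:27017/api?authSource=admin");

// Equivalent alternative: keep the URL database-less and pass dbName:
// await mongoose.connect("mongodb://myself:pass123@mongo:27017", { dbName: "api" });
```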
null
[ "java", "production" ]
[ { "code": "", "text": "The 4.9.1 MongoDB Java & JVM Drivers release is a patch to the 4.9.0 release.The documentation hub includes extensive documentation of the 4.9 driver.You can find a full list of bug fixes here .", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Java Driver 4.9.1 Released
2023-04-07T22:09:34.800Z
MongoDB Java Driver 4.9.1 Released
1,328
null
[ "java", "production" ]
[ { "code": "", "text": "The 3.12.13 MongoDB Java Driver release is a patch to the 3.12.12 release and a recommended upgrade.The documentation hub includes extensive documentation of the 3.12 driver, includingand much more.You can find a full list of bug fixes here.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Java Driver 3.12.13 Released
2023-04-07T22:07:26.384Z
MongoDB Java Driver 3.12.13 Released
937
null
[ "sharding", "database-tools", "backup" ]
[ { "code": "", "text": "get the following error while doing a backup of shard server.error :\nFailed: error creating intents to dump: error creating intents for database config: error counting config.system.sharding_ddl_coordinators: (Unauthorized) not authorized on config to execute command { count: “system.sharding_ddl_coordinators”,user that is doing the backup has the following roles\n“roles” : [\n{\n“role” : “backup”,\n“db” : “admin”\n},\n{\n“role” : “read”,\n“db” : “config”\n}\n]\n}also tried upgrading mongodump to 100.7.0, but get the same error", "username": "Brian_DM" }, { "code": "", "text": "Check this jira ticket\nhttps://jira.mongodb.org/browse/TOOLS-3203\nTry the workaround or upgrade to latest tools version", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thank you Ramachandra!.. the solution worked after creating a custom role and granting find privileges on system.sharding_ddl_coordinators", "username": "Brian_DM" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to backup the shard server - mongodb 5.0.13 mongodump ver 100.6.1
2023-04-06T22:46:30.969Z
Unable to backup the shard server - mongodb 5.0.13 mongodump ver 100.6.1
797
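The custom role Brian describes, spelled out in mongosh form following the TOOLS-3203 workaround; the role and user names here are illustrative:

```javascript
// Run as an admin user: grant the one "find" the built-in backup role lacks.
use admin
db.createRole({
  role: "backupDDLCoordinators",
  privileges: [{
    resource: { db: "config", collection: "system.sharding_ddl_coordinators" },
    actions: ["find"]
  }],
  roles: []
});
db.grantRolesToUser("backupUser", [{ role: "backupDDLCoordinators", db: "admin" }]);
```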
null
[ "compass" ]
[ { "code": "", "text": "Hi, before unninstalling mongodb compass whenever i got that problem i went to windows services and click on start mongo service and everything run perfect, but now that i download it again, i can’t see the mongodb service on windows services, how can i run mongodb service in order to stop getting that error, because i know that the problem is that the service is stopped. thanks for your time", "username": "roberto_rolando" }, { "code": "", "text": "If it is just Compass uninstall & install you should be able to start your service\nCan you see mongod exec under install/bin dir?\nor did you try to start the service from cmd line or start mongod from cmd line", "username": "Ramachandra_Tummala" }, { "code": "", "text": "i tried unninstalling and reinstalling mongo, didn’t work. i can’t see the installation/bin dir, i don’t know how to start it from cmd line if you mean by going to the installation folder and running mongodb.exe i did it, and didn’t work, whenever i got that error i used to open task administrator from windows go to services tab and run mongodb, but now i can’t see mongo service", "username": "roberto_rolando" }, { "code": "", "text": "I meant netstart mongodb\nWhat error you got when you run mongod?\nIt could be issue with user you logged in as\nAlso check local user vs network user while starting the service from stackoverflow threads", "username": "Ramachandra_Tummala" }, { "code": "", "text": "it’s not the user, im running it locally(and without user or password), the problem is that the mongo service is not running, and i can’t see it from windows services so that i run it manually, if there’s a cmd command to run mongo service please tell me, (i try to execute netstart mongodb and it says that the command is not recognized), the problem is not when i run mongo is when i try to connect using the localhost:27017 string given by mongo, also the only two directories i found on mongodb dir(where is installed) is local and resources, and thats it.", "username": "roberto_rolando" }, { "code": "", "text": "It is net start mongodb\nPlease refer to these docs\nThe user i am referring to is your computer user not mongodb user", "username": "Ramachandra_Tummala" }, { "code": "", "text": "thanks for the help @Ramachandra_Tummala luckily i could solve the problem, the thing was that when i unninstalled mongo i also deleted mongo 2008r2 plus ssl, that allowed me to run mongo as a service, now i can see and run mongo service. anyways thanks so much for your time and help", "username": "roberto_rolando" } ]
ECONNREFUSED 127.0.0.1:27017
2023-04-06T05:49:52.372Z
ECONNREFUSED 127.0.0.1:27017
571
null
[ "compass", "mongodb-shell" ]
[ { "code": "", "text": "I’m looking to automate always installing the latest available version of mongosh in a CI pipeline. I get that I can go to the downloads page here, pick the version, and the platform, and click “copy link”. That will give me a URL like this:https://downloads.mongodb.com/compass/mongosh-1.8.0-linux-x64.tgzWell, what about when a new version comes out? I don’t want to have to update this URL in my CI pipeline every time a new version comes out. Is there a URL equivalent of this that I can use?:https://downloads.mongodb.com/compass/mongosh-LATEST-linux-x64.tgzSurely there must be a way to get the ‘latest’ rather than needing to specify a version. Any help would be greatly appreciated. Thanks!", "username": "Ryan_Baxendell" }, { "code": "", "text": "Not a direct answer to your question, but what about using the mongodb yum/apt repo ?", "username": "chris" }, { "code": "", "text": "I’m considering the distro package manager route, but Im trying to fit the mongosh install into an existing workflow that does a curl for the tarball, unzips it, and puts it in the PATH (among other things). So I’m trying to keep things consistent and distro-independent if I can.", "username": "Ryan_Baxendell" }, { "code": "curl --silent https://repo.mongodb.org/apt/ubuntu/dists/focal/mongodb-org/6.0/multiverse/binary-amd64/Packages | grep -A 5 -x \"Package: mongodb-mongosh\" | grep \"Version:\" | cut -d: -f 2 | sed 's/ //g' | sort -r\n", "text": "I’ve figured out how to at least list all of the available versions from a particular repo - in this case Ubuntu Focal:A bit ugly but it works. A drawback here is that this is the .deb repo that I’m finding the available versions from. There’s not guarantee that all of the same versions will exist for the tarballs (although I would assume the same versions will almost always exist between .deb and tarball)There really, really should be a ‘linux’ (or ‘tarball’ or whatever) directory at this level so that the tarballs can be reached programatically:https://repo.mongodb.org/Since there is a yum and apt equivalent:https://repo.mongodb.org/yum/\nhttps://repo.mongodb.org/apt/If there is a URL for what I’m looking for, someone please let me know.", "username": "Ryan_Baxendell" }, { "code": "", "text": "There is a good answer for mongod but not one I’ve found so far for mongosh.If the CI is container based mongosh is already in the container image.", "username": "chris" } ]
URL to always download the latest version of mongosh
2023-04-07T15:37:48.763Z
URL to always download the latest version of mongosh
726
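A Node variant of Ryan's version-listing trick that also constructs the tarball URL; the final URL pattern is extrapolated from the single example in the thread, so treat it as an assumption to verify (Node 18+ for fetch):

```javascript
// Read the apt Packages index, pick the newest mongodb-mongosh version,
// and build the tarball URL from the pattern seen in the thread.
const PACKAGES =
  "https://repo.mongodb.org/apt/ubuntu/dists/focal/mongodb-org/6.0/multiverse/binary-amd64/Packages";

async function latestMongoshUrl() {
  const text = await (await fetch(PACKAGES)).text();
  const versions = [...text.matchAll(/Package: mongodb-mongosh\n(?:[^\n]+\n)*?Version: ([^\n]+)/g)]
    .map(m => m[1].trim())
    .sort((a, b) => b.localeCompare(a, undefined, { numeric: true }));
  return `https://downloads.mongodb.com/compass/mongosh-${versions[0]}-linux-x64.tgz`;
}

latestMongoshUrl().then(console.log);
```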
null
[ "crud" ]
[ { "code": "[\n {\n \"_id\": \"642fe50af4723c9936dbb366\",\n \"name\": \"test two\",\n \"priority\": 0,\n \"users\": [\n {\n \"admin\": false,\n \"role\": 3,\n \"_id\": \"642fe553a1b8adf605c53167\"\n },\n {\n \"admin\": false,\n \"role\": 1,\n \"_id\": \"642bf3865808d8888a1995b4\"\n }\n ],\n \"createdAt\": \"2023-04-07T09:40:26.550Z\",\n \"updatedAt\": \"2023-04-07T09:50:13.354Z\",\n \"__v\": 0\n }\n ]\n const projects = await Project.findOneAndUpdate(\n {\n _id: projectId,\n \"users._id\": user._id,\n },\n {\n $set: {\n users: update,\n },\n }\n );\n const projects = await Project.findOneAndUpdate(\n { projectId },\n { $set: { users: update } },\n { arrayFilters: [{ \"users._id\": user._id }] }\n );\nMongoServerError: The array filter for identifier 'users' was not used in the update { $setOnInsert: { createdAt: new Date(1680862292699) }, $set: { users: [ { admin: false, role: 3, _id: ObjectId('642fec54b0a5eea143444e16') } ], updatedAt: new Date(1680862292699) } }\n", "text": "hi all,\ni have my document as follows.i would like to search this collection by ID,\nthen search users by ID ,\nand update the user by specific fields that come from req.bodyso fr i have tried a few methods:The problem with this is that it changes the whole users array not the specific user.\nanyone so kind to help me out would be hugely appreciatedand alsowhich seems the right way however i get", "username": "Daniel_Chochlinski" }, { "code": "const projects = await Project.findOneAndUpdate(\n {\n _id: projectId,\n \"users._id\": user._id,\n },\n\n {\n $set: { \"users.$\": { ...update, _id: user._id } },\n },\n {\n new: true,\n }\n );\n", "text": "this worked for me for any beginners like myself who struggle a little bit, please note “users.$” where $ tells to only update that specific user", "username": "Daniel_Chochlinski" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
Finding nested object and updating it
2023-04-07T09:52:51.789Z
Finding nested object and updating it
528
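For reference, the arrayFilters attempt above failed because the identifier named in arrayFilters must also appear in the update path as $[identifier]; the positional "users.$" form that finally worked only touches the first matching element. A corrected arrayFilters sketch in the same Mongoose style, updating only selected fields:

```javascript
const project = await Project.findOneAndUpdate(
  { _id: projectId },
  // $[u] ties each matched array element to the "u" filter below:
  { $set: { "users.$[u].role": update.role, "users.$[u].admin": update.admin } },
  {
    arrayFilters: [{ "u._id": user._id }],
    new: true, // return the updated document
  }
);
```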
null
[ "node-js", "atlas", "field-encryption" ]
[ { "code": "const mongodb = require(\"mongodb\");\nconst { ClientEncryption } = require(\"mongodb-client-encryption\");\nconst { MongoClient, Binary } = mongodb;\nconst { join } = require('path');\nrequire('dotenv').config();\n\n// const { getCredentials } = require(\"./your_credentials\");\n// credentials = getCredentials();\n\nvar db = \"medicalRecords\";\nvar coll = \"patients\";\nvar namespace = `${db}.${coll}`;\n// start-kmsproviders\nconst {readFileSync} = require(\"fs\");\nconst provider = \"local\";\nconst path = join(__dirname,\"./master-key.txt\")\nconst localMasterKey = readFileSync(path);\nconsole.log(localMasterKey)\nconst kmsProviders = {\n local: {\n key: localMasterKey,\n },\n};\n// end-kmsproviders\n\nconst connectionString = process.env.URI;\n\n// start-key-vault\nconst keyVaultNamespace = \"encryption.__keyVault\";\n// end-key-vault\n\n// start-schema\nconst schema = {\n bsonType: \"object\",\n encryptMetadata: {\n // keyId: {\n // $binary:{\n // base64: \"PadTrVggQL+MaHprhtzdcA==\",\n // subType: \"04\",\n // }\n // },\n keyId: [ new Binary(Buffer.from(\"PadTrVggQL+MaHprhtzdcA==\", \"base64\"),4)],\n },\n properties: {\n insurance: {\n bsonType: \"object\",\n properties: {\n policyNumber: {\n encrypt: {\n bsonType: \"int\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n },\n },\n },\n },\n medicalRecords: {\n encrypt: {\n bsonType: \"array\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n },\n },\n bloodType: {\n encrypt: {\n bsonType: \"string\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n },\n },\n ssn: {\n encrypt: {\n bsonType: \"int\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\",\n },\n },\n },\n};\n\nvar patientSchema = {};\npatientSchema[namespace] = schema;\n// end-schema\n\n// start-extra-options\nconst extraOptions = {\n // mongocryptdSpawnPath: '.'\n mongocryptdBypassSpawn: true,\n};\n// end-extra-options\n\n// start-client - \n\nconst secureClient = new MongoClient(connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n autoEncryption: {\n keyVaultNamespace,\n kmsProviders,\n schemaMap: patientSchema,\n extraOptions: extraOptions,\n },\n});\n// end-client\nconst regularClient = new MongoClient(connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\nasync function main() {\n console.log(\"inside main\");\n try {\n //await regularClient.connect();\n console.log('reg client')\n try {\n await secureClient.connect();\n console.log('sec client')\n // start-insert\n try {\n const writeResult = await secureClient\n .db(db)\n .collection(coll)\n .insertOne({\n name: \"Jon Doe\",\n ssn: 241014209,\n bloodType: \"AB+\",\n \"key-id\": \"demo-data-key\",\n medicalRecords: [{ weight: 180, bloodPressure: \"120/80\" }],\n insurance: {\n policyNumber: 123142,\n provider: \"MaestCare\",\n },\n });\n } catch (writeError) {\n console.error(\"writeError occurred:\", writeError);\n }\n // end-insert\n // start-find\n console.log(\"Finding a document with regular (non-encrypted) client.\");\n console.log(\n await regularClient.db(db).collection(coll).findOne({ name: /Jon/ })\n );\n\n console.log(\n \"Finding a document with encrypted client, searching on an encrypted field\"\n );\n console.log(\n await secureClient.db(db).collection(coll).findOne({ name: /Jon/ })\n );\n // end-find\n } catch(err) {\n console.log(\"secure\",err);\n }\n finally {\n await secureClient.close();\n }\n } finally {\n await regularClient.close();\n }\n}\nmain();\nMongoServerSelectionError: connect ECONNREFUSED 
127.0.0.1:27020\n at Timeout._onTimeout (D:\\Node js\\MongoDB_encryption\\node_modules\\mongodb\\lib\\sdam\\topology.js:277:38)\n at listOnTimeout (internal/timers.js:557:17)\n at processTimers (internal/timers.js:500:7) {\n reason: TopologyDescription {\n type: 'Unknown',\n servers: Map(1) { 'localhost:27020' => [ServerDescription] },\n stale: false,\n compatible: true,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n setName: null,\n maxElectionId: null,\n maxSetVersion: null,\n commonWireVersion: 0,\n logicalSessionTimeoutMinutes: null\n },\n code: undefined,\n [Symbol(errorLabels)]: Set(0) {}\n}\n", "text": "I am trying to implement client-side field level encryption with a free MongoDB Atlas cluster and have completed the part of generating the data key ID.\nWhile trying to insert an encrypted document I run into a problem: the code works up to connecting the regular client, but then fails to connect the secureClient.\nQuestion 1: do we need to install mongocryptd for Atlas as well?\nQuestion 2: am I doing something wrong with extraOptions? What should the value of the field mongocryptdSpawnPath in extraOptions be for an Atlas cluster?\nIf I am doing anything wrong, please correct me.\nThank you\ncode:\nError:", "username": "haswanth_reddy" }, { "code": "", "text": "Hello. The “connect ECONNREFUSED 127.0.0.1:27020” indicates the Node code was attempting to perform automatic encryption, but could not locate either the mongocryptd or crypt_shared libraries (you only need one, and both packages are available from the Enterprise Downloads page (MongoDB Enterprise Server Download | MongoDB); note you only need the cryptd or crypt_shared package, not the entire server). Both libraries are fully licensed for all Atlas users (including the free tiers) and Enterprise, but one or the other needs to be specifically installed; they won’t get pulled down automatically with your language dependencies like libmongocrypt will.\nHope that helps!\n-Kenn", "username": "Kenneth_White" }, { "code": "", "text": "Hi Kenneth_White,\nI have downloaded the crypt-shared package for the above example.
Can you please guide me on what particular file in that package I need to specify in my code?", "username": "Rajat_Singla1" }, { "code": "extraOptionscryptSharedLibPathExtraOptions: {\n cryptSharedLibPath: \"/home/appUser/node/mongo_crypt_v1.so\"\n}\n$ ( ./mongocryptd 2>&1 >> /var/log/mongocryptd.log &)crypt_shared", "text": "Hi Rajat.For crypt_shared, you need to either put the mongo_crypt_v1.so (mongo_crypt_v1.dylib on Mac) on the default path, or specify it explicitly on the extraOptions cryptSharedLibPath, e.g.:Alternatively, if you choose to use the mongocryptd package, then you can just run that on bootup/container start, or launch into a detached background process, e.g.:\n$ ( ./mongocryptd 2>&1 >> /var/log/mongocryptd.log &)See https://www.mongodb.com/docs/manual/core/csfle/reference/mongocryptd/ and\nhttps://www.mongodb.com/docs/manual/core/queryable-encryption/reference/shared-library/#configuration.There’s a full code example with switchable language snippets for setting crypt_shared here: https://www.mongodb.com/docs/manual/core/queryable-encryption/quick-start/#create-your-encrypted-collectionCheers.\nKenn", "username": "Kenneth_White" }, { "code": "crypt_shared", "text": "And here’s the full repo for Node that’s referenced in the Quick Start (I’ve highlighted the crypt_shared part, but take a look at the whole ./node/local/reader folder in the repo) here: docs-in-use-encryption-examples/insert_encrypted_document.js at main · mongodb-university/docs-in-use-encryption-examples · GitHub", "username": "Kenneth_White" } ]
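For readers hitting the same ECONNREFUSED 127.0.0.1:27020: a minimal sketch of pointing the Node.js driver at the crypt_shared library instead of mongocryptd. The library path is an assumption; adjust it (and the extension: .dll on Windows, .so on Linux, .dylib on macOS) to wherever you extracted the package.

```js
// Sketch only: same autoEncryption setup as the question, switched from
// mongocryptd to crypt_shared. The path below is a placeholder.
const { MongoClient } = require("mongodb");

const secureClient = new MongoClient(connectionString, {
  autoEncryption: {
    keyVaultNamespace: "encryption.__keyVault",
    kmsProviders,            // same local KMS provider as in the question
    schemaMap: patientSchema,
    extraOptions: {
      cryptSharedLibPath: "D:\\lib\\mongo_crypt_v1.dll", // assumed location
      cryptSharedLibRequired: true, // fail fast rather than falling back to mongocryptd
    },
  },
});
```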
Client Side Field Level Encryption, having trouble connecting to secureClient
2023-03-06T05:47:54.706Z
Client Side Field Level Encryption, having trouble connecting to secureClient
1,289
null
[ "queries", "node-js" ]
[ { "code": "{\"status\": false, \"message\": \"MongoServerSelectionError:\n\nconnect EADDRNOTAVAIL 127.0.0.1:27017 - Local (127.0.0.1:0) In at Timeout. onTimeout (/var/www/CertificationPlannerNode/node modules/mongo db/lib/sdam/topology.is:292:38)In at listOnTimeout (internal/timers.js:554:17) In\n\nat processTimers\n\n(internal/timers.js:497:7)\", \"data\": {}}\n", "text": "I’m getting the following error every day on my newly developed website. The site is based on Mangodb and Node-js. Developer is unable to find a reason behind such error. Every day at least once or twice the website stops working and I have to go to my server vendor site and restart it to access again. It has started to impact my business as well. Please help", "username": "Nomad_Family" }, { "code": "", "text": "Hello @Nomad_Family ,Welcome to The MongoDB Community Forums! To understand your use-case better, could you please confirm below details:I have to go to my server vendor site and restart it to access againThe error message you posted suggests that the MongoDB driver running on your Node.js application is unable to establish a connection to the MongoDB server running on your local machine and [EADDRNOTAVAIL] suggests that address is not available on this machine. Below are some steps you can take to troubleshoot the issue:Please make sure to follow production notes for smooth and efficient server performance.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "The error could be caused by various factors such as network connectivity, database configuration issues, or server hardware problems. To troubleshoot the issue, you can start by checking the MongoDB server logs for any errors or warnings.", "username": "halim_abakke" }, { "code": "", "text": "Additionally, you can try restarting the MongoDB service to see if that resolves the issue temporarily. It’s also worth checking if there are any updates available for your MongoDB version. Also, if you need any SEO help for your website in the future, feel free to check out SEO Temecula for some great tips and tricks.", "username": "halim_abakke" } ]
My newly developed website is giving this error every day
2023-02-15T17:41:33.257Z
My newly developed website is giving this error every day
1,792
null
[ "connector-for-bi" ]
[ { "code": "", "text": "We connected Tableau desktop to MongoDB using the BI connector, and some of the fields are showing in Tableau but not array fields. How can we view array fields in Tableau?", "username": "Brenton_Klassen" }, { "code": "", "text": "I am also facing same problem and few other fields are not getting into the tableau.", "username": "Lavanya_Gunnam" }, { "code": "", "text": "@Lavanya_Gunnam - this could be from your data sampling and not picking up the array. Is your database on-prem or Atlas?If you are using the BI Connector, the BI Connector usually takes all arrays and presents them as separate, child tables within Tableau.If you are using Atlas, you can try out our new Tableau Connector that is in preview as that could provide better results for surfacing your document data. https://www.mongodb.com/docs/atlas/data-federation/query/sql/tableau/connect/", "username": "Alexi_Antonino" }, { "code": "", "text": "@Alexi_Antonino\nMy database Atlas.\nI am experiencing an issue while trying to upload a DRDL file into Tableau using the MongoDB BI Connector. Specifically, when connecting to MongoDB using the BI Connector, some data is missing, and I believe that uploading a DRDL file would solve this problem.However, I am having trouble finding the “Upload” option for the DRDL file in the Tableau interface. I have followed the instructions provided in the Tableau documentation, but I still cannot locate the option. I have also checked that I am using the latest version of Tableau and the MongoDB BI Connector.Could you please advise me on how to upload a DRDL file into Tableau using the MongoDB BI Connector, or let me know if there is another way to ensure that all data is correctly mapped when connecting to MongoDB?", "username": "Lavanya_Gunnam" }, { "code": "", "text": "@Lavanya_Gunnam - you do not upload the DRDL into Tableau. A few things to establish:As mentioned in my previous reply, if you are on Atlas and you are using Tableau, you can use the new custom Tableau Connector with Atlas SQL. Atlas SQL does allow you to view and set your SQL schema.", "username": "Alexi_Antonino" } ]
BI Connector not showing arrays in Tableau
2021-04-12T14:39:08.530Z
BI Connector not showing arrays in Tableau
3,336
null
[ "database-tools", "backup" ]
[ { "code": "", "text": "I have taken the backup from Atlas mongodb,but the Atlas account was suspended. by using the backup i tried to restore it to aws documnentdb using the below commandmongorestore --ssl \n–host=“:27017” \n–username= \n–password= \n–sslCAFile rds-combined-ca-bundle.pem /home/ubuntu/fileGetting error for the above command: “Unknown admin command atlasVersion”How can i restore it to aws documnentdb. Can someone help me to overcome the issue.", "username": "KRISHNAKUMAR_K" }, { "code": "", "text": "Which version of mongodb-tools are you using?\nIf it is latest try with older version\nCheck this linkStarting from 2023-03-01 the `mongodump` command fails to exec on documentDB 4.0 instances. Before this date the `mongodump` command was working just fine.\n\nError trace:\n\n**Failed: error checking f...", "username": "Ramachandra_Tummala" }, { "code": "", "text": "mongodbI am using 6.0.5 version", "username": "KRISHNAKUMAR_K" }, { "code": "", "text": "I was asking about mongodb-tools version\nTry with latest version of tools", "username": "Ramachandra_Tummala" } ]
Getting Error Unknown admin command atlasVersion
2023-04-07T07:02:55.467Z
Getting Error Unknown admin command atlasVersion
1,999
null
[ "node-js", "mongoose-odm", "mongodb-shell" ]
[ { "code": "mongooseconst mongoose = require('mongoose');\n\nrequire('dotenv').config();\n\nconst itemadd = require(\"./model/schema.js\") // Js file having schema \n\nasync function start()\n{\n try {\n \n await mongoose.connect(process.env.mongo_uri) // line ABC \n console.log(\"We are successful\")\n } catch (error) {\n console.log(\"We encountered an error :-\\n\",error)\n }\n\n}\n\nstart();\n\nline ABC", "text": "I read in official documentation on MongoDB thatMongoDB only creates the database when you first store data in that database.I created database using mongosh and found it to be true. Now I am trying to create database using mongoose module in Nodejs. I am just creating database and not adding any records.\nThis is my codeNow in this piece of code I am just connecting to my server at atlas in line ABC. However upon checking the atlas I am seeing that the database has been created with collection inside it (collection name as per given in schema). Although there is no document/record in it , but still database has been created.\nCan anyone tell me how without any record, database was created.", "username": "Brijesh_Roy" }, { "code": "_id// no test db \ns0 [direct: primary] test> show dbs\nadmin 172.00 KiB\nconfig 164.00 KiB\nlocal 556.00 KiB\n\n// Create an index in test.foo\ns0 [direct: primary] test> db.test.foo.createIndex({a:1})\na_1\n\n// Hello database test\ns0 [direct: primary] test> show dbs\nadmin 172.00 KiB\nconfig 164.00 KiB\nlocal 556.00 KiB\ntest 12.00 KiB\n", "text": "ODM does ODM things.Could also depend on your model, if there is an indexed field (other than _id) then creating the index will create the namespace (database and collection).", "username": "chris" } ]
Mongoose creating Database without any Record/document
2023-04-07T13:02:15.579Z
Mongoose creating Database without any Record/document
572
null
[ "aggregation", "mongodb-shell" ]
[ { "code": "db.Users.aggregate([\n {\n $lookup: {\n from: \"Logs\",\n localField: \"_id\",\n foreignField: \"user_id\",\n as: \"dd\"\n }\n },\n {\n $match:{\n \"dd\": { \"$exists\": false }\n ] \n }\n },\n \n]);\n\n</>\n", "text": "I have two collections. The “Users” collection has all the users and “Logs” collection has all the users’ daily logs. I need to find the users who dont have any logs between two dates.I am able get the users who have data in logs collection between 2 dates but not able to get the users who dont have any data in logs collection between given dates.Here is the query i triedI’m struggling with adding the date condition here", "username": "Ananth_Bhoomirajan" }, { "code": "", "text": "I’m struggling with adding the date condition hereWe are struggling too since we have no clue about your schema.Share sample documents from both collections so that we can experiment with your real data.", "username": "steevej" }, { "code": "#users collection \n[\n{\n_id:1,\nname:'user1',\nemail:'[email protected]'\n},\n{\n_id:2,\nname:'user2',\nemail:'[email protected]'\n},\n{\n_id:3,\nname:'user3',\nemail:'[email protected]'\n}\n\n]\n\n#logs collection\n\n[\n{\n_id:1,\nuser_id:1,\nhours:3,\ndate:'2023-04-01'\n},\n{\n_id:2,\nuser_id:1,\nhours:5,\ndate:'2023-04-02'\n},\n{\n_id:3,\nuser_id:1,\nhours:8,\ndate:'2023-04-03'\n},\n{\n_id:4,\nuser_id:2,\nhours:3,\ndate:'2023-04-02'\n},\n\n\n]\n\n", "text": "please find the sample schema below", "username": "Ananth_Bhoomirajan" }, { "code": "{ \"$match\" : {\n \"date\" : { \"$gte\" : from_date_variable , \"$lte\" : to_date_variable }\n} }\n", "text": "Simply add a pipeline: in your $lookup that has a $match stage such asYour current $match will have to test for $size:0 of result field dd rather than the existence.", "username": "steevej" }, { "code": "", "text": "adding a pipeline would return the users with logs in those dates, but my requirement is to get the users who dont have any logs between those dates", "username": "Ananth_Bhoomirajan" }, { "code": "", "text": "Have you tried my suggestion?Most likely you did not. The $match inside the $lookup pipeline will find logs of the users between the from and to date. So a user with logs will have a non empty array and a user without logs will have an empty array. ThenYour current $match will have to test for $size:0 of result field dd rather than the existence.which means, that only users with an empty array will be matched. That is users with no logs.", "username": "steevej" }, { "code": "", "text": "Hey! I missed out the $ sign while adding size. Its working fine now. Thanks!", "username": "Ananth_Bhoomirajan" }, { "code": "{ \"$lookup\" : {\n \"from\" : \"Logs\" ,\n /* other fields */\n \"pipeline:\" : [\n { \"$match\" : {\n \"date\" : { \"$gte\" : from_date_variable , \"$lte\" : to_date_variable }\n } } ,\n { \"$limit\" : 1 }\n ]\n} }\n", "text": "Now that we have something working a little optimization is in order.Since you only want the users that have 0 logs, there is no point to $lookup all the logs of a given user. We can stop when we find the first one. To do that we add $limit:1 to the $lookup pipeline: right after the $match. Like:", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to find users who do not have any data between two dates in mongodb
2023-04-06T09:56:19.237Z
How to find users who do not have any data between two dates in mongodb
693
null
[ "dot-net", "atlas-cluster", "serverless" ]
[ { "code": "dbug: MongoDB.Connection[0]\n 1 1 p9cd6-3m-acceptance-lb.izjha.mongodb.net 27017 Connection failed MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.IOException: Unable to read data from the transport connection: Connection timed out.\n ---> System.Net.Sockets.SocketException (110): Connection timed out\n --- End of inner exception stack trace ---\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource<System.Int32>.GetResult(Int16 token)\n at System.Net.Security.SslStream.EnsureFullTlsFrameAsync[TIOAdapter](CancellationToken cancellationToken)\n at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)\n at System.Net.Security.SslStream.ReadAsyncInternal[TIOAdapter](Memory`1 buffer, CancellationToken cancellationToken)\n at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)\n at System.Threading.Tasks.ValueTask`1.ValueTaskSourceAsTask.<>c.<.cctor>b__4_0(Object state)\n --- End of stack trace from previous location ---\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Misc.StreamExtensionMethods.ReadBytesAsync(Stream stream, Byte[] buffer, Int32 offset, Int32 count, TimeSpan timeout, CancellationToken cancellationToken)\n at MongoDB.Driver.Core.Connections.BinaryConnection.ReceiveBufferAsync(CancellationToken cancellationToken)\n --- End of inner exception stack trace ---\ndbug: MongoDB.Command[0]\n 1 1 p9cd6-3m-acceptance-lb.izjha.mongodb.net 27017 35647 5 2 Command failed insert 953047.9342 MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.\n ---> System.IO.IOException: Unable to read data from the transport connection: Connection timed out.\n ---> System.Net.Sockets.SocketException (110): Connection timed out\n --- End of inner exception stack trace ---\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)\n at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.System.Threading.Tasks.Sources.IValueTaskSource<System.Int32>.GetResult(Int16 token)\n at System.Net.Security.SslStream.EnsureFullTlsFrameAsync[TIOAdapter](CancellationToken cancellationToken)\n at System.Runtime.CompilerServices.PoolingAsyncValueTaskMethodBuilder`1.StateMachineBox`1.System.Threading.Tasks.Sources.IValueTaskSource<TResult>.GetResult(Int16 token)\n at System.Net.Security.SslStream.ReadAsyncInternal[TIOAdapter](Memory`1 buffer, CancellationToken cancellationTo... 
640855d4039f92600d14c171\n", "text": "We are experiencing strange timeouts when inserting a document into MongoDB Atlas with an Azure Serverless backend.\nOur software is built in .NET 7 and we are using version 2.19 of the MongoDB.Driver.\nSometimes the application takes a very long time to insert the document; through the driver's debug log we see the messages above.\nThe insert does complete, but it took 15+ minutes to do so.\nI created an example project on GitHub that demonstrates the issue; there you can find more details, including logs.\nWe are looking forward to your reply.", "username": "Jeffrey_Tummers" }, { "code": "", "text": "Hi @Jeffrey_Tummers and welcome to the MongoDB community forum!!\nIn the Production Notes for Azure, it was recommended to set the TCP timeout setting to 120 sec, down from its default 240 sec. Is it possible for you to try using this value and see if the issue persists?\nIf the above problem persists, could you help me with a few details of the deployment?\nBest Regards\nAasawari", "username": "Aasawari" }, { "code": "InsertOneAsync", "text": "Hi @Aasawari, thank you for your reply \nAnd sorry for my late reply, but we switched over to using a Google backend and everything has been running smoothly since!\nI didn't use the 120 sec timeout, but I'll run the same test and report back after.", "username": "Jeffrey_Tummers" }, { "code": "# reading the current TCP timeout setting\ncat /proc/sys/net/ipv4/tcp_keepalive_time\n\n# setting it (resets on my machine after reboot)\nsudo sysctl -w net.ipv4.tcp_keepalive_time=120\n", "text": "Hi @Aasawari, reporting back with an update around the 120 sec TCP timeout.\nI can confirm no timeouts happen when I configure the TCP timeout to 120 (the default is 7200 on my Linux machine).\nWhy is this setting not needed for the Azure stateful server instances?\nConfiguring the TCP timeout on each pod/server is not something we want, so we are sticking with the Google serverless instances, which don't require this setting.", "username": "Jeffrey_Tummers" } ]
Timeout on insert Mongo Atlas with Azure Serverless backend
2023-03-10T08:51:53.812Z
Timeout on insert Mongo Atlas with Azure Serverless backend
1,457
null
[ "node-js" ]
[ { "code": "Access to fetch at 'https://us-west-2.aws.data.mongodb-api.com/app\n/data-rrmea/endpoint/data/v1/action/find' \nfrom origin 'http://localhost:63342' has been blocked by CORS policy: \nResponse to preflight request doesn't pass access control check: No \n'Access-Control-Allow-Origin' header is present on the requested \nresource. If an opaque response serves your needs, set the request's \nmode to 'no-cors' to fetch the resource with CORS disabled.\nconst endpointUrl = `https://us-west-2.aws.data.mongodb-api.com/app/data-rrmea/endpoint/data/v1/action/find`;\n\nconst headers = {\n 'Content-Type': 'application/json',\n 'api-key': `${apiKey}`,\n 'Access-Control-Allow-Origin': '*',\n 'Access-Control-Allow-Methods': 'GET,HEAD,OPTIONS,POST,PUT,DELETE',\n 'Access-Control-Allow-Headers': 'Origin, X-Requested-With, Content-Type, Accept, Authorization'\n};\n\nvar payload = {\n 'dataSource': 'Cluster0',\n 'database': 'GoogleForms',\n 'collection': 'reponses'\n};\n\nconst options = {\n method: 'POST',\n headers,\n payload: JSON.stringify(payload)\n};\n\nfetch(endpointUrl, options)\n .then(response => response.json())\n .then(data => {\n const dataList = document.getElementById('data-list');\n console.log(JSON.stringify(data));\n data.forEach(item => {\n const li = document.createElement('li');\n li.textContent = JSON.stringify(item);\n dataList.appendChild(li);\n });\n })\n .catch(error => console.log(error));\n", "text": "Hello, I have a simple static website using HTML, CSS and JS. I am trying to connect to my MongoDB instance and have been running into the following error:My code is as follows:My application does not use NodeJS and would like to stick to it.", "username": "Alan_Mathew" }, { "code": "", "text": "Requesting help to drop this ticket, as I was able to resolve this using the Bearer Authentication.", "username": "Alan_Mathew" }, { "code": "", "text": "", "username": "Kushagra_Kesav" } ]
CORS and Atlas data API
2023-04-07T05:50:51.799Z
CORS and Atlas data API
717
null
[ "node-js", "next-js" ]
[ { "code": "", "text": "Dear community, I am facing a problem while starting a next.js project with mongodb: by default it builds in with typescript → OK this is normal, so I put the flag --js. If I don’t use “–example withmongodb” it works fine, project is creataed with JS instead of TS. (“npx create-next-app --js project_name”). However, with “–example with-mongodb” it keeps creating with TS, ignoring the “–js” flag apparently (“npx create-next-app --js --example with-mongodb project_name”).\nMay someone help me out with this?\nMany thanks in advance", "username": "Nicolas_ZARAGOZA" }, { "code": ".TSX.JS--js", "text": "Hi @Nicolas_ZARAGOZA,Welcome to the MongoDB Community forums “npx create-next-app --js --example with-mongodb project_name”This is basically happening because it fetches the file from the following Github repo: next.js/examples/with-mongodb at canary · vercel/next.js · GitHubAnd, here the files have been updated to .TSX from .JS. I’m sharing the GitHub PR for your reference: [Docs] Update mongodb example by HaNdTriX · Pull Request #39658 · vercel/next.js · GitHub. So providing the --js would probably not work here.I hope it answers your question.Let me know if you have any further queries.Regards,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "Hi Kushagra! Many thanks for the quick solution. Best regards, Nicolas", "username": "Nicolas_ZARAGOZA" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Project created with TS instead of JS in Next.js create-next-app --example withmongodb
2023-04-06T09:03:18.367Z
Project created with TS instead of JS in Next.js create-next-app --example withmongodb
1,234
null
[ "performance" ]
[ { "code": "", "text": "Hi,We are doing PoC for migration of our data from SQL Server to MongoDB. Our workload is performing far better on MongoDB as compared to SQL server. But before the final migration, we have some specific questions that need to be answered. The main thrust of the questions is centred around the idea as to what makes MongoDB perform better than SQL Server.also, it would be great if links to any supporting documentation could be provided that will help us explain the superior performance of MongoDB over SQL Server.", "username": "b_singh" }, { "code": "", "text": "Hi @b_singh, what is the basis of the performance comparison ? SQL Server is RDBMS Normal form Data Model,whereas MongoDB is Document NoSQL.and compare it ?", "username": "Dominic_Kumar" }, { "code": "", "text": "Hi @Dominic_Kumar, the questions were asked to know that is there anything architecturally different in MongoDB that will add to a better performance when compared to SQL server. These questions direct to specific areas in MongoDB and try to compare them with SQL Server, to find answer for better performance of MongoDB.We had changed the data model to de-normalized data form with embedded documents and arrays when we had designed our MongoDB schema. These questions were asked to get an in-depth view of MongoDB.", "username": "b_singh" }, { "code": "", "text": "Hi @b_singh, MongoDB and SQL Server are totally different database. SQL Server is RDBMS and MongoDB is NoSQL Database.For your Application, do you have Performance Baseline to compare with SQL Server (before) and MongoDB (current state) ??Since you have changed the Data Model from SQL Server to MongoDB, how is your application performning now ?", "username": "Dominic_Kumar" }, { "code": "", "text": "@b_singh\nWhat exact workload do you test? For example, Mongo can be drastically faster for writes at the cost of durability.", "username": "Vlad_H" }, { "code": "", "text": "MongoDB and SQL databases serve different purposes and have different strengths, so it’s not necessarily a matter of one being universally “better” than the other. However, MongoDB does offer several advantages over traditional SQL databases in certain contexts.One of the main advantages of MongoDB is its flexibility and scalability. MongoDB is a document-oriented database, which means that it stores data in flexible, semi-structured documents instead of tables with fixed schemas.@b_singh MongoDB also offers strong support for modern application development practices, such as microservices and containerization. It has a rich set of APIs and integrations that make it easy to work with other popular tools and frameworks, such as Node.js and Docker. Ultimately, the choice between MongoDB and SQL will depend on your specific use case and requirements. Explore more on MongoDB and SQL and other databases", "username": "Jacelyn_Sia" } ]
Why MongoDB has better performance than SQL Server
2020-05-16T08:58:48.791Z
Why MongoDB has better performance than SQL Server
7,342
null
[ "dot-net", "compass" ]
[ { "code": "", "text": "I’m using firely to create/read FHIR objects stored in the Atlas cloud. Things work fine in .net framework. Upgrading to .net Core and downloading the new drivers and serialize software, I am storing objects fine - - at least I see the FHIR objects correctly formed using Compass. When I read one of those objects as a bson document, now I get a bunch of junk (timestamp, machine, etc.) in what is returned which, of course, does not parse as a FHIR object. How do I get rid of the system generated data when reading a document with the new core driver ?", "username": "Dennis_Brox" }, { "code": "", "text": "Hi Dennis! I’m a PM on the .NET team and happy to help. I understand you’re experiencing some issues with your reads following upgrading from .NET Framework to .NET Core. Could you share (1) a code snippet that reproduces the issue in .NET Core and (2) a snippet that worked fine in .NET Framework as well as the driver version, server version and TFMs you’re using in each case?", "username": "Patrick_Gilfether1" }, { "code": "", "text": "How do I attach a log file with the calls and returned data? There’s some kind of “3 links for new members” error when I try to copy the text into the reply.Thanks.", "username": "Dennis_Brox" }, { "code": "var filter = Builders<Restaurant>.Filter\n .Eq(restaurant => restaurant.Name, \"Bagels N Buns\");\nvar restaurant = _restaurantsCollection.Find(filter).FirstOrDefault();\nConsole.WriteLine(restaurant);\n", "text": "\nScreen Shot 2023-03-31 at 2.36.16 PM1501×292 42 KB\n\nYou can attach files using the box in red. Alternatively you can use markup and paste the log file directly in here. For example:", "username": "Patrick_Gilfether1" }, { "code": "", "text": "New users aren’t allowed to upload files. 
See if you can follow this link to look at the file.Shared with Dropbox", "username": "Dennis_Brox" }, { "code": "Here I create the document\n\nvar collection = Life.GetCollection<BsonDocument>(collectiontype);\n\nresource = resource.Replace(q + \"id\" + q, q + \"\\_id\" + q);\n\nBsonDocument document = BsonDocument.Parse(resource);\n\ncollection.InsertOne(document);\n\n\n\nHere is what got written:\t\t\t\n\ndocument.ToString()\n\n\"{ \\\"resourceType\\\" : \\\"MedicationStatement\\\", \\\"identifier\\\" : [{ \\\"system\\\" : \\\"rx number/branch\\\", \\\"value\\\" : \\\"367950\\\", \\\"assigner\\\" : { \\\"reference\\\" : \\\"BC000001BN\\\", \\\"type\\\" : \\\"Organization\\\" } }], \\\"status\\\" : \\\"active\\\", \\\"category\\\" : { \\\"coding\\\" : [{ \\\"system\\\" : \\\"http://www.whocc.no/atc\\\", \\\"code\\\" : \\\"C09AA05\\\" }, { \\\"system\\\" : \\\"Vigilance\\\", \\\"code\\\" : \\\"24080405\\\" }], \\\"text\\\" : \\\"ATC\\\" }, \\\"medicationCodeableConcept\\\" : { \\\"text\\\" : \\\"RAMIPRIL\\\" }, \\\"subject\\\" : { \\\"reference\\\" : \\\"180000193\\\", \\\"type\\\" : \\\"Person\\\" }, \\\"effectiveDateTime\\\" : \\\"2023-03-31T08:48:44+00:00\\\", \\\"dateAsserted\\\" : \\\"2023-03-31T08:48\\\", \\\"informationSource\\\" : { \\\"reference\\\" : \\\"XXAWE/91\\\", \\\"type\\\" : \\\"Practitioner\\\", \\\"display\\\" : \\\"MACHINERY ALCOVE\\\" }, \\\"dosage\\\" : [{ \\\"sequence\\\" : 367950, \\\"text\\\" : \\\"Take.\\\", \\\"timing\\\" : { \\\"event\\\" : [\\\"2023-03-31T08:48\\\"], \\\"repeat\\\" : { \\\"boundsPeriod\\\" : { \\\"start\\\" : \\\"2023-03-31T08:48\\\", \\\"end\\\" : \\\"2023-05-30T08:48\\\" }, \\\"period\\\" : 10, \\\"periodUnit\\\" : \\\"d\\\"\n\n} }, \\\"asNeededBoolean\\\" : false, \\\"route\\\" : { \\\"coding\\\" : [{ \\\"system\\\" : \\\"http://snomed.info/sct\\\", \\\"code\\\" : \\\"261665006\\\", \\\"display\\\" : \\\"po\\\" }], \\\"text\\\" : \\\"none\\\" }, \\\"doseAndRate\\\" : [{ \\\"doseQuantity\\\" : { \\\"value\\\" : 10, \\\"unit\\\" : \\\"MG\\\" }, \\\"rateQuantity\\\" : { \\\"unit\\\" : \\\"daily\\\" } }] }] }\"\n\nHere is what I see on Compass, which is CORRECT in both the old and new versions.\n\n{\n\n\"\\_id\": {\n\n\"$oid\": \"64270173d513a771d2e3b5a2\"\n\n},\n\n\"resourceType\": \"MedicationStatement\",\n\n\"identifier\": [\n\n{\n\n\"system\": \"rx number/branch\",\n\n\"value\": \"367950\",\n\n\"assigner\": {\n\n\"reference\": \"BC000001BN\",\n\n\"type\": \"Organization\"\n\n}\n\n}\n\n],\n\n\"status\": \"active\",\n\n\"category\": {\n\n\"coding\": [\n\n{\n\n\"system\": \"http://www.whocc.no/atc\",\n\n\"code\": \"C09AA05\"\n\n},\n\n{\n\n\"system\": \"Vigilance\",\n\n\"code\": \"24080405\"\n\n}\n\n],\n\n\"text\": \"ATC\"\n\n},\n\n\"medicationCodeableConcept\": {\n\n\"text\": \"RAMIPRIL\"\n\n},\n\n\"subject\": {\n\n\"reference\": \"180000193\",\n\n\"type\": \"Person\"\n\n},\n\n\"effectiveDateTime\": \"2023-03-31T08:48:44+00:00\",\n\n\"dateAsserted\": \"2023-03-31T08:48\",\n\n\"informationSource\": {\n\n\"reference\": \"XXAWE/91\",\n\n\"type\": \"Practitioner\",\n\n\"display\": \"MACHINERY ALCOVE\"\n\n},\n\n\"dosage\": [\n\n{\n\n\"sequence\": 367950,\n\n\"text\": \"Take.\",\n\n\"timing\": {\n\n\"event\": [\n\n\"2023-03-31T08:48\"\n\n],\n\n\"repeat\": {\n\n\"boundsPeriod\": {\n\n\"start\": \"2023-03-31T08:48\",\n\n\"end\": \"2023-05-30T08:48\"\n\n},\n\n\"period\": 10,\n\n\"periodUnit\": \"d\"\n\n}\n\n},\n\n\"asNeededBoolean\": false,\n\n\"route\": {\n\n\"coding\": [\n\n{\n\n\"system\": \"http://snomed.info/sct\",\n\n\"code\": 
\"261665006\",\n\n\"display\": \"po\"\n\n}\n\n],\n\n\"text\": \"none\"\n\n},\n\n\"doseAndRate\": [\n\n{\n\n\"doseQuantity\": {\n\n\"value\": 10,\n\n\"unit\": \"MG\"\n\n},\n\n\"rateQuantity\": {\n\n\"unit\": \"daily\"\n\n}\n\n}\n\n]\n\n}\n\n]\n\n}\n\nHere is where I read the document back in:\n\nvar collection = FHIRSubs.Life.GetCollection<BsonDocument>(\"MedicationStatement\");\n\nvar meds = collection.Find(filterc).ToList();\n\n//foreach (BsonDocument n in meds)\n\nfor (n = 0; n < meds.Count; n++)\n\n{\n\nms = FHIRSubs.ReturnMed(meds[n]);\n\n`\t\t\t\t`\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\\_\n\n\n\nHere is what comes back.\t\t\t\t\n\n\n\nmeds[n]: monbdb driver v 4.0.30319\t-- CORRECT\n\n\"{ \\\"\\_id\\\" : ObjectId(\\\"64270173d513a771d2e3b5a2\\\"), \\\"resourceType\\\" : \\\"MedicationStatement\\\", \\\"identifier\\\" : [{ \\\"system\\\" : \\\"rx number/branch\\\", \\\"value\\\" : \\\"367950\\\", \\\"assigner\\\" : { \\\"reference\\\" : \\\"BC000001BN\\\", \\\"type\\\" : \\\"Organization\\\" } }], \\\"status\\\" : \\\"active\\\", \\\"category\\\" : { \\\"coding\\\" : [{ \\\"system\\\" : \\\"http://www.whocc.no/atc\\\", \\\"code\\\" : \\\"C09AA05\\\" }, { \\\"system\\\" : \\\"Vigilance\\\", \\\"code\\\" : \\\"24080405\\\" }], \\\"text\\\" : \\\"ATC\\\" }, \\\"medicationCodeableConcept\\\" : { \\\"text\\\" : \\\"RAMIPRIL\\\" }, \\\"subject\\\" : { \\\"reference\\\" : \\\"180000193\\\", \\\"type\\\" : \\\"Person\\\" }, \\\"effectiveDateTime\\\" : \\\"2023-03-31T08:48:44+00:00\\\", \\\"dateAsserted\\\" : \\\"2023-03-31T08:48\\\", \\\"informationSource\\\" : { \\\"reference\\\" : \\\"XXAWE/91\\\", \\\"type\\\" : \\\"Practitioner\\\", \\\"display\\\" : \\\"MACHINERY ALCOVE\\\" }, \\\"dosage\\\" : [{ \\\"sequence\\\" : 367950, \\\"text\\\" : \\\"Take.\\\", \\\"timing\\\" : { \\\"event\\\" : [\\\"2023-03-31T08:48\\\"], \\\"repeat\\\" : { \\\"boundsPeriod\\\" : { \\\"start\\\" : \\\"2023-03-31T08:48\\\", \\\"end\\\" : \\\"2023-05-30T\n\n08:48\\\" }, \\\"period\\\" : 10, \\\"periodUnit\\\" : \\\"d\\\" } }, \\\"asNeededBoolean\\\" : false, \\\"route\\\" : { \\\"coding\\\" : [{ \\\"system\\\" : \\\"http://snomed.info/sct\\\", \\\"code\\\" : \\\"261665006\\\", \\\"display\\\" : \\\"po\\\" }], \\\"text\\\" : \\\"none\\\" }, \\\"doseAndRate\\\" : [{ \\\"doseQuantity\\\" : { \\\"value\\\" : 10, \\\"unit\\\" : \\\"MG\\\" }, \\\"rateQuantity\\\" : { \\\"unit\\\" : \\\"daily\\\" } }] }] }\"\n\n\n\n\n\nmeds[n]: .core from nuget current -- GARBAGE\t\t\n\n\n\n\"{\\\"id\\\":{\\\"Timestamp\\\":1680277875,\\\"Machine\\\":13964199,\\\"Pid\\\":29138,\\\"Increment\\\":14923170,\\\"CreationTime\\\":\\\"2023-03-31T15:51:15Z\\\"},\\\"resourceType\\\":\\\"MedicationStatement\\\",\\\"identifier\\\":[{\\\"system\\\":\\\"rx 
number/branch\\\",\\\"value\\\":\\\"367950\\\",\\\"assigner\\\":{\\\"reference\\\":\\\"BC000001BN\\\",\\\"type\\\":\\\"Organization\\\"}}],\\\"status\\\":\\\"active\\\",\\\"category\\\":{\\\"coding\\\":[{\\\"system\\\":\\\"http://www.whocc.no/atc\\\",\\\"code\\\":\\\"C09AA05\\\"},{\\\"system\\\":\\\"Vigilance\\\",\\\"code\\\":\\\"24080405\\\"}],\\\"text\\\":\\\"ATC\\\"},\\\"medicationCodeableConcept\\\":{\\\"text\\\":\\\"RAMIPRIL\\\"},\\\"subject\\\":{\\\"reference\\\":\\\"180000193\\\",\\\"type\\\":\\\"Person\\\"},\\\"effectiveDateTime\\\":\\\"2023-03-31T08:48:44\\\\u002B00:00\\\",\\\"dateAsserted\\\":\\\"2023-03-31T08:48\\\",\\\"informationSource\\\":{\\\"reference\\\":\\\"XXAWE/91\\\",\\\"type\\\":\\\"Practitioner\\\",\\\"display\\\":\\\"MACHINERY ALCOVE\\\"},\\\"dosage\\\":[{\\\"sequence\\\":367950,\\\"text\\\":\\\"Take.\\\",\\\"timing\\\":{\\\"event\\\":[\\\"2023-03-31T08:48\\\"],\\\"repeat\\\":{\\\"boundsPeriod\\\":{\\\"start\\\":\\\"2023-03-31T08:48\\\",\\\"end\\\":\\\"2023-05-30T08:48\\\"},\\\"period\\\":10,\\\n\n\"periodUnit\\\":\\\"d\\\"}},\\\"asNeededBoolean\\\":false,\\\"route\\\":{\\\"coding\\\":[{\\\"system\\\":\\\"http://snomed.info/sct\\\",\\\"code\\\":\\\"261665006\\\",\\\"display\\\":\\\"po\\\"}],\\\"text\\\":\\\"none\\\"},\\\"doseAndRate\\\":[{\\\"doseQuantity\\\":{\\\"value\\\":10,\\\"unit\\\":\\\"MG\\\"},\\\"rateQuantity\\\":{\\\"unit\\\":\\\"daily\\\"}}]}]}\"\n", "text": "", "username": "Dennis_Brox" }, { "code": "64270173d513a771d2e3b5a2{ $oid: \"64270173d513a771d2e3b5a2\" }\n64270173d513a771d2e3b5a2ObjectId_id\"{ \\\"\\_id\\\" : ObjectId(\\\"64270173d513a771d2e3b5a2\\\")\n\"{\\\"id\\\":{\\\"Timestamp\\\":1680277875,\\\"Machine\\\":13964199,\\\"Pid\\\":29138,\\\"Increment\\\":14923170,\\\"CreationTime\\\":\\\"2023-03-31T15:51:15Z\\\"}\n_ididObjectId.ToString()ObjectId(\"64270173d513a771d2e3b5a2\")ToString()", "text": "Hi, @Dennis_Brox,Just jumping in to help @Patrick_Gilfether1 understand the issue that you’ve encountered.I see that you’ve inserted an ObjectId with value 64270173d513a771d2e3b5a2. ObjectId is a 12-byte identifier consisting of a timestamp, random value, and counter. In some implementations, the random value is a hash of the machine name and process ID (PID).In EJSON, the ObjectId is represented as, which is what Compass displays:When you parse and insert this ObjectId into MongoDB, it gets inserted as an ObjectId with value 64270173d513a771d2e3b5a2. When it is read back into the C# application, it will be deserialized as an ObjectId. No surprises so far. And you in fact see exactly this value in _id from your .NET Framework v4.0.30319 application:But when you attempt the same from your .NET Core application, you see:Two points to note…First the _id is now id, which is odd and unexpected. Given that you are working with BsonDocument objetcs, the driver would not be modifying the mapping of the BSON to change field names.Second the “GARBAGE” is actually the properties that make up the ObjectId. The ObjectId.ToString() implementation displays ObjectId(\"64270173d513a771d2e3b5a2\"). But that ObjectId corresponds to a Timestamp of 1680277875, Machine of 13964199, etc. The discrepancy might be as simple as whether your debugger is configured to display an object’s properties or its ToString() representation.Hopefully that helps. 
If you continue to see discrepancies between the output from different versions of the driver, please provide a self-contained, runnable repro demonstrating the problem including the .NET version, .NET/C# Driver version, operating system, and any other relevant data so that we can investigate further.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "", "text": "Unfortunately you reply does not answer the question: “why does reading the bson document give a different result with .net 6 than framework”? Doing an insert without _id set, which is what I do, generates it in your database. When the bson document is read, I convert it to “id” so it parses in FHIR. I don’t want “how the _id is generated when I read the document”, I want the _id value returned as it is in framework. There is no “debugger” involved, just a c# program reading data. It is hard to believe that is no longer possible with .net core.", "username": "Dennis_Brox" }, { "code": "_id_id_ididFHIRSubs.ReturnMed(meds[n]);using System;\nusing MongoDB.Bson;\n\npublic static class Program\n{\n public static void Main()\n {\n var json = \"{ _id: { '$oid': '64270173d513a771d2e3b5a2' } }\";\n\n var bson = BsonDocument.Parse(json);\n\n Console.WriteLine(bson);\n }\n}\n{ \"_id\" : ObjectId(\"64270173d513a771d2e3b5a2\") }\n_idBsonDocumentusing System;\nusing MongoDB.Bson;\nusing MongoDB.Driver;\n\npublic static class Program\n{\n public static void Main()\n {\n var client = new MongoClient();\n var db = client.GetDatabase(\"test\");\n var coll = db.GetCollection<BsonDocument>(\"sample\");\n\n var filter = Builders<BsonDocument>.Filter.Empty;\n coll.DeleteMany(filter);\n\n coll.InsertOne(new BsonDocument{{ \"_id\", ObjectId.Parse(\"64270173d513a771d2e3b5a2\")}});\n\n var query = coll.Find(filter);\n foreach (var doc in query.ToList())\n {\n Console.WriteLine(doc);\n }\n }\n}\n{ \"_id\" : ObjectId(\"64270173d513a771d2e3b5a2\") }\nObjectId", "text": "All documents in MongoDB require an _id. When you save an object without an _id, the .NET/C# Driver will generate one for you and update the object before sending the insert command to the database. I’m not sure what you mean by:When the bson document is read, I convert it to “id” so it parses in FHIR.How do you convert the _id to an id for FHIR. I assume that this is implemented in FHIRSubs.ReturnMed(meds[n]); but I don’t see the definition for this method in your sample code.I tried parsing the provided ObjectId from JSON using .NET Framework 4.7.2, .NET Core 2.1, .NET Core 3.1, and .NET 6 using the following code:The displayed result using any of those TFMs is:Saving this _id to the database and reading it back as a BsonDocument:This produces the same results on all TFMs as well:We have been unable to reproduce the issue that you are describing. Both from a JSON string and reading a document in from the database, .NET Framework 4.7.2 and .NET 6 both return the same, expected ObjectId. In order to investigate further, we require a self-contained, runnable repro of the issue that you are observing so that we can debug through it.Sincerely,\nJames", "username": "James_Kovacs" } ]
Mongodb Core problems
2023-03-29T16:25:05.740Z
Mongodb Core problems
883
null
[ "replication", "java", "atlas-cluster", "serverless" ]
[ { "code": "\t\tMongoClient mongoClient = MongoClients.create(settings);\n", "text": "Trying to connect Atlas server less database using below proposed connection string:mongodb+srv://USER_NAME:PASSWORD@MONGO_CLUSTER.6zmxa.mongodb.net/MONGO_DATABASE?retryWrites=true&w=majority;getting below exceptionA TXT record is only permitted to contain the keys [authsource, replicaset], but the TXT record for ‘serverlessinstance0.6zmxa.mongodb.net’ contains the keys [loadbalanced, authsource]POM.xmlCodeConnectionString connectionString = new ConnectionString(MONGO_URL);\nMongoClientSettings settings = MongoClientSettings.builder()\n.applyConnectionString(connectionString)\n.applyToSslSettings(builder → \nbuilder.enabled(true))\n.serverApi(ServerApi.builder()\n.version(ServerApiVersion.V1)\n.build())\n.build();any help would be highly appreciated", "username": "Lakshmi_Narayana" }, { "code": " String uri = \"mongodb+srv://<username>:<password>@serverless1.abcde.mongodb.net/myFirstDatabase?retryWrites=true&w=majority\";\n MongoClient mongoClient = MongoClients.create(uri) ;\n MongoDatabase database = mongoClient.getDatabase(\"test\");\n", "text": "Hi @Lakshmi_Narayana ,You probably took this code from the Atlas UI,If you look in the quick guide documentation you can have a simpler connection which works fine for me:thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "I suspect you have a conflict somewhere in your build configuration and you’re loading classes from a previous release of the driver. Check for older releases of either mongodb-driver-core or mongo-java-driver, and exclude them.", "username": "Jeffrey_Yemin" }, { "code": "", "text": "thanks. Issue resolved.", "username": "Lakshmi_Narayana" }, { "code": "", "text": "@Lakshmi_Narayana how did you resolved the issue ?", "username": "fdedc567f1f7da01a44f7e7b0ec3f8f" } ]
Connecting ATLAS Serverless using Java and DNS txt record issue
2022-03-08T09:57:16.168Z
Connecting ATLAS Serverless using Java and DNS txt record issue
4,682
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.4.20-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.4.19. The next stable release 4.4.20 will be a recommended upgrade for all 4.4 users.Fixed in this release:4.4 Release Notes | All Issues | All DownloadsAs always, please let us know of any issues.– The MongoDB Team", "username": "Britt_Snyman" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.20-rc0 is released
2023-04-06T21:15:14.560Z
MongoDB 4.4.20-rc0 is released
946
null
[ "data-api" ]
[ { "code": "Exception: Request failed for https://data.mongodb-api.com returned code 404. \n\nTruncated server response: {\"error\":\"cannot find app using Client App ID 'data-rrmea'\"} \n\n(use muteHttpExceptions option to examine full response)\n", "text": "I am running into the following intermitently error while using mongodb data API. This is run via Google App Scripts.It works for some requests and fails for other. Any idea what could be going wrong here? Thanks.", "username": "Alan_Mathew" }, { "code": "", "text": "hey @Alan_Mathew can you share your appId URL with me please so I can take a look? The URL of your data api in a web browser would work", "username": "Ian_Ward" }, { "code": "", "text": "Hi Ian. Sure thing - https://us-west-2.aws.data.mongodb-api.com/app/data-rrmea/endpoint/data/v1. Please let me know if you need anything else.", "username": "Alan_Mathew" }, { "code": "", "text": "This is working as expected, you should not be using the global URL (data.mongodb-api.com) but instead the regional URL for their single region app (https://us-west-2.aws.data.mongodb-api.com/app/data-rrmea/endpoint/data/v1) - you have your Data API configuration set to a single local region in AWS Oregon. That’s where you need to be queryingWhere did you get the URL you are querying?", "username": "Ian_Ward" }, { "code": "", "text": "Thanks for letting me know. The URL that I was querying, I got it by navigating to “Data API” on the left hand side of my mongodb console.The URL that you provided, I see in the under the “Settings” tab of “Data API”, next to “Data API Supported Versions”.", "username": "Alan_Mathew" }, { "code": "", "text": "Quick update: After using the endpoint that you provided, it is working as expected.", "username": "Alan_Mathew" } ]
"error":"cannot find app using Client App ID"
2023-04-04T07:16:03.672Z
&ldquo;error&rdquo;:&rdquo;cannot find app using Client App ID&rdquo;
1,128
null
[ "aggregation", "node-js" ]
[ { "code": "db.myCollection.aggregate([\n {\n $project: {\n size_bytes: { $bsonSize: \"$$ROOT\" },\n size_KB: {\n $divide: [{ $bsonSize: \"$$ROOT\" }, 1000],\n },\n size_MB: {\n $divide: [\n { $bsonSize: \"$$ROOT\" },\n 1000000,\n ],\n },\n },\n },\n])\n", "text": "I can see the document size in BSON using the below command.But when I insert an object to the document from nodeJs I need to calculate the size of the document, just to make sure it does not exceed the 16MB document limit. If it does, then I wanted to create another document and keep inserting multiple objects into the newly created document. So every time I do not want to run aggregation on the database instance before inserting a new object into the document.I tried using the BSON library which can calculate the BSON size of the JSON object before inserting it into the document. But it seems it does not calculate BSON size correctly.Can someone please suggest to me the correct way to calculate the BSON size of the object which I want to insert into the document?", "username": "Prasanna_Sasne" }, { "code": "calculateObjectSizeconst bson = require(\"bson\");\nnew bson.calculateObjectSize(doc)\n", "text": "I tried using the BSON library which can calculate the BSON size of the JSON object before inserting it into the document. But it seems it does not calculate BSON size correctly.The calculateObjectSize method from the BSON library should be able to do this:Can you share how you were doing this or why you believe the size reported is incorrect?", "username": "alexbevi" }, { "code": "", "text": "I tried the same way. But the size of all objects present in one document does not match with size which I calculate from the mongo command line interface using $bsonSize.Thank you for your help. But now I am using bson from mongo. It works with the same approach. But the library of bson which I installed using ‘npm install bson’ does not provide me correct object size.", "username": "Prasanna_Sasne" }, { "code": "calculateObjectSizeaggregate$bsonSizecalculateObjectSize", "text": "Hi @Prasanna_Sasne, we recently opened NODE-5153 as BigInts aren’t properly supported by calculateObjectSize.In the event there are additional issues you may have surfaced as a result of your tests, would you be able to share a complete failing test that showcases:This should help us properly identify where the discrepancy in these calculations may be occurring.", "username": "alexbevi" }, { "code": "", "text": "Sure. I will add all details. Thank you", "username": "Prasanna_Sasne" } ]
Calculate bsonSize of the object in NodeJs
2023-03-30T23:25:12.513Z
Calculate bsonSize of the object in NodeJs
1,021
null
[ "node-js", "mongoose-odm", "containers" ]
[ { "code": "version: '3'\nservices:\n redis:\n image: 'redis:alpine'\n container_name: redis\n logging:\n driver: 'none'\n mongodb:\n image: mongo\n container_name: mongodb\n logging:\n driver: 'none'\n environment:\n - PUID=1000\n - PGID=1000\n volumes:\n - mongodb:/data/db\n ports:\n - 27017:27017\n restart: unless-stopped\n api:\n image: 'ghcr.io/novuhq/novu/api:0.11.0'\n depends_on:\n - mongodb\n - redis\n container_name: api\n environment:\n NODE_ENV: ${NODE_ENV}\n API_ROOT_URL: ${API_ROOT_URL}\n DISABLE_USER_REGISTRATION: ${DISABLE_USER_REGISTRATION}\n PORT: ${API_PORT}\n FRONT_BASE_URL: ${FRONT_BASE_URL}\n MONGO_URL: ${MONGO_URL}\n REDIS_HOST: ${REDIS_HOST}\n REDIS_PORT: ${REDIS_PORT}\n REDIS_DB_INDEX: 2\n REDIS_CACHE_SERVICE_HOST: ${REDIS_CACHE_SERVICE_HOST}\n REDIS_CACHE_SERVICE_PORT: ${REDIS_CACHE_SERVICE_PORT}\n S3_LOCAL_STACK: ${S3_LOCAL_STACK}\n S3_BUCKET_NAME: ${S3_BUCKET_NAME}\n S3_REGION: ${S3_REGION}\n AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}\n AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}\n JWT_SECRET: ${JWT_SECRET}\n STORE_ENCRYPTION_KEY: ${STORE_ENCRYPTION_KEY}\n SENTRY_DSN: ${SENTRY_DSN}\n NEW_RELIC_APP_NAME: ${NEW_RELIC_APP_NAME}\n NEW_RELIC_LICENSE_KEY: ${NEW_RELIC_LICENSE_KEY}\n ports:\n - '3000:3000'\n ws:\n image: 'ghcr.io/novuhq/novu/ws:0.11.0'\n depends_on:\n - mongodb\n - redis\n container_name: ws\n environment:\n PORT: ${WS_PORT}\n NODE_ENV: ${NODE_ENV}\n MONGO_URL: ${MONGO_URL}\n REDIS_HOST: ${REDIS_HOST}\n REDIS_PORT: ${REDIS_PORT}\n JWT_SECRET: ${JWT_SECRET}\n ports:\n - '3002:3002'\n web:\n image: 'ghcr.io/novuhq/novu/web:0.11.0'\n depends_on:\n - api\n container_name: web\n environment:\n REACT_APP_API_URL: ${API_ROOT_URL}\n REACT_APP_ENVIRONMENT: ${NODE_ENV}\n REACT_APP_WIDGET_EMBED_PATH: ${WIDGET_EMBED_PATH}\n REACT_APP_DOCKER_HOSTED_ENV: 'true'\n ports:\n - 4200:4200\n widget:\n image: 'ghcr.io/novuhq/novu/widget:0.11.0'\n depends_on:\n - api\n - web\n container_name: widget\n environment:\n REACT_APP_API_URL: ${API_ROOT_URL}\n REACT_APP_WS_URL: ${REACT_APP_WS_URL}\n REACT_APP_ENVIRONMENT: ${NODE_ENV}\n ports:\n - 4500:4500\n embed:\n depends_on:\n - widget\n image: 'ghcr.io/novuhq/novu/embed:0.11.0'\n container_name: embed\n environment:\n WIDGET_URL: ${WIDGET_URL}\n ports:\n - 4701:4701\nvolumes:\n mongodb: ~\n\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [NestFactory] Starting Nest application...\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [InstanceLoader] JwtModule dependencies initialized +1ms\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [InstanceLoader] AppModule dependencies initialized +0ms\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [InstanceLoader] LoggerModule dependencies initialized +1ms\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [InstanceLoader] TerminusModule dependencies initialized +0ms\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [InstanceLoader] HealthModule dependencies initialized +0ms\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM LOG [InstanceLoader] SocketModule dependencies initialized +0ms\nws | [Nest] 272 - 04/06/2023, 4:46:43 AM ERROR [ExceptionHandler] getaddrinfo EAI_AGAIN mongodb\nws | MongooseServerSelectionError: getaddrinfo EAI_AGAIN mongodb\nws | at NativeConnection.Connection.openUri (/usr/src/app/node_modules/.pnpm/[email protected]/node_modules/mongoose/lib/connection.js:825:32)\nws | at /usr/src/app/node_modules/.pnpm/[email protected]/node_modules/mongoose/lib/index.js:414:10\nws | at /usr/src/app/node_modules/.pnpm/[email protected]/node_modules/mongoose/lib/helpers/promiseOrCallback.js:41:5\nws | at new 
Promise (<anonymous>)\nws | at promiseOrCallback (/usr/src/app/node_modules/.pnpm/[email protected]/node_modules/mongoose/lib/helpers/promiseOrCallback.js:40:10)\nws | at Mongoose._promiseOrCallback (/usr/src/app/node_modules/.pnpm/[email protected]/node_modules/mongoose/lib/index.js:1288:10)\nws | at Mongoose.connect (/usr/src/app/node_modules/.pnpm/[email protected]/node_modules/mongoose/lib/index.js:413:20)\nws | at DalService.<anonymous> (/usr/src/app/libs/dal/dist/dal.service.js:21:45)\nws | at Generator.next (<anonymous>)\nws | at /usr/src/app/libs/dal/dist/dal.service.js:8:71\nws | at new Promise (<anonymous>)\nws | at __awaiter (/usr/src/app/libs/dal/dist/dal.service.js:4:12)\nws | at DalService.connect (/usr/src/app/libs/dal/dist/dal.service.js:16:16)\nws | at /usr/src/app/apps/ws/src/shared/shared.module.ts:44:24\nws | at Generator.next (<anonymous>)\nws | at /usr/src/app/apps/ws/dist/src/shared/shared.module.js:14:71\n[Nest] 344 - 04/06/2023, 4:50:47 AM ERROR [ExceptionHandler] getaddrinfo EAI_AGAIN mongodb", "text": "Hi, I am completely new to MongoDB, I was working on the setup of an open-source notification system called NOVU, and I was deploying with docker,this is the docker-compose fileand this is the errorPlease note the Error message line\n[Nest] 344 - 04/06/2023, 4:50:47 AM ERROR [ExceptionHandler] getaddrinfo EAI_AGAIN mongodb\nany ideas?", "username": "Taha_Babi" }, { "code": "EAI_AGAIN", "text": "Hi @Taha_Babi,ERROR [ExceptionHandler] getaddrinfo EAI_AGAIN mongodbThe error message implies that there is a problem resolving the hostname of your MongoDB server. The error code EAI_AGAIN indicates that the DNS lookup failed.Can you please confirm if the MongoDB container is running? It appears that the container is not running, could you check the logs for any error messages related to the MongoDB container?Best,\nKushagra", "username": "Kushagra_Kesav" } ]
Mongodb docker is not working
2023-04-06T04:52:02.546Z
Mongodb docker is not working
1,923
null
[ "java" ]
[ { "code": "Command failed with error 40415 (Location40415): 'BSON field 'listDatabases.apiVersion' is an unknown field.' on server ... \"errmsg\": \"BSON field 'listDatabases.apiVersion' is an unknown field.\", \"code\": 40415, \"codeName\": \"Location40415\"\nConnectionString connectionString = new ConnectionString(url);\n MongoClientSettings settings = MongoClientSettings.builder()\n .applyConnectionString(connectionString)\n .serverApi(ServerApi.builder()\n .version(ServerApiVersion.V1)\n .build())\n .build();\n MongoClient client = MongoClients.create(settings);\n", "text": "The above request results in the following error on an Atlas version 4.2 cluster:Here’s how I set up the MongoDB client in the Java code:Is there something I’m missing here?", "username": "Julio_Montes_de_Oca" }, { "code": "", "text": "The Stable API was introduced in MongoDB 5.0, so it’s necessary to upgrade your Atlas cluster to 5.0+ before starting to use it.Regards,\nJeff", "username": "Jeffrey_Yemin" } ]
Java MongoDB Sync 4.7.2 driver error with listDatabaseNames request
2023-04-06T03:23:28.208Z
Java MongoDB Sync 4.7.2 driver error with listDatabaseNames request
649
null
[ "queries", "atlas-search" ]
[ { "code": "{\n \"analyzer\": \"lucene.whitespace\",\n \"searchAnalyzer\": \"lucene.whitespace\",\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"client\": {\n \"type\": \"string\"\n },\n \"fulltext\": {\n \"type\": \"string\"\n },\n \"dateRange\": {\n \"type\": \"date\"\n }\n }\n }\n}\n[\n {\n \"$search\":{\n \"index\":\"fulltext\",\n \"compound\":{\n \"filter\":{\n \"range\":{\n \"path\":\"dateRange\",\n \"gte\":\"ISODate(\"\"2023-03-01T00:00:00.000Z\"\")\",\n \"lte\":\"ISODate(\"\"2023-03-31T18:29:59.000Z\"\")\"\n }\n },\n \"must\":{\n \"phrase\":{\n \"query\":\"ABC\",\n \"path\":[\n \"client\"\n ]\n }\n },\n \"should\":[\n {\n \"phrase\":{\n \"query\":\"on-time\",\n \"path\":[ \n \"fulltext\"\n ]\n }\n },\n {\n \"phrase\":{\n \"query\":\"on time\",\n \"path\":[ \n \"fulltext\"\n ]\n }\n }\n ]\n }\n }\n }\n]\n", "text": "I am trying to search for words like - “on time”, “on-time” and \"on.\" I am using whitespace analyzer to get the exact result and using phrase rather than text as I need the word together, but it is not giving me accurate result it gave me some result which has only “on”and My search query is like this -Why I am getting a result also without the phrase “on time” , I mean data which has on will also included in result why ?", "username": "Utsav_Upadhyay2" }, { "code": "db.collection.find({},{_id:0})\n[\n { fulltext: 'on time' },\n { fulltext: 'on' },\n { fulltext: 'on-time' }\n]\n$searchrangemustdb.collection.aggregate([ { $search: { index: 'ftindex', compound: { \"should\": [ { \"phrase\": { \"query\": \"on-time\", \"path\": [ \"fulltext\"] } }, { \"phrase\": { \"query\": \"on time\", \"path\": [ \"fulltext\"] } }] } } },{$project:{_id:0}}])\n[\n { fulltext: 'on time' },\n { fulltext: 'on-time' }\n]\nrangemustfulltext", "text": "Hi Utsav,and My search query is like this -Can you provide the output you’re getting with the search query you have provided and identify which document’s you are / aren’t expecting in the results?but it is not giving me accurate result it gave me some result which has only “on”I have tried with 3 sample documents:With a similar $search query and same analyzers (removing the range and must options), I received:I didn’t include the range and must options as sample documents for me to test with weren’t available and I only created 3 simple documents containing the fulltext field which had string values of those which you provided in this post as an example in my test environment.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "@Jason_Tran , Below are some documents in which I am trying to find phrases and words like - “on time”, “on-time”above are 4 documents if you search together for both words - “on time”, “on-time” with should operator you can see you get all these results. 
I do not understand why, though I am using phrases and whitespace analyzer", "username": "Utsav_Upadhyay2" }, { "code": "\"client\"\"dateRange\"\"client\" : \"ABC\"testdb> db.collection.find({},{_id:0})\n[\n {\n fulltext: 'I think airlines have always underpriced their product Tony Fernandes, CEO, Air Asia, on airlines taking advantage of the post-pandemic travel boom to lock in higher airfares.',\n client: 'ABC'\n },\n {\n fulltext: 'AI Express, AirAsia India move to unified reservation system'\n },\n {\n fulltext: 'The ad film that captures the essence perfectly as it shows a group of friends on a road trip in a yellow vintage Microbus is conceptualised by Makani Creatives'\n },\n {\n fulltext: 'CIVIL Aviation Minister Jyotiraditya Scindia, who was addressing the first-time voters at ‘Yuva Samvada’ organised by East Point College of Engineering on Tuesday, said that youth are the future of India, and they should make their political choices carefully'\n }\n]\n$searchminimumShouldMatchctestdb> b /// NO minimumShouldMatch used\n[\n {\n '$search': {\n index: 'ftindex',\n compound: {\n must: [\n { phrase: { query: 'ABC', path: [ 'client' ] } }\n ],\n should: [\n { phrase: { query: 'on-time', path: [ 'fulltext' ] } },\n { phrase: { query: 'on time', path: [ 'fulltext' ] } }\n ]\n }\n }\n },\n { '$project': { _id: 0 } }\n]\ntestdb> c /// minimumShouldMatch of 1 used\n[\n {\n '$search': {\n index: 'ftindex',\n compound: {\n must: [\n { phrase: { query: 'ABC', path: [ 'client' ] } }\n ],\n should: [\n { phrase: { query: 'on-time', path: [ 'fulltext' ] } },\n { phrase: { query: 'on time', path: [ 'fulltext' ] } }\n ],\n minimumShouldMatch: 1\n }\n }\n },\n { '$project': { _id: 0 } }\n]\nbtestdb> db.collection.aggregate(b)\n[\n {\n fulltext: 'I think airlines have always underpriced their product Tony Fernandes, CEO, Air Asia, on airlines taking advantage of the post-pandemic travel boom to lock in higher airfares.',\n client: 'ABC'\n }\n]\nctestdb> db.collection.aggregate(c)\n/// no documents returned\ntestdb>\nminimumShouldMatch", "text": "Thanks for providing those strings. However, in future, please provide sample documents that match the aggregation you provided at the start. For example, you include \"client\" and \"dateRange\" fields in your initial aggregation which could affect the results. It also makes it easier to reproduce the behaviour you’re experiencing as the sample documents could be more readily imported into any test environments.In saying so, I tried to create several test documents one containing \"client\" : \"ABC\":Please take a look at the two inidividual $search stages i’ve used. The main difference being that minimumShouldMatch is used in one (var c):Output using var b:Output using var c:It could possibly be just minimumShouldMatch is required to be used for your use case but please let me know if works.above are 4 documents if you search together for both words - “on time”, “on-time” with should operator you can see you get all these results. I do not understand why, though I am using phrases and whitespace analyzerAre you expecting none of the documents you provided to be returned?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "solved, thanks, I used miniShouldMatch", "username": "Utsav_Upadhyay2" }, { "code": "", "text": "Glad to hear and thanks for marking the solution Utsav ", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
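For reference, a consolidated sketch of the fix this thread converges on: the original compound query plus minimumShouldMatch. Index and field names are taken from the thread; the date-range filter is omitted for brevity:

```javascript
db.collection.aggregate([
  {
    $search: {
      index: "fulltext",
      compound: {
        must: [{ phrase: { query: "ABC", path: ["client"] } }],
        should: [
          { phrase: { query: "on-time", path: ["fulltext"] } },
          { phrase: { query: "on time", path: ["fulltext"] } }
        ],
        minimumShouldMatch: 1 // at least one "should" phrase must match
      }
    }
  }
])
```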
Analyzer is not working as expected in Atlas search text?
2023-03-30T16:39:20.406Z
Analyzer is not working as expected in Atlas search text?
745
https://www.mongodb.com/…_2_1024x575.jpeg
[ "node-js" ]
[ { "code": "", "text": "#HelpWhen I am trying to connect to the MongoDB database it shows this error. Here I have submitted my index.js code and submitted also error reason before console.log(err) and after console.log(err). Can anyone suggest some techniques to solve this type of error?I am using a Wi-Fi (ISP) connection for browsing the internet and I had also tried mobile data but it cannot solve it.\n\nScreenshot (142)_LI1366×768 84.3 KB\n\n\nScreenshot (143)1366×768 104 KB\n\n\nScreenshot (144)_LI1366×768 94.5 KB\n\n\nScreenshot (145)_LI1366×768 111 KB\n\n\nScreenshot (146)_LI1366×768 90.1 KB\n\n\nScreenshot (147)_LI1366×768 109 KB\n", "username": "Md_Shihab" }, { "code": "", "text": "Hello @Md_Shihab, you can refer these MongoDB NodeJS Driver examples and try to use them as a guide for your coding - like, connecting to the database and performing some operations on the database.", "username": "Prasad_Saya" }, { "code": "", "text": "if the function is asynchronous check if you’ve used the await method before the appropriate function", "username": "Haneya_Seemein" }, { "code": "", "text": "my problem solved after updating nodemon", "username": "Sifat_Niloy" } ]
MongoTopologyClosedError: Topology is closed
2022-02-21T06:26:12.593Z
MongoTopologyClosedError: Topology is closed
22,638
https://www.mongodb.com/…3_2_1024x254.png
[ "node-js" ]
[ { "code": "", "text": "\nimage1184×294 10.4 KB\n", "username": "Hung_Viet" }, { "code": "", "text": "Hello @Hung_Viet,I don’t think this is related to MongoDB.\nIf you can read the error carefully it says “The package ‘vite’ is not found”, which means you have not installed the ‘vite’ package, can you try installing it?", "username": "turivishal" }, { "code": "", "text": "sorry for not mongo related but this is the best help community i know. i tried installing vite but the node_module folder is not showing as it should.", "username": "Hung_Viet" } ]
How to fix this? I need help
2023-04-05T16:19:26.570Z
How to fix this? I need help
670
null
[ "security" ]
[ { "code": "match / databases / { database } / documents {\n match / student / { documentId }{\n allow read;\n allow write: if ((isSuperAdmin() || isManager() || (isTeacher() && request.resource.data.diff(resource.data).affectedKeys().hasOnly(['grade'])) && noStudentWithSameDocId())\n }\n function noStudentWithSameDocId() {\n return get(/databases/$(database) / documents / student / $(documentId)) == null;\n }\n}\nconst student = { \n name: string\n class: string\n grade: number\n fees: number\n} \nsuper_admin\nmanager\nteacher\n(cannot delete the document)", "text": "We are migrating from Firestore to MongoDB.In Firestore they have security rules which specify which role can access which set of collections / sub-collections, documents, or even specific fields in a documentExample :so for example we have a student collectionand 3 roles :super_admin role can edit all the fields in the student document and can delete a student documentthe manager role can edit all the fields in the student document but cannot delete a student documentthe teacher role can only edit the grade field in the student document (cannot delete the document) and cannot read the fees field from the studentThank you in advance", "username": "Hassan_Edelbi" }, { "code": "$redactfeesconst express = require('express');\nconst app = express();\n\n// Define middleware to check if the user has the \"super_admin\" role\nconst checkSuperAdmin = (req, res, next) => {\n if (!req.user.roles.includes('super_admin')) {\n return res.status(403).send('Forbidden');\n }\n next();\n};\n\n// Define middleware to check if the user has the \"manager\" role\nconst checkManager = (req, res, next) => {\n if (!req.user.roles.includes('manager')) {\n return res.status(403).send('Forbidden');\n }\n next();\n};\n\n// Define middleware to check if the user has the \"teacher\" role\nconst checkTeacher = (req, res, next) => {\n if (!req.user.roles.includes('teacher')) {\n return res.status(403).send('Forbidden');\n }\n next();\n};\n\n// Route for the super_admin to edit all fields and delete the document\napp.put('/student/:id', checkSuperAdmin, (req, res) => {\n});\n\napp.delete('/student/:id', checkSuperAdmin, (req, res) => {\n});\n\n// Route for the manager to edit all fields but not delete the document\napp.put('/student/:id', checkManager, (req, res) => {\n});\n\n// Route for the teacher to edit the grade field only\napp.put('/student/:id', checkTeacher, (req, res) => {\n // Update the \"grade\" field in the student document with the new value\n // And Return the updated student document (without the \"fees\" field)\n});\n", "text": "Hi @Hassan_Edelbi,Welcome to the MongoDB Community forums MongoDB has a built-in feature called “Field-Level Redaction” that can be used to restrict access to certain fields within a document. It uses the $redact pipeline operator to restrict the contents of the documents based on information stored in the documents themselves. This feature will be particularly useful in your case if you need to hide the “fees” field from users who only have a “teacher” role.If you are creating a user at the database level, you can utilize the built-in roles. However, for application-level users, I’ll recommend using middleware in your application code. 
In your application code, you can define the roles and their corresponding permissions, and then use middleware to verify that the user making the request has the required permissions before allowing the request to proceed.\nThe code snippet above shows this with Express.js middleware. Please note that it is written for the sample document above and might change for your documents. Also, I would suggest thorough testing before using it in production.\nI hope it helps!\nBest,\nKushagra", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
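The reply above names Field-Level Redaction but only shows the middleware side. Below is a hedged sketch of the $redact side. It rests on two assumptions that are not in the original thread: the application injects the caller's role, and subdocuments to protect carry an acl array naming the roles allowed to read them (a scalar like fees would need to live inside such a subdocument to be prunable):

```javascript
const userRole = "teacher"; // assumption: resolved by the app's auth layer

db.student.aggregate([
  {
    $redact: {
      $cond: {
        // Keep levels whose acl includes the caller's role; levels without
        // an acl field default to being visible to everyone.
        if: { $in: [userRole, { $ifNull: ["$acl", [userRole]] }] },
        then: "$$DESCEND", // keep this level and keep checking nested levels
        else: "$$PRUNE"    // drop this level, e.g. a protected fees subdocument
      }
    }
  }
])
```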
Field level access control - Firestore to MongoDB
2023-04-04T21:41:50.733Z
Field level access control - Firestore to MongoDB
1,247
null
[ "golang" ]
[ { "code": "", "text": "I am totally new to MongoDB and i am trying to map the planet example data to golang structs with the original MongoDb golang driver and i would greatly appreciate the help.I ran into a problem as i was trying to map the temperature data, which exists in the database as int32, double and null, to a golang struct.To be specific, the planet temperature data is a sub document and has 3 fields, min, mean and max. Each field can exist in some documents as int, in some as double and in some as null. Google was no help so after that i asked ChatGPT a series of questions, the best solution was:i did not get it to work yet, but is that the best way to deal with that kind of Situation?\nAny help, pointers or suggestions are most welcome.\nThank you", "username": "Marcus_Bonhagen1" }, { "code": "float64*float64niltype surfaceTemperatureC struct {\n\tMin *float64\n\tMax *float64\n\tMean *float64\n}\n", "text": "Hey @Marcus_Bonhagen1 welcome to the forum and thanks for the question! The Go BSON library allows unmarshaling BSON “int32” and “double” types into a Go float64. If you also need to support BSON “null”, you can use a *float64, which will be set to nil for BSON “null”.To unmarshal documents like the ones from the sample_guides.planets dataset into a Go struct, use a struct like this:Check out an example on the Go Playground here.", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Golang Driver, how to map a document to a struct if document fields can have multiple types
2023-04-05T05:58:57.420Z
Golang Driver, how to map a document to a struct if document fields can have multiple types
1,358
null
[ "node-js", "typescript" ]
[ { "code": "@mongodb-js/charts-embed-domERROR Failed to compile with 3 errors6:59:02 PM\n\n error in /home/runner/work/xavier/xavier/node_modules/@mongodb-js/charts-embed-dom/dist/declarations/src/types.d.ts\n\n ERROR Build failed with errors.\nERROR in /home/runner/work/xavier/xavier/node_modules/@mongodb-js/charts-embed-dom/dist/declarations/src/types.d.ts(5,29):\n5:29 Type expected.\n 3 | LIGHT = \"light\"\n 4 | }\n > 5 | export declare type Theme = `${THEME}`;\n | ^\n 6 | export declare enum SCALING {\n 7 | FIXED = \"fixed\",\n 8 | SCALE = \"scale\"\n\n error in /home/runner/work/xavier/xavier/node_modules/@mongodb-js/charts-embed-dom/dist/declarations/src/types.d.ts\n\nERROR in /home/runner/work/xavier/xavier/node_modules/@mongodb-js/charts-embed-dom/dist/declarations/src/types.d.ts(10,31):\n10:31 Type expected.\n 8 | SCALE = \"scale\"\n 9 | }\n > 10 | export declare type Scaling = `${SCALING}`;\n | ^\n 11 | export declare enum ENCODING {\n 12 | BASE64 = \"base64\",\n 13 | BINARY = \"binary\"\n\n error in /home/runner/work/xavier/xavier/node_modules/@mongodb-js/charts-embed-dom/dist/declarations/src/types.d.ts\n\nERROR in /home/runner/work/xavier/xavier/node_modules/@mongodb-js/charts-embed-dom/dist/declarations/src/types.d.ts(15,36):\n15:36 Type expected.\n 13 | BINARY = \"binary\"\n 14 | }\n > 15 | export declare type EncodingType = `${ENCODING}`;\n | ^\n 16 | export declare type PlainObject = Record<string, unknown>;\n 17 | /**\n 18 | * Shared options for embedding\n\nnpm ERR! code ELIFECYCLE\nnpm ERR! errno 1\nnpm ERR! [email protected] build:staging: `vue-cli-service build --mode staging`\nnpm ERR! Exit status 1\nnpm ERR! \nnpm ERR! Failed at the [email protected] build:staging script.\nnpm ERR! This is probably not a problem with npm. There is likely additional logging output above.\n\nnpm ERR! A complete log of this run can be found in:\nnpm ERR! /home/runner/.npm/_logs/2023-03-27T18_59_02_757Z-debug.log\nError: Process completed with exit code 1.\n\"build:staging\": \"vue-cli-service build --mode staging\",\n", "text": "Hello guys,I was trying to use the package @mongodb-js/charts-embed-dom served by mongodb in this doc in my Vue application and it runs normally in my localhost. But when I try to build it i got some typescript errors inside this package.I’m using:“vue”: “^2.6.11”,\n“typescript”: “~3.9.3”,and my build command is:", "username": "Matheus_Martins" }, { "code": "4.4.4", "text": "Hey @Matheus_Martins, based on this answer on Stack Overflow it appears this issue may be due to the version of TypeScript being used. Can you try updating to at least 4.4.4 of TypeScript to see if that addresses the issue?", "username": "alexbevi" } ]
Typescript error in Vue build
2023-04-05T18:11:25.001Z
Typescript error in Vue build
1,287
null
[ "aggregation", "compass" ]
[ { "code": " [{ \"$match\" : { \"$and\" : [{ \"creator.identity_id\" : 1}, { \"memlevel\" : \"MEMBLOCK\"}, { \"$or\" : [{ \"is_session\" : true}, { \"is_draft\" : true}]}]}}, { \"$project\" : { \"memlabel\" : 0}}, { \"$sort\" : { \"modified_at_utc\" : -1}}, { \"$limit\" : 25}]\n[{ \"$match\" : { \"$and\" : [{ \"creator.identity_id\" : 1}, { \"memlevel\" : \"MEMBLOCK\"}, { \"$or\" : [{ \"is_session\" : true}, { \"is_draft\" : true}]}]}}, { \"$project\" : { \"memlabel\" : 0, \"metadata\" : { \"feed_text\" : 0}}}, { \"$sort\" : { \"modified_at_utc\" : -1}}, { \"$limit\" : 25}] \n{\n \"_id\": {\n \"$oid\": \"62b73c651f9bc7269439d9d4\"\n },\n \"memlevel\": \"MEMBLOCK\",\n \"type\": \"MEMORY\",\n \"event\": \"luther.text\",\n \"memlabel\": \"This is where her is, Claire. That's why what you can found is that a government funded now and like like a Obama's comments are coming from or going, oh, and that makes sense. So you go No. One in time on your MySpace or but although I also know the like, I go by what I saw and and the those reps on this one is I said, well, that's why we found the government on this country. where her name is.\",\n \"created_at_utc\": {\n \"$date\": {\n \"$numberLong\": \"1656175717001\"\n }\n },\n \"modified_at_utc\": {\n \"$date\": {\n \"$numberLong\": \"1656175717001\"\n }\n },\n \"start_time_utc\": {\n \"$date\": {\n \"$numberLong\": \"1656175710069\"\n }\n },\n \"end_time_utc\": {\n \"$date\": {\n \"$numberLong\": \"1656175710069\"\n }\n },\n \"creator\": {\n \"identity_id\": 1,\n \"name\": \"person\",\n \"domain_id\": 0,\n \"lead_id\": 0,\n \"propertyBag\": {\n \"email\": {\n \"key\": \"email\",\n \"value\": \"[email protected]\n },\n \"picture\": {\n \"key\": \"picture\",\n \"value\": \"asdfasfd.jpg\"\n }\n }\n },\n \"owner\": {\n \"identity_id\": 1,\n \"name\": \"person\",\n \"domain_id\": 1,\n \"lead_id\": 0,\n \"propertyBag\": {\n \"email\": {\n \"key\": \"email\",\n \"value\": \"email.personal.ai\"\n },\n \"picture\": {\n \"key\": \"picture\",\n \"value\": \"asdff.jpg\"\n }\n }\n },\n \"source\": {\n \"name\": \"WebApp\",\n \"type\": \"text\",\n \"device\": \"desktop\",\n \"os\": \"\"\n },\n \"feed_event\": [],\n \"visibility\": \"PRIVATE\",\n \"scope\": \"PERSONAL\",\n \"metadata\": {\n \"title\": \"\",\n \"feed_text\": \"<p>This is where her is, Claire. That's why what you can found is that a government funded now and like like a Obama's comments are coming from or going, oh, and that makes sense. So you go No. One in time on your MySpace or but although I also know the like, I go by what I saw and and the those reps on this one is I said, well, that's why we found the government on this country. where her name is. </p>\",\n \"is_draft_delete\": \"true\"\n },\n \"status\": \"CREATED\",\n \"is_draft\": true,\n \"_class\": \"com.personalai.MemoryAPI.Common.Model.FeedMemory\"\n}\n", "text": "Problem: Nested Projections before a sort stage do not carry to a sort stage. In fact they seem to ruin the rest of the projections. When run in Compass, it seems the projection takes out the fields as intended but I am still getting a out of memory error (did not enable allow disk use). As you can see below, just excluding a non nested field seems to work perfectly. 
One thing to add is that metadata does hold dynamic fields.\nThe code block above shows, in order: the sample query that works, the sample query with a nested projection that results in the sort out-of-memory error, and a sample document.", "username": "Aidan_Tan" }, { "code": "", "text": "You are sorting on one of two kinds of fields:\n1. A field that is present in the original document.\n2. A computed field, not present in the original document, that is the result of $project or $set/$addFields.\nIf you $sort as in point 1, sort as soon as possible and before doing any alteration to the original document.\nIf you $sort as in point 2, it will always be an in-memory sort, since you cannot have an index on computed fields. If you cannot sort your computed field in memory for any reason, store the computed value and create an index; then it becomes a $sort as in point 1.\nIn the document and pipelines you shared, you are sorting on a field that is present in the original document, so simply move the $sort to the right place, right after the $match. The same applies to your $limit: move it after the $sort and before the $project. Why $project all matching documents if you are only interested in 25?\nWhen altering a top-level field, the optimizer is smart enough not to alter the document right away. When it is done on a nested field, it is much harder to keep track of the excluded vs. included sub-fields, so the optimizer probably does not optimize.\nThe right thing to do is to $sort sooner than you did.", "username": "steevej" } ]
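Applying that advice to the pipeline from the question gives roughly the sketch below: $sort on the stored field and $limit moved ahead of the $project, and the nested exclusion written as a dotted path:

```javascript
[
  { $match: { $and: [
      { "creator.identity_id": 1 },
      { memlevel: "MEMBLOCK" },
      { $or: [{ is_session: true }, { is_draft: true }] }
  ] } },
  { $sort: { modified_at_utc: -1 } }, // stored field, so the sort is index-eligible
  { $limit: 25 },                     // only 25 documents reach the $project
  { $project: { memlabel: 0, "metadata.feed_text": 0 } }
]
```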
Mongo Aggregation Projections not working for nested fields
2023-04-05T02:14:49.195Z
Mongo Aggregation Projections not working for nested fields
1,200
null
[ "aggregation", "queries", "python" ]
[ { "code": " \"array1.pollClosing.first.timestamp\": {\n \"$dateToString\": {\n \"format\": \"%Y-%m-%dT%H:%M:%SZ\",\n \"date\": \"$states.pollClosing.first.timestamp\"\n }\n },\nSAMPLE DOCUMENT\n\n{\n \"_id\" : \"20221108\",\n \"array1\" : [\n {\n \"id\" : \"11\",\n \"pollClosing\" : {\n \"first\" : {\n \"timestamp\" : ISODate(\"2022-11-09T00:00:00.000+0000\")\n }\n }\n },\n {\n \"id\" : \"13\",\n \"pollClosing\" : {\n \"first\" : {\n \"timestamp\" : ISODate(\"2022-11-09T03:00:00.000+0000\")\n }\n }\n } ]\n}\n", "text": "Given the sample document (below), would like to convert all occurrences of field timestamp into string without using an unwind/group stage. The timestamp field appears in more than one array (e.g.: array1, array2, etc.), so need something generic.I can convert it using the the project stage when not using an array as shown below.Thanks\nP", "username": "Pramod" }, { "code": "", "text": "Take a look at $map.You input: will be $array1.Your in: expression will use $dateToString like you did.", "username": "steevej" } ]
Convert all occurrences of DateTime in a document to String, including arrays
2023-04-04T20:19:11.944Z
Convert all occurrences of DateTime in a document to String, including arrays
568
https://www.mongodb.com/…_2_1024x651.jpeg
[ "node-js", "data-api" ]
[ { "code": "// This function is the endpoint's request handler.\n\nexports = function({ query, headers, body}, response) {\n if(!body){\n throw new Error(\"Bad Request. Body Missing\");\n }\n const {image, name, about, resume, twitter, type} = body;\n console.log(\"This is body\")\n return body\n \n if(!image || !name || !about || !type){\n throw new Error(\"Bad Request. Required parameters are missing\")\n }\n \n const contentTypes = headers[\"Content-Type\"];\n const doc = context.services.get(\"test-buddy\").db(\"marketleague\").collection(\"speakers\").insertOne(body)\n \n return doc;\n};\n", "text": "I am using mongo data api for one of my project. The issue I am facing is when I am making call to the endpoint specified in mongodb realm, it shows empty body. And when i return the body from the backend it shows me a giant buffer.Now according to mongodb, if you want to post data, you have to send it like this\n\nimage1920×1221 197 KB\nBut if I show you the body it looks like this\n\nimage1920×1221 106 KB\nAnd if I return the body directly, it shows me a giant string. You can refer to postman image.PS: You can say I forgot to mention collection name, db name & all, but it works fine for get api as I am not using those keys anywhere. These api’s are strictly for a single database.", "username": "Joy_Gupta" }, { "code": "", "text": "Hello @Joy_Gupta ,I responded to a similar question in below thread, please check if that works for you.Regards,\nTarun", "username": "Tarun_Gaur" } ]
Can't send data to data api
2023-04-04T14:27:15.054Z
Can&rsquo;t send data to data api
1,383
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi all,I am new to MongoDB and planning my first bigger project where I want to use it.So, while figuring out how I could design the database I got some questions in my head. This project relates to kind of article management, so I wondered what I should use, embedded or referenced documents.In a SQL database I would have at least 8 tables:With following relationships in SQL:So, in MongoDB I see three designs I could use – for sure there are more which I don’t see.The first:I would aspect this could have low performance and complex queries when adding new data to the second reference.The second:Probably the best way?The third:I would aspect this could end up in complex queries and data mess when getting some specific information.I would be happy if someone can get a newbie into the right lane Thanks, in advanced.", "username": "Jake_Dickson" }, { "code": "", "text": "Hi Jake - I also am very new to MongoDB, my prior DB utilization being with Apple CoreData which is more like a database interface or driver into sqlite.So with mongo I faced the same dilemma as you, quite a few tables (20), one with one to one relationship and the rest with one to many. Initially I thought I needed some many to many but as I got more familiar with the querying I decided it wasn’t necessary.My first pass was with 1 database and 20 collections (documents). I found this very easy to work with, querys were pretty straight forward, editing/updating of data went smoothly as did creation of new document elements. Only document deletion was more than trivial. My biggest problem was seeding the database. The project requires a database with 30,000 ish collections which have default data, the user then goes forward with a process in which the user gathers data and inserts into the database. On completion the entire database is downloaded with the default data replaced by current data. With this design 12 of the collections must be loaded with the 30000 default items before the remaining 8 can be loaded with the related objectIDs for the one to many relationships. At least that was my experience. This appeared to be a monumental task so I put that on hold and moved to a second approach.My second pass was to structure a database with one collection. I embedded some misc data and then nested the remaining. Actually pretty much like your Second. The database seeding then became very straight forward, I just created a JSON file with the desired nesting and used mongoimport to load my database. I am in the process of redesigning my app around this new database structure and have gotten to the stage where queries are pretty straight forward but I haven’t developed much beyond fetching the data. I haven’t yet got a clear picture of how to interface with the user to gather data.I work alone and would love to have someone to kibitz with, interested?Jim Rublee", "username": "Jim_Rublee" } ]
Newbie need help with first bigger project (Design the project)
2023-04-05T17:10:42.850Z
Newbie need help with first bigger project (Design the project)
593
null
[ "node-js", "mongoose-odm", "atlas-cluster" ]
[ { "code": "import express from 'express';\nimport bodyParser from 'body-parser';\nimport mongoose from 'mongoose';\nimport cors from 'cors';\n\nconst app = express();\n\napp.use(bodyParser.json({limit: \"30mb\", extended: true}));\napp.use(bodyParser.urlencoded({limit: \"30mb\", extended: true}));\napp.use(cors());\n\nimport MongoClient from 'mongodb';\nimport ServerApiVersion from 'mongodb';\nconst uri = \"mongodb+srv://pratham:[email protected]/?retryWrites=true&w=majority\";\nconst client = new MongoClient(uri, { useNewUrlParser: true, useUnifiedTopology: true, serverApi: ServerApiVersion.v1 });\nclient.connect(err => {\n const collection = client.db(\"test\").collection(\"devices\");\n // perform actions on the collection object\n client.close();\n});\n", "text": "First of all, Thanks in Advance and a huge sorry if my Doubt sounds you stupid.\nBut , you know I literally wasted whole day on this error.\nI am new to MongoDB and is trying to connect our MongoDB cluster with ExpressJs server. And endup in lot of errors.\nI will owe you for any help.", "username": "PRATHMESH_SANJAYRAO_NIKAM" }, { "code": "", "text": "It is always better if you share the error you get.I get authentication failed when I tried your URI. It means either the user name and/or the password is wrong for the given cluster.", "username": "steevej" }, { "code": "import express from 'express';\nimport * as dotenv from 'dotenv';\nimport cors from 'cors';\n\nimport connectDB from './mongodb/connect.js'\n\ndotenv.config();\n\nconst app = express();\napp.use(cors());\napp.use(express.json({ limit: '50mb' }));\n\napp.get('/', (req, res) => {\n res.send({ message: 'Hello World' });\n});\n\nconst startServer = async () => {\n try {\n // connect to the database...\n connectDB(process.env.MONGODB_URL);\n\n app.listen(8080, () => console.log(\"Server started on port http://localhost:8080\"));\n } catch (error) {\n console.log(error);\n }\n}\n\nstartServer();\nimport mongoose from 'mongoose';\n\nconst connectDB = (url) => {\n mongoose.set('strictQuery', true);\n\n mongoose.connect(url)\n .then(() => console.log('MongoDB connected'))\n .catch((error) => console.log('Not connected'));\n}\n\nexport default connectDB;\n> [email protected] start\n> nodemon index.js\n\n[nodemon] 2.0.22\n[nodemon] to restart at any time, enter `rs`\n[nodemon] watching path(s): *.*\n[nodemon] watching extensions: js,mjs,json\n[nodemon] starting `node index.js`\nServer started on port http://localhost:8080\nNot connected\n", "text": "Thanks steevej for letting me know,\nForget about the code I have posted , I made some changes after that code :.\n├── client\n│ ├── README.md\n│ ├── package-lock.json\n│ ├── package.json\n│ ├── public\n│ │ ├── favicon.ico\n│ │ ├── index.html\n│ │ ├── logo192.png\n│ │ ├── logo512.png\n│ │ ├── manifest.json\n│ │ └── robots.txt\n│ └── src\n│ ├── App.css\n│ ├── App.js\n│ ├── components\n│ └── index.js\n└── server\n├── db\n│ └── config.js\n├── index.js\n├── models\n│ └── Book.js\n├── mongodb\n│ └── connect.js\n├── package-lock.json\n└── package.jsonHere’s my new code for index.js :Here is my cluster’s credentials :\nusername : pratham786\nfollowing is my connection string :\nMONGODB_URL = mongodb+srv://pratham786:pratham%[email protected]/?retryWrites=true&w=majorityHere’s my connect.js file code :Here’s what actually I am getting after doing npm start :I was expecting that by backend will get connected with the cluster, but its not why?\nPlease I need your help really bad.\nThanks in advance!", "username": 
"PRATHMESH_SANJAYRAO_NIKAM" }, { "code": " mongoose.connect(url)\n .then(() => console.log('MongoDB connected'))\n .catch((error) => console.log('Not connected'));\n", "text": "Most API return error codes, error messages or throw exceptions to help investigation issues with code or configuration. When you code likewhere you write Not connected rather than the error you hide all the information you can get to help you resolve the issue.When we writeshare the error you getwe want the real error message that the API provides you. Not a message that simply say it failed.If you prefer to still hide the cause of the failure with the Not connected, you may always use Compass or mongosh with your URI to see what is the cause of Not connected. I did and your modified URI still produceauthentication failedI suggest that rather than trying to make your code work, try your URI with Compass or mongosh until you can connect. Once, you can connect with software that produce real error messages, try using the same URI with your code.", "username": "steevej" }, { "code": " mongoose.connect(url)\n .then(() => console.log('MongoDB connected'))\n .catch((error) => console.log('Not connected'));\n mongoose.connect(url)\n .then(() => console.log('MongoDB connected'))\n .catch((error) => console.log(error));\nrsnode index.js", "text": "I replaced :with :nodemon index.js\nPS C:\\Users\\DELL-PC\\Desktop\\Memories\\server> npm [email protected] start\nnodemon index.js", "username": "PRATHMESH_SANJAYRAO_NIKAM" }, { "code": "", "text": "@steevej\nThanks!!! a lot for your help.\nI read some community post where people had same issue even after the did everything correct.\nIt was the problem of DNS resolver. I changed my DNS server from settings and Yup MongoDB got connected.", "username": "PRATHMESH_SANJAYRAO_NIKAM" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
ExpressJs server not getting connected with the MongoDB atlas Cluster
2023-04-04T14:42:07.410Z
ExpressJs server not getting connected with the MongoDB atlas Cluster
963
null
[ "aggregation", "atlas-triggers" ]
[ { "code": "", "text": "Hi,We recently started using Data Federation and Triggers in order to export data as Parquet files to an S3 bucket. It’s working perfectly fine for the most part, however, for one of our collections we are getting the following error:(DuplicateKey) cannot write document with duplicate key: “countryIso” to parquet file: cannot create BSONSchema from document with duplicate keys: countryIsoThe documents in that collection contains a “deliveryAddress” field which contains an array of embedded documents, each containing fields such as “countryIso”, “city”, and so on. Is this what’s causing it? I’m fairly sure that I read that embedded documents should be supported. If not, how do we get around this?Thanks in advance,\nTom", "username": "Tom_Lindelius" }, { "code": "", "text": "Were you able to solve this issue? I am facing the same problem.", "username": "Javian_Martin" }, { "code": "", "text": "Unfortunately not, but I also haven’t really looked into it any further. We could do without that particular data, so we ended up using a $project stage to strip those fields.", "username": "Tom_Lindelius" } ]
Duplicate Key Using Data Federation $out to S3
2022-11-11T10:36:40.102Z
Duplicate Key Using Data Federation $out to S3
1,801
null
[ "compass" ]
[ { "code": "", "text": "Hi !, I’m facing this issue while trying to install the MongoDB community server for learning purposes. I’m using windows 10 node version 18.12.0. I have tried some of the solutions available online but nothing works for me. can anybody please help me out with this? I’ll be very thankful.", "username": "Vishwajeet_Singh4" }, { "code": "", "text": "Have you choosen default dir path or custom ones?\nCheck this jira ticket.Some bug associated with shorter dbpath dirs\nhttps://jira.mongodb.org/browse/SERVER-47138or it could be due to a missing dll or lack of admin privs or some invalid entry in cfg file like #mp\nAlso check stackoverflow threads on this topic", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I choose the default path while installing the DB, and I’ve tried most of the solutions online like on StackOverflow but it didn’t work for me. I’ve all the admin privileges and also checked the mongod.cfg file but could not locate anything like #mp in it.", "username": "Vishwajeet_Singh4" }, { "code": "", "text": "My issue got resolved. I followed some processes mentioned on StackOverflow and it worked. Thanks for your support Ramachandra.", "username": "Vishwajeet_Singh4" }, { "code": "", "text": "And what helped?\nI have the same issue with 5.0.16 upwards", "username": "L_Z" }, { "code": "", "text": "Hi @L_Z , you can refer to this link: installation - Service 'MongoDB Server'(MongoDB) failed to start.Verify that you have sufficient privileges to start system services - Stack Overflow", "username": "Vishwajeet_Singh4" }, { "code": "", "text": "My issue were multiple Windows Server Users, I used the right one then it worked.", "username": "L_Z" } ]
Service 'mongodb server' failed to start. verify that you have sufficient privileges to start system services
2023-03-31T20:39:37.727Z
Service &lsquo;mongodb server&rsquo; failed to start. verify that you have sufficient privileges to start system services
1,227
null
[ "java" ]
[ { "code": "", "text": "Any references will be helpful. Please suggest how to integrate this", "username": "Amaresh_Sahoo" }, { "code": "", "text": "Hi @Amaresh_Sahoo , have you seen our autocomplete tutorial?", "username": "amyjian" } ]
Can anyone suggest how to integrate mongo atlas search and autocomplete with java
2023-04-05T12:26:00.144Z
Can anyone suggest how to integrate mongo atlas search and autocomplete with java
631
null
[]
[ { "code": "", "text": "I have that case:\nI work with Romanian language and I want to fin by this “stef” this value “Ştefan”For single index:Maybe you have some ideas how to solve this problem", "username": "Maxim_Leanca" }, { "code": "", "text": "Hi @Maxim_Leanca , can you share your index definition (in JSON) and an example query?", "username": "amyjian" } ]
Searching by a part of word with specific characters
2023-04-05T10:42:09.308Z
Searching by a part of word with specific characters
545
null
[ "kafka-connector" ]
[ { "code": "", "text": "We are seeing significant latency with our source connector during high load periods. This delays can reach over an hour. We are currently using v1.9.1, but recently upgraded from v1.6 and noticed the issue on both versions. We believe we’ve isolated the issue to the connector and not the MongoDB instance, Kafka cluster, or network.I can share the config if desired, but we’ve played around with several of the attributes, so it may be more helpful starting with what attributes we could modify to help with this scenario and I can share current and previous values we’ve set.", "username": "Calvin_Moore" }, { "code": "", "text": "What version of MongoDB are you using?", "username": "Robert_Walters" }, { "code": "", "text": "We are currently using Mongo v4.0.4.", "username": "Calvin_Moore" }, { "code": "", "text": "You are using an old version of MongoDB, lots of performance improvements were made to Change Streams over the releases. If you can get to the latest that would be helpful. https://www.mongodb.com/docs/manual/administration/change-streams-production-recommendations/#change-stream-optimization", "username": "Robert_Walters" }, { "code": "", "text": "Unfortunately that’s not an option for the short term, though it is on our radar. That being said, we don’t believe the issue is with MongoDB. When we open a change stream locally, we can see an order of magnitude more updates coming through than we see flowing through the Kafka Connector.", "username": "Calvin_Moore" }, { "code": "", "text": "To add to the above, we are not seeing issues with CPU, memory, or network utilization on MongoDB. We also don’t see significant replication lag (usually <1sec).", "username": "Calvin_Moore" }, { "code": "", "text": "We added JMX metrics support to 1.9, https://www.mongodb.com/docs/kafka-connector/current/monitoring/The documentation goes into detail on the different metrics to monitor for performance.Check the Kafka Connect logs for errors/warnings as well.Are you in a sharded environment?", "username": "Robert_Walters" }, { "code": "", "text": "Upgrading to v1.9 was a thought we had, too. Unfortunately, it’s not working well for us: MongoDB Kafka Connector Logs and Metrics not Showing After Upgrading to 1.9.1Our MongoDB instance is not sharded. Our Kafka Connectors are deployed in Kubernetes and spread across 6 pods.", "username": "Calvin_Moore" }, { "code": "linger.msbatch.size", "text": "We were able to solve this. We’re not sure what did it, but we modified the linger.ms and batch.size settings for the Connector. We also found a very inefficient process that was putting quite a bit of load on Mongo every couple of minutes and improved that system.I also posted an update on the JMX metrics thread.", "username": "Calvin_Moore" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Significant Latency with Mongo Kafka Connector
2023-03-28T00:22:23.851Z
Significant Latency with Mongo Kafka Connector
1,578
null
[ "kafka-connector" ]
[ { "code": "", "text": "We recently upgraded our Mongo Kafka Connector from v1.6.1 to v1.9.1 to debug issues we were seeing using the new metrics that were added. After upgrading we stopped seeing logs for some of our collections/connectors. Those same collections/connectors that aren’t showing logs also don’t show metrics. We get some logs/metrics from some connectors, but not all.We’ve compared the config between a connector that’s working and a connector that’s not working and haven’t noticed any differences other than the collection name. It does not appear to always be the same connectors either.For some background, we are using the Mongo Kafka Connector as a source connector to feed into changes to some of our collections into Kafka. There are a total of 49 collections, so 49 source connectors. These source connectors are deployed to a Kubernetes cluster with a total of 6 pods used for the connectors.My current best guess is only 1 connector per pod is logging, but I’m not sure if that’s accurate.", "username": "Calvin_Moore" }, { "code": "", "text": "This was solved. The issue was with our metrics ingestion tool hitting a limit. We are using DD and there’s an annotation to increase the metric limit for JMX metrics. The Mongo-specific stats, though, only show up as logs and aren’t entirely useful since they are identified as an incrementing integer, which is difficult to correlate to a connector when spread across 6 nodes.", "username": "Calvin_Moore" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Kafka Connector Logs and Metrics not Showing After Upgrading to 1.9.1
2023-03-27T23:01:07.180Z
MongoDB Kafka Connector Logs and Metrics not Showing After Upgrading to 1.9.1
1,081
null
[ "server" ]
[ { "code": "{\"t\":{\"$date\":\"2023-03-30T00:03:11.041+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"terminate() called. An exception is active; attempting to gather more information\"}}\njournalctl -u mongod.serviceMar 30 00:03:11 ip-xxx-xxx-xxx-xxx mongod[27818]: {\"t\":{\"$date\":\"2023-03-30T00:03:11.933+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\",Mar 30 14:30:05 ip-xxx-xxx-xxx-xxx systemd[1]: Stopping MongoDB Database Server...\nsystemctl status mongod.servicesudo journalctl -u mongod.service● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: active (running) since Mon 2023-03-27 11:14:27 UTC; 72h ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 29624 (mongod)\n CGroup: /system.slice/mongod.service\n └─29624 /usr/bin/mongod --config /etc/mongod.conf\n# mongod.conf\nstorage:\n dbPath: /var/lib/mongodb\n journal:\n enabled: true\n# engine:\n# mmapv1:\n# wiredTiger:\n\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\nnet:\n port: 27017\n bindIp: 0.0.0.0\n\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n#operationProfiling:\n#replication:\n#sharding:\n## Enterprise-Only Options:\n#auditLog:\n#snmp:\n", "text": "OS: Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-1099-aws x86_64)\nMongoDB: v4.4.5MongoDB experiences fatal exceptions such this one logged in mongod.logAfter that, the mongod.service log shows systemd indicating that it is stopping mongod (output from journalctl -u mongod.service)But checking the status of the service with systemctl status mongod.service shows that it is still active (output from sudo journalctl -u mongod.service) but there are not further entries in mongod.log.My mongod.conf is pretty cookie cutter, as far as I can tell (will include it below).Is there anything I can do get mongod to shutdown and restart properly? Or anything I can do to help mongod recovery from the exception?This is my first post in this community so, if I haven’t included something that would be helpful, please just let me know and I will add it as quickly as I can.Thank you!", "username": "Nathaniel_brewer" }, { "code": "", "text": "Try with latest mongodb version\nCheck this thread", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Ah yes. I do always see those same FTDC log lines. I will start working on updating mongo to v6 in a test env, but would like to try disabling FTDC in production and see if that helps. Do you think that’s a good thing to try?", "username": "Nathaniel_brewer" }, { "code": "", "text": "Yes, you could try that. I guess you will get rid of these log lines (because FTDC is creating them), but the Mongo will be still crashing. You should check ulimit (if you are on Linux), antivirus (if any) activity for Mongo directories, and sure, disk space and RAM memory utilization. Keep in mind that disk space and memory utilization could be temporarily high, which means you should have more space than it is consuming at the load.", "username": "Benjamin_Beganovic" }, { "code": "", "text": "Great! I will carry on updating mongodb. Hopefully that will do the trick \nThank you for helping.", "username": "Nathaniel_brewer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongod service doesn't stop and restart after fatal exception
2023-04-04T14:53:35.007Z
Mongod service doesn&rsquo;t stop and restart after fatal exception
1,321
null
[ "queries", "node-js" ]
[ { "code": "", "text": "I am having an error while sending signup request to bakend (node.js + mongo).\nerror - Query.prototype.exec() no longer accepts a callback", "username": "Rahul_Sutar" }, { "code": "mongodb-legacymongodb-legacy", "text": "@Rahul_Sutar, as per the release announcement at MongoDB Nodejs Driver 5.0.0 Released, callbacks were removed in favour of a Promise-based API.To ease the migration to a Promise-only approach when using the Node.js driver, callback support is available via the mongodb-legacy package. You can read more about this change in the Optional callback support migrated to mongodb-legacy section of the migration guide.", "username": "alexbevi" } ]
Error in Mongo connection
2023-04-05T05:57:53.640Z
Error in Mongo connection
1,212
null
[ "devops" ]
[ { "code": "", "text": "Hi Mongo Team,We are unable to start mongo db. It says ‘’ ERROR: child process failed, exited with 51\"We are using MongoDB shell version: 3.2.22Error Logs:\n2021-07-03T15:12:43.248+0000 E STORAGE [initandlisten] WiredTiger (2) [1625325163:248109][31381:0x7f31adb9fdc0], file:collection-233-3971070436943107\n128.wt, WT_SESSION.open_cursor: /data/mongo_data/mongo/collection-233-3971070436943107128.wt: handle-open: open: No such file or directory\n2021-07-03T15:12:43.248+0000 E STORAGE [initandlisten] no cursor for uri: table:collection-233-3971070436943107128\n2021-07-03T15:12:43.248+0000 F - [initandlisten] Invalid access at address: 0x58\n2021-07-03T15:12:43.258+0000 F - [initandlisten] Got signal: 11 (Segmentation fault).Could you please help us to start mongo without any data loss.", "username": "Vishal_Yadav1" }, { "code": "", "text": "I had the sam issue. I was using wsl2 and switched to wsl1 which resulted in the crush of the db server. Acording to Microsoft docs, MongoDB can be installed to the wsl2. My opinion is that it is because of the file-system.", "username": "Mashxurbek_N_A" } ]
Getting error code 51
2021-07-03T15:36:20.124Z
Getting error code 51
3,895
null
[ "ops-manager" ]
[ { "code": "Starting pre-flight checks\nmongodb-mms[18716]: Ops Manager may only be used with a backing MongoDB database that is at least at version 4.4.0: Config{loadBalance=false, encryptedCredentials=false, ssl='false', dbNames='[mmsdbautomation, mmsdbserverlog, mmsdbpings, monitoringdiagnostics, mmsdbprofile, mmsdbjobs, agentlogs, automationcore, monitoringstatus, automationstatus, mmsdbrealmrrd, mmsdbserverlessrrd, chartsmetadata, mmsdbprovisionlog, mmsdb, mmsrt, mmsdbrrd, mmsdbconfig, mmsdblogcollection, mongologsmetadata, backupusage, mmsdbagentlog, mmsdbbilling, backuplogs, backupstatus, mmsdbmetering, mmsdbautomationlog, ndsstatus, cloudconf, backupdb, nds, mmsdbqueues, mmsdbprovisioning]', uri=mongodb://localhost:23456/}\nmongodb-mms[18716]: Pre-flight checks failed. Service can not start.\nmongodb-mms[18716]: Preflight check failed.\nmongodb-mms.service: Main process exited, code=exited, status=1/FAILURE\nmongodb-mms.service: Failed with result 'exit-code'.\n", "text": "Hi,\nI m trying to install mongo ops manager on ubuntu 18 for Mongo server version 4.0.24, when I m starting the service (using: sudo service mongodb-mms start ) it fails at pre-flight checks fails:I even tried with automation.versions.source=remote and also automation.versions.source=local by placing binary, but output remains same.Can someone plz help. Thanks", "username": "Murtaza_Vohra" }, { "code": "", "text": "Looks to be compatibility issues\nYour mongodb version should be min 4.4.Thats what error says\nWhat version of Ops manager. are you installing?\nCheck compatibility matrix", "username": "Ramachandra_Tummala" } ]
Installing Mongo Ops Manager
2023-04-05T07:58:16.320Z
Installing Mongo Ops Manager
869
null
[ "charts" ]
[ { "code": "", "text": "Hello, we’ve a mongo DB instance running on an own physical server and I’d like to find a free tool where could be visualized the data and created few dashboards.I read about Atlas Charts, but they seem to be available only for cloud instance of mongo DB. Is there way how to use it for DB which is not in cloud and not running on a public IP?What are other options where I can create dashboards and charts for the data I have in mongo DB for free and share them in intranet?Thanks!\nPetr", "username": "Petr_Kovar" }, { "code": "", "text": "Hello @Petr_Kovar,Welcome to the MongoDB community forum!While there are several tools available to visualise data from MongoDB, such as PowerBI, Tableau, and SAP Analytics Cloud and many more, please note that these tools are not MongoDB products. Therefore, we cannot guarantee their accuracy and support.MongoDB provides its own visualisation tool called Charts, which is available for data deployed on Atlas. If you are interested in using Charts, you can migrate your local database to Atlas using the instructions provided in this link: https://www.mongodb.com/docs/atlas/import/#migrate-or-import-data.Please feel free to ask any further questions you may have.Best regards,\nAasawari", "username": "Aasawari" }, { "code": "", "text": "Hi Aasawari, thanks for your reply.\nSo is there way to setup the Charts locally on my server, or the DB has to be migrated to the cloud? I have on the server Java app producing new data for the mongo DB.\nAre any of the other tools capable of running on a server and not in a cloud?\nThanks,\nPetr", "username": "Petr_Kovar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Looking for how to visualize mongodb data
2023-04-03T15:10:05.174Z
Looking for how to visualize mongodb data
918
null
[ "sharding" ]
[ { "code": "", "text": "We have a Sharded cluster in production setup already. while running COMPACT command to disk reclaim in mogos is not working so we have tried in Sharded cluster instance to login with connection string getting error . Could anyone please help to fix. I am new to Mongodbwhile login using connection stringmongo mongodb://user:user@123’@ip-192-168-133-183.ec2.internal:27017,ip-192-168-145-243.ec2.internal:27017,ip-192-168-222-182.ec2.internal:27017/admin?relicaSet=shardreplica01Getting below error:\nError: Authentication failed. :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "username": "Nino_I" }, { "code": "", "text": "Is your connect string correct?\nThere are typos\nYou have “@” occuring twice and replica is typed as relica", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for your response.I have updated MongoDB connection string like below:\nmongodb://user:‘password’@ip-192-168-133-183.ec2.internal:27017,ip-192-168-145-243.ec2.internal:27017,ip-192-168-222-182.ec2.internal:27017/admin?replicaSet=shardreplica01&authSource=adminbut still I am getting below error. I have googled but not much idea. Please guide me to fix and let me know if you need addition config informations.Error: can’t connect to new replica set master [ip-192-168-133-183.ec2.internal:27017], err: AuthenticationFailed: Authentication failed. :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "username": "Nino_I" }, { "code": "", "text": "Are you using correct userid/pwd?\nCan you connect to your replica nodes using the same id&pwd\nYou are authenticating against admin db and connecting to admin db\nIs that user have admin private or a app user?\nDid this connect string work before?\nFrom where you got this?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Yes, We are using the Mongos user ID and PWD.\nHow do I connect replica nodes ?\nI am not sure. The MongoDB sharded cluster already formed . While I am trying to run the COMPACT query to reclaim the disk space through mongos shell facing the error. So Google about the issue then I got the idea to run the command through Mongod shell. When tried to connect getting the authentication issue.So consider me as a newbie MongoDB. Please guide me", "username": "Nino_I" }, { "code": "", "text": "Please check this link and refer to mongo documentation for Compact and connect strings for standalone,replica etc\nCompact is a maintenance operation.Do not run any compact without understanding the implicationsFrom your connect string remove replicaset and other nodes.\nIt will put you directly to the node you passed\nBe clear about data bearing replica nodes and mongos", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Sure. I will remove replicaset,nodes and let me try", "username": "Nino_I" }, { "code": "", "text": "I have changed the replicaset and other nodes from connection string. Please find the connection string below.mongo mongodb://user:[email protected]:27017/db_name?authSource=adminBut still I am getting authentication error:\nError: Authentication failed. 
:\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "username": "Nino_I" }, { "code": "mongouse myDatabasedb.auth(\"username\",\"password\")", "text": "I have tried , In command line I just wrote mongo and use myDatabase then db.auth(\"username\",\"password\")but getting errorError: Authentication failed.\n0", "username": "Nino_I" }, { "code": "", "text": "Does this user exist?\nIn which db this user was created?\nYou have to authenticate in that db\nTry mongo --host IP --port port_num -user -pwd --authenticationDatabase db_name instead of connect string you are trying\nCheck documentation or our forum threads for exact syntax", "username": "Ramachandra_Tummala" }, { "code": "", "text": "The users exist in mongos shell. but when I login sharded primary/secondary node getting bellow error.\n\nimage1895×196 14.5 KB\n\n“uncaught exception: Error: not authorized on admin to execute command { usersInfo: 1.0, lsid: { id: UUID(“8c86070e-17f1-417c-9219-6e194f0d8c15”) }, $clusterTime: { clusterTime: Timestamp(1680605390, 1), signature: { hash: BinData(0, 1BA96384824611713539DD67BE1B9F88757608DF), keyId: 7158478101695954964 } }, $db: “admin” } :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDB.prototype.getUsers@src/mongo/shell/db.js:1659:15\n@(shell):1:1”I can able to authenticate in mongos shell but I am getting error while tried to login mongodb sharded nodes like primary/ secondary", "username": "Nino_I" }, { "code": "", "text": "It means user does not have privileges on admin db\nDoes other commands like show dbs,show users work?\nWhat privs this user has on mongos?", "username": "Ramachandra_Tummala" }, { "code": "mongosmongosmongos", "text": "Hi @Nino_I ,As mentioned from the documentation, you Need to create user in the primary of shard to perform operation directly on the shard.These shard local users are completely independent from the users added to the sharded cluster via mongos . Shard local users are local to the shard and are inaccessible by mongos .Direct connections to a shard should only be for shard-specific maintenance and configuration. In general, clients should connect to the sharded cluster through the mongos BR", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Hi @ Fabio_RamohitajHow do we run the COMPACT command. I am getting error while running COMPACT in mongosI dont have an idea sharded mongodb cluster. So please guide me where we can run the compact query", "username": "Nino_I" }, { "code": "", "text": "Hi @Nino_I ,\nhere is all the information you need:BR", "username": "Fabio_Ramohitaj" }, { "code": "\"_id\" : \"admin.fullintelAdmin\",\n \"userId\" : UUID(\"sdgdsgadsgadgaag\"),\n \"user\" : \"User\",\n \"db\" : \"admin\",\n \"roles\" : [\n {\n \"role\" : \"dbAdmin\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"myCustomCompactRole\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"dbAdminAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"read\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"clusterAdmin\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"userAdminAnyDatabase\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"root\",\n \"db\" : \"admin\"\n },\n {\n \"role\" : \"readWriteAnyDatabase\",\n \"db\" : \"admin\"\n }\n ],\n", "text": "I think we have a privileges on admin db . I have listed the details below", "username": "Nino_I" }, { "code": "", "text": "Thanks Fabio_Ramohitaj . 
Let me try it and update.", "username": "Nino_I" }, { "code": "db.runCommand({'compact': 'collection_name'})\n{\n \"ok\" : 0,\n \"errmsg\" : \"compact not allowed through mongos\",\n \"code\" : 115,\n \"codeName\" : \"CommandNotSupported\",\n \"operationTime\" : Timestamp(1680621019, 1),\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1680621019, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"X/3G4bRhzIt7SYVaA4JNN++nCEI=\"),\n \"keyId\" : NumberLong(\"7158478101695954964\")\n }\n }\n}\n", "text": "I have followed the documentation and gave the privileges to the user and role, but I am still getting the same error when running from the router node (mongos).\nI am not able to log in to a sharded cluster node to run the compact query.", "username": "Nino_I" }, { "code": "uncaught exception: Error: couldn't add user: not authorized on admin to execute command\n", "text": "We have tried on the sharded cluster and are getting the below error:", "username": "Nino_I" }, { "code": "compactmongodcompactcompactmongos", "text": "Hi @Nino_I ,\nas mentioned in the documentation: compact only applies to mongod instances. In a sharded environment, run compact on each shard separately as a maintenance operation. You cannot issue compact against a mongos instance.\nSo you need to create the correct user on each shard, and then you can apply that operation.\nBR", "username": "Fabio_Ramohitaj" } ]
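Pulling the resolution together: connect mongosh directly to each shard's primary (not mongos), create a shard-local user there if none exists, and then run compact on each member. A hedged sketch; host, credentials and names are placeholders, and compact should only be run after reading the maintenance caveats in the documentation linked above:

```javascript
// mongosh "mongodb://<shard-member-host>:27017/admin"   (a shard member, not mongos)
use admin
db.createUser({
  user: "maint",
  pwd: "***",
  roles: [{ role: "dbAdmin", db: "db_name" }] // dbAdmin includes the compact action
})

// Reconnect as "maint", then on each member of the shard:
use db_name
db.runCommand({ compact: "collection_name" })
```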
Sharded cluster instance (Node) getting authentication error
2023-03-30T14:20:04.710Z
Sharded cluster instance (Node) getting authentication error
2,013
null
[]
[ { "code": "", "text": "Suppose I have a table with roles and an item table. Only the role id is stored in the user datа. To get the user’s items, I first need to get the role from the roles table and then, taking into account this role, get access to the item.Is it possible to create Rule Expressions taking into account data from an additional table?", "username": "Vlad_Ustenko" }, { "code": "", "text": "Hi @Vlad_Ustenko and welcome to the MongoDB community forum!!To get the user’s items, I first need to get the role from the roles table and then, taking into account this role, get access to the item.If I understand correctly, you have two different collections, one which contains the role ID and their item information in a different collection trying to establish a one to many relationship.\nCould you help me understand as why do you need to store the role id in a different collection.This would nor only makes the query faster, but the involved would also be simple. You can visit the documentation on data modelling for detailed understanding.To help you better with the appropriate solution, can you please share the below details:Is it possible to create Rule Expressions taking into account data from an additional table?Regards\nAasawari", "username": "Aasawari" } ]
Rule expressions with additional table
2023-04-03T17:42:31.729Z
Rule expressions with additional table
359
null
[ "node-js", "crud" ]
[ { "code": "db.user.insertOne({\n email: \"[email protected]\",\n name: \"First Last\",\n age: 55,\nanalytics: [ { date: \"2023-03-25\", country: { UK: 1 }, browser: {}, os: {} },\n { date: \"2023-03-26\", country: {}, browser: {}, os: {} },\n { date: \"2023-03-27\", country: {}, browser: {}, os: {} },\n { date: \"2023-03-28\", country: {}, browser: {}, os: {} },\n { date: \"2023-03-29\", country: { UK: 1, AU: 5 }, browser: {}, os: {} },]}\n);\nfindOneAndUpdate{ date: \"2023-03-25\", country: { UK: 1 }, browser: { Opera: 1 }, os: { Windows: 1 } }Updating the path 'analytics.$[element].country.UK' would create a conflict at 'analytics'db.user.findOneAndUpdate(\n { email: \"[email protected]\" },\n {\n $set: { \"analytics.$[element].date\": \"2023-03-31\", \"analytics.$[element].country\": {\"UK\": 1}, \"analytics.$[element].browser\": {\"Opera\": 1}, \"analytics.$[element].os\" : {\"Windows\": 1} },\n $inc: { \"analytics.$[element].country.UK\": 1, \"analytics.$[element].browser.Opera\" : 1, \"analytics.$[element].os.Windows\" : 1 }\n },\n {\n arrayFilters: [{ \"element.date\": \"2023-03-30\" }],\n upsert: true,\n returnOriginal: false\n }\n)\nfindOneAndUpdate", "text": "I’m using MongoDB with NodeJS. I have a user database like the one below:Here is what I want to do:\nI want to do a findOneAndUpdate query. If today’s date exists in the analytics array, it will update the country, browser, and os field data. If the date does not exist inside the analytics array then it will create a new object like { date: \"2023-03-25\", country: { UK: 1 }, browser: { Opera: 1 }, os: { Windows: 1 } }I tried doing queries like these, but it shows Updating the path 'analytics.$[element].country.UK' would create a conflict at 'analytics'Is there any way to conditionally update this in a single findOneAndUpdate query or using other methods?", "username": "Ahsan" }, { "code": "import mongodb from \"mongodb\";\n...\n try {\n await client.connect();\n const collection = client.db('test').collection('sample');\n\n // Test case 1: Update existing document\n const result = await collection.findOneAndUpdate(\n {\n \"email\": \"[email protected]\",\n \"analytics.date\": \"2023-03-29\"\n },\n {\n $set: {\n \"analytics.$[element].country.UK\": 1,\n \"analytics.$[element].browser.Opera\": 1,\n \"analytics.$[element].os.Windows\": 1\n }\n },\n {\n arrayFilters: [{ \"element.date\": \"2023-03-29\" }],\n }\n );\n console.log(\"Document updated:\", result.value);\n\n // Test case 2: Insert a new document\n const result2 = await collection.findOneAndUpdate(\n {\n \"email\": \"[email protected]\",\n \"analytics.date\": \"2023-03-30\"\n },\n {\n $set: {\n \"analytics.$[element].country.UK\": 1,\n \"analytics.$[element].browser.Opera\": 1,\n \"analytics.$[element].os.Windows\": 1\n }\n },\n {\n arrayFilters: [{ \"element.date\": \"2023-03-30\" }],\n }\n );\n\n if (!result2.value) {\n const result3 = await collection.findOneAndUpdate(\n {\n \"email\": \"[email protected]\"\n },\n {\n $push: {\n \"analytics\": {\n \"date\": \"2023-03-30\",\n \"country\": {\n \"UK\": 1\n },\n \"browser\": {\n \"Opera\": 1\n },\n \"os\": {\n \"Windows\": 1\n }\n }\n }\n },\n {\n upsert: true,\n }\n );\n console.log(\"Document inserted:\", result3.value);\n }\n } catch (err) {\n console.error(err);\n } finally {\n await client.close();\n }\n}\nfindOneAndUpdateuserarrayFiltersupsert", "text": "Hi @Ahsan,Welcome to the MongoDB Community forums Based on your shared sample documents and the update condition I’ve written an update query. 
Sharing the code snippet for your reference:\n\nThe code uses the findOneAndUpdate method to update or insert a document in the user collection based on the email and the date inside the analytics array. If a matching element exists, the analytics object is updated with new data using the $set operator and the arrayFilters option. If none exists, a new analytics element is appended with the $push operator and the upsert option set to true.\n\nPlease note this is just example solution code, and you should adapt it to your requirements. It is recommended to evaluate the performance and test this with expected workloads based on your specific use case.\n\nI hope it helps!\nBest,\nKushagra", "username": "Kushagra_Kesav" } ]
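As a side note on the "single query" part of the question: since MongoDB 4.2, updates can take an aggregation pipeline, so the "bump counters on the matching element, otherwise append a new one" branch can be expressed in one round trip. The sketch below is an alternative under stated assumptions, not the accepted solution above: it hard-codes the UK/Opera/Windows keys for illustration and assumes the analytics array already exists on the document (as in the sample):

// Pipeline-style update: one round trip, no arrayFilters needed.
const today = "2023-03-30";
await collection.updateOne(
  { email: "[email protected]" },
  [
    {
      $set: {
        analytics: {
          $cond: [
            // Is there already an element for today's date?
            { $in: [today, "$analytics.date"] },
            // Yes: increment the counters on the matching element.
            {
              $map: {
                input: "$analytics",
                as: "a",
                in: {
                  $cond: [
                    { $eq: ["$$a.date", today] },
                    {
                      $mergeObjects: [
                        "$$a",
                        {
                          country: { $mergeObjects: ["$$a.country", { UK: { $add: [{ $ifNull: ["$$a.country.UK", 0] }, 1] } }] },
                          browser: { $mergeObjects: ["$$a.browser", { Opera: { $add: [{ $ifNull: ["$$a.browser.Opera", 0] }, 1] } }] },
                          os: { $mergeObjects: ["$$a.os", { Windows: { $add: [{ $ifNull: ["$$a.os.Windows", 0] }, 1] } }] }
                        }
                      ]
                    },
                    "$$a"
                  ]
                }
              }
            },
            // No: append a fresh element for today.
            { $concatArrays: ["$analytics", [{ date: today, country: { UK: 1 }, browser: { Opera: 1 }, os: { Windows: 1 } }]] }
          ]
        }
      }
    }
  ]
);

Whether this is preferable to the two-call approach above depends on your workload: the arrayFilters version is easier to read, while the pipeline version avoids a second round trip.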
Conditional data update shows error
2023-03-30T19:52:24.160Z
Conditional data update shows error
551
https://www.mongodb.com/…f_2_1024x434.png
[]
[ { "code": "", "text": "Hello,I’m new to MongoDB and have begun working through the introductory coursework and have run into an issue won the third lesson’s lab.I have followed the instructions and when I click ‘check’ I receive the notice:Incorrect solution 1/1The users collection document count was incorrect. Please try again.Maybe I’m overlooking something, I have included a screenshot of my current database. Any advice as to what is holding me up would be greatly appreciated. Thank you.\nScreenshot 2023-03-08 1221021232×523 25.7 KB\n", "username": "Jimmy_Quadros" }, { "code": "", "text": "I am having the exact same issue…", "username": "Brendan_Kersey" }, { "code": "", "text": "I was able to get around this issue by ensure that i was in the correct Project ( needs to be mdb_edu)", "username": "Brendan_Kersey" }, { "code": "", "text": "Solved. The issue is the lab was opening ‘Project 0’ instead of MDB_EDU.I followed the instructions to the letter even creating another new account and setting up sample data in the new account (when this was already done in the very first class, then into the wrong project then finally into the correct project. Given how long it takes to insert this data it’s a pretty huge waste of time). Overlooking the project dropdown (no actual instruction as to what projects are as well) was poorly articulated in the lab and I must say I am thus far really frustrated with the classes.Here’s hoping it gets better as it goes along.", "username": "Jimmy_Quadros" }, { "code": "", "text": "2 posts were split to a new topic: Lesson 3: Lab 1, Cannot submit", "username": "Kushagra_Kesav" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Lesson 3: Lab 1, cannot submit *SOLVED*
2023-03-08T17:22:57.502Z
Lesson 3: Lab 1, cannot submit *SOLVED*
1,584