image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
null | [
"node-js",
"react-native",
"react-js",
"mobile-bytes"
] | [
{
"code": "@realm/react@realm/reactrealm-js@realm/react@realm/reactrealmTask_id_idclass Task extends Realm.Object {\n _id!: Realm.BSON.ObjectId;\n description!: string;\n isComplete!: boolean;\n createdAt!: Date;\n\n static generate(description: string) {\n return {\n _id: new Realm.BSON.ObjectId(),\n description,\n createdAt: new Date(),\n };\n }\n\n static schema = {\n name: 'Task',\n primaryKey: '_id',\n properties: {\n _id: 'objectId',\n description: 'string',\n isComplete: { type: 'bool', default: false },\n createdAt: 'date'\n },\n };\n}\n\ncreateRealmContextRealmProvideruseRealmuseQueryuseObjectRealmProvidercreateRealmContextRealmProviderconst { RealmProvider, useRealm, useQuery } = createRealmContext({ schema: [Task] })\n\nexport default function AppWrapper() {\n return (\n <RealmProvider><TaskApp /></RealmProvider>\n )\n}\nuseRealmuseQueryTextInputTaskTaskFlatListuseCallbackStylesheet.createfunction TaskApp() {\n const realm = useRealm();\n const tasks = useQuery(Task);\n const [newDescription, setNewDescription] = useState(\"\")\n\n return (\n <SafeAreaView>\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <TextInput\n value={newDescription}\n placeholder=\"Enter new task description\"\n onChangeText={setNewDescription}\n />\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.create(\"Task\", Task.generate(newDescription));\n });\n setNewDescription(\"\")\n }}><Text>➕</Text></Pressable>\n </View>\n <FlatList data={tasks.sorted(\"createdAt\")} keyExtractor={(item) => item._id.toHexString()} renderItem={({ item }) => {\n return (\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <Pressable\n onPress={() =>\n realm.write(() => {\n item.isComplete = !item.isComplete\n })\n }><Text>{item.isComplete ? 
\"✅\" : \"☑️\"}</Text></Pressable>\n <Text style={{ paddingHorizontal: 10 }} >{item.description}</Text>\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.delete(item)\n })\n }} ><Text>{\"🗑️\"}</Text></Pressable>\n </View>\n );\n }} ></FlatList>\n </SafeAreaView >\n );\n}\n\nimport React, { useState } from \"react\";\nimport { SafeAreaView, View, Text, TextInput, FlatList, Pressable } from \"react-native\";\nimport { Realm, createRealmContext } from '@realm/react'\nclass Task extends Realm.Object {\n _id!: Realm.BSON.ObjectId;\n description!: string;\n isComplete!: boolean;\n createdAt!: Date;\n\n static generate(description: string) {\n return {\n _id: new Realm.BSON.ObjectId(),\n description,\n createdAt: new Date(),\n };\n }\n\n static schema = {\n name: 'Task',\n primaryKey: '_id',\n properties: {\n _id: 'objectId',\n\ndescription: 'string',\n isComplete: { type: 'bool', default: false },\n createdAt: 'date'\n },\n };\n}\n\nconst { RealmProvider, useRealm, useQuery } = createRealmContext({ schema: [Task] })\n\nexport default function AppWrapper() {\n return (\n <RealmProvider><TaskApp /></RealmProvider>\n )\n}\n\nfunction TaskApp() {\n const realm = useRealm();\n const tasks = useQuery(Task);\n const [newDescription, setNewDescription] = useState(\"\")\n\n return (\n <SafeAreaView>\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <TextInput\n value={newDescription}\n placeholder=\"Enter new task description\"\n onChangeText={setNewDescription}\n />\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.create(\"Task\", Task.generate(newDescription));\n });\n setNewDescription(\"\")\n }}><Text>➕</Text></Pressable>\n </View>\n <FlatList data={tasks.sorted(\"createdAt\")} keyExtractor={(item) => item._id.toHexString()} renderItem={({ item }) => {\n return (\n <View style={{ flexDirection: 'row', justifyContent: 'center', margin: 10 }}>\n <Pressable\nonPress={() =>\n realm.write(() => {\n item.isComplete = !item.isComplete\n })\n }><Text>{item.isComplete ? \"✅\" : \"☑️\"}</Text></Pressable>\n <Text style={{ paddingHorizontal: 10 }} >{item.description}</Text>\n <Pressable\n onPress={() => {\n realm.write(() => {\n realm.delete(item)\n })\n }} ><Text>{\"🗑️\"}</Text></Pressable>\n </View>\n );\n }} ></FlatList>\n </SafeAreaView >\n );\n}\n\n@realm/react@realm/reactrealm-jsreact-native",
"text": "Greetings Realm folks,My name is Andrew Meyer, one of the engineers at realm-js, and I am making a guest post today in @henna.s Realm Byte column to help React Native users get started with our new library @realm/react. @realm/react is a module built on top of realm-js with the specific purpose of making it easier to implement Realm in React.I wanted to provide a quick example for React Native developers, to get an idea of how easy it is to get started using Realm using @realm/react. Therefore, I made an 80 line example of how to create a simple task manager using the library.You only need to have @realm/react and realm installed in your project and you will be good to go. If you aren’t using TypeScript, simply modify the Task class to not use types.Here is a breakdown of the code.Setting up and thinking about your model is the first step in getting any application off the ground. For our simple app, we are defining a Task model with a description, completion flag, and creation timestamp. It also contains a unique _id, which is the primary key of the Task model. It’s good to define a primary key, in case you want to reference a single Task in your code later on.We have also added a generate method. This is a convenience function that we will use to create new tasks. It automatically generates a unique _id, sets the creation timestamp, and sets the description provided by its argument.The schema property is also required for Realm. This defines the structure of the model and tells Realm what to do with the data. Follow Realm Object Model for more information.Here is the code for setting up your model class:The next part of the code is a necessary part in setting up your application to interact with Realm using hooks. In this code, we are calling createRealmContext which will return an object containing a RealmProvider and a set of hooks (useRealm, useQuery and useObject).The RealmProvider must wrap your application in order to make use of the hooks. When the RealmProvider is rendered, it will use the configuration provided to the createRealmContext to open the Realm when it is rendered. Alternatively, you can set the configuration through props on RealmProvider.Here is the code for setting up your application wrapper:Now that you have an idea of how to set everything up, let’s move on to the application. You can see right away that two of the hooks we generated are being used. useRealm is being used to perform any write operations, and useQuery is used to access all the Tasks that have been created.The application is providing a TextInput that will be used to generate a new Task. Once a Task is created, it will be displayed in the FlatList below. That timestamp we set up earlier is used to keep the list sorted so that the newest task is always at the top.In order to keep this code short, we skipped a few best practices. All the methods provided to the application should ideally be set to variables and wrapped in a useCallback hook, so that they are not redefinined on every re-render. We are also using inline styles to spare a few more lines of code. One would normally generate a stylesheet using Stylesheet.create.Here is the code for the application component:Here is the example in full, including all the required import statements.For more details on how to use @realm/react checkout our README and our documentation. 
If you are just getting started with React Native, you can also use our Expo templates to get started with minimal effort.And with that being said, what do you think about @realm/react? Any other examples you would like to see? We are working hard to make it easy to integrate realm-js with react-native, so let us know if you have any questions or feature requests!Happy Realming!",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Wow, great development! I was waiting for a wrapper like this. To test it out it created a new Expo project based on the javascript Expo template you provided (not the TypeScript). The todo app runs flawlessly but once I enable the sync then it throws me a partitionValue must be of type ‘string’, ‘number’, ‘objectId’, or ‘null’ error. I tried several things to resolve it especially using the example code you provided for Native React. I got stuck. Any ideas what I do wrong here?",
"username": "Joost_Hazelzet"
},
{
"code": "return (\n <RealmProvider sync={{user: loggedInUserObject, partitionValue: \"someValue\"}} ><TaskApp /></RealmProvider>\n )\n",
"text": "Hi @Joost_Hazelzet, the partitionValue must be setup in Atlas.\n\nimage1420×786 73 KB\n\nIf you want to enable Sync, you will need to set a partitionValue. This can be arbitrary, but it is usually an ID that the data can be filtered on (as in this example, the userID).\nYou can dynamically set the partitionValue in the RealmProvider component:The value should mirror the same type that you have setup in Atlas.\nGlad you are enjoying the library! Let us know if this helps ",
"username": "Andrew_Meyer"
},
{
"code": "",
"text": "Hey Andrew, yes it was the missing partitionValue in the RealmProvider and and now the Todo app is running like a charm including the tasks synched to the Atlas database. Thank you. My code is available as Github repository https://github.com/JoostHazelzet/expoRealm-js in case somebody wants to use it.",
"username": "Joost_Hazelzet"
},
{
"code": "",
"text": "Can we setup multiple partition value in a single object?",
"username": "Zubair_Rajput"
},
{
"code": "",
"text": "Hi Andrew,The app I’m trying to build currently needs to keep track of ordering for a set of items. My data is structured such that a Group has a Realm.List. I am using the useObject() hook to get the Group, and then I’m trying to render the Group.items in a React Native FlatList. However, I can’t use the Realm.List type as an argument to the data prop of FlatList. I have tried using the useQuery() hook to get that same list of Items, but I need to preserve ordering in the Group, so what I really need is access to the Group so I can add/remove items from Group.items.Do you know of a way I can render a Realm.List?",
"username": "max_you"
},
{
"code": "",
"text": "Ah nevermind, I was able to get it working with react native’s VirtualizedList instead of using the FlatList which is more restrictive",
"username": "max_you"
},
{
"code": "RealmProvidercreateRealmContext",
"text": "You can create multiple RealmProviders with createRealmContext and have each using a different partition value. You will just have to export and use the hooks related to said partition.",
"username": "Andrew_Meyer"
},
{
"code": "Realm.ListFlatList",
"text": "I have written tests to do exactly what you have described. What exactly is the problem you are experiencing when trying to use a Realm.List in a FlatList? Feel free open an issue on github and provide some more information. I want to make sure that works ",
"username": "Andrew_Meyer"
},
{
"code": "keyExtractoritem: Item & Realm.ObjectItem",
"text": "I tried again and it works using a FlatList. I had just incorrectly specified an incorrect type for my parameter in my keyExtractor prop. I had specified the type of the variable as item: Item & Realm.Object instead of just Item. This was just a mistake on my part while I’m getting familiar with using Realm still! Thank you for the help!",
"username": "max_you"
},
{
"code": "",
"text": "Hi @Andrew_Meyer,This is a great example. Do you have something similar for implementing authentication? The example from Mongo talks uses React and I’m having trouble adapting it. A React Native example would be much appreciated.I’ll continue to search the forums to see if such an example already exists.Thanks,\nJosh",
"username": "Joshua_Barnard"
},
{
"code": "",
"text": "Thanks! Good information",
"username": "Ikbal_Sk"
},
{
"code": "",
"text": "The missing part in this tutorial is how do you access the Realm instance outside of the components (unless i’m missing something not everything is a component in a React app).\nWhen I tried the useRealm / useQuery / … hooks outside of components, I got the “Hooks can only be called inside of the body of a function component”.\nAnd if I try to create a new Realm() outside, either I get errors because the Realm is already opened with another schema OR the realm instance close randomly (Garbage Collected?).So I’m really curious to know how we are supposed to handle this.",
"username": "Julien_Curro"
},
{
"code": "RealmProvidercloseOnUnmountRealmProvidercloseOnUnmountRealmProvider",
"text": "@Julien_Curro Thanks for reaching out. This is doable. We have added a flag to the RealmProvider in version 0.6.0 called closeOnUnmount which can be set to false to stop the Realm from closing if you are trying to do anything with the Realm instance outside of the component. Without setting this flag, as soon as the RealmProvider goes out of scope, the realm instance instantiated therein will be closed.\nIt’s important to note, that with realm instances that are instantiated and point to the same realm, when one of them is closed, they all close. We will address this in a future version of the Realm JS SDK, but for now, the closeOnUnmount flag can be used to workaround this.\nAnother note, any of the hooks will only work within components rendered as children of the RealmProvider. Anything done with a realm instance outside of this provider must be done without hooks. This includes registering your own change listeners if you want to react to changes on data, which the hooks handle automatically.\nLet us know if you have any other issues ",
"username": "Andrew_Meyer"
},
{
"code": "closeOnUnmountimport { Realm } from '@realm/react';\nimport { realmConfig } from \"./schema\";\n\nclass RealmInstance {\n private static _instance: RealmInstance;\n public realm!: Realm;\n private constructor() {}\n\n public static getInstance(): RealmInstance { \n if (!RealmInstance._instance) {\n RealmInstance._instance = new RealmInstance();\n RealmInstance._instance.realm = new Realm(realmConfig);\n console.log('DB PATH:', RealmInstance._instance.realm.path)\n }\n if (RealmInstance._instance.realm.isClosed) {\n RealmInstance._instance.realm = new Realm(realmConfig);\n }\n\n return RealmInstance._instance;\n }\n}\n\nexport default RealmInstance.getInstance().realm;\n",
"text": "I am testing the closeOnUnmount prop, but I don’t know how I am supposed to get the 2nd non-context related realm instance.Am I supposed to use the realmRef prop ? Or should I justBefore your post I was trying with an ugly singleton like this :Edit: is there a discord somewhere to talk about Realm ? There’s a mongodb server, but nobody seems to know what Realm is ",
"username": "Julien_Curro"
},
{
"code": "realmConfig<RealmProvider {...realmConfig} closeOnUnmount={false}>new RealmRealm.opensync",
"text": "@Julien_Curro The singleton example you posted should work for this purpose. The realmConfig you are using here is spreadable onto <RealmProvider {...realmConfig} closeOnUnmount={false}>. If you open a Realm with the same config, you get a shared instance of the same Realm.\nThe only change I would suggest is to change new Realm to Realm.open. Realm.open is async and more suited for a Realm configured with sync settings.At the moment we do not have a discord, but you are not the first to ask about this. We are currently trying to merge Realm even closer the MongoDB, so hopefully in the near future the discord is more knowledgable on these topics.",
"username": "Andrew_Meyer"
},
{
"code": "closeOnUnmountimport { Realm } from '@realm/react';\nimport { realmConfig } from \"./schema\";\n\nconst RealmInstance = new Realm(realmConfig)\nexport default RealmInstance\n",
"text": "closeOnUnmount seems to be working and I finally simplified the singleton, since in TS I can export the instance instead of a class, it’s easier like that :Thanks for your time, and would be happy to ask other things to you on any discord you want ",
"username": "Julien_Curro"
},
{
"code": "",
"text": "Hello @Julien_Curro ,Thank you for raising your questions and thanks @Andrew_Meyer for taking the time to help our fellow member @Julien_Curro , please feel free to ask questions and share your solutions in the community forum in related categories so we have a knowledge house of information everyone can benefit from We as a community appreciate your contributions Happy Coding!Cheers, \nHenna\nCommunity Manager, MongoDB",
"username": "henna.s"
},
{
"code": "<RealmProvider \nsync={{\n flexible: true,\n onError: (_, error) => {\n console.log(error);\n },\n }}\n>\n<SubscriptionProvider>\n<TaskApp />\n<SubscriptionProvider>\n</RealmProvider>\n",
"text": "Hey @henna.s , @Andrew_MeyerI am using realm with device sync. Can I set up a SubscriptionProvider instead of doing the subscription directly in the screens?",
"username": "Siso_Ngqolosi"
},
{
"code": "",
"text": "@Siso_Ngqolosi This is allowed and looks like a good setup. Subscriptions are globally defined, so if you apply them in any section of you app it will effect all components using the Realm.\nLet us know if you have any issues!",
"username": "Andrew_Meyer"
}
] | Mobile Bytes #9 : Realm React for React Native | 2022-03-31T15:53:59.789Z | Mobile Bytes #9 : Realm React for React Native | 7,109 |
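The article above mentions, but does not show, the useCallback best practice it skipped for brevity. A sketch of what that refactor could look like, assuming the same Task model and useRealm hook from the thread; the hook name useTaskActions is illustrative, not part of @realm/react:

```js
// Hypothetical refactor of the thread's TaskApp handlers. Assumes the same
// `Task` class and a `realm` instance obtained from the thread's useRealm().
import { useCallback } from "react";

function useTaskActions(realm) {
  // useCallback keeps the function identity stable across re-renders, so
  // child components (e.g. Pressable) are not re-rendered needlessly.
  const addTask = useCallback(
    (description) => {
      realm.write(() => {
        realm.create("Task", Task.generate(description));
      });
    },
    [realm]
  );

  const toggleTask = useCallback(
    (task) => {
      realm.write(() => {
        task.isComplete = !task.isComplete;
      });
    },
    [realm]
  );

  return { addTask, toggleTask };
}
```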
null | [] | [
{
"code": "",
"text": "× mongod.service - MongoDB Database Server\nLoaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\nActive: failed (Result: exit-code) since Wed 2023-10-18 10:42:27 UTC; 20min ago\nDocs: https://docs.mongodb.org/manual\nMain PID: 6847 (code=exited, status=2)\nCPU: 32msOct 18 10:42:27 ip-CENSORED systemd[1]: Started MongoDB Database Server.\nOct 18 10:42:27 ip-CENSORED mongod[6847]: Unrecognized option: storage.journal.enabled\nOct 18 10:42:27 ip-CENSORED mongod[6847]: try ‘/usr/bin/mongod --help’ for more information\nOct 18 10:42:27 ip-CENSORED systemd[1]: mongod.service: Main process exited, code=exited, stat>\nOct 18 10:42:27 ip-CENSORED systemd[1]: mongod.service: Failed with result ‘exit-code’.Someone could help me?",
"username": "MrK"
},
{
"code": "",
"text": "Oct 18 10:42:27 ip-CENSORED mongod[6847]: Unrecognized option: storage.journal.enabled\nOct 18 10:42:27 ip-CENSORED mongod[6847]: try ‘/usr/bin/mongod --help’ for more informationYou are using an unrecognized option.",
"username": "Kobe_W"
}
] | I'm having a error with my MongoDB | 2023-10-18T12:10:25.378Z | I’m having a error with my MongoDB | 202 |
null | [
"aggregation",
"sharding"
] | [
{
"code": "db.getCollection(\"myData\").aggregate( [ { $indexStats: { } } ] )accesses.opsaccesses.ops",
"text": "Given a sharded cluster with multiple indexes defined I would like to retrieve a metric for the overall usage for every index.Calling db.getCollection(\"myData\").aggregate( [ { $indexStats: { } } ] ) returns a JSON with the accesses.ops for a single shard.Is there any chance to either\na) select the shard to get the metric from\nb) get the metric from all shards at once in order to sum up the accesses.ops for every index.The official documentation says nothing about https://www.mongodb.com/docs/manual/reference/operator/aggregation/indexStats/#mongodb-pipeline-pipe.-indexStatsThanks,\nJens",
"username": "Jens_Lippmann"
},
{
"code": "",
"text": "a) select the shard to get the metric fromTry running the command directly on a shard instead of via mongos ?",
"username": "Kobe_W"
}
] | Index usage metric for sharded cluster | 2023-10-18T09:31:17.168Z | Index usage metric for sharded cluster | 183 |
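One detail that makes option (b) workable without connecting to each shard: when run through mongos on a sharded collection, $indexStats emits one document per index per shard, each tagged with a shard field, so the totals can be summed in the same pipeline. A sketch using the thread's collection name:

```js
// Run through mongos. Each $indexStats document carries the shard name,
// so accesses.ops can be summed per index across all shards.
db.getCollection("myData").aggregate([
  { $indexStats: {} },
  {
    $group: {
      _id: "$name",                          // index name
      totalOps: { $sum: "$accesses.ops" },   // summed across shards
      perShard: { $push: { shard: "$shard", ops: "$accesses.ops" } },
    },
  },
  { $sort: { totalOps: -1 } },
]);
```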
null | [
"server",
"containers"
] | [
{
"code": "2023-10-19 10:01:51 {\"t\":{\"$date\":\"2023-10-19T02:01:51.708+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784928, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the TTL monitor\"}\n2023-10-19 10:01:51 {\"t\":{\"$date\":\"2023-10-19T02:01:51.708+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":6278511, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down the Change Stream Expired Pre-images Remover\"}\n2023-10-19 10:01:51 {\"t\":{\"$date\":\"2023-10-19T02:01:51.708+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":4784929, \"ctx\":\"initandlisten\",\"msg\":\"Acquiring the global lock for shutdown\"}\n2023-10-19 10:01:51 {\"t\":{\"$date\":\"2023-10-19T02:01:51.708+00:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4784931, \"ctx\":\"initandlisten\",\"msg\":\"Dropping the scope cache for shutdown\"}\n2023-10-19 10:01:51 {\"t\":{\"$date\":\"2023-10-19T02:01:51.708+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20565, \"ctx\":\"initandlisten\",\"msg\":\"Now exiting\"}\n2023-10-19 10:01:51 {\"t\":{\"$date\":\"2023-10-19T02:01:51.708+00:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23138, \"ctx\":\"initandlisten\",\"msg\":\"Shutting down\",\"attr\":{\"exitCode\":100}}\n2023-10-19 10:07:33 {\"t\":{\"$date\":\"2023-10-19T02:07:33.775+00:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"main\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":21},\"outgoing\":{\"minWireVersion\":6,\"maxWireVersion\":21},\"isInternalClient\":true}}}\n2023-10-19 10:07:33 Traceback (most recent call last):\n2023-10-19 10:07:33 File \"/usr/local/bin/docker-entrypoint.py\", line 637, in <module>\n2023-10-19 10:07:33 subprocess.run(get_final_command_line_args(), check=True)\n2023-10-19 10:07:33 File \"/usr/lib/python3.10/subprocess.py\", line 526, in run\n2023-10-19 10:07:33 raise CalledProcessError(retcode, process.args,\n2023-10-19 10:07:33 subprocess.CalledProcessError: Command '['mongod', '--bind_ip_all']' returned non-zero exit status 100.\n2023-10-19 10:07:57 Warning: File MONGO_INITDB_ROOT_USERNAME_FILE is deprecated. Use MONGODB_INITDB_ROOT_USERNAME_FILE instead.\n2023-10-19 10:07:57 Warning: File MONGO_INITDB_ROOT_PASSWORD_FILE is deprecated. Use MONGODB_INITDB_ROOT_PASSWORD_FILE instead.\n",
"text": "I tried to install MongoDB on Mac using docker. I followed the steps in this link: https://www.mongodb.com/docs/manual/tutorial/install-mongodb-community-with-docker/, but when I ran the MongoDB in docker, I got this error.I installed the latest docker version, which is 24.0.6, build ed223bc. Any Idea how to solve this issue?",
"username": "Chris_Ian_Fiel"
},
{
"code": "",
"text": "",
"username": "Jack_Woehr"
}
] | Installing MongoDB in Docker Issue | 2023-10-19T02:16:32.831Z | Installing MongoDB in Docker Issue | 276 |
null | [] | [
{
"code": "",
"text": "Dear Forum Members,I hope this message finds you well. I am seeking advice and guidance on a data access issue that I am currently facing, and I believe this community’s expertise may be instrumental in finding a solution. I apologize in advance for the length of this post, but I want to provide a comprehensive background on the issue.Background: I am responsible for managing data access and reporting for different branches, each of which represents a distinct company within our organization. These branches have expressed a need for direct access to specific data for use in Power BI, enabling them to create their own custom reports and charts.Challenges: My primary challenge lies in providing these branches with filtered data that corresponds to their specific branch, ensuring data privacy and security. My goal is to empower them to create meaningful reports while maintaining strict access controls.Current Approach: I have attempted to address this challenge by developing a data API that can filter data based on the branch. While this has been successful on the data preparation side, I am struggling to understand how to connect this filtered data source to Power BI.Key Questions:I would greatly appreciate any insights, experiences, or suggestions that forum members can provide regarding this complex issue. Please share any relevant resources or step-by-step guidance, as I am dedicated to finding a secure and efficient solution that empowers our branches to harness the power of Power BI while adhering to strict data access controls.Thank you in advance for your valuable input, and I look forward to engaging in a fruitful discussion within this community.",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "I have not read all details of your post yet but at first glance I would think that one direction to look would be",
"username": "steevej"
},
{
"code": "",
"text": "You define a view with an aggregation pipeline. With a $match stage you make sure only documents from a given branch is visible in a given view.As forAre there best practices or recommended methods for achieving this without compromising data security and integrity?I do not know if it is a best practice but you may define user’s roles that only give read access to the branches views. You may even REDACT some fields to hide confidential information.",
"username": "steevej"
},
{
"code": "",
"text": "No. It will not work. I can’t see any ability grand access for power bi for specific user/private keys etc. Also it look like I have to create same view for each table for each branch. and each time when I create new model I have create for each branch new views.",
"username": "Donis_Rikardo"
}
] | Seeking Guidance on Providing Filtered Data Access to Specific Branches for Power BI Reporting | 2023-10-15T02:25:09.082Z | Seeking Guidance on Providing Filtered Data Access to Specific Branches for Power BI Reporting | 181 |
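A minimal mongosh sketch of the view-per-branch approach suggested in that thread; database, collection, field, and role names are all illustrative:

```js
// One read-only view per branch: $match pins the branch, and an optional
// $unset hides confidential fields from the BI consumer.
db.createView("sales_branchA", "sales", [
  { $match: { branchId: "branchA" } },
  { $unset: ["costPrice", "internalNotes"] },
]);

// A role that can only read that view; grant it to the branch's BI user.
db.getSiblingDB("admin").createRole({
  role: "branchA_reader",
  privileges: [
    {
      resource: { db: "mydb", collection: "sales_branchA" },
      actions: ["find"],
    },
  ],
  roles: [],
});
```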
null | [
"node-js",
"database-tools",
"backup"
] | [
{
"code": "",
"text": "I have dumped my mongo data using a cronjob in nodejs and I see the created gzip file in specified location however when I am trying to restore , I am always getting E11000 errorI am using the below command to restore , please let me know what I am doing wrong\nmongorestore --db=ep-restoreTest --gzip --archive=easyplatter.gzip --noIndexRestore\nIm using this command to restore to db with its name ep=restoreTestError looks like below:2023-10-18T15:35:00.504-0300\tcontinuing through error: E11000 duplicate key error collection: db.products index: id dup key: { _id: ObjectId(‘609ac324a46cb01f53e2e07f’) }I have tried this way as well and Its failing too\nmongorestore --nsInclude=ep-restoreTest.* --gzip --archive=easyplatter.gzip --noIndexRestore\n2023-10-18T15:55:23.455-0300\tpreparing collections to restore from\n2023-10-18T15:55:23.527-0300\t0 document(s) restored successfully. 0 document(s) failed to restore.",
"username": "priyatham_ik"
},
{
"code": "",
"text": "Finally found the solution for anyone facing the similar porblem\nmongorestore --nsFrom=originalDBName.* --nsTo=newRestoreDBName.* --gzip --archive=easyplatter.gzip --noIndexRestore",
"username": "priyatham_ik"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Monogrestore is always failing with E11000 error | 2023-10-18T18:48:22.734Z | Monogrestore is always failing with E11000 error | 176 |
null | [
"node-js",
"next-js"
] | [
{
"code": "",
"text": "I’m starting to use the mongodb driver for next.js. I followed the example of How to Integrate MongoDB Into Your Next.js App | MongoDB but first I found that the example code doesn’t work anymore but I manage to make it work with a few tweaks.Now I want to use it in my own app, so I went and npm install mongodb and everything is working, I can query the database and get the results, but the weird thing is the function isConnected is not working. I get a message saying: “TypeError: client.isConnected is not a function”.I’m new with next.js and mongodb, so I don’t get what is happening, in the demo app with the example the function works without problems, but in my app I get this message.Can somebody help me how to make it work? This is the line with the error:const isConnected = await client.isConnected()Thanks everybody.",
"username": "Alejandro_Chavero"
},
{
"code": "import { connectToDatabase } from '../lib/mongodb'\nconst { client } = await connectToDatabase()\nimport clientPromise from '../lib/mongodb'\nconst client = await clientPromise\nclient.isConnected()import { MongoClient } from 'mongodb'\n\nconst uri = process.env.MONGODB_URI\nconst options = {\n useUnifiedTopology: true,\n useNewUrlParser: true,\n}\n\nlet client\nlet clientPromise\n\nif (!process.env.MONGODB_URI) {\n throw new Error('Please add your Mongo URI to .env.local')\n}\n\nif (process.env.NODE_ENV === 'development') {\n // In development mode, use a global variable so that the value\n // is preserved across module reloads caused by HMR (Hot Module Replacement).\n if (!global._mongoClientPromise) {\n client = new MongoClient(uri, options)\n global._mongoClientPromise = client.connect()\n }\n clientPromise = global._mongoClientPromise\n} else {\n // In production mode, it's best to not use a global variable.\n client = new MongoClient(uri, options)\n clientPromise = client.connect()\n}\n\n// Export a module-scoped MongoClient promise. By doing this in a\n// separate module, the client can be shared across functions.\nexport default clientPromise\n",
"text": "Hey Alejandro -We just pushed a huge update to the next.js repo that changes how a couple of things work, so the issue is def NOT on your end. You’ll just have to make a few minor tweaks.The updated code is here:canary/examples/with-mongodbThe React Framework. Contribute to vercel/next.js development by creating an account on GitHub.But essentially, the way we import the library has changed.Instead of importingand callingTo get our connection to the database. We’ll instead import the library like so:and to access a database in our getServerSideProps, we’ll do:and now the client.isConnected() function should work.The updated library itself looks like this:I would check out the code here for further instructions:\nhttps://github.com/vercel/next.js/blob/canary/examples/with-mongodb/pages/index.jsPlease let me know if that helps! I will update the blog post in the next few days as well to reflect these changes.Thanks!",
"username": "ado"
},
{
"code": "",
"text": "Thanks for your help Ado, sadly I still get the error that IsConnected is not a function. If I take out the app is connecting and querying the DB without problem.",
"username": "Alejandro_Chavero"
},
{
"code": "",
"text": "Yes, I have same problem with Alejandro even though I followed exact code as posted. I hope it will be notified by more developers and be fixed soon.",
"username": "Brandon_Lee"
},
{
"code": "mongodb@^[email protected]",
"text": "In the Next.js example, they used mongodb@^3.5.9.mongo@latest, which is 4.1.1 as of today, does not have isConnected method on MongoClient as far as I see. So if you just installed mongo in your own project, this might be it.",
"username": "nefil1m"
},
{
"code": "",
"text": "Well it’s nice to know I’m not the only one having problems with the isConnected not being a function.",
"username": "Jeff_Woltjen"
},
{
"code": "",
"text": "thanks and do you know an alternative to this function?",
"username": "Alejandro_Chavero"
},
{
"code": "getServerSidePropsexport async function getServerSideProps(context) {\n\n let isConnected;\n try {\n const client = await clientPromise\n isConnected = true;\n } catch(e) {\n console.log(e);\n isConnected = false;\n }\n\n return {\n props: { isConnected },\n }\n}\n",
"text": "to solve the “isConnected is not a function” error, change the function getServerSideProps to:Thanks,\nRafael,",
"username": "Rafael_Green"
},
{
"code": "",
"text": "But how I create /api routing for api calls? I need to create API routes inside /page directory, but it will not accept mongodb.js. Can you share the code of API call, please?",
"username": "Il_Chi"
},
{
"code": "import clientPromise from \"../../../lib/mongodb\";\nexport default async (req, res) => {\n const client = await clientPromise\n const { fieldvalue } = req.query\n const database = client.db('databasename');\n const userdb = await database.collection('collectionname')\n .find({ \"<field>\": `${ fieldvalue }` })\n .project({ \"_id\": 0 })\n .toArray();\n res.json(userdb)\n}",
"text": "This is how I’m calling my api routes:",
"username": "Alejandro_Chavero"
},
{
"code": "",
"text": "For that problem, the standard solution is to import clientPromise because versions higher than 3.9/4.0 do not have \"import {Mongoclient} \" command.Then also, if you want to use {MongoClient} then,Now it will work",
"username": "Bhagya_Shah"
},
{
"code": "",
"text": "Hi,\nIs there a special reason that we export clientPromise? If I export it, I need to access to db object and pick the database I want to work with in each route. So I don’t want to repeat myself. Of course I can find a quick solution, before spending time I just wanted to learn what is the reason we do it like that.\nThanks",
"username": "Yalcin_OZER"
}
] | IsConnected not a function in next.js app | 2021-08-26T21:48:16.961Z | IsConnected not a function in next.js app | 16,746 |
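For the last question in that thread (avoiding re-selecting the database in every route), one common pattern is a small helper module next to the exported clientPromise; a sketch with an illustrative database name:

```js
// lib/db.js -- assumes the lib/mongodb.js module shown in this thread,
// which exports a module-scoped MongoClient promise.
import clientPromise from "./mongodb";

// Central place to pick the database, so API routes don't repeat it.
export async function getDb() {
  const client = await clientPromise;
  return client.db("myapp"); // database name is a placeholder
}

// Usage inside an API route:
//   const db = await getDb();
//   const users = await db.collection("users").find({}).toArray();
```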
null | [
"queries",
"dot-net"
] | [
{
"code": " var fOptions = new FindOptions<Log, Log> { Limit = 1 };\n var data = await _collection.FindAsync(f => f.Id == 200 );\nfind - { \\\"find\\\" : \\\"log\\\", \\\"filter\\\" : { \\\"Id\\\" : 200, }, \\\"$db\\\" : \\\"xpto\\\", \\\"lsid\\\" : { \\\"id\\\" : CSUUID(\\\"375b6374-2367-4c08-b51c-5d4cd9da2f9f\\\") } }\n var data = await _collection.FindAsync(f => f.Id == 200 && f.Date >= DateTime.UtcNow );\n data.FirstOrDefaultAsync();\nfind - { \\\"find\\\" : \\\"log\\\", \\\"filter\\\" : { \\\"Id\\\" : 200, }, \\\"$db\\\" : \\\"xpto\\\", \\\"lsid\\\" : { \\\"id\\\" : CSUUID(\\\"375b6374-2367-4c08-b51c-5d4cd9da2f9f\\\") } }\n",
"text": "FirstOrDefaultAsync creates a command to fetch only 1 piece of data or is this in memory?I’m seeing the tracking of the query and I’m not seeing the limit in it, how does it create this instruction to obtain only 1 documentexemple query intercept:the query does not change with the instruction:exemple query intercept:",
"username": "Mychell_Dias"
},
{
"code": "",
"text": "Or is the limit related to another instruction, in this case the pipeline?How can we effectively see the query obtaining only 1 result like select top 1?",
"username": "Mychell_Dias"
}
] | Find With FirstOrDefaultAsync Ou FirstOrDefault | 2023-10-11T18:25:36.471Z | Find With FirstOrDefaultAsync Ou FirstOrDefault | 269 |
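The question above went unanswered in the thread. For comparison, this is what a pushed-down limit looks like at the command level in mongosh; whether FirstOrDefaultAsync adds the limit server-side is specific to the .NET driver version, so this only shows what to look for in the intercepted commands:

```js
// mongosh equivalent of "select top 1": the cursor method adds a
// limit field to the find command sent to the server.
db.log.find({ Id: 200 }).limit(1);

// The same thing as a raw command, showing exactly where the limit lives;
// a trace without this field means the trimming happens client-side.
db.runCommand({ find: "log", filter: { Id: 200 }, limit: 1 });
```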
null | [] | [
{
"code": " {\n _id: '$external.CN=admin',\n userId: new UUID(\"012da6f9-a284-4fbd-94f2-ddcdf23ba817\"),\n user: 'CN=admin',\n db: '$external',\n roles: [ { role: 'userAdminAnyDatabase', db: 'admin' } ],\n mechanisms: [ 'external' ]\n }\ndev> show collections\nMongoServerError: not authorized on dev to execute command { listCollections: 1, filter: {}, cursor: {}, nameOnly: true, authorizedCollections: false, lsid: { id: UUID(\"ea1db02d-2676-44ec-93e6-b1ae046c8942\") }, $db: \"dev\", $readPreference: { mode: \"primaryPreferred\" } }\n",
"text": "So i setup my Mongodb with TLS auth and created all keys and certs. The user i log into Mongodb is already in the $external db.In my Theory the user should have full rights now on the db but i face a auth error like below when i use for example show collections or insert a document into a db.Has anyone an idea where the problem could be?Thanks in advance\nJojo",
"username": "Jo_M1"
},
{
"code": "",
"text": "Hi @Jo_M1,\nI don’t understand why you created the 'user in the db $external.Regards",
"username": "Fabio_Ramohitaj"
}
] | Authorization Problems | 2023-10-18T12:55:29.605Z | Authorization Problems | 120 |
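The error in that thread is consistent with the role rather than the x.509 setup itself: userAdminAnyDatabase can manage users and roles but grants no read access to data, so listCollections fails. A mongosh sketch of adding a data role to the same $external user (run as a sufficiently privileged user; the role choice is illustrative):

```js
// userAdminAnyDatabase cannot run listCollections or read documents;
// grant a data role to the x.509 user for that.
db.getSiblingDB("$external").grantRolesToUser("CN=admin", [
  { role: "readWriteAnyDatabase", db: "admin" },
]);
```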
null | [
"replication",
"performance",
"transactions"
] | [
{
"code": "",
"text": "Hey!\nAny possibility mongo 4 is faster than Mongo 7?Situation: in my local dev macos system I have mongo 7 installed via brew and mongo 4 via run-rs tool. I have a web-app with bunch of tests, using Mongo replicaset(localhost). So, problem is - if I start replica via brew (mongo 7) even with only one memeber these tests run 2-3 times slower(!) than if I use default run-rs replica (localhost 20017, 20018, 20019). My tests using updates, deletes, queries, transactions if it important.I repoduse it many times and the output always the same - run-rs replica mongo ver4 with 3 memebers faster.Why is that? Is it case of Mongo or some options and tweaks of run-rs config?",
"username": "Alex_Kotomin"
},
{
"code": "",
"text": "@Alex_Kotomin As far as I know, mmapv1 engine (available earlier than v4.2) engine is faster than WiredTiger engine because the architecture is very simple at taking a risk of database file crash.\nEven if it is that, in my performance tests (shell based simple 1000 commands running test), v4 and v5 seems they are 2 ~ 30 times faster than v6 and v7. I’m also in trouble thinking about it.",
"username": "ystskm"
},
{
"code": "",
"text": "thank you! I guess we can live with it =)",
"username": "Alex_Kotomin"
},
{
"code": "",
"text": "Hey @Alex_Kotomin,Welcome to the MongoDB Community!in my local dev macos system I have mongo 7 installed via brew and mongo 4 via run-rs toolCould you please provide more specific details about the version (such as 4.0.x) you’re using and the deployment configuration of both versions you have in place?if I start replica via brew (mongo 7) even with only one memeber these tests run 2-3 times slower(!) than if I use default run-rs replica (localhost 20017, 20018, 20019)Based on the above information, it seems like you’re comparing a MongoDB v7.0 standalone instance with a MongoDB v4.0 3-member replica set. May I ask if there’s a specific reason for this particular comparison?It appears to me that the comparison is not entirely balanced or equivalent in terms of the deployment configurations being considered. Therefore, I suggest that you test both versions by running them with run-rs.However, it would be helpful if you could share more insights into your use case and the workflow you’re following. Additionally, please specify the language drivers and its version that you are testing with, and let us know whether you are using the same code to run these tests. This information will help us to assist you better.Looking forward to your response.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "this is run-rs mongo version 4.0.12\n(run-rs -h 127.0.0.1 --dbpath '/opt/homebrew/var/mongodb' --keep)\nFinished in 2.4 seconds (1.0s async, 1.4s sync)\n473 tests, 0 failures\n\n\nHere is run-rs with 7.0.1\n(run-rs -v 7.0.1 -h 127.0.0.1)\nFinished in 17.8 seconds (2.4s async, 15.4s sync)\n473 tests, 0 failures\n",
"text": "Hey!May I ask if there’s a specific reason for this particular comparison?There were no comparison, I just have to use replicaSet to use transaction. So i download run-rs and just start it with defaults (ver 4.0.12, 3 replicaset members).Everything was OK, but I decided to use latest version of Mongo and download it with brew. When I start it with replicaSet option and has 3 members too, I suddenly see my test slow down 2-3 times. I was suprprised by that, and start to look for reason, try to use only 1 memeber for example (27017), but it were still slower than run-rs.I use Elixir, mongodb-driver 1.0.3, tests are the same -no differs there, before tests there are collections and indexes creating, but not during the tests. Before some db tests clearing of some collections happen deleteMany()I tried that u asked for -download 7.0.1 version via run-rsBy now it is not problem for me, I just curious. By the way, on my windows 10 desktop same tests run with the same high speed as run-rs but has latest Mongo there (no run-rs on Windows).",
"username": "Alex_Kotomin"
},
{
"code": "",
"text": "@Kushagra_Kesav Thanks for all of your contribution to MongoDB and for this response to us \nIt’s not just a curious for me. I have been insterested in MongoDB for a long time (since 2011) and I’m using MongoDB v4.0 with configuration engine mmapv1 in several subsystems for my reserch environment. I think one of the characteristics of document-based database systems is the agility of data manipulation for large amounts of data.\nSo, I’d like to know why a simple operation like the one @Alex_Kotomin tried works slower in the latest version of MongoDB than in previous versions.",
"username": "ystskm"
},
{
"code": "",
"text": "Could be related to the write concern. IIRR since v6 ‘majority’ is the new default.",
"username": "Jens_Lippmann"
}
] | Mongo 4 faster than Mongo 7? | 2023-09-23T06:36:40.469Z | Mongo 4 faster than Mongo 7? | 461 |
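If the last reply's hypothesis holds (per MongoDB's release notes, the implicit default write concern became "majority" in 5.0), the benchmark can be made apples-to-apples by pinning the write concern explicitly on both versions. A minimal Node.js sketch; the connection string, ports, and replica set name are placeholders:

```js
// Pin the write concern so v4 and v7 test runs are comparable.
// w=1 acknowledges on the primary only, trading durability for latency,
// which matches the pre-5.0 implicit default.
const { MongoClient } = require("mongodb");

const client = new MongoClient(
  "mongodb://localhost:27017,localhost:27018,localhost:27019/" +
    "?replicaSet=rs&w=1"
);
```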
null | [
"compass"
] | [
{
"code": "",
"text": "HI Team,We are gettting below error while monitoring performance via mongodb compassCommand “currentOp” returned error “Invalid regular expression: /^(?)\\Qcustomeradmin\\E/i: Invalid group”Please help on this.Thanks,\nKiran",
"username": "Kiran_Joshy"
},
{
"code": "",
"text": "I’m wondering if this is because what you are trying is not supported on your Atlas tier of service.\nI’m on the free tier and here’s what I get when I go to the Performance tab:image1163×63 5.79 KB",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "So how can we monitor properly with community version which hosted in onprem Linux",
"username": "Kiran_Joshy"
},
{
"code": "",
"text": "Maybe it is a bug.\nYou can file a bug report at https://jira.mongodb.org/plugins/servlet/samlsso?redirectTo=%2F",
"username": "Jack_Woehr"
}
] | In compass , We are getting Command "currentOp" returned error "Invalid regular expression: /^(?)\Qcustomeradmin\E/i: Invalid group" | 2023-10-17T22:15:55.363Z | In compass , We are getting Command “currentOp” returned error “Invalid regular expression: /^(?)\Qcustomeradmin\E/i: Invalid group” | 196 |
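While the Compass issue is open, the same operation data its Performance tab reads can be pulled directly in mongosh with the $currentOp aggregation stage, which must run against the admin database; a minimal sketch:

```js
// Same data Compass reads, fetched directly. The $match keeps only
// active operations; drop it to see everything.
db.getSiblingDB("admin").aggregate([
  { $currentOp: { allUsers: true, idleConnections: false } },
  { $match: { active: true } },
]);
```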
null | [] | [
{
"code": "",
"text": "HI,I want to setup a new cluster were i want to update 16 million documents in 20 mins and this will be done in a regular basis.",
"username": "Senthilkumar_S1"
},
{
"code": "",
"text": "Hello Senthilkumar,apart from the number of documents you want to update, please consider:The more details you can provide, the better a sizing recommandation will fit your needs.Best,\nJens",
"username": "Jens_Lippmann"
}
] | Capacity planning for heavy rights | 2023-07-28T11:04:57.364Z | Capacity planning for heavy rights | 491 |
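Beyond cluster sizing, the shape of the update workload matters for a 16-million-document pass. A hedged Node.js sketch of batched bulkWrite, where `source`, the collection name, and the updated field are all illustrative:

```js
// Send updates in batches instead of 16M individual round trips.
// `db` is a connected Db instance; `source` is your async feed of changes.
const BATCH = 1000;
let ops = [];

for await (const doc of source) {
  ops.push({
    updateOne: {
      filter: { _id: doc._id },
      update: { $set: { price: doc.price } },
    },
  });
  if (ops.length === BATCH) {
    // ordered:false lets the server process the batch in parallel.
    await db.collection("items").bulkWrite(ops, { ordered: false });
    ops = [];
  }
}
if (ops.length) await db.collection("items").bulkWrite(ops, { ordered: false });
```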
null | [
"swift",
"flexible-sync"
] | [
{
"code": "class Comment: Object, ObjectKeyIdentifiable {\n @Persisted(primaryKey: true) public var _id: String = UUID().uuidString\n @Persisted public var ownerId: String\n @Persisted public var comment: String\n}\nlet app = App(id: \"xxxxxxx\")\n@main\nstruct TestSyncApp: SwiftUI.App {\n var body: some Scene {\n WindowGroup {\n if let app = app {\n AppView(app: app)\n .frame(maxWidth: .infinity, maxHeight: .infinity)\n }\n else {\n Text(\"No RealmApp found!\")\n }\n }\n }\n}\n\nstruct AppView: View {\n @ObservedObject var app: RealmSwift.App\n\n var body: some View {\n if let user = app.currentUser {\n let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"Comment\") != nil {\n return\n }\n else {\n subs.append(QuerySubscription<Comment>(name: \"Comment\"))\n }\n })\n OpenSyncedRealmView()\n .environment(\\.realmConfiguration, config)\n .environmentObject(user)\n }\n else {\n LoginView()\n }\n }\n}\n\nstruct OpenSyncedRealmView: View {\n @AutoOpen(appId: \"xxxxxxx\", timeout: 4000) var realmOpen\n\n var body: some View {\n switch realmOpen {\n case .connecting,.waitingForUser,.progress(_):\n ProgressView(\"waiting ...\")\n case .open(let realm):\n RealmContentView()\n .environment(\\.realm, realm)\n case .error(let error):\n Text(\"opening realm error: \\(error.localizedDescription)\")\n }\n }\n}\nstruct RealmContentView: View {\n @Environment(\\.realm) var realm: Realm\n @ObservedResults(Comment.self) var comments\n @State var subscribeToEmail: String = \"\"\n\n var body: some View {\n VStack {\n HStack {\n Spacer()\n Text(\"SubscribeTo:\")\n TextField(\"Email\", text: $subscribeToEmail)\n Button {\n if let user = app.currentUser {\n Task {\n do {\n _ = try await user.functions.subscribeToUser([AnyBSON(subscribeToEmail)])\n }\n catch {\n print(\"Function call failed - Error: \\(error.localizedDescription)\")\n }\n }\n }\n } label: {\n Image(systemName: \"mail\")\n }\n Text(\"New Comment:\")\n Button {\n let dateFormatter : DateFormatter = DateFormatter()\n dateFormatter.dateFormat = \"yyyy-MMM-dd HH:mm:ss.SSSS\"\n let date = Date()\n let dateString = dateFormatter.string(from: date)\n\n let newComment = Comment()\n newComment.comment = \"\\(app.currentUser!.id) - \\(dateString)\"\n newComment.ownerId = app.currentUser!.id\n $comments.append(newComment)\n } label: {\n Image(systemName: \"plus\")\n }\n }\n .padding()\n if comments.isEmpty {\n Text(\"No Comments here!\")\n }\n else {\n List {\n ForEach(comments) { comment in\n Text(comment.comment)\n .listRowBackground(comment.ownerId == app.currentUser!.id ? Color.white: Color.green)\n }\n }\n .listStyle(.automatic)\n }\n }\n }\n}\n",
"text": "Hi there,we try to implement the “Restricted News Feed” example in Swift from the Flexible Sync Permissions Guide.We couldn’t check out the example via the template, so we had to copy the backend related things from the guide to a newly created app. (enabled email authentication, added the authentication trigger, the function to subscribe to someone else, enabled custom userdata etc…)The backend seems to work as it should.On client side we implemented a simple comment Object, with String Data to display:User now can log in to the client and create comments - and sync it (works as expected). And they could subscribe to other users comments like in the example from the guide (with the same server function as in the guide). On the server we can see that the data is correct.The problem now: on client side nothing happens when a user subscribes to another users comment. The other users comments won’t be synced…Only when the user deletes his app from the device, reinstalls it and logs in with the same user as before - then he can see his comments and the comments from the user he subscripted to.Here is the code for initializing the realm in SwiftUI:And the code for displaying the comments:Did we miss something? Do we have to manage/handle subscriptions in a different way? Or have we found a bug?Thanks for any help!",
"username": "Dan_Ivan"
},
{
"code": "rerunOnOpentruelet config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"Comment\") != nil {\n return\n }\n else {\n subs.append(QuerySubscription<Comment>(name: \"Comment\"))\n }\n}, rerunOnOpen: true)\n",
"text": "Hey Dan - there are a couple of things you might try to resolve this. I haven’t looked in detail at this permissions model so I’m not sure which is your best fix.One option is to try setting the rerunOnOpen parameter to true in your Sync Configuration. So:This forces the subscriptions to recalculate every time the app is opened, and might resolve the need to delete/reinstall. But it would still require the user to close the app and re-open it to see the updated subscriptions. Let me know if that works, and if not, I may have some other suggestions to try.",
"username": "Dachary_Carey"
},
{
"code": "rerunOnOpen: true",
"text": "Hey Dachary!many thanks for the answer!\nWe had tried the rerunOnOpen: true before and now again on your advice.Unfortunately that doesn’t change anything. The other users’s data remains unsynced until the user deletes and reinstalls the app.We look forward to other suggestions!Kind regards,\nDan",
"username": "Dan_Ivan"
},
{
"code": "let config = user.flexibleSyncConfiguration(initialSubscriptions: { subs in\n if subs.first(named: \"Comment\") != nil {\n return\n }\n else {\n subs.append(QuerySubscription<Comment>(name: \"Comment\"))\n }\n}, clientResetMode: .recoverUnsyncedChanges())\n",
"text": "Ok, Dan - I’ve dug a little deeper here and have another suggestion to try. The docs for the restricted news feed state:changes don’t take effect until the current session is closed and a new session is started.I believe this is because we are effectively setting a new session role role for the user.The Swift SDK provides APIs to suspend and resume a Sync session. I believe that if you suspend and then resume Sync, that will trigger a session role change and the user should be able to sync the new comments. This may trigger a client reset, so you’ll want to set a client reset mode in your sync configuration. This would look something like:This should then trigger the realm to re-sync relevant comments based on the updated subscription.We do have some work planned in the future to improve this process, but I think this is roughly what you’ll need to do to handle it currently.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Hey Dachary,unfortunately, setting the client reset doesn’t do anything. same sync behavior as before.we took a closer look at the realm logs: we found nothing that indicates a client reset. It seems that the client reset is never triggered and maybe that is the underlying problem?",
"username": "Dan_Ivan"
},
{
"code": "",
"text": "Are you finding there is no client reset after suspending and resuming sync? I would not expect the client reset to occur until after the Sync session stops and a new one starts. This makes me wonder if there is still an active Sync session and that’s why the role change isn’t happening & new relevant docs are not getting synced.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "No, we can’t find in the log anything that indicates a client reset. We call the function to subscribe to the comments of another user, then we suspend on the synced realm, then we resume the synced realm - we see in the logs that the first sync session is closed and disconnected and that another sync session is started - but nothing about a client reset.",
"username": "Dan_Ivan"
},
{
"code": "",
"text": "Got it. A client reset may not be expected in this case - I know we’ve been doing work around reducing the need for client resets under certain scenarios. It’s also possible this isn’t a role change, and I’m conflating this with another permissions scenario.I’ll tag our engineers and see if I can find any other suggestions for you.",
"username": "Dachary_Carey"
},
{
"code": "",
"text": "Are there any news regarding this issue?We are working on a project that relies on similar functionality and this issue is currently blocking our development.\nWould it be advisable to book an appointment with an engineer at MongoDB (Flex Consulting) to solve this quickly?Thank you!",
"username": "Dan_Ivan"
},
{
"code": "",
"text": "@Dan_Ivan What’s the question? I will say that we have released a Client Reset with Automatic Recovery across all of our SDKs which should perform this recovery and reset logic for you automatically under the hood -",
"username": "Ian_Ward"
},
{
"code": "owner_idownerId",
"text": "I did check with engineering, and they spotted that the docs & backend use owner_id as the queryable field, but the snippet you’ve posted here uses ownerId. If that’s the issue, you should be seeing in the logs that the field used in permissions is not a queryable field.If that doesn’t solve the issue, then some debugging directly with our engineers is probably the right next step.",
"username": "Dachary_Carey"
},
{
"code": "owner_id",
"text": "I’m facing the same issue. And I use proper owner_id",
"username": "Alexandar_Dimcevski"
},
{
"code": "",
"text": "@Alexandar_Dimcevski What error are you getting?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "No error. But rerun on open doesn’t run when I close and open the app",
"username": "Alexandar_Dimcevski"
},
{
"code": "",
"text": "We had a support session with a MongoDB support engineer and found out that this is not yet fully implemented in the Swift SDK: currently the realm won’t change automatically if - as in the example above - the flexible sync permissions change due to a change in the custom data (session role change).\nThe only safe way at the moment - according to the MongoDB engineer - is to “log out and log in the user mandatorily”. Then the data is correctly synchronized again with the new permissions.He also told us: “It should be noted that the feature to handle role changes without client reset is under active consideration and is being developed now it may take some time to be available for the\ngeneral public.”It would be very interesting to hear from official MongoDB staff here when we can expect this feature to be implemented - because it is not reasonable that users have to log out and log in again to get their data synced correctly!",
"username": "Dan_Ivan"
},
{
"code": "\"ChatMessage\": [\n {\n \"name\": \"anyone\",\n \"applyWhen\": {},\n \"read\": {},\n \"write\": {\n \"authorID\": \"%%user.id\"\n }\n }\n],\n",
"text": "Is there a reason to store the subscriptions inside of custom_data? Perhaps it’d work if you made a synced User object instead of using using custom_data. The RChat example does this.flex-syncContribute to realm/RChat development by creating an account on GitHub.edit: Also it seems the RChat app lets anyone read chats, so maybe that doesn’t actually work.",
"username": "Jonathan_Czeck"
},
{
"code": "",
"text": "Hi, is there any update on this issue?",
"username": "Dominik_Hait"
},
{
"code": "",
"text": "@Dachary_Carey Can you help here?",
"username": "Dominik_Hait"
},
{
"code": "",
"text": "Dominik,\nToday, permissions are cached per sync session. as @Dan_Ivan mentioned previously. While this is an area of planned improvement, a permission change (for instance, a change to custom user data) is only guaranteed to take effect after the sync session restarts (ie disconnect and reconnect / log out and log back in).We would recommend changing the subscription rather than the permissions to change what the user sees.",
"username": "Sudarshan_Muralidhar"
},
{
"code": "",
"text": "@Sudarshan_Muralidhar If I do as you say, there is absolutely Zero data security. Anyone can see anything.That’s completely nonviable, dangerous and irresponsible to suggest!Take the collaboration examples off the site until you actually support collaboration.The collaboration approach suggested in the official docs does not work for reasons you wrote. Why do you suggest people do this?-Jon",
"username": "Jonathan_Czeck"
}
] | Realm Flexible Sync not working properly in Swift | 2022-10-31T09:03:37.901Z | Realm Flexible Sync not working properly in Swift | 5,092 |
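For reference, the "end the current sync session and start a new one" workaround this thread converges on looks roughly like the following in the Realm JavaScript SDK (the Swift SDK exposes equivalent suspend/resume APIs). This is a sketch, not an officially confirmed recipe; `credentials` and error handling are left out:

```js
// After a permission change (e.g. updated custom user data), force a new
// sync session so the new session role takes effect.
function restartSyncSession(realm) {
  realm.syncSession?.pause();   // disconnects the current session
  realm.syncSession?.resume();  // reconnects with a fresh session
}

// Heavier fallback the thread ends up recommending: full re-auth.
async function fullReauth(app, credentials) {
  await app.currentUser?.logOut();
  return app.logIn(credentials);
}
```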
null | [
"aggregation"
] | [
{
"code": "{\n _id: \"123456\",\n name: \"John Doe\",\n items: [\n {\n item: \"7890\", \n count: 4\n },\n {\n item: \"6543\", \n count: 4\n },\n ]\n}\n{\n _id: \"7890\",\n name: \"item1\"\n},\n{\n _id: \"6543\",\n name: \"item2\"\n}\n{\n _id: \"123456\",\n name: \"John Doe\",\n items: [\n {\n item: \"7890\", \n name: \"item1\",\n count: 4\n },\n {\n item: \"6543\", \n name: \"item2\",\n count: 4\n },\n ]\n}\n{\n \"$lookup\": {\n \"from\": \"items\",\n \"localField\": \"items.item\",\n \"foreignField\": \"_id\",\n \"as\": \"items\"\n }\n }\n{\n _id: \"123456\",\n name: \"John Doe\",\n items: [\n {\n item: \"7890\", \n name: \"item1\"\n },\n {\n item: \"6543\", \n name: \"item2\"\n },\n ]\n}\n",
"text": "I have two collections, one with orders and one with itemsOrders:Items:Ideally, I would like whenever an order is pulled up, the items will populate with the count requested; as shown belowCurrently, I have tried the following lookup in the Orders collection;However, that lookup completely overrides the ‘count’ field in the order and gives me this;Is there a way to expand the ‘item’ object in the ‘order.items’ array without it overriding the ‘count’ field?, I would think it could be something done with $mergeObjects, but I am not experienced enough with the aggregation pipeline to understand how it works,PlaygroundThank you for the advice ahead of time.",
"username": "Travis_Engle"
},
{
"code": "nameitem id and items_lookup$mepitemsitems_lookup$indexOfArray$arrayElemAt$mergeObjects$$iitemsdb.orders.aggregate([\n {\n \"$lookup\": {\n \"from\": \"items\",\n \"localField\": \"items.item\",\n \"foreignField\": \"_id\",\n \"as\": \"items_lookup\"\n }\n },\n {\n \"$project\": {\n \"name\": 1,\n \"items\": {\n \"$map\": {\n \"input\": \"$items\",\n \"as\": \"i\",\n \"in\": {\n \"$mergeObjects\": [\n \"$$i\",\n {\n \"name\": {\n \"$arrayElemAt\": [\n \"$items_lookup.name\",\n { \"$indexOfArray\": [\"$items_lookup._id\", \"$$i.item\"] }\n ]\n }\n }\n ]\n }\n }\n }\n }\n }\n])\n",
"text": "Hello @Travis_Engle, Welcome to the MongoDB community forum,To avoid every lookup and the whole process, you can store the item’s name along with item id and count` properties.The rule of thumb whenever you design your schema is:Data that is accessed together should be stored together.If you still want to understand query how to solve your problem then here you go,PlaygroundThere are many ways in aggregation query to achieve this result!",
"username": "turivishal"
},
{
"code": "",
"text": "Thank you for the response, and the explanation of the process!One question I do have is would this same pipeline work if there was more fields in the ‘items’ db instead of just the name, such as a ‘sku’ and ‘manufacturer’? Or would it be wiser computation-wise for as orders are created to just store all the item information within the order since you mentioned that;Data that is accessed together should be stored together.I would think it would be easier to have the reference to pull the data so the information isn’t duplicated within the database, but I can also understand how it would be easier since as the number of available items can increase that it would increase the time needed to run such function.Thanks again!",
"username": "Travis_Engle"
},
{
"code": "",
"text": "Hello @Travis_Engle,One question I do have is would this same pipeline work if there was more fields in the ‘items’ db instead of just the name, such as a ‘sku’ and ‘manufacturer’?No, it requires a different approach, something like this Playground",
"username": "turivishal"
},
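For reference, a hedged sketch of the multi-field variant discussed above (field names assumed from the sample documents; instead of projecting a single field, the whole matched item is merged, so sku, manufacturer, etc. come along automatically):

db.orders.aggregate([
  { "$lookup": { "from": "items", "localField": "items.item", "foreignField": "_id", "as": "items_lookup" } },
  { "$project": {
      "name": 1,
      "items": {
        "$map": {
          "input": "$items",
          "as": "i",
          "in": {
            "$mergeObjects": [
              // the full matched items document (name, sku, manufacturer, ...);
              // assumes every items.item has a match in the items collection
              { "$arrayElemAt": ["$items_lookup", { "$indexOfArray": ["$items_lookup._id", "$$i.item"] }] },
              // order-level fields such as count win on key collisions
              "$$i"
            ]
          }
        }
      }
    }
  }
])

Note the merged element also carries the item’s _id; reshape inside the $map if that is unwanted.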
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using aggregation to expand object in nested object without overriding other values | 2023-10-17T14:30:41.321Z | Using aggregation to expand object in nested object without overriding other values | 180 |
[
"python"
] | [
{
"code": "",
"text": "FAIL: Unable to retrieve connection string.\nimage736×121 11 KB\nAbove error occurred when I hit the “check” button during the first step of the lab hands on for “Lab: Connecting to an Atlas Cluster in Python Applications”. I’ve tried connecting using connection string and password which succeeded, but I still get the error like the one in the image.",
"username": "DIXIT_B_C"
},
{
"code": "",
"text": "Hi,Thanks for reaching out. The University team will review the error and apply the fixes necessary.If you face other issues, don’t hesitate to contact them at [email protected]",
"username": "Davenson_Lombard"
}
] | Unable retrieve connection string: Error while using in-browser Integrated Development Environment | 2023-10-17T12:41:41.353Z | Unable retrieve connection string: Error while using in-browser Integrated Development Environment | 205 |
|
[
"next-js",
"typescript"
] | [
{
"code": "",
"text": "Hello. I joined this community today because I was frustrated about my first connection Prisma ORM to Mongo DB Atlas. I always get a message “An error occurred during DNS resolution: request timed out” and I never know what caused it. I have tried to turn off my firewall, refresh MongoDB services, change DNS from my connections to Google DNS, I’ve asked some developer groups on Facebook, and I’ve made a lot of effort but until now my laptop still can connect with MongoDB atlas.Before with the same project file, I try this with my working laptop in the office but all is clear. MongoDB can’t connect to my Next JS Prisma code.I saw some similar threads, but I couldn’t solve my problem. Could you help me, please?\n\nimage1033×271 24.1 KB\n",
"username": "Johandika_Syahputra_Lubis"
},
{
"code": "",
"text": "Some documentations\n\nimage468×878 9.8 KB\n",
"username": "Johandika_Syahputra_Lubis"
},
{
"code": "",
"text": "\nimage1044×966 72.7 KB\n",
"username": "Johandika_Syahputra_Lubis"
},
{
"code": "",
"text": "\nimage1020×105 11.6 KB\n",
"username": "Johandika_Syahputra_Lubis"
},
{
"code": "mongosh",
"text": "Hi @Johandika_Syahputra_Lubis - Welcome to the community.I’ve made a lot of effort but until now my laptop still can connect with MongoDB atlas.Before with the same project file, I try this with my working laptop in the office but all is clear. MongoDB can’t connect to my Next JS Prisma code.Firstly, I’d like to just clarify the above, when you state “my laptop still can connect with MongoDB atlas” - Do you mean “can’t”? I just want to double check.Additionally, it sounds like the main issue here is connection to the Atlas instance via Prisma but I would also like to know if you are able to connect locally (on your laptop for example) direct to the Atlas instance perhaps using mongosh or MongoDB Compass. At least from this we can round out what is happening here.I am not too familiar with the full capabilities of Prisma but please advise the following:Look forward to hearing from you.Regards,\nJason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Hi @Jason_Tran , Thanks for your response.Yes, MongoDB can connect with the office network, and all is good. The problem just in my sharing network with my smartphone that used Telkomsel provider and some of the developers in my group are experiencing something similar and until now still have not found a solution.I followed the guide from the official link : MongoDB database connectorI haven’t made contact with Prisma’s help center because I think the issue is in the MongoDB settings or a problem with my provider regarding srv. Some time ago I tried to install MongoDB compass to connect, and it worked. But I found another error that I can’t push data to MongoDB local, but I forgot what that error was, but it’s a different error.",
"username": "Johandika_Syahputra_Lubis"
},
{
"code": "nslookup -type=srv cluster0.umk4yne.mongodb.net\n",
"text": "Yes, MongoDB can connect with the office network, and all is good.From this information I would assume there are no issues with the Atlas project’s network access list or cluster.The problem just in my sharing network with my smartphone that used Telkomsel provider and some of the developers in my group are experiencing something similar and until now still have not found a solution.My interpretation is that you cannot connect when using a tethered network connection with your smartphone? Is this correct? Sounds like it probably is a network issue with the hotspot connection based on this. Have you tried performing some basic network tests whilst using the mobile hotspot connection? You can also try running the following on the problematic network to see what the response is:Regards,\nJason",
"username": "Jason_Tran"
},
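If the SRV lookup above fails on the problematic network, one hedged workaround is to bypass SRV resolution entirely and use the standard (non-SRV) connection string form. The host and replica set names below are hypothetical placeholders; the real ones would come from the nslookup output:

const uri =
  "mongodb://cluster0-shard-00-00.umk4yne.mongodb.net:27017," +
  "cluster0-shard-00-01.umk4yne.mongodb.net:27017," +
  "cluster0-shard-00-02.umk4yne.mongodb.net:27017" +
  "/mydb?ssl=true&replicaSet=atlas-xxxxxx-shard-0&authSource=admin"; // names are placeholders

This only helps when the SRV/TXT DNS queries are what time out; the plain A-record lookups still have to succeed.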
{
"code": "",
"text": "Install Cloudflare WARP VPN on your laptop, and it will work.",
"username": "Yassine_Jouaoudi"
}
] | Error: MongoDB error. An error occurred during DNS resolution: request timed out | 2023-08-12T15:41:45.579Z | Error: MongoDB error. An error occurred during DNS resolution: request timed out | 1,368 |
|
null | [
"transactions",
"storage"
] | [
{
"code": "",
"text": "Hi!\nIn the past weeks we’ve been observing strange interruptions in our service that was identified as mongoDb crashing/shutting down. There are no error messages in the logs, but today I did notice the very last entry in the log:\n[ftdc] serverStatus was very slow: { after basic: 153, after asserts: 331, after backgroundFlushing: 376,\nafter connections: 424, after dur: 453, after electionMetrics: 665, after extra_info: 842, after freeMonitoring: 1111,\nafter globalLock: 1598, after locks: 1970, after logicalSessionRecordCache: 2302, after network: 2662, after opLatencies: 2809,\nafter opReadConcernCounters: 2854, after opcounters: 2884, after opcountersRepl: 2974, after oplogTruncation: 3557, after repl:\n3714, after security: 3724, after storageEngine: 3857, after tcmalloc: 4018, after transactions: 4130, after transportSecurity: 4361,\nafter wiredTiger: 5896, at end: 6783 }We run our instance on a VM. I do remember reading that malware/antivirus scans can interfere with mongod, but can there be any other known causes that we can look into? Thanks a lot",
"username": "Petr_N_A"
},
{
"code": "",
"text": "Hi @Petr_N_A and welcome to MongoDB community forums!!Unfortunately, we do not have the right tools to look at your ftdc data. It would be helpful if you could share more details on the error that you are seeing.\nIf you wish to look deeper into the issue, the recommendation would be to contact the MongoDB Support channels and raise the request with all necessary details.Else, it would be helpful if you could share more details on the deployment and the error that you are facing.Regards\nAasawari",
"username": "Aasawari"
}
] | [ftdc] serverStatus was very slow often preceding crashes (with no other msgs in the logs) | 2023-10-17T12:01:59.734Z | [ftdc] serverStatus was very slow often preceding crashes (with no other msgs in the logs) | 185 |
null | [] | [
{
"code": "Type System.Collections.Generic.List`1[[MyNameSpace.Entity, MyNameSpace, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null]] is not configured as a type that is allowed to be serialized for this instance of ObjectSerializer.\nvar objectSerializer = new ObjectSerializer(type => ObjectSerializer.DefaultAllowedTypes(type) || type.FullName.StartsWith(\"MyNamespace\"));\nBsonSerializer.RegisterSerializer(objectSerializer);\nMongoDB.Bson.BsonSerializationException : There is already a serializer registered for type Object.\n",
"text": "I fight with the serialitation of a list after updating my driver at 2.22.0.\nI got this exception:I saw the release notes for 2.19.0 and add this to my code:And got this exception:Could you please help me understand what I am doing wrong here?Thank you for your time and support.",
"username": "Denis_Sobek"
},
{
"code": "ObjectSerializer\n",
"text": "Make sure you’re registeringonly once around your code base.",
"username": "Vladislav_Lemish"
},
{
"code": "",
"text": "I checked it. \nSearch for “ObjectSerializer” in hole solutionimage1016×207 8.4 KB",
"username": "Denis_Sobek"
}
] | Serialization problems with version 2.19+ | 2023-10-18T09:20:53.368Z | Serialization problems with version 2.19+ | 164 |
null | [
"node-js",
"data-modeling"
] | [
{
"code": "",
"text": "Hi there,are there resources or examples somewhere on how to properly architect a data model for a Group so that realm can sync/subscribe to the data with the limitations coming with Flexible sync like (Flexible Sync does not support querying on properties in Embedded Objects or links).\nA User can belong to many Groups. (group members)\nGroup members can access group resources.\nI’m having a very hard time designing this.\nThanks in advance.",
"username": "Benoit_Werner"
},
{
"code": "SharedWith",
"text": "I’m working on such a sample because of my own frustrations with the samples documented and distributed, and will be writing an article about it. There are likely to be many architectures depending on your frequency of updates and the nature of sharing relationships.I am presenting it publicly on the 25th but will try to release it much earlier - I’ve got a client demo on the 20th.It leans into the MongoDB document philosophy of copying and duplicate data for speed. The essence is that shared data is copied into a SharedWith object as nested objects. This propagates changes very fast and has minimal subscriptions (remember there’s a limit of 10 active).Just for your convenience, I’ve pushed the docs up to the MADsyncily repo.Check out the Architecture and DataSubscriptions docs.The sample is in C# with Xamarin Forms so the rest of the code will probably be of little interest and I have a few bugs to cleanup before I want it out there.",
"username": "Andy_Dent"
},
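For readers hitting the same wall, a minimal flexible-sync subscription sketch in the JS SDK, assuming hypothetical GroupMembership and GroupResource object types whose queryable fields (userId, groupId) are duplicated top-level properties, since flexible sync cannot query through links or embedded objects:

await realm.subscriptions.update((mutableSubs) => {
  // groups this user belongs to
  mutableSubs.add(
    realm.objects("GroupMembership").filtered("userId == $0", currentUserId),
    { name: "myMemberships" }
  );
  // resources for a known list of group ids
  mutableSubs.add(
    realm.objects("GroupResource").filtered("groupId IN $0", myGroupIds),
    { name: "myGroupResources" }
  );
});

The groupId-on-every-resource duplication is what makes the second query expressible at all, the same copy-for-speed trade-off the architecture docs above lean into.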
{
"code": "",
"text": "Thanks so much for sharing this. I’ll read this with great interest!",
"username": "Benoit_Werner"
},
{
"code": "",
"text": "Also see the official Device Sync Permissions Guide which discusses a few different architectures mostly from the aspect of how to setup permissions.It mentions the subscription models in passing.",
"username": "Andy_Dent"
}
] | Properly architecting Groups for flex sync | 2023-10-17T09:49:13.541Z | Properly architecting Groups for flex sync | 253 |
[] | [
{
"code": "",
"text": "Hello, I am writing this message in case you can help me.\nI want to update all the documents (movies) that have the empty cast array, adding a new element inside the array with Undefined value.\nThanks in advance!\n\ncaptura..1362×328 90.2 KB\n",
"username": "Josjoaqun1991"
},
{
"code": "> db.test.updateMany({\"array\":{$size:0}},{$push:{\"array\":{value:\"undefined\"}}})\n{ \"acknowledged\" : true, \"matchedCount\" : 2, \"modifiedCount\" : 2 }\n> db.test.find().pretty()\n{\n \"_id\" : ObjectId(\"63b9d3114274ac5214a44331\"),\n \"array\" : [\n {\n \"value\" : \"undefined\"\n }\n ]\n}\n{\n \"_id\" : ObjectId(\"63b9d32f4274ac5214a44332\"),\n \"array\" : [\n {\n \"value\" : \"undefined\"\n }\n ]\n}\n",
"text": "Hi @Josjoaqun1991\nif i understand your request, you can use this query for example:Above there is the result of this operation:Best Regards",
"username": "Fabio_Ramohitaj"
},
{
"code": "",
"text": "Thank you very much @Fabio_Ramohitaj !Another question…\nBefore, I want to check how many films have empty the array “cast”I have used this code , Do you think is correct?db.dataset.count({ cast: { $exists: true, $not: {$size: 0} } })",
"username": "Jose_jimenez1"
},
{
"code": "",
"text": "Hi @Jose_jimenez1,\nLooking it quickly, it looks likes correct for me!Best Regards",
"username": "Fabio_Ramohitaj"
}
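One hedged correction on the count query above: $not: { $size: 0 } matches documents whose cast array is not empty, so it counts the films that do have cast members. To count films whose cast array exists and is empty, drop the $not:

db.dataset.countDocuments({ cast: { $exists: true, $size: 0 } })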
] | Update adding "Undefined" value | 2023-01-07T19:34:33.187Z | Update adding “Undefined” value | 2,354 |
|
null | [
"aggregation"
] | [
{
"code": "db={\n \"item\": [\n {\n \"_id\": \"ABC\",\n \"name\": \"first item\"\n },\n {\n \"_id\": \"DEF\",\n \"name\": \"second item\"\n },\n {\n \"_id\": \"XYZ\",\n \"name\": \"third item\"\n },\n {\n \"_id\": \"EFG\",\n \"name\": \"fourth item\"\n }\n ],\n \"itemBox\": [\n {\n \"_id\": \"1\",\n \"items\": [\n \"ABC\",\n \"DEF\"\n ]\n },\n {\n \"_id\": \"2\",\n \"items\": [\n \"EFG\"\n ]\n }\n ]\n}\n[\n {\n \"_id\": \"XYZ\",\n \"name\": \"third item\"\n }\n]\ndb.item.aggregate([\n {\n \"$lookup\": {\n \"from\": \"itemBox\",\n \"localField\": \"_id\",\n \"foreignField\": \"items\",\n \"as\": \"itemBox_doc\"\n }\n }\n])\n[\n {\n \"_id\": \"ABC\",\n \"itemBox_doc\": [\n {\n \"_id\": \"1\",\n \"items\": [\n \"ABC\",\n \"DEF\"\n ]\n }\n ],\n \"name\": \"first item\"\n },\n {\n \"_id\": \"DEF\",\n \"itemBox_doc\": [\n {\n \"_id\": \"1\",\n \"items\": [\n \"ABC\",\n \"DEF\"\n ]\n }\n ],\n \"name\": \"second item\"\n },\n {\n \"_id\": \"XYZ\",\n \"itemBox_doc\": [],\n \"name\": \"third item\"\n },\n {\n \"_id\": \"EFG\",\n \"itemBox_doc\": [\n {\n \"_id\": \"2\",\n \"items\": [\n \"EFG\"\n ]\n }\n ],\n \"name\": \"fourth item\"\n }\n]\n",
"text": "I have the database as shown below:I want to select only items that do not have if its ID appear in the items list for any itemBox. In this example, I would expect the result to bethis is currently the query I have to perform the lookup:This results in the output:How should I modify the query to provide the expected result?Mongoplayground link: Mongo playground",
"username": "Curtis_L"
},
{
"code": "db.item.aggregate([\n {\n \"$lookup\": {\n \"from\": \"itemBox\",\n \"localField\": \"_id\",\n \"foreignField\": \"items\",\n \"as\": \"itemBox_doc\"\n }\n },\n {\n $match: {\n \"itemBox_doc\": {\n $size: 0\n }\n }\n },\n {\n \"$project\": {\n \"itemBox_doc\": 0\n }\n }\n])\n",
"text": "I have managed to find a query that returns my expected result",
"username": "Curtis_L"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Select documents where lookup is empty | 2023-10-18T09:24:33.998Z | Select documents where lookup is empty | 149 |
null | [
"ops-manager"
] | [
{
"code": "STATE (APPDB)$ kubectl get namespaces\nmongodb Active 47m\n\n$ kubectl get om -n mongodb\nNAME REPLICAS VERSION STATE (OPSMANAGER) STATE (APPDB) STATE (BACKUP) AGE WARNINGS\nops-manager 1 4.4.3 Pending 47m\n\n$ kubectl get sc \nNAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE\nmanaged-nfs-storage nfs-dynamic-storage Delete WaitForFirstConsumer false 50m\n\n$ kubectl get pv\nNAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE\ndata-ops-manager-db-0 5Gi RWO Retain Available managed-nfs-storage 44m\ndata-ops-manager-db-1 5Gi RWO Retain Available managed-nfs-storage 44m\ndata-ops-manager-db-2 5Gi RWO Retain Available managed-nfs-storage 44m\n\n$ kubectl get pvc -n mongodb\nNAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE\ndata-ops-manager-db-0 Bound pvc-48501c9d-0582-4410-9c0d-30fa9ae24ed1 16G RWO managed-nfs-storage 44m\ndata-ops-manager-db-1 Bound pvc-f5adc6bf-5978-4b3c-97cb-a9077363f9f6 16G RWO managed-nfs-storage 44m\ndata-ops-manager-db-2 Bound pvc-24732b52-1656-4e12-9589-fea8f8bd47b5 16G RWO managed-nfs-storage 44m\n\n$ kubectl get pods -n mongodb\nNAME READY STATUS RESTARTS AGE\nmongodb-enterprise-operator-6dd9b65cdd-hq72w 1/1 Running 0 49m\nops-manager-db-0 2/2 Running 0 48m\nops-manager-db-1 2/2 Running 0 38m\nops-manager-db-2 2/2 Running 0 32m\n\n$ kubectl describe om -n mongodb\nName: ops-manager\nNamespace: mongodb\nLabels: <none>\nAnnotations: <none>\nAPI Version: mongodb.com/v1\nKind: MongoDBOpsManager\nMetadata:\n Creation Timestamp: 2021-06-05T17:50:37Z\n Generation: 1\n Managed Fields:\n API Version: mongodb.com/v1\n Fields Type: FieldsV1\n fieldsV1:\n f:metadata:\n f:annotations:\n .:\n f:kubectl.kubernetes.io/last-applied-configuration:\n f:spec:\n .:\n f:adminCredentials:\n f:applicationDatabase:\n .:\n f:additionalMongodConfig:\n .:\n f:operationProfiling:\n f:members:\n f:podSpec:\n .:\n f:cpu:\n f:version:\n f:backup:\n .:\n f:enabled:\n f:configuration:\n .:\n f:automation.versions.source:\n f:mms.adminEmailAddr:\n f:mms.fromEmailAddr:\n f:mms.ignoreInitialUiSetup:\n f:mms.mail.hostname:\n f:mms.mail.port:\n f:mms.mail.ssl:\n f:mms.mail.transport:\n f:mms.minimumTLSVersion:\n f:mms.replyToEmailAddr:\n f:externalConnectivity:\n .:\n f:type:\n f:replicas:\n f:version:\n Manager: kubectl-client-side-apply\n Operation: Update\n Time: 2021-06-05T17:50:37Z\n API Version: mongodb.com/v1\n Fields Type: FieldsV1\n fieldsV1:\n f:status:\n .:\n f:applicationDatabase:\n .:\n f:lastTransition:\n f:message:\n f:observedGeneration:\n f:phase:\n f:version:\n Manager: mongodb-enterprise-operator\n Operation: Update\n Time: 2021-06-05T17:51:07Z\n Resource Version: 16618\n UID: b0009425-2826-4f77-8cdf-727420385fa5\nSpec:\n Admin Credentials: adminusercredentials\n Application Database:\n Additional Mongod Config:\n Operation Profiling:\n Mode: slowOp\n Members: 3\n Pod Spec:\n Cpu: 0.25\n Version: 4.2.6-ent\n Backup:\n Enabled: false\n Configuration:\n automation.versions.source: mongodb\n mms.adminEmailAddr: [email protected]\n mms.fromEmailAddr: [email protected]\n mms.ignoreInitialUiSetup: true\n mms.mail.hostname: email-smtp.us-east-1.amazonaws.com\n mms.mail.port: 465\n mms.mail.ssl: true\n mms.mail.transport: smtp\n mms.minimumTLSVersion: TLSv1.2\n mms.replyToEmailAddr: [email protected]\n External Connectivity:\n Type: NodePort\n Replicas: 1\n Version: 4.4.3\nStatus:\n Application Database:\n Last Transition: 2021-06-05T18:40:08Z\n Message: Application Database Agents haven't reached Running state yet\n Observed 
Generation: 1\n Phase: Pending\n Version:\nEvents: <none>\n",
"text": "I’m trying to deploy the ops-manager in my kubernetes cluster.\nThe cluster has been setup using kubeadm and has a master and a worker node.I have been following the steps mentioned in this link.\nhttps://www.mongodb.com/blog/post/running-mongodb-ops-manager-in-kubernetesHowever, the STATE (APPDB) is always in pending. The following is the output of different commands which I feel will be helpful in further debugging:I wanted to know if there is something that I’m missing here because of which the APP DB doesn’t appear in the running state ?",
"username": "black_albus"
},
{
"code": "",
"text": "Hi, we faced the same situation and found more hints in details of application databases pods. Just switch between containers of those pods and check the logs. I think in our case it was the agent pod which showed problems with SSL certificate - missing CA certificates in ConfigMap. After updating the config map with CAs it worked.",
"username": "Gabriel_Gubis"
}
] | APP DB doesnt appear when deploying ops manager | 2021-06-05T18:48:49.108Z | APP DB doesnt appear when deploying ops manager | 3,158 |
null | [
"atlas-search"
] | [
{
"code": " {\n $search: {\n index: \"segments\",\n queryString: {\n \"query\" : \"foo AND bar\", \n defaultPath : { \"value\": \"Content\", \"multi\": \"en\" }\n \n \n }\n }\n }\n {\n $search: {\n index: \"segments\",\n text: {\n \"query\" : \"Completed mediation\", \n path :\"Content\"\n \n \n },\n \"highlight\": {\n \"path\": \"Content\"\n }\n }\n }\n[\n {\n $search: {\n index: \"segments\",\n text: {\n \"query\" : \"Completed mediation\", \n path : { \"value\": \"Content\", \"multi\": \"en\" }\n \n \n },\n \"highlight\": {\n \"path\": { \"value\": \"Content\", \"multi\": \"en\" }\n }\n }\n }\n]\n",
"text": "I have an Atlas Search index where I’ve used multi option to index a field Content with lucene.standard and multi en with english. This is a thing common in Elasticsearch. It seems that it not possible to specify multi on defaultPath part of queryString query, because it gives me the error: “queryString.defaultPath” must be a stringHow can I specify a multifield as defaultPath in queryString?Also this query is executed perfectlyBut I’m not able to have highlight in the multifield because I got “unexpected error”…Clearly if I specify highlight only on Content field it does not work because lucene is not capable to match the field.It seems that the whole experience of multifield is somewhat difficult to use…Gian Maria.",
"username": "Gian_Maria_Ricci"
},
{
"code": "defaultPathstringAtlas atlas-cihc7e-shard-0 [primary] test> db.multipleFiledSearch.find()\n[\n {\n _id: ObjectId(\"652e2cba1a3f40941660c3ba\"),\n content: 'Completed mediation'\n },\n { _id: ObjectId(\"652e2da81a3f40941660c3bb\"), content: 'foo AND bar' }\n]\n{\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"content\": {\n \"analyzer\": \"lucene.standard\",\n \"multi\": {\n \"mySecondaryAnalyzer\": {\n \"analyzer\": \"lucene.english\",\n \"type\": \"string\"\n }\n },\n \"type\": \"string\"\n }\n }\n }\n}\n[\n {\n $search: {\n text: {\n query: \"Completed mediation\",\n path: {\n value: \"content\",\n multi: \"mySecondaryAnalyzer\",\n },\n },\n },\n },\n]\n",
"text": "Hello, @Gian_Maria_Ricci, and welcome to the MongoDB community forums!As mentioned in the queryString documentation, it’s worth noting that the defaultPath is a required parameter, and the field must be of string data type. As a result, the first and third queries would not work correctly with the parameters specified.Based on the query you’ve shared, you can utilise multiField indexes, as shown in the following example:Suppose we have the sample data:I have defined the index as:And the following query:will give you the desired output.For more details, please refer to the Construct the Query Path documentation.In case if you have further questions or concerns, please feel free to share additional info such as sample documents and the expected results, so community can help you better.Warm regards,\nAasawari",
"username": "Aasawari"
}
] | How to use Multifield on defaultPath of a queryString search query | 2023-10-14T15:57:22.466Z | How to use Multifield on defaultPath of a queryString search query | 230 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "My App mostly uses local reads, and every once in a while will save data that is synced with Atlas DeviceSync, and more often will received updates from other user devices, but almost never two devices will be on at the same time. All works great for my desktop .Net project with one caveat. Everytime I get Realm Instance with Flex config to read and object, the Realm creates a connection to the Atlas service. Is there a way to avoid it?",
"username": "Milen_Milkovski"
},
{
"code": "",
"text": "You can pause and resume the sync session using this API: https://www.mongodb.com/docs/realm/sdk/dotnet/sync/sync-session/Let me know if that is not quite what you are looking for though.Best,\nTyler",
"username": "Tyler_Kaye"
},
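For anyone reading along in another SDK, the same idea in realm-js (the thread is .NET, but the session API maps directly):

// keep the realm usable for local reads without dialling Atlas
realm.syncSession.pause();
// ...local reads and writes happen offline...
realm.syncSession.resume(); // reconnect and exchange pending changes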
{
"code": "",
"text": "Thanks a lot, that was it!",
"username": "Milen_Milkovski"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can I use Realm and DeviceSync in Flex mode without connecting to Atlas for read operations? | 2023-10-17T23:24:07.037Z | Can I use Realm and DeviceSync in Flex mode without connecting to Atlas for read operations? | 249 |
null | [
"sharding",
"mongodb-shell"
] | [
{
"code": "user@monitor:~$ sudo apt remove --purge mongodb-org-*\nReading package lists... Done\nBuilding dependency tree \nReading state information... Done\nNote, selecting 'mongodb-org-unstable-tools' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-mongos' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-tools' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-database-tools-extra' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-server' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-shell' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-shell' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-mongos' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-database-tools-extra' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-tools-unstable' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-database' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-server' for glob 'mongodb-org-*'\nPackage 'mongodb-org-unstable' is not installed, so not removed\nPackage 'mongodb-org-unstable-database-tools-extra' is not installed, so not removed\nPackage 'mongodb-org-unstable-mongos' is not installed, so not removed\nPackage 'mongodb-org-unstable-server' is not installed, so not removed\nPackage 'mongodb-org-unstable-shell' is not installed, so not removed\nPackage 'mongodb-org-unstable-tools' is not installed, so not removed\nPackage 'mongodb-org-tools-unstable' is not installed, so not removed\nPackage 'mongodb-org-database-tools-extra' is not installed, so not removed\nPackage 'mongodb-org-database' is not installed, so not removed\nPackage 'mongodb-org-shell' is not installed, so not removed\nPackage 'mongodb-org-tools' is not installed, so not removed\nThe following packages will be REMOVED:\n mongodb-org-mongos* mongodb-org-server*\n0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.\nAfter this operation, 311 MB disk space will be freed.\nDo you want to continue? [Y/n] y\n(Reading database ... 141132 files and directories currently installed.)\nRemoving mongodb-org-mongos (7.0.2) ...\nRemoving mongodb-org-server (7.0.2) ...\nProcessing triggers for man-db (2.9.1-1) ...\n(Reading database ... 141115 files and directories currently installed.)\nPurging configuration files for mongodb-org-server (7.0.2) ...\nuser@monitor:~$ sudo apt list --installed | grep mongo\n\nWARNING: apt does not have a stable CLI interface. Use with caution in scripts.\nuser@monitor:~$ cat /etc/apt/sources.list.d/mongodb-org-7.0.list\ndeb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0 multiverse\nuser@monitor:~$ sudo apt remove --purge mongodb-org-*\nReading package lists... Done\nBuilding dependency tree \nReading state information... 
Done\nNote, selecting 'mongodb-org-unstable-tools' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-mongos' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-tools' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-database-tools-extra' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-server' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-shell' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-shell' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable-mongos' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-database-tools-extra' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-unstable' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-tools-unstable' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-database' for glob 'mongodb-org-*'\nNote, selecting 'mongodb-org-server' for glob 'mongodb-org-*'\nPackage 'mongodb-org-unstable' is not installed, so not removed\nPackage 'mongodb-org-unstable-database-tools-extra' is not installed, so not removed\nPackage 'mongodb-org-unstable-mongos' is not installed, so not removed\nPackage 'mongodb-org-unstable-server' is not installed, so not removed\nPackage 'mongodb-org-unstable-shell' is not installed, so not removed\nPackage 'mongodb-org-unstable-tools' is not installed, so not removed\nPackage 'mongodb-org-tools-unstable' is not installed, so not removed\nPackage 'mongodb-org-database-tools-extra' is not installed, so not removed\nPackage 'mongodb-org-database' is not installed, so not removed\nPackage 'mongodb-org-shell' is not installed, so not removed\nPackage 'mongodb-org-tools' is not installed, so not removed\nThe following packages will be REMOVED:\n mongodb-org-mongos* mongodb-org-server*\n0 upgraded, 0 newly installed, 2 to remove and 0 not upgraded.\nAfter this operation, 311 MB disk space will be freed.\nDo you want to continue? [Y/n] y\n(Reading database ... 141132 files and directories currently installed.)\nRemoving mongodb-org-mongos (7.0.2) ...\nRemoving mongodb-org-server (7.0.2) ...\nProcessing triggers for man-db (2.9.1-1) ...\n(Reading database ... 141115 files and directories currently installed.)\nPurging configuration files for mongodb-org-server (7.0.2) ...\nuser@monitor:~$ sudo apt list --installed | grep mongo\n\nWARNING: apt does not have a stable CLI interface. Use with caution in scripts.\nuser@monitor:~$ cat /etc/apt/sources.list.d/mongodb-org-7.0.list\ndeb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg ] https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0 multiverse\nuser@monitor:~$ sudo apt-get install -y mongodb-org\nReading package lists... Done\nBuilding dependency tree \nReading state information... 
Done\nThe following additional packages will be installed:\n mongodb-database-tools mongodb-mongosh mongodb-org-database mongodb-org-database-tools-extra mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools\nThe following NEW packages will be installed:\n mongodb-database-tools mongodb-mongosh mongodb-org mongodb-org-database mongodb-org-database-tools-extra mongodb-org-mongos mongodb-org-server mongodb-org-shell mongodb-org-tools\n0 upgraded, 9 newly installed, 0 to remove and 0 not upgraded.\nNeed to get 57.5 MB/156 MB of archives.\nAfter this operation, 530 MB of additional disk space will be used.\nGet:1 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0/multiverse amd64 mongodb-org-shell amd64 7.0.2 [3080 B]\nGet:2 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0/multiverse amd64 mongodb-org-server amd64 7.0.2 [34.0 MB]\nGet:3 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0/multiverse amd64 mongodb-org-mongos amd64 7.0.2 [23.6 MB]\nGet:4 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0/multiverse amd64 mongodb-org-database-tools-extra amd64 7.0.2 [7720 B]\nGet:5 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0/multiverse amd64 mongodb-org-database amd64 7.0.2 [3536 B]\nGet:6 https://repo.mongodb.org/apt/ubuntu focal/mongodb-org/7.0/multiverse amd64 mongodb-org-tools amd64 7.0.2 [2892 B]\nFetched 57.5 MB in 2s (26.2 MB/s) \nSelecting previously unselected package mongodb-database-tools.\n(Reading database ... 141115 files and directories currently installed.)\nPreparing to unpack .../0-mongodb-database-tools_100.8.0_amd64.deb ...\nUnpacking mongodb-database-tools (100.8.0) ...\nSelecting previously unselected package mongodb-mongosh.\nPreparing to unpack .../1-mongodb-mongosh_2.0.2_amd64.deb ...\nUnpacking mongodb-mongosh (2.0.2) ...\nSelecting previously unselected package mongodb-org-shell.\nPreparing to unpack .../2-mongodb-org-shell_7.0.2_amd64.deb ...\nUnpacking mongodb-org-shell (7.0.2) ...\nSelecting previously unselected package mongodb-org-server.\nPreparing to unpack .../3-mongodb-org-server_7.0.2_amd64.deb ...\nUnpacking mongodb-org-server (7.0.2) ...\nSelecting previously unselected package mongodb-org-mongos.\nPreparing to unpack .../4-mongodb-org-mongos_7.0.2_amd64.deb ...\nUnpacking mongodb-org-mongos (7.0.2) ...\nSelecting previously unselected package mongodb-org-database-tools-extra.\nPreparing to unpack .../5-mongodb-org-database-tools-extra_7.0.2_amd64.deb ...\nUnpacking mongodb-org-database-tools-extra (7.0.2) ...\nSelecting previously unselected package mongodb-org-database.\nPreparing to unpack .../6-mongodb-org-database_7.0.2_amd64.deb ...\nUnpacking mongodb-org-database (7.0.2) ...\nSelecting previously unselected package mongodb-org-tools.\nPreparing to unpack .../7-mongodb-org-tools_7.0.2_amd64.deb ...\nUnpacking mongodb-org-tools (7.0.2) ...\nSelecting previously unselected package mongodb-org.\nPreparing to unpack .../8-mongodb-org_7.0.2_amd64.deb ...\nUnpacking mongodb-org (7.0.2) ...\nSetting up mongodb-mongosh (2.0.2) ...\nSetting up mongodb-org-server (7.0.2) ...\nSetting up mongodb-org-shell (7.0.2) ...\nSetting up mongodb-database-tools (100.8.0) ...\nSetting up mongodb-org-mongos (7.0.2) ...\nSetting up mongodb-org-database-tools-extra (7.0.2) ...\nSetting up mongodb-org-database (7.0.2) ...\nSetting up mongodb-org-tools (7.0.2) ...\nSetting up mongodb-org (7.0.2) ...\nProcessing triggers for man-db (2.9.1-1) ...\nuser@monitor:~$ systemctl start mongod\n==== AUTHENTICATING FOR 
org.freedesktop.systemd1.manage-units ===\nAuthentication is required to start 'mongod.service'.\nAuthenticating as: user\nPassword: \n==== AUTHENTICATION COMPLETE ===\n\nuser@monitor:~$ systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: failed (Result: core-dump) since Tue 2023-10-17 17:06:38 EDT; 2s ago\n Docs: https://docs.mongodb.org/manual\n Process: 104443 ExecStart=/usr/bin/mongod --config /etc/mongod.conf (code=dumped, signal=ILL)\n Main PID: 104443 (code=dumped, signal=ILL)\n\nOct 17 17:06:37 monitor systemd[1]: Started MongoDB Database Server.\nOct 17 17:06:38 monitor systemd[1]: mongod.service: Main process exited, code=dumped, status=4/ILL\nOct 17 17:06:38 monitor systemd[1]: mongod.service: Failed with result 'core-dump'.\nuser@monitor:~$ sudo ls /var/lib/mongodb\nuser@monitor:~$ cat /etc/lsb-release\nDISTRIB_ID=Ubuntu\nDISTRIB_RELEASE=20.04\nDISTRIB_CODENAME=focal\nDISTRIB_DESCRIPTION=\"Ubuntu 20.04.6 LTS\"\n\n",
"text": "I was upgrading from v4 to v7 and now can’t get past ‘core-dump’I started by removing anything mongodb related from the serverThen I check that there’s nothing left installed mongo-relatedI added the new apt list entry:I was upgrading from v4 to v7 and now can’t get past ‘core-dump’I started by removing anything mongodb related from the serverThen I check that there’s nothing left installed mongo-relatedI added the apt list:and proceeded to installDoesn’t seem like there were any problems, but a fresh install won’t start.So I go to look at the logs, but there aren’t any?So what gives?OS info:",
"username": "Jonathan_Salas"
},
{
"code": "signal=ILL",
"text": "This has been discussed at lengthsignal=ILLsee Search results for 'signal=ILL' - MongoDB Developer Community Forums",
"username": "steevej"
},
{
"code": "root@hypervisor:~# cat /proc/cpuinfo\nprocessor : 0\nvendor_id : GenuineIntel\ncpu family : 6\nmodel : 60\nmodel name : Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz\n/proc/cpuinfouser@VM:~$ cat /proc/cpuinfo\nprocessor\t: 0\nvendor_id\t: GenuineIntel\ncpu family\t: 15\nmodel\t\t: 6\nmodel name\t: Common KVM processor\nstepping\t: 1\nmicrocode\t: 0x1\ncpu MHz\t\t: 3392.144\ncache size\t: 16384 KB\nphysical id\t: 0\nsiblings\t: 4\ncore id\t\t: 0\ncpu cores\t: 4\napicid\t\t: 0\ninitial apicid\t: 0\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 13\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx lm constant_tsc nopl xtopology cpuid tsc_known_freq pni cx16 x2apic aes hypervisor lahf_lm cpuid_fault pti\n",
"text": "Thanks @steevejI’m attempting to install in a VM on Proxmox, so I’ll provide CPU info for both.Hypervisor:According to intel spec sheet, I have the AVX2 Instruction Set Extension which as far as I can tell means I’m compatible. I also see AVX2 under “flags” when I check /proc/cpuinfo.When I check CPU info in the VM, I no longer see AVX2 listed under flags.This discussion suggests that changing my CPU type from kvm64 to Host, I can get AVX2 capabilities in the VM. Will report back after some experimentation.",
"username": "Jonathan_Salas"
},
{
"code": "user@VM:~$ cat /proc/cpuinfo\nprocessor\t: 0\nvendor_id\t: GenuineIntel\ncpu family\t: 6\nmodel\t\t: 60\nmodel name\t: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz\nstepping\t: 3\nmicrocode\t: 0x27\ncpu MHz\t\t: 3392.144\ncache size\t: 16384 KB\nphysical id\t: 0\nsiblings\t: 4\ncore id\t\t: 0\ncpu cores\t: 4\napicid\t\t: 0\ninitial apicid\t: 0\nfpu\t\t: yes\nfpu_exception\t: yes\ncpuid level\t: 13\nwp\t\t: yes\nflags\t\t: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat umip md_clear arch_capabilities\nbugs\t\t: cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf mds swapgs srbds mmio_unknown\nbogomips\t: 6784.28\nclflush size\t: 64\ncache_alignment\t: 64\naddress sizes\t: 39 bits physical, 48 bits virtual\npower management:\nuser@VM:~$ sudo systemctl start mongod\n[sudo] password for user: \nuser@VM:~$ sudo systemctl status mongod\n● mongod.service - MongoDB Database Server\n Loaded: loaded (/lib/systemd/system/mongod.service; enabled; vendor preset: enabled)\n Active: active (running) since Tue 2023-10-17 21:10:53 EDT; 2min 39s ago\n Docs: https://docs.mongodb.org/manual\n Main PID: 814 (mongod)\n Memory: 187.7M\n CGroup: /system.slice/mongod.service\n └─814 /usr/bin/mongod --config /etc/mongod.conf\n\nOct 17 21:10:53 monitor systemd[1]: Started MongoDB Database Server.\nOct 17 21:10:54 monitor mongod[814]: {\"t\":{\"$date\":\"2023-10-18T01:10:54.945Z\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":7484500, \"ctx\":\"main\",\"msg\":\"Environment variable MONGODB_CONFIG_OVERRIDE_NOFORK =>\n",
"text": "From this discussion, it seems like I can safely use “host” cpu type as long as I don’t have high availability set up.So I went ahead and did that:\nimage1394×428 29.5 KBNow when I check cpuinfo in the VM, I see AVX2Voila, she works!",
"username": "Jonathan_Salas"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Issues upgrading to Mongodb v7 on Ubuntu 20 - Failed with result 'core-dump' | 2023-10-17T21:24:09.671Z | Issues upgrading to Mongodb v7 on Ubuntu 20 - Failed with result ‘core-dump’ | 273 |
null | [
"aggregation",
"queries",
"crud"
] | [
{
"code": "",
"text": "db.Acc_Detail.updateMany({},[{$set :{date :{$convert:{input :“$mod_typestamp”, to :“date”}}}}])\nMongoServerError: Error parsing date string ‘2017-02-14:13:13:51’; 10: Unexpected character ‘:’Is there any way to format it ?",
"username": "Kingshuk_Modak"
},
{
"code": ":$convert$toDatedb.Acc_Detail.updateMany(\n {},\n [\n {\n $set: {\n date: {\n $dateFromString: {\n dateString: \"$mod_typestamp\",\n format: \"%Y-%m-%d:%H:%M:%S\",\n // timezone: \"America/New_York\" // update to your specific timeozne if you needed\n }\n }\n }\n }\n ]\n)\n",
"text": "Hello @Kingshuk_Modak, Welcome to the MongoDB community forum,The : between date and hour is not valid so $convert or $toDate operators don’t understand the date string,You can use $dateFromString operator, where you can specify your date format,",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal I tried that as well however the date is coming as null. Since the format of the string is somewhat different.mod_typestamp field is given as “2017-02-14:13:11:15” in the below format.Is there any other way to update it in date format in bulk.?",
"username": "Kingshuk_Modak"
},
{
"code": "",
"text": "The code shared by turivisal works really well on a date string field named mod_typestamp and formatted“2017-02-14:13:11:15”If you get null, then may be your field is not named mod_typestamp or is not in the format you shared.",
"username": "steevej"
},
{
"code": "",
"text": "Hi @steevej and @turivishal All sorted. however the date format has some additional spaces which was creating the problem. Thanks a lot for the help.",
"username": "Kingshuk_Modak"
},
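For reference, a hedged sketch of the whitespace fix mentioned above: trimming the string before parsing avoids the null results (same field name assumed):

db.Acc_Detail.updateMany({}, [
  { $set: {
      date: {
        $dateFromString: {
          dateString: { $trim: { input: "$mod_typestamp" } }, // strip stray spaces first
          format: "%Y-%m-%d:%H:%M:%S"
        }
      }
  } }
])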
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Trying to convert string data type to date | 2023-10-17T02:44:30.336Z | Trying to convert string data type to date | 184 |
null | [
"sharding"
] | [
{
"code": "",
"text": "from this link : https://www.mongodb.com/docs/manual/tutorial/sharding-tiered-hardware-for-varying-slas/#scenarioI have some question, from this page setting shard key → { creation_date : 1 } 3 shards (1,2, 3) with 2 zones (“recent”, “archive”)",
"username": "Suthiphong_Thaisuriya"
},
{
"code": "",
"text": "I don’t think using zones for sharding make any big difference regarding how the data is distributed. Basically Mongodb servers should try best to evenly distribute customer data between all available shards automatically. We, as clients/human/engineers, are not supposed to worry about it.",
"username": "Kobe_W"
}
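For reference, zone ranges do constrain placement: a chunk whose shard-key range falls inside a zone can only live on shards tagged with that zone. A minimal sketch for the tiered-SLA scenario from the linked tutorial (shard names and the cutoff date below are assumptions):

sh.addShardToZone("shard1", "recent");
sh.addShardToZone("shard2", "recent");
sh.addShardToZone("shard3", "archive");
// documents newer than the cutoff can only land on "recent" shards
sh.updateZoneKeyRange("mydb.mycoll",
  { creation_date: ISODate("2023-01-01") }, { creation_date: MaxKey }, "recent");
// everything older is pinned to the "archive" shard
sh.updateZoneKeyRange("mydb.mycoll",
  { creation_date: MinKey }, { creation_date: ISODate("2023-01-01") }, "archive");

Within each zone, the balancer still distributes chunks evenly across that zone's shards.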
] | Shard Zones with Tiered Hardware | 2023-10-17T07:46:51.152Z | Shard Zones with Tiered Hardware | 169 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "Hi MongoDB Community,In regard to mongosh telemetry, documentation and forums only mention 2 cmds - disableTelemetry() and enableTelemetry(). Can someone kindly help with following info which I was unable to locate anywhere:2.1. Once enabled/disabled, are we supposed to do that again when we re-login in mongosh ?\n2.2. If I have disabled it, what’s the state when my team member logs-in in same database via different machine ? (is it disabled for him as well? or does he need to disable ?)\n2.3. If I have disabled it, and when I login again next day. Does it remain disabled ?\n2.4. If I have disabled it, when I restart mongod service, then what is the status of telemetry ? (is it enabled again or will it remain disabled ?)",
"username": "Pratik_Mehta"
},
{
"code": "config.get('enableTelemetry')enableTelemetry()disableTelemetry()~/.mongodb/mongosh/configenableTelemetryenableTelemetry()disableTelemetry()",
"text": "Hi Pratik,Thanks for your questions, my answers in-line below:Wanted to clarify what you meant by checking the status?If you were looking to see if telemetry was enabled, you have two different options:2.1. Once enabled/disabled, are we supposed to do that again when we re-login in mongosh ?Once you invoke the enableTelemetry() or disableTelemetry() functions, the config file above will store your preferences locally so you don’t have to toggle it on/off each time you re-login.2.2. If I have disabled it, what’s the state when my team member logs-in in same database via different machine ? (is it disabled for him as well? or does he need to disable ?)As mentioned above, the setting is local so it will not affect your teammates using mongosh in a different machine. Their preferences will be stored locally in their machine.2.3. If I have disabled it, and when I login again next day. Does it remain disabled ?Since the setting persists in the config file, if you have disabled it, it will remain disabled unless you explicitly toggle it on again.2.4. If I have disabled it, when I restart mongod service, then what is the status of telemetry ? (is it enabled again or will it remain disabled ?)Restarting mongod will not affect this setting in mongosh since they are two different binaries/applications.Where are you seeing a telemetry message? Is it in a log file somewhere?Finally, mongosh also supports the global config file for disabling telemetry consistently on a given machine. You can read this doc on this here: https://www.mongodb.com/docs/mongodb-shell/reference/configure-shell-settings-global/#configuration-file-location",
"username": "Gaurab_Aryal"
},
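A quick mongosh transcript illustrating the persistence described above (the value is read from the local config file, so it survives re-login and mongod restarts):

config.get("enableTelemetry")   // current persisted value: true or false
disableTelemetry()              // persists enableTelemetry: false locally
config.get("enableTelemetry")   // false, and it stays false in new sessions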
{
"code": "",
"text": "disableTelemetry()Thank you so much Gaurab for all these quick answers \nMost of my queries resolved. In case of further queries (if any), I will ask you.I will keep this thread open for another 3 days.",
"username": "Pratik_Mehta"
}
] | How to check status of mongosh telemetry | 2023-10-17T12:31:18.188Z | How to check status of mongosh telemetry | 194 |
null | [
"atlas-search"
] | [
{
"code": " $search: {\n index: 'default',\n compound: {\n must: [\n {\n text: {\n query: 'car',\n path: [\n 'translations.he-IL.displayName',\n 'translations.en-US.displayName',\n 'translations.he-IL.synonyms',\n 'translations.en-US.synonyms',\n ],\n fuzzy: { maxEdits: 1, maxExpansions: 4 },\n },\n },\n ],\n mustNot: [\n {\n equals: {\n value: true,\n path: 'isDeleted',\n },\n },\n ],\n },\n },\n{\n \"_id\" : ObjectId(\"64f852cfaf24690f71daadb5\"),\n \"operationalName\" : \"Agar E-406\",\n \"translations\" : {\n \"en-US\" : {\n \"displayName\" : \"Agar - E-406\",\n \"description\" : \"(thickener) (gelling agent) \"\n },\n \"he-IL\" : {\n \"displayName\" : \"אגר - E-406\",\n \"description\" : \"(thickener) (gelling agent) \"\n }\n }\n}\n{\n \"_id\" : ObjectId(\"64f852cfaf24690f71dab683\"),\n \"operationalName\" : \"Black ear fungus\",\n \"translations\" : {\n \"en-US\" : {\n \"displayName\" : \"Black ear fungus\"\n },\n \"he-IL\" : {\n \"displayName\" : \"פטריות אוזן שחורות\",\n \"synonyms\" : \"פטריות פונגוס שחורות\"\n }\n }\n}\n{\n \"_id\" : ObjectId(\"64f852cfaf24690f71dab25f\"),\n \"operationalName\" : \"Nitrogen (packaging gas) - E941\",\n \"translations\" : {\n \"en-US\" : {\n \"displayName\" : \"Nitrogen (packaging gas) - E941\",\n \"description\" : \"propellant\"\n },\n \"he-IL\" : {\n \"displayName\" : \"חנקן - E941\",\n \"description\" : \"propellant\"\n }\n }\n}\n{\n \"_id\" : ObjectId(\"64f852cfaf24690f71dab5df\"),\n \"operationalName\" : \"בר שוקולד טבעוני כשר לפסח\",\n \"translations\" : {\n \"en-US\" : {\n \"displayName\" : \"Vegan chocolate bar kosher for Passover\"\n },\n \"he-IL\" : {\n \"displayName\" : \"בר שוקולד טבעוני כשר לפסח\"\n }\n }\n}\n{\n \"_id\" : ObjectId(\"64f852cfaf24690f71dab12f\"),\n \"operationalName\" : \"Invert Sugar\",\n \"translations\" : {\n \"en-US\" : {\n \"displayName\" : \"Invert Sugar\",\n \"synonyms\" : \"invert syrup, invert sugar, simple syrup, sugar syrup, sugar water, bar syrup, sucrose inversion\",\n \"description\" : \" a syrup mixture of the monosaccharides glucose and fructose, It is sweeter than table sugar,[2] and foods that contain invert sugar retain moisture better and crystallize less easily than do those that use table sugar instead\"\n },\n \"he-IL\" : {\n \"displayName\" : \"סוכר אינוורטי\",\n \"synonyms\" : \"סירופ סוכר\",\n \"description\" : \" a syrup mixture of the monosaccharides glucose and fructose, It is sweeter than table sugar,[2] and foods that contain invert sugar retain moisture better and crystallize less easily than do those that use table sugar instead\"\n }\n }\n}\n",
"text": "I am using the following text query:When searchTerm is an input.\nIt returns the following documents:I fail to understand why 'carrot ’ is not returned as well (it returns only if I write the full word)",
"username": "7b55b5f9a91383655fec26662aab12c"
},
{
"code": "fuzzywildcardcompound.shouldcar*",
"text": "fuzzy isn’t that fuzzy (was he?). “car” to “carrot” is quite a number of edits, by the text.fuzzy calculation.One technique is to augment your query with some additional clauses that make “car” account for “carrot” as well, such as adding a wildcard clause (under compound.should) for car*.",
"username": "Erik_Hatcher"
},
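A sketch of the suggestion above. Note that a should clause is optional whenever a must clause is present, so for ‘carrot’ to actually come back, one working arrangement moves both clauses under should with minimumShouldMatch: 1 (paths shortened for brevity):

{
  $search: {
    index: "default",
    compound: {
      should: [
        { text: { query: "car", path: "translations.en-US.displayName",
                  fuzzy: { maxEdits: 1, maxExpansions: 4 } } },
        { wildcard: { query: "car*", path: "translations.en-US.displayName",
                      allowAnalyzedField: true } } // required on analyzed fields
      ],
      minimumShouldMatch: 1
    }
  }
}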
{
"code": "fuzzy",
"text": "fuzzy isn’t that fuzzy (was he?Haven’t thought of that one since … kindergarten! ",
"username": "Jack_Woehr"
}
] | Fuzzy search doesn't return the expected result | 2023-10-17T09:11:13.431Z | Fuzzy search doesn’t return the expected result | 181 |
[
"queries",
"indexes"
] | [
{
"code": "{\n \"queryPlanner\": {\n \"plannerVersion\": 1,\n \"namespace\": \"someDb.someCollection\",\n \"indexFilterSet\": false,\n \"parsedQuery\": {\n \"someId\": {\n \"$in\": [\n ]\n }\n },\n \"queryHash\": \"937709B1\",\n \"planCacheKey\": \"8A586950\",\n \"winningPlan\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"SORT\",\n \"sortPattern\": {\n \"_id\": -1\n },\n \"memLimit\": 104857600,\n \"type\": \"default\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"someId\": 1,\n \"_id\": -1\n },\n \"indexName\": \"someId_1__id_-1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"someId\": [],\n \"_id\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"someId\": [\n \n ],\n \"_id\": [\n \"[MaxKey, MinKey]\"\n ]\n }\n }\n }\n },\n \"rejectedPlans\": [\n {\n \"stage\": \"SORT\",\n \"sortPattern\": {\n \"_id\": -1\n },\n \"memLimit\": 104857600,\n \"type\": \"simple\",\n \"inputStage\": {\n \"stage\": \"FETCH\",\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"someId\": 1\n },\n \"indexName\": \"someId_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"someId\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"someId\": [\n \n ]\n }\n }\n }\n },\n {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"someId\": {\n \"$in\": [\n \n ]\n }\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"_id\": 1\n },\n \"indexName\": \"_id_\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"_id\": []\n },\n \"isUnique\": true,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"backward\",\n \"indexBounds\": {\n \"_id\": [\n \"[MaxKey, MinKey]\"\n ]\n }\n }\n },\n {\n \"stage\": \"FETCH\",\n \"filter\": {\n \"someId\": {\n \"$in\": [\n \n ]\n }\n },\n \"inputStage\": {\n \"stage\": \"IXSCAN\",\n \"keyPattern\": {\n \"_id\": -1,\n \"someId\": 1\n },\n \"indexName\": \"_id_-1_someId_1\",\n \"isMultiKey\": false,\n \"multiKeyPaths\": {\n \"_id\": [],\n \"someId\": []\n },\n \"isUnique\": false,\n \"isSparse\": false,\n \"isPartial\": false,\n \"indexVersion\": 2,\n \"direction\": \"forward\",\n \"indexBounds\": {\n \"_id\": [\n \"[MaxKey, MinKey]\"\n ],\n \"someId\": [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n ]\n },\n \"serverInfo\": {\n \"host\": \"some.mongodb.net\",\n \"port\": 27017,\n \"version\": \"4.4.25\",\n \"gitVersion\": \"\"\n },\n \"ok\": 1,\n \"$clusterTime\": {\n \"clusterTime\": \"10/9/2023 20:37:07.000 (#1)\",\n \"signature\": {\n \"hash\": \"BinData(0,\\\"//VVuk=\\\")\",\n \"keyId\": \"someIddd\"\n }\n },\n \"operationTime\": \"10/9/2023 20:37:07.000 (#1)\"\n}\n",
"text": "Hi,Data in someCollection : 100000 records\nsomeCollection size : 7GB\nUsing atlas : M30 (8 GB RAM, 52 GB Storage) 3,000 IOPS, Encrypted, Auto-expand StorageI am running a find query like following\ndb.someCollection.find({someId :{$in: [ <260 objectIds> ] }}).sort({_id:-1}).Index applied : {someId:1,_id:-1}Atlas is hosted in ca-central. I making the query from India. The query take more than 5 mins to run even though the index is applied. What is the possible issue ? Is the filter too large for the data ? Doesn’t mongo support large filter value? What are the possible solutions?Primary node query execution plan:\nimage1050×595 26.8 KB",
"username": "Kamaldeep_Kaur"
},
{
"code": "378$in$in",
"text": "Hey @Kamaldeep_Kaur,Welcome to the MongoDB Community!The query take more than 5 mins to run even though the index is appliedBased on the shared image, it appears that the query execution time is around 378 ms. Please let me know if I’m missing something.Is the filter too large for the data ?There is no hard-coded limit on the number of values in $in. But in general, very large $in lists with hundreds or thousands of items will negatively impact query performance.However, if you need further assistance, please feel free to share a sample document (remember to redact any sensitive information) so that we can try to replicate it in our own environment.Best regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "3781. {\n \"_id\": \"61ffffffffffffffffffffff\",\n \"isSomesome\": false,\n \"issometed\": false,\n \"someeeeeeeeBy\": \"Ksome Rrrr\",\n \"isSomeeee\": true,\n \"isSssssssssst\": false,\n \"dummyyyyId\": \"600000000000000000000000\",\n \"someeeId\": \"5ddddddddddddddddddddddd\",\n \"someId\": \"5drrrrrrrrrrrrrrrrrrrrrr\",\n \"asssssssssnkId\": \"5ffffffffffffffffffffff\",\n \"nameOfSigner\": \"Ksome Rrrr\",\n \"signerRole\": \"\",\n \"userAgent\": \"Hhhhhhhhh/3 CFfffffff/1111.0.1 Ssssin/22.6.0\",\n \"ipAddress\": \"111.111.11.111\",\n \"somedddId\": \"6555555555555555556666\",\n \"created\": \"2022-02-04T19:51:47.000Z\",\n \"updated\": \"2022-02-04T19:52:03.712Z\",\n \"noteAssessment\": {\n \"notes\": \"Forms\",\n \"isClearanceNote\": false,\n \"isPhysicalNote\": false,\n \"images\": [],// 6 images size 3.6MB\n \"date\": \"2022-02-04T19:49:00.000Z\",\n \"isNeuropsychNote\": false,\n \"injuryType\": \"\",\n \"attachments\": [\n \"613333333333333333333333/attachment_0.png\",\n \"61tttttttttttttttttttttt/attachment_1.png\",\n \"61fdyyyyyy55555555555555/attachment_2.png\",\n \"61fdwwwwwwwwwwww33333333/attachment_3.png\",\n \"61f666666666666666666666/attachment_4.png\"\n ]\n }\n }\n\n2. \n{\n \"_id\": \"xx\",\n \"isDummyyy\": false,\n \"someeeeeeeeBy\": \"Ttt Ccccccc\",\n \"isSomeeee\": false,\n \"dummyyyyId\": \"666666666666666666666666\", //Objectid\n \"someeeId\": \"62222222222222222222222\",//Objectid\n \"someId\": \"61111111111111111111111\",//Objectid\n \"someeeeSignature\": \"encoded\",\n \"nameOfSigner\": \"Ttt Ccccccc\",\n \"signerRole\": \"Hhhh Ttttttttt/Ttttttt\",\n \"someeeeeeeType\": \"sssssSsssssss\",\n \"someersion\": \"7.2.0\",\n \"userAgent\": \"Hhhhhhhhh/3 CFfffffff/1111.0.1 Ssssin/22.6.0\",\n \"ipAddress\": \"111.111.11.111\",\n \"somedddId\": \"64444444444666666666666\", //objectId\n \"created\": \"2023-08-18T16:56:36.030Z\",\n \"updated\": \"2023-08-18T17:06:43.524Z\",\n \"basomeeYest\": {\n \"sinssssssssssssFfffDssssse\": 1,\n \"doSSSSSeeeeeeeeeeeeeeeeeeeee\": 1,\n \"tasomeStsomeOnFirmSurface\": 1,\n \"wasPerformed\": true,\n \"nonDominantFoot\": \"left\",\n \"testingSurface\": \"hardFloor\",\n \"footwear\": \"shoes\"\n },\n \"cognitiveAssessment\": {\n \"whatMonth\": true,\n \"whatDate\": true,\n \"whatDay\": true,\n \"whatYear\": true,\n \"whatTime\": true,\n \"trialFirst\": 10,\n \"trialSecond\": 10,\n \"trialThird\": 10,\n \"numbersFirst\": true,\n \"numbersSecond\": true,\n \"numbersThird\": true,\n \"numbersFourth\": true,\n \"valueMemoryReverseMonthTime\": 7.292173981666565,\n \"valueMemorySecondFirst\": 1,\n \"valueMemorySecondSecond\": 1,\n \"valueMemorySecondThird\": 1,\n \"valueMemorySecondFourth\": 1,\n \"valueMemorySecondFive\": 1,\n \"valueMemorySecondSix\": 1,\n \"valueMemorySecondSeven\": 1,\n \"valueMemorySecondEight\": 1,\n \"valueMemorySecondNine\": 1,\n \"valueMemorySecondTen\": 1,\n \"digits\": \"4-9-3 6-2-9, 3-8-1-4 3-2-7-9, 6-2-9-7-1 1-5-2-8-6, 7-1-8-4-6-2 5-3-9-1-4-8\",\n \"wordNames\": \"Baby, Monkey, Perfume, Sunset, Iron, Elbow, Apple, Carpet, Saddle, Bubble\",\n \"firstTrialSelectedValues\": \"1,1,1,1,1,1,1,1,1,1\",\n \"secondTrialSelectedValues\": \"1,1,1,1,1,1,1,1,1,1\",\n \"thirdTrialSelectedValues\": \"1,1,1,1,1,1,1,1,1,1\",\n \"immediateMemoryCompletedTime\": \"2023-08-18T16:58:25.189Z\",\n \"delayedRecallStartedTime\": \"2023-08-18T17:06:29.344Z\",\n \"wasPerformed\": true,\n \"immediateMemoryTrialsAudio\": [\n \"trialFirst\",\n \"trialThird\"\n ],\n \"digitsAudio\": [\n \"4-9-3 6-2-9\",\n \"3-8-1-4 3-2-7-9\",\n \"6-2-9-7-1 
1-5-2-8-6\",\n \"7-1-8-4-6-2 5-3-9-1-4-8\"\n ],\n \"language\": \"eng\",\n \"valueReverseMonthsCorrect\": \"\"\n },\n \"symptomEvaluation\": {\n \"headache\": 0,\n \"pressureInHead\": 0,\n \"neckPain\": 0,\n \"nauseaOrVomiting\": 0,\n \"dizziness\": 0,\n \"blurredVision\": 0,\n \"balanceProblems\": 0,\n \"sensivityToLight\": 0,\n \"sensivityToNoice\": 0,\n \"feelingSlowedDown\": 0,\n \"feelingLikeInAFog\": 0,\n \"dontFeelRight\": 0,\n \"difficultyConcentraiting\": 0,\n \"difficultyRemembering\": 0,\n \"fatigueOrLowEnergy\": 0,\n \"confusion\": 0,\n \"drawsiness\": 0,\n \"moreEmotional\": 0,\n \"irritability\": 1,\n \"sadness\": 0,\n \"nervousOrAnxious\": 0,\n \"worseActivity\": 0,\n \"worseMentalActivity\": 0,\n \"feelingNotNormal\": 97,\n \"wasPerformed\": true,\n \"assssssSignature\": \"encoded\",\n \"language\": \"english\"\n },\n \"someIId\": \"64dddddddddddddddddddddd\", //objectid\n \"tandemGait\": {\n \"trial1\": 6.467267036437988,\n \"trial2\": 6.117172002792358,\n \"trial3\": 6.384270071983337,\n \"wasPerformed\": true,\n \"ableToPerformTandemGait\": true,\n \"ableToPerformTandemGaitNotes\": \"Sprained ankle \"\n },\n \"dualTaskGait\": {\n \"firstTrialTime\": 23.499022006988525,\n \"practiceTrialValues\": [\n {\n \"expectedAnswer\": 93,\n \"givenAnswer\": 93\n },\n {\n \"expectedAnswer\": 86,\n \"givenAnswer\": 86\n },\n {\n \"expectedAnswer\": 79,\n \"givenAnswer\": 79\n },\n {\n \"expectedAnswer\": 72,\n \"givenAnswer\": 72\n },\n {\n \"expectedAnswer\": 65,\n \"givenAnswer\": 65\n },\n {\n \"expectedAnswer\": 58,\n \"givenAnswer\": 58\n },\n {\n \"expectedAnswer\": 51,\n \"givenAnswer\": 51\n },\n {\n \"expectedAnswer\": 44,\n \"givenAnswer\": 44\n },\n {\n \"expectedAnswer\": 37\n }\n ],\n \"secondTrialValues\": [\n {\n \"expectedAnswer\": 90,\n \"givenAnswer\": 83\n },\n {\n \"expectedAnswer\": 76,\n \"givenAnswer\": 76\n },\n {\n \"expectedAnswer\": 69,\n \"givenAnswer\": 69\n },\n {\n \"expectedAnswer\": 62,\n \"givenAnswer\": 62\n },\n {\n \"expectedAnswer\": 55,\n \"givenAnswer\": 55\n },\n {\n \"expectedAnswer\": 48,\n \"givenAnswer\": 48\n },\n {\n \"expectedAnswer\": 41,\n \"givenAnswer\": 41\n },\n {\n \"expectedAnswer\": 34\n }\n ],\n \"wasPerformed\": true,\n \"secondTrialStartingValue\": 90,\n \"thirdTrialValues\": [\n {\n \"expectedAnswer\": 98,\n \"givenAnswer\": 98\n },\n {\n \"expectedAnswer\": 91,\n \"givenAnswer\": 92\n },\n {\n \"expectedAnswer\": 85,\n \"givenAnswer\": 85\n },\n {\n \"expectedAnswer\": 78,\n \"givenAnswer\": 78\n },\n {\n \"expectedAnswer\": 71,\n \"givenAnswer\": 71\n },\n {\n \"expectedAnswer\": 64,\n \"givenAnswer\": 64\n },\n {\n \"expectedAnswer\": 57,\n \"givenAnswer\": 57\n },\n {\n \"expectedAnswer\": 50\n }\n ],\n \"secondTrialTime\": 20.729416012763977,\n \"thirdTrialStartingValue\": 98,\n \"firstTrialStartingValue\": 88,\n \"thirdTrialTime\": 17.931267976760864,\n \"practiceTime\": 18.905928015708923,\n \"firstTrialValues\": [\n {\n \"expectedAnswer\": 88,\n \"givenAnswer\": 88\n },\n {\n \"expectedAnswer\": 81,\n \"givenAnswer\": 81\n },\n {\n \"expectedAnswer\": 74,\n \"givenAnswer\": 75\n },\n {\n \"expectedAnswer\": 68,\n \"givenAnswer\": 68\n },\n {\n \"expectedAnswer\": 61,\n \"givenAnswer\": 61\n },\n {\n \"expectedAnswer\": 54,\n \"givenAnswer\": 52\n },\n {\n \"expectedAnswer\": 45,\n \"givenAnswer\": 45\n },\n {\n \"expectedAnswer\": 38\n }\n ],\n \"subtractValue\": 7\n }\n}\n",
"text": "“Based on the shared image, it appears that the query execution time is around 378 ms. Please let me know if I’m missing something.” - If I run db.someCollection.find({someId :{$in: [ <260 objectIds> ] }}).sort({_id:-1}).explain(“executionStats”) it shows 378 ms, but if run db.someCollection.find({someId :{$in: [ <260 objectIds> ] }}).sort({_id:-1}) it sometimes doesn’t even returns a result, if I check the profiler the docs returned are shown as null and time taken is around 15 min or so. Majority of cases nothing is returned further hampering our prod env.Requested sample docs:\n(Please note that some docs contain images and the size of an image is around 3.6MB each. There around 6 images , and also the signatures are encoded. Due to images in documents the size of doc is 15MB for around 9022 documents in the collection, otherwise the docs without the images have size around 90KB )",
"username": "Kamaldeep_Kaur"
},
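One way to narrow this down is to compare the plan the optimizer picks on its own with a forced candidate index. A minimal mongosh sketch, assuming a compound index on { someId: 1, _id: -1 } exists (collection and field names follow the thread; the ObjectId list is elided):

```javascript
// Compare the planner's own choice with a forced candidate index for the
// $in + sort query from the thread. `ids` stands in for the ~260 ObjectIds.
const ids = [ /* ...260 ObjectIds... */ ];

// What the planner picks on its own:
db.someCollection.find({ someId: { $in: ids } })
  .sort({ _id: -1 })
  .explain("executionStats");

// Force the hypothetical compound index to see whether it does better:
db.someCollection.find({ someId: { $in: ids } })
  .sort({ _id: -1 })
  .hint({ someId: 1, _id: -1 })
  .explain("executionStats");
```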
{
"code": "",
"text": "@Kushagra_Kesav There’s another case I would like to discuss for the same collection.\nI have a query like following\ndb.someCollection.find({someKey: {$exists: false},someId:{$in:[<260 objectids>] }}).sort({_id:1})If I apply index on > {someKey:1, someId:1, _id:1} and run the above query with 180 objectids the index is applied. But if I run the above query as is on 260 objectids, the it does the IXSCAN but on _id. Not sure what is the issue ?\nSeems like fields with $exists : false do not accept indexes\nThe other finding is that if I only apply index on { someId:1, _id:1} the query still uses IXSCAN on _id whereas i believe it should use IXSCAN on { someId:1, _id:1}.\nPlease help",
"username": "Kamaldeep_Kaur"
}
] | Indexes performing poorly on $in | 2023-10-09T15:05:06.689Z | Indexes performing poorly on $in | 291 |
|
[] | [
{
"code": "import {MongoClient, ServerApiVersion } from 'mongodb';./node_modules/mongodb/lib/cmap/auth/gssapi.js (4:0)\n\nModule not found: Can't resolve 'dns'\n\nhttps://nextjs.org/docs/messages/module-not-found\n\nImport trace for requested module:\n./node_modules/mongodb/lib/index.js\n\n./utils/dbConnect.js\n\n./components/Admin/Table.jsx\n\n./app/admin/page.jsx\nimport {MongoClient, ServerApiVersion } from 'mongodb';\n\nlet isConnected = false;\n\n// const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING, {\n// serverApi: {\n// version: ServerApiVersion.v1,\n// strict: true,\n// deprecationErrors: true,\n// }\n// }\n// );\n\nexport default async function dbConnect(){\n console.log(\"Connecting to db!\");\n \n // if(isConnected)\n // {\n // console.log(\"MongoDB is already connected\");\n // return;\n // }\n // try {\n // await client.connect(); \n // await client.db(process.env.MONGODB_DATABASE_NAME).command({ ping: 1 });\n // console.log(\"😃 dbConnect executed.......\");\n // isConnected = true; \n // } \n // catch (error) {\n // console.log(\"ERROR: \" + error);\n // }\n // finally {\n // await client.close();\n // } \n}\n//load teachers into the table.\nimport dbConnect from \"../../utils/dbConnect\";\n// \"use client\";\n\n\nasync function getStudents(){\n const res = await fetch(\"https://jsonplaceholder.typicode.com/posts\");\n return res.json();\n\n}\n\nexport default async function Table()\n//const Table = ()=>\n{\n //Call api here.\n const dbConnectfunc = await dbConnect();\n //console.log(\"hello this is the table component.\");\n //console.log(dbConnectfunc); \n //const data = await getStudents();\n //console.log(data);\n return(\n <div >\n <table className=\"table \">\n <thead>\n <tr>\n <th scope=\"col\">#</th>\n <th scope=\"col\">Date</th>\n <th scope=\"col\"> First Name</th>\n <th scope=\"col\"> Last Name</th>\n <th scope = \"col\"> Email </th>\n <th scope = \"col\"> Classes</th>\n <th scope = \"col\"> Students </th>\n <th scope = \"col\"> Remove</th>\n </tr>\n </thead>\n <tbody>\n <tr>\n <th scope=\"row\">1</th>\n <td>5/17/22</td>\n <td> Bob </td>\n <td>Johnson</td>\n <td> [email protected] </td>\n <td> 4</td>\n <td> 120 </td>\n <td> Remove</td>\n </tr>\n </tbody>\n </table>\n </div> \n ) \n}\n{\n \"name\": \"wdn\",\n \"version\": \"0.1.0\",\n \"private\": true,\n \"scripts\": {\n \"dev\": \"next dev\",\n \"build\": \"next build\",\n \"start\": \"next start\",\n \"lint\": \"next lint\"\n },\n \"dependencies\": {\n \"axios\": \"^1.4.0\",\n \"bootstrap\": \"^5.2.3\",\n \"bootstrap-icons\": \"^1.10.5\",\n \"eslint\": \"8.41.0\",\n \"eslint-config-next\": \"13.4.3\",\n \"i\": \"^0.3.7\",\n \"jquery\": \"^3.7.0\",\n \"mongodb\": \"^5.5.0\",\n \"next\": \"13.4.3\",\n \"popper.js\": \"^1.16.1\",\n \"react\": \"18.2.0\",\n \"react-bootstrap\": \"^2.8.0\",\n \"react-dom\": \"18.2.0\"\n },\n \"devDependencies\": {\n \"tailwindcss\": \"^3.3.2\"\n }\n}\n\n",
"text": "I’m using nextjs 13.4 and I’m trying to access the mong0 database. I keep getting an error (see images). I’ve narrowed the problem down to the the import statement in my dbConnect…js file:\nimport {MongoClient, ServerApiVersion } from 'mongodb';\nWhen this is commented out everything works fine. Any ideas on what this could be?Error:Files:\ndbConnectTable.jsxpackage.json\n5253×651 13.6 KB\n",
"username": "david_hollaway"
},
{
"code": "Module not found: Can't resolve 'dns'\nhttps://nextjs.org/docs/messages/module-not-found\nImport trace for requested module:\n./node_modules/mongodb/lib/index.js\nimport {MongoClient, ServerApiVersion } from 'mongodb';\n\nlet isConnected = false;\n\n// const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING, {\n// serverApi: {\n// version: ServerApiVersion.v1,\n// strict: true,\n// deprecationErrors: true,\n// }\n// }\n// );\ndbConnect",
"text": "Hi @david_hollaway,Welcome to the MongoDB Community!I suspect the error you encountered is due to attempting to execute server-side code (a MongoDB query) in client-side code. However, the cause may not be immediately apparent because Next.js allows you to call MongoDB from your components.May I ask why you have commented on the code in the dbConnect file and how are you using the imported function in the code?Look forward to hearing from you.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
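For illustration, a minimal sketch of the server-only pattern described above — the mongodb import lives in a Next.js Route Handler, so it is never bundled for the browser (the file path, environment variables, and collection name are hypothetical):

```javascript
// app/api/students/route.js — hypothetical Next.js 13 Route Handler.
// Because this file only runs on the server, webpack never tries to
// resolve Node-only modules such as 'dns' for the client bundle.
import { MongoClient } from "mongodb";
import { NextResponse } from "next/server";

const client = new MongoClient(process.env.MONGODB_CONNECTION_STRING);

export async function GET() {
  const db = client.db(process.env.MONGODB_DATABASE_NAME);
  const students = await db.collection("students").find().limit(20).toArray();
  return NextResponse.json(students);
}
```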
{
"code": "",
"text": "Hello,Thank you, I was just trouble shooting.I’m actually going to make a new post as I am using a different approach (but I still have an issue) but I can see where I can delete this post.A new post is more appropriate and will not convolute my explanation of the problem.",
"username": "david_hollaway"
},
{
"code": "",
"text": "Hi @david_hollaway,Feel free to share the link to the new post, along with all the details, to better assist you!Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Here’s the link:\nhttps://www.mongodb.com/community/forums/t/error-syntaxerror-unexpected-token-t-in-json-at-position-0/235831?u=david_hollaway",
"username": "david_hollaway"
},
{
"code": "",
"text": "Thank you so much this was my issue much appreciated.",
"username": "Emprendomex_Marketing"
}
] | Nextjs Error in accessing database: Module not found: Can't resolve 'dns' | 2023-07-14T20:35:00.832Z | Nextjs Error in accessing database: Module not found: Can’t resolve ‘dns’ | 1,789 |
|
null | [
"connector-for-bi"
] | [
{
"code": "",
"text": "Hey All,\nWe are trying to get data from mongo DB to Power BI and tried both options (Power BI Connector and ODBC) however getting the following error.DataSource.Error: ODBC: ERROR [HY000] [MongoDB][Core] Trying to execute query failed with error: Kind: Command failed: Error code 168 (InvalidPipelineOperator): Unrecognized expression ‘$unsetField’, correlationID = 178ea7cc3aa19fdeb0e0bb28, labels: {}\nDetails:\nDataSourceKind=Odbc\n…\nCould anyone help me in this situation and let us know how to resolve this?Thanks",
"username": "Shuja"
},
{
"code": "",
"text": "This is likely a result of your underlying Atlas DB version being less than version 5.0. The nature of the error is that in the translation the $unsetField aggregation is being called, but this aggregation isn’t present in versions less than 5.0. So you may have been able to query using Atlas SQL on this same database in the past, but this particular query requires the $unsetField aggregate within the translation.\nHope this helps!\nScreenshot 2023-10-17 at 1.49.38 PM993×247 17.8 KB",
"username": "Alexi_Antonino"
}
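A quick way to verify this diagnosis is to check the cluster's server version from mongosh before retrying the connector — a trivial sketch:

```javascript
// $unsetField, used internally by the Atlas SQL translation, requires
// MongoDB 5.0+; any lower version would explain the error above.
db.version();                              // e.g. "4.4.25"
db.runCommand({ buildInfo: 1 }).version;   // same information via a command
```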
] | Power BI Mongo DB data loading error | 2023-10-17T15:51:05.593Z | Power BI Mongo DB data loading error | 187 |
null | [
"app-services-user-auth"
] | [
{
"code": "47492425value_duplicate_nameaccount_name_in_use = 49value_already_exists",
"text": "I’m logging in users through Email/Password authentication. I’m reading error codes to handle specific situations, like (1) when email confirmation is required after creating an account and (2) when user tries to create an account with an existing email address.Previously:After updating to a more recent version of realm-core and realm-cocoa:Is there a documentation for this change?In this realm-core file, error code 25 is called value_duplicate_name, so at least I can see some logic for the (2) error, though error code account_name_in_use = 49 still exists. But error code 24 is linked to value_already_exists, which makes no sense for the (1) “confirmation needed” error.Is there a logic behind this change? Handling realm errors when it’s so poorly documented is already a challenge, if you start changing the codes between versions it’s not maintainable at all anymore…",
"username": "Jean-Baptiste_Beau"
},
{
"code": "function_not_found = 26invalid_email_password = 50 ",
"text": "Ok so now I’m sure this is a bug. The code for (3) invalid credentials —like user entering a wrong password— is now function_not_found = 26 instead of invalid_email_password = 50 . This makes no sense at all.",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "I’m using Realm on iOS through SPM:Realm v10.32.3\nRealmDatabase v12.11.0",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "Also, “user not found” changed from 45 to 4…Any chance the doc specifies the error codes, and we can actually rely on them?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "Surprise, the codes changed again! The “confirmation required” error changed from 47 to 24 to 5.It seems that every SDK update is gonna be a gamble that may break vital components of your app. Good luck.",
"username": "Jean-Baptiste_Beau"
},
{
"code": "RLMError.h",
"text": "@Jean-Baptiste_Beau - sorry for the inconvenience with the error codes changing across some Swift SDK releases. There was a project merged into Realm Core 13.18.0 (Swift SDK 10.42.2) that consolidated/updated a lot of the error codes that will hopefully help to make the numeric values more stable.Of note, there are error code definitions in RLMError.h that you can use instead of the numeric values to potentially help mitigate the “ever-changing” error codes ",
"username": "Michael_Wilkerson-Barker"
},
{
"code": "",
"text": "Thanks @Michael_Wilkerson-Barker for your answer. I’ve answered on this issue. I think there is some issue with some constants. I will close this topic to avoid duplicates.",
"username": "Jean-Baptiste_Beau"
}
] | Did Atlas App Service error codes change recently? | 2022-12-19T16:20:35.051Z | Did Atlas App Service error codes change recently? | 2,189 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi guys,\nMy company have a cluster tier M20 on Atlas. We receive a lot of warning “Scanned Object / returned has gone over 1000”. In the past I proceed to create (a lot of) indexes to optimize our query and the graph of the “examined:Returned ratio” graph show us the improvments: the worst case is returned ratio of 5.That’s great, however the alerts didn’t disappear but if I check the graph in the profiler he didn’t show anything to worry. There are no suggestions to create index, alsoHow can I find the query or the queries that caused the alerts?",
"username": "Luca_Fongaro"
},
{
"code": "",
"text": "Hello @Luca_Fongaro,The Query Profiler displays slow-running operations and their key performance statistics.As a default setting, Atlas exclusively captures queries that exhibit execution times exceeding 100 milliseconds, categorizing them as “slow queries.”In the event that the profiler remains devoid of entries, a plausible explanation may be that the queries in question were characterized by expeditious execution, consistently falling within the stipulated threshold.",
"username": "Shashank_Laud"
},
{
"code": "",
"text": "Hi, thanks for the reply.If I understand correctly, there isn’t a way to show the queries which cause the alert?",
"username": "Luca_Fongaro"
},
{
"code": "slowms0db.setProfilingLevel(0,-1)\nmongodmongos",
"text": "Hello,The only way that you can do this is using db.setProfilingLevel() to slowms to 0, e.g:However, this should be done with caution because it will log all operations and has the potential to impact performance and use up disk space if left enabled for too long. You would need to run this on each mongod/mongos where you want to increase logging. If you enable this in many instances it could be very easy to forget to set it back to previous levels and I advise against doing this unless absolutely necessary.",
"username": "Shashank_Laud"
}
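For reference, a hedged mongosh sketch of the temporary change described above, including restoring the threshold afterwards:

```javascript
// Keep the profiler off (level 0) but log every operation to the mongod
// log by lowering slowms to 0. Use only briefly on busy systems.
db.setProfilingLevel(0, { slowms: 0 });

// ...reproduce the alert and inspect the mongod log for the queries...

// Restore the default 100 ms threshold when finished.
db.setProfilingLevel(0, { slowms: 100 });
```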
] | Scanned Object / returned has gone over 1000, but profiler is empty | 2023-10-10T14:08:54.171Z | Scanned Object / returned has gone over 1000, but profiler is empty | 272 |
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi team,\nAs I understood that mongodb realm work as backend as a service, but I want to use my own backend as express js and frontend with react or any other and want to use realm as local database + mongodb as online database, can I use like this, if yes then how?\nLike how whatsapp store our chat offline on our device and sync with online database while online.",
"username": "Nayan_Rathod"
},
{
"code": "",
"text": "Hi @Nayan_Rathod - yes, you can use the Realm JS SDK with express js as a backend service to serve up your react pages. The data will be stored in a local Realm database file in your backend that will synchronize with Atlas (and your MongoDB database) in the cloud.\nHere is an example for getting started with express js and Realm: Building a Blog with Realm Node.js and Express",
"username": "Michael_Wilkerson-Barker"
},
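As a rough illustration of that setup, a minimal sketch with the realm npm package and Express (the schema and route are hypothetical; a sync configuration would be added to Realm.open for Atlas synchronization):

```javascript
// Express serves the API while the data lives in a local Realm file.
const Realm = require("realm");
const express = require("express");

const TaskSchema = {
  name: "Task",
  primaryKey: "_id",
  properties: { _id: "objectId", description: "string", isComplete: "bool" },
};

async function main() {
  const realm = await Realm.open({ schema: [TaskSchema] });
  const app = express();
  app.get("/tasks", (_req, res) => {
    res.json(realm.objects("Task").toJSON()); // serialize the live results
  });
  app.listen(3000);
}

main();
```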
{
"code": "",
"text": "Sorry, that example I gave is fairly outdated (from 2017) - here is a more recent example/tutorial: https://www.youtube.com/watch?v=jJmrrVqVdUM\nHere is the MERN Stack Crash Course tutorial referenced in the video: https://www.youtube.com/playlist?list=PL4cUxeGkcC9iJ_KkrkBZWZRHVwnzLIoUE\nHopefully this helps you to get started.",
"username": "Michael_Wilkerson-Barker"
}
] | Mongodb realm with my own backend | 2023-05-08T08:13:58.013Z | Mongodb realm with my own backend | 489 |
null | [
"performance"
] | [
{
"code": "",
"text": "Hello ,\nI am looking for a way to launch the two mongotop and mongostat commands to an ATLAS instanceWould it be possible to do these types of remote commands?Can we have these two functionalities in the Atlas portal?Thans",
"username": "Abdallah_MEHDOINI"
},
{
"code": "",
"text": "Hi @Abdallah_MEHDOINI,Welcome to the MongoDB community! It is possible to run mongotop and mongostat on an Atlas instance today, however it requires downloading MongoDB Command Line Database Tools. Please see the documentation here and here on how to run mongotop and mongostat from the CLI.Additionally, Atlas instances of tiers M10+ have a “Real Time Performance Panel”, which calls mongostat and mongotop and returns the results every 1 second.Thanks!\nFrank",
"username": "Frank_Sun"
},
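For example, once the Database Tools are installed locally, both commands accept an Atlas connection string via --uri (the credentials and host below are placeholders):

```sh
# Placeholder SRV connection string — substitute your own Atlas cluster.
mongostat --uri "mongodb+srv://user:password@cluster0.example.mongodb.net"
mongotop  --uri "mongodb+srv://user:password@cluster0.example.mongodb.net" 5
```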
{
"code": "",
"text": "Hi @Abdallah_MEHDOINI / @Tarun_GaurApologies, I’ve edited my previous response! Please see the updated documentation links for more info on how to run mongotop/mongostat via CLI.Thanks!\nFrank",
"username": "Frank_Sun"
}
] | Mongotop To Atlas | 2023-10-15T19:02:12.897Z | Mongotop To Atlas | 225 |
null | [
"aggregation",
"data-modeling"
] | [
{
"code": "documents\n_id\nfolderName\n\n -------------------------\n\nrequest_document: \n_id\ndocumentId\nmemberId\nisAccept\ndeny\ndocuments\n_id\nfolderName\nhasAccess \n",
"text": "I have a collection “documents” and “request_documents”.\nI want to check “members” which has access on “documents” from “request_documents”so when user want to “Get” data to “documents”, query gonna check is member be able to access the document if he/she in the “request_document”, if that member “_id” available on the “request_document”, then the respond on document gonna has this:so on “hasAccess” gonna has logic like this => if member _id exist in collection “request_document” and “isAccept” true, then “hasAccess” gonna be true , and if exist but isAccept and deny false, then “waiting”, otherwise “false”is that possible to do that and how to I added new fields for that??",
"username": "Virtual_Database"
},
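If it helps, here is one possible shape for this — a hedged aggregation sketch ($lookup plus $switch) using the field names from the question; memberId is the requesting user's id supplied by the application:

```javascript
// Compute hasAccess per document from the matching request_documents entry.
const memberId = ObjectId("..."); // supplied by the application

db.documents.aggregate([
  { $lookup: {
      from: "request_documents",
      let: { docId: "$_id" },
      pipeline: [
        { $match: { $expr: { $and: [
          { $eq: ["$documentId", "$$docId"] },
          { $eq: ["$memberId", memberId] },
        ] } } },
      ],
      as: "request",
  } },
  { $addFields: {
      hasAccess: {
        $switch: {
          branches: [
            // Any accepted request grants access.
            { case: { $anyElementTrue: ["$request.isAccept"] }, then: true },
            // An explicit deny blocks access.
            { case: { $anyElementTrue: ["$request.deny"] }, then: false },
            // A pending request (neither accepted nor denied) is "waiting".
            { case: { $gt: [{ $size: "$request" }, 0] }, then: "waiting" },
          ],
          default: false,
        },
      },
  } },
]);
```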
{
"code": "",
"text": "Hi @Virtual_Database - Have you resolved this or this still an open question? If it’s an open question, can you update it with a concrete example? I’m having trouble understanding the question.",
"username": "Lauren_Schaefer"
},
{
"code": "",
"text": "My apologies - I still don’t understand the question. Perhaps someone else will. Adding a sample document for each collection as well as your expected output might help.",
"username": "Lauren_Schaefer"
}
] | It is possible to add new field into existing collection which connected each others? | 2020-09-02T13:06:10.740Z | It is possible to add new field into existing collection which connected each others? | 1,989 |
null | [
"aggregation",
"queries",
"dot-net",
"data-modeling"
] | [
{
"code": "",
"text": "Searching for any 10-digit number in multiple field where 22000000 records and data size 50GBwe used regex with indexing but it is still slow",
"username": "Akanksha_gogar"
},
{
"code": "",
"text": "Hi @Akanksha_gogar welcome to the forums and community.Is the field a number or a string?What is ‘slow’?\nWhat is the document schema/shape?\nWhat indexes have you tried?",
"username": "chris"
},
{
"code": "",
"text": "Field type is stringWe used regex for search using $regex and created compound index with field1,field2,field3\n$or: [\n{ field1: { $regex: /^\\d{10}$/ } },\n{ field2: { $regex: /^\\d{10}$/ } },\n{ field3: { $regex: /^\\d{10}$/ } }\n// Add more fields as needed\n]",
"username": "Akanksha_gogar"
},
{
"code": "",
"text": "The compound indexfield1,field2,field3is not a usable index for the query parts below:{ field2: { $regex: /^\\d{10}$/ } },\n{ field3: { $regex: /^\\d{10}$/ } }Read about compound indexes and in particular abount prefix to understand why.",
"username": "steevej"
}
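To make each branch of the $or at least index-assisted, one option is a separate single-field index per field, since $or can use a different index for each clause — a hedged sketch (note that ^\d has no literal prefix, so the regex still scans a wide index range; this avoids a full collection scan rather than making the bounds tight):

```javascript
// One index per field lets each $or clause run its own index scan.
db.coll.createIndex({ field1: 1 });
db.coll.createIndex({ field2: 1 });
db.coll.createIndex({ field3: 1 });

db.coll.find({
  $or: [
    { field1: { $regex: /^\d{10}$/ } },
    { field2: { $regex: /^\d{10}$/ } },
    { field3: { $regex: /^\d{10}$/ } },
  ],
}).explain("executionStats"); // confirm an IXSCAN per branch
```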
] | Searching for any 10-digit number in multiple field where 22000000 records and data size 50GB | 2023-10-16T11:57:14.419Z | Searching for any 10-digit number in multiple field where 22000000 records and data size 50GB | 219 |
null | [] | [
{
"code": "",
"text": "I have a Primary-Secondary-Arbiter configuration running with an application running a Change Stream. Change stream events are not delivered to the application when the Secondary node is down. The application is able to read/write to the Primary normally, but change stream events are not delivered until the Secondary node is restored. Is there any way to configure the PSA cluster to deliver change stream events when only Primary is active?",
"username": "Matt_H"
},
{
"code": "",
"text": "This is not normal behaviour if your Change Stream is connected to the replica set.What I suspect is that you are connecting directly to the secondary for the Change Stream rather than the replica set.",
"username": "steevej"
},
{
"code": "",
"text": "This is expected with PSA as you no longer can ‘majority’ commit which the change stream requires.You can mitigate this as outlined in:",
"username": "chris"
},
{
"code": "cfg = rs.conf();\ncfg[\"members\"][<array_index>][\"votes\"] = 0;\ncfg[\"members\"][<array_index>][\"priority\"] = 0;\nrs.reconfig(cfg);\nenableMajorityReadConcern",
"text": "Thanks Chris.To be clear, the following manual reconfiguration sequence for the offline secondary node is required to keep the change stream going?This is similar to how you would implement a manual failover for a two-node Primary-Secondary (no arbiter) replicaSet, correct?Also, is there any server or client side configuration, such as enableMajorityReadConcern (from older MongoDB releases) that would allow the change stream listener to automatically recover from this condition?",
"username": "Matt_H"
},
{
"code": "",
"text": "Looks good to me and agrees with the document.",
"username": "chris"
},
{
"code": "",
"text": "I just learned something that I should have known.Thanks",
"username": "steevej"
},
{
"code": "enableMajorityReadConcern",
"text": "Thank you Chris.Just to be clear on one more point. There is no server or client side configuration, such as enableMajorityReadConcern (from older MongoDB releases) that would allow the change stream listener to automatically recover from this condition?Best Regards,\nMatt",
"username": "Matt_H"
},
{
"code": "cfg = rs.conf();\ncfg[\"members\"][<array_index>][\"votes\"] = 0;\ncfg[\"members\"][<array_index>][\"priority\"] = 0;\nrs.reconfig(cfg);\n",
"text": "There are times when the documented ‘Mitigate Performance Issues with PSA Replica Set’ sequence referenced above hangs indefinitely. The sequence for removing an offline secondary is listed below for reference:Reconfig() hangs forever in the following sequence:rs.reconfig() hangs forever. I have to ctrl-c and rerun rs.reconfig(cfg, {force: true}). I would rather not force the reconfig.Is the something missing from the sequence for the scenario presented above?",
"username": "Matt_H"
},
{
"code": "",
"text": "rs.reconfig() hangs forever. I have to ctrl-c and rerun rs.reconfig(cfg, {force: true}). I would rather not force the reconfig.Yep.Is the something missing from the sequence for the scenario presented above?I think that example is framed in the lagging scenario where it would eventually succeed.",
"username": "chris"
},
{
"code": "",
"text": "Is there a procedure that avoids the use of force: true?",
"username": "Matt_H"
},
{
"code": "",
"text": "While the other member is down ? No",
"username": "chris"
},
{
"code": "votes: 0priority: 0",
"text": "The documentation states the following to remove an unavailable or lagging data-bearing node:To reduce the cache pressure and increased write traffic, set votes: 0 and priority: 0 for the node that is unavailable or lagging.However, I’m observing the following:The recovery procedure only works for 50% of the cases where you lose a data-bearing node.",
"username": "Matt_H"
}
] | Change streams with PSA configuration | 2023-10-03T17:57:14.331Z | Change streams with PSA configuration | 494 |
null | [
"replication"
] | [
{
"code": "",
"text": "Though some testing with the replSetReconfig with force=true command, I’ve hit the maximum value of the Replica Set version field.rs0 [direct: primary] test> rs.reconfig(cfg, {force: true})\nMongoServerError: BSON field ‘version’ value must be < 2147483648, actual value ‘2147543330’rs0 [direct: primary] test> cfg = rs.config().version\n2147482000Is there a way to recover from this without losing the database? If so, what is the procedure?Note similar issues are reference by the following, but no recovery procedure was provided.Reset replica set version number using rs.reconfig()replSetReconfig force generates too high version numberBest Regards,\nMatt",
"username": "Matt_H"
},
{
"code": "",
"text": "Are you also using the K8S operator like the other topics ?Like the others, what are the:",
"username": "chris"
},
{
"code": "$ /usr/bin/mongod --version\ndb version v6.0.10\nBuild Info: {\n \"version\": \"6.0.10\",\n \"gitVersion\": \"8e4b5670df9b9fe814e57cb5f3f8ee9407237b5a\",\n \"openSSLVersion\": \"OpenSSL 1.1.1f 31 Mar 2020\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu2004\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n$ lsb_release -a\nNo LSB modules are available.\nDistributor ID:\tUbuntu\nDescription:\tUbuntu 20.04.6 LTS\nRelease:\t20.04\nCodename:\tfocal\n",
"text": "No, I’m not using K8S. I’m testing various scenarios with a PSA architecture. I’ve created the situation through test scripts looping over reconfig(force=true) and also manually adjusting the version, so I know how I got into the situation.I’m trying to figure out a recovery procedure if this should happen in production after some period of time. Is there a way to recover from this scenario?",
"username": "Matt_H"
},
{
"code": "--replSetlocal--replSet",
"text": "Yes.Stop using arbiters, they’re more trouble than they’re worth.",
"username": "chris"
}
] | Recovery from Replica Set 'version' value must be < 2147483648 | 2023-10-16T21:22:56.914Z | Recovery from Replica Set ‘version’ value must be < 2147483648 | 193 |
[] | [
{
"code": "",
"text": "I have installed mongodb in my ubuntu 22.04 system by following below steps\nwget -qO – https://www.mongodb.org/static/pgp/server-3.6.asc | sudo apt-key add -;echo “deb [ arch=amd64 ] MongoDB Repositories bionic/mongodb-org/3.6multiverse” | sudo tee /etc/apt/sources.list.d/mongodb-org-3.6.list;\napt-get update;\napt-get install -y mongodb-org;But i am seeing the mongodb service is failed after giving command service mongod status\nimage1907×282 64.5 KB",
"username": "Joel_Thomas"
},
{
"code": "48mongodmongosmongod/etc/mongod.conf",
"text": "48 Returned by mongod or mongos when an error prevents a newly started instance from listening for incoming connections.Do you have mongod running already or another service using port 27017 (or configured port in /etc/mongod.conf ?",
"username": "chris"
}
] | MongoDB not starting in ubuntu 22.04 | 2023-10-17T06:35:23.101Z | MongoDB not starting in ubuntu 22.04 | 200 |
|
null | [
"python"
] | [
{
"code": "",
"text": "I’m trying to create a Full text search functionality using Flask and Pymongo with hosted atlas.\nWe have a very specific use case where we need to create a new collection using the id of a customer in the customer db.\nWe want to add search indexes to our collections every time a new one is created. Couldn’t find a dedicated documentation how to achieve that using the Pymongo. package.Some help would be highly appreciated",
"username": "a091cdf9f526d18581e0dc80416e114"
},
{
"code": "",
"text": "Hi @a091cdf9f526d18581e0dc80416e114 and welcome to the MongoDB Community forum!!We have a very specific use case where we need to create a new collection using the id of a customer in the customer db.If I understand the question correctly, you are trying to extract _id for a specific user and trying to new collection using the extracted objectID.We want to add search indexes to our collections every time a new one is created.Do you mean to use Atlas Search in your application? Note that this is different than text index that is available outside of Atlas.To help us understand further, could you help us with the following information to help you with a solution.Thanks\nAasawari",
"username": "Aasawari"
}
] | How to create search indexes in atlas using Pymongo | 2023-03-18T15:47:43.077Z | How to create search indexes in atlas using Pymongo | 953 |
null | [
"flutter",
"dart"
] | [
{
"code": "Not a realm type\n\nin: package:flutter_todo/realm/schemas.dart:61:8\n ╷\n16 │ enum EmployeesTypes {\n │ ━━━━━━━━━━━━━━\n... │\n57 │ @RealmModel()\n58 │ class _Employee {\n │ ━━━━━━━━━ in realm model for 'Employee'\n... │\n61 │ late EmployeesTypes type;\n │ ^^^^^^^^^^^^^^ EmployeesTypes is not a realm model type\n ╵\nAdd a @RealmModel annotation on 'EmployeesTypes', or an @Ignored annotation on 'type'.\n\n[INFO] realm:realm_generator on lib/components/todo_item.dart:[generate (0)] completed, took 243ms\n[INFO] realm:realm_generator on lib/components/modify_item.dart:[generate (0)] completed, took 259ms\n[INFO] Running build completed, took 12.6s\n\n[INFO] Caching finalized dependency graph...\n[INFO] Caching finalized dependency graph completed, took 119ms\nimport 'package:realm/realm.dart';\npart 'schemas.g.dart';\n\nenum EmployeesTypes {\n liaisonExecutive,\n loanLead\n}\n\n@RealmModel()\nclass _Employee {\n @PrimaryKey()\n late ObjectId id;\n late EmployeesTypes type;\n late DateTime createdAt;\n late DateTime updatedAt;\n",
"text": "Hello comunity.i’m working with realm and i’m triying to create a realModel with a enum for EmployeeTypes (it cotaint 2 options for now)But when i tried to use it to generate the schemas.g.dart file i’m receiving the following error:And this is my modelAny advice about how can i manage the enums types with realm?",
"username": "Rodolfo_Lugo"
},
{
"code": "",
"text": "Take a look here for supported data types: docs",
"username": "shyshkov.o.a"
}
] | Realm enums on flutter | 2023-10-17T06:39:35.554Z | Realm enums on flutter | 230 |
null | [
"queries",
"node-js"
] | [
{
"code": "",
"text": "Hi,I am trying to connect with MongoDB with the node\nand access control is also enabled\nusing URI\nmongodb://USER:PASSWORD@HOSTANME:27017/DATABASE_NAME\nbut it is not connecting to the database but when I add ?authSource=authDB at the end of the URI it is connected successfullyis there any way to connect with MongoDB without specifying authSource at the end of the URI?",
"username": "Shakir_Idrisi"
},
{
"code": "",
"text": "is there any way to connect with MongoDB without specifying authSource at the end of the URI?Conditionally, the answer is “yes”, depending on your setup. See the authSource Connnection Option documentation.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "hi,I have MongoDB on my system and I want to connect with the node\nusing bellow URImongodb://shakir_usr1:PASSWORD@Hostname:27017/shakir_db?authSource=adminwhen I add ?authSource=admin after the db name it will connect.\nif I remove ?authSource=admin at the end it will not connect.My USER is created in admin db.when adding?authSource=admin in the last which worked.but I want it without ?authSource=admin it should work as well get this error in error log\n“error”: “UserNotFound: Could not find user \"shakir_usr1\" for db \"shakir_db\"”}",
"username": "Shakir_Idrisi"
},
{
"code": "",
"text": "If you read the information in the link I posted, you will have your answer.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Ok,I have checked the link you provided.that means I have to define authSource if I add a database name in the uri.Thanks",
"username": "Shakir_Idrisi"
},
{
"code": "authSourceauthSourcedefaultauthdbdefaultauthdbauthSourceadmin",
"text": "What it says is this:If authSource is unspecified, authSourcedefaults to the defaultauthdb specified in the connection string.If defaultauthdb is unspecified, then authSource defaults to admin.",
"username": "Jack_Woehr"
},
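To make the defaulting concrete, a small sketch with the Node.js driver (the host and credentials are placeholders):

```javascript
const { MongoClient } = require("mongodb");

// authSource omitted: it defaults to the defaultauthdb, here "shakir_db",
// which fails because the user was created in "admin".
// "mongodb://shakir_usr1:PASSWORD@host:27017/shakir_db"

// authSource stated explicitly: authenticates against "admin" and works.
const client = new MongoClient(
  "mongodb://shakir_usr1:PASSWORD@host:27017/shakir_db?authSource=admin"
);
```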
{
"code": "authSource",
"text": "You can connect to MongoDB without specifying authSource in the URI if the authentication database is the same as your target database. Otherwise, include it for clarity and security.",
"username": "Maurice_Klein"
}
] | Mongodb connection URI | 2023-10-09T06:36:47.953Z | Mongodb connection URI | 358 |
null | [
"dot-net",
"server"
] | [
{
"code": "\":{\"$date\":\"2023-09-30T10:01:13.290+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF765174FF9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":101,\"s\":\"mongo::`anonymous namespace'::endProcessWithSignal\",\"s+\":\"19\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF765175806\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/signal_handlers_synchronous.cpp\",\"line\":245,\"s\":\"mongo::`anonymous namespace'::myTerminate\",\"s+\":\"E6\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF765245FD7\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":88,\"s\":\"mongo::stdx::dispatch_impl\",\"s+\":\"17\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF765245FB9\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/stdx/set_terminate_internals.cpp\",\"line\":92,\"s\":\"mongo::stdx::TerminateHandlerDetailsInterface::dispatch\",\"s+\":\"9\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF804FCDE58\",\"module\":\"ucrtbase.dll\",\"s\":\"terminate\",\"s+\":\"18\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFFF87F1AAB\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"95B\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFFF87F2317\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_NLG_Return2\",\"s+\":\"11C7\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.291+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFFF87F4119\",\"module\":\"VCRUNTIME140_1.dll\",\"s\":\"_CxxFrameHandler4\",\"s+\":\"A9\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF76546AAC8\",\"module\":\"mongod.exe\",\"file\":\"d:/a01/_work/43/s/src/vctools/crt/vcstartup/src/gs/amd64/gshandlereh4.cpp\",\"line\":86,\"s\":\"__GSHandlerCheck_EH4\",\"s+\":\"64\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF80823575F\",\"module\":\"ntdll.dll\",\"s\":\"_chkstk\",\"s+\":\"11F\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF808194CEF\",\"module\":\"ntdll.dll\",\"s\":\"RtlWalkFrameChain\",\"s+\":\"14BF\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, 
\"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF808198AE6\",\"module\":\"ntdll.dll\",\"s\":\"RtlRaiseException\",\"s+\":\"316\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF804479319\",\"module\":\"KERNELBASE.dll\",\"s\":\"RaiseException\",\"s+\":\"69\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FFFF88066C0\",\"module\":\"VCRUNTIME140.dll\",\"s\":\"CxxThrowException\",\"s+\":\"90\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.292+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF7651EDF3D\",\"module\":\"mongod.exe\",\"file\":\"C:/data/mci/5397387438e517addde6ea36bfeb6ede/src/build/59f4f0dd/mongo/base/error_codes.cpp\",\"line\":2496,\"s\":\"mongo::error_details::throwExceptionForStatus\",\"s+\":\"42D\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.293+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF76517EC27\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/util/assert_util.cpp\",\"line\":260,\"s\":\"mongo::uassertedWithLocation\",\"s+\":\"187\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.293+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF76359CF41\",\"module\":\"mongod.exe\",\"file\":\".../src/mongo/db/ftdc/controller.cpp\",\"line\":254,\"s\":\"mongo::FTDCController::doLoop\",\"s+\":\"501\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.293+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF76359C8DC\",\"module\":\"mongod.exe\",\"file\":\"C:/Program Files/Microsoft Visual Studio/2022/Professional/VC/Tools/MSVC/14.31.31103/include/thread\",\"line\":56,\"s\":\"std::thread::_Invoke<std::tuple<<lambda_726633d7e71a0bdccc2c30a401264d8c> >,0>\",\"s+\":\"2C\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.293+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF804F8268A\",\"module\":\"ucrtbase.dll\",\"s\":\"o_exp\",\"s+\":\"5A\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.293+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":31445, \"ctx\":\"ftdc\",\"msg\":\"Frame\",\"attr\":{\"frame\":{\"a\":\"7FF805AB7974\",\"module\":\"KERNEL32.DLL\",\"s\":\"BaseThreadInitThunk\",\"s+\":\"14\"}}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.295+08:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23132, \"ctx\":\"ftdc\",\"msg\":\"Writing minidump diagnostic file\",\"attr\":{\"dumpName\":\"C:\\\\Program Files\\\\MongoDB\\\\Server\\\\6.0\\\\bin\\\\mongod.2023-09-30T02-01-13.mdmp\"}}\n{\"t\":{\"$date\":\"2023-09-30T10:01:13.677+08:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":23137, \"ctx\":\"ftdc\",\"msg\":\"*** immediate exit due to unhandled exception\"}\n\n\n************* Preparing the environment for Debugger Extensions Gallery repositories **************\n ExtensionRepository : Implicit\n UseExperimentalFeatureForNugetShare : true\n AllowNugetExeUpdate : true\n AllowNugetMSCredentialProviderInstall : true\n AllowParallelInitializationOfLocalRepositories : true\n\n -- Configuring repositories\n ----> Repository : LocalInstalled, Enabled: true\n ----> Repository : UserExtensions, Enabled: 
true\n\n>>>>>>>>>>>>> Preparing the environment for Debugger Extensions Gallery repositories completed, duration 0.000 seconds\n\n************* Waiting for Debugger Extensions Gallery to Initialize **************\n\n>>>>>>>>>>>>> Waiting for Debugger Extensions Gallery to Initialize completed, duration 0.078 seconds\n ----> Repository : UserExtensions, Enabled: true, Packages count: 0\n ----> Repository : LocalInstalled, Enabled: true, Packages count: 36\n\nMicrosoft (R) Windows Debugger Version 10.0.25921.1001 AMD64\nCopyright (c) Microsoft Corporation. All rights reserved.\n\n\nLoading Dump File [C:\\Users\\aaron.vo\\Desktop\\mongod.2023-10-03T14-24-13.mdmp]\nUser Mini Dump File: Only registers, stack and portions of memory are available\n\n\n************* Path validation summary **************\nResponse Time (ms) Location\nDeferred srv*\nSymbol search path is: srv*\nExecutable search path is: \nWindows 10 Version 17763 MP (8 procs) Free x64\nProduct: Server, suite: TerminalServer DataCenter SingleUserTS\nEdition build lab: 17763.1.amd64fre.rs5_release.180914-1434\nDebug session time: Tue Oct 3 21:24:13.000 2023 (UTC + 7:00)\nSystem Uptime: not available\nProcess Uptime: 1 days 5:12:28.000\n............................................\nThis dump file has an exception of interest stored in it.\nThe stored exception information can be accessed via .ecxr.\n(3a0c.32f0): Unknown exception - code e0000001 (first/second chance not available)\nFor analysis of this file, run !analyze -v\nntdll!NtGetContextThread+0x14:\n00007ff8`08232494 c3 ret\n0:021> !analyze -v\n*******************************************************************************\n* *\n* Exception Analysis *\n* *\n*******************************************************************************\n\n\nKEY_VALUES_STRING: 1\n\n Key : Analysis.CPU.mSec\n Value: 1077\n\n Key : Analysis.Elapsed.mSec\n Value: 11362\n\n Key : Analysis.IO.Other.Mb\n Value: 4\n\n Key : Analysis.IO.Read.Mb\n Value: 3\n\n Key : Analysis.IO.Write.Mb\n Value: 19\n\n Key : Analysis.Init.CPU.mSec\n Value: 1078\n\n Key : Analysis.Init.Elapsed.mSec\n Value: 54456\n\n Key : Analysis.Memory.CommitPeak.Mb\n Value: 77\n\n Key : Failure.Bucket\n Value: APPLICATION_FAULT_e0000001_mongod.exe!Unknown\n\n Key : Failure.Hash\n Value: {87ea1f7d-1c2e-882d-7844-2eb57a54811c}\n\n Key : Timeline.Process.Start.DeltaSec\n Value: 105148\n\n Key : WER.OS.Branch\n Value: rs5_release\n\n Key : WER.OS.Version\n Value: 10.0.17763.1\n\n Key : WER.Process.Version\n Value: 6.0.5.0\n\n\nFILE_IN_CAB: mongod.2023-10-03T14-24-13.mdmp\n\nNTGLOBALFLAG: 0\n\nCONTEXT: (.ecxr)\nrax=0000000000000000 rbx=0000000000000001 rcx=0000000000000000\nrdx=0000000000000000 rsi=0000005a183feaf0 rdi=00000000ffffffff\nrip=00007ff804479319 rsp=0000005a183fdff0 rbp=0000005a183fe300\n r8=0000000000000000 r9=0000000000000000 r10=00007ff763346aa0\nr11=0000000000000000 r12=0000000000000000 r13=0000005a183fecb0\nr14=0000005a183fe4b0 r15=0000005a183fe4e0\niopl=0 nv up ei pl nz na pe nc\ncs=0033 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000202\nKERNELBASE!RaiseException+0x69:\n00007ff8`04479319 0f1f440000 nop dword ptr [rax+rax]\nResetting default scope\n\nEXCEPTION_RECORD: (.exr -1)\nExceptionAddress: 00007ff804479319 (KERNELBASE!RaiseException+0x0000000000000069)\n ExceptionCode: e0000001\n ExceptionFlags: 00000001\nNumberParameters: 0\n\nPROCESS_NAME: mongod.exe\n\nERROR_CODE: (NTSTATUS) 0xe0000001 - <Unable to get error code text>\n\nEXCEPTION_CODE_STR: e0000001\n\nSTACK_TEXT: \n0000005a`183fdff0 00007ff7`65174ff9 : 
00000000`00000000 00000000`00000000 00000000`00000000 00007ff7`6516016a : KERNELBASE!RaiseException+0x69\n0000005a`183fe0d0 00000000`00000000 : 00000000`00000000 00000000`00000000 00007ff7`6516016a 00000000`e0000001 : mongod+0x2094ff9\n\n\nSYMBOL_NAME: mongod+2094ff9\n\nMODULE_NAME: mongod\n\nIMAGE_NAME: mongod.exe\n\nSTACK_COMMAND: ~21s; .ecxr ; kb\n\nFAILURE_BUCKET_ID: APPLICATION_FAULT_e0000001_mongod.exe!Unknown\n\nOS_VERSION: 10.0.17763.1\n\nBUILDLAB_STR: rs5_release\n\nOSPLATFORM_TYPE: x64\n\nOSNAME: Windows 10\n\nIMAGE_VERSION: 6.0.5.0\n\nFAILURE_ID_HASH: {87ea1f7d-1c2e-882d-7844-2eb57a54811c}\n\nFollowup: MachineOwner\n---------\n\n0:021> .exr -1\nExceptionAddress: 00007ff804479319 (KERNELBASE!RaiseException+0x0000000000000069)\n ExceptionCode: e0000001\n ExceptionFlags: 00000001\nNumberParameters: 0\n\n",
"text": "I installed MongoDB as windows services. But sometimes, it crashes unexpectedly.\nI use Windows Server version 1809 (OS Build 17763.3406)\nMongo.logAnd mdmg file:Thank you in advanced.",
"username": "Anh_Vo"
},
{
"code": "",
"text": "Does anyone know the reason ? ",
"username": "Anh_Vo"
}
] | MongoDB Windows Service crashes unexpectedly | 2023-10-03T16:03:16.601Z | MongoDB Windows Service crashes unexpectedly | 323 |
null | [
"dot-net"
] | [
{
"code": " // my interface\n public interface IAudit\n { \n DateTime TimeStamp { get; set; }\n string IpAddress { get; set; }\n }\n \n // first implementation \n public class AuditEdi : IAudit\n {\n public DateTime TimeStamp { get; set; }\n public string IpAddress { get; set; }\n public string MessageType { get; set; }\n ....\n }\n\n // second implementation \n public class AuditIhm : IAudit\n {\n public Guid UserId { get; set; }\n public DateTime TimeStamp { get; set; }\n public string IpAddress { get; set; }\n ...\n }\nBsonClassMap.RegisterClassMap<AuditIhm>(cm =>\n{\n\tcm.SetDiscriminator(\"AuditIhm\");\n\tcm.AutoMap();\n\tcm.SetIgnoreExtraElements(true);\n});\n\nBsonClassMap.RegisterClassMap<AuditEdi>(cm =>\n{\n\tcm.SetDiscriminator(\"AuditEdi\");\n\tcm.AutoMap();\n\tcm.SetIgnoreExtraElements(true);\n});\n\n var filter = Builders<IAudit>.Filter.Eq(r => r.IpAddress, ipAddress);\n var sort = Builders<IAudit>.Sort.Descending(\"TimeStamp\");\n var findOptions = new FindOptions<IAudit>() {Sort = sort};\n",
"text": "Hello everyone. I work with mongoDb Driver 2.18.0I have objects two different classes that implement the same interface. I store these objects in a mongo DB which works fine. The problem is the deserialization of this objects when I filter on interface properties.My data modelingClass Map registrationTry to filter on one of the interface’s propertythe exception\nSystem.InvalidOperationException : ‘Unable to determine the serialization information for r => r.IpAddress.’Should i use class inheritance instead of interface for modeling my documents or is it possible to use this implementation ?",
"username": "Yannick_Darcillon"
},
{
"code": "using System;\nusing MongoDB.Bson.Serialization;\nusing MongoDB.Driver;\n\nBsonClassMap.RegisterClassMap<AuditIhm>(cm =>\n{\n cm.SetDiscriminator(\"AuditIhm\");\n cm.AutoMap();\n cm.SetIgnoreExtraElements(true);\n});\n\nBsonClassMap.RegisterClassMap<AuditEdi>(cm =>\n{\n cm.SetDiscriminator(\"AuditEdi\");\n cm.AutoMap();\n cm.SetIgnoreExtraElements(true);\n});\n\nvar client = new MongoClient();\nvar db = client.GetDatabase(\"test\");\nvar coll = db.GetCollection<ILifeCycleAudit>(\"coll\");\n\nvar ipAddress = \"127.0.0.1\";\nvar filter = Builders<ILifeCycleAudit>.Filter.Eq(r => r.IpAddress, ipAddress);\nvar sort = Builders<ILifeCycleAudit>.Sort.Descending(\"TimeStamp\");\nvar query = coll.Find(filter).Sort(sort);\nConsole.WriteLine(query);\n\n// my interface\npublic interface ILifeCycleAudit\n{\n DateTime TimeStamp { get; set; }\n string IpAddress { get; set; }\n}\n\n// first implementation\npublic class AuditEdi : ILifeCycleAudit\n{\n public DateTime TimeStamp { get; set; }\n public string IpAddress { get; set; }\n public string MessageType { get; set; }\n}\n\n// second implementation\npublic class AuditIhm : ILifeCycleAudit\n{\n public Guid UserId { get; set; }\n public DateTime TimeStamp { get; set; }\n public string IpAddress { get; set; }\n}\nfind({ \"IpAddress\" : \"127.0.0.1\" }).sort({ \"TimeStamp\" : -1 })\n",
"text": "Hi, @Yannick_Darcillon,Welcome to the MongoDB Community Forums. I understand that you’ve encountered an unexpected serialization-related exception. When I ran your code (with some minor alterations to fill in the missing gaps), I did not encounter an exception.The output of this code is:Please provide a self-contained repro of the issue so that we can investigate further.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "coll.Find(filter).Sort(sort).ToEnumerable();",
"text": "I forgot the most important line of code in my example, the execution of the query ! If you add this the problem will appear :coll.Find(filter).Sort(sort).ToEnumerable();IEnumerable will not execute the query until you enumerate over the data.best regard,\nYannick",
"username": "Yannick_Darcillon"
}
] | MongoDb C# Deserialize interface throw exception | 2023-09-18T12:18:59.628Z | MongoDb C# Deserialize interface throw exception | 448 |
null | [
"java",
"field-encryption"
] | [
{
"code": "",
"text": "Hello Mongo Community,I am currently trying to use the Mongo Encryption Library in a GraalVm Native application but I can’t get it to work.I’ve learned that due to the fact LibMongoCrypt is a Native Library, GraalVm might not load it correctly because of AOT Compilation.\nAOT Compilation aims to create a single executable containing every needed component and dependency and given the fact these native libraries are loaded at runtime, GraalVm may not be able to correctly add them.Does anyone know anything else about the topic ?Thanks !!!",
"username": "Diogo_Marques"
},
{
"code": "",
"text": "Hey Diogo, thanks for reaching out. I saw that you filed JAVA-5199 about this issue as well. To avoid duplication, we’ll reply on that ticket.",
"username": "Ashni_Mehta"
},
{
"code": "",
"text": "Hello. I`m experiencing the same issue. A little help would be much appreciated.",
"username": "Rafael_Fogel"
}
] | MongoCrypt Lib Integration with GraalVm Native Images | 2023-10-13T09:47:27.845Z | MongoCrypt Lib Integration with GraalVm Native Images | 242 |
null | [
"queries",
"node-js",
"atlas-search",
"text-search"
] | [
{
"code": "{\n\t'$search': {\n 'index': 'User',\n 'compound':{\n should: [\n \t{\n autocomplete: {\n query: 'marty',\n path: 'username'\n }\n },\n {\n autocomplete: {\n query: 'marty',\n path: 'fname'\n }\n },\n {\n autocomplete: {\n query: 'marty',\n path: 'lname'\n }\n }\n ]\n }\n\t}\n}\n",
"text": "how to search in multiple collection using $search autocomplete ex- User and Address collection both",
"username": "ABHINANDAN_MAITY"
},
{
"code": "",
"text": "Hi @ABHINANDAN_MAITY , you can use $lookup to join two collections to search across. See the tutorials on searching across collections here.",
"username": "amyjian"
},
{
"code": "",
"text": "It’s really depends on the output you need, it’s either using $lookup as suggested or $unionwith\nThe different between those two is $lookup append another document to exist document\nwhile $unionwith append collection to another collection",
"username": "7b55b5f9a91383655fec26662aab12c"
}
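For the $unionWith route, a hedged sketch (running $search inside $unionWith is supported on Atlas clusters with MongoDB 6.0+; the index names and the Address path below are assumptions):

```javascript
// Autocomplete over two collections in one result set.
db.User.aggregate([
  { $search: {
      index: "User",
      autocomplete: { query: "marty", path: "username" },
  } },
  { $unionWith: {
      coll: "Address",
      pipeline: [
        { $search: {
            index: "Address",
            autocomplete: { query: "marty", path: "city" },
        } },
      ],
  } },
]);
```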
] | How to search in multiple collection using $search and autocomplete in more then one collection | 2023-10-04T06:13:20.737Z | How to search in multiple collection using $search and autocomplete in more then one collection | 250 |
null | [] | [
{
"code": "db.collection.aggregate([{\n$search: {\n\ttext: {\n\t\tquery: 'multi word query',\n\t\tpath: [...some fields],\n\t},\n}}]);\nUncaught exception: Error: command failed: {\n\t\"ok\" : 0,\n\t\"errmsg\" : \"Unrecognized pipeline stage name: '$search'\",\n\t\"code\" : 40324,\n\t\"codeName\" : \"Location40324\"\n} : aggregate failed :\n_getErrorWithCode@src/mongo/shell/utils.js:25:13\ndoassert@src/mongo/shell/assert.js:18:14\n_assertCommandWorked@src/mongo/shell/assert.js:583:17\nassert.commandWorked@src/mongo/shell/assert.js:673:16\nDB.prototype._runAggregate@src/mongo/shell/db.js:266:5\nDBCollection.prototype.aggregate@src/mongo/shell/collection.js:1012:12\nDBCollection.prototype.aggregate@:1:355\n@(shell):1:1\n",
"text": "I am using MongoDB atlas search.It is always says…below error.Error:Could you help to unblock.",
"username": "Merlin_Baptista_B"
},
{
"code": "",
"text": "Hi @Merlin_Baptista_B,Welcome to MongoDB communityHave you created an atlas search index on that collection?What is the atlas cluster version?It is supported from 4.2+:Get started quickly with Atlas Search by loading sample data to your cluster, creating a search index, and querying your collection.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
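For completeness, current mongosh versions can also create a search index from the shell on an Atlas cluster (the index name and mapping below are illustrative; on older deployments the index must be created through the Atlas UI or API):

```javascript
// Requires an Atlas cluster; local deployments reject $search entirely.
db.movies.createSearchIndex("default", { mappings: { dynamic: true } });
```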
{
"code": "",
"text": "I have the same problem, I have created search index on Atlas cloud via search UI (default search index for now)\nHow can I resolve this issue?",
"username": "7b55b5f9a91383655fec26662aab12c"
},
{
"code": "",
"text": "Problem solved. I was using local DB instead of the Atlas one",
"username": "7b55b5f9a91383655fec26662aab12c"
},
{
"code": "",
"text": "Check if you are looking at the right DB, on the local DB search can’t work as far as I understand",
"username": "7b55b5f9a91383655fec26662aab12c"
}
] | Error Throwing Unrecognized pipeline stage name: '$search' | 2021-01-29T04:24:34.272Z | Error Throwing Unrecognized pipeline stage name: ‘$search’ | 15,313 |
null | [
"transactions"
] | [
{
"code": " const newEntry = await prisma.$transaction(async (tx) => {\n // MongoDB operations\n if (draft) {\n if (session.user.id !== draft?.userId) throw Error(\"Not authorized\");\n await tx.diaryEntryDraft.delete({ where: { id: draft?.id } });\n }\n\n const newEntry = await tx.diaryEntry.create({\n data: entryData,\n });\n\n // Pinecone operation\n await diaryEntriesIndex.upsert([\n {\n id: entryId,\n values: embedding,\n metadata: { userId: session.user.id },\n },\n ]);\n\n return newEntry;\n });\n",
"text": "I run a transaction on MongoDB but also include a Pinecone operation in this transaction. I’m aware that Pinecone will not be rolled back if the transaction fails. But this is why I made Pinecone the last operation in the transaction. If Pinecone fails, MongoDB is rolled back. If MongoDB fails, we don’t even reach the execution if Pinecone.This this fine or should I not do this?Here is the code. I’m using the Prisma ORM to execute my MongoDB transaction:",
"username": "Florian_Walther"
},
{
"code": "",
"text": "Hi @Florian_Walther, what’re you describing sounds to what I would consider a nested transaction. Given that one is for MongoDB and one is for Pinecone, as long as your aware of the risk of the MongoDB or Pinecone transaction failing (which is looks like you are), I don’t really see an issue with it.The nesting in this case looks like the MongoDB transaction is the top of the hierarchy and the Pinecone transaction is the inner transaction of the MongoDB transaction, meaning if the inner transaction fails, so does the outer transaction, as you’ve described. The practice seems to indicate that if the inner fails, the outer should rollback, which happens as you’ve described.Further reading I found interesting on this are here and here (not technology specific, mainly for the context on nested transactions).",
"username": "Jacob_Latonis"
},
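A minimal sketch of the same ordering with the plain Node.js driver (not Prisma); the client/db setup, the collection name, and the externalIndex client are assumptions for illustration:

const session = client.startSession();
try {
  await session.withTransaction(async () => {
    // MongoDB writes, all tied to the session
    await db.collection("diaryEntries").insertOne(entryData, { session });

    // Non-transactional external call last: if it throws,
    // withTransaction aborts the MongoDB writes above.
    await externalIndex.upsert([{ id: entryId, values: embedding }]);
  });
} finally {
  await session.endSession();
}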
{
"code": "",
"text": "Thank you for your answer and the links! Seems like my approach is fine!",
"username": "Florian_Walther"
}
] | Is it bad to include a different database call in a MongDB transaction? | 2023-10-15T12:52:52.346Z | Is it bad to include a different database call in a MongDB transaction? | 304 |
null | [
"aggregation",
"atlas-search"
] | [
{
"code": "{ \t_id: ObjectId(\"650ae6b0f2e2c55ce7c3c3c9\"), \tcontentTypeId: ObjectId(\"650ae6aef2e2c55ce7c3c21a\"), \tcontentTypeIdString: \"650ae6aef2e2c55ce7c3c21a\" \t... }{\n facet: {\n operator: {\n exists: {\n path: \"contentTypeIdString\",\n },\n },\n facets: {\n contentTypeFacet: {\n type: \"string\",\n path: \"contentTypeIdString\",\n numBuckets: 100,\n },\n },\n },\n}\n{\n facet: {\n operator: {\n exists: {\n path: \"contentTypeId\",\n },\n },\n facets: {\n contentTypeFacet: {\n type: \"string\",\n path: \"contentTypeIdString\",\n numBuckets: 100,\n },\n },\n },\n}\n{\n facet: {\n operator: {\n equals: {\n path: \"contentTypeIdString\",\n\t\tvalue: \"650ae6aef2e2c55ce7c3c21a\"\n },\n },\n facets: {\n contentTypeFacet: {\n type: \"string\",\n path: \"contentTypeIdString\",\n numBuckets: 100,\n },\n },\n },\n}\n",
"text": "We have an issue running facet queries for Atlas Search. We have a collection containing documents like these;{ \t_id: ObjectId(\"650ae6b0f2e2c55ce7c3c3c9\"), \tcontentTypeId: ObjectId(\"650ae6aef2e2c55ce7c3c21a\"), \tcontentTypeIdString: \"650ae6aef2e2c55ce7c3c21a\" \t... }We use an Atlas $search / $searchMeta aggregation pipeline to return the amount of results in the ‘contentTypeIdString’ facet. But we want to retrict the documents to only those documents matching a specific contentTypeId. If we run this Atlas search it works fine and the query returns 309 results:However, if we exchange the “contentTypeIdString” property with “contentTypeId”, no results at all are returned:The only difference is that contentTypeId is of type ObjectId where contentTypeIdString is of type String. Also this query does not return any results, even though I’m sure this document exists:Why do the last two queries not return any results?",
"username": "Engatta_Team"
},
{
"code": "",
"text": "Hi @Engatta_Team and welcome to MongoDB community forums!!Firstly, $exists in MongoDB is the aggregation stage which filters out the results based on the availability of the filed name.\nThis does not filter the documents based of the field values. You can make use of $match stage if you wish to do so.From the information posted above, there could be two different ways to understand about facets.1. $facet as aggregation stage. In this case, the $facet categories the data in multiple dimensions to perform the query on which means it performs multiple aggregations within the same stage and process the output. If you are using $exits inside the $facet stage, it would filter out the results to be processed. You can visit the example for further information.2. $facet inside $search as an operator: This operator would groups results by values or ranges in the specified faceted fields and returns the count for each of those groups. This operator could be applied to String, numeric and Date fields which have been defined as StringFacet, NumericFacet and DateFacets respectively. You can visit the examples and the index definitions for usage of facets in the String Facets documentations.In your case, it sounds like you want to filter documents by the ObjectId value before faceting. To do this, you can use a $match stage first to filter the relevant documents by the ObjectId, and then apply the $facet stage to aggregate over those resultsHowever, if this is not what you are looking for, could you provide me the sample documents and the desired output from the documents. Also, please share the index query being created.Warm regards\nAasawari",
"username": "Aasawari"
},
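A sketch of that suggestion with the field names from the question (the collection name is hypothetical):

db.contents.aggregate([
  // Filter to the documents with the wanted ObjectId first...
  { $match: { contentTypeId: ObjectId("650ae6aef2e2c55ce7c3c21a") } },
  // ...then facet/count over the filtered set.
  {
    $facet: {
      contentTypeFacet: [
        { $group: { _id: "$contentTypeIdString", count: { $sum: 1 } } }
      ]
    }
  }
])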
{
"code": "",
"text": "In your case, it sounds like you want to filter documents by the ObjectId value before faceting. To do this, you can use a $match stage first to filter the relevant documents by the ObjectId, and then apply the $facet stage to aggregate over those resultsHi Assawari,Thank you for your answer! Indeed, I want to filter documents by the ObjectId value before faceting. I was confusing the filtering of results versus filtering of documents. So with your answer its clear to me what I should change. Thanks again!Regards, Ivo",
"username": "Engatta_Team"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | No search results for $facet inside $search | 2023-10-11T13:51:38.146Z | No search results for $facet inside $search | 310 |
null | [] | [
{
"code": "",
"text": "Within the UI, the https endpoint feature only supports returning json/ejson. Is there a way to return XML?\nI intend to build an integration with plivo and for call controlling plivo expects an XML document.\nThanks",
"username": "Adedayo_Ayeni"
},
{
"code": "",
"text": "Anybody have any answer to this? I also need to return XML?If HTTPS Endpoints/Functions cannot return XML — then what are the alternatives?",
"username": "Tim_N_A"
},
{
"code": "Content-Typeexports = function({ query, headers, body}, response) {\n response.setHeader(\"Content-Type\", \"text/xml\");\n response.setBody('<?xml version=\"1.0\"?><data></data>');\n response.setStatusCode(200);\n};\ncurl 'https://data.mongodb-api.com/app/<app-id>/endpoint/testXML' \\\n -H 'Accept: \"*/*\"' \\\n -H 'api-key: <API Key>'\n<?xml version=\"1.0\"?><data></data>\n",
"text": "Hi @Adedayo_Ayeni & @Tim_N_A,Within the UI, the https endpoint feature only supports returning json/ejson.This is only partially true: yes, the docs gear towards the JSON/EJSON format as result, and indeed if you set the Respond With Result toggle, the conversion to the chosen format is automatic.But, this isn’t the only possibility: if you don’t set the flag, you can manipulate the response directly, and the body can be any string, thus including an XML representation, providing that you set the proper Content-Type, of course.For example, the following Function, connected to an endpoint, is perfectly possible:And this runsObviously, your Function will likely need to add external dependencies to manipulate the XML, but, as long as the libraries you choose are compatible with the environment they’ll run in, it should work.",
"username": "Paolo_Manna"
},
{
"code": "",
"text": "Thanks @Paolo_Manna !! I think I got that working.",
"username": "Tim_N_A"
}
] | Https endpoint to return XML | 2023-01-03T02:49:10.996Z | Https endpoint to return XML | 797 |
null | [
"chicago-mug"
] | [
{
"code": "",
"text": "Hi everyone, I’m Jacob!I’m currently a senior software engineer in threat research at a security vendor, and I am currently living in Chicago, Illinois. I’m very excited to be joining the MongoDB Community as a MUG Co-Leader for the Chicago MongoDB MUG. I got my start with MongoDB around version 3.6-4 when I learned about it during a Hackathon at my university (it was a MLH (Major League Hacking) hackathon event at the University of Wisconsin). I’m quite excited to help grow the MongoDB community here in Chicago and the Midwest. I have prior experience in both the security world and the software engineering world. I’m currently certified as a Mongo Developer in Python and am studying for the Mongo DBA certification as well.I’ve presented at a few different events in the Midwest, including Chicago’s very own MUG and FutureCon in Indianapolis, Indiana! I love the vibrant and diverse tech community here in the Midwest, and I hope to grow it even further. I also have a passion for open source software and contribute whenever I can.You can find me on LinkedIn, Twitter, and GitHub.Come find me at the Chicago MUG .",
"username": "Jacob_Latonis"
},
{
"code": "",
"text": "Aloha Jacob! Welcome to the MongoDB Community. So excited to see that you’ll be co-leading the Chicago MUG. Reach out if you’d like me to help connect you with the AWS User Group Chicago. Might be a great group to co-host a meetup with! ",
"username": "Karissa_Fuller"
},
{
"code": "",
"text": "Welcome aboard Jacob!",
"username": "bein"
}
] | Hi from Jacob in Chicago | 2023-10-16T21:06:16.660Z | Hi from Jacob in Chicago | 247 |
null | [] | [
{
"code": "",
"text": "How to create a new database within the same cluster using aws cdk",
"username": "Raj_Alamuri"
},
{
"code": "",
"text": "Hey @Raj_Alamuri,From what I know theoretically, it is possible, as it’s basically making use of MongoDB admin APIs via the AWS CDK framework. However, there is a caveat to consider. If you are within an M0 (free) cluster, you cannot create an additional M0 cluster since only one free cluster is permitted.Furthermore, you can add more info about your use case, so other community members can chime in and share their ideas.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "hi @Raj_Alamuri, think of our AWS CDK integration as managing the control plane / admin layer for MongoDB Atlas. Therefore while you can create cluster or project or database user resources, you can’t create MongoDB Databases or Collections or insert Documents via AWS CDK. For these data plane operations you can use the mongo shell, MongoDB Compass, or perform via the Atlas UI.also @Kushagra_Kesav, good news. recently on both AWS CloudFormation and AWS CDK integration we now support M0 deployments as well! Feel free to give it a try and let us know if you have any feedback as well. https://github.com/mongodb/mongodbatlas-cloudformation-resources/blob/69df5e229d619d28eb095984ad41341723dc68d7/examples/cluster/free-tier-M0-cluster.json#L41Thank you both and hope this helps!",
"username": "Zuhair_Ahmed"
}
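To illustrate the split: the data-plane side that CDK does not cover is a one-liner in mongosh (database and collection names are hypothetical):

// mongosh: switching to a database that doesn't exist yet is fine;
// the database materializes once it holds a collection.
use myNewDatabase
db.createCollection("myCollection")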
] | How to create a new database within the same cluster using aws cdk | 2023-10-12T16:39:49.308Z | How to create a new database within the same cluster using aws cdk | 230 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 7.0.2 is out and is ready for production deployment. This release contains only fixes since 7.0.1, and is a recommended upgrade for all 7.0 users.\nFixed in this release:",
"username": "Maria_Prinus"
},
{
"code": "",
"text": "Installed 7.0.2 in a KVM virt machine under Ubuntu tonight for fun … will try migrating my 6.x development to 7.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "will try migrating my 6.x development to 7@Jack_Woehr , there will be no try, only complete success with you doing it! ",
"username": "chris"
},
{
"code": "",
"text": "When will Ops Manager 7.0 be released to use MongoDB 7.0",
"username": "Satya_Oradba"
},
{
"code": "",
"text": "Hi Satya,The team is still determining the release date so we are unable to provide an exact date at the moment. However, they are targeting early 2024. Check out our MongoDB Software Lifecycle Schedules for future release dates for the Server and Ops Manager.– The MongoDB Team",
"username": "Britt_Snyman"
}
] | MongoDB 7.0.2 is released | 2023-09-29T23:30:43.026Z | MongoDB 7.0.2 is released | 764 |
[
"aggregation"
] | [
{
"code": "",
"text": "Hello,I have two issues with mongodb charts.Bar or Column charts, Aggregation axis is displaying values as decimal values. There is no issue with data retrieval. It is getting the correct value, but the value display isn’t right. Number of people can’t be displayed in decimal values. How to fix this? I tried everything in customize options. Nothing worked.\nd11100×578 27.4 KBWhen there is no data in the collection, chart is being displayed as empty. Instead of empty chart, I would like to post a message as \" no data to display\" or something message in it. So, that user doesn’t have to get confused where is there a blank chart. Can I do this from charts side?Any help is greatly appreciated.\nSunita",
"username": "sunita_kodali"
},
{
"code": "CustomizeNumber formattingtoggle off the Decimals field",
"text": "Hello @sunita_kodali ,I noticed that you have not had a response to this topic yet, were you able to find a solution?\nIf not, then I would suggest you to try solutions mentioned belowBar or Column charts, Aggregation axis is displaying values as decimal values. There is no issue with data retrieval. It is getting the correct value, but the value display isn’t right. Number of people can’t be displayed in decimal values. How to fix this?Go to Customize option and in field portion select your axis. Under that you will find Number formatting where you can toggle off the Decimals field.When there is no data in the collection, chart is being displayed as empty. Instead of empty chart, I would like to post a message as \" no data to display\" or something message in it. So, that user doesn’t have to get confused where is there a blank chart. Can I do this from charts side?You can do this with custom code at your client’s end. You can programmatically retrieve the chart data. If it is empty you can hide the chart and show the text message. Currently, there is no such feature present from charts side, but you can always open an idea at MongoDB Feedback Engine.Regards,\nTarun",
"username": "Tarun_Gaur"
},
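A sketch of that client-side workaround, assuming the Charts Embedding SDK (@mongodb-js/charts-embed-dom); the base URL, chart id, and element id are placeholders, and the shape returned by getData() may vary by SDK version:

import ChartsEmbedSDK from "@mongodb-js/charts-embed-dom";

const sdk = new ChartsEmbedSDK({ baseUrl: "https://charts.mongodb.com/charts-myproject-abcde" });
const chart = sdk.createChart({ chartId: "00000000-0000-0000-0000-000000000000" });

await chart.render(document.getElementById("chart"));

const data = await chart.getData(); // fetch the documents backing the chart
if (!data.documents || data.documents.length === 0) {
  // hide the empty chart and show a text message instead
  document.getElementById("chart").innerHTML = "No data to display";
}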
{
"code": "",
"text": "Will try. Thank you, Tarun.",
"username": "sunita_kodali"
}
] | Mongodb Charts - Empty and Decimal | 2023-09-27T16:13:44.959Z | Mongodb Charts - Empty and Decimal | 351 |
null | [
"indexes"
] | [
{
"code": "",
"text": "I have a huge collection. I need to create multiple compound indexes to filter/search or do multiple ETL operations on that collection.\nCompound Index List:I have created multiple compound indexes because multiple queries run at different times.\nIs it the correct way to create compound indexes?\nWill it increase the load on my MongoDB server?",
"username": "Mehul_Sanghvi"
},
{
"code": "",
"text": "these two can be removed, as they are not needed. You already have the last one which supports these two at the same time. https://www.mongodb.com/docs/manual/core/indexes/index-types/index-compound/#index-prefixesWill it increase the load on my MongoDB server?Sure yes, maintaining index consumes resources, and slows down write path a bit.",
"username": "Kobe_W"
},
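A sketch of the prefix rule with hypothetical field names:

// One compound index...
db.coll.createIndex({ status: 1, category: 1, createdAt: 1 })

// ...also serves queries on its prefixes, so separate indexes on
// { status: 1 } and { status: 1, category: 1 } would be redundant:
db.coll.find({ status: "active" })
db.coll.find({ status: "active", category: "books" })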
{
"code": "_id",
"text": "_id is a required index. It can be whatever you want* as long as it is unique in the collection.*though not an array",
"username": "chris"
},
{
"code": "_id",
"text": "_id is a required indexI completely forgot this for unknown reason. ",
"username": "Kobe_W"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | What load can multiple compound index put on my MongoDB server? | 2023-10-15T07:35:37.292Z | What load can multiple compound index put on my MongoDB server? | 214 |
null | [] | [
{
"code": "",
"text": "I had an error while trying to install the mongodb on my raspberry pi 4 with bullseye 64 bit os version.\nthe error is thi s\"E: Unable to locate package mongodb-org\".I already read so many threads about this errror but nothing workd,So anyone please help me.",
"username": "S_R"
},
{
"code": "",
"text": "If you’re you’re on pi4 then you need to stick to Ubuntu and 4.4 for official builds.Unless you have pi 5 which looks like it is supported hardware then you’ll be able to install 5.0+Otherwise you can try @Matt_Kneiser’s build(s) Error while installing mongoDB on a raspberry pi 4 using ubuntu 22.04 - #3 by Matt_Kneiser",
"username": "chris"
}
] | E: Unable to locate package mongodb-org on raspberry | 2023-10-16T14:34:25.876Z | E: Unable to locate package mongodb-org on raspberry | 308 |
null | [
"node-js",
"mongoose-odm"
] | [
{
"code": "",
"text": "I am working in a development environment on an ongoing project, with a local instance of mongoDB running. Starting today, when I attempt to start the application it immediately disconnects and closes with this message:\n“Mongoose default connection to DB : disconnected because:undefined”Everything is working fine for my colleagues (a distributed team). Does anyone have any suggestions on what might be causing this?",
"username": "John_Newquist"
},
{
"code": "",
"text": "With the help of my team, I found that the issue was I had updated my node to v18. My project works on v16. In case anyone else has this problem, check your node version!",
"username": "John_Newquist"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "Mongoose default connection to DB : disconnected because:undefined" | 2023-10-16T15:30:32.612Z | “Mongoose default connection to DB : disconnected because:undefined” | 189 |
null | [
"queries"
] | [
{
"code": "",
"text": "String filter works correctly but when I try to apply { branch: “634aeffdbccd9d0f6297bb62” } it does not work and returns null.How to apply filter for ObjectId type?",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "Hello @Donis_Rikardo, Welcome to the MongoDB community forum,It would be helpful if you provide more details,",
"username": "turivishal"
},
{
"code": "\"634aeffdbccd9d0f6297bb62\"{ \"branch\" : { \"$oid\" : \"634aeffdbccd9d0f6297bb62\" } }\n",
"text": "I am not too sure, but I think you need to use EJSON syntax to make sure the string \"634aeffdbccd9d0f6297bb62\" is processed as an ObjectId. So I would try:",
"username": "steevej"
},
{
"code": "",
"text": "Thanks. There is zero info about that in the documentation.",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "I got the $oid part from the examples.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks. I’m blind. Or it was another docs",
"username": "Donis_Rikardo"
},
{
"code": "",
"text": "I’m blindYour not. Little details like that are easily skipped.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How Apply ObjectId filter for Filter Incoming Queries | 2023-10-14T00:31:16.141Z | How Apply ObjectId filter for Filter Incoming Queries | 261 |
null | [
"queries",
"data-modeling",
"python",
"mongodb-shell"
] | [
{
"code": "const collection = db.getCollection(\"collection_name\");\n\nconst query = {\n// Some query\n}\n \n\nconst projection = {\n // To fetch only required fields.\n};\n\n# Define batch size and total documents\nbatch_size = 500000\ntotal_documents = collection.count_documents(query)\n\n# Iterate through the data with skip and limit\ncurrent_skip = 0\n\nwhile current_skip < total_documents:\n result = collection.find(query, projection).sort([(\"_id\", -1)]).skip(current_skip).limit(batch_size)\n \n for document in result:\n # Export the document or process it as needed\n print(document)\n\n current_skip += batch_size\n",
"text": "Python Program to export data continuously from MongoDB.Above code is python code that fetches the data from my MongoDB server, in batch of 5 Lakh(500K) records.\nWhat I want is to know whether there is a way in MongoDB to run this kind of logic without switching to Python programming.",
"username": "Mehul_Sanghvi"
},
{
"code": "mongo-shell",
"text": "Hey @Mehul_Sanghvi,export data continuously from MongoDB.May I ask if you are streaming the data from MongoDB to some other system? If not, please help me understand what you meant by “export data continuously”?In case you are looking to continuously export data from MongoDB and stream it to another system, such as Apache Kafka, you can utilize MongoDB Connector for Apache Kafka that integrates MongoDB with Kafka.What I want is to know whether there is a way in MongoDB to run this kind of logic without switching to Python programming.If you prefer not to use any specific programming language, you can run the command directly from the mongo-shell or MongoDB Compass, where you can write a query using the aggregation pipeline as well.Moreover, if you need additional assistance, please share your use case and related details. This information will help us to assist you better.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "@Kushagra_Kesav\nI want to export data locally like we are doing in MongoDB Compass.\nBut my problem is that I have repeatedly kept changing the value of skip & limit. But in Python Programming, I can define a loop that can be executed easily, and I want that kind of automation in my MongoDB query too.Hope you understand my query.",
"username": "Mehul_Sanghvi"
},
{
"code": "result = collection.find(query, projection).sort([(\"_id\", -1)]).skip(current_skip).limit(batch_size)\nskip()_idmongoexport",
"text": "Hi @Mehul_Sanghvi,But my problem is that I have repeatedly kept changing the value of skip & limit. But in Python Programming, I can define a loop that can be executed easilyI think in this scenario, using a programming language is much easier though. May I ask if you are facing any issues with such operations?However, I would suggest not using skip() as it can be bad for the index. It’s preferable to use the _id field when it’s an ObjectId (sortable). Please refer to the Range Queries documentation to learn more.Also, you can consider mongoexport if it aligns with your use case.Regards,\nKushagra",
"username": "Kushagra_Kesav"
},
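A sketch of that range-based approach in Python, reusing the query/projection/batch_size names from the original snippet; it pages by _id instead of skip():

last_id = None
while True:
    page_query = dict(query)
    if last_id is not None:
        page_query["_id"] = {"$lt": last_id}  # continue below the last _id seen

    batch = list(
        collection.find(page_query, projection)
        .sort("_id", -1)
        .limit(batch_size)
    )
    if not batch:
        break

    for document in batch:
        print(document)  # export or process the document

    last_id = batch[-1]["_id"]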
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Continuously export data from mongodb | 2023-10-14T19:21:54.367Z | Continuously export data from mongodb | 242 |
null | [] | [
{
"code": "",
"text": "Hi,I am able to connect using the mongodb adapter to PowerBI. I can select the table in power query but get the following error:DataSource.Error: The table has no visible columns and cannot be queried.\nDetails:\nOrdersI verified that the table was populated using Compass.Any help would be appreciated.",
"username": "Airwolf39_N_A"
},
{
"code": "",
"text": "i have the same problem.Some collections work, however, the new ones don’t!",
"username": "logikoz"
},
{
"code": "",
"text": "Hello @Airwolf39_N_A and @logikoz ,Welcome to The MongoDB Community Forums! DataSource.Error: The table has no visible columns and cannot be queried.\nDetails:\nOrdersPlease check if SQL Schema is generated as the most common reason to this error is that the SQL Schema needs to be generated. Could you try running sqlGenerateSchema command on collection, and see if that clears up the errorTo run the sqlGenerateSchema, you must do so from the admin db. Here are some instructions that might help.Screenshot 2023-10-12 at 3.56.36 PM1283×712 114 KB\nScreenshot 2023-10-12 at 3.57.01 PM1283×717 155 KBIf this does not resolve your issue, can you please share additional details such as:Regards,\nTarun",
"username": "Tarun_Gaur"
}
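A sketch of that command, run from the admin db of the SQL connection; the namespace is a placeholder and the exact options may vary by Atlas SQL version:

use admin
db.runCommand({
  sqlGenerateSchema: 1,
  sampleNamespaces: ["mydb.Orders"], // hypothetical "database.collection"
  sampleSize: 1000,
  setSchemas: true                   // persist the generated schema
})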
] | Power BI DataSource.Error: | 2023-10-10T05:26:03.390Z | Power BI DataSource.Error: | 332 |
null | [
"aggregation",
"crud",
"php"
] | [
{
"code": "$uri = 'mongodb://localhost:27017/?retryWrites=true&w=majority';\n $client = new \\MongoDB\\Client($uri);\n $collection = $client->selectDatabase('test')->test;\n $result = $collection->updateOne(['unique_id' => 'b4f5393a3a7f822eabd3d6eead66e533'], [\n '$set' => [\n 'history.202309' => '$history.202308'\n ],\n ], ['upsert' => true]);\ndb.test.updateOne(\n {\"unique_id\": \"b4f5393a3a7f822eabd3d6eead66e533\"},\n [\n { $set: { \"history.202309\": \"$history.202308\" } }\n ],\n {\"upsert\": true}\n);\n",
"text": "Hello, I’ve tried use php library update with aggregation but it save as string “$history.202308” instead of db value. But it work when I’m run using command in mongodb terminal.Below is the code example.",
"username": "hut_lim_chau"
},
{
"code": "updateOne$result = $collection->updateOne(\n ['unique_id' => 'b4f5393a3a7f822eabd3d6eead66e533'],\n [ '$set' => ['history.202309' => '$history.202308'] ],\n ['upsert' => true]\n);\n$setobject$result = $collection->updateOne(\n ['unique_id' => 'b4f5393a3a7f822eabd3d6eead66e533'],\n [\n (object) [ '$set' => ['history.202309' => '$history.202308'] ],\n ],\n ['upsert' => true]\n);\n",
"text": "Hi @hut_lim_chau, it looks like you want to run an update using an aggregation pipeline. This requires you to pass an array of pipeline stages. Your PHP code is missing the surrounding array. To make this a little more obvious, I’ve reformatted the updateOne code:If you compare this to the shell example, you’ll note that the $set operator is not nested in an array. Once you change this, it will work as expected. Note that I’ve also added an object cast to the pipeline stage. While this isn’t necessary, it can help spot such issues in the future.",
"username": "Andreas_Braun"
}
] | PHP Updates with Aggregation Pipeline NOT WORKING | 2023-10-16T03:19:58.537Z | PHP Updates with Aggregation Pipeline NOT WORKING | 169 |
null | [
"database-tools",
"backup"
] | [
{
"code": "ERROR: <mongorestore : The term 'mongorestore' is not recognized as the name of a cmdlet, \nfunction, script file, or operable program. Check the spelling of the name, or if a path was \nincluded, verify that the path is correct and try again.>\n",
"text": "I have taken a dump from Ubuntu v20 server using mongodump. I need to use it on my Windows laptop.In my Windows laptop,a. from which directory should I execute the mongorestore command?b. where should the dump folder be saved?When I try to execute the mongorestore command from my projects directory in windows, it does not recognize the command.I also tried to do the same from C:\\Program Files\\MongoDB\\Server\\5.0\\bin, but got the same error.",
"username": "Hemant_Parmar"
},
{
"code": "mongoresotore",
"text": "a. from which directory should I execute the mongorestore command?I prefer to do this from the directory containing the dump folder. But it can be done from anywhere as long as all the correct options are used.b. where should the dump folder be saved?Doesn’t matter.When I try to execute the mongorestore command from my projects directory in windows, it does not recognize the command.Add the binary directory to your PATH variable or specify the full path to mongoresotore",
"username": "chris"
},
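For illustration, a typical invocation on Windows once the Database Tools are installed; the install path and connection details are assumptions, so adjust them to your setup:

:: run from the directory that contains the dump\ folder
"C:\Program Files\MongoDB\Tools\100\bin\mongorestore.exe" --uri "mongodb://localhost:27017" dump\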
{
"code": "",
"text": "Did you install mongodb-toolsCommand line tools available for working with MongoDB deployments. Tools include mongodump, mongorestore, mongoimport, and more. Download now.1.Download Mongodb command line tool\n2.Set tools path in windows environment variablesThen run from any whereYou can follow docs set Mongorestore in env variablesThoughts from Ryan Hoffman, an experienced team leader, software architect and developer.",
"username": "Bhavya_Bhatt"
},
{
"code": "",
"text": "I had not downloaded the DB CLI Tools. I did that, and now its working fine.",
"username": "Hemant_Parmar"
}
] | Restoring Mongodb database in Windows by using dump taken from Ubuntu server | 2023-10-15T14:18:51.485Z | Restoring Mongodb database in Windows by using dump taken from Ubuntu server | 248 |
null | [
"java"
] | [
{
"code": "",
"text": "Hi guys, i’m beginner with MongoDB. I want to learn MongoDB with intellij IDEA for convenient Java",
"username": "Bao_Le"
},
{
"code": "",
"text": "Hello, welcome to the MongoDB community.See if this helps you.For IntellijFor VSCodeMongoDB for VS Code Extension allows you to connect to your MongoDB instance and enables you to interact in a way that fits into your native workflow and development tools.",
"username": "Samuel_84194"
},
{
"code": "",
"text": "thank you. But i just use IntelliJ community version.",
"username": "Bao_Le"
}
] | Is there any plugin in Intellij like the MongoDB for VS Code extension? | 2023-09-20T03:05:42.546Z | Is there any plugin in Intellij like the MongoDB for VS Code extension? | 362 |
null | [
"node-js",
"python",
"database-tools",
"backup",
"php"
] | [
{
"code": "mongodumpmongodumpType: Unknown, Last error: connection() error occurred during connection handshake: x509: certificate relies on legacy Common Name field, use SANs instead \nmongodump",
"text": "I’m using a self-signed certificate in development, and my PHP, Python, and Node.js code works fine using appropriate options in the MongoDB URI.However, mongodump is not happy. Various combinations of the same sort of options to mongodump yield errors such as:Is there any “quick fix” to make mongodump work, or do I need to regen my certificate?",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi @Jack_Woehr,\nI think you need to regen your certificate.Best regards",
"username": "Fabio_Ramohitaj"
},
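For anyone regenerating: a sketch with OpenSSL 1.1.1+ where the hostname and filenames are placeholders; the -addext flag adds the SAN that newer clients require:

openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout mongodb.key -out mongodb.crt \
  -subj "/CN=myhost.local" \
  -addext "subjectAltName=DNS:myhost.local,IP:127.0.0.1"

# mongod expects the key and certificate concatenated into one PEM file
cat mongodb.key mongodb.crt > mongodb.pem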
{
"code": "",
"text": "Yes, I thought that might be the answer … ",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "Hi @Jack_Woehr If you’re in a pinch you could try older versions of MongoDB Database Tools.Thank you for your interest in MongoDB Database Tools. You can download installation packages below. Please note that for those who are not yet MongoDB customers, download and use constitutes acceptance of the Customer Agreement.",
"username": "chris"
},
{
"code": "",
"text": "If you’re in a pinch you could try older versions of MongoDB Database Tools.Thanks, @chris , but that sounds like learning to limp on the other foot \nIf the rule is going forward that the certificate has to use SAN instead of Common Name, I might as well adapt.",
"username": "Jack_Woehr"
}
] | `mongodump` doesn't like my self-signed certificate | 2023-10-15T19:55:07.005Z | `mongodump` doesn’t like my self-signed certificate | 221 |
null | [
"compass"
] | [
{
"code": "",
"text": "I have try import the csv file size 7.00 MB, I have click select collection → ADD DATA → Import JSON or CSV file → select file in my PC → click selectthen nothing happendcan someone help me?",
"username": "L0UEFF_N_A"
},
{
"code": "",
"text": "Hi @L0UEFF_N_A,Welcome to the MongoDB Community forums Please share the screenshot of the MongoDB Compass to better understand the problem. Also, the version of MongoDB Compass you are using.Best,\nKushagra",
"username": "Kushagra_Kesav"
},
{
"code": "",
"text": "Are you using an M1 or M2 Mac? If so, export the CSV as a JSON and import that, it’s a compatibility bug that happens with Apple silicon and Compass.I’ve had this happen on and off myself.",
"username": "Brock"
},
{
"code": "",
"text": "Hi\nI am facing the same problem. I am using compass version1.40.3 for windows 10 64bit. when i try to load a csv file it keeps on loading forever and all the buttons are disabled.\nimage937×631 16.7 KB",
"username": "Muhammad_Nayyer_Qasim"
}
] | MongoDB Compass import csv file not working | 2023-03-16T03:56:09.729Z | MongoDB Compass import csv file not working | 1,447 |
null | [
"node-js",
"connecting",
"atlas-cluster",
"graphql"
] | [
{
"code": "throw new MongoParseError('Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\"');\n ^\n\nMongoParseError: Invalid scheme, expected connection string to start with \"mongodb://\" or \"mongodb+srv://\" ...\nconst { ApolloServer, gql } = require(\"apollo-server\");\n\nconst { MongoClient } = require(\"mongodb\");\n\nconst dotenv = require(\"dotenv\");\n\ndotenv.config();\n\nconst { DB_URI, DB_NAME } = process.env;\n\n// A schema is a collection of type definitions (hence \"typeDefs\")\n\n// that together define the \"shape\" of queries that are executed against\n\n// your data.\n\nconst typeDefs = gql`\n\n # Comments in GraphQL strings (such as this one) start with the hash (#) symbol.\n\n # This \"Book\" type defines the queryable fields for every book in our data source.\n\n type Book {\n\n title: String\n\n author: String\n\n }\n\n # The \"Query\" type is special: it lists all of the available queries that\n\n # clients can execute, along with the return type for each. In this\n\n # case, the \"books\" query returns an array of zero or more Books (defined above).\n\n type Query {\n\n books: [Book]\n\n }\n\n`;\n\nconst books = [\n\n {\n\n title: \"The Awakening\",\n\n author: \"Kate Chopin\",\n\n },\n\n {\n\n title: \"City of Glass\",\n\n author: \"Paul Auster\",\n\n },\n\n];\n\n// Resolvers define the technique for fetching the types defined in the\n\n// schema. This resolver retrieves books from the \"books\" array above.\n\nconst resolvers = {\n\n Query: {\n\n books: () => books,\n\n },\n\n};\n\nconst start = async () => {\n\n const client = new MongoClient(DB_URI, {\n\n useNewUrlParser: true,\n\n useUnifiedTopology: true,\n\n });\n\n await client.connect();\n\n const db = client.db(DB_NAME);\n\n // The ApolloServer constructor requires two parameters: your schema\n\n // definition and your set of resolvers.\n\n const server = new ApolloServer({\n\n typeDefs,\n\n resolvers,\n\n csrfPrevention: true,\n\n cache: \"bounded\",\n\n });\n\n // The `listen` method launches a web server.\n\n server.listen().then(({ url }) => {\n\n console.log(`🚀 Server ready at ${url}`);\n\n });\n\n};\n\nstart();\nDB_URI =\n\n \"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\";\n\nDB_NAME = Cluster0;\n",
"text": "I am the beginner in programming. Please help me out, dear experienced programmers, if you can.\nNow I am trying to do a simple To-do app. And I want to use there database. I am stuck for already 12 hours on the stage where it is needed to connect database.\nI have the following error after running the command “node index.js”:I have the next code in index.js:And the file .env:Thank you in advance! ",
"username": "olena_dunamiss"
},
{
"code": "DB_URIconsole.log(DB_URI)new MongoClient",
"text": "Hi @olena_dunamiss welcome to the community!So a big welcome to the coders club! What I gathered so far is that you’re trying to use Apollo GrapQL to connect to a MongoDB database, to create a todo list app. Is this correct?Could you provide some more details:If you’re just starting to code, I would suggest you to learn MongoDB in isolation first (without Apollo or GraphQL) by following MongoDB and Node.js Tutorial - CRUD Operations.Regarding MongoDB + Node, I would also suggest you take a look at the free MongoDB University courses: M001 MongoDB Basics and M220JS MongoDB for JavaScript Developers (although please note that M220JS assumes some familiarity with Javascript/Node).Best regards\nKevin",
"username": "kevinadi"
},
{
"code": "console.log(DB_URI)undefined\nC:\\Users\\Svetl\\test\\node_modules\\mongodb-connection-string-url\\lib\\index.js:9\n return (connectionString.startsWith('mongodb://') ||\n ^\n\nTypeError: Cannot read properties of undefined (reading 'startsWith')``",
"text": "Thank you so much for your response and the materials that you added!\nThe tutorial: Build a GraphQL API with NodeJS and MongoDB (Full-stack MERN Tutorial ) - YouTube\nI added console.log(DB_URI) and I have the following:",
"username": "olena_dunamiss"
},
{
"code": "",
"text": "did you install dotenv package, I almost had same error and I found out I need to install dotenv package so I can access to .env variables dotenv in npm registry",
"username": "Neck_Abdullah"
},
{
"code": "\"mongodb+srv://username:[email protected]/?retryWrites=true&w=majority\"\n",
"text": "I had this same issue and I resolved it by simply removing the “;” at the end of the connection string. So you connection string should just be:That is without the semi-colon at the end.",
"username": "Ubong_Udotai"
},
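For clarity, a sketch of the corrected .env file built from the values in this thread; dotenv treats a trailing semicolon as part of the value, which then breaks the scheme check:

DB_URI=mongodb+srv://username:[email protected]/?retryWrites=true&w=majority
DB_NAME=Cluster0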
{
"code": "",
"text": "At this moment, I believe I should just quit my job and become a chef… Thank you, kind sir.",
"username": "Adam_Machowczyk"
},
{
"code": "",
"text": "Lol… What are you currently working as btw… thanks Adam!",
"username": "Ubong_Udotai"
},
{
"code": "",
"text": "This was my problem when the mongoose.connect() in my index.js file would not read my process.env. value.Inside my .env file I had the semi-colon at the end as I usually do when writing JavaScript code:\nMONGO_URL=“mongodb+srv:…”;After removing the semi-colons in my .env files thing were finally working in index.js.Thank you for this solution.",
"username": "Brian_A"
},
{
"code": "",
"text": "I don’t know if you are still having this problem, but as one other person said you need to remove the semi-colon ( ; ) from your variables in your .env file. That should make the process.env work properly.",
"username": "Brian_A"
},
{
"code": "",
"text": "WTH!!! Thanks man. Can’t understand coding anymore. 2 full days trying to sort this…",
"username": "jnr_wadeya"
},
{
"code": "",
"text": "Awesome, I removed the ; from end of line of .env and it’s resolved. Thank you;",
"username": "amastaneh"
},
{
"code": "",
"text": "After five long hours of troubleshooting, I see this response. And Guess what happened.",
"username": "Tohirul_Islam"
},
{
"code": "",
"text": "After 14 hours of troubleshooting, I finally found the solution to my problem. I was able to find the solution to my problem by searching the documentation for “connection string”.See - https://www.mongodb.com/docs/atlas/troubleshoot-connection/#special-characters-in-connection-string-password",
"username": "T.R_Methu_N_A"
},
{
"code": "",
"text": "This solved my problem, too - thank you! Very glad I stumbled upon your solution.",
"username": "Natalie_Gillam"
}
] | Unable to connect db because of "throw new MongoParseError('Invalid scheme, expected connection string to start with "mongodb://" or "mongodb+srv://"');" | 2022-07-24T17:37:56.698Z | Unable to connect db because of “throw new MongoParseError(‘Invalid scheme, expected connection string to start with “mongodb://” or “mongodb+srv://”’);” | 30,358 |
[
"serverless",
"stockholm-mug"
] | [
{
"code": "AWS developer AdvocateSenior Consultant ICAIntegration Consultant",
"text": "2 Stockholm Event Promo Banner 16 Oct -MUG - Design Kit1920×1080 152 KBThe MongoDB MUG Stockholm Meetup is set to be a dynamic event, highlighting two engaging presentations. First up, AWS Developer Advocate Gunnar Grosch will delve into the intricacies of ‘Building a Serverless Application with MongoDB Atlas on AWS.’ Following that, we have Egil Sonesson, a Senior Consultant at ICA, who will talk about ‘How to Move Faster with MongoDB.’ These talks are sure to be both insightful and informative, making this event an absolute must-attend for anyone interested in MongoDB and AWS.Our event kicks off with a 30-minute networking session, providing you with the opportunity to connect and engage with fellow attendees. Following this, we’ll commence with the first talk by Egil, which will run for approximately 25 minutes. After his presentation, there will be an interactive Q&A session to delve deeper into the topics.To fuel your appetite for knowledge and camaraderie, we’ll then take a 30-minute break, offering delicious pizza, refreshing beverages, and further networking opportunities.The spotlight will then shift to Gunnar Grosch, who will deliver a hands-on demo for an engaging 45 minutes, offering practical insights into MongoDB Atlas on AWS.Lastly, we’ll keep the excitement alive with a trivia session and swag giveaways, providing a fun and informative way to wrap up the event. To RSVP - Kindly click on the “✓ RSVP” link located at the top of this event page if you’re planning to attend. The link should change to a green button once you’ve successfully RSVPd. Make sure you’re signed in to access the button and secure your spot for the event!Event Type: In-Person\nLocation: 7A Posthuset, Vasagatan 28, Stockholm 11120AWS developer AdvocateGunnar has been one of the driving forces in creating techniques and tools for using chaos engineering in serverless. He regularly and passionately speaks at events on these and other serverless topics around the world.Senior Consultant ICAEgil is an IT consultant with 25 years of experience, positions include developer, architect, technical test analyst and team lead.Integration ConsultantThis is Claire’s first event as MUG Leader and she is very excited about organizing events for the MongoDB community. Contact Claire for more information about the event.",
"username": "Claire_Hardman"
},
{
"code": "",
"text": "Hey Everyone,Thank you for confirming to attend the event tomorrow the October 16th at 05:30 PM. This is your chance to dive into the world of MongoDB Atlas, discover what’s new in MongoDB 7.0, and play some fun games to win some swag. We are thrilled to have you join us.Address: Room Number 306, 7A Posthuset, Vasagatan 28, Stockholm 11120 We want to make sure everyone has a fantastic time, so please join us at 05:30 PM to ensure you don’t miss any of the sessions. We can also have some time to chat before the talks begin.If you have any questions, please don’t hesitate to ask by replying to this Looking forward to seeing you all at the event!",
"username": "Harshit"
}
] | Stockholm MongoDB Meetup: What’s new in the MongoDB Developer Platform | 2023-09-22T14:37:46.804Z | Stockholm MongoDB Meetup: What’s new in the MongoDB Developer Platform | 992 |
null | [
"rust"
] | [
{
"code": "let filter_options = doc!{\"_id\": id};\nlet projection = doc! {\"password\": 0};\nlet opts = FindOneOptions::builder().projection(projection).build();\nlet user = self\n .collection\n .find_one(filter_options, opts)\n .await\n .unwrap();\nError { kind: BsonDeserialization(DeserializationError { message: \"missing field \" })}db.user.findOne({_id: ObjectId('652b9ae383480f1ae820d6d5')}, {password: 0})",
"text": "I need to get the user object from the db without the password fieldI am getting the following error\nError { kind: BsonDeserialization(DeserializationError { message: \"missing field password\" })}In the shell all work correctly\ndb.user.findOne({_id: ObjectId('652b9ae383480f1ae820d6d5')}, {password: 0})",
"username": "Sasha_Zoria"
},
{
"code": "$ifNull:",
"text": "Maybe you need to add some $ifNull: clause to your projection, if that’s possible in Swift.",
"username": "Jack_Woehr"
}
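A sketch of the usual Rust-side fix: make the projected-out field optional so serde can deserialize documents that omit it. The struct and field names follow the thread; other fields are elided for brevity:

use serde::{Deserialize, Serialize};

#[derive(Debug, Serialize, Deserialize)]
struct User {
    unique_id: String,
    // Option deserializes to None when the projection drops the field,
    // instead of failing with "missing field `password`".
    password: Option<String>,
}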
] | collection.findOne with projection option return error | 2023-10-15T09:02:47.822Z | collection.findOne with projection option return error | 206 |
null | [
"replication",
"mongodb-shell",
"containers"
] | [
{
"code": "docker run -d -p 27017:27017 --name mongo1 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo1docker run -d -p 27018:27017 --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo2docker run -d -p 27019:27017 --name mongo3 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo3\ndocker exec -it mongo1 mongosh --eval \"rs.initiate({\n\n_id: \\\"myReplicaSetName\\\",\n\nmembers: [\n\n{_id: 0, host: \\\"mongo1\\\"},\n\n{_id: 1, host: \\\"mongo2\\\"},\n\n{_id: 2, host: \\\"mongo3\\\"}\n\n]\n\n})\"\n\n/etc/host\n127.0.0.1 mongo1\n\n127.0.0.1 mongo2\n\n127.0.0.1 mongo3\n\ndocker exec -it mongo1 mongosh --eval \"rs.status()\"\nmongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=myReplicaSetName\n\nUnable to connect: Server selection timed out after 30000 msUnable to connect: Server selection timed out after 30000 msmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000mongodb://127.0.0.1:27018mongodb://127.0.0.1:27019",
"text": "Hi, I followed Deploying A MongoDB Cluster With Docker | MongoDB to create a local replica set – which I need for prisma.1.docker run -d -p 27017:27017 --name mongo1 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo1docker run -d -p 27018:27017 --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo2docker run -d -p 27019:27017 --name mongo3 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo3Edit /etc/host and append the following:You can run docker exec -it mongo1 mongosh --eval \"rs.status()\".Connect through one of the following:I was able to connect to it when I first created the replica set, BUT when I restart my computer and then try to connect to it again, the connection will timeout.I am unable to connect to it through the connection string I pasted above. I’m getting Unable to connect: Server selection timed out after 30000 msI am able to connect to the replica set ONLY AFTER creating it. When I restart my computer, I will get Unable to connect: Server selection timed out after 30000 msI am able to connect through direct connection string mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000 – but as far as I understood, this is not connecting to the replica set. Thisworks also for mongodb://127.0.0.1:27018 and mongodb://127.0.0.1:27019I am using MongoDB for VS Code - Visual Studio Marketplace to connect to it, but my nestjs backend is also unable to connect to it, so this might not be about it.I’ve read other related topics but I can’t fix it still. I am here asking for help.",
"username": "April_Pineda"
},
{
"code": "Current Mongosh Log ID:\t652b5d5d4c291f3fae6c9c71\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.0.1\nUsing MongoDB:\t\t7.0.2\nUsing Mongosh:\t\t2.0.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n 2023-10-15T02:57:26.046+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n 2023-10-15T02:57:29.189+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2023-10-15T02:57:29.190+00:00: vm.max_map_count is too low\n------\n\n{\n set: 'myReplicaSetName',\n date: ISODate(\"2023-10-15T03:32:45.931Z\"),\n myState: 2,\n term: Long(\"3\"),\n syncSourceHost: 'mongo2:27017',\n syncSourceId: 1,\n heartbeatIntervalMillis: Long(\"2000\"),\n majorityVoteCount: 2,\n writeMajorityCount: 2,\n votingMembersCount: 3,\n writableVotingMembersCount: 3,\n optimes: {\n lastCommittedOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n lastCommittedWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n readConcernMajorityOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n appliedOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n durableOpTime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n lastAppliedWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n lastDurableWallTime: ISODate(\"2023-10-15T03:32:40.088Z\")\n },\n lastStableRecoveryTimestamp: Timestamp({ t: 1697340740, i: 1 }),\n electionParticipantMetrics: {\n votedForCandidate: true,\n electionTerm: Long(\"3\"),\n lastVoteDate: ISODate(\"2023-10-15T02:57:39.872Z\"),\n electionCandidateMemberId: 1,\n voteReason: '',\n lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1697284799, i: 1 }), t: Long(\"2\") },\n maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1697284799, i: 1 }), t: Long(\"2\") },\n priorityAtElection: 1,\n newTermStartDate: ISODate(\"2023-10-15T02:57:39.888Z\"),\n newTermAppliedDate: ISODate(\"2023-10-15T02:57:39.913Z\")\n },\n members: [\n {\n _id: 0,\n name: 'mongo1:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 2120,\n optime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n optimeDate: ISODate(\"2023-10-15T03:32:40.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n lastDurableWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n syncSourceHost: 'mongo2:27017',\n syncSourceId: 1,\n infoMessage: '',\n configVersion: 1,\n configTerm: 3,\n self: true,\n lastHeartbeatMessage: ''\n },\n {\n _id: 1,\n name: 'mongo2:27017',\n health: 1,\n state: 1,\n stateStr: 'PRIMARY',\n uptime: 2116,\n optime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n optimeDurable: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n optimeDate: ISODate(\"2023-10-15T03:32:40.000Z\"),\n optimeDurableDate: ISODate(\"2023-10-15T03:32:40.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n lastDurableWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n lastHeartbeat: ISODate(\"2023-10-15T03:32:44.335Z\"),\n lastHeartbeatRecv: ISODate(\"2023-10-15T03:32:44.147Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: '',\n syncSourceId: -1,\n infoMessage: '',\n electionTime: Timestamp({ t: 1697338659, i: 1 }),\n electionDate: ISODate(\"2023-10-15T02:57:39.000Z\"),\n configVersion: 1,\n 
configTerm: 3\n },\n {\n _id: 2,\n name: 'mongo3:27017',\n health: 1,\n state: 2,\n stateStr: 'SECONDARY',\n uptime: 2116,\n optime: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n optimeDurable: { ts: Timestamp({ t: 1697340760, i: 1 }), t: Long(\"3\") },\n optimeDate: ISODate(\"2023-10-15T03:32:40.000Z\"),\n optimeDurableDate: ISODate(\"2023-10-15T03:32:40.000Z\"),\n lastAppliedWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n lastDurableWallTime: ISODate(\"2023-10-15T03:32:40.088Z\"),\n lastHeartbeat: ISODate(\"2023-10-15T03:32:45.828Z\"),\n lastHeartbeatRecv: ISODate(\"2023-10-15T03:32:44.688Z\"),\n pingMs: Long(\"0\"),\n lastHeartbeatMessage: '',\n syncSourceHost: 'mongo2:27017',\n syncSourceId: 1,\n infoMessage: '',\n configVersion: 1,\n configTerm: 3\n }\n ],\n ok: 1,\n '$clusterTime': {\n clusterTime: Timestamp({ t: 1697340760, i: 1 }),\n signature: {\n hash: Binary.createFromBase64(\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\", 0),\n keyId: Long(\"0\")\n }\n },\n operationTime: Timestamp({ t: 1697340760, i: 1 })\n}\nCurrent Mongosh Log ID:\t652b5fc0f0f4fded3c2eb5c3\nConnecting to:\t\tmongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.0.1\nUsing MongoDB:\t\t7.0.2\nUsing Mongosh:\t\t2.0.1\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n------\n The server generated these startup warnings when booting\n 2023-10-15T02:57:26.057+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem\n 2023-10-15T02:57:29.224+00:00: Access control is not enabled for the database. Read and write access to data and configuration is unrestricted\n 2023-10-15T02:57:29.224+00:00: vm.max_map_count is too low\n------\n\n{\n _id: 'myReplicaSetName',\n version: 1,\n term: 3,\n members: [\n {\n _id: 0,\n host: 'mongo1:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 1,\n host: 'mongo2:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n },\n {\n _id: 2,\n host: 'mongo3:27017',\n arbiterOnly: false,\n buildIndexes: true,\n hidden: false,\n priority: 1,\n tags: {},\n secondaryDelaySecs: Long(\"0\"),\n votes: 1\n }\n ],\n protocolVersion: Long(\"1\"),\n writeConcernMajorityJournalDefault: true,\n settings: {\n chainingAllowed: true,\n heartbeatIntervalMillis: 2000,\n heartbeatTimeoutSecs: 10,\n electionTimeoutMillis: 10000,\n catchUpTimeoutMillis: -1,\n catchUpTakeoverDelayMillis: 30000,\n getLastErrorModes: {},\n getLastErrorDefaults: { w: 1, wtimeout: 0 },\n replicaSetId: ObjectId(\"6529f995dd3d2ac1a3eedb38\")\n }\n}\n",
"text": "",
"username": "April_Pineda"
},
{
"code": "-p 127.0.0.1:27017:27017-p 127.0.0.2:27017:27017-p 127.0.0.3:27017:27017127.0.0.1 mongo1\n127.0.0.2 mongo2\n127.0.0.3 mongo3\n",
"text": "You can’t port forward like this to a replica set. The hosts:ports that are in the rs.conf are what the client will connect to once the topology is discovered from the first seed that is connected to.So the client will actually try to connect to mogno1:27017\nmongo2:27107 and mongo3:27017I would restart the containers binding each one to a different ip on 27017.-p 127.0.0.1:27017:27017\n-p 127.0.0.2:27017:27017\n-p 127.0.0.3:27017:27017And update the hosts file:",
"username": "chris"
}
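Spelling that suggestion out as full commands, reusing the names from the question (Linux loopback behavior assumed; other OSes may need loopback aliases, and note the file is /etc/hosts, not /etc/host):

docker run -d -p 127.0.0.1:27017:27017 --name mongo1 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo1
docker run -d -p 127.0.0.2:27017:27017 --name mongo2 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo2
docker run -d -p 127.0.0.3:27017:27017 --name mongo3 --network mongoCluster mongo:latest mongod --replSet myReplicaSetName --bind_ip localhost,mongo3

# /etc/hosts
# 127.0.0.1 mongo1
# 127.0.0.2 mongo2
# 127.0.0.3 mongo3

# then connect with:
# mongodb://mongo1:27017,mongo2:27017,mongo3:27017/?replicaSet=myReplicaSetName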
] | Unable to connect to local mongodb replica set after restarting my PC | 2023-10-15T03:48:12.997Z | Unable to connect to local mongodb replica set after restarting my PC | 229 |
null | [] | [
{
"code": "var realmConfig = Realm.Configuration.defaultConfiguration\nrealmConfig.fileURL!.deleteLastPathComponent()\nrealmConfig.fileURL!.appendPathComponent(\"debug\")\nrealmConfig.fileURL!.appendPathExtension(\"realm\")\nrealm = try! Realm(configuration: realmConfig)\nprint(realm.configuration.fileURL!)\n",
"text": "I’m trying to have two different realms, a “debug.realm” file, and a “default.realm” one. The “debug” should be using while I develop, and the “default” for the release version of the app.Following the instructions on the docs, I have this code:Now I go to the Application Support folder and delete all the realm files, just in case. That is:I run the app and I can see printed on the logs the file path as “file:///Users/…/Data/Library/Application%20Support/debug.realm”, which is the expected file.However, in finder I see that both “debug.realm” and “default.realm” have been created. Using realm studio I can confirm that making any changes is only modifying “default.realm”.I’m sure I’m missing something. What’s going on?",
"username": "Daniel_Gonzalez_Reina"
},
{
"code": "realm = try! Realm(configuration: realmConfig)class RealmService {\n private init() {}\n\n static let shared = RealmService()\n\n lazy var realm: Realm = {\n let manager = FileManager()\n let homeURL = manager.homeDirectoryForCurrentUser\n let desktopURL = homeURL.appendingPathComponent(\"Desktop\")\n let pathToThisRealmFile = desktopURL.appendingPathComponent(\"desktopRealm.realm\")\n\n var config = Realm.Configuration.defaultConfiguration\n config.fileURL = pathToThisRealmFile\n let realm = try! Realm.init(configuration: config)\n return realm\n }()\n}\nlet realm = RealmService.shared.realmfunc gGetRealm() -> Realm? {\n do {\n let manager = FileManager()\n let homeFolder = manager.homeDirectoryForCurrentUser\n let pathToThisFolder = homeFolder.appendingPathComponent(\"Dropbox/Development/My Realm Project\")\n let pathToThisRealmFile = pathToThisFolder.appendingPathComponent(\"default.realm\")\n\n var config = Realm.Configuration.defaultConfiguration\n config.fileURL = pathToThisRealmFile\n let realm = try Realm.init(configuration: config)\n return realm\n\n } catch let error as NSError {\n print(\"Error!\")\n print(\" \" + error.localizedDescription + \" \" + String(error.code) )\n return nil\n }\n}\nif let maybeRealm = gGetRealm() {\n //do something with maybeRealm\n}\nRealm.init(....",
"text": "Touching Realm before setting up a config for it will cause it to create the default.realm file and then after you set up the config, create the debug.realm file.How is realm instantiated or used before this line?realm = try! Realm(configuration: realmConfig)Are you using it anywhere else? Did you define a migration or attempt to do anything with it before that line?Best practice is thisvar realm = try! Realm()\nvar realmConfig = Realm.Configuration.defaultConfiguration\n…\nrealm = try! Realm(configuration: realmConfig)or, go with a singleton shared service approach - this uses a realm file on the desktopthen to uselet realm = RealmService.shared.realmor the old classic global function (I start my global functions with a 'g\"). This uses a Realm in My Dropbox/Development folder inside the My Realm Project folder.In any of the above, note that nothing is being done with Realm until it’s ready to be used. '=Realm.init(....",
"username": "Jay"
},
{
"code": "Realm()",
"text": "Thanks for your reply! I changed my code to use the singleton approach and it’s working great.I was indeed calling Realm() twice. The first time in the App’s init(), and in a service top level declaration.The change you suggested makes sure it’s called with the right config since the app starts.Thanks again!",
"username": "Daniel_Gonzalez_Reina"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't use anything other than default.realm with SwiftUI | 2023-10-14T14:10:29.495Z | Can’t use anything other than default.realm with SwiftUI | 262 |
[] | [
{
"code": "",
"text": "See hereMongoDB is announcing today that we’re renaming Realm to MongoDB Atlas Device SDKs. Read on to learn more.An interesting change.Inherently, Realm is an offline first database which is a different use case then Atlas - which we feel justified the distinction. So now Atlas is an offline/online first database? Sounds like an identity crisis.Is there something more too this or just an effort to unify the platform?This feels like the Parse scenario from a few years ago which is why we ask.Jay\n(just being a bit dubious)",
"username": "Jay"
},
{
"code": "",
"text": "hey @Jay - we are aligning our names to make it clear to the external market and our internal team what the SDK is ideally intended for - to sync data with Atlas. Not many users have as much experience with the product as you do so they may not have as much clarity on how the SDK works with Atlas. We hope this makes it more clear. We still intend to continue investing in the SDKs and iterating on the feature set with new releases. The SDKs can be used with or without sync but of course we hope they use sync!",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hi Ian, from a practical point of view… did anything change? Do I need to do anything?Thanks!",
"username": "varyamereon"
},
{
"code": "",
"text": "Nothing has changed from a practical point of view. You do not need to do anything",
"username": "Ian_Ward"
},
{
"code": "let realm = try! Realm()let atlas = try! Atlas()",
"text": "Is there a timeline when the word “Realm” will be sunsetted from the documentation and replaced with the word “Atlas Device SDK”?Also is there a future plan to rename code level calls eg let realm = try! Realm() to let atlas = try! Atlas()Just trying to get clarity on the path.",
"username": "Jay"
},
{
"code": "",
"text": "There are no plans to change the realm name in code right now so it will still be part of the docs for the foreseeable future. We will look to update the docs over the next quarter or so to explain the new high-level product name and how realm still serves as the storage part for the SDK.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm is Now Atlas Device SDKs | 2023-09-26T18:22:32.946Z | Realm is Now Atlas Device SDKs | 491 |
|
null | [
"aggregation",
"atlas-search",
"php"
] | [
{
"code": "...\n{\n\n range: {\n\n path: \"startDate\",\n\n gte: new Date(\"2022-02-23T00:00:00+00:00\"),\n\n },\n\n },\n....\nOf course, this won't work.\n['gte' => new Date(\"2022-02-23T00:00:00+00:00\")]\n{\n range: {\n path: \"startDate\",\n gte: {\n \t\t\t\"$date\": \"2022-12-13T10:30:00.000Z\"\n \t\t} \n ,\n },\n },\ncompound.must[2].range.gte.typenew Date()ISODate()",
"text": "Hi,We’ve got a project using some PHP code with the PHP MongoDB Driver.\nWe’ve been trying to use $search with a range query over dates.\nLike this:This works fine in Composer.\nHowever, the php driver’s aggregate command only takes a PHP Array for the query.The PHP BSON UTC Date Time doesn’t work either.\nIn fact, if we put what it generates into Composer, it doesn’t like it too.For example, with the generated BSON value, the query is this:But the error is compound.must[2].range.gte.type is required.\nIt looks like the range.gte doesn’t understand this BSON Type, but it does work with new Date() and new ISODate()Anyone come across this before?\nHow to specify a MongoDB ‘new Date()’ when using the PHP driver?",
"username": "Gav_Grayston"
},
{
"code": "Date()ISODate()$date0x090x11Date()ISODate()createFromFormat()ISODate()",
"text": "In the MongoDB shell, Date() and ISODate() can be used to construct date objects (see: Date() in the shell documentation); however, those functions are unrelated to PHP or its driver. In a subsequent example, I see you used $date, which appears to be Extended JSON syntax (also unrelated to PHP).To clear up any misunderstanding, I would suggest you start by familiarizing yourself with the BSON Types in MongoDB. There is a particular BSON type (0x09) that corresponds to a UTC date time value, which is not to be confused with the Timestamp BSON type (0x11), which is used internally for things like replication.In the MongoDB shell, Date() and ISODate() can both be used to construct JavaScript objects that will encode as UTC date times in BSON. In the PHP driver, the MongoDB\\BSON\\UTCDateTime object serves that purpose.Independent of the MongoDB driver, PHP itself has a DateTime object, which represents a date with a timezone. Since the MongoDB date type assumes a UTC time zone, the PHP driver allows you to construct a UTCDateTime object from a PHP DateTime instance; however, the driver will not automatically convert a PHP DateTime into a BSON date for you. If given a PHP DateTime, the standard rules for encoding PHP objects to BSON will apply (i.e. only its public properties will be saved as document fields).How to specify a MongoDB ‘new Date()’ when using the PHP driver?Please review the examples on MongoDB\\BSON\\UTCDateTime::__construct(), which demonstrate constructing a UTCDateTime instances from an integer, DateTime object, and without arguments.If you are starting from a date string value, then you’ll always want to review DateTime::__construct() or createFromFormat(). Since the PHP driver allows construction of a UTCDateTime from a PHP DateTime, it does not reimplement the logic for parsing a date from a string.This works fine in Composer.Note that Composer is a package manager. I’m not sure if you meant to refer to the MongoDB PHP Library, which is distributed as a Composer package, but it helps to use to correct terms to avoid any misunderstanding.Lastly, my replies in mongodb/mongo-php-driver#187 may also be helpful to clear up some misunderstandings. That’s an old issue, but it pertains to some users attempting to use ISODate() and JSON syntax with the PHP driver.",
"username": "jmikola"
},
{
"code": "$aggregate = [\n [ \"$search\" => [\n \"index\" => \"default\",\n \"range\" => [\n \"path\" => \"startDate\",\n \"gte\" => new MongoDB\\BSON\\UTCDateTime($search_date->getTimestamp())\n ]\n ]\n ]\n]\nrange.gte.type is requiredgteMongoDB\\BSON\\UTCDateTimegte",
"text": "Thanks for your reply, but it is the MongoDB\\BSON\\UTCDateTime class that is creating the extended json, which the range query does not like.Unfortunately I don’t have access to the code the developer is having problems with right now.\nBut the aggregation pipeline being passed into the driver is something like this:This causes range.gte.type is required error.We have tried all formats of dates passed into the gte.\nIt is failing to recognise MongoDB\\BSON\\UTCDateTime as a BSON date.When we dump out the aggregate pipeline array, that’s where we see the EJSON.\nHowever, if the PHP driver is just sending EJSON over to the gte statement, then it’s not working with it.",
"username": "Gav_Grayston"
},
{
"code": "",
"text": "This works fine in Composer.Typo while multi-tasking. It’s meant to say Compass.",
"username": "Gav_Grayston"
},
{
"code": "",
"text": "Update from this morning after talking to the developer.\nNo exception is being thrown from the mongodb php driver. However, the pipeline is ignoring the range query.",
"username": "Gav_Grayston"
},
{
"code": "MongoDB\\BSON\\UTCDateTime",
"text": "MongoDB\\BSON\\UTCDateTime takes milliseconds as parameter while getTimestamp() will return value in seconds. Can you try updating the parameter being passed to MongoDB\\BSON\\UTCDateTime - making sure it is in milliseconds?",
"username": "Rishabh_Bisht"
},
{
"code": "",
"text": "Thanks for all the replies.\nWe found that the developer was not passing the array into the driver. Instead, he was JSON encoding it, so of course, UTCDateTime output the EJSON, which the gte does not support in the $search range statement.With that resolved, we still have a problem with one date field. However, we have a MongoDB consultant now on site to help.",
"username": "Gav_Grayston"
}
] | PHP Driver: Problems with $search date range queries | 2023-10-11T13:40:31.160Z | PHP Driver: Problems with $search date range queries | 352 |
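For comparison, the same pitfall is easy to avoid in the Node.js driver: pass the pipeline as a native object and use a JavaScript Date, which the driver serializes to a BSON date. JSON-encoding the pipeline first would reproduce the Extended JSON problem from this thread. A minimal sketch; the connection string, database, collection, and field names are assumptions for illustration:

const { MongoClient } = require("mongodb");

async function run() {
  const client = new MongoClient(process.env.MONGODB_URI); // assumed env var
  try {
    const pipeline = [
      {
        $search: {
          index: "default",
          range: {
            path: "startDate",
            gte: new Date("2022-02-23T00:00:00Z"), // native Date encodes as a BSON date
          },
        },
      },
    ];
    // the pipeline is passed as an object, never as a JSON string
    const docs = await client.db("test").collection("events").aggregate(pipeline).toArray();
    console.log(docs.length);
  } finally {
    await client.close();
  }
}

run().catch(console.error);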
[] | [
{
"code": "",
"text": "mongo1257×602 105 KBwhat is this error? it happened when I 'am trying to create my first cluster",
"username": "MOHAMMED_SHAMAN"
},
{
"code": "",
"text": "Hey @MOHAMMED_SHAMAN,It seems that you are running on an unsupported OS. Could you please confirm the operating system you are using and refer to the Supported OS for Local Atlas Deployments for more information.Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Error caused when creating cluster | 2023-10-14T08:49:23.249Z | Error caused when creating cluster | 177 |
|
null | [
"atlas-functions",
"app-services-user-auth"
] | [
{
"code": "",
"text": "Using realm functions i have access to the context which include services like mongodb atlas etc, what about the actual realm-sdk, surely it should no? or do i have to install the realm-sdk as a dependency?i want to create a user using realm credentials.thanks.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "@Rishi_uttam : Can you please elaborate the problem/user-case you are trying to solve for.",
"username": "Mohit_Sharma"
},
{
"code": "",
"text": "Hi Mohit.The use caseUsing realm 3rd party http functions, i want to use realm auth, i.e. create a realm user/ delete user with email/password. I can do this in node if i add a the realm sdk package. Now in the realm function do i need to install the realm sdk? it seems i cannot install this dependency as it exceeds the realm limits szie.I assume however that realm functions have access to realm functions via the context object but there isnt any documentation to show this is true.\npls let me know if i am still not clear.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "I’m searching for the same thing because I want to be able to create user using realm functions instead of using SDK. @Rishi_uttam Have you had a chance to do it ?",
"username": "Mohammed_Ramadan"
},
{
"code": "",
"text": "I’m serarching for the same thing also.\nI’d like to authenticate using realm functions. I’m not using SDKs",
"username": "Carlos_Emilio_Pereira"
},
{
"code": "",
"text": "It looks like realm itself has been discontinued, although some references are scattered throughout the documentation. I am not sure if there is much development on the Auth as a server side unfortunately. ( i dont work for mongodb) hoping someone can chime in.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Hi @Rishi_uttam,It looks like realm itself has been discontinuedThis isn’t the case: Realm has just changed name, its development is still going on.And, to answer the original question, it’s always been possible to authenticate a user exclusively via HTTPS endpoints, still the Web SDK has been the suggested solution, as it’s quite lightweight, and takes care of the additional logic. It’s pure Javascript, so it’s easy to look at the endpoints in its source code, if you really want to mimic its behaviour without including it in your apps.",
"username": "Paolo_Manna"
}
] | Does realm functions have access to realm credentials in context? | 2021-09-25T12:24:03.510Z | Does realm functions have access to realm credentials in context? | 3,286 |
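One workaround that comes up for this use case, not confirmed anywhere in the thread: an Atlas Function cannot bundle the full Realm SDK, but it can call the client registration endpoint of the email/password auth provider over HTTP via context.http. A hedged sketch; the app ID is a placeholder, and the endpoint path is an assumption based on the client API routes visible in the Web SDK source:

exports = async function (email, password) {
  // Hypothetical app ID; replace with your own App Services app ID.
  const appId = "myapp-abcde";
  const url = "https://realm.mongodb.com/api/client/v2.0/app/" + appId +
    "/auth/providers/local-userpass/register";

  // context.http serializes the body object as JSON with encodeBodyAsJSON.
  const response = await context.http.post({
    url: url,
    body: { email: email, password: password },
    encodeBodyAsJSON: true,
  });

  // A 2xx status means the registration request was accepted.
  return { statusCode: response.statusCode };
};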
null | [] | [
{
"code": "",
"text": "Hi, the community,I’m developing a small web app that can store my movies watch list. I’m storing the data to the browser’s local storage and going to make it online so that I can access it from anywhere.Because the application is really small, I don’t want to set up a back-end between the web app and database cloud. So as the title, could I connect my front-end directly to the Atlas without using any back-end? Or could I just call APIs to CRUD data in Atlas directly from my clients?Thank you,",
"username": "IM_Coder"
},
{
"code": "",
"text": "Hi @IM_Coder,Welcome to MongoDB community.For this use case a Realm application with a realm-web sdk is the perfect solution.https://docs.mongodb.com/realm/get-started/introduction-webThis is the most easy and optimized way to focus on your front-end app while having an elastic managed backend. Realm apps have a generous free tier therefore you should be good.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "MongoDB just launched the Atlas Data API, which allows you to perform CRUD operations on your Atlas data through simple HTTP requests https://www.mongodb.com/docs/atlas/api/data-api/",
"username": "Drew_Beckmen"
},
{
"code": "",
"text": "Using the data api is it secure enough to put the end points on the front end? for writing to database?",
"username": "Rishi_uttam"
},
{
"code": "curl",
"text": "putting end-points to the front end? unless it is “read-only”, that would mean anyone would have access to your database and result in havoc.For read-only purposes, this direct connection is great. you would just be working on a functional/dynamic web content such a stock-market following.But for write access, it is a whole lot of story. Security is the main concept here and you would not want free access to your database. The usual way is to have your own back-end API to communicate with your database and so keep your database credentials secure (as much possible as your host settings allows).Using Realm or Data API is best to use with your IoT devices as they mostly don’t have enough memory to put whole drivers. They can communicate with basic TCP requests to write to the database. As long as you keep those devices in safe places, you can have as many as you want to write to the database. or at least give them access to a very limited resource.Or you may do the same base access from the terminal anytime with tools like good old curl. Or write PoC API fast without going into driver details (Javascript or Python is great for prototyping). Same applies to front-end; for PoC purposes create temporary access points.",
"username": "Yilmaz_Durmaz"
},
{
"code": "",
"text": "Thank you for that detailed reply. That all makes sense and was how I thought the data api would work on the front end. My goal in finding a new way to connect to my database was speed more than anything else.The other option is to use the realm-web sdk and this would solve my problem of calling the db from the front end. However its a big js bundle as I learned in one of my previous projects. Downloading realm 132kb unminified just to establish a connection and send data/authenticate is a bit much.When i tried the data api, this is what I found:\nI used Atlas 3rd party http triggers as a proxy to call the DATA API, but under my tests in the past it takes about 3-4 seconds to return a response .The whole cycle using the data api takes about 2.5 seconds on the quick side.Realm SDK is much faster in my tests but the bundle is quite big.So I am going back to what I did in the past which is simply use AWS lambda HK region running the official mongo db driver, which connects directly to my database. When the function is warm i get results in under 50 ms. – I wish this was the same with the data api, but its up to 10x as long. I hope Mongo Atlas can figure this out. Even calling a cloudflare edge worker still results in the same delay when calling the data api (as its still in preview, the data api end points are only in a few regions)Sorry i went on a tangent, but duration and bundle size is our current problem, surely writing less backend code would come in super handy. Currently we using realm-web sdk for larger projects with good internet connections, but for smaller projects with mobile connections realm sdk is too much weight.Anyway thanks for your help,.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "This post was flagged by the community and is temporarily hidden.",
"username": "Rishi_uttam"
}
] | Could I connect my front-end directly to Atlas? | 2020-12-20T20:02:18.134Z | Could I connect my front-end directly to Atlas? | 9,877 |
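For reference, the realm-web flow discussed above is roughly the sketch below; the app ID, database, and collection names are placeholders, and anonymous authentication is assumed to be enabled on the app. Bundle size aside, this is all it takes to read Atlas data from the browser:

import * as Realm from "realm-web";

const app = new Realm.App({ id: "myapp-abcde" }); // hypothetical app ID

async function listMovies() {
  // Authenticate first; collection rules still govern what this user can read.
  const user = await app.logIn(Realm.Credentials.anonymous());
  const movies = user
    .mongoClient("mongodb-atlas") // default linked data source name
    .db("watchlist")
    .collection("movies");
  return movies.find({});
}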
null | [
"graphql"
] | [
{
"code": "www.MYDOMAIN.com\n(which links to MYAPP.mongodbstitch.com)\nhttps://realm.mongodb.com/api/client/v2.0/app/MYAPP/auth/providers/.../login\nhttps://realm.mongodb.com/api/client/v2.0/app/MYAPP/graphql\nhttps://www.MYDOMAIN.com/auth/providers/.../login\nhttps://www.MYDOMAIN.com/graphql\nhttps://API.MYDOMAIN.com/auth/providers/.../login\nhttps://API.MYDOMAIN.com/graphql\nhttps://www.MYDOMAIN.NET/auth/providers/.../login\nhttps://www.MYDOMAIN.NET/graphql\n",
"text": "Hi allAs per the instructions here I know it is possible to setup a Mongo Realm app, setup static hosting on the mongodbstitch domain and then link a custom domain name so that users see my app hosted at:My question:In order to fully “brand” my app is it also possible to link the same or another custom domain to my exposed GraphQL endpoint, so that instead of seeing the default mongodb.com domains for auth and endpoint:the GraphQL auth and endpoint would be something like one of the following:1 - The same custom domain but expose a custom path (would also have to work with SPA app and be ignored by the SPA routing):2 - Use a different subdomain to provide both the auth and endpoint:3 - Use a different but related domain for the auth and endpoint:Thank you",
"username": "mba_cat"
},
{
"code": "",
"text": "Not currently but it has been requested on our feedback portal here - Ability to expose API via our own custom DNS entry – MongoDB Feedback EngineIt may be addressed in a future initiative that looks to overhaul our exposed data api’s - stay tuned",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks Ian is there a timeframe for the overhaul and if I develop against the existing data api will there be a way to migrate to the new api when they are ready?",
"username": "mba_cat"
},
{
"code": "",
"text": "Hey @mba_cat I can’t give you good timeline on this because we’re still in the planning phase, but I can post on this thread with any updates. If you’d like to give more specific feedback around GraphQL/APIs on Realm and anything you’d like to see in the service, you can shoot me an email at [email protected]",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Having a custom domain for realm http triggers is important for maintaining a good branded experience. users see the task bar when calls are made to third party services, best to be on a sub domain i.e. api.xxxx.com. Lambda and all other serverless function services allow for this, sadly we cant move to realm functions unless this happenes.",
"username": "Rishi_uttam"
},
{
"code": "",
"text": "Hi @Sumedha_Mehta1 has the planning phase completed ? Any timelines on when this will be available ?",
"username": "V_P"
},
{
"code": "",
"text": "Hi all, bumping this thread as it seems there will be quite a few reverse proxies out there to handle this need. It would be great to understand if this is planned for or if we need to plan for a workaround with a reverse proxy ourselves. Cheers",
"username": "Andy_O_Connor"
},
{
"code": "",
"text": "Hi Andy, would be great if you can document how you did this with a reverse proxy?",
"username": "Rishi_uttam"
}
] | Using custom domain with graphql endpoint | 2021-05-14T16:41:51.733Z | Using custom domain with graphql endpoint | 6,157 |
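As a sketch of the reverse-proxy workaround floated above, a minimal Cloudflare Worker can forward requests from a custom domain to the Realm client endpoints. The app ID is a placeholder, and in practice CORS and auth headers may need extra handling; this is a starting point, not a production proxy:

export default {
  async fetch(request) {
    const incoming = new URL(request.url);
    // Forward e.g. https://api.mydomain.com/graphql to the Realm client API,
    // preserving the path, query string, method, headers, and body.
    const upstream =
      "https://realm.mongodb.com/api/client/v2.0/app/myapp-abcde" + // hypothetical app ID
      incoming.pathname +
      incoming.search;
    return fetch(new Request(upstream, request));
  },
};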
null | [] | [
{
"code": "[\n {\n \"resource_id\": \"Machine-1\",\n \"resource_descr\": \"Machine Number 1\",\n \"components\": [\n {\n \"component_id\": \"mach-deas97u\",\n \"component_type\": \"Housing\",\n \"component_descr\": \"Housing 1\"\n },\n {\n \"component_id\": \"mach-7b83ta0\",\n \"component_type\": \"Base\",\n \"component_descr\": \"Base 1\"\n },\n {\n \"component_id\": \"mach-d1mxmd2\",\n \"component_type\": \"Peripherals\",\n \"component_descr\": \"Peripherals 1\"\n }\n ]\n }\n]\n {\n \"resource_id\": \"Machine-1\",\n \"resource_descr\": \"Machine Number 1\",\n \"components\": [\n {\n \"component_id\": \"mach-deas97u\",\n \"component_type\": \"Housing\",\n \"component_descr\": \"Housing 2\"\n },\n {\n \"component_id\": \"mach-7b83ta0\",\n \"component_type\": \"Base\",\n \"component_descr\": \"Base 2\"\n },\n {\n \"component_id\": \"mach-d1mxmd2\",\n \"component_type\": \"Peripherals\",\n \"component_descr\": \"Peripherals 2\"\n }\n ]\n }\n",
"text": "Hi,\nI maybe confused about how to use this positional operator so please help me.I have this data:I only wanted to change the values of the component_descr of each of my array values coming from my web application.I am expecting an output similar to this one:Note that the “component_descr” could be different for each object.I was looking at the $[] operator but I am having trouble coming up with the solution.I tried using this playground MongoDB Playground but it is not what I expected.Can somebody please point me to the correct process? Thanks.",
"username": "Nel_Neliel"
},
{
"code": "$[]arrayFilters$[identifier]db.collection.update({\n \"resource_id\": \"Machine-1\"\n},\n{\n \"$set\": {\n \"components.$[c1].component_descr\": \"Housing 2\",\n \"components.$[c2].component_descr\": \"Base 2\",\n \"components.$[c3].component_descr\": \"Peripherals 2\"\n }\n},\n{\n arrayFilters: [\n { \"c1.component_id\": \"mach-deas97u\" },\n { \"c2.component_id\": \"mach-7b83ta0\" },\n { \"c3.component_id\": \"mach-d1mxmd2\" }\n ]\n})\n",
"text": "Hello @Nel_Neliel,It is not possible with positional all $[] operator, but possible with array filters,Your query would be something like this,Playground",
"username": "turivishal"
},
{
"code": "",
"text": "Hi,\nSo this is how it should be. Thanks for pointing me in the right direction.I do have an additional question though, what if I have multiple array records like a maximum of 30 then my query would be big I suppose? Is there an alternative in that case?I don’t want to put these values in a separate collection as I think they are queried together with my parent object. Updating them in this manner might be a bit cumbersome. I wanted to know if there is an alternative as my readings about MongoDB is not that broad yet. Thank you.",
"username": "Nel_Neliel"
},
{
"code": "",
"text": "Hello @Nel_Neliel,what if I have multiple array records like a maximum of 30 then my query would be big I supposeYes, the query would be big and might be expensive, This is because it has to iterate over the entire array to find the elements that match the filter criteria if your document has a huge array elements.\nYou need to check the server’s memory configuration as well if the query is slow.You can test yourself here is the query: PlaygroundIs there an alternative in that case?I don’t think of any other option, if this query is expensive divide it into 2/3 queries as per performance.",
"username": "turivishal"
},
{
"code": "",
"text": "Wow, thank you very much for being helpful!\nI will try to think of refactoring my schema design as this would be a performance problem but your help is greatly appreciated.",
"username": "Nel_Neliel"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update only single fields of all elements using all positional operator | 2023-10-13T14:23:21.248Z | Update only single fields of all elements using all positional operator | 256 |
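When the number of components varies (e.g. up to 30, as discussed above), the $set document and arrayFilters can be built programmatically instead of written by hand. A sketch in mongosh/Node.js syntax, assuming the updates arrive as an array of { component_id, component_descr } pairs; the collection and field names follow the thread:

const updates = [
  { component_id: "mach-deas97u", component_descr: "Housing 2" },
  { component_id: "mach-7b83ta0", component_descr: "Base 2" },
];

const set = {};
const arrayFilters = [];
updates.forEach((u, i) => {
  // one named filter (c0, c1, ...) per element to update
  set[`components.$[c${i}].component_descr`] = u.component_descr;
  arrayFilters.push({ [`c${i}.component_id`]: u.component_id });
});

db.collection.updateOne(
  { resource_id: "Machine-1" },
  { $set: set },
  { arrayFilters }
);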
null | [] | [
{
"code": "",
"text": "I want to run a command like sudo systemctl restart mongod in a script without it prompting for a password since the script wont be able to enter it.I’ve been trying to edit the visudo file to allow specific commands to run without password prompt, but can’t seem to figure out exactly what setting I need to enter to get that to work.Also there’s seems to be numerous ways to start/stop the mongo instance like: service mongod start, systemctl start mongod.service. I’m not sure what’s the difference between them.",
"username": "rex_nichols"
},
{
"code": "",
"text": "Create admin access user inside linux machine change user group of file andChange permission of file to root file excuteable",
"username": "Bhavya_Bhatt"
},
{
"code": "%dba ALL=(ALL) NOPASSWD: /bin/systemctl restart mongoddbasystemctl",
"text": "%dba ALL=(ALL) NOPASSWD: /bin/systemctl restart mongodThis entry would allow any member of the group dba to restart mongod.Also there’s seems to be numerous ways to start/stop the mongo instance like: service mongod start, systemctl start mongod.service. I’m not sure what’s the difference between them.All the supported Linuxes will be running systemd as the init system so I’d suggest just concern yourself with systemctl",
"username": "chris"
},
{
"code": "",
"text": "I have that entered in my visudo file, but it still prompts for password when I try to run the “systemctl restart mongod” command.\nsudo visudo file2886×466 46.9 KB",
"username": "rex_nichols"
},
{
"code": "groupssudo -l",
"text": "We’re way out of mongodb and into system administration. There are better sites and forums for that.",
"username": "chris"
}
] | How to run restart mongod terminal commands without password prompt? | 2023-10-11T14:00:21.521Z | How to run restart mongod terminal commands without password prompt? | 249 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "Hello! I am very new to MongoDB.\nI am trying to write a map reduce function (not an aggregation):So far I have:\nvar mapHorse1 = function(){\nvar key = this.shows; if(this.shows <50) emit(key,1);};I need to show the number of horses who have gender = female, and less than 50 shows - but I don’t know how to add in the gender filter to the above function. Can someone please assist?\nMany thanks",
"username": "kristen_Swann"
},
{
"code": "var mapHorse1 = function() {\n var key = this.shows;\n if (this.shows < 50 && this.gender === 'female') {\n emit(key, 1);\n }\n};\nvar reduceFunction1 = function(key, values) {\n return Array.sum(values);\n};\ndb.coll.mapReduce(\n mapHorse1,\n reduceFunction1,\n {\n out: \"map_reduce_result\" // Name collection out\n }\n)\n",
"text": "Hey Kristen, welcome to the MongoDB communityTry this:",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Thank you so much! I’ll give that a go!",
"username": "kristen_Swann"
},
{
"code": "",
"text": "You’re welcome, I’m at your disposal and if your problem has been resolved, leave this topic as resolved so that other people can benefit from it!",
"username": "Samuel_84194"
},
{
"code": "",
"text": "Hi Samuel - sorry that did not work. Whilst there were no errors - when I called the functions, no results / data displayed.",
"username": "kristen_Swann"
},
{
"code": "",
"text": "Hi Samuel - I have worked it out - I just had to make one very small change to the map function! Thank you so much for your help!",
"username": "kristen_Swann"
},
{
"code": "",
"text": "I just had to make one very small change to the map functionIt would be interesting and fair to all to see the final working map function.",
"username": "steevej"
},
{
"code": "",
"text": "Hiya! Apologies - yes of course!\nI just had to change the var key assignment to “this.name”, in the map function.\nThank you so much for your help!var mapHorse1 = function() {\nvar key = this.name;\nif (this.shows < 50 && this.gender === ‘female’) {\nemit(key, 1);\n}\n};",
"username": "kristen_Swann"
}
] | MapReduce Functions | 2023-10-10T21:56:30.682Z | MapReduce Functions | 326 |
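Worth noting alongside this thread: map-reduce is deprecated in recent MongoDB versions in favor of the aggregation pipeline, and the same count can be expressed directly as an aggregation. A sketch using the field names from the thread; the collection name "horses" is an assumption:

db.horses.aggregate([
  { $match: { gender: "female", shows: { $lt: 50 } } }, // replaces the map function's if-condition
  { $group: { _id: "$name", count: { $sum: 1 } } },     // replaces emit(key, 1) plus the reduce sum
]);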
[
"java",
"atlas-cluster"
] | [
{
"code": "<dependencies>\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-starter-data-mongodb</artifactId>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-starter-web</artifactId>\n\t\t</dependency>\n\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-devtools</artifactId>\n\t\t\t<scope>runtime</scope>\n\t\t\t<optional>true</optional>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.projectlombok</groupId>\n\t\t\t<artifactId>lombok</artifactId>\n\t\t\t<optional>true</optional>\n\t\t</dependency>\n\t\t<dependency>\n\t\t\t<groupId>org.springframework.boot</groupId>\n\t\t\t<artifactId>spring-boot-starter-test</artifactId>\n\t\t\t<scope>test</scope>\n\t\t</dependency>\n\n\t\t<dependency>\n\t\t\t<groupId>me.paulschwarz</groupId>\n\t\t\t<artifactId>spring-dotenv</artifactId>\n\t\t\t<version>2.5.4</version>\n\t\t</dependency>\n\t</dependencies>\n",
"text": "Hi everybody,\nMy project somehow works fine when connecting to Mongodb local but not when connecting to Mongodb atlas.\nimage1767×761 194 KBapplication.properties:\nspring.data.mongodb.uri=mongodb+srv://nghia:*******@cluster0.givjhry.mongodb.net/NTCinemaDB?retryWrites=true&w=majorityPom.xml:Thanks for your help.",
"username": "Nghia_Vo"
},
{
"code": "",
"text": "Hey @Nghia_Vo,Welcome to the MongoDB Community but not when connecting to Mongodb atlas.Looking at the shared error logs it appears that the issue may be related to SSL. Have you had a chance to look at the Enable TLS/SSL on a Connection documentation? I think it would be helpful to set up a TLS connection to MongoDB from your Java application which may resolve the issue.I hope it helps!Regards,\nKushagra",
"username": "Kushagra_Kesav"
}
] | Not connect to Mongodb atlas | 2023-10-05T10:07:13.821Z | Not connect to Mongodb atlas | 330 |
|
null | [
"queries",
"replication",
"python",
"connecting",
"motor-driver"
] | [
{
"code": "b_users, b_chats = await db.get_banned()\nb_chats = [chat['id'] async for chat in chats]\npymongo.errors.ServerSelectionTimeoutError: No replica set members match selector \"Primary()\", Timeout: 30s, Topology Description: <TopologyDescription id: 65217586b0ced5475d5e4b50, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.zaiqq.mongodb.net', 27017) server_type: RSSecondary, rtt: 0.0024830238521099095>, <ServerDescription ('cluster0-shard-00-01.zaiqq.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cluster0-shard-00-01.zaiqq.mongodb.net:27017: The read operation timed out')>, <ServerDescription ('cluster0-shard-00-02.zaiqq.mongodb.net', 27017) server_type: RSSecondary, rtt: 0.0026440465152263643>]>\nTraceback (most recent call last):\n File \"/root/tessa/bot.py\", line 112, in <module>\n app.run()\n File \"/root/tessa/venv/lib/python3.10/site-packages/pyrogram/methods/utilities/run.py\", line 80, in run\n run(self.start())\n File \"/usr/lib/python3.10/asyncio/base_events.py\", line 649, in run_until_complete\n return future.result()\n File \"/root/tessa/bot.py\", line 41, in start\n b_users, b_chats = await db.get_banned()\n File \"/root/tessa/database/users_chats_db.py\", line 82, in get_banned\n b_chats = [chat['id'] async for chat in chats]\n File \"/root/tessa/database/users_chats_db.py\", line 82, in <listcomp>\n b_chats = [chat['id'] async for chat in chats]\n File \"/root/tessa/venv/lib/python3.10/site-packages/motor/core.py\", line 1158, in next\n if self.alive and (self._buffer_size() or await self._get_more()):\n File \"/usr/lib/python3.10/concurrent/futures/thread.py\", line 58, in run\n result = self.fn(*self.args, **self.kwargs)\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/cursor.py\", line 1155, in _refresh\n self.__send_message(q)\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/cursor.py\", line 1044, in __send_message\n response = client._run_operation(\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1424, in _run_operation\n return self._retryable_read(\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1514, in _retryable_read\n server = self._select_server(\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/mongo_client.py\", line 1346, in _select_server\n server = topology.select_server(server_selector)\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 244, in select_server\n return random.choice(self.select_servers(selector,\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 202, in select_servers\n server_descriptions = self._select_servers_loop(\n File \"/root/tessa/venv/lib/python3.10/site-packages/pymongo/topology.py\", line 218, in _select_servers_loop\n raise ServerSelectionTimeoutError(\npymongo.errors.ServerSelectionTimeoutError: No replica set members match selector \"Primary()\", Timeout: 30s, Topology Description: <TopologyDescription id: 65217586b0ced5475d5e4b50, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cluster0-shard-00-00.zaiqq.mongodb.net', 27017) server_type: RSSecondary, rtt: 0.0024830238521099095>, <ServerDescription ('cluster0-shard-00-01.zaiqq.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cluster0-shard-00-01.zaiqq.mongodb.net:27017: The read operation timed out')>, <ServerDescription ('cluster0-shard-00-02.zaiqq.mongodb.net', 27017) 
server_type: RSSecondary, rtt: 0.0026440465152263643>]>\n",
"text": "I’m encountering the following error when trying to fetch banned users and chats using the Motor library with MongoDB Atlas:Any assistance or insights into resolving this issue would be greatly appreciated.",
"username": "MoviezHood"
},
{
"code": "",
"text": "Hi @MoviezHood and welcome to MongoDB community forums!!The error message that you are observing here mostly occurs when the application is unable to resolve the Primary host on the network.\nIn other words, the application tries to connect with the primary but was unable to connect may be because of a network partition. You can refer to the Automatic Failovers for further detailsIn saying, it would be helpful for us to triage the issue if possible in depth if you could share the following informations:I’m using the Motor library to connect to MongoDB Atlas.Are you seeing the issue only while connecting the application with the Atlas cluster or this occurs while using the cluster outside using shell or Compass as well?Finally, can you also try connecting the application using the connection string from the connection modal that specifies all 3 hostnames instead of the SRV record.\nYou can get it by choosing the Python for the Driver option and then Choose the Version 3.4 or later for the version option.Warn regards\nAasawari",
"username": "Aasawari"
},
{
"code": "",
"text": "Hi @Aasawari and thank you for your responce.Unfortunately, this issue occurs at random so I cannot reproduce this issue. And during this time when it happens I won’t be able to connect to Shell or Compass. So that makes this issue in answering your 2 questions. But I will definitely try the new string with 3 hostnames when this issue arises and let you know if that fixes it that time.Regards,\nBen",
"username": "MoviezHood"
}
] | Pymongo throws error saying No replica set members match selector "Primary()" | 2023-10-07T15:28:48.783Z | Pymongo throws error saying No replica set members match selector “Primary()” | 380 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi there,I’ve been working a lot with mongodb, but there is one question I’ve never really got an answer for. I have a lot of data in one collection (+10 million docs), where each day thousands of docs are added. In my aggregation I want to filter all documents in a specific timeframe, mostly per month. My first stage in the pipeline is a match query where I reduce to amount of documents to around 50-100 thousand. With an index this is extremly fast, however the next two stages are always eather unwind or groupby stages with multiple attributes. This process then takes at least 3-10 seconds to return the desired result.My question now is, is there any way to speed up the group_by or unwind stage, when considering lots of documents. Or is there any technique to faster summarize nummerical values. Besides, we are currently using an M30 cluster. Does this also have a major impact on the pipeline duration? Any advice would help me enormously.",
"username": "Cornelius_Blank"
},
{
"code": "",
"text": "Hi @Cornelius_BlankUnwind and group stages with a thousand documents probably will be very hard to execute.On the group stage, build some index that covers this stage can help a lot to execute it but will not help on the unwind stages.Have you tried to create some documents with the Computed Pattern ?There is a several patterns to build your documents that helps a lot with performance issues. You can take a look o some patterns in this article blog.Hope this helps.",
"username": "Jennysson_Junior"
},
{
"code": "",
"text": "My question now is, is there any way to speed up the group_by or unwind stage, when considering lots of documents.It would be easier to give meaningful recommendations if you could share sample documents from the collections, expecting results and the aggregation you have.",
"username": "steevej"
}
] | Indexing during aggregation | 2023-10-13T11:21:30.127Z | Indexing during aggregation | 227 |
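A common way to apply the Computed Pattern mentioned above is an incremental materialized view: pre-aggregate each day (or month) once with $merge, then query the small summary collection instead of re-grouping tens of thousands of documents on every request. A sketch with assumed collection and field names; $dateTrunc requires MongoDB 5.0+:

db.events.aggregate([
  // only re-process the window that changed
  { $match: { createdAt: { $gte: ISODate("2023-09-01"), $lt: ISODate("2023-10-01") } } },
  {
    $group: {
      _id: { day: { $dateTrunc: { date: "$createdAt", unit: "day" } } },
      total: { $sum: "$amount" },
      count: { $sum: 1 },
    },
  },
  // upsert the daily summaries into a separate collection
  { $merge: { into: "daily_totals", on: "_id", whenMatched: "replace", whenNotMatched: "insert" } },
]);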
null | [
"sharding"
] | [
{
"code": "",
"text": "Hi,\nI have a mongo setup which is sharded and meant for scale.\nThe services may include multiple config and shard servers (mongod instances), and grow overtime with amount of shards (works on multiple nodes of course).\nWhen I was doing some stress tests with my application I started having some memory issues and saw that mongo instances are taking most of the memory in the system (more than 50%).Some more info:A few questions:Cheers,\nOded",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "Your complete setup is unclear but the followingIs there a way to limit ALL the instances together to not exceed 30% of memory consumption? I see that I can put a wiredTigerCacheSizeGB flag per mongod instance but its fixed sized and probably requires restarting the mongod instances and change the values across whenever a mongod instance is added/deleted.makes be think you are running multiple mongod on the same hardware machine.The goal of replica set is for high availability. Running multiple instances of the same replica set within the same machine goes against the goal of high availability.The goal of shards is to increase capacity and performance. Running multiple shards within the same machine goes against the goal of sharding. Your instances are fighting over the same resources.Running multiple mongod instances for replica sets or shards withing the same machine is fine for experimentation and to gain experience. Certainly not fordoing some stress tests with my applicationsweetspot for memory usage needed by the mongod instancesThe sweetspot is to have all memory available for a single instance and to run a single instance per physical host.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej\nthank you for the answer",
"username": "Oded_Raiches"
},
{
"code": "",
"text": "I know this is an old post, and there may be newer questions about this subject, but I thought I would ask.\nI would disagree with this statement: “The sweetspot is to have all memory available for a single instance and to run a single instance per physical host.”\nMongo documentation itself talks about collocated instances, mentioning that it is not recommended for production environments.\nSo the question stands. We run collocated instances on our development/testing environments. We do build normally 3 replicas each on separate host so we follow the availability model. Now, how do we make sure each instance is using a determined amount of java and cache memory? The way mongodb allocates memory (50%) seems to be that the first instance to start will take over 1/2 of the memory available and the second or 3rd instance will take 50% of whatever is left, and so on.\nHow can we configure the memory consumption for each instance?",
"username": "eacardu"
},
{
"code": "",
"text": "I am not too sure but may be you may use on the wiredTiger configuration options.",
"username": "steevej"
}
] | Limiting resources with multiple instances | 2022-06-02T09:18:34.586Z | Limiting resources with multiple instances | 2,535 |
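For completeness, the fixed cache limit discussed above is set per mongod, either on the command line or in the config file. A sketch of the YAML config form; the 1 GB value is only an example, and the default when unset remains roughly 50% of (RAM - 1 GB):

storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 1

Note this bounds only the WiredTiger cache, not the process's total memory, so each mongod will still use additional memory for connections, aggregations, and so on.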
null | [
"serverless"
] | [
{
"code": "",
"text": "I’m aiming to make a new project with maybe 5 M2 machines. The only options I’m seeing are for M10 or larger (or serverless), is there a setting to unlock the smaller machines? (the data is >1gb and not expected to increase, so M10 are overkill. Also I want the version pinned, so serverless is not an option)",
"username": "J_Phoebus"
},
{
"code": "Atlas M0 (Free Cluster), M2, and M5 Limitations",
"text": "Hello @J_Phoebus ,Welcome back to The MongoDB Community Forums! As of now, only dedicated clusters (M10+) support version choice/control. Kindly refer Atlas M0 (Free Cluster), M2, and M5 Limitations for more details.Regards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Enterprise: New project only allows M10 machines or larger | 2023-10-13T12:26:05.115Z | Atlas Enterprise: New project only allows M10 machines or larger | 243 |
null | [
"reference-pattern"
] | [
{
"code": "",
"text": "Greetings fine community,I have been reading about references and recently read the documentation Database References but it is still not so clear to me the differences.To avoid data duplication, I would like to reference a field from a document in a specific collection; a classic one-to-many relationship in the SQL world.It appears that the best way to somewhat replicate “relationships” is to use DBRef. Am I right in my understanding? Also, a document in a collection needs to have more than one reference from documents in different collections. Based on the documentation, it seems that for this scenario DBRef is the way to go.Is my understanding accurate?Regards.",
"username": "An_Infinite_Loop"
},
{
"code": "",
"text": "Hi @An_Infinite_Loop ,It appears that the best way to somewhat replicate “relationships” is to use DBRef. Am I right in my understanding?Personally, I would not recommend using DBRef unless your use case specifically needs it. While DBRef (Database References) is one way to establish relationships between documents, it’s important to understand how it works and its limitations. As written in this Database ReferencesUnless you have a compelling reason to use DBRefs, use manual references instead.To resolve DBRefs, your application must perform additional queries to return the referenced documents.In summary, your understanding is accurate. DBRef is a way to create relationships between documents in different collections in MongoDB and is suitable for classic one-to-many relationships. However, you should also consider the potential performance implications and whether denormalization or embedding might be more suitable for your specific use case, depending on your read and write patterns.Additionally, I would recommend you to go through below resources to understand more about Data Modeling, Design and Patterns. This will help you design your schema in a way which will provide better performance and flexibility as per your application requirements.Have you ever wondered, \"How do I model a MongoDB database schema for my application?\" This post answers all your questions!Let me know in case you have any more queries regarding this, would be happy to help you! Regards,\nTarun",
"username": "Tarun_Gaur"
}
] | Manual References vs DBRef | 2023-10-13T01:44:58.651Z | Manual References vs DBRef | 216 |
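A manual reference, as recommended above, is nothing more than storing the referenced document's _id and resolving it in the application or with $lookup. A sketch in mongosh syntax; the collection and field names are assumptions for illustration:

// one-to-many: each book stores the _id of its author
db.authors.insertOne({ _id: 1, name: "Ada" });
db.books.insertMany([
  { title: "Volume I", author_id: 1 },
  { title: "Volume II", author_id: 1 },
]);

// resolve the reference only when the related data is actually needed
db.books.aggregate([
  { $lookup: { from: "authors", localField: "author_id", foreignField: "_id", as: "author" } },
]);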
null | [
"swift"
] | [
{
"code": "asyncWriteasyncWritewrite",
"text": "We need some clarification about when to use Realm Swift’s asyncWrite — if you’re already in a method that is isolated to the same actor as your realm, should you still use asyncWrite? If it’s not strictly necessary, is there any benefit to doing so vs. doing a synchronous write?",
"username": "hershberger"
},
{
"code": "",
"text": " anyone from Realm able to provide some insight here?",
"username": "hershberger"
}
] | When to use asyncWrite | 2023-09-12T19:40:32.691Z | When to use asyncWrite | 403 |
null | [
"aggregation",
"queries"
] | [
{
"code": "",
"text": "I have a MongoDB cluster containing more than 50 collections. There is a collection that contains more than 91M documents (records/rows). Indices are properly inserted in this collection.\nMy query is to fetch a count of data containing unique companies in this collection. However, due to the large collection, I am unable to fetch those records because as I start executing my simple group-by query on that field, my CPU usage increases up to 100%, due to which my live server crashes.Even, due to these long-running queries, I am unable to analyze this kind of large collection. Is there any way to create queries that take less CPU consumption and also return accurate results?",
"username": "Mehul_Sanghvi"
},
{
"code": "",
"text": "Show the output of explain ?",
"username": "Kobe_W"
},
{
"code": "",
"text": "Hello Mehul, welcome to the MongoDB community!Queries perform based on indexing, you can try to use some partial index strategies, materialized views, revisit the modeling that may change over time, reference or embed documents to try. If you can share your query and the explain as @Kobe_W commented, we can help you see if there is any tuning. Furthermore, if you could briefly share the type of application that uses this, how frequently, etc…",
"username": "Samuel_84194"
},
{
"code": "db.getCollection(\"collection_name\").aggregate(\n [\n {\n \"$group\" : {\n \"_id\" : \"$field_name\",\n \"count\" : {\n \"$sum\" : NumberInt(1)\n }\n }\n }\n ], \n);\n",
"text": "Thanks for responding.Query: This query is used to fetch a distinct count of field_name (can’t give you due to confidentiality issue).\ncollection_name → huge collection with more than 99M records (documents)Hope this information helps you. @Kobe_W @Samuel_84194There are some solutions that I received while surfing.But the collection on which I am trying to run this query already has proper indexing & also I think it is not feasible to use the skip & limit command because it won’t give me a proper count as there are chances of having some values of field_name beyond the limit.",
"username": "Mehul_Sanghvi"
},
{
"code": "distinct = { \"$group\" : {\n \"_id\" : \"$field_name\"\n} }\ncount = { \"$lookup\" : {\n \"from\" : \"collection_name\" ,\n \"localField\" : \"_id\" ,\n \"foreignField\" : \"field_name\" ,\n \"as\" : \"count\" ,\n \"pipeline\" : [ { \"$count\" : \"result\" } ]\n} }\ndb.getCollection( \"collection_name\" ).aggregate( [ distinct , count ] )\n",
"text": "CPU usage increases up to 100%, due to which my live server crashes.Can you confirm that it stops because of 100% CPU? I rather suspect that you got an OOM or 16MB exception during the $group.The stage $group is blocking and has to process the whole collection before producing the first resulting document. So if you have a lot of unique values the memory consumption by the $group stage might exceed the server RAM.You could try the following aggregation which should cuts the memory requirement.",
"username": "steevej"
},
{
"code": "",
"text": "@steevej Actually, I have to stop my query because of 100% CPU, my DBA kills it.So, I need to find a way in which I can run queries on huge collections, without affecting CPU usage.And the query that you have sent me, I think it’s wrong because I am running a query on a single collection, there is no need for another collection, so why should I use $lookup?",
"username": "Mehul_Sanghvi"
},
{
"code": "",
"text": "I think it’s wrongYou think it is wrong. Have you tried? Have you tried it, at least on a test collection to understand what it does?I think it’s wrongI do not think it is wrong if I shared it.I am running a query on a single collection, there is no need for another collectionWhere is the other collection? I specified the same collection_name that you used in your original code that you shared. So there is no other collection. I $lookup in the same collection. Doing the $count in a different stage reduce the amount of memory used by $group. I was focusing my answer on limiting memory because at first you wrote:my live server crasheswhich is quite different thanmy DBA kills itOne thing is sure is that you have to try what people share, even if you think it is wrong.You may also try to $sort with field_name:1 before you $group, but only if you have an index with field_name as prefix otherwise it will be worst.Another thing is that analytic use-cases like yours are preferably executed on secondary nodes. This way you can live with the temporary 100% CPU spike.But frequent 100% CPU might indicate that your hardware is under-specified for the workload.",
"username": "steevej"
},
{
"code": "",
"text": "Sorry @steevej You are correct. Actually, I forgot that you have done self-join and I forgot that concept for a while. But when I tried it was running properly even on the huge collections.I saved lots of time.\nTHANKS BRO. It was a great help.",
"username": "Mehul_Sanghvi"
},
{
"code": "",
"text": "Please mark one of the post as the solution.",
"username": "steevej"
},
{
"code": "",
"text": "To quote @steevejAnd the explain plan between the two makes me irrationally mad with the query planner.",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Running a query on a huge collection, brings CPU usage to 100% | 2023-10-10T19:28:41.842Z | Running a query on a huge collection, brings CPU usage to 100% | 486 |
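If the goal is only the number of distinct values (rather than per-value counts), a two-stage pipeline with allowDiskUse keeps the memory footprint bounded by letting the $group spill to disk. The field and collection names follow the thread:

db.collection_name.aggregate(
  [
    { $group: { _id: "$field_name" } }, // one document per distinct value
    { $count: "distinctValues" },       // then just count those documents
  ],
  { allowDiskUse: true } // spill to disk instead of exceeding the in-memory limit
);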
null | [
"node-js",
"python"
] | [
{
"code": "{\n _id: ObjectId(),\n text: \"Bye bye\",\n embedding: Array,\n id: 1,\n site: \"check\"\n}\nconst retriever = await vectorStore.asRetriever({\n searchType: \"mmr\",\n searchKwargs: {\n fetchK: 2,\n lambda: 0.1\n },\n filter: {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"path\": \"site\",\n \"query\": \"check\"\n }\n }\n ]\n }\n }\n});\n",
"text": "Hello,For context, here’s the structure of the documents in my collection:Now I am trying to call the vectorStore.asRetriever() method with the goal of only retrieving documents where site = “check” but I am unable to do so using the following code:Any idea how to achieve the above? A similar question was asked for the Python version and I’ve already tried the suggested approach but to no avail: Filtering the results of Vector Search with LangChain",
"username": "Sarim_Haque"
},
{
"code": "const retriever = await vectorStore.asRetriever({\n searchType: \"mmr\",\n searchKwargs: {\n fetchK: 2,\n lambda: 0.1\n },\n filter: { postFilterPipeline: [{ $match: { site: \"check\" } }] }\n});\n",
"text": "Hi @Sarim_Haque and welcome to MongoDB community forums!!Can you confirm if you are able to follow the below code and able to execute the query.Regards\nAasawari",
"username": "Aasawari"
},
{
"code": "const retriever = await vectorStore.asRetriever({\n searchType: \"mmr\",\n searchKwargs: {\n fetchK: 2,\n lambda: 0.1\n },\n filter: {\n preFilter: {\n \"compound\": {\n \"must\": [\n {\n \"text\": {\n \"path\": \"site\",\n \"query\": \"check\"\n }\n }\n ]\n }\n }\n }\n });\n",
"text": "Hi Asawari,I got a reply on another forum, tried that and it worked!Here’s the updated code:",
"username": "Sarim_Haque"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to implement filters in MongoDB Atlas Vector Search? (using langchain.js) | 2023-10-07T19:42:08.704Z | How to implement filters in MongoDB Atlas Vector Search? (using langchain.js) | 447 |
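One caveat worth adding next to the accepted answer: a preFilter only works on fields that are indexed for filtering. With the newer Atlas Vector Search index definition, that means declaring the filter field alongside the vector field. A sketch of such an index definition; numDimensions depends on the embedding model and the value below is an assumption:

{
  "fields": [
    { "type": "vector", "path": "embedding", "numDimensions": 1536, "similarity": "cosine" },
    { "type": "filter", "path": "site" }
  ]
}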
null | [] | [
{
"code": "",
"text": "Hello everyone,I’m Pranav Kumar, a Senior Backend Engineer at Nagarro, and I’m thrilled to be a part of this vibrant MongoDB Community.I’m currently using MongoDB in my projects, where it plays a crucial role in storing extracted raw data from PDFs, enabling us to build efficient and scalable solutions.What brings me here is my passion for MongoDB and my desire to connect with like-minded individuals who share the same enthusiasm for this remarkable database technology.I’m based in Chandigarh, in the beautiful region of Punjab, India. The motivation behind starting a MongoDB User Group for our region is simple: I believe in the power of collaboration and knowledge-sharing. By bringing MongoDB enthusiasts together in our local community, we can learn, grow, and make the most of this fantastic database technology.I’m looking forward to engaging with all of you, learning from your experiences, and contributing to the growth of our MongoDB Community. Let’s build something great together!",
"username": "Pranav_Kumar"
},
{
"code": "",
"text": "Welcome to the community Pranav! We are excited to have you here. I look forward to working with you and your user group!",
"username": "Karissa_Fuller"
}
] | Self-introduction | 2023-10-13T14:29:55.637Z | Self-introduction | 255 |
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "What is the validation duration of my mongoDb credits of $200 received with GitHub students pack, like 6 months or more?",
"username": "Neeraj_N_A1"
},
{
"code": "",
"text": "From memory I think the validity is 12 month from applying the credit.",
"username": "chris"
},
{
"code": "",
"text": "What does “apply” mean here?",
"username": "Neeraj_N_A1"
},
{
"code": "",
"text": "Redeemed in an Atlas Org.The code itself does have an expiry also, but you can ask for a new one. From the FAQ.Atlas Promotional Codes will expire if they’re not applied to your Atlas account within 90 days. You will be able to request a new code assuming you did not use the previous one and that you’ve received it less than 12 months ago.",
"username": "chris"
},
{
"code": "",
"text": "Hello @Neeraj_N_A1\nIf you still have further questions, please reach out to [email protected]. They will be happy to assist you.\nThanks!",
"username": "Heather_Davis"
}
] | What is the validation duration of my mongoDb credits of $200 received with GitHub students pack, like 6 months or more? | 2023-10-02T03:09:41.964Z | What is the validation duration of my mongoDb credits of $200 received with GitHub students pack, like 6 months or more? | 405 |
null | [
"aggregation",
"node-js"
] | [
{
"code": "[\n {\n \"$match\": {\n \"infoCurrent.factoryNumber\": \"11868424\"\n }\n }, \n {\n \"$limit\": 50.0\n }\n ]\n\n5b0a20202020202020207b0a20202020202020202020202022246d61746368223a207b0a2020202020202020202020202020202022696e666f43757272656e742e666163746f72794e756d626572223a20223131383638343234220a2020202020202020202020207d0a20202020202020207d2c200a20202020202020207b0a20202020202020202020202022246c696d6974223a2035302e300a20202020202020207d0a202020205d\n",
"text": "Hi, I try to store MongoDB aggregation pipeline in the database as document to read it later from nodejs applecation. Worked solution - convert aggreagate to hex, save in the mongodb document and convert back from hex by JSON.parseMay be there is another way to store, to view aggregate pipeline in document as JSON, not hex",
"username": "Nurlan_Kazdayev"
},
{
"code": "original_pipeline = [\n {\n \"$match\": {\n \"infoCurrent.factoryNumber\": \"11868424\"\n }\n }, \n {\n \"$limit\": 50.0\n }\n ]\n\nc = db.aggregations\n\nc.insertOne( { _id : 0 , pipeline : original_pipeline } )\n\nc.findOne()\n\n> { _id: 0,\n pipeline: \n [ { '$match': { 'infoCurrent.factoryNumber': '11868424' } },\n { '$limit': 50 } ] }\n\nstored_pipeline = c.findOne( { _id:0 } ).pipeline\n\nc.aggregate( stored_pipeline ) \n",
"text": "I try to store MongoDB aggregation pipeline in the database as document to read it later from nodejs applecation.What was wrong with the above?It seems to work fine in the current version of an M0 cluster.",
"username": "steevej"
},
{
"code": "",
"text": "Interesting, I’d just suggest a view for most cases though.Storing a workspace of queries in a central location could be a great use for this.",
"username": "chris"
},
{
"code": "",
"text": "Unfortunatly restriction applies to embedded pipelines, such as pipelines used in $lookup or $facet stages.",
"username": "Nurlan_Kazdayev"
},
{
"code": "",
"text": "Thanks, good idea!\nI tried to create a document with original_pipeline through the 3Tstudio, but it gave an error\nUsiong insertOne and from compas - all OK!",
"username": "Nurlan_Kazdayev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to store MongoDB aggregation pipelines in the database as document? | 2023-10-12T06:12:04.476Z | How to store MongoDB aggregation pipelines in the database as document? | 213 |
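If a stored pipeline does hit a restriction (or a tool refuses $-prefixed keys, as Studio 3T did above), another option is to store the pipeline as an Extended JSON string and parse it on read, so no stored field name starts with "$". A Node.js sketch using the EJSON helpers from the bson package; the db handle and collection names are assumptions:

const { EJSON } = require("bson");

const pipeline = [{ $match: { "infoCurrent.factoryNumber": "11868424" } }, { $limit: 50 }];

// store as a plain string, preserving BSON types that JSON.stringify would lose
await db.collection("aggregations").insertOne({
  _id: "factory-report",
  pipeline: EJSON.stringify(pipeline),
});

// read back, parse, and run
const doc = await db.collection("aggregations").findOne({ _id: "factory-report" });
const stored = EJSON.parse(doc.pipeline);
const results = await db.collection("machines").aggregate(stored).toArray();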
null | [] | [
{
"code": "fullDocumentBeforeChange: nulloperationType: 'delete'db.watch([], {\n fullDocument: \"updateLookup\",\n fullDocumentBeforeChange: \"whenAvailable\",\n}).on(\"change\", (data) => {})\n",
"text": "I am getting fullDocumentBeforeChange: null on operationType: 'delete' when trying:How can I get the pre-image of the deleted document when using change stream?",
"username": "WONG_TUNG_TUNG"
},
{
"code": "",
"text": "Hello @WONG_TUNG_TUNG ,Please checkRegards,\nTarun",
"username": "Tarun_Gaur"
},
{
"code": "changeStreamPreAndPostImages",
"text": "Can I enable changeStreamPreAndPostImages in Mongodb Atlas without typing the command or using Mongoose?",
"username": "WONG_TUNG_TUNG"
}
] | How to get the document pre-image when using change stream in delete | 2023-10-13T09:06:24.606Z | How to get the document pre-image when using change stream in delete | 162 |
null | [
"node-js"
] | [
{
"code": "await collection.findOne({id: item.id}, {files: {$slice: 3}});\ndb.files.findOne({id: '15'}, {files: {$slice: 3}})\n",
"text": "can not slice files array, in node.js\nbut,it is work in shell\nwhy?",
"username": "zhang_zhang"
},
{
"code": "",
"text": "Hello @zhang_zhang, Welcome to the MongoDB community,More details would be helpful,",
"username": "turivishal"
}
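A likely cause, though the thread never confirms it: in the Node.js driver the second argument to findOne() is an options object, not a bare projection as in the shell, so the $slice projection must be nested under a projection key. A hedged sketch:

// Shell: findOne(filter, projection)
// Node.js driver: findOne(filter, options) -- wrap the projection:
const doc = await collection.findOne(
  { id: item.id },
  { projection: { files: { $slice: 3 } } }
);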
] | $slice does not work in findOne method in node.js driver | 2023-10-13T11:28:28.319Z | $slice does not work in findOne method in node.js driver | 192 |
[] | [
{
"code": "",
"text": "hello everyone,\nI tried to fix this issue but i am bot able to solve. so please guide me or fix this issue.\nScreenshot (90)1920×1080 155 KB",
"username": "Chandra_Shekhar"
},
{
"code": "",
"text": "You did not provide enough information to tell for sure, but one guess is that the server you’re trying to contact isn’t configured to support TLS. See https://www.mongodb.com/docs/manual/tutorial/configure-ssl/",
"username": "Jack_Woehr"
}
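One way to test Jack's guess from the client side (a sketch; the URI is an assumption): try the same server once without TLS and once with it, and see which attempt succeeds.

const { MongoClient } = require('mongodb');

// If the plain connection works but the TLS one fails, the server is
// probably not configured for TLS (see the tutorial linked above).
async function probe(uri) {
  const client = new MongoClient(uri, { serverSelectionTimeoutMS: 3000 });
  try {
    await client.connect();
    console.log('connected:', uri);
  } catch (err) {
    console.log('failed:', uri, '-', err.message);
  } finally {
    await client.close();
  }
}

probe('mongodb://localhost:27017');           // assumed host, no TLS
probe('mongodb://localhost:27017/?tls=true'); // same host, TLS requested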
] | MongoServerSelectionError: Client network socket disconnected before secure TLS connection was established | 2023-10-13T14:06:40.706Z | MongoServerSelectionError: Client network socket disconnected before secure TLS connection was established | 357 |
|
null | [] | [
{
"code": "",
"text": "I encounter this error message when I try to establish a connection, and my operating system is Windows. I have already tried reinstalling multiple times, but I can’t resolve it",
"username": "Fabio_D_amato"
},
{
"code": "mongod.cfg",
"text": "",
"username": "Jack_Woehr"
}
] | "connect ECONNREFUSED 127.0.0.1:27017" | 2023-10-13T10:09:21.818Z | “connect ECONNREFUSED 127.0.0.1:27017” | 208 |
null | [
"java",
"connecting"
] | [
{
"code": "",
"text": "Dear MongoDB community maintainer ,\nAfter the https://jira.mongodb.org/browse/JAVA-4347 , I know that Socks5 is available after 4.11 version\nBut upgrade mongodb-driver-sync is kinda unrealistic since we are bind it under Spring-data-mongodb usually , is there any suggestion to approach the Socks5 connection in lower mongodb-driver-sync version?Any feedback will be a big appreciation , Thx",
"username": "Alan_Kuo_N_A"
},
{
"code": "",
"text": "Hi @Alan_Kuo_N_AWe do not backport features like this, under the restrictions of https://semver.org/. I can suggest two options:Let us know how it goes.Jeff",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "Hi @Jeffrey_Yemin ,\nFirst , thx for reply!\nI actually did try to using mongodb-driver-sync 4.11 version in our current Spring Data version and face with some class not found issue but since I only replace the mongodb-driver-sync because I don’t want impact too much , you options2 give me some faith that might be possibility to replace all the mongodb-driver into 4.11 with older Spring Data should be work properlyagain , Thx for reply , I will do some testing and hope everything goes as our expectation!",
"username": "Alan_Kuo_N_A"
}
] | MongoDB connection by Socks5 in Java driver | 2023-10-13T10:14:37.129Z | MongoDB connection by Socks5 in Java driver | 204 |
[
"connecting",
"server",
"react-js"
] | [
{
"code": "axios.get(\"http://localhost:4000/api/v1/quality\");brew services start [email protected]://localhost:27017It looks like you are trying to access MongoDB over HTTP on the native driver port.ERR_CONNECTION_REFUSED",
"text": "Hello,I’ve been researching for days every possible solution but I still get the same error when trying to connect to my MongoDB database using axios.get(\"http://localhost:4000/api/v1/quality\"); in my front-end reactjs app. I get the following error in console. I tried every possible port but I get this error every time:\nScreen Shot 2023-10-09 at 12.52.36 AM1232×380 84.1 KBI started the MongoDB server with brew services start [email protected]. Status started. No issues there, tho during installation I had to create a new folder on the desktop to store the db., and I changed the dbPath in the mongod.conf file.The MongoDB Compass app is running and a connection is created on mongodb://localhost:27017. When I access this link in Chrome, it outputs It looks like you are trying to access MongoDB over HTTP on the native driver port.I tried re-installing everything: MongoDB, my app, the db, etc. Can someone help me fix this ERR_CONNECTION_REFUSED error and fetch my data from the db?",
"username": "Pavel_Turcanu"
},
{
"code": "",
"text": "To connect to mongodb you will need to use a driver, mongodb does not have an http api.For example:",
"username": "chris"
},
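To make the point concrete, here is a hedged sketch of a small Express API that the React app's axios call could hit, with the MongoDB Node.js driver doing the actual database access (the port and route come from the question; the database and collection names are assumptions):

const express = require('express');
const { MongoClient } = require('mongodb');

const app = express();
const client = new MongoClient('mongodb://localhost:27017'); // Compass's connection string

// The browser speaks HTTP to this Express server; only the server
// speaks the MongoDB wire protocol (via the driver) to port 27017.
app.get('/api/v1/quality', async (req, res) => {
  const docs = await client
    .db('test')             // assumed database name
    .collection('quality')  // assumed collection name
    .find()
    .toArray();
  res.json(docs);
});

client.connect().then(() => {
  app.listen(4000, () => console.log('API listening on http://localhost:4000'));
});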
{
"code": "",
"text": "Thank you for your reply! Unfortunately, it didn’t work. I installed the driver following the instructions. Same “CONNECTION_REFUSED”. Also, I tried an Express app earlier and could retrieve my data through the 4000 port on localhost.\nI read in the documentation that in my connection string, I should specify the port number on which I configured my server to listen for incoming connections. How do I find it?",
"username": "Pavel_Turcanu"
},
{
"code": "mongodb://localhost:27017",
"text": "The MongoDB Compass app is running and a connection is created on mongodb://localhost:27017This is the connection string.",
"username": "chris"
},
{
"code": "27017It looks like you are trying to access MongoDB over HTTP on the native driver port.",
"text": "When I try to access this 27017 port number, I receive the request fulfilled with the message It looks like you are trying to access MongoDB over HTTP on the native driver port.",
"username": "Pavel_Turcanu"
},
{
"code": "",
"text": "Why are you using http against the database ?We’ve already been over the point you need to use a driver to access the database and no http api is provided.Take a look at some of the quick starts and tutorials at the Developer Center",
"username": "chris"
}
] | ERR_CONNECTION_REFUSED at axios.get() execution | 2023-10-09T06:04:22.067Z | ERR_CONNECTION_REFUSED at axios.get() execution | 371 |
|
null | [] | [
{
"code": "",
"text": "This is my first project with a database. I’ve tried a few methods and about to make changes to use synonyms. I have a hard time finding the correct document in my collection because the name field I’m using contains short strings which are abbreviations and often one substitution away from another other matches: vss, vse, vss.Before using Atlas search I had created a field for synonyms that my searches would look at if the name field failed an exact match.Now that I’m using Atlas search, I’ll use my existing synonyms to populate index synonyms.Which token options and combinations would be the most useful? I’ve tried various ngram and a few others, some out of sheer curiosity. There’s a lot of options even after excluding the unrelated ones.",
"username": "Human_N_A"
},
{
"code": "",
"text": "Hi @Human_N_A and welcome to MongoDB community forums!!Which token options and combinations would be the most useful? I’ve tried various ngram and a few others, some out of sheer curiosity. There’s a lot of options even after excluding the unrelated ones.If you wish yo use synonyms using the Atlas search, I believe the blog post with an example which shows Add US Postal Abbreviations to Your Atlas Search would be a good starting point for the requirements.However, if this is not what you are looking for, it would be helpful for me to assist you if could could help me with:Regards\nAasawari",
"username": "Aasawari"
}
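For reference, a hedged sketch of querying with a synonym mapping from the Node.js driver (the index name, synonym-mapping name, and field name are assumptions; the mapping itself must be defined on the Atlas Search index):

// Assumes an Atlas Search index named 'default' on this collection,
// with a synonym mapping named 'abbreviations' defined in that index.
const results = await collection.aggregate([
  {
    $search: {
      index: 'default',
      text: {
        query: 'vss',           // a short abbreviation, as in the question
        path: 'name',
        synonyms: 'abbreviations',
      },
    },
  },
  { $limit: 10 },
]).toArray();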
] | Atlas search with very short strings | 2023-10-11T17:10:04.529Z | Atlas search with very short strings | 213 |