null
[ "aggregation" ]
[ { "code": "PlayerTournament: [\n {\n \"_id\": 1,\n \"Name\": \"John Aims\",\n \"Gender\": \"M\",\n \"DoB\": ISODate(\"1990-01-01T00:00:00Z\"),\n \"Nationality\": \"USA\",\n \"Hand\": \"R\",\n \"YearTurnedPro\": 2010,\n \"Tournament\": [\n {\n \"TournamentYear\": 2016,\n \"TournamentCode\": \"GS1\",\n \"Position\": 8,\n \"PrizeMoney\": 125000,\n \"RankingPoints\": 250\n },\n {\n \"TournamentYear\": 2019,\n \"TournamentCode\": \"GS4\",\n \"Position\": 2,\n \"PrizeMoney\": 625000,\n \"RankingPoints\": 1000\n },\n {\n \"TournamentYear\": 2021,\n \"TournamentCode\": \"GS3\",\n \"Position\": 4,\n \"PrizeMoney\": 312500,\n \"RankingPoints\": 500\n }\n ]\n },\n {\n \"_id\": 2,\n \"Name\": \"George Brown\",\n \"Gender\": \"M\",\n \"DoB\": ISODate(\"1997-03-04T00:00:00Z\"),\n \"Nationality\": \"GB\",\n \"Hand\": \"L\",\n \"YearTurnedPro\": 2013,\n \"Tournament\": [\n {\n \"TournamentYear\": 2016,\n \"TournamentCode\": \"GS1\",\n \"Position\": 4,\n \"PrizeMoney\": 250000,\n \"RankingPoints\": 500\n },\n {\n \"TournamentYear\": 2019,\n \"TournamentCode\": \"GS3\",\n \"Position\": 2,\n \"PrizeMoney\": 625000,\n \"RankingPoints\": 1000\n }\n ]\n },\n {\n \"_id\": 3,\n \"Name\": \"Kate Upson\",\n \"Gender\": \"F\",\n \"DoB\": ISODate(\"1999-12-07T00:00:00Z\"),\n \"Nationality\": \"GB\",\n \"Hand\": \"L\",\n \"YearTurnedPro\": 2013,\n \"Tournament\": [\n {\n \"TournamentYear\": 2016,\n \"TournamentCode\": \"GS1\",\n \"Position\": 1,\n \"PrizeMoney\": 1000000,\n \"RankingPoints\": 2000\n },\n {\n \"TournamentYear\": 2019,\n \"TournamentCode\": \"GS1\",\n \"Position\": 4,\n \"PrizeMoney\": 250000,\n \"RankingPoints\": 500\n },\n {\n \"TournamentYear\": 2020,\n \"TournamentCode\": \"GS4\",\n \"Position\": 2,\n \"PrizeMoney\": 625000,\n \"RankingPoints\": 1000\n },\n {\n \"TournamentYear\": 2017,\n \"TournamentCode\": \"GS2\",\n \"Position\": 2,\n \"PrizeMoney\": 625000,\n \"RankingPoints\": 1000\n }\n ]\n },\n {\n \"_id\": 4,\n \"Name\": \"Mary Bones\",\n \"Gender\": \"F\",\n \"DoB\": ISODate(\"1998-10-04T00:00:00Z\"),\n \"Nationality\": \"AUSTRALIA\",\n \"Hand\": \"L\",\n \"YearTurnedPro\": 2015,\n \"Tournament\": [\n {\n \"TournamentYear\": 2018,\n \"TournamentCode\": \"GS3\",\n \"Position\": 1,\n \"PrizeMoney\": 1250000,\n \"RankingPoints\": 2000\n },\n {\n \"TournamentYear\": 2019,\n \"TournamentCode\": \"GS2\",\n \"Position\": 2,\n \"PrizeMoney\": 625000,\n \"RankingPoints\": 1000\n }\n ]\n },\n {\n \"_id\": 5,\n \"Name\": \"Yuri Roza\",\n \"Gender\": \"M\",\n \"DoB\": ISODate(\"2000-05-11T00:00:00Z\"),\n \"Nationality\": \"BELARUS\",\n \"Hand\": \"R\",\n \"YearTurnedPro\": 2018,\n \"Tournament\": [\n {\n \"TournamentYear\": 2020,\n \"TournamentCode\": \"GS4\",\n \"Position\": 4,\n \"PrizeMoney\": 250000,\n \"RankingPoints\": 500\n },\n {\n \"TournamentYear\": 2018,\n \"TournamentCode\": \"GS2\",\n \"Position\": 4,\n \"PrizeMoney\": 312500,\n \"RankingPoints\": 500\n }\n ]\n }\n ]\ndb.PlayerTournament.aggregate([\n {\n \"$unwind\": \"$Tournament\"\n },\n {\n $match: {\n \"Tournament.TournamentYear\": {\n $gte: 2020\n }\n }\n },\n {\n \"$group\": {\n \"_id\": {\n Name: \"$Name\"\n },\n \"total_qty\": {\n \"$sum\": \"$Tournament.PrizeMoney\"\n }\n }\n }\n])\n", "text": "I am trying to find solution to this query, I am using $group, $match.This would be the collection:This is what I tried.I am getting all the player that have won something after the year 2020.I am looking to select the players’ name of those that won more than 500000 (prizemoney) after the year 2020.", "username": 
"Eneko_Izaguirre_Martin" }, { "code": "", "text": "Could you please mark your other related thread as solved before doing a followup post?To get only specific elements of an array, use $filter rather than $unwind and $group. That is much more efficient.But if you only $filter to get a sum on the filtered element, you should be using $reduce as proposed in your other thread.", "username": "steevej" } ]
Query to $unwind, $group and $match with MongoDB
2022-05-11T12:18:45.311Z
Query to $unwind, $group and $match with MongoDB
4,575
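A minimal sketch of the $filter + $reduce approach steevej recommends above, written against the PlayerTournament collection from this thread. The 500000 threshold and the 2020 cutoff come from the question; everything else uses the field names shown above:

db.PlayerTournament.aggregate([
  { $project: {
      Name: 1,
      // keep only the embedded tournaments from 2020 onwards...
      total_qty: { $reduce: {
          input: { $filter: {
              input: "$Tournament",
              as: "t",
              cond: { $gte: [ "$$t.TournamentYear", 2020 ] }
          } },
          initialValue: 0,
          // ...and sum their PrizeMoney without $unwind/$group
          in: { $add: [ "$$value", "$$this.PrizeMoney" ] }
      } }
  } },
  { $match: { total_qty: { $gt: 500000 } } }
])

Because each player document is never exploded into one document per tournament, no regrouping stage is needed afterwards.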
null
[ "aggregation", "mdbw22-hackathon" ]
[ { "code": "[{$addFields: {\n actorCodes: {\n $concatArrays: [\n [\n '$Actor1Name'\n ],\n [\n '$Actor2Name'\n ]\n ]\n },\n geoCodes: {\n $concatArrays: [\n [\n [\n '$ActionGeo_Long',\n '$ActionGeo_Lat'\n ]\n ],\n [\n [\n '$Actor1Geo_Long',\n '$Actor1Geo_Lat'\n ]\n ],\n [\n [\n '$Actor2Geo_Long',\n '$Actor2Geo_Lat'\n ]\n ]\n ]\n }\n}}, {$addFields: {\n geoCodes: {\n $map: {\n input: '$geoCodes',\n as: 'a',\n 'in': {\n $map: {\n input: '$$a',\n as: 'b',\n 'in': {\n $convert: {\n input: '$$b',\n to: 'double',\n onError: ''\n }\n }\n }\n }\n }\n }\n}}, {$addFields: {\n geoCodes: {\n $filter: {\n input: '$geoCodes',\n as: 'b',\n cond: {\n $and: [\n {\n $ne: [\n {\n $arrayElemAt: [\n '$$b',\n 0\n ]\n },\n ''\n ]\n },\n {\n $ne: [\n {\n $arrayElemAt: [\n '$$b',\n 1\n ]\n },\n ''\n ]\n }\n ]\n }\n }\n }\n}}, {$addFields: {\n points: {\n $map: {\n input: '$geoCodes',\n 'in': {\n geometry: {\n type: 'Point',\n coordinates: '$$this'\n }\n }\n }\n }\n}}, {$project: {\n geoCodes: 0\n}}, {$merge: {\n into: 'eventscsv',\n on: '_id',\n whenMatched: 'replace',\n whenNotMatched: 'discard'\n}}]\n", "text": "OK, so I have the data from downloading but I need to get the geocoding compatible. This works well for me:", "username": "Ilan_Toren" }, { "code": "", "text": "Great tip, @Ilan_Toren ! Thanks so much for sharing!", "username": "webchick" }, { "code": "", "text": "Indeed - thanks for sharing with the community @Ilan_Toren", "username": "Shane_McAllister" }, { "code": "", "text": "Thanks for sharing this @Ilan_Toren . I was trying to use it but I got an error:\nimage1274×197 10.8 KB\nHowever after reading up and removing the empty stage object without a field it worked nicelyThanks again for sharing this. Made me properly read about aggregations in mongoDB for the first time.", "username": "Fiewor_John" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon processing (sharing a tip on geo codes)
2022-05-06T10:12:24.956Z
Hackathon processing (sharing a tip on geo codes)
3,025
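Since the pipeline above $merges GeoJSON Point documents into a points array, a 2dsphere index makes them queryable with geo operators. A small sketch; the collection name eventscsv comes from the $merge stage, the sample coordinates are arbitrary:

// index every GeoJSON geometry stored in the points array
db.eventscsv.createIndex({ "points.geometry": "2dsphere" })
// e.g. find events within ~10 km of a coordinate pair [long, lat]
db.eventscsv.find({
  "points.geometry": { $near: {
      $geometry: { type: "Point", coordinates: [ -73.98, 40.75 ] },
      $maxDistance: 10000
  } }
})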
null
[ "configuration" ]
[ { "code": "", "text": "Hello together,\nmy name is Gregor from Germany and I have a question regarding timestamps. MongoDB stores the LogFiles in collections with a timestamp in UTC. Is there a way to manipulate this timestamp so that directly when the log is created the timestamp for my timezone is used?Greetings from Germany and thanks in advance for the help", "username": "Gregor" }, { "code": "mongoddateiso8601-localsystemLog.timeStampFormattimeStampFormatiso8601-utc", "text": "Welcome to the MongoDB Community @Gregor_Wachter !Can you provide some more details on your environment:By default the MongoDB server should use the local server timezone for logging (iso8601-local format) per systemLog.timeStampFormat.If your host environment is set to something other than UTC, I would check your MongoDB configuration file to see if the timeStampFormat has perhaps been set to iso8601-utc.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Thanks for the quick reply\nI am using MongoCommunity 4.2 locally on a Windows 10 PC. There is nothing in the MongoDB configuration regarding the timestamp format.\nIn the local collection startup_log of the database I get the startTimeLocal in addition to the startTime which is in the correct timezone. Is there any way to manipulate the timestamp for events in a specific database? I have created a database that stores data on a regular basis. Since I export this data from the database later, the correct timestamp would be particularly relevant for me here. My timezone is Germany, Berlin with +0200 compared to UTC", "username": "Gregor" } ]
Change the MongoDB log timestamp from UTC
2022-05-10T08:38:16.021Z
Change the MongoDB log timestamp from UTC
5,035
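For reference, the setting Stennie mentions lives under systemLog in the mongod configuration file. A minimal sketch; iso8601-local is the default and logs in the server's local timezone with its UTC offset:

# mongod.conf
systemLog:
  timeStampFormat: iso8601-local   # or iso8601-utc

If a log shows UTC on a host whose clock is set to Europe/Berlin, an explicit iso8601-utc value here is the first thing to rule out.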
null
[ "atlas-device-sync" ]
[ { "code": "exports = async function(loginPayload) {\n const app_secret = \"somesecret\"\n \n const users = context.services\n .get(\"mongodb-atlas\")\n .db(\"app\")\n .collection(\"UserInfo\");\n\n const { clientUserId, secret } = loginPayload;\n \n if(app_secret !== secret) {\n throw \"Secret mismatch : \" + secret;\n }\n\n const user = await users.findOne({ clientUserId: clientUserId });\n \n if (user) {\n return user._id.toString();\n } else {\n const newId = new BSON.ObjectId();\n const partition = \"user=\" + newId.toString();\n const result = await users.insertOne({ _id: newId, _partition: partition, clientUserId: clientUserId, creationDate: new Date() });\n return result.insertedId.toString();\n }\n};\n\nREAD \n(can only read PUBLIC and your own data such as partitions are \"user=myId123\" or \"PUBLIC\")\n{\n \"$or\": [\n {\n \"%%partition\": \"user=%%user.id\"\n },\n {\n \"%%partition\": \"PUBLIC\"\n }\n ]\n}\n\nWRITE \n(can only write your own data such as partitions are \"user=myId123\")\n{\n \"%%partition\": \"user=%%user.id\"\n}\nError: user does not have permission to sync on partition (ProtocolErrorCode=206)\nPartition: user=607ac1b7687c4dca433c3c5e\n607ac1b7687c4dca433c3c5b607ac1b7687c4dca433c3c5e", "text": "Hello,I’m using a custom authentication function so that a mobile client generates a unique id and logs in with it.My function isMy problem is that the “realm user id”, the one that RealmSDK gives me on mobile client is different from the one I generate in the custom function.And I have set the permissions toHow am I supposed to retrieve this realm user id so that I can set up correctly my custom login function and populate correctly my collection with the right _partition and _id?EDIT: The error I get from Realm Sync Backend isBecause my custom function has set the custom user id to 607ac1b7687c4dca433c3c5b but the actual realm user id is 607ac1b7687c4dca433c3c5e and I have no clue how to fetch this id from the custom login function…", "username": "Jerome_Pasquier" }, { "code": "", "text": "Could You solve the problem? 
I have the same problem…", "username": "Marco" }, { "code": "UserInfo\n _id: ObjectId\n _partition: String\n creationDate: Date\n gameCenterId: Optional String\n appleId: Optional String\n {any_other_login_method_id: Optional String}\nexports = async function(loginPayload) {\n const users = context.services\n .get(\"mongodb-atlas\")\n .db(\"app\")\n .collection(\"UserInfo\");\n \n let findQuery;\n if (loginPayload[\"gameCenterId\"] != null) {\n findQuery = { gameCenterId: loginPayload[\"gameCenterId\"] };\n }\n\n // Check if user already exists using our own GameCenter_id\n const user = await users.findOne(findQuery);\n \n if (user !== null) {\n return user._id.toString();\n } else {\n // Set the partition to a temporary value.\n // A onCreate trigger will replace it with user={InternalRealmId} which we still don't have here because of custom-function internal logic\n const temporaryId = new BSON.ObjectId();\n if (loginPayload[\"gameCenterId\"] != null) {\n const result = await users.insertOne({\n _id: temporaryId,\n _partition: \"temporaryId=\" + temporaryId.toString(),\n creationDate: new Date(),\n gameCenterId: loginPayload[\"gameCenterId\"]\n });\n return result.insertedId.toString();\n }\n }\n};\nexports = async function(authEvent) {\n const user = authEvent.user;\n const realmUserId = user.id; // Now we have the actual realm_id\n const identities = user.identities;\n \n // Update our user's Mongo document's partition with the actual realm_id\n const customIdentity = identities.find(identity => identity.provider_type === \"custom-function\");\n if (customIdentity !== undefined) {\n const userId = new BSON.ObjectId(customIdentity.id);\n const collection = context.services.get(\"mongodb-atlas\").db(\"app\").collection(\"UserInfo\");\n const findUser = await collection.findOne({ _id: userId });\n if (findUser !== null) {\n await collection.updateOne({ \"_id\": userId }, { $set: { \n _partition: \"user=\" + realmUserId.toString()\n } });\n return;\n } else {\n return;\n }\n }\n};\n", "text": "Hi Marco,Yes I found a solution. However, it’s not a straightforward solution, I have to do it in 2 steps:Schema\nI have a collection called UserInfo which I use for saving users.Step 1.\nI set up my “Custom Function Authentication” - by the way I use GameCenter (an iOS game framework which gives me a unique user id).Here is the code.Step 2.\nThen I create a trigger that will always run for every created user with the custom function login. This trigger’s goal is to inject and override the realm_id into the partition of the previous document created.Here is its configuration:And here is my onUserCreated functionAnd voila… I only had a few problems (maybe once per month or so) with the shared cluster and according to Realm support, shared clusters can have very rare trigger failures apparently.", "username": "Jerome_Pasquier" }, { "code": "", "text": "Hi Jerome,You are my hero; thank you so much!You saved my day!!", "username": "Marco" }, { "code": "", "text": "You are welcome (Would be nice if Realm could add a tutorial to their documentation - so that we would finally know the official way.)", "username": "Jerome_Pasquier" } ]
How to set realm id and partition with custom login?
2021-04-17T11:14:52.664Z
How to set realm id and partition with custom login?
3,013
null
[ "swift", "atlas-device-sync" ]
[ { "code": "RealmSwift/SwiftUI.swift:277: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=io.realm Code=10 \"Migration is required due to the following errors:\n- Property 'Tag.newlyAddedText' has been added.\" UserInfo={NSLocalizedDescription=Migration is required due to the following errors:\n- Property 'Tag.newlyAddedText' has been added., Error Code=10}\n2021-06-30 12:59:49.244071+0100 RealmSyncTest[12157:3982699] RealmSwift/SwiftUI.swift:277: Fatal error: 'try!' expression unexpectedly raised an error: Error Domain=io.realm Code=10 \"Migration is required due to the following errors:\n- Property 'Tag.newlyAddedText' has been added.\" UserInfo={NSLocalizedDescription=Migration is required due to the following errors:\n- Property 'Tag.newlyAddedText' has been added., Error Code=10}\n", "text": "I am trying to create add a new required field to an existing realm sync schema. Everything seems to work fine when adding the new field to the schema via the realm sync web portal. I can still add new items into the collection without modifying the client and new items have the field populated with the default value.However, when attempting to add this new field to the client model it throws an error.\nFrom reading the documentation this should be possible as it is an additive change.\nAny idea why it would throw the following error? Or is there a different procedure for adding new required fields to existing schemas?Note, this app uses synced realms only, no local realms.Thanks", "username": "Mishkat_Ahmed" }, { "code": "", "text": "Bump\nRe-visiting this just now and still seeing the same issue on the latest version of the realm-cocoa client (v10.12.0)", "username": "Mishkat_Ahmed" }, { "code": "", "text": "Hi Mishkat,Adding a new required field is considered a non breaking change, however, it does mean that all the pre-existing objects/documents which the new field was added to need to be manually updated with a value for this new field. If this is not done, those documents will no longer be syncable as they will not comply with your new schema that requires the field to have a value.Please see documentation below on this.Regards", "username": "Mansoor_Omar" } ]
Adding a new required field to an existing schema and client
2021-06-30T12:01:23.033Z
Adding a new required field to an existing schema and client
4,072
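One way to do the manual update Mansoor describes is a single backfill in mongosh against the synced Atlas collection. The Tag collection and newlyAddedText field come from the error above; the empty-string default is an assumption:

db.Tag.updateMany(
  { newlyAddedText: { $exists: false } },  // only documents missing the new required field
  { $set: { newlyAddedText: "" } }         // assumed default value
)

After the backfill, every pre-existing document satisfies the schema again and can sync.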
null
[ "node-js", "connecting", "atlas-cluster" ]
[ { "code": "PS C:\\Users\\Victor Davi\\Desktop\\node\\banco> node index.js \nC:\\Users\\Victor Davi\\Desktop\\node\\banco\\node_modules\\mongodb\\lib\\utils.js:417\n throw error;\n ^\n\nMongoServerSelectionError: Hostname/IP does not match certificate's altnames: Host: cluster0-shard-00-00.zberv.mongodb.net. is not in the cert's altnames: DNS:*.mongodb.net, DNS:mongodb.net\n at Timeout._onTimeout (C:\\Users\\Victor Davi\\Desktop\\node\\banco\\node_modules\\mongodb\\lib\\sdam\\topology.js:318:38) \n at listOnTimeout (node:internal/timers:559:17)\n at processTimers (node:internal/timers:502:7) { \n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n servers: Map(3) {\n 'cluster0-shard-00-00.zberv.mongodb.net:27017' => ServerDescription {\n _hostAddress: HostAddress {\n isIPv6: false,\n'use strict';\n\nconst mongodb = require('mongodb').MongoClient;\n\nconst url = \"mongodb+srv://<user>:<pwd>@cluster0.zberv.mongodb.net/test?retryWrites=true&w=majority\";\n\nmongodb.connect(url,(erro,banco)=>{\n\n if(erro)throw erro;\n\n const dbo= banco.db(\"test\");//Documento do banco de dados\n\n const obj = {curso: \"Curso de Node\", canal: \"CFB Cursos\"};//Dados que serão armazenados dentro do banco de dados\n\n const colecao = \"cursos\";\n\n dbo.collection(colecao).insertOne(obj,(erro,resultado)=>{\n\n if(erro)throw erro;\n\n console.log(\"1 novo curso inserido\");\n\n banco.close();\n\n });\n\n});\n", "text": "Hello everyone !I need help, I’m facing the following error.If someone could help me, I will be very happy, I’m trying for 4 days and just got “no positive results”.CODE*", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "Can you connect by shell using your SRV string?\nI tried both SRV and long form string but neither worked\nCan you check status of your cluster in Atlas", "username": "Ramachandra_Tummala" }, { "code": "mongosh", "text": "Hi @Victor_Davi_Almeida - Welcome to the community!To further assist the troubleshooting here, could you provide the following information:Regards,\nJason", "username": "Jason_Tran" }, { "code": "mongosh", "text": "Hello @Jason_TranThank you for having me !Can be some problem with my firewall ? Do you have some other code to I use in order to test ?", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "Thanks for providing that information @Victor_Davi_Almeida!Can be some problem with my firewall ? Do you have some other code to I use in order to test ?At this stage it’s difficult to say for certain what the issue could be. However, there is an example on the MongoDB Node.JS driver quick start page.MongoServerSelectionError: Hostname/IP does not match certificate’s altnames: Host: cluster0-shard-00-00.zberv.mongodb.net. 
is not in the cert’s altnames: DNS:*.mongodb.net, DNS:mongodb.netI believe the issue may be related to certificates just based off this error but it would be good to test if you are able to connect via MongoDB Compass for troubleshooting purposes from the same client that’s receiving this error upon connection.In saying so, can you attempt to connect via MongoDB Compass from the same client and let me know if you are able to connect?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Both Compass & Shell failing to connect\nCould it be with whitelist IP?\nMay be allow access from anywhere temporarily and check the connection\nThen figure out if it is firewall or some other issue", "username": "Ramachandra_Tummala" }, { "code": "", "text": "I think its some problem related with IP, but i dont know why.\nFirst. Could you tell me what is the whitelist IP ?\nI’ve allowed access to anywhere IP and tried to connect, I’ve got the same error.I’ve performed the Node.JS driver quick start page - I’ve got the same error.\nIn compass, I’ve got the same error.", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "I think its some problem related with IP, but i dont know why.\nI’ve allowed access to anywhere IP and tried to connect, I’ve got the same error.\nI’ve performed the Node.JS driver quick start page - I’ve got the same error.\nIn compass, I’ve got the same error.\nIn compass, I switched to access just one host of the cluster, no success too.\nIt seems the Host name is mismatching.Jason, Do I need to have an account in Amazon ? I’m only signing in the Mongo Atlas.\nI’m thinking about deleting the cluster and create a new one.", "username": "Victor_Davi_Almeida" }, { "code": "mongodb://mongodb+srv://", "text": "Hi @Victor_Davi_Almeida,First. Could you tell me what is the whitelist IP ?I believe Ramachandra was referring to the Atlas Network Access List. However, please feel to correct me if i’m wrong here Ramachandra I’ve performed the Node.JS driver quick start page - I’ve got the same error.\nIn compass, I’ve got the same error.Thank you for confirming - Could you provide the following information:The MongoDB Compass utilises the Node.JS driver so it was not surprising to see the same error.Jason, Do I need to have an account in Amazon ? 
I’m only signing in the Mongo Atlas.If you are referring to having an amazon account for resolving this error - No.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I believe Ramachandra was referring to the Atlas Network Access List.Yes i meant that\nBefore you drop and recreate your cluster show us your Atlas cluster details/status\nCan you see your DBs/collections?", "username": "Ramachandra_Tummala" }, { "code": "", "text": "The whilelist already has the IP from my computer, I’ve got it by using \" ipconfig\" in the cmd terminal.", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "See Unable to connect to Atlas cluster using \"Standard Connection String\" format - #2 by steevej", "username": "steevej" }, { "code": "", "text": "Did you try with SRV string again?\nI can connect\nWhat has changed from previous to now?MongoDB Enterprise atlas-bay1mf-shard-0:PRIMARY> db\ntest\nMongoDB Enterprise atlas-bay1mf-shard-0:PRIMARY> show dbs\nadmin 0.000GB\nlocal 1.059GB\nMongoDB Enterprise atlas-bay1mf-shard-0:PRIMARY> exit\nbye", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Bro, I’ve deleted the previous cluster0 and created a new one.\nThe old one had a AWS cloud.\nThis has a Google cloud.\nThis hasn’t sample data base.", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "Other question, Did you connected with the string I provided in the last answer ?\nmongodb://teste:[email protected]:27017/?authSource=admin&replicaSet=myRepl&tls=true\nIf yes, I have a question, The following directory below, I don’t have access to it:\nMongoDB Enterprise atlas-bay1mf-shard-0:PRIMARY> db\ntest\nWhat just appeared to my Atlas are:\ncluster0-shard-00-02.zberv.mongodb.net:27017 (Primary)\nand the “similars” (Secondary)\nDid you change this name ? atlas-bay1mf-shard-0:PRIMARY>\nHow do I visualize this name in Atlas ?\nI need to put this name in my string connection, such as the example below ?\nmongodb://teste:teste@cluster0- atlas-bay1mf-shard-0shard-00-02.zberv.mongodb.net:27017/?authSource=admin&replicaSet=myRepl&tls=true", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "No I did not use your connect string\nI used SRV string format\nWhy do you want to connect by giving primary or secondary name?\nUse SRV string which uses clustername and it automatically connected to primary without using any replset name\nDid you try what Steeve shared I,e long form of string which uses all 3 node names?\nFrom Atlas when you choose connect by shell it will give you connect string SRV or long form depending on shell version you select\nAlso from DIG command you can find out name of replset\nCheck our forum threads", "username": "Ramachandra_Tummala" }, { "code": " lastElectionReason: 'stepUpRequestSkipDryRun',\n lastElectionDate: 2022-04-26T14:16:21.658Z,\n", "text": "Bro, I’ve deleted the previous cluster0 and created a new one.I have serious doubts about that.I would be very surprised that you terminated the cluster cluster0.zberv.mongodb.net and that Atlas provided you with a new cluster with the same name as the cluster you mentioned on your first post on April 27. But I might be wrong, because after all, saru mo ki kara ochiru.Further more, since I alsocan connectand the following rs.status() informationindicates that the cluster you mentioned on April 27 is still up and running since the 26th.So, I am also interested to knowDid you try with SRV string again?What has changed from previous to now?The following will not work. 
Your replica set is NOT named myRepl:\nmongodb://teste:[email protected]:27017/?authSource=admin&replicaSet=myRepl&tls=true\nIt is named atlas-bay1mf-shard-0.", "username": "steevej" }, { "code": "", "text": "Hello Steevej,\n“saru mo ki kara ochiru”. In Portuguese: “Macaco também cai do avião”. I don’t know exactly what this means in Japanese, but I like it! I’ll try to connect with this name: atlas-bay1mf-shard-0.\nMy question is: how can I identify the name?\nBecause the name that appears to me is different.\nProbably I don’t know how to check it. Can you please tell me how I do it? Thanks, and 後になるまで (I believe it’s goodbye)", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "I’ll check the Forum Threads as soon as possible.Thank you for the help you gave me until now, bro.", "username": "Victor_Davi_Almeida" }, { "code": "", "text": "Regarding “Macaco também cai do avião”: the correct word would be Árvore rather than avião. For “Even monkeys fall from trees”, I use that when I completely screw up (to say: sorry, I made a mistake, the monkey is me and I just fell from a tree) or when I assert something I am not 100% sure of.I am sure that you should not “connect with this name: atlas-bay1mf-shard-0”. Use the Connect button from your Atlas cluster and try to use the SRV version. Otherwise specify an older version of the shell and connect using the old way.As for “My question is: how can I identify the name?”, it is a little bit tricky. I cannot really explain it better than SRV record - Wikipedia.My mother tongue is French so I do not know about 後になるまで.", "username": "steevej" } ]
Error: connecting with MongoDB
2022-04-28T01:06:41.090Z
Error: connecting with MongoDB
6,628
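A note on finding the replica set name that steevej and Ramachandra mention: for an Atlas mongodb+srv cluster, the SRV record lists the member hostnames and the TXT record carries the connection options. A sketch using the cluster name from this thread:

dig SRV _mongodb._tcp.cluster0.zberv.mongodb.net +short
dig TXT cluster0.zberv.mongodb.net +short

The TXT record is where a value like replicaSet=atlas-bay1mf-shard-0 comes from, which is why a hand-written replicaSet=myRepl in a long-form connection string fails.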
null
[ "aggregation" ]
[ { "code": "{ \"_id\" : 1, \"Name\" : \"John Aims\", \"Gender\" : \"M\", \"DoB\" : ISODate(\"1990-01-01T00:00:00Z\"), \"Nationality\" : \"USA\", \"Hand\" : \"R\", \"YearTurnedPro\" : 2010, \"Tournament\" : [ { \"tournamentID\" : 1, \"TournamentYear\" : 2016 }, { \"tournamentID\" : 2, \"TournamentYear\" : 2019 }, { \"tournamentID\" : 3, \"TournamentYear\" : 2021 } ] }\n{ \"_id\" : 2, \"Name\" : \"George Brown\", \"Gender\" : \"M\", \"DoB\" : ISODate(\"1997-03-04T00:00:00Z\"), \"Nationality\" : \"GB\", \"Hand\" : \"L\", \"YearTurnedPro\" : 2013, \"Tournament\" : [ { \"tournamentID\" : 2, \"TournamentYear\" : 2016 }, { \"tournamentID\" : 5, \"TournamentYear\" : 2019 } ] }\n{ \"_id\" : ObjectId(\"626c18a3d880647a888888ff\"), \"TournamentID\" : 1, \"TournamentCode\" : \"GS1\", \"Position\" : 8, \"PrizeMoney\" : 125000, \"RankingPoints\" : 250 }\n{ \"_id\" : ObjectId(\"626c18c2d880647a888888ff\"), \"TournamentID\" : 2, \"TournamentCode\" : \"GS1\", \"Position\" : 4, \"PrizeMoney\" : 250000, \"RankingPoints\" : 500 }\n{ \"_id\" : ObjectId(\"626c18ddd880647a888888ff\"), \"TournamentID\" : 3, \"TournamentCode\" : \"GS1\", \"Position\" : 1, \"PrizeMoney\" : 1000000, \"RankingPoints\" : 2000 }\ndb.Player.aggregate([\n {\"$unwind\" : \"$Tournament\"}, \n{\"$lookup\":\n{\"from\":\"Tournament\",\n\"localField\":\"Tournament.tournamentID\",\n\"foreignField\":\"TournamentID\",\n\"as\":\"Tennis-player\"}},\n { \"$group\": {\n \"_id\": { Name:\"$Name\" },\n \"total_qty\": { \"$sum\": \"$Tennis-player.PrizeMoney\" }\n }}\n])\n", "text": "Player collection:Tournament collection:1st Question:Hello, I want to get the sum of ranking points of each player.I have tried:But I get for every played the sum is 0.I can show it on playground as it is using more than 1 collection.2nd question:Would it be better to create only 1 collections with all the data?", "username": "Eneko_Izaguirre_Martin" }, { "code": "", "text": "I clicked the because:ThanksThe only issue I see at first glance is that Tennis-player is an array and that you may need to use $reduce before you $sum.", "username": "steevej" } ]
Get the sum after the $unwind and $lookup returns 0
2022-05-10T21:20:12.881Z
Get the sum after the $unwind and $lookup returns 0
2,839
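A sketch of applying steevej's $reduce advice to the pipeline above. Since $lookup with an array-valued localField already matches each embedded tournamentID, the $unwind/$group pair can be dropped entirely. All names are the thread's own; RankingPoints follows the stated goal, swap in PrizeMoney for money totals:

db.Player.aggregate([
  { $lookup: {
      from: "Tournament",
      localField: "Tournament.tournamentID",  // array of embedded values
      foreignField: "TournamentID",
      as: "Tennis-player"
  } },
  { $project: {
      Name: 1,
      // sum over the joined array instead of regrouping unwound documents
      total_points: { $reduce: {
          input: "$Tennis-player",
          initialValue: 0,
          in: { $add: [ "$$value", "$$this.RankingPoints" ] }
      } }
  } }
])

On the second question: embedding the per-tournament results directly in the player document (as the PlayerTournament collection in the related thread does) removes the $lookup altogether, which is usually the better model when results are always read together with the player.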
null
[ "queries", "transactions" ]
[ { "code": " var A = a.Find(session, f => f.UserId == 1).FirstOrDefault();\n\n var C = c.Update(session, f => f.x == 1, {x: 111});\n // <<<<----- For example , here another system has already replaced 50 by 100 , and we still have 50 and the conditions are met , although it is no longer correct --- >>>\n if (A.Plamp == 50)\n {\n session.commitTransaction();\n }\n", "text": "Hello. How to work with transactions that rely on reading? It turns out that the read operations are not blocking.Example I should commit Transaction if ‘A’ = 50 but ‘A’ can change from other outside system\\thread.How to write such a transaction correctly?", "username": "alexov_inbox" }, { "code": "", "text": "up\nI read your blog article about transactions , but I didn 't find a solution for my situation", "username": "alexov_inbox" }, { "code": "A.PlampPlampvar C = c.Update(session, f => f.x == 1, {x: 111, Plamp: 50});\nPlampWriteConflict", "text": "Hi @alexov_inbox,How to work with transactions that rely on reading? It turns out that the read operations are not blocking.As you have discovered in the case above, read operation does not lock a document from being modified. Worth mentioning that in the code snippet above the condition check for A.Plamp is performed on the client side and not on the database server side.In the above example, one way to invalidate the update operation on C is to insert Plamp into C. For example:If the value of Plamp has been updated to 100, the transaction will throw a WriteConflict. The client application would have to handle this error and apply a retry logic appropriately. See also Drivers API: Transactions for more information.Regards,\nWan.", "username": "wan" }, { "code": " var A = a.Find(session, f => f.UserId == 1).FirstOrDefault();\n\n var C = c.FindAndUpdate(session, f => f.x == 1, {x: 111, Plamp: A.Plamp});\n\n if (C.Plamp != 50)\n {\n session.AbortTransaction();\n }\n", "text": "Thx for answer.You say about throw “abort transaction” by hand on client side check after update(and update change to findAndUpdate)?Do I understand correctly that you are talking about this option?", "username": "alexov_inbox" }, { "code": "A.PlampWriteConflictif (C.Plamp != 50)PlampC", "text": "Hi @alexov_inbox,You say about throw “abort transaction” by hand on client side check after update(and update change to findAndUpdate)?Not quite. By including A.Plamp value in collection C, the application then needs to handle WriteConflict error that would be thrown by the server. The application should not need to perform a client-side conditional check anymore i.e. the if (C.Plamp != 50)here another system has already replaced 50 by 100 , and we still have 50 and the conditions are metDo the other system will also update the value of Plamp in collection C ? This is the significant part to create the conflict.Regards,\nWan.", "username": "wan" } ]
How to work with transactions that rely on reading?
2022-04-25T07:56:30.299Z
How to work with transactions that rely on reading?
2,796
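A mongosh sketch of the pattern wan describes: write the value that was read into the document in C inside the transaction, so that a concurrent writer touching the same document produces a WriteConflict instead of a commit based on stale data. Collection and field names (a, c, UserId, x, Plamp) are taken from the thread; the database name and retry shape are assumptions:

const session = db.getMongo().startSession();
try {
  session.startTransaction();
  const sdb = session.getDatabase("test"); // assumed database name
  const A = sdb.a.findOne({ UserId: 1 });
  // Including Plamp in the write is what makes two concurrent
  // writers of this document conflict.
  sdb.c.updateOne({ x: 1 }, { $set: { x: 111, Plamp: A.Plamp } });
  session.commitTransaction();
} catch (e) {
  session.abortTransaction();
  // A WriteConflict lands here; apply application-level retry logic.
}

As wan notes at the end of the thread, the other writer must also update the same document in C for the conflict to occur.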
null
[ "database-tools", "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "mongoimport.shgdeltools./mongoimport.sh --uri \"MY CONNECTION STRING\"", "text": "I’m following one of the sessions for the hackathon and trying to use the mongoimport.sh script that is in available via gdeltools to put the data in my cluster , but nothing is happening after I run the command in my terminal, and my database is still empty meaning no data was imported into it.I’m using a Windows PC and running the command in Powershell (in Visual Studio Code).\nHere’s the command I’m running: ./mongoimport.sh --uri \"MY CONNECTION STRING\"", "username": "Fiewor_John" }, { "code": "", "text": "@Mark_Smith kindly help", "username": "Fiewor_John" }, { "code": "", "text": "I used the script today and this is how it worked for meBtw I enjoyed your presentation", "username": "Ilan_Toren" }, { "code": "", "text": "Hey @Fiewor_John - sorry to hear you’ve been having trouble with this. The mongoimport.sh script is written to run on Linux - so you could run it in WSL on Windows, if you have that set up. Otherwise I can look at translating it to Powershell - I’ll have a look at this today, as I’m sure you’re not the only one who needs this.", "username": "Mark_Smith" }, { "code": "mongoimport.shmongoimport.sh", "text": "It’s also worth noting that you need to have the .CSV files in your current directory when running the script.Earlier versions of mongoimport.sh required you to have the CSV files in the same directory as mongoimport.sh, but as long as you have checked out the latest copy, you shouldn’t need that any more, but the first line of this comment is still true.", "username": "Mark_Smith" }, { "code": "make", "text": "Thank you, but the make command (which I have just installed) isn’t working as expected\nI’ll use WSL as Mark suggested.\nAlso, thank you for the comment about my presentation ", "username": "Fiewor_John" }, { "code": "gdeltloadergdelttools", "text": "@Mark_Smith I keep getting this error while trying to use gdeltloader on my Ubuntu (on WSL)\nimage1057×218 13.4 KB\nI’ve tried uninstalling and re-installing gdelttools but nothing’s changed.My Python version is 3.8.10 and Ubuntu 20.04.4 LTS.CC @Ilan_Toren", "username": "Fiewor_John" }, { "code": "", "text": "Look at the github for gdelttools. Perhaps if you pip in the dependency your code will run.pip instaYou can confirm your install withgdeltloader --version", "username": "Ilan_Toren" }, { "code": "pip install", "text": "Thanks for trying to help. Unfortunately, nothing is still working.\nI think you meant pip install ?\nThank you once again", "username": "Fiewor_John" }, { "code": "", "text": "Well, I am a macos user, and since I’m not using the system default python 2.0 and can’t replace it completely I have python3 and pip3 for my work code. Sometime around Catalina, I tried to dump 2.x and I had to make a speedy retreat. I only really use python in combination with Compass or with tensorflow anyway.", "username": "Ilan_Toren" }, { "code": "", "text": "Thanks for letting us know about this @Fiewor_John. This is a bug that’s been introduced in a recent version of gdeltloader - it looks like it no longer works with Python 3.8.I’ll fix the bug, but it’ll take a little while to publish. In the meantime, you can update to Python 3.10 to fix this problem. I’m sorry about this!My friend Ben has described how to use the deadsnakes repo to install different versions of Python@Mark_Smith", "username": "Mark_Smith" }, { "code": "", "text": "Thank you. 
Updating to Python 3.10 as suggested worked.", "username": "Fiewor_John" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Problem with mongodbimport script
2022-05-05T16:06:51.104Z
Problem with mongodbimport script
4,155
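For anyone else on Windows, Mark's WSL suggestion looks like this in practice. A sketch; it assumes the gdelttools checkout and the downloaded .CSV files are in the current directory, and the connection string is a placeholder:

wsl bash ./mongoimport.sh --uri "mongodb+srv://user:[email protected]/gdelt"

Per Mark's note above, the script must be run from the directory containing the CSV files.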
null
[ "dot-net", "xamarin", "objective-c" ]
[ { "code": "at Realms.Sync.SyncConfiguration.CreateRealmAsync (System.Threading.CancellationToken cancellationToken) [0x00173] in <8705", "text": "A user reported the error “Code 2” today which I haven’t come across before. The stack trace was;at Realms.Sync.SyncConfiguration.CreateRealmAsync (System.Threading.CancellationToken cancellationToken) [0x00173] in <8705There are a number of similar articles however none appeared to relate to this. Has anyone come across this error before?Note: The application is written in c# not objective-c.", "username": "Raymond_Brack" }, { "code": "", "text": "Hey Raymond, the stacktrace seems to be cut off - can you post the entire thing, including the exception type and message? It’s not something I recall seeing, but I hope the stacktrace will offer us some pointers.", "username": "nirinchev" } ]
Code 2 Exception
2022-05-10T22:26:21.447Z
Code 2 Exception
2,678
null
[]
[ { "code": "", "text": "Follow instructions explicitlykye-mgmt02:~ # rpm --import https://www.mongodb.org/static/pgp/server-4.2.asc\ncurl: (60) SSL certificate problem: self signed certificatecurl failed to verify the legitimacy of the server and therefore could not\nestablish a secure connection to it. To learn more about this situation and\nhow to fix it, please visit the web page mentioned above.\nerror: https://www.mongodb.org/static/pgp/server-4.2.asc: import read failed(2).\n–In fact all of the other version keys fail with the same error\nHow to install the Mongodb public key?\nThe system should ask to trust, but it does not. Any suggestions?>", "username": "John_Goutbeck" }, { "code": "", "text": "rt https://www.mongodb.org/static/pgp/server-4.2.ascI got mongodb installedThe cli would not let me to trust the self-signed cert\nBUT yast did and then I could continue to install mongodbinteresting, funny (not very) frustrating and finally great joy.have a good day.", "username": "John_Goutbeck" } ]
Public key install - curl: (60) SSL certificate problem: self signed certificate
2022-05-04T21:10:28.086Z
Public key install - curl: (60) SSL certificate problem: self signed certificate
2,871
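A curl: (60) self-signed certificate error during rpm --import usually means a TLS-intercepting proxy or a missing CA bundle on the host. A sketch of a two-step workaround that keeps certificate verification on; the CA file path is an assumption for your environment:

curl --cacert /path/to/corporate-ca.pem -fsSL \
  https://www.mongodb.org/static/pgp/server-4.2.asc -o server-4.2.asc
sudo rpm --import ./server-4.2.asc

Importing from a local file sidesteps rpm's own download, which is the step that failed above.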
null
[ "data-modeling", "mdbw22-hackathon" ]
[ { "code": "", "text": "I can’t form a team in this Hackathon, due to time limitations. But I like to share this idea and if a team likes to pick up I’d be glad to support you when it comes to data architecture / schema design. The question is whether a correlation can be found between news and the price of a digital currency.\nIn the simplest case, for example, the frequency of the mentioning of bitcoin and the bitcoin price.\nLooking at the tone of the news there might be further indicators or grouping criteria.analytics + various3open", "username": "michael_hoeller" }, { "code": "", "text": "A super idea @michael_hoeller - thanks for sharing! Hopefully someone will snap it up…(and if it works, I want a share in the action!!)", "username": "Shane_McAllister" }, { "code": "", "text": "Hi @michael_hoeller and @Shane_McAllister ,\nI really like the idea and I hope to work on it.Thanks", "username": "Crist" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Looking for Hackers: Can news indicate a stock price change
2022-05-04T05:33:46.772Z
Looking for Hackers: Can news indicate a stock price change
2,730
https://www.mongodb.com/…e9cadcf1bddb.png
[ "mdbw22-hackathon" ]
[ { "code": "Staff Developer AdvocateSenior Developer Advocate", "text": "In this session, Staff Developer Advocate Nic Raboy shares the progress of his News Browser Web App that he is building alongside all our hackathon participants.Join us, it will be fun and you will learn too! What’s not to like!!We will be live on MongoDB Youtube and MongoDB TwitchStaff Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "We’re live now!!", "username": "Shane_McAllister" }, { "code": "", "text": "If you, like me, have missed a few of these livestreams you can find the code at GitHub - mongodb-developer/mongodb-world-2022-hackathon: Hacking with the GDELT DatasetCheck it out!\n\nWebsite with a geographic heatmap and list of articles1262×746 76.5 KB\n", "username": "webchick" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Fun hack.....Tuesday? Yes!
2022-05-10T16:03:32.430Z
Fun hack&hellip;..Tuesday? Yes!
3,175
null
[]
[ { "code": "[\n {\n \"baseAnalyzer\": \"lucene.keyword\",\n \"charFilters\": [],\n \"name\": \"lowerCaseAnalyzer\",\n \"tokenFilters\": [\n {\n \"type\": \"lowercase\"\n }\n ],\n \"tokenizer\": {\n \"type\": \"keyword\"\n }\n }\n]\n{\n \"baseAnalyzer\": \"lucene.keyword\",\n \"name\": \"testsearchanalyser\"\n }\n", "text": "We tried to create custom analyzer like below in UI,referred the analyzer in index, during index rebuild it throw below error\nYour index could not be built: references invalid analyzer “lowerCaseAnalyzer” that has the following error: unrecognized fields [“charFilters”, “tokenFilters”, “tokenizer”]Define analyzer is accepting only below formatCan any one share the schema to use for custom analyzer", "username": "krishna_kommuri" }, { "code": "{\n \"collectionName\": \"myCollection\",\n \"database\": \"myDatabase\",\n \"name\": \"myIndexName\",\n \"analyzer\": \"myAnalyzer\",\n \"analyzers\": [\n {\n \"name\": \"myAnalyzer\",\n \"charFilters\": [],\n \"tokenizer\": {\n \"type\": \"nGram\",\n \"minGram\": 3,\n \"maxGram\": 7\n },\n \"tokenFilters\": []\n }\n ],\n \"mappings\": {\n \"dynamic\": false,\n \"fields\": {\n \"label\": [\n {\n \"type\": \"string\",\n \"analyzer\": \"myAnalyzer\",\n }\n ]\n }\n }\n} \n", "text": "Hi,We have the same problem and we’re unable to create a custom analyzer in any way. We tried what the OP did, it doesn’t surprise us that much that it doesn’t work because the fields “charFilters”, “tokenFilters” and “tokenizer” are not defined in the API documentation (https://docs.atlas.mongodb.com/reference/api/fts-analyzers-update-all/). The API let us put these fields, we then referred the analyzer in the index definition and it fails to build.We don’t understand the PUT API because the customer analyzers seem to be defined in the index definition (https://docs.atlas.mongodb.com/reference/atlas-search/analyzers/custom/) when creating a new index, but again, call the POST endpoint as in the documentation returns “Invalid attribute analyzers specified”… and that’s true ! the API documentation nowhere mentions the field “analyzers” Here what we tried:So, how do we create a custom analyzer ?", "username": "mleboulaire" }, { "code": "", "text": "Hi,\nI am having the same trouble. Were u able to get it to work?Thanks,\nSupriya", "username": "Supriya_Bansal" }, { "code": " {\n name: VALUE_MATCHING,\n mappings: {\n dynamic: false,\n fields: {\n values: {\n type: FieldTypes.DOCUMENT,\n dynamic: false,\n fields: {\n value: {\n type: FieldTypes.STRING,\n analyzer: \"englishStemmer\",\n searchAnalyzer: \"englishStemmer\",\n },\n },\n },\n },\n },\n analyzers: [\n {\n name: \"englishStemmer\",\n tokenizer: {\n type: TokenizerTypes.STANDARD,\n },\n tokenFilters: [\n {\n type: TokenFilterTypes.LOWERCASE,\n },\n {\n type: TokenFilterTypes.SNOWBALL_STEMMING,\n stemmerName: StemmerName.ENGLISH,\n },\n ],\n },\n ],\n }\n", "text": "Hi same problem I want to create analysers along side my indexes by hitting the APIe.g.The PUT endpoint for creating analysers doesn’t seem to support the same depth of customisation", "username": "Stuart_Clark" }, { "code": "", "text": "Hi,\nis this problem already fixed?\nOn our side, we still have this issue.It worked when we create the analyzer directly in the index definition", "username": "Kevin_Zilke" } ]
Atlas Search Custom Analyzer
2021-01-29T19:25:59.318Z
Atlas Search Custom Analyzer
4,112
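Kevin's observation matches the documented Atlas Search format: custom analyzers are declared inline in the index definition under an analyzers array, not through a separate analyzer endpoint. A sketch that rebuilds the OP's lowerCaseAnalyzer this way; the field name SomeField is a placeholder:

{
  "mappings": {
    "dynamic": false,
    "fields": {
      "SomeField": { "type": "string", "analyzer": "lowerCaseAnalyzer" }
    }
  },
  "analyzers": [
    {
      "name": "lowerCaseAnalyzer",
      "charFilters": [],
      "tokenizer": { "type": "keyword" },
      "tokenFilters": [ { "type": "lowercase" } ]
    }
  ]
}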
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "If the data is imported successfully i am uncomment the mongod.conf file data and again iam check the data in mongodb database it’s not there. Why what happened i dont. Can anyone solve this problem?", "username": "devops_learning" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB: I have a MongoDB server; without uncommenting the (server authorization) setting in the mongod.conf file I am not able to import the data into the MongoDB database
2022-05-10T16:23:56.383Z
MongoDB: I have a MongoDB server; without uncommenting the (server authorization) setting in the mongod.conf file I am not able to import the data into the MongoDB database
2,308
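The usual cause of data "disappearing" after enabling authorization is connecting without credentials: with security.authorization enabled, an unauthenticated shell simply cannot list the databases, even though the data is still on disk. A sketch of the relevant setup; user name, password, and roles are placeholders:

// before enabling authorization, create an administrative user
use admin
db.createUser({
  user: "admin",
  pwd: "changeMe",
  roles: [ { role: "root", db: "admin" } ]
})
// then, with authorization enabled in mongod.conf, reconnect with:
//   mongosh -u admin -p changeMe --authenticationDatabase admin

If mongoimport is used, the same credentials go on its command line (-u, -p, --authenticationDatabase).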
null
[ "react-native" ]
[ { "code": "", "text": "I’m new to MongoDB and Realm sync. I currently working on a React Native project and I want to be able to store app data to cloud , but if the data is deleted in the app, the cloud data remains untouched. It appears that in order to allow writing to Realm Sync, you must also enable reading…in other words no write-only. Is there another way I’m missing?Thanks,\nLes", "username": "Les_Woodland" }, { "code": "", "text": "Hey, sorry I am a little bit confused about what you are trying to do. Are you trying to have an insert/update only mode where deletes are not synced? Or are you trying to have only writing enabled and no reading (it seems like your conclusion is this but im not following your example). Do you mind elaborating a bit more on what you are trying to do?", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks,for replying. If I write data to client Realm, it syncs to Atlas. If I delete that data on the client, then Atlas data is deleted, as you would expect. But I want the Atlas data to be persistent…immutable in fact.", "username": "Les_Woodland" }, { "code": "", "text": "Thanks,for replying. If I write data to client Realm, it syncs to Atlas. If I delete that data on the client, then Atlas data is deleted, as you would expect. But I want the Atlas data to be persistent…immutable in fact.", "username": "Les_Woodland" }, { "code": "", "text": "Got it. Technically what you are talking about doing is opting in and out of synchronization which breaks a few of the underlying assumptions of sync.We have heard this as a point of feedback though, especially in terms of having an insert-only collection where changes sync to atlas and then no changes to that object will ever be synced down. We are planning a project in the coming months to add this “async-insert” functionality, so let me know if that sounds like it fits your needs.", "username": "Tyler_Kaye" }, { "code": "", "text": "Tyler, simply, I want to delete a document in the local realm, and not have that document deleted in the Atlas.\nThanks,\nLes", "username": "Les_Woodland" }, { "code": "", "text": "Hi, currently there is no official way to do this because sync ensures that all of the data matching your query is syncing to the device. As a workaround, I would recomend using flexible sync and having a field in your document called “evicted”. Then you can sync down your query (for example \"status==‘urgent’ && priority > 10 && evicted==false). Then, when you want to remove the object from the device, you can just set evicted to true and it will be removed.We may in the future look into having this as an officially supported feature, but doing so could cause major scalability concerns as it would likely involve a lot of book keeping to be done by the server for every object being synced to every connected client. So, currently the workaround would be to just tighten your subscription to remove documents you no longer care about.Let me know if that works,\nTyler", "username": "Tyler_Kaye" }, { "code": "", "text": "Thank you, that was very helpful.\nLes", "username": "Les_Woodland" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
One way syncing
2022-04-03T21:33:07.150Z
One way syncing
3,031
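Since the thread is tagged react-native, here is a sketch of what Tyler's “evicted” workaround looks like in Realm JS with flexible sync; the Item class and everything besides the evicted field are placeholders:

// subscribe only to non-evicted objects
const active = realm.objects("Item").filtered("evicted == false");
await realm.subscriptions.update((mutableSubs) => {
  mutableSubs.add(active, { name: "active-items" });
});
// "delete" locally without deleting in Atlas: flip the flag, and the
// object falls out of the subscription on the next sync
realm.write(() => { item.evicted = true; });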
https://www.mongodb.com/…24db47dacc31.png
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "We are so close to getting submissions now, we should really let you know what’s potentially in-store for you for all your hard work!So, drumroll please! As long as your project uses MongoDB & the GDELT dataset in some way, and it works and is original, you’ll be eligible for prizes.Submission isProjects will be judged on three simple criteria: Is it creative, is it well designed, and is it well made?So feel free to take your projects in exciting directions and take this opportunity to satisfy your inquisitive mind!Project submission form will be available shortly", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Announcing the Hackathon Prizes!
2022-05-10T15:37:15.710Z
Announcing the Hackathon Prizes!
2,949
null
[ "aggregation", "queries", "node-js" ]
[ { "code": "mainID return this.aggregate([\n {\n $lookup: {\n from: 'CollectionB',\n localField: 'mainId',\n foreignField: 'mainId',\n as: 'validd',\n },\n },\n {\n $match: {\n 'validd.mainId': { $exists: false },\n },\n },\n ]);\ncreatedAt$gte$lte$match{\n $match: {\n 'validd.createdAt': { $gte: moment.utc().add(-90, \"days\") },\n },\n},\n{\n _id: 227dd33c5c51c79f63743da3\n mainId: 5c306562-9c87-48dd-93ca-4a118be50490\n createdAt: 2022-05-07T02:28:12.537+00:00\n},\n{\n _id: f3ddd33c5c51c79f63743da3\n mainId: 5c306562-9c87-48dd-93ca-4a118be50490\n createdAt: 2022-05-10T02:28:12.537+00:00\n},\n{\n _id: 227dd33c5c51c79f63743da3\n mainId: 5c306562-9c87-48dd-93ca-4a118be50490\n createdAt: 2022-01-01T02:28:12.537+00:00\n}\ncreatedAtreturn this.aggregate([\n {\n $lookup: {\n from: 'CollectionB',\n localField: 'mainId',\n foreignField: 'mainId',\n as: 'validd',\n },\n },\n {\n $match: {\n $expr: {\n $or: [\n {\n $eq: [\"$validd\", []]\n },\n {\n $and: [\n {\n $lt: [ \"validd.createdAt\", moment.utc().add(-interval, \"days\").format('YYYY-MM-DD') ]\n },\n {\n $ne: [\"validd\", null]\n }\n ]\n }\n ]\n }\n },\n },\n ]);\n", "text": "The code below returns me all data that are present in CollectionA but NOT in CollectionB . (Using the mainID as reference).But now I need to add another filter . I also need to get data where the field createdAt is greater than X days.\nin other words: Records that have more than X days of life.Tried using $gte and $lte inside $match but didn’t work.Here is my database:To clarify. I need to return data if:Also tried this, but didnt work. It returns data only when the first condition is meet.Any helpp?", "username": "Alan" }, { "code": "$eq: [\"$validd\", []]$ne: [\"validd\", null]$ne : [ \"$validd\" : [] ]\nmongosh> c.find()\n{ _id: 0, validd: [] }\n{ _id: 1, validd: [ 10 ] }\n\n// the empty array is also matched\nmongosh> c.aggregate( { \"$match\" : { \"$expr\" : { \"$ne\" : [ \"$validd\" , null ] }}})\n{ _id: 0, validd: [] }\n{ _id: 1, validd: [ 10 ] }\n\n// but with you only get the non-empty array, which is inline with your intent\nmongosh> c.aggregate( { \"$match\" : { \"$expr\" : { \"$ne\" : [ \"$validd\" , [] ] }}})\n{ _id: 1, validd: [ 10 ] }\nmatch_stage = { \"$match\" : {\n \"$or\" :\n [\n { \"validd\" : [] } ,\n {\n \"validd.createdAt\" : { \"$lt\" : moment.utc().add(-interval, \"days\").format('YYYY-MM-DD') }\n }\n ]\n} }\n", "text": "At first sight I see a few things.You correctly use the dollar sign in$eq: [\"$validd\", []]but you completely forgo it in the other part of the query.Even with dollar sign, the following will not do what you want.$ne: [\"validd\", null]I think it should be:because an empty array will match the expression “$ne : [ “$validd” , null ]”, see:This being said about $ne:[validd,null], I do not think you really need it since you are querying validd.createdAt. 
That last part of the query should only be true if validd contains at least one element with the field createdAt matching your $lt.I do not see anything in your query that requires $expr, so I would try to simplify and do:Untested:", "username": "steevej" }, { "code": "name: \"first\"\nmainId: \"03d36f3e-535e-4074-aefa-f4b09fa5eba2\"\ncreatedAt: 2022-05-10T11:38:08.284+00:00\n\nname: \"second\"\nmainId: \"13d36f3e-535e-4074-aefa-f4b09fa5eba2\"\ncreatedAt: 2022-05-10T11:38:13.284+00:00\n\nname: \"third\"\nmainId: \"23d36f3e-535e-4074-aefa-f4b09fa5eba2\"\ncreatedAt: 2022-05-10T11:38:46.284+00:00\nname: second\nmainId: \"13d36f3e-535e-4074-aefa-f4b09fa5eba2\"\ncreatedAt: 2022-05-10T11:38:13.284+00:00\n\nname: third\nmainId: \"23d36f3e-535e-4074-aefa-f4b09fa5eba2\"\ncreatedAt: 2022-05-04T11:38:13.284+00:00\nfirstthirdCreatedAt = 2022-05-04secondFirstcreatedAtinterval", "text": "Hi, Steeve. Thank you very much for your explanation.I tried your snippet, and I still get the same results.Collection A:Collection B:Then I run the search using an interval of 5 days\nWe can see that:But the output of the query only returns me the first condition. Only First because it doesn’t exists on CollectionB. But I also need to return those that exists but have a createdAt longer than my interval.", "username": "Alan" }, { "code": "momentJSnew Date$match: {\n $or: [\n { \"eligible\": [] },\n { \"eligible.createdAt\": { $lt: new Date(interval) } }\n ]\n}\n", "text": "Sorry for adding another reply (can’t edit previous one).Our solution worked with an extra.Not sure WHY momentJS wasn’t able to do the job.\nI had to add a new Date:", "username": "Alan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to lookup and filter by date?
2022-05-10T12:07:50.266Z
How to lookup and filter by date?
14,161
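A note on the thread's closing question: the likely reason the moment object failed is that it is not a native Date, which is what the driver serializes for BSON date comparisons against createdAt. A sketch that also computes the cutoff from the day interval; collection, field, and alias names come from the thread:

// 'interval' is a number of days, as in the thread
const interval = 90;
const cutoff = new Date(Date.now() - interval * 24 * 60 * 60 * 1000);
db.CollectionA.aggregate([
  { $lookup: { from: "CollectionB", localField: "mainId", foreignField: "mainId", as: "validd" } },
  { $match: { $or: [
      { validd: [] },                          // no match in CollectionB at all
      { "validd.createdAt": { $lt: cutoff } }  // or matched, but older than the cutoff
  ] } }
])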
null
[ "swift", "atlas-device-sync", "developer-hub" ]
[ { "code": "", "text": "I’ve just published a post on using Maps + location with SwuftUI.Embedding Apple Maps and location functionality in SwiftUI apps used to be a bit of a pain. It required writing your own SwiftUI wrapper around UIKit code. Things have got easier for maps at least.iOS14 introduced the Map SwiftUI view (part of Mapkit) allowing you to embed maps directly into your SwiftUI apps without messy wrapper code.This article shows you how to embed Apple Maps into your app views using Mapkit’s Map view. We’ll then look at how you can fetch the user’s current location—with their permission, of course!Finally, we’ll see how to store the location data in Realm in a format that lets MongoDB Realm sync it to MongoDB Atlas. Once in Atlas, you can add a geospatial index and use MongoDB Charts to plot the data on a map—we’ll look at that too.", "username": "Andrew_Morgan" }, { "code": "", "text": "Interested in this @Andrew_Morgan and thanks for the post.Is there any way (or future plans) to run geospatial queries directly on a realm?", "username": "Rob_Elliott" } ]
New article – Using Maps and Location Data in Your SwiftUI (+Realm) App
2021-07-12T12:58:20.994Z
New article – Using Maps and Location Data in Your SwiftUI (+Realm) App
3,870
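To make the synced location data queryable once it lands in Atlas (the article's geospatial-index step), a 2dsphere index enables server-side geo queries. A sketch; the collection and field names are placeholders:

db.locations.createIndex({ location: "2dsphere" })
db.locations.find({
  location: { $near: {
      $geometry: { type: "Point", coordinates: [ -0.1276, 51.5072 ] },
      $maxDistance: 5000  // metres
  } }
})

On Rob's question: the query side shown here runs in Atlas rather than on the Realm file itself.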
null
[]
[ { "code": "[{\"path\":\"/main.66c216826303f93d.js\",\"attrs\":[{\"name\":\"Content-Type\",\"value\":\"application/x-javascript\"},{\"name\":\"Content-Encoding\",\"value\":\"gzip\"}]}]", "text": "HiI use the hosting feature of Realm to store my single page (Angular) application which is hooked up and synchronised to my GitHub account. As part of my deployment process, I zip up one of the files and change the meta data in the file hosting/metadata.json e.g.[{\"path\":\"/main.66c216826303f93d.js\",\"attrs\":[{\"name\":\"Content-Type\",\"value\":\"application/x-javascript\"},{\"name\":\"Content-Encoding\",\"value\":\"gzip\"}]}]This all works fine. However, if I make a change in the UI and deploy to GitHub whilst Automatic Deployment is enabled the hosted file (in this case main.66c216826303f93d.js) unzips itself and therefore breaks the website as the browser tried to unzip the file again.It appears related to this issue: Realm app auto deployment - not deploying Rules, Schema (but is deploying Functions, Triggers, etc.) - #3 by Christopher_Roth which was fixed a while back but possibly not for the zipped file use case.Anyone else having similar problems?", "username": "ConstantSphere" }, { "code": "", "text": "@Mansoor_Omar just wondered if this is something you could check out for me? thanks", "username": "ConstantSphere" }, { "code": "", "text": "Hi Simon,Could you elaborate a bit more on the workflow/steps.\nIf i understand correctly is this what you’re doing?:Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Hi Manny,thanks for taking a look. To be more precise with my situation, it works like this…Hopefully that explains a bit more clearly what is happening. I suspect that the code that syncs with GitHub zips and unzips the code but it should only unzip down to one level.If you could raise this as a bug that would be much appreciated. Many thanks.", "username": "ConstantSphere" }, { "code": "", "text": "Hi Simon,Thanks for that, after testing a few things I’ve raised this internally with all my discoveries.It seems the issue is with the /hosting/metadata.json which is coming up blank on the cloud app but has a value in the github repo.When you add the content-encoding attribute in the github repo and commit, it updates the cloud app state with a value for the metadata.json file but the /files folder on the cloud goes missing. This gets corrected when you do step 10 but you end up with a blank metadata.json file again on the cloud state. In fact if instead of removing the content encoding attribute in step 10, you were to make a different change in the UI like updating a function it would result in error as per my testing because there is a difference in the hosting directory between the cloud app and github repo.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Thank you for investigating and raising it. Much appreciated.", "username": "ConstantSphere" } ]
Zipped Hosted Files Unzip after sync to GitHub
2022-05-01T14:28:39.721Z
Zipped Hosted Files Unzip after sync to GitHub
2,366
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Hey everybody!If you want to import CAMEO data into your database to use with the GDELT data, check out the recording of today’s stream.I’ve cloned a GitHub repo containing the Cameo data so that I could fix up the CSV file, and I’ve added an import script to get the data into your MongoDB database.Happy Hacking!@Mark_Smith", "username": "Mark_Smith" }, { "code": "", "text": "", "username": "Stennie_X" } ]
CAMEO data for import
2022-05-10T11:09:36.794Z
CAMEO data for import
2,361
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "So come, join in and ask questions. We will be sharing details about the submission process and also announcing the hackathon Prizes!We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "Just 20 minutes to go - you can join in on MongoDB Youtube and MongoDB Twitchor just tune in below", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Office hours - APAC/EMEA
2022-05-10T09:37:49.289Z
Hackathon Office hours - APAC/EMEA
2,714
null
[ "aggregation", "serverless" ]
[ { "code": "{\n \"SomeField\": { $regex: 't', $options: 'g' }\n}\n", "text": "Trying this $match stage on a normal Atlas clusterworks fine and returns the matched documents.Trying the same thing on a serverless cluster returns this error:Command aggregate failed: invalid flag in regex options: g.Is this a known limitation, or should I open a bug ticket?", "username": "John_Knoop" }, { "code": "$regex$regexgg$regex options", "text": "Hi @John_Knoop,Thanks for raising this. Serverless instances run on the latest MongoDB release of MongoDB (Version 5.3 as of the time of this message). As noted in the version 4.4 and 5.0 $regex documentation:The $regex operator does not support the global search modifier g .I did some testing with an Atlas cluster with the MongoDB version set to `Latest Release (auto-upgrades) which was version 5.3:\nimage1750×350 36.8 KB\nThe error is returned when the g option is passed through.Starting in MongoDB 5.1, invalid $regex options options are no longer ignored as per the noted changes. I presume the MongoDB cluster which did not return the error is at MongoDB version 5.0 or lower however please correct me if I am wrong here.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Yes, the other cluster is version 4.x.Hmm, so the /g flag didn’t work before either, it just wasn’t reported?", "username": "John_Knoop" }, { "code": "$regexg$regex$regex$regex options", "text": "Hi @John_Knoop,Hmm, so the /g flag didn’t work before either, it just wasn’t reported?Starting at version 4.4 onwards, the $regex operator does not support the global search modifier g as noted in the docs:It was silently ignored in version 4.4 and 5.0 which is why no error is returned. As noted in my previous comment, starting in MongoDB Version 5.1, invalid $regex options are no longer ignored which resulted in the error you received when running the aggregation example you provided with the /g flag against a Serverless instance (which would have been MongoDB version 5.3 judging by the time of this post but correct me if i’m wrong here).Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "I see.Do you happen to know why support for /g was removed?", "username": "John_Knoop" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Regex flag /g not supported in aggregation on Serverless cluster?
2022-05-07T20:39:18.687Z
Regex flag /g not supported in aggregation on Serverless cluster?
5,462
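A sketch of the portable rewrite of the match stage discussed in that thread: `$regex` matches anywhere in the string by default, so the `g` (global) modifier never affected the result and can simply be dropped, which runs unchanged on 4.x, 5.x, and Serverless instances.

```javascript
// Same match as before, minus the invalid /g flag.
db.collection.aggregate([
  { $match: { SomeField: { $regex: "t" } } }
]);
```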
null
[ "queries" ]
[ { "code": " find({\n id: { $in: [5, 3, 4, 1, 2] },\n })\n[ {id: 1}, {id: 2}, {id: 3}, {id: 4}, {id: 5}][{id: 5}, {id: 3}, {id: 4}, {id: 1}, {id: 2}]", "text": "I have a query that passes an array of ids in a specific order.This query returns an array of documents with corresponding ids, but the documents are sorted in the ascending order.[ {id: 1}, {id: 2}, {id: 3}, {id: 4}, {id: 5}]Instead I want the returned array to have documents in the same order in which the query specifies the ids. That is[{id: 5}, {id: 3}, {id: 4}, {id: 1}, {id: 2}]Is there a way to prevent mongodb from sorting the result?", "username": "Vineet_Dixit" }, { "code": "find({\n id: { $in: [5, 3, 4, 1, 2] },\n }).sort({_id:1})\n", "text": "", "username": "David_Tayar" }, { "code": "", "text": "@David_Tayar - Your solution would sort the array in the ascending order by _id. What I am looking for is not a sorted array of documents, but an array of documents that follow the same order as specified in the $in field.", "username": "Vineet_Dixit" }, { "code": "", "text": "mongoose - Does MongoDB's $in clause guarantee order - Stack Overflow you might interested in .it was useful for me", "username": "Faruk_Onder_Beyazit" } ]
Order of documents returned by $in: [val1, val2, ..., valn]
2021-02-28T11:18:26.071Z
Order of documents returned by $in: [val1, val2, &hellip;, valn]
5,220
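The usual server-side workaround (also what the linked Stack Overflow discussion converges on): since `find()` with `$in` makes no ordering guarantee, compute each document's position in the input array and sort on it. A minimal sketch, assuming MongoDB 3.4+ for `$indexOfArray`; the collection name is illustrative.

```javascript
const ids = [5, 3, 4, 1, 2];

db.items.aggregate([
  { $match: { id: { $in: ids } } },
  // position of each document's id within the input array
  { $addFields: { __order: { $indexOfArray: [ids, "$id"] } } },
  { $sort: { __order: 1 } },
  // drop the helper field before returning results
  { $project: { __order: 0 } }
]);
```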
null
[ "java", "polymorphic-pattern" ]
[ { "code": " InventoryItem (abstract)\n / \\\n Tracked Item AmountItem (Abstract)\n / \\\n SimpleAmountItem ListAmountItem\n@EqualsAndHashCode(callSuper = true)\n@Data\n@AllArgsConstructor\n@JsonTypeInfo(\n\tuse = JsonTypeInfo.Id.NAME,\n\tinclude = JsonTypeInfo.As.EXISTING_PROPERTY, property = \"storedType\"\n)\n@JsonSubTypes({\n\[email protected](value = SimpleAmountItem.class, name = \"AMOUNT_SIMPLE\"),\n\[email protected](value = ListAmountItem.class, name = \"AMOUNT_LIST\"),\n\[email protected](value = TrackedItem.class, name = \"TRACKED\")\n})\n@BsonDiscriminator\npublic abstract class InventoryItem<T> extends ImagedMainObject {\n\t@NonNull\n\t@NotNull\n\tprivate Map<@NonNull ObjectId, @NonNull T> storageMap = new LinkedHashMap<>();\n private final StoredType storedType;\n //...\n}\n\n\n@EqualsAndHashCode(callSuper = true)\n@Data\npublic class TrackedItem extends InventoryItem<Map<@NotBlank String, @NotNull TrackedStored>> {\n //...\n}\n\n\n@EqualsAndHashCode(callSuper = true)\n@Data\n@NoArgsConstructor(access = AccessLevel.PRIVATE)\n@ValidHeldStoredUnits\npublic abstract class AmountItem<T> extends InventoryItem<T> {\n //...\n}\n\n\n\n@EqualsAndHashCode(callSuper = true)\n@Data\npublic class SimpleAmountItem extends AmountItem<AmountStored> {\n //...\n}\n\n\n@EqualsAndHashCode(callSuper = true)\n@Data\npublic class ListAmountItem extends AmountItem<List<@NotNull AmountStored>> {\n //...\n}\n\nTrackedItemListAmountItemSimpleAmountItemEncoding a ListAmountItem: 'ListAmountItem()' failed with the following exception:\n\nFailed to encode 'ListAmountItem'. Encoding 'storageMap' errored with: Can't find a codec for class java.lang.Object.\n\nA custom Codec or PojoCodec may need to be explicitly configured and registered to handle this type.\n\tat org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:105)\n\tat org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:359)\n\tat org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:218)\n\tat org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:519)\n\tat org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:261)\n\tat org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:161)\n\tat org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:364)\n\tat org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:164)\n\tat org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:247)\n\tat io.quarkus.resteasy.runtime.standalone.RequestDispatcher.service(RequestDispatcher.java:73)\n\tat io.quarkus.resteasy.runtime.standalone.VertxRequestHandler.dispatch(VertxRequestHandler.java:151)\n\tat io.quarkus.resteasy.runtime.standalone.VertxRequestHandler$1.run(VertxRequestHandler.java:91)\n\tat io.quarkus.vertx.core.runtime.VertxCoreRecorder$13.runWith(VertxCoreRecorder.java:543)\n\tat org.jboss.threads.EnhancedQueueExecutor$Task.run(EnhancedQueueExecutor.java:2449)\n\tat org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1478)\n\tat org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29)\n\tat org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29)\n\tat io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)\n\tat 
java.base/java.lang.Thread.run(Thread.java:829)\nCaused by: org.bson.codecs.configuration.CodecConfigurationException: An exception occurred when encoding using the AutomaticPojoCodec.\nEncoding a ListAmountItem: 'ListAmountItem()' failed with the following exception:\n\nFailed to encode 'ListAmountItem'. Encoding 'storageMap' errored with: Can't find a codec for class java.lang.Object.\nSimple/[email protected]", "text": "(Also on java - Maven: Excluding tests from build - Stack Overflow)I have the following polymorphic structure for objects I want to store in MongoDb:With: (Full code here (Github))As shown, TrackedItem can be appropriately handled by Mongo, stored/ retrieved, etc. However, both the ListAmountItem and SimpleAmountItem cannot, with the following error:It appears that Mongo can reconcile direct/first descendants of a superclass, but not if the inheritance tree gets any deeper than that. Is this as designed, a bug, or something I can tweak to get around?It appears to me that Mongo gets stuck on trying to reconcile the Simple/ListAmountItems as a plain AmountItem, which makes sense as why it’s failing, but not terribly clear as to how to fix it. The @BsonDiscriminator seems rather simplistic, esp. compared to Jackson.I’ll note that I am implementing this in Quarkus 2.7.5.Final.Looks like there might be some support for specifying known types, but I don’t see an analogous java annotation ( Reference → BSON → MappingClasses → Polymorphism )", "username": "Greg_Stewart" }, { "code": "", "text": "Sorry, wrong SO link in OP: MongoDb Java- deeper polymorph tree - Stack Overflow", "username": "Greg_Stewart" }, { "code": "", "text": "Looks like I found a bug- https://jira.mongodb.org/projects/JAVA/issues/JAVA-4578For now I just flattened my sub-object tree and accepted some duplicate code.", "username": "Greg_Stewart" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Java driver- deep polymorphic trees
2022-04-12T14:40:24.005Z
Java driver- deep polymorphic trees
3,848
null
[ "node-js", "crud" ]
[ { "code": "userInfo = {\n name: 'Magical Mr. Mistoffelees',\n email: '[email protected]',\n password: 'somehashedpw'\n}\n\nawait users.insertOne(userInfo, {\n writeConcern: { w: \"majority\", wtimeout: 100 },\n})\n\nconsole.log(userInfo)\n\n{\n name: 'Magical Mr. Mistoffelees',\n email: '[email protected]',\n password: 'somehashedpw',\n _id: 6279884c2b402a8f82efa162\n}\n_id", "text": "Is this expected behavior? It added an _id field.", "username": "Big_Cat_Public_Safety_Act" }, { "code": "_id", "text": "Hi @Big_Cat_Public_Safety_Act,Is this expected behavior? It added an _id field.Yes, it is the expected behavior. MongoDB automatically adds an _id field so as to uniquely identify each and every document in the collection.In case you have any doubts, please feel free to reach out to us.Thanks and Regards.\nSourabh Bagrecha,\nMongoDB", "username": "SourabhBagrecha" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
insertOne modifies input query object
2022-05-09T21:43:10.193Z
insertOne modifies input query object
1,694
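If mutation of the input object is undesirable, a simple sketch of two alternatives with the Node.js driver: pass a shallow copy (the generated `_id` is then added to the copy only), or read the generated id from the returned result instead of the input object.

```javascript
// Spread copy: the driver adds _id to the copy, not to userInfo.
const result = await users.insertOne({ ...userInfo });

console.log(result.insertedId); // ObjectId("...") from the result object
console.log(userInfo._id);      // undefined - original left untouched
```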
null
[ "atlas-device-sync" ]
[ { "code": "", "text": "Hi there,Late last night I had this error: “TranslatorFatalError - encountered non-recoverable resume token error. Sync cannot be resumed from this state and must be terminated and re-enabled to continue functioning: (ChangeStreamHistoryLost) Resume of change stream was not possible, as the resume point may no longer be in the oplog.”. Sync has been paused and I am unable to restart. I have also tried terminating sync and starting again but with no success.I am not sure where the error came from as there were no requests around that time. Is anybody aware of what the cause of this issue is and how it can be resolved?Many thanksWill", "username": "varyamereon" }, { "code": "", "text": "Is anyone from the Realm team able to help with this? I am still facing the issue and cannot figure out the issue.", "username": "varyamereon" }, { "code": "", "text": "Is there anyone who can provide some inputs here?", "username": "Abhishek_Matta" }, { "code": "", "text": "Same here … +1 … happened on my M0 cluster for no reason at all and am unable to recover.", "username": "S_Wayne" }, { "code": "", "text": "Hello,Before answering the questions, let’s discuss some of the terminology mentioned in the error.Translator\nThis refers to an internal automated process called the Sync Translator which has the role of translating “Realm” data in the client into Atlas data, and vice versa. This process creates and executes instructions in the Sync metadata that allow syncing to take place between mobile devices and Atlas.Change Stream\nThis Change Stream is a MongoDB feature that lets applications watch collections for real-time data changes recorded in the Oplog (Operations Log). The Sync Translator uses change streams to check for writes that occur and translate them into instructions so that documents/objects can be synced between the Sync client and your Atlas database(s). Similarly, Triggers use change streams to watch for data changes and fire executions on the operation types configured in the trigger.Resume Token\nThe Resume Token is a point in time in your oplog which the change stream uses to process Change Events recorded in the oplog. If the resume token cannot be found, the Sync Translator or Trigger will not know from which point in time it needs to continue processing change events.If the Sync translator loses the token, it can result a non-recoverable error to be thrown where Sync needs to be terminated and restarted in order for a new token to created. This means the metadata instructions that have been created thus far need to be cleared/reset and rebuilt again using what is in Atlas. The existing clients will need to also undergo a Client Reset to continue using sync. For this reason, we recommend including client reset handling in your app to take care of this automatically.Similarly, if a trigger loses its resume token it will not know from which point in time it needs to process events and as such the trigger will go into a suspended state and must be restarted without a resume token. Unfortunately this means the trigger will not be able to process change events prior to the restart of the trigger without token and can only fire on new events that occur. For further information please see other root causes for trigger suspensions can be suspended.Cause of ChangeStreamHistoryLost errors\nAs discussed above, this is usually due to the change stream used by the translator process not being able to find its resume token. 
This is most commonly caused by insufficient cluster resources. We recommend a general minimum of an M10 cluster for a production Realm app using Sync, and a best practice minimum of M30 cluster (or greater depending on needs). This will ensure that your app is not affected by other clusters which utilise the shared resources in a shared cluster. Please ensure your app uses a dedicated cluster tier before deploying the Realm app live into production. If you choose to later upgrade from a shared tier to a dedicated tier cluster, you will need to undergo a sync termination as part of the process causing a potential inconvenience to the users.If you have a dedicated cluster tier and experience this error, it is most likely due to the Oplog size on the cluster being insufficient. The oplog size determines how much room there is for change events to be written in the oplog. If there is a surge of writes occurring in your cluster, it will cause more entries to be written into the oplog and as a result may force the resume token to fall outside of the Oplog Replication Window (a graph you can find in your Cluster Metrics). Please increase your oplog size so that there is at least 48 hours of replication oplog window available at any given time.Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
TranslatorFatalError with Realm Sync
2021-08-19T12:12:10.558Z
TranslatorFatalError with Realm Sync
3,899
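A quick way to sanity-check the oplog window discussed in that answer on a self-hosted replica set is shown below (on Atlas, the same figure appears in the cluster's replication metrics); this is a generic mongosh check, not something from the thread itself.

```javascript
// Run in mongosh against a replica set member.
rs.printReplicationInfo()
// "log length start to end" is the current oplog window; per the guidance
// above it should comfortably exceed 48 hours, otherwise grow the oplog.
```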
null
[ "aggregation", "queries", "atlas-search" ]
[ { "code": "", "text": "I’m new to Atlas Search and I’m trying to build a pipeline stage with $search. Based on the usecase that I’m working with, I am searching through a collection that has an Atlas Search index defined, and there will sometimes be a query to match some field with some input (the details aren’t necessary here); however, when there is no input for the query, I want the search query to just return everything in the collection without any filtering/matching. What syntax can I use for $search that has the effect of basically doing nothing and returning the whole collection (similar to .find({} on a MongoDB collection)?", "username": "Francesca_Ricci-Tam" }, { "code": "{\n $search: {\n \"index\": <index name>, // optional, defaults to \"default\"\n \"wildcard\": {\n \"query\": \"*\",\n \"path\": {\"wildcard\": \"*\"},\n \"allowAnalyzedField\": true\n }\n }\n}", "text": "The access pattern you have described is not totally unheard of, and you can do what what you described with Atlas Search. Write a method or function that executes a search compound query if there is a filter condition, and if not, run a wildcard query across a wildcard path.That second query that filters nothing is:", "username": "Marcus" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Atlas Search $search syntax for performing no filter
2022-05-06T02:28:52.552Z
Atlas Search $search syntax for performing no filter
2,126
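A sketch of the "method or function" approach the answer suggests: branch on whether input exists, returning either a real query or the match-everything wildcard. The field name `SomeField` and the `text` operator are illustrative assumptions (any field indexed by the search index would do).

```javascript
// Build the $search stage conditionally; fall back to the wildcard
// "filter nothing" form from the answer above when there is no input.
function buildSearchStage(input) {
  if (input) {
    return { $search: { text: { query: input, path: "SomeField" } } };
  }
  return {
    $search: {
      wildcard: {
        query: "*",
        path: { wildcard: "*" },
        allowAnalyzedField: true
      }
    }
  };
}

// Usage: the rest of the pipeline stays the same either way.
db.collection.aggregate([buildSearchStage(userInput /* may be empty */)]);
```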
null
[]
[ { "code": "", "text": "Is there a way to query another cluster in the same account from a Realm Function?For example, if I have to clusterA and clusterB, is there a way to query a database.collection in clusterB from a trigger running on clusterA?Currently the way to query a collection uses “context” but you don’t specify cluster name anywhere in the statement like this:context.services.get(“mongodb-atlas”).db(“mydb”)", "username": "Tyler_Queen" }, { "code": "", "text": "Hi,You can have multiple MongoDB services. The name “mongodb-atlas” is the default name given when you connect a cluster. If you go to “Linked Data Sources” in the UI you will see that you can connect multiple Atlas clusters and use them appropriately by specifying the name you see on that page.", "username": "Tyler_Kaye" }, { "code": "", "text": "Thanks,Do I access the linked datasource like this?context.services.get(“my-linked-datasource-name”).db(“mydb”)", "username": "Tyler_Queen" }, { "code": "", "text": "That is correct. The default name / name for your existing cluster is just “mongodb-atlas”. So if you have a trigger on that cluster and in the function you want to insert to “mongodb-other” you can just reference context.services.get(“mongodb-other”).db(“mydb”)", "username": "Tyler_Kaye" } ]
Query another cluster in the same account from a Realm Function
2022-05-09T21:00:37.691Z
Query another cluster in the same account from a Realm Function
2,251
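A minimal Realm function sketch tying the thread's answer together. The service name "mongodb-other" must match whatever appears under Linked Data Sources; the database and collection names are illustrative assumptions.

```javascript
exports = async function () {
  // clusterA is the default linked cluster; clusterB is a second linked source
  const clusterA = context.services.get("mongodb-atlas").db("mydb");
  const clusterB = context.services.get("mongodb-other").db("mydb");

  // read from one cluster, write to the other
  const doc = await clusterA.collection("events").findOne({});
  if (doc) {
    await clusterB.collection("events_archive").insertOne(doc);
  }
};
```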
https://www.mongodb.com/…859fb403f745.png
[ "aggregation", "replication", "atlas-cluster", "mdbw22-hackathon" ]
[ { "code": "", "text": "The CAMEO Manual details a list of event codes as verbs. They all seem somewhat confrontational. How would you code for an article on a technological advance, academic prize (including Nobel), a discovery in medicine or archeology, or COVID? And what does a positive GoldsteinSore mean?db.eventsCSV.aggregate( [{$group: {_id: “$EventRootCode”, tone: {$avg: “$AvgTone”}, scale: {$avg: “$GoldsteinScale”} }} ], {allowDiskUse:true}).toArray()[\n{ _id: ‘08’, tone: -2.5138041216441676, scale: 6.385177271485006 },\n{ _id: ‘16’, tone: -3.2453490134088616, scale: -5.387753047703394 },\n{ _id: ‘14’, tone: -3.806666068522457, scale: -6.58807398025937 },\n{ _id: ‘01’, tone: -2.251555047729386, scale: 0.018630319672395135 },\n{ _id: ‘–’, tone: -8.226224586544756, scale: null },\n{ _id: ‘05’, tone: -0.23222535042816278, scale: 4.123959029658475 },\n{ _id: ‘06’, tone: -1.2006600489934414, scale: 6.24108697755418 },\n{ _id: ‘17’, tone: -5.280027351453161, scale: -5.3401026812549715 },\n{ _id: ‘13’, tone: -3.6296585065911784, scale: -5.035283492894095 },\n{ _id: ‘20’, tone: -4.790790602787274, scale: -9.987066779374471 },\n{ _id: ‘19’, tone: -4.333058927669241, scale: -9.950117329932903 },\n{ _id: ‘03’, tone: -0.5232234635535984, scale: 4.274951367978328 },\n{ _id: ‘11’, tone: -3.6075745099406733, scale: -2 },\n{ _id: ‘09’, tone: -3.0776017755819653, scale: -2 },\n{ _id: ‘15’, tone: -2.896741645440607, scale: -7.2 },\n{ _id: ‘04’, tone: -1.1321422098505745, scale: 2.6427215029220013 },\n{ _id: ‘12’, tone: -2.8975944211327906, scale: -4.210475664691085 },\n{ _id: ‘07’, tone: -0.8038323782725084, scale: 7.326151820640566 },\n{ _id: ‘10’, tone: -2.740437732772371, scale: -5 },\n{ _id: ‘18’, tone: -5.870330131781918, scale: -9.180378334969216 },\n{ _id: ‘02’, tone: -1.860422472187114, scale: 2.9503825073153807 }\n]\nimage811×606 90.8 KB\n", "username": "Ilan_Toren" }, { "code": "", "text": "Here is a primer on the Goldstein scale:A Conflict-Cooperation Scale for WEIS Events Data", "username": "Joe_Drumgoole" }, { "code": "", "text": "Thanks. It was an interesting read although I’d have to look at WEIS to better understand. It is as though the GoldsteinScale is a normalization to bridge between older and new datasets. The GoldsteinScale is not independent of the EventRootCode and neither is the AvgTone", "username": "Ilan_Toren" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon - CAMEO event code
2022-05-08T07:55:50.751Z
Hackathon - CAMEO event code
3,343
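Building on the aggregation in that thread: per the linked conflict-cooperation primer, negative Goldstein values indicate conflictual events and positive values cooperative ones, so sorting the per-root-code averages by scale makes the spectrum easy to read. A small extension of the thread's own pipeline (the added count field is mine):

```javascript
db.eventsCSV.aggregate(
  [
    { $group: {
        _id:   "$EventRootCode",
        tone:  { $avg: "$AvgTone" },
        scale: { $avg: "$GoldsteinScale" },
        count: { $sum: 1 } // how many events back each average
    } },
    // most cooperative root codes first, most conflictual last
    { $sort: { scale: -1 } }
  ],
  { allowDiskUse: true }
);
```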
https://www.mongodb.com/…e_2_1024x512.png
[ "python" ]
[ { "code": "s = '{\"event_type\": \"AM\", \"symbol\": \"TSLA\", \"s\": {\"$date\": \"2022-05-09T17:54:00Z\"}, \"t\": {\"$date\": \"2022-05-09T17:55:00Z\"}, \"av\": 21049140, \"op\": 836.45, \"vw\": 802.2499, \"o\": 802.11, \"h\": 803.35, \"l\": 801.59, \"c\": 802.915, \"v\": 45151, \"a\": 817.3105, \"z\": 30, \"received\": {\"$date\": \"2022-05-09T17:55:04.098Z\"}}'\nmessage = json_util.loads(s)\n", "text": "Hey there,Not sure if anyone is following this forum - and also not sure why there is no issue tracker on the github page:PyMongo - the Official MongoDB Python driver. Contribute to mongodb/mongo-python-driver development by creating an account on GitHub.but there is a major issue in the pymongo 4.1.1Consider this string:in python, when converting this to JSON using json_util and pymongo version 3.12.3we get this:{‘event_type’: ‘AM’,\n‘symbol’: ‘TSLA’,\n‘s’: datetime.datetime(2022, 5, 9, 17, 54, tzinfo=<bson.tz_util.FixedOffset object at 0x7f4298238040>),\n‘t’: datetime.datetime(2022, 5, 9, 17, 55, tzinfo=<bson.tz_util.FixedOffset object at 0x7f4298238040>),\n‘av’: 21049140,\n‘op’: 836.45,\n‘vw’: 802.2499,\n‘o’: 802.11,\n‘h’: 803.35,\n‘l’: 801.59,\n‘c’: 802.915,\n‘v’: 45151,\n‘a’: 817.3105,\n‘z’: 30,\n‘received’: datetime.datetime(2022, 5, 9, 17, 55, 4, 98000, tzinfo=<bson.tz_util.FixedOffset object at 0x7f4298238040>)}however, when we use the latest version of pymongo v4.1.1{‘event_type’: ‘AM’,\n‘symbol’: ‘TSLA’,\n‘s’: datetime.datetime(2022, 5, 9, 17, 54),\n‘t’: datetime.datetime(2022, 5, 9, 17, 55),\n‘av’: 21049140,\n‘op’: 836.45,\n‘vw’: 802.2499,\n‘o’: 802.11,\n‘h’: 803.35,\n‘l’: 801.59,\n‘c’: 802.915,\n‘v’: 45151,\n‘a’: 817.3105,\n‘z’: 30,\n‘received’: datetime.datetime(2022, 5, 9, 17, 55, 4, 98000)}The timezones are missing!!!Please fix this asapAlso, please remove this ugly green theme from the forum, it’s absolutely disgusting.Also, please the issue tracker back on github, that’s where it belongs. I shouldn’t have to make an account on a forum just to post a bug report.", "username": "Vishal_Goklani" }, { "code": "", "text": "I can’t solve your problem, but the MongoDB issue tracker is here.", "username": "Jack_Woehr" } ]
Pymongo regression - timezones are getting dropped when converting to JSON
2022-05-09T18:08:32.289Z
Pymongo regression - timezones are getting dropped when converting to JSON
1,593
null
[ "mongodb-shell", "mdbw22-hackathon", "mdbw-hackhelp" ]
[ { "code": "", "text": "Hello I am watching the video Introduction to GDELT for the MongoDB World Hackathon 22 - Session 1 MongoDB presented by Shane and JoeDrumgoole.I am trying to replicate what they do in the video but I have on issue when trying to do the reshape of the data from collection eventscsv to collection events.Running:gdelttools-master % make reshapeI can see in terminal:mongosh --quiet --file=gdelt_reshaper.jsBut the collection events is not created and no more info is provided.I have no experience working with make and I don’t know if there is any previous configuration to be done in the Makefile file.(I don’t make any changes to Makefile, it’s the same than in the repo).I am using a Mac for the hackathon.I have just installed make (It was not installed previously):$ brew install makeAnd try againgdelttools-master % make reshapeI would appreciate help to be able to execute the script correctly, thanks in advance.", "username": "Manuel_Martin" }, { "code": "", "text": "Hi Manuel,\nI reshaped the data passing directly the gdelt_reshaper.js to the mongoshWelcome", "username": "Crist" }, { "code": "gdelt_reshaper.jsmongosh mongodb+ssh://username:[email protected]/yourdatabase gdelt_reshaper.js\n", "text": "I’d recommend running the gdelt_reshaper.js directly with mongosh, as @Crist recommends.If you’re not running MongoDB on localhost, and the direct port, you may also want to provide the connection string of your MongoDB cluster to mongosh, like this:", "username": "Mark_Smith" }, { "code": "gdeltloaderpip install gdelttools # install the package\ngdeltloader --master --download --overwrite --last 20 # download the last 20 days of data\nmake full_data_load #load the downloaded data and reshape it\n", "text": "Hi Manuel,The reshape step assumes you have already loaded some data with the gdeltloader script. 
If you follow the package steps you should get some output.If you do those steps in order you should get some output.", "username": "Joe_Drumgoole" }, { "code": "", "text": "Thank you, I followed your answer with some suggestions from @Mark_Smith, so I will post my next question as a reply to his answer.", "username": "Manuel_Martin" }, { "code": "mongosh mongodb+ssh://.....\nMongoshInvalidInputError: [COMMON-10001] Invalid URI:\nmongosh mongodb+srv://.....\nCurrent Mongosh Log ID: 6279438bdf708b5009da2113\nConnecting to: mongodb+srv://<credentials>@worldhack22.s8ynf.mongodb.net/databasehackaton2022\nUsing MongoDB: 5.0.8\nUsing Mongosh: 1.1.8\n\nFor mongosh info see: https://docs.mongodb.com/mongodb-shell/\n\n\nTo help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).\nYou can opt-out by running the disableTelemetry() command.\n\nLoading file: gdelt_reshaper.js\n", "text": "Thank you, I’m not running MongoDB on localhost, so I tried providing the connection string of my MongoDB cluster to mongoshI need to change a bit the URI:Results:Using instead this:Results:But nothings more happens and the collection events is not created.", "username": "Manuel_Martin" }, { "code": "", "text": "Thank you, I tried your answer, I already have installed the package gdelttoolsThe following download the files:gdeltloader --master --download --overwrite --last 20 # download the last 20 days of dataBut I tried the command in both directories (gdelttools-master and gdelttools) with the same resultgdelttools-master % make full_data_loadgdelttools % make full_data_loadResults:make: *** No rule to make target `full_data_load’. Stop.", "username": "Manuel_Martin" }, { "code": "", "text": "I just realized that in the file Makefile there is not full_data_load as you said, instead is full_dataload.I tried from the directory gdelttools-master and I get the following resut:gdelttools-master % make full_dataload\npython gdelttools/gdeltloader.py --master --download\nFile “gdelttools/gdeltloader.py”, line 22\nparser = argparse.ArgumentParser(epilog=f\"Version: {version}\\n\"", "username": "Manuel_Martin" }, { "code": "", "text": "I can see now that the events collection has been created, but on a database called GDELT2 (I had named mine databasehackaton2022), maybe your answer worked correctly, I’ll investigate it again and post the results here, thanks again,", "username": "Manuel_Martin" }, { "code": "- gdelt_reshaper.js\n\n- gdeltloader.py\n\n- mongoimport.py\n\n- validator.py\n", "text": "I started from scratch a new project and call the database GDELT2 (I remember now that in the video @Joe_Drumgoole said to do it).I noticed that this Database name is used in the following scripts:And following your reply the collection events was created successfully, thanks to you and also to @Crist and @Joe_Drumgoole for your help.", "username": "Manuel_Martin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Hackathon Project: Issue reshape of the data from collection eventscsv to collection events
2022-05-09T14:51:13.652Z
Hackathon Project: Issue reshape of the data from collection eventscsv to collection events
3,826
null
[ "performance", "capacity-planning" ]
[ { "code": "", "text": "we are evaluating MongoDB Atlas for our use case (ML/AI data accessed on the UI). I was checking the mongoDB documentation regarding wps(writes per second) & rps(reads per second) limits, but am unable to find any definite answer.The objective is to ensure that if i have large number of writes(or reads), they don’t fail because of MongoDB limitations.Any pointers on the wps/rps limitations ? Pls let me know.tia!", "username": "karan_alang" }, { "code": "", "text": "What is your cluster type? Free or paid?\nCheck this thread", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Thanks for the response … currently it is a free cluster, but we will change to paid as the volume grows.\nLet me check the link you specified.thanks!", "username": "karan_alang" }, { "code": "", "text": "Hello Karan,To add a bit more here, for Read/Write performance I don’t think it’s likely that we won’t be able to keep up with requirements, you can scale your cluster vertically to add more read/write capacity, going from an M10 to an M20, or horizontally by sharding eventually to add even more capacity.Here is an article that covers talks about the capacity you can achieve with MongoDB: MongoDB At Scale | MongoDB", "username": "Benjamin_Flast" } ]
MongoDB Atlas limitations - writes per second, reads per second
2022-05-03T19:06:05.614Z
MongoDB Atlas limitations - writes per second, reads per second
6,808
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.2.20 is out and is ready for production deployment. This release contains only fixes since 4.2.19, and is a recommended upgrade for all 4.2 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.2.20 is released
2022-05-09T16:05:11.049Z
MongoDB 4.2.20 is released
2,531
null
[ "production", "server" ]
[ { "code": "", "text": "MongoDB 4.4.14 is out and is ready for production deployment. This release contains only fixes since 4.4.13, and is a recommended upgrade for all 4.4 users.\nFixed in this release:", "username": "Jon_Streets" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB 4.4.14 is released
2022-05-09T16:03:39.486Z
MongoDB 4.4.14 is released
3,024
null
[]
[ { "code": "", "text": "I am successfully logged in to mongoldb atlas from terminal in Mac but am unable to push local app\nrealm cli push\nerror about whitelisting IP\nwhile I have whitelisted them multiple times\nAlso bypassed my ISP by tethering to phone\nnothing helps", "username": "Manjinder_Sandhu" }, { "code": "", "text": "got it\nwas copying and pasting into textedit\nthe “” quotes have to be done again manually", "username": "Manjinder_Sandhu" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Cannot push a local app to realm
2022-04-20T12:51:28.550Z
Cannot push a local app to realm
1,267
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "Cool as Code@Sahil_Agarwal @Dev_BhushanProject Description: We are making a newsletter that will allow the users to create alerts based on certain keywords and they can get constant updates regarding the keywords they have subscribed to.For this project, we will be using NextJS and MongoDBRep Link: dev12321/gdelt-newsletter · GitHub", "username": "Sahil_Agarwal" }, { "code": "", "text": "Sounds great! Thanks for sharing - we’ll check it out!", "username": "Shane_McAllister" }, { "code": "", "text": "The repo is currently empty - do let us know when populated. Thanks", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Project Team: Cool as Code
2022-05-05T07:21:15.607Z
Hackathon Project Team: Cool as Code
2,753
null
[ "sharding", "storage" ]
[ { "code": "user@mongos:~$ ss -antpl | grep 27017\nuser@mongos:~$\nuser@mongos:~$ ps aux|grep mongos\nroot 1797 0.0 0.1 10592 5068 pts/0 S+ 15:26 0:00 sudo -u mongodb mongos --config /etc/mongod.conf\nmongodb 1798 0.0 1.2 202976 49704 pts/0 Sl+ 15:26 0:00 mongos --config /etc/mongod.conf\nuser 2021 0.0 0.0 6180 708 pts/1 S+ 15:29 0:00 grep mongos\nuser@mongos:~$ cat /etc/mongod.conf\n# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# Where and how to store data.\n#storage:\n# dbPath: /var/lib/mongodb\n# journal:\n# enabled: true\n# engine:\n# wiredTiger:\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n path: /var/log/mongodb/mongod.log\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 192.168.0.40\n\n\n# how the process runs\nprocessManagement:\n timeZoneInfo: /usr/share/zoneinfo\n\n#security:\n\n#operationProfiling:\n\n#replication:\n\nsharding:\n configDB: ConfigReplSet/192.168.0.37:27019\n\n## Enterprise-Only Options:\n\n#auditLog:\n\n#snmp:\n{\"t\":{\"$date\":\"2022-04-30T15:32:37.374+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.0.40:52142\",\"uuid\":\"5d1c094a-301d-413d-861d-5aed5f9b7a13\",\"connectionId\":8,\"connectionCount\":1}}\n{\"t\":{\"$date\":\"2022-04-30T15:32:37.375+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.0.40:52144\",\"uuid\":\"05e26a1a-60b5-4315-ad88-cffe0ffd221c\",\"connectionId\":9,\"connectionCount\":2}}\n{\"t\":{\"$date\":\"2022-04-30T15:32:37.375+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn8\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"192.168.0.40:52142\",\"client\":\"conn8\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"5.0.8\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 11 (bullseye)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.10.0-13-amd64\"}}}}\n{\"t\":{\"$date\":\"2022-04-30T15:32:37.376+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn9\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"192.168.0.40:52144\",\"client\":\"conn9\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"5.0.8\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 11 (bullseye)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.10.0-13-amd64\"}}}}\n{\"t\":{\"$date\":\"2022-04-30T15:32:37.377+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":22943, \"ctx\":\"listener\",\"msg\":\"Connection accepted\",\"attr\":{\"remote\":\"192.168.0.40:52146\",\"uuid\":\"f0bbbae7-8d6b-4ea3-9867-48d634bf67d1\",\"connectionId\":10,\"connectionCount\":3}}\n{\"t\":{\"$date\":\"2022-04-30T15:32:37.378+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":51800, \"ctx\":\"conn10\",\"msg\":\"client metadata\",\"attr\":{\"remote\":\"192.168.0.40:52146\",\"client\":\"conn10\",\"doc\":{\"driver\":{\"name\":\"NetworkInterfaceTL\",\"version\":\"5.0.8\"},\"os\":{\"type\":\"Linux\",\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 11 (bullseye)\\\"\",\"architecture\":\"x86_64\",\"version\":\"Kernel 5.10.0-13-amd64\"}}}}\n", "text": "Hi,I have been following this guide.I am using Debian 11 with MongoDB v5.0.8. 
I was working over the Configure Query Router section, however, I am stuck as mongos doesn’t seem to open a port for communication.Checking for the open port I get no results:I can see the process is running:My config is binding to port 27017:I can also see that the request to the config server is coming through without any issues:Can anyone see any daft mistakes I have made or have advise on how I can get this working?", "username": "R_Birtles" }, { "code": "", "text": "The clues to what is happening can probably find in the mongos log. Please share the mongos log.", "username": "steevej" }, { "code": "ExecStart=/usr/bin/mongos --config /etc/mongod.conf\nuser@mongos:~$ sudo systemctl restart mongod && sudo tail -f -n 0 /var/log/mongodb/mongod.log\n\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.395+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":20698, \"ctx\":\"-\",\"msg\":\"***** SERVER RESTARTED *****\"}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.395+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4915701, \"ctx\":\"-\",\"msg\":\"Initialized wire specification\",\"attr\":{\"spec\":{\"incomingExternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"incomingInternalClient\":{\"minWireVersion\":0,\"maxWireVersion\":13},\"outgoing\":{\"minWireVersion\":13,\"maxWireVersion\":13},\"isInternalClient\":true}}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.401+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23285, \"ctx\":\"main\",\"msg\":\"Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\"}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.427+01:00\"},\"s\":\"W\", \"c\":\"ASIO\", \"id\":22601, \"ctx\":\"main\",\"msg\":\"No TransportLayer configured during NetworkInterface startup\"}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.428+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4648601, \"ctx\":\"main\",\"msg\":\"Implicit TCP FastOpen unavailable. If TCP FastOpen is required, set tcpFastOpenServer, tcpFastOpenClient, and tcpFastOpenQueueSize.\"}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.428+01:00\"},\"s\":\"I\", \"c\":\"HEALTH\", \"id\":5936503, \"ctx\":\"main\",\"msg\":\"Fault manager changed state \",\"attr\":{\"state\":\"StartupCheck\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.428+01:00\"},\"s\":\"W\", \"c\":\"CONTROL\", \"id\":22120, \"ctx\":\"main\",\"msg\":\"Access control is not enabled for the database. 
Read and write access to data and configuration is unrestricted\",\"tags\":[\"startupWarnings\"]}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.429+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":23403, \"ctx\":\"mongosMain\",\"msg\":\"Build Info\",\"attr\":{\"buildInfo\":{\"version\":\"5.0.8\",\"gitVersion\":\"c87e1c23421bf79614baf500fda6622bd90f674e\",\"openSSLVersion\":\"OpenSSL 1.1.1n 15 Mar 2022\",\"modules\":[],\"allocator\":\"tcmalloc\",\"environment\":{\"distmod\":\"debian10\",\"distarch\":\"x86_64\",\"target_arch\":\"x86_64\"}}}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.429+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":51765, \"ctx\":\"mongosMain\",\"msg\":\"Operating System\",\"attr\":{\"os\":{\"name\":\"PRETTY_NAME=\\\"Debian GNU/Linux 11 (bullseye)\\\"\",\"version\":\"Kernel 5.10.0-13-amd64\"}}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.430+01:00\"},\"s\":\"I\", \"c\":\"CONTROL\", \"id\":21951, \"ctx\":\"mongosMain\",\"msg\":\"Options set by command line\",\"attr\":{\"options\":{\"config\":\"/etc/mongod.conf\",\"net\":{\"bindIp\":\"192.168.0.40\",\"port\":27017},\"processManagement\":{\"timeZoneInfo\":\"/usr/share/zoneinfo\"},\"sharding\":{\"configDB\":\"ConfigReplSet/192.168.0.37:27019\"},\"systemLog\":{\"destination\":\"file\",\"logAppend\":true,\"path\":\"/var/log/mongodb/mongod.log\"}}}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.432+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4603701, \"ctx\":\"mongosMain\",\"msg\":\"Starting Replica Set Monitor\",\"attr\":{\"protocol\":\"streamable\",\"uri\":\"ConfigReplSet/192.168.0.37:27019\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.433+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333223, \"ctx\":\"mongosMain\",\"msg\":\"RSM now monitoring replica set\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"nReplicaSetMembers\":1}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.433+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333226, \"ctx\":\"mongosMain\",\"msg\":\"RSM host was added to the topology\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"host\":\"192.168.0.37:27019\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.433+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333218, \"ctx\":\"mongosMain\",\"msg\":\"Rescheduling the next replica set monitoring request\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"host\":\"192.168.0.37:27019\",\"delayMillis\":0}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.434+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333218, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Rescheduling the next replica set monitoring request\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"host\":\"192.168.0.37:27019\",\"delayMillis\":0}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.434+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4333218, \"ctx\":\"ShardRegistry-0\",\"msg\":\"Rescheduling the next replica set monitoring request\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"host\":\"192.168.0.37:27019\",\"delayMillis\":0}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.434+01:00\"},\"s\":\"I\", \"c\":\"CONNPOOL\", \"id\":22576, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"Connecting\",\"attr\":{\"hostAndPort\":\"192.168.0.37:27019\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.438+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":23729, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"ServerPingMonitor is now monitoring host\",\"attr\":{\"host\":\"192.168.0.37:27019\",\"replicaSet\":\"ConfigReplSet\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:54:48.438+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333213, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM 
Topology Change\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"newTopologyDescription\":\"{ id: \\\"84167836-da59-42e0-b108-b7ebd0a61a57\\\", topologyType: \\\"Unknown\\\", servers: { 192.168.0.37:27019: { address: \\\"192.168.0.37:27019\\\", topologyVersion: { processId: ObjectId('626d433d45649ad8597be78c'), counter: 1 }, roundTripTime: 1646, type: \\\"RSGhost\\\", minWireVersion: 13, maxWireVersion: 13, lastUpdateTime: new Date(1651334088438), logicalSessionTimeoutMinutes: 30, hosts: {}, arbiters: {}, passives: {} } }, compatible: true }\",\"previousTopologyDescription\":\"{ id: \\\"cd1e7cf4-4182-4681-90ee-9098ce49a63f\\\", topologyType: \\\"Unknown\\\", servers: { 192.168.0.37:27019: { address: \\\"192.168.0.37:27019\\\", type: \\\"Unknown\\\", minWireVersion: 0, maxWireVersion: 0, lastUpdateTime: new Date(-9223372036854775808), hosts: {}, arbiters: {}, passives: {} } }, compatible: true }\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:03.433+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333208, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host selection timeout\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"error\":\"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:03.433+01:00\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":23834, \"ctx\":\"mongosMain\",\"msg\":\"Error initializing sharding state, sleeping for 2 seconds and retrying\",\"attr\":{\"error\":{\"code\":133,\"codeName\":\"FailedToSatisfyReadPreference\",\"errmsg\":\"Error loading clusterID :: caused by :: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:03.434+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333208, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host selection timeout\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"error\":\"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:03.434+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333208, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host selection timeout\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"error\":\"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:03.434+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\",\"nextWakeupMillis\":200}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:03.435+01:00\"},\"s\":\"I\", \"c\":\"SHARDING\", \"id\":22727, \"ctx\":\"shard-registry-reload\",\"msg\":\"Error running periodic reload of shard registry\",\"attr\":{\"error\":\"FailedToSatisfyReadPreference: could not get updated shard list from config server :: caused by :: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\",\"shardRegistryReloadIntervalSeconds\":30}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:18.635+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333208, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host selection timeout\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"error\":\"FailedToSatisfyReadPreference: Could not find host matching 
read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:18.635+01:00\"},\"s\":\"I\", \"c\":\"-\", \"id\":4939300, \"ctx\":\"monitoring-keys-for-HMAC\",\"msg\":\"Failed to refresh key cache\",\"attr\":{\"error\":\"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\",\"nextWakeupMillis\":400}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:20.433+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4333208, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\"RSM host selection timeout\",\"attr\":{\"replicaSet\":\"ConfigReplSet\",\"error\":\"FailedToSatisfyReadPreference: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}\n{\"t\":{\"$date\":\"2022-04-30T16:55:20.433+01:00\"},\"s\":\"W\", \"c\":\"SHARDING\", \"id\":23834, \"ctx\":\"mongosMain\",\"msg\":\"Error initializing sharding state, sleeping for 2 seconds and retrying\",\"attr\":{\"error\":{\"code\":133,\"codeName\":\"FailedToSatisfyReadPreference\",\"errmsg\":\"Error loading clusterID :: caused by :: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}}\n", "text": "The systemd entry for mongodb has been edited to start mongos:Here are the logs from a fresh restart:I did have an initial look in here but didn’t see a clear cause.", "username": "R_Birtles" }, { "code": " sudo systemctl restart mongodsudo -u mongodb mongos --config /etc/mongod.confsudo systemctl restart mongodError initializing sharding state, sleeping for 2 seconds and retrying\",\n\"attr\":{\"error\":{\"code\":133,\n \"codeName\": \"FailedToSatisfyReadPreference\",\n \"errmsg\":\"Error loading clusterID :: caused by :: Could not find host matching read preference { mode: \\\"nearest\\\" } for set ConfigReplSet\"}}}\n", "text": "This should not be done.The systemd entry for mongodb has been edited to start mongos:You should create a systemd entry for mongos because sudo systemctl restart mongodfools everyone by letting us think your start mongod.Starting mongos or mongod in a inconsistent way is error prone. Choose one of the following and stick to it:sudo -u mongodb mongos --config /etc/mongod.conforsudo systemctl restart mongodThe systemctl way should be the prefered way.The configuration for mongos should be called /etc/mongos.conf, mongos not mongod. It is error prone to name it like you did.Share your ss command without grep. Also share the command ip addr.Despite the fact that your mongos seems to connect to your configuration server, something is wrong with it because:Mongos will retried until it is fixed before starting to listen.I suspect your config server replica set is not initialized correctly. Share rs.status of your configuration server.", "username": "steevej" }, { "code": "", "text": "I finally had time to revisit this. As it was just a test concept I was working on, I nuked the whole thing and started again. Now it’s up and running with no issues. Thanks for the help.", "username": "R_Birtles" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongos router not opening a port
2022-04-30T14:34:31.952Z
Mongos router not opening a port
3,795
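The RSGhost state and the repeated FailedToSatisfyReadPreference errors in that thread's logs typically appear when the config server's replica set was never initiated, which matches steevej's closing suspicion. A minimal mongosh sketch of the initiation step, run against the config server itself (host and port taken from the thread; this is a generic recipe, not the thread's confirmed fix, since the poster rebuilt from scratch):

```javascript
// On the config server (a mongod started with --configsvr):
rs.initiate({
  _id: "ConfigReplSet",
  configsvr: true,
  members: [{ _id: 0, host: "192.168.0.37:27019" }]
});

rs.status(); // should now report this member as PRIMARY,
             // after which mongos can finish startup and listen
```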
null
[ "queries", "dot-net" ]
[ { "code": "select d.surname, d.forename, c.name, rcs.year, rcs.name, r.raceId, r.fastestLapSpeed\nfrom ((results r join drivers d on r.driverId = d.driverId) \njoin constructors c on c.constructorId = r.constructorId) \njoin races rcs on r.raceId = rcs.raceId\nwhere r.fastestLapSpeed > ( \n\t\t\t\t\t\t\t\tselect avg(rr.fastestLapSpeed)\n from results rr\n where rr.raceId = r.raceId\n group by rr.raceId\n\t\t\t\t\t\t\t)\n", "text": "Good evening,\nI’m searching to translate this mysql query in mongodb, how is it ??", "username": "Jeremy_Sapienza" }, { "code": "", "text": "Please post sample documents from all collections involve.Please share what you have tried so far and indicate how it failed. This will prevent us from working on solutions that you already know it fails.The MongoDB Courses and Trainings | MongoDB University is a good starting point.", "username": "steevej" }, { "code": "db.resultsConstructorsStatus.aggregate([\n {\n $lookup: {\n from: \"drivers\",\n let: {flspeed: \"$fastestLapSpeed\", rcsId: \"$raceId\"},\n pipeline: [\n { $group : { _id: \"$raceId\", avgLapSpeed: {$avg: \"$fastestLapSpeed\" } }},\n { $match : { $and: [{avgLapSpeed: {$lte: \"fastestLapSpeed\" }}, {$eq: {rcsId, \"$raceId\"}}] }}\n ],\n localField: \"driverId\",\n foreignField: \"driverId\",\n as: \"drivers_details\"\n }\n }, \n {$unwind: \"$drivers_details\"},\n {$project: {_id: 0, \"drivers_details.forename\":1, \"drivers_details.surname\":1, \"constructor_info.name\":1, \"raceId\":1, \"fastestLapSpeed\":1}}\n])\n", "text": "", "username": "Jeremy_Sapienza" }, { "code": "", "text": "Please post sample documents from all collections involved.To experiment and help you we in the absence of sample documents, we would need to manually create document that matches the field names you are using. That’s very tedious.In your original SQL, there were mentions of table drivers, constructors and races but in the Mongo version, it looks like you only have 2 collections. The mapping for drivers is quite obvious, but how is resultsConstructorsStatus related to constructors and races. Sample documents could help understanding that too.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 182 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Converting this mysql query to mongoDB
2022-05-08T18:37:23.732Z
Converting this mysql query to mongoDB
1,400
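For reference, a sketch of how the correlated subquery in that SQL (fastest lap faster than the per-race average) can be expressed without a $lookup-based subquery, using $setWindowFields. This needs MongoDB 5.0+, and the collection names are guesses based on the SQL table names, so treat it as a starting point rather than the thread's confirmed answer.

```javascript
db.results.aggregate([
  // per-race average, attached to every document in the race
  { $setWindowFields: {
      partitionBy: "$raceId",
      output: {
        avgLapSpeed: {
          $avg: "$fastestLapSpeed",
          window: { documents: ["unbounded", "unbounded"] }
        }
      }
  } },
  // keep only results faster than their race's average
  { $match: { $expr: { $gt: ["$fastestLapSpeed", "$avgLapSpeed"] } } },
  // the three SQL joins become three $lookup stages
  { $lookup: { from: "drivers",      localField: "driverId",      foreignField: "driverId",      as: "driver" } },
  { $lookup: { from: "constructors", localField: "constructorId", foreignField: "constructorId", as: "constructor" } },
  { $lookup: { from: "races",        localField: "raceId",        foreignField: "raceId",        as: "race" } },
  { $project: {
      _id: 0, raceId: 1, fastestLapSpeed: 1,
      "driver.surname": 1, "driver.forename": 1,
      "constructor.name": 1, "race.year": 1, "race.name": 1
  } }
]);
```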
null
[ "aggregation", "queries", "atlas-search" ]
[ { "code": "{\n _id : ObjectId('123'),\n GoogleRating : 4.5,\n LikeCount : 20,\n CommentCount : 10,\n CreatedDate : \"2022-04-21T00:00:00.000+00:00\"\n},\n{\n _id : ObjectId('456'),\n GoogleRating : 1,\n LikeCount : 0,\n CommentCount : 5,\n CreatedDate : \"2021-12-01T00:00:00.000+00:00\"\n}\n\"$search\":{\n \"compound\":{\n \"should\":[\n {\n \"near\":{\n \"path\":\"GoogleRating\",\n \"origin\":5,\n \"pivot\":2,\n \"score\":{\n \"boost\":{\n \"value\":2\n }\n }\n }\n },\n {\n \"near\":{\n \"path\":\"CreatedDate\",\n \"origin\":\"ISODate(\"\"2022-04-21T00:00:00.000Z\"\")\",\n \"pivot\":7776000000,\n \"score\":{\n \"boost\":{\n \"value\":3\n }\n }\n }\n },\n {\n \"near\":{\n \"path\":\"LikeCount\",\n \"origin\":1000,\n \"pivot\":2,\n \"score\":{\n \"boost\":{\n \"value\":3\n }\n }\n }\n },\n {\n \"near\":{\n \"path\":\"CommentCount\",\n \"origin\":1000,\n \"pivot\":2,\n \"score\":{\n \"boost\":{\n \"value\":3\n }\n }\n }\n }\n ]\n }\n}\n", "text": "I’m working on social media platform and I need to fetch posts for newsfeed with mongodb search query.currently I’m using dynamic index on my post collection, Let say this my post collectionAnd this is my current search stageI’ve three criteria to boost document score.I need to improve my search result Because it’s not giving me desired output please suggest me what should i need to change in my search stage.", "username": "Waleed_Nasir" }, { "code": "", "text": "Hi @Waleed_Nasir,Thanks for providing the search stage details as well as sample documents.I need to improve my search result Because it’s not giving me desired output please suggest me what should i need to change in my search stage.Can you provide both the current output as well as the expected / desired output?In addition to this, if there are more stages to your pipeline, please provide those as well.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Hello @Jason_Tran,\nThe other stages are just lookup and project stage.\nmy desired output should be based on recency, popularity on the basis of likes and comment count and google rating which max value is 5.", "username": "Waleed_Nasir" } ]
$search stage with near to rating and current date and increase score proportional to likes and comment count
2022-04-21T21:47:40.813Z
$search stage with near to rating and current date and increase score proportional to likes and comment count
2,335
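One concrete, hedged adjustment to that search stage, based on how near scores work: score = pivot / (pivot + |value - origin|). With origin 1000 and pivot 2, a post with 20 likes scores roughly 0.002, so likes and comments barely influence ranking at all. Widening those pivots and using the current time as the recency origin spreads the scores more usefully; the boost values remain tuning knobs, and the exact pivots below are assumptions to experiment with.

```javascript
{
  $search: {
    compound: {
      should: [
        { near: { path: "GoogleRating", origin: 5, pivot: 1,
                  score: { boost: { value: 2 } } } },
        // compute the origin at query-build time so "recent" tracks "now"
        { near: { path: "CreatedDate", origin: new Date(),
                  pivot: 7776000000, // ~90 days in milliseconds
                  score: { boost: { value: 3 } } } },
        { near: { path: "LikeCount", origin: 1000, pivot: 500,
                  score: { boost: { value: 3 } } } },
        { near: { path: "CommentCount", origin: 1000, pivot: 500,
                  score: { boost: { value: 3 } } } }
      ]
    }
  }
}
```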
null
[ "node-js", "python", "cxx", "mdbw22-hackathon" ]
[ { "code": "", "text": "Full-stack developer .\nParticipated in few Hackathons,\nCompleted various projects using MERN stack.Technical Skills :\nCSS, NodeJs, ReactJS, ExpressJs, C++, and python. Data Structure and AlgorithmsHacking Hours – According to Indian Standard Time", "username": "AYUSH_UPADHYAY" }, { "code": "", "text": "Welcome @AYUSH_UPADHYAY - some great skills there! Super valuable - love that you’ve completed projects with the MERN stack. Hopefully you’ll be snapped up by a project team.", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
AYUSH_UPADHYAY is looking for a project!
2022-05-04T13:38:16.839Z
AYUSH_UPADHYAY is looking for a project!
2,717
null
[ "python", "mdbw22-hackathon" ]
[ { "code": "", "text": "I am an experienced Database administrator, I work with Mongo database clusters and I am also skilled in python development.Python, SQLGMT+1", "username": "Ayo_Exbizy" }, { "code": "", "text": "Welcome @Ayo_Exbizy to the MongoDB Hackathon. Delighted that you shared your experience - which is very valuable. Hopefully some team will snap you up, but do also be sure to look in the About the Projects looking for Hackers category too.", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Ayo_Exbizy is looking for a project!
2022-05-08T13:33:36.658Z
Ayo_Exbizy is looking for a project!
2,647
null
[ "node-js", "crud" ]
[ { "code": " //create subscription\n const subscription = await stripe.subscriptions.create({\n customer: customer,\n items: [{ plan: 'plan_DznNb3tPEEI0cj' }],\n default_payment_method: paymentMethod,\n expand: ['latest_invoice.payment_intent']\n });\n\n let updateUser = await User.findOne({_id: userId });\n \n try {\n User.updateOne(\n {id: ObjectId(userId) },\n {$set: { \"stripeCustomer\": subscription}},\n { upsert: true }\n );\n } catch (err) {\n console.log(err);\n }\n\n", "text": "Hi,\nI have an existing database that shows users from a website. When the user subscribes, the subscriptions object from Stripe should be added to the database. I’ve been updating the stripe checkout and know need to append the object to the user document each time there’s a purchase. I’ve been using the UpdateOne to no result. Nothing happens. Can someone help me please? I’m new to nodejs and mongodb so maybe there’s something I’m missing.\nHere is my code:", "username": "Ana_Faria" }, { "code": "User.updateOne(await User.updateOne(...)let updateUser = await User.findOne(...", "text": "User.updateOne(Try using await User.updateOne(...). Also, are you sure you want the statement let updateUser = await User.findOne(...in your code (what is its purpose)?", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you! The await solved it! The purpose of the let updateUser was simply to log it after and see the result, I already deleted it!", "username": "Ana_Faria" }, { "code": "", "text": "Great! Further refer about the asynchronous JavaScript and using Promises and Callbacks with the driver APIs.", "username": "Prasad_Saya" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Can't update document with object from Stripe
2022-05-08T22:14:47.896Z
Can&rsquo;t update document with object from Stripe
1,784
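The fix the thread converged on was simply awaiting the update call. Below is a minimal sketch of the corrected write; the User model is assumed to be Mongoose-style (as in the question), and since the question's code filtered on id while its earlier findOne used _id, the sketch assumes _id was intended - an assumption, not something confirmed in the thread.

// Await the update so the operation actually executes before returning.
async function saveSubscription(userId, subscription) {
  try {
    // Mongoose casts the string userId to an ObjectId for _id automatically
    const result = await User.updateOne(
      { _id: userId },
      { $set: { stripeCustomer: subscription } },
      { upsert: true }
    );
    console.log(result); // inspect matched/modified/upserted counts
  } catch (err) {
    console.error(err);
  }
}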
null
[ "replication" ]
[ { "code": "", "text": "I have mongodb (version 4.2) replicaset with 3 nodes - primary, secondary, arbiter, primary occupies close to 250 GB disk space, oplog size is 15 GBsecondary was down for few hours, tried recovering it by restarting, it went into recovering forever.tried initial sync by deleting files on data path, took 15 hours, data path size went to 140GB and failedtried to copy files from primary and seed it to recover secondary node followed Resync a Member of a Replica Set — MongoDB Manual This did not work - (again stale)in the latest doc (5.0) they mention to use a new member ID, does it apply for 4.2 as well? changing the member ID throws error as IP and port is same for node I am trying to recoverThis method was also unsuccessful, planning to recover the node using different data path and port as primary might consider it as a new node, then once the secondary is up, will change the port to which I want and restart, will it work?please provide any other suggestions to recover a replica node with large data like 250 GB", "username": "bhargava_vn" }, { "code": "", "text": "Hy @bhargava_vn,\nwelcome to the community!How many indexes do you have?\nWhat kind of error do you have on the secondary?Best regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "Hi @Fabio_Ramohitaj ,\nThanks for the reply\nI use many indexes, do you need count?\nI managed to get total size of indexes - 123177967616\nattaching last snippet of log with error message\nerror_log.txt (2.1 KB)", "username": "bhargava_vn" }, { "code": "", "text": "Hi @bhargava_vn,\ni see in the log this issue:\ninitialSyncAttempts: [ { durationMillis: 50655350, status: “HostUnreachable: error fetching oplog during initial sync :: caused by :: error in fetcher batch callback :: caused by :: Error connecting to …”, syncSource: “:270…” }\ncould it be a network problem?\nCheck it out and let me knowBest Regards", "username": "Fabio_Ramohitaj" }, { "code": "", "text": "I did not have much time to troubleshoot, so went ahead with the plan which we thought would work (1hr downtime)This worked, could see the secondary node coming up successfully", "username": "bhargava_vn" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongodb failure to resync a stale member of a replica set
2022-04-27T10:50:06.731Z
Mongodb failure to resync a stale member of a replica set
3,019
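The thread's own evidence pointed to a network problem, but a 15 GB oplog against a ~250 GB data set is also a classic reason a member goes stale during a long initial sync. The mongosh sketch below shows the related checks; it is a hedged aside, not something the thread itself ran, and the 50 GB figure is illustrative only.

// Check how much time the current oplog covers (the "oplog window"):
rs.printReplicationInfo()

// If the window is shorter than the initial sync takes, the oplog can be
// resized online on 4.2 (size is in megabytes; 51200 MB = 50 GB here):
db.adminCommand({ replSetResizeOplog: 1, size: 51200 })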
null
[ "aggregation", "node-js" ]
[ { "code": "async function RunReport(report) {\n c.LogMe('pre RunReport');\n var cursor = await c.db.collection(\"activity\").aggregate(report);\n c.LogMe('post RunReport');\n var answer = await cursor.toArray();\n c.LogMe('post cusor');\n return answer;\n}\nvar answer = await cursor.toArray();", "text": "Hello All,First post so please be nice:)I am trying to debug an odd problem we’re having - and to be honest I’m struggling. We’re performing a aggregate query and depending on the exact query (as it can be altered by the end-user), the client CPU goes to 100%, blocks the event queue while CPU load on the server is not noticeable.So, lets except for the moment that the query in question is rubbish and really slow, why would the client go to 100% cpu and block the node event queue?The code is something like:The CPU load and blocking is at the line:var answer = await cursor.toArray();The results of the query is always small (as there is a limit in the aggregate) and if I’ve interpreted the docs correctly, I can’t see how this line could block even if it has to wait for ‘sometime’ for results to be returned?Our Environment:Server, MongoAtlas 4.2.18\nClient, NodeJS v16.13.2, [email protected] in advances,jez.", "username": "Jeremy_White" }, { "code": "", "text": "Hi @Jeremy_White ,Investigating performance issues is not as simple as looking at the outside code.First we will need to get the full aggregation pipeline you pass to the aggregate command.The toArray method is the place that the query. Is actually executed and waits for a full result set as toArray is not a cursor like iterator.Limiting results is not always enough as the stages before can yield massive resources consumption while client wait. What is the client spec?Against what MongoDB deployment do you run this? What is the topology and machine sizes. Also provide sample docs and output.Thanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "await cursor.toArray();", "text": "Hi Pavel,Thanks for the reply.So this query is basically built by the user, ie, adding various filters and it’s one of these filters that is causing a slow running query. There is an expectation from the user that some of these queries will take time to run and there are appropriate UI controls in place to show this.I am happy to share the pipeline, however, what we’re really confused/concerned about is why the 100% load on the client side and the blocking of the nodejs event loop (while the DB is doing little).It may be my misunderstanding, but with the line:await cursor.toArray();I assume that it effectively yields to the node event loop (allowing other tasks/queries to take place) until the cursor has processed all entries. 
Or are you saying, toArray() is effectually a blocking operation and we should use another approach to yield back to the event loop while the query is running on the DB?Putting the question slightly differently, if I have a pipeline that takes 10 seconds to run, how do I run it without blocking the nodejs event loop for 10 seconds (with the nodejs process at 100% load)?Cheers,jez.", "username": "Jeremy_White" }, { "code": "", "text": "I am not a Mongo Expert,It looks like an operation that blocks your app cause you’re using await (this is how it should work)…if you want to keep the flow that way and just not block the app use Promises instead of await - as in Promises the code doesn’t stop, it gets to the return statement once the data comes back from Mongo.you need to take into account the size of the results because you’re keeping it inside memory - could cause indirectly a high CPU usage.any way you can limit the results amount to ensure no abuse will be created by fetching data.", "username": "Shay_I" }, { "code": "", "text": "Thanks for the reply. So, I was under the assumption that toArray returns a promise, and the await, well, waits until the promise is resolved? It shouldn’t block the entire nodejs queue, ie, nothing else can run, no other incoming events are processed.Yes, this may be something that I may misunderstanding, I was expected the whole pipeline to be run on the server side, not partly on the client? I will look into this more.", "username": "Jeremy_White" }, { "code": "", "text": "I meant to use Promises like this:`async function RunReport(report) {\nc.db.collection(“activity”).aggregate(report).then( (cursor) => {\ndo somthing…\n});\n}\nin this case, the app isn’t blocking the code while fetching the data…anyway I see that toArray() is a pretty heavy operation by itself:\ntoArray()there are other async solutions for that like:\nMongoDB forum discussionor running on the cursor one-by-one and accumulate your result batch as you want:\nCursor one-by-one fetch", "username": "Shay_I" }, { "code": "", "text": "I am having the same problem with NodeJS. Whereas Python and Java are working pretty fine.", "username": "Ashwani_Garg" }, { "code": "", "text": "I have also created a Stackoverflow post here\nhttps://stackoverflow.com/questions/71966751/nodeexpressmongodb-native-client-performance-issue", "username": "Ashwani_Garg" } ]
100% client CPU, little server CPU, nodejs event queue blocked
2022-02-07T20:01:34.823Z
100% client CPU, little server CPU, nodejs event queue blocked
5,481
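The replies above point toward iterating the cursor rather than materializing everything with toArray(). A minimal Node.js sketch of that pattern follows; the collection name comes from the thread, the rest is illustrative. It avoids buffering the whole result set at once and lets the event loop run between batch fetches - though it does not, by itself, explain the 100% CPU reported above.

// Iterate the aggregation cursor with the driver's async-iterator support.
async function runReport(db, pipeline) {
  const cursor = db.collection('activity').aggregate(pipeline);
  const results = [];
  for await (const doc of cursor) { // each awaited batch yields to the event loop
    results.push(doc);
  }
  return results;
}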
null
[]
[ { "code": "", "text": "Hi, I need to copy my data between two accounts. I know we can move data between different Organizationsns but my requirement is to move data from Account A to Account B.", "username": "Rishaeel_A" }, { "code": "", "text": "Check this link", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hello @Rishaeel_A ,Welcome to the community!! You can use mongodump and mongorestore, check this document for more information.Apart from this, you can also use mongomirror.Regards,\nTarun", "username": "Tarun_Gaur" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Copy data from one Mongo Atlas account to another account. Not different Organizations but different Accounts
2022-05-04T07:35:02.401Z
Copy data from one Mongo Atlas account to another account. Not different Organizations but different Accounts
1,886
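A hedged sketch of the mongodump/mongorestore route suggested above. The URIs, user and database names are placeholders (no connection details appear in the thread); mongomirror, the other suggestion, is a separate Atlas-specific tool with its own workflow.

# Dump from the source cluster, then restore into the target cluster:
mongodump --uri="mongodb+srv://<user>:<password>@<source-cluster-host>/<db>" --out=./dump
mongorestore --uri="mongodb+srv://<user>:<password>@<target-cluster-host>" ./dump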
null
[]
[ { "code": "", "text": "Hi, i am new to MongoDB and NoSQL.\nI am trying to store log files from my apps and will need to query the log content using keywords. The log file size is < 5mbs.In a traditional relational databases, i can extract the log file and store it into a table with the columns i.e. [AppName], [DateTime], [Content].Recently i am exploring an ideal to storing file into NoSQL. I would like to know how mongoDB can work out in my case. If so, what is the advantage using MongoDB compare to the traditional relational databases.Appreciate any help or suggestions. Thanks!", "username": "Nerdinosaur" }, { "code": "", "text": "Often it is a better design pattern with large objects to store them in the file system and store references to their location in the database. Every language nowadays has regular expressions so your program can wander through them.But if you want to store large objects in your database, MongoDB itself has regex searches.As far as the size, the maximum BSON document size is 16 megabytes.", "username": "Jack_Woehr" } ]
Storing file and query its content in MongoDB
2022-05-09T04:06:26.628Z
Storing file and query its content in MongoDB
1,136
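To make the keyword-query options above concrete, here is a mongosh sketch assuming a "logs" collection shaped like the relational table the question describes; the field names (appName, dateTime, content) are assumptions, not from the thread.

// Store each log (well under the 16 MB document limit) as one document:
db.logs.insertOne({ appName: "myApp", dateTime: new Date(), content: "ERROR connection refused by peer" })

// Option 1: a text index for keyword search
db.logs.createIndex({ content: "text" })
db.logs.find({ $text: { $search: "refused" } })

// Option 2: the regex search mentioned in the reply
db.logs.find({ content: { $regex: /connection/i } })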
null
[ "python" ]
[ { "code": "", "text": "I am making a fastapi appshould i make am a global pymongo client object\nif my app crashes when will the connection to mongodb close??or create a pymongo client for each api\nand close it at end of apiplease help", "username": "Rohit_Krishnamoorthy" }, { "code": "MongoClientMongoClientfrom fastapi import FastAPI\napp = FastAPI()\[email protected]_event(\"shutdown\")\ndef shutdown_event():\n # Invoke MongoClient close()\n # ... \n", "text": "Hi @Rohit_Krishnamoorthy , and welcome to the forums!should i make am a global pymongo client objectGenerally you should only have one instance of PyMongo MongoClient for the life cycle of your application. In the case of a server, you could try creating a singleton class pattern to manage a MongoClient instance. For more information on singleton please see Python3: Singletonif my app crashes when will the connection to mongodb close??This is trickier to handle as “crash” could mean a few things. From FastAPI point of view itself, you could include a clean up method of MongoClient in shutting down eventSee also FastAPI: shutdown event for more information.If the “crash” event is happening on gunicorn layer, then depending on the type of ‘crash’ you may be able to catch the signal and handle it appropriately. At this point, this is more related to the WSGI server instead of the Python MongoDB driver itself.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "Thank you so much for replyingYour answer makes sense ill look into singleton design pattern\nand fastapi shutdown event.ill ellaborate on “crash” eventThis is my setup\nI have gcp VM and deploy my fastapi backend api inside a docker containerI am deploying my app in a docker container\nand when redeploying the app i am removing the old container\nand rebuilding a new image and containerdoes the mongo db connection close when i remove my old container of fastapi\nthe command i use to remove the container is this\ndocker rm -f <container_name>", "username": "Rohit_Krishnamoorthy" }, { "code": "", "text": "Sorry extending the question a bit morelets say my server stoped for some reason without closing connection.\nwhat is going to happens\nWhat happens on the mongoDB side\nhow long it will hold inactive connections", "username": "Rohit_Krishnamoorthy" }, { "code": "", "text": "Hi @Rohit_Krishnamoorthylets say my server stoped for some reason without closing connection. What happens on the mongoDB side. how long it will hold inactive connectionsIn MongoDB v3.6+, by default a reaper process would clean up an expired Server Sessions every 5 minutes.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
When to close pymongo client
2022-05-04T09:25:23.801Z
When to close pymongo client
7,666
null
[ "node-js", "app-services-cli" ]
[ { "code": "$ realm-cli whoami\nnode:internal/child_process:413\n throw errnoException(err, 'spawn');\n ^\n\nError: spawn Unknown system error -86\n at ChildProcess.spawn (node:internal/child_process:413:11)\n at spawn (node:child_process:700:9)\n at Object.<anonymous> (/Users/me/.nvm/versions/node/v17.9.0/lib/node_modules/mongodb-realm-cli/wrapper.js:22:22)\n at Module._compile (node:internal/modules/cjs/loader:1099:14)\n at Object.Module._extensions..js (node:internal/modules/cjs/loader:1153:10)\n at Module.load (node:internal/modules/cjs/loader:975:32)\n at Function.Module._load (node:internal/modules/cjs/loader:822:12)\n at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:77:12)\n at node:internal/main/run_main_module:17:47 {\n errno: -86,\n code: 'Unknown system error -86',\n syscall: 'spawn'\n}\n\nNode.js v17.9.0\n", "text": "When I try to run realm-cli I get node:internal/child_process everytime, no matter what i do. I tried reinstalling, using different node version, previous mongodb-realm-cli version, same error. I’m on new Mac M1, using zsh and nvm", "username": "Anton_Artemyev" }, { "code": "", "text": "Hi Anton,Thanks for posting your issue and welcome to the community!I used zsh, nvm, and node v17.9.0 but was not able to reproduce this error on realm-cli v2.3.5.Which realm-cli version are you using?Also, could you try logging into a different Atlas project with realm-cli and see if it still happens?Regards\nManny", "username": "Mansoor_Omar" }, { "code": "", "text": "Thank you for looking into this.\nMy issue was that i did a mac transfer from old mac to new M1 which broke few things for me. I had to redo setup from scratch and it worked. –", "username": "Anton_Artemyev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Unable to run realm-cli, returns node:internal/child_process
2022-05-08T04:26:11.222Z
Unable to run realm-cli, returns node:internal/child_process
5,422
null
[]
[ { "code": "", "text": "I want to delete all the post that i had posted in this forum. There doesn’t seems to be any option out there. What to do", "username": "Jayarani_G" }, { "code": "", "text": "Hi @Jayarani_G,You can delete recent topics & posts, but older topics or topics with replies currently require some assistance.Flagging posts for moderator assistance (as you have done) is the correct way to get assistance for actions that may not be available to you.I will follow up privately on your flagged topics.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Delete a topic i posted in the forum
2022-05-08T17:25:58.629Z
Delete a topic i posted in the forum
3,653
null
[]
[ { "code": "UPDATE col1, col2\nSET col2.entityId = col1.Entity_ID\nWHERE col1.id = col2.id\nand col2.type='DEVICE';\n", "text": "using Mongo version 4.2 I need to update a field in one collection as based on a field from another collection, according to a mutual ID field in both collections.In a regualr SQL, the update would be:How can I achieve this?Thanks", "username": "Tamar_Nirenberg" }, { "code": "", "text": "It would help us to help you if you could provide sample input document and expected result.It is easier for you to provide sample documents that you have than for us to create arbitrary document that could match your use case.", "username": "steevej" }, { "code": "col1", "text": "In addition to which the SQL offered is not correct since col1 doesn’t get updated!", "username": "Jack_Woehr" }, { "code": "\t\"userName\" : \"admin\",\n\t\"date\" : ISODate(\"2022-04-10T20:37:44.854+03:00\"),\n\t\"entityId\" : 234,\n\t\"type\" : \"DEVICE\",\n\t\"entityName\" : \"G-37299D\",\n\t\"subscriberName\" : \"MTP14262 R5.12u 053 \",\n\t\"deviceName\" : \"G-37299D\",\n\t\"id\" : 13760,\n\t\"tei\" : \"000148190607400\",\n\t\"operationDTO\" : \"CREATE\",\n\t\"operation\" : \"Associated a TG DMO Trng 2a\",\n\t\"version\" : 0,\n\t\"auditId\" : 315\n},\n\t\"id\" : 13760,\n\t\"version\" : 0,\n\t\"audit_id\" : 10823,\n\t\"parent_name\" : \"Garda Cluster2\",\n\t\"role\" : \"\",\n\t\"imei\" : NumberLong(\"108242225510\"),\n\t\"group_member_id\" : \"\",\n\t\"codeplugversion\" : 7325,\n\t\"issi\" : 28827,\n\t\"radioucsbarredcallesrerouting_id\" : 14840254,\n\t\"active_scan_tgs_allowed\" : 4,\n\t\"radioitmpath\" : \"\",\n\t\"directorate_id\" : \"\",\n\t\"department_id\" : \"\",\n\t\"branch_id\" : \"\",\n\t\"section_id\" : \"\",\n\t\"Entity_ID\" : 111\n},\ncol2.entityIdcol1.Entity_ID id", "text": "Hi,So here is an example of a document from both collections:col2 (the collection to be updated):col1 (the source for the update):So I need to update col2.entityId field from col1.Entity_ID field,\naccording to matching id’s field in both collections (but only for those documents where the col2.type=‘DEVICE’)Thanks", "username": "Tamar_Nirenberg" }, { "code": "", "text": "To add more to my previous reply - regarding the expected result:\nI need col2.entityId to be to be updated to value “111” (which is the value of col1.Entity_ID)\naccroding to a match between col2.id and col1.id (in this case tha value “13760” which matches in both collections)Thanks,\nTamar", "username": "Tamar_Nirenberg" }, { "code": "match_stage = { \"$match\" : {{ \"type\" : \"DEVICE\" } }\n idlookup_stage = { \"$lookup\" : { \n \"from\" : \"col1\" ,\n \"localField\" : \"id\" ,\n \"foreignField\" : \"id\" ,\n \"as\" : _tmp_lookup_result\n} }\nset_first_stage = { \"$set\" : { \"_tmp_first\" : { \"$first\": \"$_tmp_lookup_result\" } } }\nproject_stage = { \"$project\" : { \"entityId\" : \"$_tmp_first.Entity_ID' } }\npipeline = [ match_stage , lookup_stage , set_first_stage , project_stage ]\n[ { _id: ObjectId(...), entityId: 111 } ]\nmerge_stage = { \"$merge\" : { \"into\" : \"col2\" , \"on\" : \"_id\" } }\npipeline = [ match_stage , lookup_stage , set_first_stage , project_stage , merge_stage ]\n", "text": "May be there is a better way with the new update with aggregation. 
But my approach would be to use lookup to get the correct value and then merge that value back into the collection.Untested with many blanks still to fill.We start we a match stage forbut only for those documents where the col2.type=‘DEVICE’Then a lookup stage foraccording to matching id ’s field in both collectionsThen a small utility stage to get the first (should be the only one) element of the lookup.The next stage is a project that only keeps the _id and the value we want to merge into the collection. So far I used tmp* for field names because I like to see the intermediate results when I debug. The project gets rid of the the tmp*.If we just aggregate the pipelinewe should get results likeIf happy with the result we can use a merge stage to set the new value in the collectionSo the final pipeline that updates the documents is", "username": "steevej" } ]
Update a collection based on another collection
2022-05-06T08:15:27.985Z
Update a collection based on another collection
9,646
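For convenience, here are the answer's stages assembled into a single mongosh call. It remains untested, as the original answer warns, and carries two version caveats for the asker's stated MongoDB 4.2: $first as an aggregation expression only arrived in 4.4 (so the sketch uses $arrayElemAt instead), and $merge back into the same collection being aggregated is only allowed from 4.4, so this exact form assumes 4.4+ or needs restructuring on 4.2.

db.col2.aggregate([
  { $match: { type: "DEVICE" } },
  { $lookup: { from: "col1", localField: "id", foreignField: "id", as: "_tmp_lookup_result" } },
  // $arrayElemAt works on 4.2; { $first: ... } would require 4.4+
  { $set: { _tmp_first: { $arrayElemAt: ["$_tmp_lookup_result", 0] } } },
  { $project: { entityId: "$_tmp_first.Entity_ID" } },
  { $merge: { into: "col2", on: "_id" } } // writes entityId back onto col2 by _id
])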
null
[ "queries" ]
[ { "code": "", "text": "Actually, we were using a query to purge one of the collection data but recently one of the fields (createdOn) was removed and when the query has run again the entire collection data was deleted.\nso just want to know how MongoDB works if any field is removed and if any query uses that field, will there be an error thrown?Please find the query below:\nQuery: { “createdOn” : { “$lt” : { “$date” : “2022-04-14T04:59:47.076Z”}}}, Fields: {}, Sort: {}", "username": "Sravan_Chowdary_Bala" }, { "code": "> db.servers.find({\"asdf\": \"a;lkdjf\"})\n>\n>>> for a in coll.find({'field_doesnt_exist': 'random value'}):\n... print(\"{}\".format(a))\n... \n>>> \n", "text": "No MongoDB will not throw an error if you do a find on a field that doesn’t exist see below an example in python and the shellMongo shell (returns with no errors)Python (pymongo)", "username": "tapiocaPENGUIN" }, { "code": "delete", "text": "Can you share the code including the delete command?", "username": "Joe_Drumgoole" }, { "code": "\nmongosh> c.find()\n// all documents\n{ _id: 0 }\n{ _id: 1, date: null }\n{ _id: 2, date: 2022-05-07T22:14:16.825Z }\n\nmongosh> c.find().sort( { date : 1 } )\n\n// all documents sorted by date, null and missing date are included and are comes before\n{ _id: 0 }\n{ _id: 1, date: null }\n{ _id: 2, date: 2022-05-07T22:14:16.825Z }\n\nmongosh> c.find( { date : { $lt : new Date( \"2022-05-08\")}})\n// when we find by date we only find documents with an existing date field\n{ _id: 2, date: 2022-05-07T22:14:16.825Z }\n\nmongosh> c.find( { date : { $gt : new Date( \"2022-05-08\")}})\n// this is confirmed by finding no document\n\nmongosh> c.deleteMany( { date : { $lt : new Date( \"2022-05-08\")}})\n// in theory only a document that can be found by a query will be delete by the same query\n// and that is confirmed by\n{ acknowledged: true, deletedCount: 1 }\n// and by looking at the remaining document\nmongosh> c.find()\n{ _id: 0 }\n{ _id: 1, date: null }\nmongosh> c.deleleMany({},{“createdOn” : { “$lt” : { “$date” : “2022-04-14T04:59:47.076Z”}}}\n\n// all documents will be delete.\n", "text": "I suspect something else deleted the collection. May deleteMany() has been used wrongly.So something else deleted your documents.However, I have no difficulty seeing someone making an error while issuing a command like:", "username": "steevej" }, { "code": "", "text": "sorry for the delayed reply. Please find the delete command.“command”:{“delete”:“jbk_aud_master”,“ordered”:true,\"$db\":“alljobs_prod”,“lsid”:{“id”:{\"$uuid\":“2306c8dc-1179-4ae4-ae5b-fdc913ddc5c0”}}},“numYields”:99216,“reslen”:45,“locks”", "username": "Sravan_Chowdary_Bala" }, { "code": "", "text": "Thanks for your reply. Below was the delete query used.“command”:{“delete”:“jbk_aud_master”,“ordered”:true,\"$db\":“alljobs_prod”,“lsid”:{“id”:{\"$uuid\":“2306c8dc-1179-4ae4-ae5b-fdc913ddc5c0”}}},“numYields”:99216,“reslen”:45,“locks”", "username": "Sravan_Chowdary_Bala" }, { "code": "", "text": "Your original post was mentioning a query with the field createdOn:Query: { “createdOn” : { “$lt” : { “$date” : “2022-04-14T04:59:47.076Z”}}}, Fields: {}, Sort: {}and now the query is“command”:{“delete”:“jbk_aud_master”,“ordered”:true,\"$db\":“alljobs_prod”,“lsid”:{“id”:{\"$uuid\":“2306c8dc-1179-4ae4-ae5b-fdc913ddc5c0”}}},“numYields”:99216,“reslen”:45,“locks”which has not mentioned of createdOn at all. 
But may be is referred in the part that you have truncated before posting.I also wonder why your original post mentioned Fields and Sort inQuery: { “createdOn” : { “$lt” : { “$date” : “2022-04-14T04:59:47.076Z”}}}, Fields: {}, Sort: {}for a query used for deleting.So just to summarize the thread.", "username": "steevej" }, { "code": "", "text": "Can you post the complete code fragment. The query and the delete?", "username": "Joe_Drumgoole" } ]
Ran a query while there was no field named "createdOn" present in the collection and so that entire collection documents were deleted
2022-05-06T09:09:53.139Z
Ran a query while there was no field named &ldquo;createdOn&rdquo; present in the collection and so that entire collection documents were deleted
2,960
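A defensive mongosh sketch of the purge this thread is about. As the demonstration above shows, $lt alone already skips documents missing the field; adding $exists just makes the intent explicit, and a count check plus keeping the filter as the first argument guards against the argument-order mistake illustrated in the last code block. The collection name comes from the logged delete command.

const cutoff = new Date("2022-04-14T04:59:47.076Z");
const filter = { createdOn: { $exists: true, $lt: cutoff } };
db.jbk_aud_master.countDocuments(filter); // inspect the match before deleting
db.jbk_aud_master.deleteMany(filter);     // note: the filter is the FIRST argument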
null
[]
[ { "code": "", "text": "Hi fellows I am Uli 54 from Germany Linux Sysadmin and new in mongodb. Find more about me at : Uli Kleemann Linux SysttemadministratorRegards,Uli", "username": "Ulrich_Kleemann" }, { "code": "", "text": "Hey @Ulrich_Kleemann,\nWelcome to MongoDB Community!Thanks for sharing your profile. It’s great seeing the experience you have, the projects and certifications you have done, and your love for Linux. We would love to know about your experience with MongoDB and what brings you here so that we can help you find the relevant threads in our forums.Also, knowing that you are from Germany, our upcoming DACH User Group Event on MongoDB 5.0 might be interesting for you to attend and meet some of the community members from the region.Hope you have a great time here!\n– Harshit", "username": "Harshit" } ]
Hello I am Uli from Germany
2022-05-06T12:32:23.211Z
Hello I am Uli from Germany
2,673
null
[]
[ { "code": "", "text": "First things first, I’m a noob. Now that that’s out of the way…I’m building a Podcast app utilizing MERN and some of the basic functionality I want on the back end is being able to “like” podcasts and perhaps create playlists.All of the data for the actual podcasts is coming through via a 3rd party API so that’s not a worry.So I’m trying to decide if I would create 2 collections?This is what I’ve seen from some of the courses I’ve taken, but it almost seems more sensible to just put those features in the User schema, no? That way I can just query the User and have everything right there.I don’t understand a lot of the technical aspects of performance, but it just seems weird to store all of the likes by themselves, but also reference them to the user.Am I thinking about this correctly? Or what would you do? Help is much appreciated!", "username": "Nate_O" }, { "code": "", "text": "Let’s look at it this way: If they were 2 separate collections, what would a document for each look like?", "username": "Jack_Woehr" }, { "code": "", "text": "Yeah, I’m having trouble thinking of them as separate sort of. I guess you could just have a document for each podcast that was liked and then the users that liked it?Idk. I’m leaning more towards embedding the likes into the user document. I’m just not sure if it’s bad practice to have to update the user document every time they like something?Thank you for responding btw. I’m actually quite stuck on this even though it doesn’t seem like it should be all that hard.", "username": "Nate_O" }, { "code": "", "text": "The design problem you’re facing is simply an opportunity to think about design.Do it both ways and play with it. What you learn will last you through your career.", "username": "Jack_Woehr" }, { "code": "", "text": "Fundamental principle of software design: Don’t rush to solve the problem … model various solutions. Find a model you like and grow it.", "username": "Jack_Woehr" } ]
Simple question about how to approach schema/collections
2022-05-07T23:46:22.493Z
Simple question about how to approach schema/collections
1,493
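Since the advice above is to model both designs and compare, here is a small JavaScript sketch of the two shapes under discussion; all field names are illustrative assumptions, not from the thread.

// Option 1: embed likes/playlists in the user document.
// One read fetches everything, but the document grows with every like,
// and "who liked podcast X?" requires scanning users.
const userEmbedded = {
  _id: "user1",
  likedPodcasts: ["podcast42", "podcast7"], // IDs from the 3rd-party API
  playlists: [{ name: "Favorites", podcastIds: ["podcast42"] }]
};

// Option 2: a separate likes collection referencing the user.
// The user document stays small, likes are appended rather than rewritten,
// and both lookup directions are cheap with indexes on userId and podcastId.
const like = { userId: "user1", podcastId: "podcast42", createdAt: new Date() };
// db.likes.find({ userId: "user1" })        -> a user's likes
// db.likes.find({ podcastId: "podcast42" }) -> a podcast's fans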
null
[ "queries", "java", "atlas-cluster", "spring-data-odm" ]
[ { "code": "{field1: value1, field2: value2},{\"_id\": {\"$in\": [{\"$oid\": \"6274f6cdb114ec8177ff48a6\"}, {\"$oid\": \"6274f6e3b114ec8177ff48a7\"}]}{\"_id\": {\"$in\": [ObjectId(\"6274f6cdb114ec8177ff48a6\"), ObjectId(\"6274f6e3b114ec8177ff48a7\")]}}public interface Collection1Repository extends MongoRepository<Collection1, String>", "text": "I am using the mongodb sync driver v4.2.3 with Spring Boot.I have a requirement of referencing an array of objects which sit in another collection. We have used DBRef to implement this relationship. Here’s my schema:field1: String\nfield2: String\nfield3: Array [ Collection2 DBRef ].field1: String\nfield2: StringWhen I run a simple find on Collection1 with a condition of{field1: value1, field2: value2},The query above matches one document which has 2 elements inside field3 (an Array). I can see that the Java driver is automatically trying to fetch the 2 documents from Collection2 ALSO which are part of the document in Collection1. It runs a find query with a condition like this:{\"_id\": {\"$in\": [{\"$oid\": \"6274f6cdb114ec8177ff48a6\"}, {\"$oid\": \"6274f6e3b114ec8177ff48a7\"}]}And that throws an exception:com.mongodb.MongoCommandException: Command failed with error 2 (BadValue): ‘cannot nest $ under $in’ on server testtimeseries-shard-00-01.35m0c.mongodb.net:27017. The full response is {“ok”: 0.0, “errmsg”: “cannot nest $ under $in”, “code”: 2, “codeName”: “BadValue”, “$clusterTime”: {“clusterTime”: {“$timestamp”: {“t”: 1651934469, “i”: 1}}, “signature”: {“hash”: {“$binary”: {“base64”: “jolmdp4RkFGQXXr08b+10Lh9x9Q=”, “subType”: “00”}}, “keyId”: 7083736286242013188}}, “operationTime”: {“$timestamp”: {“t”: 1651934469, “i”: 1}}}When I run the same query on Collection2 manually from the mongo shell, like this:{\"_id\": {\"$in\": [ObjectId(\"6274f6cdb114ec8177ff48a6\"), ObjectId(\"6274f6e3b114ec8177ff48a7\")]}}it works absolutely fine and returns the two documents.I do have a definition of a Repository object for Collection1 as follows, but I don’t have anything for Collection2:public interface Collection1Repository extends MongoRepository<Collection1, String>Is there something different I need to be doing in order to make this work? Is this an unsupported scenario, or a bug that has been fixed in a later version?\nDo I need to define a Repository class and some find operations on it? I just have a POJO with @Document(collection = “Collection2”) annotation for it.", "username": "Muralidhar_Rao" }, { "code": "{\"$in\": [ObjectId(\"6274f6cdb114ec8177ff48a6\"), ObjectId(\"6274f6e3b114ec8177ff48a7\")]}\n{\"$in\": [{\"$oid\": \"6274f6cdb114ec8177ff48a6\"}, {\"$oid\": \"6274f6e3b114ec8177ff48a7\"}]}\n{\"$in\": [new ObjectId(\"6274f6cdb114ec8177ff48a6\"), new ObjectId(\"6274f6e3b114ec8177ff48a7\")]}\n", "text": "Do in java the same thing that you do in javascript via mongosh.The class is ObjectId (mongo-java-driver 3.6.0 API).In JS you did:rather thanIn java you do:", "username": "steevej" }, { "code": "", "text": "I’m not manually running the second query. The java driver is automatically trying to fetch from collection2 using $oid instead of ObjectId inside a $in, and I have no control over that.If I was running that query, I would have made that change you suggested, I agree.Is there something else I need to do in Collection2.java?", "username": "Muralidhar_Rao" } ]
Cannot nest $ in $in
2022-05-07T17:18:05.830Z
Cannot nest $ in $in
4,596
null
[ "database-tools" ]
[ { "code": "", "text": "I managed to install mongo DB Community edition using mongodb-windows-x86_64-4.4.0-signed.msi on my Windows 10 PRO machine and the database works fine as a windows service, but the developer tools are missing. If I try a custom installation and manually check the Miscellaneous Tools, the installer reports “This feature requires 0KB on your hard drive” and the tools are not installed. Is there a problem with the installer?", "username": "Mark_Greenwood" }, { "code": "", "text": "You can download tools from Try MongoDB Atlas Products | MongoDB and there is a “tools” section that has the downloads for the developer tools (ie Compass).If you download the .zip file for the community edition. In the bin directory you will see the mongod.exe and mongo.exe along with the other tools such as mongoimport and mongorestore. Which you can then run.If above isn’t what you’re looking for here is the link to mongodb developer tools which may give you some extra info on what you’re looking for: MongoDB Developer Tools | MongoDB", "username": "tapiocaPENGUIN" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Developer tools are missing in Windows 10 Community edition installer
2020-09-02T14:49:59.285Z
Developer tools are missing in Windows 10 Community edition installer
4,185
null
[ "python", "connecting", "atlas-cluster" ]
[ { "code": "OLD STRING\nmongo \"mongodb+srv://<username>:<password>@cdf46a6c-6797-40e1-a655.35m0c.mongodb.net\"\nNEW STRING\nmongo \"mongodb+srv://<username>:<password>@cdf46a6c-6797-40e1-a655-**pri.**35m0c.mongodb.net\"\nTraceback (most recent call last):\n File \"mongo.py\", line 40, in <module>\n my_driver = initialise_mongo_client()\n File \"mongo.py\", line 37, in initialise_mongo_client\n print(client.get_database('abs_data').list_collection_names())\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/database.py\", line 880, in list_collection_names\n for result in self.list_collections(session=session, **kwargs)]\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/database.py\", line 843, in list_collections\n _cmd, read_pref, session)\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/mongo_client.py\", line 1515, in _retryable_read\n read_pref, session, address=address)\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/mongo_client.py\", line 1346, in _select_server\n server = topology.select_server(server_selector)\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/topology.py\", line 246, in select_server\n address))\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/topology.py\", line 203, in select_servers\n selector, server_timeout, address)\n File \"/usr/local/lib64/python3.6/site-packages/pymongo/topology.py\", line 220, in _select_servers_loop\n (self._error_message(selector), timeout, self.description))\npymongo.errors.ServerSelectionTimeoutError: cdf46a6c-6797-40e1-a655-shard-00-01-pri.35m0c.mongodb.net:27017: timed out,cdf46a6c-6797-40e1-a655-shard-00-00-pri.35m0c.mongodb.net:27017: timed out,cdf46a6c-6797-40e1-a655-shard-00-02-pri.35m0c.mongodb.net:27017: timed out, Timeout: 30s, Topology Description: <TopologyDescription id: 6234ce37bc72a6a821d90859, topology_type: ReplicaSetNoPrimary, servers: [<ServerDescription ('cdf46a6c-6797-40e1-a655-shard-00-00-pri.35m0c.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cdf46a6c-6797-40e1-a655-shard-00-00-pri.35m0c.mongodb.net:27017: timed out',)>, <ServerDescription ('cdf46a6c-6797-40e1-a655-shard-00-01-pri.35m0c.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cdf46a6c-6797-40e1-a655-shard-00-01-pri.35m0c.mongodb.net:27017: timed out',)>, <ServerDescription ('cdf46a6c-6797-40e1-a655-shard-00-02-pri.35m0c.mongodb.net', 27017) server_type: Unknown, rtt: None, error=NetworkTimeout('cdf46a6c-6797-40e1-a655-shard-00-02-pri.35m0c.mongodb.net:27017: timed out',)>]>\n", "text": "Hello folks,I have a cluster to which I have no clue how to connect using Python. I want to mention right at the outset that I HAVE added my IP to the whitelist.My cluster is a UUID based cluster: cdf46a6c-6797-40e1-a655-96b6b5291e2bPreviously when I had tried to connect to this cluster using mongo shell v4.2, I had gotten this URL to connect to from the Atlas “Connect” section, AND IT WORKS:Now when I go to the Atlas interface to get my connection string, it gives me this command:Notice the extra “pri”??In short, I’m wondering which connection string am I supposed to rely on? If I use connection string OLD STRING, both mongo shell and Python program works. 
If I use NEW STRING, I see the following errors:Somebody please help!", "username": "Muralidhar_Rao" }, { "code": "def initialise_mongo_client() -> pymongo.MongoClient:\n atlas_password = '<thepassword>'\n atlas_user = '<theuser>'\n #bad_atlas_uri = \"mongodb+srv://cdf46a6c-6797-40e1-a655.35m0c.mongodb.net\"\n #good_atlas_uri = \"mongodb+srv://%s:%[email protected]\" % (quote_plus(atlas_user), quote_plus(atlas_password))\n atlas_uri = \"mongodb+srv://%s:%[email protected]/admin?retryWrites=true&w=majority\" % (quote_plus(atlas_user), quote_plus(atlas_password))\n print(atlas_uri)\n client = pymongo.MongoClient(atlas_uri, tls=True, tlsAllowInvalidCertificates=True)\n #client = pymongo.MongoClient(atlas_uri, username=quote_plus(atlas_user), password=quote_plus(atlas_password),\n # tls=True, tlsAllowInvalidCertificates=True, authSource='admin')\n # client = pymongo.MongoClient(uri)\n\n print(client.get_database('abs_data').list_collection_names())\n return client\n\nmy_driver = initialise_mongo_client()\n", "text": "Here’s the python code I’m using for reference:", "username": "Muralidhar_Rao" }, { "code": "cdf46a6c-6797-40e1-a655-**pri.**35m0c.mongodb.net", "text": "cdf46a6c-6797-40e1-a655-**pri.**35m0c.mongodb.netWhat has changed from the time it was working?\nWhat is your cluster type?(M0,M5 etc).Asking this because pri in your string may referring to private end point?\nCan you ping cdf46a6c-6797-40e1-a655-**pri.**35m0c.mongodb.net\nNetwork error indicates it may not be the correct hostname\nFrom Atlas get connect string did you use connect by shell or connect by app?", "username": "Ramachandra_Tummala" }, { "code": "msrao@msrao-r ~ [68]> ping cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net\nping: cannot resolve cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net: Unknown host\nmsrao@msrao-r ~ [68]> ping cdf46a6c-6797-40e1-a655.35m0c.mongodb.net\nping: cannot resolve cdf46a6c-6797-40e1-a655.35m0c.mongodb.net: Unknown host\nmsrao@msrao-r ~ [68]>\n", "text": "What has changed from the time it was working?\nWhat is your cluster type?(M0,M5 etc).Asking this because pri in your string may referring to private end point?\nCan you ping cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net\nNetwork error indicates it may not be the correct hostname\nFrom Atlas get connect string did you use connect by shell or connect by app?I am not sure what has changed. I am pretty sure I got both connection URLs from Atlas, so is Atlas somehow giving me the wrong URL?My cluster is M40.The surprising thing is, the Java driver works OK and connects using mongodb+srv://cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net. I am providing only basic connection parameters like username, password, SSL etc and no extra/different options to the Java driver, compared to the Python driver.The Python driver fails to connect with mongodb+srv://cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net but it works well with cdf46a6c-6797-40e1-a655.35m0c.mongodb.netPing doesn’t work for either of the hostnames, see below:From Atlas interface, I have tried getting the connect string for mongo shell (mongosh and mongo older), Java Driver and Python driver. All of them return the URL with ‘pri’ in it.Open questions", "username": "Muralidhar_Rao" }, { "code": "/// Using dig to resolve the \"pri\" SRV record:\ndig srv _mongodb._tcp.cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net\n\n/// The answer section which includes the hostnames associated with the Private IP of the nodes within the cluster. 
Note the additional \"pri\" in these as well\n;; ANSWER SECTION:\n_mongodb._tcp.cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net. 60\tIN SRV 0 0 27017 cdf46a6c-6797-40e1-a655-shard-00-00-pri.35m0c.mongodb.net.\n_mongodb._tcp.cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net. 60\tIN SRV 0 0 27017 cdf46a6c-6797-40e1-a655-shard-00-01-pri.35m0c.mongodb.net.\n_mongodb._tcp.cdf46a6c-6797-40e1-a655-pri.35m0c.mongodb.net. 60\tIN SRV 0 0 27017 cdf46a6c-6797-40e1-a655-shard-00-02-pri.35m0c.mongodb.net.\n/// Using dig to resolve the non-\"pri\" SRV record:\ndig srv _mongodb._tcp.cdf46a6c-6797-40e1-a655.35m0c.mongodb.net\n\n/// The answer section which includes hostnames associated with the public IP of the nodes within the cluster. Note, there is no \"pri\" in these hostnames.\n;; ANSWER SECTION:\n_mongodb._tcp.cdf46a6c-6797-40e1-a655.35m0c.mongodb.net. 60 IN SRV 0 0 27017 cdf46a6c-6797-40e1-a655-shard-00-00.35m0c.mongodb.net.\n_mongodb._tcp.cdf46a6c-6797-40e1-a655.35m0c.mongodb.net. 60 IN SRV 0 0 27017 cdf46a6c-6797-40e1-a655-shard-00-01.35m0c.mongodb.net.\n_mongodb._tcp.cdf46a6c-6797-40e1-a655.35m0c.mongodb.net. 60 IN SRV 0 0 27017 cdf46a6c-6797-40e1-a655-shard-00-02.35m0c.mongodb.net.\ndigpingpingdigping cdf46a6c-6797-40e1-a655-shard-00-00-pri.35m0c.mongodb.net\nping cdf46a6c-6797-40e1-a655-shard-00-00.35m0c.mongodb.net\n", "text": "Hi @Muralidhar_Rao,Welcome to the community.Notice the extra “pri”??The extra “pri” noted in your URI is associated with the private IP addresses of the Atlas nodes of that cluster which should be reachable within the peered network.For demonstration purposes, using the two URI’s provided, you will see that that the hostnames resolved from the SRV record differ slightly with the additional “pri”:You can read more information regarding the above here on the FAQ: Connection String Options documentation.Ping doesn’t work for either of the hostnames, see below:From the above dig results, the ping command and output did not succeed as you have attempted to ping the SRV record and not the hostnames. In the below ping example, the hostnames used were obtained from the dig outputs above.From the python client that is failing to connect, please run the following commands and provide the output of each:I am not sure what has changed. I am pretty sure I got both connection URLs from Atlas, so is Atlas somehow giving me the wrong URL?This is also detailed in the FAQ: Connection String Options documentation mentioned above, but the two strings you’ve obtained are from either of the following options when going through the connect modal:\nimage728×284 16.3 KB\nGenerally, if you’re wanting to connect over the VPC peering connection, you should use the Private IP for Peering connection string. Otherwise, if you’re wanting to connect over the public internet to that specific cluster, you can use the standard connection string.Is the client performing this connection using the Java driver the same client as the one attempting connection using the python driver?Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thanks for the reply Jason!I was using the VPC peered URI for a standard connection. Once I used the right URI for the right client, everything worked fine.", "username": "Muralidhar_Rao" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Connecting to Atlas with pymongo v4.0.2
2022-03-18T18:35:35.323Z
Connecting to Atlas with pymongo v4.0.2
5,662
null
[ "queries", "node-js" ]
[ { "code": "app.post(\"/order\", async (req, res) => {\n\n const my = req.body;\n\n const result = await myCollection.insertOne(my);\n\n res.send(result);\n\n });\n\n app.get(\"/order\", async (req, res) => {\n\n const query = {};\n\n console.log(query);\n\n const cursor = myCollection.find(query);\n\n const services = await cursor.toArray();\n\n res.send(services);\n\n });\nconst onSubmit = (data, event) => {\n\n const url = `http://localhost:5000/service`;\n\n fetch(url, {\n\n method: \"POST\",\n\n headers: {\n\n \"content-type\": \"application/json\",\n\n },\n\n body: JSON.stringify(data),\n\n })\n\n .then((res) => res.json())\n\n .then((result) => {\n\n setIsReload(!isReload);\n\n if (result) {\n\n alert(\"Add Successful\");\n\n }\n\n });\n\n const order = {\n\n email: user.email,\n\n name: event.target.name.value,\n\n description: event.target.description.value,\n\n price: event.target.price.value,\n\n quantity: event.target.quantity.value,\n\n };\n\n axios.post(`http://localhost:5000/order`, order).then((res) => {\n\n const { data } = res;\n\n console.log(data);\n\n if (data.insertedId) {\n\n alert(\"Inserted\");\n\n }\n\n event.target.reset();\n\n console.log(res);\n\n });\n", "text": "“message”: “Unauthorized Access” when i post something on my input form post is working correctly but when I wanted to get that thing http://localhost:5000/my it’s showing “message”: \"Unauthorized AccessserverSide Code IS:Client side code is:", "username": "Mahesh_Biswas" }, { "code": "\"/order\"", "text": "Hi @Mahesh_Biswas\nWelcome to the community forum!!Unauthorized AccessCould you please help me in understanding where do you see this error.\nIs this on the browser or over the Postman application. ?Also, as per the URL: http://localhost:5000/my , is there an API which gets called to display the user inserted.\nAs per the code snippet mentioned, \"/order\" would give the result for the GET request.Also, for our better understanding, could you help with the complete error message with more details or error message related to MongoDB.Thanks\nAasawari", "username": "Aasawari" }, { "code": "", "text": "I have already solved it, mam, thanks for you message", "username": "Mahesh_Biswas" } ]
"message": "Unauthorized Access" is showing when I get http://localhost:5000/my from this location
2022-05-05T05:44:29.877Z
&ldquo;message&rdquo;: &ldquo;Unauthorized Access&rdquo; is showing when I get http://localhost:5000/my from this location
3,148
null
[ "queries", "compass" ]
[ { "code": "", "text": "I have been doing my task using mongodb compass. But is it enough? When I type MongoDB in command prompt, it says the machine doesn’t recognize MongoDB.", "username": "Md_Rayhan" }, { "code": "mongosh", "text": "Hi @Md_Rayhan - Welcome to the communityI have been doing my task using mongodb compass. But is it enough?I guess this will depend on what you are required to do currently and/or in the future. Is there a particular use case currently or in the future you are encountering issues with whilst using MongoDB Compass?When I type MongoDB in command prompt, it says the machine doesn’t recognize MongoDB.I presume you are entering this in the mongosh shell portion within Compass but to confirm please provide the following information:Regards,\nJason", "username": "Jason_Tran" } ]
Is MongoDB Compass enough?
2022-05-05T17:43:02.690Z
Is MongoDB Compass enough?
1,525
null
[]
[ { "code": "{\n \"_id\": \"myid\",\n \"userId\": \"currentuserid\",\n \"username\": \"myuser\",\n \"app\": {\n \"twitter\": [],\n \"rss\": [\n {\n \"url\": \"https://www.example1.com/rss\",\n \"_id\": \"id1site1\",\n \"feed\": [ \n { \"data\": \"this is article 31267218\", \"guid\": { \"_text\": \"31267218\" } },\n { \"data\": \"this is article 31259997\", \"guid\": { \"_text\": \"31259997\" } }\n ]\n },\n {\n \"url\": \"https://www.example2.com/rss\",\n \"_id\": \"id2site2\",\n \"feed\": [\n { \"data\": \"this is article 44446355\", \"guid\": { \"_text\": \"44446355\" } },\n { \"data\": \"this is article 44433222\", \"guid\": { \"_text\": \"44433222\" } }\n ]\n }\n ]\n }\n}\n let distinct = await UsersAppDB.rawCollection().distinct(\"app.rss.feed\", \n { \n \"userId\": \"currentuserid\",\n \"app.rss._id\": \"id2site2\",\n \"app.rss.feed.guid._text\": { $eq: \"44446355\" }\n }\n )\n{ \"data\": \"this is article 44446355\", \"guid\": { \"_text\": \"44446355\" } },{ \"data\": \"this is article 44446355\", \"guid\": { \"_text\": \"44446355\" }, visible: true },", "text": "Hello,I’m using Meteor and Mongodb.Considering the data structure:For example, _id=“id2site2” and guid=“44446355” as parameter,i’m using this command:but this doesn’t return the correct result.i would like to return this:\n{ \"data\": \"this is article 44446355\", \"guid\": { \"_text\": \"44446355\" } },Next is to set a new field {visible: true} in this part of the data:\n{ \"data\": \"this is article 44446355\", \"guid\": { \"_text\": \"44446355\" }, visible: true },How to update the data?Thank you for helping", "username": "Daniel_Assayag" }, { "code": "let guidSelection = await UsersAppDB.rawCollection().aggregate([\n { $match: { \"userId\": Meteor.userId()} },\n { $unwind: '$app.rss' },\n { $replaceRoot: { newRoot: '$app.rss' } },\n { $match: { 'url': url } },\n { $unwind: '$feed' },\n { $match: { 'feed.guid._text': guid } },\n { $set: { 'feed.visibility': !visibility } },\n]).toArray()\n", "text": "First part of the answer:\nI could retrieve the data part using $unwind and $replaceRoot successively to access the part of the data i’m interested using:I just need to use update function, as aggregate doesn’t mutate the data in the Collection.I’ll update this as soon i find out how.", "username": "Daniel_Assayag" }, { "code": "let aggregate = await UsersAppDB.rawCollection().aggregate([\n { $match: { \"userId\": Meteor.userId()} },\n { $match: { 'app.rss.url': url } },\n { $match: { 'app.rss.feed.guid._text': guid } },\n { $set: { 'app.rss.feed.visibility': !visibility } },\n { $merge: 'userappdb' }\n", "text": "considering that userappdb is the name of your collection,\nthis command here doesn’t work exactly. as it create the field “visibility” is all feed array, instead of only where the array contains the matching guid.I can’t find out how to edit a post. If i manage to i’ll compress my answers to one, but i’m still looking for the correct update request to update that part of the data.", "username": "Daniel_Assayag" }, { "code": " 'app.rss.$.array.$[array].fieldToUpdate\nlet aggregate = await UsersAppDB.update({\n \"userId\": Meteor.userId() , 'app.rss.url': url},\n {\n $set: {\n 'app.rss.$.feed.$[feed].visibility': !visibility\n } \n }, \n {\n arrayFilters: [{\n \"feed.guid._text\": guid,\n }]\n }\n)\n", "text": "Finally!\nHere was the answer. 
Without using aggregate, merge, unwind, replaceroot or anything.\nJust using this query operator that contains the [array] name in brackets preceding by the array name 'app.rss.$.array.$[array].fieldToUpdate", "username": "Daniel_Assayag" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Set new field in array of array
2022-05-05T00:50:23.184Z
Set new field in array of array
1,214
null
[ "c-driver" ]
[ { "code": "mkdir -p /deps \\\n && cd /deps \\\n && apt-get install -y libsasl2-dev wget \\\n && LATEST_RELEASE=\"https://api.github.com/repos/mongodb/mongo-c-driver/releases/latest\" \\\n && TAG=$(curl --silent $LATEST_RELEASE | grep -Po '\"tag_name\": \"\\K.*(?=\")') \\\n && wget \"https://github.com/mongodb/mongo-c-driver/releases/download/$TAG/mongo-c-driver-$TAG.tar.gz\" \\\n && tar xzf \"mongo-c-driver-$TAG.tar.gz\" \\\n && cd \"mongo-c-driver-$TAG\" \\\n && mkdir cmake-build && cd cmake-build \\\n && cmake \\\n -DCMAKE_BUILD_TYPE=Release \\\n -DENABLE_ICU=OFF \\\n -DOPENSSL_USE_STATIC_LIBS=TRUE \\\n -DCMAKE_PREFIX_PATH=\"/opt/openssl\" \\\n -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF .. \\\n && make -j$(nproc) install\n#12 11.26 [ 7%] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-writer.c.o\n#12 11.28 /deps/mongo-c-driver-1.21.0/src/libbson/src/bson/bson-iter.c: In function 'bson_iter_visit_all':\n#12 11.28 [ 7%] Building C object src/libbson/CMakeFiles/bson_shared.dir/src/jsonsl/jsonsl.c.o\n#12 11.28 /deps/mongo-c-driver-1.21.0/src/libbson/src/bson/bson-iter.c:2144:11: error: 'key' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n#12 11.28 bson_utf8_validate (key, strlen (key), false)) {\n#12 11.28 ^\n#12 11.28 /deps/mongo-c-driver-1.21.0/src/libbson/src/bson/bson-iter.c:2145:10: error: 'bson_type' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n#12 11.28 visitor->visit_unsupported_type (iter, key, bson_type, data);\n#12 11.28 ^\n#12 11.29 [ 8%] Building C object src/libbson/CMakeFiles/bson_static.dir/src/bson/bson-utf8.c.o\n#12 11.31 /deps/mongo-c-driver-1.21.0/src/libbson/src/bson/bson-iter.c: In function 'bson_iter_visit_all':\n#12 11.31 /deps/mongo-c-driver-1.21.0/src/libbson/src/bson/bson-iter.c:2144:11: error: 'key' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n#12 11.31 bson_utf8_validate (key, strlen (key), false)) {\n#12 11.31 ^\n#12 11.31 /deps/mongo-c-driver-1.21.0/src/libbson/src/bson/bson-iter.c:2145:10: error: 'bson_type' may be used uninitialized in this function [-Werror=maybe-uninitialized]\n#12 11.31 visitor->visit_unsupported_type (iter, key, bson_type, data);\n#12 11.31 ^\n#12 11.31 [ 8%] Building C object src/libbson/CMakeFiles/bson_shared.dir/__/common/common-b64.c.o\n#12 11.33 [ 9%] Building C object src/libbson/CMakeFiles/bson_shared.dir/__/common/common-md5.c.o\n#12 11.35 [ 9%] Building C object src/libbson/CMakeFiles/bson_static.dir/src/bson/bson-value.c.o\n#12 11.38 [ 9%] Building C object src/libbson/CMakeFiles/bson_shared.dir/__/common/common-thread.c.o\n#12 11.42 [ 9%] Building C object src/libbson/CMakeFiles/bson_static.dir/src/bson/bson-version-functions.c.o\n#12 11.44 [ 9%] Building C object src/libbson/CMakeFiles/bson_static.dir/src/bson/bson-writer.c.o\n#12 11.45 cc1: some warnings being treated as errors\n#12 11.46 [ 9%] Building C object src/libbson/CMakeFiles/bson_static.dir/src/jsonsl/jsonsl.c.o\n#12 11.46 src/libbson/CMakeFiles/bson_shared.dir/build.make:183: recipe for target 'src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-iter.c.o' failed\n#12 11.46 make[2]: *** [src/libbson/CMakeFiles/bson_shared.dir/src/bson/bson-iter.c.o] Error 1\n#12 11.46 make[2]: *** Waiting for unfinished jobs....\n#12 11.46 [ 10%] Building C object src/libbson/CMakeFiles/bson_static.dir/__/common/common-b64.c.o\n#12 11.48 cc1: some warnings being treated as errors\n#12 11.48 src/libbson/CMakeFiles/bson_static.dir/build.make:183: recipe for target 
'src/libbson/CMakeFiles/bson_static.dir/src/bson/bson-iter.c.o' failed\n#12 11.48 make[2]: *** [src/libbson/CMakeFiles/bson_static.dir/src/bson/bson-iter.c.o] Error 1\n#12 11.48 make[2]: *** Waiting for unfinished jobs....\n#12 11.48 [ 10%] Building C object src/libbson/CMakeFiles/bson_static.dir/__/common/common-md5.c.o\n#12 11.74 CMakeFiles/Makefile2:1677: recipe for target 'src/libbson/CMakeFiles/bson_shared.dir/all' failed\n#12 11.74 make[1]: *** [src/libbson/CMakeFiles/bson_shared.dir/all] Error 2\n#12 11.74 make[1]: *** Waiting for unfinished jobs....\n#12 11.77 make[1]: *** [src/libbson/CMakeFiles/bson_static.dir/all] Error 2\n#12 11.77 CMakeFiles/Makefile2:1569: recipe for target 'src/libbson/CMakeFiles/bson_static.dir/all' failed\n#12 11.77 make: *** [all] Error 2\n#12 11.77 Makefile:179: recipe for target 'all' failed\n", "text": "Hello, I am trying to build the C Driver for MongoDB version 1.21.1 and I am getting an error.\nI’m building this driver in Docker with Ubuntu 16.04, CMake 3.17, and GNU 5.4.0.\nMy building commands:And starting from the last release (I guess), I’m getting this error:Any help would be appreciated!", "username": "Daria_Kolodkina" }, { "code": "-- Maintainer flags:", "text": "Can you provide the output of the first part of the CMake command? Specifically, the line that begins -- Maintainer flags:.", "username": "Roberto_Sanchez" } ]
C Driver build error (v1.21.0)
2022-02-03T16:20:18.436Z
C Driver build error (v1.21.0)
3,665
null
[ "c-driver" ]
[ { "code": "", "text": "My process got a segmentfault, and the back trace : 0x00007f3a8405c72e in mongoc_stream_destroy () from /lib/x86_64-linux-gnu/libmongoc-1.0.so.0.\nHow can I get the debug package?\nVersion: libmongoc-1.0-0/stable,now 1.17.6-1 amd64\nOS: debian-11", "username": "kevin_wanglong" }, { "code": "", "text": "You need to install the debugging symbols using one of the methods described in the HowToGetABacktrace Debian wiki article.", "username": "Roberto_Sanchez" } ]
How to get the symbol package?
2022-04-19T08:05:59.576Z
How to get the symbol package?
2,532
null
[ "server" ]
[ { "code": "", "text": "The config sets the dbpath to /mnt/data500/mongo/mongo and enables forking.\nThe service setup sets the User to mongod\nVia systemctl, i.e. as mongod, I get the following error:2022-05-05T12:02:08.869+0000 I STORAGE [initandlisten] exception in initAndListen: Location28596: Unable to determine status of lock file in the data directory /mnt/data500/mongo/mongo: boost::filesystem::status: Permission denied: “/mnt/data500/mongo/mongo/mongod.lock”, terminatingThe directory is owned by mongod:mongod and rwx for owner.\nIf I set it to 777, I get other errors:No TransportLayer configured during NetworkInterface startup\nERROR: Cannot write pid file to /var/run/mongodb/mongod.pid: Permission deniedStarting as root succeeds.", "username": "Marc_Girod" }, { "code": "", "text": "Never do:Starting as root succeeds.All directories must be writable by user mongodb, including /var/run/mongodb.No TransportLayer configured during NetworkInterface startupIf you are trying to listen using an interface on a VPN, you have to ensure that the VPN interface is up and running before mongod is started. You will need to add dependency in you mongod systemctl service file to make sure the network interface is started first.", "username": "steevej" }, { "code": "", "text": "Thanks for your reply. All the directories are writable by user mongod.\nYet under systemctl, the user seems not to be mongod!?[root@ip-172-31-34-89 mongo]# find . ! -user mongod\n[root@ip-172-31-34-89 mongo]# find . ! -group mongod\n[root@ip-172-31-34-89 mongo]# find . -type f ! -perm -u+rw\n[root@ip-172-31-34-89 mongo]# find . -type d ! -perm -u+rwx\n[root@ip-172-31-34-89 mongo]# ls -la /var/run/mongodb/ /var/log/mongodb/\n/var/log/mongodb/:\ntotal 76\ndrwxr-xr-x. 2 mongod mongod 24 May 5 11:38 .\ndrwxr-xr-x. 11 root root 4096 May 5 11:38 …\n-rw-r-----. 1 mongod mongod 70093 May 5 16:00 mongod.log/var/run/mongodb/:\ntotal 0\ndrwxr-xr-x. 2 mongod mongod 40 May 5 16:00 .\ndrwxr-xr-x. 
26 root root 800 May 5 12:40 …\n[root@ip-172-31-34-89 mongo]# sudo mongod /usr/bin/mongod -f /etc/mongod.conf\nabout to fork child process, waiting until server is ready for connections.\nforked process: 78585\nERROR: child process failed, exited with error number 1\nTo see additional information in this output, start without the “–fork” option.\n[root@ip-172-31-34-89 mongo]# date\nFri May 6 08:12:35 UTC 2022\n[root@ip-172-31-34-89 mongo]# tail -3 /var/log/mongodb/mongod.log\n2022-05-06T08:12:07.897+0000 I CONTROL [main] ***** SERVER RESTARTED *****\n2022-05-06T08:12:07.899+0000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols ‘none’\n2022-05-06T08:12:07.911+0000 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n[root@ip-172-31-34-89 mongo]# egrep -c ^2022-05-06T08:1 /var/log/mongodb/mongod.log\n3\n[root@ip-172-31-34-89 mongo]# systemctl start mongod.service\nJob for mongod.service failed because the control process exited with error code.\nSee “systemctl status mongod.service” and “journalctl -xe” for details.\n[root@ip-172-31-34-89 mongo]# date\nFri May 6 08:15:32 UTC 2022\n[root@ip-172-31-34-89 mongo]# egrep -c ^2022-05-06T08:15 /var/log/mongodb/mongod.log\n29\n[root@ip-172-31-34-89 mongo]# egrep -i ‘^2022-05-06T08:15.*permission’ /var/log/mongodb/mongod.log\n2022-05-06T08:15:25.315+0000 I STORAGE [initandlisten] exception in initAndListen: Location28596: Unable to determine status of lock file in the data directory /mnt/data500/mongo/mongo: boost::filesystem::status: Permission denied: “/mnt/data500/mongo/mongo/mongod.lock”, terminating\n[root@ip-172-31-34-89 mongo]# egrep ^User /usr/lib/systemd/system/mongod.service\nUser=mongodI indeed intend to expose this service through a VPN, but I didn’t do it yet. I was not expecting mongodb to barf about this!?", "username": "Marc_Girod" }, { "code": "", "text": "Found the solution to my problem: I needed to configure SELinux:", "username": "Marc_Girod" } ]
Systemctl start fails, but /bin/mongod -f cfg as root starts -- 4.2 on CentOS 8
2022-05-05T16:11:24.972Z
Systemctl start fails, but /bin/mongod -f cfg as root starts – 4.2 on CentOS 8
3,773
null
[ "ops-manager" ]
[ { "code": "", "text": "Hosted ops manager in amazon linux 2 by following the documentation and this is the error I face while trying to create a replica set", "username": "Venkata_Vamsi" }, { "code": "", "text": "Did anyone got this resolved? We have the same issue in local mode of install", "username": "Kalpana_Srinivasa_Murthy" }, { "code": "", "text": "This is likely because you have set the automation.versions.source=local.\nWhen running on this setting, you have the responsibility to populate the /opt/mongodb/mms/mongodb-releases manually.\nI would suggest to run it on remote mode first, and observe what is needed, then if you need, switch it back to local.", "username": "Daniel_Baktiar1" }, { "code": "", "text": "Hi Daniel, assuming there is no remote option, since you are behind three firewalls with no internet connection, how to populate that directory - what ist the structure and how do I tell the ops-manager that files are there?", "username": "Nils_Hildebrand" } ]
No MongoDB versions have been made available for use in your deployment. At least one MongoDB version must be made available before any changes to your Deployment can be made using Automation
2021-04-13T11:43:13.748Z
No MongoDB versions have been made available for use in your deployment. At least one MongoDB version must be made available before any changes to your Deployment can be made using Automation
5,025
null
[ "aggregation" ]
[ { "code": "{ \n \"_id\" : ObjectId(\"62684847e9594c65cbaa5d85\"), \n \"agentId\" : NumberInt(1), \n \"agentName\" : \"Yardi Gaondi\", \n \"policyList\" : [\n {\n \"receivedDate\" : ISODate(\"2022-03-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(1), \n \"policyStatusDate\" : ISODate(\"2022-02-20T04:46:15.000+0000\")\n }, \n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-01-19T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }, \n {\n \"receivedDate\" : ISODate(\"2022-03-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(3), \n \"policyStatusDate\" : ISODate(\"2022-01-16T04:46:15.000+0000\")\n }\n ]\n }\n ]\n }, \n {\n \"receivedDate\" : ISODate(\"2022-02-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(1), \n \"policyStatusDate\" : ISODate(\"2022-01-20T04:46:15.000+0000\")\n }, \n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-01-19T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }\n ]\n}\n{ \n \"_id\" : ObjectId(\"62684847e9594c65cbaa5d86\"), \n \"agentId\" : NumberInt(2), \n \"agentName\" : \"Michelle Hazandi\", \n \"policyList\" : [\n {\n \"receivedDate\" : ISODate(\"2022-04-10T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-04-09T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }, \n {\n \"receivedDate\" : ISODate(\"2022-03-10T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-03-09T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }\n ]\n}\n{ \n \"_id\" : ObjectId(\"62684847e9594c65cbaa5d85\"), \n \"agentId\" : NumberInt(1), \n \"agentName\" : \"Yardi Gaondi\", \n \"policyList\" : [\n {\n \"receivedDate\" : ISODate(\"2022-03-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(1), \n \"policyStatusDate\" : ISODate(\"2022-02-20T04:46:15.000+0000\")\n }, \n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-01-19T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }, \n {\n \"receivedDate\" : ISODate(\"2022-03-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(3), \n \"policyStatusDate\" : ISODate(\"2022-01-16T04:46:15.000+0000\")\n }\n ]\n }\n ]\n }, \n {\n \"receivedDate\" : ISODate(\"2022-02-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(1), \n \"policyStatusDate\" : ISODate(\"2022-01-20T04:46:15.000+0000\")\n }, \n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-01-19T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }\n ]\n}\n{ \n \"_id\" : ObjectId(\"62684847e9594c65cbaa5d86\"), \n \"agentId\" : NumberInt(2), \n \"agentName\" : \"Michelle Hazandi\", \n \"policyList\" : [\n {\n \"receivedDate\" : ISODate(\"2022-04-10T04:46:15.000+0000\"), \n 
\"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-04-09T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }, \n {\n \"receivedDate\" : ISODate(\"2022-03-10T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-03-09T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }\n ]\n}\ndb.getCollection(\"offers2\").aggregate([\n{\n $project: {\n \"agentId\": \"$agentId\",\n \"agentName\": \"$agentName\",\n \"policyList\": {\n $filter: {\n input: \"$policyList\",\n as: \"item\",\n cond: {\n \"$or\": [\n {\n \"$and\": [\n { \"$gte\": [ \"$item.receivedDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$item.receivedDate\", ISODate(\"2022-03-01\") ] }\n ]\n }, \n {\n $and\": [\n { \"$gte\": [ \"$item.policyStatusDetail.policiesArray.policyStatusDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$item.policyStatusDetail.policiesArray.policyStatusDate\", ISODate(\"2022-03-01\") ] }\n ]\n }\n ]\n }\n }\n }\n }\n},\n{\n $project: {\n \"agentId\": \"$agentId\",\n \"agentName\": \"$agentName\",\n \"policyList\": \"$policyList\",\n \"numPoliciesPerDate\": {\n $cond: { \n if: {$isArray: \"$policyList\"}, then: {$size: \"$policyList\"}, else: \"0\"\n }\n }\n }\n },\n {\n $match: {\n \"numPoliciesPerDate\": {$gte: 1}\n }\n }\n]) \n{ \n \"_id\" : ObjectId(\"62684847e9594c65cbaa5d85\"), \n \"agentId\" : NumberInt(1), \n \"agentName\" : \"Yardi Gaondi\", \n \"policyList\" : [\n {\n \"receivedDate\" : ISODate(\"2022-02-23T04:46:15.000+0000\"), \n \"policyStatusDetail\" : [\n {\n \"policyStsCode\" : NumberInt(7), \n \"policiesArray\" : [\n {\n \"policyDetailedCode\" : NumberInt(1), \n \"policyStatusDate\" : ISODate(\"2022-01-20T04:46:15.000+0000\")\n }, \n {\n \"policyDetailedCode\" : NumberInt(2), \n \"policyStatusDate\" : ISODate(\"2022-01-19T05:46:15.000+0000\")\n }\n ]\n }\n ]\n }\n ], \n \"numPoliciesPerDate\" : NumberInt(1)\n}\n $and\": [\n { \"$gte\": [ \"$item.policyStatusDetail.policiesArray.policyStatusDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$item.policyStatusDetail.policiesArray.policyStatusDate\", ISODate(\"2022-03-01\") ] }\n ]\n", "text": "I appreciate any help in such case. Collection in MongoDB (now only 2 documents for demonstration purpose):I appreciate any help in such case. Collection in MongoDB (now only 2 documents for demonstration purpose):So collection consists of 2 documents, in each document there is a field “policyList” which is an array of objects. In first document policyList contains 3 objects, in second document only two. So I have to filter documents in this collection in such way: 1) I need to keep in the “policyList” array only those objects that match to such condition: one of the fields are in a certain time interval, which means that must be a match in at least one of the fields(first field is “receivedDate” - which is located in the array at the first level of nesting - “policyList” or second field “policyStatusDate” - which is located in the array at the third level of nesting - “policiesArray”, and if there is a match in one of abovementioned fields we return from the “policyList” full object, that means we cannot throw away from “policiesArray” any object). 
One match is enough, for example if I want to see documents from 01/02/2022 to 01/03/2022 I expect to see in the first document in the “policyList” array only first and third object, because first object matches by “policyStatusDate” - 20/02/2022 (match in one of the object in “policiesArray” is enough) and third object matches by “receivedDate” - 23/02/2022) and second object in first document I don’t expect to see because both dates in this document are not in the period from 01/02/2022 to 01/03/2022; 2) if there are no any matches among objects in “policyList”, that means “policyList” must be empty after filtering, and in such case we don’t need to return this document. For example if I request for a documents from 01/02/2022 to 01/03/2022 I’m not expecting to see second document, because no “policyStatusDate” and no “receivedDate” are in requested time interval.My aggregation request:After running this query I expect to receive first document with first object and third object in “policyList” array, but I received only third object (there is matching by “receivedDate” in this object ). Result:So it seems that conditiondoesn’t work correct. I think it’s because it may be impossible to use dot notation when we work with nested arrays. So maybe somebody could help me to fix this aggregation query so that the requirements I described at the beginning will be fullfiled and in our case first object from “policyList” in first document will also be returned?", "username": "Yakov_Markovych" }, { "code": "", "text": "Hi @Yakov_Markovych\nWelcome to the community!!Can you please help me in understanding the question correctly. As per my understanding, you are trying to return the policy having dates in between x and y irrespective of the level the dates are present in the nested array.If I understand the question correctly, the aggregation query looks a bit difficult to me and might contain corner cases. This would eventually lead to difficult to maintain the query and effect the performance of the same. In addition, the query will not be able to use indexes, meaning that if you have a large data, the query performance will be slow.If the query will be uses frequently, you might want to reconsider changing the schema design. 
Or you might want to refactor into separate collection\nhttps://www.mongodb.com/docs/manual/tutorial/model-referenced-one-to-many-relationships-between-documents/ and also the blog post Schema Design Best Practices might be of use to you as well.Feel free to post questions if you have any.Thanks\nAasawari", "username": "Aasawari" }, { "code": " $and\": [\n { \"$gte\": [ \"$item.policyStatusDetail.policiesArray.policyStatusDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$item.policyStatusDetail.policiesArray.policyStatusDate\", ISODate(\"2022-03-01\") ] }\n ]\n{\n \"$and\": [\n { \"$gte\": [ \"$item.receivedDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$item.receivedDate\", ISODate(\"2022-03-01\") ] }\n ]\n }\n", "text": "So it seems that conditiondoesn’t work correct.Not only this one but this one tooIf you look at $filter documentation you will see that you need to use $$item rather than $item.", "username": "steevej" }, { "code": "db.collection.aggregate(\n { \"$set\": {\n \"policyList\": {\n \"$filter\": {\n \"input\": \"$policyList\",\n \"as\": \"policy\",\n \"cond\": {\n \"$or\": [\n { \"$and\": [\n { \"$gte\": [ \"$$policy.receivedDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$$policy.receivedDate\", ISODate(\"2022-03-01\") ] }\n ]\n },\n { \"$reduce\": {\n \"input\": \"$$policy.policyStatusDetail\",\n \"initialValue\": false,\n \"in\": {\n \"$or\": [\n \"$$value\",\n { \"$reduce\": {\n \"input\": \"$$this.policiesArray\",\n \"initialValue\": false,\n \"in\": {\n \"$or\": [\n \"$$value\",\n { \"$and\": [\n { \"$gte\": [ \"$$this.policyStatusDate\", ISODate(\"2022-02-01\") ] },\n { \"$lte\": [ \"$$this.policyStatusDate\", ISODate(\"2022-03-01\") ] }\n ]\n }\n ]\n }\n }\n }\n ]\n }\n }\n }\n ]\n }\n }\n }\n }\n },\n { \"$match\": { \"$expr\": { \"$gt\": [ { \"$size\": \"$policyList\" }, 0 ] } } }\n)\n", "text": "Hi,\nIt’s real case and we cannot change schema, and we are not talking about indexes. at all\nThanks to rickhg12hs from stackoverflow I already get from him an answer that works well.So solution is:", "username": "Yakov_Markovych" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Filtering data in collection containing documents with 3 level nested arrays depending on the values in the first and last nested array mongo
2022-04-29T09:57:07.586Z
Filtering data in collection containing documents with 3 level nested arrays depending on the values in the first and last nested array mongo
7,256
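An equivalent way to write the accepted answer above, for anyone who finds the double $reduce hard to follow: flatten the inner arrays first, then test the dates with $anyElementTrue. A sketch using the thread's collection and field names, reasoned through only against the sample documents, so treat it as a starting point rather than the thread's canonical answer:

```javascript
db.offers2.aggregate([
  { $set: {
      policyList: {
        $filter: {
          input: "$policyList",
          as: "policy",
          cond: {
            $or: [
              { $and: [
                  { $gte: ["$$policy.receivedDate", ISODate("2022-02-01")] },
                  { $lte: ["$$policy.receivedDate", ISODate("2022-03-01")] }
              ] },
              // true if ANY nested policyStatusDate falls in the window
              { $anyElementTrue: [ {
                  $map: {
                    // flatten [[...],[...]] -> [...] before checking dates
                    input: { $reduce: {
                      input: "$$policy.policyStatusDetail.policiesArray",
                      initialValue: [],
                      in: { $concatArrays: ["$$value", "$$this"] }
                    } },
                    as: "p",
                    in: { $and: [
                      { $gte: ["$$p.policyStatusDate", ISODate("2022-02-01")] },
                      { $lte: ["$$p.policyStatusDate", ISODate("2022-03-01")] }
                    ] }
                  }
              } ] }
            ]
          }
        }
      }
  } },
  { $match: { $expr: { $gt: [{ $size: "$policyList" }, 0] } } }
])
```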
null
[ "python", "atlas-cluster" ]
[ { "code": "", "text": "I get this error\npymongo.errors.ServerSelectionTimeoutError: ****-shard-00-01.dlu0y.mongodb.net:27017: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: Hostname mismatch, certificate …\"\nIt worked perfectly fine before I upgraded my subscription. Now for some reason it does not work", "username": "amit_tauman" }, { "code": "", "text": "I think this stackoverflow article outlines the fix. Basically you have to run a script to update your certificates.https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error", "username": "Joe_Drumgoole" } ]
Error after upgrading my MongoDB subscription
2022-05-05T18:16:19.225Z
Error after upgrading my MongoDB subscription
1,582
null
[ "node-js", "replication", "mongoose-odm" ]
[ { "code": "const { MongoClient } = require(\"mongodb\");\n\nlet tmMapsDB = null;\nlet staffDB = null;\nmodule.exports = async () => {\n\n // Create a new MongoClient\n\n MongoClient.connect('mongodb://localhost:27017', function (err, client) {\n if (err) throw err;\n\n tmMapsDB = client.db('tmmaps')\n staffDB = client.db('staff')\n \n });\n}\n\nmodule.exports.returnDB = (name) => {\n if (name === 'TMMAPS') {\n return tmMapsDB\n }\n if (name === 'STAFF') {\n return staffDB\n }\n return null\n}```", "text": "Hi,Looking to move away from Mongoose to using the native driver directly in NodeJS.We currently have a per database authentication - so each database has a different user / password - in Mongoose we connect each model using a different connection string - where the username and password is passed - along with the database.Looking at the native driver example - the MongoClient is created once per replica set - so a cached client - but I am unsure then how to connect each database using a different username and password.Are there any examples handy?My current code for a test local instance of mongo looks like this:", "username": "Mark_Barton" }, { "code": "", "text": "I should add some context - we typically have a single Node application which is responsible for a specific task - in this case the app is moving data from an old system (Domino) to multiple Mongo Daatabases on a scheduled basis.It seems to me - either we have 1 application user which can access all databases or 1 node application per Database.It seems odd that mongoose can manage this but not the native driver? Unless under the covers Mongoose is actually creating a lot of connections.", "username": "Mark_Barton" } ]
Multiple Databases with different authentication and a single MongoClient
2022-05-06T10:18:59.464Z
Multiple Databases with different authentication and a single MongoClient
6,225
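For anyone with the same question: the native driver authenticates one credential per MongoClient, so per-database users generally mean one client per credential, with authSource pointing at the database where each user is defined. A minimal sketch of the helper above along those lines (usernames, passwords, and hostnames are placeholders):

```javascript
const { MongoClient } = require("mongodb");

// One client per credential; authSource names the database each user is defined on.
const clients = {
  TMMAPS: new MongoClient("mongodb://tmmapsUser:tmmapsPass@localhost:27017/?authSource=tmmaps"),
  STAFF: new MongoClient("mongodb://staffUser:staffPass@localhost:27017/?authSource=staff"),
};

// Connect all clients up front, mirroring the original module's shape.
module.exports = async () => {
  await Promise.all(Object.values(clients).map((client) => client.connect()));
};

module.exports.returnDB = (name) => {
  if (name === "TMMAPS") return clients.TMMAPS.db("tmmaps");
  if (name === "STAFF") return clients.STAFF.db("staff");
  return null;
};
```

This is likely what Mongoose does under the covers with per-model connections: each connection string gets its own pool.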
null
[]
[ { "code": "", "text": "Hi, beginner with Mongodb on windows10 here, not sure whether it’s the problem of GitBash or mongo shell. There are several apparent problems.", "username": "Luke_He" }, { "code": "", "text": "Hi @Luke_He ,Could be the way Git Bash is rendering your Mongo Shell. Could you provide a Screenshot of your Console?I would recommend using powershell or a cmd emulator like cmder (https://cmder.net/) . Worked for me and I havent dealt with any “Formatting” Errors so far. Hope this helps Greetings,\nNiklas", "username": "NiklasB" }, { "code": "", "text": "Hi Niklas Bleicher,Yes, the issues don’t happen on powershell. However, would you recommend running mongoose commands on bash while running MongoDB on powershell? Since I am more familiar with bash as a beginner dev. Might I run into some obvious errors working this way?Kind regards,\nLuke", "username": "Luke_He" }, { "code": "", "text": "\nbash759×534 18 KB\n\nHere is a screenshot", "username": "Luke_He" }, { "code": "", "text": "If you look at the Message given in your Screenshot it seems like you are connection to your Mongo Instance with “mongo <your_instance>”. Try using “mongosh <your_instance>” instead, from your bash console.P.S: If you are running your Mongo locally you can just use the “mongosh” Command without giving an Instance. It will automatically connect to your local InstanceSee here for more Information: https://www.mongodb.com/docs/mongodb-shell/\nHope this helps! Greetings,\nNiklas", "username": "NiklasB" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo shell doesn't work properly
2022-05-04T18:45:21.094Z
Mongo shell doesn’t work properly
3,880
null
[ "android", "kotlin" ]
[ { "code": "Realm.init(context)", "text": "By default when we initialize our app with realmSync, it uses the default database name (android/ios app name). But in the cloud we might have several databases and if someone wants to change the database how would he do that?We know only Realm.init(context) for android", "username": "Neeraj_42037" }, { "code": "", "text": "Realm only has the concept of a TableName. Each table name has a mapping to a single MongoDB Namespace (Database Name and Collection Name). The “Default Database” name is really only used when you are defining new TableNames in your SDK because we need to figure out which “database” to put it in. If you want some of your tables to be in databaseA and some to be in databaseB, then you need to define that in the Realm UI. In the “Schemas” section you can create a schema and specify the TableName->(DatabaseName, CollectionName) explicitly", "username": "Tyler_Kaye" }, { "code": "", "text": "Hi, I may have mixed up what you are asking. Are you referring to the mapping in the cloud of the MongoDB database name / collection name, or the file that the local realm database is stored to on the device?", "username": "Tyler_Kaye" }, { "code": "Realm.init(context)", "text": "G’Day, @Neeraj_42037,Welcome to MongoDB Community forums We know only Realm.init(context) for androidThis code snippet is initializing the Realm SDK in your android application. This is not defining any database yet. You can further set up the app configuration using API.There is mostly 1 database per Realm Application on the cloud and multiple public/private realms as part of that and is segregated based on the partition strategy you define. You can have multiple databases but they are tricky to use when you have partition sync, as your partition key should be the same in all the databases.Could you clarify more what you are trying to do and where you are stuck? Are you using Sync or only the local database? If you are using sync, do you have development mode on?I look forward to your response.Cheers, \nHenna", "username": "henna.s" }, { "code": "", "text": "Yes! I have 2 databases running on mongodb cloud as shown in fig\n\nScreenshot (6)1012×644 32.2 KB\n\nThe first database is Employee and the 2nd database is RChat.Imaging if i have created 2 different application named App1 and App2So i just want my first application “App1” to use Emplyoee database and the 2nd application “App2” to use RChat.How can i write code for specifying the database for android application [Java SDK]?", "username": "Neeraj_42037" }, { "code": "", "text": "@Neeraj_42037: This configuration is done/managed by the Sync tab under your Realm App.\nScreenshot 2022-05-05 at 11.42.261682×1158 189 KB\n", "username": "Mohit_Sharma" }, { "code": "CodeCamp", "text": "This option is only enabled for development mode. However i have defined that too but still app is automatically creating the database with app name and using it.\nScreenshot (7)836×882 76.2 KB\n\nScreenshot (8)590×694 11.9 KB\n\nIts still using “CodeCamp” database", "username": "Neeraj_42037" }, { "code": "", "text": "@Neeraj_42037: I am not sure what happened. I have just validated the same use case and it’s working fine on my side.Usecase: Validate if the data is getting pushed to the right database or not. Also after the DB name is changed from the sync tab.", "username": "Mohit_Sharma" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
How to specify specific database to sync from Realm Android Kotlin
2022-05-04T19:55:12.110Z
How to specify specific database to sync from Realm Android Kotlin
3,672
null
[ "node-js", "mongoose-odm", "mdbw22-hackathon", "react-js" ]
[ { "code": "", "text": "I have 1+ year of experience creating fullstack web applications, but I’m more frontend focused.React. JavaScript. Typescript. Nodejs. MongoDBGMT + 1", "username": "Fiewor_John" }, { "code": "", "text": "Welcome!! Some great tech stack experience there! I’m sure frontend will be in great demand", "username": "Shane_McAllister" }, { "code": "", "text": "Great Tech skills bro! Same with you but im still new in the MERN stack field. Looking forward on learning more with all the people here.", "username": "MARLON_JR_GACRAMA" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Fiewor_John is looking for a project!
2022-04-13T14:23:35.636Z
Fiewor_John is looking for a project!
2,983
null
[ "java", "indexes" ]
[ { "code": "", "text": "Question 1: I have java threads which are working on mongo DB . when Thread A is inserting records on Collection A and after a while Thread B creating indexes on Collection B, mongo is not processing Thread A insertion commands. How can we make sure it will work parlerly .Question 2: I have data ready in relational data , just want to synch that to mongo. in this case Indexing after insertion ? or insertion after indexing ? which one is better .", "username": "Puvvada_Divya" }, { "code": "", "text": "I have java threads which are working on mongo DB . when Thread A is inserting records on Collection A and after a while Thread B creating indexes on Collection B, mongo is not processing Thread A insertion commands.Most likely your setup is struggling from insufficient hardware. Or you are using an old mongod, but even then I do not think that creating index on collection B will affect insert on collection A. So insufficient hardware, thus excessive context switch.Indexing after insertion ? or insertion after indexing ? which one is betterNone is better. The total amount of work is similar. It is a tradeoff, it depends on your use-case. Creating the index before slows down insertion but your data is ready for quick queries when all insertions are done. Creating the index after speeds up insertion, but you have to wait after index creation before you have speedy queries.", "username": "steevej" }, { "code": "mongomongo", "text": "Thanks Steve for your reply . Still I have doubt in Question 1:\nSorry Steve, I do not think that I am suffering from hardware problem , because we are getting this problem in production loud. Mongo is not issuing commands Collection B while indexing is going on for Collection A . I can similar kind of behavior from this page,\nIndex Build Operations on a Populated Collection — MongoDB Manual , can you please explain me below statement from this page .Any operation that requires a read or write lock on all databases (e.g. listDatabases ) will wait for the foreground index build to complete.Background indexing operations run in the background so that other database operations can run while creating the index. However, the [ mongo ]shell session or connection where you are creating the index will block until the index build is complete. To continue issuing commands to the database, open another connection or [ mongo ] instance.", "username": "Puvvada_Divya" }, { "code": "By default, creating an index on a populated collection blocks all other operations on a database.Starting in MongoDB 4.2, index builds use an optimized build process that holds an exclusive lock on the collection at the beginning and end of the index build. The rest of the build process yields to interleaving read and write operations.", "text": "Hi @Puvvada_Divya welcome to the community!Mongo is not issuing commands Collection B while indexing is going on for Collection AThe behaviour you’re seeing was actually mentioned in the page you linked, specifically: By default, creating an index on a populated collection blocks all other operations on a database. So this is in line with what you experienced.However I noticed that you’re linking to the documentation page for the 4.0 series, which is not supported anymore since April 2022. 
Are you using MongoDB 4.0 series by any chance?This behaviour was improved in MongoDB 4.2: Starting in MongoDB 4.2, index builds use an optimized build process that holds an exclusive lock on the collection at the beginning and end of the index build. The rest of the build process yields to interleaving read and write operations.It sounds like you’re trying to create a new MongoDB deployment. If this is a fresh deployment, I would suggest you use the latest MongoDB version (5.0.8 currently) instead of using an unsupported version. You will also not see this issue if you’re using the latest version.Best regards\nKevin", "username": "kevinadi" } ]
Indexing while inserting
2022-05-05T11:30:31.153Z
Indexing while inserting
5,078
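One concrete takeaway from this thread, as a mongosh sketch (the collection and field names are placeholders): on the 4.0 series discussed above, a build on a populated collection can be made non-blocking per index.

```javascript
// On MongoDB 4.0 and earlier, background: true lets other operations proceed
// while the index builds. From 4.2 onward the option is ignored, because the
// optimized build process described above is the default.
db.collectionA.createIndex({ someField: 1 }, { background: true });
```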
https://www.mongodb.com/…e9cadcf1bddb.png
[ "mdbw22-hackathon" ]
[ { "code": "Staff Developer AdvocateSenior Developer Advocate", "text": "In this session, Staff Developer Advocate Nic Raboy shares the progress of his News Browser Web App that he is building alongside all our hackathon participants.We will be running these sessions each Friday during the hackathon where we will build features onto the WebApp sample app. All repos will be shared, so you can follow along too.We also use this Friday session to share team and individual progress - so if there’s anything you want to share on camera about your own hackathon project and progress, with all the Hackathon viewers, please reply to this post and we’ll send you an invite link. Anybody sharing is in-line to get some cool swag!!Join us, it will be fun and you will learn too! What’s not to like!!We will be live on MongoDB Youtube and MongoDB TwitchStaff Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "This will be on MongoDB Youtube and MongoDB Twitchor you can watch here below (so you don’t even have to leave the Hackathon Forums!! But you do miss out on the commenting and the chat!)", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon Live Coding and Fun Hack Friday! Week 4
2022-05-05T16:29:35.950Z
Hackathon Live Coding and Fun Hack Friday! Week 4
2,879
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "In this session, we will cover working with the Hackathon GDELT Data, we show you how to set up and use Geofencing and then to trigger notifications when an event occurs within the geofence.We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "This was a good one!! If you missed it, have no fear, you can re-watch below", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon USA/EMEA Session 2 - GDELT Data - Geofencing & Creating notifications
2022-05-05T15:43:39.898Z
Hackathon USA/EMEA Session 2 - GDELT Data - Geofencing & Creating notifications
2,798
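For those who could not attend, the geofencing part of the session boils down to a $geoWithin query. A rough sketch with an assumed collection name and a made-up polygon; the actual session works against the hackathon GDELT data:

```javascript
// A polygon "fence" (roughly a box over western Europe); GeoJSON rings are
// [longitude, latitude] pairs and must close back on the first point.
const fence = {
  type: "Polygon",
  coordinates: [[[-10, 35], [-10, 60], [30, 60], [30, 35], [-10, 35]]],
};

// Find events whose location falls inside the fence. $geoWithin works without
// an index, but a 2dsphere index on the field speeds it up considerably.
db.recentEvents.find({
  "Action.Location": { $geoWithin: { $geometry: fence } },
});
```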
https://www.mongodb.com/…91d95cc2c4af.png
[ "app-services-hosting" ]
[ { "code": "", "text": "Hello,How do I fix this problem? I did a small update to the front end code. Is their an issue with the build or is it on the realm server side?", "username": "SOCAL_First_Time_Homebuyer" }, { "code": "", "text": "Hello @SOCAL_First_Time_Homebuyer,It seems like you are missing “main.d3733b56.js” from your build and hence the source map file “main.d3733b56.map” is also missing.Tarun", "username": "Tarun_Gaur" }, { "code": "", "text": "I don’t have experience with builds other than “npm run build” with react.The two files above do get generated in the build but the error still occurs.I am thinking about deleting node_modules and package-lock and reinstalling the packages. I don’t know where else to start. Any suggestions?", "username": "SOCAL_First_Time_Homebuyer" }, { "code": "", "text": "@Tarun_GaurI just tried what I stated above and it did not work. I then tried a clean create-react-app and it worked.Could something in the source file cause this or what?", "username": "SOCAL_First_Time_Homebuyer" }, { "code": "", "text": "It was time for me to do a clean install on my computer so I did that and it’s now working properly if anyone runs into this.", "username": "SOCAL_First_Time_Homebuyer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Realm Hosting - 2 Files were not uploaded error
2022-05-04T13:30:56.770Z
Realm Hosting - 2 Files were not uploaded error
3,100
null
[ "swift", "android" ]
[ { "code": "", "text": "Hi All,\nFairly new to using Realm for production applications. We’ve just released an iOS/Android app public that’s using Realm as a local database. Looking at our user analytics, we are seeing an odd behavior where users have background activity at random, this only happens in iOS platforms however and not in Android. Background Tasks capability is not enabled in iOS so I wasn’t expecting any background activity to take place. Is Realm still running processes in the background in some manner and if possible can we block them? This question originated as we started receiving background crashes in a significant number of our users. Appreciate any tips or pointers, thank you!", "username": "Santiago_Arconada" }, { "code": "", "text": "Welcome to the forums!The question is a bit vague - it’s kind of like calling the auto mechanic and telling them your car goes ka-thunk, ka-thunk and asking what the issue is.Yes, Realm has background processes but pretty much all apps have some kind of background processes running - whether it be coded in your app, something the OS is attending to, network calls, or from a SDK.Is there any way you can narrow down what kind of processes they are and clarify if you only have Realm or do you perhaps have other SDK’s involved? Is this a purely local app or is there Sync’ing involved? Does you code use any kind of notifications?Even better would be isolating a chunk of code that’s misbehaving and posting it here so we can take a look.", "username": "Jay" }, { "code": "status = SecItemAdd(query as CFDictionary, nil)\n if status != errSecSuccess {\n fatalError(\"The encryption key could not be saved in the keychain\") // you should throw an error or return nil so this can fail gracefully\n }\n assert(status == errSecSuccess, \"Failed to insert the new key in the keychain\")\n", "text": "Hi Jay, appreciate the response and the feedback on the question.The chunk of code that’s triggering the crashes is this:Reading this https://www.mongodb.com/docs/realm-legacy/docs/swift/latest/#using-realm-with-background-app-refresh I’m trying to understand whether the Data Protection capability on iOS is not enough (or even relevant), and if Realm is routinely trying to encrypt data while the device is locked is causing it to fail at that Fatal Error. I’ll add that when the app opens all data is there, so I’m considering replacing that line with a FatalError and letting it fail silently like the comment suggests, but just want to understand the processes that are going on.To answer your specific questions:", "username": "Santiago_Arconada" }, { "code": "", "text": "We use Firebase and Pendo SDKs as wellWell. everything Firebase does is background tasks; literally everything. And that would also indicate it is not a purely local app as Firebase is an Online First database with some offline persistence for brief interruptions in connectivity.The code in your question doesn’t appear to be directly related to Realm, so I would guess the background activity you’re seeing is 100% Firebase (because that’s just how it works!)", "username": "Jay" } ]
Observed iOS background activity
2022-05-03T19:56:47.460Z
Observed iOS background activity
2,413
https://www.mongodb.com/…2daf84dd874.jpeg
[ "time-series", "data-api", "dach-virtual-community" ]
[ { "code": "Independent ConsultantMongoDB User Group Leader, Solutions ArchitectSolutions Architect, MongoDBSolutions Architect, MongoDB", "text": "Viel hat sich getan getan bei MongoDB rund um Release Cycles,Time Series Collections, Data API, und vieles mehr. Wir geben Euch einen Überblick aller Neuerungen in MongoDB 5.0.Unseren Schwerpunkt legen wir auf das Internet of Things (IoT). Am Beispiel eines Temperaturdifferenzreglers für solarbeheizte Wasserspeicher zeigen wir, wie Ihr mittels der neuen MongoDB Data API Sensordaten direkt von einem IoT Gerät in eine MongoDB Time Series Collection schreibt.Anhand der gespeicherten Sensordaten vertiefen wir das Thema Time Series, schauen uns an was unter der Haube geschieht, nutzen neue Funktionen wie z.B. die Window Functions, um die Daten auszuwerten und zeigen Euch Tricks und Kniffe.Den Code für das Arduino Projekt zur Ansteuerung eines ESP8622 Chips und der Verarbeitung der Daten stellen wir natürlich zur Verfügung.Wer mag, kann bei einem Quiz im Anschluss gleich sein Wissen prüfen – den Gewinnern winken tolle MongoDB Swag-Preise. Find all the event details and RSVP for the event here.Independent Consultant\nMongoDB User Group Leader, Solutions Architect–Solutions Architect, MongoDB–Solutions Architect, MongoDB", "username": "Harshit" }, { "code": "", "text": " Sound nice @michael_hoeller - I plan do join and hope I’ll be able to make it when the day comes!", "username": "hpgrahsl" } ]
DACH MUG: MongoDB 5.0 – Ready for IoT
2022-05-04T13:06:16.814Z
DACH MUG: MongoDB 5.0 – Ready for IoT
4,053
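The Data API piece of this session can be tried from any HTTP client. A sketch in Node 18+ (which has global fetch); the app id, API key, cluster, database, and collection names are all placeholders you would take from your own Atlas App Services setup:

```javascript
// Insert one sensor reading via the Atlas Data API; run inside an async function.
const response = await fetch(
  "https://data.mongodb-api.com/app/<app-id>/endpoint/data/v1/action/insertOne",
  {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": "<api-key>" },
    body: JSON.stringify({
      dataSource: "Cluster0",
      database: "iot",
      collection: "sensorReadings", // assumed to be a time series collection
      document: {
        ts: { $date: new Date().toISOString() }, // EJSON date
        sensorId: "tank-1",
        tempC: 21.5,
      },
    }),
  }
);
console.log(await response.json());
```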
null
[]
[ { "code": "", "text": "Recently we have cleaned some data from our DB, so we want to reduce storage space.However, the storage size remains as is. We asked the support team how to Reclaim/reduce disk space in MongoDB and were advised to use the compact command.We ran the command manually as of now. I was wondering if anyone has a script that utilizes the API to run the command. Here is the advice we got from the support team at MongoDB:On the topic of automation of any portion of this process, you can decrement the disk size using the Modify A Cluster API endpoint.", "username": "invesp_Figpii" }, { "code": "", "text": "Can I ask how you ran it, I am trying to work it out - is this something we can do in Compass or Studio3T?", "username": "Russell_Harrower" }, { "code": "", "text": "If you periodically run a command on mongo db, you can create a shell script which does the job and trigger that script with cronjobs. you need to be sure the script should not run multiple times and it will be good to send email the outputs.", "username": "invesp_Figpii" } ]
Automating the compact() command via the API
2020-07-01T01:38:36.149Z
Automating the compact() command via the API
2,771
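A sketch of the cron-driven approach described in the last reply, written as a mongosh script rather than the Atlas Admin API call. The database name and connection string are placeholders; compact requires the dbAdmin role, and on older server versions it blocks writes on the collection while it runs, so schedule it off-peak:

```javascript
// compact_all.js — run with: mongosh "mongodb+srv://..." compact_all.js
const dbName = "mydb";
const targetDb = db.getSiblingDB(dbName);

// Only real collections; compact would error on views.
targetDb.getCollectionInfos({ type: "collection" }).forEach(function (info) {
  const result = targetDb.runCommand({ compact: info.name });
  print(`compact ${dbName}.${info.name}: ${JSON.stringify(result)}`);
});
```

A crontab entry pointing at this script, plus a lock file or similar guard against overlapping runs, covers the "should not run multiple times" concern from the reply above.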
null
[ "containers", "installation" ]
[ { "code": "", "text": "I just upgraded my development machine to an M1 MacBook Pro, and immediately discovered that my mongodb docker containers weren’t working. I noticed that there isn’t yet a build of mongodb for the M1, and I found the relevant ticket for the upgrade which doesn’t look like it’ll be addressed any time soon (backlogged P3). This would be fine if mongodb ran under Rosetta, but with v5 an AVX requirement was introduced. Rosetta does not simulate AVX.I find myself stuck. Do I have to roll back to mongo4? Is there any way to get an M1 build prioritised?", "username": "Michael_Townsend" }, { "code": "", "text": "@Michael_Townsend - Let’s continue the discussion from https://jira.mongodb.org/browse/SERVER-42427 here.In your environment, which binaries are being run within the Docker container? Is Docker compiled natively for M1 or is it itself running under Rosetta 2? I suspect that what may be happening is that Docker is trying to run the x86_64 linux MongoDB 5.0 binaries inside the Docker container, and those do indeed require AVX, which Rosetta 2 doesn’t emulate.But I’m puzzled why Docker running on an M1 would be emulating an x86 environment. Shouldn’t it be running an ARM linux, and then using the ARM linux MongoDB in that, etc.?", "username": "Andrew_Morrow" }, { "code": "", "text": "Hello @Michael_Townsend!Would you by chance be interested in trying to run MongoDB 5.0 Natively on your M1 Mac?I am sorry for any frustration this has all caused you, but I may have an acceptable work around for the time being.The way to install 5.0 Natively without Docker on an M1 Mac:To check and verify that it is running:If you have you have any issues with this work around, please let me know. As I would be happy to help you situate this. However, if you must have MongoDB 5.0 within a Docker container for work requirements that’s is completely understandable. I am merely giving you another option to use MongoDB 5.0 on your M1 Mac while compatibility developments using Docker are sorted out.I would also love to hear about your applications stack and use case(s) for MongoDB on your M1 Mac like my colleague Andrew would.Anything else that we can help you with in the meantime, please let us know.Regards,Garrett", "username": "Brock_GL" }, { "code": "", "text": "Hi GarrettI would be happy to try out MongoDB 5.0 natively (without Docker) on my Apple M1 Max running the latest macOS Monterey.When you say “natively” due you mean true native or still via Rosetta 2 ?regards\nMatt", "username": "Matthew_Donaldson" }, { "code": "brewx86_64", "text": "Welcome to the MongoDB Community Forums @Matthew_Donaldson!At the moment, running natively on Apple M1 is referring to using Rosetta 2 without Docker (for example, installing the macOS x86_64 binaries via brew). The macOS packages are working fine for me on M1 with Rosetta 2 installed.This discussion was originally about someone trying to run the Linux x86_64 binaries in Docker on M1, which will be problematic because of the requirement for AVX support in MongoDB 5.0 packages for Linux x86_64. 
The solution for the original question would be to either install MongoDB 4.4 packages on Linux (since those are not optimised for AVX) or to build MongoDB 5.x from source with an older x86_64 CPU architecture target.Building the MongoDB server with ARM64/aarch64 support for MacOS (SERVER-50115) is currently blocked pending resolution of a SpiderMonkey JavaScript engine upgrade (SERVER-42427).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "Hello Everyone!Yesterday I found this forum as was sad to see that it didn’t seem possible to run Mongo 4.9+ with Docker. However I managed to get it work! I wrote a post on Stack Overflow and am actually right now trying to figure out why it does in fact work. If you’re having this issue hopefully this thread could be of use.", "username": "Luke_Parker" } ]
Mongo 5+ on Apple M1 laptops via Docker
2021-12-11T03:31:22.514Z
Mongo 5+ on Apple M1 laptops via Docker
21,474
null
[ "aggregation", "crud", "transactions" ]
[ { "code": "db.collection.updateMany(\n{ 'alt_text.text': { $regex: /Roaming Fee/ } },\n[\n {\n $set: {\n alt_text: {\n $map: {\n input: '$alt_text',\n in: {\n $replaceOne: {\n input: '$$this.text',\n find: 'Roaming',\n replacement: 'Transaction'\n }\n }\n }\n }\n }\n }\n]\n)\n{ \n \"_id\" : ObjectId(\"61683ffc6de6bb26d40f87b2\"), \n \"last_updated\" : ISODate(\"2021-10-14T14:32:10.699Z\"), \n \"alt_text\" : [\n {\n \"language\" : \"en\", \n \"text\" : \"$1.00 per hour\\n$1.25 Roaming Fee\"\n }, \n {\n \"language\" : \"fr\", \n \"text\" : \"1,00 $ par heure\\nFrais Roaming 1,25 $\"\n }\n ], \n \"id\" : \"abcd\"\n}\n{ \n \"_id\" : ObjectId(\"61683ffc6de6bb26d40f87b2\"), \n \"last_updated\" : ISODate(\"2021-10-14T14:32:10.699Z\"), \n \"alt_text\" : [\n {\n \"language\" : \"en\", \n \"text\" : \"$1.00 per hour\\n$1.25 Transaction Fee\"\n }, \n {\n \"language\" : \"fr\", \n \"text\" : \"1,00 $ par heure\\nFrais de Transaction 1,25 $\"\n }\n ], \n \"id\" : \"abcd\"\n}\n", "text": "I’m attempting to replace a portion of a string in a nested array of objects. Essentially each object in the array has a “text” field, which contains a reference to a roaming fee. I’d like to change the text from roaming fee to transaction fee.The result of my query is a little strange, it replaces the array of objects with an array of strings. The strings have the correct “transaction fee” text.Would really appreciate some help with this one!QueryOriginal DocumentExpected Output", "username": "Greg_Fitzpatrick-Bel" }, { "code": "in: { \n language: \"$$this.language\" ,\n text: { $replaceOne: {\n input: '$$this.text',\n find: 'Roaming',\n replacement: 'Transaction'\n } }\n}\n", "text": "The result of my query is a little strange, it replaces the array of objects with an array of strings.It is not strange. It is what you asked. You asked to $map each object of the array alt_text with the respective $replaceOne which produce strings. Your in: expression has to look something like:This is untested, but it is the general idea.", "username": "steevej" }, { "code": "$mergeObjectstext", "text": "Note that if you don’t know all the fields that might in inside the object in the array, you can use $mergeObjects to preserve existing fields while updating text field.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Attempting to replace a substring in an array of subdocuments
2022-05-04T18:43:30.623Z
Attempting to replace a substring in an array of subdocuments
5,098
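To make the $mergeObjects suggestion above concrete: the accepted pipeline can keep every other field on each array element by merging the replaced text over the original object. A sketch against the same collection and fields:

```javascript
db.collection.updateMany(
  { "alt_text.text": { $regex: /Roaming Fee/ } },
  [
    {
      $set: {
        alt_text: {
          $map: {
            input: "$alt_text",
            in: {
              // keep language (and any future fields), overwrite only text
              $mergeObjects: [
                "$$this",
                {
                  text: {
                    $replaceOne: {
                      input: "$$this.text",
                      find: "Roaming",
                      replacement: "Transaction",
                    },
                  },
                },
              ],
            },
          },
        },
      },
    },
  ]
);
```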
null
[ "python" ]
[ { "code": "# from django.db import models\nfrom djongo import models\n\nclass Blog(models.Model):\n id= models.AutoField(\n auto_created = True,\n unique=True,\n primary_key = True,\n serialize = False, \n verbose_name ='ID_nama: ')\n name = models.CharField(max_length=100)\n tagline = models.TextField()\n\n def __str__(self):\n return self.name\n\nclass Author(models.Model):\n name = models.CharField(max_length=200)\n email = models.EmailField()\n\n def __str__(self):\n return self.name\n\nclass Entry(models.Model):\n blog = models.ForeignKey(Blog, on_delete=models.CASCADE)\n headline = models.CharField(max_length=255)\n body_text = models.TextField()\n pub_date = models.DateField()\n mod_date = models.DateField(default=date.today)\n authors = models.ManyToManyField(Author)\n number_of_comments = models.IntegerField(default=0)\n number_of_pingbacks = models.IntegerField(default=0)\n rating = models.IntegerField(default=5)\n\n def __str__(self):\n return self.headline\n{\n \"_id\": {\n \"$oid\": \"626b6627f0d91c65e9f78cc6\"\n },\n \"id\": 5,\n \"name\": \"Beatles Blog\",\n \"tagline\": \"Beatles tour of the Americas.\"\n}\n\"ObjectId\" => \"_id\": {\n \"$oid\": \"626b6627f0d91c65e9f78cc6\"\n}\n", "text": "I use Django + MongoDB /Djongo for backend on Windows10/VSCode. How it is to instantiate document’s “ObjectId” like it is for other field using Python? I have been struggling for a several days. Please help. Code example, below:from datetime import dateHere is the document JSON from MongodDB:My target is to be able to capture theand save it to another new field for other use/purpose.", "username": "Rgs_Ma" }, { "code": ">>> b2 = Blog(name='Name', tagline='Tagline.')\n>>> b2.id # Returns None, because b2 doesn't have an ID yet.\n>>> b2.save()\n>>> b2.id\n", "text": "I’ve never used Djongo (usually use pymongo) so I’m not sure if this will work/what you’re looking for, but if you need the _id value after inserting a document so you can use it elsewhere it looks like you can use Model.id after saving to get the ID value.Here is the doc that has an example: Model instance reference | Django documentation | Django", "username": "tapiocaPENGUIN" } ]
MongoDB document's "ObjectId" instantiation and saving
2022-04-30T13:19:01.671Z
MongoDB document&rsquo;s &ldquo;ObjectId&rdquo; instantiation and saving
4,086
null
[ "java" ]
[ { "code": "@Document(\"region\")\npublic class Region {\n @Id\n private int globalIdLocal;\n private String name;\n}\n\n@Document(\"precipitation\")\npublic class Precipitation {\n @Id\n private int id;\n private String descriptionEn;\n private String descriptionCh;\n}\n\n@Document(\"windspeed\")\npublic class WindSpeed {\n @Id\n private int id;\n private String descriptionEn;\n private String descriptionCh;\n}\n\n@Document(\"dailyforecast\")\npublic class DailyForecastDto {\n @Id\n private String id;\n @DBRef\n private Region region;\n private int minTemp;\n private int maxTemp;\n @DBRef\n private WindSpeed windSpeed;\n @DBRef\n private Precipitation precipitation;\n private String dataUpdate;\n private String forecastDate;\n}\n{\n \"_id\": \"2310300.2022-04-22.2022-04-22T13:31:05\",\n \"region\": {\n \"$ref\": \"region\",\n \"$id\": 2310300\n },\n \"minTemp\": 16,\n \"maxTemp\": 21,\n \"windSpeed\": {\n \"$ref\": \"windspeed\",\n \"$id\": 2\n },\n \"precipitation\": {\n \"$ref\": \"precipitation\",\n \"$id\": 2\n },\n \"dataUpdate\": \"2022-04-22T13:31:05\",\n \"forecastDate\": \"2022-04-22\",\n \"_class\": \"io.github.valvula.weather_forecast.dto.DailyForecastDto\"\n}\n", "text": "Hi there!Some days ago, I started a new personal project based on getting data from a forecast API, converting to my desire and store for later uses.To get all the information I have to make four requests:The classes are something like this:When I save the data for day 1, it saves it fine but when I try to save day 2 it shows:org.springframework.dao.DuplicateKeyException: Write operation error on server localhost:27017. Write error: WriteError{code=11000, message=‘E11000 duplicate key error collection: weather-forecast-db.precipitation index: id dup key: { _id: 0 }’, details={}}.; nested exception is com.mongodb.MongoWriteException: Write operation error on server localhost:27017. 
Write error: WriteError{code=11000, message=‘E11000 duplicate key error collection: weather-forecast-db.precipitation index: id dup key: { _id: 0 }’, details={}}.In my database it shows all the collections just fine (dailyforecast,precipitation,region,windspeed) and in my dailyforecast, the day 1 shows like it should:I don’t really know if I have some problem with the id’s of the windspeed and precipitation being the same but that I can’t change, or if I am doing some relation the wrong way.\nI would appreciate if someone could enlighten me so I can understand my flaws.", "username": "Diogo_Valente" }, { "code": "windspeedprecipitation_id", "text": "Hi @Diogo_Valente\nWelcome to the community forum!!Can you please help me to understand the following in order to help you with a solution:Let me know if you have any further questions.Thanks\nAasawari", "username": "Aasawari" }, { "code": "{\n\t\"data\": [\n\t{\n\t\t\"descClassPrecIntEN\": \"--\",\n\t\t\"descClassPrecIntPT\": \"---\",\n\t\t\"classPrecInt\": \"-99\"\n\t}, {\n\t\t\"descClassPrecIntEN\": \"No precipitation\",\n\t\t\"descClassPrecIntPT\": \"Sem precipitação\",\n\t\t\"classPrecInt\": \"0\"\n\t}, {\n\t\t\"descClassPrecIntEN\": \"Weak\",\n\t\t\"descClassPrecIntPT\": \"Fraco\",\n\t\t\"classPrecInt\": \"1\"\n\t}, {\n\t\t\"descClassPrecIntEN\": \"Moderate\",\n\t\t\"descClassPrecIntPT\": \"Moderado\",\n\t\t\"classPrecInt\": \"2\"\n\t}, {\n\t\t\"descClassPrecIntEN\": \"Strong\",\n\t\t\"descClassPrecIntPT\": \"Forte\",\n\t\t\"classPrecInt\": \"3\"\n\t}\t]\n}\n{\n\t“data”: [\n\t{\n\t\t“descClassWindSpeedDailyEN”: “–”,\n\t\t“descClassWindSpeedDailyPT”: “—”,\n\t\t“classWindSpeed”: “-99”\n\t}, {\n\t\t“descClassWindSpeedDailyEN”: “Weak”,\n\t\t“descClassWindSpeedDailyPT”: “Fraco”,\n\t\t“classWindSpeed”: “1”\n\t}, {\n\t\t“descClassWindSpeedDailyEN”: “Moderate”,\n\t\t“descClassWindSpeedDailyPT”: “Moderado”,\n\t\t“classWindSpeed”: “2”\n\t}, {\n\t\t“descClassWindSpeedDailyEN”: “Strong”,\n\t\t“descClassWindSpeedDailyPT”: “Forte”,\n\t\t“classWindSpeed”: “3”\n\t}, {\n\t\t“descClassWindSpeedDailyEN”: “Very strong”,\n\t\t“descClassWindSpeedDailyPT”: “Muito forte”,\n\t\t“classWindSpeed”: “4”\n\t} ]\n}\n{\n\t“forecastDate”: “2022-04-22”,\n\t“data”: [\n\t{\n\t\t“tMin”: 9,\n\t\t“tMax”: 14,\n\t\t“classWindSpeed”: 2,\n\t\t“classPrecInt”: 2,\n\t\t“globalIdLocal”: 1010500,\n\t},\n\t{\n\t\t“tMin”: 8,\n\t\t“tMax”: 13,\n\t\t“classWindSpeed”: 2,\n\t\t“classPrecInt”: 2,\n\t\t“globalIdLocal”: 1020500,\n\t},\n\t…\n\t],\n\t“dataUpdate”: “2022-04-22T13:31:05”\n}\nclassPrecIntclassPrecInt", "text": "The precipitation and windspeed documents are somewhat similar:PrecipitationwindspeedAnd the forecast I’m using looks like this:So my idea, from the beginning, was to reference the classPrecInt from the forecast data to the classPrecInt from the precipitation list. The precipitation and windspeed are not supposed to change because it represents the severity of the weather.\nFrom the exception given I understand that I’m maybe duplicating values and I just want to reference the collections.After I investigated the issue I detected something. 
Everything is working fine except the precipitation, even though the implementation is the same as the windspeed.\nWhen I checked the collection, id 0 shows “Document 2”.", "username": "Diogo_Valente" }, { "code": "@Document(\"precipitation\")\npublic class Precipitation {\n @Id\n private Integer id;\n private String descriptionPt;\n private String descriptionEn;\n\n public Precipitation(PrecipitationDao precDao) {\n this.id = Integer.parseInt(precDao.getClassPrecInt());\n this.descriptionPt = precDao.getDescClassPrecIntPT();\n this.descriptionEn = precDao.getDescClassPrecIntEN();\n }\n}\n", "text": "Update: In the Precipitation object I changed the id type from int to Integer and for some reason it worked. The collection still shows “document 2” at id 0, but it works.", "username": "Diogo_Valente" } ]
DuplicatedKeyException on entering some data in db
2022-05-04T17:52:25.993Z
DuplicatedKeyException on entering some data in db
5,695
null
[ "mdbw22-hackathon" ]
[ { "code": "", "text": "We are creating a web application that showcases good news from all over the world using the GDELT Dataset which categories its immense amount of data by Tone (amongst other categories).The aim is to provide access to accurate information from all over the world that is SOLELY positive.Check out our GitHub repo\nN.B: Repo is currently private, but will be made public as soon as possibleCredit: @webchick for the amazing idea", "username": "Fiewor_John" }, { "code": "", "text": "Great - many thanks.Your repo link is a 404 for me - perhaps it’s private?", "username": "Shane_McAllister" }, { "code": "", "text": "Hi @Shane_McAllister , Could you please check now. I made the repo public", "username": "Avik_Singha" }, { "code": "", "text": "Great - works now. Many thanks", "username": "Shane_McAllister" }, { "code": "", "text": "Hi @Shane_McAllister We would like to add another team member’s name. @Sucheta_Singha.", "username": "Avik_Singha" }, { "code": "", "text": "I think we’re the ones in control of this, so I’ll go ahead and update the info above appropriately", "username": "Fiewor_John" }, { "code": "\"coordinates\": [-81.717, 27.8333]\"", "text": "I’m not quite sure who to address this to, @Shane_McAllister but I have an issue.While working on the visualization of the GDELT positive data that my team is concerned with, I discovered that the location data in the GDELT dataset is not well detailed.What I mean is, I put the location coordinates e.g. \"coordinates\": [-81.717, 27.8333]\" on Google Maps to check if the location returned will be the same as the location fullname (Florida, US for this example) and for some coordinates, it is accurate, but in others, I have to reverse the coordinate values to get the right location.This is an issue because of the absence of a defined “longitude” and “latitude” property in the location field and it sort of introduces complications for me (some points are outside the map).I’m just wondering if this is something I can be helped with?", "username": "Fiewor_John" }, { "code": "", "text": "Hi @Fiewor_John!We’re following the GeoJSON standard here, so that array should be [longitude, latitude]. If you’re getting Florida events, but with those values reversed, then there’s something wrong. Can you provide me with the ID of an event with the values reversed, so I can see if it’s a problem with the source data, or something else?Thanks,Mark", "username": "Mark_Smith" }, { "code": " \"Action\": {\n \"Geo_Fullname\": \"Kuala Lumpur, Kuala Lumpur, Malaysia\",\n \"Geo_CountryCode\": \"MY\",\n \"Location\": {\n \"type\": \"Point\",\n \"coordinates\": [101.7, 3.16667]\n }\n }\n \"Action\": {\n \"Geo_Fullname\": \"Chornobyl, Kyyivs'ka Oblast', Ukraine\",\n \"Geo_CountryCode\": \"UP\",\n \"Location\": {\n \"type\": \"Point\",\n \"coordinates\": [30.2225, 51.2736]\n }\n }```\nI'm testing using this: `https://developers-dot-devsite-v2-prod.appspot.com/maps/documentation/utils/geocoder`", "text": "Thank you for responding, Mark.\nHere’s one that doesn’t return any location, but when reversed, returns the right locationHere’s another that returns the location of somewhere in Iran, but when reversed, returns the right location", "username": "Fiewor_John" }, { "code": "gdelt_reshaper.js", "text": "Where’s the data stored? 
Are you using the gdelt_reshaper.js script to get it into this structure?", "username": "Mark_Smith" }, { "code": "", "text": "I’m exporting from mongoDB compass after connecting to the cluster made available for the hackathon", "username": "Fiewor_John" }, { "code": "", "text": "Hmm. Okay, I think I know what may have happened - I’ll try to clear this up tomorrow!", "username": "Mark_Smith" }, { "code": "", "text": "Thank you! Looking forward to it", "username": "Fiewor_John" }, { "code": "", "text": "Hello Mark, just a reminder about this.\nThanks for the informative session earlier.", "username": "Fiewor_John" }, { "code": "", "text": "", "username": "Stennie_X" } ]
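For anyone else hitting reversed GDELT coordinates, one repair option is an update with an aggregation pipeline that swaps the pair back into GeoJSON's [longitude, latitude] order. A minimal mongosh sketch, assuming a collection named events with the Action.Location.coordinates path from the documents above; the filter here is a hypothetical placeholder and should only match documents you have verified are reversed:

db.events.updateMany(
  { "Action.Geo_CountryCode": "MY" },  // placeholder filter: only docs known to be reversed
  [ { $set: { "Action.Location.coordinates": { $reverseArray: "$Action.Location.coordinates" } } } ]
)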
Team 'Good News '
2022-04-29T13:59:08.521Z
Team &lsquo;Good News &rsquo;
4,526
null
[ "compass", "swift" ]
[ { "code": "func currenciesContent() -> CurrenciesContent {\n if let existingContent = self.realm()?.objects(CurrenciesContent.self).first {\n return existingContent\n }\n let newContent = CurrenciesContent()\n self.saveRealm(newContent)\n return newContent\n}\nRealm.asyncOpen(configuration: userSyncConfig) { [weak self] result in\n switch result {\n case .failure(let error): break // Handle error\n case .success(let syncRealm):\n self?.syncRealm = syncRealm // Realm ready to use\n }\n}\nself.realm()?.objects(CurrenciesContent.self).first", "text": "I have noticed a breaking bug behaviour which makes us impossible to upgrade from a shared Atlas (M2) to a dedicated Atlas (M10) : Terminating sync which leads to downloading incomplete data.Context\nI had a strange Realm bug on our actual M2 server where I was forced to terminate sync.\nI have a collection called CurrenciesContent using the Realm sync partition system (with _partition field).\nEach user has a one and only one CurrenciesContent (which handles the in-app virtual currencies of the user - such as coins, gems, etc as you can find in mobile games).The iOS is very simple : I will read/write the first CurrenciesContent because all users only have one.As for the Realm object, I use a “download everything before using Realm” approach withThe problem\nAfter restarting the sync, the line self.realm()?.objects(CurrenciesContent.self).first always returned null. There were no CurrenciesContent found at all BUT the document still exists in DB online, I double checked using Mongo Compass. Realm iOS SDK just cannot fetch it.\nInstead it will create a new document, and the entire code logic (which relies on having only 1 single document) is now broken.So now, I’m trying to understand why Realm SDK can’t fetch what’s in the DB online after a terminating sync.\nI would like to upgrade server without losing all our users virtual currencies which they paid with real money.", "username": "Jerome_Pasquier" }, { "code": "", "text": "Hey @Jerome_Pasquier Unfortunately, when upgrading from a shared to dedicated cluster the changestream is not migrated over which is necessary for Realm Sync to function. This requires a terminate and re-enablement of Sync which causes a Client Reset on clients which is likely the error you are seeing in your clients, they are stuck and need to client reset in order to gain fresh state from the new dedicated cluster. This is why we do not recommend shared tier clusters for production usage. You can see documentation here -", "username": "Ian_Ward" }, { "code": "", "text": "Okay, so after many tests and a few exchanges with Realm support, I have found out 2 things", "username": "Jerome_Pasquier" }, { "code": "", "text": "Hi Jerome,but the client now has new documents of everythingCould you please elaborate what you mean by this?\nDo you mean the user on that client is syncing documents that they shouldn’t have access to? If that’s the case, then please check your sync permission configuration on the cloud app. The user will see documents for a particular partition being opened if the sync permissions allow it.Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Sync permissions are set correctly. 
Everybody can read/write only their own data (partition: “user={user_id}”). The problem is that upon Terminate Sync, I encounter these errors: apparently, it is normal to have them. However, my issue is that the user, upon its first connection to a re-enabled sync (having an existing local Realm Sync on the iOS client), couldn’t fetch its own documents, and could only fetch them after a reboot of the iOS app (and therefore a reboot of the Realm SDK).", "username": "Jerome_Pasquier" }, { "code": "", "text": "Hi Jerome, “Apparently, it is normal to have these errors.” Correct, it is normal to have these errors after a sync termination, which requires a client reset to resolve, as per Ian’s comment above. “However, my issue is that the user, upon its first connection to a re-enabled sync (having an existing local Realm Sync on the iOS client), couldn’t fetch its own documents, and could only fetch them after a reboot of the iOS app (and therefore a reboot of the Realm SDK).” This behaviour is associated with the errors you’re observing.\nIn order for Sync to work again on clients after a sync termination, they will need to undergo a client reset to resume syncing of data. This is also why we do not recommend performing a sync termination in a production app unless otherwise advised by MongoDB staff. On this note, we also recommend that before launching a production app, it should be on a minimum of an M10 cluster tier so that a termination is not forced should you need to upgrade from a shared tier to a dedicated tier in the future. As per the client reset article linked above, we recommend that you include client reset handling in your app to handle such situations, so that the user will not have to manually reset it via uninstalling/reinstalling the app, etc. Please see examples of client reset handling in your particular SDK. Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "I have indeed prepared my iOS app to run a client reset properly (and log out + delete everything locally). Yesterday, I ran a few tests with Terminate Sync and I noticed a specific flow.\nScreenshot 2022-05-04 at 14.06.14 (1920×1336, 87 KB)\nIs my observation correct? If it is correct, how can I know when everything is finalized and I can allow users to be back on the app?", "username": "Jerome_Pasquier" }, { "code": "", "text": "Hi Jerome, the logs you’re seeing are related to the blue banner sync progress you mentioned. When you terminate sync, it clears out the Sync metadata, which contains all the instructions for sync to happen to and fro between the cloud and the client. When re-enabling sync after a termination, this metadata needs to be rebuilt, and you’ll see the blue banner showing the progress of the MongoDB data being translated to “changeset” instructions that can be downloaded to clients when they open Realms. You’ll notice those logs in your screenshot do not have a value for the “user id” column, which means they are not requests made by users. These logs are generated by a Realm process which handles the rebuilding of instructions when enabling sync. 
You’ll also see such logs every so often when writes occur either in clients or directly in MongoDB, because creation of the corresponding instructions is required. Regards", "username": "Mansoor_Omar" }, { "code": "", "text": "Thank you for the explanation. One very last question: how do you suggest handling the mobile clients while Realm has to rebuild the metadata?\nCurrently, we have created a “server down” flag using our own server, and when the iOS client sees this flag, it won’t connect to Realm at all. But this solution requires another server besides the Realm server. Do you have an in-house solution?\nI just did another test: I forced a metadata rebuild (because I activated Dev Mode), and if I update any document which hasn’t been refreshed by the metadata system yet, then I lose the document update.\nAs I understand it, the metadata rebuild happens when there is a terminate sync + Dev Mode activated + an update to the Realm schema. So it seems to be something that can happen in prod, and we really don’t want users to lose their progress because Realm metadata is refreshing.", "username": "Jerome_Pasquier" } ]
Clients can't download all data after terminating sync
2022-04-28T22:55:02.801Z
Clients can&rsquo;t download all data after terminating sync
3,118
https://www.mongodb.com/…020a326cd82a.png
[ "mdbw22-hackathon" ]
[ { "code": "Lead Developer AdvocateSenior Developer Advocate", "text": "We will be live on MongoDB Youtube and MongoDB TwitchLead Developer AdvocateSenior Developer Advocate", "username": "Shane_McAllister" }, { "code": "", "text": "We are live again in just over 30 mins!!Watch on MongoDB Youtube and MongoDB Twitchor watch below here -And don’t forget to ask questions in the chat!!", "username": "Shane_McAllister" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Hackathon APAC & EMEA Session- Working with the GDELT Data in MongoDB
2022-05-04T17:37:22.070Z
Hackathon APAC &amp; EMEA Session- Working with the GDELT Data in MongoDB
2,814
https://www.mongodb.com/…_2_1023x576.jpeg
[ "sydney-mug" ]
[ { "code": "MongoDB Senior Consulting EngineerMongoDB Staff EngineerMongoDB Staff Engineer", "text": "\nEvent Details1429×805 128 KB\nSydney MongoDB User Group is excited to announce an online meetup for May 2022. This event will feature two great talks from MongoDB.All levels of technical knowledge are welcome.Event Type: Online\nLink(s):\nLocation\nVideo Conferencing URLMongoDB Senior Consulting EngineerManish is a Senior Consulting Engineer with 12+ years of experience building software. His experiences range from front-end engineering to DevOps, and everything in between. He is thrilled by the capabilities of the new and powerful tools available to engineers today and how MongoDB is helping them to make their life easier. In this talk, he will present an overview of Realm Sync, and demonstrate Realm synchronisation integration in a mobile applicationMongoDB Staff EngineerThomas has been with MongoDB for almost 10 years and in various roles. Last year he joined MongoDB Labs, MongoDB’s research and innovation team. In this talk, he will give an overview of Labs’ team and mission, and then dive into some of the research in the area of Machine Learning he is involved in; Training Deep Neural Networks to learn the structure of datasets for tasks including index recommendations, query planning and approximate query processing.MongoDB Staff Engineer", "username": "wan" }, { "code": "", "text": "Hi Everyone,A reminder that this is happening in approximately 2 hours from now.The event will be held on Zoom https://zoom.us/ , please remember to download and install the Zoom application on your preferred device before the event start.Looking forward to see you all.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "We are on, join here: Launch Meeting - Zoom", "username": "Harshit" } ]
Sydney MUG: Meetup | May 2022 | Online
2022-04-26T02:37:22.074Z
Sydney MUG: Meetup | May 2022 | Online
5,245
null
[ "replication", "database-tools", "backup" ]
[ { "code": "......\nfinished restoring test.dev-db (100 documents, 0 failures)\nrestoring users from archive '/tmp/backup.gz'\nrestoring indexes for collection test.hour from metadata\nindex: &idx.IndexDocument{Options:primitive.M{\"name\":\"timestampAscending\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"timestamp\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\nrestoring indexes for collection test.day from metadata\nindex: &idx.IndexDocument{Options:primitive.M{\"name\":\"timestampAscending\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"timestamp\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\nindex: &idx.IndexDocument{Options:primitive.M{\"name\":\"companyAscending\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"company\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\nindex: &idx.IndexDocument{Options:primitive.M{\"name\":\"companyAscending\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"company\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\nrestoring indexes for collection test.week from metadata\nindex: &idx.IndexDocument{Options:primitive.M{\"name\":\"timestampAscending\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"timestamp\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\nindex: &idx.IndexDocument{Options:primitive.M{\"name\":\"companyAscending\", \"v\":2}, Key:primitive.D{primitive.E{Key:\"company\", Value:1}}, PartialFilterExpression:primitive.D(nil)}\nFailed: test.day: error creating indexes for test.day: createIndex error: (Unauthorized) command createIndexes requires authentication\n200 document(s) restored successfully. 0 document(s) failed to restore.\nmongorestore --uri=\"mongodb://mongodb-0.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-1.mongodb-svc.mongodb.svc.cluster.local:27017,mongodb-2.mongodb-svc.mongodb.svc.cluster.local:27017\" --username=restore-user --password 'password' --drop --archive=\"backup-mongodb.gz\" --gzip\n", "text": "Hello.I am trying to perform a backup and restore of a mongodb ReplicaSet.\nDuring the restore I get error at the end of the command:The command I am using is:I tried using the user who has restore rights and also root. but the result is the same.The version of mongoDB server used is 4.4.13, and the version of mongodump and mongorestore is 100.5.2.Thanks", "username": "Yuma_N_A" }, { "code": "", "text": "Try with authenticationDatabase parameter", "username": "Ramachandra_Tummala" }, { "code": "", "text": "yes, thank you! I added also --oplogReplay, it works well! thanks a lot", "username": "Yuma_N_A" } ]
Mongorestore ReplicaSet createIndex error: (Unauthorized) command createIndexes requires authentication
2022-05-04T15:58:29.826Z
Mongorestore ReplicaSet createIndex error: (Unauthorized) command createIndexes requires authentication
4,350
null
[ "atlas-search", "data-api" ]
[ { "code": "", "text": "Hello,I’ll preface this by stating that I’m not a developer, I’m from a pharmaceutical background trying to create a tool for academic research.I’m trying to use the atlas $search function (which provides a fuzzy search for my database hosted on mongo) in a Retool App. Currently, I am using the DATA API provided by atlas but I can only use ‘filter’ and ‘projection’ params to get results.Is there some documentation about calling the $search function via POST since Retool supports RESTful as a method to pull data for my webapp.", "username": "Sandeep_N_A" }, { "code": "POST /action/aggregate$search$search$searchcurl --location --request POST 'https://data.mongodb-api.com/app/data-hkzxc/endpoint/data/beta/action/aggregate' \\\n--header 'Content-Type: application/json' \\\n--header 'Access-Control-Request-Headers: *' \\\n--header 'api-key: API_KEY_REDACTED' \\\n--data-raw '{\n \"collection\":\"namecoll\",\n \"database\":\"namedb\",\n \"dataSource\":\"M10Cluster\",\n \"pipeline\": [\n \t{\n \t\t\"$search\": {\n \t\t\t\"index\": \"namecollsearchindex\",\n \t\t\t\"text\": {\n \t\t\t\"query\": \"Test\",\n \t\t\t\"path\": {\n \t\t\t\t\"wildcard\": \"*\"\n \t\t\t}\n \t\t\t}\n \t\t}\n \t\t}\n \t]\n}'\n{\"documents\":[\n{\"_id\":\"62730ee3d51e21ac65d9db06\",\"name\":\"Test Name\"},\n{\"_id\":\"62730f31d51e21ac65d9db0a\",\"name\":\"John Test\"}\n]}\npipeline", "text": "Welcome to the community @Sandeep_N_A Is there some documentation about calling the $search function via POST since Retool supports RESTful as a method to pull data for my webapp.The Atlas Search queries take the form of an aggregation pipeline stage. You will be need to use the POST /action/aggregate endpoint for the Atlas Data API for $search queries. The following Run an Aggregation Pipeline documentation may help you this although the example on the page does not have a $search stage used as part of the pipeline. However, please see the below example I’ve used against a test system which includes the $search stage:Response:Currently, I am using the DATA API provided by atlas but I can only use ‘filter’ and ‘projection’ params to get results.As shown in the above example, you’ll need to use the pipeline parameter in the request body.The Atlas Data API is currently in preview so there are chances the above behaviour may change.I hope the above helps.Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "Thank you for your reply, This behaviour makes sense to me. I’ll try to implement it and see if it works.", "username": "Sandeep_N_A" }, { "code": "", "text": "Thanks! This worked for me. Is there also a way to implement fuzzy search through /aggregate? Because I can now successfully match using $Search but am unable to get anything but an exact string match", "username": "Sandeep_N_A" }, { "code": "", "text": "Thanks for confirming!Is there also a way to implement fuzzy search through /aggregate? Because I can now successfully match using $Search but am unable to get anything but an exact string matchI would recommend creating a new topic for this query with details about the use case, current output, expected output, pipeline stages and anything else that may assist with finding a solution Regards,\nJason", "username": "Jason_Tran" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Looking for a way to use $search function via Rest API
2022-05-04T19:14:16.672Z
Looking for a way to use $search function via Rest API
5,159
null
[ "node-js", "crud" ]
[ { "code": "{\n \"_id\": {\n \"$oid\": \"61d6552f4a40799193fc5904\"\n },\n \"attributes\": [\n {\n \"required\": true,\n \"order\": 1,\n \"attribute\": {\n \"key\": \"title\",\n \"label\": \"Title\",\n \"dataType\": \"string\",\n \"description\": \"just a test\",\n \"searchable\": true,\n \"inputControl\": {\n \"controlType\": \"foo\"\n },\n \"searchControl\": {}\n }\n }\n ],\n \"icon\": \"none.png\",\n \n }\ndb.getCollection('categories')\n .updateOne(\n {_id: ObjectId(\"61d6552f4a40799193fc5904\")}, \n {$pull: { \"attributes\": { \"attribute.key\": \"rest\" } } }\n );\nUpdateFilter<T>import {\n ClientSession, Filter, FindOptions, InsertOneOptions, InsertOneResult, OptionalUnlessRequiredId,\n UpdateFilter, UpdateOptions, UpdateResult\n} from 'mongodb';\n\npublic async mongoUpdateOne(filter: Filter<T>, update: UpdateFilter<T> | Partial<T>, options?: UpdateOptions): Promise<UpdateResult> {\n return this.model.collection.updateOne(filter, update, options);\n }\n const filter: Filter<CategoryDocument> = { _id: new ObjectId(categoryId) };\n const updateFilter = { $pull: { attributes: { 'attribute.key':'title' } } };\n const updateResult: UpdateResult = await this.categoriesRepository.mongoUpdateOne(filter, updateFilter).catch((error) => Promise.reject(error));\n'attribute.key'", "text": "based on the data above I want to delete an element from the attributes array based on the attribute.key property value.Using MongoDB Playground I can accomplish this with…but within my node typescript app the above violates the UpdateFilter<T> interface that comes with the MongoDB drivers.my base repository class that wraps the MongoDB driver lib…Argument of type ‘{ $pull: { attributes: { ‘attribute.key’: string; }; }; }’ is not assignable to parameter of type ‘UpdateFilter | Partial’.I can clear this up if I do not use the nested object 'attribute.key' but thats where my unique key is.Is there another way to write this update expression?", "username": "Jesse_Beckton" }, { "code": "const updateFilter = { $pull: { 'attributes': { 'attribute.key': attributeItemKey } } } as UpdateFilter<CategoryDocument>;\n", "text": "So after reading this documentI changed my code to…This works as expected.", "username": "Jesse_Beckton" } ]
Issues with expression to Delete object from array based on object.subObject.property value
2022-05-05T01:34:45.642Z
Issues with expression to Delete object from array based on object.subObject.property value
2,395
null
[ "queries", "swift", "atlas-device-sync", "flexible-sync" ]
[ { "code": "", "text": "Realm Team thanks for bringing Flexible Sync!I have some questions to help plan development (knowing it is in Preview and not GA).Document typo found:\nI believe the words in the name are transposed for QuerySubscription (written as SubscriptionQuery) in first paragraph of API reference: QuerySubscription Structure Reference", "username": "Reveel" }, { "code": "", "text": "Roop, welcome back! - I told you we’d get there eventually!I hope this helps-Ian", "username": "Ian_Ward" }, { "code": "[c]INbeginsWithcontainsANY", "text": "Hey Ian – yes you did - glad to see Flexible Sync! Hope all is well.Thanks!", "username": "Reveel" }, { "code": "", "text": "Hi, good thing is that most of the string operators will be implemented and released very soon (beginsWith, contains, [c], endsWith, like). The array operators like ANY will be followed up quickly after and we can ping this thread when they get released if that works. Out of curiosity, when you say you want to use IN, do you mind letting us know what your query is intended to look like? There are two different interpretations with IN depending on if the right-hand side of the query is a string or a list of elements you are trying to OR together.", "username": "Tyler_Kaye" }, { "code": "IN\"%@ IN %K._id\"\"%K IN %@\"", "text": "Hi Tyler - that is good news - thanks!I have both usages for IN. Some use cases in play for comparing collections/aggregates:\"%@ IN %K._id\" - Left side is a constant value (not collection) and right side for a Collection\"%K IN %@\" - Left side is a constant value (not collection) and right side is a Swift-Type Collection (non Realm)", "username": "Reveel" }, { "code": "", "text": "Hi Tyler - Ian mentioned we will be able to compare Arrays to Realm Collections. That is something new and great, is that far away?", "username": "Reveel" }, { "code": "", "text": "Hey, I’m not sure I follow you 100%. We are planning on rolling out support for querying on Realm Collections using the ANY, ALL, NONE syntax in RQL within the next quarter hopefully. Is that what you are referencing?", "username": "Tyler_Kaye" }, { "code": "", "text": "I am referring to #4 that Ian responded to my initial questions - “queries on Arrays for Flexible Sync.” Thank you in advance!", "username": "Reveel" }, { "code": "", "text": "Hey @Ian_Ward - hope you are well!Do you have any new info (timeline, probability, etc.) for having one Realm App with both Partition-based & Flexible sync simultaneously?Thank you in advance!", "username": "Reveel" }, { "code": "", "text": "I’m not sure what your use case is but I what I was mentioning above was to have some sort of way to convert partition-based sync clients to a flexible sync configured on the cloud side. The cloud side would only be configured with flexible sync - we haven’t started this work yet, it is likely a few quarters out.", "username": "Ian_Ward" }, { "code": "", "text": "Hi @Ian_Ward,Thanks for that and it’s understood. Though, my main focus is on the ability to have both (Partition & Flexible) in same App. If that is going to happen? -because if not, managing 2 Apps per user won’t be fun .Currently, I have 3 main designs for Realm with MongoDB, that I have separated as follows:Partition-based with EmbeddedObjects. 
When using EmbeddedObjects I will NOT have a relationship in the Schema; this way the entire Object or List<> will be written in as the embedded value.\n–a) This will be for when the user copies an Object to an EmbeddedObject, since we want to preserve the ‘_id’ → ‘id’ value across objects (no duplicate ‘_id’ in a Collection). Partition-based without EmbeddedObjects: using relationships via foreign_key references as the nested value.\n–a) This will be for simple user partitions, complex user partitions (via parsing ‘_id’ for further partitioning) and global partitions (for all users). Flexible Sync: to replace most of Legacy Realm’s QBS, which would be too burdensome to have on Partition-based.", "username": "Reveel" }, { "code": "", "text": "At this time we do not plan to have partition-based sync and flexible sync configured on the same backend app.", "username": "Ian_Ward" }, { "code": "", "text": "Oh man. @Ian_Ward - is that something you all could consider? I will dig into managing 2 Apps then. Could you advise on any concerns or issues with 2 apps across single users? P.S. BTW, the feature being developed for converting Partition-based to Flexible now makes a lot more sense, and it’s worthwhile - thanks for building that!", "username": "Reveel" }, { "code": "", "text": "@Ian_Ward, what about offering the ability to add multiple Atlas DBs (e.g.: one for Partition and one for Flexible) to a single Realm App via separate Syncs (could even have separate App-IDs)? This way there would only be 1 user to manage (since 1 Realm App). Of course, the Collections may be duplicative, but that would be the case anyhow. Each Sync in the Realm App would be either Partition-based or Flexible.", "username": "Reveel" } ]
Flexible Sync Questions:
2022-03-07T00:33:22.875Z
Flexible Sync Questions:
5,990
null
[ "atlas-functions", "serverless" ]
[ { "code": "/// NOTE: Code shown is not actual code, rather recreated & verbose (not DRY) to show each step.\n\nexports = function () {\n\n // NOTE: MongoDb Properties\n const clusterServiceName = \"mongodb-atlas\";\n const dbName = \"Dev-DB\";\n const db = context.services.get(clusterServiceName).db(dbName);\n // NOTE: Create Collection properties\n let useNestedCollection = db.collection(\"PersonRealm\");\n let useCollection = db.collection(\"UsersRealm\");\n\n // NOTE: Step #1 - Create Nested Object\n const personObject = context.values.get(\"PersonRealm_Schema\");\n personObject._id = new BSON.ObjectId();\n personObject.name = \"John Doe\"\n\n // NOTE: Step #2 - Insert Nested Object to its Collection\n useNestedCollection.findOne({ \"_id\": personObject._id })\n .then(found => {\n if (found) {\n // return found;\n }\n else {\n useNestedCollection.insertOne(personObject)\n .then(inserted => {\n // return inserted;\n })\n .catch(error => console.error(`Failed to insertOne 'personObject' document. Here is error: ${error}`));\n }\n })\n .catch(error => console.error(`ERROR in trying to find already existing 'personObject' document check. Here is **ERROR: ${error}`));\n\n // NOTE: Step #3 - Create Top-level Object with references to '_id'\n const usersObject = context.values.get(\"UsersRealm_Schema\");\n usersObject._id = new BSON.ObjectId();\n // This is supposed to reference to the 'personObject' object previously added\n usersObject.lastAddedPerson = personObject._id\n // Assuming this is the first time being added (so entire Array), and not a 'push' of an Element.\n // This is supposed to reference to the 'personObject' object previously added\n usersObject.allPersons = [personObject._id] \n\n // NOTE: Step #4 - Add Top-level Object to its Collection\n useCollection.findOne({ \"_id\": usersObject._id })\n .then(found => {\n if (found) {\n // return found;\n }\n else {\n useCollection.insertOne(usersObject)\n .then(inserted => {\n // return inserted;\n })\n .catch(error => console.error(`Failed to insertOne 'usersObject' document. Here is error: ${error}`));\n }\n })\n .catch(error => console.error(`ERROR in trying to find already existing 'usersObject' document check. Here is ERROR: ${error}`));\n\n}\n\n", "text": "Unless I am missing a better way… I would like to confirm the process for adding nested Objects & List<>s to an Object from a Realm Function (since there is no Realm SDK accessible from a Realm Function).Assumptions:\n•\tThe nested are of type ‘Object’ or ‘List’ (not ‘EmbeddedObject’)\n•\tThere is a proper relationship created in Realm UI (or CLI)Steps:Basically, the Realm Object added in Step #4 will contain references (via ‘_id’) for the nested, while the full Object or List<> of nested were added in Step #2. Thereafter, simply depend on the reference of the ‘_id’ to make the connection between the two.This seems to work but I want to make sure there is not something missing under-the-hood, or better yet if there is a more ideal Mongo method (instead of ‘.insertOne()’ and all steps) for Realm Objects to be inserted that have nested (non EmbeddedObject).SAMPLE (RE-CREATED) CODE:", "username": "Reveel" }, { "code": "", "text": "To help anyone with the same question… This process for Realm without Realm SDK (e.g.: Realm Functions) is correct. This was confirmed through paid MongoDB support.", "username": "Reveel" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding Nested Objects & Lists from Realm (serverless) Functions
2022-05-01T18:52:53.443Z
Adding Nested Objects &amp; Lists from Realm (serverless) Functions
2,870
null
[ "production", "golang" ]
[ { "code": "", "text": "The MongoDB Go Driver Team is pleased to release version 1.9.1 of the MongoDB Go Driver.This release includes various bug fixes, including fixing the “soft fail” behavior of the OCSP certificate check and correctly handling 32- or 64-bit integers in server responses. For more information please see the 1.9.1 release notes.You can obtain the driver source from GitHub under the v1.9.1 tag.Documentation for the Go driver can be found on pkg.go.dev and the MongoDB documentation site. BSON library documentation is also available on pkg.go.dev. Questions and inquiries can be asked on the MongoDB Developer Community. Bugs can be reported in the Go Driver project in the MongoDB JIRA where a list of current issues can be found. Your feedback on the Go driver is greatly appreciated!Thank you,\nThe Go Driver Team", "username": "Matt_Dale" }, { "code": "", "text": "Tried it out and posted a link to this announcement to Gophers Slack", "username": "Jack_Woehr" }, { "code": "", "text": "Thx for this.\nJust a little question, must this fixes be back-ported to 1.8 release branch?", "username": "Jerome_LAFORGE" }, { "code": "", "text": "@Jerome_LAFORGE we currently do not plan to back-port the fixes introduced in v1.9.1 to a v1.8.x release.", "username": "Matt_Dale" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
MongoDB Go Driver 1.9.1 Released
2022-05-03T18:39:25.517Z
MongoDB Go Driver 1.9.1 Released
2,492
https://www.mongodb.com/…4_2_1024x512.png
[ "dot-net", "production" ]
[ { "code": "", "text": "This is a patch release that addresses some issues reported since 2.15.0 was released.The list of JIRA tickets resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.15.1%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:There are no known backwards breaking changes in this release.", "username": "Boris_Dogadov" }, { "code": "", "text": "This topic was automatically closed after 90 days. New replies are no longer allowed.", "username": "system" } ]
.NET Driver 2.15.1 Released
2022-05-04T21:15:36.366Z
.NET Driver 2.15.1 Released
1,828
null
[ "aggregation", "queries" ]
[ { "code": "", "text": "Hello, I am working on some HW, and I have issues with my two last queries.Here you have my data fields (one example):\n{“address”: {“building”: “1007”, “coord”: [-73.856077, 40.848447], “street”: “Morris Park Ave”, “zipcode”: “10462”}, “borough”: “Bronx”, “cuisine”: “Bakery”, “grades”: [{“date”: {\"$date\": 1393804800000}, “grade”: “A”, “score”: 2}, {“date”: {\"$date\": 1378857600000}, “grade”: “A”, “score”: 6}, {“date”: {\"$date\": 1358985600000}, “grade”: “A”, “score”: 10}, {“date”: {\"$date\": 1322006400000}, “grade”: “A”, “score”: 9}, {“date”: {\"$date\": 1299715200000}, “grade”: “B”, “score”: 14}], “name”: “Morris Park Bake Shop”, “restaurant_id”: “30075445”}These are my questions:Return the list of boroughs ranked by the number of American restaurants in it. That is, for each borough, find how many restaurants serve American cuisine and print the borough and the number of such restaurants sorted by this number.Find the top 5 American restaurants in Manhattan that have the highest total score. Return for each restaurant the restaurants’ name and the total score.\nHint: You can use “$unwind”.", "username": "Friend_1998" }, { "code": "", "text": "While learning, it won’t give you any good to simply give you the straight answer. You won’t be learning.ForAmerican restaurantsyou would need a $match for the appropriate cuisine.Forlist of boroughs rankedandhe number of such restaurantsa $group with a $sum will be helpful.and to accomplishsorted by this numberobviously, it will be a $sort.ForAmerican restaurants in Manhattana $match on 2 fields is required. To get thetotal scoreof a restaurant, despite the $unwind hint, a better and more efficient approach would be $reduce because its purpose is as documentedApplies an expression to each element in an array and combines them into a single value.Thinks liketop 5 … highest total scoreusually involve a $sort and $limit.", "username": "steevej" }, { "code": "", "text": "Thank you so much! This was very helpful!", "username": "Friend_1998" } ]
Help with two quick mongo queries
2022-05-04T15:30:35.937Z
Help with two quick mongo queries
2,257
null
[ "queries" ]
[ { "code": "[2022-05-04 16:19:35.449] INFO ( 75) Mns.Map.Data.Cache.MongoItem - **** Updated display name to 'New Name'.\n[2022-05-04 16:19:35.449] INFO ( 75) Mns.Map.Data.Cache.MongoItem - **** Display name 'Identification\\Display Name' in database is 'New Name'.\n[2022-05-04 16:19:35.531] INFO (RawQ (Geometries)) Mns.Map.Data.Cache.MongoDataCache - Existing version of 'hw\\rainfall\\geometry types\\ian\\new name' is 0.\n[2022-05-04 16:19:35.885] INFO (RawQ (Geometries)) Mns.Map.Calculator.ItemInfoCache - Adding 'hw\\rainfall\\geometry types\\ian\\new name' to the property descriptor cache due to data based property change(s).\n[2022-05-04 16:19:35.885] INFO (RawQ (Geometries)) Mns.Map.Data.Cache.MongoItem - **** Getting 'new name' from the database for 'Identification\\Display Name.\n[2022-05-04 16:19:35.885] INFO (RawQ (Geometries)) Mns.Map.Calculator.ItemInfoCache - **** Checking if display name 'new name' is in the cache with fromParent = 'False'.\n[2022-05-04 16:19:35.885] INFO (RawQ (Geometries)) Mns.Map.Calculator.ItemInfoCache - **** Adding 'Identification\\Display Name' new name' to cache with fromParent = 'False' for '62729987590b4f4994a27e50' and 'hw\\rainfall\\geometry types\\ian\\new name'\n[2022-05-04 16:19:35.908] INFO (RawQ (Geometries)) Mns.Map.Calculator.ItemInfoCache - The descriptor contains 'new name'.\n[2022-05-04 16:19:36.060] INFO ( 75) Mns.Map.Calculator.ItemInfoCache - Removing 'hw\\rainfall\\geometry types\\ian\\new name' from property descriptor cache due to property change for 'Calculation\\Minimum Start Time'.\n[2022-05-04 16:19:36.060] INFO ( 75) Mns.Map.Data.Cache.MongoItem - **** Getting 'New Name' from the database for 'Identification\\Display Name.\n", "text": "The is an instance on dev where a geometry is loaded and the display name is not being set correctly.After adding logging, the following sequence can be seen:So the 2nd line shows that the display name is being set to “New Name”.However the RawQ thread is still getting “new name” from the database EVEN though the “New Name” has been written to the database.As can be seen in the logging for thread 75, the “New Name” property is being read on this thread correctly.Has anyone got any ideas on what is going wrong?", "username": "Ian_Hannah" }, { "code": "", "text": "You did not share your code so it is hard to tell what is wrong. But here are few things to consider.Until you receive the UpdateResult the write is not necessarily committed to the DB.Depending of your WriteConcern, ReadConcern and ReadPreference, the write is not necessarily commit to the replica set instance that you are reading from. Since sending the write and receive the UpdateResult involves I/0, then chances that a context switch occurs are high and it is possible that you read before the write is completed.From your class names, it looks like you have some cache involve, it is not clear, in the absence of the code, what you do with the cache. may it is the cache that is giving you the wrong result.", "username": "steevej" } ]
Old document is being read on one thread even though it has been updated on another
2022-05-04T16:09:36.986Z
Old document is being read on one thread even though it has been updated on another
1,283
https://www.mongodb.com/…_2_1023x576.jpeg
[ "aggregation", "queries", "mdbw-hackhelp", "newyork-mug" ]
[ { "code": "Principal Developer Advocate, MongoDB", "text": "\nNYC-MUG-MEETUP1288×725 114 KB\nNew York MongoDB User Group is excited to announce its first user group meetup of 2022 on “May the 4th”. The meetup is being hosted to bring together the interested developers and MongoDB enthusiasts in the region, help fast-track them to our exciting MongoDB World Hackathon and share the plan for future events. The event will kick off with an introduction to the hackathon and the dataset, which will be followed by, a quick demo on how you can import and query the dataset to retrieve insightful information on the data. We will then host a quick challenge with some “exciting swag to win”! The event will close with some time for questions and networking In the meantime make sure you join the New York Group to introduce yourself and stay abreast with future meetups and discussions.Event Type: Virtual\nEvent Link: Zoom - Video Conferencing URL\nTo RSVP - Sign in and then please click on the “✓ Going” link at the top of this event page if you plan to attend. The link should change to a green button if you are Going.Principal Developer Advocate, MongoDB", "username": "Michael_Lynn" }, { "code": "", "text": "Hello All,\nWe’re going to be offering some hands-on exercises and challenges. So, if you want to be prepared. It’d be good for you to have a few things in place:Let us know if you have any questions. Looking forward to seeing y’all next week!", "username": "Harshit" }, { "code": "", "text": "Hello All,\nGentle reminder - the event begins in approx 4.5 hours!\nWe are excited to see you. Here’s the meeting link: Launch Meeting - Zoom", "username": "Harshit" } ]
New York MUG: Fast Track into MongoDB World '22 Hackathon
2022-04-21T13:27:36.436Z
New York MUG: Fast Track into MongoDB World ’22 Hackathon
5,173
null
[ "queries", "dot-net" ]
[ { "code": " var offerList = offerCollection.AsQueryable().Where(m => m.ActiveStatus == true);\n var locationMasterList = locationCollection.AsQueryable().Where(k => k.ActiveStatus == true);\n var userCardMasterList = userCardCollection.AsQueryable().Where(n => n.ActiveStatus == true && n.UserId == \"60dade06ca166e25a8efdf58\");\n var bankCardMasterList = bankCardCollection.AsQueryable().Where(k => k.ActiveStatus == true);\n\n Console.WriteLine(\"\\n\\nofferList => \" + offerList.Count());\n\n\n List<UserCardOffer> userCardOffersList = new List<UserCardOffer>();\n\n if (offerList != null)\n {\n var query = from o in offerList\n join l in locationMasterList.AsQueryable() on o.LocationId equals l.LocationId into locationData\n join u in userCardMasterList.AsQueryable() on o.BankId equals u.BankId into cardData\n join b in bankCardMasterList.AsQueryable() on o.BankId equals b.BankId into bankData\n select new UserCardOffer()\n {\n offerMaster = o,\n UserId = cardData.FirstOrDefault().UserId,\n First6Digits = cardData.FirstOrDefault().First6Digits,\n Last4Digits = cardData.FirstOrDefault().Last4Digits,\n LocationName = locationData.FirstOrDefault().LocationName,\n LocationAddress = locationData.FirstOrDefault().LocationAddress,\n LocationType = locationData.FirstOrDefault().LocationType,\n CardImagePath = bankData.FirstOrDefault().CardImagePath\n };\n\n foreach (var e in query)\n {\n Console.WriteLine(\"user id\" + e.UserId);\n }\n\n }", "text": "Hi I am using MongoDB.Driver.Legacy in C#I am trying to link 4 different tables and query, I get \"“The GroupJoin query operator is not supported.” error when I try looping the query.Below is my code, please can some one help, what I need is straight farward query individual tables from MongoDB and then apply query to itMongoClient client1 = new MongoClient(“mongodb://grXXXhstag:ilXXXXXXXsh.io:27017/?authSource=grXXg&readPreference=primary&appname=MongoDB%20Compass&directConnection=true&ssl=false”);\nvar server1 = client1.GetServer();\nvar db1 = server1.GetDatabase(“grooshstag”);\nvar offerCollection = db1.GetCollection(“OfferMasters”);\nvar locationCollection = db1.GetCollection(“LocationMaster”);\nvar userCardCollection = db1.GetCollection(“UserCardMaster”);\nvar bankCardCollection = db1.GetCollection(“BankCardMaster”);", "username": "jithen_dtk" }, { "code": "join$lookupcoll.Aggregate(pipeline)$lookup$search", "text": "Hi, @jithen_dtk,Welcome to the MongoDB Community Forums. I understand that you’re attempting to join 4 collections using LINQ in the legacy C# driver. Unfortunately the LINQ implementation in the legacy C# driver does not support this capability. I updated your example to the latest LINQ3 implementation, which has partial support for join (aka $lookup), but encountered an error there as well. This is likely related to CSHARP-4054.I would encourage you to review our documentation on MongoDB data modelling as you appear to be modelling your domain using RDBMS concepts such as tables and foreign keys. It is often more efficient and logical to model your domain as documents, which dramatically reduces the need for joins between related entities as the related entities are nested and stored as subdocuments.If you are unable to refactor your data model, then you can re-write your logic as the individual queries with the results of the previous ones providing input for subsequent ones. This would involve additional roundtrips to the database, which isn’t ideal, but is a straightforward technique. 
Another option would be to use coll.Aggregate(pipeline) to write your own aggregation pipeline in MQL. You can find some examples on how to build custom MQL aggregation pipelines in our Atlas Text Search documentation. In this case you would be using $lookup rather than $search, but the custom pipeline building technique is the same.Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "offerMasters.Find(new BsonDocument()).ToList(); static async Task Main(string[] args)\n {\n MongoClient dbClient = new MongoClient(\"mongodb://groxxx:[email protected]:20007/?authSource=grxxxxg&readPreference=primary&appname=MongoDB%20Compass&directConnection=true&ssl=false\");\n\n IMongoDatabase db = dbClient.GetDatabase(\"grooshstag\");\n var offerMasters = db.GetCollection<BsonDocument>(\"OfferMasters\");\n\n ////--- \n var bsonValues = await(await offerMasters.FindAsync(new BsonDocumentFilterDefinition<BsonDocument>(new BsonDocument()))).ToListAsync();\n Console.WriteLine(bsonValues.Count);\n //// takes 16 seconds\n\n //\n var rawCollection = db.GetCollection<RawBsonDocument>(\"OfferMasters\");\n var rawValues = await (await rawCollection\n .FindAsync(new BsonDocumentFilterDefinition<RawBsonDocument>(new BsonDocument()))).ToListAsync();\n\n Console.WriteLine(rawValues.Count + \" raw values took \");\n // takes 16 seconds\n\n ///\n var typedCollection = db.GetCollection<OfferMaster>(\"OfferMasters\");\n var typedValues = await (await typedCollection.FindAsync(new BsonDocument())).ToListAsync();\n Console.WriteLine(typedValues.Count);\n ///takes 16 seconds\n\n //---\n var resultDoc = offerMasters.Find(new BsonDocument()).ToList();\n foreach (var item in resultDoc)\n {\n Console.WriteLine(item.ToString());\n }\n //// takes 16 seconds\n }", "text": "offerMasters.Find(new BsonDocument()).ToList();Hi James,I have tried all types of variations, please see my below code all take 16 seconds to fetch 1800 records, is there a way to reduce the time", "username": "jithen_dtk" }, { "code": "mongod_idmongod", "text": "Hi, @jithen_dtk,The time taken to fetch documents depends on a variety of factors including network conditions, query time on the server, data size, and deserialization on the client, just to name a few. Your first query will also be influenced by establishment of connections in the connection pools including TCP socket establishment, SSL/TLS handshake (not applicable in this case), and authentication.When I tried to reproduce the issue that you’re observing with your above code, I noticed that the time taken is heavily dependent on the size of the BSON documents and the round-trip time to the server. With small documents on a locally running mongod, the query time was sub-second. 
With large documents and a remote server, the overall time was dominated by the round-trip time to the server.I would suggest considering and trying the following:Sincerely,\nJames", "username": "James_Kovacs" }, { "code": "_idmongod var offers = db.GetCollection<BsonDocument>(\"OfferMasters\");\n var filter = Builders<BsonDocument>.Filter.Eq(\"ActiveStatus\", true);\n var offerDoc = offers.Find(filter);\n var query = from o in offerDoc\n join l in locationDoc on o.LocationId equals l.LocationId into locationData\n join u in userCardDoc on o.BankId equals u.BankId into cardData\n join b in bankCardDoc on o.BankId equals b.BankId into bankData\n select new UserCardOffer()\n {\n offerMaster = o,\n UserId = cardData.FirstOrDefault().UserId,\n First6Digits = cardData.FirstOrDefault().First6Digits,\n Last4Digits = cardData.FirstOrDefault().Last4Digits,\n LocationName = locationData.FirstOrDefault().LocationName\n };\n", "text": "Hi James,Thank you for the response.Below are my answers to your queriesI tried the below codethis takes few milliseconds, but I am unable to join two documents and query, please can you point me to some sample code in C#I want to do something like thisThanks,\nJithen", "username": "jithen_dtk" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
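To make the custom-pipeline suggestion concrete, the LINQ joins above translate to chained $lookup stages. A sketch in mongosh syntax using the field names from the original LINQ query; the same stages can be passed as BsonDocument stages to coll.Aggregate(pipeline) in C#:

db.OfferMasters.aggregate([
  { $match: { ActiveStatus: true } },
  { $lookup: { from: "LocationMaster", localField: "LocationId", foreignField: "LocationId", as: "locationData" } },
  { $lookup: { from: "UserCardMaster", localField: "BankId", foreignField: "BankId", as: "cardData" } },
  { $lookup: { from: "BankCardMaster", localField: "BankId", foreignField: "BankId", as: "bankData" } }
])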
Not able to query data
2022-04-28T12:26:18.225Z
Not able to query data
2,440
null
[ "kafka-connector" ]
[ { "code": "\"connector.class\":\"com.mongodb.kafka.connect.MongoSourceConnector\",\n \"key.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n \"value.converter\":\"org.apache.kafka.connect.storage.StringConverter\",\n \"connection.uri\":\"XXX\",\n\t \"database\":\"XXX\",\n \"collection\":\"XXX\",\n \"pipeline\":\"[{\\\"$match\\\": { \\\"$or\\\": [{\\\"operationType\\\": \\\"insert\\\"},{\\\"operationType\\\": \\\"delete\\\"},{\\\"operationType\\\": \\\"update\\\"}]}}]\", \n\t \"copy.existing\":\"true\",\n\t \"key.converter.schemas.enable\":\"true\",\n \"value.converter.schemas.enable\":\"true\"\n{\n \"_id\": {\n \"_id\": \"XXX\",\n \"copyingData\": true\n },\n \"operationType\": \"insert\",\n \"documentKey\": {\n \"_id\": \"XXX\"\n },\n \"fullDocument\": { ... }\n", "text": "Hi all,I’m tried to get data from mongoDB → Kafka topic → ksqlDB processingMy expected data via mongoDB atlas source connector able capture insert,update and delete event and also read all existing to Kafka topic and my connector configure as:Data in Kafka topic as:So I need to flat/unwrap data under fullDocument but still have operationType.How to do that?", "username": "may_rununrath" }, { "code": " \"publish.full.document.only\": true", "text": "if you just want the fullDocument just set", "username": "Robert_Walters" }, { "code": "", "text": "Ah I re-read your question, if you still want operationType and fiullDocument, I think you can just $project these two fields in the pipeline. or use an SMT to parse the fields. An SMT would require you use schemas though.", "username": "Robert_Walters" } ]
How to flatten data in fullDocument with Kafka connector?
2021-11-20T10:48:24.024Z
How to flatten data in fullDocument with Kafka connector?
3,370
null
[ "kafka-connector" ]
[ { "code": "Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286DatabaseADatabaseBDatabaseA.useruser-source-connectorDatabaseB.eventsReplication Oplog WindowDatabaseA.useruser-source-connectoruser-source-connectoruser-source-connectorFailed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286DatabaseADatabaseBDatabaseBDatabaseA.user", "text": "Hi,I have recently had a problem with the MongoDB Source Connector failing with the error message Failed to resume change stream: Resume of change stream was not possible, as the resume point may no longer be in the oplog. 286.I’m aware what this means, i.e the connector cannot longer find the offset, so it cannot reliably continue consuming the change stream → fatal error.To recover the failing connector, the now faulty resume token must be tombstoned / removed from the .offset topic. This is of course time consuming as well as error prone, and more things.I’m wondering how the Source Connector ended up in this situation, and what can be done to mitigate it.If this indeed is true, what are some ways of mitigating the risk of this happening?best regards\nMarcus Wallin", "username": "Marcus_Wallin" }, { "code": "", "text": "Hi, check out the article https://www.mongodb.com/docs/kafka-connector/current/troubleshooting/recover-from-invalid-resume-token/#invalid-resume-tokenTo mitigate this configure a heartbeat interval as described later in that article https://www.mongodb.com/docs/kafka-connector/current/troubleshooting/recover-from-invalid-resume-token/#prevention.", "username": "Robert_Walters" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to prevent "Failed to resume change stream" on rarely changed collections
2022-05-04T12:35:33.060Z
How to prevent &ldquo;Failed to resume change stream&rdquo; on rarely changed collections
3,740
null
[ "python" ]
[ { "code": "", "text": "I am trying to Upload a huge amount of data from Dropbox to MongoDB. Is there any easier way to do it by directly using the file links from the drobox or any connection via python /mongo import to do it easily.", "username": "Pranav_Sane" }, { "code": "", "text": "Dropbox has a developer API … you should be able to do this programmatically.", "username": "Jack_Woehr" }, { "code": "", "text": "Thank you sir. I will check on that", "username": "Pranav_Sane" } ]
Is there any way to upload CSV files from Dropbox directly, without downloading them to the local machine, via mongoimport or any other similar command
2022-05-02T16:55:32.747Z
Is there any way to upload CSV files from Dropbox directly, without downloading them to the local machine, via mongoimport or any other similar command
1,522
null
[ "java", "python", "mongodb-shell", "mdbw22-hackathon" ]
[ { "code": "", "text": "I am new to hackathons, but here’s an overview of my coding experience so far:I am a web developer and a software engineering student. I have over one year of internship experience and I am currently working as the executive web developer for a blooming organization.I’ve also been a product developer for a software company and developed web pages using JSP code & MySQL.I have accomplished various projects, those including a face recognition attendance system and a concert ticket booking system.Python, C, HTML & CSS, Java, MySQL & Oracle - Database Management and C++6 pm IST to 10 pm IST", "username": "Soumya_Bailkeri" }, { "code": "", "text": "Welcome @Soumya_Bailkeri and glad to have you.That’s some very valuable experience there and no doubt of interest to the other participants too. Let’s hope somebody snaps you up!", "username": "Shane_McAllister" }, { "code": "", "text": "Thank you @Shane_McAllister for the encouragement. I hope I find a team mate too.\nI would, however, love to work on a solo project otherwise\nI’m very excited to be a part of this hackathon!", "username": "Soumya_Bailkeri" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Soumya_Bailkeri is looking for a project!
2022-05-03T18:16:01.371Z
Soumya_Bailkeri is looking for a project!
2,620