image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---|
null | [] | [
{
"code": "",
"text": "I recently deployed a Realm app in the hopes of deploying just static files. Unfortunately I have made a mistake and overwritten some things like functions and triggers. I can’t seem to find anywhere in the Realm UI where I can recover any of these things. Does anyone have any suggestions? Thanks",
"username": "Lukas_deConantseszn1"
},
{
"code": "",
"text": "Nvm this was something else.",
"username": "Lukas_deConantseszn1"
}
] | Recover Realm app changes | 2020-07-18T22:24:28.257Z | Recover Realm app changes | 1,172 |
null | [] | [
{
"code": "",
"text": "Hello, I need to host numerous(>20,000,000, >150GB) text data and serve it to the application servers, is MongoDB suitable for this kind of usage? If not, is there a well-established solution for this problem?Thanks in advance!",
"username": "brlin"
},
{
"code": "",
"text": "Hello @brlin welcome to the community!I need to host numerous(>20,000,000, >150GB) text data and serve it to the application servers, is MongoDB suitable for this kind of usageThat should be no problem, I used MongoDB for TextSearches with more than 2 Terra Byte in RAM. That was not the smallest machine . But a good end result will be mainly driven by a well fitting schema and indexes. When your plans come to a more concrete state, feel free to post your questions here. We will try to help.Beside this, I’d recommend getting Professional Advice to plan a deployment of this size. There are many considerations, and an experienced consultant can provide better advice with a more holistic understanding of your requirements. Some decisions affecting scalability (such as shard key selection) are more difficult to course correct once you have a significant amount of production data.Cheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Usage on big quantity of text data? | 2020-07-20T06:18:52.457Z | Usage on big quantity of text data? | 1,596 |
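As a side note to the thread above: full-text queries at that scale usually rely on a text index. A minimal shell sketch, with a hypothetical collection and field name (not taken from the thread):

```js
// Hypothetical collection "articles" with a string field "body".
db.articles.createIndex({ body: "text" });

// Find documents matching the search terms, sorted by relevance score.
db.articles.find(
  { $text: { $search: "mongodb indexing" } },
  { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } }).limit(10);
```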
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "Taking a multi-user scrabble game as an example - 5 players in game - current player makes move which is sent to realm function - trigger fires on update to game table with new letter - game table contains player info for that game (disconnect at the moment between player info and info needed to communicate/push to that user) - is it possible to use realm features to send new letter back to all players of that game? looking at sync functionality but not sure it is relevant to situation above?edit: just found “built-in services: push notification” … is this a good avenue to explore? Except users could be on desktop also.",
"username": "Mic_Cross"
},
{
"code": "",
"text": "Hi @Mic_Cross,\nThe Realm sync which is in beta might be an option.However, I would explore using a collection.watch method from the relevant sdk of your choice.Example see the react sdk:\nhttps://docs.mongodb.com/realm-sdks/js/10.0.0-beta.9/Realm.RemoteMongoDBCollection.htmlThis is based in MongoDB change streams which listen to configure data changes and allow you to trigger an application logic on every relevant change.Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "I am using WebSDK with GraphQL, ApolloClient etc - just found this in readme for RealmWeb: “the Realm Web project will not include a Realm Sync client in any foreseeable future.” \" A limited selection of services are implemented at the moment:",
"username": "Mic_Cross"
},
{
"code": "",
"text": "We’re currently working on bringing MongoDB watch support to Realm Web (and it is already available in the legacy Stitch SDK for browsers).A Realm Sync client in the Realm Web SDK is a different story (and technically a very different thing), as that won’t be available in any foreseeable future.",
"username": "kraenhansen"
},
{
"code": "",
"text": "went back to mongodb stitch tutorial and have “add collection-level watch” working.",
"username": "Mic_Cross"
}
] | Communicating with logged on users after trigger fires | 2020-07-16T22:08:10.218Z | Communicating with logged on users after trigger fires | 1,548 |
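A note on the collection.watch approach discussed in the thread above: with the plain Node.js driver, a change stream looks roughly like the sketch below. Database and collection names are placeholders, and the Realm Web SDK call differs from this driver API.

```js
// Sketch using the MongoDB Node.js driver's change streams (not the Realm Web SDK).
const { MongoClient } = require("mongodb");

async function watchGames() {
  const client = await MongoClient.connect("mongodb://localhost:27017");
  const games = client.db("scrabble").collection("games"); // hypothetical names

  // Watch only updates; fullDocument gives the post-update state of the game.
  const stream = games.watch(
    [{ $match: { operationType: "update" } }],
    { fullDocument: "updateLookup" }
  );

  stream.on("change", (change) => {
    // Push the new letter/state to the connected players of that game here.
    console.log("Game changed:", change.fullDocument);
  });
}
```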
null | [
"aggregation"
] | [
{
"code": "const Customer1 = {\n id: 1,\n name: 'Customer Name',\n projects: [\n {\n name: 'Project 1',\n description: 'Project description',\n instances: [10],\n },\n {\n name: 'Project 2',\n description: 'Project description',\n instances: [10, 20],\n },\n ],\n};\nconst Instances = [\n {\n id: 10,\n operatingSystem: 'Microsoft Windows 2012R2',\n version: '3.1.5',\n product: {\n id: 100,\n name: 'Product 1',\n vendor: 'Vendor A',\n },\n },\n {\n id: 20,\n operatingSystem: 'Microsoft Windows 2016',\n version: '4.1.0',\n product: {\n id: 200,\n name: 'Product 5',\n vendor: 'Vendor B',\n },\n },\n {\n id: 30,\n operatingSystem: 'Microsoft Windows 2019',\n version: '3.0',\n product: {\n id: 300,\n name: 'Product 2',\n vendor: 'Vendor A',\n },\n },\n {\n id: 40,\n operatingSystem: 'Linux',\n version: '1.0',\n product: {\n id: 100,\n name: 'Product 1',\n vendor: 'Vendor A',\n },\n },\n];\nconst Results = {\n id: 1,\n name: 'Customer Name',\n projects: [\n {\n name: 'Project 1',\n description: 'Project description',\n products: [\n {\n id: 100,\n name: 'Product 1',\n vendor: 'Vendor A',\n instances: [\n {\n id: 10,\n operatingSystem: 'Microsoft Windows 2012R2',\n version: '3.1.5',\n },\n {\n id: 10,\n operatingSystem: 'Microsoft Windows 2012R2',\n version: '3.1.5',\n },\n ],\n },\n {\n id: 200,\n name: 'Product 5',\n vendor: 'Vendor B',\n instances: [\n {\n id: 20,\n operatingSystem: 'Microsoft Windows 2016',\n version: '4.1.0',\n },\n ],\n },\n ],\n },\n {\n name: 'Project 2',\n description: 'Project description',\n products: [\n {\n id: 300,\n name: 'Product 2',\n vendor: 'Vendor A',\n instances: {\n id: 30,\n operatingSystem: 'Microsoft Windows 2019',\n version: '3.0',\n },\n },\n {\n id: 100,\n name: 'Product 1',\n vendor: 'Vendor A',\n instances: [\n {\n id: 40,\n operatingSystem: 'Linux',\n version: '1.0',\n },\n {\n id: 10,\n operatingSystem: 'Microsoft Windows 2012R2',\n version: '3.1.5',\n },\n ],\n },\n ],\n },\n ],\n};\n",
"text": "I’ve got the following ‘customers’ collection with a customer document:I’ve got an ‘instances’ collection with the following documents:After spending some hours, couldn’t find a way to aggregate my ‘customer details’ as expected:Is it even possible?\nWould appreciate your assistance",
"username": "Yoav_Melamed"
},
{
"code": "db.customers.aggregate([\n {\n $unwind: '$projects',\n },\n {\n $lookup: {\n from: 'instances',\n let: {\n ids: '$projects.instances',\n },\n pipeline: [\n {\n $match: {\n $expr: {\n $in: ['$_id', '$$ids'],\n }\n }\n },\n {\n $group: {\n _id: '$product._id',\n name: {\n $first: '$product.name',\n },\n vendor: {\n $first: '$product.vendor',\n },\n instances: {\n $push: {\n _id: '$_id',\n operatingSystem: '$operatingSystem',\n version: '$version',\n }\n }\n }\n }\n ],\n as: 'projects.products',\n }\n },\n {\n $group: {\n _id: '$_id',\n name: {\n $first: '$name',\n },\n projects: {\n $push: '$projects',\n }\n }\n }\n]).pretty();\n",
"text": "Is it even possible?Everything is possible with MongoDB’s aggregation pipeline! Well, almost ",
"username": "slava"
},
{
"code": "",
"text": "HiEverything is possible with MongoDB’s aggregation pipeline! Well, almostYou might even can cook coffee with it when come close enough to the CPUs and miss out the on or the other advice. Michael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "It’s only possible when @slava around!\nThank you for your kind help ",
"username": "Yoav_Melamed"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help with data aggregation | 2020-07-18T22:27:02.977Z | Help with data aggregation | 1,326 |
null | [] | [
{
"code": "foobarmongo-init.jsversion: '3'\nservices: \n # skipped some lines\n mongo:\n image: \"mongo:4.2\"\n ports: \n - \"27017:27017\"\n volumes: \n - ./init-mongo.js:/docker-entrypoint-initdb.d/init-mongo.js:ro\nuse foo;\ndb.bar.insertMany([\n { \"id\": \"bd7b69fa-9207-4996-91cd-b7eec3fce21b\", \"body\": \"This is the first entry..\", \"created\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"modified\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"author\": \"sntshk\" },\n { \"id\": \"8c002a9b-6532-4125-8b58-e2af55a7d60e\", \"body\": \"This is the second entry..\", \"created\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"modified\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"author\": \"sntshk\" },\n { \"id\": \"70032cd2-0c22-41cf-bf02-b77f52dcdb76\", \"body\": \"This is the third entry..\", \"created\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"modified\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"author\": \"sntshk\" },\n { \"id\": \"88d9655b-8364-4062-9153-ad35766d3eb9\", \"body\": \"This is the fourth entry..\", \"created\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"modified\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"author\": \"sntshk\" },\n { \"id\": \"6dddde02-02fa-4027-b469-ab2e3e7dea62\", \"body\": \"This is the fifth entry..\", \"created\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"modified\": \"2020-07-16 18:28:55.933778455 +0000 UTC\", \"author\": \"sntshk\" },\n]);\n",
"text": "I am working on an app, and…I want that collection to be already populated so that when it start, there is something to fiddle with.I have learned about mongo-init.js, and want to use it for the same.My docker-compose.yml:My mongo-init.js:Can you help me out? What do you suggest?",
"username": "sntshk"
},
{
"code": "command",
"text": "Hi @sntshk,I believe there are several ways to do it. I suggest to explore the command property to specify a command to be run in the container to execute a shell command:Compose file specificationIf you want you can use a string or mount/copy a js file to the container.Please let me know if you need more assistance.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | How do I add seed data when working with Docker Compose? | 2020-07-16T20:34:45.599Z | How do I add seed data when working with Docker Compose? | 9,959 |
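One detail worth flagging about the init script in the thread above: files in /docker-entrypoint-initdb.d are executed through the mongo shell as JavaScript, where the `use foo;` statement is not valid; `db.getSiblingDB()` is the usual substitute. A hedged sketch of how init-mongo.js could start (entries shortened):

```js
// init-mongo.js — run automatically by the official mongo image on first start.
// "use foo;" is interactive-shell syntax; in a .js init script switch databases like this:
db = db.getSiblingDB("foo");

db.bar.insertMany([
  { id: "bd7b69fa-9207-4996-91cd-b7eec3fce21b", body: "This is the first entry..", author: "sntshk" },
  { id: "8c002a9b-6532-4125-8b58-e2af55a7d60e", body: "This is the second entry..", author: "sntshk" }
]);
```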
null | [
"aggregation"
] | [
{
"code": "",
"text": "I have tried query on Mongo playground and Local , the result is not same , I am using Mongo atlas 4.2.8 enterprise here and u can see on Mongo Playground hereMongo playground: a simple sandbox to test and share MongoDB queries online\nis that something wrong maybe on my query? or is this because of what?",
"username": "Virtual_Database"
},
{
"code": "[\n {\n \"_id\": ObjectId(\"5f1284078a7dd8a6b9140c97\"),\n \"out\": [\n {\n \"_id\": ObjectId(\"5f1284078a7dd8a6b9140c95\"),\n \"accessLevel\": \"organization_admin\",\n \"status\": true,\n \"title\": \"CEO\"\n }\n ]\n },\n {\n \"_id\": ObjectId(\"5f1284078a7dd8a6b8140c99\"),\n \"out\": []\n }\n]\n```\n\nThankss, \nPavel",
"text": "Hi @Virtual_Database,Can you share the atlas result and confirm if you need it to be compared with:",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongo playground got the result I want but in local the query doesn't work | 2020-07-19T06:28:14.727Z | Mongo playground got the result I want but in local the query doesn’t work | 2,243 |
null | [
"node-js"
] | [
{
"code": "MONGO_DB_URL=mongodb://localhost:27017\nAUTH_DB_NAME=authentication\nconst MongoClient = mongodb.MongoClient\nconst url = process.env.MONGO_DB_URL\nconst mongodbOptions = {\nuseNewUrlParser: true,\nuseUnifiedTopology: true\n}\nconst dbClient = new MongoClient(url, mongodbOptions)\n\nconst dbName = process.env.AUTH_DB_NAME\n\nconst authenticationDb = async () => {\n const dbClient = await client\n if (!dbClient.isConnected()) {\n await dbClient.connect()\n }\n const db = dbClient.db(dbName)\n return db\n}\n",
"text": "Hi,The following message is being triggered whenever I run my node backend.\nDeprecationWarning: current Server Discovery and Monitoring engine is deprecated, and will be removed in a future version. To use the new Server Discover and Monitoring engine, pass option { useUnifiedTopology: true } to the MongoClient constructor.My environment:Connection:\n.env.jsThe connection works as well as every database operation, yet I can’t get rid of this warning.Does anyone have any idea why this is happening?Thanks in advance.",
"username": "Christian_Saboia"
},
{
"code": "",
"text": "Please check these linkshttps://jira.mongodb.org/browse/NODE-2138",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Thank you, Ramachandra.",
"username": "Christian_Saboia"
},
{
"code": "",
"text": "I’ve gone through all the threads, including the linked one (which I actually had already researched beforehand), but as for the code I posted, I am already doing what it was supposed to be done, according to the threads info.\nThe useUnifiedTopology: true option is being passed to the MongoClient, not to the client.connect.\nThe used packages, as well as the MongoDB itself, are up to date. Maybe I’m missing something, but I could not find a solution for this warning in those threads. Would you recommend any step further?\nI truly appreciate the help and attention.",
"username": "Christian_Saboia"
},
{
"code": "useUnifiedTopology: trueMongoClientconst dbClient = await client\n",
"text": "Hi @Christian_Saboia,\nIf you are indeed passing useUnifiedTopology: true to your MongoClient constructor, then this message should not display. Can you please update your code sample above if it does not accurately reflect the code you are running? Specifically, I’m seeing this code which seems to refer to a variable which does not exist:",
"username": "mbroadst"
},
{
"code": "import mongodb from 'mongodb'\nimport Grid from 'gridfs-stream'\nimport multer from 'multer'\nimport GridFsStorage from 'multer-gridfs-storage'\nimport path from 'path'\nimport crypto from 'crypto'\n\nconst MongoClient = mongodb.MongoClient\nconst url = process.env.MONGO_DB_URL\nconst mongodbOptions = {\n useNewUrlParser: true,\n useUnifiedTopology: true\n}\n\nconst client = new MongoClient(url, mongodbOptions)\n\nconst fileHandler = (bucketName) => (req, file) => {\n return new Promise((resolve, reject) => {\n crypto.randomBytes(16, (err, buf) => {\n if (err) {\n return reject(err)\n }\n const filename = buf.toString('hex') + path.extname(file.originalname)\n\n // The prop metadata.expires indicates an expiration date in ms for the case where the request fails for any reason.\n // In this (failure) scenario, the expired files can be removed safely.\n // It is the responsability of further request processors (controllers), to remove metadata.expires property\n // from the files it has properly processed or to remove the file (or files), when applicable.\n // An aditional cron job may be created in order to remove orfanned expired files.\n const fileInfo = {\n filename: filename,\n bucketName,\n metadata: {\n originalName: file.originalname,\n encoding: file.encoding,\n mimetype: file.mimetype,\n size: file.size,\n expires: Date.now() + 1000 * 60 * 60\n }\n }\n resolve(fileInfo)\n })\n })\n}\n\nconst storage = (dbName, bucketName) => {\n const storageUrl = `${url}/${dbName}`\n console.log({ storageUrl })\n\n return new GridFsStorage({\n url: storageUrl,\n file: fileHandler(bucketName)\n })\n}\n\n// accepted options: { fileFilter, limits, preservePath }\nconst makeUpload = async (dbName, bucketName, options = {}) =>\n multer({ storage: storage(dbName, bucketName), ...options })\n\nconst makeGridStream = async (dbName, bucketName) => {\n if (!client.isConnected()) {\n await client.connect()\n }\n const db = client.db(dbName)\n const gfs = Grid(db, mongodb)\n gfs.collection(bucketName)\n return gfs\n}\n\nconst makeBucket = async (dbName, options) => {\n if (!client.isConnected()) {\n await client.connect()\n }\n const db = client.db(dbName)\n return new mongodb.GridFSBucket(db, options)\n}\n\nexport { client, makeUpload, makeGridStream, makeBucket }\nimport { client } from './db'\n\nconst dbName = process.env.AUTH_DB_NAME\n\nconst authenticationDb = async () => {\n const dbClient = await client\n if (!dbClient.isConnected()) {\n await dbClient.connect()\n }\n const db = dbClient.db(dbName)\n return db\n}\n\nexport default authenticationDb\n",
"text": "Hi Matt,\nActually, there are to files. One is called db.js, which follows:The second one is called authenticationDB.js:This is the actual code, which, despite de warning is working properly.\nThanks in advance!",
"username": "Christian_Saboia"
},
{
"code": "",
"text": "I’ve managed to find out what was happening. The error was on GridFsStorage creation. I’ve added options property and the message is gone.\nThanks.",
"username": "Christian_Saboia"
},
{
"code": "",
"text": "A post was split to a new topic: DeprecationWarning: current Server Discovery and Monitoring engine is deprecate",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | DeprecationWarning being triggered even with useUnifiedTopology set to true | 2020-04-09T20:25:28.446Z | DeprecationWarning being triggered even with useUnifiedTopology set to true | 27,088 |
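For readers hitting the same warning: the fix mentioned in the last reply was passing connection options to multer-gridfs-storage as well, since it opens its own connection from the URL. A hedged sketch of what that might look like, reusing the variables from the code in the thread (the exact option key can vary by multer-gridfs-storage version):

```js
// Sketch: pass the same driver options to GridFsStorage so its internal
// MongoClient also uses the unified topology (assumed option shape).
const storage = new GridFsStorage({
  url: `${url}/${dbName}`,
  options: { useNewUrlParser: true, useUnifiedTopology: true },
  file: fileHandler(bucketName)
});
```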
null | [
"rust"
] | [
{
"code": "let result = coll.insert_one(doc! { \"x\": 1 }, None).await?;\npub struct Post {\n id: bson::ObjectID\n name: String\n}\n",
"text": "I’m reading the documentation for this, now v1.0.0, official mongodb Rust driver.While it shows how to insert some arbitrary dataI’m wondering how to define data structures.Can you show an example data structure definiton?For instance something likeonly correct.",
"username": "dalu"
},
{
"code": "",
"text": "Is this question too trivial?",
"username": "dalu"
},
{
"code": "serdeSerializemongodb::bson::to_bsonuse mongodb::bson;\nuse serde::Serialize;\n\n#[derive(Serialize)]\nstruct Post {\n #[serde(rename = \"_id\")]\n id: bson::oid::ObjectId,\n name: String,\n}\n\nlet post = Post {\n id: bson::oid::ObjectId::new(),\n name: \"Bill\".to_string(),\n};\n\nlet serialized_post = bson::to_bson(&post).unwrap();\nlet result = coll\n .insert_one(bson::from_bson(serialized_post).unwrap(), None)\n .await\n .unwrap();\n\nbson",
"text": "Hi @dalu!Your question is not too trivial at all! In fact, it touches on one of the nicest things about using MongoDB in Rust, which is that converting between BSON and your Rust types can be done seamlessly using serde.For your specific example, you’ll need to derive the Serialize trait on your struct. Then, you can use mongodb::bson::to_bson to encode it to BSON for insertion.Complete example:For more examples of working with BSON in Rust, check out the documentation for the bson crate: bson - Rust",
"username": "Patrick_Freed"
},
{
"code": "mod model;\nmod handler;\nuse mongodb::Client;\n\nuse std::io;\nuse actix_web::{HttpServer, App, web};\n\npub struct State {\n client: mongodb::Client\n}\n\n#[actix_rt::main]\nasync fn main() -> io::Result<()> {\n let client = Client::with_uri_str(\"mongodb://localhost:27017/\").await.expect(\"mongo error\");\n\n HttpServer::new(move || {\n App::new()\n .data(State{client: client.clone()})\n .route(\"/\", web::get().to(handler::index))\n })\n .bind(\"127.0.0.1:8080\")?\n .run()\n .await\n}\nuse serde::{Serialize, Deserialize};\n\n#[derive(Serialize, Deserialize)]\npub struct Blog {\n #[serde(rename = \"_id\")]\n pub id: bson::oid::ObjectId,\n pub name: String,\n pub description: String,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct Post {\n #[serde(rename = \"_id\")]\n pub id: bson::oid::ObjectId,\n pub slug: String,\n pub author: String,\n pub title: String,\n pub body: String,\n}\n\n#[derive(Serialize, Deserialize)]\npub struct Comment {\n #[serde(rename = \"_id\")]\n pub id: bson::oid::ObjectId,\n pub author: String,\n pub body: String,\n}\nuse actix_web::{web, HttpRequest, Responder, HttpResponse};\nuse crate::State;\nuse crate::model::Blog;\n\npub async fn index(_data: web::Data<State>, req: HttpRequest) -> impl Responder {\n let coll = _data.client.database(\"mblog\").collection(\"blogs\");\n let cursor = coll.find(None, None);\n let mut m: Vec<Blog> = Vec::new();\n\n for result in cursor {\n if let Ok(item) = result {\n m.push(item)\n }\n }\n\n HttpResponse::Ok().json(m)\n}\n[package]\nname = \"mblog\"\nversion = \"0.1.0\"\nauthors = [\"\"]\nedition = \"2018\"\n\n# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html\n\n[dependencies]\nmongodb = \"1.0.0\"\nactix-web = \"2\"\nactix-rt = \"1.1.1\"\nserde = \"1.0.114\"\nbson = \"1.0.0\"\nerror[E0277]: `impl std::future::Future` is not an iterator\n --> src/handler.rs:10:19\n |\n10 | for result in cursor {\n | ^^^^^^ `impl std::future::Future` is not an iterator\n |\n = help: the trait `std::iter::Iterator` is not implemented for `impl std::future::Future`\n = note: required by `std::iter::IntoIterator::into_iter`\n\nerror: aborting due to previous error\n\nFor more information about this error, try `rustc --explain E0277`.\nerror: could not compile `mblog`.\n",
"text": "Hello Patrick,thank you for your response.\nWould it be too much to ask for a complete CRUD example with actix-web?I’m pretty new to Rust and I’m finding it very unintuitive, having a hard time with learning the language, despite this being my 4th programming language I’m getting into.I’ll post the files I have and the cargo.tomlmain.rsmodel.rshandler.rsThe following error occurs",
"username": "dalu"
},
{
"code": "use actix_web::{web, HttpRequest, Responder, HttpResponse};\nuse crate::State;\nuse crate::model::Blog;\n\npub async fn index(_data: web::Data<State>, req: HttpRequest) -> impl Responder {\n let coll = _data.client.database(\"mblog\").collection(\"blogs\");\n let cursor = coll.find(None, None).await?;\n let mut m: Vec<Blog> = Vec::new();\n\n while let Some(result) = cursor.next().await {\n match result {\n Ok(item) => {\n m.push(item)\n }\n Err(e) => return Err(e.into()),\n }\n }\n\n HttpResponse::Ok().json(m).await\n}\nerror[E0599]: no method named `next` found for struct `mongodb::cursor::Cursor` in the current scope\n --> src/handler.rs:10:37\n |\n10 | while let Some(result) = cursor.next().await {\n | ^^^^ method not found in `mongodb::cursor::Cursor`\n |\n = help: items from traits can only be used if the trait is in scope\n = note: the following traits are implemented but not in scope; perhaps add a `use` for one of them:\n candidate #1: `use std::iter::Iterator;`\n candidate #2: `use std::str::pattern::Searcher;`\n candidate #3: `use tokio::stream::StreamExt;`\n candidate #4: `use futures_util::stream::stream::StreamExt;`\n candidate #5: `use serde_json::read::Read;`\n\nerror[E0277]: the trait bound `mongodb::error::Error: actix_http::error::ResponseError` is not satisfied\n --> src/handler.rs:7:45\n |\n7 | let cursor = coll.find(None, None).await?;\n | ^ the trait `actix_http::error::ResponseError` is not implemented for `mongodb::error::Error`\n |\n = note: required because of the requirements on the impl of `std::convert::From<mongodb::error::Error>` for `actix_http::error::Error`\n = note: required by `std::convert::From::from`\n\nerror: aborting due to 2 previous errors\n\nSome errors have detailed explanations: E0277, E0599.\nFor more information about an error, try `rustc --explain E0277`.\nerror: could not compile `mblog`.\n",
"text": "I have changed the handler.rs index function to look like this:but I receive the following error:",
"username": "dalu"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Rust data structure example | 2020-06-28T21:31:20.444Z | Rust data structure example | 9,543 |
null | [] | [
{
"code": "",
"text": "Hi all, thanks for your time.\nI wanted to know if there is some adapter that exists that parses Couch DB queries into MongoDB queries. Any suggestions are welcome. Thanks.",
"username": "Sreeramji_K_S"
},
{
"code": "",
"text": "Welcome to the community @Sreeramji_K_S!I’m not aware of a solution for transforming CouchDB queries into MongoDB Query Language, but if you are migrating an application it would be best to take advantage of MongoDB’s native query language so you do not have the overhead of transformation or risk of subtle bugs due to differences in query syntax.The JSON Mango Query language added in the CouchDB 2.0 release was inspired by the MongoDB query language, so there are a lot of similarities and it should be straightforward to migrate.There are additional MongoDB Query and Projection Operators you may want to take advantage of as well as the Aggregation Framework for more complex queries and transformations.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks a lot @Stennie_X. That was helpful. I am actually trying to move from Couch to Cosmos for my application. But the constraint is that I can’t change the code.",
"username": "Sreeramji_K_S"
},
{
"code": "",
"text": "Hi @Sreeramji_K_S,Migrating to Cosmos will introduce some further challenges, since Cosmos is an emulation of MongoDB with a distinct server implementation and features.If your goal is to move your application between environments without changing any code, I would look for a compatible hosted solution for your existing database backend.What challenges are you trying to solve for your CouchDB implementation?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks @Stennie_X . I agree with your point. But I currently have these constraints:If possible, can you please elaborate on the “further challenges” that can come up?Regards\nSreeramji",
"username": "Sreeramji_K_S"
},
{
"code": "",
"text": "Hi Sreeramji,You will effectively be using three APIs (CouchDB, MongoDB, and Cosmos DB) with varying behaviours and limits. You can perhaps translate to lowest common denominator, but your legacy application is only expecting CouchDB and you may have to invest significant effort into creating a translation layer.For example, querying and indexing will not be identical across all three and some error codes, request handling, and limits will be specific to one of the APIs involved. Cosmos DB provisions and limits resources based on Request Units (RUs), which is a request handling concept you do not have to consider in either CouchDB or MongoDB. Cosmos DB currently has a 2MB document size limit (versus 16MB in MongoDB and 8MB by default in CouchDB 3.0).You’ll have to decide what makes sense for your use case and constraints, but this approach sounds like a lot of effort compared to using a hosted version of CouchDB on Azure or doing a proper rewrite to support a different database backend.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks a lot @Stennie_X. Your inputs helped a lot in channeling my work. Sorry for the delayed response.Regards\nSreeramji",
"username": "Sreeramji_K_S"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Couch DB queries to MongoDB queries - Adapter / Parser | 2020-06-24T18:30:31.969Z | Couch DB queries to MongoDB queries - Adapter / Parser | 3,636 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi everyone.\nI’m new person in “Mongo World”.\nI knowledge about mongoDB is very pure, but now I have very big project which DB should be MongoDB.\nIn all my work experience I mainly worked with MySql so when I need to architect a Database I’m thinking about relational DB. And I architect a DB structure.\nBut now I have vary big problem I need to crate STRONG DB structure with many tables with relations and the problem is that that I should do It by mongoDB.\nCan someone help me with this?\nThanks that here me so long!))\nRegards,\nDavit",
"username": "Davit_Avagyan"
},
{
"code": "",
"text": "Hello @Davit_Avagyan welcome to the community!It is great that you want to utilize the strong features of MongoDB. As you mention you have a solid SQL background. To get the most out of an noSQL Setup, you need to change the way of thinking about schema design. Your first goal will no longer be to get the maximal normalized Schema, Denormalization is not bad, the requirement of your queries will drive your design. The story will start to think about a good schema design. In case you move the SQL normalized Data Model 1:1 to MongoDB you will not have much fun or benefit.You can find further information on the Transitioning from Relational Databases to MongoDB in the linked blog post. Please note also the links at the bottom of this post, and the referenced migration guide .Since you are new to MongoDB and noSQL I highly recommend to take some of great and free classes from the MongoDB Univerity:This is just a sample which can get you started very well. In case this is going to be a mission critical project\nI’d recommend getting Professional Advice to plan a deployment There are many considerations, and an experienced consultant can provide better advice with a more holistic understanding of your requirements. Some decisions affecting scalability (such as shard key selection) are more difficult to course correct once you have a significant amount of production data.Hope this helps to start, while getting familiar and all time after, feel free to ask you questions here - we will try to help.Cheers,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hello @michael_hoeller,Thank you for your replay and the information that you provide me.\nI hope that it will not be so difficult to change “relations” to “not relations”, because as I understand the first thing to learn MongoDB is that that you should forget everything about relational DB and start everything from beginning .Best Regards,\nDavit",
"username": "Davit_Avagyan"
},
{
"code": "",
"text": "Hi @Davit_AvagyanI think the better is to think in your application entities and create your collections and documents based on that.Having your entities mapped to your objects will make your development faster.I will recommend reviewing the following blog series:A summary of all the patterns we've looked at in this seriesBest regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,Thanks a lot for your attantion on my topic and for provided info. I’ll have a look on it too.Best Regards,\nDavit",
"username": "Davit_Avagyan"
},
{
"code": "",
"text": "Hi @Davit_AvagyanThank you for the feedback. MongoDB is fun to work with and very productive… Hope you’ll get the best out of our technology ",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Guys I have one more question too.Is the MongoDB and Realm Database are the same?\nIf no what is the difference?Thanks a lot.",
"username": "Davit_Avagyan"
},
{
"code": "",
"text": "Hello @Davit_AvagyanMongoDB Realm is a serverless platform and mobile database. To see what Realm can do for you, please read the introduction on the the MongoDB Realm page I assume that you will get all answers on this page.Cheers,\nMichael",
"username": "michael_hoeller"
}
] | Create MongoDB database with big structure | 2020-07-16T15:14:19.322Z | Create MongoDB database with big structure | 3,029 |
null | [] | [
{
"code": " let results = RealmManager.shared.userRealm.objects(UserData.self)\n \n self.notificationToken = results.observe { (changes: RealmCollectionChange) in\n \n switch changes {\n case .initial:\n NSLog(\"initial\")\n if results.count > 0 {\n self.name = results[0].name\n }\n \n case .update(let results, _, _, _):\n NSLog(\"update\")\n if results.count > 0 {\n self.name = results[0].name\n }\n \n case .error(let error):\n // An error occurred while opening the Realm file on the background worker thread\n fatalError(\"\\(error)\")\n }\n }\n\nupdate",
"text": "We have a simple swift app, that listens to a simple UserData table. If we make changes in the Realm App everything syncs fine. The code that listens to the changes looks like thisIf we edit the data in Compass, we do not get an update call back. However, if we log out, reopen the Realm, re-set up the listener, everything works fine. Is this a known issue and are there any work arounds?Thanks",
"username": "Richard_Krueger"
},
{
"code": "",
"text": "@Richard_Krueger This is a known issue with Compass - when you edit a document with Compass under the hood it is actually doing a DELETE and INSERT - to get around this use a MongoDB SDK to just update the fields you need and the update notification will fire.",
"username": "Ian_Ward"
},
{
"code": "",
"text": "@Ian_Ward Ok that is good to know. As I said, it seems to work fine when two separate MongoDB Realm app instances are changing the data, just not when we change the data in Compass. I am still not missing Realm Studio, although it did not have this problem. Compass is a way more featured product.",
"username": "Richard_Krueger"
}
] | Sync between Realm App and Compass | 2020-07-16T14:52:47.179Z | Sync between Realm App and Compass | 1,545 |
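To expand on the workaround Ian describes above, a field-level update through a driver (rather than editing in Compass) keeps the sync notification firing because the document is updated in place. A minimal Node.js sketch with hypothetical database/collection names:

```js
// Update only the changed field so Realm Sync sees an update, not a delete + insert.
const { MongoClient } = require("mongodb");

async function renameUser(userId, newName) {
  const client = await MongoClient.connect(process.env.MONGO_DB_URL);
  const users = client.db("app").collection("UserData"); // hypothetical names

  await users.updateOne({ _id: userId }, { $set: { name: newName } });
  await client.close();
}
```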
null | [] | [
{
"code": "",
"text": "Does query`s speed depends on document size?For example I have aggregation with $lookup, $unwind and some other steps. If I add step $project, which project only required fields for me after $unwind, does this action speed up query?\nOr it does not depends, and MongoDB works with documents with 5 field and with documents with 100 fields with same speed?",
"username": "Roman_Buzuk"
},
{
"code": "",
"text": "In one particular case, I am pretty sure it makes a difference. If the 5 fields are part of a compound index and the project is near the first match, then you avoid a FETCH, which will bring the whole document from storage to RAM, which is slow.",
"username": "steevej"
}
] | Speed of query depending on document size | 2020-07-17T12:50:13.657Z | Speed of query depending on document size | 1,641 |
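An illustration of the point above about avoiding the FETCH stage: when the filtered and projected fields are all part of a compound index, the query can be covered by the index. A hedged shell sketch with made-up collection and field names:

```js
// Hypothetical collection and fields, purely to illustrate a covered query.
db.orders.createIndex({ status: 1, total: 1, customerId: 1 });

// Projecting only indexed fields (and excluding _id) lets the planner answer
// the query from the index alone — explain() then shows IXSCAN without FETCH.
db.orders.find(
  { status: "shipped" },
  { _id: 0, total: 1, customerId: 1 }
).explain("executionStats");
```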
null | [] | [
{
"code": "OperationQueueDispatchQueueRealmLinkingObjectsLinkingObjectscountlet realm = try Realm(configuration: config)\nrealm.write {\n // Creation/editing of new/existing object, assigning some base values and relationships\n\n var attachments = self.children.filter(...)\n\n guard attachments.count > 0 else {\n self.mainAttachment = nil\n return\n }\n\n let pdfs = attachments.filter(...).sorted(byKeyPath: \"dateAdded\", ascending: true)\n\n if pdfs.count > 0 {\t// <- this count check crashes\n self.mainAttachment = pdfs.first\n return\n }\n\n // Other checks\n ...\n}\nRealmDispatchQueue",
"text": "Hi,I’m working on an iOS application which needs to run multiple API requests concurrently and store their results to realm database.So I’m using a OperationQueue which sends API requests concurrently (max 4) and then delegates their results to concurrent DispatchQueue. On this queue I always create a new Realm instance and write results to database.The realm object has some raw data (strings and dates) and some relationships (either 1-1 or 1-many), so it also contains direct links to other realm objects or LinkingObjects.At some point during the asynchronous writes I check one of the LinkingObjects count and I get a crash there which says:Exception Type: EXC_BAD_ACCESS (SIGSEGV)\nException Subtype: KERN_INVALID_ADDRESS at 0x000001a1d093cb4d\nVM Region Info: 0x1a1d093cb4d is not in any region. Bytes after previous region: 1783226420046\nREGION TYPE START - END [ VSIZE] PRT/MAX SHRMOD REGION DETAIL\nMALLOC_NANO 0000000280000000-00000002a0000000 [512.0M] rw-/rwx SM=PRV\n—>\nUNUSED SPACE AT ENDTermination Signal: Segmentation fault: 11\nTermination Reason: Namespace SIGNAL, Code 0xb\nTerminating Process: exc handler [2158]\nTriggered by Thread: 5The code that crashes looks like this:So I wanted to ask whether the problem could be that I’m creating multiple Realm instances which then perform writes on a single concurrent DispatchQueue.",
"username": "Michal_Rentka"
},
{
"code": "",
"text": "@Michal_Rentka Can you file an issue here with a reproduction case please:Realm is a mobile database: a replacement for Core Data & SQLite - GitHub - realm/realm-swift: Realm is a mobile database: a replacement for Core Data & SQLite",
"username": "Ian_Ward"
}
] | Multiple realms on concurrent background dispatch queue | 2020-07-17T08:02:16.966Z | Multiple realms on concurrent background dispatch queue | 2,223 |
null | [] | [
{
"code": "",
"text": "Im trying to connect to Mongo Shell using the below connection string but it is throwing me the error.I even have tried the password without special characters but still I face the same issue.Could you please look into it. ",
"username": "Sarish_Khullar"
},
{
"code": "",
"text": "bad authentication means wrong userid/pwdIs your user Student correct?\nShould be m001-student",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "2 posts were split to a new topic: DNSHostNotFound: Failed to lookup service",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Not able to connect using Mongo Shell | 2020-07-16T16:59:35.115Z | Not able to connect using Mongo Shell | 1,247 |
null | [
"queries"
] | [
{
"code": "{\n \"_id\": {\"$oid\": \"5e870200adbe1d000183fa4d\"},\n \"data\": \n {\n \"begin\": \"2020-03-30 10:20:29\",\n \"end\": \"2020-03-30 10:20:32\",\n \"file\": \"salvamento4.mp4\",\n \"type\": \"video\"\n },\n \"idSensor\": 3,\n \"idDevice\": 5\n}\n{\n \"_id\": {\"$oid\": \"5e86fe50adbe1d0001472c0f\"},\n \"data\": \n {\n \"Trackings\":\n [{\n \"BeginTime\": \"2020-03-30T08:23:42.034893+00:00\",\n \"FaceInfo\":\n {\n \"Age\": 26.34,\n \"Emotion\": \"NEUTRAL\",\n \"IsDetected\": true,\n \"MaleProbability\": 0.71,\n \"gazeTime\": 2.37,\n \"numGazes\": 71\n },\n \"ImageSize\": \n {\n \"Height\": 1080,\n \"Width\": 1920\n },\n \"LookingDuration\": 2.37,\n \"PersonID\": \"P-2020-03-30_2749\",\n \"ReIDInfo\": {\"NumReIDs\": 1},\n \"RoiInfo\": {\"RoiDuration\": 0.17},\n \"SensorID\": 0,\n \"TrackingDuration\": 2.77,\n \"Trajectory\": null,\n \"direction\": null,\n \"id\": 1,\n \"roiName\": 0,\n \"roiType\": 1\n }],\n \"timestamp\": \"2020-03-30T08:23:52.327678\"\n },\n \"idSensor\": 2,\n \"idDevice\": 5\n}\nidDevicePersonIDdata.begindata.enddata.Trackings.BeginTimedata.Trackings.BeginTime + data.Trackings.TrackingDurationEmotionAgeidDevice idBroadcast dtBrBeginTime dtBrEndTime idReaction dtReBeginTime dtReEndTime \n1 1 2020-07-03 10:00 2020-07-03 10:03 1 2020-07-03 09:58 2020-07-03 10:02\n1 1 2020-07-03 10:00 2020-07-03 10:03 2 2020-07-03 10:01 2020-07-03 10:07\n1 1 2020-07-03 10:00 2020-07-03 10:03 3 2020-07-03 10:01 2020-07-03 10:02\ndata.Trackings.BeginTimedtReBeginTimedata.enddtBrEndTimedata.Trackings.BeginTime + data.Trackings.TrackingDurationdtReEndTimedata.begindtBrBeginTime$match$project$unwinddata.Trackings$lookup",
"text": "I have a face recognition collection with two different kinds of files:1) Broadcasting files:2) Reaction (to broadcasting) files:Join field is idDevice: a device broadcasts videos and records reactions at the same time.I need to cross both kind of files to determine which emissions are watched by which people, in order to estimate with a BI software if some videos have greater impact on audience than others. There are tons of emissions but only a little amount of different videos; different reactions might also come from recurrent customers (that’s why there is a PersonID).The idea is to check overlapping between broadcasting time (that starts at data.begin and finishes at data.end) and reaction time (that starts at data.Trackings.BeginTime and finishes at data.Trackings.BeginTime + data.Trackings.TrackingDuration) in order to get a table similar to this (just a simple example for one video that provokes three different reactions; ultimate outcome would include also other parameters like Emotion, Age, etc.):In this simple example, 1 emission has been watched by 3 people (or has triggered 3 different reactions); i.e., 1 broadcasting file is related to 3 reaction files. How do we know this? I think the simplest way (correct me if you think there’s a better solution) is to verify these two conditions:My expertise in MongoDb is limited to making very simple A-F queries ($match, $project, I also have used $unwind for breaking data.Trackings into parts), so I have almost no idea about how to address this issue… maybe with $lookup? I’d appreciate any kind of help.Thanks a lot in advance.",
"username": "Javier_Blanco"
},
{
"code": "db.test6.insertMany([\n {\n \"_id\": \"D1\",\n \"data\":\n {\n \"begin\": \"2020-03-30 10:15:45\",\n \"end\": \"2020-03-30 10:20:45\",\n },\n \"idDevice\": 5\n }\n]);\n\ndb.test7.insertMany([\n {\n \"_id\": \"B1\",\n \"data\":\n {\n \"trackings\": [\n {\n \"_id\": \"T1-of-B1\",\n \"beginTime\": \"2020-03-30 10:10:45\",\n \"endTime\": \"2020-03-30 10:14:45\",\n },\n {\n \"_id\": \"T3-of-B1\",\n \"beginTime\": \"2020-03-30 10:15:45\",\n \"endTime\": \"2020-03-30 10:16:45\",\n },\n {\n \"_id\": \"T3-of-B1\",\n \"beginTime\": \"2020-03-30 10:14:45\",\n \"endTime\": \"2020-03-30 10:25:45\",\n },\n ]\n },\n \"idDevice\": 5\n },\n {\n \"_id\": \"B2\",\n \"data\":\n {\n \"trackings\": [\n {\n \"_id\": \"T1-of-B2\",\n \"beginTime\": \"2020-03-30 10:13:45\",\n \"endTime\": \"2020-03-30 10:16:45\",\n },\n {\n \"_id\": \"T2-of-B2\",\n \"beginTime\": \"2020-03-30 10:17:45\",\n \"endTime\": \"2020-03-30 10:17:45\",\n },\n ]\n },\n \"idDevice\": 5\n }\n]);\ndb.test6.aggregate([\n {\n $lookup: {\n // join data form 'test7' into 'test6' collection\n from: 'test7',\n localField: 'idDevice',\n foreignField: 'idDevice',\n as: 'reactions',\n }\n },\n {\n $unwind: '$reactions',\n },\n {\n $addFields: {\n matchedTrackings: {\n $filter: {\n input: '$reactions.data.trackings',\n cond: {\n // filter trakings whatever you like here\n $and: [\n { $lte: ['$data.begin', '$$this.beginTime'] },\n { $gte: ['$data.end', '$$this.endTime'] }\n ]\n }\n },\n }\n }\n },\n {\n // group documents back after calculations are made\n $group: {\n _id: '$_id',\n data: {\n $first: '$data',\n },\n idDevice: {\n $first: '$idDevice',\n },\n reactions: {\n $push: '$reactions',\n },\n matchedTrackings: {\n $push: '$matchedTrackings'\n }\n }\n }\n]).pretty();\n{\n \"_id\" : \"D1\",\n \"data\" : {\n \"begin\" : \"2020-03-30 10:15:45\",\n \"end\" : \"2020-03-30 10:20:45\"\n },\n \"idDevice\" : 5,\n \"reactions\" : [\n {\n \"_id\" : \"B1\",\n \"data\" : {\n \"trackings\" : [\n {\n \"_id\" : \"T1-of-B1\",\n \"beginTime\" : \"2020-03-30 10:10:45\",\n \"endTime\" : \"2020-03-30 10:14:45\"\n },\n {\n \"_id\" : \"T3-of-B1\",\n \"beginTime\" : \"2020-03-30 10:15:45\",\n \"endTime\" : \"2020-03-30 10:16:45\"\n },\n {\n \"_id\" : \"T3-of-B1\",\n \"beginTime\" : \"2020-03-30 10:14:45\",\n \"endTime\" : \"2020-03-30 10:25:45\"\n }\n ]\n },\n \"idDevice\" : 5\n },\n {\n \"_id\" : \"B2\",\n \"data\" : {\n \"trackings\" : [\n {\n \"_id\" : \"T1-of-B2\",\n \"beginTime\" : \"2020-03-30 10:13:45\",\n \"endTime\" : \"2020-03-30 10:16:45\"\n },\n {\n \"_id\" : \"T2-of-B2\",\n \"beginTime\" : \"2020-03-30 10:17:45\",\n \"endTime\" : \"2020-03-30 10:17:45\"\n }\n ]\n },\n \"idDevice\" : 5\n }\n ],\n \"matchedTrackings\" : [\n [\n {\n \"_id\" : \"T3-of-B1\",\n \"beginTime\" : \"2020-03-30 10:15:45\",\n \"endTime\" : \"2020-03-30 10:16:45\"\n }\n ],\n [\n {\n \"_id\" : \"T2-of-B2\",\n \"beginTime\" : \"2020-03-30 10:17:45\",\n \"endTime\" : \"2020-03-30 10:17:45\"\n }\n ]\n ]\n}\n",
"text": "Hello, @Javier_Blanco! Welcome to the community!If I understood you correctly, you need to join ‘broadcasting’ collection with ‘reaction’ collection and then get the trackings, that were taken within range between ‘begin’ and ‘end’ documents from ‘broadcasting’ collection, right?If so, you need to use $lookup + $unwind to join collections, then you need to match the necessary documents with $filter and add them to a document with $addFields. In the end up $group documents, because you used $unwind before.Let me simplify your dataset to show it on example.\nSo, assume we have 2 collections ‘test6’ and ‘test7’:To achieve the goal we could use this aggregation:The above aggregation provides the result in the following format:You can $project out fields, that are not necessary for the output.",
"username": "slava"
},
{
"code": "$lookup",
"text": "Hi, @slava, thanks a lot for your answer!Yes, that’s the idea basically. Maybe I didn’t point this out, but reaction files don’t need to be within range of broadcasting files strictly but just overlapping it; for instance, a reaction that went from 9:57 to 10:02 would match a video that went from 10:01 to 10:05.On the other hand, both kinds of files are within the same collection in my DB, not sure if that’s a handicap for using $lookup, for instance.I’m going to try your solution and come back if I need additional guidance.",
"username": "Javier_Blanco"
},
{
"code": "$and: [\n // reaction began before broadcasting ended;\n { $lt: ['$data.begin', '$$this.endTime'] },\n // reaction ended just when broadasting ended or later\n { $gte: ['$data.end', '$$this.endTime'] }\n]\n$lookup",
"text": "but reaction files don’t need to be within range of broadcasting files strictly but just overlapping it; for instance, a reaction that went from 9:57 to 10:02 would match a video that went from 10:01 to 10:05.You may try to adjust ‘$and’ conditions:On the other hand, both kinds of files are within the same collection in my DB, not sure if that’s a handicap for using $lookup , for instance.$lookup will work just fine. If you use the same collection name - it will do a self join.",
"username": "slava"
},
{
"code": "Trackings",
"text": "Oh, for some reason, Trackings comes as an array of objects but in every file I have checked it only contains one, at index 0. Anyway, I guess it’s better to suppose it might have more than one element as @slava has done, just in case.",
"username": "Javier_Blanco"
},
{
"code": "$lookupreactionsallowDiskUse",
"text": "Hi again, @slava,For what I see in a demo collection with just 8 files, a self $lookup generates 8 files with an 8 objects reactions array each, so in my development collection the outcome might be so huge that the server (I’m using NoSQL Booster as client) is unable to process the request:Total size of documents in sensorsData matching pipeline’s $lookup stage exceeds 104857600 bytesAdding allowDiskUse doesn’t make any difference.",
"username": "Javier_Blanco"
},
{
"code": "",
"text": "Helo, @Javier_Blanco,\nCan you provide:",
"username": "slava"
},
{
"code": "$addFieldsendTimedata.TrackingsBeginTimeTrackingDuration$unwinddata.Trackings",
"text": "Hi, @slava,Sorry for the delay, I’ve been busy with other projects.Not very sure about this issue, the DBA has made some changes and now it seems to work… So for the moment it’s OK. Let’s see in a while…Regarding the $addFields step of your query, there’s no endTime within data.Trackings, it must be generated as the sum of BeginTime and TrackingDuration; should I $unwind data.Trackings first or is it possible to solve this issue in a simpler way?Thanks again!",
"username": "Javier_Blanco"
},
{
"code": "$addFields$$this.endTime$$this.{\"$add\": [{\"$toDate\": \"$data.Trackings.BeginTime\"}, {\"$multiply\": [\"$data.Trackings.TrackingDuration\", 1000]}]}",
"text": "Within $addFields, I have tried to change$$this.endTimeby$$this.{\"$add\": [{\"$toDate\": \"$data.Trackings.BeginTime\"}, {\"$multiply\": [\"$data.Trackings.TrackingDuration\", 1000]}]}and it seems to work -or at least I don’t get an error message-.",
"username": "Javier_Blanco"
},
{
"code": "var pipeline = \n[\n {\n \"$lookup\":\n {\n \"from\": \"data\",\n \"localField\": \"idDevice\",\n \"foreignField\": \"idDevice\",\n \"as\": \"reactions\"\n }\n },\n {\n \"$unwind\": \"$reactions\"\n },\n {\n \"$match\": {\"$and\": [{\"data.begin\": {\"$exists\": true}}, {\"reactions.data.timestamp\": {\"$exists\": true}}]}\n },\n {\n \"$project\":\n {\n \"_id\": 0,\n \"idDevice\": \"$idDevice\",\n \"naBroadcast\": \"$data.file\",\n\t\t\t\"naType\": \"$data.type\",\n \"dtBroadcastStart\": {\"$toDate\": \"$data.begin\"},\n \"dtBroadcastEnd\": {\"$toDate\": \"$data.end\"},\n \"array\": \"$reactions.data.Trackings\",\n }\n },\n {\n\t\t\"$unwind\": \"$array\"\n\t},\n\t{\n \"$project\":\n {\n \"idDevice\": 1,\n \"naBroadcast\": 1,\n\t\t\t\"naType\": 1,\n \"dtBroadcastStart\": 1,\n \"dtBroadcastEnd\": 1,\n \"qtBroadcastDurationS\": {\"$divide\": [{\"$subtract\": [{\"$toDate\": \"$dtBroadcastEnd\"}, {\"$toDate\": \"$dtBroadcastStart\"}]}, 1000]},\n \"idPerson\": \"$array.PersonID\",\n\t\t\t\"dtTrackingStart\": {\"$toDate\": \"$array.BeginTime\"},\n\t\t\t\"dtTrackingEnd\": {\"$add\": [{\"$toDate\": \"$array.BeginTime\"}, {\"$multiply\": [\"$array.TrackingDuration\", 1000]}]},\n\t\t\t\"qtFaceDetected\": \n\t\t\t{\n\t\t\t\t\"$cond\": \n\t\t\t\t{\n\t\t\t\t\t\"if\": {\"$eq\": [\"$array.FaceInfo.IsDetected\", true]}, \n\t\t\t\t\t\"then\": 1, \n\t\t\t\t\t\"else\": 0\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"qtMaleProbability\": \"$array.FaceInfo.MaleProbability\",\n\t\t\t\"qtAge\": \"$array.FaceInfo.Age\",\n\t\t\t\"naEmotion\": \"$array.FaceInfo.Emotion\",\n\t\t\t\"qtGaze\": \"$array.FaceInfo.numGazes\",\n\t\t\t\"qtGazeDurationS\": \"$array.FaceInfo.gazeTime\",\n\t\t\t\"qtFaceDurationS\": \"$array.LookingDuration\",\n\t\t\t\"qtTrackingDurationS\": \"$array.TrackingDuration\",\n\t\t\t\"qtReId\": \"$array.ReIDInfo.NumReIDs\"\n }\n\t},\n]\n\ndb.data.aggregate(pipeline)\ndtTrackingEnddtBroadcastStartdtTrackingStartdtBroadcastEnd{\n \"$match\":\n {\n \"$and\": [{\"dtTrackingEnd\": {\"$ne\": [\"$lte\", \"$dtBroadcastStart\"]}}, {\"dtTrackingStart\": {\"$ne\": [\"$gte\", \"$dtBroadcastEnd\"]}}]\n }\n}\n$ne$not",
"text": "OK, I have done this finally (it’s a sample DB with just a few files):Starting with a sample of 3 broadcasting files and 5 reaction files, I get 15 broadcasting-reaction files, all combined chances. The way they look:image1302×504 27.6 KBNow I need to filter them to get just the few ones that check both conditions:I have tried one last stage within my pipeline:It doesn’t work. I get all 15 files again, while according to my calculations I should get only 6. Not sure if $ne is what I need, but I get an error if I try to use $not. Any hint?Thanks in advance!",
"username": "Javier_Blanco"
},
{
"code": "$and: [\n // reaction began before broadcasting ended;\n { $lt: ['$data.begin', '$this.endTime'] },\n // reaction ended just when broadasting ended or later\n { $gte: ['$data.end', '$this.endTime'] }\n]\n$ne$not{\"$ne\": [\"$lte\", \"$dtBroadcastStart\"]}\n",
"text": "Hello, @Javier_Blanco!Try to integrate the filter that I have suggested you before into your $match stage:I get all 15 files again, while according to my calculations I should get only 6. Not sure if $ne is what I need, but I get an error if I try to use $not . Any hint?Hint:$ne - is an operator.\n$lte and dbBroadcastStart are its arguments and since they have -sign in the beginning, MongoDB treats both of them as field names, so $lte will be always ‘undefined’, because you did not declared such prop in your $project stage and $dbBroadcastStart will evaluate to some real data.$ne will compare its two arguments and return ‘true’ for a document, if the first argument ($lte) is not equal to the second argument ($dtBroadcastStart) and since ‘undefined’ will always be not equal to some real data, that condition will return ‘true’ for all your 15 documents. That’s why all of them pass your last $match stage.",
"username": "slava"
},
{
"code": "{\n \"$match\":\n {\n \"$and\": \n [\n {\"$lt\": [\"$dtBroadcastStart\", \"$this.dtTrackingEnd\"]},\n {\"$gte\": [\"$dtBroadcastEnd\", \"$this.dtTrackingEnd\"]}\n ]\n }\n}\n$match",
"text": "Regardless of the content of the query, I get this error message: unknown top level operator: $lt. This has been happening to me constantly while trying to implement that last $match stage…",
"username": "Javier_Blanco"
},
{
"code": "{\n \"$match\":\n {\n \"$and\": \n [\n {\"dtBroadcastStart\": [\"$lt\", \"$dtTrackingEnd\"]}, \n {\"dtBroadcastEnd\": [\"$gt\", \"$dtTrackingStart\"]}\n ]\n }\n}\n$matchcond(and(not(lte([dtTrackingEnd], [dtBroadcastStart])), not(gte([dtTrackingStart], [dtBroadcastEnd]))), 1, 0)cond(and(lt([dtBroadcastStart], [dtTrackingEnd]), gt([dtBroadcastEnd], [dtTrackingStart])), 1, 0)",
"text": "I think the proper query would be (1):But it returns nothing (at least spits back no error message):\nHowever, I have uploaded my query without that last $match stage to my BI software and applied a boolean filter to all 15 rows (files turn into rows in the datasets of that app); in fact, two boolean filters:First equivalent to the not one from previous posts:\ncond(and(not(lte([dtTrackingEnd], [dtBroadcastStart])), not(gte([dtTrackingStart], [dtBroadcastEnd]))), 1, 0)Second equivalent to (1):\ncond(and(lt([dtBroadcastStart], [dtTrackingEnd]), gt([dtBroadcastEnd], [dtTrackingStart])), 1, 0)And both work; both return just the same six rows (or files) I had calculated by myself would be the proper outcome.So it seems (1) is logically correct, but doesn’t work in Mongo despite there’s no error message… any hint?",
"username": "Javier_Blanco"
},
{
"code": "$match",
"text": "It seems Mongo is optimizing the query and displaying the $match stage in 4th place:",
"username": "Javier_Blanco"
},
{
"code": "$and: [\n // reaction began before broadcasting ended;\n { $lt: ['$data.begin', '$this.endTime'] },\n // reaction ended just when broadasting ended or later\n { $gte: ['$data.end', '$this.endTime'] }\n]\n",
"text": "Try to wrap this:with $expr operator in your $match stage",
"username": "slava"
},
{
"code": "[\n {\n \"$lookup\": /*Une cada documento con los demás -incluyendo consigo mismo-, que agrupa en un array con tantas posiciones como documentos (8, de 0 a 7, en la colección de muestra)*/\n {\n \"from\": \"data\",\n \"localField\": \"idDevice\",\n \"foreignField\": \"idDevice\",\n \"as\": \"array1\"\n }\n },\n {\n \"$unwind\": \"$array1\" /*Descompone los archivos en función de las posiciones del array, por lo que genera 8 x 8 = 64 archivos*/\n },\n {\n \"$match\": {\"$and\": [{\"data.inicio\": {\"$exists\": true}}, {\"array1.data.timestamp\": {\"$exists\": true}}]} /*Deja pasar sólo aquellos archivos con estructura vídeo-reacción y elimina las demás combinaciones: vídeo-vídeo, reacción-reacción y reacción-vídeo (redundantes); dado que hay 3 vídeos y 5 reacciones, pasan 15 archivos*/\n },\n {\n \"$project\": /*Selección inicial de parámetros*/\n {\n \"_id\": 0,\n \"idDevice\": \"$idDevice\",\n \"naBroadcast\": \"$data.archivo\",\n\t\t\t\"naType\": \"$data.tipo\",\n \"dtBroadcastStart\": {\"$toDate\": \"$data.inicio\"},\n \"dtBroadcastEnd\": {\"$toDate\": \"$data.fin\"},\n \"array2\": \"$array1.data.Trackings\"\n }\n },\n {\n\t\t\"$unwind\": \"$array2\" /*Descomposición en función de las posiciones del array; como en los documentos de muestra no existe más que el índice 0, genera de nuevo 15 archivos*/\n\t},\n\t{\n \"$project\": /*Proyección final de parámetros*/\n {\n \"idDevice\": 1,\n \"naBroadcast\": 1,\n\t\t\t\"naType\": 1,\n \"dtBroadcastStart\": 1,\n \"dtBroadcastEnd\": 1,\n \"qtBroadcastDurationS\": {\"$divide\": [{\"$subtract\": [{\"$toDate\": \"$dtBroadcastEnd\"}, {\"$toDate\": \"$dtBroadcastStart\"}]}, 1000]},\n \"idPerson\": \"$array2.PersonID\",\n\t\t\t\"dtTrackingStart\": {\"$toDate\": \"$array2.BeginTime\"},\n\t\t\t\"dtTrackingEnd\": {\"$add\": [{\"$toDate\": \"$array2.BeginTime\"}, {\"$multiply\": [\"$array2.TrackingDuration\", 1000]}]},\n\t\t\t\"qtFaceDetected\": \n\t\t\t{\n\t\t\t\t\"$cond\": \n\t\t\t\t{\n\t\t\t\t\t\"if\": {\"$eq\": [\"$array2.FaceInfo.IsDetected\", true]}, \n\t\t\t\t\t\"then\": 1, \n\t\t\t\t\t\"else\": 0\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"qtMaleProbability\": \"$array2.FaceInfo.MaleProbability\",\n\t\t\t\"qtAge\": \"$array2.FaceInfo.Age\",\n\t\t\t\"naEmotion\": \"$array2.FaceInfo.Emotion\",\n\t\t\t\"qtGaze\": \"$array2.FaceInfo.numGazes\",\n\t\t\t\"qtGazeDurationS\": \"$array2.FaceInfo.gazeTime\",\n\t\t\t\"qtFaceDurationS\": \"$array2.LookingDuration\",\n\t\t\t\"qtTrackingDurationS\": \"$array2.TrackingDuration\",\n\t\t\t\"qtReId\": \"$array2.ReIDInfo.NumReIDs\"\n }\n\t},\n {\n \"$match\": /*Filtrado de los documentos que cumplen las condiciones de solapamiento, que para la colección de muestra son sólo 6*/\n {\n \"$expr\":\n {\n \"$and\": \n [\n {\"$lt\": [\"$dtBroadcastStart\", \"$dtTrackingEnd\"]},\n {\"$gt\": [\"$dtBroadcastEnd\", \"$dtTrackingStart\"]}\n ]\n }\n }\n }\n]\n$lookup",
"text": "Great, now it works!The query looks like this:I guess it’s not a very efficient one, but as my first approach to $lookup I’m pretty satisfied.Thanks a lot for your time, @slava!",
"username": "Javier_Blanco"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Determining time overlapping between two different kinds of files from the same collection | 2020-07-03T13:02:14.381Z | Determining time overlapping between two different kinds of files from the same collection | 4,966 |
null | [] | [
{
"code": "",
"text": "Hello Guys,I would like to introduce GrandNode is the most advanced and the best open-source e-commerce platform built on the newest version of .NET Core and MongoDB. It’s the most advanced ASP.NET e-commerce, available to get for free. The highest code quality, combined with the detailed testing approach and with a pinch of a huge number of features is a recipe for success. No more need to create stores with an enormous number of third party plugins rely on the widest range of implemented functionalities at the market.GrandNode supports:Furthermore, GrandNode is one of the most advanced multi-vendor platforms. It supports applying for vendor accounts, commissions for vendors with automatic payouts.More details can be found on our official website: https://grandnode.com\nor for developers on our GitHub - https://github.com/grandnode/grandnodeCheers!",
"username": "Patryk_Porabik"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | GrandNode - e-commerce platform on MongoDB | 2020-07-17T09:15:21.204Z | GrandNode - e-commerce platform on MongoDB | 4,592 |
null | [] | [
{
"code": "",
"text": "Hi MongoDBI am now to this mongdb and I was wondering how so I search a nested querydb.vulnerabilities.find({“exploitability”: “3”}).pretty() works however how does one search nested information in my data set below.For example if I wanted to look for technical: 3 in the string below.db.vulnerabilities.insert ({a_number: ‘A1’,type: ‘Injection’,threatAgents: ‘application specific’,exploitability: ‘3’,“security weakness”: {prevalence: ‘2’,detectability: ‘3’,}, “impacts”: {technical: ‘3’,business: ‘business specific’,} })",
"username": "tim_Blath"
},
{
"code": "",
"text": "Hello, @tim_Blath!You can query embedded documents using dot-notation.Consider taking some basic course on MongoDB to learn how such things are done.",
"username": "slava"
}
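For reference, a minimal mongo-shell sketch of the dot-notation approach described above, using the field names from the sample document in the question (names that contain a space must be quoted):

```js
// Match on a field inside the embedded "impacts" document (values are strings here).
db.vulnerabilities.find({ "impacts.technical": "3" }).pretty()

// Conditions on several embedded fields can be combined in one filter;
// "security weakness" needs quotes because of the space in its name.
db.vulnerabilities.find({
  "security weakness.detectability": "3",
  "impacts.technical": "3"
}).pretty()
```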
] | Finding Nested query with Mongodb | 2020-07-17T08:01:52.313Z | Finding Nested query with Mongodb | 1,754 |
null | [
"java",
"change-streams"
] | [
{
"code": "// ...\n\nprivate static final Logger log = LoggerFactory.getLogger(MySynchronousBlockingChangeStream.class);\n\n@Autowired\nprivate MongoTemplate mongoTemplate;\n\n// ...\n\npublic void startChangeStream() {\n\tString query = \"{operationType: {$in: ['insert', 'update']}, 'fullDocument.total': {$gt: 100}}\";\n\t\n\ttry {\n\t\tList<Bson> changestream = Collections.singletonList(match(\n\t\t\t\tDocument.parse(query).toBsonDocument(BsonDocument.class, MongoClientSettings.getDefaultCodecRegistry())\n\t\t\t)\t\n\t\t);\n\t\t\n\t\tChangeStreamIterable<Document> iter = collection.watch(changestream);\n\t\tMongoCursor<ChangeStreamDocument<Document>> cursor = iter.iterator();\n\t\tcursor.forEachRemaining(stream -> {\n\t\t\n\t\t\tDocument targetDoc = stream.getFullDocument();\n\t\t\tlog.debug(\"Received ChangeStream: \" + targetDoc);\n\t\t\tmongoTemplate.insert(targetDoc, \"nameOfAuditCollection\");\n\t\t});\n\t\t\n\t} catch (MongoException ex) {\n\t\tlog.error(\"ChangeStream error :: \" + ex.getMessage());\n\t}\n}\n",
"text": "Hi, I’m trying to know how to stop or cancel a change stream that is currently running, in case I no longer need it.For example, if I have the following Java code snippet:how can I stop this running change stream through a method invocation, for example “stopChangeStream() {…}” ?I’ve tried to close the cursor “cursor.close()” while looping, but it throws an Exception.",
"username": "Adianfer"
},
{
"code": "cursor.close()MongoChangeStreamCursor<ChangeStreamDocument<Document>> cursor \n = collection.watch().cursor();\n\nint counter = 0;\n\t\nwhile (cursor.hasNext()) {\n System.out.println(cursor.next());\n if (counter++ > 2) break;\n}\n\ncursor.close();\n",
"text": "Hello @Adianfer, welcome to the forum.You can only close the cursor from outside the change stream’s cursor-loop. But, you can exit (or break from) the loop based upon a condition - assuming the cursor is on a while-loop. For example (for demonstration only), after iterating three times you can break the loop. Then the cursor.close() method is executed.You have to know what is the condition upon which you want to close the cursor; in the example its just a count. And, how the condition is fulfilled is your application functionality.From the documentation, after a change stream’s cursor is acquired… the cursor remains open until one of the following occurs:",
"username": "Prasad_Saya"
},
{
"code": "break;cursor.forEachRemaining(stream -> {\n\t// ...\n});\nwhile (cursor.hasNext()) {\n\tif (isChangeStreamDisabled()) {\n\t\tbreak;\n\t}\n\t\n\t// ...\n}\n\ncursor.close();\n",
"text": "Hi @Prasad_Saya, thank you!\nYour suggestion works correctly, but I was obliged to remove the Java Lambda Expression because it doesn’t allow to use break; inside the loop.So I’ve replaced it with the following while loop in order to break and then close the cursor.",
"username": "Adianfer"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to unwatch or stop a running change stream with mongodb java driver | 2020-07-16T12:33:40.415Z | How to unwatch or stop a running change stream with mongodb java driver | 7,156 |
null | [
"golang"
] | [
{
"code": "bson",
"text": "I have submitted a pull request to update the README.md documentation to reference the bson package.",
"username": "Wei_En_Wong"
},
{
"code": "",
"text": "Welcome to the community @Wei_En_Wong and thanks for your contribution!Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Update to Mongo DB Go Driver README.md | 2020-07-15T23:19:56.398Z | Update to Mongo DB Go Driver README.md | 3,439 |
[
"php",
"release-candidate"
] | [
{
"code": "mongodbMongoDB\\Client::listDatabaseNamesMongoDB\\Database::listCollectionNamesnameOnlylistCollectionsMongoDB\\Operation\\AggregateMongoDB\\Operation\\ExplainableMongoDB\\Collection::explain()explainMongoDB\\Collection::aggregate()composer require mongodb/mongodb^1.7.0@RC\nmongodb",
"text": "The PHP team is happy to announce that version 1.7.0-rc1 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsNew MongoDB\\Client::listDatabaseNames and MongoDB\\Database::listCollectionNames methods allow enumeration of database and collection names without returning additional metadata. In the case of collection enumeration, this leverages the nameOnly option for listCollections and avoids taking a collection-level lock on the server.The MongoDB\\Operation\\Aggregate class now implements the MongoDB\\Operation\\Explainable interface and can be used with MongoDB\\Collection::explain(). This is an alternative to the explain option supported by MongoDB\\Collection::aggregate() and allows for more verbose output when explaining aggregation pipelines.As previously announced, this version drops compatibility with PHP 5.6 and requires PHP 7.0 or newer.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=29653DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.",
"username": "jmikola"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB PHP Library 1.7.0-rc1 Released | 2020-07-17T03:20:12.760Z | MongoDB PHP Library 1.7.0-rc1 Released | 1,971 |
|
null | [
"php",
"release-candidate"
] | [
{
"code": "directConnectionMongoDB\\Driver\\Managerpecl install mongodb-1.8.0RC1\npecl upgrade mongodb-1.8.0RC1\n",
"text": "The PHP team is happy to announce that version 1.8.0RC1 of the mongodb PHP extension is now available on PECL.Release HighlightsThis release adds additional validation of the directConnection URI option when specified via array options (i.e. second argument of the MongoDB\\Driver\\Manager constructor).This release updates the bundled libbson and libmongoc libraries to 1.17.0-rc0.As previously announced, this version drops compatibility with PHP 5.6 and requires PHP 7.0 or newer.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb",
"username": "jmikola"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB PHP Extension 1.8.0RC1 Released | 2020-07-17T00:13:26.612Z | MongoDB PHP Extension 1.8.0RC1 Released | 2,059 |
[
"mongoose-odm"
] | [
{
"code": "",
"text": "Heewwoo,I have a problem when I’m creating a new document and saving it to the database, it’s saving the document with wrong data even if the data collected is a specific value set to. I used Mongodb for about 2 years but this is the first time it’s happening to me.\nHere is the data that should be stored in the database:\n\nThe problem appears on “roleID” which it should be just like in picture, “565193939435388948” (This picture is the data collected, not the data from document inserted.).\nAnd below this is the information that I receive when the document is inserted in database:\nAs you can see, in the picture, on variable “RoleID”, there is not the same id as in the first picture of the data collected.\n(Note: In case anyone says the roleID isn’t string as above, it’s because I took the second picture after tried last time to see if it may work after being converted to number but it doesn’t and just left it like that. being string or number doesn’t change the fact that it’s not saving the data correctly as it was collected.)Practically, from my research, if the number is bigger than 16 characters long, the last 2 characters are like set to 0. I tried to insert a document with 16 characters, it was inserted with the exact data collected. If it’s 17 or 18, the last 2 digits are set to 0.\nThis is how the code looks like:image812×269 3.8 KBAs well as the Schema form used:\nI have used mongodb these 2 years only on windows 7. I have updated recently my pc to windows 10 (not personal choice unfortunately…) and I got this problem. Is because of windows 10?Just to make sure, I’ll say again:\nWindows version: Windows 10\nMongodb server version: 4.2.8Can someone guess what is going on? I really like mongodb and I don’t wanna give up on it after I spent so much time learning it.",
"username": "Mirage_Zoe"
},
{
"code": "",
"text": "I think it is more related to a javascript,json limitation.Since you get it as a string (like first image), you could change the mongoose schema to a string. Since it is an id you probably do not make any mathematical computation on it. A string would be fine.",
"username": "steevej"
},
{
"code": "",
"text": "I tried as well to save it as a string but it’s still not working despite chaning mongoose schema to a string.\nEDIT: If I try to parse the string and treat it as json just like in the link you sent when adding to database, can this actually help it ?\nEdit 1.5: Tried to parse and stringify but no use… Isn’t there a way to like change how json works for mongodb?EDIT 2: I have seen something and Visual studio code gave me a “fix” to make it actually show in database the exact number for the id but that’s working only if I have like the number defined already in a variable and adding an “n” to the end meaning it “covert to bigint numeric literal” but how I can do it with a variable name?",
"username": "Mirage_Zoe"
},
{
"code": "",
"text": "My understanding is that it has to be a string all the way. Otherwise as soon at it is jsonyfied as a number the issue will happen.",
"username": "steevej"
},
{
"code": "",
"text": "What do you mean byit has to be a string all the wayI have changed it to be A string when collected data and modify in the schema the type to be a string.\nwhat else I’m missing?EDIT: There isn’t a way to change this in the mongod.cfg file?",
"username": "Mirage_Zoe"
},
{
"code": "",
"text": "The file mongod.cfg has nothing to do with that.what else I’m missing?It is really hard to tell from my end. May be an API in your application is still using it as a number.",
"username": "steevej"
},
{
"code": "",
"text": "Can I do something to help you",
"username": "Mirage_Zoe"
},
{
"code": "NumberNumber.MAX_SAFE_INTEGERRole_IDStringRole_IDStringNumberRole_IdNumberNumber",
"text": "Hi @Mirage_Zoe,As @steevej pointed out, the problem is that you are trying to save a numeric value larger than what can be safely represented in a JavaScript Number (aka Number.MAX_SAFE_INTEGER).If your Role_ID value is an identifier that does not need mathematical manipulation, String would be a more appropriate type to use.If you have changed Role_ID to a String in your Mongoose schema and are still seeing Number values for newly created documents, I suggest you review your code for any operations which manipulate the Role_Id value and could implicitly coerce the value into a Number.Isn’t there a way to like change how json works for mongodb?The Number data type is defined by the JavaScript standard, so this outcome is unrelated to your O/S upgrade or MongoDB server configuration. Manipulation of data types is a client-side issue that you have to resolve in your application code.I suggest posting this as a Mongoose question on Stack Overflow to get further suggestions. It would be helpful to include your specific versions of Mongoose and Node for context.Regards,\nStennie",
"username": "Stennie_X"
}
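A short Node.js illustration of the limit Stennie describes, plus a hypothetical Mongoose schema that keeps the identifier as a string end to end (the schema and field names below are only examples):

```js
// JavaScript numbers are IEEE-754 doubles: integers above this limit lose precision.
console.log(Number.MAX_SAFE_INTEGER);      // 9007199254740991 (16 digits)
console.log(Number("565193939435388948")); // trailing digits get rounded away

// Hypothetical schema: store the Discord-style ID as a String and never coerce it.
const mongoose = require("mongoose");

const premiumSchema = new mongoose.Schema({
  guildID: String,
  roleID: String // keep as a String through the whole code path
});

module.exports = mongoose.model("Premium", premiumSchema);
```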
] | Saving a document with large integer values | 2020-07-15T20:31:26.290Z | Saving a document with large integer values | 10,722 |
|
null | [
"field-encryption",
"schema-validation"
] | [
{
"code": "{\n\"hr.employees\": {\n \"bsonType\": \"object\",\n \"properties\": {\n\n \"taxid\": {\n\n \"encrypt\": {\n\n \"keyId\": [UUID(\"11d58b8a-0c6c-4d69-a0bd-70c6d9befae9\")],\n\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512_Random\",\n\n \"bsonType\" : \"string\"\n\n }\n },\n\n \"taxid-short\": {\n\n \"encrypt\": {\n\n \"keyId\": [UUID(\"2ee77064-5cc5-45a6-92e1-7de6616134a8\")],\n\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\",\n\n \"bsonType\": \"string\"\n\n }\n }\n }\n},\n\"hr.parttimeemployees\": {\n \"bsonType\": \"object\",\n \"properties\": {\n\n \"taxid\": {\n\n \"encrypt\": {\n\n \"keyId\": [UUID(\"11d58b8a-0c6c-4d69-a0bd-70c6d9befae9\")],\n\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512_Random\",\n\n \"bsonType\" : \"string\"\n\n }\n },\n\n \"taxid-short\": {\n\n \"encrypt\": {\n\n \"keyId\": [UUID(\"2ee77064-5cc5-45a6-92e1-7de6616134a8\")],\n\n \"algorithm\": \"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\",\n\n \"bsonType\": \"string\"\n\n }\n }\n }\n }\n",
"text": "Hey everyone,We are working on getting CSFLE up and running, we have multiple collections where we have individual fields which are encrypted. The example in the documentation has a schema where fields are encrypted in a single collection. We are attempting to setup our autoEncryption schemaMap in the connection driver to specify fields in multiple collections. We assumed that schema would look something like what is pasted below. However the driver does not seam to recognize the second collection. What is the syntax for specifying field encryption values on multiple collections, is this possible?}",
"username": "David_Stewart"
},
{
"code": "",
"text": "Hi @David_Stewart,What is the syntax for specifying field encryption values on multiple collections, is this possible?Your example schema should have worked. Did you confirm that the setup works for a single collection ?Could you provide a minimal reproducible code example ? Also, could you elaborate what do you mean by the driver does not recognise the second collection. Are documents for the second collection not being encrypted ?Regards,\nWan.",
"username": "wan"
},
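For anyone hitting the same wall, a hedged Node.js sketch of how a multi-namespace schemaMap is passed to the driver; the key vault namespace, KMS provider and the schema variables below are placeholders, with the per-collection schemas being the objects shown in the first post:

```js
const { MongoClient } = require("mongodb");

// Assumed placeholders: uri, localMasterKey, employeesSchema, partTimeSchema.
const client = new MongoClient(uri, {
  useUnifiedTopology: true,
  autoEncryption: {
    keyVaultNamespace: "encryption.__keyVault",
    kmsProviders: { local: { key: localMasterKey } },
    schemaMap: {
      "hr.employees": employeesSchema,        // first namespace
      "hr.parttimeemployees": partTimeSchema  // second namespace, same pattern
    }
  }
});
```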
{
"code": "",
"text": "Wan,We were able to get this working the other day with the code I provided above. We must have had a typo. I will go ahead and close the topic thank you! .David",
"username": "David_Stewart"
},
{
"code": "",
"text": "Hi David,You are welcome, I’m glad that you managed to get it working.Best regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Automatic CSFLE Schemamap (multiple collections) | 2020-07-07T22:30:56.034Z | Automatic CSFLE Schemamap (multiple collections) | 2,671 |
null | [
"atlas-functions"
] | [
{
"code": "Failed to upload node_modules.tar.gz: unknown: Unexpected token, expected ( (357:12) 355 | 356 | return true; > 357 | } catch { | ^ 358 | return false; 359 | } 360 | }",
"text": "I’m attempting to upload function dependencies. I’ve followed the documented steps, however I receive the following error:Failed to upload node_modules.tar.gz: unknown: Unexpected token, expected ( (357:12) 355 | 356 | return true; > 357 | } catch { | ^ 358 | return false; 359 | } 360 | }Is this an error with the upload process or an error inside one of the node modules? How do I go about debugging / resolving this?",
"username": "Craig_Aschbrenner"
},
{
"code": "",
"text": "@Craig_Aschbrenner Does this error reproduce consistently? If so, can you share the node_modules.tar.gz that produces this issue?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "I had created a package.json with just @sveltech/ssr - npm in it. The error was consistent in both the upload and realm-cli approach to uploading it.",
"username": "Craig_Aschbrenner"
},
{
"code": "",
"text": "@Craig_Aschbrenner We took a look and the reason for this error is that the package depends on a version of babel that is higher than the version of babel we use internally to transpile the source code. We are using babel 6.26.3. Can you use a lower version ?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Hmm… good question. Is that due to one of the dependencies downstream from the main module I’m attempting to use? The @sveltech/ssr module isn’t my own so where would I look to see about using a lower babel version?",
"username": "Craig_Aschbrenner"
},
{
"code": "",
"text": "To give some context of what I’m attempting to do… @sveltech/ssr can be setup on both Netlify and Vercel (Zeit) so that “functions” can essentially deliver content in an SSR fashion. I realize a node server could be setup elsewhere and still utilize MongoDB Realm but it would be nice to utilize SSR in functions where necessary and keep everything together.",
"username": "Craig_Aschbrenner"
},
{
"code": "",
"text": "@Craig_Aschbrenner I would recommend trying an older version of the library and see if that works. Yes likely it is in a downstream dependency that you will need to check the babel version",
"username": "Ian_Ward"
},
{
"code": "{\n \"scripts\": {\n \"build\": \"echo \\\"dummy build\\\"\"\n },\n \"devDependencies\": {\n \"babel-core\": \"6.26.3\",\n \"@sveltech/ssr\": \"0.0.11\"\n }\n}",
"text": "I added babel-core to my package.json to fix the version used, rebuild the node_modules folder and then attempted to upload it again. Unfortunately the same error is persisting. @sveltech/ssr does not depend on babel and when searching through the node_modules folder none of the dependencies required a version higher than 6.26.3… however they only required a minimum of 6.26.3 in some cases.",
"username": "Craig_Aschbrenner"
},
{
"code": "",
"text": "@Ian_Ward Any way you could fill me in on how MongoDB transpiles and I can try to replicate it locally to debug? I wasn’t if there was a package.json you could provide that had all of the modules / versions along with the script that does the transpiling.",
"username": "Craig_Aschbrenner"
}
] | Failed to upload node_modules.tar.gz: unknown: Unexpected token | 2020-07-02T22:51:48.890Z | Failed to upload node_modules.tar.gz: unknown: Unexpected token | 2,989 |
null | [
"spark-connector"
] | [
{
"code": "/*\n * Copyright 2016 MongoDB, Inc.\n *\n * Licensed under the Apache License, Version 2.0 (the \"License\");\n * you may not use this file except in compliance with the License.\n * You may obtain a copy of the License at\n *\n * http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\npackage tour;\n\nimport com.mongodb.ConnectionString;\nimport com.mongodb.client.MongoClient;\n <dependency>\n <groupId>org.mongodb.spark</groupId>\n <artifactId>mongo-spark-connector_2.10</artifactId>\n <version>1.1.0</version>\n </dependency>\nException in thread \"main\" java.lang.NoClassDefFoundError: com/mongodb/spark/MongoSpark\n\nat com.virtualpairprogrammers.JavaIntroduction.main(JavaIntroduction.java:27)\nat sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)\nat sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)\nat sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)\nat java.lang.reflect.Method.invoke(Method.java:498)\nat org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:750)\nat org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)\nat org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)\nat org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)\nat org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)\nCaused by: java.lang.ClassNotFoundException: com.mongodb.spark.MongoSpark\nat [java.net](https://java.net/).URLClassLoader.findClass(URLClassLoader.java:381)\nat java.lang.ClassLoader.loadClass(ClassLoader.java:424)\nat java.lang.ClassLoader.loadClass(ClassLoader.java:357)\n... 10 more\nSparkConf conf = new SparkConf()\n.setAppName(\"Load Data from Mongo DB\")\n.set(\"spark.app.id\",\"MongoSparkConnectorTour\")\n.set(\"spark.mongodb.input.uri\", \"mongodb://uname:prod@Host:PortNo/DB.CollectionName\")\n.set(\"spark.mongodb.output.uri\", mongodb://uname:prod@Host:PortNo/DB.CollectionName\");\nJavaSparkContext sc = new JavaSparkContext(conf);\nJavaMongoRDD<Document> customRdd = MongoSpark.load(sc);\nSystem.out.println(\"Download Completed\");\nSystem.out.println( \"Count of Data downloaded \" + customRdd.count());\ncustomRdd.saveAsTextFile(\"/bn_data/Testing/mongoDBData/\", GzipCodec.class);\n",
"text": "I am following your lessons and courses for Spark Java development.\nSorry for the trouble, but i am stuck in the for almost 2 - 3 days and need a push or assistance to move forward.Hope you are doing well during this Covid Situation,I am using the sample code from GIT and trying to connect to Mongo DB from Spark.I am facing and issue when trying to read data from MongoDB.Following is my Spark details,I have Spark 1.6.3 which has Scala 2.10.5I am using the Mongo DB Connector Version 1.1 and package 2.10Following is the dependencies i had used in my MavanGetting the following Error ,Below is the simple Liner Code,If you can give me a pathforward for this it will be helpfull. I have also tried running with the --jar and other options, but still it throws the same error.Also, i have not downloaded this mongo DB package in my Spark (I am not sure if issue is happening because of that).Thanks in Advance",
"username": "Sam_Joshua_Berchmans"
},
{
"code": "",
"text": "I was able to fix this issue at last.\nBuilt it as a Fat jar and it is working for me without any issues.",
"username": "Sam_Joshua_Berchmans"
}
] | Spark MongDB connectivity Issue | 2020-06-30T17:46:35.519Z | Spark MongDB connectivity Issue | 3,787 |
null | [
"backup"
] | [
{
"code": "",
"text": "Hi All,Could someone help me with list of backup tools/ ways we can do backup & restore on Azure Cloud. Need to know the best way to take backup /perform restore for servers built with community edition on Azure Cloud.Regards,\nPriya.",
"username": "KA_Priya"
},
{
"code": "",
"text": "You may want to look into snapshot backups, as described here:I’m not very familiar with Azure, but I’m sure it supports snapshots on persistent disks. You can look into automating the creation of the data disk snapshots by calling the Azure API. One big advantage of snapshot backups is that they will be very fast to create, almost independent of your data and storage size.",
"username": "frankshimizu"
}
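As a rough sketch of the consistency step around snapshot backups mentioned above: if the journal and data files are not on the same volume, writes can be flushed and locked from the mongo shell while the Azure disk snapshot is taken (the exact backup procedure depends on your deployment):

```js
// Flush pending writes and block new ones for the duration of the snapshot.
db.fsyncLock()

// ... trigger the Azure managed-disk snapshot here (portal, CLI, or REST API) ...

// Re-enable writes once the snapshot has been taken.
db.fsyncUnlock()
```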
] | Backup methods /tools available in Azure Cloud | 2020-07-13T07:04:43.218Z | Backup methods /tools available in Azure Cloud | 1,617 |
null | [] | [
{
"code": "{\n “name” : “aname87” ,\n “aname87” : “avalue”\n}\n",
"text": "HelloI want to select “aname87” field but i don’t know its name,i only know that “name” field value has the name.Is there a simple solution to make that reference to that field?Something like that project(\"$\" + $name) = project(\"$aname87\")Thank you",
"username": "Takis"
},
{
"code": "db.collection.aggregate( [\n { \n $addFields: { obj_as_arr: { $objectToArray: \"$$ROOT\" } } \n },\n {\n $unwind: \"$obj_as_arr\" \n },\n { \n $project: {\n _id: 0,\n field_name: \"$obj_as_arr.k\",\n field_value: { $cond: [ { $eq: [ \"$obj_as_arr.k\", \"$name\" ] }, \"$obj_as_arr.v\", \"$$REMOVE\" ] } \n } \n },\n { \n $match: { \n field_value: { $exists: true } \n } \n }\n] )\n{ \"field_name\" : \"aname87\", \"field_value\" : \"avalue\" }",
"text": "Hello @Takis, welcome to the forum.You can do it like this using the $objectToArray aggregation operator.The output:{ \"field_name\" : \"aname87\", \"field_value\" : \"avalue\" }",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thank you for the reply \nI thought that object to array might have helped.\nI was hoping that i could do something like eval(construct_reference)\nBoth construct and eval inside the database (not the client)\nI think i will just use arrays to store unknown values(not fields) and search on them",
"username": "Takis"
}
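A small sketch of the array-based layout Takis mentions (often called the attribute pattern); the collection and field names here are illustrative:

```js
// Unknown field names are stored as values, so they can be matched and indexed.
db.things.insertOne({
  name: "aname87",
  attributes: [ { k: "aname87", v: "avalue" } ]
})

// One multikey index covers lookups on any attribute name/value pair.
db.things.createIndex({ "attributes.k": 1, "attributes.v": 1 })
db.things.find({ "attributes.k": "aname87" })
```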
] | Selecting a field that i don’t know its name,but its name is stored in the document as value | 2020-07-16T01:45:52.414Z | Selecting a field that i don’t know its name,but its name is stored in the document as value | 3,896 |
null | [
"dot-net"
] | [
{
"code": "db.articles.find( { $text: { $search: \"\\\"coffee shop\\\"\" } } )$\"\\\"\"coffee shop\\\"\"\"\"$text\" : { \"$search\" : \"\\\\\\\"coffee shop\\\\\\\"\" } }",
"text": "I am attempting to do text searches with exact phrases via the .NET driver.The guidance in the MongoDB documentation is to encode a search string in escaped quotes to force searching for an exact phrase, e.g. db.articles.find( { $text: { $search: \"\\\"coffee shop\\\"\" } } ).In the .NET driver, I create my search string like this: $\"\\\"\"coffee shop\\\"\"\", which creates the correct search string. (Note this syntax is an interpolated string, although any method will work just fine).However, I get no search results when I am expecting them, and confirm them via the mongo console. When I analyse the final search string passed to MongoDB from my code, I see that it has multiple backslashes, e.g. \"$text\" : { \"$search\" : \"\\\\\\\"coffee shop\\\\\\\"\" } }. I suspect that this is why I get no results.Has anyone else experienced this or know why this might be happening, and how to fix it?",
"username": "Greg_May"
},
{
"code": "$\"\\\"\"coffee shop\\\"\"\"var mySearch = \"coffee shop\";\nvar query = new BsonDocument ( \"$text\", new BsonDocument(\n \"$search\", $\"\\\"{mySearch}\\\"\"\n));\n",
"text": "Hi @Greg_May, and welcome to the forum!In the .NET driver, I create my search string like this: $\"\\\"\"coffee shop\\\"\"\" , which creates the correct search string. (Note this syntax is an interpolated string , although any method will work just fine).Assuming that you would like to substitute the value i.e. “coffee shop” from a variable, you could do as the following example using interpolated string:Regards,\nWan.",
"username": "wan"
}
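Independent of the driver, the expected behaviour is easy to confirm from the mongo shell; if this returns documents but the .NET query does not, the extra backslashes are being introduced on the client side (the indexed field name below is only an example):

```js
// Assumes a text index already exists on the searched field, e.g.:
db.articles.createIndex({ body: "text" })

// Exact-phrase search: the phrase is wrapped in escaped double quotes.
db.articles.find({ $text: { $search: "\"coffee shop\"" } })
```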
] | .NET driver - trouble encoding the search string for exact phrases | 2020-07-15T12:50:46.699Z | .NET driver - trouble encoding the search string for exact phrases | 1,959 |
null | [] | [
{
"code": "",
"text": "Hello,\nI’ve been writing my first application that uses MongoDB, it’s been going well. Now I have a need for transactions for the first time.I followed this tutorial: How to Use MongoDB Transactions in Node.js | MongoDB BlogI get “MongoError: This MongoDB deployment does not support retryable writes. Please add retryWrites=false to your connection string.”If I understand correctly, Transactions internally use “retryable writes”, and my local standalone server does not support it. Is my assumption correct?Assuming the answer was yes, how do I get around this during development time?\nCertainly, in production we will use a cloud-based mongodb with all the bells and whistles. However how can I get past this quickly on our dev machines without investing too much time?Some of us use the mongodb binary directly, some use the docker image.",
"username": "Nathan_Hazout"
},
{
"code": "",
"text": "Hello @Nathan_Hazout, welcome to the forum.Transactions documentation says:To use transactions on MongoDB 4.2 deployments (replica sets and sharded clusters), clients must use MongoDB drivers updated for MongoDB 4.2.You will need at least MongoDB 4.0 replica-set to start working with transactions.That said, setting up a replica-set quickly is very simple using mtools. With a single command you can setup a 3 node replica-set quickly. This should give you start in a development setup.",
"username": "Prasad_Saya"
}
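Once a local replica set is available (mtools' `mlaunch init --replicaset --nodes 3` is one quick way to get a 3-node set), transactions work from the Node.js driver roughly as in this hedged sketch; the collection names are only examples:

```js
// Assumes `client` is a connected MongoClient and `db` a database handle.
async function placeOrder() {
  const session = client.startSession();
  try {
    await session.withTransaction(async () => {
      await db.collection("orders").insertOne({ item: "abc", qty: 1 }, { session });
      await db.collection("stock").updateOne(
        { item: "abc" },
        { $inc: { qty: -1 } },
        { session }
      );
    });
  } finally {
    await session.endSession();
  }
}
```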
] | Using transactions locally | 2020-07-16T06:46:06.173Z | Using transactions locally | 5,670 |
null | [] | [
{
"code": "",
"text": "Hello everyone! Be sure to check out the Realm Hackathon, July 22-24. Want to participate? You can still register & join a team! Check out @Shane_McAllister’s post for details and link to the registration page.",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Jamie"
}
] | Realm Hackathon - July 23, 2020 | 2020-07-16T05:15:58.065Z | Realm Hackathon - July 23, 2020 | 1,368 |
null | [] | [
{
"code": "",
"text": "Hi,I am trying to display a table chart on a collection with 101.7k documents in it. When I disable ‘sample mode’ to render all chart data, the chart is taking couple of minutes to load. I’ve tried adding indexes on fields being used in the chart but I see on difference in chart rendering time. Are there any ways to reduce the chart rendering time ?Thank you.",
"username": "pickelrick26"
},
{
"code": "$sort$limit",
"text": "Hi @pickelrick26 -Indexes do help in Charts, but only if your chart contains filters. If the filtered fields are indexed, MongoDB is way more efficient at finding the matching documents. If you are not using filters on your charts, indexes won’t help at all.100K documents is likely more than a human is every going to look through. I suspect the best thing to do here would be to limit the number of documents returned. Right now there’s no Limit Results option on table charts (it’s coming soon), but you can do this yourself by entering a pipeline in the query bar with a $sort and $limit stage.HTH\nTom",
"username": "tomhollander"
},
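A minimal example of the kind of pipeline that can go into the Charts query bar until the built-in limit option ships; the sort field is only illustrative:

```js
[
  { "$sort": { "createdAt": -1 } },  // newest documents first
  { "$limit": 1000 }                 // cap what the table has to render
]
```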
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to load charts faster, do indexes help? | 2020-07-08T14:19:01.132Z | How to load charts faster, do indexes help? | 2,453 |
[
"realm-studio"
] | [
{
"code": "",
"text": "I can connect to the database programmatically but getting the following error when trying to open it in realm studio:Any idea why is this happening and how could I solve it?Best Regards,\nAndrea",
"username": "Andrea_Borsos"
},
{
"code": "",
"text": "Hi Andrea,To ensure we’re not overlooking a detail, what version of Realm Studio are you using here?",
"username": "kraenhansen"
},
{
"code": "",
"text": "Hi Kraen,I’m also having this problem running v3.1.0 of Realm Studio. Has this problem been investigated?",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "In case this helps the status in the top right of the window bounces between green, yellow and red;\nRealmStudioError779×118 27.1 KB",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Is anyone investigating this problem - I have raised a support ticket for it with no response. It has now been 3 days since I was last able to access any data!",
"username": "Raymond_Brack"
},
{
"code": "",
"text": "Hi @Raymond_Brack,I don’t see an open support case, but will follow up directly for more details.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Following is the content of an email I sent to [email protected] 3 days ago;We had an account with Realm prior to Mongo buying them out but don’t appear able to raise a support ticket. Logging in with my credentials didn’t display our current account and when attempting to raise a support ticket I was prompted to upgrade to a paid account.We are currently having the same problem as documented here Bad State error when trying to open realm in realm studio and need this issue to be resolved ASAP.",
"username": "Raymond_Brack"
},
{
"code": "realm/realm-studio",
"text": "Following is the content of an email I sent to [email protected] 3 days agoHi Raymond,The Support Operations team handles access issues to support systems, but does not work on support cases. However, I expect you should have received a reply by now so I will follow up on that. In this situation I expect the support operations team would direct you as below.If you are a legacy Realm Cloud customer without a support subscription, you can raise operational questions via https://support.realm.io. Please note that this legacy support portal is scheduled to be decommissioned on Sept 7, 2020.Opening cases on the MongoDB Support Portal requires a support subscription. You should have received at least one email with more information on this upcoming transition, but please see Converting your Realm Cloud Subscription to Atlas Developer.If you suspect a bug in Realm Studio, you can also report this directly as a GitHub issue: realm/realm-studio. Since you are a Realm Cloud user, I would contact the support team for initial investigation as this may be an issue with your deployment rather than Realm Studio.Regards,\nStennie",
"username": "Stennie_X"
}
] | Bad State error when trying to open realm in realm studio | 2020-03-18T16:14:49.371Z | Bad State error when trying to open realm in realm studio | 4,631 |
|
null | [
"python",
"release-candidate"
] | [
{
"code": "python -m pip install https://github.com/mongodb/mongo-python-driver/archive/3.11.0rc0.tar.gz\n",
"text": "We are pleased to announce the 3.11.0rc0 release of PyMongo - MongoDB’s Python Driver. This release candidate adds support for MongoDB 4.4.Note that this release will not be uploaded to PyPI and can be installed directly from the GitHub tag:",
"username": "Prashant_Mital"
},
{
"code": "",
"text": "",
"username": "system"
}
] | PyMongo 3.11.0rc0 Released | 2020-07-15T22:58:43.078Z | PyMongo 3.11.0rc0 Released | 1,867 |
null | [] | [
{
"code": "",
"text": "Should i migrate tennis racquets and tennis ball machine reviews website database from mysql to mongodb does it have any speed effect on it.",
"username": "jospeh_lana"
},
{
"code": "",
"text": "Definitively.However a plain migration will not provide you with full benefits. You should work on the data model to profit from the schema less document structure that let you embed related documents rather than relying on join all the time.",
"username": "steevej"
},
{
"code": "",
"text": "Do you have any detailed information on how i would do that for faster loading speed.",
"username": "jospeh_lana"
},
{
"code": "",
"text": "Hi @jospeh_lana,What are the current issues you are trying to solve with your web site? What backend or CMS are you using at the moment?If your site is predominantly content, it would probably benefit more from caching and CDN than migration to a different backend.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "wordpress i wanted to make website fast?",
"username": "jospeh_lana"
},
{
"code": "",
"text": "Hi @jospeh_lana,WordPress currently only supports SQL databases, so fully changing to MongoDB would involve migrating to a new content management platform. If there are parts of your site using custom PHP code working directly with your database (for example, a comparison calculator), you could consider using the MongoDB PHP driver to refactor those features, but this would not replace the core WordPress code.Per my earlier comment, a content-heavy site would probably benefit more from caching and a Content Delivery Network (CDN) to speed up the end user experience. Content can often be generated and cached partially (or fully) rather than being served directly from a database.Since WordPress site optimisation is outside the scope of MongoDB, I’d suggest asking in the WordPress community forums or WordPress Stack Exchange.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Should I migrate from MySQL to MongoDB? | 2020-07-14T15:58:40.900Z | Should I migrate from MySQL to MongoDB? | 2,282 |
null | [] | [
{
"code": "",
"text": "My name is Joonas Hämäläinen, though online (and many places offline too) I’m known as kerbe or kerberos. Nickname from long time ago which has story I can share over cup of something nice. I am fairly fresh entrepreneur, having own company for about six months now, Lunanova Oy. Currently I’m doing pretty much everything myself in the company, but if things go smoothly I might have chance to get small development team onboard. I am not programmer by profession really, it is just something “I have to do” to get things done. I have almost 15 years of professional experience in system administration, project management, sales support, training etc, before I started own company. These days I want to sell my experience to companies that are growing, and maybe not yet need full time employee with this kind of skillset, or who want to train their team in ways of clean code, devops, version control and security.I have been working with project having MongoDB as database for a while now, and growing to like it more and more as I keep learning. Still fresh in the waters, way under a year. Stack otherwise consist nodejs, serverless and react & react native, so currently learning pretty much latest tools out there.As I am bootstrapping this company without huge savings or investments, I’ve cut down hobbies and other expenses from life. But once things start going smoothly, I will invest on my personal computer and return in gaming life. Loving online multiplayers and meeting people across the globe. I love survival games (ARK: Survival Evolved, Rust), big sandbox games (EVE Online, Dwarf Fortress), and then strategy games, and of course roleplaying games. I would like to play some good sandbox kind of MMORPG, but had to settle to Black Desert Online and such due friends being more into those. Though my gaming has been seriously cut back, as soon after I started company, my computer’s GPU melted down, and I had to bolt old GTX580 in it… so it reduces gaming a lot.I’ve already been proven this community to be good source of help:\nGives me confidence to invest in learning & using MongoDB for my projects, as there is way to get help when stuck with documentation. Looking forward how this community grows and hopefully I can transition from one that asks questions to one who has answers at some point, when my own skills grow.",
"username": "kerbe"
},
{
"code": "",
"text": "Welcome, Joonas! We’re thrilled to have you on board. I think entrepreneur is just another word for bootstrapper or ‘the person who wears all the hats’ ",
"username": "Jamie"
},
{
"code": "",
"text": "Hi @kerbe and welcome to the community forums. One of the best ways to learn is to help teach others. Whenever a question comes up, go through and try to answer it, even if you don’t reply publicly. You’ll be amazed at how quickly you can make that transition from asking to answering questions.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Welcome @kerbe! Would love to hear someday how you got a nickname of Kerberos! I hope it’s not because you can be fickle and not always work right like the technology Kerberos sometimes is haha",
"username": "Michael_Grayson"
},
{
"code": "",
"text": "Let’s hope that we meet at some event @Michael_Grayson, then there is chance for it. And no, it isn’t from technology, those who started using it first didn’t have a clue about such things. ",
"username": "kerbe"
},
{
"code": "",
"text": "Welcome everyone or should i be the one who needs to be welcomed, i’m very new to forums but i saw mongodb and i thought i should give it a try cause i just graduated from college and i am hoping to get something started since right now all i know is that i want to be a web developer but honestly i am clueless but have so many ideas i dont know how to start.\ni open to assist with research and the likes just to learn more and also improve my craft",
"username": "samuel_nagba"
},
{
"code": "",
"text": "Hey Samuel! MongoDB University is a great place to get started, learn more, and improve your craft all while getting certified. As a new graduate, you may be interested in C100DEV: MongoDB Certified Developer Associate Exam.",
"username": "Marissa_Jasso"
}
] | Heya, greetings from Finland! | 2020-02-05T21:52:37.312Z | Heya, greetings from Finland! | 2,791 |
null | [
"indexes"
] | [
{
"code": "",
"text": "Chapter 4.2: MongoDB IndexesHello,I need some help with the lab.I find it hard to decide which index best suits all the queries.It seems that every query is best served by a different index so there is no one particular index that serves all.At first glance I thought the best compound index that would fit would be address.state and job or maybe address.state job and first name.Both answers are wrong and I would appreciate it if you could explain why.Thanks, Shirley",
"username": "Shirley_Dahan"
},
{
"code": "",
"text": "You will have better chance to have meaningful answer by using the M201 forum from MongoDB university.",
"username": "steevej"
},
{
"code": "",
"text": "Could you please point out where the forum is? Thanks in advance!",
"username": "Shirley_Dahan"
},
{
"code": "",
"text": "You need an index which is used in all the stages i.e. in both the fetch stage and the sort stage.Also, you can find the MongoDB University forum here.",
"username": "Susnigdha_Bharati"
},
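As a generic illustration of that idea (deliberately not the lab's fields or answer): with a compound index whose prefix is used for equality and whose next key matches the sort, the same index serves both the filter and the sort, so no in-memory SORT stage is needed:

```js
// Hypothetical collection and fields, for illustration only.
db.example.createIndex({ status: 1, date: 1 })

// Equality on the index prefix plus a sort on the following key -> index-backed sort.
db.example.find({ status: "A" }).sort({ date: 1 })
```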
{
"code": "",
"text": "Thank you ",
"username": "Shirley_Dahan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | M201 - chapter 4.2 - lab 4.2.2 | 2020-07-14T09:10:32.019Z | M201 - chapter 4.2 - lab 4.2.2 | 2,369 |
null | [
"c-driver",
"beta"
] | [
{
"code": "",
"text": "I’m pleased to announce version 1.17.0-beta of libbson and libmongoc,\nthe libraries constituting the MongoDB C Driver.Features:Notes:Features:Bug fixes:Notes:Thanks to everyone who contributed to this release.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Hello Kevin,Would you happen to have a release date for the official/stable 1.17.0 version? We are hoping to get a critical fix for https://jira.mongodb.org/browse/CDRIVER-3486 in this version.Thanks and best regards,\nHolman",
"username": "Holman_Lan"
},
{
"code": "",
"text": "Hello @Holman_Lan. We are planning to do the stable release of 1.17.0 in early June.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Thank you, Kevin, for the information!",
"username": "Holman_Lan"
},
{
"code": "",
"text": "Hello Kevin,I wanted to follow up on this thread. Would you have a more concrete release date for the stable release of 1.17.0 C driver that you can share?Thank you!\nHolman",
"username": "Holman_Lan"
},
{
"code": "",
"text": "Hello @Holman_Lan,Apologies for the delay. The 1.17.0 C driver release is planned around the MongoDB 4.4 stable server release. The most recent server release was 4.4.0-rc9. We are planning to release the stable 1.17.0 in early July.Best,\nKevin",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "Thank you, Kevin, for the update!",
"username": "Holman_Lan"
},
{
"code": "",
"text": "Hi @Kevin_Albertson,I would like to know if there an update on the date for the stable release of the 1.17.0 C driver ?Regards,\nAziz",
"username": "Aziz_Zitouni"
},
{
"code": "",
"text": "Hi @Aziz_Zitouni,1.17.0-rc0 was released, which includes complete support for MongoDB 4.4 servers. This is the announcement: MongoDB C driver 1.17.0-rc0 releasedSince it is a release candidate, there are no additional features planned for the stable 1.17.0 release. The stable release is planned around the MongoDB 4.4 server stable release. That is tentatively late July.Best,\nKevin",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C driver 1.17.0-beta released | 2020-04-10T19:35:44.703Z | MongoDB C driver 1.17.0-beta released | 4,949 |
null | [
"app-services-user-auth"
] | [
{
"code": "",
"text": "My app uses Realm Sync to synchronize data across a user’s devices. The authentication is done with Firebase. However, the users can use the app without signing in, in which case the data is stored locally and not synced.The current documentation states (here):Every installation of your app should be uniquely identified as an individual user even if your app does not explicitly require the user to log in. In the case where your app does not require the user to log in, perform a silent login in the app code, which effectively creates a unique identifier for that app installation.But is also states (also here):You should disable anonymous authentication after you complete your proof of concept.I wonder how this “silent login” could be done without anonymous authentication, since the other authentication method is JWT, and this require email and password. How can I silently authenticate a user without any email or password?",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "Hi Jean-Baptiste_Beau,We do not recommend keeping the anonymous auth but you can use it if you have to.We also have a function based authentication which you can customize to perform a “silent” auth. Perhaps querying the cluster for the user existence Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
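A heavily hedged sketch of what such a custom-function authentication provider could look like as a Realm function; the database, collection and payload field names are assumptions, and the function simply maps a device-generated identifier to a stable external user ID:

```js
exports = async function (loginPayload) {
  const users = context.services
    .get("mongodb-atlas")   // linked cluster service name (assumed default)
    .db("app")
    .collection("app_users");

  const { deviceId } = loginPayload;        // generated once per install by the client
  const existing = await users.findOne({ deviceId });
  if (existing) {
    return existing._id.toString();         // reuse the same external user ID
  }

  const result = await users.insertOne({ deviceId, createdAt: new Date() });
  return result.insertedId.toString();      // new external user ID for this install
};
```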
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Sync anonymous authentication for silent login | 2020-07-15T12:50:08.334Z | Realm Sync anonymous authentication for silent login | 3,207 |
null | [] | [
{
"code": "",
"text": "Hey, everyone. How’s everyone doing with your projects?I had to rename my project and now I want to know if is possible to rename the cluster on mongodb Atlas. Is it?Thank you all in advance.Daniel Gomes",
"username": "programad"
},
{
"code": "",
"text": "I don’t think Cluster name can be changed once you created it\nHowever project name can be changed\nSelect the project you want to rename in Atlas.There is a 3 dots button on top left\nWhen you click it you can see project settings.Then edit the project name using pencil icon",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Daniel,You cannot rename the cluster but you can restore it or migrate it to a new one with the new name.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | Rename Project and Cluster | 2020-07-15T00:54:03.734Z | Rename Project and Cluster | 19,465 |
null | [
"installation"
] | [
{
"code": "",
"text": "I downloaded mongodb-win32-x86_64-2012plus-4.2.8-signed from mongdb portal tried to follow the instruction given. It seems installer not working as the pop up disappearing withing seconds. Not sure what is the issue.\nNeed help as I am pretty new to MongoDB.Regrads,\nIndranil",
"username": "Indranil_Banerjee"
},
{
"code": "",
"text": "Hi Indranil,Can you share your operating system version?Please make sure you are running on a supported platform and fulfilled all prerequisites and considerations prior to installing:Kind regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi Pavel,\nIt’s Win 10 professional. I followed instructions mentioned at https://docs.mongodb.com/manual/tutorial/install-mongodb-on-windows/#considerations but unfortunately it did not work.\nRegards,\nIndranil",
"username": "Indranil_Banerjee"
},
{
"code": "",
"text": "Hi Indarnil,Can you try and download the “zip” version? Then you can try to follow the command line instructions here:This should allow you to spionup the mongo server on this host.Let me know if that works for you.Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | MSI windows installer not working in win10 | 2020-07-14T15:58:42.808Z | MSI windows installer not working in win10 | 1,962 |
null | [
"golang"
] | [
{
"code": "",
"text": "Hello,I use golang cucumber to write some tests for my application. I need clean the test data at beginning of each test feature. I call Drop and DeleteMany to clean the data of the previous test cases. But I found the Drop or DeleteMany seems not return in sync and sometimes it may impact the ongoing test, i.e., I create a doc in collection during the test but can’t find the doc in the next step. I suspect the Drop or DeleteMany do their staff during my testing running. Can someone help to clarify if the Drop or DeleteMany are async or sync function?Thanks,James",
"username": "Zhihong_GUO"
},
{
"code": "deleteMany",
"text": "How about creating a new collection with new name for each test. The collection name can have a suffix of timestamp or a incremented number. For example, “test_coll_1”, “test_coll_2”, etc. This way the deleteMany or drop collection on “test_coll_1” will not have any effect on the collection used in the following test: “test_coll_2”.",
"username": "Prasad_Saya"
},
{
"code": "defer collection.Drop(ctx)",
"text": "All of the CRUD methods of the driver are synchronous unless done with an unacknowledged write concern (a writeconcern where w=0). I agree that the solution outlined with @Prasad_Saya is more robust and will also let you run tests in parallel if you choose to do that in the future. I’d also recommend doing a defer collection.Drop(ctx) as part of test cleanup so your tests don’t leave behind a large number of collections on the server.",
"username": "Divjot_Arora"
}
] | Are Drop and DeleteMany async or sync functions in the Go driver? | 2020-07-15T06:59:48.539Z | Are Drop and DeleteMany async or sync functions in the Go driver? | 2,653 |
[
"security",
"configuration"
] | [
{
"code": "",
"text": "I am new to MongoDB. I have installed MongoDB, and have done the configurations. Everything works fine till I add the configuration for TLS.I am using a Self signed Certificate. The MongoDB version I am using is 4.2.5. I am using Windows Server 2016 Datacenter edition. I have enabled TLS (1.0, 1.1, 1.2 (Server)) in registry.The below is my config file:\n\nimage732×309 6.16 KB\nWhen I try to start the MongoDB service, I get the below error:\n\nimage744×242 12 KB\nHowever, I noticed that when I take out the below statements from the config file, the service starts:\nnet:\ntls:\nmode: requireTLS\ncertificateSelector: thumdprint=“45b*********************************************37”",
"username": "Rajesh_Joseph"
},
{
"code": "",
"text": "looks like a typo. You have a d instead of b in thumbprint",
"username": "chris"
},
{
"code": "",
"text": "I tried with “thumbprint”, that also did not work",
"username": "Rajesh_Joseph"
},
{
"code": "",
"text": "Please try with subject instead of thumbprint and check if it worksnet:\ntls:\nmode: requireTLS\ncertificateSelector: subject=\"\"",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Look at your mongod logs and/or try starting mongod from the command line.",
"username": "chris"
},
{
"code": "",
"text": "Hi All,I am also using “CertificateSelector” using thumbprint and i am able to run mongo service as well\nalso i can pass --tlsCertificateSelector option while connecting using mongo.exe (client) and able to connect to the server.but the poblem i am facing with mongodump and mongorestore utility i am not able to pass these parameters to take the database backup.Can any one pls guide me how to take mongo dump if i am using “CertificateSelector” using command lineThanks in Advance.",
"username": "Shudhanshu_Shukla"
}
] | Cannot start MongoDB service after configuring TLS | 2020-04-15T00:12:29.097Z | Cannot start MongoDB service after configuring TLS | 3,108 |
|
null | [
"backup"
] | [
{
"code": "",
"text": "Dear Team,I am new to MongoDB, my requirement is to configure point in time backup restore for my MongoDB.Since there is no replication configure.Regards,\nSoni",
"username": "Soni_Singh"
},
{
"code": "mongodump ",
"text": "Hi @Soni_Singh, what backup strategy are you planning to use? Are you for example relying on mongodump for backups?",
"username": "zOxta"
},
{
"code": "",
"text": "Hi zOxta ,\nThanks for replying ,\nI want complete backup of my database may be using mongodump or any other binaries which will give complete backup without losing any of the data not even index.Regards,\nSoni",
"username": "Soni_Singh"
}
] | Mongo DB backup restore point in time without using replication | 2020-05-12T12:05:02.859Z | Mongo DB backup restore point in time without using replication | 1,788 |
null | [] | [
{
"code": "",
"text": "Hi,Is there any way that mongodump/mongorestore can be performed using the mongoc driver ?Invoking mongodump/mongorestore utilities programmatically and passing credentials through command line imposes security risks.Thanks,\nSanthanu",
"username": "santhanu_mukundan"
},
{
"code": "echo $SECRET | mongodump --username backup\n$SECRET",
"text": "Hi @santhanu_mukundan I don’t believe this is possible. You could still invoke mongodump/mongorestore without exposing the password using environment variables.Something like this:Where $SECRET is a predefined environment variable having the actual password.",
"username": "zOxta"
}
] | Mongodump using the mongoc driver | 2020-07-14T21:44:51.890Z | Mongodump using the mongoc driver | 1,225 |
null | [
"aggregation"
] | [
{
"code": "page | total_impression | total_clicksdb.metrics.aggregate([\n{\n $group: {\n _id : {\n page_id:'$page_id',\n event_type:'load'\n }, total_impressions:{$sum :1},\n _id : {\n page_id:'$page_id',\n event_type:'click'\n }, total_clicks:{$sum :1}\n }\n},\n{\n $project : {\n page_id:'$_id.page_id',\n total_impressions : '$total_impressions', \n total_clicks : '$total_clicks', \n _id : 0\n }\n }, { $out : \"metrics_results\" }\n])\n",
"text": "Hi, I have collection with schema something like belowpage_id | even_typeeven_type - has two values 1. load, 2.clickI’m trying to aggregate it like\npage | total_impression | total_clicksI tried following, but getting same value in both columnsCan you please help me?",
"username": "gowri_sankar"
},
{
"code": "{ _id : { page_id : \"/index\" , event_type : \"load\" } , total : 362 }\n{ _id : { page_id : \"/index\" , event_type : \"click\" } , total : 55 }\n",
"text": "We could help you better if you provide real documents. We can work with just a schema like page_id|event_type but then we need to create documents in our env. rather than just cut-n-pasting what you provide. But time being in short supply we sometimes just skip over or we supply untested solution.As a first step, you should have a single _id field in your $group. Do for event_type what you did for page_id. You will end up with documents out of the first stage like:It is not exactly the format you want but it is getting there.",
"username": "steevej"
},
{
"code": "page | total_impression | total_clicks48704 | 1939 | 195 ",
"text": "Hi, Thanks for the response! What you posted is correct, I am also getting similar result, but I want project the data in single row as i mentioned in questionexamplepage | total_impression | total_clicks\n48704 | 1939 | 195 { “_id” : { “page_id” : “48704”, “event_type” : “click” }, “total” : 195 }{ “_id” : { “page_id” : “48704”, “event_type” : “load” }, “total” : 1939 }Thanks",
"username": "gowri_sankar"
},
{
"code": "{\n\t\"$group\" : {\n\t\t\"_id\" : \"$_id.pid\",\n\t\t\"counts\" : {\n\t\t\t\"$push\" : {\n\t\t\t\t\"event\" : \"$_id.et\",\n\t\t\t\t\"total\" : \"$c\"\n\t\t\t}\n\t\t}\n\t}\n}\n",
"text": "The following stagewill bring you closer.I could not use the sample 2 documents you supplied as they where in a quote block rather than a pre, code or triple back ticks. The quotes were all screwed up and I had to type the documents so I used short field names.",
"username": "steevej"
}
] | Aggregate count of one field based values | 2020-07-14T15:58:27.351Z | Aggregate count of one field based values | 5,285 |
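An alternative way to get the single-row shape the original poster asked for (page | total_impressions | total_clicks) is to do the pivot inside a single $group with conditional sums; the field names below follow the ones used in the question:

db.metrics.aggregate([
  { $group: {
      _id: "$page_id",
      // count a document only when its event_type matches
      total_impressions: { $sum: { $cond: [{ $eq: ["$event_type", "load"] }, 1, 0] } },
      total_clicks:      { $sum: { $cond: [{ $eq: ["$event_type", "click"] }, 1, 0] } }
  } },
  { $project: { _id: 0, page_id: "$_id", total_impressions: 1, total_clicks: 1 } }
])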
null | [
"node-js"
] | [
{
"code": "rs0:PRIMARY> rs.config()\n{\n \"_id\" : \"rs0\",\n \"version\" : 319263,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 2,\n \"host\" : \"192.168.45.77:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 5,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 4,\n \"host\" : \"192.168.45.77:27015\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 5,\n \"host\" : \"192.168.45.77:27020\",\n \"arbiterOnly\" : true,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 0,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 5000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5ed8d7489ff195f6b8ff442c\")\n }\n}\nconst {ObjectID, MongoClient } = require('mongodb')\n\nclient = new MongoClient('mongodb://localhost:27017,localhost:27015,localhost:27020/logs?replicaSet=rs0', {\n useUnifiedTopology: true,\n serverSelectionTimeoutMS: 15000\n})\n\nclient.on('topologyDescriptionChanged', event => {\n console.log(event.newDescription.type)\n})\n\nlet db\n\nclient.connect((err, client) => {\n if (err) {\n return console.log('error inside connection', err)\n } \n \n db = client\n})\n\nsetInterval(() => {\n db\n .db()\n .collection('logs')\n .insertOne({\n _id: new ObjectID(),\n foo: 'bar'\n })\n .then(result => {\n console.log(result.insertedId)\n })\n .catch(err => {\n console.log(`err`, err)\n })\n}, 500)\nmore than 500 mstopologyDescriptionChangedless than 501 mstopologyDescriptionChangedserverSelectionTimeoutMSMongoServerSelectionError: not master",
"text": "I’m Using Mongodb Native driver 3.6configured PSA replicaSet (on the same machine) and this is its configuration parametersi tried to insert into the database with interval and intentionaly stop the primary server to confirm election is done and a new primary is selected1- interval time is more than 500 ms2- interval time is less than 501 msany help please?",
"username": "ahmed_naser"
},
{
"code": "from pymongo import MongoClient\nfrom time import sleep\n\n\nimport threading\n\ndef set_interval(func, sec):\n def func_wrapper():\n set_interval(func, sec)\n func()\n t = threading.Timer(sec, func_wrapper)\n t.start()\n return t\n\n\nc = MongoClient('mongodb://localhost:27017,localhost:27015,localhost:27020/?replicaSet=rs0').test\n\ndef insert():\n z = c.test.insert_one({\"x\": 1})\n print(z.inserted_id)\n\n\nset_interval(insert,.1)\n",
"text": "in interval 100 ms it worked fine and no errors",
"username": "ahmed_naser"
}
] | Replica set doesn't make election if insertion interval is less than 501 ms | 2020-07-14T15:58:33.913Z | Replica set doesn’t make election if insertion interval is less than 501 ms | 2,019 |
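One thing worth checking for the failing case described in the thread above is retryable writes, which let the driver transparently retry a single insert that fails because of a primary election. A hedged sketch for the Node.js driver (the connection string matches the one in the question; retryWrites requires MongoDB 3.6+ and a replica set, and the retry helper is purely illustrative):

const { MongoClient } = require('mongodb');

const client = new MongoClient(
  'mongodb://localhost:27017,localhost:27015,localhost:27020/logs?replicaSet=rs0&retryWrites=true',
  { useUnifiedTopology: true, serverSelectionTimeoutMS: 15000 }
);

// For anything beyond the single automatic retry, wrap the write in an explicit retry loop
// so the election window (a few seconds) can pass before giving up.
async function insertWithRetry(coll, doc, attempts = 3) {
  for (let i = 0; i < attempts; i++) {
    try {
      return await coll.insertOne(doc);
    } catch (err) {
      if (i === attempts - 1) throw err;
      await new Promise(resolve => setTimeout(resolve, 2000));
    }
  }
}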
null | [
"indexes"
] | [
{
"code": "",
"text": "Hi team ,Need help on indexing part in mongodb. as per developer requirement , due to poor performance, I created single key and compound indexes on certain columns of a query. But when checked I found , indexes does not getting used while query run and in Mongo Compass , there is 0 value in index execution Usage.How can i make use of those indexes to performance improvements ?Kind regards\nGaurav",
"username": "Gaurav_Gupta"
},
{
"code": "db.collection.explain('executionStats').find(...)mongo",
"text": "Hi Gaurav,There are many reasons why a query didn’t use indexes. But before that, could you post more details:Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevinplease find belowPlease post some example documents == >>\n_id : 45021\n_class : “com.ctl.bmp.service.pricing_service.ds.dto.PriceBook”\npriceBookName : “SOUTH TACOMA WA TACMWAFA ESHOP-Customer Care Individual Regular”\npriceBookDescription : “SOUTH TACOMA WA TACMWAFA ESHOP-Customer Care Individual Regular”\neffectiveFromDate : 2019-05-21T00:00:00.000+00:00\ncurrency : “USD”\ncatalogId : “EC02875”\ncatalogName : “Catalog WA SOUTH TACOMA TACMWAFA ESHOP-Customer Care Individual Regula…”\ncatalogSpecName : “Catalog Spec WA SOUTH TACOMA TACMWAFA ESHOP-Customer Care Individual R…”\ncatalogSpecId : “ES02875”\noffers : ArrayWhat are the indexes defined in the collection? == >>\n1 ) catalogName_catalogSpecId_\n2) idHow did you check that no indexes are used? == >> Please find attach screenshot from Mongo Compass\n( here it Usage columns showing 0)\nIndex Usage1072×417 27.4 KB\nWhat is your MongoDB version == > 4.0.14-8Please also not there are two collections ( one with document which example above) which linked with DBreff - and size of both collections are 8.5 and 60.7 GB respectively. does size of collections also impacts performance ? i read on web that DBReff does impacts performance but not sure about Collection size.",
"username": "Gaurav_Gupta"
},
{
"code": "",
"text": "Hi GauravWhat is your MongoDB version == > 4.0.14-8This seems to be a version of the Percona fork of MongoDB server. Is this correct?If yes, you might want to contact Percona support regarding this behaviour, since it involves modified server code that may work differently from official MongoDB servers.What I did notice from your screenshot is, if your collections are so large, the index sizes are suspiciously small. This could be either 1) the large collections are not indexed, or 2) the Percona fork of the server is behaving differently.does size of collections also impacts performanceIn most cases, yes. If improperly indexed, larger collections would require more resources to process. For example, a query with a collection scan would then require the server to load all 8.5/60.7 GB of data from disk into memory to answer the query. This will be a massive workload if the server was provisioned with a smaller amount of RAM and slower disks.Having said that, if your working set (data that’s frequently accessed + indexes) can comfortably fit in RAM, performance impact could be minimal even with large collections.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin ,Thanks lot for helping out in this , can you please help to know why index usage showing 0 ? in last screenshot. How can we make them in use for queries ?Kind regards\nGaurav",
"username": "Gaurav_Gupta"
},
{
"code": "",
"text": "Hi Gaurav,As your server is not an official MongoDB release but a fork from Percona, I’m afraid I can’t really help why those indexes are not used in this case since I have no idea what changes Percona did to the server. I would suggest contacting Percona Support for this.Having said that, in an official MongoDB server, you would need to create indexes to support your queries.Regarding collection size, query performance typically depends on:I’m afraid the answer to those questions are very use-case based and there’s no one general answer to answer them all. There is no “recommended” collection size, as a result.Thus, this is a very deep topic with very personal answers, as what’s considered as acceptable performance to other people may not be acceptable to you, and vice versa.However, if I may offer suggestions, I would recommend you to take a look at MongoDB University free courses, especially:The courses listed above would hopefully help you answer the questions you have.Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi KevinSorry for delay in responding.Thanks so much for kind help, please allow me to work on those points you suggested and revert you for further queries.Thanks once again\nGaurav",
"username": "Gaurav_Gupta"
},
{
"code": "",
"text": "Hi Kevinjust add on - since our collection size are 60+ GB…is there any utility in mongo which can reduced the collection size ? or should we go for purging old documents from the collection? to make it reduce.Kind regards\nGaurav",
"username": "Gaurav_Gupta"
}
] | Mongodb indexing | 2020-07-05T11:57:28.682Z | Mongodb indexing | 4,365 |
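To make the indexing advice above concrete: an index is only picked up (and its Compass usage counter only increases) when a query's filter fields match a prefix of the index. A hedged sketch in the mongo shell, assuming a hypothetical priceBooks collection and a query that filters on catalogName and catalogSpecId as in the sample document:

// create an index whose leading fields match the query predicate
db.priceBooks.createIndex({ catalogName: 1, catalogSpecId: 1 })

// confirm the planner actually uses it: the winning plan should show IXSCAN,
// and totalDocsExamined should be close to nReturned rather than the collection size
db.priceBooks.find(
  { catalogName: "Catalog WA SOUTH TACOMA ...", catalogSpecId: "ES02875" }
).explain("executionStats")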
null | [
"data-modeling"
] | [
{
"code": "",
"text": "We are trying to build a small stock management web app for a small company. It is the first time that we use NoSQL DB. We designed a system with these collections;",
"username": "Yunus_HUYUT"
},
{
"code": "",
"text": "Hi Yunus,I will recommend going through one of our schema design trainings and also reviewing the following documents:A summary of all the patterns we've looked at in this seriesNow regarding the presented use case I find it hard to understand , is every mentioned entity is a collection? Does the units one represent a many to many relationship?Additionally, what do you mean by a single stock document… What does this document hold? Is it a document per stock with its history and sales is sach transaction documents?Thanks\npavel",
"username": "Pavel_Duchovny"
}
] | Stock Management Software with MongoDB | 2020-07-14T08:41:37.455Z | Stock Management Software with MongoDB | 1,890 |
null | [
"node-js",
"field-encryption"
] | [
{
"code": "db.createCollection(\"users\", {\n validator: {\n $jsonSchema: {\n bsonType: \"object\",\n properties: {\n date_of_birth: {\n encrypt: {\n keyId: [{\n \"$binary\": {\n base64: \"%s\",\n subType: \"04\"\n }\n }],\n bsonType: \"string\",\n algorithm: \"AEAD_AES_256_CBC_HMAC_SHA_512-Random\"\n }\n },\n }\n }\n }\n});",
"text": "Trying to follow the mongo client side field level encryption tutorials but I’m struggling to get the schema working with the keyID field.See below for my JS command.I’m receiving this error “Array elements must have type BinData, found object” with code 51088.I’m interpreting this as the keyID is not an array of UUID but I can’t find any information on how to get around this.",
"username": "L_B"
},
{
"code": "keyId{\n \"$binary\": {\n base64: \"%s\",\n subType: \"04\"\n }\n}\nBinarybase64mongodb.Binaryvar Binary = require('mongodb').Binary;\nvar buffer = Buffer.from(base64KeyId, 'base64');\nvar keyIdBinary= new Binary(buffer, Binary.SUBTYPE_UUID);\nkeyId: [keyIdBinary]",
"text": "Hi @L_B,I’m receiving this error “Array elements must have type BinData, found object” with code 51088.This is because the value of keyId that is passed is in the form of object, or document. In this case it’s :The value needs to be a Binary instance. If you would like to construct this from a base64 string you can utilise mongodb.Binary, i.e:Then you can use it as keyId: [keyIdBinary] , please note that it’s still in array format.See also Client-Side Field Level Encryption Guide: Verify Data Encryption Key Creation for more information.Regards,\nWan.",
"username": "wan"
}
] | Mongo Client Side Field Level Encryption Key ID schema error | 2020-07-07T19:28:12.102Z | Mongo Client Side Field Level Encryption Key ID schema error | 3,811 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hello and good day!I am quite new to MongoDb (and MongoDb C# Drivers) and lately, we are trying to implement an update wherein we use the value of a field and a variable (method parameter) to update several fields.Basically, our doc is something like thispublic class Inventory\n{\npublic string _id;\npublic decimal quantity;\npublic decimal totalCost;\n}What we want to do is to update the quantity and totalCost based on a passed value (qty).\n(1) totalCost -= (qty * (totalCost / quantity))\n(2) quantity -= qtyThe logic behind this is that we want to retain the average cost of our item.\nTake note: the value of quantity field in step (1) should use the original value and not the result of step (2).We can implement this using 2 queries but in our case, we need to execute this logic in one call only as there are different threads that will update a single item.I have read the docs about Aggregate and Projection (and using Expressions) but I cannot seem to figure out how to use or combine the result of projection into the aggregate update.Tried this projection to return the value that should be deducted from totalCostBuilders.Projection.Expression(e => (e.totalCost / e.quantity) * -qty);Thank you and hope you can point us in the right direction.",
"username": "Charles_Stephen_Vice"
},
{
"code": "",
"text": "Here is an equivalent of what we are trying to achieve in mongo shell, provided that qty = 500.db.inventory.updateOne( { _id: “1” }, [ { “$set”: { “TotalCost”: { “$add”: [\"$TotalCost\", { “$multiply”: [-500, { “$divide”: [\"$TotalCost\", “$Quantity”] }] }] } } } ] )",
"username": "Charles_Stephen_Vice"
},
{
"code": "BsonDocumentvar newQuantity = 500;\n var pipeline = new BsonDocumentStagePipelineDefinition<BsonDocument, BsonDocument>(\n new[] { new BsonDocument(\"$set\", \n new BsonDocument{{\"TotalCost\", \n new BsonDocument(\"$add\", \n new BsonArray{ \"$TotalCost\", \n new BsonDocument(\"$multiply\", \n new BsonArray{newQuantity, \n new BsonDocument(\"$divide\", \n new BsonArray{ \"$TotalCost\", \"$Quantity\"}\n )})})}, \n {\"Quantity\", new BsonDocument(\"$subtract\", \n new BsonArray{\"$Quantity\", newQuantity}\n )}})}\n);\nvar updateDefinition = new PipelineUpdateDefinition<BsonDocument>(pipeline);\nvar result = collection.UpdateOne(new BsonDocument{}, updateDefinition);\nBsonDocument.Parse()var newQuantity = 500;\nstring patternPipeline = @\"{{ '$set': {{ \n 'TotalCost': {{ \n '$add': ['$TotalCost', {{ \n '$multiply': [ {0} , {{ \n '$divide': ['$TotalCost', '$Quantity']\n }}] \n }}]}}, \n 'Quantity': {{\n '$subtract': ['$Quantity', {1}]\n }} }} }}\"; \nstring updatePipeline = string.Format(patternPipeline, newQuantity, newQuantity); \nvar pipeline = new BsonDocumentStagePipelineDefinition<BsonDocument, BsonDocument>(\n new[] { BsonDocument.Parse(updatePipeline)});\nvar updateDefinition = new PipelineUpdateDefinition<BsonDocument>(pipeline);\nvar result = collection.UpdateOne(new BsonDocument{}, updateDefinition);\n",
"text": "Hi @Charles_Stephen_Vice , and welcome to the forum!I have read the docs about Aggregate and Projection (and using Expressions) but I cannot seem to figure out how to use or combine the result of projection into the aggregate update.You can either use BsonDocument to construct PipelineDefinition as below example:Alternatively, you could utilise BsonDocument.Parse() to build from a string pipeline, i.e.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Update using Expressions | 2020-07-10T11:45:47.030Z | Update using Expressions | 3,796 |
null | [] | [
{
"code": "",
"text": "Hello! My name is Jane Fine and I work on Developer Experience here at MongoDB. We have an exciting opportunity to participate in a user research project that is already in flight.We are looking for Atlas users who have experience with the MongoDB Aggregation Pipeline. Our goal is to better understand how developers use the MongoDB Query Language (MQL) and the Aggregation Pipeline so that we can make learning and adoption easier in the future, as well as prioritize features on our roadmap.If you believe you have the necessary experience and would like to be a part of this study, please direct message me. If selected, we will invite you to an hour-long interview which will be completely confidential.We look forward to working with you!",
"username": "Jane_Fine"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Join MongoDB Aggregation Pipeline User Research! | 2020-07-14T22:47:33.463Z | Join MongoDB Aggregation Pipeline User Research! | 3,001 |
null | [
"aggregation",
"performance"
] | [
{
"code": "db.getCollection(\"SourceRecon\").aggregate(\n [\n { \n \"$match\" : { \n \"DtoName\" : \"CashflowInventory\"\n }\n }, \n { \n \"$match\" : { \n \"BusinessDate\" : \"20200703\"\n }\n }, \n { \n \"$match\" : { \n \"SourceSystem\" : \"TRM\"\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"FileRecon\", \n \"localField\" : \"PrimaryKey\", \n \"foreignField\" : \"PrimaryKey\", \n \"as\" : \"FileRecon\"\n }\n }, \n { \n \"$unwind\" : { \n \"path\" : \"$FileRecon\", \n \"preserveNullAndEmptyArrays\" : true\n }\n }, \n { \n \"$match\" : { \n \"$expr\" : { \n \"$ne\" : [\n \"$Hash\", \n \"$FileRecon.Hash\"\n ]\n }\n }\n }, \n { \n \"$match\" : { \n \"FileRecon.Hash\" : { \n \"$exists\" : true\n }\n }\n }, \n { \n \"$project\" : { \n \"_id\" : 0.0, \n \"NoSQLSourceStructure.Hash\" : \"$Hash\", \n \"NoSQLSourceStructure.LstHashColumns\" : \"$LstHashColumns\", \n \"NoSQLFilePreparedStructure.Hash\" : \"$FileRecon.Hash\", \n \"NoSQLFilePreparedStructure.LstHashColumns\" : \"$FileRecon.LstHashColumns\"\n }\n }\n ], \n { \n \"allowDiskUse\" : false\n }\n);\n{ \n \"stages\" : [\n { \n \"$cursor\" : { \n \"query\" : { \n \"$and\" : [\n { \n \"$and\" : [\n { \n \"DtoName\" : \"CashflowInventory\"\n }, \n { \n \"BusinessDate\" : \"20200703\"\n }\n ]\n }, \n { \n \"SourceSystem\" : \"TRM\"\n }\n ]\n }, \n \"fields\" : { \n \"FileRecon.Hash\" : NumberInt(1), \n \"FileRecon.LstHashColumns\" : NumberInt(1), \n \"Hash\" : NumberInt(1), \n \"LstHashColumns\" : NumberInt(1), \n \"NoSQLFilePreparedStructure\" : NumberInt(1), \n \"NoSQLSourceStructure\" : NumberInt(1), \n \"PrimaryKey\" : NumberInt(1), \n \"_id\" : NumberInt(0)\n }, \n \"queryPlanner\" : { \n \"plannerVersion\" : NumberInt(1), \n \"namespace\" : \"EibIrrBb_Recon.SourceRecon\", \n \"indexFilterSet\" : false, \n \"parsedQuery\" : { \n \"$and\" : [\n { \n \"BusinessDate\" : { \n \"$eq\" : \"20200703\"\n }\n }, \n { \n \"DtoName\" : { \n \"$eq\" : \"CashflowInventory\"\n }\n }, \n { \n \"SourceSystem\" : { \n \"$eq\" : \"TRM\"\n }\n }\n ]\n }, \n \"queryHash\" : \"4E57EB9A\", \n \"planCacheKey\" : \"B1C176CE\", \n \"winningPlan\" : { \n \"stage\" : \"FETCH\", \n \"inputStage\" : { \n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : { \n \"BusinessDate\" : NumberInt(-1), \n \"SourceSystem\" : NumberInt(1), \n \"DtoName\" : NumberInt(1)\n }, \n \"indexName\" : \"BusinessDate_-1_SourceSystem_1_DtoName_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : { \n \"BusinessDate\" : [\n\n ], \n \"SourceSystem\" : [\n\n ], \n \"DtoName\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : { \n \"BusinessDate\" : [\n \"[\\\"20200703\\\", \\\"20200703\\\"]\"\n ], \n \"SourceSystem\" : [\n \"[\\\"TRM\\\", \\\"TRM\\\"]\"\n ], \n \"DtoName\" : [\n \"[\\\"CashflowInventory\\\", \\\"CashflowInventory\\\"]\"\n ]\n }\n }\n }, \n \"rejectedPlans\" : [\n { \n \"stage\" : \"FETCH\", \n \"inputStage\" : { \n \"stage\" : \"IXSCAN\", \n \"keyPattern\" : { \n \"BusinessDate\" : NumberInt(-1), \n \"SourceSystem\" : NumberInt(1), \n \"DtoName\" : NumberInt(1), \n \"PrimaryKey\" : NumberInt(1)\n }, \n \"indexName\" : \"BusinessDate_-1_SourceSystem_1_DtoName_1_PrimaryKey_1\", \n \"isMultiKey\" : false, \n \"multiKeyPaths\" : { \n \"BusinessDate\" : [\n\n ], \n \"SourceSystem\" : [\n\n ], \n \"DtoName\" : [\n\n ], \n \"PrimaryKey\" : [\n\n ]\n }, \n \"isUnique\" : false, \n \"isSparse\" : false, \n \"isPartial\" : false, \n \"indexVersion\" : 
NumberInt(2), \n \"direction\" : \"forward\", \n \"indexBounds\" : { \n \"BusinessDate\" : [\n \"[\\\"20200703\\\", \\\"20200703\\\"]\"\n ], \n \"SourceSystem\" : [\n \"[\\\"TRM\\\", \\\"TRM\\\"]\"\n ], \n \"DtoName\" : [\n \"[\\\"CashflowInventory\\\", \\\"CashflowInventory\\\"]\"\n ], \n \"PrimaryKey\" : [\n \"[MinKey, MaxKey]\"\n ]\n }\n }\n }\n ]\n }\n }\n }, \n { \n \"$lookup\" : { \n \"from\" : \"FileRecon\", \n \"as\" : \"FileRecon\", \n \"localField\" : \"PrimaryKey\", \n \"foreignField\" : \"PrimaryKey\", \n \"unwinding\" : { \n \"preserveNullAndEmptyArrays\" : true\n }\n }\n }, \n { \n \"$match\" : { \n \"$and\" : [\n { \n \"$expr\" : { \n \"$ne\" : [\n \"$Hash\", \n \"$FileRecon.Hash\"\n ]\n }\n }, \n { \n \"FileRecon.Hash\" : { \n \"$exists\" : true\n }\n }\n ]\n }\n }, \n { \n \"$project\" : { \n \"_id\" : false, \n \"NoSQLSourceStructure\" : { \n \"Hash\" : \"$Hash\", \n \"LstHashColumns\" : \"$LstHashColumns\"\n }, \n \"NoSQLFilePreparedStructure\" : { \n \"Hash\" : \"$FileRecon.Hash\", \n \"LstHashColumns\" : \"$FileRecon.LstHashColumns\"\n }\n }\n }\n ], \n \"ok\" : 1.0\n}\n",
"text": "Hi everyone,My aggregation query is extremely slow. Even knowing that I have 22 million rows per collection I believe there is something wrong with my indexes.This is the query:This is the explain where I can see rejected plans. Is that the reason why is so slow?",
"username": "Felipe_Cabral_Jeroni"
},
{
"code": "$match:{\n\"DtoName\" : \"CashflowInventory\",\n\"BusinessDate\" : \"20200703\",\n\"SourceSystem\" : \"TRM\"}\n",
"text": "Merge the first three match stages into a single match stage:-Same for the second set of matches used after unwind stage.In the project stage, _id should be having a value “0” only.Finally, you have set allowDiskUse to false, for an aggregation query involving 22 million documents. That’s fine, but you should understand that when you use the unwind stage, each and every key in the “Filerecon” array is converted into a single document and then aggregated upon. The 22million rows increases to quite a number when you’re doing that.An alternative to this is to use a project stage instead of an unwind stage, which you can directly use without involving match stages(in some cases). However, you haven’t posted any details regarding the index you’re using, neither have you posted an example data set without which it becomes impossible for me to provide further suggestions.",
"username": "Susnigdha_Bharati"
}
] | Aggregation slow | 2020-07-14T15:58:52.233Z | Aggregation slow | 1,864 |
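Applying the advice above to the pipeline in the question, the three leading $match stages collapse into one, and the two $match stages that ran after the $unwind collapse into one as well; a sketch of the reshaped pipeline:

db.SourceRecon.aggregate([
  // one $match that the BusinessDate/SourceSystem/DtoName index can serve
  { $match: { DtoName: "CashflowInventory", BusinessDate: "20200703", SourceSystem: "TRM" } },
  { $lookup: { from: "FileRecon", localField: "PrimaryKey", foreignField: "PrimaryKey", as: "FileRecon" } },
  { $unwind: { path: "$FileRecon", preserveNullAndEmptyArrays: true } },
  // one $match replacing the two stages that ran after the $unwind
  { $match: {
      "FileRecon.Hash": { $exists: true },
      $expr: { $ne: ["$Hash", "$FileRecon.Hash"] }
  } },
  { $project: {
      _id: 0,
      "NoSQLSourceStructure.Hash": "$Hash",
      "NoSQLSourceStructure.LstHashColumns": "$LstHashColumns",
      "NoSQLFilePreparedStructure.Hash": "$FileRecon.Hash",
      "NoSQLFilePreparedStructure.LstHashColumns": "$FileRecon.LstHashColumns"
  } }
])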
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 3.6.19-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 3.6.18. The next stable release 3.6.19 will be a recommended upgrade for all 3.6 users.Fixed in this release:3.6 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team",
"username": "Jon_Streets"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 3.6.19-rc0 is released | 2020-07-14T18:08:36.705Z | MongoDB 3.6.19-rc0 is released | 1,672 |
null | [] | [
{
"code": "[\"oauth2-apple\"]app.currentUser()",
"text": "After successfully logging a user in or signing a user up the identities property on the user object contains the appropriate user identities e.g [\"oauth2-apple\"] however the persisted user object that is loaded when calling app.currentUser() when the app is restarted always contains an empty identities array, regardless of which provider was used to authenticate the user. Is this the intended behaviour? it makes it very difficult to determine if a user is logged in anonymously or with another provider. Are we supposed to record this info ourselves in the custom data document?",
"username": "Theo_Miles"
},
{
"code": "",
"text": "@Theo_Miles This sounds like an issue - we should be preserving the identities array - can you file an issue here GitHub - realm/realm-swift: Realm is a mobile database: a replacement for Core Data & SQLiteA reproduction case would be helpful",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thanks, will create an issue there.Update: Github issue is being tracked here, this is indeed a bug user.identities property is not persisted between app restarts. · Issue #6647 · realm/realm-swift · GitHub",
"username": "Theo_Miles"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm iOS SDK user identities missing | 2020-07-13T16:31:53.201Z | Realm iOS SDK user identities missing | 1,772 |
[] | [
{
"code": "",
"text": "Following the docs to enable custom user meta data on a realm cluster does not currently work, there are no options availble in the database name dropdown. Are there more steps required to make this work?Screenshot 2020-07-14 at 10.24.372444×1142 198 KB",
"username": "Theo_Miles"
},
{
"code": "",
"text": "It looks like you don’t have a database in your cluster yet. You can create one by going to Rules -> Add Collection.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "Hrmm, I can’t see any “Add Collection” button in Rules. I do see my database and collections in there however. I tried turning devmode on to see if that would open up the option or show my database name in the dropdown but still no luck.",
"username": "Theo_Miles"
}
] | Cannot enable custom user data in realm user control panel | 2020-07-14T08:26:46.435Z | Cannot enable custom user data in realm user control panel | 1,412 |
|
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi. I’m using the Debezium source Sql Server and saving to Kafka and I’m using the sink to save the data to mongodb. I would like to know if it is possible to save data only from fields that have been changed. For example, json there are fields before and after comparing those fields and saving only the changes.",
"username": "Fernando_Silva"
},
{
"code": "",
"text": "Hi Fernando,At the moment the supported sources for the CDC are :https://docs.mongodb.com/kafka-connector/master/kafka-sink-cdc/If you are working with one of the sources above consider using a post processor to compute the changed documents/fields.Thank you\nPavel",
"username": "Pavel_Duchovny"
}
] | Save only change data from CDC | 2020-07-13T22:09:18.834Z | Save only change data from CDC | 1,976 |
null | [
"configuration"
] | [
{
"code": "",
"text": "I have two different applications, One running on a CentOS machine and another one on Ubuntu. I am using Internal ip (From VPC network range) to connect to Mongo Server from my local machine (Using different clients like compass, mongo shell, nosql booster).On Ubuntu, with bind ip as 127.0.0.1, It allows to connect but on CentOS it doesn’t. (I had to define 0.0.0.0 on CentOS in order to connect from my local machine)I am not able to understand why this is happening. Is this OS related thing or this is expected to be the same way?",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "The IP 127.0.0.1 is the localhost and will only be available from the local machine. If you want to access a server from another machine you have to bind with an address that is routed from the machine running the client to the server.",
"username": "steevej"
},
{
"code": "",
"text": "You’re correct @steevej. But my point here is, Even if you don’t binIp with 0.0.0.0 on Ubuntu, It allows you to connect from remote machine. That’s what took my attention when I was trying to connect to this two different servers, one with ubuntu and another with centos. It did work for Ubuntu surprisingly but not for CentOS.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Which IP was specified on Ubuntu when it was not 0.0.0.0 and you could access the server from another machine?The output of ss -tlnp and ps -aef | grep [m]ongo would be useful when this happen.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | bindIp: CentOS vs Ubuntu | 2020-07-13T07:52:32.734Z | bindIp: CentOS vs Ubuntu | 2,475 |
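For completeness on the thread above: the behaviour described is controlled purely by the net.bindIp setting in each server's mongod.conf, so one plausible explanation is simply that the two machines were running with different effective configurations. A typical configuration that allows remote connections without opening all interfaces (the second address is a placeholder for the VPC-internal interface):

net:
  port: 27017
  bindIp: 127.0.0.1,10.0.0.12   # localhost plus the VPC-internal interface, instead of 0.0.0.0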
null | [
"java"
] | [
{
"code": "{\n \"from\": \"person\",\n \"let\": {\n \"id\": \"$id\"\n },\n \"pipeline\": [\n {\n \"$match\": {\n \"$expr\": {\n \"$and\": [\n {\n \"$eq\": [\n \"$artifactId\",\n \"$$id\"\n ]\n },\n {\n \"$eq\": [\n \"$personId\",\n UUID(\"d52aa8ae-bb1a-4e54-8450-bd633bfb6213\")\n ]\n }\n ]\n }\n }\n }\n ],\n \"as\": \"matchedField\"\n}\n{\n \"$binary\": \"1piYCuzrS7aXLEazXydQFw==\",\n \"$type\": \"04\"\n}\n",
"text": "I am using the MongoDB Java Driver to do a lookup between two collections using this method: Bson lookup(final String from, @Nullable final List<Variable> let, final List<? extends Bson> pipeline, final String as)I have a match stage which uses expr() because I need to use a variable introduced by let. But I also have to match on a UUID.This gives me the desired results in Compass itself but I am not sure how to translate it to Java. When I hit the convert to Java button in Compass, it says “Symbol ‘UUID’ is undefined”I have tried writing the UUID out like this but it tells me “An object representing an expression must have exactly one field”. This is how we usually write our UUIDs out in queries.I also tried pulling the UUID into its own match stage which does not use expr since it’s not using a let variable. This didn’t work because it thinks $binary is a field in one of the collections.",
"username": "Raine_Jordan"
},
{
"code": "",
"text": "Hello @Raine_Jordan, welcome to the MongoDB forum.Please post samples of the two collections used in the aggregation. Also, specify the versions of Compass, Java Driver and the MongoDB server.",
"username": "Prasad_Saya"
}
] | How to Match a UUID inside Expr() in a Lookup using the MongoDB Java Driver | 2020-07-13T21:26:17.636Z | How to Match a UUID inside Expr() in a Lookup using the MongoDB Java Driver | 3,279 |
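The thread above was left without a posted solution; one way to express the pipeline in Java is to build the UUID as a BsonBinary with the standard subtype-4 representation (matching the $type "04" shown in Compass) and use it inside Filters.expr. An untested sketch, assuming Java driver 3.12+/4.x class names:

import com.mongodb.client.model.Aggregates;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Variable;
import org.bson.BsonBinary;
import org.bson.Document;
import org.bson.UuidRepresentation;
import org.bson.conversions.Bson;
import java.util.Arrays;
import java.util.UUID;

// the UUID literal from the Compass pipeline, encoded as a subtype-4 binary
BsonBinary personId = new BsonBinary(
        UUID.fromString("d52aa8ae-bb1a-4e54-8450-bd633bfb6213"),
        UuidRepresentation.STANDARD);

Bson lookup = Aggregates.lookup("person",
        Arrays.<Variable<?>>asList(new Variable<>("id", "$id")),
        Arrays.asList(Aggregates.match(Filters.expr(
                new Document("$and", Arrays.asList(
                        new Document("$eq", Arrays.asList("$artifactId", "$$id")),
                        new Document("$eq", Arrays.asList("$personId", personId))))))),
        "matchedField");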
null | [
"java"
] | [
{
"code": "",
"text": "I have native query of mongo db and I want execute the same native query in Java. How can I execute that ?",
"username": "97vaqasazeem_N_A"
},
{
"code": "mongo",
"text": "Hello and welcome to MongoDB forum.You can use the Java Driver’s MongoDatabase#runCommand method. This is equivalent to Database Comand - Query and Write Operation Commands in mongo shell",
"username": "Prasad_Saya"
},
{
"code": "mongo",
"text": "Welcome to the community @97vaqasazeem_N_A,Can you provide an example of the query you are trying to run? By “native query”, are you referring to a JavaScript query in the mongo shell?The Java driver provides a full interface for querying MongoDB. To get started, see MongoDB Java Driver Quick Start and the Java Driver Tutorials.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": " {\n $match: { \"xxx\" : \"value\" }\n\n },\n \n { $sort : { created_date : 1} },\n \n {\n $group:{\n _id: {\"xyz\" : \"$xyz\"}\n _name: {\"abc\" : \"$abc\"},\n count: { $sum: 1 } \n }\n },\n {$unwind : \"$data\"}\n \n ]\n)",
"text": "Thanks for the reply @Stennie_X . Following is the sample querydb.collectionABC.aggregate(\n[",
"username": "97vaqasazeem_N_A"
},
{
"code": "books{ \"title\" : \"Ulysses\", \"author\" : \"James Joyce\" }\n{ \"title\" : \"War and Peace\", \"author\" : \"Leo Tolstoy\" }\n{ \"title\" : \"Anna Karenina\", \"author\" : \"Leo Tolstoy\" }\n\n// Aggregation pipeline stages\nString match = \"{ '$match':{ 'author': 'Leo Tolstoy' } }\";\nString sort = \"{ '$sort':{ 'title': 1} }\";\n\n// Build pipeline as a Bson\nString pipe = match + \", \" + sort;\nString strcCmd = \"{ 'aggregate': 'books', 'pipeline': [\" + pipe + \"], 'cursor': { } }\";\nDocument bsonCmd = Document.parse(strCmd);\n\n// Execute the native query\nDocument result = db.runCommand(bsonCmd);\n\n// Get the output\nDocument cursor = (Document) result.get(\"cursor\");\nList<Document> docs = (List<Document>) cursor.get(\"firstBatch\");\ndocs.forEach(System.out::println);\n",
"text": "Here is an example with “native” query usage.Sample input documents from a books collection:Note the way you have to extract the result documents - not very covenient. As @Stennie_X has suggested try using the Java Driver provided API.As such you can create the “native” Aggregation pipeline in the MongoDB Compass GUI tool using the Aggregation Pipeline Builder and use the option Export to Specific Language (Java). This generates the Java code, which you can copy and use it in your application - pretty simple.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Native query execution in Java | 2020-07-13T16:32:17.644Z | Native query execution in Java | 13,030 |
null | [
"golang"
] | [
{
"code": "{ \"_id\":\n\n{\"$oid\":\"5e7b22bd2a7912a5a3b73d79\"},\n \"manufacturerID\":\"19193\",\n \"units\":[\n {\n \"assetsReserved\":{\n \"departmentCode\" : \"PHY_DEPTS\",\n \"assets\":{\"primaryID\":\"1234\"}}\n },\n {\n \"departmentCode\" : \"PHY_DEPTS\",\n \"assetsReserved\":{\"assets\":{\"primaryID\":\"4567\"}}\n }\n ]\n}\nmatchStage := bson.D{primitive.E{Key: \"$match\", Value: bson.D{primitive.E{Key: \"manufacturerID\", Value: \"19193\"}}}}\n\nprojectStage := bson.D{\n {\"$project\", bson.D{\n {\"units\", bson.D{\n {\"$filter\", bson.D\n\n{ \n\n{\"input\", \"$units\"},\n{\"as\", \"units\"},\n{\"cond\", bson.D{\n {\"$or\", bson.A{ \n bson.D{{\"$eq\", bson.A{\"$$units.assetsReserved.departmentCode\", \"1234\"}}},\n bson.D{{\"$eq\", bson.A{\"$$units.assetsReserved.assets.primaryID\", \"1234\"}}},\n}},\n}},\n}},\n}},\n}},\n}\n\nunwindStage := bson.D {{\"$unwind\", \"$units\"}}\nunits.assetsReserved.departmentCodeunits.assetsReserved.assets.primaryID",
"text": "I’m using the Go driver go.mongodb.org/mongo-driver v1.3.4\nThe data is stored as:The requirement is to get all the units where primaryID matches the “1234”The mongo.Pipeline is made up ofThe search based on\nunits.assetsReserved.departmentCode worksIf we pass “PHY_DEPTS” we get the response, howeverunits.assetsReserved.assets.primaryID doesn’t return any responseIf we pass “1234”“$elemMatch” works fine with assetsReserved.assets.primaryID and is able to return result however it returns only 1 record even when multiple records with same primaryID exists which is the expected behavior based on documentation.\nLooks like issues with $filter and cond that its not able to filter when subdocument contains an array",
"username": "Abhay_Kumar"
},
{
"code": "units.assetsReserved.assets.primaryID$projectprojectStage := bson.D{\n\t{\"$project\", bson.D{\n\t\t{\"units\", bson.D{\n\t\t\t{\"$filter\", bson.D{\n\t\t\t\t{\"input\", \"$units\"},\n\t\t\t\t{\"as\", \"units\"},\n\t\t\t\t{\"cond\", bson.D{\n\t\t\t\t\t{\"$or\", bson.A{\n\t\t\t\t\t\tbson.D{{\"$eq\", bson.A{\"$$units.assetsReserved.departmentCode\", \"PHY_DEPTS\"}}},\n\t\t\t\t\t\tbson.D{{\"$eq\", bson.A{\"$$units.assetsReserved.assets.primaryID\", \"1234\"}}},\n\t\t\t\t\t}},\n\t\t\t\t}},\n\t\t\t}},\n\t\t}},\n\t}},\n}\n",
"text": "Hi @Abhay_Kumar, and welcome to the forum!units.assetsReserved.assets.primaryID doesn’t return any responseIf we pass “1234”Using the example document above, and the code snippet that you provided I could return the result. The only different that I noticed here, is only the values specified on the $project stage, i.e.If you’re still experiencing an issue with this, could you provide the following to describe the problem better:Regards,\nWan.",
"username": "wan"
},
{
"code": "bson.D{{\"$eq\", bson.A{\"$$units.assetsReserved.departmentCode\", \"XYZ_DEPTS\"}}},\nbson.D{{\"$eq\", bson.A{\"$$units.assetsReserved.assets.primaryID\", \"1234\"}}},\n",
"text": "Hi Wan,Thank you for looking into this.You are getting the result because the first or condition matched try the case where the first condition in the or doesn’t match and the second condition does match.“$$units.assetsReserved.assets.primaryID”, “1234” doesn’t return result even if it satisfy the match condition.Thank you.Regards\nAbhay Kumar",
"username": "Abhay_Kumar"
},
{
"code": "$or",
"text": "Hi @Abhay_Kumar,You are getting the result because the first or condition matched try the case where the first condition in the or doesn’t match and the second condition does match.The $or conditional statement works as expected in my test. I can change either of the conditions and the logical OR is still correct and return the result expected. Only if both conditions do not match that it returns an empty result.If you still encountering this issue please provide:Best regards,\nWan",
"username": "wan"
}
] | Issue in Filter operation in Aggregation pipeline | 2020-07-08T07:59:32.893Z | Issue in Filter operation in Aggregation pipeline | 3,220 |
null | [
"release-candidate",
"c-driver"
] | [
{
"code": "",
"text": "I’m pleased to announce version 1.17.0-rc0 of libbson and libmongoc,\nthe libraries constituting the MongoDB C Driver.libmongoc\nIt is my pleasure to announce the MongoDB C Driver 1.17.0 rc0 release.\nThis release adds support for MongoDB 4.4 servers.Features:Bug fixes:Thanks to everyone who contributed to the development of this release.libbson\nNo changes since 1.17.0 beta2; release to keep pace with libmongoc’s version.",
"username": "Kevin_Albertson"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB C driver 1.17.0-rc0 released | 2020-07-14T01:58:39.086Z | MongoDB C driver 1.17.0-rc0 released | 2,462 |
null | [] | [
{
"code": "",
"text": "Hi,I am on the DBA certification path, currently working on M310, “MongoDB Security”, due for completion by 14th July. Unfortunately, due to unforeseen circumstances, I may struggle to meet the 14th July deadline.Is there an option for deferring this module to the following cycle?Thanks,\nAndrew",
"username": "Andrew_Charlton"
},
{
"code": "",
"text": "You may have better luck on the MongoDB University forum at https://www.mongodb.com/community/forums/latest.In my experience you may un-register from this session and register for the next one. You will not be able to transfer your progress from one session to the other.",
"username": "steevej"
},
{
"code": "",
"text": "The university forum @steevej mentioned are surely the best source. I am not 100% sure, I recall that there was a post saying something like: due to Corona all have no weekly limit.That might require to supscribe „on demand“Cheers\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Thanks a lot everyone. I was able to cancel, and re-register for the next iteration.\nNot sure how I posted to this forum, which I now realise was the wrong place …but you guys sorted me out anyway, so thanks!",
"username": "Andrew_Charlton"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Deferring module completion on DBA Path | 2020-06-29T14:36:21.710Z | Deferring module completion on DBA Path | 1,941 |
null | [
"golang",
"release-candidate"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.4.0-rc0 of the MongoDB Go Driver.This release contains support for the upcoming MongoDB 4.4 server release as well as driver-specific improvements. This is a release candidate and there may be changes made for bugfixes before v1.4.0 is officially released.For more information please see the release notes.You can obtain the driver source from GitHub under the v1.4.0-rc0 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.4.0-rc0 Released | 2020-07-13T16:46:57.540Z | MongoDB Go Driver 1.4.0-rc0 Released | 1,638 |
null | [] | [
{
"code": "Connection Failed... Lost connection to MySQL server at 'reading final connect information'. system error: 2error serving connection: interface conversion: ast.Expr is *ast.FieldOrArrayIndexRef, not *ast.FieldRef, goroutine 194 [running]:",
"text": "I am setting up an ODBC connection to a remote instance of MongoDB. When I try to test the connection, I get an error the following error:\nConnection Failed... Lost connection to MySQL server at 'reading final connect information'. system error: 2I looked at the logs from mongod and mongosql and noticed that mongosql was closing the connection almost immediately. After changing the log level, I noticed the following error from mongosql:\nerror serving connection: interface conversion: ast.Expr is *ast.FieldOrArrayIndexRef, not *ast.FieldRef, goroutine 194 [running]:Can anyone point me in the direction of how to resolve this?MongoDB version: 3.2.6\nMongoSQL version: 2.13.4\nODBC Connector Version 1.4.0",
"username": "sambom"
},
{
"code": "",
"text": "OK, after putting this down for a bit. I came back to it realized that the issue here may be that the instance of MongoDB is not a replica set. I haven’t tested this theory as yet, but I’m fairly certain it’s the root cause.Hopefully this helps someone else in the future.",
"username": "sambom"
}
] | Error Serving Connection: interface conversion error | 2020-06-18T12:22:03.547Z | Error Serving Connection: interface conversion error | 1,964 |
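If anyone wants to test the replica-set theory from the last post above, converting a standalone mongod into a single-member replica set is quick; the set name below is just a placeholder, and whether this actually resolves the mongosql error is unconfirmed:

# 1. add to mongod.conf and restart mongod
replication:
  replSetName: rs0

# 2. then run once from the mongo shell to initiate the single-member set
rs.initiate()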
null | [
"o-fish"
] | [
{
"code": "",
"text": "“Code for good” is where the MongoDB values of “think big, go far”, “make it matter” and “build together” intersect. We built the open source O-FISH app to help the not-for-profit WildAid protect our oceans. The O-FISH app consists of an iOS mobile app, an Android mobile app and a web app, that use Realm Sync for a unified data platform.If you want to see the code in action you can build your own instance of the O-FISH app for free.",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "This project is awesome! I’m trying to build my own OFISH instance following the github site instructions. So far so good until I reach step 6 to import realm code. I’ve changed the command line from stitch-cli to realm-cli but still no luck.\nstitch-cli import --strategy=replace --include-dependencies --app-id=REALM_APP_ID\nrealm-cli import --strategy=replace --include-dependencies --app-id=REALM_APP_ID\nQuestion is, do I need to provide --app-name as well since it’s required? Or maybe some of the configuration.json file is wrong. Thank you!\n\nimage1645×562 43.6 KB\n",
"username": "Sharon_Xue"
},
{
"code": "",
"text": "Hi! The app name should only be required if the app is new. The instructions were tested but it’s possible things have changed with a new version (old version was called stitch-cli, new version is realm-cli). We will look into it.",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hello Sharon_Xue,I’m pleased to let you know that we found the issue. When stitch-cli was updated, and renamed to realm-cli, the configuration file name changed from stitch.json to the more standard config.json.So, there are 2 changes to be made - in the WildAidDemo directory:\nrename stitch.json to config.jsonIn the config.json, set the config_version to 20200603, so the line looks like this:\n“config_version”: 20200603,Alternatively, git pull the main branch for o-fish-realm. You will want to check that your local settings for your cluster name and app name haven’t been overwritten, e.g. start from step 3 on https://wildaid.github.io/build/2020/06/09/Import-Realm-Code.htmlI hope this helps! I’m so glad you like the project.",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Thank you Sheer for your quick response. Here’s a quick update. After I made 2 changes, it seems to be working and showed that the import is done. However, when I went to Atlas to verify the code to be imported, it’s empty under the Realm App - Functions. I will do the git pull and check the previous steps to fix it then. Thanks for your help!image1531×328 13.3 KB",
"username": "Sharon_Xue"
},
{
"code": "",
"text": "Hi @Sharon_Xue - just wanted to follow up - did you get the functions to import?",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hi @Sheeri_Cabral Thanks for your follow up. I did the git pull again but couldn’t get it work. It said successfully imported however when I went to atlas to check the functions, it’s empty image1534×438 18.3 KB",
"username": "Sharon_Xue"
},
{
"code": "",
"text": "So I found a Mac, followed the build steps from scrach, installed node.js and realm-cli. It said Successfully imported ‘wildaidapp-tenld’ but when I tried to verify the functions, it’s empty.Image 2020-07-05 at 7.48 PM527×563 65.4 KB",
"username": "Sharon_Xue"
},
{
"code": "",
"text": "Ugh, that’s frustrating! I got that at first, when the Realm code was not complete. Can you try re-cloning the repo at GitHub - WildAid/o-fish-realm: Realm application code and sample data for the Officer's Fishery Information Sharing Hub (O-FISH). The mobile app allows fisheries officers to document and share critical information gathered during a routine vessel inspection. The web app allows agencies to gain insights from the aggregated information.? We changed the production branch name to “main” and I wonder if that had an impact. I tried to import today and it worked, so I’m hopeful it will work for you now too.",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "Hi @Sheeri_Cabral, it’s working after I re-cloned the repo and re-imported the data. Yay! Thanks for helping me out. I’ve tested the web app and it connects to the Realm data just imported (7 collections in total and 3039 documents in BoardingReports). Now I have the environment set up and please let me know if I can be any help with this project. Thanks again! ",
"username": "Sharon_Xue"
},
{
"code": "",
"text": "Fantastic news! I’m glad it works now.Each repository has its own issue tracker. If your focus is on web development, the React app issue tracker is at Issues · WildAid/o-fish-web · GitHub and there are a few issues marked “Good first issues”, although you can work on anything there. And I’m happy to help give you guidance if you like, if there’s something in an issue that doesn’t have all the context there. Just ask -Sheeri",
"username": "Sheeri_Cabral"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Code for Good: O-FISH App for Wild Aid | 2020-06-09T04:54:44.171Z | Code for Good: O-FISH App for Wild Aid | 9,410 |
null | [
"app-services-user-auth",
"stitch"
] | [
{
"code": "",
"text": "Using the MongoDB Stitch SDK, register with email functionality, we are getting the following error message:unknown user confirmation status “400 BAD REQUEST”(no further details in the logs…type: Authentication)Has anyone encountered this issue before?Thanks,\nMartin",
"username": "Martin_Kayser"
},
{
"code": " \"_id\": \"xx\",\n\n \"co_id\": \"xx\",\n\n \"type\": \"AUTH\",\n\n \"domain_id\": \"xx\",\n\n \"app_id\": \"xx\",\n\n \"group_id\": \"xxx\",\n\n \"request_url\": \"/api/client/v2.0/app/xxx/auth/providers/local-userpass/register\",\n\n \"request_method\": \"POST\",\n\n \"remote_ip_address\": \"xx\",\n\n \"started\": \"2020-07-07T14:12:39.205Z\",\n\n \"completed\": \"2020-07-07T14:12:40.304Z\",\n\n \"error\": \"unknown user confirmation status \\\"400 BAD REQUEST\\\"\",\n\n \"error_code\": \"BadRequest\",\n\n \"status\": 400,\n\n \"messages\": [\n\n \"[object Object]\"\n\n ]\n\n },",
"text": "Log details…{",
"username": "Martin_Kayser"
},
{
"code": "",
"text": "@Martin_Kayser Can you try using the new Realm SDKs for register? Also can you share the code you are using to call register?",
"username": "Ian_Ward"
},
{
"code": "",
"text": "Thank you for the reply!It turned out that the confirmation function had been adjusted and was not returning the required status:success/pending/fail message anymore.It may be useful to update error messaging to help debug such an issue faster!Thanks,\nMartin",
"username": "Martin_Kayser"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Stitch User Auth: unknown user confirmation status "400 BAD REQUEST" | 2020-07-07T15:35:29.043Z | Stitch User Auth: unknown user confirmation status “400 BAD REQUEST” | 4,334 |
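For anyone hitting the same error as the thread above: when the email/password provider is set to run a custom confirmation function, Stitch/Realm expects that function to return an object whose status field is one of "success", "pending" or "fail"; returning anything else surfaces as the "unknown user confirmation status" error. A minimal sketch of such a function:

exports = async ({ token, tokenId, username }) => {
  // e.g. send your own confirmation email here using token/tokenId ...
  return { status: "pending" };   // or "success" to confirm immediately, "fail" to reject
};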
null | [
"php"
] | [
{
"code": "",
"text": "I need some guidance about PHP -> MongoDB.Are there 2 different driver regimens MongoClient and Mongo\\Driver\\Manager?",
"username": "Jack_Woehr"
},
{
"code": "mongodbmongodbMongoDB\\ClientMongoDB\\Driver\\Manager",
"text": "Hi @Jack_Woehr,Are there 2 different driver regimens MongoClient and Mongo\\Driver\\Manager?There are two parts of the stack, the MongoDB PHP Library which provides a high-level abstraction around the lower-level PHP driver which is also known as the mongodb extension.The lower-level PHP driver (mongodb extension) provides a minimal API for core driver functionality (i.e. commands, queries, writes, connection management, and BSON serialisation). The extension provides MongoDB\\Driver\\Manager class to manage connections to MongoDB server/clusters.On the other hand, the MongoDB PHP Library provides a full-featured API and models client, database, and collection objects with their respective helper methods. The library provides MongoDB\\Client class to manage connections to MongoDB server/clusters. As the higher level abstraction, MongoDB\\Client composes MongoDB\\Driver\\Manager for users.For more information see also MongoDB PHP driver: Architecture Overview which shows how all different parts of the MongoDB PHP driver fit together.Generally if you are developing a PHP application with MongoDB, you should consider using the MongoDB PHP Library instead of the extension alone.Regards,\nWan",
"username": "wan"
},
{
"code": "",
"text": "Thank you very much @wan for clarifying the layering of the driver architecture.",
"username": "Jack_Woehr"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | PHP - Are there 2 different driver regimens MongoClient and Mongo\Driver\Manager? | 2020-07-10T14:57:21.278Z | PHP - Are there 2 different driver regimens MongoClient and Mongo\Driver\Manager? | 3,847 |
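To make the distinction in the thread above concrete, everyday application code would normally touch only the library's MongoDB\Client and let it drive the extension underneath. A minimal sketch (connection string, database and collection names are placeholders):

<?php
require 'vendor/autoload.php';            // mongodb/mongodb library installed via Composer

$client = new MongoDB\Client('mongodb://localhost:27017');
$collection = $client->mydb->people;

$collection->insertOne(['name' => 'Ada', 'role' => 'engineer']);
var_dump($collection->findOne(['name' => 'Ada']));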
null | [
"node-js",
"field-encryption"
] | [
{
"code": "",
"text": "I’ve been following the encryption guide here for generating data encryption keys for the client side encryption process https://docs.mongodb.com/drivers/use-cases/client-side-field-level-encryption-guideI’m trying to test client side encryption locally to a database that I upgraded from community edition to enterprise but I always get null when I try to do the verification step in Section B Step 4. Does a brand new encryption database have to be created for this to work? I was hoping just to add __keyvault to the existing db i have locally",
"username": "Andrew_Dravucz"
},
{
"code": "const fs = require('fs-extra')\nconst { MongoClient } = require('mongodb');\nconst { ClientEncryption } = require('mongodb-client-encryption')\n\n\nconst path = './master-key.txt';\nconst localMasterKey = fs.readFileSync(path);\n\nconst kmsProviders = {\n local: {\n key: localMasterKey,\n },\n};\n\nconst base64 = require('uuid-base64');\n\nconst username = process.env.PAPER_DB_USER\nconst pass = process.env.PAPER_DB_PASS\n\nconst connectionString = `mongodb://${username}:${pass}@localhost:27017`;\nconst keyVaultNamespace = 'db.__keyVault';\nconst client = new MongoClient(connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nasync function main() {\n try {\n await client.connect();\n console.log(\"hello clientEncryption\")\n const encryption = new ClientEncryption(client, {\n keyVaultNamespace,\n kmsProviders,\n });\n console.log(\"done\")\n const key = await encryption.createDataKey('local');\n console.log(\"key made\")\n const base64DataKeyId = key.toString('base64');\n console.log(\"base64 key made\")\n const uuidDataKeyId = base64.decode(base64DataKeyId);\n console.log('DataKeyId [UUID]: ', uuidDataKeyId);\n console.log('DataKeyId [base64]: ', base64DataKeyId);\n } finally {\n await client.close();\n }\n}\nmain();\n",
"text": "Just for some more specifics I have the following code to generate the data encryption keyOutput: ",
"username": "Andrew_Dravucz"
},
{
"code": " const { MongoClient } = require('mongodb');\n\nconst username = process.env.PAPER_DB_USER\nconst pass = process.env.PAPER_DB_PASS\n\nconst connectionString = `mongodb://${username}:${pass}@localhost:27017`;\nconst keyVaultDb = 'db';\nconst keyVaultCollection = '__keyVault';\nconst base64KeyId = '6exu+2ObR8yYaQo/RJe5Gw=='; // use the base64 data key id returned by gen_data_encrypt_key.js in the prior step\n\nconst client = new MongoClient(connectionString, {\n useNewUrlParser: true,\n useUnifiedTopology: true,\n});\n\nconst base64 = require('uuid-base64');\n\nasync function main() {\n try {\n await client.connect();\n const keyDB = client.db(keyVaultDb);\n const keyColl = keyDB.collection(keyVaultCollection);\n\n console.log(\"base64KeyId\", base64KeyId)\n const uuidDataKeyId = base64.decode(base64KeyId);\n console.log(\"uuidDataKeyId \", uuidDataKeyId)\n const query = {\n _id: base64KeyId,\n };\n const dataKey = await keyColl.findOne(query);\n console.log(dataKey);\n } finally {\n await client.close();\n }\n}\nmain();\n",
"text": "Then I use the following code to verify:Output: ",
"username": "Andrew_Dravucz"
},
{
"code": "",
"text": "And this is what it looks like on the __keyvault connection in the database\nimage912×164 26.9 KB",
"username": "Andrew_Dravucz"
},
{
"code": "",
"text": "Lastly this is the version of mongoDB that I am currently using\n",
"username": "Andrew_Dravucz"
},
{
"code": "...\nconst UUID = require('uuid-mongodb');\nconst uuidDataKeyId = UUID.from(base64.decode(base64KeyId));\nconsole.log(\"uuidDataKeyId \", uuidDataKeyId)\nconsole.log(\"typeof uuidDataKeyId\", typeof uuidDataKeyId)\nconst query = {\n _id: uuidDataKeyId,\n};\nconst dataKey = await keyColl.findOne(query);\nconsole.log(dataKey);\n",
"text": "For now I can’t figure out how to make it show with just querying base 64 string, I ended up using another package uuid-mongodb https://www.npmjs.com/package/uuid-mongodb to decode it, then put it back into its proper uuid and then the query would work. Not ideal but it functionsAfter I finally dont get a null result\n",
"username": "Andrew_Dravucz"
},
{
"code": "base64db.__keyVaultconst query = {\n _id: base64KeyId,\n};\nconst dataKey = await keyColl.findOne(query);\nlet base64KeyIdBinary = new Binary(\n Buffer.from(base64KeyId, 'base64'), \n Binary.SUBTYPE_UUID);\nconst query = {\n _id: base64KeyIdBinary,\n};\nconst dataKey = await keyColl.findOne(query);\n",
"text": "Hi @Andrew_Dravucz, and welcome to the forum!I’ve been following the encryption guide here for generating data encryption keys for the client side encryption process …\nbut I always get null when I try to do the verification step in Section B Step 4There is a missing step in the snippet code posted on the documentation page. It should have converted the base64 string into a binary format first before using it in the query. This is because the data stored in the collection db.__keyVault is also in binary format.So instead of the following snippet as mentioned on the page:It should have been:A patch to fix the documentation is currently pending review.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Great thanks @wan that makes sense, I was able to piece that together with some trial and error. Is that Binary builder from the mongodb library directly like mongo client?",
"username": "Andrew_Dravucz"
},
{
"code": "let Binary = require('mongodb').Binary;\n",
"text": "Hi @Andrew_Dravucz,Is that Binary builder from the mongodb library directly like mongo client?Yes, correct. The example import statement would be:Regards,\nWan.",
"username": "wan"
}
] | Testing Client Side encryption locally, returning null value | 2020-06-29T19:30:49.984Z | Testing Client Side encryption locally, returning null value | 3,162 |
null | [
"atlas-functions"
] | [
{
"code": "",
"text": "nativescript-mongo-stitch-sdk will not even instantiate a Stitch instance so I think it might be old code? I followed the awesome Mongo Jumpstart tutorial to build a react frontend to call Realm functions. I now want to do the same with a nativescript frontend … is there a working sdk to do this? Also the Jumpstart tutorial recommends upgrading mongodb-stitch-browser-sdk to mongodb-realm-browser-sdk which I cannot find anywhere?",
"username": "Mic_Cross"
},
{
"code": "",
"text": "@Mic_Cross We currently do not have Nativescript support the RealmJS SDKs but please make a feature request here and we can consider it doing quarterly planning - https://feedback.mongodb.com/",
"username": "Ian_Ward"
}
] | Connecting to Atlas - Realm functions with nativeScript? | 2020-07-08T21:32:43.063Z | Connecting to Atlas - Realm functions with nativeScript? | 2,379 |
null | [] | [
{
"code": "\"_id\": 2020,\n \"students\": {\n \"1\": [{\n \"_id\": {\n \"$oid\": \"5f093dc05706776c704f1ce6\"\n },\n \"admitted\": {\n \"$date\": \"2020-07-11T12:42:06.581Z\"\n },\n \"type\": \"management quota\",\n \"paid\": 4,\n \"fOccupation\": \"Businessman\"\n },{\n \"_id\": {\n \"$oid\": \"5f093dc05706776c704f1ce8\"\n },\n \"admitted\": {\n \"$date\": \"2020-07-11T12:46:06.581Z\"\n },\n \"type\": \"handicapped\",\n \"paid\": 0,\n \"fOccupation\": \"Beggar\"\n }],\n \"2\": [{}]\n }\nfOccupation:[\"Businessman\",\"Beggar\"] //An array of all values under object with key = \"1\"\npaid: 0 // Last value of 'paid' in the array under object with key = \"1\"\n",
"text": "Hello everyone, I need some help regarding projection of a particular field inside a deeply nested document.Consider the following document structure -Now, I need the data in the following manner:I am able to return the fOccupation array just fine but I am not able to project the last value (array.length - 1) of the “paid” property. Please help me out in this regard.",
"username": "Susnigdha_Bharati"
},
{
"code": "var pipeline = [{\n $match:{\n _id:2020\n }\n},{\n $project:{\n _id:0,\n fOccupation: \"$students.1.fOccupation\",\n paid: {$arrayElemAt: [\"$students.1.paid\",-1]}\n }\n}];\ndb.collection.aggregate(pipeline)",
"text": "db.collection.aggregate(pipeline) gives the required result. The number in ‘student.1.X’ represents the class number which can be changed accordingly in the program.",
"username": "Susnigdha_Bharati"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Projecting a field's value inside an array that is in an object | 2020-07-11T13:22:41.151Z | Projecting a field’s value inside an array that is in an object | 8,530 |
null | [] | [
{
"code": "{\n \"total\": 3,\n \"breeds\": [\n { \"_id\": d1, \"name\": \"labrador\" },\n { \"_id\": d2, \"name\": \"dalmatian\" },\n { \"_id\": d3, \"name\": \"husky\"}\n ]\n}\n{\n \"total\": 3,\n \"breeds\": [\n { \"_id\": c1, \"name\": \"persian\" },\n { \"_id\": c2, \"name\": \"ragdoll\" },\n { \"_id\": c3, \"name\": \"siamese\" }\n ]\n}\n{\n \"total\": 0,\n \"items\": []\n}\n{\n \"_id\": p1\n \"petOriginId\" : c1\n \"description\": \"male labrador\"\n}\n",
"text": "Please bare with me for I just started working with mongoDB. Here’s my sample scenario. I have 3 collections: Dogs, Cats and PetsInventory and a pet data that I need to insert in PetsInventory collection.Dogs Collection:Cats Collection:PetsInventory Collection:And a pet data to be inserted in PetsInventory Collection:I need to save the pet data to PetsInventory collection but before that, i need to find out if the value of its petOriginId field is existing in either Dogs collection or Cats collection otherwise it wont be inserted. Thank you in advance!",
"username": "Jayvee_Mendoza"
},
{
"code": "db.getCollection('Dogs').findOne({\"breeds._id\": c1},{\"breeds.$\":1})db.getCollection('PetsInventory').updateOne({_id: p1},{$push:{items: document_to_be_pushed},$inc:{total:1}})",
"text": "Your best bet is to manually check if the records are present according to your specifications, and if they are indeed present - you can go ahead and insert them.So step 1 is to check manually if the documents are present according to your specifications:db.getCollection('Dogs').findOne({\"breeds._id\": c1},{\"breeds.$\":1}) where ‘c1’ is the value of the ‘petOriginId’ to be inserted into the PetsInventory Collection. If the value returned is null, then perform the second search on ‘Cats’ collection. If result is null then exit else do something like this:db.getCollection('PetsInventory').updateOne({_id: p1},{$push:{items: document_to_be_pushed},$inc:{total:1}})Since these are values you are checking against, and not collections at large, I don’t think aggregations can be used. But you can try using mapReduce() to get what you want to achieve in a single call to database.",
"username": "Susnigdha_Bharati"
}
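A minimal mongo-shell sketch of the two-step flow described above (collection and field names follow the question; the placeholder ids and the pet document itself are assumptions):

// 1. Check whether the petOriginId exists in either Dogs or Cats
var petOriginId = "c1"; // assumed: the id carried by the pet document to be inserted
var found = db.getCollection('Dogs').findOne({ "breeds._id": petOriginId }, { "breeds.$": 1 }) ||
            db.getCollection('Cats').findOne({ "breeds._id": petOriginId }, { "breeds.$": 1 });

// 2. Only push the pet into the inventory document when a match was found
if (found) {
    db.getCollection('PetsInventory').updateOne(
        { _id: "p1" },  // assumed id of the inventory document
        { $push: { items: { _id: "p1", petOriginId: petOriginId, description: "male labrador" } },
          $inc: { total: 1 } }
    );
}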
] | Is it possible to combine unrelated collection and then perform task? | 2020-07-11T01:06:12.852Z | Is it possible to combine unrelated collection and then perform task? | 1,386 |
null | [] | [
{
"code": "",
"text": "I apologize in advance if this question doesn’t belong here. As the title says, I am unable to connect to my MongoDB cluster even though my IP address is whitelisted. The problem seems to be with my router, as I am able to connect to the cluster via my phone’s network. Here is my stack overflow question where I’ve gone into more details.",
"username": "Jaskaran_Singh"
},
{
"code": "",
"text": "The DNS error you are having is probably because the dns servers configured in your router does not handle SRV style connections. Try to set it up to use google’s 8.8.8.8 and 8.8.4.4.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to connect to cluster even though my ip address is whitelisted | 2020-07-11T13:42:16.176Z | Unable to connect to cluster even though my ip address is whitelisted | 1,954 |
null | [] | [
{
"code": "{\n\t_id: ...\n\tchanges: [\n\t\t\t{docId: 1, change: ...},\n\t\t\t{docId: 10, change: ...},\n\t\t\t{docId: 5, change: ...},\n\t\t\t...\n\t\t]\n}\n",
"text": "Consider an application in which we have some docs (I use doc instead of document in order to differentiate it from MongoDB’s document) and modifications are performed on them. The only requirement we have is that changes on multiple docs are done atomically (All of them are done, or none). There are two ways to implement it:Note that since we need the history of changes, even in the first solution we keep the changed values inside the doc. Thus, by the measure of storage space these two solutions are not much different (without considering indexes, …).The question is that which of these solutions is better?Some of my own thoughts on this question:",
"username": "Shayan_Test"
},
{
"code": "",
"text": "No comments on this?",
"username": "Shayan_Test"
}
] | MongoDB: Document-based ACID vs Multi-Document ACID | 2020-04-26T08:14:11.447Z | MongoDB: Document-based ACID vs Multi-Document ACID | 1,581 |
null | [] | [
{
"code": "",
"text": "Hi All,I need to get records from a collection by doing a lookup to match two fields of another collection. We expect to return one field from each of the collections.I composed a query in the mongodbexport as\n“–query '{party:{aggregate([{$lookup:{from:“party_preference_list”,as:“preference”}},{$match:{”$and\":[{“preference.party_preferences.selected_preference_value”:“PAPER”}, {“preference.party_preferences.preference_code”:“EOB”}]}},{$project:{“party_role”:1,“preference.card_id”:1}}])}}’\"When I ran mongoexport, it prompted me below error messages:\n“error validating settings: query ‘[123 112 97 … ]’ is not valid JSON: invalid character ‘(’ after object key\n… try ‘mongoexport --help’ for more information”I would very appreciate if someone can help on fixing the query.Thanks, YC",
"username": "Yichang_Chen"
},
{
"code": "--querymongoexportmongoexport--query",
"text": "Hello @Yichang_Chen, welcome to the MongoDB forum.There is no feature to specify an aggregation query with the --query option of the mongoexport. See the feature request: Support aggregation framework queries in mongoexport.But, there are other ways to use aggregation results with mongoexport:",
"username": "Prasad_Saya"
}
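For illustration, a hedged sketch of that workaround (the collection and join fields are assumptions, since the original $lookup did not show its join keys): materialize the aggregation result with a final $out stage from the mongo shell, then export that collection with mongoexport.

// In the mongo shell: write the aggregation result into a temporary collection
db.party.aggregate([
  { $lookup: { from: "party_preference_list", localField: "party_id", foreignField: "party_id", as: "preference" } },  // join fields assumed
  { $match: { "preference.party_preferences.selected_preference_value": "PAPER",
              "preference.party_preferences.preference_code": "EOB" } },
  { $project: { party_role: 1, "preference.card_id": 1 } },
  { $out: "party_export" }   // materializes the result into the party_export collection
]);

// Then, from the operating system shell, export the materialized collection:
// mongoexport --db=<db> --collection=party_export --out=party_export.json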
] | Invalid character '(' after object key | 2020-07-11T01:36:29.570Z | Invalid character ‘(’ after object key | 4,418 |
[
"php"
] | [
{
"code": "",
"text": "Hi all,\nI am getting these error after restarting the system. Same code is working fine in other System and Server but not in my local system.\nPhp version: 7.3.12\nMongoDB shell version v4.2.8Fatal error: Uncaught MongoDB\\Driver\\Exception\\AuthenticationException: Authentication failed. in C:\\vendor\\mongodb\\mongodb\\src\\Operation\\Find.php on line 322\nMongoDB\\Driver\\Exception\\AuthenticationException: Authentication failed. in C:\\vendor\\mongodb\\mongodb\\src\\Operation\\Find.php on line 322All the connectivity i have checked is fine, when i tried to check commend mongo “serverdetail” from terminal with user name and password it says-*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.I have added IP into MongoDB Atlas cluster. Still facing error.\nmongo_error1056×250 34.3 KB",
"username": "Swati_Bhargava"
},
{
"code": "",
"text": "Hi @Swati_Bhargava … I am using PHP too but from Fedora Linux.I had some problems connecting.Some ideas:",
"username": "Jack_Woehr"
}
] | Why am I getting this error: Authentication Exception Authentication failed | 2020-07-09T10:43:04.244Z | Why am I getting this error: Authentication Exception Authentication failed | 7,341 |
|
null | [
"java",
"release-candidate"
] | [
{
"code": "",
"text": "The 4.1.0-rc0 MongoDB Java & JVM Drivers has been released, with support for the upcoming release of MongoDB 4.4.The documentation hub includes extensive documentation of the 4.1 driver, includingand much more.You can find a full list of bug fixes here .You can find a full list of improvements here .You can find a full list of new features here .https://mongodb.github.io/mongo-java-driver/4.1/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.1.0-rc0 Released | 2020-07-10T19:34:06.070Z | MongoDB Java Driver 4.1.0-rc0 Released | 1,925 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "Hi,I’ve seen all around the documentation that Query-based sync is deprecated, so I’m wondering how should I got about my situation:In my app (using Realm Cloud), I have a list of User objects with some information about each user, like their username. Upon user login (using Firebase), I need to check the whole User database to see if their username is unique. If I make this common realm using Full Sync, then all the users would synchronize and cache the whole database for each change right? How can I prevent that, if I only want the users to get a list of other users’ information at a certain point, without caching or re-synchronizing anything?Thank you for your help.",
"username": "Jean-Baptiste_Beau"
},
{
"code": "",
"text": "@Jean-Baptiste_Beau You can call a Realm function to perform a lookup on the User collection to make sure it was unique before proceeding:",
"username": "Ian_Ward"
},
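As an illustration only, a Realm (server-side) function along the lines Ian describes might look like the following sketch; the linked cluster service name, database and collection names are assumptions:

// Realm function: returns true when no existing user already has the requested username
exports = async function(username) {
  const users = context.services
    .get("mongodb-atlas")        // assumed name of the linked cluster service
    .db("app")                   // assumed database name
    .collection("User");
  const existing = await users.findOne({ username: username });
  return existing === null;      // true => the username is still unique
};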
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Question about Query-based Realm Sync | 2020-07-10T06:28:16.627Z | Question about Query-based Realm Sync | 2,174 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "I have a general question regarding Kafka-Connect. I went through documentation, blogs but couldn’t find a straight answer.If there are two workers, running single Connector(instance) then\nHow does a Connector(instance) decide when to spawn a new task, if eg. tasks.max = 10? Also, how does a Connector(instance) decide how many tasks to spawn, if eg. tasks.max = 10?\nDoes it depend upon underlying hardware configuration? eg. number of cores or memory or cpu utilization?",
"username": "Hamid_Jawaid"
},
{
"code": "tasks.maxtasks.maxtasks.max = 10",
"text": "Hi @Hamid_Jawaid,tasks.max - The maximum number of tasks that should be created for this connector. The connector may create fewer tasks if it cannot achieve this level of parallelism.\nhttps://docs.confluent.io/current/connect/managing/configuring.html#configuring-connectorsGreat questions, the answer is the MongoDB Kafka Connector itself isn’t responsible for managing the task or the number of tasks. All it does is take the tasks.max value and create a number of configurations for each task. This allows the connector to determine how many tasks it can support. Prior to 1.2.0 the connector would only ever allow a single task. In 1.2.0 we now allow multiple tasks and Kafka Connect will then manage how many tasks to run in parallel.The exact algorithm is internal to Kafka-Connect but it generally relates to the number of partitions and topics. So for example if you set tasks.max = 10 and have the following sink connector configuration:The User Guide | Confluent Platform 2.0.0 alludes to this, but as far as the MongoDB Kafka Connector is concerned, it will just process the data it is handed by Kafka Connect.I hope that helps answer your questions,Ross",
"username": "Ross_Lawley"
},
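A hedged example of what such a sink connector configuration could look like (the topic, connection string and namespace are placeholders); with tasks.max set to 10, Kafka Connect itself decides how many of those tasks actually run, typically bounded by the partition count of the subscribed topics:

{
  "name": "mongo-sink",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
    "tasks.max": "10",
    "topics": "orders",
    "connection.uri": "mongodb://mongo1:27017",
    "database": "test",
    "collection": "orders"
  }
}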
{
"code": "",
"text": "Thanks. Nice explanation.\nIn my case, I am using MongoDB-Source Connector and listening to ChangeStream of one collection, so I have one topic but with three partitions.\nFor me, I always see as one task. Is it because I have one topic? For one topic with more than one partitions should also have more tasks. If it’s not so, then seems I can scale my consumers(one per partition) but I can’t scale my producer(MongoDB-Source Connector).",
"username": "Hamid_Jawaid"
},
{
"code": "",
"text": "Prior to 1.2.0 only a single task was supported by the sink connector.The Source connector still only supports a single task, this is because it uses a single Change Stream cursor. This is enough to watch and publish changes cluster wide, database wide or down to a single collection.",
"username": "Ross_Lawley"
},
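For completeness, a sketch of a source connector configuration scoped to a single collection (all values are placeholders); however many partitions the output topic has, the connector opens one change stream cursor and therefore runs a single task:

{
  "name": "mongo-source",
  "config": {
    "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
    "connection.uri": "mongodb://mongo1:27017",
    "database": "test",
    "collection": "orders"
  }
}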
{
"code": "",
"text": "Thanks Ross. That explains the behavior I am witnessing.\nThough I couldn’t find this anywhere in documentation.\nWould MongoDB source connector support multiple tasks in future releases?",
"username": "Hamid_Jawaid"
}
] | Kafka Connector task spawn strategy | 2020-07-08T15:06:12.721Z | Kafka Connector task spawn strategy | 7,714 |
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\": 1,\n \"reference : \"ORDERXYZ\",\n \"orderdate\" : \"2020-01-01\",\n \"orderlines\" : [\n {\n \"position\" : 1,\n \"description\" : \"blue box\",\n \"quantity\" : 1,\n \"price\" : 1.99\n }, {\n \"position\" : 2,\n \"description\" : \"red box\",\n \"quantity\" : 2,\n \"price\" : 3.99\n }\n ]\n}\n",
"text": "Hi all,Im completely new to Mongo and trying to understand how it stores data but specifically its capability to aggregate data.The reason I am asking is that I am currently looking for to replace a relational system with billions or records of orderdata.\neffectively there are about 150 million orders in the system but through all the lines, transactions and services this results in billons of records across multiple tables.\nI understand that Mongo stores its data in Json/bson formats but I am not clear on how it would perform aggregating thisPlease correct me if I am wrong but my thinking would be that in Mongo it would be possible to store a document (Json/bson) that contains for example an order and some (maybe all) of its lines/transactionsSomething like this:So in my example the idea would be to have a 150 million documents stored like this … instead of 800 million rows accross multiple tables:My question is how would Mongo perform if I were to query and aggregate this data? For example in Classic SQL assuming I have a order and orderline table I could use the following query to give me a list of orders containing “red boxes” and the total value they make up in each order:SELECT o.reference, SUM(l.price * l.quantity) AS summedvalue\nFROM order o\nINNER JOIN lines l\nON o.id = l.order_id\nWHERE l.description = ‘red box’\nGROUP BY o.referenceI am sure this is possible in Mongo but was wondering if someone could help me out on how the qery performance would compare to “classic” relational databases and also whether my understanding of how Mongo works is correct.Many thanksj",
"username": "Jacques_Luckhoff"
},
{
"code": "mongodb.orders.aggregate( [\n { \n $match: { \"orderlines.description\": \"red box\" } \n },\n { \n $unwind: \"$orderlines\" \n },\n { \n $group: { \n _id: { \"reference\": \"$reference\" }, \n sum_value: { $sum: { $cond: [ { $eq: [ \"$orderlines.description\", \"red box\" ] }, \n { $multiply: [ \"$orderlines.quantity\", \"$orderlines.price\" ] },\n 0\n ]\n } } \n } \n }\n] )\n{ \"_id\" : { \"reference\" : \"ORDERXYZ\" }, \"sum_value\" : 7.98 }$match$unwindorderlines$grouporderlines.descriptionmongodb.orders.createIndex( { \"orderlines.description\" : 1 } )\ndb.orders.getIndexes()\ndb.orders.explain().aggregate( [ { ... } ] )",
"text": "Hello @Jacques_Luckhoff, welcome to the MongoDB forum.The aggregation query for the corresponding SQL query would be as follows (this is run from mongo shell):This output will be like this:{ \"_id\" : { \"reference\" : \"ORDERXYZ\" }, \"sum_value\" : 7.98 }The aggregation query has three stages; and the first is the $match. The query reads the collection and filters documents in the match stage. The filtered documents are the ones with the order line item description as “red box”. The $unwind stage flattens the array field orderlines- this is for grouping and summing in the next stage, the $group.In the above query the documents are scanned one at a time in the match stage, and are filtered. This can take a long time scanning all the collection documents and filter based upon the supplied predicate. To improve the performance, we can create an index on the fields used for filtering - in this case the field of the orderlines.description, a field in an embedded document of an array. Create an index and verify it is created from mongo shell:The index on an array field is called as Multikey Index.Further, verify the index is applied on the query. This is by generating a query plan with the explain() method on the query:db.orders.explain().aggregate( [ { ... } ] )The generated plan document shows the index usage, this is indicated by a IXSCAN. When there is no index usage it will be a COLLSCAN (collection scan).Indexes are used to make your queries run faster (or perform better). There are many types of indexes, including unique, compound, text, etc. And, indexes are used not only on query filters, but also for sort operations. Within aggregation queries, there are ways to optimally define and utilize indexes - see Aggregation Pipeline Optimization and Indexing Strategies.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi Prassad,Thank you for your very informative answer, it was exactly the information I was trying to find outRegardsJacques",
"username": "Jacques_Luckhoff"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB - Newbie question on Aggregation | 2020-07-09T07:37:26.498Z | MongoDB - Newbie question on Aggregation | 2,279 |
null | [] | [
{
"code": "",
"text": "We can post here about issues that we need to deal with when dealing with strict financial data, while using MongoDB.",
"username": "Pratik_Jain"
},
{
"code": "",
"text": "Hello @Pratik_Jain,welcome to the community! You can post here all MongoDB related questions. Please do not post any private or confidential information. You ask in public and we, as a community, will try to get you an answer. I like to point you to the Getting Started with the MongoDB Community from @Jamie which should answer all questions. If not, do not hesitate to ask here.Cheers,\nMichael",
"username": "michael_hoeller"
}
] | Mongo for Financial Data | 2020-07-09T19:10:57.157Z | Mongo for Financial Data | 5,541 |
[
"dot-net"
] | [
{
"code": "public class Car\n {\n public Guid CarId { get; set; }\n public string Name { get; set; }\n }\nvar pack = new ConventionPack {new IgnoreExtraElementsConvention(true)};cars.find(c => c.CarId == carId).FirstOfDefault();public class Car\n {\n public Guid _id { get; set; }\n public string Name { get; set; }\n }\n",
"text": "Hi i want use clear C# class without reference\\independent to mongodb. Id is Guid. Now my main problem is dublicate ids, example:And on init conventio:\nvar pack = new ConventionPack {new IgnoreExtraElementsConvention(true)};Default use case:\ncars.find(c => c.CarId == carId).FirstOfDefault();In database i have never used _id fieldExamble what need:ConventionRegistry.Register( default_id_field_name = ‘_id’ );\nConventionRegistry.Register( default_id_type = Guid);",
"username": "alexov_inbox"
},
{
"code": "_idGuidObjectId public class Car \n {\n public Guid Id {get; set;}\n public string Name {get; set;}\n }\n{ \n \"_id\" : CSUUID(\"fe664c82-a26c-48b0-88ca-b94e6e11805d\"), \n \"Name\" : \"foo\" \n}\n",
"text": "Hi @alexov_inbox,Can use only one field id, clear class (without ref\\using\\attribute\\etc) with C# Guid?I’m not quite sure what you’re looking for. If you’re looking to have an id field _id with the value of Guid instead of the default ObjectId, you try the following example:On insert, this should create a document example as below:See also BSON Mapping: The Id Member for more information.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "oh… I try this a few years ago and nothing work with many exceptions…\nToday with last mongodb.driver all work from ‘box’ how need in my question.\nThx! Sorry to hurry! ",
"username": "alexov_inbox"
},
{
"code": "",
"text": "Hi @alexov_inbox,Not a problem, I’m glad that your question is answered.Best regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | C# MongoDB strong type class independent of MongoDB | 2020-07-08T13:40:50.418Z | C# MongoDB strong type class independent of MongoDB | 3,632 |
|
null | [
"atlas-triggers"
] | [
{
"code": "",
"text": "Is it possible to get the previous data in a database update trigger event? I have been unable to find anything in the docs that suggests you can.My scenario is this: I have two collections with a many-to-many relationship: Users and Groups. I’d like to setup a trigger that, when a group has its “members” field updated (an array of ids), it updates each user’s “groups” field to contain that group id.This is fine for adding members to a group but not removing. If I only receive the current value of the “members” field in the trigger, I’m not able to update the removed users.",
"username": "Matt_Jennings"
},
{
"code": "",
"text": "Hi, are there any solutions yet for this?Also, need a way to cancel the action when certain conditions not meet,For example: cancel update text when length less than 100",
"username": "Decky_Fiyemonda"
},
{
"code": "",
"text": "It think the question is a bit vague so I don’t know if it’s answerable. What specifically is meant by adatabase update trigger event?Where would you see that in code - within an observe closure or somewhere else?If the scenario is fine for adding members, why not for removing as that info is also passed within the event?",
"username": "Jay"
},
{
"code": "",
"text": "I am referring to MongoDB Realm Database Triggers.The issue is when the trigger is called you only receive the updated document, rather than the change diff. I eventually was able to figure out the removed members through a series of collection queries but a change diff would save the need to do that.",
"username": "Matt_Jennings"
},
{
"code": "",
"text": "The event contains the change differences; any objects that were inserted, modified or removed are in the event data. I am clearly not understanding what you are asking but thisable to figure out the removed membersis exactly what’s contained in the event - it will contain the indices of the removed members.Collection notifications don’t receive the whole Realm, but instead receive fine-grained descriptions of changes. These consist of the indices of objects that have been added, removed, or modified since the last notification.",
"username": "Jay"
},
{
"code": "updateDescriptionfullDocument",
"text": "Hm, well, I was not able to find that info in the change event. Both the updateDescription and fullDocument contained the same values for the document and I didn’t see anywhere that contained the previous value or a change diff, so I suppose I must have missed it.Either way, I have since moved on from mongodb to another service, so unfortunately I won’t be able to take another look and see what I was missing. I will mark this as resolved.",
"username": "Matt_Jennings"
},
{
"code": "",
"text": "Hello,what we need is the old data right before it is updated, the updateDescriptions, and other params given in the trigger only contains the updated data, not the data before it is updated.ultimately in the end, what I want to achieve is a logic to reject a document update if certain conditions are not met, and to process that logic I need to compare the new data with its old version right before it is updated.this might be OOT, is there any way to achieve this?",
"username": "Decky_Fiyemonda"
},
{
"code": "",
"text": "I understand the question and just a couple of general thoughts. There are two issues as work.The first being that Realm notification events fire after the write has been committed so there is no ‘prior data’ available at that point.The second is the timing. If this is the objectiveultimately in the end, what I want to achieve is a logic to reject a document update if certain conditions are not metrejecting it within that notification is the wrong place to do it - if you want to deny a change or delete, it should be done way before that time.Much of this occurs asynchronously so you don’t want to be caught in a race condition when determining if data should be altered.Imagine a multi-user To Do list where one of the To Do’s is ‘locked’ so other users don’t delete it. When another user attempts the delete, you don’t want to wait for a notification or event to occur, the code should check to see if the locked property is set, and when the result of that test asynchronously returns, either approve or deny the request.That being said, if you really want to do handle it in a notification fashion, you could craft some type of soft delete. For example, add a property ‘status’ and when a user want to delete something set it to ‘delete’ The observer will catch that in an event, see the status is set to ‘delete’ perform your check and if it passes, then perform the actual delete, ensuring you don’t fire another event.",
"username": "Jay"
},
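To make the soft-delete idea concrete, a hypothetical sketch of a Realm database trigger function (the collection names and the "locked" condition are assumptions, not the poster's actual schema; it also assumes the trigger is configured to include the full document):

// Database trigger function fired on updates to the todos collection.
// It only performs the real delete when the soft-delete marker passes validation.
exports = async function(changeEvent) {
  const doc = changeEvent.fullDocument;
  if (!doc || doc.status !== "delete") {
    return; // nothing to do for ordinary updates
  }
  const todos = context.services
    .get("mongodb-atlas")            // assumed linked cluster name
    .db("app")                       // assumed database name
    .collection("todos");
  if (doc.locked === true) {
    // condition not met: clear the marker instead of deleting
    await todos.updateOne({ _id: doc._id }, { $unset: { status: "" } });
    return;
  }
  await todos.deleteOne({ _id: doc._id });
};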
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Getting the previous data in a database trigger | 2020-06-24T00:14:31.775Z | Getting the previous data in a database trigger | 4,173 |
null | [
"server"
] | [
{
"code": "",
"text": "Hello folks,WiredTiger triggers checkpoints at intervals of 60 seconds from buffer cache writing to disk and journal write data to disk at 100 milliseconds. My doubt is when occurs journaling the process checks if there is dirty page on cache and replace the page at data cache from the journal files or in the checkpoint process that page will be check if exists in journal files and update the data cache ?Thank you and cheers !!Alexandre Araujo",
"username": "Alexandre_Araujo"
},
{
"code": "",
"text": "Hi Alexandre,A checkpoint in WiredTiger is basically a snapshot of the state of the database where all data files are consistent with each other.WiredTiger implements a write ahead log in the form of a journal. That is, a write that is written to the journal is considered durable (i.e. will survive restarts).See WiredTiger journal and WiredTiger Storage Engine for a high level description of how the process works.The journal files are only used for recovery purposes in case of unclean shutdown. During normal operation, once the dirty pages in the cache (i.e. in memory) are checkpointed and marked clean, WiredTiger will clean up the now-unneeded journal entries.Please note that these are quite specific implementation details, and may change between MongoDB versions. Out of curiosity, what is the reason for the question? Are you simply curious about how things work under the hood, or is there another reason?Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,Thank you for all the clarifications. The reason is about how things work under the hood.If i may one more doubt, so between the checkpoints, the dirty data will be live in journal or there is a step in write ahead log also update the pages in memory ?Regards,\nAlexandre Araujo",
"username": "Alexandre_Araujo"
},
{
"code": "",
"text": "Hi Alexandre,Between checkpoints, the dirty pages stay in the WiredTiger cache. These dirty pages will then be flushed to the data files during checkpoint.So when a write comes in, it will be written in two places: the journal on disk (synced every 100ms), and the pages in the cache (where they will be marked dirty). The journal is only for safekeeping purposes and is not involved in checkpoints during normal operations. It only comes into play when there is an unclean shutdown, where WiredTiger will restart from the latest known good checkpoint, then replay the journal entries if there are uncommitted writes.Hopefully it’s clear & helpful Best regards,\nKevin",
"username": "kevinadi"
},
{
"code": "",
"text": "Hi Kevin,Perfectly clear & helpful Thank you and have a great day.Alexandre Araujo",
"username": "Alexandre_Araujo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Internals - Checkpoints and Journaling | 2020-07-04T03:17:45.968Z | Internals - Checkpoints and Journaling | 5,623 |
null | [] | [
{
"code": "",
"text": "Hi\nIs there a way in MongoDb to store the mapping of Search criteria.\nHere is a background of my query !There is a mapping (some business logic) involved when searching a field in a collection.\nSo when one selects ABC, he should also see records of DEF & EGE.SearchOption NewSearchList (Also Include in search criteria)\nABC ABC, DEF, EGE\nXYZ XYZ, TQR, PRT\nand like wise…\nCurrently I am keeping this mapping in my code, (C#) and when querying MongoDB, I map the options and send request like…Builders<Entities.Dto.TDocument>.Filter.ElemMatch(\nx => x.Unit,\ny => NewSearchList .Contains(y.Output))I have been told that this can be achieved inside mongo itself and I don’t have to do this mapping in my code.\nAny help would be greatly appriciated !",
"username": "ictest_account"
},
{
"code": "db.test1.insertMany([\n {\n a: 'x1',\n y: 'y1',\n z: 'z1',\n }\n]);\nfunction calcProjectionForQuery(query) {\n // by default include only document ids in the output\n let projection = { _id: true };\n // get list of queried properties\n const keysAsked = Object.keys(query);\n // include queried properties to the output\n keysAsked.forEach(key => {\n projection[key] = true;\n });\n // add additional fields to the query output,\n // based on queried properties\n if (keysAsked.includes('x') && keysAsked.includes('y')) {\n projection.z = true;\n }\n return projection;\n}\nfunction runQuery(query) {\n const projection = calcProjectionForQuery(query);\n return db.test1.find(query, projection);\n}\nfunction runAggregation(query) {\n const projection = calcProjectionForQuery(query);\n return db.test1.aggregate([\n {\n $match: query,\n },\n {\n $project: projection,\n }\n ]);\n}\n> runQuery({ x: 'x1' })\n{ \"_id\" : ObjectId(\"...\"), \"x\" : \"x1\" }\n> runQuery({ y: 'y1' })\n{ \"_id\" : ObjectId(\"...\"), \"y\" : \"y1\" }\n> runQuery({ x: 'x1', y: 'y1' })\n{ \"_id\" : ObjectId(\"...\"), \"x\" : \"x1\", \"y\" : \"y1\", \"z\" : \"z1\" }\n> runQuery({})\n{ \"_id\" : ObjectId(\"...\") }\n>\ndb.test2.insertMany([\n {\n queryName: 'query1',\n variants: [\n {\n keysAsked: ['x', 'y'],\n addKeysToProjection: ['z'],\n },\n {\n keysAsked: ['y', 'z'],\n addKeysToProjection: ['x'],\n }\n ]\n },\n {\n queryName: 'query2',\n variants: [\n {\n keysAsked: ['a', 'b'],\n addKeysToProjection: ['c'],\n },\n ],\n },\n]);\n",
"text": "There is a mapping (some business logic) involved when searching a field in a collection.\nSo when one selects ABC, he should also see records of DEF & EGE.To include/exclude properties from read operation result, you need to use $projection.Currently (in MongoDB v4.2), there is not built-in mechanism, that would allow you to conditionally project (include/exclude) fields, based the props, that are in your query, so you will have to build the projection object on the application side.Let me show you on example (every example below works fine in mongo-shell).Imagine we have this dataset:Then we have 3 functions somewhere in our application:Execution results:Is there a way in MongoDb to store the mapping of Search criteria.You can achieve the same, if you would store that mapping object in the database. It may look like this:But, the projection building mechanism can be even more complex, than on the example above, because:You will also need to plan those mappings fetching strategy:Summary\nIt is much more rational and easier to store projection mappings and build projection objects with your application code.PS: There is also $redact aggregation pipeline stage, that allows you to include/exclude nested objects or entire documents, based on your conditions.",
"username": "slava"
},
{
"code": "",
"text": "Thanks Slava for your answer.\nI am looking for custom synonyms dictionary but i gues you answered my query that its not avaiable in Mongo.",
"username": "ictest_account"
}
] | Mapping Search Options in MongoDb | 2020-07-07T11:38:12.835Z | Mapping Search Options in MongoDb | 3,205 |
null | [] | [
{
"code": "",
"text": "Hi. I have a data structure where I have a nested array in an object and then more objects nested in each array element. I want to apply an expression to a value in each array element but want the original array with the applied expression to be the output. So a function like $map only gives me the result of the one value that the expression is applied to not the whole array with all the other objects still in their original place. Is there a function that does this?",
"username": "johan_potgieter"
},
{
"code": "",
"text": "Hello, @johan_potgieter!Please, provide:",
"username": "slava"
},
{
"code": "",
"text": "Hi @slava , i am attaching a sample of the document. All i want to do is keep the data exactly the way it is , i just want to remove the integer in the area key so it only says “A”, “C” etc. .\nI tried this.\ndb.test.aggregate([\n{\n$project: {\nteamStats: {\n$map: {\ninput: ‘$teamStats.data’,\nas: ‘rtp’,\nin: {\n$substr: [’$$rtp.area’, 0, 1],\n},\n},\n},\n},\n},\n]);\nbut then only get this result below.\n .\nThanks",
"username": "johan_potgieter"
},
{
"code": "db.test1.insertMany([\n {\n teamStats: {\n data: [\n {\n area: 'A1',\n key: 'K1',\n },\n {\n area: 'B1',\n key: 'K2',\n }\n ]\n }\n }\n]);\n{\n \"_id\" : ObjectId(\"5f05d6902e6a2f6e49fe1fd4\"),\n \"teamStats\" : {\n \"data\" : [\n {\n \"area\" : \"A\",\n \"key\" : \"K1\"\n },\n {\n \"area\" : \"B\",\n \"key\" : \"K2\"\n }\n ]\n }\n}\ndb.test1.aggregate([\n {\n $project: {\n 'teamStats.data': {\n $map: {\n input: '$teamStats.data',\n in: {\n $let: {\n vars: {\n // calculate new value for area\n newArea: {\n $substr: ['$$this.area', 0, 1],\n }\n },\n in: {\n // merge calculated property\n // into current object\n $mergeObjects: ['$$this', {\n area: '$$newArea',\n }]\n }\n }\n }\n }\n }\n }\n }\n]);\n",
"text": "Ok, so we need to change the state of the documents from this:To this:To achieve that, you need to merge the calculated value into current document in the aggregation pipeline. Like this:",
"username": "slava"
},
{
"code": "",
"text": "Great thanks for your help. Will reply on the other post shortly.",
"username": "johan_potgieter"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Similar function to $map ,but keeps the original array | 2020-07-08T13:38:58.563Z | Similar function to $map ,but keeps the original array | 3,405 |
null | [
"java"
] | [
{
"code": " byte[] bytesArray = new byte[(int) file.length()];\n\n FileInputStream fis = new FileInputStream(file);\n fis.read(bytesArray); //read file into bytes[]\n fis.close();\n document.append(\"binaryFile\", new BsonBinary(bytesArray));\n collection.insertOne(new Document(document)); \n ArrayList<Document> docs = new ArrayList<Document>();\n\n it.into(docs);\n\n for (Document doc1: docs) {\n\n System.out.println(doc1);\n byte[] c = ((org.bson.types.Binary)doc1.get(\"binaryFile\")).getData();\n if(c.length==0){\n \n }else{\n fileInputStream= new ByteArrayInputStream(c);\n fileInputStream.read();\n }\n",
"text": "I am using mongodb community version. I stored pdf file with size less than 16 MB in table as followsBut at the time of retrial of the pdf file from database I used following codeFindIterable it = collection.find(whereQuery);At the this time pdf file got created but could not be opened. It displayed following error.“Error Adobe Reader could not open document.pdf becase it is either not a supported file or because file has damaged”.",
"username": "Kavita_Mhatre"
},
{
"code": "// Writing a PDF file to a document\nPath file = Paths.get(\"intro.pdf\");\nbyte [] fileBytes = Files.readAllBytes(file);\nSystem.out.println(\"File size: \" + fileBytes.length);\nBinary binData = new Binary(fileBytes); // org.bson.types.Binary class\nDocument doc = new Document(\"_id\", 1).append(\"file\", binData);\ncollection.insertOne(doc);\n\n// Reading from document and creating the PDF file\nDocument doc = collection.find(new Document(\"_id\", 1)).first();\nBinary binData = doc.get(\"file\", Binary.class);\nbyte [] fileBytes = binData.getData();\nSystem.out.println(\"File size: \" + fileBytes.length);\nPath file = Paths.get(\"new_intro.pdf\");\nFiles.write(file, bytes); // this creates the file",
"text": "Hello @Kavita_Mhatre, welcome to the forum.Here is code I tried to store a small PDF file ( 213470 bytes) in a document of MongoDB collection. I am using Java 8, MongoDB 4.2 and MongoDB Java Driver 3.12. This worked fine:",
"username": "Prasad_Saya"
}
] | Errror while retrieving pdf file from mongodb in java | 2020-07-08T14:18:21.481Z | Errror while retrieving pdf file from mongodb in java | 2,994 |
null | [
"python"
] | [
{
"code": " code date num price money\n0 2 2015-11-15 10 3.8 -38.0\n1 2 2015-11-17 -10 3.7 37.0\n2 2 2015-11-20 20 3.5 -70.0\n3 2 2016-04-01 10 3.2 -32.0\n4 2 2016-04-02 -30 3.6 108.0\n5 2 2016-04-03 50 3.4 -170.0\n6 2 2016-11-01 -40 3.5 140.0\n7 3 2015-02-01 25 7.0 -175.0\n8 3 2015-05-01 35 7.5 -262.5\n9 3 2016-03-01 -15 8.0 120.0\n10 5 2015-11-20 50 5.0 -250.0\n11 5 2016-06-01 -50 5.5 275.0\n12 6 2015-02-01 35 11.5 -402.5 \nimport pandas as pd\nimport numpy as np\n\ndf=pd.DataFrame({'code': [2,2,2,2,2,2,2,3,3,3,5,5,6],\n 'date': ['2015-11-15','2015-11-17','2015-11-20','2016-04-01','2016-04-02','2016-04-03','2016-11-01','2015-02-01','2015-05-01','2016-03-01','2015-11-20','2016-06-01','2015-02-01'],\n 'num' : [10,-10, 20, 10, -30,50, -40, 25, 35, -15, 50, -50, 35],\n 'price': [3.8,3.7,3.5,3.2, 3.6,3.4, 3.5, 7, 7.5, 8, 5, 5.5, 11.5],\n 'money': [-38,37,-70,-32, 108,-170, 140,-175,-262.5,120,-250, 275,-402.5]\n })\n\n\nprint(df,\"\\n------------------------------------------\\n\")\ndf['hold'] = df.groupby(['code'])['num'].cumsum()\ndf['type'] = np.where(df['hold'] > 0, 'B', 'S')\ndf['total']=df['total1']= df.groupby(['code'])['money'].cumsum()\n\ndef abc(dfg):\n if dfg[dfg['hold'] == 0]['hold'].count():\n subT = dfg[dfg['hold'] == 0]['total1'].iloc[-1]\n dfg['total'] = np.where(dfg['hold'] > 0, dfg['total']-subT, dfg['total'])\n return dfg\ndfR = df.groupby(['code'], as_index=False)\\\n .apply(abc) \\\n .drop(['type', 'total1'], axis=1) \\\n .reset_index(drop=True)\n\ndf1=dfR.groupby(['code']).tail(1)\nprint(df1,\"\\n------------------------------------------\\n\")\n code date num price money *hold* *total*\n6 2 2016-11-01 -40 3.5 140.0 *10* *-30.0*\n9 3 2016-03-01 -15 8.0 120.0 *45* *-317.5*\n11 5 2016-06-01 -50 5.5 275.0 *0* *25.0*\n12 6 2015-02-01 35 11.5 -402.5 *35* *-402.5* \n",
"text": "I have a mongodb data like below:I want to get the number of securities held and the funds currently occupied by the securities If I take out the data, I can get the result I want in the following way:outIf use the mongodb method (such as aggregate, or other), how can i directly obtain the same result as above?",
"username": "fei_hua"
},
{
"code": "codedatecodecodenumholdmoneytotalcode+datenumpricemoneymongo",
"text": "@fei_hua Welcome to the MongoDB community.The aggregation query is the correct approach. An aggregation query has pipeline with stages. Each stage processes the documents which is input to the next stage. The first stage in the pipeline has the collection documents as input. In your aggregation you need to use two stages to get the desired result.The first stage is $sort. This is to sort the code and date, both in ascending order.The second stage is the $group stage. This is to group by the the code field and get the output with the accumulations. The grouping key is the code field. Sum the num to get the hold, and sum the money to get the total - see the $sum accumulater operator. The last value of the sorted code+date gives you the num, price, money - see $last aggregation group operator.The links to aggregation provided above are for mongo shell methods. Here is link to PyMongo documentation.Note that you can also build an aggregation query in the Compass GUI tool and generate PyMongo code automatically - using the Aggregation Pipeline Builder (after building the pipeline click the Export to Language button).",
"username": "Prasad_Saya"
},
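A minimal mongo-shell sketch of the two stages described above (without the hold == 0 adjustment that the later posts in this thread work out):

db.collection.aggregate([
  { $sort: { code: 1, date: 1 } },
  { $group: {
      _id: "$code",
      date:  { $last: "$date" },   // values taken from the last (most recent) document per code
      num:   { $last: "$num" },
      price: { $last: "$price" },
      money: { $last: "$money" },
      hold:  { $sum: "$num" },     // accumulated position
      total: { $sum: "$money" }    // accumulated money
  } }
]);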
{
"code": "sort = {'$sort': {\n 'code': 1,\n 'date': 1,\n}}\n\ngroup = {'$group': {\n '_id': {'code': '$code'},\n \"date\": { \"$last\": \"$date\" }, \n 'hold': {'$sum': '$num'},\n 'total': {'$sum': '$money'},\n}}\n\nmydoc=mycol.aggregate([sort,group])\ndata = pd.DataFrame(mydoc)\nprint(data)\n _id date hold total\n0 {'code': '2'} 2016-11-01 10.0 **-25.0**\n1 {'code': '6'} 2015-02-01 35.0 -402.5\n2 {'code': '5'} 2016-06-01 0.0 25.0\n3 {'code': '3'} 2016-03-01 45.0 -317.5\n",
"text": "Is that so?\nBut the result is:",
"username": "fei_hua"
},
{
"code": "$group$last$project$sort{ '$project': { 'code': '_id.code', 'date': 1, 'num': 1, 'price': 1, 'money': 1, 'hold': 1, 'total': 1, '_id': 0 } }\n{ '$sort': { 'code': 1 } }",
"text": "In the $group stage you can use the $last on other fields you want in the output (other than the accumulated values). Now add following $project and $sort stages (after the $group), to get the result:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I mean, the total of the first line, it is not the desired result",
"username": "fei_hua"
},
{
"code": "pipeline = [\n { '$sort': { 'code': 1, 'date': 1 } },\n { '$group': { '_id': '$code', 'num': { '$last': '$num' }, 'price': { '$last': '$price' }, 'money': { '$last': '$money' }, 'hold': { '$sum': '$num' }, 'total': { '$sum': '$money' } } },\n { '$project': { 'code': '$_id', 'date': 1, 'num': 1, 'price': 1, 'money': 1, 'hold': 1, 'total': 1, '_id': 0 } },\n { '$sort': { 'code': 1 } }\n]\n\npprint.pprint(list(collection.aggregate(pipeline)))",
"text": "Try this code:",
"username": "Prasad_Saya"
},
{
"code": " num price money hold total code\n0 -40.0 3.5 140.0 10.0 -25.0 2\n1 -15.0 8.0 120.0 45.0 -317.5 3\n2 -50.0 5.5 275.0 0.0 25.0 5\n3 35.0 11.5 -402.5 35.0 -402.5 6\n",
"text": "Yes, I ran it with the code you said, look at the number in the first line as the total\nIt is -25.0, the number I want is -30.0",
"username": "fei_hua"
},
{
"code": "-30.0",
"text": "Yes, I ran it with the code you said, look at the number in the first line as the total\nIt is -25.0, the number I want is -30.0The number -30.0 - how did you get (calculate) that figure? Please explain.",
"username": "Prasad_Saya"
},
{
"code": "import pandas as pd\nimport numpy as np\n\ndf=pd.DataFrame({'code': [2,2,2,2,2,2,2,3,3,3,5,5,6],\n 'date': ['2015-11-15','2015-11-17','2015-11-20','2016-04-01','2016-04-02','2016-04-03','2016-11-01','2015-02-01','2015-05-01','2016-03-01','2015-11-20','2016-06-01','2015-02-01'],\n 'num' : [10,-10, 20, 10, -30,50, -40, 25, 35, -15, 50, -50, 35],\n 'price': [3.8,3.7,3.5,3.2, 3.6,3.4, 3.5, 7, 7.5, 8, 5, 5.5, 11.5],\n 'money': [-38,37,-70,-32, 108,-170, 140,-175,-262.5,120,-250, 275,-402.5]\n })\n\n\nprint(df,\"\\n------------------------------------------\\n\")\ndf['hold'] = df.groupby(['code'])['num'].cumsum()\ndf['type'] = np.where(df['hold'] > 0, 'B', 'S')\ndf['total']=df['total1']= df.groupby(['code'])['money'].cumsum()\n\ndef abc(dfg):\n if dfg[dfg['hold'] == 0]['hold'].count():\n subT = dfg[dfg['hold'] == 0]['total1'].iloc[-1]\n dfg['total'] = np.where(dfg['hold'] > 0, dfg['total']-subT, dfg['total'])\n return dfg\ndfR = df.groupby(['code'], as_index=False)\\\n .apply(abc) \\\n .drop(['type', 'total1'], axis=1) \\\n .reset_index(drop=True)\n\ndf1=dfR.groupby(['code']).tail(1)\nprint(df1,\"\\n------------------------------------------\\n\")\n",
"text": "Use the following method\nI just want to see if it is possible to use aggregation query (or other methods of pymongo) to get this result",
"username": "fei_hua"
},
{
"code": "",
"text": "I am not familiar with the pandas. If you can explain in plain English or pseudo-code, I can try.",
"username": "Prasad_Saya"
},
{
"code": " code date num price money\n0 2 2015-11-15 10 3.8 -38.0\n1 2 2015-11-17 -10 3.7 37.0\n2 2 2015-11-20 20 3.5 -70.0\n3 2 2016-04-01 10 3.2 -32.0\n4 2 2016-04-02 -30 3.6 108.0\n5 2 2016-04-03 50 3.4 -170.0\n6 2 2016-11-01 -40 3.5 140.0\n7 3 2015-02-01 25 7.0 -175.0\n8 3 2015-05-01 35 7.5 -262.5\n9 3 2016-03-01 -15 8.0 120.0\n10 5 2015-11-20 50 5.0 -250.0\n11 5 2016-06-01 -50 5.5 275.0\n12 6 2015-02-01 35 11.5 -402.5 \n",
"text": "Calculate groupby code from top to bottom\na=sum(num)\nb=0\nc=sum(money)Judgment during accumulation\nif(a==0){\nb+=c;\nc=0;\n}Judgment after accumulation\nif(a==0){\ntotal=b\n}else{\ntotal=c\n}\nJust like the above, I don’t know if I express it clearly",
"username": "fei_hua"
},
{
"code": "",
"text": "If calculated in excel\nexcel formula:Result of formula calculation in excel:then\na=The last total (middle) of the code corresponding to the hold equal to 0\nIf the a of code does not exist\na=0if the hold of code is not equal to 0\ntotal=total(middle)-a\nelse\ntotal=total(middle)then take the last line of data for each code,that is the result i want",
"username": "fei_hua"
},
{
"code": "map = Code(\"\"\"function () {\n emit(this.code, {hold:this.num,total:this.money,total1:0});\n }\"\"\")\n\nreduce = Code(\"\"\"function (key, values) {\n var a={hold:0,total:0,total1:0};\n for (var i = 0; i < values.length; i++) {\n a.hold += values[i].hold;\n a.total +=values[i].total;\n a.total1 +=values[i].total;\n if (a.hold==0){\n a.total1=0\n }\n }\n return a;\n }\"\"\")\n",
"text": "",
"username": "fei_hua"
},
{
"code": "pipeline = [\n {\n '$sort': { 'code': 1, 'date': 1 }\n },\n { \n '$group': { \n '_id': '$code', \n 'num': { '$last': '$num' }, 'price': { '$last': '$price' }, 'money': { '$last': '$money' }, \n 'code_data': { '$push': { 'n': \"$num\", 'm': \"$money\" } } \n } \n },\n { \n '$addFields': { \n 'result': { \n '$reduce': { \n 'input': '$code_data', \n 'initialValue': { 'hold': 0, 'sum_m': 0, 'total': 0 }, \n 'in': { \n '$let': {\n 'vars': { \n 'hold_': { '$add': [ '$$this.n', '$$value.hold' ] }, \n 'sum_m_': { '$add': [ '$$this.m', '$$value.sum_m' ] }\n },\n 'in': { \n '$cond': [ { '$eq': [ '$$hold_', 0 ] }, \n { 'hold': '$$hold_', 'sum_m': 0, 'total': '$$sum_m_' },\n { 'hold': '$$hold_', 'sum_m': '$$sum_m_', 'total': '$$sum_m_' }\n ] \n }\n }\n }\n } \n }\n } \n },\n { \n '$addFields': { 'code': '$_id', 'hold': '$result.hold', 'total': '$result.total' } \n },\n { \n '$project': { 'code_data': 0, 'result': 0, '_id': 0 } \n },\n { \n '$sort': { 'code': 1 } \n }\n]",
"text": "You can try this aggregation:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "",
"username": "fei_hua"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help writing aggregation query using PyMongo instead of pandas | 2020-07-04T23:27:57.055Z | Help writing aggregation query using PyMongo instead of pandas | 4,919 |
null | [
"node-js"
] | [
{
"code": "const gooseoptions = {\n tls: true,\n tlsCAFile: '..my.crt',\n sslValidate: true,\n useNewUrlParser: true,\n useUnifiedTopology: true\n};\n/home/ec2-user/myapp/**node_modules/mongodb/lib/utils.js:725**\nthrow error;\n^\n\nMongoServerSelectionError: connection <monitor> to <my mongodb IP> closed\nat Timeout._onTimeout (/home/ec2-user/myapp/**node_modules/mongodb/lib/core/sdam/topology.js:430:30**)\nat listOnTimeout (internal/timers.js:549:17)\nat processTimers (internal/timers.js:492:7) {\n reason: TopologyDescription {\n type: 'ReplicaSetNoPrimary',\n setName: null,\n maxSetVersion: null,\n maxElectionId: null,\n servers: Map(2) {\n '...my db...' => [ServerDescription],\n '...my db...' => [ServerDescription]\n },\n stale: false,\n compatible: true,\n compatibilityError: null,\n logicalSessionTimeoutMinutes: null,\n heartbeatFrequencyMS: 10000,\n localThresholdMS: 15,\n commonWireVersion: null\n }\n}\n",
"text": "Recently upgraded a server-side app to\nnode v14.4.0\nMongoDB 4.2.6 CommunityAll recent packages\n{ …\n“express”: “^4.17.1”,\n“mongodb”: “^3.5.9”,\n“mongoose”: “^5.9.20”,\n}Simple setup:When I run the app, it seems to get a connection - but then after about 30 seconds it gets kicked out with this…Note: the same error is posted on ASP.net forum here on Jun 29, 2020:\nnode.js db connection",
"username": "Hankins_Parichabutr"
},
{
"code": "ReplicaSetNoPrimarytlsCAFilemongomongod",
"text": "Hi,A driver will try for 30 seconds to connect to the server using the Server Selection Algorithm. From your description, it seems that the driver cannot find a suitable server to connect to, for some reason.The specific reason is printed in the error message: ReplicaSetNoPrimary. Meaning that the driver cannot seem to find/connect to the primary node. However, there could be many different situation that lead to this error.From the tlsCAFile option you provided to the driver, I’m guessing that this is not an Atlas deployment which requires SSL to connect. For this experiment, have you tried connecting to the server without using SSL to ensure that the driver can reach the server? Have you tried connecting to the server using the mongo shell?If you can connect without SSL but cannot with SSL, what is the relevant error message printed by the node driver and the mongod process? Typically the error message points to the exact problem.Best regards,\nKevin",
"username": "kevinadi"
},
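For the experiment Kevin suggests, a minimal Node.js connection check might look like this sketch (the URI is a placeholder; drop the tls/tlsCAFile options from your setup to test without SSL):

const { MongoClient } = require('mongodb');

async function ping() {
  const client = new MongoClient('mongodb://user:pass@host:27017/?replicaSet=rs0', {
    useNewUrlParser: true,
    useUnifiedTopology: true,
    serverSelectionTimeoutMS: 5000, // fail fast instead of the default 30s
  });
  try {
    await client.connect();
    console.log(await client.db('admin').command({ ping: 1 })); // { ok: 1 } on success
  } catch (err) {
    console.error('connection failed:', err.message);
  } finally {
    await client.close();
  }
}

ping();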
{
"code": "",
"text": "Hi Kevin,The deployment is on IBM Cloud using their “Databases for MongoDB” service.\nYes, I am able to connect using the Mongo shell perfectly fine.Is there a way to see the connection attempt from within Mongo shell?I have not tried connecting without SSL from the app.\nThank you for the guidance, I will follow up with my results…\nHankins",
"username": "Hankins_Parichabutr"
},
{
"code": "mongod",
"text": "Hi Hankins,Is there a way to see the connection attempt from within Mongo shell?Unfortunately no. This would have to be seen from the mongod logs.Note that there seems to be an ongoing infrastructure issue with IBM Cloud, as there was a report of intermittent failures in NODE-2513. You might want to follow up with them as well if your experiments prove unsuccessful.Best regards,\nKevin",
"username": "kevinadi"
}
] | Connection via node/express gets closed / times out | 2020-07-03T20:34:58.034Z | Connection via node/express gets closed / times out | 5,206 |
[] | [
{
"code": "exp.find({ \"address.state\": \"NY\", stars: { $gt: 3, $lt: 4 } }).sort({ name: 1 }).hint({ \"address.state\": 1})\n\"winningPlan\" : {\n\t\t\"stage\" : \"SORT\",\n\t\t\"sortPattern\" : {\n\t\t\t\"name\" : 1\n\t\t},\n\t\t\"inputStage\" : {\n\t\t\t\"stage\" : \"SORT_KEY_GENERATOR\",\n\t\t\t\"inputStage\" : {\n\t\t\t\t\"stage\" : \"FETCH\",\n\t\t\t\t\"filter\" : {\n\t\t\t\t\t\"$and\" : [\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stars\" : {\n\t\t\t\t\t\t\t\t\"$lt\" : 4\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t},\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\"stars\" : {\n\t\t\t\t\t\t\t\t\"$gt\" : 3\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t]\n\t\t\t\t},\n\t\t\t\t\"inputStage\" : {\n\t\t\t\t\t\"stage\" : \"IXSCAN\",\n\t\t\t\t\t\"keyPattern\" : {\n\t\t\t\t\t\t\"address.state\" : 1\n\t\t\t\t\t},\n\t\t\t\t\t\"indexName\" : \"address.state_1\",\n\t\t\t\t\t\"isMultiKey\" : false,\n\t\t\t\t\t\"multiKeyPaths\" : {\n\t\t\t\t\t\t\"address.state\" : [ ]\n\t\t\t\t\t},\n\t\t\t\t\t\"isUnique\" : false,\n\t\t\t\t\t\"isSparse\" : false,\n\t\t\t\t\t\"isPartial\" : false,\n\t\t\t\t\t\"indexVersion\" : 2,\n\t\t\t\t\t\"direction\" : \"forward\",\n\t\t\t\t\t\"indexBounds\" : {\n\t\t\t\t\t\t\"address.state\" : [\n\t\t\t\t\t\t\t\"[\\\"NY\\\", \\\"NY\\\"]\"\n\t\t\t\t\t\t]\n\t\t\t\t\t}\n\t\t\t\t}\n\t\t\t}\n\t\t}\n\t},\n",
"text": "I am working on the course M201 and am up to lab 3.1.I ran the following code in my Mongo shell:And found the following result:As I understand it, this is the same order of operations as the question asks (IXSCAN -> FETCH -> SORT_KEY_GENERATOR -> SORT).None of the other hints in the question result in this order. Why is my answer wrong?Screen Shot 2020-07-09 at 11.09.14 am3360×2100 583 KBScreen Shot 2020-07-09 at 11.09.57 am1920×1080 196 KB",
"username": "Spirit_Dragon"
},
{
"code": "",
"text": "Hello @Spirit_Dragon welcome to the community!\nWe are happy to help, however this is a University question which has a dedicated forum for each class. Please follow this link to reach the M201 University Forum.\nConcerning your question: you are on a good path, just rethink your hint with the equality, sort, range rule in mind.Hope this hint helps \nMichael",
"username": "michael_hoeller"
}
] | M201 lab 3.1 (Explain output) | 2020-07-09T01:54:24.925Z | M201 lab 3.1 (Explain output) | 1,860 |
|
null | [
"scala"
] | [
{
"code": "collection.insertOne(doc).results();\n",
"text": "Hi,here :results() method not availablei’m using - libraryDependencies += “org.mongodb.scala” %% “mongo-scala-driver” % “2.9.0”\nresults() implicit we block until the observer is completed:please help me with this.",
"username": "venkateswararao_yelu"
},
{
"code": "results()mongo-scala-driverHelpersdef results(): Seq[C] = Await.result(observable.toFuture(), Duration(10, TimeUnit.SECONDS))\n",
"text": "Hi @venkateswararao_yelu, and welcome to the forum!results() method not availableThe results() is an implicit observer from the mongo-scala-driver example. Generally in the Scala driver examples you can see an import for Helpers which contains this method.You can find the code on mongo-scala-driver: Helpers.scala:See mongo-scala-driver: Helpers used in the Quick Tour for more information.Regards,\nWan.",
"username": "wan"
}
] | collection.insertOne(doc).results() method is not available | 2020-07-08T09:23:43.202Z | collection.insertOne(doc).results() method is not available | 3,741 |
null | [] | [
{
"code": "",
"text": "I’m working within the AWS infrastructure and have successfully made a connection between a FARGATE task and ATLAS using the PrivateLink connection. This all works nicely when I have a single PrivateLink, however it is not possible to make a connection to additional PrivateLinks created within the same Region.The documentation states that there are limitations on creating multiple PrivateLinks but only across Regions.Is there a single PrivateLink limitation within a single region?",
"username": "Chris_Hills"
},
{
"code": "",
"text": "Hi Chris,For a single-region Atlas Project (e.g. with a single VPC on the Atlas backend in a single region), you can set up multiple AWS Privatelinks.It’s specifically where your Atlas Project’s cluster(s) involves multiple AWS regions and hence multiple VPCs in different regions on the Atlas backend that we limit you to setting up one AWS Privatelink per region.Importantly, because AWS Privatelink is transitive you can set up your own peering connections within your app tier VPCs and hence reach the AWS Privatelink you’ve set up to reach Atlas from multiple VPCs within your AWS account.-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "System.TimeoutException:\nA timeout occured after 30000ms selecting a server using CompositeServerSelector\n{\n\tSelectors = \n\t\tMongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, \n\t\tLatencyLimitingServerSelector { AllowedLatencyRange = 00:00:00.0150000 } \n}\nClient view of cluster state is \n{ \n\tClusterId : \"1\", \n\tConnectionMode : \"ReplicaSet\", \n\tType : \"ReplicaSet\", \n\tState : \"Disconnected\", \n\tServers : [\n\t\t{ \n\t\t\tServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/pl-0-eu-west-2.uagzl.mongodb.net:1024\" }\", \n\t\t\tEndPoint: \"Unspecified/pl-0-eu-west-2.uagzl.mongodb.net:1024\", \n\t\t\tState: \"Disconnected\", \n\t\t\tType: \"Unknown\", \n\t\t\tLastUpdateTimestamp: \"2020-07-01T13:30:51.2980600Z\" \n\t\t}, \n\t\t{ \n\t\t\tServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/pl-0-eu-west-2.uagzl.mongodb.net:1025\" }\", \n\t\t\tEndPoint: \"Unspecified/pl-0-eu-west-2.uagzl.mongodb.net:1025\", \n\t\t\tState: \"Disconnected\", \n\t\t\tType: \"Unknown\", \n\t\t\tLastUpdateTimestamp: \"2020-07-01T13:30:51.2983032Z\"\n\t\t},\n\t\t{ \n\t\t\tServerId: \"{ ClusterId : 1, EndPoint : \"Unspecified/pl-0-eu-west-2.uagzl.mongodb.net:1026\" }\", \n\t\t\tEndPoint: \"Unspecified/pl-0-eu-west-2.uagzl.mongodb.net:1026\", \n\t\t\tState: \"Disconnected\", \n\t\t\tType: \"Unknown\", \n\t\t\tLastUpdateTimestamp: \"2020-07-01T13:30:51.2774193Z\" \n\t\t}\n\t]\n}\n",
"text": "Hi @Andrew_Davidson, thanks for the prompt reply.I’m probably misunderstanding something, but in practice, I’m not able to get an ATLAS connection from a second app tier VPC created in the same region as the first (which has a working vpce/PrivateLink connection). Each app tier has its own AWS vpce connected to its own ATLAS Private Endpoint (PrivateLink connection).The ATLAS cluster is a single region cluster, each app tier is using an identical connection string.The following exception is thrown from the second app tier -Any further help much appreciated.",
"username": "Chris_Hills"
},
{
"code": "",
"text": "Hi Chris,You’re going to need to work with the MongoDB support team to get to the bottom of this one, I suspect.A couple possibilities: are you sure that the peered VPC has a route-back CIDR range that includes the private IP of the PrivateLink?-Andrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew,Of the 3 Network Access options available within the ATLAS console (IP Whitelist, Peering and Private Endpoint) I’m using the Private Endpoint option which gives provides the ‘Add PrivateLink Connection’ wizard. It’s this wizard that I’m using to set up access for my AWS app tier to ATLAS, so I’m not using the Peering option. My app tier VPCs are not peered with the ATLAS one.I’ve been following this documentation - Set up a Private Endpoint.Am I missing something? Is peering required to establish more than one PrivateLink?Chris",
"username": "Chris_Hills"
},
{
"code": "",
"text": "Hi Chris,Apologies. I had assumed you had connected 1 VPC “VPC A” in your applications tier to Atlas using an Atlas Private Endpoint / AWS Privatelink and then had another VPC “VPC B” in your app tier peered to your first VPC A.I understand you have set up two Atlas Private Endpoints, one in each of VPC A and VPC B.Therefore, I wonder if you might be using the connection string associated with Private Endpoint A from VPC B or vice versa? The Atlas connect modal should offer both options in the drop-down with the Private Endpoint selector. Since each endpoint is specifically associated with one VPC on your side, if they might have been reversed that could explain the issue.Cheers\nAndrew",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Hi Andrew,I had thought that there should be a different connection string for each Endpoint but when I first looked at the 2 options they appeared to be the same. Now that you mention that I might be using the same connection string for both (which is what I’ve been doing), upon closer inspection I see that there is a difference - an incrementing numeric on the -pl-0- portion of the connection string for each PrivateLink created.Thanks for your help, I now have an additional app tier connection from another VPC.Chris",
"username": "Chris_Hills"
},
{
"code": "",
"text": "Great to hear it! And good feedback that that nuance can be easy to miss.",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Connecting to more than one ATLAS AWS PrivateLink | 2020-07-06T09:13:15.167Z | Connecting to more than one ATLAS AWS PrivateLink | 4,038 |
[
"java"
] | [
{
"code": "",
"text": "Is there a java example for Pagination similar to the one below?The application i’m building is architected as:\nWeb page → On-load calls a service to fetch data (Angular) ← Service returns data (Java, jax-rs) <–> Connects to MongoDB (local, open-source)Wanted to implement the fetch on backend/java-service, and allowing the frontend to send the offset and page-number.",
"username": "Suren_Konathala"
},
{
"code": "",
"text": "Here is a post to start with: Paging with the Bucket Pattern - Part 1.I also, suggest do a general search (i.e., Google, etc.) with the string “mongodb java pagination” where you get few articles / posts discussing some scenaios, opinions and solutions. It is likely, you will find a suitable answer for your specific need. Hope this is useful.",
"username": "Prasad_Saya"
},
{
"code": "public String fetchData(int offset, int page) {\n final String uriString = \"mongodb://$[username]:$[password]@$[hostlist]/$[database]?authSource=$[authSource]\";\n MongoClient mongoClient = MongoClients.create(\"mongodb://localhost:27017\");\n MongoDatabase database = mongoClient.getDatabase(\"airbnb\");\n MongoCollection<Document> sampleDataCollection = database.getCollection(\"sample\");\n\n List<Document> sampleDataList = sampleDataCollection.find()\n .skip( page > 0 ? ( ( page - 1 ) * offset ) : 0 )\n .limit(offset)\n .into(new ArrayList<>());\n\n System.out.println(\"\\n\\nTotal rows = \" + sampleDataList.size());\n\n return gsonObj.toJson(sampleDataList);\n}\n@Path(\"/airbnb-sample\")\n@GET\n@Produces(MediaType.APPLICATION_JSON)\npublic String getAirbnbSampleData(@QueryParam(\"offset\") int offset, @QueryParam(\"page\") int page) {\n return mongodbService.fetchData(offset, page);\n}\nhttp://localhost:9191/mongodb/airbnb-sample-search?offset=50&page=2\n",
"text": "Here’s an example that worked for me:Caller…REST call…I will add the full example on Github soon!",
"username": "Suren_Konathala"
}
] | How to implement pagination in Java (not using Spring framework)? | 2020-06-30T17:46:19.269Z | How to implement pagination in Java (not using Spring framework)? | 8,445 |
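The skip/limit approach in the last post works, but the cost of skip grows with the page number because the server still walks over all the skipped documents. A seek-style (range-based) alternative is to remember the last _id of the previous page and query past it. A minimal sketch, assuming ordering pages by _id is acceptable (the class and method names are illustrative only):

```java
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.Sorts;
import org.bson.Document;
import org.bson.types.ObjectId;

import java.util.ArrayList;
import java.util.List;

public class SeekPagination {

    // Returns one page ordered by _id. Pass the last _id of the previous page
    // (or null for the first page) instead of a numeric offset, so the server
    // never has to skip over earlier documents.
    public static List<Document> fetchPage(MongoCollection<Document> collection,
                                           ObjectId lastSeenId,
                                           int pageSize) {
        return collection
                .find(lastSeenId == null ? new Document() : Filters.gt("_id", lastSeenId))
                .sort(Sorts.ascending("_id"))
                .limit(pageSize)
                .into(new ArrayList<>());
    }
}
```

The caller keeps the _id of the last document it rendered and passes it back for the next page; the default index on _id serves the query, so deep pages stay as cheap as the first one.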
|
null | [
"java"
] | [
{
"code": "",
"text": "In the mongodb-driver-legacy (4…0.4) I could find the class MongoClientOptions to set for example:\nconnectionsPerHostIt looks like in the mongodb-driver-sync (4.0.4), this class is replaced by MongoClientSettings, but it’s missing the connectionsPerHost.\nCould someone indicate me where could I configure this in the new lib?",
"username": "Etienne_Le_Nigen"
},
{
"code": "",
"text": "MongoClientSettings is a bit different from MongoClientOptions in that there are nested settings classes within it. For connection pool management, for example, there is com.mongodb.connection.ConnectionPoolSettings, which has maxSize, minSize, etc properties. Within the context of MongoClientSettings, you can figure this via the method com.mongodb.MongoClientSettings.Builder#applyToConnectionPoolSettings",
"username": "Jeffrey_Yemin"
}
] | Java legacy driver MongoClientOptions gone | 2020-07-08T20:50:03.594Z | Java legacy driver MongoClientOptions gone | 4,805 |
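To make the reply above concrete, here is a minimal sketch of setting the pool size (the 4.x replacement for the legacy connectionsPerHost option) through MongoClientSettings in the sync driver; the connection string and pool numbers are placeholders only:

```java
import com.mongodb.ConnectionString;
import com.mongodb.MongoClientSettings;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import org.bson.Document;

import java.util.concurrent.TimeUnit;

public class PoolSettingsExample {
    public static void main(String[] args) {
        MongoClientSettings settings = MongoClientSettings.builder()
                .applyConnectionString(new ConnectionString("mongodb://localhost:27017"))
                // Nested connection pool settings replace MongoClientOptions.connectionsPerHost.
                .applyToConnectionPoolSettings(builder -> builder
                        .maxSize(100)                      // roughly the old connectionsPerHost
                        .minSize(10)
                        .maxWaitTime(2, TimeUnit.MINUTES))
                .build();

        try (MongoClient client = MongoClients.create(settings)) {
            Document ping = client.getDatabase("admin").runCommand(new Document("ping", 1));
            System.out.println(ping.toJson());
        }
    }
}
```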
null | [
"production",
"golang"
] | [
{
"code": "golang.org/x/text",
"text": "The MongoDB Go Driver Team is pleased to announce the release of 1.3.5 of the MongoDB Go Driver.This release contains several bugfixes and a change to upgrade the driver’s golang.org/x/text dependency to v0.3.3 in response to CVE - CVE-2020-14040. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.3.5 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.3.5 Released | 2020-07-08T14:09:46.304Z | MongoDB Go Driver 1.3.5 Released | 1,700 |
null | [
"golang"
] | [
{
"code": "",
"text": "I am trying to connect aws documentdb from local environment following the docs Connecting to an Amazon DocumentDB Cluster from Outside an Amazon VPC - Amazon DocumentDB\ni am able to connect from mongo shell but not from GO applicationTunneling command\nssh -L 27017::27017 @ -NMongo Shell(this is working)\nmongo --sslAllowInvalidHostnames --ssl --host localhost:27017 --sslCAFile rds-combined-ca-bundle.pem --username --password Sample Go codes(This is not working)\nconnectionURI := mongodb://:@localhost:27017/?sslCAFile=rds-combined-ca-bundle.pem&replicaSet=rs0&readPreference=secondaryPreferred&ssl=true&sslAllowInvalidHostnames=true&retryWrites=false&sslInsecure=true&sslVerifyCertificate=falsetlsConfig, err := getCustomTLSConfig(caFilePath)\nif err != nil {\nfmt.Printf(“Failed getting TLS configuration: %v”, err)\n}// Connect to MongoDB\nclient, err := mongo.NewClient(options.Client().ApplyURI(connectionURI).SetTLSConfig(tlsConfig))\n//client, err := mongo.NewClient(options.Client().ApplyURI(connectionURI))\nif err != nil {\nfmt.Println(\"client error: \", err)\n}ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)\ndefer cancel()err = client.Connect(ctx)\nif err != nil {\nfmt.Println(\"connect error: \", err)\n}ctx, cancel = context.WithTimeout(context.Background(), 2*time.Second)\ndefer cancel()err = client.Ping(ctx, nil)if err != nil {\nfmt.Println(\"ping error: \", err)\n}fmt.Println(“Connected to MongoDB!”)collection := client.Database(“mytestdb”).Collection(“mytestcollection”)res, err := collection.InsertOne(context.TODO(), bson.M{“name”: “pi”, “value”: 3.14159})\nif err != nil {\nfmt.Printf(“Failed to insert document: %v”, err)\n}getting following errors\ni. ping error: context deadline exceeded\nii. Failed to insert document: server selection error: server selection timeout, current topology: { Type: ReplicaSetNoPrimary, Servers: [{ Addr: test-1.test.us-west-2.docdb.amazonawscom:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(test-docdb-1.test.us-west-2.docdb.amazonawscom:27017[-3]) incomplete read of message header: EOF }, { Addr: test-docdb-3.test.us-west-2.docdb.amazonawscom:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(test-docdb-3.test.us-west-2.docdb.amazonawscom:27017[-4]) incomplete read of message header: EOF }, { Addr: test-docdb-2.test.us-west-2.docdb.amazonawscom:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : connection(test-docdb-2.test.us-west-2.docdb.amazonawscom:27017[-5]) incomplete read of message header: EOF }, ]I would greatly appreciate for any help/suggestion.(I am new to Go language)\nThanks",
"username": "Manoj_Maharjan"
},
{
"code": "",
"text": "Hi @Manoj_Maharjan, and welcome to the forum!Please note that AWS DocumentDB API is an emulation of MongoDB which differs in features, compatibility, and implementation from an actual MongoDB deployment. Their suggestion of API version support (eg 3.6) is referring to the wire protocol used rather than the full MongoDB feature set for that version. Official MongoDB drivers (i.e. MongoDB Go driver) are only tested against actual MongoDB deployments.For further questions on AWS DocumentDB connections I’d suggest to contact AWS.Depending on your requirements, you may find it useful to know that MongoDB Atlas clusters can be deployed in AWS, GCP, and Azure. MongoDB Atlas also supports network peering connections for AWS, GCP and Azure-backed clusters. See also MongoDB Atlas: Set up a Network Peering Connection.\nFor more information, see Connect to an Atlas clusterRegards,\nWan.",
"username": "wan"
},
{
"code": "localhostc.addr = \"localhost:27017\"(*topology.connection).connect(...)mongo",
"text": "I am having the same issue in a similar situation (using AWS DocumentDB). The issue seems to be entirely within the driver before any commands are sent to the server.I tried to trace it back but wasn’t able to follow the driver code very well. What seems to be happening is that the driver appears to recurse through the connection a couple of times to apply changes, and it ends up replacing the “localhost” address with the TLS server name (the DocDB instance) at some point. Since the address of the instance is not Internet-connected, the connection fails and the entire process aborts. The connection must be made through localhost via the SSH tunnel.When I add c.addr = \"localhost:27017\" to the top of (*topology.connection).connect(...), which forces the update to the address to be undone, I am able to establish a connection. I am not certain whether the behavior described earlier is a bug or not, but for me, it is. And, official tools like MongoDB Compass and the mongo shell do not have this issue.",
"username": "Marshall_Meng"
},
{
"code": "connectionURI := \"mongodb://<user_name>:<password>@localhost:27017/?ssl=true&sslCAFile=<path to rds-combined-ca-bundle.pem>&connect=direct&sslInsecure=true&replicaSet=rs0&readPreference=secondaryPreferred",
"text": "Thank you @wan and @Marshall_Meng for your responses. It seems AWS doesn’t allow auto discoverr to all the endpoints. Basically, it allows only one cluster endpoint(primary) connection while tunneling from local environment and i had to set following 2 parameters to connect directly to the primary endpoint. i am able to connect successfully now.\n“&connect=direct&sslInsecure=true”Complete connection uri:\nconnectionURI := \"mongodb://<user_name>:<password>@localhost:27017/?ssl=true&sslCAFile=<path to rds-combined-ca-bundle.pem>&connect=direct&sslInsecure=true&replicaSet=rs0&readPreference=secondaryPreferred",
"username": "Manoj_Maharjan"
},
{
"code": "",
"text": "Thanks for the update Manoj. I can confirm this worked for me as well.",
"username": "Marshall_Meng"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Tunneling (Port forwarding) with MongoDB Go driver is not working | 2020-06-28T21:31:46.669Z | Tunneling (Port forwarding) with MongoDB Go driver is not working | 10,930 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi,\nI am currently working in WalmartLabs Software Division.We are trying to read data from MongoDB Collection using Kafka Source Connector https://docs.mongodb.com/kafka-connector/master/kafka-source/. We are noticing that one task is able to read only 500 Documents Per Second. We have no custom filters and not doing any processing on the document read form the MongoDB Change Stream. We also notice that there is no Spike in CPU or Memory on the VM where the Kafka Connector in running. So below are some questions:Request you to help us in this regard which will unblock our development and able to deliver quality sofwarePunith",
"username": "Punith_Kumar"
},
{
"code": "",
"text": "How many tasks are spawned?\nHow many cores are in CPU?",
"username": "Hamid_Jawaid"
}
] | Scaling Kafka Source Connector | 2020-06-03T16:37:58.509Z | Scaling Kafka Source Connector | 2,582 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.0.5 MongoDB Java & JVM Drivers release is a patch to the 4.0.4 release and a recommended upgrade.The documentation hub includes extensive documentation of the 4.0 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.0/apidocs/ ",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.0.5 Released | 2020-07-08T15:52:27.018Z | MongoDB Java Driver 4.0.5 Released | 3,874 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 3.12.6 MongoDB Java Driver release is a patch to the 3.12.5 release and a recommended upgrade.The documentation hub includes extensive documentation of the 3.12 driver, includingand much more.You can find a full list of bug fixes here .http://mongodb.github.io/mongo-java-driver/3.12/javadoc/ ",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 3.12.6 Released | 2020-07-08T15:51:16.615Z | MongoDB Java Driver 3.12.6 Released | 2,475 |
null | [
"golang"
] | [
{
"code": "ctx, _ := context.WithTimeout(context.Background(), 10*time.Second)\nvar results bson.M\nerr := fsFiles.FindOne(ctx, bson.M{}).Decode(&results)\n",
"text": "I can see that in mongo-go-driver, we usually put a withTimeout context in the database query. I want to know if that’s necessary to each and every database query.As shown in the above code, why the “cancel” value being disregarded. Isn’t calling a defer cancel() be better?If I’m using the “gin” framework, can I just put the gin context into the query context and not specifying a “withTimeout context”?",
"username": "Jason_W"
},
{
"code": "context.Background()WithTimeoutcontext.WithCancel",
"text": "Hi @Jason_W,Every driver call that may block requires a context to be passed in. If you’re willing to let the call block for as long as necessary, you can use context.Background() for this. If you want to have a specific timeout for the entire call, you can use WithTimeout as the examples do. You can also use context.WithCancel if you want to provide a cancellable context with no timeout.Thanks for pointing out the disregarded context.CancelFunc return value. I’ve opened Add deferred context cancellations to README examples by divjotarora · Pull Request #441 · mongodb/mongo-go-driver · GitHub to fix this in our README examples.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Golang driver context | 2020-07-07T18:40:09.351Z | Golang driver context | 4,523 |
null | [
"backup"
] | [
{
"code": "",
"text": "Hello,I am currently looking out for solutions that provide “point in time” restore for MongoDB community edition databases (database version 4.2).What do I mean by “point in time” restore?\nAt any point of time in the past ‘X’ days, we should be in a position to restore the latest backup of the database and roll forward the Oplogs so that the data loss is minimized till the last Oplog backup.After my extensive search, I thought either OpsManger or CloudManager would resolve this issue.After POC with OpsManger, I got the below screen shot saying that “Continuous Backups” are supported only for MongoDB Enterprise builds. So, I’m kind of stuck here… I also assume the issue is same with CloudManager as both CloudManager and OpsManger are using the same software.If someone has gone through this scenario, could you please provide your thoughts/suggestions?Thanks!Best regards,\nManu",
"username": "Manu"
},
{
"code": "",
"text": "Beyond evaluation/development you require a license(Enterprised Advanced) for OpsManager anyway.My solution was a hidden(non-voting) replica backed by ZFS. Using ZFS autosnapshots.The recovery wasn’t turnkey, but was able to do PITR.",
"username": "chris"
},
{
"code": "",
"text": "Thanks @chris for the inputs.",
"username": "Manu"
}
] | Continuous backups for MongoDB Community edition (4.2+) | 2020-06-30T17:47:02.417Z | Continuous backups for MongoDB Community edition (4.2+) | 2,862 |