Columns: image_url (string), tags (sequence), discussion (list), title (string), created_at (string), fancy_title (string), views (int64)
null
[ "connecting" ]
[ { "code": "", "text": "When building a new 4.2.8 (3 nodes w/ 1 Arbiter), I’m seeing the following in the primary & secondary log file:2020-08-13T12:20:02.035+0000 I NETWORK [listener] connection accepted from 10.93.57.24:44784 #175 (17 connections now open)\n2020-08-13T12:20:02.036+0000 E - [conn175] Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120\n2020-08-13T12:20:02.036+0000 I NETWORK [conn175] DBException handling request, closing client connection: Location34348: cannot translate opcode 2010\n2020-08-13T12:20:02.036+0000 I NETWORK [conn175] end connection 10.93.57.24:44784 (16 connections now open)Any ideas?", "username": "Erwin_Grunwald" }, { "code": "Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120/**\n * Copyright (C) 2018-present MongoDB, Inc.\n *\n * This program is free software: you can redistribute it and/or modify\n * it under the terms of the Server Side Public License, version 1,\n * as published by MongoDB, Inc.\n *\n * This program is distributed in the hope that it will be useful,\n * but WITHOUT ANY WARRANTY; without even the implied warranty of\n * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the\n * Server Side Public License for more details.\n *\n * You should have received a copy of the Server Side Public License\n * along with this program. If not, see\n * <http://www.mongodb.com/licensing/server-side-public-license>.\n *\n * As a special exception, the copyright holders give permission to link the\n * code of portions of this program with the OpenSSL library under certain\n * conditions as described in each individual source file and distribute\n * linked combinations including the program with the OpenSSL library. You\n", "text": "Per the error message in the mongod.log: Assertion: Location34348: cannot translate opcode 2010 src/mongo/rpc/message.h 120I found this.Check out lines: 57-60", "username": "Erwin_Grunwald" }, { "code": "[conn175]10.93.57.24", "text": "Hi @Erwin_Grunwald, welcome to the community.I think what caused the error message is a client that somehow sending a message that the server cannot understand. It appears that the client in question is identified as [conn175] from 10.93.57.24. Are you seeing any issue in the server’s operation that’s connected to these messages? Could you check how that client is connecting to the server?(3 nodes w/ 1 Arbiter)On another note, I would recommend against using an arbiter in this setup. A replica set needs an odd number of nodes to facilitate voting, and the arbiter is not adding value here since you already have 3 data bearing members. See Replica Set Deployment Architectures for a more detailed explanation.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks Kevin. I will take a look.BTW, when I said (3 nodes w/ 1 Arbiter)… I should have said (3 nodes w/ one being an Arbiter). I do have an old number of nodes in the cluster.", "username": "Erwin_Grunwald" } ]
DBException handling request, closing client connection
2020-08-13T12:37:47.313Z
DBException handling request, closing client connection
3,948
null
[ "python", "motor-driver" ]
[ { "code": " Detailed Percentile spectrum:\n\n #[Mean = 24283.712, StdDeviation = 10161.542]\n #[Max = 48431.104, Total count = 2756]\n #[Buckets = 27, SubBuckets = 2048]\n ----------------------------------------------------------\n 3314 requests in 1.00m, 1.44MB read\n Socket errors: connect 0, read 0, write 0, timeout 4\n Non-2xx or 3xx responses: 3314\n Requests/sec: 55.22\n Transfer/sec: 24.64KB\n\nBenchmark:\n ++++++++++++++++++++\n 200Req/s Duration:300s open connections:20\n Running 5m test @ http://192.168.1.2:8080/\n 8 threads and 20 connections\n Thread calibration: mean lat.: 4509.201ms, rate sampling interval: 14213ms\n Thread calibration: mean lat.: 4358.747ms, rate sampling interval: 13746ms\n Thread calibration: mean lat.: 4194.528ms, rate sampling interval: 13590ms\n Thread calibration: mean lat.: 4319.308ms, rate sampling interval: 13221ms\n Thread calibration: mean lat.: 4189.944ms, rate sampling interval: 13180ms\n Thread calibration: mean lat.: 4145.555ms, rate sampling interval: 12836ms\n Thread calibration: mean lat.: 4424.448ms, rate sampling interval: 13443ms\n Thread calibration: mean lat.: 4403.278ms, rate sampling interval: 14680ms\n Thread Stats Avg Stdev Max +/- Stdev\n Latency 1.86m 1.01m 4.16m 58.56%\n Req/Sec 6.53 0.95 8.00 100.00%\n Latency Distribution (HdrHistogram - Recorded Latency)\n 50.000% 1.84m \n 75.000% 2.72m \n 90.000% 3.23m \n 99.000% 3.82m \n 99.900% 4.08m \n 99.990% 4.15m \n 99.999% 4.16m \n 100.000% 4.16m \n \n Detailed Percentile spectrum:\n\n #[Mean = 111375.774, StdDeviation = 60805.210]\n #[Max = 249692.160, Total count = 16171]\n #[Buckets = 27, SubBuckets = 2048]\n ----------------------------------------------------------\n 16654 requests in 5.00m, 7.26MB read\n Socket errors: connect 0, read 0, write 0, timeout 1\n Non-2xx or 3xx responses: 16654\n Requests/sec: 55.51\n Transfer/sec: 24.77KB\n Detailed Percentile spectrum:\n\n #[Mean = 25098.605, StdDeviation = 10575.561]\n #[Max = 49709.056, Total count = 2462]\n #[Buckets = 27, SubBuckets = 2048]\n ----------------------------------------------------------\n 2998 requests in 1.00m, 1.31MB read\n Socket errors: connect 0, read 0, write 0, timeout 16\n Non-2xx or 3xx responses: 2998\n Requests/sec: 49.95\n Transfer/sec: 22.29KB\n\nBenchmark:\n ++++++++++++++++++++\n 200Req/s Duration:300s open connections:20\n Running 5m test @ http://192.168.1.2:8080/\n 8 threads and 20 connections\n Thread calibration: mean lat.: 3703.784ms, rate sampling interval: 12713ms\n Thread calibration: mean lat.: 3748.122ms, rate sampling interval: 12943ms\n Thread calibration: mean lat.: 3697.915ms, rate sampling interval: 12689ms\n Thread calibration: mean lat.: 3774.441ms, rate sampling interval: 12689ms\n Thread calibration: mean lat.: 3562.794ms, rate sampling interval: 11821ms\n Thread calibration: mean lat.: 3626.784ms, rate sampling interval: 11976ms\n Thread calibration: mean lat.: 3646.199ms, rate sampling interval: 12738ms\n Thread calibration: mean lat.: 4295.842ms, rate sampling interval: 13770ms\n Thread Stats Avg Stdev Max +/- Stdev\n Latency 1.87m 1.01m 4.16m 58.40%\n Req/Sec 6.34 0.96 8.00 100.00%\n Latency Distribution (HdrHistogram - Recorded Latency)\n 50.000% 1.85m \n 75.000% 2.74m \n 90.000% 3.24m \n 99.000% 3.80m \n 99.900% 4.05m \n 99.990% 4.14m \n 99.999% 4.16m \n 100.000% 4.16m \n \n Detailed Percentile spectrum:\n\n #[Mean = 112274.070, StdDeviation = 60410.941]\n #[Max = 249561.088, Total count = 15703]\n #[Buckets = 27, SubBuckets = 2048]\n 
----------------------------------------------------------\n 16205 requests in 5.00m, 7.06MB read\n Socket errors: connect 0, read 0, write 0, timeout 3\n Non-2xx or 3xx responses: 16205\n Requests/sec: 54.01\n Transfer/sec: 24.10KB\n", "text": "I’ve just ran a benchmark tool (GitHub - hasura/graphql-bench: A super simple tool to benchmark GraphQL queries) in a very simple graphql query.I configured two api’s doing the same query, but one using motor driver and the other one using PyMongo.I got similar results for both.moto-driver resultscandidate: simpleQuery on python-async-mongodb at http://192.168.1.2:8080/\nWarmup:\n++++++++++++++++++++\n200Req/s Duration:60s open connections:20\nRunning 1m test @ http://192.168.1.2:8080/\n8 threads and 20 connections\nThread calibration: mean lat.: 3396.996ms, rate sampling interval: 13017ms\nThread calibration: mean lat.: 3308.702ms, rate sampling interval: 11968ms\nThread calibration: mean lat.: 3554.039ms, rate sampling interval: 12828ms\nThread calibration: mean lat.: 3406.929ms, rate sampling interval: 12484ms\nThread calibration: mean lat.: 3445.203ms, rate sampling interval: 12648ms\nThread calibration: mean lat.: 3426.942ms, rate sampling interval: 12812ms\nThread calibration: mean lat.: 3652.908ms, rate sampling interval: 13115ms\nThread calibration: mean lat.: 3430.667ms, rate sampling interval: 11984ms\nThread Stats Avg Stdev Max +/- Stdev\nLatency 24.28s 10.16s 48.43s 58.56%\nReq/Sec 6.78 0.96 8.00 100.00%\nLatency Distribution (HdrHistogram - Recorded Latency)\n50.000% 24.28s\n75.000% 32.93s\n90.000% 37.98s\n99.000% 44.79s\n99.900% 47.58s\n99.990% 48.46s\n99.999% 48.46s\n100.000% 48.46sPyMongo resultscandidate: simpleQuery on python-sync-mongodb at http://192.168.1.2:8080/\nWarmup:\n++++++++++++++++++++\n200Req/s Duration:60s open connections:20\nRunning 1m test @ http://192.168.1.2:8080/\n8 threads and 20 connections\nThread calibration: mean lat.: 3800.246ms, rate sampling interval: 13156ms\nThread calibration: mean lat.: 3823.071ms, rate sampling interval: 12681ms\nThread calibration: mean lat.: 3858.973ms, rate sampling interval: 13156ms\nThread calibration: mean lat.: 3669.897ms, rate sampling interval: 12419ms\nThread calibration: mean lat.: 3648.545ms, rate sampling interval: 12181ms\nThread calibration: mean lat.: 3685.089ms, rate sampling interval: 12173ms\nThread calibration: mean lat.: 3764.000ms, rate sampling interval: 12632ms\nThread calibration: mean lat.: 3752.922ms, rate sampling interval: 12640ms\nThread Stats Avg Stdev Max +/- Stdev\nLatency 25.10s 10.58s 49.71s 59.71%\nReq/Sec 5.93 0.81 8.00 92.59%\nLatency Distribution (HdrHistogram - Recorded Latency)\n50.000% 23.69s\n75.000% 34.08s\n90.000% 39.98s\n99.000% 46.60s\n99.900% 49.55s\n99.990% 49.74s\n99.999% 49.74s\n100.000% 49.74sShouldn’t I get a much better results using motor-drive ?There is also a similar question on SO", "username": "Kleyson_Rios" }, { "code": "Requests/sec: 55.51Requests/sec: 54.01", "text": "Summarizing, motor got Requests/sec: 55.51 and Pymongo got Requests/sec: 54.01.", "username": "Kleyson_Rios" }, { "code": "", "text": "Hi Kleyson! Thanks for your question.I think there’s a couple of things to talk about here:The first is that async code isn’t necessarily faster than non-async code. 
What it is often better for is handling more concurrent connections, because it can switch context when waiting for IO to complete, with less overhead per connection than spinning up a thread for each connection.The other thing is that although Motor provides an asyncio API, it’s actually implemented as a wrapper around PyMongo, with all blocking operations conducted in a worker thread, so in practice you’d expect to get similar results with Motor and PyMongo.", "username": "Mark_Smith" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Shouldn't motor deliver a higher performance than pymongo?
2020-08-20T01:12:01.406Z
Shouldn’t motor deliver a higher performance than pymongo?
8,725
null
[]
[ { "code": "", "text": "When I change config server’s some setting through one mongos, how other mongos instantly know that change ? I suppose config server can register a hook that will be called to notify mongos when changes are made. In this way mongos needn’t restart.Is this right ?", "username": "Lewis_Chan" }, { "code": "mongosmongos", "text": "Hi @Lewis_Chan,In a sense, I believe this is already been done from the direction of the mongos pulling the change instead of the config servers pushing the change.For metadata change (chunk split, chunk move, etc.), the mongos will load the metadata lazily. For some changes, such as config server member change, the monitoring thread on the mongos will ping periodically and notices the change. Are these the cases you have in mind?However, some major change e.g. movePrimary would require all mongos routers to be restarted in pre-MongoDB 4.4.Best regards,\nKevin", "username": "kevinadi" } ]
Can config server notify changes to all mongos?
2020-08-17T08:18:52.512Z
Can config server notify changes to all mongos?
1,938
null
[ "security" ]
[ { "code": "", "text": "Good afternoon,I have an piece of software that is written in nodejs and is bundled with mongodb.The database is currently wide open for anyone with an account on the server to read any and all data.My question revolves around the securing of the instance of mongedb in this application.this is a black box - one install with only webserver parameters available for configurationconfigurations are minimal and documentation is sparse, and does not talk about securing the applicationI am in the middle of the mongodb university security course, I understand there are many ways to secure mongodbthe question is how can I secure the “database instance” in this black box without breaking the application (not a generic mongdb instance)Thanks for any insightsPaul", "username": "Paul_Jacobs" }, { "code": "", "text": "i have exactly the same issue. my app is using :i know it is straight forward to enforce authentication from the terminal. but the docs does not mention how to secure the “mongod” command itself. i tried updating the default config file with\nsecurity:\nauthorization: enabled\nand it does not work either !!!any help here please!", "username": "code_gist" }, { "code": "", "text": "Hi @Paul_Jacobs and @code_gist,If I understand correctly, the goal is to secure the instance so that it cannot be executed by unauthorized users having access to the server. Is this accurate?Although there are multiple ways to secure a MongoDB instance from external parties using auth and/or TLS, there is no method I’m aware of (from MongoDB’s side) that can prevent anyone having access to the server itself to connect to the database if the database is not secured with authentication. I believe at this point, it becomes a server security issue instead of a database security issue.Perhaps something like SELinux could be used in this use case?Best regards,\nKevin", "username": "kevinadi" } ]
Securing MongoDB
2020-07-29T18:21:59.757Z
Securing MongoDB
1,572
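A minimal sketch of locking down an instance like the one discussed above, assuming shell access to the host; the user name, password and config path are placeholders, not values from the thread:

// 1. While the instance still allows unauthenticated local access, create an admin user
use admin
db.createUser({
  user: "siteAdmin",                       // placeholder
  pwd: "choose-a-strong-password",         // placeholder
  roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
})

// 2. Enable authorization in the mongod config file (e.g. /etc/mongod.conf) and restart mongod:
//      security:
//        authorization: enabled
//    The two-space indentation under "security:" matters; a flat, unindented entry is one
//    possible reason the setting appeared to have no effect in the post above.

// 3. Reconnect with credentials:
//    mongo -u siteAdmin -p --authenticationDatabase admin

For the bundled black-box application, its connection string would also need to carry these credentials, which may or may not be configurable.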
null
[ "php" ]
[ { "code": "", "text": "Hellofor a few days I have tried to read functions in javascript generated on the server (mongodb), but I have not had success, in previous versions, I used execute on the php side to consume the functions, but this command for version 4.2 of mongoDB is obsolete , could someone support me with an example.I thank you and excuse me for my English", "username": "Joel_Gonzalez" }, { "code": "", "text": "Hi @Joel_Gonzalez, welcome to the community.Could you elaborate with some examples what you mean by “execute on the php side”? Would be helpful if you include the command that was working pre-MongoDB 4.2 as well.Best regards,\nKevin", "username": "kevinadi" } ]
Store a javascript function on the server and consume it in the php application
2020-08-18T21:00:15.483Z
Store a javascript function on the server and consume it in the php application
1,639
https://www.mongodb.com/…613a45ac1a53.png
[ "stitch" ]
[ { "code": "", "text": "After making a new deployment (import via CLI) to Stitch / Realm, we have been receiving the following error message:\n(sorry for the bad crop: translates to -> error resolving cluster hostname)We have not been able to revert back or make the App work anymore, even with a --strategy replace on a working version.Has anyone experienced a similar issue / knows how to resolve it?Thanks!\nMartin", "username": "Martin_Kayser" }, { "code": "", "text": "The full error message: FunctionError: this service was recently updated, so it may take several minutes until the cluster can be connected to. Error: error resolving cluster hostname.In the function, we try and get the service as follows: context.services.get(“database”).db(db_name);It does not resolve itself after waiting even a couple of hours.", "username": "Martin_Kayser" }, { "code": "Error: error resolving cluster hostname\n at get (<native code>)\n at apply (<native code>)\n at <anonymous>:16:10\n at _callee$ (function.js:2:3)\n at call (<native code>)\n at tryCatch (<anonymous>:55:37)\n at invoke (<anonymous>:281:22)\n at <anonymous>:107:16\n at call (<native code>)\n at tryCatch (<anonymous>:55:37)\n at invoke (<anonymous>:145:20)\n at <anonymous>:180:11\n at <anonymous>:3:6\n at callInvokeWithMethodAndArg (<unknown>)\n at enqueue (<anonymous>:202:13)\n at <anonymous>:107:16\n at <anonymous>:226:9\n at _callee (function.js:1:11)\n at apply (<native code>)\n at function_wrapper.js:1:11\n at <anonymous>:12:1\n", "text": "Same here from my functions after CLI updateran on Thu Jul 30 2020 12:04:24 GMT+0200 (heure d’été d’Europe centrale)\ntook 301.4469ms\nerror:\nthis service was recently updated, so it may take several minutes until the cluster can be connected to. Error: error resolving cluster hostname\ntrace:\nFunctionError: this service was recently updated, so it may take several minutes until the cluster can be connected to.", "username": "Jeremie_COLOMBO" }, { "code": "", "text": "“StitchServiceError: this service was recently updated, so it may take several minutes until the cluster can be connected to. Error: error resolving cluster hostname”After updating from CLI", "username": "Dhaval_Chaudhary" }, { "code": "", "text": "Hi,\nI resolved this problem by deleting and recreating cluster link via realm web interface.", "username": "Jeremie_COLOMBO" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Error resolving cluster hostname
2020-07-29T23:46:07.743Z
Error resolving cluster hostname
5,605
null
[ "aggregation" ]
[ { "code": "projectsproductsproductsprojectprojects._id{\n from: 'products',\n let: { id: '$_id'},\n pipeline: [{\n $match: { project: '$$id' }\n }],\n as: 'products'\n}\n", "text": "I have two collections projects and products, where products has a field project, which refers to the projects._id field.I am using this aggregation $lookup to find the products that belong to a specific projectthe resulting products field is always empty, and I don’t understand why.(I know I can use the other $lookup variant with foreignField localField, but I think I need the pipeline because I want to add more clauses to the $match)", "username": "devboell" }, { "code": "", "text": "Is your field in the products collection really named project?", "username": "steevej" }, { "code": "", "text": "yes, I am a afraid it is.", "username": "devboell" }, { "code": "\"$<variable>\"$lookuppipeline$match$expr$expr$match$expr$match$lookup$match$expr$match$expr", "text": "I just when back to documentation at https://docs.mongodb.com/manual/reference/operator/aggregation/lookup/ and found:NOTETo reference variables in pipeline stages, use the \"$<variable>\" syntax.The let variables can be accessed by the stages in the pipeline, including additional $lookup stages nested in the pipeline .Special attention to the sentence A $match stage requires the use of an $expr operator to access the variables.", "username": "steevej" }, { "code": "", "text": "Thank you, it works now.", "username": "devboell" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not able to use _id in a lookup pipeline
2020-08-19T15:12:40.480Z
Not able to use _id in a lookup pipeline
9,041
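For reference, the corrected pipeline hinted at above could look like this, using the collection and field names from the question:

db.projects.aggregate([
  {
    $lookup: {
      from: "products",
      let: { id: "$_id" },
      pipeline: [
        // $$id is a pipeline variable, so the comparison must go through $expr
        { $match: { $expr: { $eq: [ "$project", "$$id" ] } } }
        // further clauses can be added to this $match or as extra pipeline stages
      ],
      as: "products"
    }
  }
])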
null
[ "node-js" ]
[ { "code": " const pool = require('../../db/postgres/init').pool.with MongoClient.connect().then((db) => {})", "text": "Postgres node driver has it to where I can grab the db cursor in any where in my node app with: const pool = require('../../db/postgres/init').pool.With the DB cursor that is created from with MongoClient.connect().then((db) => {}), does the mongodb npm library have a similar feature where I can grab the already created/pre-existing db curser from anywhere in my app?", "username": "Darin_Hensley" }, { "code": "poolSizeMongoClient.connect()var db = null // global variable to hold the connection\n\nMongoClient.connect('mongodb://localhost:27017/', function(err, client) {\n if(err) { console.error(err) }\n db = client.db('test') // once connected, assign the connection to the global variable\n})\n\napp.get('/', function(req, res) {\n db.collection('test').find({}).toArray(function(err, docs) {\n if(err) { console.error(err) }\n res.send(JSON.stringify(docs))\n })\n})\n", "text": "Hi @Darin_Hensley, welcome to the community.By “grabbing the cursor”, I believe you’re talking about getting a connection from a connection pool. Is this correct?If yes, then MongoClient was designed to create and manage a connection pool for you. Note that in MongoClient.connect() you can specify a poolSize parameter (default is 5).Thus the optimal pattern of MongoClient usage is to instantiate the MongoClient connection once for the duration of the app’s lifetime, instead of calling MongoClient.connect() for every database operation. To achieve this, you can use a global variable, for example:However, it is a bit different from the Postgres pool example you posted. To emulate it, you may be able to package the MongoClient connection in a separate module and ensure that it acts like a singleton in node, which is an entirely separate discussion Best regards,\nKevin", "username": "kevinadi" } ]
Is there a feature where I can require the existing cursor in the npm mongodb library?
2020-08-19T20:35:55.596Z
Is there a feature where I can require the existing cursor in the npm mongodb library?
1,345
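A rough sketch of the "separate module" idea Kevin mentions at the end, assuming the same mongodb npm package; the file name, URI and database name are placeholders:

// db.js: connect once; Node's module cache makes this behave like a singleton
const { MongoClient } = require("mongodb")

const client = new MongoClient("mongodb://localhost:27017", { useUnifiedTopology: true })
const dbPromise = client.connect().then(() => client.db("test"))

module.exports = dbPromise

// elsewhere in the app, similar in spirit to require('../../db/postgres/init').pool:
// const db = await require("./db")
// const docs = await db.collection("test").find({}).toArray()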
null
[ "data-modeling", "indexes" ]
[ { "code": "", "text": "Hi.I am recording in a single record two events (bitcion input and its expenditure). Each event has a different time. I used to record both times in different fields (utxo_time, spent_time). But indexes didn’t work in two fields cause they first classify in one fiedl and then on the other when I want them to be treated at same level.\nThen I created an array field time: [time1, time2] or [time1]. But when I make a search based on this time field I get oknly a result so if I order by time only the earliest time of the two is used.How can I get two results from [time1, time2], and of course only one from [time1]?Thanks.", "username": "Eduardo_Cobian" }, { "code": "", "text": "Hi @Eduardo_Cobian,To help understand what you are trying to achieve, please provide some more information:Thanks,\nStennie", "username": "Stennie_X" } ]
How to query on indexed array field to get two matching results
2020-08-18T21:00:51.481Z
How to query on indexed array field to get two matching results
1,491
null
[]
[ { "code": "", "text": "Hi all,I was able to connect using Compass and complete all requirements, however I was trying to replicate this in Robo3T since some developer friends use that and I wanted to get cross-familiarity, however there doesn’t seem to be a way to connect. I’ve configured the authentication and using the replica set level of connection, however it responds with “Set’s primary is unreachable” and there doesn’t seem to be a clear way to recreate the ssl connection Compass uses by default. Anyone dealt with this and found a solution?", "username": "alex_aberman" }, { "code": "", "text": "My configuration is as follows:On the Authentication tab the Auth Mechanism set to SCRAM-SHA-1. It also has the database as admin.On the SSL tab the Authentication Method is Self-signed Certificate.Can’t remember what was the problem with mine, but I know that I changed the Address to connect to use one of the shards.", "username": "Fabricio_81244" }, { "code": "", "text": "Hello!You can follow this step by step:https://www.datduh.com/blog/2017/7/26/how-to-connect-to-mongodb-atlas-using-robo-3t-robomongoLet us know if you run into any issue.", "username": "js-ivan-marquez" }, { "code": "", "text": "Hi all,I encountered the same problem- not able to connect to clusters using Robo 3T. I followed all the steps listed out in the dataduh blog but unable to connect: https://www.datduh.com/blog/2017/7/26/how-to-connect-to-mongodb-atlas-using-robo-3t-robomongoI tried both type of connections: Direct, ReplicaSet. But no luck either ways. After trying out all the combinations I finally have the error on Authorization: “Authorization failed on admin database as m001-student”Database: admin\nUsername: m001-student\nPassword: ***************\nAuth Mechanism: SCRAM-SHA-1Appreciate any help on this issue.", "username": "Vig_78283" }, { "code": "", "text": "Hello @Vig_78283,Follow along with this video and let us know if you have any issues.http://recordit.co/LKadnMIFXA", "username": "js-ivan-marquez" }, { "code": "", "text": "I am also having issue with connecting my Sandbox atlas cluters using robo 3t … the video recording posted is not available it seems currently…could you please help?", "username": "Malekar_Srikanth" }, { "code": "", "text": "Yes those links are not workingTry this.May helpStudio 3T's Connection Manager makes it easy to connect to MongoDB whether it's cloud-hosted or on-premise, or through a direct connection or a replica set.", "username": "Ramachandra_Tummala" }, { "code": "", "text": "", "username": "system" } ]
Unable to connect using Robo3T
2019-04-08T15:23:31.553Z
Unable to connect using Robo3T
12,703
null
[ "student-developer-pack" ]
[ { "code": "", "text": "Is the free certification program limited to Github Student Developer pack or anyone who completes the DBA/Developer paths on MongoDB University can claim a voucher for the particular exam?", "username": "Ajinkya_Bapat" }, { "code": "", "text": "Hi @Ajinkya_Bapat,The current offer for free certification is a benefit for participants in the GitHut Student Developer Pack program.Please see MongoDB Student Pack for more information and FAQs.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Free Certification Program
2020-08-18T07:57:11.341Z
Free Certification Program
5,123
null
[]
[ { "code": "", "text": "Hi Team,Can you please elaborate below query.We have patching on Linux servers Q3, during patching server rebooted and after patching completed server came online then MongoDB service not running automatically it was stop.So can you please anybody provide what is causes of after patching was stop MongoDB servicewe have found our server permission denied on .pem file after given permission MongoDB service running and also some servers manually start service", "username": "hari_dba" }, { "code": "", "text": "What flavor/release of Linux?\nWere all MongoDB Production Notes followed initially?\nWhat are the errors in the mongod.log file?", "username": "Eric_Reid" }, { "code": " I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Marking host serversname.:27017 as failed :: caused by :: ShutdownInProgress: connection pool is in shutdown\n I NETWORK [ReplicaSetMonitor-TaskExecutor-0] Cannot reach any nodes for set shard0_2. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n W SHARDING [signalProcessingThread] cant reload ShardRegistry :: caused by :: CallbackCanceled: Callback canceled\n I FTDC [signalProcessingThread] Shutting down full-time diagnostic data capture\n I STORAGE [WTOplogJournalThread] oplog journal thread loop shutting down\n I STORAGE [signalProcessingThread] WiredTigerKVEngine shutting down\n I STORAGE [signalProcessingThread] shutdown: removing fs lock...\n I CONTROL [signalProcessingThread] now exiting\n I CONTROL [signalProcessingThread] shutting down with code:0\n", "text": "What flavor/release of Linux?A) RedhatWere all MongoDB Production Notes followed initially?YesWhat are the errors in the mongod.log file?", "username": "hari_dba" } ]
After Q3 Patching MongoDB Service stop
2020-08-03T15:46:01.577Z
After Q3 Patching MongoDB Service stop
1,384
null
[]
[ { "code": "", "text": "Hi, i want to know that from where we can view/complete the new user tutorial and advance user tutorial.\n(Certified badge on MongoDB Developer Community Forums)\n(Licensed badge on MongoDB Developer Community Forums)Completed our advanced user tutorial\nHere is some description about that, kindly share it with me.Completed our new user tutorial:\nThis badge is granted upon successful completion of the interactive new user tutorial. You’ve taken the initiative to learn the basic tools of discussion, and now you’re certified!Completed our advanced user tutorial", "username": "Nabeel_Raza" }, { "code": "", "text": "Here is the link of badges site\nhttps://www.mongodb.com/community/forums/badges", "username": "Nabeel_Raza" }, { "code": "@leafiebot start tutorial", "text": "You should have a message from leafybot to start the tutorial.you can start it by sending leafiebot a message, from the messages page on your profile.@leafiebot start tutorial", "username": "chris" }, { "code": "@leafiebot display help@leafiebot start tutorial@leafiebot start advanced tutorial", "text": "You should have a message from leafybot to start the tutorial.Hi @Nabeel_Raza,A private message from Leafiebot is sent when you join the site, and will have the subject “Greetings!”.To find out available commands (including how to start the basic or advanced user tutorial) you can reply in your personal message thread with @leafiebot display help.The basic tutorial can be started with:\n@leafiebot start tutorialThe advanced tutorial can be started with:\n@leafiebot start advanced tutorialRegards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB Badge about user tutorial
2020-08-19T13:47:36.800Z
MongoDB Badge about user tutorial
3,845
null
[ "production", "rust" ]
[ { "code": "mongodb", "text": "The MongoDB Rust driver team is pleased to announce version 1.1.0 of the official Rust driver for MongoDB. You can read more about the release on Github, and the release is published on https://crates.io under the package name mongodb . If you run into any issues, please file an issue on JIRA.Thank you, and we hope you enjoy using the driver!", "username": "Samuel_Rossi" }, { "code": "", "text": "", "username": "system" } ]
Announcing Rust driver v1.1.0
2020-08-19T17:31:22.218Z
Announcing Rust driver v1.1.0
2,610
null
[]
[ { "code": "", "text": "As my scnerio, I have configured 3 member replicas in my test environment. Is it possible to deploy the sharding with my same test up.", "username": "Premkumar_Jayaraman" }, { "code": "", "text": "Welcome to the community forum @Premkumar_Jayaraman!Please see the Convert a Replica Set to a Sharded Cluster tutorial in the documentation for your version of MongoDB (the link here is to the latest which is currently MongoDB 4.4).Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "@Premkumar_Jayaraman At a minimum you will need 3 new nodes to make a config replica set. And somewhere to run at least one mongos.", "username": "chris" } ]
Upgrading replica set to sharding
2020-08-19T14:29:01.913Z
Upgrading replica set to sharding
1,297
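Very roughly, the tutorial linked above comes down to steps like the following; replica set names, host names and ports are placeholders:

// 1. Deploy a 3-node config server replica set, each member started with --configsvr
// 2. Start a mongos router pointing at it:
//      mongos --configdb cfgRS/cfg1:27019,cfg2:27019,cfg3:27019
// 3. Restart the existing data-bearing replica set members with --shardsvr
//    (or sharding.clusterRole: shardsvr in the config file)
// 4. From a shell connected to the mongos, add the replica set as the first shard:
sh.addShard("rs0/host1:27018,host2:27018,host3:27018")
// 5. Enable sharding where needed:
sh.enableSharding("mydb")
sh.shardCollection("mydb.mycoll", { myShardKey: 1 })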
null
[ "ops-manager" ]
[ { "code": "", "text": "Team,I am getting the following error when I try to start back-up using the ops manager. Could please help me?\nWe’re sorry, an unexpected error has occurredAlso, please let me know the time taken for a backup of 100 GB of a database in ops manager vs mongodump.", "username": "Jitender_Dudhiyani" }, { "code": "", "text": "Team,\nYour help would be appreciated. Please help me.", "username": "Jitender_Dudhiyani" }, { "code": "", "text": "Anyone please help me with the error message with OPS manager.", "username": "Jitender_Dudhiyani" }, { "code": "mongodump", "text": "Hi @Jitender_Dudhiyani,We’re sorry, an unexpected error has occurredThe “unexpected error has occurred” message in the web interface indicates there has been an error, but you’ll have to review the Ops Manager logs to find more details.please let me know the time taken for a backup of 100 GB of a database in ops manager vs mongodump.Ops Manager does continuous backups, which is a different approach from mongodump. The initial backup time may be similar because all data has to be backed up, but the ongoing impact of Ops Manager will be significantly less. I’m not aware of any reference times for backing up data, as this is highly dependent on your deployment and infrastructure.Please note that Ops Manager is licensed as part of a MongoDB Enterprise Advanced subscription, which includes commercial support. Per the MongoDB Customer Agreement you accept before downloading Ops Manager, you can use the software for evaluation and development purposes but this does not include any support.Regards,\nStennie", "username": "Stennie_X" } ]
Backup using Ops Manager
2020-07-30T12:34:16.859Z
Backup using Ops Manager
3,255
null
[ "backup", "ops-manager" ]
[ { "code": "\"op\" : \"query\",\n\"ns\" : \"backupjobs.restorejobs\",\n\"command\" : {\n\t\"find\" : \"restorejobs\",\n\t\"filter\" : {\n\t\t\"type\" : \"restore\",\n\t\t\"groupId\" : ObjectId(\"5ee282ec46e0fb00f28f6274\")\n\t},\n\t\"sort\" : {\n\t\t\"_id\" : -1\n\t},\n\t\"$db\" : \"backupjobs\"\n},\n\"nreturned\" : 1,", "text": "Hi,\nI’m using Mongodb Ops manager 3.6.12, when restoring a shard from a snapshot everything is well, but when I tried to do a “point in time” restore , the job stayed queued forever.\nProfiling backupjob.restorejob application db, I see that BackupDaemon is doing a find query and a match is found. But after that nothing happens, no errors in the daemon.log whatsoever even at DEBUG level.\nDoes anyone know how why BackDaemon is refusing to service the point in time restore job? TIA", "username": "jff_frm" }, { "code": "", "text": "Additional info:\nWhen doing PIT recovery on this particular shard, I see no queries being made into OplogStore db, but on a separate shard(ConfigRs) I’ve tested where PIT restoration works, queries are being made.", "username": "jff_frm" }, { "code": "", "text": "Upgrading to version 4.0.19 made PIT worked, unfortunately trying to compare the old MMS data and migrated data didnt reveal any obvious reason why it was failing. closing this one", "username": "jff_frm" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Point in time shard restore getting queued forever
2020-08-01T21:42:14.733Z
Point in time shard restore getting queued forever
1,686
null
[ "golang", "schema-validation" ]
[ { "code": "bson.M{\n \"bsonType\": \"object\",\n \"required\": []string{\"endpointID\", \"ip\", \"port\", \"lastHeartbeatDate\"},\n \"properties\": bson.M{\n \"endpointID\": bson.M{\n \"bsonType\": \"double\",\n \"description\": \"the endpoint Hash\",\n },\n \"ip\": bson.M{\n \"bsonType\": \"string\",\n \"description\": \"the endpoint IP address\",\n },\n \"port\": bson.M{\n \"bsonType\": \"int\",\n \"maximum\": 65535,\n \"description\": \"the endpoint Port\",\n },\n \"lastHeartbeatDate\": bson.M{\n \"bsonType\": \"date\",\n \"description\": \"the last time when the heartbeat has been received\",\n },\n },\n}\ntype GameServer struct {\n ID primitive.ObjectID `bson:\"_id,omitempty\"`\n EndpointID int64 `bson:\"endpointID,omitempty\"`\n IP string `bson:\"ip,omitempty\"`\n Port int32 `bson:\"port,omitempty\"`\n LastHeartbeatDate time.Time `bson:\"lastHeartbeatDate,omitempty\"`\n}\n\n[...]\n\ngs := GameServer{\n EndpointID: 123456789,\n IP: \"192.168.2.16\",\n Port: 27015,\n LastHeartbeatDate: time.Now(),\n}\n\nctx, _ := context.WithTimeout(context.Background(), 10*time.Second)\nres, err := gameServersCollection.InsertOne(ctx, gs)\n", "text": "Hello,I am trying to use schema validation on a specific collection of my database. Unfortunately, document insertion fails without any precision about what error has happened during the validation process.Here’s the validatorAnd here’s an attempt to insert a documentNot sure what is the reason of the validation error, I think with the latest versions of the Golang driver, the struct time.Time is compatible with the BSON Date type. I have also indexed the EndpointID field (unique index) and the LastHeartbeatDate field (TTL index) but I don’t think they are related to the validation process anyway.Thank you for pointing me in the right direction.", "username": "_Mickael_B" }, { "code": "", "text": "After some trials and errors, I just realized I was using a “double” BSON type instead of “long” to represents “int64” Golang type.I will keep an eye on MongoDB’s issue tracker :\nhttps://jira.mongodb.org/browse/SERVER-20547", "username": "_Mickael_B" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Work with schema validation in Golang
2020-08-16T21:29:11.430Z
Work with schema validation in Golang
6,958
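For readers following along in the shell, the corrected validator (with bsonType "long" for the int64 endpointID instead of "double") would look roughly like this; the collection name is assumed and the $jsonSchema wrapper is shown explicitly:

db.createCollection("gameServers", {
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: [ "endpointID", "ip", "port", "lastHeartbeatDate" ],
      properties: {
        endpointID:        { bsonType: "long", description: "the endpoint Hash" },   // Go int64 maps to BSON long
        ip:                { bsonType: "string", description: "the endpoint IP address" },
        port:              { bsonType: "int", maximum: 65535, description: "the endpoint Port" },
        lastHeartbeatDate: { bsonType: "date", description: "the last time when the heartbeat has been received" }
      }
    }
  }
})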
null
[ "aggregation" ]
[ { "code": "-- current Output --\n-- Required Output --\n", "text": "Hey, I want to know that if i have multiple collections and we are using lookup on them and there is a possibility that some of the collections are empty and i want to add/replace them with some data./* 1 */\n{\n“c_id” : “000074819”,\n“u_id” : 106.0,\n“user_A” : ,\n“user_B” : ,\n“user_C” : [\n{\n“name” : “XYZ”,\n“currency” : “AED”,\n“c_id” : “000074819”,\n“status” : “C”\n}\n],\n“user_D” : \n}/* 1 */\n{\n“c_id” : “000074819”,\n“u_id” : 106.0,\n“user_A” : [\n{\n“c_id” : “000074819”,\n}],\n“user_B” : [\n{\n“c_id” : “000074819”,\n}],\n“user_C” : [\n{\n“name” : “XYZ”,\n“currency” : “AED”,\n“c_id” : “000074819”,\n“status” : “C”\n}\n],\n“user_D” : [\n{\n“c_id” : “000074819”,\n}]\n}Thanks in advance.", "username": "Nabeel_Raza" }, { "code": "", "text": "These are square brackets i.e. [ ]", "username": "Nabeel_Raza" }, { "code": "db.players.insertMany([\n {\n _id: 'P1',\n name: 'William',\n teamId: 'T1',\n },\n {\n _id: 'P2',\n name: 'Deborah',\n // notice, that there is not T2 team in the 'teams' collection\n teamId: 'T2', \n },\n]);\n\ndb.teams.insertMany([\n {\n _id: 'T1',\n title: 'Angry Beavers',\n },\n]);\ndb.players.aggregate([\n {\n $lookup: {\n from: 'teams',\n as: 'team',\n localField: 'teamId',\n foreignField: '_id',\n },\n },\n {\n $set: {\n team: {\n $cond: {\n if: {\n // if nothing joined from\n // the 'teams' collection (empty array)\n $eq: [{\n $size: '$team',\n }, 0],\n },\n then: {\n // then define arbitrary object\n // as a replacement here\n _id: '$teamId',\n isMissing: true,\n },\n else: {\n // otherwise, pick that single object\n // from array, (if we always join 1 object with $lookup)\n $arrayElemAt: ['$team', 0],\n },\n },\n },\n },\n },\n]).pretty();\n[\n {\n \"_id\" : \"P1\",\n \"name\" : \"William\",\n \"teamId\" : \"T1\",\n \"team\" : {\n \"_id\" : \"T1\",\n \"title\" : \"Angry Beavers\"\n }\n },\n {\n \"_id\" : \"P2\",\n \"name\" : \"Deborah\",\n \"teamId\" : \"T2\",\n \"team\" : {\n \"_id\" : \"T2\",\n \"isMissing\" : true\n }\n }\n]\n", "text": "Hello, @Nabeel_Raza!You can do it with $set (alias for $addFields) pipeline stage and $cond pipeline operator.\nLet me show how it is done by an example.First, we make a sample minimal dataset to work with:Then, we run this aggregation, against the above dataset:That will give this output:Notice, that for T2 team’s object is composed in the aggregation pipeline, and is not joined from the ‘teams’ collection.The same aggregation would produce the same result, if we make joins from non-existent collections.", "username": "slava" }, { "code": "db.players.aggregate([\n {\n $lookup: {\n from: 'teams',\n as: 'team',\n localField: 'teamId',\n foreignField: '_id',\n },\n },\n {\n $set: {\n team: {\n $cond: {\n if: {\n // if nothing joined from\n // the 'teams' collection (empty array)\n $eq: [{\n $size: '$team',\n }, 0],\n },\n then: {\n // then define arbitrary object\n // as a replacement here\n url: '$teamId',\n isMissing: true,\n },\n else: {\n // otherwise, pick that single object\n // from array, (if we always join 1 object here)\n $arrayElemAt: ['$team', 0],\n },\n },\n },\n },\n },\n]).pretty();\n", "text": "Thanks @slava \nThis work for me.", "username": "Nabeel_Raza" }, { "code": "", "text": "Here are more detail about set(aggregation) clause in mongodb: https://docs.mongodb.com/manual/reference/operator/aggregation/set/index.html.", "username": "Nabeel_Raza" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. 
New replies are no longer allowed.", "username": "system" } ]
Replace empty array of unwind
2020-08-19T09:39:24.884Z
Replace empty array of unwind
3,981
https://www.mongodb.com/…4_2_1024x512.png
[ "java" ]
[ { "code": "db.sales.aggregate([\n {\n $project: {\n items: {\n $filter: {\n input: \"$items\",\n as: \"item\",\n cond: { $gte: [ \"$$item.price\", 100 ] }\n }\n }\n }\n }\n])\n{\n _id: 0,\n items: [\n { item_id: 43, quantity: 2, price: 10 },\n { item_id: 2, quantity: 1, price: 240 }\n ]\n}\n{\n _id: 1,\n items: [\n { item_id: 23, quantity: 3, price: 110 },\n { item_id: 103, quantity: 4, price: 5 },\n { item_id: 38, quantity: 1, price: 300 }\n ]\n}\n{\n _id: 2,\n items: [\n { item_id: 4, quantity: 1, price: 23 }\n ]\n}\n{\n \"_id\" : 0,\n \"items\" : [\n { \"item_id\" : 2, \"quantity\" : 1, \"price\" : 240 }\n ]\n}\n{\n \"_id\" : 1,\n \"items\" : [\n { \"item_id\" : 23, \"quantity\" : 3, \"price\" : 110 },\n { \"item_id\" : 38, \"quantity\" : 1, \"price\" : 300 }\n ]\n}\n{ \"_id\" : 2, \"items\" : [ ] }\nBson priceFilter = Filters.gte(\"items.price\", 100);\nmongoCollection.aggregate(\n Aggregates.project(Projections.fields(priceFilter))\n);\n", "text": "I am referring mongodb official page for projection where I came across following example where elements of array in subdocument is filtered:I am trying to implement this in Java but I am not doing it correctly and elements in subdocument array are not filtered.Input Collection:Expected Output Collection:In Java(mongo Driver 3.9.1), this is what I am doing:How do I project with aggregate function for the subdocument arrays where I need to filter out elements from subdocument array based on some condition?", "username": "Hamid_Jawaid" }, { "code": "", "text": "Hello @Hamid_Jawaid,You can build your aggregation in the MongoDB Compass using the Aggregation Pipeline Builder and then Export to Specific Language, where you have the Java option. You can use that generated Java code.", "username": "Prasad_Saya" }, { "code": "", "text": "Nice! Thanks.\nThe example mentioned here, If I have more fields to the Document, Aggregates.project will remove them. Is there a way to leave all other fields untouched, as a document may have several other fields too. Using Projections.include(String… fieldNames) would require every field to be explicitly put there and documents may have different fields too and new fields may get added at later point in time.", "username": "Hamid_Jawaid" }, { "code": "$addFields$project", "text": "Use the $addFields stage instead of $project.", "username": "Prasad_Saya" }, { "code": "", "text": "Thanks Prasad. Ill check this out.", "username": "Hamid_Jawaid" } ]
Java MongoDB Projection
2020-08-19T05:51:07.592Z
Java MongoDB Projection
2,964
null
[]
[ { "code": "", "text": "Can you confirm that these two filtering statements are not the same.{“director”: “Patty Jenkins”}\n&\n{“director” : “patty jenkins”}If we do not know how the directors name was stored, how do we now locate the document?", "username": "Olufemi_Ogunfowora" }, { "code": "", "text": "They are not the same. Most of the time equality matches are case sensitive. You should take the name of fields and string values as given in the problem statement.", "username": "steevej" }, { "code": "Regular expression", "text": "Hi @Olufemi_Ogunfowora,{“director”: “Patty Jenkins”}\n&\n{“director” : “patty jenkins”}They are not the same. If you want to do case insensitive matching then you could use Regular expression.Please refer our documentation : $regexLet me know if you have any questions.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
On the subject of filtering data in a collection
2020-08-18T10:08:24.313Z
On the subject of filtering data in a collection
1,288
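If the stored capitalization is unknown, a case-insensitive regular expression matches either form; a small sketch (the collection name is assumed):

// matches "Patty Jenkins", "patty jenkins", "PATTY JENKINS", ...
db.movies.find({ director: { $regex: "^patty jenkins$", $options: "i" } })

Anchoring with ^ and $ keeps it a whole-value match; note that case-insensitive regex queries generally cannot use an index efficiently.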
null
[ "indexes", "performance" ]
[ { "code": "{\n\"_id\": \"FlightLegPaxRecord::UA::1494::13-AUG-2020::SFO::EWR\",\n\"travelerCounts\": [\n {\n \"counts\": [\n {\n \"key\": \"J\",\n \"value\": \"16\"\n },\n {\n \"key\": \"Y\",\n \"value\": \"153\"\n }\n ],\n \"travelCountType\": \"ActualCapacity\"\n },\n {\n \"counts\": [\n {\n \"key\": \"J\",\n \"value\": \"16\"\n },\n {\n \"key\": \"Y\",\n \"value\": \"153\"\n }\n ],\n \"travelCountType\": \"AdjustedCapacity\"\n },\n{\"travelerCounts.travelCountType\": 1, \"travelerCounts.counts.key\": 1}\nMongoCollection collection = database.getCollection(colName);\n\nMap<String, String> arrayFilters = new HashMap<String, String>();\n\t\t\t\t\tarrayFilters.put(\"travelerCounts.travelCountType\", \"AdjustedCapacity\");\n\t\t\t\t\tarrayFilters.put(\"counts.key\", \"Y\");\nUpdateOptions updateOptions = new UpdateOptions().arrayFilters(arrayFiltersAdd);\n\nUpdateResult result = collection.updateOne(Filters.eq(\"_id\", id), Updates.set(\"travelerCounts.$[travelerCounts].counts.$[counts].value\", \"154\"), updateOptions);\n\n", "text": "I am using arrayFilters to update an element within a document, but its taking more than 200 ms. I am not able to identify the bottleneck. We are expecting a response time of ~30ms.\nDocument size is 125kB and total nbr of documents in MongoDB- 25kSnippet of the document -If I go for a compound index will it help - My understanding - the document is first retrieved by _id and then arrayFilters will be applied to get the correct position in the array. Please confirm if my understanding is correct and would it help adding the below compound index ? what can be done to improve the performance of the update querySnippet of java code -", "username": "samaresh_kirtania" }, { "code": "", "text": "@samaresh_kirtania I’d recommend you put the commands / queries rather than the java code, as not everybody will understand it.\nI did get an idea, but as it doesnt have the full query, I cannot answer for sure.", "username": "Dushyant_Bangal" } ]
MongoDB ArrayFilters taking too much time to update
2020-08-16T20:31:56.245Z
MongoDB ArrayFilters taking too much time to update
2,291
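For anyone who does not read Java, the update in the question is roughly equivalent to this shell command; the collection name is a placeholder and the _id value comes from the sample document:

db.flightLegPaxRecords.updateOne(
  { _id: "FlightLegPaxRecord::UA::1494::13-AUG-2020::SFO::EWR" },
  { $set: { "travelerCounts.$[tc].counts.$[c].value": "154" } },
  {
    arrayFilters: [
      { "tc.travelCountType": "AdjustedCapacity" },
      { "c.key": "Y" }
    ]
  }
)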
null
[ "aggregation" ]
[ { "code": " db.collection.aggregate([\n {\n $match: {\n \"dob.age\": 59\n }\n },\n {\n $count: \"total\"\n },\n {\n $project: {\n \"dob.age\": 1,\n \"total\":1\n }\n }\n ])\n", "text": "Trying to get total count along with some other properties.Doesn’t seem to be working one way or another.Also posted on Stack Overflow.", "username": "jim_rock" }, { "code": "db.collection.aggregate([\n {\n $match: {\n \"dob.age\": 59\n }\n },\n {\n $group: {\n _id: \"$dob.age\",\n total: {\n $sum: 1\n }\n }\n },\n {\n $project: {\n _id: 0,\n age: \"$_id\",\n \"total\": 1\n }\n }\n])\n", "text": "Hi @jim_rock,I see that the answer was provided on Stack Overflow:Please let us know if you needed something else.Best regards,\nPavel", "username": "Pavel_Duchovny" } ]
How to get $count along with other properties?
2020-08-18T21:33:59.909Z
How to get $count along with other properties?
3,564
null
[ "indexes" ]
[ { "code": "db.childData.createIndex({ \"Status\" : 1,\"Children.Name\" : 1,\"Guardians.Name\" : 1},{\"name\":\"childData_1\"}) \n \"errmsg\" : \"cannot index parallel arrays [Children] [Guardians]\",\n \"code\" : 171,\n \"codeName\" : \"CannotIndexParallelArrays\"\n{\n \"_id\" : UUID(\"7cc6db32-240a-4de1-83f9-14d23f40e783\"),\n \"Name\" : \"test 123\",\n \"PriorityContact\" : \"test ABC\",\n \"PriorityContactMobileNumber\" : \"0410000000\",\n \"Children\" : [ \n {\n \"ChildId\" : UUID(\"0cdc56d4-fe4e-4bdc-ae98-a0cb916967b2\"),\n \"Name\" : \"Child 1\"\n }, \n {\n \"ChildId\" : UUID(\"c0abdf68-0b3b-4262-b79b-47bf83a7bc79\"),\n \"Name\" : \"Child 2\"\n }\n ],\n \"Guardians\" : [ \n {\n \"GuardianId\" : UUID(\"008f680b-3c23-415e-9cac-e8ac7467e5c0\"),\n \"Name\" : \"Guardian 1\"\n }, \n {\n \"GuardianId\" : UUID(\"082a49fd-a736-4a5c-9ea0-110de3380711\"),\n \"Name\" : \"Guardian 2\"\n }\n ],\n \"Status\" : \"Active\",\n \"CreatedById\" : UUID(\"7159a72b-ab7e-41c8-a593-a2a6fac96d5d\"),\n \"CreatedOn\" : \"2019-12-16T03:04:40.3641115+00:00\",\n}\n", "text": "Hello,I am trying to create Index on Children.Name and Guardians.Name with below scriptMongo is throwing error like:How can I achieve to create Index in this scenario?I have collection Structure as below:", "username": "Yatin_Patel" }, { "code": "ChildrenGuardians", "text": "Hi @Yatin_Patel,Index on an array field is called as Multikey Index. There is a limitation that you cannot create a compound multikey index on the collection since both the Children and Guardians fields are arrays. See Limitations - Compound Multikey Indexes", "username": "Prasad_Saya" } ]
Cannot index parallel arrays error code 171
2020-08-18T21:02:12.010Z
Cannot index parallel arrays error code 171
11,596
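Because a compound multikey index cannot span both arrays, one common workaround (not specific to this thread) is to create two separate indexes, one per array field:

db.childData.createIndex({ Status: 1, "Children.Name": 1 },  { name: "status_childName" })
db.childData.createIndex({ Status: 1, "Guardians.Name": 1 }, { name: "status_guardianName" })

Queries filtering on Status plus either name field can then use the matching index.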
null
[ "graphql" ]
[ { "code": "", "text": "Hi all!I’ve been working on an iOS app that links to my MongoDB Realm using Apollo and the GraphQL integration. I noticed that I was getting really slow API response times with GraphQL, and figured that it was because I am currently in Australia (quite far from the server), but after some additional playing around with my queries, I realised something else was making the requests slow!Basically, my request is using the _id_in query for an array of roughly 40 ids, and returns a response with some properties, as well as 3 related documents stored in other collections. When I include these 3 relationships in the query, the query takes ~10 seconds. When I don’t include these relationships, the query takes <1 second.It seems like when I include the relationships in the query, the complexity jumps to O(N).All the related document collections are indexed by _id (as I figure the GraphQL integration would use), but is there another index that I should be making to speed up this query, or is this just a limitation of the Realm GraphQL integration? It’s really slowing down the performance of my app pretty much everywhere!Thanks so much for your help!", "username": "Pierre_Rodgers" }, { "code": "", "text": "Hi @Pierre_Rodgers,We constantly improve the graphql performance. However, with relationships each document that we process against the relationship needs to be validated against the Realm rules on the realm service side and not during the time the query is executed against the database.This slowdown the retrieval. Perhaps for a 1 to many relationship I would suggest to perform 2 queries or have a view on the Atlas side and expose the joined view.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks so much Pavel! That makes sense re performance. I’m not quite sure what you mean by the joined view on the Atlas side – do you have any more info?", "username": "Pierre_Rodgers" }, { "code": "", "text": "Hi @Pierre_RodgersWhat I meant is to create a view on the database where the pipeline will have the needed lookup. Then you can expose this view for combined queries in graphql .This way the engine will treat it as a single queryThanks\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thanks for that! I got a view set up and the GraphQL API works perfectly, however this has now broken Realm Sync – I get the following error:recoverable event subscription error encountered: error issuing collMod command for music.fullReview: (InvalidOptions) option not supported on a view: recordPreImagesI gather that Realm Sync relies on the recordPreImages collMod command for syncing. I don’t need the view synced, but I do need it accessible in the GraphQL API – is there any way of setting up my Realm such that this view is ignored for Sync, but still available in GraphQL requests? Or is there another way around this?", "username": "Pierre_Rodgers" }, { "code": "", "text": "Hi @Pierre_Rodgers,Oh, I see how it can conflict if sync is in use.The only workaround I can see is to have a separate Realm application for the view without the sync and query it.Best\nPavel", "username": "Pavel_Duchovny" } ]
Slow query performance with GraphQL relationships
2020-08-16T10:58:54.744Z
Slow query performance with GraphQL relationships
5,231
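A rough illustration of the view approach described above; all collection and field names here are placeholders (the later error message suggests the view in this case was music.fullReview):

db.createView("fullReview", "reviews", [
  {
    $lookup: {
      from: "albums",             // related collection (placeholder)
      localField: "albumId",      // placeholder field names
      foreignField: "_id",
      as: "album"
    }
  }
])

The view can then be exposed to GraphQL as a single queryable source, so the join runs as one database query.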
null
[]
[ { "code": "", "text": "I tried using the method of quit and running the comand outside the mongo shell but getting below error.", "username": "Akhil_Khanna_Khanna" }, { "code": "", "text": "Is mongodb installed?If yes it coud be path issue.Have you updated mongdb/bin to your PATH?\nCheck echo $PATH", "username": "Ramachandra_Tummala" }, { "code": "", "text": "Hi @Akhil_Khanna_Khanna,This Is mongodb installed?If yes it coud be path issue.Have you updated mongdb/bin to your PATH?\nCheck echo $PATH~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "yes, mongodb is installed.getting below error.\nimage961×145 7.72 KB", "username": "Akhil_Khanna_Khanna" }, { "code": "", "text": "you need to do that outside the mongo shell.", "username": "steevej" }, { "code": "", "text": "at first i tried doing outside the mongo shell. i have pasted the screenshot above", "username": "Akhil_Khanna_Khanna" }, { "code": "", "text": "yes, but you must have fixed the path issue because you are inside the mongo shell. when someone offer a solution follow it. with the path fixed you should try the original command rather than going into another direction.", "username": "steevej" }, { "code": "", "text": "Hi @Akhil_Khanna_Khanna,I hope you found @steevej-1495’s response helpful.Please let us know if you are still facing any issues.~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "Hi @Shubham_Ranjan / @steevej-1495Thanks for the help. Issue is resolved now.\nPATH was not set in the environment variables.Thanks\nAkhil Khanna", "username": "Akhil_Khanna_Khanna" }, { "code": "", "text": "@Ramachandra_37567 Thanks for the help, it was the path issue.", "username": "Akhil_Khanna_Khanna" }, { "code": "", "text": "", "username": "system" } ]
Unable to connect compass using the mongo shell
2020-08-15T21:55:09.139Z
Unable to connect compass using the mongo shell
1,604
null
[ "mongoose-odm" ]
[ { "code": "const GroupPost = new Schema({\n groupName: {\n type: String,\n required: true,\n },\n groupPost: [\n {\n id: {\n type: String,\n required: true,\n },\n userName: {\n type: String,\n required: true,\n },\n userPhoto: {\n type: File,\n },\n commentSection : [\n {\n comment : {\n type : String\n },\n reply : [\n {content : String}\n ]\n }\n ] \n },\n ],\n})\n", "text": "How to update reply elementAnybody help me", "username": "Praveen_Gupta" }, { "code": "db.test1.insertMany([\n {\n _id: 'G1',\n name: 'My Group',\n posts: [\n {\n _id: 'P1',\n title: 'Post 1',\n comments: [\n {\n _id: 'C1',\n name: 'Comment 1',\n replies: [\n {\n _id: 'R1',\n content: 'Reply 1',\n },\n {\n _id: 'R2',\n content: 'Reply 2',\n },\n ],\n },\n {\n _id: 'C2',\n name: 'Comment 2',\n replies: [\n {\n _id: 'R3',\n content: 'Reply 3',\n },\n {\n _id: 'R4',\n content: 'Reply 4',\n },\n ],\n },\n ],\n },\n ],\n },\n {\n _id: 'G2',\n name: 'My Group',\n posts: [\n {\n _id: 'P2',\n title: 'Post 2',\n comments: [\n {\n _id: 'C3',\n name: 'Comment 3',\n replies: [\n {\n _id: 'R5',\n content: 'Reply 5',\n },\n {\n _id: 'R6',\n content: 'Reply 6',\n },\n ],\n },\n {\n _id: 'C4',\n name: 'Comment 4',\n replies: [\n {\n _id: 'R7',\n content: 'Reply 7',\n },\n {\n _id: 'R8',\n content: 'Reply 8',\n },\n ],\n },\n ],\n },\n ],\n },\n]);\n{ modified=true }db.test1.updateOne(\n {\n _id: 'G2',\n },\n {\n $set: {\n 'posts.$[post].comments.$[comment].replies.$[reply].modified': true,\n },\n },\n {\n arrayFilters: [\n {\n 'post._id': {\n $eq: 'P2',\n },\n },\n {\n 'comment._id': {\n $eq: 'C4',\n },\n },\n {\n 'reply._id': {\n $eq: 'R7',\n },\n },\n ],\n },\n);\n{\n '_id': 'R7',\n 'content': 'Reply 7',\n 'modified': true\n},\n", "text": "Welcome to the community, @Praveen_Gupta! Let’s start solving this by making an example dataset:Assume, we want to update G2->C4->R7 document, by adding a flag { modified=true } to it.\nTo achieve this, we can use arrayFilters, and the final update operation will look like this:Problem solved!\nNow, the target reply object (R7) looks like this:", "username": "slava" }, { "code": "", "text": "I want push a new reply in G1>P1>C1", "username": "Praveen_Gupta" } ]
Update nested array
2020-08-18T04:28:05.723Z
Update nested array
1,597
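To answer the follow-up question, pushing a new reply into G1 > P1 > C1 uses the same arrayFilters pattern with $push instead of $set, reusing the sample data above (the new reply's values are placeholders):

db.test1.updateOne(
  { _id: "G1" },
  {
    $push: {
      "posts.$[post].comments.$[comment].replies": {
        _id: "R9",              // new reply id (placeholder)
        content: "Reply 9"
      }
    }
  },
  {
    arrayFilters: [
      { "post._id": "P1" },
      { "comment._id": "C1" }
    ]
  }
)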
null
[ "aggregation" ]
[ { "code": "Name Total Count\n------- ------- -------- \nProduct 5 10\nKey Vaue\n-------- --------\nName Product\nTotal 5\nCount 10\n", "text": "Hi ,I have created an aggregation pipeline which will gives output with 3 columns and a row. How do I convert this to rows, like key value pair?I would like to display the above output as all rowsI tried to use the ‘$objectToArray’, tried to convert the output to array and convert them to Key value pair. I am struck in this. Please advise me the approach for thisThanks,\nVijay", "username": "vijayaraghavan_krish" }, { "code": "{\n \"$project\":\n {\n \"one\":\n {\n \"Name\":\"$Name\",\n \"Total\":\"$Total\",\n \"Count\":\"$Count\"\n }\n }\n },\n {\n \"$project\":\n {\n \"Output\": { \"$objectToArray\" :\"$one\" }\n }\n },\n {\n \"$unwind\": \"$Output\"\n }", "text": "Thanks I am able to do it like below.", "username": "vijayaraghavan_krish" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Convert the Aggregation Pipeline to rows
2020-08-18T17:25:49.987Z
Convert the Aggregation Pipeline to rows
3,457
https://www.mongodb.com/…4a435a0d3639.png
[ "php" ]
[ { "code": "\"Sentenca\": [{\n \"Situacao\": \"nc\",\n \"Prob\": 0.96\n }]\n$result = $collection->find( [\"Sentenca\"=>[\"Situacao\"=>['$nin'=>[\"nc\"]]]] );Uncaught MongoDB\\Driver\\Exception\\LogicException: Cursors cannot yield multiple iterators\n", "text": "I saw similar questions for this, but I don’t know how to use PHP’s find() to solve it.I have an Array of objects that always has two field and one index [0].\nLike this:I want to fetch all documents where ‘Situacao’ $nin ‘nc’. I’m new to MongoDB so I don’t know how to use PHP’s find() for this, the only example from the documentation uses simple fieldsEDIT: I managed to create this query, trying to fetch documents where ‘Situacao’ not in “nc”\n$result = $collection->find( [\"Sentenca\"=>[\"Situacao\"=>['$nin'=>[\"nc\"]]]] );But this query returns nothing and if I try to print it I get the error:", "username": "Joao_Victor_Daijo" }, { "code": "$result = $collection->find([\n\n \"Sentenca.0.Situacao\" => [\n '$nin' => [\"nc\"]\n ]\n \n]);\n\nprint_r($result->toArray());\n", "text": "If you want to fetch documents where “Situacao” not in “nc”, you can use this PHP code:", "username": "Samuel_Tallet" }, { "code": "", "text": "Oh, so I use ‘.’ to access array elements, thank you for you help.", "username": "Joao_Victor_Daijo" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
PHP MongoDB\Collection->find(): Fetch documents with condition on an Array element
2020-08-16T20:35:36.122Z
PHP MongoDB\Collection-&gt;find(): Fetch documents with condition on an Array element
4,938
null
[ "database-tools" ]
[ { "code": "", "text": "Hi, when I export csv using mongoexport that _id filed doesn’t export as per collection. The format gets change in csv. I would like to export _id filed which is available in collection. for an example:\nIn my collection I am having “_id” : UUID(“0005baaf-eea9-4b9d-b13e-59e238e804ec”)When I export in csv it is exporting like 0005BAAFEEA94B9DB13E59E238E804EC.mongoexport /db dbname /collection collectionname /type csv /fields _id /out test11.csvI would like to export exactly like UUID(“0005baaf-eea9-4b9d-b13e-59e238e804ec”) in CSV.", "username": "Yatin_Patel" }, { "code": "UUID(...)UUID(...)", "text": "Hi Yatin,I believe the export is correct, since it exported the content of the UUID field. UUID() is a function in the mongo shell that can generate a random UUID for you, and it also signifies that the content of a field is a proper UUID as defined in RFC4122. This is similar to ObjectId() in the shell, or ISODate().In fact, some may argue that exporting the field by enclosing it in UUID(...) is the wrong thing to do, since anything with brackets typically signifies a function call. It is also was not mentioned that a UUID should be enclosed in a function-call-like format as per RFC4122.If you would like to export the field enclosed in UUID(...) then unfortunately you would have to roll out your own post-export processing.Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thanks ! I will do the same.", "username": "Yatin_Patel" } ]
Mongoexport - doesn't export exact _id as per collection
2020-05-20T06:32:43.987Z
Mongoexport - doesn&rsquo;t export exact _id as per collection
3,299
null
[]
[ { "code": "", "text": "Hello!\nI dont know the difference between ‘insertMany, updateMany, deleteMany’ and ‘bulk.insert, bulk.update, bulk.remove’.\nWhat’s the difference between ‘insertMany, …’ and ‘bulk.insert, …’?\nThnx in advance.", "username": "DongHyun_Lee" }, { "code": "", "text": "Hi @DongHyun_Lee,Performance wise there is no major difference and both methods come to have less client to database round trips.The syntax of bulk operations allow you to have a mix of updates/deletes or inserts within a bulk while insertMany will only support inserts for example.Additionally the insertMany will execute the amount of data in the command while the bulk will only execute when the execute method is called so you van better control execution timing and logicBest\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
What's the difference between 'insertMany()' and 'bulk.insert()'?
2020-08-17T06:29:59.811Z
What&rsquo;s the difference between &lsquo;insertMany()&rsquo; and &lsquo;bulk.insert()&rsquo;?
12,011
null
[ "performance", "atlas" ]
[ { "code": "", "text": "Hi MongoDB community,I really appreciate the Performance Advisor feature.Do you know if there is a way to be notified when the Performance Advisor detects optimizations?Ideally, I would like to receive email alerts when new Index Suggestions and Schema Anti-Patterns are available.Thanks,Vincent", "username": "Vincent_Denise" }, { "code": "", "text": "Hi @Vincent_Denise,I think this was released in the latest atlas release.Project owners are now alerted when there are Performance Advisor index recommendations for a cluster that is experiencing high query targeting.Check the alert configuration for your projectsBest\nPavel", "username": "Pavel_Duchovny" } ]
Performance Advisor Notification
2020-08-17T13:31:31.724Z
Performance Advisor Notification
1,483
null
[ "php" ]
[ { "code": "", "text": "Hello,Since days, we are trying to read our mango DB from an external PHP, hosted by a simple web provider. We need this in order to make a ladder for our guild.We first tried to make Heroku to use our external MySQL database, but without success, so we used a MangoDB, but now we don’t know how to read it from our website.I hope someone will help us, we really need…\nThank you and sorry for my english.", "username": "_Jijo_N_A" }, { "code": "", "text": "You need to install the drivers and connect using a URI for your target MongoDB. See https://docs.mongodb.com/drivers/php", "username": "Jack_Woehr" }, { "code": "", "text": "Are you using a platform (e.g. WordPress) or a framework (e.g. Symfony)? Do you block on a specific step (e.g. Connection)? More information we’ll have, more easily we could help you.", "username": "Samuel_Tallet" }, { "code": "", "text": "Hello, we now installed the drivers + node.js, we are look at your link.\nYes we use Joomla.", "username": "_Jijo_N_A" }, { "code": "$client = new MongoDB\\Client(\n 'mongodb+srv://x:[email protected]/x?retryWrites=true&w=majority'\n);\n\n$db = $client->test;\n\t?>\n", "text": "Actualy I use\n<?phpTo connect on our db, with a VPS with node.js and the mango’s driver. But I don’t know how to read the content and show it on tables, I’m noob with db.", "username": "_Jijo_N_A" }, { "code": "<?php\n// Define some useful constants.\ndefine('MONGO_URL', 'mongodb+srv://x:[email protected]/x?retryWrites=true&w=majority');\ndefine('MONGO_DATABASE', 'testdb');\ndefine('MONGO_COLLECTION', 'testcoll');\n\ntry {\n\n // Connect to database and select collection.\n $mongoClient = new MongoDB\\Client(MONGO_URL);\n $mongoCollection = $mongoClient->selectCollection(MONGO_DATABASE, MONGO_COLLECTION);\n \n // Get collection contents and store it.\n $mongoDocuments = $mongoCollection->find();\n $mongoDocumentsArray = $mongoDocuments->toArray();\n\n} catch (\\Throwable $th) {\n echo 'Error: ' . $th->getMessage();\n}\n\n// If collection is not empty:\nif ( isset($mongoDocumentsArray) && count($mongoDocumentsArray) >= 1 ) {\n\n $mongoDocumentsArrayKeys = array_keys((array) $mongoDocumentsArray[0]);\n\n $htmlTable = '<table>';\n\n // Prepare table head.\n $htmlTable .= '<tr>';\n foreach ($mongoDocumentsArrayKeys as $mongoDocumentsArrayKey) {\n $htmlTable .= '<th>' . $mongoDocumentsArrayKey . '</th>';\n }\n $htmlTable .= '</tr>';\n\n // Prepare table body.\n foreach ($mongoDocumentsArray as $mongoDocumentArray) {\n $htmlTable .= '<tr>';\n foreach ($mongoDocumentArray as $mongoDocumentArrayValue) {\n $htmlTable .= '<td>' . $mongoDocumentArrayValue . '</td>';\n }\n $htmlTable .= '</tr>';\n }\n\n $htmlTable .= '</table>';\n\n // Display table.\n echo $htmlTable;\n\n}\n?>\n", "text": "Here’s a PHP code to help you. Note: nested fields are not supported.", "username": "Samuel_Tallet" }, { "code": "", "text": "Thank you!\nBut I think the php host need mangodb and librairies installed, and node.js, no ?", "username": "_Jijo_N_A" }, { "code": "", "text": "PHP host needs MongoDB extension and library installed.", "username": "Samuel_Tallet" } ]
How to read the database from an external PHP?
2020-08-14T13:35:53.342Z
How to read the database from an external PHP?
2,468
null
[ "connector-for-bi" ]
[ { "code": "", "text": "I’m trying to connect my database to Tableau, installed Connector BI and started the service, but I can’t get the connection. The port has already been released on Azure and the firewall on the machine is disabled, has anyone experienced this problem?", "username": "Fernando_Mota" }, { "code": "", "text": "I would first look at the logs generated by the BI Connector. You can also try to use a MySQL Client to connect to the BI Connector directly.The bigger challenge with deploying your own BI Connector versus using the one in MongoDB Atlas is dealing with certificates. In Atlas you just check a box so if you can use MongoDB Atlas it is much easier.\nThat said, here is a detailed setup with a self-hosted BI Connector and Tableau. https://docs.mongodb.com/bi-connector/current/connect/tableau/", "username": "Robert_Walters" } ]
Connect Tableau with MongoDB via external BI Connector
2020-08-07T21:38:52.086Z
Connect Tableau with MongoDB via external BI Connector
1,705
null
[]
[ { "code": "", "text": "I want to migrate my Alexa skills data from DynamoDB to MongoDB. I exported the data from DynamoDB using Export.csv and imported into MongoDB. From now on, I want my skill to store data on MongoDB. How do I do that using the same code that I used for DynamoDB.P.S : Currently the skill refers to DynamoDB table, even after removing the DynamoDB credentials.", "username": "Heta_Shah" }, { "code": "", "text": "Hi, there is not enough information for me to explain exactly how you would go about this. Once the data is in MongoDB you can use one of many drivers for programmatic access to MongoDB. https://docs.mongodb.com/drivers/.", "username": "Robert_Walters" } ]
How to migrate from DynamoDb to MongoDB
2020-08-10T01:14:33.774Z
How to migrate from DynamoDb to MongoDB
1,949
https://www.mongodb.com/…4_2_1024x512.png
[]
[ { "code": "", "text": "I’m trying to create a scheduled trigger to drop the database at a certain time of the day.\nI can’t find the correct command on the documentation. Is that even possible?\nMy cluster is an M10 and I use it to demonstrate an application, that’s why I have to drop it daily.\nThanks.", "username": "Felipe_Lima" }, { "code": "", "text": "Hi @Felipe_LimaI don’t think realm will allow you to drop databases from a function.What you can do is remove all documents via trigger set to system. Won’t that have the same affect?You can also recreate a cluster via the Atlas API similar toLearn how to automate your MongoDB Atlas cluster with scheduled triggers.One final “hack” you can do is to load a dependency of MongoDB Node Js driver and have a plain connection to the database to perform the drop…Best\nPavel", "username": "Pavel_Duchovny" } ]
How to create a scheduled trigger to drop the database daily?
2020-08-17T09:18:54.868Z
How to create a scheduled trigger to drop the database daily?
3,566
null
[ "atlas" ]
[ { "code": "", "text": "I have an M20 cluster with 200 GB disk.My disk usage was around 150 GB, but now that we have moved some of the data, it is coming down to 90-100 GB, and I dont really need the M20 compute power.I want to downgrade it to M10 with 110 or 120 GB disk (max M10 supports is 128 GB)I see that I can just edit configuration, select M10 and 120 GB disk size, but I have never reduced disk size before, so want to know if it will work or do I have to do something else?", "username": "Dushyant_Bangal" }, { "code": "", "text": "Hi @Dushyant_Bangal,If your storage size allows the resize the disk should be just replaced with a smaller device.Having said that we recommend keeping 30% of free space to avoid issues as a general guide line.Best\nPavel", "username": "Pavel_Duchovny" } ]
Safely downgrading Cluster type and disk
2020-08-18T10:52:37.045Z
Safely downgrading Cluster type and disk
2,661
null
[ "indexes", "performance" ]
[ { "code": "{\n \"ns\" : \"my_collection\",\n \"size\" : 499941088660,\n \"count\" : 121981837,\n \"avgObjSize\" : 4098,\n \"storageSize\" : 178798247936,\n \"capped\" : false,\n \"nindexes\" : 1,\n \"totalIndexSize\" : 1713725440,\n \"indexSizes\" : {\n \"_id_\" : 1713725440\n },\n \"ok\" : 1\n}\nVM Spec: n1-highmem-8 (8 vCPUs, 52 GB memory) -- Google Cloud\nDisk: 500GB SSD with random IOPS limit at 15000/15000 (read & write each) and max throughput at 240 MB/S. \nmaxIndexBuildMemoryUsageMegabytesiotop", "text": "We have a collection with about 121 million documents that results in following stats.We are trying to create an index on an existing field that all documents have. For trying this out, we have created an isolated mongo instance with following specs:Index creation seems to running very slow (by our estimate it would take to a day or two).We tried different values for maxIndexBuildMemoryUsageMegabytes (default 500, 800, 1000, 5000, 10000, 30000) But this didn’t seem to affect the speed. (Even with high values here, VM still had free memory apart from used + cache, so we believe it didn’t cause insufficient working set memory leading to higher disk I/O).Checking on iotop, we noticed reads happening (avg 10MB/S) but almost no writes.Are there any other parameters we should be looking at?", "username": "Shivaprasad_Bhat" }, { "code": "", "text": "Hi @Shivaprasad_Bhat,We recommend creating index on a large collection using the rolling maintenance manner:\nhttps://docs.mongodb.com/manual/core/index-creation/With this approach you should be able to have minimal impact on your production and create the index with default speed. As long as your replication window is sufficient you should be good doing this build one node at a time.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "ascasc", "text": "Hi @Pavel_Duchovny,We did go through this documentation. But one complication we have is that an index with unique constraint on the same field already exists. The new index we want to create is a partial index (i.e., index only those documents where the target field exists thus allowing multiple documents to not have the field or have null value). Since we can’t have 2 indices at once, we have to drop the existing index first which would affect the production traffic. This prevents us from using background indexing on primary.Other approach from Mongo documentation is to take one secondary out of cluster at a time, run foreground index creation and put it back into cluster. Finally change primary to secondary and do the same thing. Since we have about 8 secondaries, this will require lot of manual intervention and is cumbersome.We have come up with an alternate approach to solve this based on following facts:Approach:Let me know if you see any problems with this approach.Regards,\nShivaprasad Bhat", "username": "Shivaprasad_Bhat" }, { "code": "", "text": "Hi @Shivaprasad_Bhat,That sounds like a good plan. But be aware that index builds on secondaries start after the Primary build.Btw, you can always add a dummy field to your index to create more “duplicate” indexes.Best\nPavel", "username": "Pavel_Duchovny" } ]
Improving create Index performance
2020-08-13T03:48:21.418Z
Improving create Index performance
4,465
null
[ "mongodb-shell" ]
[ { "code": "'cccddd'.replace(/(?<!c)d/g, 'A')\nReplace all `d` not preceded by `c` with `A`\ncccdAA\nMongoDB shell version v4.2.2\nMongoDB server version: 4.2.2\n\n> 'cccddd'.replace(/(?<!c)d/g, 'A')\n2020-08-15T19:07:15.871+0530 E QUERY [js] uncaught exception: SyntaxError: \ninvalid regexp group :\n@(shell):1:17\n", "text": "Mongo Shell Support For Negative Lookbehind RegExp,Browser console:However if I run this into mongo shellI can’t find any definitive documentation on what Mongo shell supports insofar, but it appears it doesn’t support negative look-behind (at least).", "username": "turivishal" }, { "code": "$ brew tap mongodb/brew\n$ brew install mongosh\n", "text": "This works in the new MongoDB Shell..You can install it from the download center or if you are on macOS with homebrew:", "username": "Massimiliano_Marcon" }, { "code": "", "text": "Thanks @Massimiliano_Marcon, it is working in new MongoDB Shell.", "username": "turivishal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Does Mongo Shell support negative look-behind assertion?
2020-08-15T20:37:13.206Z
Does Mongo Shell support negative look-behind assertion?
3,080
null
[ "backup" ]
[ { "code": "", "text": "How can I use mongoDump and mongoRestore to export/import all tables between different connection strings. I have two different monogoDB databases on azure.", "username": "Mike_Z" }, { "code": "", "text": "Hi @Mike_Z,Please see the following documentation example on how to dump and restore databases :\nhttps://docs.mongodb.com/database-tools/mongorestore/#copy-clone-a-databaseIf you have specific difficulty please let us know.Best\nPavel", "username": "Pavel_Duchovny" } ]
mongoDB export/import
2020-08-17T20:32:20.859Z
mongoDB export/import
1,959
null
[]
[ { "code": "", "text": "Hello,I am using Mongo-Connector, and all other DDL are syncing fine, but index creation is not syncing from source to Destination. Is there any Particular configuration needs to be enabled.", "username": "Aayushi_Mangal" }, { "code": "", "text": "Hi @Aayushi_Mangal,Mongo Connector syncs data between different supported environments, but I don’t believe it currently supports syncing indexes. Depending on your source and destination software and versions, you may also want more control over when index builds are triggered on populated collections (see: Index Builds on Populated Collections for behaviour in MongoDB 4.4).I would try searching mongo-connector GitHub issues and ask there if index syncing does not appear to be an open feature request or documented feature.Regards,\nStennie", "username": "Stennie_X" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo-Connector
2020-08-18T07:32:30.792Z
Mongo-Connector
3,181
null
[ "sharding", "containers", "upgrading" ]
[ { "code": "4.2.74.4.0configshardmongos{\"t\":{\"$date\":\"2020-08-17T20:48:55.000+00:00\"},\"s\":\"F\", \"c\":\"CONTROL\", \"id\":4757800, \"ctx\":\"ftdc\",\"msg\":\"Writing fatal message\",\"attr\":{\"message\":\"DBException::toString(): NonExistentPath: \\\"/dev/stdout.diagnostic.data\\\" could not be created: Permission denied\\nActual exception type: mongo::error_details::ExceptionForImpl<(mongo::ErrorCodes::Error)29, mongo::AssertionException>\\n\"}}\nsystemLogsystemLog:\n destination: file\n logAppend: true\n path: /dev/stdout\n/dev/stdout/dev/stdout.diagnostic.data4.2.74.4.0", "text": "Hey there,\nI am trying to update a sharded cluster from 4.2.7 to 4.4.0.So far I have updated the config and shard replica sets without any issue.But when updating the mongos instances I get the Following exception:My systemLog config looks like this:I am using /dev/stdout as I am running my mongod and mongos instances within docker containers. From the exception it seems that mongo is trying to create a file at /dev/stdout.diagnostic.data which obviously doesn’t work. Not sure what changed between 4.2.7 and 4.4.0 that broke this for me, as the only thing I have changed in my configuration is the mongo server version.Anybody any insight on how to fix this?", "username": "Jascha_Brinkmann" }, { "code": "", "text": "Hi @Jascha_Brinkmann,The systemLog.path expect a full file destination and not just a path:\nhttps://docs.mongodb.com/manual/reference/configuration-options/#file-formatIts not something new and for quite sometime it also create the diagnostic directory under that parent directory.We suggest using the syslog option to centrelize logs rather then stdout.Best\nPavel", "username": "Pavel_Duchovny" } ]
Mongos 4.4.0 exception after upgrading from 4.2.7
2020-08-17T22:47:14.988Z
Mongos 4.4.0 exception after upgrading from 4.2.7
2,256
null
[ "queries" ]
[ { "code": "user = {\n '_id': email, \n 'username': username,\n 'password': hashed,\n 'images': {\n 'tags': []\n }\n }\nuser = db.users.find_one({'username': username})\ncursor = db.fs.files.find({'_id': {'$in': user['images']}})\n", "text": "The structure is like this:I added the images to the MongoDB database using GridFS and now I’m trying to recover them like this:But I get an error: $in needs an array. How can I fix this?", "username": "cosmina" }, { "code": "user['images']user = {\n '_id': email, \n 'username': username,\n 'password': hashed,\n 'images': [ 'xxxxxxxxxxxx', ....]\n }\n", "text": "Hi @cosmina,\nThe. $in needs an array however user['images'] is an object. Should images contain an array of Ids?I would expect then the structure to be:Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Yes, but I also need the ‘tags’ array within images. Can images still be an array in that case?", "username": "cosmina" }, { "code": " 'images': [{ id: 'xxxxxxxxxxxx', tags :[ ... ]}, ...}\nfield : { $in : [array of values to match]}", "text": "Hi @cosmina,Not sure I understand the data design. Perhaps what you are looking for is an array of compound documents :You will need to map and push the ids into a new array of only ids to use the $in because it looks for field : { $in : [array of values to match]}Best\nPavel", "username": "Pavel_Duchovny" } ]
Error: $in needs an array
2020-08-17T11:37:07.430Z
Error: $in needs an array
11,459
null
[ "on-premises" ]
[ { "code": "", "text": "Hi,I have set up the mongodb and mongodb charts on same host. Both mongodb and the chart WebUI work well. But there is a problem that always bothers me.When I want to add a new chart, after loading the data source. I drag a field item to the chart X/Y axis, there is always no reaction on the X/Y axis(the field bar can’t stay there). Using sample mode sometimes helps, but not always. After waiting about 20minutes, the axis will start works, but still not very stable. Do you know why this happens and how to fix it?Here is my environment\nThe system is a VM with 16 vcpu and 16G memories, with fedora 31installed.\nThe mongodb version is mongodb-org-4.2.8-1.el8.x86_64.\nThe chart version is 19.12.1.\nIn Mongo shell, show dbs shows the usage as 3.1G.Thanks", "username": "Hangbin_Liu" }, { "code": "", "text": "Hi @Hangbin_Liu - I haven’t seen that behaviour, but I’m not sure I fully understand it. Are you able to show any screenshots or GIFs demonstrating the issue?thanks\nTom", "username": "tomhollander" }, { "code": "", "text": "Hi @tomhollander, sorry for the late reply. I updated mongodb to 4.4.0 recently and the issue just disappeared… Maybe there is some performance update on 4.4.0? I’m not sure, but I’m happy to not has this issue now. If I could reproduce it, I will post a GIF for you.Thanks\nHangbin", "username": "Hangbin_Liu" }, { "code": "", "text": "Hi @tomhollander, today I reproduced this issue and recorded a gif. Here is itWould you like to help check why that happens and how to avoid it?Thanks!", "username": "Hangbin_Liu" }, { "code": "", "text": "Thanks for recording the GIF. What OS and browser are you using?Tom", "username": "tomhollander" }, { "code": "", "text": "I use fedora 31 and google-chrome-stable-84", "username": "Hangbin_Liu" }, { "code": "", "text": "Thanks. This would be a client-side issue (not related to MongoDB version) but I’ve never seen it before. A few more questions that may help us:thanks\nTom", "username": "tomhollander" }, { "code": "", "text": "Thanks for your quick reply. It would be good to know the the server works well.\nFor the client side, as I said, it is not always reproducible. For your questionsSo I though it maybe because I opened chrome with too many tabs(20+) for a long time(a few days). Restart the browser may resolve the issue. I will see if it really works when I got this issue again.Thanks\nHangbin", "username": "Hangbin_Liu" } ]
No action when drag fields to charts X/Y axis
2020-07-30T10:54:40.526Z
No action when drag fields to charts X/Y axis
3,768
null
[]
[ { "code": "", "text": "The MongoDB UX Research Team is looking for Mobile App Developers to participate in a diary study. We are asking participants to create a mobile application from start to finish and to journal their experiences along the way. Participants will be given two weeks to complete this diary study (approximately 10 hours of developing) and will be compensated accordingly. If you are a Mobile App Developer, please apply here. We will reach out to you if you qualify for the diary study.", "username": "Michael_Lynn" }, { "code": "", "text": "Oh, how I keep missing interesting stuff on these forums. Apparently I need to try to find a way to get better notified on various opportunities. Would have been interesting to see how this kind of diary study works out.", "username": "kerbe" }, { "code": "", "text": "", "username": "Stennie_X" } ]
Are you a mobile app developer? Want to earn $1500?
2020-07-29T13:02:58.639Z
Are you a mobile app developer? Want to earn $1500?
4,836
null
[ "atlas-search" ]
[ { "code": "", "text": "Does MongoDB Atlas support full text search for chinese text?\nIf not, what should be done to get the search to work ?", "username": "Supriya_Bansal" }, { "code": "", "text": "Hi @Supriya_Bansal,Unfortunately, at the moment there are no language analyzing for Chinese language:\nhttps://docs.atlas.mongodb.com/reference/atlas-search/analyzers/language/You can open a feature request at:\nhttps://feedback.mongodb.com/You can try to set the locale on the collection level and try a simple utf8 text search to see if it works for you.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Thank you!! This is helpful.", "username": "Supriya_Bansal" }, { "code": " lucene.cjk", "text": "Hi @Supriya_Bansal,Apparently I missed that lucene.cjk is listed as supported see if this works for you.Sorry for the confusion. Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Oh nice…this is amazing!!\nThank you!", "username": "Supriya_Bansal" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Full Text Search for Chinese text
2020-08-14T20:27:05.821Z
Full Text Search for Chinese text
4,621
null
[]
[ { "code": "PlatformException (PlatformException(error, Migration is required due to the following errors:\n", "text": "According to documentation database automatically handles all synced schema migrations and does not allow to specify a migration function. That being said after adding a few models changes give me this error.Do I have to write sync migration? if yes then, how do I acces migration function for SyncConfiguration", "username": "Safik_Momin" }, { "code": "Realm.getDefaultInstance()Realm.getDefaultInstance()", "text": "Figured out issue on why it was giving the above error:\nBasically I was doing Realm.getDefaultInstance() become starting the syncing. After removing Realm.getDefaultInstance() and adding it after the syncing process takes care of the migration automatically.", "username": "Safik_Momin" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Sync Migration on android
2020-08-09T16:51:03.838Z
Sync Migration on android
2,321
null
[]
[ { "code": "", "text": "I’m coming from Realm Cloud, where storing images in the database is not possible due to the limits of storage (2.5 GB) and bandwidth (20GB/month). My solution was to only store an image link and store the actual images in Firebase storage instead. The image was sent directly from the user’s device, to avoid wasting realm’s paid bandwidth.I’ve seen that in MongoDB, you can store images in AWS S3, and use server functions to transfer the images. But to run the server functions, you have to send the image to the server in the first place right? Won’t that create a bandwidth issue? Is there any limit on the bandwidth you can use with functions?", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "Hi! There isn’t a bandwidth limit but some other limits to consider are:There are limits for the free tier data egress which you can read about here.", "username": "Sumedha_Mehta1" }, { "code": "", "text": "@Jean-Baptiste_Beau I did see Alex’s talk at MongoDB Live where he mentioned using server functions to cut up image files after uploading them. Although it could be done this way, there are a few reasons why it should still be done the old way.First, one typically uses blob storage like AWS S3 or Azure to store these large image assets. MongoDB Realm is not a CDN, so you still have to use this type of storage. Typically, you get a write URL to upload your image to, and then a read URL to share it with other clients. In my opinion, the purpose of the server function is to compute the read and write URLs on behalf of the client, i.e. the client should not have to worry about the details of the blob storage. However, that is what I would limit the role of the server to. Second, I would let the client do the actual work of cutting up the image. Client compute power is cheap, free from the point of view of the developer, and does not encumber paid for server resources. Since the client has to upload the image anyways, uploading the image and the cuts is not that much ‘extra’ work. Plus, the act of uploading should not have to go through a server on its way to blob storage - it should just go there directly. As a rule, when developing collaborative applications, you always want to push work away from the central server towards the client. The server should do the lightweight management, not the heavy lifting.I hope this was useful.Richard Krueger", "username": "Richard_Krueger" }, { "code": "", "text": "Perfect answer, thank you!", "username": "Jean-Baptiste_Beau" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Handling images (and other big files) in MongoDB Realm: bandwidth limit?
2020-08-11T09:26:34.293Z
Handling images (and other big files) in MongoDB Realm: bandwidth limit?
8,039
null
[]
[ { "code": "", "text": "I have a schema which consists of a few attributes, Around 30 or so. The problem I am facing is that only two or three fields would be mandatory while the others would be optional.I would like your help in understanding if it’s a good approach storing fields with null values instead of not storing them to the document at all.Thanks for the help!", "username": "Subrit_X" }, { "code": "", "text": "I prefer NOT storing the fields. Smaller space might make some operation faster.It somewhat complicate the code since you have to process the absence of the field but in mi opinion it is not much more complicated than processing the null value.", "username": "steevej" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Not including fields vs storing fields with a null value
2020-08-17T13:06:06.040Z
Not including fields vs storing fields with a null value
3,946
null
[]
[ { "code": "", "text": "Give the MongoDB command to show the name of each course and Honours which has the value true if the course level is 8 or higher, otherwise false. This is the question and am looking for a good place to start.", "username": "Marie_Dillane" }, { "code": "", "text": "I would start with taking courses at https://university.mongodb.com/", "username": "steevej" }, { "code": "", "text": ".Thanking you may I ask regarding a queryimage1076×109 5.19 KBI wish to add to the “Tom Kenna” entry, the word none where qualifications are in the other entries, and I also do not need address and age to appear in the entries and want to remove them in the query if you can be of any assistance I would be most grateful", "username": "Marie_Dillane" }, { "code": "", "text": "I recommend using Compass to do your modifications.", "username": "steevej" }, { "code": "", "text": "Compass is a good GUI tool for running CRUD operation against mongodb.", "username": "BM_Sharma" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoCommand for listing fields and their contents
2020-08-14T13:36:07.096Z
MongoCommand for listing fields and their contents
1,730
null
[]
[ { "code": "{\n \"version\": \"4.4.0\",\n \"gitVersion\": \"563487e100c4215e2dce98d0af2a6a5a2d67c5cf\",\n \"openSSLVersion\": \"OpenSSL 1.1.1g 21 Apr 2020\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"ubuntu2004\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n", "text": "Anyway to move the data to another location? My HD almost full and want to move it to another partition.OS\nDistributor ID: Neon\nDescription: KDE neon User Edition 5.19\nRelease: 20.04\nCodename: focalMongo:\ndb version v4.4.0\nBuild Info:Thanks.", "username": "mamat_hensem" }, { "code": "", "text": "Stop mongod.\nMove the directory specified by dbPath to new partition.\nUpdate dbPath to point to new location.", "username": "steevej" }, { "code": "", "text": "How do I get the dbPath. Just move all the files there into new directory?Thanks.", "username": "mamat_hensem" }, { "code": "", "text": "find mongodb config file\nsudo find / -name mongod.conf -printget dbPath from that filestop mongodb\nservice mongod stopcheck mongodb is not running\nsudo systemctl status mongodcopy all files from dbPath to new dbPath\ncp -r old_dbPath new_dbPathchown of new dbPath\nsudo chown -R mongodb:mongodb new_dbPathchange dbPath in mongodb config file to new_dbPathstart mongodb\nsudo systemctl start mongodif everything good after restarting your pc, remove the old dbPath", "username": "mamat_hensem" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Move db to another location
2020-08-17T07:33:41.485Z
Move db to another location
2,307
null
[]
[ { "code": "", "text": "Hi Team\nI have some test documents:\n{\"_id\":{\"$oid\":“5f328c3a03792828c45940dd”},“Children_Num”:8,“Children”:[{“name”:“Snir”,“dob”:“1995-09-01”,“dep”:false},{“name”:“Iron”,“dob”:“1999-03-19”,“dep”:false},{“name”:“Sivan”,“dob”:“2003-05-31”,“dep”:true}]}On Compass I run succesfuly this code:db.Children.update([{$match: {\nChildren_Num: {\n$gte: 0\n}\n}}, {$set: {\nChildren_Num:{$size:\"$Children\"}\n}}])I would like to update Children_Num field accordingly using updates.I tested this code successfully:\ndb.Children.update(\n{ _id: ObjectId(“5f328c3a03792828c45940dd”)},\n{$set:{Children_Num:8}})But When I tried to replace the 8 with {$size:\"Children\"}\nI got an error\nThe dollar () prefixed field ‘$size’ in ‘Children_Num.$size’ is not valid for storagePlease advise\nThanks\nTal", "username": "Tal_Shainfeld" }, { "code": "db.Children.update([{$match: {\n_id: ObjectId(\"5f328c3a03792828c45940dd\")\n}}, {$set: {\nChildren_Num:{$size:\"$Children\"}\n}}])\n", "text": "Hi @Tal_Shainfeld,The initial command works since it is an expressive update using an aggregation pipeline whitin the update.This is exactly why this was introduced in 4.2 to allow you to compute those expressions whithin the same update.MongoDB 4.2 brings pipelines to updates to boost your productivityYou cannot use a $size or $expr in a “clasic” $set clause.Why can’t you use the Pipiline method:Best regards,\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel\nI’m running MongoDB server version: 4.4.0.\nYet when running the code you sent\n(I got error on the bracersdb.Children.update({$match: {_id: ObjectId(“5f328c3a03792828c45940dd”)}},\n{$set: {Children_Num:{$size:\"$Children\"}}})I got this:\nerrmsg’ : ‘unknown top level operator: $match’\"But\nI tried this on compass successfully:\n[{ $match: { Children_Num: { $gte: 0 } }}, { $set: { Children_Num: { $size: “$Children” } }}]And on the mongo shell it failed:image1336×229 6.73 KBSame it failed on NoSQLBooster:\nimage1348×390 34.2 KBThus I guess the update makes the differencesSo I tried on the mongoshell both:\nimage1518×207 17.8 KB\nGot error on update when aggregte is fine .Please adviseThanks Tal", "username": "Tal_Shainfeld" }, { "code": "[]", "text": "Hi @Tal_Shainfeld,In compass you missed [] so the engine will understand its a pipeline.In the shell it might be some hidden character or the quotes are bad ones. What version of shell is this?Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi Pavel\nI tried 2 codes: the example you send me with $Sum that works, and my with $size was not run. I run it on same mongo server and shell .I copied from the compass to shell thus no typo errors.\nPlease try my code and see if works for you:\ndb.Children.update([{$match: {\nChildren_Num: {\n$gte: 0\n}\n}}, {$set: {\nChildren_Num:{$size:\"$Children\"}\n}}])Document\n{\"_id\":{\"$oid\":“5f328c3a03792828c45940dd”},“Children_Num”:8,“Children”:[{“name”:“Snir”,“dob”:“1995-09-01”,“dep”:false},{“name”:“Iron”,“dob”:“1999-03-19”,“dep”:false},{“name”:“Sivan”,“dob”:“2003-05-31”,“dep”:true}]}\nMongoshell beta 0.1. 0 and directly on mongodb and also on NOSQLbooster.\nMongo ver 4.4 and 4.2 also\nThanks a lot\nTal", "username": "Tal_Shainfeld" } ]
Update with $set
2020-08-16T09:52:48.003Z
Update with $set
1,348
null
[ "server" ]
[ { "code": "", "text": "Hi everyone,\nI would like to implement a priority ordering to the requests sent by the clients, as part of my thesis.\nI suppose there is some sort of message queue that connects the clients to the server, but I’m unable to find its implementation in this huge repository. Therefore, I need help going through the source code of the database and find out how the requests are managed.Thank you very much for your assistance", "username": "Zikker" }, { "code": "", "text": "I located the source files I’m interested in, but I don’t know which branch I should fork from to apply my changes: master, v4.2 or r4.2.0?", "username": "Zikker" }, { "code": "", "text": "I think that doesn’t matter. Just use a recent version to play with your thesis", "username": "Lewis_Chan" } ]
Exploring the source code to understand how requests are managed
2020-04-29T20:29:16.123Z
Exploring the source code to understand how requests are managed
2,283
null
[]
[ { "code": "{timestamp:1}{code:1,timestamp:1}timestampcodecode1_ABC_1\n1_ABC_2\n1_PQR_1\n1_XYZ_1\n{\n \"code\": {\n \"$regex\": \"WMS\"\n },\n \"timestamp\": {\n \"$gte\": {\n \"$date\": 1596220200000\n },\n \"$lte\": {\n \"$date\": 1597429799999\n }\n }\n}\n{\n \"stage\" : \"FETCH\",\n \"filter\" : {\n \"device\" : {\n \"$regex\" : \"ABC\"\n }\n },\n \"inputStage\" : {\n \"stage\" : \"IXSCAN\",\n \"keyPattern\" : {\n \"timestamp\" : 1\n },\n \"indexName\" : \"timestamp_1\",\n \"isMultiKey\" : false,\n \"multiKeyPaths\" : {\n \"timestamp\" : [ ]\n },\n \"isUnique\" : false,\n \"isSparse\" : false,\n \"isPartial\" : false,\n \"indexVersion\" : 2,\n \"direction\" : \"forward\",\n \"indexBounds\" : {\n \"timestamp\" : [\n \"[{ $date: 1596220200000.0 }, { $date: 1597429799999.0 }]\"\n ]\n }\n }\n}", "text": "I have two indexes {timestamp:1} and {code:1,timestamp:1}timestamp is date, while code is string.code contains a string like the ones below:Following is my match stage:Yet, the winning plan ends up being the following:", "username": "Dushyant_Bangal" }, { "code": "^1_WMS", "text": "Hi @Dushyant_Bangal,The $regex will not utilize an index when the search is not anchored.Can you search ^1_WMS instead?If not consider using, text index ot Atlas and Atlas search to better index regular expressions.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hi @Pavel_Duchovny,I tried that, but it didnt work.\nI also tried with a small set of data on my local machine, still the same. When I dropped the timestamp single index, then it took the correct Index.But I need the timestamp Index.", "username": "Dushyant_Bangal" }, { "code": "", "text": "Hi @Dushyant_Bangal,You can recreate the timestamp index. Its better to clear query cache or use a hint on the query to force q better index and avoid remove necessary index just for a plan recompute.Best\nPavel", "username": "Pavel_Duchovny" } ]
Match regex is not utilizing index correctly
2020-08-14T12:14:05.986Z
Match regex is not utilizing index correctly
2,796
null
[ "ruby" ]
[ { "code": "", "text": "Hi all,\nIs there any way to run grantRolesToUsers like query through mongodb ruby driver?", "username": "Kunal_Kunkulol" }, { "code": " client.database.command(grantRolesToUser: username, roles: ['foo'])\nFoo.collection.clientFoo", "text": "This appears to be a cross post from ruby - Grant roles to mongo users via rails - Stack Overflow which has already been answered (pasted below):The Ruby mechanism for running arbitrary commands is documented here.You would then do something like:To get the client instance, use Foo.collection.client where Foo is a Mongoid model class.", "username": "alexbevi" } ]
grantRolesToUsers with ruby
2020-08-10T23:35:00.911Z
grantRolesToUsers with ruby
3,080
null
[ "atlas-triggers" ]
[ { "code": "", "text": "There are options to add Triggers on the Atlas and Realm console. Does it make any difference where I add a trigger?", "username": "Emmanuel_Ansah" }, { "code": "", "text": "Hi Emmanuel – It doesn’t make a difference where you add a Trigger in the UI. The Atlas Triggers experience is mainly a simplified UI on top of Realm to help folks get started more easily with Triggers. In both cases a Realm application will be created and you’ll have the full ability to add features/extend your Atlas Triggers if you wish.", "username": "Drew_DiPalma" } ]
Atlas and Realm Triggers
2020-08-17T01:14:01.846Z
Atlas and Realm Triggers
1,821
null
[ "indexes" ]
[ { "code": "{\n//Document 1\n \"channel\" : {\n \"_id\" : \"1\",\n \"name\" : \"switch\", \n \"formats\" : [ \n {\n //I do not want to repeat formatName \"ISO8583-93\" for channel \"switch\" but I can have this //same formatName for different channel within the same collection\n \"formatName\" : \"ISO8583-93\",\n \"description\" : \"ISO Format\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ],\n \"messages\" : [ \n {\n //I do not want to repeat messages.name \"balanceEnquiry\" for channel \"switch\" but I can have this //same messages.name for different channel within the same collection\n\n \"name\" : \"balanceEnquiry\",\n \"alias\" : \"balanceEnquiry\",\n \"description\" : \"balanceEnquiry Request : Sender Bank -> MessageHub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }\n ]\n }, \n {\n \"name\" : \"fundTransfer\",\n \"alias\" : \"creditTransfer\",\n \"description\" : \"Funds Transfer Request : Sender Bank -> Message Hub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }\n ]\n }\n ]\n }, \n {\n \"formatName\" : \"ISO20022\",\n \"description\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ]\n }\n ]\n }\n}\n{\n//Document 2\n \"channel\" : {\n \"_id\" : \"2\",\n \"name\" : \"POS\", \n \"formats\" : [ \n {\n //I do not want to repeat formatName \"ISO8583-93\" for channel \"POS\" but I can have this //same formatName for different channel within the same collection\n \"formatName\" : \"ISO8583-93\",\n \"description\" : \"ISO Format\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ],\n \"messages\" : [ \n {\n \"name\" : \"balanceEnquiry\",\n \"alias\" : \"balanceEnquiry\",\n \"description\" : \"balanceEnquiry Request : Sender Bank -> MessageHub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }\n ]\n }, \n {\n \"name\" : \"fundTransfer\",\n \"alias\" : \"creditTransfer\",\n \"description\" : \"Funds Transfer Request : Sender Bank -> Message Hub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }\n ]\n }\n ]\n }, \n {\n \"formatName\" : \"ISO20022\",\n \"description\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ]\n }\n ]\n }\n}\n", "text": "I have created index on “channel.name” which is unique 
within the collection. But I also want “channel.formats.formatName” and “channel.formats.messages.name” unique per document, how to achieve uniqueness of these two fields within the document?\nI have multiple documents in my collection as below.", "username": "Erica_01" }, { "code": "formatsformats:{\n \"ISO8583-93\":{\n \"description\" : \"ISO Format\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ]\n }\n}\n", "text": "I’d be interested in the answer for this.In case you dont get any, you might want to think about changing the schema a little, and making formats and object instead of an array, and use the format name as key.There cannot be two same keys in a sub-document / object.", "username": "Dushyant_Bangal" } ]
How to create unique index within a document only?
2020-08-16T20:37:50.073Z
How to create unique index within a document only?
2,022
null
[]
[ { "code": "", "text": "I configured a community mongo cluster with TLS. It seems to work fine but in the router log I see:\n…\n[listener] connection accepted from 10.25.10.10\n[conn125] no SSL certificate provided by peer\n[conn125] received client metadata from 10.25.10.10\n[conn125] end connection\n[conn126] no SSL certificate provided by peer\n[conn126] received client metadata from 10.25.10.10\n[conn126] Successfully authenticated as principal admin on admin from client 10.25.10.10\n…Am I missing something?", "username": "nvt_nvt" }, { "code": "", "text": "Hi nvt, welcome to the community:Are you saying this is a self-managed software install instead of MongoDB Atlas?-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "yes i did it myself following the mongo documentation", "username": "nvt_nvt" } ]
'no SSL certificate provided by peer' message?
2020-08-13T12:37:13.055Z
&lsquo;no SSL certificate provided by peer&rsquo; message?
3,937
null
[ "dot-net" ]
[ { "code": "mongodb://bb-infraserver.gala.int:37017\ntry\n{\n\tvar client = new MongoClient(\"mongodb://bb-infraserver.gala.int:37017\");\n}\ncatch (Exception e)\n{\n\tLog.Logger.Error($\"MongoDB driver exception: {e.Message}. Exceptions: {e.GetAllExceptions()}\");\n\tthrow;\n}\n\nSystem.IO.IOException: Invalid argument\n at System.IO.FileStream.CheckFileCall(Int64 result, Boolean ignoreNotSupported) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:828\n at System.IO.FileStream.ReadNative(Span`1 buffer) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:495\n at System.IO.FileStream.ReadSpan(Span`1 destination) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.Unix.cs:418\n at System.IO.FileStream.Read(Byte[] array, Int32 offset, Int32 count) in /_/src/System.Private.CoreLib/shared/System/IO/FileStream.cs:304\n at System.IO.StreamReader.ReadBuffer() in /_/src/System.Private.CoreLib/shared/System/IO/StreamReader.cs:594\n at System.IO.StreamReader.ReadToEnd() in /_/src/System.Private.CoreLib/shared/System/IO/StreamReader.cs:415\n at System.IO.File.InternalReadAllText(String path, Encoding encoding) in /_/src/System.IO.FileSystem/src/System/IO/File.cs:296\n at System.IO.File.ReadAllText(String path) in /_/src/System.IO.FileSystem/src/System/IO/File.cs:274\n at System.Net.NetworkInformation.StringParsingHelpers.ParseRawLongFile(String filePath) in /_/src/System.Net.NetworkInformation/src/System/Net/NetworkInformation/StringParsingHelpers.Misc.cs:64\n at System.Net.NetworkInformation.LinuxNetworkInterface.GetSpeed(String name) in /_/src/System.Net.NetworkInformation/src/System/Net/NetworkInformation/LinuxNetworkInterface.cs:202\nLinuxNetworkInterface.csprivate static long? GetSpeed(string name)\n{\n\ttry\n\t{\n\t\tstring path = Path.Combine(NetworkFiles.SysClassNetFolder, name, NetworkFiles.SpeedFileName);\n\t\tlong megabitsPerSecond = StringParsingHelpers.ParseRawLongFile(path);\n\t\treturn megabitsPerSecond == -1\n\t\t\t? megabitsPerSecond\n\t\t\t: megabitsPerSecond * 1_000_000; // Value must be returned in bits per second, not megabits.\n\t}\n\tcatch (Exception) // Ignore any problems accessing or parsing the file.\n\t{\n\t\treturn null;\n\t}\n}\n\nMongoUrlBuilder.Parse()name", "text": "I run self-hosted MongoDB standalone and ReplicaSet on CentOS machines.\nMy clients are written in C# against .NETCore, I use Fedora 32 to develop my servers. The server exist for several years now.I upgraded from .NETCore 2.1. to 3.1 without any problems.I upgraded to the latest MongoDB Driver WITH SOME PROBLEMS…So the driver I use is 2.11.0.0. This seams to be the first driver supporting .NET Standard 2.0…I have a connect string to a Standalone Installation which looks as follows and is very simple (no options):I use this URL on 4 of my servers WITHOUT ANY problems.But ONE Server throws the following exception (which I have never seen before…). Here is the simple C# codeThis is the exception which is thrown:I tried to debug this a bit deeper and found a file LinuxNetworkInterface.cs and there a methodThis method is called when doing name resolution which seems to have its origin from a method in the MongoDB driver called MongoUrlBuilder.Parse(). The value of the passed name parameter is “Io” and then it crashes when it tries to build the path with the above exception. 
This does NOT happen with any of the 2.10.x drivers which support only .NETStandard 1.5.I must say that my MongoDB servers are still Version 4.0.Any idea what the problem could be?", "username": "Thomas_Bednarz" }, { "code": "System.PlatformNotSupportedException: Socket.IOControl handles Windows-specific control codes and is not supported on this platform.\n at System.Net.Sockets.SocketPal.WindowsIoctl(SafeSocketHandle handle, Int32 ioControlCode, Byte[] optionInValue, Byte[] optionOutValue, Int32& optionLength) in /_/src/System.Net.Sockets/src/System/Net/Sockets/SocketPal.Unix.cs:1075\n at System.Net.Sockets.Socket.IOControl(Int32 ioControlCode, Byte[] optionInValue, Byte[] optionOutValue) in /_/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs:1753\n at System.Net.Sockets.Socket.IOControl(IOControlCode ioControlCode, Byte[] optionInValue, Byte[] optionOutValue) in /_/src/System.Net.Sockets/src/System/Net/Sockets/Socket.cs:1768\n at at MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateSocket(EndPoint endPoint)\n", "text": "I have downgraded the driver to 2.10.4. Now I can create a MongoClient instance without a problem, but when the driver ties to access a database for the first time it crashes with the following exception:Looks like the driver has a problem running on a Linux (I am running Fedora 32 as dev workstation and Centos 7 as OS for all MongoDB servers.", "username": "Thomas_Bednarz" }, { "code": "binobj", "text": "Problem solved (at least for the moment).\nDon’t know what caused it but after upgrading the OS (Fedora 31 to 32, the .NETCore SDK and almost all libraries used in the project) some data on the disk must be messed up. I cleared all caches and wrote a script which removed all bin and obj folders of all my server and library projects. Then restarted the Fedora VM and recompiled everything. Problem is gone since then…", "username": "Thomas_Bednarz" } ]
Connection problem with c# and .NETCore 3.1
2020-08-13T12:35:11.689Z
Connection problem with c# and .NETCore 3.1
3,393
null
[ "realm-web" ]
[ { "code": " newCustomData = await app.currentUser.refreshCustomData()\n", "text": "Hey\nIn my (react, web) app, whenever the user changes any personal data, it is inserted to the ‘members’ collection (using a realm serverside function) and then updates everything using:sometimes a small change in the member doc makes the jwt token ‘atob’ to fail.\nif i delete some stuff from the doc, it goes back and everything works…I saw on git they fixed this issue by using ‘js-base64’ which is great, but I’m using realm-web version 0.8.0 (latest, I think?), without that fix…what can I do?thakns", "username": "Orr_Kislev" }, { "code": "", "text": "So…after updating realm-web to 0.8.0 I forgot to ‘yarn install’ all the new dependencies\nI just did this and rerun the project and everythng is fine, there is a fix for this problem in the new version awesome!", "username": "Orr_Kislev" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
customData - bad base64
2020-08-16T20:34:56.272Z
customData - bad base64
2,097
https://www.mongodb.com/…f854636b161e.png
[]
[ { "code": "mongod --repair2020-08-10T14:33:31.890+0000 F REPL [initandlisten] This instance has been repaired and may contain modified replicated data that would not match other replica set members. To see your repaired data, start mongod without the --replSet option. When you are finished recovering your data and would like to perform a complete re-sync, please refer to the documentation here: https://docs.mongodb.com/manual/tutorial/resync-replica-set-member/\n2020-08-10T14:33:31.890+0000 F - [initandlisten] Fatal Assertion 50923 at src/mongo/db/repl/replication_coordinator_impl.cpp 529\n2020-08-10T14:33:31.890+0000 F - [initandlisten] \n\n***aborting after fassert() failure\n", "text": "Greetings!I’ve run into an issue where a tiny bit of data got corrupted on a shard and I had to repair it with mongod --repair. When starting the instance, it gives this error:Is there any way I can remove this “mark of uncleanliness” or somehow skip this check?image787×110 20.9 KBUnfortunately, I’m unable to go about this the correct way by resyncing a replica set member, because the replica set only consists of this single server. (Please don’t crucify me, I know this is not recommended, but the hardware/storage in our dev environment just can’t handle the doubling of I/O throughput from additional replicas.)The possibility of small amounts of missing/changed data is of little concern to me since this isn’t production data, and there are tens of billions of documents in the cluster. I’d just like to get this one back to it’s friends so it can be happy again. Thanks!!", "username": "The_Techromancer" }, { "code": "localsystem.replset--shardsvr--replSet> use local\n> db.system.replset.find({}, {_id: true})\n{ \"_id\" : \"shard18\"}\nrepaired> db.system.replset.update({_id: \"shard18\"}, {$set: {repaired: false}})\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\n", "text": "Coming back to answer this one myself. The attribute in question is stored inside the local database under the system.replset collection.", "username": "The_Techromancer" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Possible to Ignore "Repaired" Mark on Shard Instance?
2020-08-10T23:36:05.954Z
Possible to Ignore &ldquo;Repaired&rdquo; Mark on Shard Instance?
3,148
null
[ "php" ]
[ { "code": "find()php > $a = $db->selectCollection(\"image\")->find(array('Potnum' => 1), array('Potnum' => false, 'Dirpath' => true, 'Filename' => true))->toArray();\nphp > var_dump($a);\narray(1) {\n [0]=>\n object(MongoDB\\Model\\BSONDocument)#276 (1) {\n [\"storage\":\"ArrayObject\":private]=>\n array(4) {\n [\"_id\"]=>\n object(MongoDB\\BSON\\ObjectId)#273 (1) {\n [\"oid\"]=>\n string(24) \"5f3057ccd84aeb83c4872119\"\n }\n [\"Potnum\"]=>\n int(1)\n [\"Dirpath\"]=>\n string(11) \"a/burnished\"\n [\"Filename\"]=>\n string(16) \"watercarrier.png\"\n }\n }\n}\n", "text": "Very recent install. Excluding fields in find() doesn’t seem to work.\nI have tried every variation I can imagine of the following and still ‘Potnum’ is returned.", "username": "Jack_Woehr" }, { "code": "find()'projection'$a = $db->selectCollection(\"image\")->find(array('Potnum' => 1), array('projection' => array('Potnum' => false, 'Dirpath' => true, 'Filename' => true)))->toArray();\n", "text": "Answer found\nThe second argument to find() is an array of options.\nThe option limiting returned fields is called 'projection'.\nThe phrase in the original question should read:", "username": "Jack_Woehr" } ]
PHP MongoDB\Collection->find() exclude fields doesn't work?
2020-08-14T05:15:22.851Z
PHP MongoDB\Collection-&gt;find() exclude fields doesn&rsquo;t work?
3,040
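For comparison with the thread above, the same query written in the mongo shell, where the projection is likewise a separate document from the filter; the inclusion form is used here because a projection may only mix inclusion and exclusion for _id.

```js
db.image.find(
  { Potnum: 1 },                        // filter
  { _id: 1, Dirpath: 1, Filename: 1 }   // projection: return only these fields
).toArray()
```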
null
[ "data-modeling" ]
[ { "code": "", "text": "Hi guys,We are fairly experienced with data modeling in MongoDB. However, we will have new access patterns in future that are more BI-like/OLAP-like for some of our collections.Scenario:Naive approach:Questions:Cheers,\nManuel", "username": "Manuel_Reil" }, { "code": "", "text": "Hi @Manuel_Reil,I believe you should review the following blogs:https://www.mongodb.com/blog/post/time-series-data-and-mongodb-part-1-introductionhttps://www.mongodb.com/blog/post/time-series-data-and-mongodb-part-2-schema-design-best-practicesI would also consider looking into some of the patterns like bucket, tree, document versioning :A summary of all the patterns we've looked at in this seriesBest\nPavel", "username": "Pavel_Duchovny" } ]
Data modeling for OLAP-style over time trends (no IoT)
2020-08-14T12:00:44.440Z
Data modeling for OLAP-style over time trends (no IoT)
2,596
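A minimal sketch of the bucket-style layout referenced in the thread above, applied to an over-time trend query; all collection and field names are invented for illustration and would need to be adapted to the real entities and access patterns.

```js
// One document per entity per year; each periodic snapshot is appended to the bucket.
db.snapshots.insertOne({
  entityId: 42,
  year: 2020,
  monthly: [
    { month: 1, score: 17, itemCount: 3 },
    { month: 2, score: 21, itemCount: 4 }
  ]
})

// Trend over time for one entity: unwind the bucket and sort chronologically.
db.snapshots.aggregate([
  { $match: { entityId: 42 } },
  { $unwind: "$monthly" },
  { $sort: { year: 1, "monthly.month": 1 } },
  { $project: { _id: 0, year: 1, month: "$monthly.month", score: "$monthly.score" } }
])
```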
null
[]
[ { "code": "SyncManager.shared.errorHandler = { error, session in\n\n let syncError = error as! SyncError\n\n switch syncError.code {\n\n case .clientResetError:\n\n if let (path, clientResetToken) = syncError.clientResetInfo() {\n\n closeRealmSafely()\n\n saveBackupRealmPath(path)\n\n SyncSession.immediatelyHandleError(clientResetToken)\n\n }\n\n default:\n\n // Handle other errors...\n\n ()\n\n }\n\n}\n", "text": "Hi,I am going through the doc (https://docs.realm.io/sync/using-synced-realms/errors#client-reset) to handle client reset error and found it’s very confusing . I have two questions about the following code.First there is no method available to closeRealmsafely. can you please help me understand how can I close the realm safely?How can I backup and when I will use it?Should I skip the reset error because in documentation it’s mentions “if the client reset process is not manually initiated, it will instead automatically take place after the next time the app is launched, upon first accessing the SyncManager singleton. It is the app’s responsibility to persist the location of the backup copy if needed, so that the backup copy can be found later.”Below is the error handler sample code from the doc.", "username": "Uma_Tiwari" }, { "code": "", "text": "Cross post to the same question on SO Realm iOS: How to handle Client ResetIt’s a good idea to keep questions in one place so any answers or activity is focused and we don’t do double duty when responding.", "username": "Jay" } ]
How to handle Client Reset iOS
2020-08-13T20:23:46.101Z
How to handle Client Reset iOS
2,499
null
[]
[ { "code": "", "text": "I am quite new to the search in mongodb atlas, after some browsing documentation and blogs in the internet, everyone are creating the search index manually like going to the collection and creating it. But it is not efficient and I am building an app which adds collections to the data dynamically and can have many collections in the database. So, it would be hard to go and create the search index in the each and every collection. Is there a way to may this automatically without any manual work ?P.S: If there is something wrong or my question doesn’t meet community guidelines, sorry for that, I have contacted support, they said to ask in mongodb forum ", "username": "Sai_Varaprasad" }, { "code": "", "text": "Take a look at https://docs.atlas.mongodb.com/api/. May be there is an API to do it.", "username": "steevej" }, { "code": "", "text": "Thanks, for the reply, but couldn’t find the right one…still searching for the answer…", "username": "Sai_Varaprasad" }, { "code": "", "text": "", "username": "Andrew_Davidson" } ]
How to create the Full-Text atlas search index automatically for the collection, when collections are created?
2020-08-14T09:15:24.615Z
How to create the Full-Text atlas search index automatically for the collection, when collections are created?
1,530
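Following the API pointer in the thread above, Atlas Search index creation can be scripted whenever the application creates a new collection. A rough sketch of the request body is below; the endpoint path and payload shape are assumptions to verify against the current Atlas Admin API documentation, and the API itself authenticates with an HTTP-digest programmatic API key (not shown here).

```js
// Assumed endpoint (verify against the Atlas Admin API docs):
// POST https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{CLUSTER-NAME}/fts/indexes/
const newSearchIndex = {
  database: "myDatabase",                    // placeholder names
  collectionName: "newlyCreatedCollection",
  name: "default",
  // Dynamic mappings index all supported field types, so the definition does not
  // have to change as documents in the new collection evolve.
  mappings: { dynamic: true }
};

console.log(JSON.stringify(newSearchIndex, null, 2));
```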
null
[]
[ { "code": "", "text": "I want to use transactions in my project during development. But transactions require replica sets. I can run a local replica set for development if I have my db on my local machine. Is there any way to do it with M0 Sandbox? Do I have to upgrade to a paid plan for development?", "username": "Gregg_Squire" }, { "code": "M0", "text": "I want to use transactions in my project during development.Transactions require MongoDB server versions 4.0 or higher and a replica set setup. Atlas Clusters are replica sets (can also be sharded clusters). See:", "username": "Prasad_Saya" }, { "code": "", "text": "Yes you can definitely use transactions on an M0 free tier cluster.Cheers\n-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there any way to use transactions with M0 Sandbox?
2020-08-12T22:08:24.773Z
Is there any way to use transactions with M0 Sandbox?
2,107
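For reference on the thread above, a minimal multi-document transaction as it can be run from the mongo shell against any MongoDB 4.0+ replica set, which includes a free M0 Atlas cluster. Database, collection and field names are placeholders, and on server versions before 4.4 the collections must already exist before the transaction starts.

```js
const session = db.getMongo().startSession();
const orders = session.getDatabase("shop").orders;
const stock = session.getDatabase("shop").stock;

session.startTransaction({ readConcern: { level: "snapshot" }, writeConcern: { w: "majority" } });
try {
  orders.insertOne({ sku: "abc123", qty: 1 });
  stock.updateOne({ sku: "abc123" }, { $inc: { qty: -1 } });
  session.commitTransaction();   // both writes become visible together
} catch (e) {
  session.abortTransaction();    // neither write is applied
  throw e;
} finally {
  session.endSession();
}
```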
https://www.mongodb.com/…543dd707735a.png
[]
[ { "code": "", "text": "\nWhen I add a collection it looks like this to me, I tried to create a collection before adding it, but it looks like this too.", "username": "db_test" }, { "code": "", "text": "Hi Murilo,It looks like you were using the default placeholder “” from the connection string: the idea is the replace that whole string with what you want your DB name to be, e.g. soomething like “dbFoo” instead of \". Make sense?-Andrew", "username": "Andrew_Davidson" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Database name when add a collection
2020-08-12T20:05:23.861Z
Database name when add a collection
1,722
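To make the placeholder advice in the thread above concrete: the path segment after the host in the connection string is the database the connection defaults to. The host below is a made-up example, and dbFoo is where the real database name goes.

```js
const uri =
  "mongodb+srv://myUser:<password>@cluster0.abcde.mongodb.net/dbFoo?retryWrites=true&w=majority";
```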
null
[ "licensing" ]
[ { "code": "", "text": "Hello,\nI want to create a demo for mongo db field level encryption. Mongo DB field level encryption is only supported in enterprise version. Is there i can get and trial period access of MongoDB Enterprise Edition", "username": "Dhaval_Solanki" }, { "code": "", "text": "If you are self hosting you can download and use MongoDB EnterpriseOr you can just go with an Atlas cluster.Section 2. b. of the customer agreement allows evaluation and development:(b) Free Evaluation and Development. MongoDB grants you a royalty-free, nontransferable and nonexclusive license to use and reproduce the Software in your internal environment for evaluation and development purposes. You will not use the Software for any other purpose, including testing, quality assurance or production purposes without purchasing an Enterprise Advanced Subscription. We provide the free evaluation and development license of our Software on an “AS-IS” basis without any warranty.", "username": "chris" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Mongo Db Enterprise trial environment for developers
2020-08-14T13:42:45.962Z
Mongo Db Enterprise trial environment for developers
4,276
null
[ "pymodm-odm" ]
[ { "code": "", "text": "Hi,as the title says, is this project being worked on? I see some good pull requests and issues about features but nothing has been merged or committed in the last months. I find this library really useful (and much better performance-wise than mongoengine).", "username": "Martin_Basterrechea" }, { "code": "", "text": "Hi Martin,\nThank you for reaching out! Our Python team has been focused for the last few months on server version 4.4 feature support in our official driver, as well as some upcoming integrations that we’re really excited about. Unfortunately, I don’t think that this is going to change for us in the near term, and we are looking for community members who might be interested in taking a more active role in contributing to pymodm. If you would be interested please shoot me an email - it’s just my first name at mongodb.com.Rachelle", "username": "Rachelle" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is pymodm alive?
2020-06-22T08:33:56.824Z
Is pymodm alive?
3,105
null
[]
[ { "code": "", "text": "Raspberry Pi OS now has an ARM64 version\nhttps://downloads.raspberrypi.org/raspios_arm64/images/\nmore info: raspbian buster - Run 64-bit Raspberry Pi OS beta test version - Raspberry Pi Stack ExchangeI would love to run MongoDB 4.2 on RaspberryPi OS (arm64) 64 bit.\nRaspberry Pi OS is based on Debian.Currently MongoDB is only building ARM64 for older versions of Ubuntu. Ubuntu is a beginner focused OS, while more experienced Linux users generally use Debian.\nhttp://repo.mongodb.org/apt/ubuntu/dists/bionic/mongodb-org/4.2/multiverse/Please add MongoDB 4.2 ARM64 builds for Debian 10 Buster (that will work on Raspberry Pi OS 64 bit)\nhttp://repo.mongodb.org/apt/debian/dists/buster/mongodb-org/4.2/main/Having builds for ARM64 Debian 10 Buster will make it easier for Raspberry Pi OS industrial engineers and students to deploy MongoDB applications on RbPi and focus on MongoDB applications.\nCurrently there’s additional friction in that we’d have to manually build MongoDB on Debian 10 ARM64, every time there’s an update. Also the uncertainty regarding whether it will work or not further discourages MongoDB adoption.A lot of companies focus on students, because they’re the next generations of engineers. RaspberryPi OS is based on Debian and is the OS recommended by the RaspberryPi Foundation. They renamed it from Raspbian (a play on the words Raspberry and Debian) to “Raspberry Pi OS” to make it obvious that it’s the main (most supported, recommended/updated etc) OS. MongoDB would be wise to add builds for this!MongoDB 4.2 ARM 64 builds for Debian 10 Buster that works on Raspberry Pi OS 64 bit, come on, let’s go!", "username": "Joshua_Maserow" }, { "code": "", "text": "Hi @Joshua_Maserow -Adding a new officially supported platform is a fairly significant undertaking for us. We typically only add new platforms in new releases, so we are very unlikely to add support for Debian 10 ARM64 for v4.2. We don’t currently have Debian 10 ARM64 on the v4.4 roadmap either, and given that v4.4 is coming soon, I don’t expect that to change.I have flagged your request as we consider the roadmap for the next version of MongoDB. That though is unlikely to improve the situation on Raspberry Pi in the near term, unless that results in a further decision to backport that support to v4.4, which I think is unlikely.In your post you said that there was additional friction to manually build MongoDB on Debian for ARM64. Is that due to the performance of the Raspberry PI as a build host? 
Would it be helpful if we provided some instructions on how to do a Debian 10 ARM64 targeted build on a Debian 10 x86_64 system?Thanks,\nAndrew", "username": "Andrew_Morrow" }, { "code": "", "text": "I would be very interested in some instructions for a Debian 10 ARM64 targeted build to get mongodb 4.2+ running on my Raspberry Pi 4.", "username": "Kristof_Moors" }, { "code": "", "text": "@Kristof_Moors - Are you interested in doing a native build on the RPi4, or a cross build from a Debian 10 x86_64 machine?", "username": "Andrew_Morrow" }, { "code": "$ sudo apt-get install gcc-8-aarch64-linux-gnu g++-8-aarch64-linux-gnu\n$ sudo dpkg --add-architecture arm64\n$ sudo apt-get update\n$ sudo apt-get install libssl-dev:arm64 libcurl4-openssl-dev:arm64\n\n$ git clone -b r4.2.8 https://github.com/mongodb/mongo.git\n$ cd mongo\n\n# Consider using a virtualenv for this\n$ python3 -m pip install --user -r etc/pip/compile-requirements.txt\n\n$ python3 buildscripts/scons.py --ssl CC=/usr/bin/aarch64-linux-gnu-gcc-8 CXX=/usr/bin/aarch64-linux-gnu-g++-8 CCFLAGS=\"-march=armv8-a+crc -mtune=cortex-a72\" ./mongo{,s,d}\n-j NNmongomongodmongos$ sudo apt-get install gcc-8 g++-8\n$ sudo apt-get install libssl-dev libcurl4-openssl-dev\n\n$ git clone -b r4.2.8 https://github.com/mongodb/mongo.git\n$ cd mongo\n\n# Consider using a virtualenv for this\n$ python3 -m pip install --user -r etc/pip/compile-requirements.txt\n\n$ python3 buildscripts/scons.py --ssl CC=gcc-8 CXX=g++-8 CCFLAGS=\"-march=armv8-a+crc -mtune=cortex-a72\" ./mongo{,s,d}\n", "text": "@Kristof_Moors (and CC @Joshua_Maserow) -Here are instructions for doing a Debian 10 ARM build of MongoDB Community v4.2.8 for RPi4 from a Debian 10 x86-64 machine. Note that since this is built from the community sources, no enterprise features will be available:The above build took about 25 minutes for me on a 16 vCPU Debian 10 x86_64 machine, so be prepared to wait a little while. By default it will use all cores to build, so if you want to not saturate your machine, you can add an explicit -j N argument to limit to N cores.If the build completes OK, you will find mongo, mongod, and mongos binaries in the root of the tree. The binaries will be very large because they are unstripped, but you can strip them if you want. You can then transfer what you need to the RPi4. I don’t have one to test on, so I can’t promise the binaries work: please try them out and let us know.If you want to try building on the RPi4 itself as a native build, you can simplify the above instructions to something like the following, which is untested:But I have no idea how long that will take to build, and I don’t have an RPi4 on which to try it out.Hope this helps,\nAndrewPS Any reason you want v4.2 not the newly released v4.4? The v4.4 release has some build improvements that can make the process a little simpler. Please let me know if you want v4.4 specific instructions.", "username": "Andrew_Morrow" }, { "code": "", "text": "I would like to do a native build, if possible.", "username": "Kristof_Moors" }, { "code": "", "text": "@Kristof_Moors - OK, I included instructions for that in my most recent reply as well. I don’t have an RPi4, so I can’t actually test it, but it looks pretty reasonable. Please give it a try and let me know if it works for you. Note that depending on the speed and parallelism of the RPi4 it may take a long long time.", "username": "Andrew_Morrow" }, { "code": "", "text": "Thank you for the instructions. 
I don’t want 4.2 specifically, but I would like to run something on my RPi4 that requires v4.2 or higher. If you could share instructions for 4.4, I would be happy to try it. Do you have an estimate for the total file size of the (native) build?", "username": "Kristof_Moors" }, { "code": "sudo apt-get install gcc-8-aarch64-linux-gnu g++-8-aarch64-linux-gnu\n$ sudo dpkg --add-architecture arm64\n$ sudo apt-get update\n$ sudo apt-get install libssl-dev:arm64 libcurl4-openssl-dev:arm64\n\n$ git clone -b r4.4.0 https://github.com/mongodb/mongo.git\n$ cd mongo\n\n# Consider using a virtualenv for this\n$ python3 -m pip install --user -r etc/pip/compile-requirements.txt\n\n$ python3 buildscripts/scons.py --ssl CC=/usr/bin/aarch64-linux-gnu-gcc-8 CXX=/usr/bin/aarch64-linux-gnu-g++-8 CCFLAGS=\"-march=armv8-a+crc -mtune=cortex-a72\" --install-mode=hygienic --install-action=hardlink --separate-debug archive-core{,-debug}\nsudo apt-get install gcc-8 g++-8\n$ sudo apt-get install libssl-dev libcurl4-openssl-dev\n\n$ git clone -b r4.4.0 https://github.com/mongodb/mongo.git\n$ cd mongo\n\n# Consider using a virtualenv for this\n$ python3 -m pip install --user -r etc/pip/compile-requirements.txt\n\n$ python3 buildscripts/scons.py --ssl CC=gcc-8 CXX=g++-8 CCFLAGS=\"-march=armv8-a+crc -mtune=cortex-a72\" --install-mode=hygienic --install-action=hardlink --separate-debug archive-core{,-debug}\n--install-mode=hygienicDESTDIRPREFIXscons ... DESTDIR=$HOME/tmp PREFIX=/usr/local ...--install-action=hardlinkDESTDIR/PREFIX--separate-debug.debug-debugarchive-corearchive-core-debugtgzbuild.debug[install|archive]-x[-debug]xshellmongodmongodmongosmongosserversmongodmongoscoreserversshell--link-model=dynamic", "text": "The instructions for v4.4 would be very similar, so I’m just manually making adjustments here and apologies in advance for any errors, but the cross build would look like:And the native build would look like:The only real changes here in, in both cases, are to the build system flags on the last step. The new flags are as follows:--install-mode=hygienic: Uses an autoconf like layout. You can control where files are installed to with the DESTDIR and PREFIX arguments to SCons. So scons ... DESTDIR=$HOME/tmp PREFIX=/usr/local ... would do what you expect.--install-action=hardlink: Uses hardlinks when installing files to DESTDIR/PREFIX to avoid extra time spent copying.--separate-debug: Enables automatically stripping debug information out into separate .debug files and packaging them into -debug tarballs.Then the new target names archive-core and archive-core-debug will produce tgz files under the build directory containing the server and shell binaries and .debug files respectively. There are several different targets depending on what you want to build. Generally, they look like [install|archive]-x[-debug] for x in shell (the mongo shell), mongod (just the mongod server), mongos (just the mongos server), servers (both mongod and mongos), and core (servers and shell).Regarding binary size, the binary packages probably aren’t that large, but the debug info packages can be several gigabytes. So you are going to need some space, probably 10-20 GB for the whole build. This is one of the reasons I’m suggesting a cross build might be a good idea.You could also experiment with the --link-model=dynamic flag which might result in a somewhat smaller footprint. Note that builds made with this flag are not production quality - they are primarily used for testing. 
But Debian 10 ARM / RPi4 isn’t a supported platform anyway.", "username": "Andrew_Morrow" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Add MongoDB 4.2 ARM64 builds for Raspberry Pi OS 64 bit (Debian Buster)
2020-06-07T12:49:10.189Z
Add MongoDB 4.2 ARM64 builds for Raspberry Pi OS 64 bit (Debian Buster)
30,309
null
[]
[ { "code": "{\n _id: ...,\n index: 1,\n name: \"Alfred\",\n workingFor: [ 1, 2 ] //these numbers in the array are referencing to \"index\" of Collection Company.\n}\n{\n _id: ...,\n index: 1,\n name: \"Company A\"\n},\n{\n _id: ...,\n index: 2,\n name: \"Company B\"\n}\n{\n _id: ...,\n index: 1,\n name: \"Alfred\",\n workingFor: [\n {\n index: 1,\n name: \"Company A\"\n },\n {\n index: 2,\n name: \"Company B\"\n },\n ]\n}\n", "text": "Hello, I am fairly new to MongoDB. Allow me to elaborate. I got something like this:Collection Freelancer got 1 Document:Collection Company got 2 Documents:Upon querying, is it possible for MongoDB to automatically resolve for the index? For example, if I query for Alfred, instead of having data appearing like above, have it appear like this:On Node.js, with Mongoose, I am thinking I could find for Alfred on Collection Freelancer, then using the [ 1, 2 ] of workingFor, I could find for Company A and Company B, then remap the dictionary object to be what I want.But before start implementing, does MongoDB have a built-in method for something like this already? This is so I don’t have to re-invent the wheel.Thank you,\nIono.", "username": "iono_sphere" }, { "code": "", "text": "Yes.Look at https://docs.mongodb.com/manual/aggregation/You might be interested to take the course MongoDB Courses and Trainings | MongoDB University", "username": "steevej" }, { "code": "", "text": "Hello @iono_sphere,The MongoDB Aggregation Framework allows you to query two collections with referencing fields as a “join” operation. This is accomplished using the $lookup stage of an aggregation query.", "username": "Prasad_Saya" } ]
Can MongoDB replace referencing indices with actual Documents?
2020-08-14T10:18:45.528Z
Can MongoDB replace referencing indices with actual Documents?
1,380
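A sketch of the $lookup mentioned in the thread above, written against the example documents from the question; the collection names freelancer and company are assumptions and should be adjusted to the real ones.

```js
db.freelancer.aggregate([
  { $match: { name: "Alfred" } },
  {
    $lookup: {
      from: "company",            // referenced collection
      localField: "workingFor",   // array of company "index" values
      foreignField: "index",
      as: "workingFor"            // replaces the numbers with the matched company documents
    }
  },
  { $project: { index: 1, name: 1, "workingFor.index": 1, "workingFor.name": 1 } }
])
```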
https://www.mongodb.com/…026c2ca35b50.png
[]
[ { "code": "", "text": "image874×154 6.98 KBI am looking to use the count in mongo db to count the number of students qualified please see image above\ndb.docs.find.count(“qualifications”); isnt working", "username": "Marie_Dillane" }, { "code": "db.docs.find.count(“qualifications”)db.docs.find().count()db.docs.find(query).count()qualificationsdb.docs.find( { qualifications: { $exists: true } } ).count()db.docs.countDocuments( { qualifications: { $exists: true } } )", "text": "Hello @Marie_Dillane, welcome to the community.The method you are trying db.docs.find.count(“qualifications”) doesn’t work. This is because the method you are using is the cursor.count, and it doesn’t take a parameter.You can use the $exists to find documents that have the field qualifications using one of these queries:db.docs.find( { qualifications: { $exists: true } } ).count()\ndb.docs.countDocuments( { qualifications: { $exists: true } } )", "username": "Prasad_Saya" } ]
Count documents
2020-08-14T10:17:52.814Z
Count documents
2,720
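One nuance to the answers in the thread above: if qualifications is an array, $exists alone also counts documents where that array is empty. If "qualified" should mean "has at least one qualification", checking for the first array element tightens the filter.

```js
// Counts only documents where the qualifications array has at least one element.
db.docs.countDocuments({ "qualifications.0": { $exists: true } })
```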
null
[]
[ { "code": "{\n \"channel\" : {\n \"_id\" : \"1\",\n \"name\" : \"switch\",\n \"formats\" : [ \n {\n \"formatName\" : \"ISO8583-93\",\n \"description\" : \"ISO Format\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ],\n \"messages\" : [ \n {\n \"name\" : \"balanceEnquiry\",\n \"alias\" : \"balanceEnquiry\",\n \"description\" : \"balanceEnquiry Request : Sender Bank -> MessageHub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }, \n {\n \"name\" : \"DE_1\",\n \"alias\" : \"Primary Bitmap\",\n \"lenght\" : \"8\",\n \"description\" : \"Primary Bitmap\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\"\n }\n ]\n }, \n {\n \"name\" : \"fundTransfer\",\n \"alias\" : \"creditTransfer\",\n \"description\" : \"Funds Transfer Request : Sender Bank -> Message Hub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }, \n {\n \"name\" : \"DE_1\",\n \"alias\" : \"Primary Bitmap\",\n \"lenght\" : \"8\",\n \"description\" : \"Primary Bitmap\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\"\n }\n ]\n }\n ]\n }, \n {\n \"formatName\" : \"ISO20022\",\n \"description\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\",\n \"required\" : true\n }, \n {\n \"name\" : \"1\",\n \"alias\" : \"Bitmap(s)\",\n \"lenght\" : \"8\",\n \"description\" : \"\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\",\n \"required\" : true\n }\n ]\n }\n ]\n }\n}\n {\n \"name\" : \"balanceEnquiry\",\n \"alias\" : \"balanceEnquiry\",\n \"description\" : \"balanceEnquiry Request : Sender Bank -> MessageHub\",\n \"messageIdentification\" : \"\",\n \"messageType\" : \"\",\n \"messageFormat\" : \"\",\n \"fields\" : [ \n {\n \"name\" : \"DE_0\",\n \"alias\" : \"MTI\",\n \"lenght\" : \"4\",\n \"description\" : \"\",\n \"type\" : \"FIXED\",\n \"dataType\" : \"\"\n }, \n {\n \"name\" : \"DE_1\",\n \"alias\" : \"Primary Bitmap\",\n \"lenght\" : \"8\",\n \"description\" : \"Primary Bitmap\",\n \"type\" : \"BIN\",\n \"dataType\" : \"\"\n }\n ]\n }\n", "text": "I have data in MongoDB like below:I want to fetch element of array “messages” by its “name”. Depending on given condition where “channel.name”:“switch” and “channel.formats.formatName”:“ISO8583-93” and “channel.formats.messages.name”:“balanceEnquiry”. 
I am expecting only below part as a outputHow to get this specified result by MongoDB query?", "username": "Erica_01" }, { "code": " db.coll.aggregate([{$match: {\n“channel.name”:“switch”, “channel.formats.formatName”:“ISO8583-93” ,“channel.formats.messages.name”:“balanceEnquiry”\n}},\n{\n $project: {\n results: {\n $filter: {\n input: \"$channel.formats.messages\",\n as: \"message\",\n cond: { $eq: [ \"$$message.name\", \"balanceEnquiry\" ] }\n }\n }\n }\n },\n{ $replaceRoot: { newRoot: \"$results\" } }]);\n", "text": "Hi @Erica_01,I think the following aggregation should work:Haven’t tested the full logic, but this should help.Best regards\nPavel", "username": "Pavel_Duchovny" }, { "code": "", "text": "Hello @Erica_01, welcome to the community.Here is some information to start with about querying an array and projecting specific fields from the returned document.", "username": "Prasad_Saya" }, { "code": "db.test1.aggregate([\n {\n // first, filter the documents, that contain\n // fields with necessary values\n $match: {\n 'channel.name': 'switch',\n 'channel.formats.formatName': 'ISO8583-93',\n 'channel.formats.messages.name': 'balanceEnquiry',\n },\n },\n // the following $unwind stages will convert your arrays\n // to objects, so it would be easier to filter the messages\n {\n $unwind: '$channel.formats',\n },\n {\n $unwind: '$channel.formats.messages',\n },\n {\n // filter messages here\n $match: {\n 'channel.formats.messages.name': 'balanceEnquiry',\n },\n },\n {\n // returns only message(s)\n $replaceWith: '$channel.formats.messages',\n },\n]).pretty();\n", "text": "Hello, @Erica_01! Welcome to the community!Ok, so you need to query the document and modify the output.\nTo achieve that you need to use and aggregation.Example:", "username": "slava" }, { "code": "", "text": "Thank you so much Slava!", "username": "Erica_01" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
How to fetch only specified array element from nested array present in document
2020-08-13T12:36:19.827Z
How to fetch only specified array element from nested array present in document
10,661
null
[ "python", "production", "motor-driver" ]
[ { "code": "", "text": "We are pleased to announce the 2.2.0 release of Motor - MongoDB’s Asynchronous Python Driver. This release adds support for MongoDB 4.4.See the changelog for a high-level summary of what’s new and improved or see the Motor 2.2.0 release notes in JIRA for the complete list of resolved issues.Thank you to everyone who contributed to this release!", "username": "Prashant_Mital" }, { "code": "", "text": "", "username": "system" } ]
Motor 2.2.0 Released
2020-08-14T01:17:25.820Z
Motor 2.2.0 Released
2,805
null
[]
[ { "code": "user = {\n '_id': email, \n 'username': username,\n 'password': hashed,\n 'images': {\n 'tags': []\n }\n }\nimage = {\n '_id': _id,\n 'tags': tags\n }\nmongo.db.users.update_one(\n filter = { '_id': session['email']},\n update = { '$addToSet': {'images': image}}\n)\n", "text": "I’m trying to add images along with tags to my collection. The schema is like this:When trying to add a photo to an already existing user I did this:But I got an error: “Cannot apply $addToSet to non-array field. Field named ‘images’ has non-array type object”. Can someone help?", "username": "cosmina" }, { "code": "var img1 = {\n _id: 1,\n url: 'http://example.com/pics/1',\n tags: ['nature', 'africa'],\n};\n\nvar img2 = {\n _id: 2,\n url: 'http://example.com/pics/2',\n tags: ['animals', 'dogs'],\n};\n\n\ndb.test1.insertOne({\n _id: 'A',\n images: img1,\n});\ndb.test1.update(\n // query the document(s), you need to update\n {\n _id: 'A'\n },\n [\n {\n $addFields: {\n images: {\n $cond: {\n if: {\n // if field is array\n $eq: [\n { $type: '$images' },\n 'array'\n ]\n },\n then: {\n // then add new image to it\n $setUnion: [\n '$images',\n [img2]\n ]\n },\n else: {\n // else convert it to array\n // and add image to it\n $setUnion: [\n ['$images'],\n [img2]\n ]\n }\n }\n },\n }\n },\n ]\n);\ndb.test1.updateMany(\n // leave empty object here,\n // so every document will be patched\n {},\n [\n {\n $addFields: {\n images: {\n $cond: {\n if: {\n // if field is array\n $eq: [\n { $type: '$images' },\n 'array'\n ]\n },\n then: '$images', // then do not touch it\n else: ['$images'], // else convert it to an array\n }\n }\n }\n }\n ]\n);\ndb.test1.updateOne(\n { _id: 'A' },\n {\n $addToSet: {\n images: img2,\n }\n }\n);\n", "text": "Hello, @cosmina! Welcome to the community! I got an error: “Cannot apply $addToSet to non-array field. Field named ‘images’ has non-array type object”. Can someone help?You get this error, because you’re trying to use object as an array.\nTo insert your new document into ‘images’ field, you must convert it to an array first.To illustrate it, assume, you have the following dataset:You can do the type-conversion for ‘images’ prop for each document separately, at the same time you insert a new image to it:But IMHO, it is better to migrate schema for all the documents in your collection with this operation:After this migration, the insertion into ‘images’ array will be as easy as:Both operations use updates with aggregation pipeline.", "username": "slava" }, { "code": "", "text": "Thank you so much for the detailed response! ", "username": "cosmina" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Adding into embedded document
2020-08-13T20:23:32.701Z
Adding into embedded document
5,420
null
[]
[ { "code": "", "text": "https://stackoverflow.com/questions/63389598/mongodb-how-select-like-select-id-size-h-from-inventory-where-status-ai need the value only itsize.hSQL like :SELECT size.h from inventory WHERE status = “A”and the MongoDB shell how can i do it ?", "username": "AtlantisDe" }, { "code": "db.inventory.find(\n { status: 'A' }, // where\n { _id: true, 'size.h': true } // select\n);\n[\n { \"_id\" : ObjectId(\"5f355cdb920ca5b143d09356\"), \"size\" : { \"h\" : 14 } },\n { \"_id\" : ObjectId(\"5f355cdb920ca5b143d09357\"), \"size\" : { \"h\" : 8.5 } },\n { \"_id\" : ObjectId(\"5f355cdb920ca5b143d0935a\"), \"size\" : { \"h\" : 10 } }\n]\n", "text": "Hello, @AtlantisDe!You can use this query:It will produce the following output:Checkout some pages in the official MongoDB documentation:I strongly recommend you to take some free Basic MongoDB course to be able to resolve such tasks easily ", "username": "slava" }, { "code": "", "text": "@slava Thank u very much …i got it…\ntks… ", "username": "AtlantisDe" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
MongoDB how select like : SELECT _id, size.h from inventory WHERE status = “A”
2020-08-13T12:34:44.379Z
MongoDB how select like : SELECT _id, size.h from inventory WHERE status = “A”
1,937
null
[ "java" ]
[ { "code": "", "text": "I know how to insert documents from Quick Tour, but how to get the document from db when collection.find().first().subscribe(subscriber); returns Publisher ? Also if all actions are performed async how to make it sync ?", "username": "Zdziszkee_N_A" }, { "code": "collection.find().first().subscribe(subscriber);Publishercollection.find().first()org.reactivestreams.Publisher<TResult>PublisherPublisherPrintSubscriberfindSubscriberHelpersSubscriberHelpers.java", "text": "Hello @Zdziszkee_N_A,… how to get the document from db when collection.find().first().subscribe(subscriber); returns Publisher?collection.find().first() returns a org.reactivestreams.Publisher<TResult>.The Publisher’s subscibe(Subscriber<? super T> s) method “Requests Publisher to start streaming data.” - it doesn’t return a Publisher.The tutorial’s examples have a subscriber defined; use that code. There is an example PrintSubscriber, and this can be used to print the find query result.The SubscriberHelpers is a utility class shown with the examples. It is used as shown in the Quick Tour. The source code for the SubscriberHelpers.java can be found on GitHub: mongo-java-driver-reactivestreams/examples/tour/src/main/tour at master · mongodb/mongo-java-driver-reactivestreams · GitHub. You can use it with your program.Also if all actions are performed async how to make it sync ?I guess, you can try using the synchronous Java APIs for those .", "username": "Prasad_Saya" }, { "code": "", "text": "Okay but how to get the Document object itself from that ? Casting ?", "username": "Zdziszkee_N_A" }, { "code": "subscriber = new PrintDocumentSubscriber();\ncollection.find().first().subscribe(subscriber);\nsubscriber.await();\nPrintDocumentSubscriberSubscriberHelpersSubscriberHelpers", "text": "The Quick Tour - Query the Collection - Find the First Document in a Collection has the code snippet:When this code gets executed, it prints the first document from the collection. The print happens using the PrintDocumentSubscriber (which has a description: “A Subscriber that prints the json version of each document”) from the SubscriberHelpers class (I have already provided the link in the previous post).Casting ?I don’t think casting is way to get your document. It looks like you have to use the SubscriberHelpers class API for the tutorial, or build your own helpers / code to do the tasks you have on your mind.", "username": "Prasad_Saya" }, { "code": "", "text": "Hi,The reactive streams API is fairly low level API and was designed to be a foundation for async stream processing in Java. There are libraries that extend the reactive streams API and make them much more user friendly:I would suggest using a higher level library to aid the use of Publishers by making them easier to consume.Ross", "username": "Ross_Lawley" }, { "code": "collection.find().first()Publisher<Document>Maybecollection.find().first()MongoClient mongoClient = MongoClients.create();\nMongoDatabase database = mongoClient.getDatabase(\"test\");\nMongoCollection<Document> collection = database.getCollection(\"books\");\nPublisher<Document> first = collection.find().first();\n\n// The method returns the first document from the query _or_ an empty document.\nDocument maybeDoc = Maybe.fromPublisher(first)\n .blockingGet(new Document());\n\n// Do something with the document\nSystem.out.println(maybeDoc);\nMaybe.blockingGetblockingGet()null", "text": "… how to get the document from db when collection.find().first().subscribe(subscriber); returns Publisher ? 
Also if all actions are performed async how to make it sync ?Hi @Zdziszkee_N_A , revisiting the post with example code using RxJava. It is much easier to work with the RxJava APIs as mentioned by @Ross_Lawley.This is how you would query using RxJava. The Maybe class of RxJava allows you to get the queried document (from the collection.find().first() method from Reactive Streams Java Driver, which returns a Publisher<Document>).Maybe is a flow with no items, exactly one item or an error. This is suitable in this case, as the collection.find().first() may return a publisher with a document, or if there are no documents in the collection, returns an empty publisher.Example Code:NOTE: In the above code, the Maybe.blockingGet method blocks execution of the next statement, until it completes. Note that there is also a blockingGet() without parameters, which returns a null if there are no documents in the collection.", "username": "Prasad_Saya" } ]
MongoDB Java Reactive Streams help
2020-07-30T17:12:46.850Z
MongoDB Java Reactive Streams help
6,850
null
[]
[ { "code": "{\n \"idSensor\": 3,\n \"idDevice\": 55,\n \"dateTime\": \n {\n \"instant\": \"2020-07-24T10:00:40Z\",\n \"offset\": \"+02:00\"\n },\n \"data\": \n {\n \"inicio\": \"2020-07-24T12:00:40+02:00\",\n \"fin\": \"2020-07-24T12:00:52+02:00\",\n \"archivo\": \" EKT_captaREVOLUTION_VERT_1_corto.mp4\",\n \"tipo\": \"video\"\n }\n}\n{\n \"idSensor\": 2,\n \"idDevice\": 48,\n \"dateTime\": \n {\n \"instant\": \"2020-06-09T14:01:58.521Z\",\n \"offset\": \"+02:00\"\n },\n \"data\": \n {\n \"BeginTime\": \"2020-06-09T16:01:58.521891+02:00\",\n \"CountingLines\": \n [{\n \"Direction\": \"EnteredLeaving\",\n \"LineID\": 0\n }],\n \"FaceInfo\": \n {\n \"Age\": 32.03,\n \"Emotion\": \"SURPRISE\",\n \"IsDetected\": true,\n \"MaleProbability\": 0.91,\n \"gazeTime\": 0.36,\n \"numGazes\": 4\n },\n \"ImageSize\": \n {\n \"Height\": 1080,\n \"Width\": 1920\n },\n \"LookingDuration\": 1,\n \"PersonID\": 102340,\n \"ReIDInfo\": {\"NumReIDs\": 1},\n \"RoiInfo\": {\"RoiDuration\": 10.909090909090908},\n \"SensorID\": 51,\n \"SocialDistance\": [],\n \"TrackingDuration\": 11.64,\n \"Trajectory\": null,\n \"direction\": null,\n \"id\": 1,\n \"roiName\": 1,\n \"roiType\": 1\n }\n}\nvar pipeline = \n[\n\t{\n\t\t\"$match\": //Deja pasar sólo los archivos de emisiones y de reacciones\n\t\t{\n\t\t \"$or\": [{\"idSensor\": 2}, {\"idSensor\": 3}]\n\t\t}\n\t},\n {\n \"$lookup\": //Une cada documento con los demás -incluyendo consigo mismo-, que agrupa en un array con tantas posiciones como documentos\n {\n \"from\": \"sensorsData\",\n \"localField\": \"idDevice\",\n \"foreignField\": \"idDevice\",\n \"as\": \"array\"\n }\n },\n {\n \"$unwind\": \"$array\" //Descompone los archivos en función de las posiciones del array\n },\n {\n \"$match\": //Deja pasar sólo aquellos archivos con estructura emisión-reacción y elimina las demás combinaciones: emisión-emisión, reacción-reacción y reacción-emisión (redundantes)\n {\n \"$and\": \n [\n {\"data.inicio\": {\"$exists\": true}}, \n {\"array.data.BeginTime\": {\"$exists\": true}}\n ]\n }\n },\n {\n \"$addFields\": //Creación de los parámetros temporales\n {\n \"dtBroadcastStart\": {\"$toDate\": \"$data.inicio\"},\n \"dtBroadcastEnd\": {\"$toDate\": \"$data.fin\"},\n \"dtTrackingStart\": {\"$toDate\": \"$array.data.BeginTime\"},\n\t\t\t\"dtTrackingEnd\": {\"$add\": [{\"$toDate\": \"$array.data.BeginTime\"}, {\"$multiply\": [\"$array.data.TrackingDuration\", 1000]}]}\n }\n },\n {\n \"$match\": //Filtrado de los documentos que cumplen las condiciones de solapamiento temporal\n {\n \"$expr\":\n {\n \"$and\": \n [\n {\"$lt\": [\"$dtBroadcastStart\", \"$dtTrackingEnd\"]},\n {\"$gt\": [\"$dtBroadcastEnd\", \"$dtTrackingStart\"]}\n ]\n }\n }\n },\n {\n \"$project\": //Selección final de parámetros\n {\n \"_id\": 0,\n \"idDevice\": \"$idDevice\",\n \"naBroadcast\": \"$data.archivo\",\n\t\t\t\"naType\": \"$data.tipo\",\n \"dtBroadcastStart\": 1,\n \"dtBroadcastEnd\": 1,\n \"qtBroadcastDurationS\": {\"$divide\": [{\"$subtract\": [{\"$toDate\": \"$dtBroadcastEnd\"}, {\"$toDate\": \"$dtBroadcastStart\"}]}, 1000]}\n \"naWeekday\": \n\t\t\t{\n\t\t\t \"$switch\": \n\t\t\t {\n\t\t\t \"branches\": \n\t\t\t [\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": \"$data.inicio\"}}, 1]}, \"then\": \"Domingo\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": \"$data.inicio\"}}, 2]}, \"then\": \"Lunes\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": \"$data.inicio\"}}, 3]}, \"then\": \"Martes\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": 
\"$data.inicio\"}}, 4]}, \"then\": \"Miércoles\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": \"$data.inicio\"}}, 5]}, \"then\": \"Jueves\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": \"$data.inicio\"}}, 6]}, \"then\": \"Viernes\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$dayOfWeek\": {\"$toDate\": \"$data.inicio\"}}, 7]}, \"then\": \"Sábado\"}\n\t\t\t ],\n\t\t\t \"default\": \"Fecha incorrecta\"\n\t\t\t }\n\t\t\t},\n\t\t\t\"naMonth\":\n\t\t\t{\n\t\t\t \"$switch\": \n\t\t\t {\n\t\t\t \"branches\": \n\t\t\t [\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"01\"]}, \"then\": \"Enero\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"02\"]}, \"then\": \"Febrero\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"03\"]}, \"then\": \"Marzo\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"04\"]}, \"then\": \"Abril\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"05\"]}, \"then\": \"Mayo\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"06\"]}, \"then\": \"Junio\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"07\"]}, \"then\": \"Julio\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"08\"]}, \"then\": \"Agosto\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"09\"]}, \"then\": \"Septiembre\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"10\"]}, \"then\": \"Octubre\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"11\"]}, \"then\": \"Noviembre\"},\n\t\t\t {\"case\": {\"$eq\": [{\"$substr\": [\"$data.inicio\", 5, 2]}, \"12\"]}, \"then\": \"Diciembre\"}\n\t\t\t ],\n\t\t\t \"default\": \"Fecha incorrecta\"\n\t\t\t }\n\t\t\t},\n \"idPerson\": \"$array.data.PersonID\",\n\t\t\t\"dtTrackingStart\": 1,\n\t\t\t\"dtTrackingEnd\": 1,\n\t\t\t\"qtFaceDetected\": \n\t\t\t{\n\t\t\t\t\"$cond\": \n\t\t\t\t{\n\t\t\t\t\t\"if\": {\"$eq\": [\"$array.data.FaceInfo.IsDetected\", true]}, \n\t\t\t\t\t\"then\": 1, \n\t\t\t\t\t\"else\": 0\n\t\t\t\t}\n\t\t\t},\n\t\t\t\"qtMaleProbability\": \"$array.data.FaceInfo.MaleProbability\",\n\t\t\t\"qtAge\": \"$array.data.FaceInfo.Age\",\n\t\t\t\"naEmotion\": \"$array.data.FaceInfo.Emotion\",\n\t\t\t\"qtGaze\": \"$array.data.FaceInfo.numGazes\",\n\t\t\t\"qtGazeDurationS\": \"$array.data.FaceInfo.gazeTime\",\n\t\t\t\"qtFaceDurationS\": \"$array.data.LookingDuration\",\n\t\t\t\"qtTrackingDurationS\": \"$array.data.TrackingDuration\",\n\t\t\t\"qtReId\": \"$array.data.ReIDInfo.NumReIDs\"\n }\n\t}\n]\n\ndb.sensorsData.aggregate(pipeline)\n{\n \"$match\": //Filtrado de los documentos que cumplen las condiciones de solapamiento temporal\n {\n \"$expr\":\n {\n \"$and\": \n [\n {\"$lt\": [\"$dtBroadcastStart\", \"$dtTrackingEnd\"]},\n {\"$gt\": [\"$dtBroadcastEnd\", \"$dtTrackingStart\"]}\n ]\n }\n }\n},\n", "text": "Hi there:Some weeks ago I was trying to get a solution to this problem:I got it, but the data format of the files has changed a little bit:Broadcasting files:Reaction files:So my query:It works, but it seems this stage:It’s too much when querying against the production DB, and it gets stuck. 
Any idea about how to optimize my query?Thanks in advance!", "username": "Javier_Blanco" }, { "code": " db.sensorsData.explain().aggregate(pipeline);\ndb.sensorData.getIndexes();\ndb.sensorData.stats();\n", "text": "Hi @Javier_Blanco,Why your query lookup from sensorData to sensorData ? Is all data in the same collection?If you need your query execution analysis please provide:Thanks,\nPavel", "username": "Pavel_Duchovny" }, { "code": "{\n \t\"stages\" : [\n \t\t{\n \t\t\t\"$cursor\" : {\n \t\t\t\t\"query\" : {\n \t\t\t\t\t\"$and\" : [\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$or\" : [\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"idSensor\" : 2\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"idSensor\" : 3\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"data.inicio\" : {\n \t\t\t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t}\n \t\t\t\t\t]\n \t\t\t\t},\n \t\t\t\t\"fields\" : {\n \t\t\t\t\t\"array.data.BeginTime\" : 1,\n \t\t\t\t\t\"array.data.FaceInfo.Age\" : 1,\n \t\t\t\t\t\"array.data.FaceInfo.Emotion\" : 1,\n \t\t\t\t\t\"array.data.FaceInfo.IsDetected\" : 1,\n \t\t\t\t\t\"array.data.FaceInfo.MaleProbability\" : 1,\n \t\t\t\t\t\"array.data.FaceInfo.gazeTime\" : 1,\n \t\t\t\t\t\"array.data.FaceInfo.numGazes\" : 1,\n \t\t\t\t\t\"array.data.LookingDuration\" : 1,\n \t\t\t\t\t\"array.data.PersonID\" : 1,\n \t\t\t\t\t\"array.data.ReIDInfo.NumReIDs\" : 1,\n \t\t\t\t\t\"array.data.TrackingDuration\" : 1,\n \t\t\t\t\t\"data.archivo\" : 1,\n \t\t\t\t\t\"data.fin\" : 1,\n \t\t\t\t\t\"data.inicio\" : 1,\n \t\t\t\t\t\"data.tipo\" : 1,\n \t\t\t\t\t\"dtBroadcastEnd\" : 1,\n \t\t\t\t\t\"dtBroadcastStart\" : 1,\n \t\t\t\t\t\"dtTrackingEnd\" : 1,\n \t\t\t\t\t\"dtTrackingStart\" : 1,\n \t\t\t\t\t\"idDevice\" : 1,\n \t\t\t\t\t\"_id\" : 0\n \t\t\t\t},\n \t\t\t\t\"queryPlanner\" : {\n \t\t\t\t\t\"plannerVersion\" : 1,\n \t\t\t\t\t\"namespace\" : \"mercury.sensorsData\",\n \t\t\t\t\t\"indexFilterSet\" : false,\n \t\t\t\t\t\"parsedQuery\" : {\n \t\t\t\t\t\t\"$and\" : [\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"$or\" : [\n \t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"$eq\" : 2\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"$eq\" : 3\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"data.inicio\" : {\n \t\t\t\t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t]\n \t\t\t\t\t},\n \t\t\t\t\t\"queryHash\" : \"7B4F946C\",\n \t\t\t\t\t\"planCacheKey\" : \"1E9CCC0D\",\n \t\t\t\t\t\"winningPlan\" : {\n \t\t\t\t\t\t\"stage\" : \"FETCH\",\n \t\t\t\t\t\t\"filter\" : {\n \t\t\t\t\t\t\t\"data.inicio\" : {\n \t\t\t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t},\n \t\t\t\t\t\t\"inputStage\" : {\n \t\t\t\t\t\t\t\"stage\" : \"OR\",\n \t\t\t\t\t\t\t\"inputStages\" : [\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : 1,\n \t\t\t\t\t\t\t\t\t\t\"idDevice\" : 1,\n \t\t\t\t\t\t\t\t\t\t\"dateTime\" : 1\n \t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\"indexName\" : \"idSensor_1_idDevice_1_dateTime_1\",\n \t\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n \t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : [ ],\n \t\t\t\t\t\t\t\t\t\t\"idDevice\" : [ ],\n \t\t\t\t\t\t\t\t\t\t\"dateTime\" : [ ]\n \t\t\t\t\t\t\t\t\t},\n 
\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n \t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n \t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : [\n \t\t\t\t\t\t\t\t\t\t\t\"[3.0, 3.0]\"\n \t\t\t\t\t\t\t\t\t\t],\n \t\t\t\t\t\t\t\t\t\t\"idDevice\" : [\n \t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n \t\t\t\t\t\t\t\t\t\t],\n \t\t\t\t\t\t\t\t\t\t\"dateTime\" : [\n \t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n \t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : 1\n \t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\"indexName\" : \"idSensor_1\",\n \t\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n \t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : [ ]\n \t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n \t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n \t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\t\t\t\t\t\"idSensor\" : [\n \t\t\t\t\t\t\t\t\t\t\t\"[2.0, 2.0]\"\n \t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t}\n \t\t\t\t\t},\n \t\t\t\t\t\"rejectedPlans\" : [\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n \t\t\t\t\t\t\t\"filter\" : {\n \t\t\t\t\t\t\t\t\"data.inicio\" : {\n \t\t\t\t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\"inputStage\" : {\n \t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\t\t\t\t\"idSensor\" : 1,\n \t\t\t\t\t\t\t\t\t\"idDevice\" : 1,\n \t\t\t\t\t\t\t\t\t\"dateTime\" : 1\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"indexName\" : \"idSensor_1_idDevice_1_dateTime_1\",\n \t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n \t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\t\t\t\t\"idSensor\" : [ ],\n \t\t\t\t\t\t\t\t\t\"idDevice\" : [ ],\n \t\t\t\t\t\t\t\t\t\"dateTime\" : [ ]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"isUnique\" : false,\n \t\t\t\t\t\t\t\t\"isSparse\" : false,\n \t\t\t\t\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\t\t\t\t\"idSensor\" : [\n \t\t\t\t\t\t\t\t\t\t\"[2.0, 2.0]\",\n \t\t\t\t\t\t\t\t\t\t\"[3.0, 3.0]\"\n \t\t\t\t\t\t\t\t\t],\n \t\t\t\t\t\t\t\t\t\"idDevice\" : [\n \t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n \t\t\t\t\t\t\t\t\t],\n \t\t\t\t\t\t\t\t\t\"dateTime\" : [\n \t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n \t\t\t\t\t\t\t\"filter\" : {\n \t\t\t\t\t\t\t\t\"data.inicio\" : {\n \t\t\t\t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\"inputStage\" : {\n \t\t\t\t\t\t\t\t\"stage\" : \"OR\",\n \t\t\t\t\t\t\t\t\"inputStages\" : [\n \t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"idSensor\" : 1,\n \t\t\t\t\t\t\t\t\t\t\t\"idDevice\" : 1,\n \t\t\t\t\t\t\t\t\t\t\t\"dateTime\" : 1\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\"indexName\" : \"idSensor_1_idDevice_1_dateTime_1\",\n \t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n 
\t\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"idSensor\" : [ ],\n \t\t\t\t\t\t\t\t\t\t\t\"idDevice\" : [ ],\n \t\t\t\t\t\t\t\t\t\t\t\"dateTime\" : [ ]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n \t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n \t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"idSensor\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"[2.0, 2.0]\"\n \t\t\t\t\t\t\t\t\t\t\t],\n \t\t\t\t\t\t\t\t\t\t\t\"idDevice\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n \t\t\t\t\t\t\t\t\t\t\t],\n \t\t\t\t\t\t\t\t\t\t\t\"dateTime\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"[MinKey, MaxKey]\"\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\t\t\t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"idSensor\" : 1\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\"indexName\" : \"idSensor_1\",\n \t\t\t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n \t\t\t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"idSensor\" : [ ]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\"isUnique\" : false,\n \t\t\t\t\t\t\t\t\t\t\"isSparse\" : false,\n \t\t\t\t\t\t\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\t\t\t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"idSensor\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"[3.0, 3.0]\"\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"stage\" : \"FETCH\",\n \t\t\t\t\t\t\t\"filter\" : {\n \t\t\t\t\t\t\t\t\"data.inicio\" : {\n \t\t\t\t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\"inputStage\" : {\n \t\t\t\t\t\t\t\t\"stage\" : \"IXSCAN\",\n \t\t\t\t\t\t\t\t\"keyPattern\" : {\n \t\t\t\t\t\t\t\t\t\"idSensor\" : 1\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"indexName\" : \"idSensor_1\",\n \t\t\t\t\t\t\t\t\"isMultiKey\" : false,\n \t\t\t\t\t\t\t\t\"multiKeyPaths\" : {\n \t\t\t\t\t\t\t\t\t\"idSensor\" : [ ]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"isUnique\" : false,\n \t\t\t\t\t\t\t\t\"isSparse\" : false,\n \t\t\t\t\t\t\t\t\"isPartial\" : false,\n \t\t\t\t\t\t\t\t\"indexVersion\" : 2,\n \t\t\t\t\t\t\t\t\"direction\" : \"forward\",\n \t\t\t\t\t\t\t\t\"indexBounds\" : {\n \t\t\t\t\t\t\t\t\t\"idSensor\" : [\n \t\t\t\t\t\t\t\t\t\t\"[2.0, 2.0]\",\n \t\t\t\t\t\t\t\t\t\t\"[3.0, 3.0]\"\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t}\n \t\t\t\t\t]\n \t\t\t\t}\n \t\t\t}\n \t\t},\n \t\t{\n \t\t\t\"$lookup\" : {\n \t\t\t\t\"from\" : \"sensorsData\",\n \t\t\t\t\"as\" : \"array\",\n \t\t\t\t\"localField\" : \"idDevice\",\n \t\t\t\t\"foreignField\" : \"idDevice\",\n \t\t\t\t\"unwinding\" : {\n \t\t\t\t\t\"preserveNullAndEmptyArrays\" : false\n \t\t\t\t},\n \t\t\t\t\"matching\" : {\n \t\t\t\t\t\"data.BeginTime\" : {\n \t\t\t\t\t\t\"$exists\" : true\n \t\t\t\t\t}\n \t\t\t\t}\n \t\t\t}\n \t\t},\n \t\t{\n \t\t\t\"$addFields\" : {\n \t\t\t\t\"dtBroadcastStart\" : {\n \t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t}\n \t\t\t\t\t}\n \t\t\t\t},\n \t\t\t\t\"dtBroadcastEnd\" : {\n \t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\"input\" : \"$data.fin\",\n \t\t\t\t\t\t\"to\" : 
{\n \t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t}\n \t\t\t\t\t}\n \t\t\t\t},\n \t\t\t\t\"dtTrackingStart\" : {\n \t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\"input\" : \"$array.data.BeginTime\",\n \t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t}\n \t\t\t\t\t}\n \t\t\t\t},\n \t\t\t\t\"dtTrackingEnd\" : {\n \t\t\t\t\t\"$add\" : [\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\"input\" : \"$array.data.BeginTime\",\n \t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$multiply\" : [\n \t\t\t\t\t\t\t\t\"$array.data.TrackingDuration\",\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"$const\" : 1000\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t}\n \t\t\t\t\t]\n \t\t\t\t}\n \t\t\t}\n \t\t},\n \t\t{\n \t\t\t\"$match\" : {\n \t\t\t\t\"$expr\" : {\n \t\t\t\t\t\"$and\" : [\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$lt\" : [\n \t\t\t\t\t\t\t\t\"$dtBroadcastStart\",\n \t\t\t\t\t\t\t\t\"$dtTrackingEnd\"\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$gt\" : [\n \t\t\t\t\t\t\t\t\"$dtBroadcastEnd\",\n \t\t\t\t\t\t\t\t\"$dtTrackingStart\"\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t}\n \t\t\t\t\t]\n \t\t\t\t}\n \t\t\t}\n \t\t},\n \t\t{\n \t\t\t\"$project\" : {\n \t\t\t\t\"_id\" : false,\n \t\t\t\t\"dtTrackingStart\" : true,\n \t\t\t\t\"dtTrackingEnd\" : true,\n \t\t\t\t\"dtBroadcastEnd\" : true,\n \t\t\t\t\"dtBroadcastStart\" : true,\n \t\t\t\t\"idDevice\" : \"$idDevice\",\n \t\t\t\t\"naBroadcast\" : \"$data.archivo\",\n \t\t\t\t\"naType\" : \"$data.tipo\",\n \t\t\t\t\"qtBroadcastDurationS\" : {\n \t\t\t\t\t\"$divide\" : [\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$subtract\" : [\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\"input\" : \"$dtBroadcastEnd\",\n \t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\"input\" : \"$dtBroadcastStart\",\n \t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$const\" : 1000\n \t\t\t\t\t\t}\n \t\t\t\t\t]\n \t\t\t\t},\n \t\t\t\t\"naWeekday\" : {\n \t\t\t\t\t\"$switch\" : {\n \t\t\t\t\t\t\"branches\" : [\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 1\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Domingo\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n 
\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Lunes\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 3\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Martes\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 4\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Miércoles\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Jueves\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 6\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Viernes\"\n \t\t\t\t\t\t\t\t}\n 
\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$dayOfWeek\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\"date\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$convert\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"input\" : \"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\"to\" : {\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"date\"\n \t\t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : 7\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Sábado\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t],\n \t\t\t\t\t\t\"default\" : {\n \t\t\t\t\t\t\t\"$const\" : \"Fecha incorrecta\"\n \t\t\t\t\t\t}\n \t\t\t\t\t}\n \t\t\t\t},\n \t\t\t\t\"naMonth\" : {\n \t\t\t\t\t\"$switch\" : {\n \t\t\t\t\t\t\"branches\" : [\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"01\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Enero\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"02\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Febrero\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"03\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Marzo\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n 
\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"04\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Abril\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"05\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Mayo\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"06\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Junio\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"07\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Julio\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"08\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Agosto\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"09\"\n \t\t\t\t\t\t\t\t\t\t}\n 
\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Septiembre\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"10\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Octubre\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"11\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Noviembre\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\"case\" : {\n \t\t\t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$substrBytes\" : [\n \t\t\t\t\t\t\t\t\t\t\t\t\"$data.inicio\",\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 5\n \t\t\t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\t\t\"$const\" : 2\n \t\t\t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\t\t\"$const\" : \"12\"\n \t\t\t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t\t\t]\n \t\t\t\t\t\t\t\t},\n \t\t\t\t\t\t\t\t\"then\" : {\n \t\t\t\t\t\t\t\t\t\"$const\" : \"Diciembre\"\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t}\n \t\t\t\t\t\t],\n \t\t\t\t\t\t\"default\" : {\n \t\t\t\t\t\t\t\"$const\" : \"Fecha incorrecta\"\n \t\t\t\t\t\t}\n \t\t\t\t\t}\n \t\t\t\t},\n \t\t\t\t\"idPerson\" : \"$array.data.PersonID\",\n \t\t\t\t\"qtFaceDetected\" : {\n \t\t\t\t\t\"$cond\" : [\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$eq\" : [\n \t\t\t\t\t\t\t\t\"$array.data.FaceInfo.IsDetected\",\n \t\t\t\t\t\t\t\t{\n \t\t\t\t\t\t\t\t\t\"$const\" : true\n \t\t\t\t\t\t\t\t}\n \t\t\t\t\t\t\t]\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$const\" : 1\n \t\t\t\t\t\t},\n \t\t\t\t\t\t{\n \t\t\t\t\t\t\t\"$const\" : 0\n \t\t\t\t\t\t}\n \t\t\t\t\t]\n \t\t\t\t},\n \t\t\t\t\"qtMaleProbability\" : \"$array.data.FaceInfo.MaleProbability\",\n \t\t\t\t\"qtAge\" : \"$array.data.FaceInfo.Age\",\n \t\t\t\t\"naEmotion\" : \"$array.data.FaceInfo.Emotion\",\n \t\t\t\t\"qtGaze\" : \"$array.data.FaceInfo.numGazes\",\n \t\t\t\t\"qtGazeDurationS\" : \"$array.data.FaceInfo.gazeTime\",\n \t\t\t\t\"qtFaceDurationS\" : \"$array.data.LookingDuration\",\n \t\t\t\t\"qtTrackingDurationS\" : \"$array.data.TrackingDuration\",\n \t\t\t\t\"qtReId\" : \"$array.data.ReIDInfo.NumReIDs\"\n \t\t\t}\n \t\t}\n \t],\n \t\"serverInfo\" : {\n \t\t\"host\" : \"mercury\",\n \t\t\"port\" : 27017,\n \t\t\"version\" : \"4.2.5\",\n \t\t\"gitVersion\" : 
\"2261279b51ea13df08ae708ff278f0679c59dc32\"\n \t},\n \t\"ok\" : 1\n }\n/* 1 */\n{\n\t\"v\" : 2,\n\t\"key\" : {\n\t\t\"_id\" : 1\n\t},\n\t\"name\" : \"_id_\",\n\t\"ns\" : \"mercury.sensorsData\"\n},\n\n/* 2 */\n{\n\t\"v\" : 2,\n\t\"key\" : {\n\t\t\"idSensor\" : 1,\n\t\t\"idDevice\" : 1,\n\t\t\"dateTime\" : 1\n\t},\n\t\"name\" : \"idSensor_1_idDevice_1_dateTime_1\",\n\t\"ns\" : \"mercury.sensorsData\",\n\t\"background\" : false\n},\n\n/* 3 */\n{\n\t\"v\" : 2,\n\t\"key\" : {\n\t\t\"idSensor\" : 1\n\t},\n\t\"name\" : \"idSensor_1\",\n\t\"ns\" : \"mercury.sensorsData\",\n\t\"background\" : true\n},\n\n/* 4 */\n{\n\t\"v\" : 2,\n\t\"key\" : {\n\t\t\"idDevice\" : 1\n\t},\n\t\"name\" : \"idDevice_1\",\n\t\"ns\" : \"mercury.sensorsData\",\n\t\"background\" : true\n}", "text": "Hi, Pavel; thanks for your answer.Yes, both kind of files are within the same collection.db.sensorsData.explain().aggregate(pipeline);db.sensorsData.getIndexes();", "username": "Javier_Blanco" }, { "code": "{\n\t\"ns\" : \"mercury.sensorsData\",\n\t\"size\" : 355821356,\n\t\"count\" : 720559,\n\t\"avgObjSize\" : 493,\n\t\"storageSize\" : 2510925824,\n\t\"capped\" : false,\n\t\"wiredTiger\" : {\n\t\t\"metadata\" : {\n\t\t\t\"formatVersion\" : 1\n\t\t},\n\t\t\"creationString\" : \"access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u\",\n\t\t\"type\" : \"file\",\n\t\t\"uri\" : \"statistics:table:collection-78--2020613348714777639\",\n\t\t\"LSM\" : {\n\t\t\t\"bloom filter false positives\" : 0,\n\t\t\t\"bloom filter hits\" : 0,\n\t\t\t\"bloom filter misses\" : 0,\n\t\t\t\"bloom filter pages evicted from cache\" : 0,\n\t\t\t\"bloom filter pages read into cache\" : 0,\n\t\t\t\"bloom filters in the LSM tree\" : 0,\n\t\t\t\"chunks in the LSM tree\" : 0,\n\t\t\t\"highest merge generation in the LSM tree\" : 0,\n\t\t\t\"queries that could have benefited from a Bloom filter that did not exist\" : 0,\n\t\t\t\"sleep for LSM checkpoint throttle\" : 0,\n\t\t\t\"sleep for LSM merge throttle\" : 0,\n\t\t\t\"total size of bloom filters\" : 0\n\t\t},\n\t\t\"block-manager\" : {\n\t\t\t\"allocations requiring file extension\" : 98572,\n\t\t\t\"blocks allocated\" : 126701,\n\t\t\t\"blocks freed\" : 114876,\n\t\t\t\"checkpoint size\" : 56659968,\n\t\t\t\"file allocation unit size\" : 4096,\n\t\t\t\"file bytes available for reuse\" : 2454249472,\n\t\t\t\"file magic number\" : 120897,\n\t\t\t\"file major version number\" : 1,\n\t\t\t\"file size in bytes\" : 2510925824,\n\t\t\t\"minor version number\" : 0\n\t\t},\n\t\t\"btree\" : {\n\t\t\t\"btree 
checkpoint generation\" : 116101,\n\t\t\t\"column-store fixed-size leaf pages\" : 0,\n\t\t\t\"column-store internal pages\" : 0,\n\t\t\t\"column-store variable-size RLE encoded values\" : 0,\n\t\t\t\"column-store variable-size deleted values\" : 0,\n\t\t\t\"column-store variable-size leaf pages\" : 0,\n\t\t\t\"fixed-record size\" : 0,\n\t\t\t\"maximum internal page key size\" : 368,\n\t\t\t\"maximum internal page size\" : 4096,\n\t\t\t\"maximum leaf page key size\" : 2867,\n\t\t\t\"maximum leaf page size\" : 32768,\n\t\t\t\"maximum leaf page value size\" : 67108864,\n\t\t\t\"maximum tree depth\" : 3,\n\t\t\t\"number of key/value pairs\" : 0,\n\t\t\t\"overflow pages\" : 0,\n\t\t\t\"pages rewritten by compaction\" : 0,\n\t\t\t\"row-store empty values\" : 0,\n\t\t\t\"row-store internal pages\" : 0,\n\t\t\t\"row-store leaf pages\" : 0\n\t\t},\n\t\t\"cache\" : {\n\t\t\t\"bytes currently in the cache\" : 397732134,\n\t\t\t\"bytes dirty in the cache cumulative\" : 16410583403,\n\t\t\t\"bytes read into cache\" : 352945299,\n\t\t\t\"bytes written from cache\" : 4447470099,\n\t\t\t\"checkpoint blocked page eviction\" : 0,\n\t\t\t\"data source pages selected for eviction unable to be evicted\" : 74,\n\t\t\t\"eviction walk passes of a file\" : 13,\n\t\t\t\"eviction walk target pages histogram - 0-9\" : 0,\n\t\t\t\"eviction walk target pages histogram - 10-31\" : 2,\n\t\t\t\"eviction walk target pages histogram - 128 and higher\" : 0,\n\t\t\t\"eviction walk target pages histogram - 32-63\" : 6,\n\t\t\t\"eviction walk target pages histogram - 64-128\" : 5,\n\t\t\t\"eviction walks abandoned\" : 0,\n\t\t\t\"eviction walks gave up because they restarted their walk twice\" : 13,\n\t\t\t\"eviction walks gave up because they saw too many pages and found no candidates\" : 0,\n\t\t\t\"eviction walks gave up because they saw too many pages and found too few candidates\" : 0,\n\t\t\t\"eviction walks reached end of tree\" : 26,\n\t\t\t\"eviction walks started from root of tree\" : 13,\n\t\t\t\"eviction walks started from saved location in tree\" : 0,\n\t\t\t\"hazard pointer blocked page eviction\" : 74,\n\t\t\t\"in-memory page passed criteria to be split\" : 899,\n\t\t\t\"in-memory page splits\" : 448,\n\t\t\t\"internal pages evicted\" : 0,\n\t\t\t\"internal pages split during eviction\" : 0,\n\t\t\t\"leaf pages split during eviction\" : 53,\n\t\t\t\"modified pages evicted\" : 509,\n\t\t\t\"overflow pages read into cache\" : 0,\n\t\t\t\"page split during eviction deepened the tree\" : 0,\n\t\t\t\"page written requiring cache overflow records\" : 0,\n\t\t\t\"pages read into cache\" : 2994,\n\t\t\t\"pages read into cache after truncate\" : 1,\n\t\t\t\"pages read into cache after truncate in prepare state\" : 0,\n\t\t\t\"pages read into cache requiring cache overflow entries\" : 0,\n\t\t\t\"pages requested from the cache\" : 5689344859,\n\t\t\t\"pages seen by eviction walk\" : 4412,\n\t\t\t\"pages written from cache\" : 120997,\n\t\t\t\"pages written requiring in-memory restoration\" : 0,\n\t\t\t\"tracked dirty bytes in the cache\" : 0,\n\t\t\t\"unmodified pages evicted\" : 0\n\t\t},\n\t\t\"cache_walk\" : {\n\t\t\t\"Average difference between current eviction generation when the page was last considered\" : 0,\n\t\t\t\"Average on-disk page image size seen\" : 0,\n\t\t\t\"Average time in cache for pages that have been visited by the eviction server\" : 0,\n\t\t\t\"Average time in cache for pages that have not been visited by the eviction server\" : 0,\n\t\t\t\"Clean pages currently in cache\" : 0,\n\t\t\t\"Current 
eviction generation\" : 0,\n\t\t\t\"Dirty pages currently in cache\" : 0,\n\t\t\t\"Entries in the root page\" : 0,\n\t\t\t\"Internal pages currently in cache\" : 0,\n\t\t\t\"Leaf pages currently in cache\" : 0,\n\t\t\t\"Maximum difference between current eviction generation when the page was last considered\" : 0,\n\t\t\t\"Maximum page size seen\" : 0,\n\t\t\t\"Minimum on-disk page image size seen\" : 0,\n\t\t\t\"Number of pages never visited by eviction server\" : 0,\n\t\t\t\"On-disk page image sizes smaller than a single allocation unit\" : 0,\n\t\t\t\"Pages created in memory and never written\" : 0,\n\t\t\t\"Pages currently queued for eviction\" : 0,\n\t\t\t\"Pages that could not be queued for eviction\" : 0,\n\t\t\t\"Refs skipped during cache traversal\" : 0,\n\t\t\t\"Size of the root page\" : 0,\n\t\t\t\"Total number of pages currently in cache\" : 0\n\t\t},\n\t\t\"compression\" : {\n\t\t\t\"compressed page maximum internal page size prior to compression\" : 4096,\n\t\t\t\"compressed page maximum leaf page size prior to compression \" : 131072,\n\t\t\t\"compressed pages read\" : 2994,\n\t\t\t\"compressed pages written\" : 114713,\n\t\t\t\"page written failed to compress\" : 0,\n\t\t\t\"page written was too small to compress\" : 6284\n\t\t},\n\t\t\"cursor\" : {\n\t\t\t\"bulk loaded cursor insert calls\" : 0,\n\t\t\t\"cache cursors reuse count\" : 23117,\n\t\t\t\"close calls that result in cache\" : 0,\n\t\t\t\"create calls\" : 133,\n\t\t\t\"insert calls\" : 923701,\n\t\t\t\"insert key and value bytes\" : 3698369282,\n\t\t\t\"modify\" : 0,\n\t\t\t\"modify key and value bytes affected\" : 0,\n\t\t\t\"modify value bytes modified\" : 0,\n\t\t\t\"next calls\" : 41743564417,\n\t\t\t\"open cursor count\" : 4,\n\t\t\t\"operation restarted\" : 0,\n\t\t\t\"prev calls\" : 2,\n\t\t\t\"remove calls\" : 203142,\n\t\t\t\"remove key bytes removed\" : 730203,\n\t\t\t\"reserve calls\" : 0,\n\t\t\t\"reset calls\" : 3967697491,\n\t\t\t\"search calls\" : 431822594326,\n\t\t\t\"search near calls\" : 349251038,\n\t\t\t\"truncate calls\" : 0,\n\t\t\t\"update calls\" : 0,\n\t\t\t\"update key and value bytes\" : 0,\n\t\t\t\"update value size change\" : 0\n\t\t},\n\t\t\"reconciliation\" : {\n\t\t\t\"dictionary matches\" : 0,\n\t\t\t\"fast-path pages deleted\" : 0,\n\t\t\t\"internal page key bytes discarded using suffix compression\" : 309317,\n\t\t\t\"internal page multi-block writes\" : 2867,\n\t\t\t\"internal-page overflow keys\" : 0,\n\t\t\t\"leaf page key bytes discarded using prefix compression\" : 0,\n\t\t\t\"leaf page multi-block writes\" : 3220,\n\t\t\t\"leaf-page overflow keys\" : 0,\n\t\t\t\"maximum blocks required for a page\" : 1,\n\t\t\t\"overflow values written\" : 0,\n\t\t\t\"page checksum matches\" : 130720,\n\t\t\t\"page reconciliation calls\" : 9474,\n\t\t\t\"page reconciliation calls for eviction\" : 460,\n\t\t\t\"pages deleted\" : 456\n\t\t},\n\t\t\"session\" : {\n\t\t\t\"object compaction\" : 0\n\t\t},\n\t\t\"transaction\" : {\n\t\t\t\"update conflicts\" : 0\n\t\t}\n\t},\n\t\"nindexes\" : 4,\n\t\"indexBuilds\" : [ ],\n\t\"totalIndexSize\" : 38014976,\n\t\"indexSizes\" : {\n\t\t\"_id_\" : 7380992,\n\t\t\"idSensor_1_idDevice_1_dateTime_1\" : 22368256,\n\t\t\"idSensor_1\" : 4153344,\n\t\t\"idDevice_1\" : 4112384\n\t},\n\t\"scaleFactor\" : 1,\n\t\"ok\" : 1\n}", "text": "db.sensorsData.stats();", "username": "Javier_Blanco" }, { "code": "", "text": "Forgot to tag you, @Pavel_Duchovny. 
Thanks in advance for your answer.", "username": "Javier_Blanco" }, { "code": "\"$match\": // Only lets through the broadcast and reaction files\n\t\t{\n\t\t \"idSensor\" : { $in : [2,3]}\n\t\t}\n", "text": "Hi @Javier_Blanco,I think that an $in operator might be better for you, since the $or forces an unneeded index merge:In general, it seems that your data model might benefit from embedding the devices which are supposed to be linked, as opposed to having the $lookup.Another idea might be to use $filter instead of unwind and $match.Best\nPavel", "username": "Pavel_Duchovny" }, { "code": "$in$or$match$match", "text": "Hi, @Pavel_Duchovny:Nothing really changes with $in instead of $or; the query still goes on forever -or at least for a very long time-. The problem lies within the second $match stage; by commenting it out, the client just needs a little more than two seconds to prompt back the outcome. But that $match stage is crucial…", "username": "Javier_Blanco" } ]
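A hedged follow-up to the exchange above: because the slow second $match only compares converted dates from the joined documents, one option worth testing is a pipeline-form $lookup that applies the same $expr overlap conditions during the join instead of after $unwind. The field names (data.inicio, data.fin, data.BeginTime, data.TrackingDuration) are taken from the pipeline shown earlier; whether the $expr equality on idDevice can use an index depends on the server version, so treat this as a sketch rather than a drop-in fix.

    // sketch only: replaces the plain $lookup + $unwind + post-join $match
    { $lookup: {
        from: "sensorsData",
        let: {
          devId: "$idDevice",
          bStart: { $toDate: "$data.inicio" },
          bEnd: { $toDate: "$data.fin" }
        },
        pipeline: [
          { $match: { $expr: { $and: [
            { $eq: [ "$idDevice", "$$devId" ] },
            { $lt: [ "$$bStart", { $add: [ { $toDate: "$data.BeginTime" },
                                           { $multiply: [ "$data.TrackingDuration", 1000 ] } ] } ] },
            { $gt: [ "$$bEnd", { $toDate: "$data.BeginTime" } ] }
          ] } } }
        ],
        as: "array"
    } }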
Determining time overlapping between two different kinds of files from the same collection (II)
2020-07-24T12:11:21.765Z
Determining time overlapping between two different kinds of files from the same collection (II)
2,961
null
[ "announcement" ]
[ { "code": "", "text": "As of August 13, 2020, the MongoDB Perl driver and related libraries have reached end of life and are no longer supported by MongoDB. See the August 2019 deprecation notice for rationale.If members of the community wish to continue development, they are welcome to fork the code under the terms of the Apache 2 license and release it under a new namespace. Specifications and test files for MongoDB drivers and libraries are published in an open repository: mongodb/specifications.", "username": "David_Golden" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB Perl Driver End of Life
2020-08-13T15:29:59.267Z
MongoDB Perl Driver End of Life
1,989
null
[ "queries" ]
[ { "code": " {\n\n \"highLevel\":{\n\n \"hl_code\": 123,\n\n \"text\" : \"headache\"\n\n },\n\n \"highLevel\":{\n\n \"hl_code\": 124,\n\n \"text\" : \"breathing\"\n\n }\n\n }\n\n {\n\n \"lowLevel\":{\n\n \"ll_Code\": 345,\n\n \"text\" : \"aspirin\"\n\n },\n\n \"lowLevel\":{\n\n \"ll_Code\": 243,\n\n \"text\" : \"advil\"\n\n }\n\n }\n\n {\n\n \"high_low\":{\n\n \"hl_code\" : 123,\n\n \"ll_code\" : 345,\n\n },\n\n \n\n \"high_low\":{\n\n \"hl_code\" : 123,\n\n \"ll_code\" : 243,\n\n },\n\n \"high_low\":{\n\n \"hl_code\" : 124,\n\n \"ll_code\" : 243,\n\n }\n\n }\n {\n\n \"highLevel\":{\n\n \"hl_code\": 123,\n\n \"text\" : \"headache\",\n\n \"lowLevel\":[{\n\n \"ll_Code\": 243,\n\n \"text\" : \"advil\"\n\n },{\n\n \"ll_Code\": 345,\n\n \"text\" : \"aspirin\"\n\n }]\n\n }\n\n }\n", "text": "I am having trouble writing a mongo query to achieve the following:result:", "username": "Supriya_Bansal" }, { "code": "/*\n * Requires the MongoDB Node.js Driver\n * https://mongodb.github.io/node-mongodb-native\n */\n\n// aggregation pipeline definition\n// runs on highLevel collection\nconst agg = [\n {\n '$lookup': { // first join with high_low to get all the ll_codes\n 'from': 'high_low', \n 'localField': 'hl_code', \n 'foreignField': 'hl_code', \n 'as': 'lowLevels' // temporary field\n }\n }, {\n '$lookup': { // now join with lowLevel\n 'from': 'lowLevel', \n 'localField': 'lowLevels.ll_code', \n 'foreignField': 'll_Code', \n 'as': 'lowLevel'\n }\n }, { // remove temporary field\n '$unset': [\n 'lowLevels'\n ]\n }\n];\n\n// sample code\nMongoClient.connect(\n 'mongodb://localhost:27017/?readPreference=primary',\n { useNewUrlParser: true, useUnifiedTopology: true },\n function(connectErr, client) {\n assert.equal(null, connectErr);\n const coll = client.db('testDb').collection('highLevel');\n coll.aggregate(agg, (cmdErr, result) => {\n assert.equal(null, cmdErr);\n });\n client.close();\n });", "text": "This just needs a double $lookup stage. For example", "username": "PBeliy" }, { "code": "aggregate", "text": "aggregateThis was really helpful!! thank you!", "username": "Supriya_Bansal" }, { "code": "", "text": "Hello, @Supriya_Bansalbased on the data you provided the solution from @PBeliy is great an works fine.\nIf you like you can describe your use case here, since which a lager amount of document this will not be a super fast query. There are various methods to get this fixed by changing you data model. A suggestion depends on you use case and how you access your data. Depending on this there is a good chance to completely get rid of the $lookups.\nThis video provides a short overview on many-to-many relations. Potential Patterns could be external reference, subset, … The blog post Building with Patterns provides a quick overview about standard patterns.Cheers,\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "Thank you for the response!!\nMy use case is as follows:\nI have been given around 10 raw json files that are either related to each other in a many-may or in a one-many relationship. 
Hierarchy is something as follows:highlevelgroup.json\nhighlevelgroup_highlevel.json\nhighlevel.json\nhighlevel_preferredlevel.json\npreferredlevel.jsonRequirement is: Browse through the hierarchy, perform quick searches and support multiple search criteria.I am planning to remove the intermediary files and have the following structure:highlevelgroup (contains nested high level)\nhighlevel (contains nested preferred level)In this way I can sort of achieve hierarchy and if I need to perform searches I can look into the aggregate document.I would love to hear ideas on how to deal with this scenario.", "username": "Supriya_Bansal" }, { "code": "", "text": "Hello @Supriya_Bansal, the actual use case is still unclear to me, however your notes trigger some signals. The most important thing to do BEFORE you apply any schema is to identify your workload (what is accessed, how, and how often, incl. the amounts of data), then you want to define the references, and in the final step you apply a pattern. So I am a little bit reluctant to suggest a solution since we miss quite a bit of the basics. But I would like to support you in finding the best approach.To get deeper into the data modeling I suggest attending the free MongoDB University class M320 Data Modeling. You may also check out the attached overview (screenshot omitted).\nSome more thoughts on your use case:\nI understand that you have different levels which relate to each other. There might also be a potential tree pattern. This can work well together with a recursive browse through the levels → $graphLookup\nDepending on the type of data, even embedding of e.g. a group can be a pattern combined with external references (ideally you make the reference value the _id of the referenced collection).Plenty of ideas, but as mentioned this is step 3. First get the workload and the references, best take the M320 class. Then you will look at this from a different perspective.Cheers\nMichael", "username": "michael_hoeller" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
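To make the $graphLookup suggestion above concrete, here is a minimal sketch; the collection names (highlevelgroup, highlevel) and field names (hl_code, parent_code) are assumptions for illustration, not the actual schema:

    db.highlevelgroup.aggregate([
      { $graphLookup: {
          from: "highlevel",             // assumed child collection
          startWith: "$hl_code",         // assumed code stored on the group document
          connectFromField: "hl_code",
          connectToField: "parent_code", // assumed parent reference on each level
          as: "descendants",
          maxDepth: 3
      } }
    ])

This walks the hierarchy recursively on the server, which is one way to support the browse requirement without a chain of $lookup stages.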
MongoQuery for many-many
2020-08-11T22:21:50.832Z
MongoQuery for many-many
1,531
null
[ "aggregation", "queries" ]
[ { "code": "Model.aggregate(\n [\n { $match : \n { '_Id' : \n {\n $in : ids\n }\n }\n } ,\n { $group : \n { _id : '$roomId' ,\n maxdate: { $max: \"$date\"},\n }\n },\n {$sort: { maxdate: -1} },\n {$skip: skip},\n {$limit: limitNum }\n ]\n", "text": "When using MongoDB's $in clause with aggregate, does it have any max limit on the number of arguments?for exampleIn the ids array, how many ids can I pass?\nIf there is any limit, can I use a hashing technique in order to pass huge queries?", "username": "Mordechai_Ben_Zechar" }, { "code": "", "text": "There is a 16 MB limit for BSON documents, but I don't know if it applies to queries. That said, it's going to be slow. If I understand it right, you are sending your ids array over the network each time you run a query - which will take many ms or even seconds, depending on how fast the connection is.\nConsider saving the ids in your database and using $elemMatch or any other array query or $in (aggregation expression). You can also add an index for your ids.", "username": "PBeliy" }, { "code": "", "text": "Hi, and thank you for your reply. I tested it with a 5 MB query document and it takes several seconds (23).\nHow can I understand what the bottleneck was?\nI'm using the Atlas service and there is nothing in the profiler.\nIs there any way to profile the networking and the query time in mongo?\nBTW, is there any option to “stream” the query since I need only the first “10” results?\nThank you", "username": "Mordechai_Ben_Zechar" } ]
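One low-effort way to answer the bottleneck question above (a sketch; the collection name and variables stand in for the application's real values, and executionStats output for aggregate requires a reasonably recent server):

    var ids = [ ObjectId(), ObjectId() ];   // stand-in for the real, very large array
    var skip = 0, limitNum = 10;
    db.model.explain("executionStats").aggregate([
      { $match: { _id: { $in: ids } } },
      { $group: { _id: "$roomId", maxdate: { $max: "$date" } } },
      { $sort: { maxdate: -1 } },
      { $skip: skip },
      { $limit: limitNum }
    ])

If executionTimeMillis is small while the end-to-end call still takes around 23 s, most of the time is probably spent serialising and shipping the multi-megabyte query itself rather than executing it.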
Does MongoDB’s $in clause have any max limit in number of arguments
2020-08-12T09:45:26.872Z
Does MongoDB’s $in clause have any max limit in number of arguments
11,329
null
[ "app-services-user-auth" ]
[ { "code": "", "text": "We are writing a Swift client that uses as custom JWT authentication provider. I have set up the meta data fields and they are showing up in the JWT token. And the data is showing up in the Provider Data for each user in the Users table. My only question is this. How do I access this provider data from a Swift client using the MongoDB Realm SDK? We have no problem accessing this data on the Javascript backend functions.Thanks", "username": "Richard_Krueger" }, { "code": "", "text": "Hi @Richard_Krueger, I have the same problem now. After I can successfully store the date, my RealmSwift SDK is returning nil for the SyncUsers “customData” property, even after refreshing it with no error. Please let me know in case you found a solution for this. I ´m investigating it currently.", "username": "Christian_Huck" }, { "code": "", "text": "@Christian_Huck so custom data is not the same thing as provider metadata. The first is set up the through a custom data collection as described herehttps://docs.mongodb.com/realm/users/enable-custom-user-data/The second is meta data that is passed through the JWT token itself as described herehttps://docs.mongodb.com/realm/authentication/custom-jwt/So custom data (as opposed to meta-data) is a collection where your backend server puts data that relates to each user. The custom data collection must be setup by the developer before the user logs in. Your custom data is probably nil because you have not set it up. Also, the RLMSyncUser has a method called refreshCustomData that you can call after you log in. For the moment, I have not seen any way to access the user’s provider data directly from within the client code. This is odd, because you can get access to it from Javascript in a backend trigger function. I have an outstanding ticket on this forum, but I have not heard back from the MongoDB Realm team on this. Perhaps the best practice is to have the back end server function selectively put the meta data at the time of login into the custom data collection. Also, the custom data section is more secure, as only the user who logs in can read it. The JWT meta data on the other hand is less secure, i.e. you don’t want to put sensitive data here. We are in the process of building a custom JWT authenticator at our company for MongoDB Realm, hence my interest in this topic.I hope this was useful.Richard Krueger", "username": "Richard_Krueger" } ]
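A hedged sketch of the workaround mentioned above (copying JWT provider metadata into the custom user data collection from a Realm authentication trigger function). The linked-service name, database, collection and metadata field names are assumptions:

    exports = async function(authEvent) {
      const user = authEvent.user;
      const customData = context.services.get("mongodb-atlas")
        .db("myapp").collection("customUserData");
      await customData.updateOne(
        { userId: user.id },
        { $set: { name: user.data.name, email: user.data.email } },
        { upsert: true }
      );
    };

The Swift client could then read those values after login through the user's refreshCustomData call instead of needing direct access to the provider data.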
Accessing authentication provider meta data from a Swift client
2020-08-11T21:58:48.892Z
Accessing authentication provider meta data from a Swift client
2,129
null
[]
[ { "code": "mongo \"mongodb+srv://cluster0-jxeqq.mongodb.net/test\" --username m001-student -password m001-mongodb-basics", "text": "Hello,I was able to connect to Atlas cluster from both Compass and Mongo Shell.\nI ran the following command, which connects me to test database:\nmongo \"mongodb+srv://cluster0-jxeqq.mongodb.net/test\" --username m001-student -password m001-mongodb-basics\nHowever, I cannot see any database named test in compass. Is it possible to connect to a database that does not exist yet, or am I missing something?Here is the screenshot of the Compass:\n", "username": "Omer_Toraman" }, { "code": "Test", "text": "Hi @Omer_Toraman,Test is a default database and by default it’s empty and that’s why you are not seeing it listed in Compass.In theory, you can connect to a non-existent database by specifying it’s name in the connection string. But as soon as you insert data into it, an actual database and collection will get created. Until then it acts like a place holder.Hope it helps!~ Shubham", "username": "Shubham_Ranjan" }, { "code": "mongo \"mongodb+srv://cluster0-jxeqq.mongodb.net\" --username m001-student -password m001-mongodb-basics", "text": "You should trim databases name, which is “test” in that connection string.\nIt should change from this:mongo “mongodb+srv://cluster0-jxeqq.mongodb.net/test” --username m001-student -password m001-mongodb-basics`to this:mongo \"mongodb+srv://cluster0-jxeqq.mongodb.net\" --username m001-student -password m001-mongodb-basics", "username": "Alejandro_Ramirez" }, { "code": "databasetestmongo \"mongodb+srv://cluster0-jxeqq.mongodb.net\" --username m001-student -password m001-mongodb-basicsdb", "text": "Hi @Alejandro_Ramirez,mongo “mongodb+srv://cluster0-jxeqq.mongodb.net” --username m001-student -password m001-mongodb-basicsWhen you don’t specify the name of the database in the connection string then by default it will connect you to the test database.Try this :Connect to the cluster using the connection string that you have shared.mongo \"mongodb+srv://cluster0-jxeqq.mongodb.net\" --username m001-student -password m001-mongodb-basicsAfter you have successfully connected to the cluster, run this command and check the output :\ndb~ Shubham", "username": "Shubham_Ranjan" }, { "code": "", "text": "", "username": "system" } ]
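A small illustration of the placeholder behaviour described in this thread; the collection name is arbitrary:

    use test
    db.example.insertOne({ createdBy: "m001-student" })
    show dbs    // test is now listed, and Compass shows it after a refresh

Until that first insert, the test database exists only as the shell's current context, which is why Compass does not list it.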
No test database is shown in the Compass
2020-08-12T09:27:39.925Z
No test database is shown in the Compass
4,358
null
[ "app-services-user-auth" ]
[ { "code": "{\n \"https://mydomain.com/userID\":\"facebook|123456\",\n \"isAdmin\":false,\n \"nickname\":\"manic,\n \"name\":\"maname\",\n \"picture\":\"https://platform-lookaside.fbsbx.com/platform/profilepic/?asid=1020552285597199&height=50&width=50&ext=1599684171&hash=AeTQ1GRWHZ04JTAD\",\n \"updated_at\":\"2020-08-10T20:42:51.683Z\",\n \"email_verified\":true,\n \"iss\":\"http://my.iss\",\n \"sub\":\"facebook|123456\",\n \"aud\":\"audi\",\n \"iat\":1597092172,\n \"exp\":1627092172\n}\n[{\n \"Path\": \"https://myDomain.com/userID\",\n \"Field Name\": \"externalUserId\",\n \"Required\": true\n},\n{\n \"Path\": \"name\",\n \"Field Name\": \"name\",\n \"Required\": true\n}]\nexpected field 'https://mydomain.com/userID' to be in token metadata\" UserInfo={NSLocalizedDescription=expected field 'https://mydomain.com/userID' to be in token metadata, realm::app::ServiceError=AuthError", "text": "I’m trying to hook up my Realm Sync with my OAuth 2.0 Provider Auth0.My JWT payload looks like this:I configured my custom JWT provider in the Realm Sync UI to contain the following metadata fields:However, while authentication with my iOS Client works without these Metadata Fields, the SDK complains that the ID field is missing in the JWT when trying to login. Are there any hints to debugging this or does someone have a JWT + Configuration combination that works? Could it be due to the URL prefix ?\nThe error is:", "username": "Christian_Huck" }, { "code": "{\n \"aud\": \"junctiontest1-cwttc\",\n \"sub\": \"[email protected]\",\n \"exp\": 1597278098,\n \"email\": \"[email protected]\",\n \"user_data\": {\n \"name\": {\n \"first\": \"Richard\",\n \"last\": \"Krueger\"\n }\n },\n \"iat\": 1597242098\n}\n", "text": "I will take a stab at this.The JWT stuff has changed from Realm Cloud to MongoDB Realm. First, I think that your MongoDB Realm JWT needs an aud field set the Realm App Id. Second, the isAdmin field is no longer a thing with the new MongoDB Realm. Lastly, I think that the JWT metadata has to be part of the payload.This is an example of a JWT payload that I am usingThe user_data field is the meta data for the JWT token, and is configured in the Realm Application JWT provider meta data section.I hope this was useful.Richard Krueger", "username": "Richard_Krueger" }, { "code": "", "text": "Thank you for this reply. I agree that the. isAdmin flag is no longer a thing in MongoDB Realm and that my token has to be updated (which I am working at). However Ím facing issues with namespaced keys as recommended by Auth0. My suspect here is a conflict with this “dot notation” that is allowed in the custom field settings, e.g. how would it distinguish between “user_data.name.first” and “mydomain.com/userName” as its recommended by Auth0 Create Custom Claims. I got a solution that works no by replacing my dots in the domain name, which however conflicts with the naming rules of custom claims. In Realm Cloud or ROS this was still working for me.", "username": "Christian_Huck" }, { "code": " \"xxx\" : {\n \"yyy\": {\n \"zzz\": \"<value of z>\"\n }\n }\n", "text": "@Christian_Huck I believe that in MongoDB Realm the path component of a meta-data property is of the form xxx.yyy.zzz, which would correspond to the following JSON format in the JWT payloadThese are not variadic parameters, i.e. it literally has to be “xxx” followed by “yyy”, etc… In other words, you can’t have ‘userID’ the variable, you have to have a specific userId - e.g. 'A34B25…\". 
Also, I am not sure that special characters like ‘:’ and ‘/’ are allowed in a metadata property path definition in MongoDB Realm.It took me a few days to tinker with this thing, until I got it working. My biggest issue right now, is that I don’t seem to be able to retrieve these user provider metadata properties in my client at runtime programmatically. Anyways, good luck.Richard Krueger", "username": "Richard_Krueger" } ]
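A hedged example of a claim/metadata combination of the kind discussed above that avoids dots and slashes in the configured path; the nesting under user_data and the field names are illustrative assumptions, not Auth0 or Realm requirements:

    // JWT payload
    {
      "aud": "<your Realm app id>",
      "sub": "facebook|123456",
      "iat": 1597092172,
      "exp": 1627092172,
      "user_data": { "externalUserId": "facebook|123456", "name": "maname" }
    }
    // matching metadata fields in the Realm custom JWT provider configuration:
    //   Path: user_data.externalUserId   Field Name: externalUserId   Required: true
    //   Path: user_data.name             Field Name: name             Required: true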
Custom Metadata Fields in JWT not recognized
2020-08-11T07:11:34.049Z
Custom Metadata Fields in JWT not recognized
2,785
null
[ "cxx" ]
[ { "code": "> db.alphabet.insert({\n \"_id\": \"22222\", \n \"array\": ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'j']\n })\n// We missed the 'i'!\n// Let's add it in the position before the last element.\n> db.alphabet.update(\n { \"_id\": \"22222\" },\n {\n $push: {\n array: {\n $each: ['i'],\n $position: -1\n }\n }\n })\n// Check the results:\n> db.alphabet.find({ \"_id\": \"22222\" })\n{ \"array\" : [ \"a\", \"b\", \"c\", \"d\", \"e\", \"f\", \"g\", \"h\", \"i\", \"j\" ] }v\n auto replace = coll.update_one(make_document(kvp(\"_id\", bsoncxx::oid(\"5f3289b7807200001f002b35\"))),\n make_document(kvp(\"$push\", make_document(\n kvp(\"arrayKeyNameTest\",\"eghhh????????????\")\n ))));\n", "text": "HeyI’m trying to follow this tutorial > https://blog.mlab.com/2018/09/use-push-to-insert-elements-into-an-array/\nMainly >I’m trying to reproduce it in c++ but I’m lost… can any1 guide me somehow?Can any1 help out?TIA", "username": "Dariusz_D" }, { "code": "mongocxxpushusing bsoncxx::builder::stream::document;\nusing bsoncxx::builder::stream::open_document;\nusing bsoncxx::builder::stream::close_document;\nusing bsoncxx::builder::stream::open_array;\nusing bsoncxx::builder::stream::close_array;\nusing bsoncxx::builder::stream::finalize;\n\n... \n\nauto builder = document{};\nbsoncxx::document::value update_statement = builder\n << \"$push\" << open_document\n << \"arrayNameTest\" << open_document\n << \"$each\" << open_array\n << \"i\" \n << close_array\n << \"$position\" << -1\n << close_document\n << close_document\n << finalize;\nauto result = collection.update_one(make_document(kvp(\"_id\", \"101\")), update_statement.view());\n", "text": "Hi @Dariusz_D, and welcome to the forum!I’m trying to reproduce it in c++ but I’m lost… can any1 guide me somehow?There are few ways to build a document in mongocxx, one of them is using the document builder. For more information see mongocxx: Working with BSON.For example you could utilise Stream Builder to construct the push document. i.e.See also mongocxx: Update Documents for more examples.Regards,\nWan.", "username": "wan" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Writing query for array in c++
2020-08-11T20:32:36.009Z
Writing query for array in c++
3,687
null
[]
[ { "code": "", "text": "Tried connecting to Mongodb cluster using Shell got following error. Please help.I am using windows command prompt.connecting to: mongodb://sandbox-shard-00-01.exdlc.mongodb.net:27017,sandbox-shard-00-02.exdlc.mongodb.net:27017,sandbox-shard-00-00.exdlc.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=atlas-jb5ypm-shard-0&ssl=true*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.Error: can’t connect to new replica set master [sandbox-shard-00-02.xxxxx.mongodb.net:27017], err: AuthenticationFailed: bad auth Authentication failed. :\nconnect@src/mongo/shell/mongo.js:362:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1", "username": "venkatesh_Chandolu" }, { "code": "", "text": "Hello @venkatesh_Chandolu welcome to the comunity!AuthenticationFailed: bad auth Authentication failed. :You most likely provided incorrect credentials. This page Set Up Atlas Connectivity describes in detail which steps you should take, at the end of the page you find an option to copy your connection string (you may need to overwrite the password statement with your actual password).If you get stuck, feel free to add on to this thread. In that case please copy the statements you used and the error message, Please remove all secret information before posting!Cheers,\nMichael", "username": "michael_hoeller" } ]
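When reproducing this kind of bad auth error, one way to rule out quoting and typo problems on the Windows command prompt is to let the shell prompt for the password instead of passing it on the command line (cluster address and user are placeholders):

    mongo "mongodb+srv://<cluster-address>/test" --username <atlas-db-user>

With --password omitted, the shell asks for it interactively, so special characters in the password cannot be mangled by the command line.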
Couldn’t connect to MongoDB cluster using shell
2020-08-12T20:05:51.492Z
Couldn’t connect to MongoDB cluster using shell
5,814
null
[]
[ { "code": "", "text": "I was trying to use GitHub - mongodb/amboy: Amboy -- A Go(lang) Job Queue Tool but can’t find where to start from.\nSo is there any complete example on how I can use this package?", "username": "Sujit_Baniya" }, { "code": "", "text": "Hi! There’s a bit more information at amboy package - github.com/mongodb/amboy - Go Packages with a short example, but we wrote amboy for ourselves, and as far as we know it’s not used externally.We use amboy in several parts of our CI infrastructure, which could be useful to look at as a way to get started. The simplest to work from are probably GitHub - evergreen-ci/cedar: data sink service, GitHub - evergreen-ci/barque: packaging service... ship it!, and GitHub - evergreen-ci/logkeeper: a service for storing test log output. There’s not anything in between the example in the godoc and the full examples in our CI software.We don’t really have the bandwidth to provide support on how to build your own task/worker pool infrastructure, but hopefully that’s enough information to get you going?", "username": "Sheeri_Cabral" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Is there any example for Amboy
2020-08-11T20:34:32.561Z
Is there any example for Amboy
3,640
null
[ "java" ]
[ { "code": "_id: 5f3271160ef36644c04d61fb\nrequested: null\nrole: \"STAFF\"\ndiscordID: null\nbalance: 316.3\ntown: \"Tutorial\"\nlastDonation: \"0\"\nreferredBy: \"x\"\nname: \"Tockra\"\nuuid: \"abcd\"\n[12:51:57 INFO]: FILTERDOC: {\"uuid\": \"abcd\"}\n[12:51:57 INFO]: PLAYER: {\"requested\": null, \"role\": \"STAFF\", \"discordID\": null, \"balance\": 317.3, \"town\": \"Tutorial\", \"referredBy\": \"x\", \"lastDonation\": \"0\", \"name\": \"Tockra\", \"uuid\": \"abcd\"}\n", "text": "Hi,I have a “simple” problem, what I can’t solute myself.\nI wrote a small java class which uses the reactive mongodb lib: mongodb-driver-reactivestreams\nHere is the class: https://pastebin.com/Ddfz6pq7\nThe idea is to create a object and call connect to connect this object with a mongodb database and to choose a collection.Now my problem. The updateOne method doesn’t have any effect.\nHere the entry of the players connection which should be updated:So I call updateOne with a uuid filter (abcd) and a new entry which should update all stuff:After looking into the database with mongodb compass the balance (which is the only changed value), isn’t changed…What did I wrong?T€dit: If I replace updateOne with replaceOne it works. But why?", "username": "Tim_Ta" }, { "code": "", "text": "Thinking you want to find and update the document in its entirety.In that case use replace one by providing the replacement document.If you only updating specific keys in the document would suggest to supply an update operator expression to the second parameter of findOneAndUpdate such as $set,$unset.\npic889×342 24.2 KB", "username": "Kirk-PatrickBrown" }, { "code": "", "text": "But why doesn’t that work with updateOne?", "username": "Tim_Ta" }, { "code": "", "text": "Because the mechanics of the operators are different. That is why you have replace for that use case.Granted you could use $set and pass to it the keys to update.But if you want to replace the entire document then that would save you from the over head of updating every key.I hope this helps with answering your question.", "username": "Kirk-PatrickBrown" } ]
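The same distinction, sketched in the mongo shell against the player document shown above (the collection name players is an assumption; values are illustrative):

    // updateOne expects update operators, so only the changed field is touched
    db.players.updateOne({ uuid: "abcd" }, { $set: { balance: 317.3 } })

    // replaceOne takes a full replacement document, so every field must be supplied again
    db.players.replaceOne(
      { uuid: "abcd" },
      { requested: null, role: "STAFF", discordID: null, balance: 317.3,
        town: "Tutorial", lastDonation: "0", referredBy: "x", name: "Tockra", uuid: "abcd" }
    )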
Problems with findOneAndUpdate and updateOne
2020-08-11T13:41:20.094Z
Problems with findOneAndUpdate and updateOne
8,310
null
[]
[ { "code": "", "text": "As I went through the MongoDB source code and the paper: Implementation of Cluster-wide Logical Clock and Causal Consistency in MongoDB. I have a question about the clock time: Does MongoDB advance the clock time when sending a query with ClusterTime field. For instance, in Change Streams, MongoS build several cursors to each MongoD, then it will send a getMore query to MongoD, so I want to know whether it’’ advance the HLC.As the above paper said:It looks like these two rules are contradictory.", "username": "Vinllen_Chen" }, { "code": "", "text": "There is no contradiction between those two statements. Tracking and including does not mean advancing. As the paper says, only writes advance the logical clock.Asya", "username": "Asya_Kamsky" }, { "code": "1. Client sends a write command to the primary, the message includes its current value of the ClusterTime: T1.\n2. Primary node receives the message and advances its ClusterTime to T1, if T1 is greater than the primary node’s current ClusterTime value.\n3. Primary node “ticks” the cluster time to T2 in the process of preparing the OpTime for the write. This is the only time a new value of ClusterTime is generated.\n4. Primary node writes to the oplog.\n5. Result is returned to the client, it includes the new\nClusterTime T2.\n6. The client advances its ClusterTime to T2.\ntick", "text": "Hi, Asya,\nThanks for your reply. My question is when the clusterTime of MongoS is T1, and after it sending a query to the MongoD with clusterTime T0, the HLC of MongoD is still T0, but it’ll track the T1. So what’s the reason of this tracking and including?As the A1.3 chapter said in this paper:Checkout the step2, it looks like the Primary Node will advance the HLC to T1 once receive the message, so in my understanding, both query and write can advance the HLC to T1, but only write can tick the HLC to T2.", "username": "Vinllen_Chen" }, { "code": "", "text": "No, query/read cannot advance the cluster’s time. It can update a particular node’s view of what current cluster time is. But that doesn’t advance the time.Think of it this way. You are node 1. You think it’s T1. Meanwhile node 2 has taken several writes advancing the cluster time to T3. Next request, no matter what it is to node 1 will notify node 1 that the current cluster time is T3 not T1. That does not advance the cluster clock that just catches up node 1 which was snoozing and didn’t realize that the cluster time has advanced.There is no such thing as multiple cluster times. There is one. Not all nodes/clients may realize where it is at a particular point and they can learn about it with any communication even one that doesn’t involve any query.", "username": "Asya_Kamsky" }, { "code": "{a:1}{b:1}", "text": "Thanks Asya, So one more question, if query can’t advance HLC, how does a read only distributed transaction fulfill the isolation level of snapshot?For example, the cluster time of MongoS is T1, while MongoD1 is T2 and MongoD2 is T3. Assume this read only distributed transaction crosses MongoD1 and MongoD2, but the response of this transaction may includes {a:1} from MongoD1 with cluster time T2 and {b:1} from MongoD2 with cluster time T3.Is this phenomenon normal? Or is my understanding wrong?", "username": "Vinllen_Chen" }, { "code": "{a:1}{b:1}{a:1}", "text": "I think you may be thinking of it the wrong way. 
For instance:cluster time of MongoS is T1, while MongoD1 is T2 and MongoD2 is T3There is only one cluster time - it’s T3 here for the whole cluster - it just so happens that MongoS doesn’t know that yet, and neither does MongoD1, they just happen to be behind in their knowledge of cluster time. However, it doesn’t matter, because the number of writes is ‘consistent’ - in other words, each node can start their read at their current “snapshot” and it will reflect correctly cluster time T3 because the gossip protocol will eventually get each node to update their cluster time up to or past such time, but any write that happens after the snapshot read starts will not be included in the snapshot.the response of this transaction may includes {a:1} from MongoD1 with cluster time T2 and {b:1} from MongoD2 with cluster time T3But what’s key is that {a:1} may have been there at T2 but it’s also the same at T3! It has to be because if it wasn’t then cluster time would have had to have been incremented. So in reality all the snapshots are consistent with T3 time even if some of the components of the cluster don’t yet know that their writes are the same at T3 as whatever latest time they know about.", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
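A small way to observe this from the mongo shell on a replica set (every command response carries the gossiped $clusterTime, but only the write makes it tick):

    var before = db.runCommand({ ping: 1 }).$clusterTime.clusterTime;  // read-only command
    db.clockTest.insertOne({ x: 1 });                                  // write: ticks the clock
    var after = db.runCommand({ ping: 1 }).$clusterTime.clusterTime;
    // after > before because of the insert; repeated reads alone only re-report
    // whatever cluster time the node already knows about.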
Does query advance Logical Time in MongoDB?
2020-08-10T04:04:38.019Z
Does query advance Logical Time in MongoDB?
3,332
null
[ "replication" ]
[ { "code": "", "text": "I have a production setup with two nodes (primary and secondary). I have a requirement to migrate it to a new data center (different network). Can someone help with the steps if this can be achieved?Thanks in advance.", "username": "Naresh_Chandra" }, { "code": "", "text": "I also have this question, can anyone help?", "username": "brlin" }, { "code": "", "text": "If the 2 data centres can talk together you could achieve that without downtime with:Your replica set should have at least three nodes. With two nodes your database is not writable if you lose one.", "username": "steevej" } ]
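One commonly used command sequence for this kind of no-downtime move, sketched with placeholder hostnames (this is an illustration of the approach described above, not the poster's exact steps):

    // run against the current primary; keep new members unelectable until initial sync completes
    rs.add({ host: "newdc-node1.example.net:27017", priority: 0, votes: 0 })
    rs.add({ host: "newdc-node2.example.net:27017", priority: 0, votes: 0 })
    rs.add({ host: "newdc-node3.example.net:27017", priority: 0, votes: 0 })
    // once rs.status() shows them as SECONDARY and caught up, raise their priority/votes
    // with rs.reconfig(), let one of them become primary, then remove the old members:
    rs.remove("olddc-node1.example.net:27017")
    rs.remove("olddc-node2.example.net:27017")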
MongoDB DR setup on a different network
2020-07-02T12:40:00.950Z
MongoDB DR setup on a different network
1,289
null
[ "backup" ]
[ { "code": "", "text": "Dear Gurus,Please see that we have two MongoDB subscriptions.We need to take a backup of the production subscription’s MongoDB and restore it on the testing subscription (account), for which MongoDB said it is not possible, BUT YES, we can do it with a shell script that automates MongoDB backup and restore.We need your kind support; please help us with how we can write that script.BRs\nMalik Adeel Imtiaz", "username": "Drive_Mate_Thai" }, { "code": "", "text": "Can someone please help me in this regard?", "username": "Drive_Mate_Thai" } ]
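A minimal sketch of such a script using mongodump and mongorestore; URIs, credentials and paths are placeholders, and both tools must be installed where the script runs:

    #!/bin/bash
    STAMP=$(date +%F)
    mongodump    --uri "mongodb+srv://<user>:<password>@<production-cluster>/<db>" \
                 --archive=/backups/prod-$STAMP.gz --gzip
    mongorestore --uri "mongodb+srv://<user>:<password>@<testing-cluster>" \
                 --archive=/backups/prod-$STAMP.gz --gzip --drop

The --drop flag makes the restore replace existing collections on the testing cluster, which is usually what a refresh-from-production job wants.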
Script that automates MongoDB back-up and restore
2020-08-10T11:49:27.943Z
Script that automates MongoDB back-up and restore
1,575
null
[]
[ { "code": " [{\n \"labels\": [\n {\n \"id\": \"5ede180af845340001ecafd0\",\n \"category\": \"lighting_condition\",\n \"value\": \"Day\",\n \"source\": \"MDB 1.0\"\n },\n {\n \"id\": \"5ede180af845340001ecafd1\",\n \"category\": \"avg_ego_speed\",\n \"value\": 24,\n \"source\": \"MDB 1.0\"\n },\n {\n \"id\": \"5ede180af845340001ecafd3\",\n \"category\": \"ped_objects\",\n \"value\": 12,\n \"source\": \"MDB 1.0\"\n ]\n },\n {\n \"labels\": [\n {\n \"id\": \"5ede180af845340001ecafdb\",\n \"category\": \"lighting_condition\",\n \"value\": \"Day\",\n \"source\": \"MDB 1.0\"\n },\n {\n \"id\": \"5ede180af845340001ecafdd\",\n \"category\": \"avg_ego_speed\",\n \"value\": 19,\n \"source\": \"MDB 1.0\"\n },\n {\n \"id\": \"5ede180af845340001ecafdf\",\n \"category\": \"vehicle_objects\",\n \"value\": 19,\n \"source\": \"MDB 1.0\"\n },\n {\n \"id\": \"5ede180af845340001ecafe1\",\n \"category\": \"ped_objects\",\n \"value\": 8,\n \"source\": \"MDB 1.0\"\n }\n ]\n }]\n{\"category\": \"lighting_condition\",\"valu\": \"Day\"}{\"category\": \"vehicle_objects\"}{\"labels\":{\"$all\":[{\"$elemMatch\":{\"category\":\"lighting_condition\",\"value\":{\"$eq\":\"Day\"}}},{\"$not\":{\"$elemMatch\":{\"category\": \"vehicle_objects\"}}}]}}", "text": "Hi let say that I have the following scheme:And I want to get all of the documents that meet the following:{\"category\": \"lighting_condition\",\"valu\": \"Day\"}\nAND not contain the label:\n{\"category\": \"vehicle_objects\"}I should get the first doc…I tried to use the following query but without success:{\"labels\":{\"$all\":[{\"$elemMatch\":{\"category\":\"lighting_condition\",\"value\":{\"$eq\":\"Day\"}}},{\"$not\":{\"$elemMatch\":{\"category\": \"vehicle_objects\"}}}]}}But without success, any idea?", "username": "Mordechai_Ben_Zechar" }, { "code": "$all$elemMatch{\n labels: { $elemMatch: { category: \"lighting_condition\", value: \"Day\" } },\n labels: { $not: { $elemMatch: { category: \"vehicle_objects\" } } }\n}", "text": "Hello @Mordechai_Ben_Zechar,I am not sure why you are trying to use the $all operator at all (it is used for matching all array elements). You should be just using the $elemMatch conditions for the query filter (and this works):", "username": "Prasad_Saya" }, { "code": "", "text": "Hi @Prasad_Saya and thank you for your answer.I used .net core driver and I can’t use the field name twice in the same hierarchy of the JSON.\nI’m getting:\n“ClassName”: “System.InvalidOperationException”,\n“Message”: “Duplicate element name ‘labels’.”,This is the reason that I used $all for several $elematch expression on the same field level", "username": "Mordechai_Ben_Zechar" }, { "code": "$and{\n $and: [\n { labels: { $elemMatch: { category: \"lighting_condition\", value: \"Day\" } } },\n { labels: { $not: { $elemMatch: { category: \"vehicle_objects\" } } } }\n] }", "text": "Then, use the $and operator:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you , the $and operator works like charming!", "username": "Mordechai_Ben_Zechar" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
Negative and positive elematch in the same query
2020-07-27T10:40:30.470Z
Negative and positive elematch in the same query
4,479
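For readers following along in a driver rather than the shell, a rough PyMongo rendering of the $and / $elemMatch filter that resolved the thread is sketched below. It only illustrates the filter shape (the original poster was on the .NET driver, where the equivalent document can be built with Builders&lt;T&gt;.Filter.And to avoid the duplicate-element-name error); the connection string and the "scenes" collection name are assumptions, while the field names come from the sample documents in the thread.

from pymongo import MongoClient

# Placeholder connection; documents follow the "labels" schema shown above.
client = MongoClient("mongodb://localhost:27017")
coll = client["testdb"]["scenes"]

# Match documents whose labels array contains a Day lighting_condition label
# AND does not contain any vehicle_objects label.
query = {
    "$and": [
        {"labels": {"$elemMatch": {"category": "lighting_condition", "value": "Day"}}},
        {"labels": {"$not": {"$elemMatch": {"category": "vehicle_objects"}}}},
    ]
}

for doc in coll.find(query):
    print(doc["_id"])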
null
[ "data-modeling" ]
[ { "code": " {\n \"title\": \"board\",\n \"properties\": {\n \"_id\": {\n \"bsonType\": \"objectId\"\n },\n \"lanes\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"cards\": {\n \"bsonType\": \"array\",\n \"items\": {\n \"bsonType\": \"object\",\n \"properties\": {\n \"description\": {\n \"bsonType\": \"string\"\n },\n \"id\": {\n \"bsonType\": \"objectId\"\n },\n \"label\": {\n \"bsonType\": \"string\"\n },\n \"title\": {\n \"bsonType\": \"string\"\n }\n }\n }\n },\n \"id\": {\n \"bsonType\": \"objectId\"\n },\n \"label\": {\n \"bsonType\": \"string\"\n },\n \"title\": {\n \"bsonType\": \"string\"\n }\n }\n }\n }\n }\n}\n", "text": "Hi all ,I would like to ask a few questions about schema design because I’m currently using Atlas + Realm to play around and learn.The one thing that is confusing to me is why and when we need to split data up in collections because theoretically, my whole application could be one giant schema which is 1 collection.I will use Trello as an example to explain what I mean. Hope you all know what Trello is and how it works, basically, users can create boards and every board is an own project management platform so to say where you can add stuff like todos, here how I would build the Schema for the whole Trello clone in 1 collection.Collection Name is Boards.This is how you could build a whole application with 1Schema but I assume it is not the right way to do it, so why and when and how do I know when to split it up in a new collection?Thanks", "username": "Ivan_Jeremic" }, { "code": "", "text": "why and when and how do I know when to split it up in a new collection?An entity’s data can be stored in one or more collections. The data stored can have relationships, like the one-to-one and one-to-many between entity’s attributes. In MongoDB’s flexible document based model, these relationships can be in terms of data embedding within the same document or referencing in another collection.This design depends upon the application’s needs. For example, the nature of data (like size and relationships), and the data’s usage (the application’s functionality, the kind of queries and updates).Some related info can be found at:", "username": "Prasad_Saya" } ]
Newbie to NoSQL, Schema design?
2020-08-11T19:57:33.855Z
Newbie to NoSQL, Schema design?
1,883
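To make the embed-versus-reference trade-off in the reply a little more concrete, here is a small sketch of two shapes the Trello-style data could take. It is illustrative only; the database name, collection names and field layout are assumptions, not a recommendation from the thread.

from bson import ObjectId
from pymongo import MongoClient

# Placeholder connection; "boards" / "cards" collection names are assumptions.
db = MongoClient("mongodb://localhost:27017")["trello_clone"]

# Option 1 -- embed: the whole board (lanes and cards) lives in one document.
# Works well while a board stays small and is always read as a unit.
board_id = db.boards.insert_one({
    "title": "board",
    "lanes": [
        {"id": ObjectId(), "title": "todo", "label": "",
         "cards": [{"id": ObjectId(), "title": "first card",
                    "label": "", "description": ""}]},
    ],
}).inserted_id

# Option 2 -- reference: cards move to their own collection and point back to
# the board and lane. Useful if cards grow without bound or are queried,
# updated, or synced on their own (for example per-card Realm permissions).
db.cards.insert_one({
    "board_id": board_id,
    "lane_id": ObjectId(),
    "title": "first card",
    "label": "",
    "description": "",
})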
null
[ "replication", "connecting", "performance" ]
[ { "code": "", "text": "I just upgraded from 4.0.18 to 4.2.8, 3 server replica set, rhel 7. Connection to each secondary goes quickly, just as expected (this is subsecond). When I connect to the primary the connection takes well over 10 minutes. Applications have a timeout limit so they are definitely getting a timeout error. When I failover the primary to a new server, the secondary that just became the primary now has the connection delay. The server that was the primary that just became a secondary has the quick connection. The shell connection command is executed directly on the database server. Does anyone have any clues?", "username": "JamesT" }, { "code": "mongors.status()rs.conf()", "text": "Hi @JamesT welcome to the community.This is a peculiar situation you described, and I’m not sure I’ve ever seen one. Well before 10 minutes, any typical connection attempt by any driver or the mongo shell would give up long ago.Could you provide more details:Connection to each secondary goes quickly, just as expected (this is subsecond). When I connect to the primary the connection takes well over 10 minutes.How did you determine the timing? Could you post the logs from both the server and the client when your app tried to connect to the server?The shell connection command is executed directly on the database server.Could you elaborate on what you mean? Could you post the exact command you tried?Please also post the output of rs.status() and rs.conf() during this extra-long connection attempts, so that your replica set topology and state can be determined.Would also be helpful if you can tell us how did you install MongoDB.Best regards,\nKevin", "username": "kevinadi" }, { "code": "2020-08-10T15:23:42.596-0400 I COMMAND [conn2686] command $external.$cmd appName: \"MongoDB Compass\" command: saslStart { saslStart: 1, mechanism: \"PLAIN\", payload: \"xxx\", autoAuthorize: 1, $db: \"$external\" } numYields:0 reslen:219 locks:{} protocol:op_query 1086333msmongo --host rsSet1/server8:27017,server9:27017,server10:27017 --authenticationMechanism 'PLAIN' --authenticationDatabase '$external' --ssl --username userA --password passwordArs.conf()\n{\n \"_id\" : \"rsSet1\",\n \"version\" : 3,\n \"protocolVersion\" : NumberLong(1),\n \"writeConcernMajorityJournalDefault\" : true,\n \"members\" : [\n {\n \"_id\" : 0,\n \"host\" : \"server8:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"server9:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 2,\n \"host\" : \"server10:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5bb60307b0f85f3386121cdf\")\n }\n}\n", "text": "I downgraded all set members to 4.0.18 and all activity returned to normal and the time to connect dropped back to subseconds. 
I do have many other environments that were upgraded in the same manner and none of them experienced this problem, including OpsMan sets. This one however is my most active but v4.0.18 could handle the traffic.I cannot post the logs for security reasons.\nMongo was a manual rpm install via yum.My longest connection attempt was Compass at 18 minutes; the item is from mongod.log.2020-08-10T15:23:42.596-0400 I COMMAND [conn2686] command $external.$cmd appName: \"MongoDB Compass\" command: saslStart { saslStart: 1, mechanism: \"PLAIN\", payload: \"xxx\", autoAuthorize: 1, $db: \"$external\" } numYields:0 reslen:219 locks:{} protocol:op_query 1086333msWhen I run the shell command I try both by being logged directly into the server via a putty session and on Windows laptop command window (this one has 4.2.6). Below is one of the connection attempts that hangs. I’m using ldap for authentication. Ldap is not a contributor here as when I connect to the secondary, it still needs to authenticate.mongo --host rsSet1/server8:27017,server9:27017,server10:27017 --authenticationMechanism 'PLAIN' --authenticationDatabase '$external' --ssl --username userA --password passwordAI do not have any of the rs.status() output during the time of this issue as I was in the shell running it and didn’t save any in output files, but it looks pretty much as it does on any normal day. All replicaSet members were in a normal state and were communicating successfully with each other. Heartbeats and pings are good and no infoMessages. What would you be looking for within it, just curious for future reference?", "username": "JamesT" }, { "code": "libldapmongod", "text": "Hi @JamesTIt appears that the connection process may be stalling trying to get a reply from the LDAP server. MongoDB currently uses libldap, and defers all LDAP auth process to the library, so either something is amiss with how the LDAP setup interacts with the primary mongod, or there is something else going on there. It is curious though how older MongoDB doesn’t seem to experience this. Note that the new MongoDB 4.4.0 was just released and might be worth trying as well.Having said that, LDAP connectivity is an Enterprise-only feature, so if you keep having this issue I would recommend you open a support case if this is a production environment. Investigating this issue may require a deeper dive into your specific setup.Best regards,\nKevin", "username": "kevinadi" } ]
Connection to primary is extremely slow or timeout
2020-08-10T06:47:24.813Z
Connection to primary is extremely slow or timeout
9,685
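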
null
[]
[ { "code": "", "text": "I am kind of late to the Mongo train, but we are now moving away from Postgres (Heroku) into Mongo (Atlas).\nI find it surprising that findAndModify() only works on a single document. We have many nice use cases where we need to update multiple documents AND get their values (old values before the update) in one atomic operation. Just curious if there is any special reason why this hasn’t been implemented yet.Is findAndModifyMany() something that is planned? Can we vote somewhere on this feature request?", "username": "yaron_levi" }, { "code": "", "text": "I am kind of late to the Mongo train, but we are now moving away from Postgres (Heroku) into Mongo (Atlas).Hello @yaron_levi, welcome to the forum.We have many nice use cases where we need to update multiple documents AND get their values (old values before the update) in one atomic operationTo perform operation(s) on many documents or multiple operations on a document atomically, you can use transactions.Is findAndModifyMany() something that is planned? Can we vote somewhere on this feature request?You can try searching for and logging a request at MongoDB Jira.", "username": "Prasad_Saya" }, { "code": "", "text": "New features should be suggested at feedback.mongodb.com - Jira is for bug reports for the most part.I’m curious though why you specifically want findAndModify to take multiple updates/return multiple documents - why not findAndModify for each? Otherwise how do you sort out which returned document was for which findAndModify? Normally write operations return back a single result document, and only read operations return a cursor with multiple documents.", "username": "Asya_Kamsky" }, { "code": "", "text": "We have a collection where each document is a “bucket” of a specific time span, from which you can check if there are steps available. We want in an atomic, one call, to get stepsTotal and stepsConverted and set stepsConverted to be stepsTotal (because we’ve just “drunk” all the available steps from those documents). We also set the isHasAvailable to FALSE.Screen Shot 2020-08-09 at 11.38.451280×530 66.6 KB", "username": "yaron_levi" }, { "code": "", "text": "If you want multiple document writes to be atomic you can’t do it with update or any other write outside a transaction. Only single document writes are atomic. Sounds like you’ll need to use a transaction.Asya", "username": "Asya_Kamsky" }, { "code": "", "text": "I misused the term “atomic” (-:\nIt doesn’t need to be atomic, just many documents in one “hit”. Just like updateMany works.", "username": "yaron_levi" }, { "code": "", "text": "You want to save round trips to the server. That’s understandable but it’s not currently supported in this scenario - sorry!feedback.mongodb.com is the place to ask for this feature!", "username": "Asya_Kamsky" }, { "code": "", "text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.", "username": "system" } ]
findAndModifyMany()
2020-08-05T21:58:47.791Z
findAndModifyMany()
1,795
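Since the replies point to multi-document transactions as the way to read and update several documents together, here is a rough PyMongo sketch of the "steps bucket" scenario described above. The connection string, database and collection names are placeholders, the field names (stepsTotal, stepsConverted, isHasAvailable) are taken from the post, and a replica set is assumed because transactions require one.

from pymongo import MongoClient

# Placeholder URI; transactions require a replica set or sharded cluster.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
buckets = client["appdb"]["step_buckets"]

def drain_available_steps(session):
    """Read each bucket's remaining steps, then mark it fully converted."""
    drained = []
    for bucket in buckets.find({"isHasAvailable": True}, session=session):
        available = bucket["stepsTotal"] - bucket["stepsConverted"]
        drained.append((bucket["_id"], available))
        buckets.update_one(
            {"_id": bucket["_id"]},
            {"$set": {"stepsConverted": bucket["stepsTotal"],
                      "isHasAvailable": False}},
            session=session,
        )
    return drained

with client.start_session() as session:
    # with_transaction commits on success and retries on transient errors;
    # the callback's return value (old values per bucket) is passed through.
    result = session.with_transaction(drain_available_steps)
    print(result)

This is not atomic in the single-statement sense the thread started from, but the reads and writes either all commit or all roll back, which is the behaviour the use case needs.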
null
[ "data-modeling" ]
[ { "code": "{_id: \"p0\", name: \"apple\", \"price\": 200}{_id: \"p1\", name: \"orange\", \"price\": 500}{_id:\"o0\", orderNumber: 1, products: [\"p0\", \"p1\"]}{_id: \"p0\", name: \"apple\", \"price\": 300}{_id: \"p1\", name: \"orange\", \"price\": 600}{_id: \"p0\", name: \"apple\", \"price\": 200}{_id: \"p1\", name: \"orange\", \"price\": 500}{_id:\"o0\", orderNumber: 1, products: [{_id: \"p0\", \"price\": 200}, {_id: \"p1\", \"price\": 500}]}{_id: \"p0\", name: \"apple\", \"price\": 300}{_id: \"p1\", name: \"orange\", \"price\": 600}{_id: \"p0\", version: \"0\", name: \"apple\", \"price\": 200}{_id: \"p1\", version: \"0\", name: \"orange\", \"price\": 500}{_id: \"pr0\", currentVersion: \"p0\"}{_id: \"pr1\", currentVersion: \"p1\"}{_id:\"o0\", orderNumber: 1, products: [\"p0\", \"p1\"]}{_id: \"p0\", version: \"0\", name: \"apple\", \"price\": 200}{_id: \"p2\", version: \"1\", name: \"apple\", \"price\": 300}{_id: \"p1\", version: \"0\", name: \"orange\", \"price\": 500}{_id: \"p3\", version: \"1\", name: \"orange\", \"price\": 600}{_id: \"pr0\", currentVersion: \"p2\"}{_id: \"pr1\", currentVersion: \"p3\"}", "text": "Hello, World!ASSUMPTIONSAssume that,Also assume that we have the following sample data,products\n{_id: \"p0\", name: \"apple\", \"price\": 200}\n{_id: \"p1\", name: \"orange\", \"price\": 500}orders\n{_id:\"o0\", orderNumber: 1, products: [\"p0\", \"p1\"]}Assume that, the product apple’s price will be increased from 200 to 300, and orange’s price will be increased from 500 to 600.products (after the update)\n{_id: \"p0\", name: \"apple\", \"price\": 300}\n{_id: \"p1\", name: \"orange\", \"price\": 600}PROBLEMObviously, we can create another collection for keeping that change. However, when we have collections with documents that have a lot of properties we never know which property will be update sensitive in the future…To overcome this problem, I find 2 solutions.1- You hardcode the update sensitive data inside.products\n{_id: \"p0\", name: \"apple\", \"price\": 200}\n{_id: \"p1\", name: \"orange\", \"price\": 500}orders\n{_id:\"o0\", orderNumber: 1, products: [{_id: \"p0\", \"price\": 200}, {_id: \"p1\", \"price\": 500}]}products (after the update)\n{_id: \"p0\", name: \"apple\", \"price\": 300}\n{_id: \"p1\", name: \"orange\", \"price\": 600}In this solution, if name becomes update sensitive in the future, since you do not hardcoded it before, you can never know the value at the time of the order is created. That’s why I tried to find my 2nd solution.2- For each collection foo that may have documents with update sensitive properties, we create another collection fooRef which will only have 2 properties _id and currentVersion. _id as you may guess, and currentVersion is also object ID which points to the current version of the document in foo. 
Also the documents in the collection foo will have a property version.products\n{_id: \"p0\", version: \"0\", name: \"apple\", \"price\": 200}\n{_id: \"p1\", version: \"0\", name: \"orange\", \"price\": 500}productsRef\n{_id: \"pr0\", currentVersion: \"p0\"}\n{_id: \"pr1\", currentVersion: \"p1\"}orders\n{_id:\"o0\", orderNumber: 1, products: [\"p0\", \"p1\"]}\nproducts (after the update)\n{_id: \"p0\", version: \"0\", name: \"apple\", \"price\": 200}\n{_id: \"p2\", version: \"1\", name: \"apple\", \"price\": 300}\n{_id: \"p1\", version: \"0\", name: \"orange\", \"price\": 500}\n{_id: \"p3\", version: \"1\", name: \"orange\", \"price\": 600}productsRef (after the update)\n{_id: \"pr0\", currentVersion: \"p2\"}\n{_id: \"pr1\", currentVersion: \"p3\"}Now, you can reach anything anytime. But this time, each time you update a document with a update sensitive property, you will create an additional record. This will increase the data size. Also, there will be records that you will never use and need.CONCLUSIONWhat I am asking you is, maybe there is another solution I can’t figure out or maybe there is a best practice for this kind of problems.Thanks in advance for the help…", "username": "onurcipe" }, { "code": "", "text": "This is a question about preference, as i dont think there is the best solution. That said, I like the second option more. The first suggestion unnecesarilly fragments data about products IMO, which can be more error-prone as there will be more ways to screw up if schema is going to change again at some point. (there is another reason i like it, and that’s because it treats the critical data as immutable) But there are a couple things i would still do differently:", "username": "PBeliy" }, { "code": "", "text": "Thanks for the reply.1- I use version property because the order of updates are important for me. It is also possible to discover this by _id property. However, this is more costly with _id.2- The data size of course matters. In garbage collection, it is always possible to remove a record which seems unnecessary at the time of removal but might be needed in a future release. So, you can never trust garbage collection.3- It is a different perspective and a way of handling the data. But, I believe this would be more costly in compared to data size.", "username": "onurcipe" } ]
Best Practices for Update Sensitive Properties
2020-08-11T20:31:39.760Z
Best Practices for Update Sensitive Properties
3,037
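As an illustration of the second option discussed in the thread (new immutable versions plus a small reference document), a possible PyMongo sketch is shown below. The collection names follow the thread's example; everything else, including storing version as an integer so it can be incremented, is an assumption.

from pymongo import MongoClient

# Placeholder connection; "products" / "productsRef" follow the thread's naming.
db = MongoClient("mongodb://localhost:27017")["shop"]

def update_product_price(ref_id, new_price):
    """Write a new product version instead of mutating the current one,
    then repoint the reference document at it."""
    ref = db.productsRef.find_one({"_id": ref_id})
    current = db.products.find_one({"_id": ref["currentVersion"]})

    # Copy the current version, bump the version number, apply the change.
    new_version = dict(current)
    new_version.pop("_id")                     # let MongoDB assign a new _id
    new_version["version"] = current["version"] + 1
    new_version["price"] = new_price
    new_id = db.products.insert_one(new_version).inserted_id

    # Old versions stay intact, so an order that stored the old _id still
    # resolves to the price that applied when the order was created.
    db.productsRef.update_one({"_id": ref_id},
                              {"$set": {"currentVersion": new_id}})
    return new_id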
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.0.20-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.0.19. The next stable release 4.0.20 will be a recommended upgrade for all 4.0 users.Fixed in this release:4.0 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.0.20-rc0 is released
2020-08-11T16:09:09.796Z
MongoDB 4.0.20-rc0 is released
1,704
null
[ "data-modeling" ]
[ { "code": "", "text": "I have around 10 csv files which are basically either one to many related or are many to many related. For many-many there is a csv file which stores the mapping for the two tables. I want to ingest these files into mogodb and create 1 aggregated collection. Is it feasible?", "username": "Supriya_Bansal" }, { "code": "", "text": "Hello @Supriya_Bansal, welcome to the community.I have some related information to start with:", "username": "Prasad_Saya" }, { "code": "", "text": "Thank you!! I ll look into the above links.", "username": "Supriya_Bansal" } ]
Need help to convert relational csv files into mongo documents
2020-08-10T23:37:15.723Z
Need help to convert relational csv files into mongo documents
3,301
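One way to approach the question above is to resolve the relationships in application code and insert already-aggregated documents. The sketch below does that with plain Python and PyMongo; the file names, column names and embedding choices are invented for illustration and would need to be adapted to the real CSVs.

import csv
from collections import defaultdict
from pymongo import MongoClient

# File and column names below are illustrative only.
def load(path):
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

authors = {row["author_id"]: row for row in load("authors.csv")}
books = load("books.csv")                      # one author -> many books
mapping = load("book_tags.csv")                # many-to-many join table
tags = {row["tag_id"]: row["name"] for row in load("tags.csv")}

# Resolve the many-to-many mapping into a lookup: book_id -> [tag names].
tags_by_book = defaultdict(list)
for row in mapping:
    tags_by_book[row["book_id"]].append(tags[row["tag_id"]])

# Build one aggregated document per book, embedding its author and tags.
docs = []
for book in books:
    docs.append({
        "title": book["title"],
        "author": authors[book["author_id"]],
        "tags": tags_by_book.get(book["book_id"], []),
    })

coll = MongoClient("mongodb://localhost:27017")["library"]["books"]
coll.insert_many(docs)

The alternative mentioned in the reply is to mongoimport each CSV into its own collection first and then combine them server-side with $lookup and $merge; which is preferable depends on the data volume and how often the CSVs are re-imported.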
null
[ "security" ]
[ { "code": "", "text": "Hello and thanks in advance for feedback! We are working with an organization using a self-hosted virtual environment and they want to store sensitive data in MongoDB v3.4 on Ubuntu 16.04. We are hoping to find a way for them to meet regulatory requirements to “securely delete” data when it is no longer needed. Generally, this means doing a multi-pass overwrite of the location where the data was written. However, the organization doesn’t know how to locate the actual location of a piece of data stored in MongoDB. Is this possible? Or is there a secure (US Department of Defense approved) method for deleting data in a virtual environment from v3.4? It did not appear in the system documentation that the inherent delete functions went to this level of deletion.", "username": "Brett_Bane" }, { "code": "", "text": "Hi @Brett_Bane welcome to the community.The secure delete requirements you mentioned sounds like it should be achieved by the storage layer instead of the database layer. However, I feel it’s a little strange that there is an effort to meet a regulatory requirement regarding data security, while not knowing exactly where the data is stored. Isn’t it contradicting the security requirement?Having said that, one possible solution is to use MongoDB’s Client-Side Field Level Encryption which is a new feature in MongoDB 4.2. In lieu of actually deleting the documents, you may be able to encrypt sensitive fields in a document, and “delete” them by throwing away the decryption key. See Client-Side Field Level Encryption Guide for details and examples.I would also strongly recommend to move away from MongoDB 3.4 series, as it’s out of support since January 2020 (3.4 was released in Nov 2016, almost 4 years ago today).Best regards,\nKevin", "username": "kevinadi" }, { "code": "", "text": "Thank you for the reply!", "username": "Brett_Bane" } ]
Secure deletion for sensitive data
2020-08-07T17:26:18.523Z
Secure deletion for sensitive data
3,331
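The reply suggests encrypting sensitive fields and "deleting" them by destroying the key. MongoDB's Client-Side Field Level Encryption (4.2+) is the built-in way to do that; since the thread is on 3.4, the sketch below is a deliberately simplified, application-side illustration of the same crypto-shredding idea using the cryptography package. It is not CSFLE and not a compliance recommendation; the collection names are assumptions, and in practice the keys would live in a separate key-management system rather than alongside the data.

from cryptography.fernet import Fernet
from pymongo import MongoClient

# Placeholder connection; "patients" and "keys" collections are assumptions.
db = MongoClient("mongodb://localhost:27017")["records"]

def store_sensitive(doc_id, sensitive_text):
    """Encrypt the sensitive field with a per-document key before storing it."""
    key = Fernet.generate_key()
    db.keys.insert_one({"_id": doc_id, "key": key})
    ciphertext = Fernet(key).encrypt(sensitive_text.encode())
    db.patients.update_one({"_id": doc_id},
                           {"$set": {"sensitive": ciphertext}}, upsert=True)

def read_sensitive(doc_id):
    """Decrypt the sensitive field while its key still exists."""
    key_doc = db.keys.find_one({"_id": doc_id})
    doc = db.patients.find_one({"_id": doc_id})
    return Fernet(key_doc["key"]).decrypt(doc["sensitive"]).decode()

def secure_delete(doc_id):
    """Crypto-shred: destroying the key renders the ciphertext unreadable,
    without having to locate and overwrite the bytes on disk."""
    db.keys.delete_one({"_id": doc_id})

Whether crypto-shredding satisfies a specific multi-pass-overwrite regulation is a question for the auditor; the pattern only removes the need to know where the ciphertext physically lives.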
null
[ "server", "release-candidate" ]
[ { "code": "", "text": "MongoDB 4.2.9-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.8. The next stable release 4.2.9 will be a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | Downloads\n\nAs always, please let us know of any issues.\n\n– The MongoDB Team", "username": "Jon_Streets" }, { "code": "", "text": "", "username": "Stennie_X" } ]
MongoDB 4.2.9-rc0 is released
2020-08-11T13:20:58.300Z
MongoDB 4.2.9-rc0 is released
1,856