image_url | tags | discussion | title | created_at | fancy_title | views |
---|---|---|---|---|---|---|
null | [
"replication"
] | [
{
"code": "",
"text": "Does MongoDB supports data replication in bidirectional way. We are using community edition (3.6 version) in Windows server operating system. I have also checked replication process but it was not mentioned related to bidirectional. Thank you.",
"username": "naresh_reddy"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @naresh_reddy!The core replication feature in MongoDB server is based around changes replicating from a primary server to other data-bearing secondary members of the same replica set. See Replication in the MongoDB server documentation for more detail.You cannot have multiple primaries in a single replica set. You could potentially build your own solution for replicating changes (and resolving associated conflicts) using MongoDB Change Streams. I recommend upgrading to MongoDB 4.0 or newer if you are planning on using Change Streams, as there are expanded streaming options including ability to watch a single database or all non-system collections in a deployment. Another good reason to upgrade: MongoDB 3.6 will reach end of life next month (April, 2021) per the MongoDB Support Policy.If you are open to using MongoDB Atlas, another approach to consider would be bidirectional sync and automatic conflict resolution using Realm Sync. Realm Sync is designed for offline-first use cases where changes can be written locally on client devices and synced with a MongoDB Atlas deployment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for quick suggestion and we will look into mentioned approach",
"username": "naresh_reddy"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Does MongoDB supports bidirectional replication | 2021-03-10T05:08:24.216Z | Does MongoDB supports bidirectional replication | 3,587 |
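To make the Change Streams suggestion in the thread above concrete, here is a minimal sketch of watching a collection for changes. It is illustrative only; the connection string, database, and collection names are placeholder assumptions, and any bidirectional replication built on top of this would still need its own conflict-resolution logic.

```python
from pymongo import MongoClient

# Placeholder URI; change streams require a replica set or sharded cluster.
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
collection = client["mydb"]["mycollection"]

# Watch inserts, updates, and deletes on a single collection (MongoDB 3.6+).
# Watching a whole database or deployment (db.watch() / client.watch())
# needs MongoDB 4.0+, as mentioned in the reply above.
with collection.watch(full_document="updateLookup") as stream:
    for change in stream:
        # A custom replicator would apply `change` to the other deployment here.
        print(change["operationType"], change.get("fullDocument"))
```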
[
"replication",
"sharding"
] | [
{
"code": "",
"text": "그림11102×337 23.4 KBThere is Single Shard Shaded cluster divided into three servers.Currently, two servers are broken due to external issues, and only one server survived. Is there a way to operate normally with this remaining server?The server cannot be expanded further because it is in service initial state. We only have plans to extend it later.",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Why do you bother with sharding if you only have one shard?It is highly not recommended to run 2 arbiters on the same replica set.You are wasting resources while adding latency and complexity. Running multiple instances on the same physical hardware is most likely resulting in reduce performance as the difference instances are battling for shared resources.As to temporary fix your problem of unavailability you may do the NON RECOMMENDED configuration.Connect to server-3 configuration server and remove the server-1 and server-2 members. Do the same with your shard replica set. Starts 3 new instances on server-3. Add one to the configuration server replica set as a data bearing node. Add 2nd new instance to the configuration server replica set as an arbiter. Finally add the third new instances as data bearing node to your shard replica set.If your a little bit experienced with file system operations, you may seed the 2 load bearing instances with file system snapshot from the current load bearing nodes with data.This will give you:\n. 1 mongos\n. 1 PSA configuration for the configuration server\n. 1 PSA configuration for your shardDepending of the amount of data and capability of server-3 this configuration will struggle, but it might be functional.But if you can, just run one single normal PSA if you only have 1 physical.",
"username": "steevej"
},
{
"code": "",
"text": "I was considering this way even if it wasn’t.\nThank you very much for your answer. ",
"username": "Kim_Hakseon"
},
{
"code": "mongosforceforceforceforce",
"text": "Gday folks!Why do you bother with sharding if you only have one shard?@steevej: There is no performance benefit setting up a single shard as a sharded cluster, but one motivation for a single shard deployment is when you anticipate adding further shards for scaling in the reasonably near future.A sharded cluster involves extra configuration and resources (config servers, mongos ) that are not required in a replica set deployment, but if you start with this configuration you will have fewer operational changes to make when you start adding shards (for example, no change to your connection string or backup procedures).Currently, two servers are broken due to external issues, and only one server survived. Is there a way to operate normally with this remaining server?@Kim_Hakseon: Since you have lost a majority of members for a shard replica set, you can force reconfigure the replica set using one of the surviving members. I would emphasise the docs description that this is only intended as an emergency procedure:The force option forces a new configuration onto the member. Use this procedure only to recover from catastrophic interruptions. Do not use force every time you reconfigure. Also, do not use the force option in any automatic scripts and do not use force when there is still a primary.As far as arbiters go: my recommendation would be to never have more than one, and ideally avoid using arbiters altogether in modern versions of MongoDB. For more background, please see my response on Replica set with 3 DB Nodes and 1 Arbiter - #8 by Stennie.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thanks for the clarifications and mentioning the benefit of a single shard.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to recover a single shard cluster with majority of replica set members unavailable? | 2021-03-09T15:20:37.695Z | How to recover a single shard cluster with majority of replica set members unavailable? | 4,087 |
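The emergency force-reconfiguration described in the thread above looks roughly like the sketch below. Host names and connection details are placeholder assumptions, and, per the warning quoted in the thread, this is only for recovering from catastrophic interruptions.

```python
from pymongo import MongoClient

# Connect directly to the surviving member (placeholder host), not through mongos.
client = MongoClient("mongodb://server-3:27017/?directConnection=true")

# Fetch the current replica set configuration.
config = client.admin.command("replSetGetConfig")["config"]

# Keep only the surviving member and bump the config version.
config["members"] = [m for m in config["members"] if m["host"].startswith("server-3")]
config["version"] += 1

# Force the new configuration onto the surviving member (emergency use only).
client.admin.command({"replSetReconfig": config, "force": True})
```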
|
null | [
"production",
"golang"
] | [
{
"code": "connectTimeoutMS",
"text": "The MongoDB Go Driver Team is pleased to announce the release of v1.4.7 of the MongoDB Go Driver.This release contains several bug fixes and internal testing improvements. Most notably, GODRIVER-1879 is fixed in this release. This ticket ensures that the connectTimeoutMS URI option, which defaults to 30 seconds, applies to both the creation of a socket and the TLS handshake, if applicable. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.4.7 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.4.7 Released | 2021-03-09T18:20:10.195Z | MongoDB Go Driver 1.4.7 Released | 1,924 |
null | [] | [
{
"code": "",
"text": "Hello Guys, i have a problem with the atlas search. For example: In the users collection i create the atlas index for all fields. I search for the username like “rhom”. But it dont match anything. 1 of the documents has the username “rhombicombi”. Only wehen i search for the full username it match? What can i do?",
"username": "Alessandro_Rhomberg"
},
{
"code": "",
"text": "Hi @Alessandro_Rhomberg,Welcome to MongoDB Community.Can you share what operators you use when querying? Not all allow partial word searches and some wait for a full word to search.Thanks,\nPavel",
"username": "Pavel_Duchovny"
}
] | Atlas search - username search dont match without full name | 2021-03-09T13:40:24.389Z | Atlas search - username search dont match without full name | 2,084 |
null | [
"performance",
"configuration",
"devops"
] | [
{
"code": "",
"text": "For some performance reasons, i need to keep both inmemory and WiredTiger engine on a single node. Can MongoDB support this?",
"username": "Duc_Bui_Minh"
},
{
"code": "mongodmongod",
"text": "Hi @Duc_Bui_MinhA single mongod process only supports one storage engine configuration. You can run more than one mongod in the same host environment, but this is generally not recommended as they will compete for the same resources.What are you hoping to gain in terms of performance? The In-Memory Storage Engine provides consistent latency because it is not maintaining on-disk data aside from some metadata and diagnostic data. If your working set fits entirely within memory (a strict requirement for the In-Memory Storage Engine) but you need data persistence, I would consider using the default WiredTiger storage engine.The In-Memory Storage Engine page also has some suggestions around Deployment Architectures which might be of interest. For example, one deployment pattern is to have a replica set with visible members running in-memory storage for predictable latency and a hidden member using the WiredTiger storage engine for persistence.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can we open both inmemory and WireTiger engine? | 2021-03-10T11:10:30.625Z | Can we open both inmemory and WireTiger engine? | 2,084 |
null | [
"golang",
"transactions"
] | [
{
"code": "",
"text": "I’d like to find and delete many records. In the console, I’d do this with db.collection.findAndModify, but for some reason the go driver has this method as private. The FindOneAndModify, FindOneAndDelete, and FindOneAndReplace methods all internally call the private findAndModify method … but they all limit things to a single result.What is the proper way to atomically find and delete many records using the go driver? A session seems like the right direction, but reading the causal consistency guarantees, but even in the write follows reads model, isolation is not guaranteed (that is, another session can interleave between my read and delete).",
"username": "Some_Dev"
},
{
"code": "",
"text": "The right way to atomically delete batches of records is to wrap them in a transaction. My colleague @nraboy has written a tutorial on transactions in GoLang to help you set this up.",
"username": "Joe_Drumgoole"
}
] | Find and delete many records atomically using the go driver? | 2021-03-09T19:27:20.807Z | Find and delete many records atomically using the go driver? | 2,054 |
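The transaction approach recommended in the thread above can be sketched as follows. It is shown in Python for brevity (the Go driver exposes the same session/transaction pattern), and the collection name and filter are assumptions rather than the poster's actual schema.

```python
from pymongo import MongoClient

# Transactions require a replica set or sharded cluster (placeholder URI).
client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")
records = client["mydb"]["records"]
query = {"status": "expired"}  # placeholder filter

with client.start_session() as session:
    with session.start_transaction():
        # Read and delete inside one transaction so no other operation
        # can interleave between the find and the delete.
        docs = list(records.find(query, session=session))
        records.delete_many(query, session=session)

# `docs` now holds the documents that were removed atomically.
```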
null | [
"server",
"installation"
] | [
{
"code": "service mongod start/data/db/var/lib/mongodb/var/lib/mongodb/mongod --config /etc/mongod.conf",
"text": "Hello,When I try to run the mongod.service via service mongod start MongoDB tells me that it can’t find database files on /data/db weird thing is in my configuration file dbPath is specifically specified as /var/lib/mongodb. mongod service configuration under /lib/systemd/system/mongod.service uses /etc/mongod.conf and that config file specifies /var/lib/mongodb/ like I described above.Here comes the even weirder part;When I try to run the mongodb via service it fails but when I try to run it via shell like mongod --config /etc/mongod.conf it runs without a problem.Any idea?",
"username": "Umit_Yayla"
},
{
"code": "/lib/systemd/system/mongod.serviceservice mongod start",
"text": "You might be confusing your init daemons. Systemd uses /lib/systemd/system/mongod.service. But service mongod start is the command for SYS V init. The docs detail both approaches.",
"username": "Joe_Drumgoole"
}
] | MongoDB Behaves Differently With Service File(?) | 2021-03-09T08:20:45.429Z | MongoDB Behaves Differently With Service File(?) | 2,187 |
null | [
"sharding"
] | [
{
"code": "",
"text": "Can not add shard due to NetworkInterfaceExceededTimeLimit.Mongo Version: db version v4.9.0-alpha-1182-g2326fb8\nCluster Type: Sharded Cluster (4 shards, each running single node replica set)\nOS builds: Config server and mongos running on x86 and Shards running on different buildI am able to ping and telnet to each nodes vice-versa.When I am trying to add shards, it is giving“NetworkInterfaceExceededTimeLimit”: Request 107 timed out, , deadline was 2021-02-28T12:54:08.250+00:00, op was RemoteCommand 107.When I checked in the logs, It is marking that as a Slow Query:“msg”:“Slow query”,“attr”:{“type”:“command”,“ns”:“admin.shard1rs1/x.x.x.x:27017”,“appName”:“MongoDB Shell”,has anyone come across such issue? Any suggestions / Any clue would be really helpful. Thank you in advance!I am not preferring an option of changing the mongo version here. (These are custom builds)",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Mongo Version: db version v4.9.0-alpha-1182-g2326fb8Hi Viraj,What is the base MongoDB server version for your custom build? The v4.9.0-alpha reference looks like a nightly development/unstable release.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X,Yes. This is based out of master branch from MongoDB source code. We’ve build based on that point of time code.Sharded cluster working perfectly fine on our DEV setup and when we try to run it on prod on AWS, it is causing this issue.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hi @viraj_thakrar,Builds from the master branch or development/unstable releases aren’t thoroughly tested yet, and are definitely not ready for production deployment.Config server and mongos running on x86 and Shards running on different buildI would start by ensuring that all of the components of your cluster are built with identical versions. The master branch includes work in progress, so mixing versions may lead to unexpected results.If you have setup a novel environment it is going to be challenging for someone to try to reproduce the problem, so any details would be helpful:What are the differences between your environment and builds. Are the shards running on a different git checkout, different hardware architecture, … ?What options did you use to build MongoDB?Are there specific features or build optimisations you are trying to test?What steps are you running in order to add shards? You mentioned there are four shards – how many were added successfully?I recommend waiting for a tagged release (alpha or RC) if you want a more stable test environment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": ".addShard()Mongo Version: db version v4.9.0-alpha-1182-g2326fb8_iddb.collection.getShardedDistribution()",
"text": "Hi @Stennie_X,Thank you for replying.Yes. I totally understand your point about using master branch or development releases. I manage to resolve the issues I was facing and the purpose I am using it for, is more of testing performance with particular data set and with different cluster setup. I am using same version on all cluster components.I caught in to the issue of “NetworkInterfaceExceededTimeLimit” because of very very slow network across the devices I was using it. So I tried to manage it with increasing pingTimeouts and was trying to find out if I can some how increase the timeout which it uses when we run .addShard(). I manage to handle this issue with some alternatives.There is one bug I believe which I would like to report specifically for Mongo Version: db version v4.9.0-alpha-1182-g2326fb8 as I came across and could be helpful. To replicate that behaviour, The steps are:Current Behaviour: Even if collection has been sharded with hashed key, checking sharded distribution with above command outputs “Collection is not sharded”Expected Behaviour: It should display the shard distribution as it displays on other versions.Thank you!",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hi @viraj_thakrar,Thanks for confirming you were able to resolve the initial problem.Bug reports can be filed directly in the SERVER project in the MongoDB Jira issue tracker: https://jira.mongodb.org. You can login using the same account as the community forums.If you are unable to file an issue I could raise one on your behalf, but generally it is best for you to be the reporter so you will be notified of any updates or requests for further information.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Can't add shards | 2021-02-28T13:11:29.952Z | Can’t add shards | 4,383 |
null | [
"app-services-user-auth",
"security"
] | [
{
"code": "",
"text": "Hi there,I’m currently using a synced Realm via MongoDB Atlas in my (swift) app as a database and would like to use AWS S3 (via amplify?) to store data. However I’m trying to wrap my head around how to best setup the user authentication for S3.At the moment I just use the e-mail & password authentication from MongoDB Atlas to log into the database.Is the best way to use AWS Cognito and then login via a JWT authentication to MongoDB Atlas or is there a better way? I’m quite new to programming so I’m unfortunately not aware of a good concept here Any hints on how to do it or which concepts can be used would be much appreciated ",
"username": "Friendlyguy89"
},
{
"code": "",
"text": "Is the best way to use AWS Cognito and then login via a JWT authentication to MongoDB Atlas or is there a better way?Yup - this would be the recommended way to authenticate to Realm if you’re using Cognito.",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Best practice authentication concept | 2021-03-06T23:34:37.877Z | Best practice authentication concept | 2,991 |
null | [
"atlas-device-sync",
"realm-web"
] | [
{
"code": "",
"text": "Hi there,\nI am a beginner web dev and I would like to understand a bit better how the web SDK works. I am working on a saas project with React and I think Realm web SDK could be a good option for the server side of this web app.\nFor now it is going to be quite simple with access and mutation of the data in the cluster, but eventually if someday I want to build a native app connected to the same source of data, would it be possible to use the sync option of realm and have synchronized data across web and mobile? I am not sure to get this straight…\nThe following note in the documentation makes me think it is possible but I would be grateful to have a deeper insight on this topic.The Web SDK does not support Realm Database or [sync]. Instead, web apps built with MongoDB Realm can use [GraphQL] or the [MongoDB query language]Cheers",
"username": "Fabien_Dorville"
},
{
"code": "",
"text": "Hey Fabien,would it be possible to use the sync option of realm and have synchronized data across web and mobile?When we refer to the Web-SDK not having a sync option, we are specifically referring to real-time, offline sync with automatic conflict resolution. If you’re looking to use real-time sync on the web side, this is not currently supported.However, if you’re simply looking to read and write to the same shared data that is being used by syncing mobile clients, on a web app, there are multiple ways to do this (GraphQL API, MongoDB Access via the SDKs). Hope that was helpful!",
"username": "Sumedha_Mehta1"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Web SDK and sync option | 2021-03-05T10:35:44.236Z | Realm Web SDK and sync option | 4,097 |
null | [
"graphql"
] | [
{
"code": "",
"text": "Hi all,I have set up my first mongocloud environment which I wish to connect to my react app (create-react-app)Now both tutorials I found on this mention GraphQL request validation in the context of a user. They stress that when a user is logged in, any request is done in the context of this user.But what if All I want to do with my react app is display graphQL data? I just want to be able to use the Apollo Client with my main admin account to access all data across my database.I want to use my React app to query and display this data.I cant seem to find documentation on how to do this. Can somebody help me ?",
"username": "Oscar_van_Velsen"
},
{
"code": "",
"text": "Hey Oscar -Can you use anonymous authentication in this use-case with read access to all users? This way, a user doesn’t have to explicitly log in (via username, password, etc) to view parts of the dataAnother alternative is that if this is data that needs to be secured in the context of an admin, maybe API Key authentication might be mroe useful.",
"username": "Sumedha_Mehta1"
}
] | How to access GraphQL as root/admin | 2021-03-05T14:37:05.874Z | How to access GraphQL as root/admin | 1,734 |
null | [
"aggregation",
"performance"
] | [
{
"code": "",
"text": "We have been facing the slow query issue while retrieving the data from the multiple collections using join.Can anyone help me in this ?",
"username": "Dilip_Prajapat"
},
{
"code": "",
"text": "provide the sample query to check.",
"username": "Sudhesh_Gnanasekaran"
}
] | How to make the query faster in Mongo for multiple collections using join | 2021-03-09T06:28:57.336Z | How to make the query faster in Mongo for multiple collections using join | 2,614 |
null | [
"queries",
"indexes"
] | [
{
"code": "{\n \"type\":\"xyz\",\n \"$or\":[\n {\n \"is_custom\":false\n },\n {\n \"is_custom\":{\n \"$exists\":false\n }\n }\n ],\n \"status\":{\n \"$in\":[\n \"Active\",\n \"Pending\"\n ]\n }\n}\n{\n \"type\" : 1,\n \"is_custom\" : 1,\n \"status\" : 1,\n \"id:\" : 1,\n \"_id\" : 1\n}\n",
"text": "I have a very large collection. It has about 60+ million documents. I am trying to retrieve large subset of documents but the query I have written is taking so long and often giving me timeouts. Here is my query:Projection\n_id : 1\nid : 1Limit: 5000Sort:\n_id : 1I created this index:MongoDB is not accepting this index. Any idea how to make this query faster? I have to iterate over 3million docs so I am using _id > _id retrieved from previous query approach.",
"username": "Quantim"
},
{
"code": "explain",
"text": "Hello @Quantim, welcome to the MongoDB Community forum!Please include the explain's output on the query.",
"username": "Prasad_Saya"
}
] | Optimizing mongodb sort query | 2021-03-10T01:15:08.780Z | Optimizing mongodb sort query | 1,617 |
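The explain output requested above can be captured programmatically, and the "_id greater than the last seen _id" pagination the poster describes usually looks like the sketch below. The collection name and filter values are placeholders, not the poster's real schema.

```python
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # placeholder URI
coll = client["mydb"]["items"]

base_filter = {"type": "xyz", "status": {"$in": ["Active", "Pending"]}}

# Inspect the winning plan to confirm whether an index is actually used.
plan = coll.find(base_filter).sort("_id", ASCENDING).limit(5000).explain()
print(plan["queryPlanner"]["winningPlan"])

# Keyset pagination: resume each batch after the last _id already seen.
last_id = None
while True:
    page_filter = dict(base_filter)
    if last_id is not None:
        page_filter["_id"] = {"$gt": last_id}
    page = list(coll.find(page_filter, {"_id": 1, "id": 1})
                    .sort("_id", ASCENDING)
                    .limit(5000))
    if not page:
        break
    last_id = page[-1]["_id"]
```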
null | [] | [
{
"code": "",
"text": "So for anyone that had completed the Basic course of Mongo DB university. I’m having this concern that is, that I don’t have the database that one quiz requires me to make a query of.\n“1. Query the zips collection from the sample_training database to find all”I don’t have “sample_training” on my cluster!",
"username": "Alejo_Garcia"
},
{
"code": "",
"text": "I have answered you in the university forum which is a better place for course related questions.",
"username": "steevej"
},
{
"code": "sample_training",
"text": "I don’t have “sample_training” on my cluster!Hi @Alejo_Garcia,The sample_training collection is part of the sample data you load into Atlas clusters. If this step was overlooked in the course set up, see Load Sample Data in Your Atlas Cluster.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Don't have sample_training to pass mongodb university quiz | 2021-02-17T00:20:55.589Z | Don’t have sample_training to pass mongodb university quiz | 2,250 |
null | [
"kafka-connector"
] | [
{
"code": "{\n\t \"name\": \"orgunitsinc\",\n\t \"config\": {\n \"connector.class\":\"com.mongodb.kafka.connect.MongoSinkConnector\",\n \"tasks.max\":\"1\",\n \"topics\":\"orgunits\",\n \"connection.uri\":\"mongodb://localhost:32771\",\n \"database\":\"smartconnect\",\n \"collection\":\"orgunits\",\n \"key.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"key.converter.schemas.enable\":false,\n \"value.converter\":\"org.apache.kafka.connect.json.JsonConverter\",\n \"value.converter.schemas.enable\":false\n\t }\n}\n[2020-11-12 19:50:09,467] INFO Cluster created with settings {hosts=[localhost:32771], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} (org.mongodb.driver.cluster:71)\n\n[2020-11-12 19:50:09,479] INFO Opened connection [connectionId{localValue:2, serverValue:16}] to localhost:32771 (org.mongodb.driver.connection:71)\n\n[2020-11-12 19:50:09,482] INFO Monitor thread successfully connected to server with description ServerDescription{address=localhost:32771, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[4, 4, 1]}, minWireVersion=0, maxWireVersion=9, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=2095612} (org.mongodb.driver.cluster:71)\n\n[2020-11-12 19:50:09,483] ERROR Uncaught exception in REST call to /connectors (org.apache.kafka.connect.runtime.rest.errors.ConnectExceptionMapper:61)\n\njava.lang.NullPointerException\n\nat org.apache.kafka.connect.runtime.WorkerConfigDecorator$MutableConfigInfos.lambda$removeAllWithName$0(WorkerConfigDecorator.java:295)\n\nat org.apache.kafka.connect.runtime.WorkerConfigDecorator$MutableConfigInfos.removeAll(WorkerConfigDecorator.java:305)\n\nat org.apache.kafka.connect.runtime.WorkerConfigDecorator$MutableConfigInfos.removeAllWithName(WorkerConfigDecorator.java:294)\n\nat org.apache.kafka.connect.runtime.WorkerConfigDecorator$DecorationPattern.filterValidationResults(WorkerConfigDecorator.java:432)\n\nat org.apache.kafka.connect.runtime.WorkerConfigDecorator.lambda$decorateValidationResult$5(WorkerConfigDecorator.java:273)\n\nat java.util.Collections$SingletonList.forEach(Collections.java:4822)\n\nat org.apache.kafka.connect.runtime.WorkerConfigDecorator.decorateValidationResult(WorkerConfigDecorator.java:273)\n\nat org.apache.kafka.connect.runtime.AbstractHerder.validateConnectorConfig(AbstractHerder.java:392)\n\nat org.apache.kafka.connect.runtime.AbstractHerder.lambda$validateConnectorConfig$1(AbstractHerder.java:326)\n\nat java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\nat java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\nat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)\n\nat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)\n\nat java.lang.Thread.run(Thread.java:748)\n",
"text": "Hi,I am getting null pointer exception when creating a Kafka Sink Connector. Below are the details. Can anyone help me what’s missing here to resolve this?",
"username": "venkat_utla"
},
{
"code": "",
"text": "Hi @venkat_utla,I’m not sure what is going on or why the connectors rest call is throwing a NPE. Could I ask you to open up a bug ticket on the project Jira page?That way I can investigate and keep you in the loop with any findings. It could be a connector issue, a Kafka connect issue or some other integration issue. Definitely a bug somewhere and I’m happy to help resolve it / find a work around for you.Ross",
"username": "Ross_Lawley"
},
{
"code": "",
"text": "Also experiencing this exact issue. Did you resolve in the end @venkat_utla?",
"username": "David_Wadge"
},
{
"code": "",
"text": "I wonder if it’s related to Confluent Platform 6.0 / Apache Kafka 2.6? This looks like the same issue: https://github.com/NovatecConsulting/showcase-kafka-iot-emob/issues/6",
"username": "rmoff"
},
{
"code": "",
"text": "Another user experiencing the same issue:",
"username": "David_Wadge"
},
{
"code": "",
"text": "Hi,I solved the problem by indicating security protocol:\n“confluent.topic.security.protocol”: “PLAINTEXT”Manuel",
"username": "Manuel_S_V"
},
{
"code": "",
"text": "Where and how should this setting “confluent.topic.security.protocol”: “PLAINTEXT” be set?\nAnd should the value be PLAINTEXT or PLAIN?",
"username": "Hugues_Journeau"
},
{
"code": "",
"text": "In the end, I got my sink connector working with an Atlas cluster after I installed confluent 6.1.0 and mongodb kafka connector 1.3.0 instead of 1.4.0.",
"username": "Hugues_Journeau"
}
] | Nullpointer exception when creating Kafka sink connector | 2020-11-12T16:56:14.367Z | Nullpointer exception when creating Kafka sink connector | 7,002 |
null | [] | [
{
"code": "",
"text": "is the follows true, even in a sharding cluster?\nthe operation with the same connect would be executed in real time order.\noperations with the same connection had causal consistency naturally, even without used client session.",
"username": "kuku_super"
},
{
"code": "",
"text": "Causally consistent reads require a session, and the use of read concern and write concern majority. See the details in these two documents:",
"username": "Bernie_Hackett"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Causal consistency without client session | 2021-03-08T03:11:09.302Z | Causal consistency without client session | 2,242 |
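A causally consistent session, as described in the reply above, looks roughly like this in Python. The database, collection, and document contents are placeholder assumptions; the majority read and write concerns follow the documentation linked in the thread.

```python
from pymongo import MongoClient
from pymongo.read_concern import ReadConcern
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://localhost:27017/?replicaSet=rs0")  # placeholder URI
coll = client.get_database(
    "mydb",
    read_concern=ReadConcern("majority"),
    write_concern=WriteConcern("majority", wtimeout=1000),
)["orders"]

# causal_consistency=True is the default; shown explicitly for clarity.
with client.start_session(causal_consistency=True) as session:
    coll.insert_one({"sku": "abc", "qty": 1}, session=session)
    # Within the same session this read is guaranteed to observe the write above.
    print(coll.find_one({"sku": "abc"}, session=session))
```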
null | [
"cxx"
] | [
{
"code": "",
"text": "Hello,I just discovered MongoDB and I found it to be a good fit for what I’m doing. My project uses Visual C++ 2010 SP1 compiler. I know that there is a MongoDB C++ driver but it works for C++11 or above.Is MongoDB compatible?Regards,Thiago",
"username": "Thiago_Cavalcante"
},
{
"code": "",
"text": "The C++ driver requires VS2015 or better. Installing the mongocxx driver",
"username": "Bernie_Hackett"
}
] | Can MogoDB be used on Visual Studio 2010 with C++ project? | 2021-03-09T20:02:17.685Z | Can MogoDB be used on Visual Studio 2010 with C++ project? | 2,626 |
null | [
"ruby",
"mongoid-odm"
] | [
{
"code": "PingMongoid.default_client.database.list_collectionsdown",
"text": "Hello,\nI am using Mongoid > 5.0 in my Rails Application. I am implementing a health check endpoint where I need to check my database server status.\nI tried to find the right way to do this. I see Mongo offers a Ping command but its part of Administration and not accessible to my application user. Among all the APIs listed on mongoid doc and Ruby Mongo doc\nSo one way I’ve figured out to do this is using Mongoid.default_client.database.list_collections time out by this API is currently an indication of down status for me.\nWanted to know the community ideas about this.\nYour help is much appreciated.",
"username": "Sitaram_Shelke"
},
{
"code": "",
"text": "You should always be able to send {ismaster:1} to get a reply from the server.",
"username": "Oleg_Pudeyev"
}
] | Mongoid server status used for health check | 2020-08-10T23:35:17.010Z | Mongoid server status used for health check | 3,985 |
null | [
"ruby"
] | [
{
"code": "monitoringfalse",
"text": "Hi,I am using MongoDB with Mongoid (i.e. Ruby driver). The DB is in Atlas. When driver log level is set to debug, a lot of Mongo::Monitoring::Event::ServerDescriptionChanged events are printed to the log.Is there a way for me to disable only these events? I want to see the queries that are being made but I do not want these topology changed events.I tried setting monitoring to false, but that disabled the query logging also. The same happens when I set the log level of the driver logger to INFO.I want to see the queries being made for debugging my application but the server EVENT log lines are just drowning all the wanted log lines.",
"username": "Srirang_Doddihal"
},
{
"code": "",
"text": "You can define your own event listeners for command events that would log queries. You could also copy-paste the driver’s listener implementations and adjust their log level. https://docs.mongodb.com/ruby-driver/master/tutorials/ruby-driver-monitoring/#command-monitoring",
"username": "Oleg_Pudeyev"
}
] | Disabling ServerDescriptionChanged events being logged in Ruby driver | 2020-11-30T18:22:19.600Z | Disabling ServerDescriptionChanged events being logged in Ruby driver | 3,427 |
null | [
"ruby"
] | [
{
"code": "mongo_ext",
"text": "Our Gemfile has an old gem called mongo_ext which hasn’t been updated in more than 10 years. Does anyone know if it is still needed or if I can remove it?",
"username": "Michael_Hagar1"
},
{
"code": "mongo_extmongobson_extbsonmongomongo_extGemfilemongo_ext",
"text": "Hi @Michael_Hagar1,I believe the mongo_ext gem is the precursor to the current mongo driver and included an early version of the C bindings used by the BSON module (which were eventually extracted as bson_ext then moved into bson) .If you are currently including both mongo and mongo_ext in your Gemfile it’s unlikely you’re using the mongo_ext library, however you should obviously validate this yourself.",
"username": "alexbevi"
},
{
"code": "",
"text": "Alex is correct, mongo_ext was part of 1.x Ruby driver. If you use driver 2.x which is the only currently supported one, you shouldn’t need mongo_ext.",
"username": "Oleg_Pudeyev"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Is the mongo_ext still relevant? | 2021-02-26T00:03:07.531Z | Is the mongo_ext still relevant? | 3,452 |
null | [] | [
{
"code": "",
"text": "Hi allWe’re about to start one of our projects using MongoDB - we have requirement that project needs to be deployed to Azure services but also it should be cloud-agnostic as well so we can easily switch to another solution if needed.Since I don’t have experience with CosmosDB and it’s api for MongoDB, can someone pls provide more details about:Much appreciated.Thanks.",
"username": "Adnan_Brotlic"
},
{
"code": "azure-cosmosdb-mongoapi",
"text": "We’re about to start one of our projects using MongoDB - we have requirement that project needs to be deployed to Azure services but also it should be cloud-agnostic as well so we can easily switch to another solution if needed.Welcome to the MongoDB community @Adnan_Brotlic!If you want a cloud-agnostic managed database solution, you should be using MongoDB Atlas. Atlas supports the major cloud providers (Azure, AWS, and GCP) and is using the latest genuine MongoDB Enterprise server versions with full feature support. Atlas is integrated with Azure and even supports Mutli-Cloud Clusters for full portability. You can also use MongoDB Community or Enterprise Server for development or self-hosted deployments.Cosmos provides incomplete emulation for a subset of MongoDB API features and has a distinct underlying server implementation. Cosmos’ provisioning and billing mechanisms are based around Request Units and the API behaviour is not fully cloud-agnostic.Official MongoDB drivers aren’t tested with Cosmos’ emulation and you can’t rationalise server behaviour (for example, indexing or sharding) based on MongoDB server knowledge. Cosmos is passable if you have extremely basic requirements, but the majority of the community discussion I’ve encountered is about compatibility issues (for example, azure-cosmosdb-mongoapi questions on Stack Overflow).For a general overview of MongoDB vs emulated APIs, see MongoDB Atlas Comparison.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | CosmosDB API for MongoDB benefits and restrictions? | 2021-03-09T15:38:43.648Z | CosmosDB API for MongoDB benefits and restrictions? | 2,713 |
null | [
"production",
"golang"
] | [
{
"code": "",
"text": "The MongoDB Go Driver Team is pleased to announce the release of v1.5.0 of the MongoDB Go Driver.This release contains several new features and usability improvements for the driver. For more information please see the release notes.You can obtain the driver source from GitHub under the v1.5.0 tag.General documentation for the MongoDB Go Driver is available on pkg.go.dev and on the MongoDB Documentation site. BSON library documentation is also available on pkg.go.dev. Questions can be asked through the MongoDB Developer Community and bug reports should be filed against the Go project in the MongoDB JIRA. Your feedback on the Go driver is greatly appreciated.Thank you,The Go Driver Team",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Go Driver 1.5.0 Released | 2021-03-09T18:44:39.094Z | MongoDB Go Driver 1.5.0 Released | 1,836 |
[
"backup"
] | [
{
"code": "mongodumpmongoexportmongodumphttps://cloud.mongodb.com/api/public/v1.0https://cloud.mongodb.com/api/atlas/v1.0",
"text": "Dear Mongo community\nI have a cluster on MongoDB atlas cloud service. It is deployed on azure cloud and I have enabled cloud backups in this way:So my cluster is creating frequently snapshots. My objective is to move those snapshots outside the cluster scope since if for some reason the cluster is deleted, the snapshots will be removed as well.I see we can upload mongodump’s or mongoexport’s to amazon s3 buckets, there are plenty of tutorials out there. In addition, I’ve realized, I don’t want to execute mongodump’s of my collections from a script, since the snapshots already do exists on cloud backup on atlas service.I have been reading some documentation and I found several ways to try to move data to external storage but seems that using the mongo API to get the snapshots created and transfer them is a good option here. However, I got confused about which option to use:I got this cloud manager, mongo documentation link to get the snapshots for one cluster but it seems it uses this endpoint: https://cloud.mongodb.com/api/public/v1.0And I found this atlas mongo documentation link to download the snapshots via API Resources but it uses this endpoint: https://cloud.mongodb.com/api/atlas/v1.0Both endpoints are completely different. Considering my cluster is on atlas service, any of them could works for me to interact with my cluster via API?On the other hand, having said this, I am also wondering:ok, by calling the APIs I can get all snapshots but is not clear for me if they are transported directly to AWS S3 buckets when a get/post and upload to s3 operations actions takes place.So, can I transfer an existing snapshot from mongo to s3 without downloading it first?\nDo we need to care about temporary storage in between along this process?\nI mentioned this because once I am dealing with GB of data, will be reliable to trust in a direct transport from mongo to s3?\nIf so I am afraid a security private network link or service endpoint should be used to get the proper bandwidth for this operation transport Am I right here?\nI am not entirely sure if is possible to avoid the downloading process, I would say not.What is the best option to move my snapshots collections to an external storage service like s3 or azure storage accounts?",
"username": "Bernardo_Garcia"
},
{
"code": "",
"text": "Hi @Bernardo_GarciaWelcome to MongoDB community.I believe the way to go is to issue a restore command and get an http download link.Use “download” type.Following that you will need to use a server which might stream the downloaded link file to your external storage.Of course this server needs to have the firewall and credentials to download and upload the backup tar.gz…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Dear @Pavel_Duchovny\nThanks for your advice and the welcome \nI am going to read carefully the context of restore jobs and perhaps I might have some questions after.",
"username": "Bernardo_Garcia"
},
{
"code": "curl --user \"{PUBLIC-KEY}:{PRIVATE-KEY}\" --digest --include \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{SOURCE-CLUSTER-NAME}/backup/restoreJobs\" \\\n --data '{\n \"snapshotId\" : \"{SNAPSHOT-ID}\",\n \"deliveryType\" : \"download\",\n }'\nHTTP/2 401\nwww-authenticate: Digest realm=\"MMS Public API\", domain=\"\", nonce=\"3X1bUO+5LBP3jVBffeHG0ZHprzcw8MQO\", algorithm=MD5, qop=\"auth\", stale=false\ncontent-type: application/json\ncontent-length: 106\nx-envoy-upstream-service-time: 1\ndate: Tue, 09 Mar 2021 12:25:52 GMT\nserver: envoy\n\nHTTP/2 403\ndate: Tue, 09 Mar 2021 12:25:53 GMT\ncontent-type: application/json\nstrict-transport-security: max-age=31536000\nx-frame-options: DENY\ncontent-length: 187\nx-envoy-upstream-service-time: 23\nserver: envoy\n\n{\"detail\":\"This resource requires access through an access list of ip ranges.\",\"error\":403,\"errorCode\":\"RESOURCE_REQUIRES_ACCESS_LIST\",\"parameters\":[\"213.127.5.142\"],\"reason\":\"Forbidden\"}%\nHTTP/2 401publicprivateHTTP/2 403",
"text": "I am trying to download a snapshot of this way:But I got the following error:Not sure about first HTTP/2 401 error, since I am using the correct public and private keysRegarding second HTTP/2 403 error I have whitelisted my home IP address on IP access list section but not sure why it does not works.",
"username": "Bernardo_Garcia"
},
{
"code": "",
"text": "Hi @Bernardo_Garcia,Please make sure to whitelist the IP in the API key section.Here is the guide make sure to follow each step:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "curl --user \"{PUBLIC-KEY}:{PRIVATE-KEY}\" --digest --include \\\n --header \"Accept: application/json\" \\\n --header \"Content-Type: application/json\" \\\n --request POST \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{SOURCE-CLUSTER-NAME}/backup/restoreJobs\" \\\n --data '{\n \"snapshotId\" : \"{SNAPSHOT-ID}\",\n \"deliveryType\" : \"download\"\n }'\nHTTP/2 401\nwww-authenticate: Digest realm=\"MMS Public API\", domain=\"\", nonce=\"1kjxFzn5t6UROx5NmotVsgE6wcLe1zw0\", algorithm=MD5, qop=\"auth\", stale=false\ncontent-type: application/json\ncontent-length: 106\nx-envoy-upstream-service-time: 1\ndate: Tue, 09 Mar 2021 13:37:48 GMT\nserver: envoy\n\nHTTP/2 200\ndate: Tue, 09 Mar 2021 13:37:48 GMT\nx-mongodb-service-version: gitHash=e9b00d560d9ff15b4dd614bb75d640577ac4f44f; versionString=v20210217\ncontent-type: application/json\nstrict-transport-security: max-age=31536000\nx-frame-options: DENY\ncontent-length: 851\nx-envoy-upstream-service-time: 48\nserver: envoy\n\n{\n\t\"cancelled\": false,\n\t\"deliveryType\": \"download\",\n\t\"deliveryUrl\": [],\n\t\"expired\": false,\n\t\"failed\": false,\n\t\"id\": \"60477a2d3d7834598fbee31a\",\n\t\"links\": [{\n\t\t\"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{SOURCE-CLUSTER-NAME}/backup/restoreJobs/60477a2d3d7834598fbee31a\",\n\t\t\"rel\": \"self\"\n\t}, {\n\t\t\"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{SOURCE-CLUSTER-NAME}/backup/snapshots/604705afbb33ec63425c7553\",\n\t\t\"rel\": \"http://cloud.mongodb.com/snapshot\"\n\t}, {\n\t\t\"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/clusters/{SOURCE-CLUSTER-NAME}\",\n\t\t\"rel\": \"http://cloud.mongodb.com/cluster\"\n\t}, {\n\t\t\"href\": \"https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}\",\n\t\t\"rel\": \"http://cloud.mongodb.com/group\"\n\t}],\n\t\"snapshotId\": \"{SNAPSHOT-ID}\",\n\t\"timestamp\": \"2021-03-09T05:21:33Z\"\n}\n401 Unauthorizedpublic_keyprivate_keysHTTP/2 200*.tar.gzhttps://restore-604782d892e66d4c432ca912.xxxxx.azure.mongodb.net:port/vxxxxxxxxxxx/restore-{SNAPSHOT-ID}.tar.gz{SNAPSHOT-ID}.tar.gzhttps://restore-604782d892e66d4c432ca912.xxxxx.azure.mongodb.net:port/vxxxxxxxxxxx/restore-{SNAPSHOT-ID}.tar.gzcurl",
"text": "I got it. Thanks for the clue. In additionI had to go to:\nOrganization>Access Manager>API Keys>\nSelect the existing used API KEY, Edit permissions\nPrivate Key & Access List and add my home IP address.I got the following output with my command:I don’t know why the first HTTP 401 Unauthorized client error status response code. I mean my credentials or public_key and private_keys used has permissions over the project cluster I am getting the snapshot.But regarding the second HTTP/2 200 status code, I can see a restore job was created, with its respective direct download link to the *.tar.gz snapshot file of this wayhttps://restore-604782d892e66d4c432ca912.xxxxx.azure.mongodb.net:port/vxxxxxxxxxxx/restore-{SNAPSHOT-ID}.tar.gzimage1276×446 40.3 KBThat is good, but regarding my objective of moving the existing snapshots outside the mongo cluster scope, perhaps I am not understanding if creating a restore job, if this result is aligned with the objective of stream or move my snapshots directly from mongo to external storage …I mean, of course, it is, I got a link to download directly the {SNAPSHOT-ID}.tar.gz and I can use that link to upload this snapshot to my AWS s3 or azure storage account, but then getting back to my question above:Will I need to download every snapshot and then in a separate step upload it to my external storage?\nSo, Is that the way to proceed then? (I know the question is quite obvious, but I want to make a double check with you).If so, in that case is not possible to make a direct transport from mongo to aws s3 for example?\nIf so, how can I get the https://restore-604782d892e66d4c432ca912.xxxxx.azure.mongodb.net:port/vxxxxxxxxxxx/restore-{SNAPSHOT-ID}.tar.gz link value during curl command (or whatever tool approach) execution runtime, just to get it and used it in a subsequent step to be uploaded to my external storage? I ask this because my final goal is to automate this snapshots moving process from mongo to somewhere else.",
"username": "Bernardo_Garcia"
},
{
"code": "",
"text": "Hi @Bernardo_Garcia,Unfortunately you will need to script it .There might be a 3rd party or aws tool to fo this but we provide only a link.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Unfortunately you will need to script it .Yes is that I was thinking. ",
"username": "Bernardo_Garcia"
},
{
"code": "\"deliveryUrl\": [],deliveryUrlcurl",
"text": "\"deliveryUrl\": [],@Pavel_Duchovny sorry it’s me again.\nDo you know how can I retrieve the deliveryUrl parameter in the output of curl command?Here says:deliveryUrl: array of strings\nIf empty, Atlas is processing the restore job. Use the Get All Cloud Backup Restore Jobs endpoint periodically check for a deliveryUrl download value for the restore job.Is not clear for me what kind of parameters should I put in the request for getting it.",
"username": "Bernardo_Garcia"
},
{
"code": "",
"text": "Hi @Bernardo_Garcia,I guess that you can use a pipe to a tool like jq to get the value but probably old good grep and awk can help out.If you have a sample request I can help more maybe.Thanks",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Moving existing atlas mongo snapshots to external storage | 2021-03-08T21:42:47.572Z | Moving existing atlas mongo snapshots to external storage | 8,326 |
|
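Following on from the last exchange in the thread above, polling a restore job until its deliveryUrl is populated can be scripted roughly as below. The API key pair, group ID, cluster name, and job ID are placeholders; the endpoint shape matches the restore-job URL shown earlier in the thread.

```python
import time

import requests
from requests.auth import HTTPDigestAuth

BASE = "https://cloud.mongodb.com/api/atlas/v1.0"
auth = HTTPDigestAuth("PUBLIC-KEY", "PRIVATE-KEY")  # placeholder API key pair
job_url = f"{BASE}/groups/GROUP-ID/clusters/CLUSTER-NAME/backup/restoreJobs/JOB-ID"

# Poll until Atlas has finished preparing the download link.
while True:
    job = requests.get(job_url, auth=auth).json()
    if job.get("deliveryUrl"):
        download_url = job["deliveryUrl"][0]
        break
    time.sleep(30)

# The .tar.gz at download_url can then be streamed to S3 or Azure Blob Storage
# by whatever upload tooling the rest of the script uses.
print(download_url)
```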
null | [
"python",
"crud"
] | [
{
"code": "bulkWrite()UpdateOne()upsert$all[{'index': 0,\n'code': 54,\n'errmsg': \"cannot infer query fields to set, path 'players' is matched twice\",\n'op': SON([('q', {'players': {'$all': ['ipbfxu', 'hyndee']}}), ('u', {'$set': {'test_name': 'test_value'}}), ('multi', False), ('upsert', True)])}]\ndef add_or_update_matches_2(matches):\n db = get_db()\n requests = []\n for match in matches:\n request = UpdateOne(\n {\"players\": {\"$all\": match[\"players\"]}},\n {\"$set\": {\"test_name\": \"test_value\"}},\n upsert=True\n )\n requests.append(request)\n if len(requests) == 0:\n return None\n result = db.matches.bulk_write(requests)\n return result\n",
"text": "Hello,I’ve implemented a (Python) method for adding/updating data on (sports) matches. It relies on bulkWrite() with multiple UpdateOne() calls, each with the upsert flag set to true. When I use $all in my filter, to make sure I’m matching all the submitted players in a given array, I get this:Here’s the method, condensed to its bare minimum:If I remove the $all, it works as I want, except that I need to be able to have players in any order. The error doesn’t seem to come from pymongo, so I guess it’s internal to MongoDB. Also, I don’t understand what the error means and what I’m supposed to do instead. Could it be a bug?",
"username": "Gustaf_Liljegren"
},
{
"code": "players$elemMatch$all",
"text": "Hi @Gustaf_Liljegren! Welcome to the forums.There are many reasons this might be failing. Can you provide a few sample documents so I can understand what the structure is like? One possibility I can think of is that if players is an array of documents you need to use $elemMatch with $all as explained here.",
"username": "Prashant_Mital"
},
{
"code": "db.matches.bulkWrite( [\n { deleteMany : { \"filter\" : { \"players\" : { \"$all\" : [ \"Ada\", \"Karl\" ] } } } },\n { insertOne : { \"document\" : { \"players\" : [ \"Karl\", \"Ada\" ] } } },\n { updateOne :\n {\n \"filter\" : { \"players\" : { \"$all\" : [ \"Ada\", \"Karl\" ] } },\n \"update\" : { \"$set\" : { \"success\" : true } },\n \"upsert\" : true\n }\n }\n] )\ndb.matches.find( { \"players\" : { \"$all\" : [ \"Ada\", \"Karl\" ] } } )\ninsertOnedeleteManyupdateOne$elemMatch",
"text": "Thanks for helping out,Using this short test, I was able to reproduce the problem:If I comment out the line containing insertOne, so that the document is created rather than updated, the “cannot infer query fields to set” error occurs. Note that the same filter works when updating an existing document (and with deleteMany), but when inserting a document using updateOne, it fails.As you can see, this is a string array, not an array of objects, so the $elemMatch syntax shouldn’t be necessary.Would love to see a explanation for this behavior.",
"username": "Gustaf_Liljegren"
},
{
"code": "upsert: trueupdateOneupsert: trueupdateupdate\"players\" : {$all: [ \"Ada\", \"Karl\"]}",
"text": "Hi @Gustaf_Liljegren. Thanks for the repro code! The reason this doesn’t work is due to the upsert: true clause in the updateOne operation. You see, when an update operation with upsert: true doesn’t find any matching documents and the update parameter contains operations using update operator expressions (https://docs.mongodb.com/manual/reference/operator/update/#id1) the server first builds a base document using the query parameter itself and then applies the update operations to it.\nIn your case, the server is unable to look at \"players\" : {$all: [ \"Ada\", \"Karl\"]} and figure out that the base document is a 2-member array which results in this error.It is possible that update eventually becomes smart enough to handle this but in the meantime you will need to workaround this limitation in your code.",
"username": "Prashant_Mital"
},
{
"code": "$elemMatch\"players\" : { \"$all\" : [ { \"$elemMatch\" : { \"$eq\" : player } } for player in match[\"players\"] ] }",
"text": "Many thanks @Prashant_Mital. It makes total sense now. I was able to workaround it using $elemMatch as you suggested:",
"username": "Gustaf_Liljegren"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | "cannot infer query fields to set" | 2021-02-23T21:29:37.855Z | “cannot infer query fields to set” | 3,296 |
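Dropping the $elemMatch workaround from the last reply back into the poster's original bulk-upsert method gives roughly the following sketch; connection details are placeholders, and the filter and update mirror the snippets in the thread.

```python
from pymongo import MongoClient, UpdateOne

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder connection


def add_or_update_matches(matches):
    requests = []
    for match in matches:
        # $elemMatch per player lets the upsert path build a base document,
        # avoiding the "cannot infer query fields to set" error.
        players_filter = {
            "$all": [{"$elemMatch": {"$eq": p}} for p in match["players"]]
        }
        requests.append(UpdateOne(
            {"players": players_filter},
            {"$set": {"test_name": "test_value"}},
            upsert=True,
        ))
    return db.matches.bulk_write(requests) if requests else None
```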
null | [
"replication"
] | [
{
"code": "",
"text": "Hi,\nIs MongoDB community edition support PSS Model ? If Yes can anyone please share related documents.",
"username": "Arun_Bashaboina"
},
{
"code": "",
"text": "Yes it does. See the difference between the 2.",
"username": "steevej"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | PSS Replication setup on MongoDB Community Edition | 2021-03-09T17:01:22.093Z | PSS Replication setup on MongoDB Community Edition | 1,666 |
null | [
"data-modeling"
] | [
{
"code": "likesuser_likesuser IDuser_iditem_iditems",
"text": "Continuing from my latest thread: Should Areas (country, province, city, district) fields inside Listing collections stamped as index? - Working with Data - MongoDB Developer Community ForumsThink of each user can like many posts or products that they want, this is a Many to Many relationships. In SQL we could have a table named likes or user_likes to store the user’s liked items.Remember that, of course, a user can store thousand or perhaps millions.But how to accomplish it in MongoDB? There are 3 scenarios or schemas that I can think of;#1 The ??? Pattern\nI don’t know what or how to name it, but with this pattern, we could have a collection that automatically creates 1 document for each user. This document would store all the liked items’ ID by the user.The naming convention for every ID within this collection would be user ID, therefore whenever we would like to get or compare items whether they’re liked by the user, we would just have to query this particular document that belongs to a user.#2 Many to Many documents\nThink of this collection is a pivot table in the SQL world, thus every document here are a pivot document that similar to pivot rows SQL.Within every document, we will store the user_id and the item_id.#3 Document references\nAs stated here: Model One-to-Many Relationships with Document References — MongoDB ManualWithin the items collection, we would store all the user ID who likes the item.Thank you everyone at MongoDB.",
"username": "ibgpramana"
},
{
"code": "{ userId : ...,\n ItemId : ...,\n Details : {\n ItemName : ...,\n ItemDiscription : ...,\n DateAdded : ..., \n PicUrl: ....\n ....\n}\n}\n",
"text": "Hi @ibgpramana,Since this many to many relationship is going to be masses to masses I would not embed it in any way.I would use a relationship collection to avoid unbound arrays antipatterns.So the collection of users_likes will reference a userId to an itemId per document. However, I would not only specify the item id here but also some immutable information I need to show my list:Indexing userId and Details.DateAdded : -1 might help you search for user specific items sorted by recently added.And to show an initial list will not require another access to Item collection and will be effective.Let me know if you have any questions.I suggest to read those articles:https://www.mongodb.com/article/schema-design-anti-pattern-summary/https://www.mongodb.com/article/mongodb-schema-design-best-practices/Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "user_iditem_iddetails",
"text": "Thank You, Mr Pavel.Your reply really helps!I’ll surely create indexes for user_id and item_id, and by the way, the reason why we’re putting the details object and its field as sub-document, is so that we don’t need another query access to each item document right?(as you stated: And to show an initial list will not require another access to Item collection and will be effective.)If we don’t, then we’d have to always run query access to each liked item just for getting their names, description, and picture URL.Thank you!",
"username": "ibgpramana"
},
{
"code": "user_iditem_iddetailsdb.users_likes.find({user_id : USER_ID}).sort({\"Details.DateAdded\" : -1})\ndb.items.find({_id : ITEM_ID});\n",
"text": "Hi @ibgpramanaI’ll surely create indexes for user_id and item_id , and by the way, the reason why we’re putting the details object and its field as sub-document, is so that we don’t need another query access to each item document right?Well yes. If you have all the data you need to show your list in UI you can only query this relationship collection.If more details are needed you can implement a click on item and show more details by doing a single document query to items based on ItemId:Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you, Mr Pavel.#Solved.",
"username": "ibgpramana"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to store User's liked items? | 2021-03-09T01:46:32.194Z | How to store User’s liked items? | 12,177 |
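The indexing advice in the thread above translates to something like the sketch below. Field names follow the example document in the discussion; the connection details and the second index are assumptions.

```python
from pymongo import MongoClient, ASCENDING, DESCENDING

db = MongoClient("mongodb://localhost:27017")["mydb"]  # placeholder connection

# Compound index: fetch one user's likes, newest first, straight from the index.
db.users_likes.create_index([("userId", ASCENDING),
                             ("Details.DateAdded", DESCENDING)])

# Optional reverse lookup: which users liked a given item.
db.users_likes.create_index([("itemId", ASCENDING)])

# Query pattern from the discussion:
cursor = (db.users_likes.find({"userId": "USER_ID"})
          .sort("Details.DateAdded", DESCENDING))
```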
null | [
"student-developer-pack"
] | [
{
"code": "",
"text": "Hello,Can anyone please tell me if the free certification exam that was provided after completing any learning path from Github student developer pack still active in march 2021? and how do I get the referal code if I finish the learning path now",
"username": "Shivam_Pandey"
},
{
"code": "",
"text": "Hi @Shivam_Pandey,Welcome to the forums! Free certification will still be available in March.\nIf you go to your profile here: MongoDB Student Pack, you’ll find instructions on how to receive the 100% discount.Good luck!Lieke",
"username": "Lieke_Boon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help with free exam after learning path | 2021-03-08T09:57:32.638Z | Help with free exam after learning path | 4,982 |
null | [] | [
{
"code": "",
"text": "I was trying to filter a collection by a related Id which have relationship to another collection\nWhen I try to filter with query ‘relatedId == 42’, it return error message Accessing object type A which has been invalidated or deleted.\nThe related id is defined as int in the schema, but when I log it, it shows that the related id is an object.",
"username": "Yap_Xia_Tan"
},
{
"code": "",
"text": "Hi Yap_Xia_Tan,In order to help us help you, could you share with us what SDK you are using? That would allow us to provide you with some helpful examples. Additionally some snippets of your code would be beneficial for us to understand where you are having issues:If any of the code has sensitive information, we can provide you with a link to securely upload the code so that it won’t be publically visible.",
"username": "Andrea_Catalini"
}
] | How to filter a collection by related Id which have relationship to another collection? | 2021-03-09T08:27:18.127Z | How to filter a collection by related Id which have relationship to another collection? | 2,541 |
null | [
"indexes"
] | [
{
"code": "area: {\n country: \"us\",\n province: \"california\",\n city: \"san-fransisco\",\n district: \"bayview\"\n}\n",
"text": "Hi there,I’m just starting a fresh project and recently migrated (new) to MongoDB. The project is about an area-based listing directory where user can find a listing based on a country, province, city, and district level.So in order to do so, both in Area and Listing collections will have the following field structure.on Listing document:Again, the question is about should I stamp those fields as indexes? so that whenever we query the listings based on country or province or city or district level, the querying performance is faster.Thanks",
"username": "ibgpramana"
},
{
"code": "area.$**uscalifornia",
"text": "Hi @ibgpramanaWelcome to MongoDB community.Of course if those fields are your predicates you should index them for fast search.However, the way you index them can depend on nature of queries.If your query always include all fields you need to create a compound key in order of fields based on Equility Sort and Range rule order.Best practices for delivering performance at scale with MongoDB. Learn about the importance of indexing and tools to help you select the right indexes.Now if your query can have any of the fields placed as a standalone you might need an index for each field seperately or a wild card index on area.$**.Having said that there might be a good trick on autocomploting the predicates to force a compound index optimal scan. For example if we search only for sanfransisco on app side we can add us and california to the search as those make sense.Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "listingsarea_idareasarea.city == city:san-fransiscoarea.country == usarea.countryarea.provincearea.cityarea.district",
"text": "Thank you for your help, Mr Pavel.I was using MySQL and a table listings which had area_id as foreign key referenced to areas table where it was structured in Nested Set form.I don’t think my query always include all fields, let say a user only search for listings in san-fransisco, then the query will only look for area.city == city:san-fransisco. Same if a user only asked to see listings in US, the query would only look for area.country == us.as I stored the areas in each listing document as an array, I think I should create multi-key indexes on listing collections, for example:I read them here:Multikey Indexes — MongoDB Manual",
"username": "ibgpramana"
},
{
"code": "",
"text": "Hi @ibgpramana,I am not certain a multikey on a nested document will be effecient as on arrays of subdocuments where you index single fieldsAnyway if you expect one predicates per query you might use seperate or wild card indexes on area.Multikey index will be performant only if you use elemMatch on area…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "area",
"text": "Hi Mr @Pavel_Duchovny,I do think that we only expect one predicate per query, as an example, we’re only searching or retrieving listing based on a country, or province, or city, or district levelso as you say that we might use separate indexes on Area, you mean the single field index?In summary, we just stamp the area field as indexes. like what the document says in Create an index on embedded document section: Single Field Indexes — MongoDB ManualPlease confirm, and thanks for your opinion/answer. It really helps.",
"username": "ibgpramana"
}
] | Should Areas (country, province, city, district) fields inside Listing collections stamped as index? | 2021-03-06T01:13:36.423Z | Should Areas (country, province, city, district) fields inside Listing collections stamped as index? | 3,486 |
null | [] | [
{
"code": "",
"text": "Hello, I am new to NoSQL. We use Parse Server and needed to know the minimum version required MongoDB v4.0, v4.2 and v4.4.Please help! Thank you.",
"username": "Harshal_Karande"
},
{
"code": "",
"text": "Welcome to the MongoDB Community @Harshal_Karande!The Parse Server README and Parse Server Guide include more information on compatible software versions and prerequisites. Please reference MongoDB Compatibility in the README.The Parse Server maintainers are aiming to keep up with the latest supported versions of MongoDB server, so any of your suggested MongoDB server versions are an option with the latest verson of Parse Server:Parse Server is continuously tested with the most recent releases of MongoDB to ensure compatibility. We follow the MongoDB support schedule and only test against versions that are officially supported and have not reached their end-of-life date.If you are starting a new project/deployment, I would use the most recent production version of MongoDB (currently 4.4) so you have the latest features & improvements. MongoDB 4.4 (released Sept, 2020) has been out for about six months now so should be a solid starting point.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Parse server minimum version required for MongoSB v4.0, 4.2, 4.4 | 2021-03-02T22:41:39.265Z | Parse server minimum version required for MongoSB v4.0, 4.2, 4.4 | 2,513 |
null | [
"app-services-user-auth"
] | [
{
"code": " Credentials credentials = Credentials.anonymous();\n ClientApp.loginAsync(credentials, result -> {\n if (result.isSuccess()) {\n Log.v(\"QUICKSTART\", \"Successfully authenticated anonymously.\");\n User user = ClientApp.currentUser();\n // interact with realm using your user object here\n } else {\n Log.e(\"QUICKSTART\", \"Failed to log in. Error: \" + result.getError());\n }\n });\n }\n }); \n ClientApp.loginAsync(credentials, result -> {\n",
"text": "Hi everyone,I’m a new French student (in fact worker that try take over studies in programming) and I try to program my first mobile application.\nI would like to connect this application to MongoDB and firstly use the login function through email/password authentification. Because it doesn’t work, I tried to use the anonymous authentification that doesn’t work too.This is an Android app programmed by Android Studio in JAVA.My code which is si:ply (took on Realm web site):mLogin.setOnClickListener(new View.OnClickListener() {\n@Override\npublic void onClick(View viewMain) {\nApp ClientApp = new App(new AppConfiguration.Builder(ClientAppId).build());If I use the degug mode, I can see that this line stop the login function.May somebody could help me and explain me what happen wrong ?Thank you.",
"username": "Florian_Vigier"
},
{
"code": "app.getAnonymous()app.getEmailPassword()",
"text": "Hi Florian! Welcome to the MongoDB Community Forum!First: have you completed all the prerequisites (listed here) for the quickstart? You’ll need to create a backend Realm app and connect to that app using the App ID you can find in the Realm UI.If you’ve completed that, make sure that you’ve enabled Anonymous Authentication. You’ll have to register your anonymous user before you can log in with it – here’s an example of registering an email/password user so you can get a sense of how to do that. Registering an anonymous user should work the same, but with app.getAnonymous() instead of app.getEmailPassword().Once you’ve registered your anonymous user, you should be able to log in with that user. Keep in mind that these methods are asynchronous, so you’ll have to wait for a successful registration callback before you can log in with that anonymous user.",
"username": "Nathan_Contino"
},
{
"code": "",
"text": "Hi Nathan,Thank you for your answer.I did the first step as you sent to me.And I use the same code as you sentimage1912×1114 233 KB\nSorry, because I’m a new user, I can’t add more than one picture (So, the picture can be unclear).\n- First top > The init\n- Second top > The class\n- bottom > The login codeBut it doesn’t work. I guess I do something wrong but what ? ",
"username": "Florian_Vigier"
},
{
"code": "",
"text": "Hello everyone,After some hours to search what’s the problem … I finally find it.It was so simple : don’t forget to click on “deploy” button (top of the page) on the Realm web page for validate the authentification mode.",
"username": "Florian_Vigier"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problem of authentification between mobile App and Realm | 2021-02-18T22:10:41.532Z | Problem of authentification between mobile App and Realm | 3,167 |
null | [
"sharding",
"charts"
] | [
{
"code": "",
"text": "We are playing with the thought to use Charts for providing largely visualised data access for several teams with a rapid growing data base that need to be sharded in the future.Therefore, we would like to know if Charts also works with a sharded cluster or if we need to consider using a different approach to prevent additional tasks for us in the future.Best regards,\nRalf",
"username": "Ralf_Mader"
},
{
"code": "",
"text": "Yes, Charts works with sharded clusters.",
"username": "tomhollander"
},
{
"code": "",
"text": "Thanks for the fast reply and your webinar last week ",
"username": "Ralf_Mader"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Charts with sharded cluster | 2021-03-08T11:02:52.810Z | MongoDB Charts with sharded cluster | 2,869 |
null | [
"aggregation"
] | [
{
"code": "[{\n \"id\": \"event-1\",\n \"userId\": \"bob\",\n \"type\": \"CHARACTER_CREATE\",\n \"payload\": {\n \"AKAs\": [\n {\n \"id\": \"aka-1\",\n \"original\": {\n \"language\": \"en\",\n \"value\": \"Ross\"\n },\n \"season\": [\n \"1\"\n ]\n }\n ],\n \"id\": \"character-1\"\n }\n},{\n \"id\": \"event-2\",\n \"userId\": \"alice\",\n \"type\": \"CHARACTER_ADD_AKAS\",\n \"payload\": {\n \"id\": \"character-1\",\n \"AKAs\": [\n {\n \"id\": \"aka-2\",\n \"original\": {\n \"language\": \"en\",\n \"value\": \"Ross Geller\"\n },\n \"season\": [\n \"2\"\n ]\n }\n ]\n }\n},{\n \"id\": \"event-3\",\n \"userId\": \"bob\",\n \"type\": \"CHARACTER_ADD_AKA_TITLES\",\n \"payload\": {\n \"id\": \"character-1\",\n \"AKAs\": [\n {\n \"id\": \"aka-2\",\n \"season\": [\n \"3\"\n ]\n }\n ]\n }\n}]\n{\n \"id\": \"character-1\",\n \"AKAs\": [\n {\n \"id\": \"aka-2\",\n \"season\": [\n \"1\"\n ],\n \"original\": {\"language\": \"en\", \"value\": \"Ross Geller\"}\n },\n {\n \"id\": \"aka-2\",\n \"season\": [\n \"2\",\n \"3\"\n ],\n \"original\": {\"language\": \"en\", \"value\": \"Ross\"}\n }\n ]\n}\n[\n{$match: {\n // overkill i know, but will be useful later\n type: {$in: [\n \"CHARACTER_CREATE\",\n \"CHARACTER_ADD_AKAS\",\n \"CHARACTER_ADD_AKA_TITLES\",\n \"CHARACTER_ADD_AKA_LOCS\"\n ]},\n // the real filter\n id: \"character-1\"\n}},\n// i only want to look at payload in the output for now\n{$project: {\n _id: 0,\n payload: 1\n}},\n// one level of arrays\n{$unwind: {\n path: \"$payload.AKAs\",\n preserveNullAndEmptyArrays: true\n}},\n// the next level of arrays\n{$unwind: {\n path: \"$payload.AKAs.season\",\n preserveNullAndEmptyArrays: true\n}}, {$unwind: {\n path: \"$payload.AKAs.localization\",\n preserveNullAndEmptyArrays: true\n}},\n// now the fun part, this outputs 2 documents which look exactly like what I intended for the AKAs.\n{$group: {\n _id: {entityId: \"$payload.id\", akaId: \"$payload.AKAs.id\"},\n \"original\": {$mergeObjects: \"$payload.AKAs.original\"},\n \"season\": {\n $push: \"$payload.AKAs.season\"\n }\n}},\n// fail: I dont know how to join the 2 documents output from above into array elements on one final output document.\n{$group: {\n _id: {id: \"$_id.entityId\"},\n // ERROR: \"$\" is not valid. Also, how do I get the AKAs.id in each array element?\n AKAs: {$push: \"$\"}\n}}]",
"text": "I am implementing the Event Sourcing & CQRS patterns. I have a collection of documents with the events from users to manipulate models. I want to create a separate collection of the models for querying with the aggregation framework.One challenge is that the model will have arrays within nested-documents that are stored under arrays themselves. The collection of events conveniently stores the user inputs in a similar shape as the intended model, which allows me to use $unwind for each level of array nesting. I was able to $group one level of nesting, and I assumed I simply needed to $group again but I’m unclear how to do it with the results I’m seeing in Compass.// here is my collection of user events// here is an example of the expected result after using the aggregation framework.\n// combine payloads above to make this model// here is my attempt to compose the aggregation - it feels 95% done.",
"username": "John_Grant"
},
{
"code": "$group: {\n _id: {id: \"$_id.entityId\"},\n // the fix\n AKAs: { $push: {\n \"id\": \"$_id.akaId\",\n \"season\": \"$season\",\n \"original\": \"$original\"\n }}\n}",
"text": "This is solved.I did not realize you can use an expression in $push that specifies multiple keys.The last stage needed to be:",
"username": "John_Grant"
},
{
"code": "payload.series : [\"Friends\"]{\n \"id\": \"character-1\",\n \"series\": [\"Friends\"],\n \"AKAs\": [..as shown above.]\n}",
"text": "Actually I have a follow up question.If the first user event, CHARACTER_CREATE, contained a series attribute that happens to be an array, e.g. payload.series : [\"Friends\"], I am confused how to send that attribute through the pipeline. Any aggregation I try results in an array of arrays.The result after aggregation should look like:",
"username": "John_Grant"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Help re-grouping after unwind | 2021-03-07T18:19:08.720Z | Help re-grouping after unwind | 8,140 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "I just downloaded the mongo shell zip for windows, added <mongo shell’s download directory>/bin to $PATH variable and trying to invoke mongo shell in command line but below is the error prompt which I am getting. Any help on this pleaseError: “The code execution cannot be proceed because vcruntime140_1.dll was not found. Reinstalling the program may fix the probem.”",
"username": "srikanth_raj_malekar"
},
{
"code": "",
"text": "HI Srikanth,I have same issue, did you found any resolution for the same?",
"username": "Jignesh_Pithava"
},
{
"code": "mongosh",
"text": "Welcome to the community @srikanth_raj_malekar & @Jignesh_Pithava!I recommend using the MSI installer for MongoDB instead of the ZIP download.Quoting from SERVER-26028: Windows zip releases missing msvcr120.dll and msvcp120.dll (a similar issue reported for an older MongoDB release):MongoDB ships two types of releases for Windows: zipfiles and MSIs. If you install our MSIs you will notice that they include the merge modules for the redistributable runtime libraries. This allows us to automatically install the necessary library support should the system not already have it.Unfortunately, the same is not possible with the zip distribution, because a zip distribution does not give us an opportunity to run install actions when you unpack it. So there is no way for us to actually install the redistributable runtime libraries.If you only want to install an admin UI for MongoDB, I would also look into either the new mongosh shell or MongoDB Compass.If you are unable to use MSI installers, you should be able to install the missing runtime libraries by downloading and installing the Microsoft Visual C++ Redistributable for Visual Studio 2015, 2017 and 2019.If you are still experiencing install issues, please confirm your specific version of Windows and the version of the MongoDB distribution you are trying to install.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello Stennie,I followed all the steps which you have mentioned, but still I am getting same error as “The code execution cannot be proceed because vcruntime140_1.dll was not found. Reinstalling the program may fix the problem.”Windows Version : Microsoft Windows [Version 10.0.18363.1316]Waiting for your reply.\nThanks in advance,Best Regards,\nRakhee",
"username": "Rakhee_Hongunti"
},
{
"code": "",
"text": "uninstall all MS C++ runtime libraries of the system\nrestart Windows\ninstall the MS C++ 2015 runtime library x64 (see above)VCRUNTIME140.DLL is missing error addresses issues with a file that is part of Microsoft Visual C++ 2015 Redistributable. Fix it now yourself, for free.",
"username": "Remove_Guide"
}
] | The code execution cannot be proceed because vcruntime140_1.dll was not found | 2020-09-09T02:13:54.341Z | The code execution cannot be proceed because vcruntime140_1.dll was not found | 9,074 |
null | [
"atlas-device-sync"
] | [
{
"code": "",
"text": "I am loving what I am seeing and using when it comes to Realm, but I having a bit of hard time wrapping my head around some structure and hope the community can help provide some insight.Lets say you are building a cash register app. To place everything in a single realm and sync everywhere sounds great, but can become very bloated (thing about a years worth of register transactions multiplied by the number of customers using your application) and every time someone logs into a new device, the entire database would have to be downloaded.I then thought about 2 realms, 1 for all the settings which is synced everywhere and 1 with all the transactions (using query based synchronization) where I could sync only the past month of transactions and require online access to go back further. After reading the docs though (correct me if I am wrong) I believe subscriptions only live as long as the app is open. If the app closes, I would have to re-download the past months worth of data. Additionally, I was being lead to believe (again, correct me if I am wrong) that query based synchronization is very resource intensive, so if I ended up having hundreds or thousands of clients using this, am I causing myself more issues then necessary?Is there something missing or a better way to handle this scenario?",
"username": "Robert_Charest"
},
{
"code": "",
"text": "using query based synchronizationThat is a depreciated feature. MongoDB Realm is a FULL sync database. There may be a change to that in the future but plan accordingly. See the MongoDB Realm Docs and the Sync section. The old Realm docs are depreciated and no longer apply.a years worth of register transactions multiplied by the number of customers using your applicationRealm is not well suited as a multi-tenant database within one Realm (your use case may vary). So one option is to have each customer with their own realm (or realms), significantly reducing file size.uery based synchronization is very resource intensive,As per above, that’s not currently an option.The good news is that Realm is pretty space efficient and thousands of objects are easily handled. Realm does have a shouldCompactOnLaunch feature which can reduce the size footprint.However, I would suggest another strategy; allow the user to archive old data which would remove it from the file, but still provide access. For example give the user the ability to purge data older than say 5 years but have it stored in a local file that could be accessed later if needed.",
"username": "Jay"
},
{
"code": "",
"text": "When using sync, I can define a partition id. This is how I intend to separate tenants. Its honestly one of the best solutions I have seen for a multi-tentant cloud database (unless of course I am missing something).Client archive options are a good idea, but most people will likely not use it unless forced to do so. Meanwhile, I (as a mobile developer) would be responsible for the data transfer rates of these large databases to each device. I am just concerned about an app not properly thought out causing large database utilization fees.",
"username": "Robert_Charest"
},
{
"code": "",
"text": "When using sync, I can define a partition id.When using MongoDB Realm sync, that’s not optional; you must define and use a partition.Keep in mind this is not a multi-tenant situation. It’s multi-user. The users will all be managed in the same way, any permissions will apply to all users and they will be using the same app space.As long as those requirements meet you’re use case, you should be good to go.Client archive options are a good idea, but most people will likely not use it unless forced to do soThats could be something you build into the app. ‘Data is automatically archived after 3 years’ or whatever timeframe/size fits the best.I (as a mobile developer) would be responsible for the data transfer rates of these large databases to each deviceKeep in mind that sync’d data isn’t resynch’d - so the data transfer would be the initial one (if there is any pre-populated data), and then only future changes. So if for example, a transaction references a client and and item that already exist, the client and item don’t resync - just the transaction data.",
"username": "Jay"
}
] | Help Understanding Realm Syncing Option | 2021-03-05T15:29:13.903Z | Help Understanding Realm Syncing Option | 2,660 |
null | [
"app-services-user-auth",
"realm-web"
] | [
{
"code": "",
"text": "I am using the realm-web sdk and when the app.logIn(Realm.Credentials.anonymous()) it stores the user id and auth tokens in local storage. In a new private window my application won’t authenticate an anonomous user and there is nothing stored in the local storage.",
"username": "Colin_Green"
},
{
"code": "",
"text": "Thanks for your post. I’ve created an issue on the Realm JS repository to track this: Web: Can't authenticate an anonymous user in incognito mode · Issue #3632 · realm/realm-js · GitHub feel free to follow that and add in more details as we investigate this further.",
"username": "kraenhansen"
}
] | Realm Annonymous Auth not working on private/incognito mode | 2021-03-08T16:03:25.970Z | Realm Annonymous Auth not working on private/incognito mode | 2,021 |
[] | [
{
"code": "",
"text": "image1825×964 189 KBLook at this. How the hell am I suppose to read docs like that? Same issue on firefox. It does not even help to have large screen.Btw the contents of docs is great but the form…Simon",
"username": "Simon_Obetko"
},
{
"code": "",
"text": "Hi Simon,Thank you for your feedback and for raising this issue. The team is actively working on a fix for the table overflow issue above and is planning for better large screen support in the near future.While not ideal, you can horizontally scroll to view the remainder of the description. We appreciate your patience while we address this.Cheers,\nJon DeStefano\nMongoDB Docs",
"username": "Jonathan_DeStefano"
}
] | Documentation looks really bad | 2021-03-08T12:52:08.349Z | Documentation looks really bad | 1,947 |
|
[
"replication"
] | [
{
"code": "apiVersion: v1\nkind: PersistentVolume\nmetadata:\n name: mongo-0-pv-volume\n labels:\n type: local\nspec:\n storageClassName: manual\n capacity:\n storage: \"2Gi\"\n accessModes:\n - ReadWriteOnce\n hostPath:\n path: \"/data/db/mongo-0\"\n---\n---\napiVersion: v1\nkind: PersistentVolumeClaim\nmetadata:\n namespace: default\n name: mongo-0-pv-claim\nspec:\n accessModes:\n - ReadWriteOnce\n storageClassName: manual\n resources:\n requests:\n storage: 1Gi\n----\n---\napiVersion: v1\nkind: Service\nmetadata:\n namespace: default\n name: mongodb-0-service\n labels:\n run: mongodb-0-service\nspec:\n ports:\n - port: 27017\n targetPort: 27017\n protocol: TCP\n selector:\n defacementComponent: mongodb-0\n---\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n namespace: default\n name: mongodb-0\n labels:\n env: test\n defacementComponent: mongodb-0\nspec:\n replicas: 1\n selector:\n matchLabels:\n defacementComponent: mongodb-0\n template:\n metadata:\n labels:\n defacementComponent: mongodb-0\n spec:\n terminationGracePeriodSeconds: 10\n affinity:\n podAntiAffinity:\n requiredDuringSchedulingIgnoredDuringExecution:\n - labelSelector:\n matchExpressions:\n - key: \"app\"\n operator: In\n values:\n - mongo\n topologyKey: \"kubernetes.io/hostname\"\n containers:\n - image: mtlbillyfong/mongodb-replica-set:20200330-stable-1\n name: mongodb-0\n resources:\n requests:\n ephemeral-storage: \"1Gi\"\n cpu: \"500m\"\n memory: \"1Gi\"\n limits:\n ephemeral-storage: \"2Gi\"\n cpu: \"700m\"\n memory: \"2Gi\"\n env:\n - name: \"MONGO_INITDB_ROOT_USERNAME\"\n# mongo mongodb://mongodb-0-service:27017,mongodb-1-service:27017,mongodb-2-service:27017\nMongoDB shell version v4.4.4\nconnecting to: mongodb://mongodb-0-service:27017,mongodb-1-service:27017,mongodb-2-service:27017/?compressors=disabled&gssapiServiceName=mongodb\nError: couldn't connect to server mongodb-2-service:27017, connection attempt failed: SocketException: Error connecting to mongodb-2-service:27017 (10.106.185.83:27017) :: caused by :: Connection timed out :\nconnect@src/mongo/shell/mongo.js:374:17\n@(connect):2:6\nexception: connect failed\nexiting with code 1\n",
"text": "I am looking for help with connecting to a mongo db replica-set , on K8s.\nGreatful for any help , pointers for the same…I have created a mongodb replicaset using the following method.\n- Created mongo instance in 3 pods.\n- Attached the 3 pods at startup in a replicaset\n- Exposed the services using K8s Services.\nThe Replicas are all up and running and can access each other [tested using ‘mongo --hostname ’]\nHowever, I am not able to access the services using the connection string\n“mongodb://mongodb-0-service:27017,mongodb-1-service:27017,mongodb-2-service:27017/admin?replicaSet=rs0”Error that I am getting isPS : I have also tried the usual method of creating a headless service and Statefulset , there too, I was not able to access the replicaset using the connection string .I am following the repo :Disclaimer: This method is intended for you to quickly setup a dev environment for testing, and not meant for production environment.\nReading time: 6 min read\n",
"username": "rajat_verma"
},
{
"code": "mongo --host mongodb-0-service\nmongo --host mongodb-1-service\nmongo --host mongodb-2-service\n",
"text": "tested using ‘mongo --hostname’First my version of mongo does not recognize the –hostname option. Which version are you using? I guess you mean that you tried all 3 of them with:and all were successful. The output of the command rs.status() when executed from the PRIMARY instance.",
"username": "steevej"
},
{
"code": "rs0:PRIMARYrs.status()\n{\n \"set\" : \"rs0\",\n \"date\" : ISODate(\"2021-03-07T17:33:49.772Z\"),\n \"myState\" : 1,\n \"term\" : NumberLong(2),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"heartbeatIntervalMillis\" : NumberLong(2000),\n \"majorityVoteCount\" : 2,\n \"writeMajorityCount\" : 2,\n \"optimes\" : {\n \"lastCommittedOpTime\" : {\n \"ts\" : Timestamp(1615138429, 1),\n \"t\" : NumberLong(2)\n },\n \"lastCommittedWallTime\" : ISODate(\"2021-03-07T17:33:49.596Z\"),\n \"readConcernMajorityOpTime\" : {\n \"ts\" : Timestamp(1615138429, 1),\n \"t\" : NumberLong(2)\n },\n \"readConcernMajorityWallTime\" : ISODate(\"2021-03-07T17:33:49.596Z\"),\n \"appliedOpTime\" : {\n \"ts\" : Timestamp(1615138429, 1),\n \"t\" : NumberLong(2)\n },\n \"durableOpTime\" : {\n \"ts\" : Timestamp(1615138429, 1),\n \"t\" : NumberLong(2)\n },\n \"lastAppliedWallTime\" : ISODate(\"2021-03-07T17:33:49.596Z\"),\n \"lastDurableWallTime\" : ISODate(\"2021-03-07T17:33:49.596Z\")\n },\n \"lastStableRecoveryTimestamp\" : Timestamp(1615138389, 1),\n \"lastStableCheckpointTimestamp\" : Timestamp(1615138389, 1),\n \"electionCandidateMetrics\" : {\n \"lastElectionReason\" : \"stepUpRequestSkipDryRun\",\n \"lastElectionDate\" : ISODate(\"2021-03-07T14:57:48.407Z\"),\n \"electionTerm\" : NumberLong(2),\n \"lastCommittedOpTimeAtElection\" : {\n \"ts\" : Timestamp(1615129063, 1),\n \"t\" : NumberLong(1)\n },\n \"lastSeenOpTimeAtElection\" : {\n \"ts\" : Timestamp(1615129063, 1),\n \"t\" : NumberLong(1)\n },\n \"numVotesNeeded\" : 2,\n \"priorityAtElection\" : 1,\n \"electionTimeoutMillis\" : NumberLong(10000),\n \"priorPrimaryMemberId\" : 0,\n \"numCatchUpOps\" : NumberLong(0),\n \"newTermStartDate\" : ISODate(\"2021-03-07T14:57:49.345Z\"),\n \"wMajorityWriteAvailabilityDate\" : ISODate(\"2021-03-07T14:57:49.564Z\")\n },\n \"members\" : [\n {\n \"_id\" : 0,\n \"name\" : \"10.110.249.195:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 9325,\n \"optime\" : {\n \"ts\" : Timestamp(1615138419, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1615138419, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDate\" : ISODate(\"2021-03-07T17:33:39Z\"),\n \"optimeDurableDate\" : ISODate(\"2021-03-07T17:33:39Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-03-07T17:33:47.938Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2021-03-07T17:33:47.934Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"10.96.135.26:27017\",\n \"syncSourceHost\" : \"10.96.135.26:27017\",\n \"syncSourceId\" : 1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 194403\n },\n {\n \"_id\" : 1,\n \"name\" : \"10.96.135.26:27017\",\n \"health\" : 1,\n \"state\" : 1,\n \"stateStr\" : \"PRIMARY\",\n \"uptime\" : 9518,\n \"optime\" : {\n \"ts\" : Timestamp(1615138429, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDate\" : ISODate(\"2021-03-07T17:33:49Z\"),\n \"syncingTo\" : \"\",\n \"syncSourceHost\" : \"\",\n \"syncSourceId\" : -1,\n \"infoMessage\" : \"\",\n \"electionTime\" : Timestamp(1615129068, 1),\n \"electionDate\" : ISODate(\"2021-03-07T14:57:48Z\"),\n \"configVersion\" : 194403,\n \"self\" : true,\n \"lastHeartbeatMessage\" : \"\"\n },\n {\n \"_id\" : 2,\n \"name\" : \"10.106.185.83:27017\",\n \"health\" : 1,\n \"state\" : 2,\n \"stateStr\" : \"SECONDARY\",\n \"uptime\" : 9427,\n \"optime\" : {\n \"ts\" : Timestamp(1615138419, 1),\n \"t\" : NumberLong(2)\n },\n \"optimeDurable\" : {\n \"ts\" : Timestamp(1615138419, 1),\n \"t\" : 
NumberLong(2)\n },\n \"optimeDate\" : ISODate(\"2021-03-07T17:33:39Z\"),\n \"optimeDurableDate\" : ISODate(\"2021-03-07T17:33:39Z\"),\n \"lastHeartbeat\" : ISODate(\"2021-03-07T17:33:47.934Z\"),\n \"lastHeartbeatRecv\" : ISODate(\"2021-03-07T17:33:47.937Z\"),\n \"pingMs\" : NumberLong(0),\n \"lastHeartbeatMessage\" : \"\",\n \"syncingTo\" : \"10.96.135.26:27017\",\n \"syncSourceHost\" : \"10.96.135.26:27017\",\n \"syncSourceId\" : 1,\n \"infoMessage\" : \"\",\n \"configVersion\" : 194403\n }\n ],\n \"ok\" : 1,\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1615138429, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n },\n \"operationTime\" : Timestamp(1615138429, 1)\n}\n",
"text": "",
"username": "rajat_verma"
},
{
"code": "",
"text": "On my installation , I have tried using the ‘mongo --host seconday-IP’ seconday-IP is as listed in the rs.status() output.MongoDB shell version v4.2.3\ngit version: 6874650b362138df74be53d366bbefc321ea32d4\nOpenSSL version: OpenSSL 1.1.1 11 Sep 2018\nallocator: tcmalloc\nmodules: none\nbuild environment:\ndistmod: ubuntu1804\ndistarch: x86_64\ntarget_arch: x86_64",
"username": "rajat_verma"
},
{
"code": "",
"text": "My guess is that you must use IP addresses rather than host name like you have in the following:mongodb://mongodb-0-service:27017,mongodb-1-service:27017,mongodb-2-service:27017/admin?replicaSet=rs0",
"username": "steevej"
},
{
"code": "mongo --host mongodb-0-service",
"text": "mongo --host mongodb-0-serviceThat is what I thought too… I tried the following",
"username": "rajat_verma"
},
{
"code": "mongodb://10.110.249.195:27017,10.96.135.26:27017,10.106.185.83:27017/admin?replicaSet=rs0\n",
"text": "the same connection stringSame as what? SeeMy guess is that you must use IP addresses rather than host nameSo try",
"username": "steevej"
}
] | MongoDB Replica Set on Kubernetes | 2021-03-07T16:47:07.537Z | MongoDB Replica Set on Kubernetes | 7,940 |
|
null | [
"node-js"
] | [
{
"code": "",
"text": "Hi, using [email protected] on [email protected] I get the following error that comes from write_concerns.js:\nTop-level use of w, wtimeout, j, and fsync is deprecated. Use writeConcern instead.\nAny idea why I’m getting this warning and how to fix it?\nWhat would be the recommended version of mongodb to downgrade and try to see if such warn dissapears?",
"username": "Dane411"
},
{
"code": "WriteConcern:{w:\"majority\",wtimeout:2500}",
"text": "this works for me:\nWriteConcern:{w:\"majority\",wtimeout:2500}",
"username": "Alii_N_A"
}
] | Top-level use of w, wtimeout, j, and fsync is deprecated. Use writeConcern instead | 2021-02-07T10:06:17.446Z | Top-level use of w, wtimeout, j, and fsync is deprecated. Use writeConcern instead | 5,427 |
null | [
"containers",
"kubernetes-operator"
] | [
{
"code": "",
"text": "Hi All,Followed the guide to setup mongodb enterprise for trial.Here are the steps I followed:Connectivity test fails constantly. Can someone let me know what could be causing issue with external connectivity ?Thank you",
"username": "Kish_V"
},
{
"code": "",
"text": "Hi @Kish_V,What error are you receiving upon connection?Have you confirmed all nodes and ports are correct?Also please try to use the CAFile with the mongo shell and see if you face a similar issue.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "See attachment for more details.mongodb-conn-test2880×810 369 KBAlso some of the documentation refers to deploy the replica set with tls enabled as false but the replica set fails with this option being false.",
"username": "Kish_V"
},
{
"code": "test-db”: “ec2-node1-public-ip:32561”\n“test-db”: “ec2-node2-public-ip:31828”\n“test-db”: “ec2-node3-public-ip:32212”\n",
"text": "Hi @Kish_V,Why are the ports specified in the mongo shell command, differ from the one in the horizon specifications:What are 32595 and 30432?\nOur documentation uses only ports and nodes specified in the horizon clauses.Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny,I created a new cluster since I usually don’t keep them long due to the cost.Here is the complete breakdown with detailed information.Since this is my test environment, I am ok to share the IP’s and detailed information which might assist you with reviewing it but will be tore down once we have a resolution.connectivity:\nreplicaSetHorizons:\n- “customer-prod-db”: “ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671”\n- “customer-prod-db”: “ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595”\n- “customer-prod-db”: “ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432”kubectl get svc | grep customer-rep\ncustomer-replica-set-0 NodePort 10.100.66.6 27017:31671/TCP 76m\ncustomer-replica-set-1 NodePort 10.100.185.237 27017:32595/TCP 76m\ncustomer-replica-set-2 NodePort 10.100.37.222 27017:30432/TCP 76m\ncustomer-replica-set-svc ClusterIP None 27017/TCP 79mkubectl get pods -o wide\nNAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES\ncustomer-replica-set-0 1/1 Running 0 10h 192.168.22.100 ip-192-168-2-202.us-east-2.compute.internal \ncustomer-replica-set-1 1/1 Running 0 10h 192.168.80.99 ip-192-168-85-40.us-east-2.compute.internal \ncustomer-replica-set-2 1/1 Running 0 10h 192.168.33.103 ip-192-168-51-42.us-east-2.compute.internal kubectl get nodes -o jsonpath=‘{ $.items[*].status.addresses[?(@.type==“ExternalDNS”)].address }’\nec2-18-216-32-24.us-east-2.compute.amazonaws.com\nec2-18-218-2-179.us-east-2.compute.amazonaws.com\nec2-3-17-154-122.us-east-2.compute.amazonaws.comkubectl get nodes -o wide\nNAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME\nip-192-168-2-202.us-east-2.compute.internal Ready 11h v1.16.13-eks-2ba888 192.168.2.202 18.216.32.24 Amazon Linux 2 4.14.186-146.268.amzn2.x86_64 docker://19.3.6\nip-192-168-51-42.us-east-2.compute.internal Ready 11h v1.16.13-eks-2ba888 192.168.51.42 18.218.2.179 Amazon Linux 2 4.14.186-146.268.amzn2.x86_64 docker://19.3.6\nip-192-168-85-40.us-east-2.compute.internal Ready 11h v1.16.13-eks-2ba888 192.168.85.40 3.17.154.122 Amazon Linux 2 4.14.186-146.268.amzn2.x86_64 docker://19.3.6Navigating the node/ip:\ncustomer-replica-set-0 (31671) → ip-192-168-2-202.us-east-2.compute.internal → external ip → 18.216.32.24 → nslookup → ec2-18-216-32-24.us-east-2.compute.amazonaws.comcustomer-replica-set-1 (32595) → ip-192-168-85-40.us-east-2.compute.internal-> external ip → 3.17.154.122 → nslookup → ec2-3-17-154-122.us-east-2.compute.amazonaws.comcustomer-replica-set-2 (30432) → ip-192-168-51-42.us-east-2.compute.internal-> external ip → 18.218.2.179 → nslookup → ec2-18-218-2-179.us-east-2.compute.amazonaws.comPlease let me know when you have reviewed this so that I can delete or mask the IP’s.Here is the command with the new cluster and associated error:\nmongo --host customer-replica-set/ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671,ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595,ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432 --tls --tlsAllowInvalidCertificates --verbose\nMongoDB shell version v4.2.7\nconnecting to: mongodb://ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671,ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595,ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=customer-replica-set\n2020-08-16T07:36:53.811-0500 D1 NETWORK [js] Starting up task executor for monitoring replica sets in response to request to monitor set: 
customer-replica-set/ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671,ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595,ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432\n2020-08-16T07:36:53.811-0500 I NETWORK [js] Starting new replica set monitor for customer-replica-set/ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671,ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595,ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432\n2020-08-16T07:36:53.811-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432\n2020-08-16T07:36:53.811-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595\n2020-08-16T07:36:53.811-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671\n2020-08-16T07:36:53.841-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set customer-replica-set\n2020-08-16T07:36:53.842-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set customer-replica-set. Please check network connectivity and the status of the set. This has happened for 1 checks in a row.\n2020-08-16T07:36:53.842-0500 D1 NETWORK [ReplicaSetMonitor-TaskExecutor] Refreshing replica set customer-replica-set took 30ms\n2020-08-16T07:36:54.356-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set customer-replica-set\n2020-08-16T07:36:54.357-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set customer-replica-set. Please check network connectivity and the status of the set. This has happened for 2 checks in a row.",
"username": "Kish_V"
},
{
"code": " telnet dns port",
"text": "Hi @Kish_V,Can you ssh to the primary node and provide the rs.status() and rs.conf()?\nIf the rs.conf is with internal ips the mongo shell will try to reach those.Also can you try to connect to a single host without all replica configuration. Just by specifying a single --host and --port.Also please run a telnet dns port for all 3 hostsBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": " },\n \"horizons\" : {\n \"customer-prod-db\" : \"ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671\"\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 1,\n \"host\" : \"customer-replica-set-1.customer-replica-set-svc.mongodb.svc.cluster.local:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"horizons\" : {\n \"customer-prod-db\" : \"ec2-3-17-154-122.us-east-2.compute.amazonaws.com:32595\"\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n },\n {\n \"_id\" : 2,\n \"host\" : \"customer-replica-set-2.customer-replica-set-svc.mongodb.svc.cluster.local:27017\",\n \"arbiterOnly\" : false,\n \"buildIndexes\" : true,\n \"hidden\" : false,\n \"priority\" : 1,\n \"tags\" : {\n\n },\n \"horizons\" : {\n \"customer-prod-db\" : \"ec2-18-218-2-179.us-east-2.compute.amazonaws.com:30432\"\n },\n \"slaveDelay\" : NumberLong(0),\n \"votes\" : 1\n }\n ],\n \"settings\" : {\n \"chainingAllowed\" : true,\n \"heartbeatIntervalMillis\" : 2000,\n \"heartbeatTimeoutSecs\" : 10,\n \"electionTimeoutMillis\" : 10000,\n \"catchUpTimeoutMillis\" : -1,\n \"catchUpTakeoverDelayMillis\" : 30000,\n \"getLastErrorModes\" : {\n\n },\n \"getLastErrorDefaults\" : {\n \"w\" : 1,\n \"wtimeout\" : 0\n },\n \"replicaSetId\" : ObjectId(\"5f388cc3900792e0998729e1\")\n }\n",
"text": "Replica set primary node:\nrs.conf()customer-replica-set:PRIMARY> rs.conf()\n{\n“_id” : “customer-replica-set”,\n“version” : 2,\n“protocolVersion” : NumberLong(1),\n“writeConcernMajorityJournalDefault” : true,\n“members” : [\n{\n“_id” : 0,\n“host” : “customer-replica-set-0.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“arbiterOnly” : false,\n“buildIndexes” : true,\n“hidden” : false,\n“priority” : 1,\n“tags” : {}rs.status()rs.status()\n{\n“set” : “customer-replica-set”,\n“date” : ISODate(“2020-08-16T13:51:01.340Z”),\n“myState” : 1,\n“term” : NumberLong(3),\n“syncingTo” : “”,\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“heartbeatIntervalMillis” : NumberLong(2000),\n“majorityVoteCount” : 2,\n“writeMajorityCount” : 2,\n“optimes” : {\n“lastCommittedOpTime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“lastCommittedWallTime” : ISODate(“2020-08-16T13:50:59.046Z”),\n“readConcernMajorityOpTime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“readConcernMajorityWallTime” : ISODate(“2020-08-16T13:50:59.046Z”),\n“appliedOpTime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“durableOpTime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“lastAppliedWallTime” : ISODate(“2020-08-16T13:50:59.046Z”),\n“lastDurableWallTime” : ISODate(“2020-08-16T13:50:59.046Z”)\n},\n“lastStableRecoveryTimestamp” : Timestamp(1597585840, 8),\n“lastStableCheckpointTimestamp” : Timestamp(1597585840, 8),\n“electionCandidateMetrics” : {\n“lastElectionReason” : “stepUpRequestSkipDryRun”,\n“lastElectionDate” : ISODate(“2020-08-16T01:45:37.715Z”),\n“termAtElection” : NumberLong(3),\n“lastCommittedOpTimeAtElection” : {\n“ts” : Timestamp(1597542328, 1),\n“t” : NumberLong(2)\n},\n“lastSeenOpTimeAtElection” : {\n“ts” : Timestamp(1597542328, 1),\n“t” : NumberLong(2)\n},\n“numVotesNeeded” : 2,\n“priorityAtElection” : 1,\n“electionTimeoutMillis” : NumberLong(10000),\n“priorPrimaryMemberId” : 1,\n“numCatchUpOps” : NumberLong(27017),\n“newTermStartDate” : ISODate(“2020-08-16T01:45:37.761Z”),\n“wMajorityWriteAvailabilityDate” : ISODate(“2020-08-16T01:45:38.278Z”)\n},\n“members” : [\n{\n“_id” : 0,\n“name” : “customer-replica-set-0.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“ip” : “192.168.22.100”,\n“health” : 1,\n“state” : 1,\n“stateStr” : “PRIMARY”,\n“uptime” : 43532,\n“optime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“optimeDate” : ISODate(“2020-08-16T13:50:59Z”),\n“syncingTo” : “”,\n“syncSourceHost” : “”,\n“syncSourceId” : -1,\n“infoMessage” : “”,\n“electionTime” : Timestamp(1597542337, 1),\n“electionDate” : ISODate(“2020-08-16T01:45:37Z”),\n“configVersion” : 2,\n“self” : true,\n“lastHeartbeatMessage” : “”\n},\n{\n“_id” : 1,\n“name” : “customer-replica-set-1.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“ip” : “192.168.80.99”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 43519,\n“optime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“optimeDate” : ISODate(“2020-08-16T13:50:59Z”),\n“optimeDurableDate” : ISODate(“2020-08-16T13:50:59Z”),\n“lastHeartbeat” : ISODate(“2020-08-16T13:51:00.605Z”),\n“lastHeartbeatRecv” : ISODate(“2020-08-16T13:51:00.876Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncingTo” : “customer-replica-set-2.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“syncSourceHost” : 
“customer-replica-set-2.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“syncSourceId” : 2,\n“infoMessage” : “”,\n“configVersion” : 2\n},\n{\n“_id” : 2,\n“name” : “customer-replica-set-2.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“ip” : “192.168.33.103”,\n“health” : 1,\n“state” : 2,\n“stateStr” : “SECONDARY”,\n“uptime” : 43525,\n“optime” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“optimeDurable” : {\n“ts” : Timestamp(1597585859, 1),\n“t” : NumberLong(3)\n},\n“optimeDate” : ISODate(“2020-08-16T13:50:59Z”),\n“optimeDurableDate” : ISODate(“2020-08-16T13:50:59Z”),\n“lastHeartbeat” : ISODate(“2020-08-16T13:51:00.659Z”),\n“lastHeartbeatRecv” : ISODate(“2020-08-16T13:50:59.778Z”),\n“pingMs” : NumberLong(0),\n“lastHeartbeatMessage” : “”,\n“syncingTo” : “customer-replica-set-0.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“syncSourceHost” : “customer-replica-set-0.customer-replica-set-svc.mongodb.svc.cluster.local:27017”,\n“syncSourceId” : 0,\n“infoMessage” : “”,\n“configVersion” : 2\n}\n],\n“ok” : 1,\n“$clusterTime” : {\n“clusterTime” : Timestamp(1597585859, 1),\n“signature” : {\n“hash” : BinData(0,“AAAAAAAAAAAAAAAAAAAAAAAAAAA=”),\n“keyId” : NumberLong(0)\n}\n},\n“operationTime” : Timestamp(1597585859, 1)\n}Connecting to the single host is the same issue:mongo --host customer-replica-set/ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671 --tls --tlsAllowInvalidCertificates --verbose\nMongoDB shell version v4.2.7\nconnecting to: mongodb://ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671/?compressors=disabled&gssapiServiceName=mongodb&replicaSet=customer-replica-set\n2020-08-16T09:13:33.132-0500 D1 NETWORK [js] Starting up task executor for monitoring replica sets in response to request to monitor set: customer-replica-set/ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671\n2020-08-16T09:13:33.133-0500 I NETWORK [js] Starting new replica set monitor for customer-replica-set/ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671\n2020-08-16T09:13:33.135-0500 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671\n2020-08-16T09:13:33.218-0500 W NETWORK [ReplicaSetMonitor-TaskExecutor] Unable to reach primary for set customer-replica-set\n2020-08-16T09:13:33.219-0500 I NETWORK [ReplicaSetMonitor-TaskExecutor] Cannot reach any nodes for set customer-replica-set. Please check network connectivity and the status of the set. 
This has happened for 1 checks in a row.telnet ec2-18-216-32-24.us-east-2.compute.amazonaws.com 31671\nTrying 18.216.32.24…\ntelnet: connect to address 18.216.32.24: Connection refused\ntelnet: Unable to connect to remote hostDeployed an nginx pod to make sure that it is not SG related:kubectl get services -w\nNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE\ncustomer-replica-set-0 NodePort 10.100.66.6 27017:31671/TCP 12h\ncustomer-replica-set-1 NodePort 10.100.185.237 27017:32595/TCP 12h\ncustomer-replica-set-2 NodePort 10.100.37.222 27017:30432/TCP 12h\ncustomer-replica-set-svc ClusterIP None 27017/TCP 12h\nmynginxsvc NodePort 10.100.121.119 80:30180/TCP 3m28s\noperator-webhook ClusterIP 10.100.40.145 443/TCP 13h\nops-manager-db-svc ClusterIP None 27017/TCP 13h\nops-manager-svc ClusterIP None 8080/TCP 13h\nops-manager-svc-ext LoadBalancer 10.100.146.190 a326d1d4fefc844e49d9da6d8ce1f229-105300929.us-east-2.elb.amazonaws.com 8080:30187/TCP 13htelnet ec2-3-17-154-122.us-east-2.compute.amazonaws.com 30180\nTrying 3.17.154.122…\nConnected to ec2-3-17-154-122.us-east-2.compute.amazonaws.com.\nEscape character is ‘^]’.@Pavel_Duchovny Seems like this is something specific to MongoDB Operator and the deployment. I would suggest that you run through the same deployment on an EKS cluster and let me know what you find since this should be pretty straight forward for accessing thru node port.",
"username": "Kish_V"
},
{
"code": "telnet ec2-18-216-32-24.us-east-2.compute.amazonaws.com 31671\nTrying 18.216.32.24…\ntelnet: connect to address 18.216.32.24: Connection refused\ntelnet: Unable to connect to remote host\n",
"text": "Hi @Kish_V,If the telnet cannot reach the instance its a 100% issue with your network infrastructureI don’t see a need for a repro until you won’t resolve this.\nBest\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thanks @Pavel_Duchovny.As a follow up I did install a nginx pod into the same namespace (refer above reply) and have verified that the k8s infra/nodes and node port is not an issue including the NACL and SG and it seems like is an issue with the configuration of the replicasethorizons.Please refer to the latter half of the verification that was performed above.",
"username": "Kish_V"
},
{
"code": "",
"text": "Hi @Kish_V,Have you tried to run the mongo shell command from the same place you successfully done the rs.status() does this work?Whats the difference between those two connection attempts?Best\nPqvel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Here you go @Pavel_DuchovnyKubernetes Internal IP:I have no name!@customer-replica-set-0:/$ /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.2.1-ent/bin/mongo --host customer-replica-set-0.customer-replica-set-svc.mongodb.svc.cluster.local --port 27017 --ssl --sslAllowInvalidCertificates\n2020-08-16T16:02:12.922+0000 W CONTROL [main] Option: ssl is deprecated. Please use tls instead.\n2020-08-16T16:02:12.922+0000 W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.\nMongoDB shell version v4.2.1\nconnecting to: mongodb://customer-replica-set-0.customer-replica-set-svc.mongodb.svc.cluster.local:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-08-16T16:02:12.996+0000 W NETWORK [js] SSL peer certificate validation failed: self signed certificate in certificate chain\nImplicit session: session { “id” : UUID(“85d3bbe5-0f65-4f96-ad67-28d46baadcab”) }\nMongoDB server version: 4.2.1\nWelcome to the MongoDB shell.\nFor interactive help, type “help”.\nFor more comprehensive documentation, see\nhttp://docs.mongodb.org/\nQuestions? Try the support group\nhttp://groups.google.com/group/mongodb-user\n2020-08-16T16:02:13.009+0000 I STORAGE [main] In File::open(), ::open for ‘//.mongorc.js’ failed with Permission denied\nServer has startup warnings:\n2020-08-16T01:45:29.701+0000 I STORAGE [initandlisten]\n2020-08-16T01:45:29.701+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2020-08-16T01:45:29.701+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten]\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten]\nMongoDB Enterprise customer-replica-set:PRIMARY>Node Internal IP:/var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.2.1-ent/bin/mongo --host ip-192-168-2-202.us-east-2.compute.internal --port 27017 --ssl --sslAllowInvalidCertificates\nI have no name!@customer-replica-set-0:/$ /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.2.1-ent/bin/mongo --host ip-192-168-2-202.us-east-2.compute.internal --port 31671 --ssl --sslAllowInvalidCertificates\n2020-08-16T16:05:01.481+0000 W CONTROL [main] Option: ssl is deprecated. Please use tls instead.\n2020-08-16T16:05:01.481+0000 W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. 
Please use tlsAllowInvalidCertificates instead.\nMongoDB shell version v4.2.1\nconnecting to: mongodb://ip-192-168-2-202.us-east-2.compute.internal:31671/?compressors=disabled&gssapiServiceName=mongodb\n2020-08-16T16:05:02.552+0000 E QUERY [js] Error: couldn’t connect to server ip-192-168-2-202.us-east-2.compute.internal:31671, connection attempt failed: SocketException: Error connecting to ip-192-168-2-202.us-east-2.compute.internal:31671 (192.168.2.202:31671) :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-08-16T16:05:02.554+0000 F - [main] exception: connect failed\n2020-08-16T16:05:02.554+0000 E - [main] exiting with code 1Node External IPI have no name!@customer-replica-set-0:/$ /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.2.1-ent/bin/mongo --host ec2-18-216-32-24.us-east-2.compute.amazonaws.com --port 31671 --ssl --sslAllowInvalidCertificates\n2020-08-16T16:06:26.944+0000 W CONTROL [main] Option: ssl is deprecated. Please use tls instead.\n2020-08-16T16:06:26.944+0000 W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.\nMongoDB shell version v4.2.1\nconnecting to: mongodb://ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671/?compressors=disabled&gssapiServiceName=mongodb\n2020-08-16T16:06:28.024+0000 E QUERY [js] Error: couldn’t connect to server ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671, connection attempt failed: SocketException: Error connecting to ec2-18-216-32-24.us-east-2.compute.amazonaws.com:31671 (192.168.2.202:31671) :: caused by :: Connection refused :\nconnect@src/mongo/shell/mongo.js:341:17\n@(connect):2:6\n2020-08-16T16:06:28.026+0000 F - [main] exception: connect failed\n2020-08-16T16:06:28.026+0000 E - [main] exiting with code 1\nI have no name!@customer-replica-set-0:/$Pod - Localhost:I have no name!@customer-replica-set-0:/$ /var/lib/mongodb-mms-automation/mongodb-linux-x86_64-4.2.1-ent/bin/mongo --host 127.0.0.1 --port 27017 --ssl --sslAllowInvalidCertificates\n2020-08-16T16:07:23.098+0000 W CONTROL [main] Option: ssl is deprecated. Please use tls instead.\n2020-08-16T16:07:23.098+0000 W CONTROL [main] Option: sslAllowInvalidCertificates is deprecated. Please use tlsAllowInvalidCertificates instead.\nMongoDB shell version v4.2.1\nconnecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb\n2020-08-16T16:07:23.164+0000 W NETWORK [js] SSL peer certificate validation failed: self signed certificate in certificate chain\nImplicit session: session { “id” : UUID(“eab8dae7-2984-486f-a22b-2b9c89397c7b”) }\nMongoDB server version: 4.2.1\nWelcome to the MongoDB shell.\nFor interactive help, type “help”.\nFor more comprehensive documentation, see\nhttp://docs.mongodb.org/\nQuestions? 
Try the support group\nhttp://groups.google.com/group/mongodb-user\n2020-08-16T16:07:23.169+0000 I STORAGE [main] In File::open(), ::open for ‘//.mongorc.js’ failed with Permission denied\nServer has startup warnings:\n2020-08-16T01:45:29.701+0000 I STORAGE [initandlisten]\n2020-08-16T01:45:29.701+0000 I STORAGE [initandlisten] ** WARNING: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine\n2020-08-16T01:45:29.701+0000 I STORAGE [initandlisten] ** See http://dochub.mongodb.org/core/prodnotes-filesystem\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten]\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.\n2020-08-16T01:45:30.725+0000 I CONTROL [initandlisten]\nMongoDB Enterprise customer-replica-set:PRIMARY>I have seen that there are cases that the mongo conf needs to be updated for the bind ip. Do you think that it has a play here since that could cause this not to be exposed outside ?Have you guys tried to run this on AWS EKS and it should hardly take 15 mins to try this out on your own assuming you have a EKS cluster to play with ?",
"username": "Kish_V"
},
{
"code": "",
"text": "Hi @Kish_V,I believe ops manager automation should place a bind of 0.0.0.0 for the hosts.Do you see otherwise?I don’t have EKS at hand at the moment , I might try later this week.It seems as more of a dns problem. Does your vpc have dns and hostname resolution enabled?Can you upload the primary mongod log during a failed attempt?Best\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "hi @Pavel_Duchovny confirmed that the config does include the bind address:I have no name!@customer-replica-set-0:/$ more /data/automation-mongod.conf\nnet:\nbindIp: 0.0.0.0\nport: 27017\ntls:\nCAFile: /mongodb-automation/ca.pem\nallowConnectionsWithoutCertificates: true\ncertificateKeyFile: /mongodb-automation/server.pem\nmode: allowTLS\nprocessManagement:\nfork: “true”\nreplication:\nreplSetName: customer-replica-set\nstorage:\ndbPath: /data\nengine: wiredTiger\nsystemLog:\ndestination: file\npath: /var/log/mongodb-mms-automation/mongodb.logRegarding VPC and host resloution yes it is enabled since I can get to the OPS manager and by test nginx pod with nodeport test.Primary Mongod log during a failed attempt, I did provide this earlier from the client side. I don’t think there is going to be anything at the pod level since the traffic is not even getting thru. If there is anything specific you need let me know where to find and upload it here.I will be tearing down the cluster in the interest of cost but if you do get through the test let me know what you find. I can certainly repeat this issue repeatedly and can spin up a cluster any time we need to try this again but I believe this is an issue with Operator and DNS resolution within the cluster or how the replicasethorizons is set.Thanks @Pavel_Duchovny for staying with me.",
"username": "Kish_V"
},
{
"code": "",
"text": "@Pavel_Duchovny do we have any update on this issue ?Thanks",
"username": "Kish_V"
},
{
"code": "sudo kubectl describe svc -n <namespace>kubectl edit svc/<replica set member> -n <namespace>",
"text": "Hi,I believe I know the cause of this and for the benefit of anyone else coming across it:sudo kubectl describe svc -n <namespace>\nreplacing with the namespace configured e.g. mongodbYou may find a selector with a revision-hash e.g. controller-revision-hash=myreplicaset-68d57865cf. These need to be removedkubectl edit svc/<replica set member> -n <namespace>\nreplacing the values in brackets e.g. kubectl edit svc/myreplicaset-2 -n mongodb",
"username": "Greg_Cox"
}
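A minimal sketch of the fix described above, with hypothetical service and namespace names (myreplicaset-2, mongodb); the exact remaining selector depends on your deployment:

    # inspect the per-member service and its selector
    kubectl get svc myreplicaset-2 -n mongodb -o yaml
    # a problematic selector might look like:
    #   selector:
    #     controller-revision-hash: myreplicaset-68d57865cf
    #     statefulset.kubernetes.io/pod-name: myreplicaset-2
    # in `kubectl edit svc/myreplicaset-2 -n mongodb`, delete the
    # controller-revision-hash line and keep only the selector(s) that match the pod.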
] | MongoDB Enterprise Kubernetes Operator on EKS | 2020-08-15T00:04:37.972Z | MongoDB Enterprise Kubernetes Operator on EKS | 7,458 |
[] | [
{
"code": "",
"text": "Hi\nDoes mongodb atlas multi cloud cluster support auto scaling? the doc link says no but i am able to configure it in the UI?Can someone pls confirm if this is just a stale documentation or do we reallly dont have the support for autoscaling in a multi cloud cluster?",
"username": "Chetan_Dharma"
},
{
"code": "",
"text": "Hi @Chetan_Dharma,Welcome to MongoDB Developer Community Forums — thanks for contributing!I believe that auto-scaling for multi-cloud clusters is now supported. Please check out the Atlas Changelog (refer to 17 February 2021 section).It appears the note on the documentation may need to be updated.Jason",
"username": "Jason_Tran"
},
{
"code": "",
"text": "Thanks @Jason_Tran . That was super quick and an awesome feature indeed.\nI am setting it up for a project and i will keep the thread updated on the progress.",
"username": "Chetan_Dharma"
},
{
"code": "",
"text": "No worries @Chetan_Dharma. I would recommend If you encounter any issues to open / click on the chat bubble on the Atlas UI (located at the bottom right hand corner) and contact one of the chat support agents.",
"username": "Jason_Tran"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Multi Cloud Auto Scaling | 2021-03-08T08:03:24.892Z | Multi Cloud Auto Scaling | 1,708 |
|
null | [
"crud",
"mongoose-odm"
] | [
{
"code": "{\n\n movieID: { type: Number, required: true },\n\n customerID: { type: Number, required: true },\n\n amount: { type: Number, required: true },\n\n totalCost: { type: Number, required: true },\n\n returnDate: { type: Date, default: Date.now() + 3 * 24 * 60 * 60 * 1000 },\n\n delayed: { type: Boolean, default: false },\n\n penalty: { type: Number },\n\n},\n\n{ timestamps: true }\n",
"text": "Hi,I need to build a db handling book rentals, if a book is not returned on time (3 days) a monetary penalty needs to be applied to the document and the book should be marked as “delayed” so it’s easy to query and find what books have not been returned on time.I have this mongoose schema:\nconst rentSchema = new schema();But I’m unsure how I can automize this process?Is there a way to insert data conditionally based on the returnDate somehow?I found a similiar thread on Stackoverflow discussing this theme and the author was recommended to use virtual properties with a condition in my case it could be currentDate >= returnDate, but when I checked the documentation it says that virtual properties can’t be used for queries or to insert data into the db? So how could this be useful in this case then?\nIf not virtual properties, is there some other way I could deal with this?",
"username": "Olle_Bergkvist"
},
{
"code": "delayedDays: nn",
"text": "Hello @Olle_Bergkvist, welcome to the MongoDB Community forum!But I’m unsure how I can automize this process?\nIs there a way to insert data conditionally based on the returnDate somehow?Since, everyday you are looking for delayed returns - run an update operation to update the collection and set a field delayedDays: n (where n is the number of days late). So, when the renter returns the book, you will know immediately how many days the book is returned late. There is no way to automate this process, but to run a job everyday for all the renters, perhaps at the start of the day before any of the renters are queried.I found a similiar thread on Stackoverflow discussing this theme …It is not clear what you are trying to say, but do include the link to the post so that others reading this post can see what you are trying to say and respond.The Mongoose Virtuals topic on Limitations says:Mongoose virtuals are not stored in MongoDB, which means you can’t query based on Mongoose virtuals.So, you will not be able to use this feature for this use case.",
"username": "Prasad_Saya"
},
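A rough illustration of the daily job suggested above (the collection name rents is a hypothetical stand-in; delayedDays is the suggested new field; update pipelines require MongoDB 4.2+):

    // run once a day: flag overdue rentals and record how many days late they are
    db.rents.updateMany(
      { returnDate: { $lt: new Date() } },
      [
        {
          $set: {
            delayed: true,
            // whole days elapsed since the return date
            delayedDays: {
              $floor: {
                $divide: [ { $subtract: [ "$$NOW", "$returnDate" ] }, 1000 * 60 * 60 * 24 ]
              }
            }
          }
        }
      ]
    )

A penalty amount could be derived from delayedDays in the same pipeline stage.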
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to dynamically update field in collection based on condition? | 2021-03-08T03:12:08.190Z | How to dynamically update field in collection based on condition? | 7,656 |
null | [
"replication"
] | [
{
"code": "",
"text": "The primary was shut down due to the issue, and there was a situation where the fail-over had to be done.\nIf the data had been coming in, would there be a fail-over after replicating the data? Or will fail-over come first, resulting in data loss?",
"username": "Kim_Hakseon"
},
{
"code": "rs.stepdown()secondaryCatchUpPeriodSecselectablesecondaryCatchUpPeriodSecs",
"text": "Hi @Kim_Hakseon,What specific version of MongoDB server are you using and how did you shutdown the primary?If you used rs.stepdown() there is a secondaryCatchUpPeriodSecs period which waits for an eligible secondary to catch up:The method does not immediately step down the primary. If no electable secondaries are up to date with the primary, the primary waits up to secondaryCatchUpPeriodSecs (by default 10 seconds) for a secondary to catch up. Once an electable secondary is available, the method steps down the primary.However, at some point an election may interrupt some in-flight writes.If you are using a modern version of MongoDB server (3.6+), compatible drivers support retryable writes to retry operations a single time if they encounter a network error or a replica set without a primary.Regards,\nStennie",
"username": "Stennie_X"
},
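For reference, a planned stepdown can set both timeouts explicitly; the values below are arbitrary examples:

    // step down for 120 seconds, waiting up to 10 seconds
    // for an electable secondary to catch up first
    rs.stepDown(120, 10)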
{
"code": "",
"text": "I use mongodb 4.4.4, and reason of shutdown is that server was broken by external issue.",
"username": "Kim_Hakseon"
},
{
"code": "w:1w:1majority",
"text": "Hi @Kim_Hakseon,Replica set failover should not result in data loss of writes in progress in most scenarios:If any members of your replica set accepted writes that were not replicated to a new primary, conflicting versions of documents will be exported to a rollback directory for manual reconciliation. See Rollbacks During Replica Set Failover for more detailed information.If an election happens while writes are in progress, your application can recover using retryable writes. See Retryable Write Operations for operations that are retryable when issues with an acknowledged write concern (w:1 or higher).The default write concern for most drivers is w:1, but using a majority write concern in your application will minimise the potential of rollbacks:The more members that acknowledge a write, the less likely the written data could roll back if the primary fails. However, specifying a high write concern can increase latency as the client must wait until it receives the requested level of write concern acknowledgment.Features like retryable writes and retention of rollback data are enabled by default, but it is possible (although not advisable) to disable those via applicable driver or server configuration changes.If your cluster has had a failover due to an external issue, I would check if any rollback files have been created on your former primary. It would also be worth reviewing your application code to ensure retryable writes, write concerns, and appropriate exception handling are being used.Regards,\nStennie",
"username": "Stennie_X"
},
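A small sketch of the settings mentioned above (host names, credentials, and collection are placeholders):

    // connection string opting into retryable writes and a majority write concern
    mongodb://user:pass@host1:27017,host2:27017,host3:27017/?replicaSet=rs0&retryWrites=true&w=majority

    // or per operation in the shell
    db.orders.insertOne(
      { item: "example" },
      { writeConcern: { w: "majority", wtimeout: 5000 } }
    )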
{
"code": "",
"text": "Hi @Kim_Hakseon,For an unexpected failover the sequence of events would be something like:Current primary is unavailable and replica set starts election for new primary.Applications sending writes get an an exception (no current primary) and queue retryable writes (non-retryable writes will get an exception).New primary gets elected based on the Replication Election Protocol which considers factors like member priority and most recent oplog entry.Other secondaries resume replication from the new primary and normal writes are now possible.Applications automatically resend retryable writes and only those which haven’t been applied yet will be executed.Former primary eventually rejoins as a secondary. Any accepted writes that hadn’t replicated to the current primary are rolled back.Does that address your follow-up concern?Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Priority of "Replication" vs "Fail-over" | 2021-03-06T04:23:49.799Z | Priority of “Replication” vs “Fail-over” | 1,922 |
null | [
"server",
"configuration"
] | [
{
"code": "net:\n port: 27017\n bindIp: 10.0.1.1,10.0.1.2\nbindIp: \"10.0.1.1,10.0.1.2\"\nbindIp: [10.0.1.1,10.0.1.2]\nmongod.service: Main process exited, code=exited, status=2/INVALIDARGUMENT\n",
"text": "I have mongoDb installed on ubuntu ec2 instance and want to allow the connection from other multiple IP, I tried bindIp but it is not working. Following is the code I have in the conf fileI tried following too but none of them workedOther posts or tutorials suggests one of them, I am not sure why it is not working for me. following is the error I getcan someone help me to get it started with multiple ips",
"username": "Vikram_Singh"
},
{
"code": "mongodmongod",
"text": "Welcome to the MongoDB Community @Vikram_Singh!Unless your computer has multiple network interfaces (i.e. 10.0.1.1 and 10.0.1.2 are both local network interfaces on the host for your mongod instance), this configuration will not work. The bind IP directive determines which local network interfaces a mongod process will listen to.If you review the MongoDB log file you will likely find a message like:Failed to set up listener: SocketException: Can’t assign requested addressFor more information, please see my post on I can't set bindIp - #2 by Stennie.Regards,\nStennie",
"username": "Stennie_X"
},
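For completeness, a minimal config sketch; 10.0.1.1 is reused from the question purely as an example and must actually be a local interface on the host:

    net:
      port: 27017
      bindIp: 127.0.0.1,10.0.1.1
    # or listen on all interfaces (combine with firewall rules and access control):
    #   bindIp: 0.0.0.0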
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | bindIp not working on ubuntu 16 for MongoDb | 2021-03-08T04:58:51.175Z | bindIp not working on ubuntu 16 for MongoDb | 2,601 |
null | [] | [
{
"code": "",
"text": "Hello,I’d like to set up a filter for categories which I am primarily interested in. I’d like to apply this filter as default in my settings so that I would initially see only a subset of filtered messages (theses of primary interest). It would be great when the latest new, … would still work but on this subset. Finally it would be perfect when I could toggle the filter to no filter with one click . @Stennie_X any idea how to come close to this ?Michael",
"username": "michael_hoeller"
},
{
"code": "state=muted",
"text": "Hi @michael_hoeller,I think the feature you are looking for is “muting”. You can mute categories, tags, or users to focus on discussions of interest. Muting categories will move them to the bottom of the homepage view and remove those topics from the Latest view.There’s a filter parameter to view the Latest Muted discussions (you can find this via your profile Notifications page or add state=muted to the normal Latest url).You can adjust muting & notification preferences:I haven’t used the muting feature aside from a quick experiment to test the effects, but I do use Watching notifications to prioritise notifications for some categories of interest.If you end up using muting I’d be interested to know how that works out for you and any tips or tricks to share. We have a Managing and subscribing to notifications guide that could use a refresh.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hello Stennie,thanks a lot, I will check that out the over the next week and share the findings.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hello @Stennie_X,I gave it a quick try. The mechanism you describe (mute categories and switch to them via a adding a URL parameter works absolutely fine.\nI need to use the above process a little bit more, after a first test it feels as it the muted categories are fairly far out of focus. As of now I tend to like the option to “watch” certain categories and filter with state=watched the combination to filter watched or tracked seems not to work, this would be nice.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "Showstate=watchingstate=tracking",
"text": "I need to use the above process a little bit more, after a first test it feels as it the muted categories are fairly far out of focus.Hi Michael,The intent of the muting feature is to “turn down the volume” on some discussions so they won’t be promoted in your default browsing or notification experience.My preference is to increase notifications for discussions that might be more interesting and leave the rest at the Normal notification level.As of now I tend to like the option to “watch” certain categories and filter with state=watched the combination to filter watched or tracked seems not to work, this would be nice.You can find the filtered Latest links via the profile notifications preferences page. Click on the Show link next to a notification preference, eg:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "Showstate=watchingstate=tracking",
"text": "As of now I tend to like the option to “watch” certain categories and filter with state=watched the combination to filter watched or tracked seems not to work, this would be nice.You can find the filtered Latest links via the profile notifications preferences page. Click on the Show link next to a notification preference, eg:Hello Stennie,\nthanks for the links, I am aware of them. My point was the OR in between the parameters. I’d like to be able to filter on all posts which I either watch OR track in ONE filter → URL\nRegards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Filter for messages | 2021-03-06T13:20:37.982Z | Filter for messages | 4,277 |
null | [] | [
{
"code": "",
"text": "I can see the visited days count in my profile summary but i am not sure where i can see consecutive visited days.If it is not available then i would suggest it would help me or others to challenge my self to visit forum and participate in topics everyday.Thank you.",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal,Thank you for the suggestion! Consecutive login days aren’t displayed in your profile at the moment but there are badges to earn if you want to challenge yourself:Badge calculations run on a periodic schedule, but I agree it might be nice to have some indication of progress. Unfortunately this detail won’t help you remember to login, so you really need to form a daily habit.So far your longest login streak was 87 days, but you missed logging in on 2021-02-18 and then 2021-02-20. Your current consecutive login days start from 2021-02-21.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Thank you for your reply, that is really make sense,When will day start and day end, is it calculate as per my selected timezone in my profile?",
"username": "turivishal"
},
{
"code": "",
"text": "Hi @turivishal,The day start and end is calculated based on the server timezone (which is set to UTC), but for practical purposes there just needs to be at least one valid authenticated request every 24 hours.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to see consecutive visited days in mongodb forum? | 2021-03-01T17:34:37.467Z | How to see consecutive visited days in mongodb forum? | 4,217 |
null | [] | [
{
"code": "",
"text": "Can we have group for discussing around certification, sharing difficulties while purchasing exam slot etc? Are there any other threads where we can discuss about it or anyone can share there ideas / problems they are facing?",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "I was also looking for any discussion but noting available so far…\nInstead of searching more it is recommended to follow the below link https://university.mongodb.com/exam/guide#indexes",
"username": "Raj_Kumar"
},
{
"code": "",
"text": "Yes. MongoDB Manual itself is enough for everything. In addition to that, Very well designed University courses are great resources. So if you stick to the exam guide and prepare yourself with Manual and university, It can be easy to pass an exam. But it’s always something that you wanted to discuss with your colleagues / batch mates / community mates who are preparing for the certification, facing same difficulties you might be facing, doing self check whether you’re on the right track of preparation and does everybody follows the same track / trick / ideas or someone have magic lamp outside.For an example,While preaparing for DBA exam, I have purchased exam seat but haven’t selected a slot yet and I am unable to find a link to select a slot. In that case, Someone from the community, having an experience or have already suffered from this situation can help and direct one to the right place immediately.This is just a case, There could be many more things and its always a pleasure sharing knowledge, getting knowledge and being around like minded people.",
"username": "viraj_thakrar"
},
{
"code": "",
"text": "Hi Guys,I haven’t seen many blogs about C100DBA so I decided to share my tips.I haven’t seen many blogs about C100DBA certification from the NoSQL MongoDB database, so I decided to share my tips.\nReading time: 9 min read\n",
"username": "Arkadiusz_Borucki"
},
{
"code": "",
"text": "The recent University forum migration brought across the Certification Exam category for discussion of certification exams and study guides.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | MongoDB Certification Discussion User Group | 2020-02-18T04:50:47.959Z | MongoDB Certification Discussion User Group | 5,762 |
null | [
"java"
] | [
{
"code": "",
"text": "I can not open mongo-java-driver Reference and API documentation at MongoDB Java Drivers\nIs there any other websites or offline documents?",
"username": "li_zhe"
},
{
"code": "",
"text": "I just visited the link and it looks fine. Try again as it might have been a temporary issue.",
"username": "steevej"
}
] | Can not open Java driver Reference and API documentation | 2021-03-07T12:32:53.892Z | Can not open Java driver Reference and API documentation | 1,405 |
null | [
"indexes"
] | [
{
"code": "{\n \"extraProperties.class\": \"Residential\"\n},\n{\n \"extraProperties.type\": \"Sale\"\n},\n{\n \"extraProperties.propertyType\": \"Condo Apartment\"\n},\n{\n \"extraProperties.propertyTypeStyle\": \"Apartment\"\n}\n",
"text": "so i am struggling for 2 weeks on why does not my indexes get picked when i “explain” my queries.i have this query:{\n“$and”: []\n}the above query wont pick this index :{ “extraProperties.class”:1 , “extraProperties.type” : 1, “extraProperties.propertyType”:1,“extraProperties.propertyTypeStyle”:1}i have been testing everything these days and finally i decided to flatten the hierarchy and now my query looks like this:{\n“$and”: [{\n“class”: “Residential”\n},\n{\n“type”: “Sale”\n},\n{\n“propertyType”: “Condo Apartment”\n},\n{\n“propertyTypeStyle”: “Apartment”\n}]\n}now the above query will pick this index :{ “class”:1 , “type” : 1, “propertyType”:1,“propertyTypeStyle”:1}could someone explain what the hell is going on there?!?!",
"username": "Masoud_Naghizade"
},
{
"code": "",
"text": "Hi @Masoud_Naghizade,I suspect 2 things that can throw the optimzer off.A cached plan that will make it choose another index as for nested shape it is considered better.A more likely reason is that when your data is setting in a nested document array it is a multikey path index. Now this pattern search for documents with corresponding fields but they not neseciraly all need to be in the same subdocument and thus index becomes less efficient.If you need all of those to be in a single doc use $elemMatch in your queryThanks\nPavel",
"username": "Pavel_Duchovny"
},
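A rough sketch of the $elemMatch form suggested above, assuming extraProperties is an array of sub-documents (the collection name listings is hypothetical); if it is a single embedded document rather than an array, the original dotted-field query shape applies:

    db.listings.find({
      extraProperties: {
        $elemMatch: {
          class: "Residential",
          type: "Sale",
          propertyType: "Condo Apartment",
          propertyTypeStyle: "Apartment"
        }
      }
    })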
{
"code": "{\n \"extraProperties.class\": \"Residential\" ,\n \"extraProperties.type\": \"Sale\" ,\n \"extraProperties.propertyType\": \"Condo Apartment\" ,\n \"extraProperties.propertyTypeStyle\": \"Apartment\"\n}\n",
"text": "May be it is the $and operator that fools the query planner. Since there is an explicit and anyway try with the implicit version as follow:",
"username": "steevej"
}
] | Index not picked with nested field hierarchy but gets picked in the flatten mode | 2021-03-05T20:21:38.141Z | Index not picked with nested field hierarchy but gets picked in the flatten mode | 3,404 |
null | [
"data-modeling",
"atlas-device-sync",
"app-services-data-access"
] | [
{
"code": "",
"text": "Hi! I’m creating a react-native app where users can register and start working. This app is going to be multi-tenant(each registered user will have his data) and I’m planing to use Realm for offline and cloud sync.I created an Atlas cluster. And custom roles for each database. One role = access to one database.I know that multitenancy can work easily this way but I’m planning to use Realm.So, after creating the Realm App, I enabled the user/email registration.In the Rules section, I only see the Database.Collection.UserField way to separate data. That means that every collection will have all the customers data shared, just separated by this field.Is there any way to configure something like this:RealmUser->MongoDbRole or RealmUser->Database.* ?Or I need to create a separate AppId for each tenant?Of course, I know that enabling free registers is a open app is going to be a big NO. I’m planning to add some previous step (making a pre-registration/validation first).",
"username": "Mariano_Cano"
},
{
"code": "readPartitions{ \"%%user.custom_data.readPartitions\" : \"%%partition\" }",
"text": "The description in the question isn’t exactly a multi-tenant (multi-tenancy) situation. It’s multi-user with each user having their own dataset.Multi-tenancy would generally be a group of users that have a guaranteed share of the instance. They would have their own user management, data, configuration. It’s more akin for example, a company wide calendaring app where your clients would be separate companies (tenant), each having their own users. The question doesn’t really sound like that - perhaps there’s more to it. While you could craft your app that way, it adds additional complexity that may not be needed.With MongoDB Realm it’s common practice for each user to have their own Realm; that would be a ‘partition’ within a collection. While the data is in the same collection, it’s still kept ‘separate’ by leveraging rules that only allow a user to access their Realm dataIn the Rules section, I only see the Database.Collection.UserField way to separate data. That means that every collection will have all the customers data shared, just separated by this field.If you craft the app where each user has their own Realm (a ‘Partition’ in Atlas), it can be secured through permissions. Here’s a sync permission that enables a users read access if the partition value is listed in the readPartitions field of their custom user data{ \"%%user.custom_data.readPartitions\" : \"%%partition\" }See Define Sync Permissions for more reading.",
"username": "Jay"
}
] | Multi tenancy with Realm User per Mongo Database | 2021-03-06T20:56:59.464Z | Multi tenancy with Realm User per Mongo Database | 5,387 |
null | [
"mongodb-shell"
] | [
{
"code": "",
"text": "I just downloaded the mongo shell zip for windows, added <mongo shell’s download directory>/bin to $PATH variable and trying to invoke mongo shell in command line but below is the error prompt which I am getting. Any help on this pleaseError: “The code execution cannot be proceed because vcruntime140_1.dll was not found. Reinstalling the program may fix the probem.”",
"username": "liza_mae_minay"
},
{
"code": "",
"text": "Welcome to the community!Instead of zip try msi type of installation and see if it works\nor install the missing dll\nAnother student faced same issue\nCheck this link",
"username": "Ramachandra_Tummala"
}
] | Problem installing mongo shell on Windows: vcruntime140_1.dll was not found | 2021-03-06T13:53:38.434Z | Problem installing mongo shell on Windows: vcruntime140_1.dll was not found | 3,300 |
null | [
"sharding"
] | [
{
"code": "",
"text": "I know the maximum number of nodes in the replica set is 50, but how many shards in the sharded cluster are there?",
"username": "Kim_Hakseon"
},
{
"code": "",
"text": "Hi @Kim_Hakseon,I’m not aware of a specific limit on number of shards. For a similar discussion on limits, please see my response on Database and collection limitations - #2 by Stennie.Practical limits you may encounter are documented on MongoDB Limits and Thresholds. Some limits vary between versions of MongoDB server, so make sure you are reviewing the manual version matching your MongoDB server deployment.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Maximum number of shards | 2021-03-06T04:24:50.169Z | Maximum number of shards | 5,079 |
null | [
"queries"
] | [
{
"code": "",
"text": "Hi,\nI have specific structure with array of arrays\n{\n…\nruns:[\n[2,{\"$date\": “2020-01-01T10:12:12.000Z”}, …],\n[55, {\"$date\": “2020-03-01T10:12:12.000Z”}, …] ,\n…]\n}\nthe array of “runs” contains log of procedures that where executed once and I want to delete old ones.\ni.e. I need to remove all sub arrays with date less than X on 2nd place of array.My single suggestion for this\ndb.myCollection.update( {}, {$pull: {runs: {1:{$lte:{$date: “2020-01-15T20:12:12.000Z”}} } } } )\nshould change above to something like\n{\n…\nruns:[\n[55, {\"$date\": “2020-03-01T10:12:12.000Z”}, …] ,\n…]\n}\nbut didn’t work\nI currently using version 4.4.1 of mongodb\nI am new for mongodb than I sure do not see something important.any other suggestions for this case?Thanks",
"username": "Vova_T"
},
{
"code": "1$pull> db.myCollection.update({},{$pull:{runs:{$lt:ISODate('2020-03-01T10:12:12.000Z')}}})\nWriteResult({ \"nMatched\" : 1, \"nUpserted\" : 0, \"nModified\" : 1 })\nISODate$date",
"text": "You don’t need 1 in your $pull just use(I ran this in the shell, so I used ISODate which is equivalent to $date in extended json).\nThis works because operators reach inside arrays - now if you have additional elements which can also be dates, then it’s a bit more problematic (can still be done but requires a bit more complex syntax).By the way, I would recommend storing objects in the array for this exact reason, makes it a lot easier to query for them.Asya",
"username": "Asya_Kamsky"
},
{
"code": "{\n runs: [\n {0:55, 1:{\"$date\": “2020-03-01T10:12:12.000Z”}}\n ...\n ]\n}\n",
"text": "Thanks,But I still do not understand how it works.\nIf I had more than one date in the sub array, would it be both checked?I really do not see any problem in sub arrays because from JS point of view array almost same as the object just have numeric keys.\nIf to any array in JS add non numeric key it automatically converted to object.\nbecause of this here I really cannot understand why “1” cannot not be key. I just could have object likeand from JS point of view it should be same as array.\nAnd more obj[“1”] and obj[1] points to the same value rather obj is array or object.This data design came to save space and time. I moved from sql and had the client that accepts such arrays\nin order to not change client and simplify server (and save space and net transferred data) I moved from sql table to such array.",
"username": "Vova_T"
}
] | Using $pull for array of arrays | 2021-03-05T17:05:19.232Z | Using $pull for array of arrays | 2,993 |
null | [] | [
{
"code": "",
"text": "Hi, Is there a way we can add group of users under one name to the database user list in Atlas. We know this is possible to deal with the project access with teams but could this be possible for database users.any help on this is appreciated… thanks.",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "Hi @Sivaram_Prasad_Chenn,I don’t think this is possible. What you can do instead is give people access to the project and give them enough permissions so they can create their database user themselves. This gives them a chance to choose their username & password.\nAlso this gives them access to the metrics, connection strings & all the other things they will need at some point.Cheers,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "Thanks… but the customer which I work for is very specific about this and don’t want the application teams to be not the user admins. This created the problem for us.",
"username": "Sivaram_Prasad_Chenn"
},
{
"code": "",
"text": "Would create the database users via the Atlas CLI help maybe?https://docs.mongodb.com/mongocli/stable/reference/atlas/dbuser-create/There is a lot of granularity & flexibility in the Org and Project roles in Atlas.I don’t think you need to be an admin. I would assume that a combo Organization Member + Project Data Access Read/Write is enough.",
"username": "MaBeuLux88"
},
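A hedged sketch of what that could look like with the Atlas CLI; the username, password, role, and project ID are placeholders, and flag names should be verified against the linked reference:

    mongocli atlas dbusers create \
      --username appUser01 \
      --password 'changeMe' \
      --role readWriteAnyDatabase \
      --projectId 5e2211c17a3e5a48f5497de3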
{
"code": "",
"text": "LDAP can also be used for this with LDAP group membership used for authorization. However many folks struggle to make their LDAP directory accessible over the network from the Atlas cluster nodes (which is a prereq) so this isn’t a slam dunk.",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "Probably a better bet: MongoDB Atlas - Secrets Engines | Vault | HashiCorp Developer",
"username": "Andrew_Davidson"
},
{
"code": "",
"text": "We have the setup to run the ps scripts to create or modify database users with hashicorp vault authorization. The other thing you mentioned about user access gives the user management access, this is interesting.",
"username": "Sivaram_Prasad_Chenn"
}
] | Atlas database users | 2021-03-03T15:41:11.039Z | Atlas database users | 1,441 |
null | [] | [
{
"code": "",
"text": "Update: The migration is complete! Let us know if you are experiencing any account or content-related issues.Hi all,Happy Wednesday! I wanted to share an exciting announcement with you:The University forums (currently at discourse.university.mongodb.com) will be merging into this community forum next week. This is a change that was requested by you, our community, and is part of a broader strategy to create a more streamlined, unified developer experience for you.Here are the important things to know:Please check your profile information following the migration to ensure that everything is as you’d expect it to appear. (We’ll let you know when the migration window closes.) Some of your stats may vary, depending on your activity in the community forums prior to the migration. If you notice any concerning issues, such as missing badges, images, or content, please comment in this thread or contact us at [email protected] me know below if you have any questions!Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Wow! That’s great news!",
"username": "Soumyadeep_Mandal"
},
{
"code": "",
"text": "I better make sure @steevej sees this ",
"username": "chris"
},
{
"code": "",
"text": "Hello JamieIs this the reason why there is no response from any of the curriculum staff on Uniersity forum?\nI don’t see any reply for last couple of days\nAre they busy in migration activities?It seems some of the courses(M301-security)are retired but no indication of it nor any reply/confirmation from Course instructors\nCan someone respond to those students?Thanks",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hello @Ramachandra_Tummala,in general this forum is not meant to be a shadow support forum for the university. However this seems to be a ONE TIME exceptional situation. I am not a moderator an will not overstep the rules of this forum.To help you please send me your question as direct message and also the section of the class M310 in case I like to read the context. I will NOT send you the answer but a hint.I like to info @Jamie, @Peter and @Stennie_X to keep transparency and also I like to emphasize that I want to help where help is needed. But we do not establish a bypass to any set processes what means this should be a ONE TIME issue.Hope this is ok for all.\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Adding @Shubham_Ranjan for visibility here",
"username": "Jamie"
},
{
"code": "",
"text": "Thanks @Jamie!!Hi @Ramachandra_Tummala,Thanks for sharing this. I will follow up with the curriculum team on this.Thanks,\nShubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "@Shubham_Ranjan @Jamie @Ramachandra_Tummala @PeterI can confirm that M310 is not longer available as class to take\nI still can see and access the content of M310 via the list of taken classes. This should not have any impact since I have taken all classes, it this would impact than the list of available classes should be empty. But only M310 is missing. This is tested with an on demand account.PS Class T101 is not visible for an on demand account, I’d like to suggest to change this so that there is no difference to an non on demand account.Regards,\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "Hi @Ramachandra_Tummala @michael_hoeller,We have recently retired the M310: MongoDB Security Course from MongoDB University. So it will not be visible to any user who registers for DBA learning Path on MongoDB University.We are working on creating a new Security course with up-to-date content and an improved learner experience. Meanwhile, we highly recommend referencing MongoDB Security Documentation.We have updated our users who have reached out to us for this, however, the discussion forum is still open for user’s questions who have registered for the course before the end of January, 21.Is this the reason why there is no response from any of the curriculum staff on Uniersity forum?We are struggling with a reduced headcount for some time now. We will soon catch up with the pace and respond to our users as soon as possible. We really appreciate our super users who have been supporting our learners tirelessly!!Thanks,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "Hi y’all,We’re in the final sprint to complete this migration! The downtime window has been scheduled for 1 AM EST March 4, 2021 through 11:59 PM EST March 5, 2021. During this time, you won’t have access to the University forums and these forums will be read-only. You’ll notice some other announcements around these details going up throughout the next few hours. If you have any course-related issues during this time, please contact the University support team at [email protected]. Otherwise, we’ll see you on the other side!Cheers,Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "How do you expect me to detract myself from work now ?",
"username": "chris"
},
{
"code": "",
"text": "We’re back! The migration is complete. Let us know if you notice anything askew. Otherwise, major props to the team, especially @Stennie_X @Michael_Lee @Shubham_Ranjan & @Katharine_Lucic for getting this done!",
"username": "Jamie"
},
{
"code": "",
"text": "Hi @Jamie,Massive kudos to you for leading the migration project and getting all of the moving parts aligned!This was a great #BuildTogether outcome that involved a tremendous amount of planning, testing, and coordination across teams and timezones.Much appreciated … you #OwnWhatYouDo and #MakeItMatter!Many many thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | The University Forums are Coming to the Community! | 2021-02-17T20:27:23.425Z | The University Forums are Coming to the Community! | 4,840 |
null | [
"change-streams"
] | [
{
"code": "",
"text": "I am R&D’ing on currently leveraging Mongo ChangeStreams.I am seeing that it has a good throughput, resilient to failures due to the ‘Resume token’, but I dont see any documentation on whether if/ how we can manage multiple instances of a subscriber application to receive changes in a load balance manner. Without this, we would have to use a single instance per filtered stream, inherently dealing with single point of failure.If this is currently not offered by the mongo drivers (we use scala), it would be great help if you could direct us to some other implementations that could be done to achieve this.",
"username": "Atil_Pai"
},
{
"code": "",
"text": "Hi did you find out this?",
"username": "Amit_Gupta"
},
{
"code": "",
"text": "Same question here, single point of failure.\nAny solution to this?",
"username": "Froged_Technologies"
}
] | Parallel consumers (subscribers) for Change Streams. Scaling issue | 2020-03-18T16:14:06.004Z | Parallel consumers (subscribers) for Change Streams. Scaling issue | 4,107 |
null | [
"mongodb-shell"
] | [
{
"code": "root@tepalteubuntu1804:/home/admin# \"/usr/bin/mongorestore\" --port 27017 --oplogReplay \"/local/oplog_replay_input\"\n2021-03-05T09:53:53.936+0000 preparing collections to restore from\n2021-03-05T09:53:53.937+0000 replaying oplog\n2021-03-05T09:53:53.940+0000 **Failed: restore error: error applying oplog: applyOps: (Location51070) Modifications to system.views must take an exclusive lock**\n2021-03-05T09:53:53.940+0000 0 document(s) restored successfully. 0 document(s) failed to restore.\n",
"text": "Hi,I periodically take dump of the oplog collection and during restore I apply it using mongorestore utility.\nIf I have a view creation, which is captured by oplog backup and if I try to restore it, it fails with below error.\nPlease can someone help to resolve it and restore views.Thanks,\nAkshaya Srinivasan",
"username": "Akshaya_Srinivasan"
},
{
"code": "mongodmongorestore",
"text": "Hi @Akshaya_Srinivasan,Which versions of mongod and mongorestore are you running?https://jira.mongodb.org/browse/SERVER-47469It looks like this has been fixed in 4.2.10, 4.4.2 and 4.7.0 (dev version, future 5.0 I think).Thanks,\nMaxime.",
"username": "MaBeuLux88"
},
{
"code": "",
"text": "I am using 4.2.8 version mongod and mongorestore. Will try with 4.2.10 and update.",
"username": "Akshaya_Srinivasan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Mongorestore of oplog collection fails if it includes view creation | 2021-03-05T10:08:56.033Z | Mongorestore of oplog collection fails if it includes view creation | 2,115 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "When publish.full.document enabled, source connector unable to push delete record(full document as NULL) to kafka topic, as a result sink is unable to handle delete requests from source.Could you suggest if this is an expected behavior. Or is there any specific config we are missing?publish.full.document is enabled as we need complete document for each update operation.",
"username": "vinay_murarishetty"
},
{
"code": "",
"text": "Hey,Yes this is the known issue (sorry feature of MongoDB connector) done by design at least per Documentation.I suggest you to open another connector that will receive deletes e.g. configured to receive only changes - not full documents.Regards,\nAndrew.",
"username": "lyubick"
}
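A minimal sketch of such a second source connector, configured to forward only delete events; the connector name, URI, database, and collection are placeholders, and the property names should be checked against the connector documentation for your version:

    {
      "name": "mongo-source-deletes",
      "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSourceConnector",
        "connection.uri": "mongodb://mongo1:27017/?replicaSet=rs0",
        "database": "mydb",
        "collection": "mycoll",
        "publish.full.document.only": "false",
        "pipeline": "[{\"$match\": {\"operationType\": \"delete\"}}]"
      }
    }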
] | Kafka source connector not sending delete record to kafka topic | 2021-03-05T12:29:11.166Z | Kafka source connector not sending delete record to kafka topic | 4,399 |
null | [
"aggregation",
"crud"
] | [
{
"code": "await Stock.updateOne(\n {\n _id: mongoose.Types.ObjectId(el.stockId),\n agency,\n },\n [\n {\n $set: {\n currentQuantity: {\n $add: [\n '$currentQuantity',\n {\n $switch: {\n branches: [\n {\n case: { $gt: [diff_qty, 0] },\n then: {\n $cond: [\n { $gte: ['$currentQuantity', diff_qty] },\n -diff_qty,\n 0,\n ],\n },\n },\n {\n case: { $lt: [diff_qty, 0] },\n then: { $abs: diff_qty },\n },\n ],\n default: 0,\n },\n },\n ],\n },\n },\n },\n {\n $set: {\n unitQuantity: {\n $mod: ['$currentQuantity', el.units],\n },\n },\n },\n {\n $set: {\n caseQuantity: {\n $floor: {\n $divide: ['$currentQuantity', el.units],\n },\n },\n },\n },\n ],\n { session: session }\n );\n",
"text": "Below is my code to update currentQuantity of stock document.for an example;Instance 1previously requested stock qty = 5;\nnewly requested stock qty = 10\ntherefore diff_qty = 10-5 = 5;Instance 2previously requested stock qty = 5;\nnewly requested stock qty = 3\ntherefore diff_qty = 3- 5 = (-)2;Instance 3previously requested stock qty = 5;\nnewly requested stock qty = 5\ntherefore diff_qty = 5- 5 = 0;I have written the code to cover all these instances inside $switch. See below.In any of the cases my stock will be updated.\nAs in if my currentQty = 10;Instance 1previously requested stock qty = 5;\nnewly requested stock qty = 10\ntherefore diff_qty = 10-5 = 5; // Request 5 more items from the stockthen currentQty = 10 - 5 = 5;Instance 2previously requested stock qty = 5;\nnewly requested stock qty = 3\ntherefore diff_qty = 3- 5 = (-)2; // 2 items are returned back to the warehouse so it needs to be added back to stockthen currentQty = 10 + 2 = 2;Instance 3previously requested stock qty = 5;\nnewly requested stock qty = 5\ntherefore diff_qty = 5- 5 = 0;then currentQty = 10 + 0 = 10;I want to identify the update which happened in Instance 1, and Instance 2.\nBut not Instance 3 even thought it was updated from currentQty 10 => 10 and there is no effect in that update.Is there any way i could achieve this in the aggregate pipeline itself? Your thoughts would be really helpfulCheers",
"username": "Shanka_Somasiri"
},
{
"code": "",
"text": "Hi @Shanka_Somasiri! I noticed this was an interesting aggregation problem and wanted to know the answer, so I looked into it a bit more.I think I’m missing something, because in each case, what is done is the same:\n10 - diff_qty.In instance 1, diff_qty = 5, currentQuantity = 10 - 5\nIn instance 2, diff_qty = -2, currentQuantity = 10 - (-2) = 12\nIn instnace 3, diff_qty = 0, currentQuantity = 10 - 0 = 10Am I missing something, or does that simplify what you’re trying to do?",
"username": "Sheeri_Cabral"
},
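Following that observation, the whole $switch could collapse to a single $subtract; this is only a sketch reusing the variables from the original code, and the guard against driving currentQuantity negative (Instance 1) would need to be added back if it is still required:

    await Stock.updateOne(
      { _id: mongoose.Types.ObjectId(el.stockId), agency },
      [
        // later stages see the value set by earlier stages in the same pipeline
        { $set: { currentQuantity: { $subtract: ['$currentQuantity', diff_qty] } } },
        { $set: { unitQuantity: { $mod: ['$currentQuantity', el.units] } } },
        { $set: { caseQuantity: { $floor: { $divide: ['$currentQuantity', el.units] } } } }
      ],
      { session: session }
    );

When nothing actually changes (Instance 3), the result's modifiedCount should stay at 0 while matchedCount is 1, which is one way to tell that case apart.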
{
"code": "",
"text": "Hi @Sheeri_Cabral Yes you are correct. I was able to get it sorted.",
"username": "Shanka_Somasiri"
}
] | Aggregate pipeline with updateOne | 2021-01-22T05:42:57.705Z | Aggregate pipeline with updateOne | 1,613 |
null | [
"data-modeling",
"capacity-planning"
] | [
{
"code": "",
"text": "we’re currently working on achieving data isolation at database level. we are expected to have clients around 15 to 20. Each client with data usage of 50 to 100GB. what are all the possible approaches and pros and cons associated with it ?",
"username": "Manish_Beesetti"
},
{
"code": "",
"text": "Hi @Manish_Beesetti,Welcome to MongoDB community.Please read the following post :Multi-tenancy and shared data - #4 by Pavel_DuchovnyHope it covers your consideration.Thanks\nPavel",
"username": "Pavel_Duchovny"
}
] | Multi Tenancy at Data Base Level | 2021-03-04T00:05:52.640Z | Multi Tenancy at Data Base Level | 3,230 |
null | [
"ops-manager",
"kubernetes-operator"
] | [
{
"code": "",
"text": "I am configuring MongoDB Enterprise Kubernetes operator. All the guides I found so far recommend the following procedure: https://www.mongodb.com/blog/post/running-mongodb-ops-manager-in-kubernetes which includes one step where you have to login into the Ops Manager UI and generate an API Key.I am trying to get this whole process automated end to end avoiding the need for UI interaction but I can not find a way to get that API Key using the Ops Manager REST APIs. Do you know the steps required to automate this whole process or do you know any documentation/blog which could help me?",
"username": "Eduardo_Yubero"
},
{
"code": "",
"text": "I haven’t done it myself but this is the API call. Organization Programmatic API Keys — MongoDB Ops Manager 6.0Remember that the API key is how MongoDB glues the Ops Manager login to the replica set. So each user should have a different API key.",
"username": "Albert_Wong"
},
{
"code": "",
"text": "Here is something similar as a tutorial. Deploy a Cluster through the API — MongoDB Ops Manager 6.0",
"username": "Albert_Wong"
},
{
"code": "Status: \n Application Database:\n Last Transition: 2021-03-04T16:24:34Z\n Members: 3\n Observed Generation: 1\n Phase: Running\n Version: 4.2.2\n Ops Manager:\n Last Transition: 2021-03-04T16:24:34Z\n Message: The secret mongodb/ doesn't exist - you need to create it to finish Ops Manager initialization\n Observed Generation: 1\n Phase: Failed\n URL: http://ops-manager-svc.mongodb.svc.cluster.local:8080\n Events: <none>\n",
"text": "Hi Albert thank you for your quick answer, unfortunately those resources did not fix my issue… My goal is to get the MongoDB Enterprise Kubernetes operator deployed as part of a touch-free CI/CD pipeline, so I need to be able to get it deployed without any human interaction in the process.The first link you sent “Organization Programmatic API Keys” requires an API Key being already available “curl --user “{PUBLIC-KEY}:{PRIVATE-KEY}” --digest \\ …” which I can not get unless I use the OpsManager UI.The second resource sounds like what I need, but again it requires an existing user and the knowledge of the API key which again is only accessible via OpsManager UI.When the Ops Manager resource is created it requires a ops-manager-admin-secret being in kubernetes and those credentials are used to create the default user with GLOBAL_OWNER role:kubectl apply -f ops-manager.yamlOnce you have the default user created you are forced to generate a new user’s API Key for any operation you want to do via REST, so you need to use the OpsManager UI.I thought about creating a new first user via REST call which does not require API key and then upgrade that user’s role to GLOBAL_OWNER but I can not do that unless I have the API Key of the default user, the one created from the secret… which again forces me to use the Ops Manager UI.Another thing I tried was to create the Ops Manager resource via yaml in kubernetes without the ops-manager-admin-secret, and that worked partially as I could use the Ops Manager Rest API to create the first user with GLOBAL_OWNER role, but then I got the following error saying that Ops Manager can not be initialized properly without a secret:I am blocked in this loop where for any action I do I have to end up login into the Ops Manager UI to generate a new API Key therefore not being able to automate the MongoDB Enterprise Kubernetes operator deployment.Surely must be possible to deploy it automatically without any human interaction with the UI but I have not found any way so far. Any idea what sequence of API calls I could follow to get the Ops Manager initialized properly?",
"username": "Eduardo_Yubero"
},
{
"code": "",
"text": "hi @Eduardo_YuberoYou can try the following APIs:\nCreate an API Key — MongoDB Ops Manager 6.0 to create an organization API key (note, that the user making the call must have enough permissions for this operation)or Create and Assign One Organization API Key to One Project — MongoDB Ops Manager 6.0 to restrict the org API key to one specific project only.You may need to use the other APIs (to create organization/project) the same way.To answer you question about full automation - in your CI/CD you can use the global API key that is created once the OpsManager is created in Kubernetes. During OpsManager resource creation the Operator initializes the first user there and saves the API keys to the secret with format “<om_resource_name>-admin-key”. After reading you will be able to access OM as a global admin",
"username": "Anton_Lisovenko"
},
{
"code": "kubectl get secret ops-manager-admin-key --namespace=mongodb -o jsonpath=\"{.data.publicApiKey}\" | base64 --decode",
"text": "Thanks Anton! that’s exactly what I wanted to do, and it works!For anybody trying to do the same thing, you just need to wait until the OpsManager is ready and then you can read the API Key from a secret automatically created by Kubernetes with the name “ops-manager-admin-key”:",
"username": "Eduardo_Yubero"
},
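For anyone scripting this end to end, both halves of the key can be read the same way; the privateApiKey field name mirrors the publicApiKey field shown above and should be confirmed with kubectl describe secret, and the endpoint below is only an illustration of a digest-authenticated Ops Manager API call:

    PUBLIC_KEY=$(kubectl get secret ops-manager-admin-key -n mongodb -o jsonpath="{.data.publicApiKey}" | base64 --decode)
    PRIVATE_KEY=$(kubectl get secret ops-manager-admin-key -n mongodb -o jsonpath="{.data.privateApiKey}" | base64 --decode)
    curl --user "${PUBLIC_KEY}:${PRIVATE_KEY}" --digest "http://ops-manager-svc.mongodb.svc.cluster.local:8080/api/public/v1.0/orgs"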
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Ops Manager API Key without UI | 2021-03-03T18:00:20.666Z | Ops Manager API Key without UI | 3,978 |
null | [
"replication",
"configuration"
] | [
{
"code": ":27017\",\"error\":\"HostUnreachable: Error connecting to mongo3:27017 (127.0.0.1:27017) :: caused by :: Connection refused\",\"replicaSet\":\"bart\",\"isMasterReply\":\"{}\"}}\n{\"t\":{\"$date\":\"2021-03-01T20:01:18.991+01:00\"},\"s\":\"I\", \"c\":\"NETWORK\", \"id\":4712102, \"ctx\":\"ReplicaSetMonitor-TaskExecutor\",\"msg\":\n\"Host failed in replica set\",\"attr\":{\"replicaSet\":\"bart\",\"host\":\"mongo3:27017\",\"error\"\n:{\"code\":6,\"codeName\":\"HostUnreachable\",\"errmsg\":\"Error connecting to mongo3:27017 (127.0.0.1:27017) :: caused by :: Connection refused\"},\n\"action\":{\"dropConnections\":true,\"requestImmediateCheck\":false,\"outcome\":{\"host\":\"mongo3:27017\",\"success\":false,\"errorMessage\":\"HostUnreachable: \nError connecting to mongo3:27017 (127.0.0.1:27017) :: caused by :: Connection refused\"}}}}\n",
"text": "Hi, I am new to Mongo and I have encountered this error in the log of a newly installed database;From what I understand the error is that access is denied by ip 127.0.0.1. In mongod.conf, bindIp is defined: 102.101.232.214.What is the cause of this error, how can it be solved?Can I include multiple ips (add 127.0.0.1) in bindip parameter?Thanks indeed\nAntonio",
"username": "Antonio_NAVARRO"
},
{
"code": "--bind_ip <hostnames|ipaddresses|Unix domain socket paths>",
"text": "Welcome to the Community!Yes you can add multiple IP’s--bind_ip <hostnames|ipaddresses|Unix domain socket paths>Add localhost,IP,127.0.0.1 and see\nWhere is your mongod running?Locally or hosted elsewhere\nCan you connect locally by shell\nCould be firewall issues or bindip not setup properly",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Ramachandra_TummalaI have included in the variable bind_id with the two ips, the old one plus the 127.0.0.1bindIp: 127.0.0.1, xxx.xxx.xxx.xxxThe error has disappeared. What I don’t know is why it used 127.0.0.1 to connect to the instance and where it got that IP from.Thanks indeed\nAntonio NAVARRO",
"username": "Antonio_NAVARRO"
},
{
"code": "",
"text": "From your logs it looks like you have replication enabled. Could be a request from another node(unlikely for 127.0.0.1) or from itself.",
"username": "chris"
},
{
"code": "",
"text": "Chris,This is a replicaset, yesterday we saw that / etc / hosts has an entry with the name of the machine and the IP 127.0.0.1 configured.Thanks",
"username": "Antonio_NAVARRO"
}
] | Error in Log (ReplicaSetMonitor-TaskExecutor) | 2021-03-01T22:07:09.435Z | Error in Log (ReplicaSetMonitor-TaskExecutor) | 5,508 |
null | [
"crud",
"mongodb-shell"
] | [
{
"code": "db.getCollection('test').findOneAndUpdate({ lastAssignedTime: {$lt: new Date((new Date())-1000*10)} },\n [\n {\n $set: \n {\n priority: \n {\n \"$cond\": \n {\n if: {$gte: [\"$vmAssignedSessions\", 3]},\n then: 99,\n else: 1,\n }\n }\n }\n }\n ],\n {sort: { \"priority\" : 1 }}\n)\n",
"text": "Hello,I am not able use update pipeline with findOneAndUpdate and need help with the same.I keep running into021-03-03T17:24:11.474-0800 E QUERY [js] Error: the update operation document must contain atomic operators :\nDBCollection.prototype.findOneAndUpdate@src/mongo/shell/crud_api.js:819:1Mongo component versions:\nMongoDB shell version v4.0.0connecting to: mongodb://127.0.0.1:27017MongoDB server version: 4.2.12Thanks in advance!",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Hello @Abhishek_Kumar_Singh,MongoDB shell version v4.0.0aggregation pipeline is support from mongodb v4.2, i can see your server version is valid but You have to update your mongodb shell version to 4.2 or above.",
"username": "turivishal"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unable to use update pipeline with findOneAndUpdate | 2021-03-04T01:27:19.539Z | Unable to use update pipeline with findOneAndUpdate | 3,329 |
null | [
"dot-net",
"connecting",
"security"
] | [
{
"code": "{\n \"ClassName\": \"System.DllNotFoundException\",\n \"Message\": \"Unable to load shared library 'security.dll' or one of its dependencies. In order to help diagnose loading problems, consider setting the LD_DEBUG environment variable: libsecurity.dll: cannot open shared object file: No such file or directory\",\n \"Data\": null,\n \"InnerException\": null,\n \"HelpURL\": null,\n \"StackTraceString\": \" at MongoDB.Driver.Core.Authentication.Sspi.NativeMethods.AcquireCredentialsHandle(String principal, String package, SecurityCredentialUse credentialUsage, IntPtr logonId, IntPtr identity, Int32 keyCallback, IntPtr keyArgument, SspiHandle& credentialHandle, Int64& timestamp)\\n at MongoDB.Driver.Core.Authentication.Sspi.SecurityCredential.Acquire(SspiPackage package, String username, SecureString password)\\n at MongoDB.Driver.Core.Authentication.GssapiAuthenticator.FirstStep..ctor(String serviceName, String hostName, String realm, String username, SecureString password, SaslConversation conversation)\\n at MongoDB.Driver.Core.Authentication.GssapiAuthenticator.GssapiMechanism.Initialize(IConnection connection, SaslConversation conversation, ConnectionDescription description)\\n at MongoDB.Driver.Core.Authentication.SaslAuthenticator.Authenticate(IConnection connection, ConnectionDescription description, CancellationToken cancellationToken)\\n at MongoDB.Driver.Core.Authentication.AuthenticationHelper.Authenticate(IConnection connection, ConnectionDescription description, IReadOnlyList`1 authenticators, CancellationToken cancellationToken)\\n at MongoDB.Driver.Core.Connections.ConnectionInitializer.InitializeConnection(IConnection connection, CancellationToken cancellationToken)\\n at MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelper(CancellationToken cancellationToken)\",\n \"RemoteStackTraceString\": null,\n \"RemoteStackIndex\": 0,\n \"ExceptionMethod\": null,\n \"HResult\": -2146233052,\n \"Source\": \"MongoDB.Driver.Core\",\n \"WatsonBuckets\": null,\n \"TypeLoadClassName\": null,\n \"TypeLoadAssemblyName\": null,\n \"TypeLoadMessageArg\": null,\n \"TypeLoadResourceID\": 0\n }\nvar settings = new MongoClientSettings\n {\n Credential = MongoCredential.CreateGssapiCredential([email protected])\n .WithMechanismProperty(\"CANONICALIZE_HOST_NAME\", canonicalizeHostName),\n\n Servers = servers.Split(',').Select(s => new MongoServerAddress(s, port))\n };\n\n Database = new MongoClient(settings).GetDatabase(databaseName);\n\n _collectionName = collectionName ?? typeof(T).Name;\n _collection = Database.GetCollection<T>(collectionName);\n",
"text": "I am currently running a dockerized c# .NET Core 2.1 application on Linux.My application connects to Mongo on windows using CreateGssapiCredential and works as expected.When I try to run the same app in linux it fails with the error “An exception occurred while opening a connection to the server.”. Stack trace -I followed the documentation here for linux - Authenticate to MongoDB with the C# Driver — MongoDB Manualand also the GSSAPI/Kerberos documentation here - mongo-csharp-driver/authentication.md at master · mongodb/mongo-csharp-driver · GitHubThis is the code that sets the connection -Nothing seems to fix the problem. How do i get this .NET core 2.1 app to work in linux with GSSAPI?",
"username": "Girish_Nair"
},
{
"code": "masterlibgsaslsecurity.dllmaster",
"text": "Hi, Girish,Thank you for reaching out. We recently implemented GSSAPI/Kerberos support on Linux, which is now in our master branch but not in a stable release yet. We will be releasing it shortly in 2.12.0. More information can be found in https://jira.mongodb.org/browse/CSHARP-2474. (Note that the code for CSHARP-2474 did not make it into 2.12.0-beta1, but will be in the GA release.)Your second documentation reference refers to the unreleased code that will be included in 2.12.0. The first documentation reference is to a very old 1.X-era driver that used libgsasl to implement Kerberos support. The 2.X-era driver implements Kerberos support on Windows using Windows-specific SASL APIs (present in security.dll) that have no direct Linux equivalent. Thus the DLL redirect technique documented in the 1.X documentation will not work with 2.X drivers.You can either compile the 2.12.0 driver from source using the master branch or wait until we release the 2.12.0 NuGet package, which should happen in the next few weeks.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Thank you for the update.",
"username": "Girish_Nair"
},
{
"code": "",
"text": "I compiled the 2.12.0 driver from source using master but i am still getting the same error - “Unable to load shared library ‘security.dll’ or one of its dependencies…”.Is this document mongo-csharp-driver/authentication.md at master · mongodb/mongo-csharp-driver · GitHub upto date on what is required to get this working in RHEL?As mentioned in the document i have libgssapi_krb5.so in the /usr/lib64/ and my dotnet core app is deployed under /app folder. This driver is looking for windows security.dll which does not exist in linux.",
"username": "Girish_Nair"
},
{
"code": "masterRuntimeInformation.IsOSPlatform(OSPlatform.Linux)libgssapi_krb5.soRuntimeInformation.IsOSPlatform(OSPlatform.Windows)security.dllsecurity.dll",
"text": "Hi, Girish,The linked documentation is up-to-date and the driver built from master has been tested against RHEL, Ubuntu, and a variety of other Linux distros.If we detect that we are running on Linux (via RuntimeInformation.IsOSPlatform(OSPlatform.Linux)) then we P/Invoke to libgssapi_krb5.so. If instead we detect running on Windows (via RuntimeInformation.IsOSPlatform(OSPlatform.Windows)) then we P/Invoke to SSPI in security.dll. So it is rather surprising that the driver would be attempting to load security.dll on Linux.Barring unforeseen events, we will be releasing the 2.12.0 driver in the next few days. Please try again with the official driver NuGet once it is released to see if it resolves your issue. If not, we will be happy to investigate further with you.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "Hi, Girish,The v2.12.0 release is now available on NuGet. Please try again with the official release and let us know if you encounter any issues.Sincerely,\nJames",
"username": "James_Kovacs"
},
{
"code": "",
"text": "it worked with v2.12.0. Thank you so much!!",
"username": "Girish_Nair"
},
{
"code": "",
"text": "That’s great news! We are glad that this new feature is working for you.James",
"username": "James_Kovacs"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | GSSAPI Authentication failing on LINUX RHEL7 with .NET Core 2.1 | 2021-02-20T23:15:19.135Z | GSSAPI Authentication failing on LINUX RHEL7 with .NET Core 2.1 | 3,982 |
null | [
"app-services-hosting"
] | [
{
"code": "",
"text": "Hello,Is it possible to upload files to Realm Hosting using HTTP REST APIs?Thanks,Ruslan",
"username": "rkazakov"
},
{
"code": "",
"text": "Hik @rkazakov,You should be able to do that with the Realm Admin API:Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hi @Pavel_Duchovny!APIs look great! We are going to try them.Thank you,Ruslan",
"username": "rkazakov"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Upload files to Realm Hosting using HTTP REST APIs? | 2021-03-03T09:33:46.814Z | Upload files to Realm Hosting using HTTP REST APIs? | 3,727 |
null | [
"atlas-device-sync"
] | [
{
"code": "var body: some View\n{\n if app.currentUser != nil && app.currentUser?.state == .loggedIn\n {\n\n BaseView()\n \n }\n else if firstTime\n {\n IntroBase(showIntro: $showIntro)\n }\n else\n {\n LoginView(showIntro: $showIntro)\n\n }\n}\n",
"text": "Hi everyone, as the title suggests my app crashes on logout with the error: \"“Fatal error: Unexpectedly found nil while unwrapping an Optional value” due to this line of code:\n“let realm = try ! Realm(configuration: app.currentUser!.configuration(partitionValue: “user=(app.currentUser!.id)”))” .So basically because the user has logged out, one of the views that uses realm is receiving the currenUser as nil which is causing the crash. This makes sense however my root view on app startup is this:What I expected would happen would be that on logout, the root view changes back to the LoginView and nothing should be calling Realm.I’m not sure how to fix this? has anyone ran into a similar situation? Any guidance will be appreciated.",
"username": "Deji_Apps"
},
{
"code": "BaseViewBaseView().environment(\\.realmConfiguration, \n app.currentUser!.configuration(partitionValue: “user=\\(app.currentUser!.id)”)\nBaseView@ObservedResults(Item.self) var items\n@Environment(\\.realm) var itemRealm\n",
"text": "Working with (especially synced) realms in a SwiftUI app becomes a lot simpler and more robust in Realm-Cocoa 10.6.From the view you’ve shown, pass the realm configuration to BaseView:and then from BaseView, you can get a live (updatable) query results and / or the realm:",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thank you so much for your reply Andrew, it was very helpful. I am looking forward to exploring the changes in version 10.6.",
"username": "Deji_Apps"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | IOS app crash on Logout | 2021-02-25T20:21:20.799Z | IOS app crash on Logout | 2,040 |
[
"performance"
] | [
{
"code": "",
"text": "MongoDB Version: 3.6.17\nStorage Engine: WiredTigerPeriodically, the update to the system.sessions collection spikes, when the wired tiger’s write ticket is exhausted\nThis regularly slows down the service because of this.I’m guessing it’s probably due to the 5 minute session refreshIs there a way to avoid the write ticket exhaustion from updating to system.sessions?11588×579 62 KB\n21599×623 92.4 KB",
"username": "Yuki_Akano"
},
{
"code": "",
"text": "Hi,I’m having the same type of problem, MongoDB version is 4.0.20",
"username": "Felipe_Esteves1"
}
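One way to confirm that the slowdown really is write-ticket exhaustion is to watch the WiredTiger ticket counters in serverStatus. This is only a monitoring sketch, not a fix:

```js
// Available/used WiredTiger write tickets (serverStatus output, MongoDB 3.6/4.0 era)
db.serverStatus().wiredTiger.concurrentTransactions.write
// e.g. { "out" : 128, "available" : 0, "totalTickets" : 128 }
// "available" at 0 means all write tickets are taken and new writers queue up
```

If "available" regularly drops to 0 around the periodic system.sessions refresh, the symptom described above is confirmed.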
] | Updating the system.sessions collection depletes wired tiger's write tickets | 2020-11-17T10:58:13.477Z | Updating the system.sessions collection depletes wired tiger’s write tickets | 1,906 |
|
[
"dot-net",
"production"
] | [
{
"code": "",
"text": "This is the general availability release for the 2.12.0 version of the driver.The main new features in 2.12.0 include:The full list of JIRA issues that are resolved in this release is available at:https://jira.mongodb.org/issues/?jql=project%20%3D%20CSHARP%20AND%20fixVersion%20%3D%202.12.0%20ORDER%20BY%20key%20ASCDocumentation on the .NET driver can be found at:",
"username": "James_Kovacs"
},
{
"code": "",
"text": "",
"username": "system"
}
] | .NET Driver 2.12.0 Released | 2021-03-03T02:59:22.677Z | .NET Driver 2.12.0 Released | 1,978 |
|
null | [
"spark-connector"
] | [
{
"code": "libraryDependencies ++= Seq(\n \"org.apache.spark\" %% \"spark-core\" % \"3.0.1\",\n \"org.apache.spark\" %% \"spark-sql\" % \"3.0.1\", \n \"org.apache.spark\" %% \"spark-mllib\" % \"3.0.1\", \n \"org.apache.spark\" %% \"spark-streaming\" % \"3.0.1\" % \"provided\",\n \"org.mongodb.spark\" %% \"mongo-spark-connector\" % \"3.0.1\" exclude (\"org.mongodb\", \"mongo-java-driver\"),\n)\n",
"text": "I am using Apache Spark Java API and Mongo Spark Connector in PlayFramework 2.8.7After updated from “mongo-spark-connector” version 3.0.0 to version 3.0.1, I got a “local class incompatible” error:org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 10.0 failed 4 times, most recent failure: Lost task 0.3 in stage 10.0 (TID 26, 172.18.0.2, executor 1): java.io.InvalidClassException: com.mongodb.spark.MongoSpark$; local class incompatible: stream classdesc serialVersionUID = -148646310337786170, local class serialVersionUID = -3005450305892693805This normally happens when a class object implements Serializable doesn’t define the serialVersionUID in Java.But this error doesn’t occur in Spark Connector Version 3.0.0. I appreciate any hints and help.",
"username": "Yingding_Wang"
},
{
"code": "mongo-spark-connector 3.0.0spark connector 3.0.1[warn] o.a.s.s.TaskSetManager - Lost task 0.0 in stage 577.0 (TID 1198, 172.18.0.3, executor 0): java.io.InvalidClassException: com.mongodb.spark.MongoSpark$; local class incompatible: stream classdesc serialVersionUID = -148646310337786170, local class serialVersionUID = -3005450305892693805\n\tat java.base/java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:689)\n\tat java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2012)\n\tat java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1862)\n\tat java.base/java.io.ObjectInputStream.readClass(ObjectInputStream.java:1825)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1650)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readArray(ObjectInputStream.java:2102)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)\n\tat org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)\n\tat org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)\n\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)\n\tat org.apache.spark.scheduler.Task.run(Task.scala:127)\n\tat org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)\n\tat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)\n\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n\n[error] o.a.s.s.TaskSetManager - Task 0 in stage 577.0 failed 4 times; aborting job\n[error] application - Exception while classifying. 
SVMclassification for 3d59af19-88df-4663-a4c8-37179436e997 aborted.\nJob aborted due to stage failure: Task 0 in stage 577.0 failed 4 times, most recent failure: Lost task 0.3 in stage 577.0 (TID 1201, 172.18.0.4, executor 1): java.io.InvalidClassException: com.mongodb.spark.MongoSpark$; local class incompatible: stream classdesc serialVersionUID = -148646310337786170, local class serialVersionUID = -3005450305892693805\n\tat java.base/java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:689)\n\tat java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2012)\n\tat java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1862)\n\tat java.base/java.io.ObjectInputStream.readClass(ObjectInputStream.java:1825)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1650)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readArray(ObjectInputStream.java:2102)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)\n\tat org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)\n\tat org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)\n\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)\n\tat org.apache.spark.scheduler.Task.run(Task.scala:127)\n\tat org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)\n\tat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)\n\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n\nDriver stacktrace:\norg.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 577.0 failed 4 times, most recent failure: Lost task 0.3 in stage 577.0 (TID 1201, 
172.18.0.4, executor 1): java.io.InvalidClassException: com.mongodb.spark.MongoSpark$; local class incompatible: stream classdesc serialVersionUID = -148646310337786170, local class serialVersionUID = -3005450305892693805\n\tat java.base/java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:689)\n\tat java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2012)\n\tat java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1862)\n\tat java.base/java.io.ObjectInputStream.readClass(ObjectInputStream.java:1825)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1650)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readArray(ObjectInputStream.java:2102)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)\n\tat org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)\n\tat org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)\n\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)\n\tat org.apache.spark.scheduler.Task.run(Task.scala:127)\n\tat org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)\n\tat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)\n\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n\nDriver stacktrace:\n\tat org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2059)\n\tat org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2008)\n\tat org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2007)\n\tat 
scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)\n\tat scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)\n\tat scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)\n\tat org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2007)\n\tat org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:973)\n\tat org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:973)\n\tat scala.Option.foreach(Option.scala:407)\n\tat org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:973)\n\tat org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2239)\n\tat org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2188)\n\tat org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2177)\n\tat org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)\n\tat org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:775)\n\tat org.apache.spark.SparkContext.runJob(SparkContext.scala:2099)\n\tat org.apache.spark.SparkContext.runJob(SparkContext.scala:2120)\n\tat org.apache.spark.SparkContext.runJob(SparkContext.scala:2139)\n\tat org.apache.spark.SparkContext.runJob(SparkContext.scala:2164)\n\tat org.apache.spark.rdd.RDD.$anonfun$foreachPartition$1(RDD.scala:994)\n\tat org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)\n\tat org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)\n\tat org.apache.spark.rdd.RDD.withScope(RDD.scala:388)\n\tat org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:992)\n\tat com.mongodb.spark.MongoSpark$.save(MongoSpark.scala:120)\n\tat com.mongodb.spark.MongoSpark$.save(MongoSpark.scala:169)\n\tat com.mongodb.spark.sql.DefaultSource.createRelation(DefaultSource.scala:70)\n\tat org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)\n\tat org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)\n\tat org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)\n\tat org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)\n\tat org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:175)\n\tat org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:213)\n\tat org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)\n\tat org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:210)\n\tat org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:171)\n\tat org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:122)\n\tat org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:121)\n\tat org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:963)\n\tat org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:100)\n\tat org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)\n\tat org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:87)\n\tat org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:764)\n\tat org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)\n\tat 
org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:963)\n\tat org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:415)\n\tat org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:399)\n\tat ml.SparkConnectorHelper.saveComputedStressToSegmentCollection(SparkConnectorHelper.java:344)\n\tat ml.Classificator.classifyStressTypeWithSVM(Classificator.java:411)\n\tat ml.Classificator.classifySegments(Classificator.java:151)\n\tat daemon.ClassificationDaemon.lambda$new$0(ClassificationDaemon.java:46)\n\tat akka.actor.LightArrayRevolverScheduler$$anon$1$$anon$2.run(LightArrayRevolverScheduler.scala:113)\n\tat akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:48)\n\tat akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:48)\n\tat java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:290)\n\tat java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1020)\n\tat java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1656)\n\tat java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1594)\n\tat java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:183)\nCaused by: java.io.InvalidClassException: com.mongodb.spark.MongoSpark$; local class incompatible: stream classdesc serialVersionUID = -148646310337786170, local class serialVersionUID = -3005450305892693805\n\tat java.base/java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:689)\n\tat java.base/java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:2012)\n\tat java.base/java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1862)\n\tat java.base/java.io.ObjectInputStream.readClass(ObjectInputStream.java:1825)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1650)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readArray(ObjectInputStream.java:2102)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1667)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2464)\n\tat java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2358)\n\tat java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2196)\n\tat java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1679)\n\tat java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:493)\n\tat 
java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:451)\n\tat org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)\n\tat org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)\n\tat org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)\n\tat org.apache.spark.scheduler.Task.run(Task.scala:127)\n\tat org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:446)\n\tat org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)\n\tat org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:449)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)\n\tat java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n",
"text": "my current solution is deployed with mongo-spark-connector 3.0.0 successfully. As I used spark connector 3.0.1, I got following stack trace while trying to save a Dataset transformed by a LSVC transformer:",
"username": "Yingding_Wang"
},
{
"code": "",
"text": "Hi @Yingding_Wang,Was the 3.0.1 connector deployed to all Spark Executors? I think that this error is due to the change in the MongoSpark object as part of the 3.0.1 release.The different serialVersionUID indicates mixing two versions of the MongoDB Spark Connector. Please ensure that both the MongoDB Spark Connectors are all of the same version and all Spark versions are the same.All the best,Ross",
"username": "Ross_Lawley"
},
{
"code": "spark.executor.extraClassPathserialVersionUID",
"text": "@Ross_Lawley, thanks so much for your hint. It is exactly what has happened. I had a 3.0.0 connector Jar in the class path of worker nodes. The connector version 3.0.1 is only updated in the driver node. Now we have switched to declare the spark.executor.extraClassPath to load the right connector version. I am so grateful, that Spark Connector Dev Team changes the serialVersionUID for every Spark Connector version. It really helps to prevent silly mistakes which I have made to keep the spark cluster consistent.",
"username": "Yingding_Wang"
},
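An alternative to maintaining spark.executor.extraClassPath on every worker is to let Spark ship the connector itself, so the driver and all executors are guaranteed to load the same version. A hedged example; the application class and jar name are placeholders:

```
spark-submit \
  --packages org.mongodb.spark:mongo-spark-connector_2.12:3.0.1 \
  --class com.example.MyApp \
  my-app.jar
```

With --packages the 3.0.1 artifact is resolved once and distributed to the executors, which avoids the mixed-version serialVersionUID mismatch discussed above.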
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Spark Connector 3.0.1 MongoSpark local class incompatible error | 2021-02-09T19:27:31.933Z | Spark Connector 3.0.1 MongoSpark local class incompatible error | 4,790 |
null | [
"aggregation",
"queries"
] | [
{
"code": "sensorsDataidDevice: 2542idSensor: 3{\n\t\"_id\": {\"$oid\":\"5f1957c7cdf25116937ed3ef\"},\n\t\"idSensor\":{\"$numberLong\":\"3\"},\n\t\"idDevice\":{\"$numberLong\":\"48\"},\n\t...\n}\ndevices{\n\t\"_id\": {\"$numberLong\":\"2542\"},\n\t\"group\": {\"$id\":{\"$numberLong\":\"11\"}},\n\t...\n}\n_ididDevicegroups{\n\t\"_id\":{\"$numberLong\":\"11\"},\n\t\"name\":\"MyCustomerOfInterest\",\n\t...\n}\ndevicesidSensorsensorsData",
"text": "I know using DBRefs is controversial. Regretfully, I’m not the DBA, so I can’t change its structure.I have one collection called sensorsData that stores documents generated by some devices, identified by their id:idDevice: 2542The value might change, there are tons of devices.Regarding these documents, I’m only interested in those about broadcastings, identified by a concrete id, #3:idSensor: 3Other sensors do other stuff, one device might have different sensors.Both fields are at level 0, after the id of the document itself:I need to filter all devices related to a concrete customer, identified by its specific id, #11. This id is referenced using a DBRef within a collection called devices:Notice this _id is the idDevice from the 1st collection.groups is the third collection, where the id of every customer originally lies:I guess I don’t need to use this 3rd collection, the key information is stored within the 2nd one, that relates the customer to its devices.So, the question:How can I use devices to filter the devices -or the documents with idSensor #3- from sensorsData that are related to this customer #11?Thanks in advance.",
"username": "Javier_Blanco"
},
{
"code": "devicesidSensorsensorsData$lookupsensorsDatadevicesdb.sensorsData.aggregate([\n { \n $match: { idSensor : NumberLong(\"3\") } \n },\n { \n $lookup: {\n from: \"devices\",\n let: { sensorsDeviceId: \"$idDevice\" },\n pipeline: [ \n { \n $match: { \n $expr: {\n $and: [\n { $eq: [ \"$$sensorsDeviceId\", \"$_id\" },\n { $eq: [ \"$group\", NumberLong(\"11\") }\n ]\n }\n }\n }\n ],\n as: \"group_sensors_data\"\n }\n },\n]) \n",
"text": "How can I use devices to filter the devices -or the documents with idSensor #3 - from sensorsData that are related to this customer #11 ?Hello @Javier_Blanco,You can do an aggregation $lookup on the sensorsData and devices collections. The query can look like this (a lookup of Join Conditions and Uncorrelated Sub-queries):Let me know how this works out.",
"username": "Prasad_Saya"
},
{
"code": "var pipeline = \n[\n\t{\n\t\t\"$match\": {\"group.$id\": 11}\n\t},\n\t{\n \"$lookup\":\n {\n \"from\": \"sensorsData\",\n \"localField\": \"_id\",\n \"foreignField\": \"idDevice\",\n \"as\": \"array\"\n }\n },\n {\n \"$unwind\": \"$array\"\n },\n {\n\t\t\"$match\": {\"array.idSensor\": 3}\n\t}, \n...\n]\n\ndb.devices.aggregate(pipeline)\n$match$addFields$projectidSensor$unwind",
"text": "Hi, @Prasad_Saya, thanks a lot for your answer.I came out with something more obvious:After the last $match there are a couple of $addFields and a $project.Your solution looks far better; mine doesn’t filter by idSensor until doing the $unwind, so I’m carrying tons of files I don’t need during first stages. Im trying to implement the query in a BI application and the server is unable to load the data, I always end up getting a timeout error.I’ll try to fix mine using yours, thanks again!",
"username": "Javier_Blanco"
},
{
"code": "idSensor$unwindidSensor$match: { idSensor : NumberLong(\"3\") }",
"text": "Your solution looks far better; mine doesn’t filter by idSensor until doing the $unwind , so I’m carrying tons of files I don’t need during first stages. Im trying to implement the query in a BI application and the server is unable to load the data, I always end up getting a timeout error.I’ll try to fix mine using yours, thanks again!Sure. The query can benefit from indexes, an obvious one is on the idSensor field used in the following match stage: $match: { idSensor : NumberLong(\"3\") }Also refer: Aggregation Pipeline Optimization",
"username": "Prasad_Saya"
},
{
"code": "var pipeline = \n[\n \t{ \n \t\"$match\": {\"idSensor\" : 3} \n \t},\n\t{ \n \"$lookup\": \n {\n \"from\": \"devices\",\n \"let\": {\"sensorsDeviceId\": \"$idDevice\"},\n \"pipeline\": [{\"$match\": {\"$expr\": {\"$and\": [{\"$eq\": [\"$$sensorsDeviceId\", \"$_id\"]}, {\"$eq\": [\"group.$id\", 11]}]}}}],\n \"as\": \"array\"\n }\n\t},\n {\n \"$unwind\": \"$array\"\n },\n...\n]\n\ndb.sensorsData.aggregate(pipeline)\n$unwind$addFields$project",
"text": "This is my new try:As in the previous query, after the $unwind there are a couple of $addFields and a $project .Regretfully, this new query doesn’t work; I mean, I don’t get any error message, it just keeps on going on forever, while the first one just needs about 100 s to give back an answer… any hint?Thanks in advance!",
"username": "Javier_Blanco"
},
{
"code": "",
"text": "Hello @Javier_Blanco,I don’t know what is the reason for the slow query. Are there any indexes on these collections? Did you follow the rules of the aggregation optimization (the link I had included earlier). Also, the query behavior is depending upon factors like Query Selectivity.May be, you have to try the queries (both of them) on a smaller sample set of documents from the two collections, and also Analyze Query Performance.",
"username": "Prasad_Saya"
},
{
"code": "idSensoridSensor$match: {\"idSensor\": 3}",
"text": "I guess there are much more documents with idSensor #3 than #11 customer documents with any kind of idSensor. Even if I just try only the first $match: {\"idSensor\": 3}, the query goes on forever. That might be the problem.",
"username": "Javier_Blanco"
},
{
"code": "sensorsData{\n\t\"_id\": {\"$oid\":\"5f1957c7cdf25116937ed3ef\"},\n\t\"idSensor\": {\"$numberLong\":\"3\"},\n\t\"idDevice\":{\"$numberLong\":\"48\"},\n\t...\n\t\"data\":\n\t{\n\t\t\"inicio\":\"2019-11-28T16:09:08+01:00\",\n\t\t\"fin\":\"2019-11-28T16:09:18+01:00\",\n\t\t...\n\t},\n\t...\n}\n$match[\n...\n\t{\n \"$match\":\n {\n \"$expr\":\n {\n \"$and\": \n [\n {\"array.idSensor\": 3},\n {\"$gt\": [\"array.data.inicio\", \"ISODate('2021-02-28T23:59:59+01:00')\"]},\n {\"$lt\": [\"array.data.fin\", \"ISODate('2021-03-15T00:00:01+01:00')\"]}\n ]\n }\n }\n },\n...\n]\n",
"text": "Given that every sensorsData document includes dates:I’m trying to add them as filters to my original query, within the second $match:But I’m getting this error message:“FieldPath field names may not contain ‘.’.”What is it about?",
"username": "Javier_Blanco"
},
{
"code": " { \"$unwind\": \"$array\" },$match",
"text": "Hello @Javier_Blanco,Please post couple sample documents with relevant fields after the { \"$unwind\": \"$array\" }, stage - I believe the $match stage you had posted is following it - so that I can see what the data looks like and what is wrong with your query.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I am not too sure but I would try to use $ for the field names as it is usually required withing a $expr.See https://docs.mongodb.com/manual/reference/operator/query/expr",
"username": "steevej"
}
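Following that hint, the failing $match can be rewritten so that every field reference inside $expr starts with $ and every comparison is an aggregation expression. This is only a sketch: it assumes inicio and fin are stored as ISO-8601 strings, as in the sample document above (if they were real Date values, the literals would need to be ISODate(...) values rather than quoted strings):

```js
{
  "$match": {
    "$expr": {
      "$and": [
        { "$eq": [ "$array.idSensor", 3 ] },
        { "$gt": [ "$array.data.inicio", "2021-02-28T23:59:59+01:00" ] },
        { "$lt": [ "$array.data.fin", "2021-03-15T00:00:01+01:00" ] }
      ]
    }
  }
}
```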
] | Filtering one collection using DBRef from a second one that refers to another third | 2021-02-25T11:19:13.055Z | Filtering one collection using DBRef from a second one that refers to another third | 10,421 |
null | [
"kafka-connector"
] | [
{
"code": "copy.existingINFO Finished copying existing data from the collection(s). (com.MongoDB.kafka.connect.source.MongoSourceTask:553)",
"text": "Hi there,\nI’ve encountered strange behavior of Kafka source connector when I creating new connector with copy.existing setting I’m getting only 101 documents out of 12k+ while I’m using M5 cluster, but after upgrade to M10, and reproducing the same flow it just works and I’m getting all the documents.So, I would like to ask if there may be a bug in the connector so it is not failing/crashing/logging any error but instead just says INFO Finished copying existing data from the collection(s). (com.MongoDB.kafka.connect.source.MongoSourceTask:553)",
"username": "Vlad_Goldman"
},
{
"code": "",
"text": "Hi Vlad,Currently with shared/free tier clusters in Atlas, there is some proxy work that happens under the covers which causes this behavior with the Kafka Connector. We have a ticket to address this, however for now, if you’d like to copy.existing you will need to use a dedicated cluster M10+.Thanks,\nRob",
"username": "Robert_Walters"
},
{
"code": "",
"text": "Thanks for clarification, any chance that I can get a link to the ticket to watch it?",
"username": "Vlad_Goldman"
},
{
"code": "",
"text": "https://jira.mongodb.org/projects/KAFKA/issues/KAFKA-201This should be addressed in the next release - 1.5.",
"username": "Robert_Walters"
}
] | Kafka source connector and copy.existing on shared-tier instances | 2021-02-17T18:39:06.245Z | Kafka source connector and copy.existing on shared-tier instances | 2,329 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "I have a collection upon which i want to enable mongo-kafka connector to push changes to kafka.\nI want to push 2 types of events on 2 separate topics: snapshot and delta where the 1st topic is just the snapshot of record(comes from “fullDocument” key in stream event) and the second is what has changed(comes from “updateDescription”). I need these 2 separately because of the way “fullDocument” works where we may miss updates in between because of concurrent writes.\nIs there a way to achieve this in mongo connector?",
"username": "Saurav_Prakash"
},
{
"code": "[{\"$match\":{ \"$and\": [{\"fullDocument.type\":\"temp\"},{\"fullDocument.value\":{\"$gte\":100}}] }}]\n",
"text": "You can create two instance of the connector that point to the same namespace and define the pipeline configuration accordingly.An example of the pipeline parameter is something like:Here I am matching where the type field is “temp” and the “value” field contains a number greater than or equal to 100.",
"username": "Robert_Walters"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Multiple kafka topic for one mongodb collection using kafka connector | 2021-03-01T16:18:15.746Z | Multiple kafka topic for one mongodb collection using kafka connector | 2,432 |
null | [] | [
{
"code": "nametitlenametitle[{\n identifier: \"x\",\n versions: [\n {\n name: \"test_name\"\n version: \"x.x.x\"\n },\n {\n name: \"test_name\"\n version: \"x.x.x\"\n },\n ]\n},\n{\n identifier: \"y\",\n versions: [\n {\n name: \"test_name2\"\n version: \"x.x.x\"\n },\n {\n name: \"test_name2\"\n version: \"x.x.x\"\n },\n ]\n}, ... ]\ndb.runCommand({\n update: 'apps',\n updates: [\n {\n q: { \"versions.name\": { $exists: true } },\n u: [\n {\n $set: {\n versions: {\n $map: {\n input: \"$versions\",\n in: {\n $mergeObjects: [\n \"$$this\",\n { \"title\": \"$$this.name\" }\n ]\n }\n }\n }\n }\n },\n { $unset: \"versions.name\" }\n ],\n multi: true\n }\n ]\n})\n",
"text": "I want to change the key of a field from name to title with database commands and NOT the shell. Below is a minified version of my collection for reference. I literally only want to change every key called name into title.This query basically does exactly what I need, but it, unfortunately, isn’t supported by the newest CosmosDB MongoDB-API:You can find what is supported here: Azure Cosmos DB für MongoDB (Version 3.6): unterstützte Funktionen und Syntax | Microsoft Learn. How could this be achieved with these limitations in mind?You can also answer this question on Stack Overflow: How to rename a field inside an array with database commands of Azure Cosmos DB's API for MongoDB (3.6 version)? - Stack Overflow",
"username": "Leon"
},
{
"code": "",
"text": "Hi @leaon,As a MongoDB employee I do not know how to workaround things in Cosmos DB But I think its another proof that working with a real mongodb instance solves lots of headaches…Is there a reason you decided to use Cosmos DB and not Atlas azure for example?Best regards,\nPavel",
"username": "Pavel_Duchovny"
}
] | How to rename a field inside an array with database commands of Azure Cosmos DB's API for MongoDB (3.6 version)? | 2021-03-03T10:10:49.349Z | How to rename a field inside an array with database commands of Azure Cosmos DB’s API for MongoDB (3.6 version)? | 2,378 |
null | [
"aggregation",
"crud"
] | [
{
"code": "",
"text": "Can we do something like findOneAndUpdate set salary = 100 where empId nin (select empId where {some condition}) using aggregation pipeline?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "",
"text": "Hi @Abhishek_Kumar_Singh,Just to make it clear you want to update collection A based on a list of ids from collection B ?Why not to do it in 2 statements?If this is the case I am not sure why you are using find one as it will fetch only one doc.Anyway , I think only $merge aggregation can work and not update for a single statement…Thanks\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Hello Pavel!I want to update collection A based a list of id in collection A itself.Collection schema;\nClassId, lastAllotedTime, studentIdthere are 40 seats in each class.So initially our collection will look something like:ClassId, lastAllotedTime, studentId\nA null null\nA null null\n.\n.\n.\n.\nA null null\nB null null\nB null null\n.\n.\n.\n.\nB null null\nC null null\nC null null\n.\n.\n.A class has many seats. I want to assign a seat in class to a student if no seat has been assigned from this class in last 10 minutes.So my query should be \"if no slot assigned from this class in last 10 minutes, assign one slot to student.\nNext student should not be assigned a slot in same class if he comes the next second.db.getCollection(‘classslots’).findOneAndUpdate(\n{lastAllotedTime: {$lt: new Date((new Date())-10001060)}, “studentId” : null},\n{ $set : { “studentid” : “foobar”, “lastAllotedTime” : new Date()}})I want to add one more condition here that classId nin (classIds of classes which were allotted a slot in last 10 minutes)I want to exploit atomicity of search and update of findOneAndUpdate, If i do use two statements i’ll have to use lock, which I want to avoid at all costs.Thank you so much for your prompt reply. Really appreciate it.",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "{\n_id : ....,\nclassId : \"A\",\nseatsAssigned : [\n { studentId : \"xxxx\"},\n { studentId : \"bbb\"}\n...\n],\ntotalAssigned : 10,\nfreeToAssgin: 30,\nlastAssigned : ISODate(\"2021-03-02T10:00\")\n}\ndb.classes.findOneAndUpdate({lastAssigned : {$lt : {NOW - 10min}}, freeToAssgin : {$gt : 0}}, \n{ $addToSet : { seatsAssigned : {studentId : \"zzz\" } }, \n$inc : { freeToAssgin: -1, freeToAssgin: 1 }, \n$set :{ lastAssigned : new Date() }})\n",
"text": "Hi @Abhishek_Kumar_Singh,Why not to hold the assignment of a specifc class in an array of 40 slots.Now you can index the fields and do an update like:Using embedded related documents help with atomicity in MongoDB and is the correct design for document models.Best regards,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "Thank you so much Pavel. It was really helpful ",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "db.classes.findOneAndUpdate({lastAssigned : {$lt : {NOW - 10min}}, freeToAssgin : {$gt : 0}}, \n{ $addToSet : { seatsAssigned : {studentId : \"zzz\" } }, \n$inc : { freeToAssgin: -1, freeToAssgin: 1 }, \n$set :{ lastAssigned : new Date() }})\ndb.classes.findOneAndUpdate({lastAssigned : {$lt : {NOW - 10min}}, freeToAssgin : {$gt : 0}}, \n{ $addToSet : { seatsAssigned : {studentId : \"zzz\" } }, \n$inc : { freeToAssgin: -1, freeToAssgin: 1 }, \n$set :{ lastAssigned : new Date() },\n$set: {\n \"priority\": {\n \"$cond\": {\n if: {\n $gte: [\"$seatsAssigned\", 3]\n },\n then: 150,\n else: 1,\n }\n }\n }\n}) \nFailed to execute script.\nError: findAndModifyFailed failed: {\n\"ok\" : 0,\n\"errmsg\" : \"The dollar ($) prefixed field '$cond' in 'priority.$cond' is not valid for storage.\",\n\"code\" : 52,\n\"codeName\" : \"DollarPrefixedFieldName\"\n}\n",
"text": "@Pavel_Duchovny, just one more follow up please, I am trying to update priority of class if more than 3 seats has been assigned.I am running mongo 4.4.4 version but i keep running intoDetails:_getErrorWithCode@src/mongo/shell/utils.js:25:13\nDBCollection.prototype.findAndModify@src/mongo/shell/collection.js:736:1\nDBCollection.prototype.findOneAndUpdate@src/mongo/shell/crud_api.js:857:12\n@(shell):1:1Is this the right way to achieve conditional updates?",
"username": "Abhishek_Kumar_Singh"
},
{
"code": "$add$setUnion[]db.classes.findOneAndUpdate({lastAssigned : {$lt : new Date((new Date())-10001060)}, freeToAssgin : {$gt : 0}},\n[{$set: {\n seatsAssigned : { $setUnion : [\"$seatsAssigned\", [{studentId : \"zzz\"}]]},\n totalAssigned : { $add : [\"$totalAssigned\", 1] },\n freeToAssgin : { $add : [\"$totalAssigned\", -1] },\n lastAssigned : new Date()\n}}, {$set: {\n priority: {\n\"$cond\": {\nif: {\n$gte: [\"$totalAssigned\", 3]\n},\nthen: 150,\nelse: 1,\n}\n}\n}}]);\n",
"text": "Hi @Abhishek_Kumar_Singh,So you can use aggregation $cond in aggregation pipeline update syntax.It changes the operators as you can’t use update operators in a pipeline and need to use $add and $setUnion when changing fields instead of $addToSet and $inc, please note that the second pat also start with array pipeline []:Thanks,\nPavel",
"username": "Pavel_Duchovny"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | findOneAndUpdate aggregation pipeline question | 2021-03-02T04:32:52.163Z | findOneAndUpdate aggregation pipeline question | 4,696 |
null | [] | [
{
"code": "",
"text": "Hi,I have a big js file that needs to be put on mongo server newer mongo version has removed the support by removal of db.eval method now how can i do it",
"username": "anubhav_tarar"
},
{
"code": "",
"text": "Will the load() method work for what you need?",
"username": "Jai_Hirsch"
},
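A quick sketch of the shell-side options mentioned here; myScript.js is a placeholder, and in both cases the script runs in the mongo shell (client side), not inside mongod the way the removed db.eval() did:

```
# execute a script file against a database from the command line
mongo mongodb://localhost:27017/mydb myScript.js
```

```js
// or, from an interactive mongo shell session
load("myScript.js")
```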
{
"code": "",
"text": "I think that db.eval() is a function of mongo, the shell. You might be using mongosh command rather than mongo command. The new mongosh is still missing some functionalities.How were you using the .js script before? Try to connect with the mongo command.",
"username": "steevej"
},
{
"code": "",
"text": "But how to do it using driver load is shell command",
"username": "anubhav_tarar"
},
{
"code": "",
"text": "I have written for hyper log implementation in mongo db I am using this script in my map reduce function I was doing a poc from the shell where I loaded js file using mongo load command and I succeded in the implementation, now the code of map reduce will be executed though mongo node driver which dont have that load command option",
"username": "anubhav_tarar"
}
] | How to execute custom javascript on mongo server | 2021-03-02T13:41:08.575Z | How to execute custom javascript on mongo server | 2,889 |
[
"swift",
"atlas-device-sync"
] | [
{
"code": "",
"text": "I am trying sync module of realm and i have setup dev mode of sync but unfunately data has not sync.\nUsingXcode 12.2RealmSwift 10.6.0\nScreenshot 2021-02-24 at 2.14.54 PM1044×523 50.1 KB\n ",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "Hi Muhammad, welcome to the community!Are you seeing any logs in you backend Realm app?",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "i am getting this error in Realm log activity",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "Ending session with error: failed to validate upload changesets: expected primary key name for AddTable instruction for top-level table to equal _id but received “” (ProtocolErrorCode=212)Logs:[ “Session was active for: 0s” ]Partition:My ProjectSession Metrics:{}Remote IP Address:101.53.254.85SDK:Realm Cocoa v10.0.0-beta.5Platform Version:Version 14.2 (Build 18B79)",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "master/ios/02-full-syncToDo demo app using Realm and Realm Object Server to synchronize tasks. ",
"username": "Muhammad_Awais"
},
{
"code": "Realm Cocoa v10.0.0-beta.5",
"text": "Are you sure that you’re using realm-cocoa 10.6? I see this in your log: Realm Cocoa v10.0.0-beta.5",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "yes i am installing pod realmSwift and i have updated it too",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "You should see something like this in the logs if you’re using 10.6 If you’ve been making schema changes then you may have confused sync. Try deleting the app from the simulator and removing sync from your backend Realm app and then enable it again. When you install the app again, both sides of the sync will start from a clean point.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Consider switching from Cocoa Pods to SPM – I find it much easier to work with.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "can you send me latest SPM for realm",
"username": "Muhammad_Awais"
},
{
"code": "https://github.com/realm/realm-cocoa.git10.610.7",
"text": "In Xcode, choose the option to add a new dependency… image532×658 113 KB… set the repo to https://github.com/realm/realm-cocoa.git and the version to 10.6 or 10.7. You can then remove the Pod",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "Thanks It has been resolved after remaking the models and schema.",
"username": "Muhammad_Awais"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Error code 212. message size=22,try again = 0 | 2021-03-01T19:58:54.784Z | Error code 212. message size=22,try again = 0 | 2,388 |
|
null | [
"java"
] | [
{
"code": "private PieDataset getCreditsDataset() {\n DefaultPieDataset dataset = new DefaultPieDataset();\n String key = \"Current Credits\";\n for (int i = 0; i < 131; i++) {\n int value = i;\n double total = 0;\n try {\n total = sda.getIntTotal(key, value);\n } catch(NullPointerException npe) {\n //do nothing\n }\n if (total != 0) {\n dataset.setValue(String.valueOf(value), total); \n } else {\n //do not add\n }\n }\n return dataset;\n }\npublic double getIntTotal(String key, int value) {\n String field = \"$\" + key;\n Document doc = collection.aggregate(\n Arrays.asList(\n Aggregates.match(Filters.eq(key, value)),\n Aggregates.group(field, Accumulators.sum(\"count\", 1))\n )\n ).first();\n return (double) doc.getInteger(\"count\", 0);\n }\n",
"text": "Fairly new to MongoDB and working on a simple project. Currently trying to make pie charts using counts of certain fields:value. What I have currently works but for some fields that have multiple possible values creating the chart take a very long time due to the iteration in Java. I’m thinking that MongoDB can do more of this work on its end. This what I’m doing currently:Create the dataset for JFreeChart piechart.Get count of value for keyIterating like this takes quite a bit of time and I think MongoDB should be able to do more the work and reduce time but I don’t know how to structure the query. Is it possible for MongoDB to perform this count operation and return a document with the all of the results? Any help is appreciated.",
"username": "James_C"
},
{
"code": "",
"text": "HelloSeems that you run 1 aggregate/per value , so 131 queries.\nYou can run 1 query only if you remove the match,and you will get the count of 131 groups.\nlike group1 count1 , group2 count2 …\nMaybe i didn’t understand something,but i don’t see the reason for 1 query per value.\nIn case you only want the groups with value 0 to 131,you can filter before group.",
"username": "Takis"
},
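In mongo shell terms, the single query suggested above is one $group over the whole collection - note the $ prefix when referencing the grouped field. "students" and "credit" are simply the example names used in this thread:

```js
db.students.aggregate([
  { $group: { _id: "$credit", count: { $sum: 1 } } }  // one result document per distinct credit value
])
```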
{
"code": "",
"text": "Hello Takis,Thanks for the reply. Thats exactly right, I shouldn’t need and don’t want to run 1 query per value, but thats the extent of my ability with MongoDB so far. Hopefully I can explain a bit better.This is a student database and I’m trying to build a pie chart displaying the distribution of credits. Students can have any credit value between 0 and 131. So I need to build a dataset that has a count for each credit value for example there are 1000 students that have 65 credits, 100 student with 20 credits…etc.I’m trying not to do this in the Java app because it takes time to do 1 query per value, but I’m not familiar enough with setting up the proper query/aggregate/filtering to achieve this.What did you mean by filter before the group?",
"username": "James_C"
},
{
"code": "{:student_name \"name0\", :credit 2}\n {:student_name \"name1\", :credit 2}\n {:student_name \"name2\", :credit 1}\n {:student_name \"name3\", :credit 0}\n {:student_name \"name4\", :credit 2}\n {:student_name \"name5\", :credit 0}\n {:student_name \"name6\", :credit 1}\n {:student_name \"name7\", :credit 1}\n {:student_name \"name8\", :credit 2}\n {:student_name \"name9\", :credit 0}\n {:student_name \"name10\", :credit 2}\n {:student_name \"name11\", :credit 0}\n {:student_name \"name12\", :credit 1}\n {:student_name \"name13\", :credit 0}\n {:student_name \"name14\", :credit 1}\n {:student_name \"name15\", :credit 1}\n {:student_name \"name16\", :credit 2}\n {:student_name \"name17\", :credit 1}\n {:student_name \"name18\", :credit 2}\n {:student_name \"name19\", :credit 0}\n{total_students 7, credit 1}\n{total_students 7, credit 2}\n{total_students 6, credit 0}\ncollection.aggregate(\n Arrays.asList(\n Aggregates.group(\"credit\", Accumulators.sum(\"count\", 1)));\n",
"text": "yes 1 group by can do that for all groups.\ngroup by seperates the collections in many groups and in each group separatly apply the accumulator (here find the total members of the group)Example data (20 students with credits 0 to 2 ) (you can have any students and credits 0 131)1 group by credit ,and i get 3 documentsYou only need this,it will return a cursor with the 131 documents.\neach document = 1 credit value , with the “count” the number of students\nTry it if you can.",
"username": "Takis"
},
{
"code": "public PieDataset getIntDataset(String key) {\n DefaultPieDataset dataset = new DefaultPieDataset();\n String field = \"$\" + key;\n\n AggregateIterable<Document> doc = collection.aggregate(\n Arrays.asList(\n Aggregates.group(field, Accumulators.sum(\"count\", 1))\n ));\n for (Document d : doc) {\n JSONObject jo = new JSONObject(d.toJson());\n dataset.setValue(String.valueOf(jo.getInt(\"_id\")), jo.getDouble(\"count\"));\n }\n return dataset;\n }\n",
"text": "Hello Takis!I think I understand a little bit better now. Thanks so much for taking the time to reply and help me out, I really appreciate it.Updated method:Works great. Went from 18 secs for a chart to 0.2 seconds (20k students)…huge improvement. I knew it shouldn’t take that long and MongoDB would be able to do it much faster. Thanks again!",
"username": "James_C"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to make MongoDB do more of the work? | 2021-02-27T17:38:23.263Z | How to make MongoDB do more of the work? | 2,032 |
null | [] | [
{
"code": "db.mycollection.validate({ full: true})db.mycollection.validate({ full: false})db.mycollection.validate({ full: true})",
"text": "I have a mongo setup with 2 Billion entries in a collection with size of around 500GB.There was a hard crash because of memory issue. And when mongod restarted after a hard crash (and a successful recovery) the values returned by ‘db.mycollection.count()’ is different than that of db.mycollection.find().hint(‘ix_retention_date’).count() (full index hint). I believe it is because of already known issue https://jira.mongodb.org/browse/SERVER-19472.now I want to fix this, and the ticket suggests that we use\ndb.mycollection.validate({ full: true}) to fix this. But when I run that command it’s very slow and looks like it will take more than few day for this command to complete.Even I tried db.mycollection.validate({ full: false}) and it is still running for 12 hours now and slow but quite fast compared to running with true flag. But I am not sure if this will fix the count issue .So what is the best way to fix the count thing. Is there a easy way out, surely it should have –\nI don’t believe every time mongo crashes and we will need to run db.mycollection.validate({ full: true}) that takes ages to fix count ?Mongo version is 4.4",
"username": "Sameer_Kattel"
},
{
"code": "db.collection.countDocuments()countdb.collection.count()db.collection.countDocuments()db.collection.count()db.collection.count()",
"text": "@Sameer_Kattel,Use the db.collection.countDocuments() instead of count. From the documentation:Unlike db.collection.count() , db.collection.countDocuments() does not use the metadata to return the count. Instead, it performs an aggregation of the document to return an accurate count, even after an unclean shutdown.From the db.collection.count():IMPORTANT",
"username": "Prasad_Saya"
},
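A quick shell comparison of the options discussed in this thread (a hedged sketch; method names as in the MongoDB 4.4 shell):

```javascript
db.mycollection.estimatedDocumentCount()   // metadata-based, fast, can drift after an unclean shutdown
db.mycollection.countDocuments({})         // aggregation-based, accurate, but scans and is slower
db.mycollection.validate({ full: false })  // lighter validation pass than { full: true }
```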
{
"code": "db.mycollection.validate({ full: true})db.mycollection.validate({ full: false})db.collection.countDocuments()",
"text": "@Prasad_Saya,So it is ok to let db.collection.count() to remain incorrect?\nAnd the only wat to fix it is running validate command ? Should run full validation db.mycollection.validate({ full: true}) or db.mycollection.validate({ full: false}) is good enough?If there is no other way, then wondering why there is no way to only update meta data count results with db.collection.countDocuments() count which would be much much faster?Thanks",
"username": "Sameer_Kattel"
},
{
"code": "db.collection.validate(){ full: <boolean> }countDocuments",
"text": "So it is ok to let db.collection.count() to remain incorrect?It is up to you to determine that. Why did you need to count the documents (please ask the question to yourself)?The db.collection.validate() method’s { full: <boolean> } is optional.The default value is false. It is a flag that determines whether the command performs a slower but more thorough check or a faster but less thorough check. If false, omits some checks for a faster but less thorough check. Also, note:If you want an accurate (and exact) count of documents in the collection, use the countDocuments method.",
"username": "Prasad_Saya"
},
{
"code": "countDocuments",
"text": "@Prasad_Sayahttps://jira.mongodb.org/browse/SERVER-19472 suggest that it should be run with true flag. So I was asking if both version would fix metadata, as I was not sure.db.collection.validate() is the only way to fix the metadata count? It is painfully slow either running with full option set to true or false. It took around 17 hours for db.collection.validate() on system with 16vcpus and 64 GB ram\nso wondering if there is more quicker way to fix that.Also countDocuments is pathetically slow – it has been running for more than 12 hours and it still is not completed.Thanks",
"username": "Sameer_Kattel"
}
] | Fix db.mycollection.count() after mongo crash recovery | 2021-03-02T04:01:56.373Z | Fix db.mycollection.count() after mongo crash recovery | 2,617 |
null | [] | [
{
"code": "**[**\n**\t{name: 'A', city: 'Mumbai',status:'Yes'},**\n**\t{name: 'B', city: 'Tokyo',status:'No'},**\n**\t{name: 'C', city: 'New York',status:'Yes'},**\n**\t{name: 'D', city: 'Mumbai',status:'Yes'},**\n**\t{name: 'E', city: 'Mumbai',status:'No'},**\n**\t{name: 'F', city: 'Mumbai',status:'yes'},**\n**]**\n[\n\t{city: 'Mumbai', statusYes: 3, statusNo: 1},\n\t{city: 'Tokyo', statusYes: 0, statusNo: 1},\n\t{city: 'New York', statusYes: 1, statusNo: 0}\n]\n$group: {\n\t_id: {city: \"$city\"},\n\tcity: { $last: \"$city\"},\n \tstatusYes: {\n\t\t$sum: {\n\t\t\t$cond: {\n\t\t\t\tif: {$eq:[\"$status\",\"Yes\"]},\n\t\t\t\tthen: 1,\n\t\t\t\telse: 0\n\t\t\t}\n\t\t}\n\t},\n\tstatusNo: {\n\t\t$sum: {\n\t\t\t$cond: {\n\t\t\t\tif: {$eq:[\"$status\",\"No\"]},\n\t\t\t\tthen: 1,\n\t\t\t\telse: 0\n\t\t\t}\n\t\t}\n\t},\n}\n[\n\t{city: 'Mumbai', statusYes: 2, statusNo: 1},\n\t{city: 'Tokyo', statusYes: 0, statusNo: 1},\n\t{city: 'New York', statusYes: 1, statusNo: 0}\n]\n{ $eq: [\"$status\", /yes/i] }{ status: {$regex: 'yes', $options: 'i'}}",
"text": "Hi Team,Want to know is there facility in $cond can use the $regex to match if condition rather using $eq operation.For Example collection is as below,Need to find out the city wise status count for Yes and No\nExpected output:Tried using aggregation of $group operation by,And then it results,Since count getting wrong the I’ve tried using $regex but it not able to find correct result.\nUsed:\nif: { $eq: [\"$status\", /yes/i] } shows 0 count.And if I used $regex like,\nif: { status: {$regex: 'yes', $options: 'i'}} then shows an expression must have exactly one field.Can anyone please suggest what is missing here.Regards,\nJitendra.",
"username": "Jitendra_Patwa"
},
{
"code": "db.collection.find({city: \"Mumbai\", status: \"Yes\"})status\"yes\"",
"text": "Hello @Jitendra_Patwa,It is due to your data inconsistency. Try this query and see what is the result:db.collection.find({city: \"Mumbai\", status: \"Yes\"})The query is returning only two documents. The third one has a status of \"yes\".",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Hi @Prasad_Saya,Yes you are right, there is data inconsistency in document, because of that I need to evaluate using $regex.\nSince, when I used below,$sum: {\n$cond: {\nif: {$eq:[\"$status\",“Yes”]},\nthen: 1,\nelse: 0\n}\n}\nThen its appropriately fetch status of value “Yes” and count is 2, but I need including “yes” to be rendered using regex pattern and then it results count 3, but yet not succeeded.Thanks,\nJitendra.",
"username": "Jitendra_Patwa"
},
{
"code": "db.collection.aggregate([\n{ \n $group: {\n _id: \"$city\",\n statusYes: {\n $sum: {\n $cond: {\n if: { $eq:[ { $strcasecmp: [ \"$status\", \"yes\" ] }, 0 ] },\n then: 1,\n else: 0\n }\n }\n },\n statusNo: {\n $sum: {\n $cond: {\n if: { $eq:[ { $strcasecmp: [ \"$status\", \"no\" ] }, 0 ] },\n then: 1,\n else: 0\n }\n }\n }\n } \n}\n])\n{ \"_id\" : \"Tokyo\", \"statusYes\" : 0, \"statusNo\" : 1 }\n{ \"_id\" : \"Mumbai\", \"statusYes\" : 3, \"statusNo\" : 1 } // <== statusYes: 3, for Mumbai\n{ \"_id\" : \"New York\", \"statusYes\" : 1, \"statusNo\" : 0 }",
"text": "Hello @Jitendra_Patwa,You don’t need to use regex for this as there is an aggregation operator to compare in a case-insensitive manner. The following will work fine for your use case:The output (as you had expected):",
"username": "Prasad_Saya"
},
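If a regex really is needed (for example to also catch values with stray whitespace), MongoDB 4.2+ offers $regexMatch, which can sit inside $cond; a hedged sketch using the same field names as above:

```javascript
statusYes: {
  $sum: {
    $cond: [ { $regexMatch: { input: "$status", regex: /^yes$/i } }, 1, 0 ]
  }
}
```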
{
"code": "",
"text": "Hi Prasad,Thanks it works using $strcasecmp.",
"username": "Jitendra_Patwa"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Using $regex in $cond to sum values based on key | 2021-03-02T07:04:25.123Z | Using $regex in $cond to sum values based on key | 5,255 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.2.2 MongoDB Java & JVM Drivers release is a patch to the 4.2.1 release.The documentation hub includes extensive documentation of the 4.2 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.2/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.2.2 Released | 2021-03-02T23:56:21.450Z | MongoDB Java Driver 4.2.2 Released | 3,940 |
null | [] | [
{
"code": "",
"text": "I have a question regarding public link sharing:Is it possible to set the refresh interval via the URL?\nIt is set to 1h per default. I want to set a 1-min refresh interval automatically when the public dashboard URL is opened. Is it possible to achieve the aforementioned somehow?",
"username": "MartinLoeper"
},
{
"code": "",
"text": "Hi @MartinLoeper -No you can’t do it from the URL, although this is a good feature suggestion. You might want to log it on http://feedback.mongodb.com for others to vote on. You can change the refresh interval from the UI once the dashboard has loaded though.Tom",
"username": "tomhollander"
},
{
"code": "",
"text": "Just saw that somebody already posted this as a suggestion to feedback engine. [1]I will provide a tampermonkey workaround in the meantime.Thanks for the hint to feedback engine - should have looked there first![1] Ability to set refresh rate via url param – MongoDB Feedback Engine",
"username": "MartinLoeper"
}
] | Dashboard Public Link Sharing - Change Default Refresh Interval | 2021-03-02T21:55:51.468Z | Dashboard Public Link Sharing - Change Default Refresh Interval | 1,884 |
[
"atlas-functions"
] | [
{
"code": "const hunterQuery = { \"hunters\": { \"_id\": idCode }};\nconst hunterUpdate = {\n \"$set\": {\n \"hunter.name\": result.name,\n \"hunter.email\": result.email\n }\n };\nconst huntOptions = { returnNewDocument: true };\nhuntsCollection.findOneAndUpdate(hunterQuery, hunterUpdate, huntOptions);\nidCodehunterQueryhunterUpdate",
"text": "Again I’m running into an issue where I just don’t quite understand how to do very Realm-y things in the Realm functions. Hoping for some better documentation soon.I have a trigger that fires when a user updates their name, which is in their User object in their “user=User_id” partition.So the change event is the user object with the updated name.@Andrew_Morgan helped me in another forum post see how to add a Hunter object to my Hunt realm object here:In my function I am able to generate the _id of the embedded object which is lives in Hunt.hunters.I have attempted the following, but can’t seem to get the syntax right:I have console logged and verified that the idCode is correct for the embedded object, but I’m assuming that I either need to adjust the hunterQuery or the hunterUpdate in order to write to the embedded object.This is what the embedded object looks like in Atlas:\n\nScreen Shot 2021-03-01 at 3.59.02 PM756×542 109 KB\nAny pointers or even where to find any info about Realm Functions and Embedded Objects would be awesome. Thanks!–Kurt",
"username": "Kurt_Libby1"
},
{
"code": "hunters$pushexports = function() {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const huntsCollection = db.collection(\"Hunts\");\n \n const idCode = \"123\";\n const result = {\n name: \"Bily\",\n email: \"[email protected]\",\n otherStuff: \"Not interested in this\"\n }\n\n const hunterQuery = { \"_id\": idCode };\n const hunterUpdate = {\n \"$push\": {\n hunters: {\n name: result.name,\n email: result.email\n }\n }\n };\n const huntOptions = { returnNewDocument: true };\n \n return huntsCollection.findOneAndUpdate(hunterQuery, hunterUpdate, huntOptions)\n .then(result => {\n console.log(`Updated Hunts doc for ${result._id}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n};\n",
"text": "Hi Kurt,as hunters is an array, you should use $push to add the new hunter. This code works for me:",
"username": "Andrew_Morgan"
},
{
"code": "$post$sethuntIdhunterIdhunterQuery_id == hunterIdhuntsCollection.findOneAndUpdatehunt._idhunt.hunters._idconst hunterQuery = { \"hunters._id\": idCode };\nresult.nameresult.email$setotherStuff$push",
"text": "Thanks @Andrew_Morgan.I’ve attempted to use the $post instead of $set and while that seems more appropriate because they are in an array, it’s still not working.Just to be clear, I have a huntId and a hunterId.The hunterQuery is looking for a member of hunt.hunters with the _id == hunterIdso in the huntsCollection.findOneAndUpdate, is it looking at the hunt._id or the hunt.hunters._id?I was able to find the correct embedded object to update by usingthe problem is that it pushed a new object into the array rather than finding the correct properties and updating those with the new result.name and result.email.I only want to update the name and email properties in the existing embedded object in the array. My understanding is that $set will only update the included properties and leave the rest, like your otherStuff property, intact.I assume that is what this is trying to do, but for me, $push is creating a new Hunter object rather than finding and updating the existing embedded object. I did not set upsert to false, though I assume that not setting it at all would default to false.",
"username": "Kurt_Libby1"
},
{
"code": "$exports = function() {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const huntsCollection = db.collection(\"Hunts\");\n \n const idCode = \"321\";\n const result = {\n name: \"Billy\",\n email: \"[email protected]\",\n otherStuff: \"Not interested in this\"\n }\n\n const hunterQuery = { hunters: { \"_id\": idCode }};\n const hunterUpdate = {\n $set: {\n \"hunters.$.name\": result.name,\n \"hunters.$.email\": result.email\n }\n };\n const huntOptions = { returnNewDocument: true };\n \n return huntsCollection.findOneAndUpdate(hunterQuery, hunterUpdate, huntOptions)\n .then(result => {\n console.log(`Updated Hunts doc for ${result._id}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n};\n",
"text": "OK, I think I now understand what you’re trying to do – update an existing sub-document within an array.If I’m right then you need to use the $ positional operator to index into that array (it should match with the first element that matched your query):",
"username": "Andrew_Morgan"
},
{
"code": "updateManayfindOneAndUpdateexports = function() {\n const db = context.services.get(\"mongodb-atlas\").db(\"RChat\");\n const huntsCollection = db.collection(\"Hunts\");\n \n const idCode = \"321\";\n const result = {\n name: \"Billy\",\n email: \"[email protected]\",\n otherStuff: \"Not interested in this\"\n }\n\n const hunterQuery = { hunters: { \"_id\": idCode }};\n const hunterUpdate = {\n $set: {\n \"hunters.$.name\": result.name,\n \"hunters.$.email\": result.email\n }\n };\n const huntOptions = { returnNewDocument: true };\n \n return huntsCollection.updateMany(hunterQuery, hunterUpdate, huntOptions)\n .then(result => {\n console.log(`Updated Hunts docs`);\n // console.log(`Updated Hunts doc for ${result._id}`);\n }, error => {\n console.log(`Failed to insert User document: ${error}`);\n });\n};\n",
"text": "Thinking some more, I wonder whether your app might actually want to update all Hunts that this hunter belongs too. In which case, you’d want to use updateManay rather than findOneAndUpdate:",
"username": "Andrew_Morgan"
},
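One hedged aside: the positional "$" only updates the first matching array element in each document. If a single Hunt could ever contain more than one entry for the same hunter, arrayFilters updates them all:

```javascript
// Same update, but every matching hunters[] element is touched, not just the first.
huntsCollection.updateMany(
  { "hunters._id": idCode },
  { $set: { "hunters.$[h].name": result.name, "hunters.$[h].email": result.email } },
  { arrayFilters: [ { "h._id": idCode } ] }
)
```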
{
"code": ".updateMany()",
"text": "Thanks @Andrew_Morgan! That is exactly what I needed.I hadn’t seen the $ positional operator for use with embedded objects. Maybe it’s because I’m searching with Realm lingo in MongoDB documentation and it’s still a bit wonky to translate between the two.I definitely thought I had to get more granular with my server functions because I’m used to Realms/Partitions in the mobile SDKs, so recognizing the ability to query across partitions for all objects is extremely powerful and probably not emphasized enough.I had seen .updateMany() and attempted it’s use a bit, but seeing it here in this example and then how much it simplified my code is . Truly amazing.Is there a guide or documentation for the equivalent of what would be dealing with Realm embedded objects / embedded objects in Realm lists when writing Realm functions?I have been searching the documentation and guides and hadn’t seen this and would love to see more of what might be possible.Thanks again. -Kurt",
"username": "Kurt_Libby1"
},
{
"code": "",
"text": "Hi @Kurt_Libby1, glad this works!Most of the syntax you use to access the database in Realm Functions is actually the same as the MongoDB Node.js driver. In Realm functions, you’re working with (MongoDB) documents and sub-documents rather than Realm Objects and Embedded Objects. That means the best tip is to drop “realm function” from your Google search and add “mongodb javascript” instead.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | .findOneAndUpdate() Realm Embedded Object with Function | 2021-03-01T22:01:02.223Z | .findOneAndUpdate() Realm Embedded Object with Function | 4,148 |
|
null | [
"python",
"transactions"
] | [
{
"code": "from pymongo import MongoClient,WriteConcern\nclient = MongoClient(\"url\")\nclient.t.t.insert_one({}) # Create collection\nwith client.start_session() as s, s.start_transaction():\n client.t.t.insert_one({}, session=s)\n",
"text": "We recently upgraded to Mongo 4.4 and are running into issues with transactions. On 4.0 and 4.2, this following PyMongo code works:As of 4.4, it always fails with the titular error. It’s not a PyMongo issue, as even creating a transaction via the shell causes the same error to happen. We’re baffled as to what configuration issue could be causing this. All of our servers are on 4.4.4 (one primary and two secondaries in a replica set), and the compatibility value is 4.4. Any advice?",
"username": "Ben_Normoyle"
},
{
"code": ">>> client = MongoClient(w='majority')\n>>> client.t.t.insert_one({}) # Create collection\n<pymongo.results.InsertOneResult object at 0x7fe853a3ee00>\n>>> with client.start_session() as s, s.start_transaction():\n... client.t.t.insert_one({}, session=s)\n...\n<pymongo.results.InsertOneResult object at 0x7fe85668e6c0>\n>>> import sys\n>>> sys.version\n'3.9.0 (v3.9.0:9cf6752276, Oct 5 2020, 11:29:23) \\n[Clang 6.0 (clang-600.0.57)]'\n>>> import pymongo\n>>> pymongo.version\n'3.11.3'\n>>> client.server_info()\n{'version': '4.4.4', 'gitVersion': '8db30a63db1a9d84bdcad0c83369623f708e0397', 'modules': ['enterprise'], 'allocator': 'system', 'javascriptEngine': 'mozjs', 'sysInfo': 'deprecated', 'versionArray': [4, 4, 4, 0], 'openssl': {'running': 'Apple Secure Transport'}, ..., 'bits': 64, 'debug': False, 'maxBsonObjectSize': 16777216, 'storageEngines': ['biggie', 'devnull', 'ephemeralForTest', 'inMemory', 'queryable_wt', 'wiredTiger'], 'ok': 1.0, '$clusterTime': {'clusterTime': Timestamp(1614705437, 1), 'signature': {'hash': b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', 'keyId': 0}}, 'operationTime': Timestamp(1614705437, 1)}\n>>> with client.start_session() as s, s.start_transaction():\n... client.t.command('insert', 't', documents=[{}], writeConcern={'w':1}, session=s)\n...\nTraceback (most recent call last):\n File \"<stdin>\", line 2, in <module>\n File \"/Users/shane/git/mongo-python-driver/pymongo/database.py\", line 738, in command\n return self._command(sock_info, command, slave_ok, value,\n File \"/Users/shane/git/mongo-python-driver/pymongo/database.py\", line 626, in _command\n return sock_info.command(\n File \"/Users/shane/git/mongo-python-driver/pymongo/pool.py\", line 683, in command\n return command(self, dbname, spec, slave_ok,\n File \"/Users/shane/git/mongo-python-driver/pymongo/network.py\", line 159, in command\n helpers._check_command_response(\n File \"/Users/shane/git/mongo-python-driver/pymongo/helpers.py\", line 164, in _check_command_response\n raise OperationFailure(errmsg, code, response, max_wire_version)\npymongo.errors.OperationFailure: writeConcern is not allowed within a multi-statement transaction, full error: {'operationTime': Timestamp(1614706228, 1), 'ok': 0.0, 'errmsg': 'writeConcern is not allowed within a multi-statement transaction', 'code': 72, 'codeName': 'InvalidOptions', '$clusterTime': {'clusterTime': Timestamp(1614706228, 1), 'signature': {'hash': b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', 'keyId': 0}}}\n",
"text": "I cannot reproduce this error with the info you’ve provided. The code example works as expected on 4.4.4 for me:Could you please provide the full python, pymongo, and server version info? Like this:Please also include the full exception traceback. For example, I can force the server to return this error (mis)using the low-level Database.command API like this:",
"username": "Shane"
},
{
"code": "\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : 60000,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"j\" : false,\n\t\t\t\"wtimeout\" : 5000\n\t\t}\n\"settings\" : {\n\t\t\"chainingAllowed\" : true,\n\t\t\"heartbeatIntervalMillis\" : 2000,\n\t\t\"heartbeatTimeoutSecs\" : 10,\n\t\t\"electionTimeoutMillis\" : 10000,\n\t\t\"catchUpTimeoutMillis\" : -1,\n\t\t\"catchUpTakeoverDelayMillis\" : 30000,\n\t\t\"getLastErrorModes\" : {\n\t\t},\n\t\t\"getLastErrorDefaults\" : {\n\t\t\t\"w\" : 1,\n\t\t\t\"wtimeout\" : 0\n\t\t},\n\t\t\"replicaSetId\" : ObjectId(\"5773018ecc19f9e600e06b22\")\n\t}\n",
"text": "The issue apparently was that our replica set had this configuration:When updated to:note getLastErrorDefaults and catchUpTimeoutMillisTransactions began working again. Why that caused it, I don’t know.",
"username": "Ben_Normoyle"
},
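For anyone hitting the same thing, a hedged sketch of how such a settings change is typically applied from the mongo shell (the non-default getLastErrorDefaults appears to be the trigger here):

```javascript
cfg = rs.conf()
cfg.settings.getLastErrorDefaults = { w: 1, wtimeout: 0 }  // back to the defaults
cfg.settings.catchUpTimeoutMillis = -1
rs.reconfig(cfg)
```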
{
"code": "",
"text": "Thanks for reporting this. I’ve filed this as a bug in the server: https://jira.mongodb.org/browse/SERVER-54896",
"username": "Shane"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | writeConcern is not allowed within a multi-statement transaction | 2021-03-02T04:02:16.854Z | writeConcern is not allowed within a multi-statement transaction | 4,176 |
null | [
"python"
] | [
{
"code": " _id:object(\"603678958a6eade21c0790b8\")\n id1:3758\n date2:2010-01-01T00:00:00.000+00:00\n time3:1900-01-01T00:05:00.000+00:00\n date4 :2009-12-31T00:00:00.000+00:00\n time5:1900-01-01T19:05:00.000+00:00\n id6 :2\n id7:-79.09\n id8:35.97\n id9:5.5\n id10:0\n id11:-99999\n id12 :0\n id13 :-9999\n c14:\"U\"\n id15:0\n id16:99\n id17:0\n id18:-99\n id19:-9999\n id20:33\n id21:0\n id22:-99\n id23:0\n _id:object(\"603678958a6eade21c0790b8\")\n id1:3758\n date2:2010-01-01\n time3:00:05:00\n date4 :2009-12-31\n time5:19:05:00\n id6 :2\n id7:-79.09\n id8:35.97\n id9:5.5\n id10:0\n id11:-99999\n id12 :0\n id13 :-9999\n c14:\"U\"\n id15:0\n id16:99\n id17:0\n id18:-99\n id19:-9999\n id20:33\n id21:0\n id22:-99\n id23:0\n df['date4'] = df['date4'].astype('datetime64[ns]') \n df['date2'] = df['date2'].astype('datetime64[ns]') \n\n \n df['time3'] = df['time3'].apply(lambda x:datetime.datetime.strptime(x[0]+x[1]+\":\"+x[2]+x[3], '%H:%M'))\n df['time5'] = df['time5'].apply( lambda x: datetime.datetime.strptime(x[0] + x[1] + \":\" + x[2] + x[3], '%H:%M'))\n",
"text": "When i import my data to mongodb i get this:The ideal would be something like this:The code i have used to convert column date2,date4,time3,time5 to date and time is this:I tried some other things like datetime.datetime but nothing seems to work\nDoes anyone know how can i fix that?",
"username": "harris"
},
{
"code": "df['date4'].astype('datetime64[ns]') date()date",
"text": "MongoDB stores data as BSON. The BSON spec only has support for UTC datetimes which are represented as milliseconds since the Unix epoch.When you do df['date4'].astype('datetime64[ns]') what you are basically doing is converting a date object into a datetime object with (0, 0) time offset. This is required, because again BSON only supports datetimes (and not dates). Consequently, when you read the data back you are also seeing the trailing 0s the represent the time offset in addition to the date.Depending on what you are trying to do, there are many ways to remedy the situation. Since the datetime type is more-granular than the date type you are not losing any information during the type cast. You can simply call the date() method on the datetime objects returned by the server to get back the date. See datetime — Basic date and time types — Python 3.11.2 documentation",
"username": "Prashant_Mital"
},
{
"code": " date2:2010-01-01T00:00:00.000+00:00\n time3:1900-01-01T00:05:00.000+00:00\n date4 :2009-12-31T00:00:00.000+00:00\n time5:1900-01-01T19:05:00.000+00:00\n",
"text": "From your previous posts I see your parsing text files (like CSV) to upload them into MongoDB. I suggest you combine the ‘date4’ and ‘time4’ fields into a single datetime.datetime mongodb field ‘datetime4’ (or whatever name you like). As Prashant mentions above, the datetime type contains both a date and a time.",
"username": "Shane"
}
] | How to fix the date and time fields Pymongo | 2021-02-24T19:23:14.653Z | How to fix the date and time fields Pymongo | 6,507 |
null | [
"node-js"
] | [
{
"code": "",
"text": "hi, I have a device that sends me data every 2 seconds. when the data comes to my node js app I save the data immediately. but I want to save the data every 5 minutes. I tried to use set timeout but it caused memory leaks and so on. can I set a delay to MongoDB schema.",
"username": "BERAT_DINCKAN"
},
{
"code": "",
"text": "Hi Berat, welcome to the community!What’s your reason for only wanting to save the data every 5 minutes? e.g., is it to reduce the number of calls to the database, or that you only want the data visible to the app to be updated every 5 minutes?If it’s the later, then one option would be for your Node app to add the changes to a buffer collection, and then use a scheduled Atlas trigger (or a cron job if not using Atlas) to apply those changes to the “live” collection every 5 minutes.",
"username": "Andrew_Morgan"
},
{
"code": "",
"text": "first of all thanks for answering. My goal is to show data on my dashboard live and save it. if I save data every 2 seconds, nearly more than a hundred devices probably make millions of data, and when it comes to query it is not easy to find data with a 2GB digital ocean server .at the same time I can not make these devices send me data every 5 minutes because I created an alarm system that checks conditions about data which come to my node js server and data has come to me every 2 seconds. I want to get data every 2 seconds and show them on the dashboard but when I want to query sensor data it is gonna a little bit hard for me. I am going to try out the changes to a buffer collection. best regards.",
"username": "BERAT_DINCKAN"
},
{
"code": "",
"text": "Sounds to me like you want to do something likeLearn about the Bucket Schema Design pattern in MongoDB. This pattern is used to group similar documents together for easier aggregation.",
"username": "steevej"
},
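A hedged sketch of that idea with hypothetical names (one document per device per 5-minute window, readings appended as they arrive):

```javascript
// windowStart = the incoming timestamp truncated to its 5-minute boundary
db.readings.updateOne(
  { deviceId: deviceId, bucketStart: windowStart },
  { $push: { samples: { ts: new Date(), value: value } }, $inc: { sampleCount: 1 } },
  { upsert: true } // the first reading of the window creates the bucket document
)
```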
{
"code": "",
"text": "sorry for answering late. I still searching for it. in the blog which you have sent, the blogger is talking about taking the data every minute. your answer helped me a lot but not solved my problem. thanks. Best regards",
"username": "BERAT_DINCKAN"
},
{
"code": "",
"text": "Hey guys I think I found a solution. when data comes, I update the current data of the device sensor and I will make a cronjob that finds the current data of the device sensors and save them in a sensor document as a nested document every 5 minutes. thanks, Steeve Juneau for adding this blog. Sorry for answering late. it took a lot of time to read and understand the concept of that blog because my English level is elementary. thank you all guys again. have a good day. problem solved",
"username": "BERAT_DINCKAN"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How can I set a delay while saving the data? | 2021-02-16T19:19:49.386Z | How can I set a delay while saving the data? | 4,130 |
null | [
"swift"
] | [
{
"code": "",
"text": "Once the app execute the “try! Realm()” command, it outputs on the command line \" nw_protocol_get_quic_image_block_invoke dlopen libquic failed\" and app just freeze and never execute the next line. I don’t know what to do and I can’t find any useful solution on the web. Can anyone help me?",
"username": "jorne_richard"
},
{
"code": "",
"text": "Welcome to the forums.When posting, it’s a best practice to include some code and then information about your setup. In this case we would want to know what versions of the macOS, XCode and Realm are installed. Maybe include your podfile and then a section of code that’s causing the issue.Then, include some of your troubleshooting; did you add a breakpoint before that line and step through the code to see if anything else was not right before that line? Did you ensure you are using a current Realm version? Also, is this a local or sync’d realm?Update your question and we’ll take a look.",
"username": "Jay"
},
{
"code": "# Uncomment the next line to define a global platform for your project\n platform :ios, '11.0'\n\ntarget 'A Cup' do\n # Comment the next line if you don't want to use dynamic frameworks\n use_frameworks!\n pod \"WaveAnimationView\"\n pod 'RealmSwift'\n pod 'ZFRippleButton'\n # Pods for A Cup\n\n target 'A CupTests' do\n inherit! :search_paths\n # Pods for testing\n pod 'RealmSwift'\n end\n\n target 'A CupUITests' do\n # Pods for testing\n end\n\nend\ntry! Realm()",
"text": "I’m using macOS11.2, XCode 12.3(12C33),and the default version of realm which is 10.7.0.This is my Podfile:I have set two breakpoints before and after the try! Realm(). The app paused before that line, and I pressed that “run the next line” button, and all buttons turned grey. After a long wait, there’s still no sign that then next breakpoint was activated. It seems that the app froze and never execute the next line. There’s no code calling realm before this line, and this line is in the ViewDidLoad().This is a local realm, and I haven’t configure anything about sync’d realm.I’ll be appreciate of the solution.",
"username": "jorne_richard"
},
{
"code": "",
"text": "Ok. so far everything looks correct. Can you now share some code? Does the issue occur in the simulator only or on an actual device?",
"username": "Jay"
},
{
"code": "try! Realm()try! Realm()",
"text": "It happens on both simulator and actual device. I didn’t make any change to the AppDelegate.swift file, and I called try! Realm() on the ViewDidLoad() in the first view’s viewcontroller file, which I think is the most prior method called. I mean, before the try! Realm() , I didn’t wrote or override anything before this line, and code after this line shouldn’t affect the initialization of realm…\nSo I don’t know what code I should share…",
"username": "jorne_richard"
},
{
"code": "",
"text": "Realm doesn’t just crash for no reason. We build and rebuild Realm projects on a daily basis without encountering the issue you’re describing - that that indicates there’s something different about your project than ours.Some troubleshooting to eliminate variables on your part would in order:Performing an internet search reveals that error in other situations as well - it’s possible one of your other podfiles is causing the error and it’s unrelated to Realm.Have you tried to create a brand new project and only add RealmSwift to it? e.g. your podfile would only contain the RealmSwift pod?If that works add another pod and build and run; does the error re-occur?If not, adding another; does that cause the error?These are the steps in finding errors - narrow it to a minimal amount of variables and start adding them back in until the problem happens - then you’ve isolated it.",
"username": "Jay"
},
{
"code": "",
"text": "Well, I’m still not able to isolate the problem, but I got more outputs this time.\nHere’s the output:\n2021-03-01 23:56:20.989269+0800 A Cup[12320:544065] nw_protocol_get_quic_image_block_invoke dlopen libquic failed\n2021-03-01 23:56:31.378352+0800 A Cup[12320:544062] Connection 1: received failure notification\n2021-03-01 23:56:31.378518+0800 A Cup[12320:544062] Connection 1: encountered error(3:-9816)\n2021-03-01 23:56:31.378992+0800 A Cup[12320:544062] [connection] nw_connection_copy_connected_local_endpoint [C1] Connection has no connected path\n2021-03-01 23:56:31.379197+0800 A Cup[12320:544062] [connection] nw_connection_copy_connected_remote_endpoint [C1] Connection has no connected path\n2021-03-01 23:56:31.381309+0800 A Cup[12320:544062] Task <011C542E-AC30-4F40-9ADF-BDF5B6FA65A1>.<2> HTTP load failed, 0/0 bytes (error code: -1200 [3:-9816])So what does it mean?",
"username": "jorne_richard"
},
{
"code": "",
"text": "It means you should do some further troubleshooting as mentioned previously; please start a new project and use a podfile that only contains RealmSwift. Build and run and see if you get the same error.",
"username": "Jay"
},
{
"code": "",
"text": "Well, finally I solved that. When setting datamodels, I have set a few variables to getter type, which caused that problem. After setting them to a specific value, problem solved. I apologize that I didn’t isolate the problem carefully before.\nBtw, may I ask why the problem occur when I set the variable to a getter type?",
"username": "jorne_richard"
},
{
"code": "",
"text": "Glad you found the issue! This is why we ask for code, models etc because without that, we’re just guessing.Btw, may I ask why the problem occur when I set the variable to a getter type?To answer that we would need to see how the Realm Objects (models) are set up so we can see what you mean by a getter type (Realm doesn’t have “getter” types).",
"username": "Jay"
}
] | iOS App freeze while initializing realm with a strange error | 2021-02-25T09:41:36.158Z | iOS App freeze while initializing realm with a strange error | 3,853 |
null | [
"swift"
] | [
{
"code": " struct SyncGroupsPage: View {\n ...\n @ObservedObject var repository = SyncGroupRepository.shared\n ...\n var body: some View {\n ...\n Button(action: { _ = repository.addSyncGroupToCollection(name: \"syncGroup=\\(String.random(10))\") }) ...\n }\n }\n class SyncGroupRepository: ObservableObject {\n ...\n @Published var syncGroups: RealmSwift.List<SyncGroup>?\n func addSyncGroupToCollection(name: String) -> SyncGroup {\n let syncGroup = SyncGroup(name: name)\n try! state.user?.realm?.write {\n self.syncGroups?.append(syncGroup)\n }\n return syncGroup\n }\n ...\n }\n",
"text": "Problem Context:Problem:Repository:Persistence is working as expected and if I leave the SyncGroupsPage view and come back in I see the updated values rendered.syncGroups in the repository aren’t publishing any events when an append occurs. How should I modify the pattern?",
"username": "Michael_Kofman"
},
{
"code": "struct ParentView: View {\n var body: some View {\n ChildView()\n .environment(\\.realmConfiguration,\napp.currentUser!.configuration(partitionValue: \"my-partition-key-value\"))\n }\n}\n\nstruct ChildView: View {\n var body: some View {\n GrandChildView()\n }\n}\n\nstruct GrandChildView: View {\n @ObservedResults(Item.self) var items\n @Environment(\\.realm) var itemRealm\n var body: some View {\n List {\n ForEach(items) { item in\n NameView(item: item)\n }\n }\n }\n}\n\nstruct NameView: View {\n @ObservedRealmObject var item: Item\n \n var body: some View {\n TextField(\"name\", text: $item.name)\n }\n}\nGrandChildViewitems$items.append",
"text": "Hi Michael,Realm-Cocoa 10.6 made working with Realm from SwiftUI views much easier – this meetup recording goes into how to take advantage of the new features (if you’re not using Realm Sync then you can skip the discussions on partitions).Here’s an example where I want to work with a realm for a specific partition ( “my-partition-key-value”) and so I inject it into the environment, you can skip that if you’re working with the default Realm:In GrandChildView you can append to items using $items.append without needing to explicitly create a transaction, and the view will update.",
"username": "Andrew_Morgan"
}
] | SwiftUI/Combine and RealmSwift.List Appending | 2021-02-26T19:29:26.030Z | SwiftUI/Combine and RealmSwift.List Appending | 2,218 |
null | [
"node-js",
"serverless"
] | [
{
"code": "",
"text": "Hi,We have a production app that used to use MongoDB Stitch. Recently we migrated our front-end from using “Stitch” to “Realm-Web” and its working fine. We also have a backend running on AWS Lambda that uses “mongodb-stitch-server-sdk”. We tried migrating to “realm” in the backend but it doesn’t work.I tried searching online if people are facing a similar challenge and I found this GitHub issueI am actually facing the exact same issue that is listed there. I wanted to ask here if there are any solutions or workarounds for this.Also in one of my previous posts, I asked about how long the “Stitch” library might be available as it is legacy library now.Since we are unable to migrate, I am wondering if we can rely on “Stitch” legacy library to stay alive and possibly how long until all the issues with “realm” is fixed.Thanks!",
"username": "Salman_Alam"
},
{
"code": "",
"text": "The issue is that AWS Lambda doesn’t support all primitive file system operations as Realm JavaScript is using.If you don’t need to sync data but can you use the functionality supported by Realm Web, you can use Realm Web for node too.",
"username": "Kenneth_Geisshirt"
},
{
"code": "",
"text": "Hi @Kenneth_Geisshirt, I am mainly using “authentication and MongoDB data access” features no syncing features. Can I go ahead then and use “realm-web” for AWS Lambda?Also do you happen to know if you have to specify “how/where” to store where to store the authentication credentials if you use “realm-web” in AWS Lambda Nodejs environment? I had to do this when using “stitch” library.const client = Stitch.initializeAppClient(\nappName,\nnew StitchAppClientConfiguration.Builder().withDataDirectory(\"/tmp\").build() );Do we have to do something similar if I used “realm-web” in AWS Lambda? I couldn’t find anything in the docs about it.",
"username": "Salman_Alam"
},
{
"code": "",
"text": "Can I go ahead then and use “realm-web” for AWS Lambda?Although I havn’t tested this myself, there should be no reason Realm Web couldn’t run on AWS Lambda.where to store the authentication credentials if you use “realm-web” in AWS Lambda Nodejs environment?The underlying method of persistence of credentials (refresh + access tokens) is abstracted, but in the time of writing this, only an in-memory storage implementation is available (will be used by default) for Realm Web when running in Node.js. This means a log-in is required in the beginning of every function invocation.I’m curious to learn more about your need / use-case here. I assume you want to store the refresh + access token of a user to avoid having to re-authenticate on every function invocation, is that correct? How would you ensure that this storage is not shared between different users (executing queries with another users credentials)? (I’m genuinely interested since I havn’t personally built stuff on AWS Lambda myself).I couldn’t find anything in the docs about it.I will relay this to our docs team. I assume this is because Realm JS is our official Node.js SDK, the use of Realm Web from Node.js is supported, but not our recommended way since it doesn’t ship with a sync client.",
"username": "kraenhansen"
}
] | Realm not working in AWS Lambda (serverless framework) | 2021-02-28T04:38:03.876Z | Realm not working in AWS Lambda (serverless framework) | 3,335 |
null | [] | [
{
"code": "",
"text": "Hi,Question is wrong depends on the data. We should not check the data.\nLogically, we do first sorting then do limiting. Question accepts vice versa. You are contradicting yourself.Please check.",
"username": "Umut_Tekin"
},
{
"code": "sortlimit",
"text": "Hi @Umut_Tekin,It may appear that when you reverse the order of sort and limit, it wouldn’t give you the right results but MongoDB flips their order when executing the query, delivering the results that the question prompt is looking for.",
"username": "Shubham_Ranjan"
},
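A quick way to see this in the shell (a hedged example against any collection; the server builds the same plan for both cursors, applying the sort before the limit):

```javascript
db.zips.find().sort({ pop: -1 }).limit(5)
db.zips.find().limit(5).sort({ pop: -1 })  // returns the same five documents
```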
{
"code": "",
"text": "Hi,Yes, that’ s exactly what I’ m talking about. It wouldn’ t give the right results. So, you have to check both of them. When results set change depending on the order of sort and limit, MongoDB shows 2 different replica set :)? It shouldn’ t. A query gives a result set, not two.Either, you have to provide copy - paste options for that question to check results or you have correct the answer the way conform all possible conditions.",
"username": "Umut_Tekin"
},
{
"code": "",
"text": "Then you mean to say that sort and limit need not to be in an order as mongo adjusting by itself?",
"username": "Jnana_Deep_Mundru"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Chpater 5 - Quiz 1: sort() and limit() | 2021-01-30T20:13:51.483Z | Chpater 5 - Quiz 1: sort() and limit() | 3,000 |
null | [
"queries"
] | [
{
"code": "db.<collectionname>.find({\n\t_id: ObjectId(\"602b297f89ead6c0dd7821b8\"),\n\tsystem: 'ABC',\n\tsentence: { $elemMatch:{ $regex:'^.*note.*' } }\n})\n{\n\t\"_id\" : ObjectId(\"602b297f89ead6c0dd7821b8\"),\n\t\"fds_language\" : \"EN\",\n\t\"fds_subjects\" : \"ERNS^ER_GEN\",\n\t\"publish_time\" : ISODate(\"2021-02-16T01:47:03.000Z\"),\n\t\"sa_categories\" : \"SA_EARNINGS\",\n\t\"sentence\" : [\n\t\t\"Revenue $21.8M vs year-ago $41.7M\",\n\t\t\"Adjusted EBITDA ($1.8M) vs year-ago $2.6M\",\n\t\t\"Editor's note: This comment was entered 15-Feb for the record.\"\n\t],\n\t\"story_date_time\" : ISODate(\"2020-12-21T21:07:29.000Z\"),\n\t\"system\" : \"ABC\"\n} \n",
"text": "HiI have following document which has “sentence” array and I was wondering how I can search array element based on certain text (e.g . in this case word is “note”)\nI have used following query but it is returning all the element of sentence instead of third element of array in this case//query://document :how can I change the query which allow regex based text search on element of sentence array and return only that element?Thanks",
"username": "Dhruvesh_Patel"
},
{
"code": "",
"text": "You could use the aggregation framework with a $unwind stage to get only the specific array element.",
"username": "steevej"
},
{
"code": "",
"text": "thanks , can you provide example for it ? I am still new to mongodb.",
"username": "Dhruvesh_Patel"
},
{
"code": "",
"text": "Hi @Dhruvesh_Patel,Here are some examples on our $unwind documentation page.Hope this helps!\nJason",
"username": "Jason_Tran"
},
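A hedged sketch of both approaches against the document posted above (the collection name is assumed):

```javascript
// 1. $unwind, then keep only the matching array entries
db.mycollection.aggregate([
  { $match: { _id: ObjectId("602b297f89ead6c0dd7821b8"), system: "ABC" } },
  { $unwind: "$sentence" },
  { $match: { sentence: { $regex: "note", $options: "i" } } }
])

// 2. keep one document but trim the array with $filter (MongoDB 4.2+)
db.mycollection.aggregate([
  { $match: { _id: ObjectId("602b297f89ead6c0dd7821b8"), system: "ABC" } },
  { $project: {
      sentence: {
        $filter: {
          input: "$sentence",
          as: "s",
          cond: { $regexMatch: { input: "$$s", regex: /note/i } }
        }
      }
  } }
])
```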
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Search on a string array content | 2021-03-01T14:01:35.807Z | Search on a string array content | 9,052 |
null | [
"indexes"
] | [
{
"code": "'organizations.users.id'",
"text": "I have a collection called Organizations, each organization has an embedded array of users. Each user has an id that I would like to be unique. I’ve attempted to set up a unique index on 'organizations.users.id', but I’ve found that that I am still able to create multiple users with the same id.After doing a bit of googling I’ve found that, while you can set unique indexes on embedded documents, they do not actually enforce uniqueness. I am wondering why this is? Is this a bug?Thanks!",
"username": "Greg_Fitzpatrick-Bel"
},
{
"code": "",
"text": "Hi @Greg_Fitzpatrick-Bel,Index constraints are applied at the document level, so a unique index currently cannot ensure unique values within a single document (for example, in an embedded array).Per Unique Constraint Across Separate Documents:The unique constraint applies to separate documents in the collection. That is, the unique index prevents separate documents from having the same value for the indexed key.Because the constraint applies to separate documents, for a unique multikey index, a document may have array elements that result in repeating index key values as long as the index key values for that document do not duplicate those of another document.There is some related discussion and a workaround using document validation in SERVER-1068: unique indexes not enforced within array of single document.Regards,\nStennie",
"username": "Stennie_X"
},
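As a write-time guard (a hedged sketch using the field names from the question), you can refuse the push when the id is already present in that organization's array:

```javascript
db.organizations.updateOne(
  { _id: orgId, "users.id": { $ne: newUser.id } },  // match only if the id is absent
  { $push: { users: newUser } }
)
// result.matchedCount === 0  =>  the id already existed (or the org was not found)
```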
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Unique Indexes on embedded documents | 2021-03-01T22:07:25.516Z | Unique Indexes on embedded documents | 5,604 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.1.0 MongoDB Java & JVM Drivers has been released, with support for the upcoming release of MongoDB 4.4.The documentation hub includes extensive documentation of the 4.1 driver, includingand much more.You can find a full list of bug fixes here .You can find a full list of improvements here .You can find a full list of new features here .https://mongodb.github.io/mongo-java-driver/4.1/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.1.0 Released | 2020-07-30T19:31:32.055Z | MongoDB Java Driver 4.1.0 Released | 5,839 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.2.0 MongoDB Java & JVM Drivers has been released.The documentation hub includes extensive documentation of the 4.2 driver, includingand much more.You can find a full list of bug fixes here .You can find a full list of improvements here .You can find a full list of new features here .https://mongodb.github.io/mongo-java-driver/4.2/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.2.0 Released | 2021-01-22T18:17:10.683Z | MongoDB Java Driver 4.2.0 Released | 3,189 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.2.1 MongoDB Java & JVM Drivers release is a patch to the 4.2.0 release.The documentation hub includes extensive documentation of the 4.2 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.2/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.2.1 Released | 2021-03-01T22:11:55.148Z | MongoDB Java Driver 4.2.1 Released | 2,192 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 3.11.3 MongoDB Java Driver release is a patch to the 3.11.2 release and a recommended upgrade.The documentation hubincludes extensive documentation of the 3.11 driver, includingand much more.You can find a full list of bug fixes here .http://mongodb.github.io/mongo-java-driver/3.12/javadoc/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 3.11.3 Released | 2021-03-01T22:14:45.432Z | MongoDB Java Driver 3.11.3 Released | 2,668 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.1.2 MongoDB Java & JVM Drivers release is a patch to the 4.1.1 release.The documentation hub includes extensive documentation of the 4.1 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.1/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.1.2 Released | 2021-03-01T22:09:51.561Z | MongoDB Java Driver 4.1.2 Released | 3,260 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 4.0.6 MongoDB Java & JVM Drivers release is a patch to the 4.0.5 release and a recommended upgrade.The documentation hub includes extensive documentation of the 4.0 driver, includingand much more.You can find a full list of bug fixes here .https://mongodb.github.io/mongo-java-driver/4.0/apidocs/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 4.0.6 Released | 2021-03-01T22:08:25.043Z | MongoDB Java Driver 4.0.6 Released | 3,082 |
null | [
"java",
"production"
] | [
{
"code": "",
"text": "The 3.12.8 MongoDB Java Driver release is a patch to the 3.12.7 release and a recommended upgrade.The documentation hub includes extensive documentation of the 3.12 driver, includingand much more.You can find a full list of bug fixes here .http://mongodb.github.io/mongo-java-driver/3.12/javadoc/ ",
"username": "Jeffrey_Yemin"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Java Driver 3.12.8 Released | 2021-03-01T22:06:23.667Z | MongoDB Java Driver 3.12.8 Released | 3,246 |
null | [] | [
{
"code": "",
"text": "I am talking about the assignment: “Use the shell space below to connect to your Atlas cluster.”I am managing to connect successfully, however I am failing the assignment with the message:[FAIL] “The cluster name is Sandbox” Did you name your Atlas cluster Sandbox?However, this is not the case, my cluster is named Sandbox.Does anyone else have this issue?",
"username": "Lisa_N_A"
},
{
"code": "",
"text": "Post a screenshot of the whole IDE so that we see how you connect.",
"username": "steevej"
},
{
"code": "",
"text": "Sure Please see the attached screenshots.Bildschirmfoto 2021-02-20 um 17.04.441325×703 37 KB Bildschirmfoto 2021-02-20 um 17.04.331325×704 61.6 KB Bildschirmfoto 2021-02-20 um 17.04.201323×702 25.2 KB",
"username": "Lisa_N_A"
},
{
"code": "",
"text": "Weird!May be the validation script is picky and assume that Sandbox is not the same as sandbox.Try putting an upper case S in your connection string.Also post screenshot of the Atlas screen for the status of your cluster.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks for the help!\nYes pretty weird…I tried with a capital S and it is not working either.\nHere is a screenshot of the status of my cluster.Bildschirmfoto 2021-02-20 um 17.21.061664×388 48.3 KB",
"username": "Lisa_N_A"
},
{
"code": "",
"text": "I have flagged your post so that they look at it quickly.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you so much!",
"username": "Lisa_N_A"
},
{
"code": "",
"text": "I have the same issue:3 total, 2 passed, 0 skipped:\n[PASS] “Successfully connected to the Atlas Cluster”\n[FAIL] “The cluster name is Sandbox”\nDid you name your Atlas cluster Sandbox?\n[PASS] “The username is m001-student”It’s some sort of nonsense…",
"username": "Rimantas_Belovas"
},
{
"code": "",
"text": "Did you try with other form of string?\nDoes it fail with it too?mongo “mongodb+srv://sandbox.xyz.mongodb.net/test” --username m001-student",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "I have successfully connected withmongo “mongodb+srv://m001-student:[email protected]”mongo works fine, only these test fail for no reason ",
"username": "Rimantas_Belovas"
},
{
"code": "",
"text": "Yes, it fails too.",
"username": "Lisa_N_A"
},
{
"code": "",
"text": "I did some testing and think I figured out the problem. I am pretty sure it is a bug.When creating the Project M001 the first cluster you create inside has to be the cluster named Sandbox.If you create the Project and then create any other cluster inside the test fails. Even if you delete the cluster again and only have the Sandbox cluster present.I have tested this and I am able to recreate the issue.",
"username": "Lisa_N_A"
},
{
"code": "mongo \"mongodb+srv://m001-student:[email protected]\"\nmongo \"mongodb+srv://sandbox.xyz.mongodb.net\" --username m001-student --password m001-mongodb-basics\n",
"text": "I have tried @Ramachandra_37567 idea of using the –username option rather than putting the username and password in the connection string and it works.So rather thanuseto connect before Run Test.",
"username": "steevej"
},
{
"code": "",
"text": "Yep, it worked!",
"username": "Rimantas_Belovas"
},
{
"code": "mongo \"mongodb+srv://sandbox.xyz.mongodb.net\" --username m001-student --password m001-mongodb-basics",
"text": "mongo \"mongodb+srv://sandbox.xyz.mongodb.net\" --username m001-student --password m001-mongodb-basicsI get this error while running above command. Please advisebash-4.4# mongo “mongodb+srv://sandbox.xyz.mongodb.net” --username m001-student --password m001-mongodb-basics\nDNSProtocolError: No SRV records for “_mongodb._tcp.sandbox.xyz.mongodb.net”\ntry ‘mongo --help’ for more information",
"username": "Kashif_Choudhry"
},
{
"code": "",
"text": "You have to use the URI of your own cluster. The cluster name sandbox.xyz.mongodb.net is an example.",
"username": "steevej"
},
{
"code": "",
"text": "Thanks @Lisa_N_A ! this is an interesting find. I’ve created a ticket to address it. Thanks for reporting and investigating on this. ",
"username": "Yulia_Genkina"
},
{
"code": "",
"text": "mongo “mongodb+srv://m001-student:[email protected]”I have the same exact issue, Do we have a solution for this?",
"username": "Rohit_Rai"
},
{
"code": "",
"text": "The solution is this thread, a few post above.",
"username": "steevej"
},
{
"code": "",
"text": "mongo “mongodb+srv://sandbox.xyz.mongodb.net” --username m001-student --password m001-mongodb-basicsI am trying to connect using in browser IDE and unable to do so since Wed.mongo “mongodb+srv://sandbox.xyz.mongodb.net” --username m001-student --password m001-mongodb-basicstried following also.mongo “mongodb+srv://Sandbox.mongodb.net/m001-student”Please let us know if it’s bug or correct command to connect?Thanks.",
"username": "Mahrukh_Khan"
}
] | Can't pass last assignment in Chapter 1 - Issue? | 2021-02-20T16:00:29.369Z | Can’t pass last assignment in Chapter 1 - Issue? | 3,390 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hi,I’m a newbie to MongoDB. My question is when creating a schema, is it better to put all your Documents in a collection to itself, i.e., _ID, doc_type, document. or is it best to embed the document into the collection that contains the medata data for that document, i.e., _iD, doc_type, doc_description, doc_name, document_count, etc.?Forgive me if examples are too rudimentary.Thanks,\nBill",
"username": "William_Jordan"
},
{
"code": "",
"text": "Hello @William_Jordan,I am assuming you are looking for database design (or data modeling) information for a certain type of data you have on your mind. The following is the link to documentation for Data Model Design, and it generally discusses:Effective data models support your application needs. The key consideration for the structure of your documents is the decision to embed or to use references.You can post specific data and details for more insight into this topic (for example the application data is users, blogs and comments/reviews information or a customer and invoices, etc.).MongoDB database is Databases and Collections and Documents. And, how you design the data for your application into these is the data modeling.",
"username": "Prasad_Saya"
}
] | Collection Schema | 2021-03-01T16:17:44.039Z | Collection Schema | 1,562 |
Subsets and Splits