image_url | tags | discussion | title | created_at | fancy_title | views
---|---|---|---|---|---|---
null | [] | [
{
"code": "",
"text": "Hi everyone!In case you missed it, MongoDB World is now MongoDB.live: a free, virtual, two-day event full of learning, networking, and fun! (Learn more about MongoDB.live 2020 >>>)One exciting outcome of this format change is the new Community Cafe – a parallel content track where we’ll be hosting tutorials and workshops, kid-friendly activities, and a proverbial talent show of community members akin to previous years’ Builders Fest at MongoDB World. We’re looking for community members who want to show off their stuff. We want you!Here’s the idea proposal form with more details on how to get involved >>>The bottom line is: we want to get to know our community better. The theme for this year’s Community Cafe track is “Getting to Know Your Neighbors.” We invite you to take us into the pottery-throwing, stamp-collecting, dog-training, rodeo-riding, home-baking, upcycled-sculpting, piano-playing, and competitive dancing world of our MongoDB community at home.Time is short, so submit your ideas today. Even if they aren’t fully baked out yet, we can work with you to turn your talent into a fun session of sharing and learning. Drop a comment below if you have any questions. Thanks!",
"username": "Jamie"
},
{
"code": "",
"text": "Y’all, don’t make me do a live cooking show by myself ",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Show Us Your Talents in the MongoDB.live Community Cafe! | 2020-04-23T08:20:19.983Z | Show Us Your Talents in the MongoDB.live Community Cafe! | 2,992 |
null | [
"security",
"atlas"
] | [
{
"code": "",
"text": "So frustrated at the moment! I cannot assign a Privilege or Role to my user that limits access to my specific database and collection. The only way for me to see my dev-datasource is to assign readWriteAnyDatabase@admin to my User Name. The problem with this is I can see all datasources (uat-datasource & prod-datasource).What am I missing?",
"username": "Greg_Embry"
},
{
"code": "",
"text": "Hi @Greg_Embry,On Atlas when you create or update a user you can assign roles to them.On this screen (I modify a user on my cluster) you can see that there is a small “Add Default Privileges” button.\n\na8f535fb7245d399e3a92150d7c75b4c3d63ba2a810×497 21.3 KB\nClick on it and you have access to more built-in roles that restrict access to a namespace (myDatabse.myCollection). Base on your problem you probably need the “readWrite” built-in role.\n\nh821×808 42.7 KB\nTo be complete. If for specific tasks the built in roles offered are not sufficient you can always create custom roles by following this documentation page.I hope that answers your problem ",
"username": "Gaetan_MORLET"
},
{
"code": "",
"text": "dev-datasourceThanks for the quick response @Gaetan_MORLET !Check out the following screenshot. No matter how I monkey with the roles or individual privileges I cannot just see the dev-datasource as the ONLY datasource.\nimage1109×1203 55.5 KB\n",
"username": "Greg_Embry"
},
{
"code": "You have your dev and prod DBs on the same Atlas cluster. True?You want to connect with a user via NoSQLBooster and see only the database and the collection to which he has access (a dev DB). Correct?Currently, your user can no longer see anything when connected but before with the readWriteAnyDatabase built-in-role he could see all DBs ?",
"text": "Ok, the problem seems to concern the NoSQLBooster tool more than Atlas.First of all, for me and the community we will clarify certain points.You have your dev and prod DBs on the same Atlas cluster. True?You want to connect with a user via NoSQLBooster and see only the database and the collection to which he has access (a dev DB). Correct?\nIf yes I don’t use this tool so I don’t know how to do it. Maybe someone in the community can help you. On MongoDB Compass this is done automatically, the user sees only what he has access to.Currently, your user can no longer see anything when connected but before with the readWriteAnyDatabase built-in-role he could see all DBs ?\nI think this is because of your custom role. It also depends on the actions that NoSQLBooster needs to display the info.\nYou can in my opinion go back to readWrite built-in-role as I mentioned earlier. It has actions like collStats or listIndexes which can be useful for external tools.\n\nee808×726 32.4 KB\nSo in my opinion the problem comes more from your custom role and NoSQLBooster.Keep us in touch.",
"username": "Gaetan_MORLET"
},
{
"code": "",
"text": "Check out the following screenshot. No matter how I monkey with the roles or individual privileges I cannot just see the dev-datasource as the ONLY datasource.Hi Greg,Can you confirm if your issue is with administering users via the Atlas UI or using a third party tool (NoSQL Booster is mentioned in your follow-up post).Atlas is a managed service, so some administrative commands are limited (particularly if your cluster is provisioned on a shared tier like M0/M2/M5) and a few (like user and role management) must be performed via Atlas. Please refer to Unsupported Commands in Atlas for specific details.Per the Atlas documentation on Adding Database Users:Atlas rolls back any user modifications not made through the UI or API. You must use the Atlas UI or API to add, modify, or delete database users on Atlas clusters.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "UPDATE\nWell, thx for the MongoDB Compass mention. I had no idea that it was available. I installed Compass and it all looks good now. Pre-Atlas, I used Robo 3T and I could not connect to Atlas with it. I then came across NoSQLBooster and was able to connect, but still didn’t see the correct results.So all is good now.– Thx @Gaetan_MORLET & @Stennie_X for reaching out!",
"username": "Greg_Embry"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Atlas Database Users and/or Roles not working | 2020-04-21T23:11:19.496Z | Atlas Database Users and/or Roles not working | 5,555 |
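For reference, the scoped grant discussed in this thread maps to a readWrite role restricted to one database. Atlas itself requires the UI or API for user management, but on a self-managed deployment the equivalent grant would look roughly like this minimal mongosh sketch (the user name and password are hypothetical placeholders):

```js
// Hypothetical self-managed equivalent of the Atlas "readWrite" built-in role
// scoped to a single database (Atlas users must be managed via its UI/API).
use admin
db.createUser({
  user: "devUser",               // hypothetical user name
  pwd: "changeThisPassword",     // placeholder password
  roles: [{ role: "readWrite", db: "dev-datasource" }] // limits access to one DB
})
```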
null | [
"node-js"
] | [
{
"code": "if (collectionName) {\n delete collectionName;\n} else {\n make(collectionName);\n}\n",
"text": "I want to check a collection exists or not. If it exists delete it or if not exist make it.\nfor example:how to do this in mongodb?",
"username": "Ali_Abbas"
},
{
"code": "",
"text": "The exact details on how to do it depends on the driver your are using.",
"username": "steevej"
},
{
"code": "",
"text": "I am using 3.5.5 version",
"username": "Ali_Abbas"
},
{
"code": "",
"text": "A driver is language specific, not version specific. Which language, java, js, rust, go?",
"username": "steevej"
},
{
"code": "",
"text": "i am using node js .",
"username": "Ali_Abbas"
},
{
"code": "",
"text": "Just drop the collection every time, additional access to the collection will create it.drop(options, callback){Promise}lib/collection.js, line 1140Drop the collection from the database, removing it permanently. New accesses will create a new collection.http://mongodb.github.io/node-mongodb-native/3.5/api/Collection.html#drop",
"username": "chris"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | How to check if a collection exists and delete it? | 2020-04-23T07:49:09.779Z | How to check if a collection exists and delete it? | 14,598 |
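A minimal sketch of the check-then-drop pattern discussed in this thread, using the Node.js driver 3.x API (the collection name is hypothetical):

```js
// Check whether a collection exists, and drop it if so; `db` is a connected Db.
async function dropIfExists(db, name) {
  const matches = await db.listCollections({ name }).toArray();
  if (matches.length > 0) {
    await db.collection(name).drop(); // removes the collection permanently
    return true;
  }
  return false; // nothing to drop; the next insert will recreate it
}
```

As noted in the accepted answer, simply calling drop() every time also works, since the next access recreates the collection.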
null | [
"performance",
"server"
] | [
{
"code": "",
"text": "Hi guysI am new to mongodb with active interest to learn.Can anyone please help to know what basic steps we can starts with for checking performence and tunning mongodb server.Kind regards\nGaurav",
"username": "Gaurav_Gupta"
},
{
"code": "",
"text": "Best place to start is to take some courses at https://university.mongodb.com/.",
"username": "steevej"
}
] | MongoDB performance tuning | 2020-04-23T15:02:54.603Z | MongoDB performance tuning | 1,667 |
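As a concrete starting point for the question above, a few basic shell checks cover most first-pass performance investigations (collection and field names here are hypothetical):

```js
// Per-query plan, plus documents/keys examined vs. returned:
db.orders.find({ status: "A" }).explain("executionStats");
// Operations that have been running longer than 5 seconds right now:
db.currentOp({ secs_running: { $gt: 5 } });
// Connection usage counters:
db.serverStatus().connections;
// Collection and index sizes:
db.orders.stats();
```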
null | [] | [
{
"code": "",
"text": "I was reviewing the schemas in the 'Exploring Datasets in Compass and the quiz answers did not match up exactly to the fields. In the movies schema. For example, ‘director’ instead of ‘directors’, ‘genre’ instead of ‘genres’. Shouldn’t the spelling be identical?",
"username": "Brenda_41152"
},
{
"code": "",
"text": "Please make sure you are connected to the correct cluster/DB/collection\nWe do have director,genre fields in another DB",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Brenda_41152,As @Ramachandra_37567 mentioned, please make sure you are using the video.movies collection and not the mflix.movies collection.Hope it helps!If you have any other query then please feel free to get back to us.Happy Learning Thanks,\nShubham Ranjan\nCurriculum Support Engineer",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "2 posts were merged into an existing topic: Lab 1.1. Compass Schema shows null",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Closing this thread.",
"username": "Shubham_Ranjan"
}
] | Schema Field Names in Lab 1.1 Quiz | 2019-11-10T18:11:45.278Z | Schema Field Names in Lab 1.1 Quiz | 1,633 |
[
"ops-manager"
] | [
{
"code": "",
"text": "Hi,\nI am seeing in below link that community edition is not supported for 4.2 FCV under Backup considerations page.\nDo we any documentation for this limitation & why its not part of FCV?–",
"username": "Abhi_k"
},
{
"code": "",
"text": "Any reply on the above query by team?",
"username": "Abhi_k"
}
] | Opsmanager 4.2 Support for Community edition | 2020-04-14T14:26:01.247Z | Opsmanager 4.2 Support for Community edition | 1,840 |
|
[
"app-services-user-auth",
"stitch"
] | [
{
"code": "auth.loginWithCredential(new CustomCredential(token))\n{\"error\":\"expected field 'sub' to be in token metadata\",\"error_code\":\"AuthError\",\"link\":\"redacted\"}\n",
"text": "I am using Auth0 as a custom authentication provider with Stitch. The JWT access token sent to Stitch has the “sub” field in the payload.I want to set a metadata field “externalUserId” on the Stitch user object. Following steps at https://docs.mongodb.com/stitch/authentication/custom-token/#metadata-fields, I have setup fields as following:\nScreen Shot 2020-04-03 at 2.01.28 PM1388×456 42.3 KB\nI am using the browser SDK for authentication like following:The request fails with the following error:I have double-checked that the access-token being passed has the “sub” field.",
"username": "Akshay_Kumar"
},
{
"code": "* iss (issuer): Issuer of the JWT\n* sub (subject): Subject of the JWT (the user)\n* aud (audience): Recipient for which the JWT is intended\n* exp (expiration time): Time after which the JWT expires\n* nbf (not before time): Time before which the JWT must not be accepted for processing\n* iat (issued at time): Time at which the JWT was issued; can be used to determine age of the JWT\nissaudfunction (user, context, callback) {\n const namespace = 'http://example.com/';\n context.accessToken[namespace + 'email'] = user.email;\n callback(null, user, context);\n}\n{ \"http://example.com/email: [email protected]\" } Error:\nexpected field 'http://example.com/email' to be in token metadata\nhttp://example.com/email{ \"http://example\": { \"com/email\" : <val> } }\nError:\nexpected field '\"http://example.com/email\"' to be in token metadata\nfunction (user, context, callback) {\n const namespace = 'http://examplecom/';\n context.accessToken[namespace + 'email'] = user.email;\n callback(null, user, context);\n}\nhttp://examplecom/email",
"text": "I have exactly the same problem.Did you figure this out?After a lot of tinkering I ~concluded~ Stitch doesn’t allow certain fields as Metadata, like you want to.Auth0 Access Token basically use these fields:Out of all those, only iss is read by Stitch as metadata. The others are ignored. Under the hood Stitch might use those fields for other stiff. We know that’s at least the case for aud, that is (optionally) used for verification.SolutionMy goal was slightly different than OP’s, but close enough. I wanted to pass custom fields from Auth0 to Stitch through the access token.First I needed to create a rule in Auth0, adding a field to the token. Auth0 requires the fields to be namespaced in the form of a url:So I now had a token with a field { \"http://example.com/email: [email protected]\" } , which I could verify by decoding it.However Stitch wouldn’t read it, yieldingI struggled to see why Stitch wouldn’t read that. Hours. Until I remembered the note about the dot notation.Stitch interprets the Path http://example.com/email asAdding quotes didn’t work either:Finally the trick was to remove the dot altogether:And adding http://examplecom/email as the Path.",
"username": "Dalmo_Mendonca"
}
] | Stitch custom jwt authentication not able to set metadata fields from jwt | 2020-04-03T09:14:11.235Z | Stitch custom jwt authentication not able to set metadata fields from jwt | 2,700 |
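To make that workaround concrete, a decoded access token produced by the dot-free Auth0 rule would carry a claim like the sketch below (all values are hypothetical):

```js
// Decoded JWT payload: the namespaced claim contains no dots, so the Stitch
// metadata Path "http://examplecom/email" is read literally, not as a nested path.
const decodedPayload = {
  iss: "https://example.auth0.com/", // issuer; the one standard field Stitch reads as metadata
  sub: "auth0|5e9f0c...",            // subject (the user)
  aud: "my-stitch-app",              // audience, optionally used for verification
  exp: 1587600000,                   // expiration time
  "http://examplecom/email": "user@example.com",
};
```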
|
null | [
"replication"
] | [
{
"code": "PRIMARY> rs.printReplicationInfo()\nconfigured oplog size: 160000MB\nlog length start to end: 12799secs (3.56hrs)\noplog first event time: Wed Apr 22 2020 11:21:03 GMT+0000 (UTC)\noplog last event time: Wed Apr 22 2020 14:54:22 GMT+0000 (UTC)\nnow: Wed Apr 22 2020 14:54:25 GMT+0000 (UTC)\n",
"text": "Hi,\nI am currently testing mongodb for a potentional use on a live system altough I have encountered an issue with mongodb replication.I have PSA setup on mongodb v4.2.6. 3 Servers with 10 core CPU, 20GB RAM ea. I am currently testing how MongoDB behaves if for instance I stop the replication manually.Configuration :\noplog was configured for each server with 160GB of size which is more than I need for the test that I am conducting also Secondary Sync Target was configured rs.syncFrom(“mongo01:27017”) ( to the Primary server)When conducting the test I am inserting roughly 20k transactions of JSON per second. After one minute I am stopping the replica server by turning off mongod and let the inserts continue for 5 for minutes. After this procedure I turn on the replica and these are the 2 issues I have encountered:When turning off the replication the transactions fall down to 2K per second compared to 20K per second I was achieving when my PSA set up was still up and running I would like to know why this is happening because I cannot see any errors apart from the fact that the secondary server is down. Can you provide any inside on this ?When restarting the replica the tps still remains at 2K per second and the replication lag continues to increase without eventually never recovering, I can see that this is not an oplog issue because I didn’t exceed the configured size.I can see some main flaws here, what happens if I have to recover my replica even after 5 minutes, do I have to recover everything by using this method : Replica Set Resync by Copying This isn’t as feasible as having the replica replicate the changes required only considering this is just 5 minutes of data (600k entries considering that the inserting rate dropped to 2K per sec)Is there any way to fix these issues please? Maybe a configuration which I am missing",
"username": "Emanuel_Mallia"
},
{
"code": "",
"text": "What happens to the TPS rate if you let the writes go beyond 1 minute without shutting down a replica? I’m wondering if your file system cache/storage subsystem is slowing down because of other reasons.",
"username": "Steve_Hand"
},
{
"code": "",
"text": "The TPS rate remains stable, I actually tested the whole setup for a whole day with 20k TPS without having any issues whatsoever, the issue started when I stopped the secondary with the first test just for 5 minutes since I saw that huge reduction in TPS, furthermore when starting the secondary again after those 5 minutes, even after stopping the application which was passing the data to mongo, the replication lag never recovered.",
"username": "Emanuel_Mallia"
},
{
"code": "",
"text": "What are you testing with? My tendency is too look at that rather than mongodb.As for rejoining node not catching up, you’ll need to look at the logs.",
"username": "chris"
}
] | Slow Replication Lag & Recovery of Replica Set | 2020-04-22T17:17:26.983Z | Slow Replication Lag & Recovery of Replica Set | 3,269 |
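For diagnosing lag like this, a small mongosh sketch can print each member's state and how far its optime trails the primary (assumes a primary is currently elected):

```js
// Print replication lag per member, relative to the current primary.
const status = rs.status();
const primary = status.members.find((m) => m.stateStr === "PRIMARY");
status.members.forEach((m) => {
  const lagSecs = (primary.optimeDate - m.optimeDate) / 1000; // Date diff in ms -> s
  print(`${m.name}  ${m.stateStr}  lag=${lagSecs}s`);
});
```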
null | [
"java"
] | [
{
"code": "",
"text": "Hi there,Can somebody please tell me what is the best practice to keep database and collection instances for MongoDB Java and MongoDB C# drivers? Namely, is it safe to keep database and collection instances as a singleton i.e. to keep a single instance of database and collection during the whole application lifetime? I’ve already learned from the documentation that MongoClient should be singleton since it maintains the pool of connections. But what about database and collection instances? I’ve spent some time trying to research this and I’ve found out there are different opinions about this. Someone tells that it’s okay to keep a single instance of database and collection but someone tells that it’s not.Thanks!Regards,\nAlexey.",
"username": "alexey"
},
{
"code": "",
"text": "This is about the Java driver. The instances of MongoDB database and collections are Java objects, and they are treated as any other Java objects.How to manage Java objects within an application? This is about memory management in Java and Garbage Collection (GC). GC is automatic managing of memory space used by objects. GC is in turn controlled by Java Virtual Machine (JVM). The application runs in a JVM which in turn runs on your application or web server.There are lot of aspects and variables. It is a very general subject. I think you have to look for a specific strategy suitable for your situation - the application design, the programming practices, the JVM, memory, the application server, tuning, monitoring and analysis, etc.",
"username": "Prasad_Saya"
}
] | What is the best practice to keep database and collection instances for MongoDB C# / Java drivers | 2020-04-23T08:21:13.006Z | What is the best practice to keep database and collection instances for MongoDB C# / Java drivers | 3,662 |
null | [
"app-services-user-auth",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,For at least one week we observe sporadic “Bad user authentication (BIND, REFRESH)” errors coming from client devices trying to connect to a Realm-cloud instance.This instance hosts just one realm database, which is accessed via single user account(user/pass). The same error occurs when we’re connected with Realm Studio(using the same account) after some time period.Please advise - is this a known Realm Cloud issue?Regards,\nIvo",
"username": "Ivo_Dimitrov"
},
{
"code": "",
"text": "Hi @Ivo_Dimitrov,The “Bad user authentication (BIND, REFRESH)” message is a client level error.Per the client error descriptions, the expected cause is:Indicates that the server has produced a bad token, or that the SDK has done something wrong.If you are still seeing this error intermittently can you provide some more details including:Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "ce hosts just one realm database, which is accessed via single user account(user/pass). The same error occurs when we’re connected with Realm StudNot sure if this is applicable, but I got a lot of BIND, REFRESH errors when attempting to read a read-only realm from Realm Cloud. The error message wasn’t really clear, but what happened in my case was that I had missed the part in the documentation that says that you need to open a read only realm asynchronously the first time to make sure it is fully downloaded.If you don’t do this, the realm will initially be empty which will cause a schema mismatch. And since the user doesn’t have permissions to change the realm schema you will get a BIND, REFRESH error.Not sure if this is why it is happening for you, but I thought I’d share what the problem was for me in case it helps.",
"username": "Simon_Persson1"
}
] | Bad user authentication (BIND, REFRESH) errors | 2020-03-16T19:00:45.272Z | Bad user authentication (BIND, REFRESH) errors | 2,944 |
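Building on Simon's point, here is a hedged sketch of opening a read-only synced Realm asynchronously with legacy realm-js (the instance URL and `user` object are hypothetical, and would come from your own Realm Cloud login):

```js
// Open with Realm.open() the first time so the full contents (and schema) are
// downloaded before use; this avoids the empty-realm schema-mismatch path.
const Realm = require('realm');
const config = user.createConfiguration({
  sync: { url: 'realms://myinstance.cloud.realm.io/shared', fullSynchronization: true },
  readOnly: true,
});
const realm = await Realm.open(config); // resolves after the initial download (inside an async function)
```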
null | [
"node-js"
] | [
{
"code": " node_modules\\mongodb\\lib\\core\\sdam\\server_description.js:85\nthis.hosts = this.hosts.map(host => host.toLowerCase());\nconst MongoClient = require('mongodb').MongoClient;\n\nconst assert = require('assert');\n\n// Connection URL\n\nconst url = 'XXXXXXXXX MY HOST';\n\n// Database Name\n\nconst dbName = 'myproject';\n\n// Use connect method to connect to the server\n\nMongoClient.connect(url, function (err, client) {\n\n assert.equal(null, err);\n\n console.log(\"Connected successfully to server\");\n\n const db = client.db(dbName);\n\n client.close();\n\n});\n\nvar BCDice = require('bcdice').default;\n\nrequire('bcdice/lib/diceBot/Cthulhu');\n\nfunction getInfo(gameType) {\n\n return BCDice.infoList.find(info => info.gameType === gameType);\n\n}\n\nfunction roll(command, gameType) {\n\n var bcdice = new BCDice();\n\n var [result, rands] = bcdice.roll(command, gameType);\n\n return {\n\n result,\n\n rands,\n\n };\n\n}\n\nconsole.log(getInfo('Cthulhu'))\n\nconsole.log(roll('CC', 'Cthulhu'));",
"text": "mongodb got error when i use bcdiceCode",
"username": "QQ_Zeix"
},
{
"code": "mongo",
"text": "Hi @QQ_Zeix, welcome!mongodb got error when i use bcdiceI’ve just tried your snippet code above without getting an error. My test ran successfully with bcdice v1.2.0 and MongoDB Node.JS driver v3.5.6.I doubt that this is related to bcdice module, I suspect this is related more to the MongoDB connection URL that you’ve specified. Please check that you’re using a valid Connection String URI. Make sure you’re able to connect using mongo shell with the same connection string URI.I would suggest to also review MongoDB Node.JS driver: Connect Tutorial. If you still having the same issue, please provide:Regards,\nWan.",
"username": "wan"
}
] | Get Error when I use bcdice | 2020-04-18T15:03:38.764Z | Get Error when I use bcdice | 1,725 |
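Since the root cause here turned out to be the connection string, a minimal isolated connectivity check can rule other modules out (the URL below is a placeholder):

```js
// Verify the MongoDB connection independently of any other packages.
const { MongoClient } = require('mongodb');
const url = 'mongodb://localhost:27017'; // replace with your own URI, e.g. an Atlas mongodb+srv:// string
MongoClient.connect(url, { useUnifiedTopology: true }, (err, client) => {
  if (err) return console.error('connection failed:', err.message);
  console.log('connected successfully');
  client.close();
});
```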
null | [
"change-streams"
] | [
{
"code": "",
"text": "Hi everyone, like the title, I have found a bug in this package. It essentially uses mongodb tailable cursors to provide a message queue.It works well in normal environments but when using it with clusters there seems to be some data leaks and some messages are not delivered as expected.Does capped collections with tailable cursors have known bug when running with clusters?Has anyone any idea about how this could be fixed?Thanks",
"username": "Daniel_Lando"
},
{
"code": "",
"text": "Welcome to the community @Daniel_Lando!It works well in normal environments but when using it with clusters there seems to be some data leaks and some messages are not delivered as expected.I’d suggest building on the MongoDB Change Streams API rather than directly using tailable cursors. Change Streams provide a supported API for resumable event streams from replica sets and sharded clusters.Tailing the oplogs for a sharded cluster adds some complexity. For more details, see the two part series Tailing the MongoDB Oplog on Sharded Clusters and Pitfalls and Workarounds for Tailing the Oplog on a MongoDB Sharded Cluster. Note that these posts predate the Change Streams API, which was introduced in MongoDB 3.6 to provide a supported and scalable interface.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "watch",
"text": "Hi @Stennie_X! Thanks so much for your answer. I will try to implement that solution using watch isntead of tailable cursors, will update you on the result",
"username": "Daniel_Lando"
},
{
"code": "The $changeStream stage is only supported on replica sets",
"text": "I think I’m missing something or maybe I haven’t explained the problem well.When I try your solution I get this error:The $changeStream stage is only supported on replica setsI don’t use replica sets on my db, when I speak about clusters I mean nodejs clusters module (multiple instances of the same process using the same mongodb instance).So back to my question in such environment is there the possiblity that some messages are not delivered using tailable cursors? If so, is there any other solution?",
"username": "Daniel_Lando"
},
{
"code": "cluster",
"text": "Hi Daniel,Thanks for clarifying your use of “clusters”. I assumed in context that you were referring to a sharded cluster, since that is a more challenging environment for working with tailable cursors. Deployment type (standalone, replica set, or sharded cluster) and your version of MongoDB server would be definitely helpful details to include for future questions.I’d still suggest using the Change Streams API, although the issue with your message queue is more likely related to concurrency than tailable cursors.Can you provide more detail on how you are using the Node cluster? Do you have multiple workers watching and processing the same events?The $changeStream stage is only supported on replica setsChange streams require a replica set because data change events are based on the replica set oplog. However, you can enable this on a single node (if you don’t have any requirements for redundancy or failover) by converting a standalone to a single node replica set.To do so, follow the tutorial to Convert a Standalone to a Replica Set without adding any additional members.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "findvar lastId = new mongodb.ObjectId()\ncollection.find({_id: {$gt: lastId}})\nlastId",
"text": "Hi @Stennie_X,Thanks again, I have to inform you that I have fixed my problem, It was a bug in the package query used in the find method. On startup it was using something like:This was working as expeted in normal environment but with clusters seems this cause some data leaks. Fixed by creating a method to fetch last inserted document in the capped collection and using that id as lastId Thanks for your support, have a good day!",
"username": "Daniel_Lando"
}
] | Mongodb pub/sub using Tailable Cursors in Nodejs with clusters | 2020-04-20T16:56:19.765Z | Mongodb pub/sub using Tailable Cursors in Nodejs with clusters | 2,451 |
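A sketch of the fix Daniel describes, seeding lastId from the newest document actually present in the capped collection rather than from a fresh ObjectId:

```js
// Capped collections preserve insertion order, so a reverse $natural scan
// returns the most recently inserted document.
async function getLastId(collection) {
  const [latest] = await collection
    .find({})
    .sort({ $natural: -1 })
    .limit(1)
    .toArray();
  return latest ? latest._id : null; // null means the collection is empty
}
```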
null | [
"replication",
"sharding"
] | [
{
"code": "",
"text": "How/where do you set the # of replicas/copies of data in a sharded replica? I want high write performance and less redundancy.",
"username": "Matthew_Zimmerman"
},
{
"code": "",
"text": "I don’t believe the documentation is clear here as I couldn’t figure this out for weeks (doing other things along the way too of course.)The answer is to create additional shards over additional replicasets. Each replicaset can then be assigned as a shard. So the # of copies you’re holding is defined in the replicaset. There is no automated “hey, we lost this shard and it held the second copy of record 10. Now there’s only one record 10, so copy it to another shard.” So the # of shards represent the number of possible buckets the record can go into. Additionally, the # of copies of that shard is handled by the underlying replicaset configuration of the shard.",
"username": "Matthew_Zimmerman"
},
{
"code": "",
"text": "Hi Matthew,Thank you for the feedback, I will pass that onto the documentation team.Documents in a sharded collection are distributed based on a user-defined shard key index where each document is associated with a single shard. You are correct that the replication factor is determined by the replication set configuration (there is no separate configuration for this).The minimum replica set size is a single member (typically only for development or testing), with 3 or more replica set members for a production deployment.Regards,\nStennie",
"username": "Stennie_X"
}
] | Number of replicas | 2020-04-04T21:44:07.342Z | Number of replicas | 1,417 |
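In code form, the relationship Matthew describes looks like this in mongosh (hostnames hypothetical): each shard is registered as a replica set, and that replica set's member count is what determines how many copies of the shard's data exist.

```js
// Two shards, each backed by a 3-member replica set => 3 copies per shard.
sh.addShard("shardA/hostA1:27017,hostA2:27017,hostA3:27017");
sh.addShard("shardB/hostB1:27017,hostB2:27017,hostB3:27017");
```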
null | [
"golang"
] | [
{
"code": "",
"text": "Hello,\nI create a collection with the _id in string, normally the _id is 3 characters. When I search the doc by _id, by the function FindOne, I can always get result. But we I find and update a doc by the function FindOneAndUpdate, with the same ID, I will get error message “an ObjectID string must be exactly 12 bytes long (got 3)”. The update operation is just to change a counter of the doc, and I am sure that the counter has been changed successfully, as I can check the changes happened from compass. So the things looks weird, the value changed successfully but db return an error.The version of my database v3.6.17.Can anyone give some hint to fix the problem?Regards,James",
"username": "Zhihong_GUO"
},
{
"code": "primitive.ObjectID",
"text": "Hi,Can you give us a code sample to work with? Three characters is not a valid ObjectID, so my guess is that the update is executing successfully, but you’re decoding the result into a struct where the field type is primitive.ObjectID, which fails. A code sample would help to figure out the root cause, though.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "//The code here is the API that will access db by id in string\nfunc FindAndUpdateDocByStringID(coll *mongo.Collection, id string, update bson.M, result interface{}) error {\n\tprefix := coll.Name()\n\terr := coll.FindOneAndUpdate(context.TODO(), bson.D{{Key: \"_id\", Value: id}}, update).Decode(result)\n\tif err != nil {\n\t\tif strings.Contains(err.Error(), NoDocument) {\n\t\t\treturn errors.NotFound(prefix+\"_not_found\", err.Error())\n\t\t}\n\t\treturn errors.InternalServerError(prefix+\"_internal_error\", err.Error())\n\t}\n\treturn nil\n}\n\n//The code here is the calling of the API\n\tupdate := bson.M{\"$inc\": bson.M{\"nbr_details_views\": 1}}\n\tmodelPlace := Place{}\n\terr := dboperate.FindAndUpdateDocByStringID(collection, in.Value, update, &modelPlace)\n\n//The code here is the definition of the Place\ntype Place struct {\n\t// Id of the Place\n\tID string `json:\"id,omitempty\" bson:\"_id,omitempty\"`\n\t// UserID of the owner of the Place\n\tUserID primitive.ObjectID `json:\"user_id,omitempty\" bson:\"user_id,omitempty\"`\n\t// City of the Place\n\tCity string `json:\"city,omitempty\" bson:\"city,omitempty\"`\n\t// Description of the Place (long text)\n\tDescription string `json:\"description,omitempty\" bson:\"description,omitempty\"`\n\t// Photos lists the available photos URL\n\tPhotos []string `json:\"photos,omitempty\" bson:\"photos,omitempty\"`\n\t// Nbr of views on detail this place\n\tNbrDetailsViews int32 `json:\"nbr_details_views,omitempty\" bson:\"nbr_details_views,omitempty\"`\n}\n",
"text": "Hello Divjot, nice to meet you again. I hope you are doing well and are safe.",
"username": "Zhihong_GUO"
},
{
"code": "DecodeUserIDPlaceprimitive.ObjectIDDecodeDecodeBytesbson.Raw",
"text": "This error comes from the part of the code that tries to decode strings as ObjectIDs, so my guess is that it’s coming from the Decode call when decoding the UserID field as that’s the only field in the Place struct of type primitive.ObjectID. If you can easily reproduce this, it would be really helpful if you could replace the Decode call with DecodeBytes and print the result. That will return a bson.Raw that shows the exact document being returned by the server and can show us which field is causing the error.– Divjot",
"username": "Divjot_Arora"
}
] | FindOneAndUpdate by a string _id return error: an ObjectID string must be exactly 12 bytes long | 2020-04-19T11:40:41.073Z | FindOneAndUpdate by a string _id return error: an ObjectID string must be exactly 12 bytes long | 7,232 |
null | [] | [
{
"code": "",
"text": "Thanks to everyone that attended our first Spanish Virtual User Group about PyMongo queries. For everyone that missed the meeting 16 April 2020 here’s the recording: - YouTubeHuge “Gracias” to @valerybriz for the presentation. You’re a true community champion!",
"username": "Sven_Peters"
},
{
"code": "",
"text": "Thank you! for the opportunity to share! ",
"username": "valerybriz"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Virtual Meetup: PyMongo y algunas oscuras configuraciones | 2020-04-22T07:26:48.308Z | Virtual Meetup: PyMongo y algunas oscuras configuraciones | 3,052 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.2.6 is out and is ready for production deployment. This release contains only fixes since 4.2.5, and is a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Dima_Agranat"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.6 is released | 2020-04-21T08:13:33.878Z | MongoDB 4.2.6 is released | 2,761 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 3.6.18-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 3.6.17. The next stable release 3.6.18 will be a recommended upgrade for all 3.6 users.Fixed in this release:3.6 Release Notes | All Issues | DownloadsAs always, please let us know of any issues.– The MongoDB Team",
"username": "Dima_Agranat"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 3.6.18-rc0 is released | 2020-04-21T10:01:55.391Z | MongoDB 3.6.18-rc0 is released | 1,506 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "The company I work for is successful. We reach the limit of a single MongoDB.Up to now we have several hundred customers and all data gets stored in one MongoDB.I see two solutions:What are the pros/cons of both solutions? What do you suggest?",
"username": "Thomas_Guttler"
},
{
"code": "",
"text": "Hello Thomas.We reach the limit of a single MongoDBYou mean that reach the limit of O/S or hardware size. There is no limit in WiredTiger.For sharding or lot of single customer DBs it will depends of the characteristics of your workload and purposes of your main queries.Regards,\nAlexandre Araujo",
"username": "Alexandre_Araujo"
},
{
"code": "",
"text": "Hi Alexandre,thank you for your answer. Good to know that there is no limit it WiredTiger.Yes, we reach the limit of the virtual machine we use.You say it depends on the characteristics of your workload and purposes of your main queries.Could you please elaborate?When is sharding useful, and when is N mongoDBs useful?Regards,\nThomas Güttler",
"username": "Thomas_Guttler"
}
] | One big DB or several small DBs? | 2020-04-17T18:33:20.079Z | One big DB or several small DBs? | 1,499 |
null | [
"node-js"
] | [
{
"code": "const db = mongoskin.db('mongodb://@localhost:27017/test')\nconst id = mongoskin.helper.toObjectID\napp.post('/collections/:collectionName', (req, res, next) => {\n pino(req,res);\n req.log.info('pino output');\n req.collection.insert(req.body, {}, (e, results) => {\n if (e) return next(e)\n res.send(results.ops)\n })\n})\n",
"text": "I am having problems with this codeNode.js POST methodI was looking at insert documentation but I am struggling to understand .ops.\nWhee does it come from?",
"username": "Milenko_Markovic"
},
{
"code": "opsresultsres.send(results.ops)insertWriteOpResult",
"text": "ops is the insertWriteOpResult object’s property. The results in the code res.send(results.ops) refers to insertWriteOpResult.",
"username": "Prasad_Saya"
}
] | Node.js callback with MongoDB | 2020-04-22T06:52:59.405Z | Node.js callback with MongoDB | 1,917 |
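A short sketch of where ops appears with the Node.js driver 3.x (the documents here are hypothetical):

```js
// insertMany's callback receives an insertWriteOpResult; result.ops holds the
// documents as they were inserted, including generated _id values.
collection.insertMany([{ a: 1 }, { a: 2 }], (err, result) => {
  if (err) throw err;
  console.log(result.insertedCount); // 2
  console.log(result.ops);           // the inserted documents, with their _ids
});
```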
null | [
"graphql",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "Hello,\nwe are using the Cloud-Instance on realm. On some GraphQL query requests we get the error message “The requested service is temporarily unavailable.” with the hint “GitBook”.\nIn the Realm-Logs there is an entry “Realm Object Server has started and is listening on http://0.0.0.0:9080” then - it seems that there is a new instance startet.Following Questions for this:best regardsVolkhard",
"username": "Volkhard_Vogeler"
},
{
"code": "",
"text": "Hi Volkhard,The documentation link you found is for the on-premise version of Realm Object Server, which is no longer distributed for new installations (or relevant for troubleshooting this issue on Realm Cloud).The “503: Service Temporarily Unavailable” message is a general error code that will need investigation. For example, this may be a transient issue, something specific to your instance, or perhaps the result of excessive resource consumption for some GraphQL queries.Please create a support case for the team to investigate. It would be helpful if you can include an example of a query and date/time when the issue was observed.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Same error here… Support isn’t helping. We are about to go productive. Are you guys serious?",
"username": "Etienne_Hanser"
},
{
"code": "",
"text": "Hi @Etienne_Hanser,Reviewing your support case, I can see the team started investigating and corresponding the same day you filed your issue.I appreciate your frustration in the time to resolve this issue, but investigation of resource contention for shared infrastructure can take longer than dedicated deployments. GraphQL queries can be resource intensive, but the impact depends on your specific queries and workload.I noticed there were some adjustments to increase allocated resources for your cluster, but this wasn’t communicated to you as clearly as it could have been. When the team member was asking if your issue was resolved, they should have mentioned that they wanted to confirm the results of some resource adjustments they were monitoring.If you are preparing to go live with a production workload, I’d also suggest doing come capacity planning and consider if you have the right plan for your requirements. The Realm Cloud Standard plan ($30/month) is based on multi-tenant/shared resources, so performance is less predictable and scalable than a dedicated deployment.The Standard plan also does not include an SLA or extended support coverage, so you need to have realistic expectations around support response time. We have been ramping quite a few new team members and the support experience should be significantly improving.However, we definitely appreciate any feedback on how we can improve.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | GraphQL: error "The requested service is temporarily unavailable." on some queries | 2020-04-16T08:29:43.857Z | GraphQL: error “The requested service is temporarily unavailable.” on some queries | 4,176 |
null | [
"transactions"
] | [
{
"code": "",
"text": "Hi, does mongo support distributed transactions? We are trying to implement transaction management across different data stores that may include an RDBMS and MongoDb but I could not find any Mongodb class implementation using XADataSource.",
"username": "Aravindan_Varadan"
},
{
"code": "",
"text": "Hi,MongoDB supports multi-document ACID transactions within a single replica set (4.0+) or sharded cluster (4.2+).XA transactions are not a current feature, but you could raise this suggestion on the MongoDB Feedback site for others to watch & upvote.Note that XA transactions have different semantics and considerations, as per this blog post from Paul Done (one of our senior Solution Architects): XA Distributed Transactions are only Eventually Consistent.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Any update on XA support? | 2020-04-22T05:51:12.483Z | Any update on XA support? | 4,030 |
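For comparison with XA, a minimal sketch of MongoDB's native multi-document transaction API in the Node.js driver (3.3+; requires a replica set, collection names hypothetical):

```js
// All writes inside the callback commit or abort together.
const session = client.startSession();
try {
  await session.withTransaction(async () => {
    await db.collection('orders').insertOne({ item: 'abc' }, { session });
    await db.collection('inventory').updateOne(
      { item: 'abc' }, { $inc: { qty: -1 } }, { session });
  });
} finally {
  await session.endSession();
}
```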
null | [
"dot-net"
] | [
{
"code": "var filter = Builders<Bsondocument>.Filter.In(\"idcategoria\", new [] { 139,241 } );\ncanal_1 \n nichos\n categorias \n idcategoria=135\n descricao=loren ipsun\n\n subcategorias\n idcategoria=139\n descricao=loren ipsun \n\n\ncanal_2\n nichos\n categorias\n idcategoria=200\n descricao=loren ipsun\n\n subcategorias\n idcategoria=241\n descricao=loren ipsun . . . \n",
"text": "I’m using the latest C # driver. I want to filter by idcategoria, regardless of level.\nI´m trying:but don´t work.\nMy database looks like:does anyone have an idea?",
"username": "Carlos_Coletti"
},
{
"code": "var filter = Builders<Bsondocument>.Filter.In(\"canal_1.nichos.categorias.idcategoria\", new [] { 139,241 } );",
"text": "this way it works:var filter = Builders<Bsondocument>.Filter.In(\"canal_1.nichos.categorias.idcategoria\", new [] { 139,241 } );\nbut I believe there are better ways",
"username": "Carlos_Coletti"
}
] | Driver C# - Filter by category, regardless of level? | 2020-04-22T03:13:08.211Z | Driver C# - Filter by category, regardless of level? | 1,428 |
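For reference, the working C# filter above corresponds to this raw MQL query in the shell (the collection name is hypothetical); dot notation walks through the nested documents to the idcategoria field:

```js
db.canais.find({ "canal_1.nichos.categorias.idcategoria": { $in: [139, 241] } });
```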
null | [
"atlas"
] | [
{
"code": "",
"text": "Hi all,I’ve whitelist only my IP address and my application couldn’t connect to mongod atlas.But if I’ve whitelist ‘Allow from anywhere’, it works perfectly which is not my intention to ‘Allow from anywhere’.How do I make only my IP address works? I’ve tried added both public and private IPv4 address and it still doesn’t work.Please help. Thank you.",
"username": "why_npnt"
},
{
"code": "dig +short myip.opendns.com @resolver1.opendns.com",
"text": "First, if you can connect when Allow from anywhere, then the cluster is working file. That’s good to know.If you cannot connect when you white list your address, then probably you do not white list the correct address. If you have a VPN you must white list the address of the VPN that connects to the non-VPN part of the world. If you are behind a NAT network you must white list the address of the last gateway. You may try\ndig +short myip.opendns.com @resolver1.opendns.com to get your visible IP. The site https://whatismyipaddress.com/ may also provide you with this information.",
"username": "steevej"
},
{
"code": "",
"text": "I’m not using any VPN. I’m not using NAT network but I’m not sure if my ISP is using NAT network.\nI’ve used the public address from ‘https://whatismyipaddress.com/’ but I couldn’t connect except 0.0.0.0 ‘Allow from anywhere’.I don’t have dig in my git bash windows.",
"username": "why_npnt"
},
{
"code": "nslookup myip.opendns.com resolver1.opendns.com",
"text": "Do the same thing with nslookup. No web proxy either ?If ‘add this ip address’ doesn’t work then it is likely your ISP is doing some nasty.nslookup myip.opendns.com resolver1.opendns.com",
"username": "chris"
},
{
"code": "Resolve-DnsName -Server resolver1.opendns.com myip.opendns.com",
"text": "Also powershell version:Resolve-DnsName -Server resolver1.opendns.com myip.opendns.com",
"username": "chris"
},
{
"code": "",
"text": "Server: [resolver1.opendns dot com]\nAddress: 208.67.xxx.xxxNon-authoritative answer:\nName: [myip.opendns dot com]\nAddress: 112.199.xxx.xxxI have two different IP address. What does this means?\nI’ve whitelist 112.199.xxx.xxx but not the one above.Did Window shell\nOnly 1 row appear. Resolver1 is not appearing.\nName Type TTL Section IPAddressmyip.opendns.com A 0 Answer 112.199.xxx.xxx",
"username": "why_npnt"
},
{
"code": "M0M2M5",
"text": "The first address from nslookup is the ip addres of resolver1.opendns.com.Yes the 112.199.xxx.xxx is the one you are interested in.If that is the one that you’ve whitelisted and it doesn’t work, you should have a chat with your ISP.The connecting IP would be in mongo logs. But not available in Frre/Shared TiersFeature unavailable in Free and Shared-Tier Clusters\nThis feature is not available for M0 (Free Tier), M2 , and M5 clusters. To learn more about which features are unavailable, see Atlas M0 (Free Tier), M2, and M5 Limitations.",
"username": "chris"
}
] | Whitelist my IP Address in Network Access not working | 2020-04-20T16:57:21.573Z | Whitelist my IP Address in Network Access not working | 7,322 |
null | [
"aggregation"
] | [
{
"code": "{\"_id\":\"123\",\n \"field0\":10,\n \"field1\":10,\n \"Array\":[{\n \"field2\":\"test\",\n \"field3\":20,\n \"field13\": field1*field3\n }]\n }\n[{$addFields: {\"Array.field13\": {$multiply:[\"$Array.0.field3\",\"$field0\"]}}}][{$addFields: {\"Array.field13\": {$multiply:[\"$field1,\"$field0\"]}}}]",
"text": "Hi MongoDB community,\nI’m pretty new to mongoDB, so let me apology first for any silly issues I’ll raise to you.I’m trying to add a ‘calculated’ field (field13 = field1 * field3) into an array (Array) of documents.\nThat’s the final document I’m trying to get:To do so, I’m using $addfields and $multiply in an aggregation stage:[{$addFields: {\"Array.field13\": {$multiply:[\"$Array.0.field3\",\"$field0\"]}}}]Despite many attempts, I always get the error: $multiply only supports numeric types, not array. On the contrary, if I use[{$addFields: {\"Array.field13\": {$multiply:[\"$field1,\"$field0\"]}}}]\neverything works fine.What am I doing wrong in referring to the value of field3?Thank you in advance for any help.MC",
"username": "Matteo_Capuzzo"
},
{
"code": "$arrayElemAt",
"text": "You cannot reference array elements by “.x” type syntax - you need to use $arrayElemAt expression.",
"username": "Asya_Kamsky"
},
{
"code": "$addFields: {\"Array.field13\": {$multiply:[{$arrayElemAt:[\"$Array.field3\",0]},\"$field0\"]}}",
"text": "Hi Aysa,Finally, I got it!$addFields: {\"Array.field13\": {$multiply:[{$arrayElemAt:[\"$Array.field3\",0]},\"$field0\"]}}Thanks,Matteo",
"username": "Matteo_Capuzzo"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $multiply only supports numeric types, not string | 2020-04-20T16:55:56.571Z | $multiply only supports numeric types, not string | 7,460 |
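A sketch generalizing the accepted answer from element 0 to every array element, using $map with $mergeObjects (MongoDB 3.6+); per the sample document, field13 = field1 × field3:

```js
db.coll.aggregate([
  { $addFields: {
      Array: { $map: {
        input: "$Array",
        as: "el",
        in: { $mergeObjects: [
          "$$el",                                                  // keep existing fields
          { field13: { $multiply: ["$$el.field3", "$field1"] } }   // add the computed one
        ] }
      } }
  } }
]);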
[] | [
{
"code": "",
"text": "\nIMG_20200420_111313_0353024×3024 674 KB\nMine looks like latte in a pint glass. How’s your Monday going?",
"username": "Jamie"
},
{
"code": "",
"text": "Unfortunately all the days have blurred together into one giant swirl. But now that you mention that it’s Monday, that explains why a lot of my development systems are having issues after a weekend of normal maintenance. ",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "@Doug_Duncan come on!\nWhat would we have if suddenly well tested software, is smoothly run thru our CD? No last second config changes done after testing from the best friend of… We never would know that we just passed a maintenance weekend…\nFive years back in times of dev and ops I lost already my Saturday Indicator. These days dev threw a build on a Friday afternoon over the fence and on Saturday you got the nice wakeup calls, …So happy that my Monday is almost passed (37min left) - let’s see what we will find tomorrow All the best\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "@michael_hoeller, when you said that Monday is almost passed, I was thinking work hours and thought it strange with us being in much different time zones that we would end our work day at the same time. I realize now however that it is close to midnight there in Germany and Monday has truly come to an end for you.One must ask why you are still up at this time. I hope you’re not working this late and that you are up enjoying a nice quiet evening.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hello @Doug_DuncanOne must ask why you are still up at this time. I hope you’re not working this late and that you are up enjoying a nice quiet evening.a very valid remark.I actually was working close before the post. But it is not as bad as it might sound, I try to keep my day structured. As a remote worker and independent consultant I have to be flexible. I have customers in Germany and Oversea so I have to deal with timezones (-7 and -10 hours). Beside that I take some time off for my family, “homeschooling” and workout. So at the end not a standard setup but a well working for all in the loop.Cheers\nMichael",
"username": "michael_hoeller"
},
{
"code": "",
"text": "My ‘working hours’ have definitely shifted significantly in the After Days. I spend my morning and early afternoon hours taking meetings, then the afternoon and evening with the family, doing homeschool and chores and cooking dinner, then later evening hours back at work, getting things done with minimal distractions, and finally bed – which is regularly disrupted by my toddler who still hasn’t figured out how to consistently sleep yet. I’m tired, y’all. ",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Signs of a Monday | 2020-04-20T18:15:08.610Z | Signs of a Monday | 4,423 |
|
null | [] | [
{
"code": "",
"text": "Hi all,I’m hosting a virtual roundtable discussion this week to talk about the good, the bad, and the ugly of meetup group events. Please join me for a robust discussion around what’s great and not-so-great about user groups. I’m hopeful that this will be a fruitful discussion of fun stories, lessons learned, and encouragement for all our members to consider joining or even starting their own user group in their region. This will also be a great place to brainstorm new ideas for virtual sessions that you’d like to see presented in our Global Virtual Community group’s weekly sessions.The format of this session will be informal - like an unconference session. If you’ve never attended an unconference, the idea is that someone volunteers to facilitate discussion amongst the other attendees. There is no set agenda, so the flow of ideas and conversation happens more naturally. So, bring your ideas, your stories, your gripes, and your celebrations to share with the group.Register:About this event Sr. Community Manager at MongoDB, Jamie Langskov, leads this informal unconference-style discussion on all things meetups. Come chat with us live while we explore what makes a meetup event great, what to avoid, lessons learned from...Even if you can’t attend, please give a shout out about this event on your various social channels to get the word out and encourage our fellow MongoDB’ers to join us.See you there!Jamie",
"username": "Jamie"
},
{
"code": "",
"text": "Just curious before I sign up. I do not have a lot of meetup experience. I have attended a few meetups in the past but stopped going; usually because I find them too late in the evening (a big win for the virtual meetups). Plus the events themselves were not what I was expecting.\nTherefore I just wanted to ask before joining the round table if more experience would be needed to participate in the discussion?",
"username": "Natac13"
},
{
"code": "",
"text": "Hi @Natac13 from the sounds of things, this is going to be an open forum and those in attendance can participate freely. My guess is that they would welcome the unexperienced voice as much as they would those with deep experience.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi @Natac13, @Doug_Duncan is exactly right. We would benefit from your presence in this conversation as we’re looking to foster a discussion around meetups and your experience is just as important as any meetup connoisseur. In fact, I would say that your comments reflect exactly the kinds of feedback we’re looking to discuss. If I were the organizer of those events, I’d want to know more. The timing isn’t great - how do we accommodate? The expectations weren’t right - how do we communicate better? We’d absolutely love for you to attend this session.",
"username": "Jamie"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Virtual Roundtable Discussion: The Best and Worst of Meetup Events | 2020-04-20T21:40:25.293Z | Virtual Roundtable Discussion: The Best and Worst of Meetup Events | 4,276 |
null | [] | [
{
"code": "",
"text": "Hello, I’m having an issue downloading the windows software. I select the windows option, but no package name appears in the list. I’m clearly doing something wrong… please can someone help? Thanks.",
"username": "Claire_39473"
},
{
"code": "",
"text": "Downloading what exactly?",
"username": "007_jb"
},
{
"code": "",
"text": "The mongo shell… this is from the instructions:Chapter 0: SetupWhat is the Mongo Shell ?The mongo shell is an interactive JavaScript interface to MongoDB. You can use the mongo shell to query and update data as well as to perform administrative operations.In this course, we will be using the mongo shell to connect to our Atlas Cluster, practice running queries, and to interact with the MongoDB database.Installing Mongo Shell on WindowsNow we are going to download the MongoDB Enterprise server from the MongoDB download center . MongoDB Enterprise server is bundled with a number of executables that are part of the MongoDB ecosystem, including the mongo shell .1. Go to the MongoDB Download Center → select Windows x64 as your operating system from the dropdown menu then download the .msi file .",
"username": "Claire_39473"
},
{
"code": "",
"text": "Can you share a screenshot of what you’re seeing?",
"username": "007_jb"
},
{
"code": "",
"text": "\nimage1366×768 100 KB\n",
"username": "Claire_39473"
},
{
"code": "",
"text": "hope that is clear? I select windows OS but no package list appears and I cannot continue.",
"username": "Claire_39473"
},
{
"code": "",
"text": "That’s clear indeed!Try from a different browser.",
"username": "007_jb"
},
{
"code": "",
"text": "Yay, chrome seems to be working, thank you!",
"username": "Claire_39473"
},
{
"code": "",
"text": "",
"username": "Shubham_Ranjan"
}
] | Download for Windows | 2020-04-21T16:08:27.501Z | Download for Windows | 1,865 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "There is no QueryDocument in the MongoDB.Driver.QueryDocument namespace…Represents a BSON document that can be used where an IMongoQuery is expected.Why??",
"username": "Francis_Beaulieu"
},
{
"code": "Namespace: MongoDB.Driver\nAssembly: MongoDB.Driver.Legacy (in MongoDB.Driver.Legacy.dll) Version: 2.10.0+569905ff5e778c38ea19d9d0392496a83e1704ed\n<PackageReference Include=\"mongocsharpdriver\" Version=\"2.10.3\" />\ncsproj",
"text": "Hi @Francis_Beaulieu, welcome!There is no QueryDocument in the MongoDB.Driver.QueryDocument namespace…The assembly information on the API doc states:This means that this is part of the legacy package of MongoDB .NET/C# driver. You need to import the NuGet package mongocsharpdriver into your project as below example:You’re likely to have imported the new driver package MongoDB.Driver into your csproj file instead. This would be the recommended package if you’re starting a new project.Regards,\nWan.",
"username": "wan"
},
{
"code": "",
"text": "Thanks a lot Wan!It’s a little bit annoying all those packages, I don’t know why they mix us with so many packages…!?\n\nimage779×144 10.9 KB\nAnyway, in the MongoDB documentation, no mention of the right package to import, and in the NuGet package manager the first one if we search mongodb, is the other one… Should be improved may be?Regards ans Thanks again! ",
"username": "Francis_Beaulieu"
}
] | C# Driver QueryDocument? | 2020-04-20T21:47:47.929Z | C# Driver QueryDocument? | 3,042 |
null | [] | [
{
"code": "",
"text": "Hi all!\nI am trying to store documents in a collection which is coming from an api request. The size to total documents is 500 but in my mongodb it is saving only 50 of them. I tried to use createColletion method with the size of 500 but still saving 50 documents. How can I store all of 500 documents in the collection?",
"username": "Ali_Abbas"
},
{
"code": "",
"text": "Can you post the code you are using?",
"username": "Joe_Drumgoole"
},
{
"code": "db.createCollection( \"myColl\", { size: 500, ... } );\nsizesizesizesizesizemax",
"text": " Hi @Ali_Abbas and welcome to the community.I tried to use createColletion method with the size of 500 but still saving 50 documents.It sounds like you’re trying to do something like the following:This is just an assumption however since you didn’t provide any code examples.One thing to note is that size is the size in bytes for a capped collection. This has no meaning for a regular collection.The following comes from the db.createCollection() documentation for the size key:Optional. Specify a maximum size in bytes for a capped collection. Once a capped collection reaches its maximum size, MongoDB removes the older documents to make space for the new documents. The size field is required for capped collections and ignored for other collections.If you’re trying to create a capped collection, then you will need to determine your average document size and then multiply that by 500 to get an approximate value for size to hold the 500 documents. You can combine the size key with the max key to make sure you don’t have more than 500 documents in the collection, but you can’t guarantee a minimum number of documents unless you were to oversize the collection.If you’re trying to create a regular collection then you don’t need to do anything special to store 500 documents in the collection. Just start writing to it and you will be able to store as many documents as you have space available on the hard drive.If I’ve misunderstood what you were asking, please provide more information so that we can provide better answers.",
"username": "Doug_Duncan"
}
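A minimal mongo shell sketch of the capped-collection case described above (the ~1 KB average document size behind the 512000-byte size value is an assumption for illustration, not a figure from this thread):

// size is required for capped collections and is given in bytes;
// max additionally caps the collection at 500 documents.
db.createCollection("myColl", { capped: true, size: 500 * 1024, max: 500 });

// Once 500 documents exist, each new insert evicts the oldest document.
db.myColl.insertOne({ item: "abc" });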
] | Store more documents in collection | 2020-04-21T06:17:40.322Z | Store more documents in collection | 1,781 |
null | [
"aggregation",
"compass"
] | [
{
"code": "",
"text": "Hi everyoneI was wondering how to implement the $nor operator in MongoDB Compass aggregation pipeline.Can anyone help me with this issue please?Sincerely\nEzequias Rocha",
"username": "Ezequias_Rocha"
},
{
"code": "{\n\t\"_id\" : 1,\n\t\"item\" : \"abc\",\n\t\"price\" : 10,\n\t\"fee\" : 2,\n\t\"date\" : ISODate(\"2014-03-01T08:00:00Z\")\n}\n{\n\t\"_id\" : 2,\n\t\"item\" : \"jkl\",\n\t\"price\" : 20,\n\t\"fee\" : 1,\n\t\"date\" : ISODate(\"2014-03-01T09:00:00Z\")\n}\n{\n\t\"_id\" : 3,\n\t\"item\" : \"xyz\",\n\t\"price\" : 5,\n\t\"fee\" : 0,\n\t\"date\" : ISODate(\"2014-03-15T09:00:00Z\")\n}\ndb.sales.aggregate(\n [\n {\n $match: {\n $nor: [\n { price: 20 },\n { fee: 2 }\n ]\n }\n }\n ]\n)\n",
"text": "Hi @Ezequias_Rocha!I don’t know if I understood correctly, but follow an example using a pipeline where $nor is applied to the $match stage. Consider the following documents in the sales collection:To search only documents where the price field is not equal to 20 or the fee field is not equal to 2, it would be as follows:I hope I have helped, otherwise detail your need a little more!",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "Hi @Leandro_DominguesIt could work but it is not working in MongoDB Compass. Could you check this out at this tool?Regards\nEzequias.",
"username": "Ezequias_Rocha"
},
{
"code": "mongo$match/**\n * query - The query in MQL.\n */\n{\n $nor: [ { \"price\": 20 }, { \"fee\": 2 } ]\n}\nExpected \"[\" or AggregationStage but \"{\" found.$or{\n $or: [ { \"price\": 20 }, { \"fee\": 2 } ]\n}",
"text": "The aggregation by @Leandro_Domingues works fine in the mongo shell. But, somehow it doesn’t in the MongoDB Compass (using version 1.19.6). I have the MongoDB version 4.2.3.The way the $match stage filter is specified in the Compass GUI is:There is an error: Expected \"[\" or AggregationStage but \"{\" found.The aggregation query with $or works fine in the Compass, e.g.,:",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Right @Prasad_SayaCould someone open an issue in MongoDB Jira please?It’s strange noone have already tested if before.Best wishes\nEzequias Rocha",
"username": "Ezequias_Rocha"
},
{
"code": "",
"text": "The issue: https://jira.mongodb.org/browse/COMPASS-4241",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Could someone open an issue in MongoDB Jira please?Hi Ezequias,FYI, you can also create bug reports in Jira. There’s single sign-on with the same account you are using for the community forum: https://jira.mongodb.org/browse/COMPASS.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This is fixed in the upcoming 1.21 release.",
"username": "Massimiliano_Marcon"
}
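For anyone on a Compass version without the fix, a sketch of a logically equivalent $match stage that avoids the $nor keyword entirely (by De Morgan's law, "not (price = 20 or fee = 2)" is the same as "price ≠ 20 and fee ≠ 2"), using the sample sales documents from above:

// Equivalent to $nor: [ { price: 20 }, { fee: 2 } ]
db.sales.aggregate([
  { $match: { price: { $ne: 20 }, fee: { $ne: 2 } } }
])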
] | How to use $nor operator in Compass Aggregation Pipeline | 2020-04-09T14:05:15.569Z | How to use $nor operator in Compass Aggregation Pipeline | 3,600 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "We have been asked to design a solution for a leading cellular company with more than 40 Million customers to cater to the need of showing up the call and charges details.Write :There would be a nightly batch refreshing call and charges details for all customers into Mongo DBRead:Mongo DB needs to support data for an online system.There are multiple billing cycle and the customer can choose to part of any billing cycle.The customer can change the billing cycle once year.RetentionData needs to be retained for 1 year and then data needs to be Purged.VolumetricsCallAvg Call items per day per customer = 20So for 365 days and for all customer line items = 20 365 40 = 292 BillionDataData usage line item per day per customer = 10So for 365 days and for all customer line items = 10 365 40 = 146 BillionHeaderCustomer_numberCustomer NameCustomer AddressCustomer Mobile NumberBill periodBill dateTarif Plan namePrevious DuesPaymentsCurrent ChargesTotal Amount DueDue DateSummary of Current ChargesSubtotalTaxesTotal Current ChargesCall DetailsDateTimeCalled NumberDurationUnitsAmountData Usage detailsDateTimeUnits in (MB)AmountPoints to Ponder",
"username": "Anindya_Mohanty"
},
{
"code": "$lookup_id",
"text": "Hey @Anindya_MohantyMy thought on your points to ponder, which ultimately depend on the access patterns of your application:",
"username": "Natac13"
}
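To make the volumetrics concrete, one common shape for this kind of data is the bucket pattern — a sketch only, with assumed field names (one document per customer per day, holding that day's ~20 call items):

// One bucket per customer per day keeps a day's call items together,
// so the nightly batch writes ~40M bucket documents instead of ~800M line items.
db.callDetails.insertOne({
  customerId: NumberLong(1234567890),
  day: ISODate("2020-04-20T00:00:00Z"),
  items: [
    { time: "09:41", calledNumber: "5550100", duration: 120, units: 2, amount: 1.5 }
    // ...up to ~20 items per day
  ]
});

// A TTL index on the bucket date covers the 1-year retention requirement:
// MongoDB purges expired buckets automatically (31536000 s = 365 days).
db.callDetails.createIndex({ day: 1 }, { expireAfterSeconds: 31536000 });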
] | Which Design pattern works best for the problem here | 2020-04-20T21:24:12.926Z | Which Design pattern works best for the problem here | 1,848 |
null | [
"change-streams"
] | [
{
"code": " test 1:SECONDARY> watchCursor = db.getSiblingDB(\"local\").oplog.rs.watch()\n2020-04-21T12:24:22.502+0530 E QUERY [js] Error: command failed: {\n \"operationTime\" : Timestamp(1587452057, 1),\n \"ok\" : 0,\n \"errmsg\" : \"$changeStream may not be opened on the internal local database\",\n \"code\" : 73,\n \"codeName\" : \"InvalidNamespace\",\n \"$clusterTime\" : {\n \"clusterTime\" : Timestamp(1587452057, 1),\n \"signature\" : {\n \"hash\" : BinData(0,\"AAAAAAAAAAAAAAAAAAAAAAAAAAA=\"),\n \"keyId\" : NumberLong(0)\n }\n }",
"text": "When I try to use change streams with local database, I get below error. Why is it not allowed?",
"username": "Akshaya_Srinivasan"
},
{
"code": "locallocaladminconfigoplog.rsMongo.watch()mongo",
"text": "Hi Akshaya,Change streams can’t be opened on the local database because it is an internal system database (as per the error message). You cannot open change stream cursors on the local, admin, or config system databases (or any collections in those databases).It looks like you are trying to open a change stream on the oplog.rs collection, which is the underlying collection used for change stream events.When you use the Change Stream API, you open change streams at a collection, database, or deployment level. The supported equivalent for what you are trying to do would be watching all changes for the deployment via Mongo.watch() in the mongo shell. You can also perform this operation from a supported MongoDB driver.Regards,\nStennie",
"username": "Stennie_X"
},
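A minimal mongo shell sketch of the supported deployment-level alternative mentioned above (requires a replica set; MongoDB 4.0+):

// Watch changes across all non-system databases instead of reading oplog.rs.
watchCursor = db.getMongo().watch();

// Block until the next change event arrives, then print it.
while (watchCursor.hasNext()) {
  printjson(watchCursor.next());
}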
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why change streams cannot be used with local database? | 2020-04-21T09:57:10.121Z | Why change streams cannot be used with local database? | 5,791 |
null | [
"app-services-user-auth",
"legacy-realm-cloud"
] | [
{
"code": "",
"text": "I am trying to use Firebase Authentication because username/password authentication is no longer recommended. The documentation says I have to get a public/private key pair (not sure what settings I should use). I need to call the private key with fs.readFileSync(’pathToMyPrivateKeyFile’);\nWhich gives me an error “no such file or directory”.\nI tried to use the absolute path and I tried to copy the key file in the func directory. Nothing works for me.",
"username": "Assocy_N_A"
},
{
"code": "",
"text": "Hi! I wrote the guide…For the keys, just use the default settings. It’s fine. More information can be found here https://www.ssh.com/ssh/keygen/.An absolute path wont work. Personally I have placed the key file in the root folder for the firebase cloud functions and then simply use the filename as the path provided to fs.readFileAsync. No such file or directory simply means you didn’t get the path right.Hope this helps!BTW… if any realm mods are reading this… where is the Realm swag I was promised for writing the article 2 years ago? ",
"username": "Simon_Persson"
},
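A minimal Node.js sketch of what that looks like inside the Cloud Functions folder (the id_rsa file name is an assumption — use whatever your generated key file is called):

const fs = require('fs');
const path = require('path');

// Resolve the key relative to this source file rather than the process
// working directory, which is the usual cause of "no such file or directory".
const privateKey = fs.readFileSync(path.join(__dirname, 'id_rsa'), 'utf8');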
{
"code": "",
"text": "BTW… if any realm mods are reading this… where is the Realm swag I was promised for writing the article 2 years ago? Hi @Simon_Persson,Sorry to hear your promised swag didn’t arrive. Can you send me a private message with more details on the arrangement so I can follow up?Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi Stennie! I don’t see a way to message you privately on these forums? Do you have an email I can reach out to?//Simon",
"username": "Simon_Persson"
},
{
"code": "",
"text": "Hi Simon,Private messaging is available for users of trust level 1 or higher (which you are).If you click on my username you should see a green button for “Message”. There are also messaging options if you visit your own user profile page.I’ll send you a note to get things started .Regards,\nStennie",
"username": "Stennie_X"
}
] | Firebase Authentication with Realm Cloud | 2020-04-04T21:44:25.276Z | Firebase Authentication with Realm Cloud | 5,857 |
null | [
"aggregation"
] | [
{
"code": "{\n \"_id\" : 2,\t\n \"items\" : [\n {\n \"id\": 1,\n \"name\": \"test\",\n \"value\": 5\n },\n {\n \"id\": 2,\n \"name\": \"test2\",\n \"value\": 10\n },\n {\n \"id\": 3,\n \"name\": \"test3\",\n \"value\": 15\n }\n ],\t\n};\n{\n \"_id\" : 2,\t\n \"items\" : [\n {\n \"id\": 2,\n \"name\": \"test2\",\n \"value\": 10\n },\n {\n \"id\": 1,\n \"name\": \"test\",\n \"value\": 5\n }, \n {\n \"id\": 3,\n \"name\": \"test3\",\n \"value\": 15\n }\n ],\t\n};\n{\"items.0\": <expression for update item with id equal to 0>}",
"text": "Hi all!Lately I’m using the aggregations functionality to update a document using the update command. I’m using MongoDB 4.2 version.My doc look like this:I want to update a specific value in the array items filtering by id, but I have the next requirement: different users can update this items and it can changes the position of them in any time, so after an user’s change the items can look like this:So, if two users launch a concurrent update about the same item, like item with id 2, one of them to update value and the other to change the position in the array, both operations must be success.Reviewing the documentation, I can get this behavior using the array expression operator, but the problem is I cannot update a single item in this array (or I have not found the way to do it), I must use the $map (or $reduce) operator to loop though the array, update the single item, and set the whole array to the field “items”.So the question is: there is any way to update only a single item in the array using aggregation expressions???Update the item using an expression like this:{\"items.0\": <expression for update item with id equal to 0>}will not work because other user can change the position in the array previously, so it’s not guaranteed the item with id equal to 0 is going to be in position 0.",
"username": "Juan_Antonio_Jimenez"
},
{
"code": "idupdateOnearrayFilters$",
"text": "So the question is: there is any way to update only a single item in the array using aggregation expressions???To update a single item (an embedded document’s field(s)) in the array, based on a condition(e.g., the id field), you can use arrayFilters or positional $ update operator with any of the update methods (e.g., updateOne).But, you can use an aggreation pipeline with the update operation as you are using the MongoDB version 4.2, but in this case it is much simpler using the arrayFilters or $ update operator.",
"username": "Prasad_Saya"
},
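Applied to the sample document above, a sketch of the arrayFilters form (setting value to 99 on the item whose id is 2 — the new value is just an example, and "coll" stands in for the real collection name):

// arrayFilters matches elements by content, not by position, so a
// concurrent reordering of the array does not break this update.
db.coll.updateOne(
  { _id: 2 },
  { $set: { "items.$[elem].value": 99 } },
  { arrayFilters: [ { "elem.id": 2 } ] }
);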
{
"code": "",
"text": "Thanks for you reply.Ok, I wrote that example because it’s simple, so in this case it’s more logical to use your answer. But in the case where we must use aggregations, like use conditional updates based on current field values, we have not other way to update the array items than replace it, or is there any?So, why is there not a positional operator like $ to update an element in the array using aggregations?",
"username": "Juan_Antonio_Jimenez"
},
{
"code": "$set$addFields$unset$project$replaceRoot$replaceWith$set$set$",
"text": "But in the case where we must use aggregations, like use conditional updates based on current field values, we have not other way to update the array items than replace it, or is there any?Pipeline provides additional functionality that cannot be achieved using the update operators. The pipeline allows only some of the aggregation pipeline stages - $set (or $addFields), $unset (or $project) and $replaceRoot (or $replaceWith). Though the update operator $set and the pipeline stage $set are named the same, they work in different ways and hence they have different purposes.The pipeline allows to transform the data and the resulting update happens upon the transformed data. There is no replacing operation happening in this case.So, why is there not a positional operator like $ to update an element in the array using aggregations?The array update operators, like the $, are specific to update operations. The aggregation pipeline allows different set of operators, which are more comprehensive and cover the functionality that is not available with the update operators.Reference: Updates with Aggregation Pipeline",
"username": "Prasad_Saya"
},
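For comparison, a sketch of the pipeline form described earlier, where $map rewrites only the matching element and passes every other element through unchanged (MongoDB 4.2+, same example values as above):

db.coll.updateOne(
  { _id: 2 },
  [ {
    $set: {
      items: {
        $map: {
          input: "$items",
          in: {
            $cond: [
              { $eq: [ "$$this.id", 2 ] },
              // Merge the change into the matching element...
              { $mergeObjects: [ "$$this", { value: 99 } ] },
              // ...and return every other element as-is.
              "$$this"
            ]
          }
        }
      }
    }
  } ]
);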
{
"code": "",
"text": "Ok, thanks for the info. Only one thing more.When you say this:The pipeline allows to transform the data and the resulting update happens upon the transformed data. There is no replacing operation happening in this case.What do you mean exactly? If in the pipeline I add/remove an element to the array, only the added/removed element is updated in Mongo, without touch the rest of element in the array?Thanks.",
"username": "Juan_Antonio_Jimenez"
},
{
"code": "",
"text": "Rest of the array remains same - only changes are updated. And, the update is atomic irrespective of the kind of update (change an element, or delete an element, or add elements, etc.) on the array.",
"username": "Prasad_Saya"
}
] | Update a single item in array using aggregations | 2020-04-20T09:40:17.516Z | Update a single item in array using aggregations | 4,321 |
[] | [
{
"code": "[root@lab mongo]# uname -a Linux lab 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux",
"text": "Hi Team, I am trying to compile v3.6.9-dbaas-testing branch as I want 3.6.9 version for my product on Centos7. I am following Build MongoDB From Source · mongodb/mongo Wiki · GitHub page for the prerequisites and steps.\nI am facing a problem where one time the compilation stops giving below error. However, if I try to compile same binary manually, it gets created without any error. PFA.\n\nimage3360×1868 1.72 MB\n--------------Some env info-------------\n[root@lab mongo]# uname -a Linux lab 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux\nimage3358×896 1.03 MB\nCould you please help me with resolving this?",
"username": "Harpinder_Kaur"
},
{
"code": "v3.6.9-dbaas-testingv3.6.9",
"text": "I am trying to compile v3.6.9-dbaas-testing branch as I want 3.6.9 versionHi Harpinder,Is there a reason you need to build from source? CentOS 7 is a supported platform for binary packages: Install MongoDB Community Edition on Red Hat or CentOS. I would recommend installing the latest version of the 3.6 production release series (currently 3.6.17) so you have the latest bug fixes and stability improvements. Minor releases do not include any backward-breaking changes, and there have been many improvements since MongoDB 3.6.9 was released in Nov, 2018.If you definitely want to build from source, the v3.6.9-dbaas-testing branch isn’t what you are looking for (it’s a testing branch). Releases are tagged, so you want to use the v3.6.9 tag: Release r3.6.9 · mongodb/mongo · GitHub.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "v3.6.9",
"text": "Thanks for the quick response, I’ll try the v3.6.9 tag and see if I face the same issue again.",
"username": "Harpinder_Kaur"
},
{
"code": "v3.6.9Mongo Ticket SERVER-36206--disable-warnings-as-errorsMongo Ticket SERVER-31439",
"text": "Hi Team, I am now building code from v3.6.9 tag.Problem::\nThe same issue as before:I am facing a problem where one time the compilation stops giving below error. However, if I try to compile same binary manually, it gets created without any error. PFA.\nimage3354×1522 1.21 MB\nI had to do below changes for resolving other issues faced while compilation:I had to include dummy version.json file, kept at root of code directory with below content as compilation was halting due to version not getting derived. Followed https://jira.mongodb.org/browse/SERVER-21317 for the workaroundAs per Mongo Ticket SERVER-36206, included --disable-warnings-as-errors flagPer Mongo Ticket SERVER-31439, added os.environ in PATH in SConstruct filePlease let me know how can I have smooth compilation?",
"username": "Harpinder_Kaur"
},
{
"code": "",
"text": "Hi, I have several comments here.First, please don’t use screenshots to post errors. It makes it much harder to, for instance, comment on a specific line of output.Second, would you please clarify why you are building the now quite old 3.6 release, rather than a more modern branch like v4.2? And, if there is a specific reason for building a v3.6 series release, why are you building the .9 version, rather than v3.6.17, which is the latest bugfix release in the v3.6 series? You definitely want to be running the newest bugfix release out of any branch.Third, there are several pieces of information that will be required to make forward progress here. At minimum, please tell us your operating system distro and version, the version of GCC that you are using, how you obtained the source code (e.g. git checkout, tarball from the downloads site, something else), any local edits to the source code, and the complete SCons invocation that you used to try and build.Fourth, the error shown in the screenshot is what I would expect to see if the build was cancelled with ^C or similar. No compiler or linker error has been shown. Is it possible that something else on the machine is interfering with the build? Could you be running out of memory or some other resource?I also have some comments on the various tickets you mentioned.https://jira.mongodb.org/browse/SERVER-21317 is fixed in v3.6.9, so you should not need to include a dummy version.json file. What error did you observe when you didn’t provide the version.json file? Note that you can also explicitly set a version by adding MONGO_VERSION=x.y.z on the command line. However, this should not generally be necessary.It is fine to use --disable-warnings-as-errors if you are using a newer compiler than the official compiler version. On MongoDB v3.6, that is GCC 5.4. Note though that we do no in house builds or tests of the v3.6 branch with newer compilers, and we don’t guarantee that builds using other compilers will work. If it is possible for you to build with GCC 5, that would be preferred to using a newer compiler.The issue the user faced in SERVER-31439 was likely a broken local toolchain. SCons intentionally does not propagate the users shell PATH into the build, because it makes the build non-deterministic. If your g++ binary isn’t able to locate the linker it was configured to use without setting PATH, then the toolchain installation is likely broken.I think if you can follow up on each of the points above, we can get things set up so that you have a working build without needing to work around any particular issues.Thanks,\nAndrew",
"username": "Andrew_Morrow"
},
{
"code": "[root@portal4 mongo-r3.6.9]# uname -a Linux portal4 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux[root@portal4 mongo-r3.6.9]# python --version Python 2.7.5[root@portal4 mongo-r3.6.9]# clang --versionclang version 3.4.2 (tags/RELEASE_34/dot2-final)Target: x86_64-redhat-linux-gnuThread model: posix[root@portal4 mongo-r3.6.9]# scons --versionSCons by Steven Knight et al.:script: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodogengine: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodogengine path: ['/usr/lib/scons/SCons']Copyright (c) 2001 - 2019 The SCons FoundationMongo Ticket SERVER-36206--disable-warnings-as-errorsMongo Ticket SERVER-31439python buildscripts/scons.py all --variables-files=etc/scons/propagate_shell_environment.vars --disable-warnings-as-errors[root@portal4 mongo-r3.6.9]# free -mtotal used free shared buff/cache availableMem: 11853 624 524 8 10704 11094Swap: 4095 0 4095[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all --disable-warnings-as-errorsscons: Reading SConscript files ...Invalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SCons[root@portal4 mongo-r3.6.9]#[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all scons: Reading SConscript files ...Invalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SCons[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GITHASH=none allscons: Reading SConscript files ...Mkdir(\"build/scons\")scons version: 2.5.0python version: 2 7 5 'final' 0Unknown variables specified: MONGO_GITHASH[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py allscons: Reading SConscript files ...scons version: 2.5.0python version: 2 7 5 'final' 0Checking whether the C compiler works... yesChecking whether the C++ compiler works... yesChecking that the C++ compiler can link a C++ program... yesChecking if C++ compiler \"g++\" is GCC... yesChecking if C compiler \"gcc\" is GCC... yesDetected a x86_64 processorChecking if target OS linux is supported by the toolchain... yesChecking if C compiler is GCC 5.3.0 or newer...noChecking if C++ compiler is GCC 5.3.0 or newer...noERROR: Refusing to build with compiler that does not meet requirementsSee /root/mongoCompilation_TagCode/mongo-r3.6.9/build/scons/config.log for details",
"text": "please don’t use screenshots to post errors.Noted, thanks for the feedbackwhy you are building the now quite old 3.6 release, rather than a more modern branch like v4.2?As per the product specifications I have to specifically work for MongoDB 3.6.9 version and there is no plan for updates in near future so I’ll have to stick to it only.Third, there are several pieces of information that will be required to make forward progress here.OS Distro and Version → Centos 7\n[root@portal4 mongo-r3.6.9]# uname -a Linux portal4 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/LinuxGCC Version\ngcc (GCC) 5.4.0\nFollowed https://community.webfaction.com/questions/20158/installing-gcc-54 link to install gcc 5.4.0 version.\nPython version\n[root@portal4 mongo-r3.6.9]# python --version Python 2.7.5\nClang version\n[root@portal4 mongo-r3.6.9]# clang --version\nclang version 3.4.2 (tags/RELEASE_34/dot2-final)\nTarget: x86_64-redhat-linux-gnu\nThread model: posix\nSCons version\n[root@portal4 mongo-r3.6.9]# scons --version\nSCons by Steven Knight et al.:\nscript: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodog\nengine: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodog\nengine path: ['/usr/lib/scons/SCons']\nCopyright (c) 2001 - 2019 The SCons Foundationhow you obtained the source code\nDownloaded from below linkRelease r3.6.9 · mongodb/mongo · GitHub.any local edits to the source codethe complete SCons invocation that you used to try and build.python buildscripts/scons.py all --variables-files=etc/scons/propagate_shell_environment.vars --disable-warnings-as-errorsCould you be running out of memory or some other resource?Not sure, no other process was running on machine while compiling the code. The VM was dedicated for compilation purpose only\n[root@portal4 mongo-r3.6.9]# free -m\ntotal used free shared buff/cache available\nMem: 11853 624 524 8 10704 11094\nSwap: 4095 0 4095What error did you observe when you didn’t provide the version.json file?This was the error:\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all --disable-warnings-as-errors\nscons: Reading SConscript files ...\nInvalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SCons\n[root@portal4 mongo-r3.6.9]#\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all \nscons: Reading SConscript files ...\nInvalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SConsNote that you can also explicitly set a version by adding MONGO_VERSION=x.y.z on the command line. However, this should not generally be necessary.I had tried below command which resulted in unknown variable message MONGO_GITHASH but instead of removing MONGO_GITHASH from command line I added version.json\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GITHASH=none all\nscons: Reading SConscript files ...\nMkdir(\"build/scons\")\nscons version: 2.5.0\npython version: 2 7 5 'final' 0\nUnknown variables specified: MONGO_GITHASHAlso git describe was not going to return any output as I had downloaded the code instead of cloning it. 
So opted for version.json workaround.If your g++ binary isn’t able to locate the linker it was configured to use without setting PATH, then the toolchain installation is likely broken.If I don’t add PATH in SConstruct file I get below error:\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all\nscons: Reading SConscript files ...\nscons version: 2.5.0\npython version: 2 7 5 'final' 0\nChecking whether the C compiler works... yes\nChecking whether the C++ compiler works... yes\nChecking that the C++ compiler can link a C++ program... yes\nChecking if C++ compiler \"g++\" is GCC... yes\nChecking if C compiler \"gcc\" is GCC... yes\nDetected a x86_64 processor\nChecking if target OS linux is supported by the toolchain... yes\nChecking if C compiler is GCC 5.3.0 or newer...no\nChecking if C++ compiler is GCC 5.3.0 or newer...no\nERROR: Refusing to build with compiler that does not meet requirements\nSee /root/mongoCompilation_TagCode/mongo-r3.6.9/build/scons/config.log for details\nSo i had to include the PATH variable to SConstruct fileI hope I have answered the required questions, let me know if you need further information.\nThanks in advance!",
"username": "Harpinder_Kaur"
},
{
"code": "[root@portal4 mongo-r3.6.9]# uname -a Linux portal4 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux[root@portal4 mongo-r3.6.9]# python --version Python 2.7.5[root@portal4 mongo-r3.6.9]# clang --versionclang version 3.4.2 (tags/RELEASE_34/dot2-final)Target: x86_64-redhat-linux-gnuThread model: posix[root@portal4 mongo-r3.6.9]# scons --versionSCons by Steven Knight et al.:script: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodogengine: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodogengine path: ['/usr/lib/scons/SCons']Copyright (c) 2001 - 2019 The SCons FoundationMongo Ticket SERVER-36206--disable-warnings-as-errorsMongo Ticket SERVER-31439python buildscripts/scons.py all --variables-files=etc/scons/propagate_shell_environment.vars --disable-warnings-as-errors[root@portal4 mongo-r3.6.9]# free -mtotal used free shared buff/cache availableMem: 11853 624 524 8 10704 11094Swap: 4095 0 4095[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all --disable-warnings-as-errorsscons: Reading SConscript files ...Invalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SCons[root@portal4 mongo-r3.6.9]#[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py allscons: Reading SConscript files ...Invalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SCons[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GITHASH=none allscons: Reading SConscript files ...Mkdir(\"build/scons\")scons version: 2.5.0python version: 2 7 5 'final' 0Unknown variables specified: MONGO_GITHASH[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py allscons: Reading SConscript files ...scons version: 2.5.0python version: 2 7 5 'final' 0Checking whether the C compiler works... yesChecking whether the C++ compiler works... yesChecking that the C++ compiler can link a C++ program... yesChecking if C++ compiler \"g++\" is GCC... yesChecking if C compiler \"gcc\" is GCC... yesDetected a x86_64 processorChecking if target OS linux is supported by the toolchain... yesChecking if C compiler is GCC 5.3.0 or newer...noChecking if C++ compiler is GCC 5.3.0 or newer...noERROR: Refusing to build with compiler that does not meet requirementsSee /root/mongoCompilation_TagCode/mongo-r3.6.9/build/scons/config.log for details",
"text": "why you are building the now quite old 3.6 release, rather than a more modern branch like v4.2?As per the product specifications I have to specifically work for MongoDB 3.6.9 version and there is no plan for updates in near future so I’ll have to stick to it only.There is no good reason for a specification to demand a version of MongoDB with any more specificity than major.minor version, which is what describes the feature set. So, it would be reasonable for a product specification to say it required MongoDB v3.6, or v4.2, or similar. But specifying the patch level simply means that the resulting product will be forever ineligible for bugfixes to the database layer, which can’t be what was intended. You should endeavor to get this restriction lifted so you can use v.3.6.17 or newer. Or, please let us know what is so special about v3.6.9 that you can’t use a newer v3.6 version. Is there a bug in v3.6 versions newer than v3.6.9 that has not yet been fixed on the v3.6 branch? If so, we would want to know so we could address it. More information here will be very helpful.Third, there are several pieces of information that will be required to make forward progress here.OS Distro and Version → Centos 7\n[root@portal4 mongo-r3.6.9]# uname -a Linux portal4 3.10.0-957.21.3.el7.x86_64 #1 SMP Tue Jun 18 16:35:19 UTC 2019 x86_64 x86_64 x86_64 GNU/LinuxGCC Version\ngcc (GCC) 5.4.0\nFollowed https://community.webfaction.com/questions/20158/installing-gcc-54 link to install gcc 5.4.0 version.\nPython version\n[root@portal4 mongo-r3.6.9]# python --version Python 2.7.5\nClang version\n[root@portal4 mongo-r3.6.9]# clang --version\nclang version 3.4.2 (tags/RELEASE_34/dot2-final)\nTarget: x86_64-redhat-linux-gnu\nThread model: posix\nSCons version\n[root@portal4 mongo-r3.6.9]# scons --version\nSCons by Steven Knight et al.:\nscript: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodog\nengine: v3.1.2.bee7caf9defd6e108fc2998a2520ddb36a967691, 2019-12-17 02:07:09, by bdeegan on octodog\nengine path: ['/usr/lib/scons/SCons']\nCopyright (c) 2001 - 2019 The SCons FoundationThis all looks fine. Note that the system SCons version is irrelevant, since the MongoDB source tree contains its own copy of SCons, used by buildscripts/scons.py.how you obtained the source code\nDownloaded from below linkRelease r3.6.9 · mongodb/mongo · GitHub.Fine, but see my notes below. You will have an easier time if you download instead from the download center.any local edits to the source code[quote=“Harpinder_Kaur, post:4, topic:2701”]More on this below.This shouldn’t be necessary if you are building with GCC 5.4 as you indicate. What happens if you don’t use it?This also shouldn’t be necessary, especially since you are using --variables-files=…/propagate_shell_environment.vars, which does this for you. I suggest backing this change out, as it will make your build nondeterministic.the complete SCons invocation that you used to try and build.python buildscripts/scons.py all --variables-files=etc/scons/propagate_shell_environment.vars --disable-warnings-as-errorsWith a properly configured GCC 5.4 installation, neither of these flags should be required. Also, I don’t see where you are specifying that you want to use your GCC 5.4 installation. I’d expect to see something like CC=gcc-5.4 CXX=g+±5.4 here. 
Where did you install GCC-5.4 and what are the paths to the GCC 5.4 gcc and g++ binaries?Could you be running out of memory or some other resource?Not sure, no other process was running on machine while compiling the code. The VM was dedicated for compilation purpose only\n[root@portal4 mongo-r3.6.9]# free -m\ntotal used free shared buff/cache available\nMem: 11853 624 524 8 10704 11094\nSwap: 4095 0 4095What error did you observe when you didn’t provide the version.json file?This was the error:\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all --disable-warnings-as-errors\nscons: Reading SConscript files ...\nInvalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SCons\n[root@portal4 mongo-r3.6.9]#\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all\nscons: Reading SConscript files ...\nInvalid MONGO_VERSION '', or could not derive from version.json or git metadata. Please add a conforming MONGO_VERSION=x.y.z[-extra] as an argument to SConsNote that you can also explicitly set a version by adding MONGO_VERSION=x.y.z on the command line. However, this should not generally be necessary.I had tried below command which resulted in unknown variable message MONGO_GITHASH but instead of removing MONGO_GITHASH from command line I added version.json\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GITHASH=none all\nscons: Reading SConscript files ...\nMkdir(\"build/scons\")\nscons version: 2.5.0\npython version: 2 7 5 'final' 0\nUnknown variables specified: MONGO_GITHASHThis was my mistake - the variable is called MONGO_GIT_HASH (note the extra _).Also git describe was not going to return any output as I had downloaded the code instead of cloning it. So opted for version.json workaround.The problem is that the github based archives don’t contain the version metadata, which is why you need to mess around with MONGO_VERSION and MONGO_GIT_HASH to make this work. However, the source downloads from the mongodb download site do contain the data already. Please try downloading the source from Download MongoDB Community Server | MongoDB instead. That will contain a pre-populated version.json.Alternatively, use MONGO_GIT_HASH, and it won’t try to call git describe.If your g++ binary isn’t able to locate the linker it was configured to use without setting PATH, then the toolchain installation is likely broken.If I don’t add PATH in SConstruct file I get below error:\n[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all\nscons: Reading SConscript files ...\nscons version: 2.5.0\npython version: 2 7 5 'final' 0\nChecking whether the C compiler works... yes\nChecking whether the C++ compiler works... yes\nChecking that the C++ compiler can link a C++ program... yes\nChecking if C++ compiler \"g++\" is GCC... yes\nChecking if C compiler \"gcc\" is GCC... yes\nDetected a x86_64 processor\nChecking if target OS linux is supported by the toolchain... yes\nChecking if C compiler is GCC 5.3.0 or newer...no\nChecking if C++ compiler is GCC 5.3.0 or newer...no\nERROR: Refusing to build with compiler that does not meet requirements\nSee /root/mongoCompilation_TagCode/mongo-r3.6.9/build/scons/config.log for detailsPlease post the contents of the config.log. Anytime you get this message, you should post the contents of config.log.",
"username": "Andrew_Morrow"
},
{
"code": "ScopeGuardImpl1python buildscripts/scons.py MONGO_VERSION=3.6.9 --variables-files=etc/scons/propagate_shell_environment.vars allfile /root/mongoCompilation_TagCode/mongo-r3.6.9/SConstruct,line 1083:\tConfigure(confdir = build/scons/opt/sconf_temp)scons: Configure: Checking whether the C compiler works... scons: Configure: \"build/scons/opt/sconf_temp/conftest_0.c\" is up to date.scons: Configure: The original builder output was: |build/scons/opt/sconf_temp/conftest_0.c <- | | | |int main() | |{ | | return 0; | |} | | |gcc -o build/scons/opt/sconf_temp/conftest_0.o -c build/scons/opt/sconf_temp/conftest_0.cscons: Configure: yes`scons: Configure: Checking whether the C++ compiler works... ` `scons: Configure: \"build/scons/opt/sconf_temp/conftest_1.cpp\" is up to date.` `scons: Configure: The original builder output was:` ` |build/scons/opt/sconf_temp/conftest_1.cpp <-` ` | |` ` | |int main()` ` | |{` ` | | return 0;` ` | |}` ` | |` ` |` `g++ -o build/scons/opt/sconf_temp/conftest_1.o -c build/scons/opt/sconf_temp/conftest_1.cpp` `scons: Configure: yes`scons: Configure: Checking that the C++ compiler can link a C++ program... scons: Configure: \"build/scons/opt/sconf_temp/conftest_2.cpp\" is up to date.scons: Configure: The original builder output was: |build/scons/opt/sconf_temp/conftest_2.cpp <- | | | |#include <iostream> | |#include <cstdlib> | | | |int main() { | | std::cout << \"Hello, World\" << std::endl; | | return EXIT_SUCCESS; | |} | | |g++ -o build/scons/opt/sconf_temp/conftest_2.o -c build/scons/opt/sconf_temp/conftest_2.cppg++ -o build/scons/opt/sconf_temp/conftest_2 build/scons/opt/sconf_temp/conftest_2.oscons: Configure: yes`scons: Configure: Checking if C++ compiler \"g++\" is GCC... ` `scons: Configure: \"build/scons/opt/sconf_temp/conftest_3.cpp\" is up to date.` `scons: Configure: The original builder output was:` ` |build/scons/opt/sconf_temp/conftest_3.cpp <-` ` | |` ` | |#if defined(__GNUC__) && !defined(__clang__)` ` | |/* we are using toolchain defined(__GNUC__) && !defined(__clang__) */` ` | |#else` ` | |#error` ` | |#endif` ` | |` ` |` `g++ -o build/scons/opt/sconf_temp/conftest_3.o -c build/scons/opt/sconf_temp/conftest_3.cpp` `scons: Configure: yes`scons: Configure: Checking if C compiler \"gcc\" is GCC... scons: Configure: \"build/scons/opt/sconf_temp/conftest_4.c\" is up to date.scons: Configure: The original builder output was: |build/scons/opt/sconf_temp/conftest_4.c <- | | | |#if defined(__GNUC__) && !defined(__clang__) | |/* we are using toolchain defined(__GNUC__) && !defined(__clang__) */ | |#else | |#error | |#endif | | |gcc -o build/scons/opt/sconf_temp/conftest_4.o -c build/scons/opt/sconf_temp/conftest_4.cscons: Configure: yes`scons: Configure: \"build/scons/opt/sconf_temp/conftest_5.c\" is up to date.` `scons: Configure: The original builder output was:` ` |build/scons/opt/sconf_temp/conftest_5.c <-` ` | |` ` | |#if defined(__x86_64) || defined(_M_AMD64)` ` | |/* Detected x86_64 */` ` | |#else` ` | |#error not x86_64` ` | |#endif` ` | |` ` |` `gcc -o build/scons/opt/sconf_temp/conftest_5.o -c build/scons/opt/sconf_temp/conftest_5.c` `scons: Configure: Detected a x86_64 processor`scons: Configure: Checking if target OS linux is supported by the toolchain... 
scons: Configure: \"build/scons/opt/sconf_temp/conftest_6.c\" is up to date.scons: Configure: The original builder output was: |build/scons/opt/sconf_temp/conftest_6.c <- | | | |#if defined(__APPLE__) | |#include <TargetConditionals.h> | |#endif | |#if defined(__linux__) | |/* detected linux */ | |#else | |#error | |#endif | | |gcc -o build/scons/opt/sconf_temp/conftest_6.o -c build/scons/opt/sconf_temp/conftest_6.cscons: Configure: yes file /root/mongoCompilation_TagCode/mongo-r3.6.9/SConstruct,line 1824:\tConfigure(confdir = build/scons/opt/sconf_temp)scons: Configure: Checking if C compiler is GCC 5.3.0 or newer...scons: Configure: \"build/scons/opt/sconf_temp/conftest_7.c\" is up to date.scons: Configure: The original builder output was: |build/scons/opt/sconf_temp/conftest_7.c <- | | | |#if !defined(__GNUC__) || defined(__clang__) | |#error | |#endif | | | |#if (__GNUC__ < 5) || (__GNUC__ == 5 && __GNUC_MINOR__ < 3) || (__GNUC__ == 5 && __GNUC_MINOR__ == 3 && __GNUC_PATCHLEVEL__ < 0) | |#error GCC 5.3.0 or newer is required to build MongoDB | |#endif | | | |int main(int argc, char* argv[]) { | | return 0; | |} | | |Compiling build/scons/opt/sconf_temp/conftest_7.obuild/scons/opt/sconf_temp/conftest_7.c:7:2: error: #error GCC 5.3.0 or newer is required to build MongoDB #error GCC 5.3.0 or newer is required to build MongoDB ^scons: Configure: noscons: Configure: Checking if C++ compiler is GCC 5.3.0 or newer...scons: Configure: \"build/scons/opt/sconf_temp/conftest_8.cpp\" is up to date.scons: Configure: The original builder output was: |build/scons/opt/sconf_temp/conftest_8.cpp <- | | | |#if !defined(__GNUC__) || defined(__clang__) | |#error | |#endif | | | |#if (__GNUC__ < 5) || (__GNUC__ == 5 && __GNUC_MINOR__ < 3) || (__GNUC__ == 5 && __GNUC_MINOR__ == 3 && __GNUC_PATCHLEVEL__ < 0) | |#error GCC 5.3.0 or newer is required to build MongoDB | |#endif | | | |int main(int argc, char* argv[]) { | | return 0; | |} | | |Compiling build/scons/opt/sconf_temp/conftest_8.obuild/scons/opt/sconf_temp/conftest_8.cpp:7:2: error: #error GCC 5.3.0 or newer is required to build MongoDB #error GCC 5.3.0 or newer is required to build MongoDB ^scons: Configure: no",
"text": "This shouldn’t be necessary if you are building with GCC 5.4 as you indicate. What happens if you don’t use it?The same one mentioned in SERVER-36206’s description related to ScopeGuardImpl1 and other functionsI suggest backing this change out, as it will make your build nondeterministicNow started fresh compilation with below command as per your suggestion:\npython buildscripts/scons.py MONGO_VERSION=3.6.9 --variables-files=etc/scons/propagate_shell_environment.vars allWith a properly configured GCC 5.4 installation, neither of these flags should be required.GCC is installed in $HOME/gcc/bin/gcc as per https://community.webfaction.com/questions/20158/installing-gcc-54Please post the contents of the config.log.file /root/mongoCompilation_TagCode/mongo-r3.6.9/SConstruct,line 1083:\n\tConfigure(confdir = build/scons/opt/sconf_temp)\nscons: Configure: Checking whether the C compiler works... \nscons: Configure: \"build/scons/opt/sconf_temp/conftest_0.c\" is up to date.\nscons: Configure: The original builder output was:\n |build/scons/opt/sconf_temp/conftest_0.c <-\n | |\n | |int main()\n | |{\n | | return 0;\n | |}\n | |\n |\ngcc -o build/scons/opt/sconf_temp/conftest_0.o -c build/scons/opt/sconf_temp/conftest_0.c\nscons: Configure: yes\n`scons: Configure: Checking whether the C++ compiler works... ` `scons: Configure: \"build/scons/opt/sconf_temp/conftest_1.cpp\" is up to date.` `scons: Configure: The original builder output was:` ` |build/scons/opt/sconf_temp/conftest_1.cpp <-` ` | |` ` | |int main()` ` | |{` ` | | return 0;` ` | |}` ` | |` ` |` `g++ -o build/scons/opt/sconf_temp/conftest_1.o -c build/scons/opt/sconf_temp/conftest_1.cpp` `scons: Configure: yes`\nscons: Configure: Checking that the C++ compiler can link a C++ program... \nscons: Configure: \"build/scons/opt/sconf_temp/conftest_2.cpp\" is up to date.\nscons: Configure: The original builder output was:\n |build/scons/opt/sconf_temp/conftest_2.cpp <-\n | |\n | |#include <iostream>\n | |#include <cstdlib>\n | |\n | |int main() {\n | | std::cout << \"Hello, World\" << std::endl;\n | | return EXIT_SUCCESS;\n | |}\n | |\n |\ng++ -o build/scons/opt/sconf_temp/conftest_2.o -c build/scons/opt/sconf_temp/conftest_2.cpp\ng++ -o build/scons/opt/sconf_temp/conftest_2 build/scons/opt/sconf_temp/conftest_2.o\nscons: Configure: yes\n`scons: Configure: Checking if C++ compiler \"g++\" is GCC... ` `scons: Configure: \"build/scons/opt/sconf_temp/conftest_3.cpp\" is up to date.` `scons: Configure: The original builder output was:` ` |build/scons/opt/sconf_temp/conftest_3.cpp <-` ` | |` ` | |#if defined(__GNUC__) && !defined(__clang__)` ` | |/* we are using toolchain defined(__GNUC__) && !defined(__clang__) */` ` | |#else` ` | |#error` ` | |#endif` ` | |` ` |` `g++ -o build/scons/opt/sconf_temp/conftest_3.o -c build/scons/opt/sconf_temp/conftest_3.cpp` `scons: Configure: yes`\nscons: Configure: Checking if C compiler \"gcc\" is GCC... 
\nscons: Configure: \"build/scons/opt/sconf_temp/conftest_4.c\" is up to date.\nscons: Configure: The original builder output was:\n |build/scons/opt/sconf_temp/conftest_4.c <-\n | |\n | |#if defined(__GNUC__) && !defined(__clang__)\n | |/* we are using toolchain defined(__GNUC__) && !defined(__clang__) */\n | |#else\n | |#error\n | |#endif\n | |\n |\ngcc -o build/scons/opt/sconf_temp/conftest_4.o -c build/scons/opt/sconf_temp/conftest_4.c\nscons: Configure: yes\n`scons: Configure: \"build/scons/opt/sconf_temp/conftest_5.c\" is up to date.` `scons: Configure: The original builder output was:` ` |build/scons/opt/sconf_temp/conftest_5.c <-` ` | |` ` | |#if defined(__x86_64) || defined(_M_AMD64)` ` | |/* Detected x86_64 */` ` | |#else` ` | |#error not x86_64` ` | |#endif` ` | |` ` |` `gcc -o build/scons/opt/sconf_temp/conftest_5.o -c build/scons/opt/sconf_temp/conftest_5.c` `scons: Configure: Detected a x86_64 processor`\nscons: Configure: Checking if target OS linux is supported by the toolchain... \nscons: Configure: \"build/scons/opt/sconf_temp/conftest_6.c\" is up to date.\nscons: Configure: The original builder output was:\n |build/scons/opt/sconf_temp/conftest_6.c <-\n | |\n | |#if defined(__APPLE__)\n | |#include <TargetConditionals.h>\n | |#endif\n | |#if defined(__linux__)\n | |/* detected linux */\n | |#else\n | |#error\n | |#endif\n | |\n |\ngcc -o build/scons/opt/sconf_temp/conftest_6.o -c build/scons/opt/sconf_temp/conftest_6.c\nscons: Configure: yes\n \nfile /root/mongoCompilation_TagCode/mongo-r3.6.9/SConstruct,line 1824:\n\tConfigure(confdir = build/scons/opt/sconf_temp)\nscons: Configure: Checking if C compiler is GCC 5.3.0 or newer...\nscons: Configure: \"build/scons/opt/sconf_temp/conftest_7.c\" is up to date.\nscons: Configure: The original builder output was:\n |build/scons/opt/sconf_temp/conftest_7.c <-\n | |\n | |#if !defined(__GNUC__) || defined(__clang__)\n | |#error\n | |#endif\n | |\n | |#if (__GNUC__ < 5) || (__GNUC__ == 5 && __GNUC_MINOR__ < 3) || (__GNUC__ == 5 && __GNUC_MINOR__ == 3 && __GNUC_PATCHLEVEL__ < 0)\n | |#error GCC 5.3.0 or newer is required to build MongoDB\n | |#endif\n | |\n | |int main(int argc, char* argv[]) {\n | | return 0;\n | |}\n | |\n |\nCompiling build/scons/opt/sconf_temp/conftest_7.o\nbuild/scons/opt/sconf_temp/conftest_7.c:7:2: error: #error GCC 5.3.0 or newer is required to build MongoDB\n #error GCC 5.3.0 or newer is required to build MongoDB\n ^\nscons: Configure: no\n``\nscons: Configure: Checking if C++ compiler is GCC 5.3.0 or newer...\nscons: Configure: \"build/scons/opt/sconf_temp/conftest_8.cpp\" is up to date.\nscons: Configure: The original builder output was:\n |build/scons/opt/sconf_temp/conftest_8.cpp <-\n | |\n | |#if !defined(__GNUC__) || defined(__clang__)\n | |#error\n | |#endif\n | |\n | |#if (__GNUC__ < 5) || (__GNUC__ == 5 && __GNUC_MINOR__ < 3) || (__GNUC__ == 5 && __GNUC_MINOR__ == 3 && __GNUC_PATCHLEVEL__ < 0)\n | |#error GCC 5.3.0 or newer is required to build MongoDB\n | |#endif\n | |\n | |int main(int argc, char* argv[]) {\n | | return 0;\n | |}\n | |\n |\nCompiling build/scons/opt/sconf_temp/conftest_8.o\nbuild/scons/opt/sconf_temp/conftest_8.cpp:7:2: error: #error GCC 5.3.0 or newer is required to build MongoDB\n #error GCC 5.3.0 or newer is required to build MongoDB\n ^\nscons: Configure: no",
"username": "Harpinder_Kaur"
},
{
"code": "lddmongo: linux-vdso.so.1 => (0x00007ffd8fdab000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f12b6bf0000) librt.so.1 => /lib64/librt.so.1 (0x00007f12b69e8000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f12b67e4000) libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007f12b646a000) libm.so.6 => /lib64/libm.so.6 (0x00007f12b6168000) libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007f12b5f51000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f12b5d35000) libc.so.6 => /lib64/libc.so.6 (0x00007f12b5968000) /lib64/ld-linux-x86-64.so.2 (0x00007f12b87db000)mongobridge: linux-vdso.so.1 => (0x00007ffc89bf9000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fc0c9d47000) librt.so.1 => /lib64/librt.so.1 (0x00007fc0c9b3f000) libdl.so.2 => /lib64/libdl.so.2 (0x00007fc0c993b000) libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007fc0c95c1000) libm.so.6 => /lib64/libm.so.6 (0x00007fc0c92bf000) libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007fc0c90a8000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fc0c8e8c000) libc.so.6 => /lib64/libc.so.6 (0x00007fc0c8abf000) /lib64/ld-linux-x86-64.so.2 (0x00007fc0cad79000)mongod: linux-vdso.so.1 => (0x00007ffd2a1f0000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f54cd874000) librt.so.1 => /lib64/librt.so.1 (0x00007f54cd66c000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f54cd468000) libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007f54cd0ee000) libm.so.6 => /lib64/libm.so.6 (0x00007f54ccdec000) libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007f54ccbd5000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f54cc9b9000) libc.so.6 => /lib64/libc.so.6 (0x00007f54cc5ec000) /lib64/ld-linux-x86-64.so.2 (0x00007f54d09e3000)mongoperf: linux-vdso.so.1 => (0x00007ffefb7a0000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f7ecb5f9000) librt.so.1 => /lib64/librt.so.1 (0x00007f7ecb3f1000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f7ecb1ed000) libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007f7ecae73000) libm.so.6 => /lib64/libm.so.6 (0x00007f7ecab71000) libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007f7eca95a000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7eca73e000) libc.so.6 => /lib64/libc.so.6 (0x00007f7eca371000) /lib64/ld-linux-x86-64.so.2 (0x00007f7ece703000)mongos: linux-vdso.so.1 => (0x00007ffc3a9db000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007ff4a0bd3000) librt.so.1 => /lib64/librt.so.1 (0x00007ff4a09cb000) libdl.so.2 => /lib64/libdl.so.2 (0x00007ff4a07c7000) libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007ff4a044d000) libm.so.6 => /lib64/libm.so.6 (0x00007ff4a014b000) libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007ff49ff34000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff49fd18000) libc.so.6 => /lib64/libc.so.6 (0x00007ff49f94b000) /lib64/ld-linux-x86-64.so.2 (0x00007ff4a289a000)ldd/usr/bin/mongo: linux-vdso.so.1 => (0x00007fff931f3000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f1d641ea000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f1d63d88000) libssl.so.10 => /lib64/libssl.so.10 (0x00007f1d63b16000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f1d63912000) librt.so.1 => /lib64/librt.so.1 (0x00007f1d6370a000) libm.so.6 => /lib64/libm.so.6 (0x00007f1d63408000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f1d631f2000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f1d62fd6000) libc.so.6 => /lib64/libc.so.6 (0x00007f1d62c09000) /lib64/ld-linux-x86-64.so.2 (0x00007f1d65f83000) libz.so.1 => /lib64/libz.so.1 (0x00007f1d629f3000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 
(0x00007f1d627a6000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f1d624bd000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f1d622b9000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f1d62086000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f1d61e76000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f1d61c72000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f1d61a4b000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f1d617e9000)/usr/bin/mongod: linux-vdso.so.1 => (0x00007ffd64af4000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f47af003000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f47aeba1000) libssl.so.10 => /lib64/libssl.so.10 (0x00007f47ae92f000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f47ae72b000) librt.so.1 => /lib64/librt.so.1 (0x00007f47ae523000) libm.so.6 => /lib64/libm.so.6 (0x00007f47ae221000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f47ae00b000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f47addef000) libc.so.6 => /lib64/libc.so.6 (0x00007f47ada22000) /lib64/ld-linux-x86-64.so.2 (0x00007f47b2328000) libz.so.1 => /lib64/libz.so.1 (0x00007f47ad80c000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f47ad5bf000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f47ad2d6000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f47ad0d2000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f47ace9f000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f47acc8f000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f47aca8b000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f47ac864000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f47ac602000)/usr/bin/mongoperf: linux-vdso.so.1 => (0x00007fff219f1000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f096fd2c000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f096f8ca000) libssl.so.10 => /lib64/libssl.so.10 (0x00007f096f658000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f096f454000) librt.so.1 => /lib64/librt.so.1 (0x00007f096f24c000) libm.so.6 => /lib64/libm.so.6 (0x00007f096ef4a000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f096ed34000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f096eb18000) libc.so.6 => /lib64/libc.so.6 (0x00007f096e74b000) /lib64/ld-linux-x86-64.so.2 (0x00007f0972fea000) libz.so.1 => /lib64/libz.so.1 (0x00007f096e535000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f096e2e8000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f096dfff000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f096ddfb000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f096dbc8000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f096d9b8000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f096d7b4000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f096d58d000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f096d32b000)/usr/bin/mongos: linux-vdso.so.1 => (0x00007fff4d1d6000) libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f9317ac2000) libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f9317660000) libssl.so.10 => /lib64/libssl.so.10 (0x00007f93173ee000) libdl.so.2 => /lib64/libdl.so.2 (0x00007f93171ea000) librt.so.1 => /lib64/librt.so.1 (0x00007f9316fe2000) libm.so.6 => /lib64/libm.so.6 (0x00007f9316ce0000) libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f9316aca000) libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f93168ae000) libc.so.6 => /lib64/libc.so.6 (0x00007f93164e1000) /lib64/ld-linux-x86-64.so.2 (0x00007f931994b000) libz.so.1 => /lib64/libz.so.1 (0x00007f93162cb000) libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 
(0x00007f931607e000) libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f9315d95000) libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f9315b91000) libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f931595e000) libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f931574e000) libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f931554a000) libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f9315323000) libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f93150c1000)",
"text": "Also now after compiling tag 3.6.9 code, there are only few shared objects listed for binaries. However, the actual product binary has many more additional shared objects. Would you help me understand how can I get the same list of shared objects/shared libraries in my binaries too…what all changes should be done or is it fine if we want to add different shared objects to our binaries?ldd output for binaries created from code for 3.6.9 tag\nmongo:\n linux-vdso.so.1 => (0x00007ffd8fdab000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f12b6bf0000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f12b69e8000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f12b67e4000)\n libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007f12b646a000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f12b6168000)\n libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007f12b5f51000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f12b5d35000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f12b5968000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f12b87db000)\nmongobridge:\n linux-vdso.so.1 => (0x00007ffc89bf9000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007fc0c9d47000)\n librt.so.1 => /lib64/librt.so.1 (0x00007fc0c9b3f000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007fc0c993b000)\n libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007fc0c95c1000)\n libm.so.6 => /lib64/libm.so.6 (0x00007fc0c92bf000)\n libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007fc0c90a8000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007fc0c8e8c000)\n libc.so.6 => /lib64/libc.so.6 (0x00007fc0c8abf000)\n /lib64/ld-linux-x86-64.so.2 (0x00007fc0cad79000)\nmongod:\n linux-vdso.so.1 => (0x00007ffd2a1f0000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f54cd874000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f54cd66c000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f54cd468000)\n libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007f54cd0ee000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f54ccdec000)\n libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007f54ccbd5000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f54cc9b9000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f54cc5ec000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f54d09e3000)\nmongoperf:\n linux-vdso.so.1 => (0x00007ffefb7a0000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f7ecb5f9000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f7ecb3f1000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f7ecb1ed000)\n libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007f7ecae73000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f7ecab71000)\n libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007f7eca95a000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f7eca73e000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f7eca371000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f7ece703000)\nmongos:\n linux-vdso.so.1 => (0x00007ffc3a9db000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007ff4a0bd3000)\n librt.so.1 => /lib64/librt.so.1 (0x00007ff4a09cb000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007ff4a07c7000)\n libstdc++.so.6 => /root/gcc/lib/libstdc++.so.6 (0x00007ff4a044d000)\n libm.so.6 => /lib64/libm.so.6 (0x00007ff4a014b000)\n libgcc_s.so.1 => /root/gcc/lib/libgcc_s.so.1 (0x00007ff49ff34000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007ff49fd18000)\n libc.so.6 => /lib64/libc.so.6 (0x00007ff49f94b000)\n /lib64/ld-linux-x86-64.so.2 (0x00007ff4a289a000)**ldd output for actual mongo binaries from 3.6.9 setup **\n/usr/bin/mongo:\n linux-vdso.so.1 => (0x00007fff931f3000)\n libresolv.so.2 => /lib64/libresolv.so.2 
(0x00007f1d641ea000)\n libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f1d63d88000)\n libssl.so.10 => /lib64/libssl.so.10 (0x00007f1d63b16000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f1d63912000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f1d6370a000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f1d63408000)\n libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f1d631f2000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f1d62fd6000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f1d62c09000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f1d65f83000)\n libz.so.1 => /lib64/libz.so.1 (0x00007f1d629f3000)\n libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f1d627a6000)\n libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f1d624bd000)\n libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f1d622b9000)\n libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f1d62086000)\n libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f1d61e76000)\n libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f1d61c72000)\n libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f1d61a4b000)\n libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f1d617e9000)\n/usr/bin/mongod:\n linux-vdso.so.1 => (0x00007ffd64af4000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f47af003000)\n libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f47aeba1000)\n libssl.so.10 => /lib64/libssl.so.10 (0x00007f47ae92f000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f47ae72b000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f47ae523000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f47ae221000)\n libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f47ae00b000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f47addef000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f47ada22000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f47b2328000)\n libz.so.1 => /lib64/libz.so.1 (0x00007f47ad80c000)\n libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f47ad5bf000)\n libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f47ad2d6000)\n libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f47ad0d2000)\n libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f47ace9f000)\n libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f47acc8f000)\n libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f47aca8b000)\n libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f47ac864000)\n libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f47ac602000)\n/usr/bin/mongoperf:\n linux-vdso.so.1 => (0x00007fff219f1000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f096fd2c000)\n libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f096f8ca000)\n libssl.so.10 => /lib64/libssl.so.10 (0x00007f096f658000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f096f454000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f096f24c000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f096ef4a000)\n libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f096ed34000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f096eb18000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f096e74b000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f0972fea000)\n libz.so.1 => /lib64/libz.so.1 (0x00007f096e535000)\n libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f096e2e8000)\n libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f096dfff000)\n libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f096ddfb000)\n libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f096dbc8000)\n libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f096d9b8000)\n libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f096d7b4000)\n libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f096d58d000)\n libpcre.so.1 => /lib64/libpcre.so.1 
(0x00007f096d32b000)\n/usr/bin/mongos:\n linux-vdso.so.1 => (0x00007fff4d1d6000)\n libresolv.so.2 => /lib64/libresolv.so.2 (0x00007f9317ac2000)\n libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007f9317660000)\n libssl.so.10 => /lib64/libssl.so.10 (0x00007f93173ee000)\n libdl.so.2 => /lib64/libdl.so.2 (0x00007f93171ea000)\n librt.so.1 => /lib64/librt.so.1 (0x00007f9316fe2000)\n libm.so.6 => /lib64/libm.so.6 (0x00007f9316ce0000)\n libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007f9316aca000)\n libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f93168ae000)\n libc.so.6 => /lib64/libc.so.6 (0x00007f93164e1000)\n /lib64/ld-linux-x86-64.so.2 (0x00007f931994b000)\n libz.so.1 => /lib64/libz.so.1 (0x00007f93162cb000)\n libgssapi_krb5.so.2 => /lib64/libgssapi_krb5.so.2 (0x00007f931607e000)\n libkrb5.so.3 => /lib64/libkrb5.so.3 (0x00007f9315d95000)\n libcom_err.so.2 => /lib64/libcom_err.so.2 (0x00007f9315b91000)\n libk5crypto.so.3 => /lib64/libk5crypto.so.3 (0x00007f931595e000)\n libkrb5support.so.0 => /lib64/libkrb5support.so.0 (0x00007f931574e000)\n libkeyutils.so.1 => /lib64/libkeyutils.so.1 (0x00007f931554a000)\n libselinux.so.1 => /lib64/libselinux.so.1 (0x00007f9315323000)\n libpcre.so.1 => /lib64/libpcre.so.1 (0x00007f93150c1000)",
"username": "Harpinder_Kaur"
},
{
"code": "ScopeGuardImpl1python buildscripts/scons.py MONGO_VERSION=3.6.9 --variables-files=etc/scons/propagate_shell_environment.vars all",
"text": "This shouldn’t be necessary if you are building with GCC 5.4 as you indicate. What happens if you don’t use it?The same one mentioned in SERVER-36206’s description related to ScopeGuardImpl1 and other functionsI didn’t mean not to use GCC 5.4. I meant that you shoould not need --disable-warnings-as-errors if you are actually using GCC 5.4, as you suggest you are.I suggest backing this change out, as it will make your build nondeterministicNow started fresh compilation with below command as per your suggestion:\npython buildscripts/scons.py MONGO_VERSION=3.6.9 --variables-files=etc/scons/propagate_shell_environment.vars allBut you are still using --variables-files=etc/scons/propagate_shell_environment.vars. That is the part I’m saying you should remove.With a properly configured GCC 5.4 installation, neither of these flags should be required.GCC is installed in $HOME/gcc/bin/gcc as per https://community.webfaction.com/questions/20158/installing-gcc-54In that case, you need to tell the build to use that compiler. So you should be saying something like:python buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GIT_HASH=unknown CC=$HOME/gcc/bin/gcc CXX=$HOME/gcc/bin/g++Please post the contents of the config.log.If the above change to include the CC and CXX variables doesn’t fix it, please re-run your build adding the flags VERBOSE=1 and --config=force, and then repost your config.log.My suspicion this entire time is that you have never actually been using your installed GCC 5.4.",
"username": "Andrew_Morrow"
},
{
"code": "",
"text": "Also now after compiling tag 3.6.9 code, there are only few shared objects listed for binaries. However, the actual product binary has many more additional shared objects. Would you help me understand how can I get the same list of shared objects/shared libraries in my binaries too…what all changes should be done or is it fine if we want to add different shared objects to our binaries?It looks to me like you are comparing the ldd output of the community build of 3.6.9 that you made to the ldd output of the enterprise build of 3.6.9. The enterprise build includes closed source components that are not available to you. You will not be able to recreate the exact list of library dependencies in your build of the community sources, nor do you need to, since those libraries are included only to support features that are available only in the enterprise version.",
"username": "Andrew_Morrow"
},
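As an aside for anyone making this kind of library comparison themselves: the hex load addresses in ldd output change on every run, so it helps to strip them before diffing. A minimal shell sketch (the community binary path is a placeholder for wherever your build put it):

```sh
# Keep only the library names (first column), sorted, for each binary.
ldd ./mongod-community | awk '{print $1}' | sort > community-libs.txt
ldd /usr/bin/mongod    | awk '{print $1}' | sort > enterprise-libs.txt

# Entries present only on the enterprise side (e.g. libssl, libkrb5)
# correspond to the enterprise-only features mentioned above.
diff community-libs.txt enterprise-libs.txt
```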
{
"code": "",
"text": "The enterprise build includes closed source components thatThanks for clearing out.",
"username": "Harpinder_Kaur"
},
{
"code": "--variables-files=etc/scons/propagate_shell_environment.vars[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py allscons: Reading SConscript files ...scons version: 2.5.0python version: 2 7 5 'final' 0Checking whether the C compiler works... yesChecking whether the C++ compiler works... yesChecking that the C++ compiler can link a C++ program... yesChecking if C++ compiler \"g++\" is GCC... yesChecking if C compiler \"gcc\" is GCC... yesDetected a x86_64 processorChecking if target OS linux is supported by the toolchain... yesChecking if C compiler is GCC 5.3.0 or newer...noChecking if C++ compiler is GCC 5.3.0 or newer...noERROR: Refusing to build with compiler that does not meet requirementsSee /root/mongoCompilation_TagCode/mongo-r3.6.9/build/scons/config.log for detailspython buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GIT_HASH=unknown CC=$HOME/gcc/bin/gcc CXX=$HOME/gcc/bin/g++",
"text": "My suspicion this entire time is that you have never actually been using your installed GCC 5.4.If I didn’t use --variables-files=etc/scons/propagate_shell_environment.vars in command line then I used to get below error:[root@portal4 mongo-r3.6.9]# python buildscripts/scons.py all\nscons: Reading SConscript files ...\nscons version: 2.5.0\npython version: 2 7 5 'final' 0\nChecking whether the C compiler works... yes\nChecking whether the C++ compiler works... yes\nChecking that the C++ compiler can link a C++ program... yes\nChecking if C++ compiler \"g++\" is GCC... yes\nChecking if C compiler \"gcc\" is GCC... yes\nDetected a x86_64 processor\nChecking if target OS linux is supported by the toolchain... yes\nChecking if C compiler is GCC 5.3.0 or newer...no\nChecking if C++ compiler is GCC 5.3.0 or newer...no\nERROR: Refusing to build with compiler that does not meet requirements\nSee /root/mongoCompilation_TagCode/mongo-r3.6.9/build/scons/config.log for detailsAnd now as per your suggestion, now started clean compilation with below command\npython buildscripts/scons.py MONGO_VERSION=3.6.9 MONGO_GIT_HASH=unknown CC=$HOME/gcc/bin/gcc CXX=$HOME/gcc/bin/g++",
"username": "Harpinder_Kaur"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Facing issues while compiling "v3.6.9-dbaas-testing" branch | 2020-04-12T20:34:37.716Z | Facing issues while compiling “v3.6.9-dbaas-testing” branch | 3,892 |
|
null | [
"java"
] | [
{
"code": "",
"text": "Hi All,I’m using Java driver 3.12 and facing the following issue:I’m trying bulk insert a huge amount of documents with a retry mechanism in case there is connection disruption issue. The problem is, once the retry kicks in, it starts the bulk insert all over again. Since there are some documents have been inserted in previous try, it results in “Duplicate Key error”.My question is that is there a way to ignore the “duplicate key error” in subsequent retry ?Update: I have a follow-up question. The above problem can be solved using “Upsert” opertation instead of “Insert” (using Replace model instead of Insert model in the bulk write). But performance is an important factor here (dealing with ~ 100k records) and my assumption is “Upsert” is significantly more expensive than “Insert”. Is that assumption valid ?Thanks,\nT",
"username": "Tuan_Dinh"
},
{
"code": "List<WriteModel<Document>> bulkWrites = new ArrayList<>();\nDocument doc1 = new Document(\"_id\", 13).append(\"fld\", \"a\");\nDocument doc2 = new Document(\"_id\", 13).append(\"fld\", \"a\");\nDocument doc3 = new Document(\"_id\", 14).append(\"fld\", \"c\");\nbulkWrites.add(new InsertOneModel<Document>(doc1));\nbulkWrites.add(new InsertOneModel<Document>(doc2));\nbulkWrites.add(new InsertOneModel<Document>(doc3));\n\nBulkWriteOptions bulkWriteOptions = new BulkWriteOptions().ordered(false);\t\t\nBulkWriteResult bulkResult = null;\n\ntry {\n\tbulkResult = collection.bulkWrite(bulkWrites, bulkWriteOptions);\n}\ncatch(MongoBulkWriteException e) {\n // print a short error message _and_ the result (inserted count)\n System.out.println(e.toString());\n\tSystem.out.println(e.getWriteResult().getInsertedCount());\n}\nfinally {\n // print the result when there are no errors\n if (bulkResult != null) {\n System.out.println(bulkResult.getInsertedCount());\n\t}\n}",
"text": "My question is that is there a way to ignore the “duplicate key error” in subsequent retry ?You can ignore the error by catching the exception. For example:",
"username": "Prasad_Saya"
},
{
"code": "e.getWriteResult().getInsertedCount()\n",
"text": "Thanks @Prasad_Saya for the quick reply.What I meant by “ignoring the duplicate key error” is for the database to ignore the document with duplicated key and keep inserting the rest of the bulk write. (The context here is the retry, first bulkwrite attempt inserted some documents already, and then there is a connectivity issue. Next, the retry kicks in and do the bulkwrite all over again, is there away for mongo to keep inserting valid document eventhough there are already inserted documents in the bulk ?)By the way, in your example, how many entries would have been inserted into the db ? And what is the value of:",
"username": "Tuan_Dinh"
},
{
"code": "e.getWriteResult().getInsertedCount()20",
"text": "What I meant by “ignoring the duplicate key error” is for the database to ignore the document with duplicated key and keep inserting the rest of the bulk write.The bulk operation tries to insert every document you had supplied to insert. If the document already exists in the collection, the document is not inserted (because of the duplicate key error), and the next document’s insert operation will be attempted. Note that the insert will be attempted on all the documents supplied.By the way, in your example, how many entries would have been inserted into the db ? And what is the value of: e.getWriteResult().getInsertedCount()The first time the code is run, it inserts two documents and the e.getWriteResult().getInsertedCount() returns 2. If you run the same code again, no documents are inserted and output is 0.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Thanks @Prasad_Saya, you have pointed out my fundamental misunderstanding of the bulkwrite operation. My assumption was that it would halt the process soon the “duplicated key error” occurs but in fact, it keeps attempting all documents.Appreciate it!",
"username": "Tuan_Dinh"
},
{
"code": "",
"text": "You are welcome ",
"username": "Prasad_Saya"
},
{
"code": "ordered(false)",
"text": "My assumption was that it would halt the process soon the “duplicated key error” occurs but in fact, it keeps attempting all documents.Hi Tuan,Your assumption is actually correct. There are two modes for bulk write operations: Ordered (the default) or Unordered.Borrowing descriptions from the Bulk Write Operations documentation:With an ordered list of operations, MongoDB executes the operations serially. If an error occurs during the processing of one of the write operations, MongoDB will return without processing any remaining write operations in the list. See Ordered Bulk Write.With an unordered list of operations, MongoDB can execute the operations in parallel, but this behavior is not guaranteed. If an error occurs during the processing of one of the write operations, MongoDB will continue to process remaining write operations in the list. See Unordered Bulk Write.@Prasad_Saya’s example above sets ordered(false) in the options, so will continue on duplicate key errors.Regards,\nStennie",
"username": "Stennie_X"
},
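To make the ordered/unordered distinction concrete, here is a small sketch reusing the batch from @Prasad_Saya’s example above (it assumes the collection is emptied before each run, so only the duplicate _id 13 conflicts):

```java
// Ordered (the default): processing stops at the failing insert, so only
// writes queued *before* the duplicate are applied (doc1 only).
try {
    collection.bulkWrite(bulkWrites, new BulkWriteOptions().ordered(true));
} catch (MongoBulkWriteException e) {
    System.out.println("ordered, inserted: " + e.getWriteResult().getInsertedCount());   // 1
}

// Unordered: the duplicate is reported, but the remaining writes still
// run, so doc1 and doc3 are both inserted.
try {
    collection.bulkWrite(bulkWrites, new BulkWriteOptions().ordered(false));
} catch (MongoBulkWriteException e) {
    System.out.println("unordered, inserted: " + e.getWriteResult().getInsertedCount()); // 2
}
```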
{
"code": "",
"text": "Thank @Stennie_X for further clarifying the issue.",
"username": "Tuan_Dinh"
}
] | Handling "Duplicated key error" in bulk insert retry scenarios | 2020-04-17T01:17:03.118Z | Handling “Duplicated key error” in bulk insert retry scenarios | 27,288 |
null | [
"legacy-realm-cloud"
] | [
{
"code": "Task <41DE1866-CE63-42FA-A315-F0ED0BA37A82>.<1> finished with error [-1001] Error Domain=NSURLErrorDomain Code=-1001 \"The request timed out.\" UserInfo={_kCFStreamErrorCodeKey=-2102, NSUnderlyingError=0x600000c9f000 {Error Domain=kCFErrorDomainCFNetwork Code=-1001 \"(null)\" UserInfo={_kCFStreamErrorCodeKey=-2102, _kCFStreamErrorDomainKey=4}}, _NSURLErrorFailingURLSessionTaskErrorKey=LocalDataTask <41DE1866-CE63-42FA-A315-F0ED0BA37A82>.<1>, _NSURLErrorRelatedURLSessionTaskErrorKey=(\n\"LocalDataTask <41DE1866-CE63-42FA-A315-F0ED0BA37A82>.<1>\"), NSLocalizedDescription=The request timed out., NSErrorFailingURLStringKey=https://[myapp].cloud.realm.io/auth, NSErrorFailingURLKey=https://[myapp].us1.cloud.realm.io/auth, _kCFStreamErrorDomainKey=4}\n",
"text": "Hi,A few days ago my app stopped syncing with realm cloud.\nLogining in work, but no syncing takes place. After a minute I get this error from the framework:No other clue. I updated to the most recently pod. Nothing.\nNothing really changed on my code so this seems like a realm cloud issue.",
"username": "donut"
},
{
"code": "",
"text": "Welcome to the community @donut!Can you raise a support case for the team to investigate? The “request timed out” error may be a client/networking issue or perhaps something specific to your Realm Cloud deployment.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Realm Cloud - Timeout, will not sync | 2020-04-20T20:50:06.755Z | Realm Cloud - Timeout, will not sync | 1,878 |
null | [
"golang"
] | [
{
"code": "// IsDup returns whether err informs of a duplicate key error because\n// a primary key index or a secondary unique index already has an entry\n// with the given value.\nfunc IsDup(err error) bool {\n\tif wes, ok := err.(mongo.WriteException); ok {\n\t\tfor i := range wes.WriteErrors {\n\t\t\tif wes.WriteErrors[i].Code == 11000 || wes.WriteErrors[i].Code == 11001 || wes.WriteErrors[i].Code == 12582 || wes.WriteErrors[i].Code == 16460 {\n\t\t\t\treturn true\n\t\t\t}\n\t\t}\n\t}\n\treturn false\n}\n",
"text": "Hello,\nIn order to check if an error is duplicate key error because index, I have to write mine (because I don’t find how to do this unlike with mgo driver). Is it possible to provide it in the Go driver?Currently I use this:Many thx",
"username": "Jerome_LAFORGE"
},
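For later readers, a minimal usage sketch of the helper above in a retry path. The collection, context and document are placeholders, and IsDup is assumed to be the function from the previous post:

```go
package main

import (
	"context"
	"log"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
)

// insertIgnoringDup treats a duplicate key error as success, which makes
// re-running a whole batch after a connection failure idempotent.
func insertIgnoringDup(ctx context.Context, coll *mongo.Collection, doc bson.M) error {
	if _, err := coll.InsertOne(ctx, doc); err != nil {
		if IsDup(err) {
			log.Println("already inserted, skipping:", err)
			return nil
		}
		return err
	}
	return nil
}
```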
{
"code": "IsDup",
"text": "Hi @Jerome_LAFORGE,There is an open ticket in our Jira project to add such a helper: https://jira.mongodb.org/browse/GODRIVER-972. It’s under the “Improve error types” project, which we’re hoping to get to in the coming quarter. For the time being, I think your IsDup helper looks good.– Divjot",
"username": "Divjot_Arora"
}
] | IsDup function missing (golang) | 2020-04-18T09:10:47.950Z | IsDup function missing (golang) | 3,894 |
null | [] | [
{
"code": "",
"text": "When we say an intent exclusive lock on collection A precludes an exclusive lock, does that mean: there can be no exclusive lock on the collection? No exclusive locks on documents in the collection? Neither?? Does a collection or database level intent lock contain a reference to a specific document(s), or is it truly a lock on that collection or database and does not contain a reference to its children? Does the intent lock exist only for optimization purposes, to prevent operations and queries from checking lower level locks?I’ve been scouring the web, the old google group, stackoverflow etc. for the answers to these questions. Really appreciate any clarification thank you",
"username": "Blah_McBlahson"
},
{
"code": "",
"text": "Hello,I’ll start with this page as a reference: Multiple granularity locking - WikipediaMongoDB uses these types of locks (except it does not implement the SIX flavor of lock). MongoDB uses MGL with this hierarchy:It does not implement Document level locks in this scheme; instead, the pluggable storage engine is expected to handle concurrency between all readers and writers at this level.There is one Global lock resource, one Database lock resource per database, and one Collection lock resource per collection. Every access to a document must first lock the resources in the hierarchy order. For a reader in the foo.bar collection, this would be: Global IS, Database foo IS, Collection bar IS. Once all of those locks are granted, in order, the reader is then allowed to enter the storage engine and establish a cursor to read documents in that collection.You might consider the MGL scheme a form of optimization, but the alternative of having one lock resource per document would make the database operationally impossible, since operations that need exclusive access to a higher level (say, a drop-database operation which needs exclusive access to a Database) would first need to acquire potentially millions of document locks before it could proceed.",
"username": "Eric_Milkie"
},
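To make the “IX precludes X” point concrete, the standard MGL lock-compatibility matrix (from the Wikipedia page referenced above, SIX omitted since MongoDB does not implement it) can be sketched in a few lines of Python. This illustrates the general scheme, not MongoDB source code:

```python
# True = the two modes can be held on the same resource at the same time.
COMPATIBLE = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "X"): False,
}

# An IX lock on a collection blocks an X lock on that same collection,
# but two writers can hold IX together: per-document concurrency is left
# to the storage engine, as described above.
assert COMPATIBLE[("IX", "X")] is False
assert COMPATIBLE[("IX", "IX")] is True
```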
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | If there's an X lock on document A, does the IX lock on the collection containing A affect what locks can be on document B in the same collection? | 2020-03-15T11:59:12.627Z | If there’s an X lock on document A, does the IX lock on the collection containing A affect what locks can be on document B in the same collection? | 2,553 |
null | [
"charts",
"on-premises"
] | [
{
"code": "web-certsdb-certsstitchServerRunning failure: Can't connect to Stitch Server at http://localhost:8080. Too many failed attempts. Last error: connect ECONNREFUSED 127.0.0.1:8080Stitch startup logsmetadatasslx509 auth mechanismNodejs Mongooseshell promptStudio3TsslcertificateauthorityfilesslclientcertificatekeyfilemetadataappautheventshostinglogMongodb is reachable",
"text": "I’m trying to install Mongodb Charts on-premises version 19.12 but faced to the following issue apparently regarding ssl certificates but don’t know where is the problem and whether the issue is related to problem in web-certs or db-certs.Here is my situation when following the Official GuideI’m able to proceed to step 9 and rundocker stack deploy -c charts-docker-swarm-19.12.1.yml mongodb-chartsbut Stitch server failed to start and reports: stitchServerRunning failure: Can't connect to Stitch Server at http://localhost:8080. Too many failed attempts. Last error: connect ECONNREFUSED 127.0.0.1:8080 .And the Stitch startup logs usingdocker exec -it $(docker container ls --filter name=_charts -q) cat /mongodb-charts/logs/stitch-startup.logreads: error starting up servers: tls: private key does not match public keyHere are some more info:I’m confused why the test connection script reports a valid URI and passes the ssl certificates to the replicaset properly but the main container fails to use it and discontinues the operation specially when the logs clearly shows that Mongodb is reachable and in practice I can see that it could add a new database to the replica successfully.Any ideas/suggestions/thoughts would be greatly appreciated!\nThank you\n-Omid",
"username": "Omid"
},
{
"code": "",
"text": "Hi Omid -Sorry to hear you’re having problems - this does look like a tricky issue. Unfortunately I don’t know exactly what’s wrong, but I can give you some more info that may help.First, I can tell you that this issue would be related to the DB Certs for X.509 auth, not the Web Certs which are used for the HTTPS setup. This is because Stitch doesn’t know or care about the web setup, so it must be failing as it tries to connect to the database.As to why it’s failing to start after the URI was successfully validated, my guess is that this is due to some subtle difference in behaviour across drivers. For technical reasons, the verification process uses the Node.js driver, but Stitch uses the Go driver. I don’t know why the Go driver isn’t happy with your keys, but you might be able to do some more targeted searching with this info.Let me know what you discover.\nTom",
"username": "tomhollander"
},
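One quick way to test Tom’s theory locally is to check whether the certificate and private key supplied as the DB certs actually belong together. A sketch using openssl (the file names are placeholders, and an RSA key is assumed):

```sh
# The two modulus hashes must be identical if the key matches the cert.
openssl x509 -noout -modulus -in db-cert.pem | openssl md5
openssl rsa  -noout -modulus -in db-key.pem  | openssl md5

# Also worth confirming the key file parses cleanly on its own.
openssl rsa -noout -check -in db-key.pem
```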
{
"code": "",
"text": "Hi @tomhollander\nThanks for your reply. I agree, most likely the problem is related to the Go driver. I’ll take a closer look at it and try to figure out how it treats the keys. I will come back to you.-Omid",
"username": "Omid"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | Mongodb Charts Installation: Stitch failed to start: error starting up servers: tls: private key does not match public key | 2020-04-18T11:48:46.153Z | Mongodb Charts Installation: Stitch failed to start: error starting up servers: tls: private key does not match public key | 3,813 |
null | [
"compass",
"atlas"
] | [
{
"code": "",
"text": "Hi,I have subscribed to the free mongodb instance. Im using Mongodb Compass Version 1.20.5 on a Mac and not able to connect to my instance, if i use the below connection string:\nmongodb+srv://admxxx:@cluster0-cqdef.mongodb.net/testBut if I use the below connection string, Im able to connect:\nmongodb://admxxx:@cluster0-shard-00-00-cqdef.mongodb.net:27017,cluster0-shard-00-01-cqdef.mongodb.net:27017,cluster0-shard-00-02-cqdef.mongodb.net:27017/test?replicaSet=Cluster0-shard-0&ssl=true&authSource=adminThanks\nJag",
"username": "Jagjit_Singh"
},
{
"code": "",
"text": "Can you connect with the shell using the SRV string?If not, the most likely cause is that your are using a DNS resolver that does not support DNS seedlist.",
"username": "steevej"
},
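A quick way to test whether your DNS resolver handles the seedlist lookup, as suggested above, is to query the SRV and TXT records directly (cluster name taken from the post above):

```sh
# Both lookups must succeed for the mongodb+srv:// scheme to work.
nslookup -type=SRV _mongodb._tcp.cluster0-cqdef.mongodb.net
nslookup -type=TXT cluster0-cqdef.mongodb.net
```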
{
"code": "",
"text": "Can you post a screenshot?",
"username": "Massimiliano_Marcon"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Cannot connect to MongoDB Atlas using Compass | 2020-04-20T09:47:58.246Z | Cannot connect to MongoDB Atlas using Compass | 1,907 |
null | [
"data-modeling"
] | [
{
"code": "{\n enterprise_id: 1,\n date: <today>,\n sites: {\n OK: {\n 12: {<site object>},\n 23: {<another site>},\n 45: {<site>}\n ...\n },\n FAILED: {\n 99: {<site>},\n ....\n }\n }\n}\n",
"text": "Dear all,I need to save data into a nested array of objects for some reports, but I’d like the structure to be something like this:first question would be: this structure looks OK knowing that I might have enterprises with 10k+ sites? or should I have a documents for each site?second question: if I’ll go on with the initial structure, how can I access (and update) (for example: site 12 from OKs).Thank you very much,\nSilviu",
"username": "Silviu_Stoica"
},
{
"code": "sitesidsites{\n \"_id\" : ObjectId(\"5e9d100bf4e2664344ac733d\"),\n \"enterprise_id\" : \"1\",\n \"date\" : ISODate(\"2020-04-20T02:59:23.118Z\"),\n \"sites\" : [\n {\n \"id\" : \"12\",\n \"status\" : \"OK\",\n \"fld1\" : \"some_value_1\"\n },\n {\n \"id\" : \"23\",\n \"status\" : \"OK\",\n \"fld1\" : \"some_value_2\"\n },\n {\n \"id\" : \"99\",\n \"status\" : \"FAILED\",\n \"fld1\" : \"some_value_3\"\n }\n ]\n}\nidstatusidenterprise_iddb.collection.findOne( \n { enterprise_id: \"1\" },\n { sites: { $elemMatch: { id: \"23\", status: \"OK\" } } }\n )\n\ndb.collection.findOne( \n { enterprise_id: \"1\" },\n { sites: { $elemMatch: { id: \"23\" } } }\n )\n$elemMatch{\n \"_id\" : ObjectId(\"5e9d100bf4e2664344ac733d\"),\n \"sites\" : [\n {\n \"id\" : \"23\",\n \"status\" : \"OK\",\n \"fld1\" : \"some_value_2\"\n }\n ]\n}\n$elemmatchdb.collecion.updateOne(\n { enterprise_id: \"1\", sites: { $elemMatch: { id: \"23\", status: \"OK\" } } },\n { $set: { \"sites.$.fld1\": \"new_value_22\" } }\n)\n{\n \"_id\" : ObjectId(\"5e9d100bf4e2664344ac733d\"),\n \"sites\" : [\n {\n \"id\" : \"23\",\n \"status\" : \"OK\",\n \"fld1\" : \"new_value_22\"\n }\n ]\n}\ndb.sites.updateOne(\n { enterprise_id: \"1\", \"sites.id\": \"23\" },\n { $set: { \"sites.$.fld1\": \"new_value_99\" } }\n)\nfld1\"new_value_99\"{ \"sites.id\": 1 } ){ \"sites.id\": 1, \"sites.status\": 1 }{ enterprise_id: 1, \"sites.id\": 1 }",
"text": "This is just an approach to your situation.first question would be: this structure looks OK knowing that I might have enterprises with 10k+ sites? or should I have a documents for each site?Have a structure like the following - assuming that the number of sites will be at some constant number (not growing indefinitely), and the site id’s are unique within the sites array.second question: if I’ll go on with the initial structure, how can I access (and update) (for example: site 12 from OKs).You can query for a specific site id and status or specific site id only - for a given enterprise_id:The output will be same for both the queries. Note the $elemMatch is the projection operator:You can update a specific site’s field values as follows. Note the $elemmatch is the update operator:Query and find that updated value:Another way to update:Here the fld1’s value would have changed to \"new_value_99\".Indexing is to be used for fast access for queries as well as updates and sorting. Indexes on array fields is called as Multikey Indexes. This index will give a fast access to sites array queries and updates. Also, Multikey indexes result in large sized indexes and occupy more memory during operation (and consequently can affect the performance).Some example indexes (and need to be determined based upon your needs): { \"sites.id\": 1 } ) or { \"sites.id\": 1, \"sites.status\": 1 } or { enterprise_id: 1, \"sites.id\": 1 }.",
"username": "Prasad_Saya"
},
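If it helps, the example indexes at the end of the previous post would be created like this in the mongo shell (pick one based on your actual query patterns; the collection name is a placeholder):

```js
// Multikey index supporting lookups by enterprise and site id
db.collection.createIndex({ enterprise_id: 1, "sites.id": 1 })

// Alternative that also covers site status
db.collection.createIndex({ "sites.id": 1, "sites.status": 1 })
```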
{
"code": "",
"text": "Hello,thank you for your reply. I’ll give it a try and come back with the outcome.Cheers,\nSilviu",
"username": "Silviu_Stoica"
}
] | Numeric key for array of objects | 2020-04-19T20:36:19.685Z | Numeric key for array of objects | 3,240 |
null | [
"react-native",
"typescript"
] | [
{
"code": "D:/code/react/t8/node_modules/node-pre-gyp/lib/info.jsModule not found: Can't resolve 'aws-sdk' in 'D:\\code\\react\\t8\\node_modules\\node-pre-gyp\\lib'\n",
"text": "i want install realm 5.0.3 on react native 0.62.0\nbut i catch this error :\nFailed to compileThis error occurred during the build time and cannot be dismissed.what is problem?",
"username": "11117"
},
{
"code": "aws-sdk",
"text": "Hi,The aws-sdk package is not a dependency of the Realm React Native SDK and I cannot reproduce this error trying to follow the normal Realm React Native installation instructions. I suspect this dependency is related to another package you are trying to install (or have specified in your application dependencies).Can you provide more information on the steps to reproduce this issue:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "hithanks for your attentionafter a project was generated by “expo init”, added realm.then id add realm code based on realm site. after more try i find problem is related to this package:\nnode-pre-gypi am using:\nnode 13.13how can i solve it?",
"username": "11117"
},
{
"code": "expo init",
"text": "after a project was generated by “expo init”, added realm.then id add realm code based on realm site. after more try i find problem is related to this package:\nnode-pre-gypHi,Please provide the actual commands used and more information (output or error messages) to help reproduce the issue.For example:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "expo init",
"text": "what options did you use with expo init ?\nnothing only:\nexpo init did you have all dependencies successfully installed before adding Realm?\nyes: npm install runwhat specific steps did you take to “add Realm code”?\nnpm install realm",
"username": "11117"
},
{
"code": "aws-sdknode-pre-gypnpm install realm --save --prodexpo initexpo init? Choose a template: (Use arrow keys)\n ----- Managed workflow -----\n❯ blank a minimal app as clean as an empty canvas\n blank (TypeScript) same as blank but with TypeScript configuration\n tabs several example screens and tabs using react-navigation\n ----- Bare workflow -----\n minimal bare and minimal, just the essentials to get you started\n minimal (TypeScript) same as minimal but with TypeScript configuration\nrunnpm install",
"text": "Hi,I still can’t reproduce the error with the information provided so far, but since aws-sdk is a dev dependency for node-pre-gyp you could try: npm install realm --save --prod.expo init provides several application template options to select from, eg:Which of these template options did you choose?Note that this command line will install the run package from npm. You probably just wanted npm install.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "create-react-native-app",
"text": "Hi @11117,I just realised there is an important note below the installation info in the Realm React Native documentation:Please note that Expo does not support Realm, create-react-native-app will not work.Even if you can get past your initial install issues, apparently Expo currently only packages JavaScript code so deploying an app with compiled dependencies (like Realm) is not supported.You can upvote & watch the Expo does not support Realm feature request for updates.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Install realm 5.0.3 on react native 0.62.0 | 2020-04-16T18:51:12.037Z | Install realm 5.0.3 on react native 0.62.0 | 4,020 |
null | [] | [
{
"code": "",
"text": "why eval is removed in 4.2 version and what is alternate to this in 4.2\nif you have removed the eval command there is no meaning to use the scriptoperation.",
"username": "Swapnil_Jain"
},
{
"code": "evalevalevaleval$exprevaleval",
"text": "Welcome to the MongoDB Community @Swapnil_Jain!why eval is removed in 4.2 versionThe eval server command has been deprecated since MongoDB 3.0 and is definitely not recommendable for performance or security reasons. See eval behaviour in the MongoDB 4.0 documentation for some of the caveats.For example: server-side eval requires full permissions to your deployment, takes a global write lock while evaluating the function, and does not use indexes.what is alternate to this in 4.2The recommended approach is to write functions using the native MongoDB query and aggregation operators. Successive major server releases have significantly added to available operators and server functionality where eval might previously have been considered.In MongoDB 3.6+ you can use the $expr query operator to perform more advanced document manipulation using aggregation expressions. MongoDB 4.2 also added support for On-Demand Materialised Views.If you are doing complex calculations that cannot be expressed with query operators or aggregation, this work is likely better performed in your application code. If reducing network latency is your key goal for server-side scripting, you could consider co-locating your application code with your database instances.MongoDB Atlas (our cloud MongoDB service) also provides some alternatives with serverless functions and triggers.there is no meaning to use the scriptoperation .Server-side eval is not analogous to stored procedures: concurrency and performance is severely limited.If you have existing usage of eval and aren’t sure what approach to take, please provide more details on the functions you are replacing.Regards,\nStennie",
"username": "Stennie_X"
}
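As a concrete illustration of the $expr suggestion above: a document-field comparison that might previously have been wrapped in an eval'd JavaScript function can be expressed directly in a query. The collection and field names here are made up for the example:

```js
// Runs server-side without JavaScript execution or a global write lock.
db.orders.find({
  $expr: { $gt: ["$amountSpent", "$budget"] }
})
```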
] | eval replacement in 4.2 | 2020-04-18T01:58:12.252Z | eval replacement in 4.2 | 8,302 |
null | [
"cxx"
] | [
{
"code": "",
"text": "I’d like to use mongo-cxx-driver 3.5.0 within my project which is compiled by VS2013. Now, I managed to compile mongo-c-driver 1.16.3, mongo-cxx-driver 3.5.0 with VS2015.\nTrying run the test example with 2013, I receive hundrets of errors which are not really leading to help. I also considered to upgrade my project to VS2015, but that leads to the same amount of problems.Is there any recommended way for my case?\nThanks a lot",
"username": "Uwe_Dittus"
},
{
"code": "",
"text": "@Uwe_Dittus, please provide the errors you are seeing. Without those it is hardly even possible to guess at what is going wrong or how to fix it. It would also be helpful to know how you built and installed the C driver and the C++ driver. For example, what commands did you use and were there any warnings or errors in either of those builds?",
"username": "Roberto_Sanchez"
},
{
"code": "------ Build started: Project: Project1, Configuration: Debug Win32 ------\nSource.cpp\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\string_view.hpp(62): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\string_view.hpp(62): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp(25): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp(25): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\element.hpp(115): error C2039: 'type' : is not a member of 'bsoncxx'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\element.hpp(25): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\element.hpp(25): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp(27): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view.hpp(27): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view.hpp(27): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view.hpp(27): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(25): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(25): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(76): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(76): error C2610: 'bsoncxx::v_noabi::document::value::value(bsoncxx::v_noabi::document::value &&)' : is not a special member function which can be defaulted\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(77): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(77): error C2610: 'bsoncxx::v_noabi::document::value &bsoncxx::v_noabi::document::value::operator =(bsoncxx::v_noabi::document::value &&)' : is not a special member function which can be defaulted\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(82): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(89): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(113): error C3646: 'noexcept' : unknown override 
specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\value.hpp(117): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(26): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(26): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(77): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(77): error C2610: 'bsoncxx::v_noabi::array::value::value(bsoncxx::v_noabi::array::value &&)' : is not a special member function which can be defaulted\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(78): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(78): error C2610: 'bsoncxx::v_noabi::array::value &bsoncxx::v_noabi::array::value::operator =(bsoncxx::v_noabi::array::value &&)' : is not a special member function which can be defaulted\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(83): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(90): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(108): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\value.hpp(112): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\decimal128.hpp(25): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\decimal128.hpp(25): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\decimal128.hpp(45): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp(26): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp(26): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp(66): error C3083: 'stdx': the symbol to the left of a '::' must be a type\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp(66): error C2039: 'string_view' : is not a member of 'bsoncxx'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp(66): error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\oid.hpp(66): error C2143: syntax error : missing ',' before '&'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(29): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(29): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(86): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(86): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(86): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(111): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(111): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(111): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(111): error C2440: 'initializing' : cannot convert from 'bsoncxx::v_noabi::type' to 'int'\n This conversion requires an explicit cast (static_cast, C-style cast or function-style cast)\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(111): error C2439: 'bsoncxx::v_noabi::types::b_utf8::type_id' : member could not be initialized\n d:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(111) : see declaration of 'bsoncxx::v_noabi::types::b_utf8::type_id'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(147): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(147): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(147): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(179): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(179): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(179): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(204): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(204): error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(204): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(228): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(228): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(228): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(244): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(244): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(244): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(262): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(262): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(262): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287): error C2440: 'initializing' : cannot convert from 'bsoncxx::v_noabi::type' to 'int'\n This conversion requires an explicit cast (static_cast, C-style cast or function-style cast)\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287): error C2439: 'bsoncxx::v_noabi::types::b_date::type_id' : member could not be initialized\n d:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287) : see declaration of 'bsoncxx::v_noabi::types::b_date::type_id'\n d:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287) : see declaration of 'bsoncxx::v_noabi::types::b_date::type_id'\n d:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(287) : see declaration of 'bsoncxx::v_noabi::types::b_date::type_id'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(346): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(346): error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(346): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(362): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(362): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(362): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(400): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(400): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(400): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(419): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(419): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(419): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(458): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(458): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(458): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(494): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(494): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(494): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(530): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(530): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(530): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(559): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(559): error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(559): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(578): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(578): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(578): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(603): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(603): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(603): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(633): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(633): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(633): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(649): error C2144: syntax error : 'auto' should be preceded by ';'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(649): error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\types.hpp(649): error C2853: 'type_id' : a non-static data member cannot have a type that contains 'auto'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\core.hpp(31): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\core.hpp(31): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\core.hpp(54): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\core.hpp(55): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\stream\\closed_context.hpp(20): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\stream\\closed_context.hpp(20): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp(45): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\stdx\\optional.hpp(45): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(24): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(24): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(90): error C3646: 'noexcept' : unknown override specifier\n d:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(139) : see reference to class template instantiation 'bsoncxx::v_noabi::view_or_value<View,Value>' being compiled\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(101): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\view_or_value.hpp(114): error C3646: 'noexcept' : unknown override specifier\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view_or_value.hpp(25): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view_or_value.hpp(25): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view_or_value.hpp(28): error C2039: 'view_or_value' : is not a member of 'bsoncxx'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view_or_value.hpp(28): error C2143: syntax error : missing ';' before '<'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\array\\view_or_value.hpp(28): error C2059: syntax error : '<'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view_or_value.hpp(24): warning C4091: 'inline ' : ignored on left of 'int' when no variable is 
declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view_or_value.hpp(24): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view_or_value.hpp(27): error C2039: 'view_or_value' : is not a member of 'bsoncxx'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view_or_value.hpp(27): error C2143: syntax error : missing ';' before '<'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\document\\view_or_value.hpp(27): error C2059: syntax error : '<'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(23): warning C4091: 'inline ' : ignored on left of 'int' when no variable is declared\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(23): error C2143: syntax error : missing ';' before 'namespace'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(44): error C2664: 'bsoncxx::v_noabi::document::view::view(const bsoncxx::v_noabi::document::view &)' : cannot convert argument 1 from 'const bsoncxx::v_noabi::document::view_or_value' to 'const bsoncxx::v_noabi::document::view &'\n Reason: cannot convert from 'const bsoncxx::v_noabi::document::view_or_value' to 'const bsoncxx::v_noabi::document::view'\n No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(54): error C2664: 'bsoncxx::v_noabi::document::view::view(const bsoncxx::v_noabi::document::view &)' : cannot convert argument 1 from 'const bsoncxx::v_noabi::document::view_or_value' to 'const bsoncxx::v_noabi::document::view &'\n Reason: cannot convert from 'const bsoncxx::v_noabi::document::view_or_value' to 'const bsoncxx::v_noabi::document::view'\n No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(76): error C2664: 'bsoncxx::v_noabi::array::view::view(const bsoncxx::v_noabi::array::view &)' : cannot convert argument 1 from 'const bsoncxx::v_noabi::array::view_or_value' to 'const bsoncxx::v_noabi::array::view &'\n Reason: cannot convert from 'const bsoncxx::v_noabi::array::view_or_value' to 'const bsoncxx::v_noabi::array::view'\n No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(86): error C2664: 'bsoncxx::v_noabi::array::view::view(const bsoncxx::v_noabi::array::view &)' : cannot convert argument 1 from 'const bsoncxx::v_noabi::array::view_or_value' to 'const bsoncxx::v_noabi::array::view &'\n Reason: cannot convert from 'const bsoncxx::v_noabi::array::view_or_value' to 'const bsoncxx::v_noabi::array::view'\n No user-defined-conversion operator available that can perform this conversion, or the operator cannot be called\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(101): error C2780: '_OutTy *std::move(_InIt,_InIt,_OutTy (&)[_OutSize])' : expects 3 arguments - 1 provided\n c:\\program files (x86)\\microsoft visual studio 12.0\\vc\\include\\xutility(2510) : see declaration of 
'std::move'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(101): error C2780: '_OutIt std::move(_InIt,_InIt,_OutIt)' : expects 3 arguments - 1 provided\n c:\\program files (x86)\\microsoft visual studio 12.0\\vc\\include\\xutility(2497) : see declaration of 'std::move'\nd:\\libs\\mongo-cxx-driver\\3.5.0\\include\\bsoncxx\\v_noabi\\bsoncxx\\builder\\concatenate.hpp(101): fatal error C1003: error count exceeds 100; stopping compilation\nDone building project \"Project1.vcxproj\" -- FAILED.\n> ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========\n",
"text": "Hi Roberto, thanks for your reply,I built:The build log from the code example on VS2013",
"username": "Uwe_Dittus"
},
{
"code": "",
"text": "@Uwe_Dittus that out put is only for building the code sample. From that, it seems like a preprocessor macro issue. However, without the commands and associated output from the actual builds for the C driver and C++ driver I can only guess. The only idea that comes to me is that something happened with the C++ driver’s config.hpp header. Perhaps it wasn’t configured, perhaps it wasn’t installed. Though, it could be something else entirely. Please provide more details on the build/installation of the drivers so that I can provide a more concrete suggestion.",
"username": "Roberto_Sanchez"
},
{
"code": "d:\\mongo-cxx-driver>cmake -G \"Visual Studio 14 2015\" -DCMAKE_INSTALL_PREFIX=d:\\mongo-cxx-driver\\3.5.0\\ -DCMAKE_PREFIX_PATH=d:\\LIBS\\mongo-c-driver\\1.16.3\\ -DBOOST_ROOT=d:\\LIBS\\boost\\1.60.0 -DCMAKE_CXX_STANDARD=11 -DCMAKE_CXX_FLAGS=\"/EHsc\"<ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='RelWithDebInfo|Win32'\">\n <ClCompile>\n <AdditionalIncludeDirectories>D:\\mongo-cxx-driver\\src;D:\\LIBS\\boost\\1.60.0\\include;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libbson-1.0;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <AssemblerListingLocation>$(IntDir)</AssemblerListingLocation>\n <CompileAs>CompileAsCpp</CompileAs>\n <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>\n <ExceptionHandling>Sync</ExceptionHandling>\n <InlineFunctionExpansion>OnlyExplicitInline</InlineFunctionExpansion>\n <Optimization>MaxSpeed</Optimization>\n <PrecompiledHeader>NotUsing</PrecompiledHeader>\n <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>\n <PreprocessorDefinitions>NDEBUG;MONGO_CXX_DRIVER_COMPILING;CMAKE_INTDIR=\"RelWithDebInfo\";BSONCXX_EXPORT;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <ObjectFileName>$(IntDir)</ObjectFileName>\n </ClCompile>\n <ResourceCompile>\n <PreprocessorDefinitions>WIN32;NDEBUG;MONGO_CXX_DRIVER_COMPILING;CMAKE_INTDIR=\\\"RelWithDebInfo\\\";BSONCXX_EXPORT;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalIncludeDirectories>D:\\mongo-cxx-driver\\src;D:\\LIBS\\boost\\1.60.0\\include;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libbson-1.0;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </ResourceCompile>\n <Midl>\n <AdditionalIncludeDirectories>D:\\mongo-cxx-driver\\src;D:\\LIBS\\boost\\1.60.0\\include;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libbson-1.0;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <OutputDirectory>$(ProjectDir)/$(IntDir)</OutputDirectory>\n <HeaderFileName>%(Filename).h</HeaderFileName>\n <TypeLibraryName>%(Filename).tlb</TypeLibraryName>\n <InterfaceIdentifierFileName>%(Filename)_i.c</InterfaceIdentifierFileName>\n <ProxyFileName>%(Filename)_p.c</ProxyFileName>\n </Midl>\n <Link>\n <AdditionalDependencies>D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\lib\\bson-1.0.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;comdlg32.lib;advapi32.lib</AdditionalDependencies>\n <AdditionalLibraryDirectories>%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalOptions>%(AdditionalOptions) /machine:X86</AdditionalOptions>\n <GenerateDebugInformation>true</GenerateDebugInformation>\n <IgnoreSpecificDefaultLibraries>%(IgnoreSpecificDefaultLibraries)</IgnoreSpecificDefaultLibraries>\n <ImportLibrary>D:/mongo-cxx-driver/src/bsoncxx/RelWithDebInfo/bsoncxx.lib</ImportLibrary>\n <ProgramDataBaseFile>D:/mongo-cxx-driver/src/bsoncxx/RelWithDebInfo/bsoncxx.pdb</ProgramDataBaseFile>\n <SubSystem>Console</SubSystem>\n </Link>\n <ProjectReference>\n <LinkLibraryDependencies>false</LinkLibraryDependencies>\n </ProjectReference>\n</ItemDefinitionGroup>\n<ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='RelWithDebInfo|Win32'\">\n <ClCompile>\n 
<AdditionalIncludeDirectories>D:\\mongo-cxx-driver\\src;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libmongoc-1.0;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libbson-1.0;D:\\LIBS\\boost\\1.60.0\\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <AssemblerListingLocation>$(IntDir)</AssemblerListingLocation>\n <CompileAs>CompileAsCpp</CompileAs>\n <DebugInformationFormat>ProgramDatabase</DebugInformationFormat>\n <ExceptionHandling>Sync</ExceptionHandling>\n <InlineFunctionExpansion>OnlyExplicitInline</InlineFunctionExpansion>\n <Optimization>MaxSpeed</Optimization>\n <PrecompiledHeader>NotUsing</PrecompiledHeader>\n <RuntimeLibrary>MultiThreadedDLL</RuntimeLibrary>\n <PreprocessorDefinitions>NDEBUG;MONGO_CXX_DRIVER_COMPILING;CMAKE_INTDIR=\"RelWithDebInfo\";MONGOCXX_EXPORTS;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <ObjectFileName>$(IntDir)</ObjectFileName>\n </ClCompile>\n <ResourceCompile>\n <PreprocessorDefinitions>WIN32;NDEBUG;MONGO_CXX_DRIVER_COMPILING;CMAKE_INTDIR=\\\"RelWithDebInfo\\\";MONGOCXX_EXPORTS;%(PreprocessorDefinitions)</PreprocessorDefinitions>\n <AdditionalIncludeDirectories>D:\\mongo-cxx-driver\\src;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libmongoc-1.0;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libbson-1.0;D:\\LIBS\\boost\\1.60.0\\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </ResourceCompile>\n <Midl>\n <AdditionalIncludeDirectories>D:\\mongo-cxx-driver\\src;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libmongoc-1.0;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\include\\libbson-1.0;D:\\LIBS\\boost\\1.60.0\\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n <OutputDirectory>$(ProjectDir)/$(IntDir)</OutputDirectory>\n <HeaderFileName>%(Filename).h</HeaderFileName>\n <TypeLibraryName>%(Filename).tlb</TypeLibraryName>\n <InterfaceIdentifierFileName>%(Filename)_i.c</InterfaceIdentifierFileName>\n <ProxyFileName>%(Filename)_p.c</ProxyFileName>\n </Midl>\n <Link>\n <AdditionalDependencies>D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\lib\\mongoc-1.0.lib;..\\bsoncxx\\RelWithDebInfo\\bsoncxx.lib;D:\\LIBS\\mongo-c-driver\\1.16.0\\msvc-14.0-Win32\\lib\\bson-1.0.lib;kernel32.lib;user32.lib;gdi32.lib;winspool.lib;shell32.lib;ole32.lib;oleaut32.lib;uuid.lib;comdlg32.lib;advapi32.lib</AdditionalDependencies>\n <AdditionalLibraryDirectories>%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n <AdditionalOptions>%(AdditionalOptions) /machine:X86</AdditionalOptions>\n <GenerateDebugInformation>true</GenerateDebugInformation>\n <IgnoreSpecificDefaultLibraries>%(IgnoreSpecificDefaultLibraries)</IgnoreSpecificDefaultLibraries>\n <ImportLibrary>D:/mongo-cxx-driver/src/mongocxx/RelWithDebInfo/mongocxx.lib</ImportLibrary>\n <ProgramDataBaseFile>D:/mongo-cxx-driver/src/mongocxx/RelWithDebInfo/mongocxx.pdb</ProgramDataBaseFile>\n <SubSystem>Console</SubSystem>\n </Link>\n <ProjectReference>\n <LinkLibraryDependencies>false</LinkLibraryDependencies>\n </ProjectReference>\n</ItemDefinitionGroup>\n<?xml version=\"1.0\" encoding=\"utf-8\"?>\n<Project DefaultTargets=\"Build\" ToolsVersion=\"12.0\" xmlns=\"http://schemas.microsoft.com/developer/msbuild/2003\">\n <ItemGroup Label=\"ProjectConfigurations\">\n <ProjectConfiguration Include=\"Debug|Win32\">\n <Configuration>Debug</Configuration>\n <Platform>Win32</Platform>\n </ProjectConfiguration>\n <ProjectConfiguration Include=\"Release|Win32\">\n 
<Configuration>Release</Configuration>\n <Platform>Win32</Platform>\n </ProjectConfiguration>\n </ItemGroup>\n <PropertyGroup Label=\"Globals\">\n <ProjectGuid>{82C85CDB-8FDC-4241-8A9F-21C7781D38CF}</ProjectGuid>\n <RootNamespace>Project1</RootNamespace>\n </PropertyGroup>\n <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.Default.props\" />\n <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\" Label=\"Configuration\">\n <ConfigurationType>Application</ConfigurationType>\n <UseDebugLibraries>true</UseDebugLibraries>\n <PlatformToolset>v120</PlatformToolset>\n <CharacterSet>MultiByte</CharacterSet>\n </PropertyGroup>\n <PropertyGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\" Label=\"Configuration\">\n <ConfigurationType>Application</ConfigurationType>\n <UseDebugLibraries>false</UseDebugLibraries>\n <PlatformToolset>v120</PlatformToolset>\n <WholeProgramOptimization>true</WholeProgramOptimization>\n <CharacterSet>MultiByte</CharacterSet>\n </PropertyGroup>\n <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.props\" />\n <ImportGroup Label=\"ExtensionSettings\">\n </ImportGroup>\n <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n </ImportGroup>\n <ImportGroup Label=\"PropertySheets\" Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n <Import Project=\"$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props\" Condition=\"exists('$(UserRootDir)\\Microsoft.Cpp.$(Platform).user.props')\" Label=\"LocalAppDataPlatform\" />\n </ImportGroup>\n <PropertyGroup Label=\"UserMacros\" />\n <PropertyGroup />\n <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Debug|Win32'\">\n <ClCompile>\n <WarningLevel>Level3</WarningLevel>\n <Optimization>Disabled</Optimization>\n <SDLCheck>true</SDLCheck>\n <AdditionalIncludeDirectories>D:\\LIBS\\boost\\1.60.0\\include\\;D:\\LIBS\\mongo-c-driver\\1.16.3\\msvc-14.0-Win32\\include\\libmongoc-1.0;D:\\LIBS\\mongo-c-driver\\1.16.3\\msvc-14.0-Win32\\include\\libbson-1.0;D:\\LIBS\\mongo-cxx-driver\\3.5.0\\msvc-14.0-win32\\include\\bsoncxx\\v_noabi;D:\\LIBS\\mongo-cxx-driver\\3.5.0\\msvc-14.0-win32\\include\\mongocxx\\v_noabi;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>\n </ClCompile>\n <Link>\n <GenerateDebugInformation>true</GenerateDebugInformation>\n <AdditionalLibraryDirectories>D:\\LIBS\\mongo-cxx-driver\\3.5.0\\msvc-14.0-win32\\lib;%(AdditionalLibraryDirectories)</AdditionalLibraryDirectories>\n </Link>\n </ItemDefinitionGroup>\n <ItemDefinitionGroup Condition=\"'$(Configuration)|$(Platform)'=='Release|Win32'\">\n <ClCompile>\n <WarningLevel>Level3</WarningLevel>\n <Optimization>MaxSpeed</Optimization>\n <FunctionLevelLinking>true</FunctionLevelLinking>\n <IntrinsicFunctions>true</IntrinsicFunctions>\n <SDLCheck>true</SDLCheck>\n </ClCompile>\n <Link>\n <GenerateDebugInformation>true</GenerateDebugInformation>\n <EnableCOMDATFolding>true</EnableCOMDATFolding>\n <OptimizeReferences>true</OptimizeReferences>\n </Link>\n </ItemDefinitionGroup>\n <ItemGroup>\n <ClCompile Include=\"Source.cpp\" />\n </ItemGroup>\n <Import Project=\"$(VCTargetsPath)\\Microsoft.Cpp.targets\" />\n <ImportGroup Label=\"ExtensionTargets\">\n </ImportGroup>\n</Project>",
"text": "The only idea that comes to me is that something happened with the C++ driver’s config.hpp header. Perhaps it wasn’t configured, perhaps it wasn’t installed.Don’t know if it should, but including <mongocxx/config/config.hpp> does not help.Alright… not sure if you exactly asked for this but here comes:d:\\mongo-cxx-driver>cmake -G \"Visual Studio 14 2015\" -DCMAKE_INSTALL_PREFIX=d:\\mongo-cxx-driver\\3.5.0\\ -DCMAKE_PREFIX_PATH=d:\\LIBS\\mongo-c-driver\\1.16.3\\ -DBOOST_ROOT=d:\\LIBS\\boost\\1.60.0 -DCMAKE_CXX_STANDARD=11 -DCMAKE_CXX_FLAGS=\"/EHsc\"These is the bsoncxx_shared.vcxproj, at least the what I guess is the relevant part:… and the same for mongocxx_shared.vcxproj… AND the project file for the code sample:",
"username": "Uwe_Dittus"
},
{
"code": "noexcept",
"text": "OK. I see the problem. The C++ Driver uses the noexcept keyword, which is not supported in MSVC prior to version 2015. For instance, this GitHub issue in another project provides a more detailed explanation. Building the C++ Driver is not supported with MSVC prior to version 2015, so you will need to move from 2013 to a newer version in order successfully build.",
"username": "Roberto_Sanchez"
},
{
"code": "noexceptthrow()",
"text": "Kind of funny actually. I had this issue exactly vise versa couple days ago. Soo, anyone tried to override noexcept with throw() ?",
"username": "Uwe_Dittus"
}
] | Using mongo-cxx-driver with VS2013 | 2020-04-16T13:04:38.860Z | Using mongo-cxx-driver with VS2013 | 4,053 |
null | [
"installation"
] | [
{
"code": "sudo Mongod --dbpath /System/Volumes/MacProSSD1TB/Data/data/db\n2020-04-19T10:00:25.693+1000 I CONTROL [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'\n2020-04-19T10:00:25.697+1000 W ASIO [main] No TransportLayer configured during NetworkInterface startup\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] MongoDB starting : pid=6480 port=27017 dbpath=/System/Volumes/MacProSSD1TB/Data/data/db 64-bit host=192-168-1-102.tpgi.com.au\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] db version v4.2.5\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] git version: 2261279b51ea13df08ae708ff278f0679c59dc32\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] allocator: system\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] modules: none\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] build environment:\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] distarch: x86_64\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] target_arch: x86_64\n2020-04-19T10:00:25.698+1000 I CONTROL [initandlisten] options: { storage: { dbPath: \"/System/Volumes/MacProSSD1TB/Data/data/db\" } }\n2020-04-19T10:00:25.699+1000 I STORAGE [initandlisten] exception in initAndListen: NonExistentPath: Data directory /System/Volumes/MacProSSD1TB/Data/data/db not found., terminating\n2020-04-19T10:00:25.699+1000 I NETWORK [initandlisten] shutdown: going to close listening sockets...\n2020-04-19T10:00:25.699+1000 I - [initandlisten] Stopping further Flow Control ticket acquisitions.\n2020-04-19T10:00:25.699+1000 I CONTROL [initandlisten] now exiting\n2020-04-19T10:00:25.699+1000 I CONTROL [initandlisten] shutting down with code:100\n",
"text": "MongoDB expects the Data/data/db folders to be in a location which Mac OS Catalina does not allow. As a consequence the folders must be placed elsewhere, on one of the disk volumes. But modifying the MongoDB settings to find the new data folders is not working properly.There are many supposed solutions for this in various places on the web but none of them have solved my issues.MongoDB was installed using homebrew. There is no MongoDB directory on the file system - should there be?The actual database file structure for this installation is: /System/Volumes/MacProSSD1TB/Data/data/dbWhen I run the Mongod command I get the following errors:It seems to me that I need to:\n(a) specify to MongoDB exactly where the new data/db folders are\n(b) grant the appropriate permissions do that Mongood can access themHow do I do this?",
"username": "Sunbeam_Rapier"
},
{
"code": "",
"text": "/System/Volumes/MacProSSD1TB/Data/data/dbWhy sudo while starting mongod?\nNormally it is not recommended to use sudoDoes this dirpath physically exist on your mac box?\n/System/Volumes/MacProSSD1TB/Data/data/db\nDid you create data/db dir?\nIf yes as which user\ncd /System/Volumes/MacProSSD1TB/Data\nls -lrt data\nChek permissions\nMake sure the user who is running mongod has read/write privileges on this dir",
"username": "Ramachandra_Tummala"
},
{
"code": "mongodbrew install mongodb",
"text": " Hi @Sunbeam_Rapier welcome to the MongoDB community. The error you are getting is that the mongod process cannot find the data path you specified. Have you created that?Also note that on Mac, if you use homebrew you can brew install mongodb and this will set everything up for you.",
"username": "Doug_Duncan"
},
{
"code": "2020-04-19T14:07:21.196-0600 I STORAGE [initandlisten] exception in initAndListen: IllegalOperation: Attempted to create a lock file on a read-only directory: data/db, terminating\n",
"text": "Chek permissions\nMake sure the user who is running mongod has read/write privileges on this dirIf the directory existed and the permissions were incorrect the error would be along the lines of",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Hi DougThe issue is that Mac OS Catalina does not allow files to be placed in the location expected by MongoDB.The simplest workaround is to place the Data/data/db files in the User’s directory and then use a --dbpath statement when running mongod.Unless you are installing on a server or have multiple users sharing a mac, there is no reason for these files to be anywhere other than the user’s directory anyway.",
"username": "Sunbeam_Rapier"
},
{
"code": "",
"text": "I have now moves the file structure to my Mac User directory and I use a --dbpath statement when running mongod.It works with sudo - so I can complete the tutorials. I think it very unlikely I will ever use MongoDB in a production environment but, if I do, I will have another look at sudo and access permissions.Resolving this has been an unpleasant and lengthy process and, if MongoDB were serious about Mac users, they would have resolved this issue ages ago. I dpn’t like to spend time in the terminal - for me this is not what a Mac is all about…",
"username": "Sunbeam_Rapier"
},
{
"code": "mongodb-community/usr/local/etc/mongod.conflog directory path/usr/local/var/log/mongodbdata directory path/usr/local/var/mongodbbrew servicesbrew services start mongodb-communitydbPathmongod--dbpathdbPath/data/db",
"text": "MongoDB was installed using homebrew. There is no MongoDB directory on the file system - should there be?Hi,If you installed using the mongodb-community recipe as per Install MongoDB Community Edition on macOS, the installation should have created:The expected way to manage MongoDB would be using brew services (for example: brew services start mongodb-community), which will use the default configuration file and paths. You can change the dbPath and other options by editing the configuration file before starting MongoDB.If you start mongod manually (as per your initial example), you will have to provide the --dbpath and other desired options. The legacy hardcoded default dbPath of /data/db is not supported in Catalina.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Installation issues for MongoDB on Mac OS Catalina | 2020-04-19T00:08:35.438Z | Installation issues for MongoDB on Mac OS Catalina | 10,631 |
null | [
"python"
] | [
{
"code": "{ \n type: \"LineString\", \n coordinates: [[-29.9673156, -51.1497259], [-29.969095, -51.114017]] \n}\n",
"text": "Let’s say that some documents have GeoJSON / LineString fields that represent someone’s displacement. The coordinates that make up the LineString were brought from the google map api.Question:Is it possible to know the length of this LineString in kilometers?PS. I’m using python / pymongo",
"username": "Matheus_Saraiva"
},
{
"code": "haversine",
"text": "Hi Matheus,There is currently no inbuilt function to calculate the distance between two arbitrary geo locations in the same document, but there are well-known algorithms such as haversine that you could use.The most straightforward approach would be to use an existing Python library (for example: haversine), but you should also be able to implement this using trigonometry operators in an aggregation pipeline.Regards,\nStennie",
"username": "Stennie_X"
},
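Editor's note: a minimal Python sketch of the client-side haversine approach described above, summing the great-circle distance of each consecutive coordinate pair in the LineString. The pairs are read as [lat, lng] to match the sample document in the question; swap the order if your data follows GeoJSON's [lng, lat] convention.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p1, p2):
    # p1, p2 are (lat, lng) pairs in degrees
    lat1, lng1, lat2, lng2 = map(radians, (*p1, *p2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km ~ mean Earth radius

def linestring_length_km(coords):
    # total length = sum of distances over consecutive coordinate pairs
    return sum(haversine_km(a, b) for a, b in zip(coords, coords[1:]))

coords = [[-29.9673156, -51.1497259], [-29.969095, -51.114017]]
print(round(linestring_length_km(coords), 2))  # roughly 3.45 km for this pair
```

The same arithmetic could be pushed into an aggregation pipeline with the trigonometry operators, but for a single document the client-side sum is simpler.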
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Geospatial: LineString length in kilometer | 2020-04-19T20:37:05.416Z | Geospatial: LineString length in kilometer | 2,487 |
null | [
"aggregation",
"queries"
] | [
{
"code": "{ \n \"_id\" : ObjectId(\"5e89e93b5ee91429cc9cf7dc\"), \n \"Saison\" : \"1415\", \n \"Jahr\" : \"2014/15\", \n \"Liga\" : \"2. Landesliga Süd\", \n \"Ak\" : \"KMS\", \n \"AkBezeichnung\" : \"Kampfmannschaft\", \n \"Runde\" : NumberInt(5), \n \"HeimMs\" : \"Mühlbach/Pzg.\", \n \"AuswMs\" : \"SC ikarus Pfw.\", \n \"Datum\" : ISODate(\"2014-08-31T13:00:00.000+0000\"), \n \"HeimTore\" : NumberInt(0), \n \"AuswTore\" : NumberInt(5), \n \"Schiedsrichter\" : \"\", \n \"NichtGewertet\" : false, \n \"Torschützen\" : [\n {\n \"Torfolge\" : \"1 : 0\", \n \"SpielerName\" : \"Grünwald Hannes\", \n \"Gegnername\" : \"\", \n \"Minute\" : NumberInt(12), \n \"Bemerkung\" : \"\", \n \"AnzahlTore\" : NumberInt(1)\n }, \n {\n \"Torfolge\" : \"2 : 0\", \n \"SpielerName\" : \"Krameter Silvio\", \n \"Gegnername\" : \"\", \n \"Minute\" : NumberInt(22), \n \"Bemerkung\" : \"\", \n \"AnzahlTore\" : NumberInt(1)\n }, \n {\n \"Torfolge\" : \"3 : 0\", \n \"SpielerName\" : \"Grüll Marco\", \n \"Gegnername\" : \"\", \n \"Minute\" : NumberInt(52), \n \"Bemerkung\" : \"Foulelfmeter\", \n \"AnzahlTore\" : NumberInt(1)\n }, \n {\n \"Torfolge\" : \"4 : 0\", \n \"SpielerName\" : \"Grünwald Hannes\", \n \"Gegnername\" : \"\", \n \"Minute\" : NumberInt(71), \n \"Bemerkung\" : \"\", \n \"AnzahlTore\" : NumberInt(1)\n }, \n {\n \"Torfolge\" : \"5 : 0\", \n \"SpielerName\" : \"Krameter Silvio\", \n \"Gegnername\" : \"\", \n \"Minute\" : NumberInt(78), \n \"Bemerkung\" : \"\", \n \"AnzahlTore\" : NumberInt(1)\n }\n ]\n}\n",
"text": "I’m an MongoDB-newbie, so please allow following question:I have thousands of documents with structure like below. First, I would like to query in array “Torschützen” just for subdocuments where field “Bemerkung” is not empty. How can this be done?And second, is it possible to get just fields of subdocuments from array “Torschützen” , without explizitly eleminating fields outside this array, e.g. _id: 0, Saison: 0, Liga: 0, …?Thank you in advance for your support!",
"username": "Thomas_Stuefer"
},
{
"code": "db.getCollection(\"test\").aggregate(\n [\n { \n \"$unwind\" : { \n \"path\" : \"$Torschützen\", \n \"preserveNullAndEmptyArrays\" : false\n }\n }, \n { \n \"$match\" : { \n \"Torschützen.Bemerkung\" : { \n \"$exists\" : true, \n \"$ne\" : \"\"\n }\n }\n }, \n { \n \"$project\" : { \n \"_id\" : 0.0, \n \"Saison\" : 0.0, \n \"Liga\" : 0.0\n }\n }\n ], \n { \n \"allowDiskUse\" : false\n }\n);\n$project",
"text": "You can do something like this. however, you can modify it and create a more efficient query to have a smaller output.And sorry, I’m not sure if understood your second question correctly but basically by adding a $project stage to the aggregation pipeline the output contains only that part of document you’re looking for.Hope this helps. You may see this page: https://docs.mongodb.com/manual/aggregation/ for more details. This one https://docs.mongodb.com/manual/tutorial/query-array-of-documents/ may help as well.",
"username": "Omid"
},
{
"code": "",
"text": "Thank you very much for your quick reply, that was exactly what I searched for!!!I tried it without using aggregation framework, just with a “normal” find:db.Spiele.find({…, ‘Torschützen.Bemerkung’: {’$exists’: true, ‘$ne’: ‘’}})But I didn’t get right response, so is my request just possible with aggregation (unfortunately I’m not familar with it at the moment)?And concerning my second question: Yes, you are right, it’s simple, just using “{’$project’: {’_id’: 0, ‘Torschützen’: 1}”",
"username": "Thomas_Stuefer"
}
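Editor's note on the follow-up question above: a plain find() matches and returns whole documents, so it cannot drop the non-matching array elements (a projection with $elemMatch returns only the first match). Filtering out all matching subdocuments does need aggregation; below is a hedged pymongo sketch using $filter, which avoids the $unwind/$match round trip and keeps each game's matching scorers grouped together. Connection and database names are placeholders; the collection name comes from the thread.

```python
from pymongo import MongoClient

coll = MongoClient()["fussball"]["Spiele"]  # hypothetical db name; collection from the thread

pipeline = [
    # keep only games with at least one scorer whose Bemerkung is non-empty
    {"$match": {"Torschützen": {"$elemMatch": {"Bemerkung": {"$exists": True, "$ne": ""}}}}},
    # then keep only those matching array elements and drop all other fields
    {"$project": {
        "_id": 0,
        "Torschützen": {"$filter": {
            "input": "$Torschützen",
            "as": "t",
            # note: elements lacking the field would also pass this condition
            "cond": {"$ne": ["$$t.Bemerkung", ""]},
        }},
    }},
]
for doc in coll.aggregate(pipeline):
    print(doc)
```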
] | Querying array for subdocuments | 2020-04-19T09:50:21.529Z | Querying array for subdocuments | 1,891 |
null | [
"configuration"
] | [
{
"code": "# mongod.conf\n\n# for documentation of all options, see:\n# http://docs.mongodb.org/manual/reference/configuration-options/\n\n# where to write logging data.\nsystemLog:\n destination: file\n logAppend: true\n logRotate: rename\n path: /var/log/mongodb/mongod.log\n\n# Where and how to store data.\nstorage:\n dbPath: /data/mongo\n journal:\n enabled: true\n# wiredTiger:\n# engineConfig:\n# directoryForIndexes: true\n\n# engine:\n# mmapv1:\n\n# how the process runs\nprocessManagement:\n fork: true # fork and run in background\n pidFilePath: /var/run/mongodb/mongod.pid # location of pidfile\n timeZoneInfo: /usr/share/zoneinfo\n\n# network interfaces\nnet:\n port: 27017\n bindIp: 0.0.0.0 # Listen to local interface only, comment to listen on all interfaces.\n ssl:\n mode: requireSSL\n PEMKeyFile: /etc/ssl/myPEMKeyFile.pem\n CAFile: /etc/ssl/myCAFile.pem\n\nsecurity:\n authorization: enabled\n clusterAuthMode: x509\n# keyFile: /etc/ssl/keyfile\n\n#operationProfiling:\n\nreplication:\n replSetName: acsutrs02\n\n#sharding:\n\n## Enterprise-Only Options\n\n#auditLog:\n\n#snmp:\n# wiredTiger:\n# engineConfig:\n# directoryForIndexes: true\n",
"text": "I have the following mongo.conf file:The issue I have is with these three lines:When they are commented out, the mongod service starts with no issue. When uncommented, I get the following error message:mongod.service - MongoDB Database Server\nLoaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)\nActive: failed (Result: exit-code) since Sat 2020-04-18 05:19:49 UTC; 5s ago\nDocs: https://docs.mongodb.org/manual\nProcess: 10120 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=100)\nProcess: 10118 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 10115 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)\nProcess: 10112 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)\nMain PID: 9048 (code=exited, status=0/SUCCESS)\nApr 18 05:19:49 ageutcl02mg01 systemd[1]: Starting MongoDB Database Server…\nApr 18 05:19:49 ageutcl02mg01 mongod[10120]: about to fork child process, waiting until server is ready for connections.\nApr 18 05:19:49 ageutcl02mg01 mongod[10120]: forked process: 10124\nApr 18 05:19:49 ageutcl02mg01 mongod[10120]: ERROR: child process failed, exited with error number 100\nApr 18 05:19:49 ageutcl02mg01 systemd[1]: mongod.service: control process exited, code=exited status=100\nApr 18 05:19:49 ageutcl02mg01 systemd[1]: Failed to start MongoDB Database Server.\nApr 18 05:19:49 ageutcl02mg01 systemd[1]: Unit mongod.service entered failed state.\nApr 18 05:19:49 ageutcl02mg01 systemd[1]: mongod.service failed.Any suggestions as to what I have wrong in my mongo.conf file? I’ve tested it in a YAML validator and it came back ok, so I’m not sure what I have wrong. I’ve also compared this to a Mongo instance in our Production environment, and those parameters are set and the instance is running. This is v3.6.14 running on Centos Linux 7.",
"username": "Gary_Hampson"
},
{
"code": "",
"text": "Do you have another instance running on the same port,dbpath?\nPlease stop it and try to run mongod again",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "One thing that comes to mind is that may be this directory holds data files created by the in memory storage engine which is not compatible with wired tiger. So when you explicitly specify wired tiger in your config file, the server simply refuse to start. An ls of /data/mongo could provide clues if this is the case.See https://docs.mongodb.com/manual/core/storage-engines/",
"username": "steevej"
},
{
"code": "journalctl -u mongod2020-04-19T19:06:55.715+0000 I STORAGE [initandlisten] exception in initAndListen: InvalidOptions: Requested option conflicts with current storage engine option for directoryForIndexes; you requested true but the current server storage is already set to false and cannot be changed, terminating\n",
"text": "You have to restart with a new or empty data directory.This error should have appeared. It might be in mongo logs or journalctl -u mongod",
"username": "chris"
}
] | Trouble with mongo.conf | 2020-04-18T06:08:09.979Z | Trouble with mongo.conf | 4,311 |
null | [
"database-tools"
] | [
{
"code": "",
"text": "I have been doing an incremental backup using mongodump but whenever i am trying to restore using mongorestore I am getting \"duplicate id error’ I know that it doesnt merge duplicate element …Is there any other way to avoid it without --drop the database.",
"username": "raushan_sharma"
},
{
"code": "--drop",
"text": "Hey @raushan_sharmaFor performing a entire dump and restore I think you would have to include the --drop flag to avoid the duplicate _id errors. This is because restore is a full restore.",
"username": "Natac13"
},
{
"code": "",
"text": "@raushan_sharma I’m not aware of mongodump having incremental backup. Is this a third party ‘mongodump’ ?",
"username": "chris"
}
] | Mongorestore giving _id duplicate error | 2020-04-19T09:41:43.979Z | Mongorestore giving _id duplicate error | 2,464 |
null | [
"cxx"
] | [
{
"code": "BSONCXX_POLY_USE_STD=ON\nCMAKE_CXX_STANDARD=17\n[ 13%] Linking CXX executable test_bson.exe\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0xf17): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types10b_document7type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x10ae): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types7b_int327type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x1563): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types7b_array7type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x16f2): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types7b_int327type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x1b5d): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types7b_int327type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x1e42): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types6b_bool7type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x2292): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types7b_int327type_idE'\nCMakeFiles\\test_bson.dir/objects.a(bson_builder.cpp.obj):bson_builder.cpp:(.text+0x2579): undefined reference to `_imp___ZN7bsoncxx7v_noabi5types6b_bool7type_idE'\ncollect2.exe: error: ld returned 1 exit status\nmingw32-make[2]: *** [src\\bsoncxx\\test\\CMakeFiles\\test_bson.dir\\build.make:236: src/bsoncxx/test/test_bson.exe] Error 1\nmingw32-make[1]: *** [CMakeFiles\\Makefile2:987: src/bsoncxx/test/CMakeFiles/test_bson.dir/all] Error 2\nmingw32-make: *** [Makefile:160: all] Error 2",
"text": "os: Windows 10\ncompiler: MinGW w64 8.1.0 i686\nmongo-c-driver: 1.16.2\nmongo-cxx-driver: commit 4629521 of branch releases/v3.5Compiled with the options:I receive the following errors when try to compile mongo-cxx-driver:",
"username": "AlexxanderX"
},
{
"code": "",
"text": "@AlexxanderX what are the complete command lines you used to compiler the C driver and the C++ driver?",
"username": "Roberto_Sanchez"
},
{
"code": "cd MY_PROJECT/extlibs\ncurl -LO https://github.com/mongodb/mongo-c-driver/archive/1.16.2.zip\n7z x 1.16.2.zip -r\ncd mongo-c-driver-1.16.2/build\ncmake -G\"MinGW Makefiles\" -DCMAKE_BUILD_TYPE=Release -DBUILD_VERSION=1.16.2 -DCMAKE_INSTALL_PREFIX=MY_PROJECT/extlibs/mongo-c-driver-1.16.2 ..\nmingw32-make install\ncmakemakecd MY_PROJECT/extlibs\ncurl -LO https://github.com/mongodb/mongo-cxx-driver/archive/r3.5.0.zip\n7z x r3.5.0.zip -r\ncd mongo-cxx-driver-r3.5.0/build\ncmake -G\"MinGW Makefiles\" -DCMAKE_BUILD_TYPE=Release -DBUILD_VERSION=3.5.0 -DCMAKE_INSTALL_PREFIX=MY_PROJECT/extlibs/mongo-cxx-driver-r3.5.0 -DBSONCXX_POLY_USE_STD=ON -DCMAKE_CXX_STANDARD=17 -DCMAKE_PREFIX_PATH=MY_PROJECT/extlibs/mongo-c-driver-1.16.2 ..\nmingw32-make\ncmakemake",
"text": "For C driver I used:Here is the log of cmake and make: mongo-c-driver build log - Pastebin.comAnd for the mongo-cxx-driver:Here is the output of cmake and make: mongo-cxx-driver build log - Pastebin.comThank you for looking into this.",
"username": "AlexxanderX"
},
{
"code": "",
"text": "@AlexxanderX our test matrix for the C++ Driver only includes Visual Studio for Windows builds. I was not able to build with MinGW either. Since GCC works on other platforms, I am not sure why MinGW would not work on Windows, but it looks you will need to use Visual Studio. Note that you will want to build both the C driver and C++ driver with the same tool chain. Building the C driver with MinGW and the C++ driver with Visual Studio is likely to result in problems.",
"username": "Roberto_Sanchez"
}
] | Undefined reference to bsoncxx when try to compile | 2020-04-17T18:33:12.586Z | Undefined reference to bsoncxx when try to compile | 4,060 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello! I am very new to Mongo and so I’m not at all familiar with it’s capabilities. I may be interested in switching over a project from SQL to NoSQL, but before I delve too deeply I wanted to get some opinions.My goal is to be able to store an entire Google Maps route in a database. This would consist of an array of LatLng objects. Several thousand LatLng objects. Is Mongo capable of storing an array of this size? Is it advisable? Should I find another path forward?A little background on what I am trying to accomplish. I am working on a ride share app. A user would be entering their departure point, destination point, and date. The database then returns all routes that match the date (as well as some other search criteria such as passenger number, etc). At this point, I need to determine which of the routes are within a particular distance to the departure and end points that the user entered.What I am doing currently is making an Google API call for each returned route, using the route’s start and end point. I then compute the distance to this route from the API result. The problem is that the API calls are slow. On average, about 0.2 seconds per call. If I could simply get the route returned with the database query, I can reduce the processing time quite a bit.So, is this possible using Mongo?? I should also add, that I am building the project using Laravel. Does anyone have experience running Mongo with Laravel? Is it easy to integrate? Are there other NoSQL options that might be better?Many thanks for any input.",
"username": "Michael_Forcella"
},
{
"code": "",
"text": "Hi, I think Mongo works very well for this use-case. Have you seen this page? https://docs.mongodb.com/manual/geospatial-queries/ It’s a very good starting point.\nRegarding the max size of the array I’d say you can store large size of data as long as it’s below max size of BSON document (16MB). For more accurate info see this page: docs.mongodb.com/manual/reference/limits/#BSON-Document-Size\nAlso defining proper db indices would help a lot when it comes to query large volume of data.\nFor using Mongo in your Laravel project you may see this page about Mongodb PHP drivers: https://docs.mongodb.com/drivers/php/",
"username": "Omid"
},
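Editor's note: a hedged pymongo sketch of the geospatial approach suggested above, assuming each document stores its route as a GeoJSON LineString in a field named route (all names and coordinates here are illustrative). With a 2dsphere index, $near can return routes whose geometry passes within a given distance of a point, so the departure proximity check happens in the query instead of one Maps API call per route.

```python
from pymongo import MongoClient, GEOSPHERE

routes = MongoClient()["rideshare"]["routes"]  # hypothetical db/collection names

# a 2dsphere index supports GeoJSON LineStrings; create it once
routes.create_index([("route", GEOSPHERE)])

# routes passing within 2 km of the rider's departure point
departure = {"type": "Point", "coordinates": [-51.1497259, -29.9673156]}  # [lng, lat]
cursor = routes.find({
    "route": {"$near": {"$geometry": departure, "$maxDistance": 2000}},  # metres
    # combine with the other criteria, e.g. "date": ..., "seats": {"$gte": 1}
})
for r in cursor:
    print(r["_id"])
```

Only one $near is allowed per query, so the destination-side check could use $geoIntersects with a small buffered polygon around the destination point, or be finished in application code.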
{
"code": "",
"text": "Thanks so much for all the info. I think I’ll take the plunge and see what I can figure out.",
"username": "Michael_Forcella"
}
] | Using MongoDB to store a Google Maps route | 2020-04-18T20:34:18.754Z | Using MongoDB to store a Google Maps route | 4,691 |
null | [
"queries"
] | [
{
"code": "Id (1)\n{\n Item: pen1\n price: 10\n}\n\nid(2)\n{\n Item: pen2\n price: 15\n}\n",
"text": "How to update multiple documents based on “_id” to change field value which contains different values.Eg.requires is to update price value based on Id…\nLike update Id(1) price->20\nId(2) price->35please let me know…for updating this in 1000+ records…",
"username": "shubham_udata"
},
{
"code": "bulkWritebulkWriteinsertupdatedeletereplace",
"text": "Hey @shubham_udataI think your best bet is to use bulkWrite and give it the array of updates you are making on the collection.\nI have linked the docs for this below.You would create an array of all the updates you want done to the collection. Each item of the bulkWrite array can perform insert, update, delete and replace operations.\nIt would be up to you to determine which action is appropriate and how to update the record.",
"username": "Natac13"
}
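Editor's note: a minimal pymongo sketch of the bulkWrite approach described above, sending one UpdateOne per document so each _id gets its own price. The db/collection names and the id-to-price mapping are placeholders.

```python
from pymongo import MongoClient, UpdateOne

coll = MongoClient()["shop"]["items"]  # hypothetical db/collection names

new_prices = {1: 20, 2: 35}  # _id -> new price; would be loaded for 1000+ records
ops = [
    UpdateOne({"_id": _id}, {"$set": {"price": price}})
    for _id, price in new_prices.items()
]

# unordered lets the server apply independent updates without stopping on one failure
result = coll.bulk_write(ops, ordered=False)
print(result.modified_count)
```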
] | Update multiple documents with different values | 2020-04-19T02:41:17.544Z | Update multiple documents with different values | 25,689 |
null | [
"installation"
] | [
{
"code": "",
"text": "Hey Team, I am interested to install MongoDB silently using msiexec.exeLet’s consider a scenario\nI am installing MongoDB from GUI and I unselect Mongo as service. Then proceed with the installation.I want to achieve the above scenario through msiexec.exe command then what all value should have opted. Can you help me understand the same.msiexec.exe /l*v mdbinstall.log /qb /i mongodb-win32-x86_64-2012plus-4.2.5-signed.msi ADDLOCAL=“ServerService,Client” SHOULD_INSTALL_COMPASS=“0”As a default offering; are these two the only component which needs to be added to ADDLOCAL\nor we need to as some other values need to be added.",
"username": "Anil_Murmu"
},
{
"code": "",
"text": "It dependes on what other components you need\nAs per mongo doc those are the two components givenPlease go thru Jira tickets.There are some associated bugs like installation fails when you use more components separated by comma\nhttps://jira.mongodb.org/browse/SERVER-39025\nOne suggestion was to use addlocal=ALL and then uninstall unneeded components from programsAlso in your scenario if you unchecked Mongo as service i think you need to use ServernoService\nMay be others who did this type of silent installtion can help you more on this",
"username": "Ramachandra_Tummala"
}
] | MongoDB silent installation | 2020-04-17T06:17:49.647Z | MongoDB silent installation | 3,471 |
null | [
"dot-net"
] | [
{
"code": "",
"text": "Hi,I’m using MongoDB.Driver 2.10.3 with a .NET core 3 asp.net mvc app and Azure document db emulator / Azure cloud db. It works for the most part but I hit the following exception when the document reaches 2-3 MB doing a ReplaceOneAsync request with upsert=true:MongoDB.Driver.MongoCommandException: ‘Command update failed: Message: {“Errors”:[“Request size is too large”]}\nActivityId: 7caef9ed-0000-0000-0000-000000000000, Request URI: /apps/DocDbApp/services/DocDbServer14/partitions/a4cb495a-38c8-11e6-8106-8cdcd42c33be/replicas/1p/, RequestStats:\nRequestStartTime: 2020-04-19T00:03:47.1617890Z, RequestEndTime: 2020-04-19T00:03:47.1617890Z, Number of regions attempted:1\n, SDK: Microsoft.Azure.Documents.Common/2.9.2.’Is there any work around for it apart from splitting to smaller documents ? This isn’t ideal for me as I have base 64 encoded images I want to store which are several MB in size.Appreciate any help with this. Thanks",
"username": "Paul_Clayden"
},
{
"code": "",
"text": "I’m using MongoDB.Driver 2.10.3 with a .NET core 3 asp.net mvc app and Azure document db emulator / Azure cloud db.Hi Paul,Azure’s Cosmos DB includes an emulation of the MongoDB API with a distinct server implementation: features, behaviour, and limits will differ from the reported MongoDB server version. Questions on Cosmos are better posted on Azure’s community support channels.Is there any work around for it apart from splitting to smaller documents ? This isn’t ideal for me as I have base 64 encoded images I want to store which are several MB in size.CosmosDB currently has a 2MB document limit (versus MongoDB’s 16MB limit), so you will have to change your approach to store larger documents in Cosmos.You could also consider using MongoDB Atlas on Azure instead, which runs MongoDB Enterprise with the standard 16MB document size limit.Regards,\nStennie",
"username": "Stennie_X"
}
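Editor's note: if you do move to a MongoDB deployment with the standard 16MB limit, multi-megabyte images are usually better kept out of regular documents entirely. Below is a hedged sketch using GridFS, which stores a file's raw bytes as a stream of chunk documents (shown in Python for brevity; the .NET driver offers an equivalent GridFSBucket API in MongoDB.Driver.GridFS). The database and file names are placeholders.

```python
import gridfs
from pymongo import MongoClient

db = MongoClient()["mediadb"]  # hypothetical database name
fs = gridfs.GridFS(db)

# store raw bytes instead of a base64 string embedded in a document;
# GridFS splits the file into 255 kB chunks, so the per-document size
# limit no longer constrains the file itself
with open("photo.jpg", "rb") as f:  # hypothetical file
    file_id = fs.put(f, filename="photo.jpg")

image_bytes = fs.get(file_id).read()  # read it back by _id
```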
] | ReplaceOneAsync upsert exception: Request size is too large | 2020-04-19T01:21:01.523Z | ReplaceOneAsync upsert exception: Request size is too large | 3,431 |
null | [] | [
{
"code": "",
"text": "Let’s play a game!Today we are happy to announce the first of a series of weekly online scavenger hunts from MongoDB. Answer a series of 10 clues by visiting MongoDB resources including our blog, forums, documentation and more and learn about these resources while earning some awesome rewards!Each participant who completes a single scavenger hunt will earn:Each Monday we will post a new set of clues giving you the chance to earn additional badges and other rewards. A special reward will also be given to anyone who completes all the weekly scavenger hunts this spring and summer (more details to come).Every scavenger hunt will run from Monday until 5pm US Eastern time on Friday.Have an idea for a clue or question that should be included? Let us know!Scavenger Hunt Week One",
"username": "Ryan_Quinn"
},
{
"code": "",
"text": "Hi @Ryan_Quinn this sounds like a great way to pass the time and learn more. I clicked on the link but got the following note from Google forms:You need permissionThis form can only be viewed by users in the owner’s organization.Try contacting the owner of the form if you think this is a mistake. Learn More.I thought it might be the browser I use (Brave) blocking something, but I get the same message in plain Chrome and Safari. ",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Thanks for letting us know. Sorry about that! The form should now work as expected.",
"username": "Ryan_Quinn"
},
{
"code": "",
"text": "Thanks for the quick turnaround on this @Ryan_Quinn! I am indeed able to access the form now.",
"username": "Doug_Duncan"
},
{
"code": "",
"text": "Just finished, It was really fun! Already waiting for the next one.",
"username": "LuArGa"
},
{
"code": "",
"text": "So I have found all the answers, however I cannot complete the address section as it will not take a postal code from Canada even though it ask for State/Province and mentions the phone number is required for addresses outside of the US.Is this open in Canada? I would really like a sticker pack! Will I lose my opportunity this week since the forum does not accept postal codes?UPDATE\nI put the postal code in with the address field and left the Zipcode section blank. Please update this so that Canadians can participate ",
"username": "Natac13"
},
{
"code": "",
"text": "Same thing here from Montreal. I entered 11111 as the zipcode.",
"username": "steevej"
},
{
"code": "",
"text": "I should have used the classic 90201 like I did when I was 12 and signing up for hotmail. lol\nHoping to still get my sticker pack. Although Canada post has lost my last 2 packages due to the COVID policy changes with signatures…",
"username": "Natac13"
},
{
"code": "",
"text": "Finished this one, waiting for the next one! It was fun ",
"username": "Aditya_Sanil"
},
{
"code": "",
"text": "Agreed! Great idea MongoDB!",
"username": "Natac13"
},
{
"code": "",
"text": "@Natac13 @steevej - I’m sorry you ran into trouble with the form. Can you share a bit more detail so we can look into this? The form as configured should not have a “zip code” field and features a field labeled “postal code” which is not required and has no validation set up. The form does require name, address, city, country and email.",
"username": "Ryan_Quinn"
},
{
"code": "",
"text": "Yep sorry @Ryan_Quinn you are right, it does say Postal Code. However the error message says Must be a number\nHere is a screen shot\n\n685×507 14.1 KB\nI was able to complete the form without anything in the Postal Code field. I just move the right postal code to the address field. Hopefully I can still get the sticker pack! ",
"username": "Natac13"
},
{
"code": "",
"text": "Also I am curious if this line has wrong date on the formThis scavenger hunt will end on April 10, 2020 at 5pm US Eastern Time.Seeing that this was posted Monday April 13, 2020",
"username": "Natac13"
},
{
"code": "",
"text": "@Natac13 - That shouldn’t be a problem for shipping and I’ll make a note to correct for this when we process the results. This issue should also now be corrected. It appears that the field had required turned off and a validation entry that had been cleared but still specified “number”. That’s been removed. Thanks again for the feedback and for helping make this fun rather than frustrating ",
"username": "Ryan_Quinn"
},
{
"code": "",
"text": "No problem @Ryan_Quinn. Glad to help when I can.",
"username": "Natac13"
},
{
"code": "",
"text": "Thanks to everyone who participated! Check back Monday for next week’s all new scavenger hunt ",
"username": "Ryan_Quinn"
},
{
"code": "",
"text": "Congrats to all those who earned their Internet Detective badges this week!",
"username": "Jamie"
},
{
"code": "",
"text": "Hey, I completed the hunt too (I sure got to the stage where I entered the postal code). How is that I didn’t earn the badge? Is it for a few “chosen” ones?",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "After filling those details at the end it would have given you Thanks message and badge will be shipped to your address",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi Prasad,Badges are awarded to everyone who completed the scavenger hunt and provided their user name in the form. That said, it’s a manual load process right now and it’s possible that I or the uploader made a mistake. Let me look into it. Thanks for letting me know!Jamie",
"username": "Jamie"
}
] | Let's Play a Game! - MongoDB Scavenger Hunt #1 | 2020-04-13T17:31:06.230Z | Let’s Play a Game! - MongoDB Scavenger Hunt #1 | 9,232 |
null | [] | [
{
"code": "",
"text": "M001 Chapter 1 - Lesson 1.5 ~time in video 3:28\nfilter shows {“birth year”: {\"$gte\": 1985,\"$lt\": 1990}}\nwhereas compass version 1.20.5 shows {‘birth year’: {$gte: 1985,$lt: 1990}}also on Documents tab, video shows APPLY, notes say button changed to FIND, Compass version 1.20.5 shows button as ANALYZEThanks for the lessons, they are great!",
"username": "brandon_46935"
},
{
"code": "",
"text": "M001 Chapter 1 - Lesson 1.5 ~time in video 3:28\nfilter shows {“birth year”: {\"$gte\": 1985,\"$lt\": 1990}}\nwhereas compass version 1.20.5 shows {‘birth year’: {$gte: 1985,$lt: 1990}}The lectures are using a much older version of Compass. We’ve been told that they are due to be updated soon.also on Documents tab, video shows APPLY, notes say button changed to FIND, Compass version 1.20.5 shows button as ANALYZECan you expand on what you mean by “notes”? Perhaps share a screenshot.",
"username": "007_jb"
},
{
"code": "",
"text": "After completing the course I can’t seem to go back and actually run it through? In any case, I was referring to the notes below the video for Lesson 1.5 stating the button change from APPLY to FIND, but I think I goofed on this as when I look back I see FIND is under the Documents tab currently in Compass, perhaps I had Schema tab active and saw ANALYZE…confusion on my part probably. Long story short, disregard the button issue ",
"username": "brandon_46935"
},
{
"code": "",
"text": "",
"username": "system"
}
] | Lesson 1.5 minor tweaks for Compass 1.20.5 | 2020-04-17T17:19:01.252Z | Lesson 1.5 minor tweaks for Compass 1.20.5 | 940 |
null | [] | [
{
"code": "",
"text": "When I click on the analyze schema button, it loads and then it shows only a blank screen.\nCan anyone help?",
"username": "Haroon_54223"
},
{
"code": "",
"text": "I had this happen to me once when working with the schema for “citibike.trips”. It never finished populating the screen. Did you try restart your session?",
"username": "dppdoran"
},
{
"code": "",
"text": "Yes, I restarted the session several times.",
"username": "Haroon_54223"
},
{
"code": "",
"text": "I had the same problem. Restarting Compass worked for me.",
"username": "Saniok"
},
{
"code": "",
"text": "when I first open mongodb-compass I can get to the schema for video.movies, everything else fails thus farIf I go look at something else, then come back to video.movies, then the schema is gone and won’t come backsampling collection\nanalyzing documentsthen a blank paneeven for ships.shipwrecks with only 11k documentsthis is frustrating as the course depends on the tool, but the tool is not workingrestarting compass is not resolving the issue, neither did rebooting",
"username": "lufthans"
},
{
"code": "",
"text": "I am running Compass on MacOS. I only hit a problem once when loading the documents tab for the citibikes.trips collection, but apart from that all is working fine working from home or the office.Have you tried re-installing Compass? Can you install Wireshark and run a packet trace that filters on port 27017 (filter would be tcp.port == 27017)? The trace would allow you to see if the load is stalling.Cheers!!Dermot",
"username": "dppdoran"
},
{
"code": "",
"text": "Hi @Haroon_54223,You can try restarting Compass and ensure that you have good internet connection. Please share the screenshot of Compass Schema section and also the let me know the operating system that you are using.Thanks,\nSonali",
"username": "Sonali_Mamgain"
},
{
"code": "",
"text": "\n1.PNG1280×839 25.7 KB\nAfter I click the analyze schema button.\n2.PNG1280×836 20.9 KB\nI’m currently using Windows 8.1 64bit and I have a good internet connection.",
"username": "Haroon_54223"
},
{
"code": "",
"text": "Also I have\n",
"username": "Haroon_54223"
},
{
"code": "",
"text": "Hi Lunfthans, i am also facing the same problem.\nHow you resolved this issue ?\nCan you please help in this !",
"username": "Saurabh_14148"
},
{
"code": "",
"text": "Share a screenshot of your version of Compass.\nShare a screenshot of the problem.",
"username": "007_jb"
},
{
"code": "",
"text": "\nimage1266×721 62.8 KB\nI cannot access Schema View.",
"username": "Dakshitaa_31561"
},
{
"code": "",
"text": "What is your Compass version?\nPlease use stable version",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @Saurabh_14148,It looks like a temporary network issue to me. If you are still facing this problem then please share the following information.Share a screenshot of your version of Compass.\nShare a screenshot of the problem.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hi @Dakshitaa_31561,I hope you found @Ramachandra_37567’s response helpful. Please download the latest stable version of Compass.Please feel free to get back to us if you are still having any issue.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Hi Shubham,\nThanks,it working now",
"username": "Saurabh_14148"
},
{
"code": "",
"text": "Hey @Shubham_Ranjan,\nI have the same problem as @Dakshitaa_31561 but I already have the latest stable version. I also already re installed it, but it still does not work properly and is showing me any schema tab.Do you or someone else knows, where there could the be the worm in the program.",
"username": "Chris_46845"
},
{
"code": "",
"text": "You have the latest Community edition but you don’t have the latest Stable version.",
"username": "007_jb"
},
{
"code": "",
"text": "Hi @Chris_46845,This You have the latest Community edition but you don’t have the latest Stable version.~ Shubham",
"username": "Shubham_Ranjan"
},
{
"code": "",
"text": "Okay ,\nwell at least the penny dropped now.\nI saw now yet, that there are different 1.20.5 Versions, that wasn’t that clear for me till now.Thanks",
"username": "Chris_46845"
}
] | Schema view not working | 2019-04-26T08:49:12.645Z | Schema view not working | 1,917 |
null | [
"cxx"
] | [
{
"code": "BSONCXX::documentBSONCXX::document",
"text": "Hi,\ni receive a JSON format string from another DLL and like to parse that into a BSONCXX::document.\nBy reading the documentation and scrolling through the code, it seems like there is no such feature in the cxx-driver. Did I miss something here?Alternatively, I could pass the BSONCXX::document created in the other DLL and pass it as plain old data. But afaik this is also nothing prepared for this, right?Thanks",
"username": "Uwe_Dittus"
},
{
"code": "BSONCXX::from_json()BSONCXX::to_json()",
"text": "Nevermind…Just found BSONCXX::from_json() and BSONCXX::to_json()Sorry",
"username": "Uwe_Dittus"
}
] | Passing bsoncxx::document through DLL | 2020-04-18T11:50:29.939Z | Passing bsoncxx::document through DLL | 1,963 |
null | [
"compass"
] | [
{
"code": "",
"text": "Compass works fine on Crostini (The linux container on Chrome OS) but it does not load the saved favorites after a restart. It does save favorite files in the home directory but doesn’t load them after a restart.\nMy Chrome OS version is Version 81.0.4044.103 (Official Build) (64-bit)I hope someone can help.",
"username": "Eric_Roijen"
},
{
"code": "",
"text": "Please try to clear cache or restart Compass\nAre you using SSL parameters\nPlease have a look at this linkhttps://jira.mongodb.org/browse/COMPASS-4066",
"username": "Ramachandra_Tummala"
}
] | Compass not saving favorites on Crostini | 2020-04-18T06:08:13.917Z | Compass not saving favorites on Crostini | 2,608 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "Hello, I have a collection will store two status of a product, the payment_status is about the status of “paid” or “unpaid”, the deliver_status is status about “delivered” or “undelivered”. Some of the user with role “seller” can change the value of payment_status, some users with role “customer” can change the value of delivered_status. For every changes on every status, I want to record the value change and who made such change. Is there a way to do that? To make sure the payment_status can only be changed by role “seller”, is there way to do that, like in schema validation?Thanks,\nJames",
"username": "Zhihong_GUO"
},
{
"code": "{\n _id: 5437t3952305t,\n username: natac13,\n userType: 'seller',\n}\nif (user.userType === 'seller') {\n // do action\n}\n",
"text": "Hey @Zhihong_GUOIt sounds like you want a history of all the changes made to a product document. Correct me if I am wrong.\nHowever, I want to ask one thing. Is the history data accessed regularly? I ask because my initial thought would be to store the history of a product change in a new collection and reference the history document to the product document. This history collection would be of all changes made. Therefore the collection grows instead of an field array growing.\nOr you could have a field on the product document as an array which stores the history. Do this if you are fairly confident that the array will not grow infinitely.\nAs for making sure only authorized people can perform certain action; I would get the user data and use server code to check that the current user has the correct permissions to perform the action being requested.Therefore ifLet me know if this helps.",
"username": "Natac13"
},
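A minimal mongo-shell sketch of the array-based audit approach described above (collection and field names are hypothetical, not from the thread):

```js
// Change payment_status and append an audit entry in one atomic update,
// since both writes target the same product document.
db.products.updateOne(
  { _id: productId },
  {
    $set: { payment_status: "paid" },
    $push: {
      history: {
        field: "payment_status",
        from: "unpaid",
        to: "paid",
        changedBy: currentUser._id, // the seller making the change
        changedAt: new Date()
      }
    }
  }
)
```

For the separate-collection variant, the same history entry would instead be inserted into a dedicated history collection carrying a productId reference.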
{
"code": "",
"text": "Hello Sean,Many thanks for the answer. I think the history and the query on the history will not be very frequently, so I will try your second solution, the array based solution in my app. I will keep you updated for any progress.Regards,James",
"username": "Zhihong_GUO"
},
{
"code": "",
"text": "Glad to be able to help. Just lastly I want to clarify. If the history is accessed infrequently, then I would store it in a different collection entirely. That way you are keeping the product documents smaller in size. Then reference the history collection on the product document",
"username": "Natac13"
},
{
"code": "",
"text": "OK, quite clear. Thanks for the information.",
"username": "Zhihong_GUO"
}
] | Keep tracking some changes in field of a doc | 2020-04-12T00:33:13.550Z | Keep tracking some changes in field of a doc | 4,021 |
null | [] | [
{
"code": "",
"text": "Hii,I am facing constant 503 and 504 errors when accessing docs.mongodb.com past 1 day. It says - “Error 503 between bytes timeout”. Also, the community forum is way too slow today. Is there any issues going on?",
"username": "shrey_batra"
},
{
"code": "ping docs.mongodb.com",
"text": "Hi Shrey,We have several monitoring systems for our sites including a public Pingdom page (status.mongodb.com) and have not had any reported issues. Our global team also uses both sites extensively (and from a variety of home network connections given the current Covid situation).However, I would not be surprised if there was more internet congestion this past weekend given the Easter holidays observed in many countries combined with social distancing requirements.The Docs & Community sites are currently hosted on separate infrastructure — Docs is a static site using Fastly CDN; the Community site is a web application. If both sites are slow, I suggest checking if your ISP has a status page for known issues (or you could try contacting their support).I am facing constant 503 and 504 errors when accessing docs.mongodb.com past 1 day. It says - “Error 503 between bytes timeout”.The 503 errors suggest a possible issue with your local Fastly CDN servers fetching or caching some of the documentation content. These are more difficult issues to troubleshoot remotely because CDNs play some tricks with DNS resolution, but hopefully the problems are transient.To help investigate this, when you encounter a page error can you confirm:Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "Hi @Stennie_X, yes it may be a load factor. The community forum is now opening, a bit lag is there though. All my other websites are responding so not maybe a ISP issue. Docs are opening but sending me 503 / 504 errors a few times now. This is a screen shot of the docs website, opening the graphLookup aggregation stage. Though now it is responding well, it crashes intermittently.",
"username": "shrey_batra"
},
{
"code": "",
"text": "ping docs.mongodb.comSeeing slowness from mongodocs\nGetting same messageError 503 Backend unavailable, connection timeout\nBackend unavailable, connection timeoutPinging mongodb.map.fastly.net [151.101.10.133] with 32 bytes of data:\nReply from 151.101.10.133: bytes=32 time=63ms TTL=60\nReply from 151.101.10.133: bytes=32 time=63ms TTL=60",
"username": "Ramachandra_Tummala"
},
{
"code": "",
"text": "Hi @shrey_batra, @Ramachandra_Tummala,Thanks for providing some extra details - the team is investigating with Fastly.Regards,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Documentation site downtime | 2020-04-14T07:47:15.376Z | Documentation site downtime | 4,918 |
null | [] | [
{
"code": "",
"text": "Mongo Version: 3.2.8\nDeployment is replica set with an arbiter.Observing following issue during peak system utilization:Primary MongoDB Server:\n2020-03-25T00:06:18.640+0000 I COMMAND [ftdc] serverStatus was very slow: { after basic: 0, after asserts: 0, after backgroundFlushing: 0, after connections: 0, after dur: 0, after extra_info: 0, after globalLock: 0, after locks: 0, after network: 0, after opcounters: 0, after opcountersRepl: 0, after repl: 310, after storageEngine: 490, after tcmalloc: 490, at end: 4970 }\n2020-03-25T00:06:25.992+0000 I REPL [ReplicationExecutor] Starting an election, since we’ve seen no PRIMARY in the past 10000msSecondary MongoDB Server:\n2020-03-25T00:06:26.657+0000 I REPL [ReplicationExecutor] Error in heartbeat request to rats2.sm2:33000; ExceededTimeLimit: Operation timed out",
"username": "rimal_patel"
},
{
"code": "insert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn set repl time\n *18 *0 *42 *2 0 2|0 0 7.75G 18.9G 205M 1 0|0 0|0 237b 13.4k 25 TBFM SEC 2020-04-09T11:24:15Z\n *16 *0 *143 *0 0 1|0 0 7.75G 18.9G 204M 2 0|0 0|0 79b 12.9k 25 TBFM SEC 2020-04-09T11:24:16Z\n *11 *0 *130 *2 0 11|0 0 7.75G 18.9G 202M 1 0|0 0|0 805b 29.4k 25 TBFM SEC 2020-04-09T11:24:17Z\n *4 *0 *233 *1 0 2|0 0 7.75G 18.9G 204M 2 0|0 0|0 237b 13.4k 25 TBFM SEC 2020-04-09T11:24:18Z\n *3 *0 *232 *1 0 4|0 0 7.75G 18.9G 203M 0 0|0 0|0 353b 14.2k 25 TBFM SEC 2020-04-09T11:24:19Z\n *2 *0 *80 *0 0 5|0 0 7.75G 18.9G 203M 0 0|0 0|0 311b 14.5k 25 TBFM SEC 2020-04-09T11:24:20Z\n *79 *0 *328 *0 0 3|0 0 7.75G 18.9G 205M 5 0|0 0|0 295b 13.8k 25 TBFM SEC 2020-04-09T11:24:21Z\n *14 *0 *258 *0 0 6|0 0 7.75G 18.9G 202M 2 0|0 0|0 369b 14.9k 25 TBFM SEC 2020-04-09T11:24:22Z\n *2 *0 *170 *0 0 18|0 0 7.75G 18.9G 189M 0 0|1 0|0 1.29k 32.0k 25 TBFM SEC 2020-04-09T11:24:23Z\n *1 *0 *13 *0 0 2|0 0 7.75G 18.9G 185M 1 0|1 0|0 137b 13.3k 25 TBFM SEC 2020-04-09T11:24:24Z\ninsert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn set repl time\n *8 *0 *413 *0 0 2|0 0 7.75G 18.9G 183M 0 0|0 1|0 237b 13.4k 25 TBFM SEC 2020-04-09T11:24:25Z\n *12 *0 *274 *0 0 1|0 0 7.75G 18.9G 184M 0 0|0 0|0 79b 12.9k 25 TBFM SEC 2020-04-09T11:24:26Z\n *8 *0 *210 *0 0 2|0 0 7.75G 18.9G 181M 0 0|0 0|0 237b 13.4k 25 TBFM SEC 2020-04-09T11:24:27Z\n *3 *0 *224 *0 0 2|0 0 7.75G 18.9G 180M 0 0|0 0|0 237b 13.4k 25 TBFM SEC 2020-04-09T11:24:28Z\n *6 *0 *134 *0 0 6|0 0 7.75G 18.9G 181M 0 0|0 0|0 465b 14.6k 25 TBFM SEC 2020-04-09T11:24:29Z\n *7 *0 *190 *0 0 4|0 0 7.75G 18.9G 181M 2 0|0 0|1 253b 14.1k 25 TBFM SEC 2020-04-09T11:24:30Z\n *7 *0 *364 *0 0 3|0 0 7.75G 18.9G 181M 0 0|0 0|0 295b 13.8k 25 TBFM SEC 2020-04-09T11:24:31Z\n *4 *0 *245 *0 0 8|0 0 7.75G 18.9G 179M 0 0|0 0|0 485b 15.7k 25 TBFM SEC 2020-04-09T11:24:32Z\n *7 *0 *207 *0 0 10|0 0 7.75G 18.9G 176M 2 0|0 0|0 801b 16.5k 25 TBFM SEC 2020-04-09T11:24:33Z\n *7 *0 *115 *0 0 8|0 0 7.75G 18.9G 175M 1 0|0 0|0 513b 28.0k 25 TBFM SEC 2020-04-09T11:24:34Z\ninsert query update delete getmore command flushes mapped vsize res faults qr|qw ar|aw netIn netOut conn set repl time\n *6 *0 *181 *0 0 2|0 0 7.75G 18.9G 177M 2 0|0 0|0 237b 13.4k 25 TBFM SEC 2020-04-09T11:24:35Z\n *12 *0 *329 *0 0 7|0 0 7.75G 18.9G 177M 0 0|0 0|0 455b 27.6k 26 TBFM SEC 2020-04-09T11:24:36Z\n *13 *0 *119 *0 0 5|0 0 7.75G 18.9G 175M 0 0|0 0|0 429b 14.7k 25 TBFM SEC 2020-04-09T11:24:37Z\n",
"text": "mongostat output for refrence during the failure:",
"username": "rimal_patel"
}
] | Ftdc serverStatus was very slow results in mongo failures ReplicationExecutor] Starting an election, since we've seen no PRIMARY in the past 10000ms | 2020-04-03T20:02:52.285Z | Ftdc serverStatus was very slow results in mongo failures ReplicationExecutor] Starting an election, since we’ve seen no PRIMARY in the past 10000ms | 3,527 |
null | [] | [
{
"code": "",
"text": "I currently have an AWS backed atlas cluster peered to instances in AWS working fine. I’ve just added a peering connection to a VPC in GCP from this same cluster but I can’t get my instances to connect - it resolves the public IP which I don’t want. Adding -pri to the address just times out.The peering connection shows available, and I see the routes populated over in GCP…Anybody manage to make this work? Is this even a supported use case?",
"username": "Adam_Parmelee"
},
{
"code": "",
"text": "Hi Adam,Apologies for the confusion here: The peering only applies to the cluster in the same cloud provider. So for example if you deployed a cluster in that Project that was on GCP then it could leverage that peering connection from GCP.In order to connect from one cloud to an Atlas cluster in another, public IP whitelisting must be used. Note that Atlas requires end to end TLS (encryption over the wire).Cheers\n-Andrew",
"username": "Andrew_Davidson"
}
] | Atlas peering between cloud providers? | 2020-04-13T20:28:33.225Z | Atlas peering between cloud providers? | 1,371 |
null | [
"atlas"
] | [
{
"code": "",
"text": "am following an online tutorial and the tutor whitelist his IP address and get to create his username and password while creating a connection but in my case, this is not displaying?secondly, on a page, I saw a display saying billed $0.56/hr while I am still on free trial use of atlas account? I want to know why or am getting sth wrong?Thanks\nSuleiman",
"username": "MOHAMMAD_SULAIMAN"
},
{
"code": "",
"text": "Whitelisting IP is done under the Network access tab on the left of the screen for Atlas. Where as user management is done under Database access.I cannot speak to why you have incurred a cost. The billing info should tell you what it is for. Mongo is pretty good at being detailed for cost.",
"username": "Natac13"
},
{
"code": "",
"text": "Thank you @Natac13, have seen it, to try it later",
"username": "MOHAMMAD_SULAIMAN"
},
{
"code": "",
"text": "Hey @MOHAMMAD_SULAIMANHere is the link to the docs about whitelisting ip\nhttps://docs.atlas.mongodb.com/security-whitelist/",
"username": "Natac13"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Why am I not getting a display to whitelist my IP address? | 2020-04-14T12:12:52.891Z | Why am I not getting a display to whitelist my IP address? | 2,326 |
null | [
"views"
] | [
{
"code": "db.getCollection(\"gaDonors_don\").aggregate(\n [\n { \n \"$project\" : { \n \"_id\" : 0.0\n }\n }\n ], \n { \n \"allowDiskUse\" : false\n }\n);\ndb.createView(\n \"gaDonors_don_view\",\n \"gaDonors_don\",\n [ { $project: { \"_id\": 0 } } ],\n { \"allowDiskUse: false }\n)\n",
"text": "Hi all: I was wondering if there was a way to extract views from a collection s.t. you receive the create syntax for the view?Using a tool, like Studio3T, I can see the syntactical representation for the view:What I need is a tool/command that will generate this output from the view:adv(thanks)ance! ",
"username": "Micheal_Shallop"
},
{
"code": "db.getCollectionInfos()explain",
"text": "Hi @Micheal_Shallop,I believe the MongoDB documentation may help answer your question:Specifically, it says:The view definition is public; i.e. db.getCollectionInfos() and explain operations on the view will include the pipeline that defines the view. As such, avoid referring directly to sensitive fields and values in view definitions.Does that help?",
"username": "Justin"
},
{
"code": "",
"text": "Not really… if you’re familiar with PHPMyAdmin, there’s an option therein to export schema minus data.S’why I was looking for functionality similar to what exists on the mysql side…most of the time I’m using either the cli or tools like Studio3T to create objects… then I go back and dump the create commands for those objects and embed that code into my product-deployment scripts. (Thought the explanation might help explain what I was looking for!)",
"username": "Micheal_Shallop"
},
{
"code": "mongodump dump/dbname/v1.metadata.json\n {\"options\":{\"viewOn\":\"people\",\"pipeline\":[{\"$project\":{\"_id\":{\"$numberDouble\":\"0.0\"}}}]},\"indexes\":[],\"uuid\":\"\"}\nmongodump -d dbname -c v1",
"text": "You can use mongodump as on views it will dump out the view definition. Example, I have view “v1” on collection “people” and in my dump I get this file:The above was a result of mongodump -d dbname -c v1",
"username": "Asya_Kamsky"
},
{
"code": "{allowDiskUse:any}db.system.views.find()\n{ \"_id\" : \"demo.v1\", \"viewOn\" : \"people\", \"pipeline\" : [ { \"$project\" : { \"_id\" : 0 } } ] }\n{ \"_id\" : \"demo.v2\", \"viewOn\" : \"people\", \"pipeline\" : [ { \"$match\" : { \"a\" : { \"$ne\" : 1 } } } ], \"collation\" : { \"locale\" : \"fr\", \"caseLevel\" : false, \"caseFirst\" : \"off\", \"strength\" : 1, \"numericOrdering\" : false, \"alternate\" : \"non-ignorable\", \"maxVariable\" : \"punct\", \"normalization\" : false, \"backwards\" : false, \"version\" : \"57.1\" } }\n\n db.system.views.aggregate([{$replaceWith:{\n create:{$substr:[\"$_id\", {$add:[1,{$indexOfCP:[\"$_id\",\".\"]}]},999]}, \n viewOn: \"$viewOn\", pipeline:\"$pipeline\", collation:\"$collation\" \n }}])\n { \"create\" : \"v1\", \"viewOn\" : \"people\", \"pipeline\" : [ { \"$project\" : { \"_id\" : 0 } } ] }\n { \"create\" : \"v2\", \"viewOn\" : \"people\", \"pipeline\" : [ { \"$match\" : { \"a\" : { \"$ne\" : 1 } } } ], \"collation\" : { \"locale\" : \"fr\", \"caseLevel\" : false, \"caseFirst\" : \"off\", \"strength\" : 1, \"numericOrdering\" : false, \"alternate\" : \"non-ignorable\", \"maxVariable\" : \"punct\", \"normalization\" : false, \"backwards\" : false, \"version\" : \"57.1\" } }\ndb.runCommand()$replaceWith$replaceRoot:{newRoot:",
"text": "Actually a couple of additional comments.You cannot pass {allowDiskUse:any} to view definitions. You can see some of the rationale here: https://jira.mongodb.org/browse/SERVER-27440However, if you want to dump out a command which will create the view, you can do it like this (warning, no guarantees that this will continue to work in the future):Here are two ways to get the data about views from the system.views collection. Note that the second version is the exact document you would pass to db.runCommand() to create the view.P.S. I’m running 4.2 where $replaceWith is an alias for $replaceRoot:{newRoot: (slightly shorter to write)",
"username": "Asya_Kamsky"
},
{
"code": "",
"text": "@Asya_Kamsky - Thank you for the system queries to generate the script for reproducing the create commands - I can definitely use that in my deployment automation!re: allowDiskUse - I’d not actually used that yet when creating views but I did think that was an acceptable option param. Now I know. Again, thank you for the help!–mike",
"username": "Micheal_Shallop"
},
{
"code": "",
"text": "Thanks very much for this, was exactly what we needed ",
"username": "John_Clark"
}
] | Fetching view schema...? | 2020-02-12T19:12:19.452Z | Fetching view schema…? | 4,901 |
null | [
"node-js"
] | [
{
"code": "function iterate(cursor, db, maxDocCount, callBack){\n cursor.hasNext((error, result) => {\n if(error) throw error;\n if(result && docCount < maxDocCount){\n cursor.next((error, url2) => {\n if(error) throw error;\n //*just displaying the URLs for now, eventually will need to test the URL, update the status and only iterate again after the updated (real URL) or skipped (non-existing URL)\n console.log(rowIndex, url2);\n docCount++;\n iterate(cursor, db, callBack);\n //*/\n });\n }else callBack();\n });\n};\n() => {\n db.close();\n console.log(\"done in \" + ((new Date() - start) / 1000) + \" sec(s)\");\n}\n",
"text": "Good morning,I am new to MongoDB and new here, too. I have a collection of over 25 million documents in a collection. Each document consists of a simple JSON with a numeric id, a URL and an integer to store the status of that document. I was able to use a cursor and the forEach method but I want to test the URL and update the status if it’s an existing URL. It seems that I can’t stop a forEach loop before the whole collection has been processed. A while loop inside the forEach loop also gave me an error at document #1000, cursor not found. Therefore I am using the hasNext() and next() cursor methods instead:I created this function below:The callback function is this:The script works fine at first but then it stops at around document #565. It’s sometimes a few records earlier or a few ones later. The callback function never runs, it just stops and returns the command prompt. I have checked the MongoDB log file, it acknowledges the job but says nothing about why it stopped. I am on Windows 10 using Node in the command line.I intend on using the request package to test the URLs. If the request returned a status code and no error then I want to update the status of that MongoDB document from 0 to 1. If that’s no valid URL I want to skip it. Then I increment the docCount variable and process the next document.Am I doing something fundamentally wrong with my code? Is there a better way?Thanks,Alban",
"username": "Alban_Gerome"
},
{
"code": "",
"text": "I was able to get past that issue in the end by splitting the data set into “pages” of 10 documents. 10 is an arbitrary number but by using limit() and skip() I was able to loop through the 10 documents on the current page, skip to the next page, rinse and repeat until the forEach loop reaches the last document. I was very impressed with the speed, too. Now I need my script to check the urls but that’s another challenge.",
"username": "Alban_Gerome"
},
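A rough sketch of that paging loop with the Node driver (names assumed, not taken from the original script):

```js
// Walk the collection in fixed-size pages instead of holding one long-lived cursor.
const pageSize = 10;
let page = 0;
let batch;
do {
  batch = await collection.find({})
    .skip(page * pageSize)
    .limit(pageSize)
    .toArray();
  for (const doc of batch) {
    // test doc.url here, then update doc.status accordingly
  }
  page++;
} while (batch.length === pageSize);
```

Note that skip() still scans past all skipped documents, so it slows down on deep pages; paginating on _id instead (find({ _id: { $gt: lastSeenId } }).limit(pageSize)) avoids that cost and stays correct even while processed documents change.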
{
"code": "",
"text": "I was able to use a cursor and the forEach method but I want to test the URL and update the status if it’s an existing URLIn general, to update documents in a collection based upon a condition, you use the one of the many update methods. All these update methods take a query filter to specify a condition.See Update Documents.",
"username": "Prasad_Saya"
}
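For the URL-checking use case above, the per-document update could look like this (a sketch; field names assumed from the thread):

```js
// After a successful HTTP check, flip the status of that one document.
await collection.updateOne(
  { _id: doc._id, status: 0 },
  { $set: { status: 1 } }
);
```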
] | Iterating through collection with hasNext() and next() stops at document ~565 | 2020-04-13T12:28:24.691Z | Iterating through collection with hasNext() and next() stops at document ~565 | 3,346 |
null | [
"java"
] | [
{
"code": "",
"text": "Hello,I want to code a Database with MongoDB Reactivestreams, but a friend of mine and myself are not able to import either the SubscriberHelpers.PrintDocumentSubscriber, the SubscriberHelper.ObservableSubscriber, the SubscriberHelpers.OperationSubscriber and the SubscriberHelpers.PrintSubscriber. I am using Java 1.8 and tried every version of Reactivestreams, but none of the version let me import these. Please Help",
"username": "Max_S"
},
{
"code": "SubscriberHelpersSubscriberHelpers.java",
"text": "I want to code a Database with MongoDB Reactivestreams, but a friend of mine and myself are not able to import either the SubscriberHelpers.PrintDocumentSubscriber, the SubscriberHelper.ObservableSubscriber, …The SubscriberHelpers is a utility class shown with the examples; it is not part of the driver software. It is used as shown in the Quick Tour (with examples) section of the Reactive Streams Java Driver page.The source code for the SubscriberHelpers.java can be found at: mongo-java-driver-reactivestreams/examples/tour/src/main/tour/. The usage of the code and examples source code is also there - you can compile the Java code and use it with your program.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Reactivestreams missing import in every version | 2020-04-16T17:25:51.082Z | Reactivestreams missing import in every version | 4,043 |
null | [
"cxx"
] | [
{
"code": "",
"text": "HiI want to use insertMany, but I don’t need the list of inserted _id’s. This will impact, how small ever, the network bandwidth. Is there an alternative bulk insert method that does not return the list?I’m specifically using mongocxx 3.5.0.",
"username": "rompelstompel"
},
{
"code": "k_unacknowledgedauto writeConcern = mongocxx::write_concern{}\nwriteConcern.acknowledge_level(mongocxx::write_concern::level::k_unacknowledged); \ninsert_many()",
"text": "Hi @rompelstompel, welcome!I want to use insertMany, but I don’t need the list of inserted _id’s.Depending on your use case, you can set the acknowledge_level to k_unacknowledged. i.e.However, please note that if there are any errors during the insert (one or many documents) it wouldn’t be returned as well which could lead to missing data without you knowing.I would recommend just to leave it as is, and just discard the returned result from insert_many().Regards,\nWan.",
"username": "wan"
}
] | insertMany without returning a list of _id's | 2020-04-02T15:01:56.933Z | insertMany without returning a list of _id’s | 2,564 |
null | [
"aggregation",
"mongodb-shell"
] | [
{
"code": "",
"text": "Hello everyoneIs there any command I can use to remove a field from MongoDB entire collection?I used the $unset aggregation command but it does not remove from the entire collection at once.Sincerely\nEzequias Rocha",
"username": "Ezequias_Rocha"
},
{
"code": "db.coll.updateMany(\n {}, \n {\n $unset: {\"yourField\": \"\"}\n }\n)\n",
"text": "Hello everyoneIs there any command I can use to remove a field from MongoDB entire collection?I used the $unset aggregation command but it does not remove from the entire collection at once.Hi! You can use:",
"username": "Leandro_Domingues"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Remove field from entire collection | 2020-04-16T13:30:34.439Z | Remove field from entire collection | 2,035 |
null | [
"server",
"release-candidate"
] | [
{
"code": "",
"text": "MongoDB 4.2.6-rc0 is out and is ready for testing. This is a release candidate containing only fixes since 4.2.5. The next stable release 4.2.6 will be a recommended upgrade for all 4.2 users.Fixed in this release:4.2 Release Notes 6 | All Issues 1 | Downloads 2As always, please let us know of any issues.– The MongoDB Team",
"username": "Dima_Agranat"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.2.6-rc0 is released | 2020-04-16T14:58:26.246Z | MongoDB 4.2.6-rc0 is released | 2,175 |
null | [
"production",
"server"
] | [
{
"code": "",
"text": "MongoDB 4.0.18 is out and is ready for testing. This release contains only fixes since 4.0.17, and is a recommended upgrade for all 4.0 usersFixed in this release:4.0 Release Notes 2 | All Issues 2 | Downloads 1As always, please let us know of any issues.– The MongoDB Team",
"username": "Dima_Agranat"
},
{
"code": "",
"text": "",
"username": "Stennie_X"
}
] | MongoDB 4.0.18 is released | 2020-04-16T14:42:57.545Z | MongoDB 4.0.18 is released | 1,991 |
[
"stitch"
] | [
{
"code": "{\"error\":\"execution time limit exceeded\",\"error_code\":\"ExecutionTimeLimitExceeded\n",
"text": "Hi,\nWelcome everyone,\nI post this here just in case someone already encountered this issue before asking help from mongo team.\nWe use google spreadsheet script to send data to mongodb using stitch as a middle-man.\nThe spreadsheet is 650 lines long with three columns, half-way across this is the error that we get :\nCapture d’écran 2020-04-08 à 21.34.14792×494 49.1 KB\nWould any have a clue on how to solve this ?",
"username": "Pierre_Gancel"
},
{
"code": "",
"text": "Hey, having a similar problem, in the official documentation it says that “Function runtime is limited to 90 seconds”.\nThat’s most probably the reason we’re getting that error.\nHope there’s a way to bypass that limitation somehow.",
"username": "Ray_Remnant"
},
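One way to stay under the per-call limit is to split the sheet into chunks on the client side, so each call handles only a slice. A rough Google Apps Script sketch (the webhook URL and payload shape are placeholders, not the real endpoint):

```javascript
function pushInChunks() {
  var rows = SpreadsheetApp.getActiveSheet().getDataRange().getValues();
  var chunkSize = 100; // tune so one call finishes well under the 90 s limit
  for (var i = 0; i < rows.length; i += chunkSize) {
    var chunk = rows.slice(i, i + chunkSize);
    UrlFetchApp.fetch("https://webhooks.mongodb-stitch.com/.../incoming_webhook/rows", {
      method: "post",
      contentType: "application/json",
      payload: JSON.stringify({ rows: chunk })
    });
  }
}
```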
{
"code": "",
"text": "Hi @Ray_Remnant,\nOfficial reply is no way to bypass this limitation,\nSwitching back to excel and local loading to avoid further errors,\nbest",
"username": "Pierre_Gancel"
}
] | Stitch 503 exceeded","error_code":"ExecutionTimeLimit Exceeded" | 2020-04-08T20:23:09.136Z | Stitch 503 exceeded”,”error_code”:”ExecutionTimeLimit Exceeded” | 2,850 |
null | [
"atlas"
] | [
{
"code": "use test\n\ndb.createUser(\n{\n user: \"test\",\n pwd: \"Password123\",\n roles: [\n {\n role: \"readWrite\",\n db: \"test\"\n }]\n})\n\n2020-04-15T21:56:12.268+0100 E QUERY [js] uncaught exception: Error: couldn't add user: not authorized on test to execute command\n",
"text": "Hello ,Is there a way to automate db and dbUser creation on Atlas? Even with Atlas admin i can’t create db and user via following as it is giving a permission error. My aim is to create a Jenkins Job that will be a part of automation of entire infrastructure:So what do you suggest to automate this db and dbUser creation via Jenkins ?",
"username": "Omer_Sen"
},
{
"code": "",
"text": "Welcome to the community @Omer_Sen!As noted in the documentation on Configuring Database Users:Atlas rolls back any user modifications not made through the UI or API. You must use the Atlas UI or API to add, modify, or delete database users on Atlas clusters.You need to use the Atlas API for automating the management of Atlas database users or other aspects of your clusters.If you are using any infrastructure management tools, there may be a recipe available leveraging the Atlas API. For example, HashiCorp Terraform has an officially approved and tested MongoDB Atlas Provider plugin.Regards,\nStennie",
"username": "Stennie_X"
}
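For reference, a hedged example of calling the Atlas Database Users endpoint from a Jenkins job (keys, group ID and credentials are placeholders; check the current Atlas API docs for the exact payload):

```bash
curl --user "{PUBLIC_KEY}:{PRIVATE_KEY}" --digest \
  --header "Content-Type: application/json" \
  --request POST \
  "https://cloud.mongodb.com/api/atlas/v1.0/groups/{GROUP-ID}/databaseUsers" \
  --data '{
    "databaseName": "admin",
    "username": "test",
    "password": "Password123",
    "roles": [ { "roleName": "readWrite", "databaseName": "test" } ]
  }'
```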
] | Automated Database and DatabaseUser creation on Atlas | 2020-04-15T21:30:51.810Z | Automated Database and DatabaseUser creation on Atlas | 3,061 |
null | [
"kafka-connector"
] | [
{
"code": "",
"text": "Hi I recently started looking at MongoDB Kafka source connector and wondering how to specify the fields in a collection, I would like to monitor for any data change? Also how list of fields I am interested in to be in the payload?\nCheers",
"username": "Deepak_Jain"
},
{
"code": "$match",
"text": "Hi @Deepak_Jain, welcome!how to specify the fields in a collection, I would like to monitor for any data change?Try to use the Custom Pipeline configuration setting to filter the change events output. The data change is based on change stream events, so if you can define $match to filter certain fields that you would like to monitor.Regards,\nWan.",
"username": "wan"
}
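For example, the source connector's pipeline setting accepts a stringified aggregation pipeline. A sketch that only emits update events touching a hypothetical field might look like this (other settings elided):

```json
{
  "connection.uri": "mongodb://...",
  "database": "mydb",
  "collection": "mycoll",
  "pipeline": "[{\"$match\": {\"updateDescription.updatedFields.myField\": {\"$exists\": true}}}]"
}
```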
] | White listing fields to be monitored for a change in a collection | 2020-04-09T06:59:54.449Z | White listing fields to be monitored for a change in a collection | 1,607 |
null | [] | [
{
"code": "",
"text": "(I didn’t know which category was this better for)I just signed up to the forum last week and today I got an “email summary” with links to different post, but all the links were to MongoDB - Sign Inlike:https://stage.developer.mongodb.com/community/forums/t/mongodb-4-0-18-rc0-is-released/2565And, I don’t have credentials for those links ThanksDiego",
"username": "Diego_Medina"
},
{
"code": "",
"text": "Hi Diego,Apologies for any confusion. Staging is a separate test site used by the MongoDB team to validate upgrades and configuration changes for the forum software.If you received any emails referencing the staging site, those were due to a misconfiguration. We’re looking into why the emails were triggered, but you can safely ignore these.Thanks,\nStennie",
"username": "Stennie_X"
},
{
"code": "",
"text": "This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Email summary goes to staging | 2020-04-16T00:15:27.411Z | Email summary goes to staging | 2,978 |
null | [
"configuration"
] | [
{
"code": "",
"text": "Hello Experts,\nI am beginner to operate MongoDB and I have big challenge to handle/control host memory usage.\nI run 3 replica mongoDB environment, each server has 125GB memory and 2 of them were using almost 50% of system memory which is almost 60GB out of 125GB. And I have no idea how to control the memory usage under 10GB.\nSo I am absolutely looking for somebody who can advise on above matter.Will stay tuned and thanks in advance to whom give support on this matter.",
"username": "ChulHyun_Han"
},
{
"code": "",
"text": "See: https://docs.mongodb.com/manual/core/wiredtiger/#memory-useWith WiredTiger, MongoDB utilizes both the WiredTiger internal cache and the filesystem cache.Starting in MongoDB 3.4, the default WiredTiger internal cache size is the larger of either:50% of (RAM - 1 GB), or\n256 MB.Via the filesystem cache, MongoDB automatically uses all free memory that is not used by the WiredTiger cache or by other processes.",
"username": "chris"
},
{
"code": "storage:\n dbPath: /ebs01/mongo/data/\n engine: wiredTiger\n directoryPerDB: true\n wiredTiger:\n engineConfig:\n cacheSizeGB: 8\n directoryForIndexes: true\n",
"text": "You can limit the size of WiredTiger internal cache by setting up storage.wiredTiger.cacheSizeGB parameter during mongod startup or in configuration file. Below is the snippet of a configuration file used in one of my deployment where i limit the internal cache to 8GB.Hope this helps.Regards,\nE",
"username": "errythroidd"
}
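If a restart is not convenient, the WiredTiger internal cache can also be resized at runtime; a sketch:

```js
db.adminCommand({
  setParameter: 1,
  wiredTigerEngineRuntimeConfig: "cache_size=8G"
})
```

Keep in mind this only caps the internal cache; the filesystem cache will still use whatever memory is otherwise free.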
] | MongoDB using almost 50% memory usage | 2020-04-14T05:47:50.472Z | MongoDB using almost 50% memory usage | 5,171 |
null | [
"queries",
"performance"
] | [
{
"code": "db.collection.count({\"a\" : 100, \"b\" : {$in : [1, 4]}, \"c\" : -1, \"d\" : {$in : [1,2,3]}, \"e\" : \"hello\"})\n{\"a\" : -1, \"b\" : -1, \"c\" : -1, \"d\" : -1, \"e\" : -1}\nexplainIXSCAN{\n \"executionStats\": {\n \"executionSuccess\": true,\n \"nReturned\": 141603,\n \"executionTimeMillis\": 177, <-------------\n \"totalKeysExamined\": 141604,\n \"totalDocsExamined\": 0,\n \"executionStages\": {\n \"stage\": \"IXSCAN\",\n \"nReturned\": 141603,\n \"executionTimeMillisEstimate\": 13 <-----------\n }\n }\n}\n",
"text": "I’m analyzing a count command on a collection, which has 440k docs:In order to improve the performance, I create the following index:The result of explain shows that mongodb does a IXSCAN step to do the counting. However, I’m confused by the executionTimeMillis:You can see that executionStats.executionTimeMillis is much larger than executionStats.executionStages.executionTimeMillisEstimate .It seem that the IXSCAN step only takes 13ms, but why count takes 117ms? Is it possible to optimize this?",
"username": "new_sewe"
},
{
"code": "executionStats.executionTimeMillis",
"text": "@new_sewe and welcome to the community.executionStats.executionTimeMillis is the overall time for execution. This time is not only the time that the query runs, but it includes the time it takes to generate/pick the execution plan. Do you have multiple indexes that could be used to satisfy this query? Any indexes that have any of the fields in the query will be considered when building out a plan.As for index optimization, you generally want to have your equality match fields (‘a’, ‘c’ and ‘e’) in your example as the left most fields in your index. Which of those would be placed first would be based on your data and how selective it is. This means the first field in the index would be the one that filters out the most data to be returned.After your equality match fields, the next fields in the index will generally be used for sorting. The last fields in the index will be used for inequality/range based matches (‘b’ and ‘e’ in your query).Indexing is an art that must be practiced to get right. Make sure you don’t have unnecessary indexes, and make sure the indexes you do have are optimized for the queries you run the most.",
"username": "Doug_Duncan"
}
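Applying that equality-first ordering to the query in this thread would give something like the following (the exact order among the equality fields should be chosen by selectivity in your data):

```js
// equality fields (a, c, e) first, then the $in/range fields (b, d)
db.collection.createIndex({ a: 1, c: 1, e: 1, b: 1, d: 1 })
```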
] | executionTimeMillis is much larger than executionTimeMillisEstimate | 2020-04-15T01:51:59.237Z | executionTimeMillis is much larger than executionTimeMillisEstimate | 5,229 |
null | [
"cxx"
] | [
{
"code": "libbson-1.0Config.cmake\nlibbson-1.0-config.cmake\n",
"text": "Hi, I’m currently trying to compile mongocxx-driver under Ubuntu, and I ran into some issues.I use the guide found at\nhttp://mongocxx.org/mongocxx-v3/installation/\nand have successfully installed the dependencies as described under\nhttp://mongoc.org/libmongoc/current/installing.htmlUnder Step 4, Configure the driver, I ran into the issues.\nSo far, any attempt of calling cmake resulted in this:<<\nCMake Error at src/bsoncxx/CMakeLists.txt:81 (find_package):\nBy not providing “Findlibbson-1.0.cmake” in CMAKE_MODULE_PATH this project\nhas asked CMake to find a package configuration file provided by\n“libbson-1.0”, but CMake did not find one.Could not find a package configuration file provided by “libbson-1.0”\n(requested version 1.13.0) with any of the following names:Add the installation prefix of “libbson-1.0” to CMAKE_PREFIX_PATH or set\n“libbson-1.0_DIR” to a directory containing one of the above files. If\n“libbson-1.0” provides a separate development package or SDK, be sure it\nhas been installed.Those files do not exist. I have libbson-1.0, but not any libbson cmake file.So far, I found the similar questionsHi,\n\nI've manually installed `libbson`, `mongo-c-driver` and `mongo-cxx-driver…` using their respective `master` branches from github.\n\nThey are all installed under `/usr/local/*` and I've kept their respective build directories (in my home dir) just in case.\nRegardless of passing the actual directory to cmake, I keep getting:\n\n```\n-- No build type selected, default is Release\nCMake Error at CMakeLists.txt:97 (find_package):\n By not providing \"FindLibBSON.cmake\" in CMAKE_MODULE_PATH this project has\n asked CMake to find a package configuration file provided by \"LibBSON\", but\n CMake did not find one.\n\n Could not find a package configuration file provided by \"LibBSON\"\n (requested version 1.3.4) with any of the following names:\n\n LibBSONConfig.cmake\n libbson-config.cmake\n\n Add the installation prefix of \"LibBSON\" to CMAKE_PREFIX_PATH or set\n \"LibBSON_DIR\" to a directory containing one of the above files. If\n \"LibBSON\" provides a separate development package or SDK, be sure it has\n been installed.\n```\n\nThe cmake modules for the aforementioned libraries are (by default) installed in:\n\n```\ns -l /usr/local/lib/cmake/\ntotal 24\ndrwxr-xr-x 2 root root 4096 Jul 17 14:46 libbson-1.0\ndrwxr-xr-x 2 root root 4096 Jul 17 15:11 libbsoncxx-3.1.1-pre\ndrwxr-xr-x 2 root root 4096 Jul 17 14:46 libbson-static-1.0\ndrwxr-xr-x 2 root root 4096 Jul 17 14:51 libmongoc-1.0\ndrwxr-xr-x 2 root root 4096 Jul 17 14:51 libmongoc-static-1.0\ndrwxr-xr-x 2 root root 4096 Jul 17 15:11 libmongocxx-3.1.1-pre\n```\nI've even gone as far as exporting CMAKE_ROOT:\n\n```\nexport CMAKE_ROOT=/usr/local/lib/cmake/:/usr/lib/cmake/:/usr/share/cmake-3.5/Modules\n```\n\nTrying various locations doesn't work:\n\n- system wide: `cmake .. -DLIBBSON_DIR=/usr/local/lib/cmake/libbson-1.0`\n- local build dir: `cmake .. -DLIBBSON_DIR=/home/alex/codez/robot_platform/platform/src/vendors/libbson/build`\n\nThe issue appears to be that libbson has the following cmake module `libbson-1.0-config.cmake` to which I setup a symbolic link, but still there is no difference.\nbut those did not really help me. 
I feel that if I blindly start to edit some CMakeFile, this will do me no good.If it is relevant, my last attempt at configuration used the command\ncmake … \n-DCMAKE_BUILD_TYPE=Release \n-DBSONCXX_POLY_USE_MNMLSTC=1 \n-DCMAKE_INSTALL_PREFIX=/usr/localI tried out several variants and also to execute it under sudo, same outcome.\nI also tried to set CMAKE_PREFIX_PATH to something that contains the libbson library files, but as it specifically asks for a libbson cmake file, I didn’t expect it to work to begin with.",
"username": "Ksortakh_Kraxthar"
},
{
"code": "",
"text": "@Ksortakh_Kraxthar how did you install the C driver? Was it from source or from Ubuntu repository packages? What commands did you use?",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "@Robert_Sanchez\nYou were correct, the C driver wasn’t properly built, and it works now.\nThanks, I would never have guessed that a dependency was missing from the error message.",
"username": "Ksortakh_Kraxthar"
},
{
"code": "",
"text": "@Ksortakh_Kraxthar That’s excellent news. Feel free to let us know if you encounter any other issues. Incidentally, depending on what version of Ubuntu you are using, you may find that a recent C driver is available directly via apt.",
"username": "Roberto_Sanchez"
},
{
"code": "$ sudo apt-get install libmongoc-1.0-0\n\n$ cmake ... \n-DCMAKE_BUILD_TYPE=Release \n-DCMAKE_PREFIX_PATH=/opt/mongo-c-driver \n-DCMAKE_INSTALL_PREFIX=/opt/mongo-cxx-driver\n\nbsoncxx version: 3.5.0\nCMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):\nBy not providing \"Findlibbson-1.0.cmake\" in CMAKE_MODULE_PATH this project\nhas asked CMake to find a package configuration file provided by\n\"libbson-1.0\", but CMake did not find one.\n\nCould not find a package configuration file provided by \"libbson-1.0\"\n(requested version 1.13.0) with any of the following names:\n\nlibbson-1.0Config.cmake\nlibbson-1.0-config.cmake",
"text": "@Roberto_SanchezI’m experiencing the same problem, but installing the C driver didn’t seem to resolve it:",
"username": "Willow_Willis"
},
{
"code": "sudo apt-get install libmongoc-1.0-0sudo apt-get install libmongoc-dev/usr-DCMAKE_PREFIX_PATH=/opt/mongo-c-driver",
"text": "@Willow_Willis it is not clear what you are trying to do. The command sudo apt-get install libmongoc-1.0-0 installs the runtime library, not the development library and headers. You probably need sudo apt-get install libmongoc-dev instead.Additionally, the packages will install everything under /usr, so passing -DCMAKE_PREFIX_PATH=/opt/mongo-c-driver is probably not what you want. If you already have a C Driver built and installed there, it is possible that it will be used instead of the C Driver from distro packages.",
"username": "Roberto_Sanchez"
},
{
"code": "sudo apt-get install libmongoc-1.0-0\nsudo apt-get install libbson-1.0\nsudo apt-get install cmake libssl-dev libsasl2-dev\n\nwget https://github.com/mongodb/mongo-c-driver/releases/download/1.16.2/mongo-c-driver-1.16.2.tar.gz\ntar xzf mongo-c-driver-1.16.2.tar.gz\ncd mongo-c-driver-1.16.2\nmkdir cmake-build\ncd cmake-build\ncmake -DENABLE_AUTOMATIC_INIT_AND_CLEANUP=OFF ..\nsudo make install\n\ngit clone https://github.com/mongodb/mongo-cxx-driver.git \\\n --branch releases/stable --depth 1\ncd mongo-cxx-driver/build\n\nsudo cmake .. \\\n -DCMAKE_BUILD_TYPE=Release \\\n -DBSONCXX_POLY_USE_MNMLSTC=1 \\\n -DCMAKE_INSTALL_PREFIX=/usr/local\n\nsudo make EP_mnmlstc_core\nsudo make\nsudo make install\n",
"text": "@Willow_Willis Not sure if it helps, but the full set of commands I used was:Especially note the second block, which I originally left out until @Robert_Sanchez pointed me at the C driver.\nMight be that some parts are redundant, but worked for me.",
"username": "Ksortakh_Kraxthar"
},
{
"code": "libmongoc-1.0-0libbson-1.0sudo apt-get install libmongoc-dev",
"text": "If you are building the C driver from source, it is better to not have libmongoc-1.0-0 and libbson-1.0 installed via apt. If your distro has a new enough C Driver package, then a simple sudo apt-get install libmongoc-dev is all you need to have the C Driver available to you.",
"username": "Roberto_Sanchez"
},
{
"code": "$ sudo apt-get install libmongoc-dev\n$ sudo cmake .. \n-DCMAKE_BUILD_TYPE=Release \n-DBSONCXX_POLY_USE_MNMLSTC=1 \n-DCMAKE_INSTALL_PREFIX=/usr/local\n\nbsoncxx version: 3.5.0\nCMake Error at src/bsoncxx/CMakeLists.txt:98 (find_package):\n Could not find a configuration file for package \"libbson-1.0\" that is\n compatible with requested version \"1.13.0\".\n\n The following configuration files were considered but not accepted:\n\n /usr/lib/x86_64-linux-gnu/cmake/libbson-1.0/libbson-1.0-config.cmake, version: 1.9.2\nsudo apt-get install zlib1g-dev",
"text": "I removed the runtime library and headers, then tried installing the development library from apt. Looks like my distro doesn’t have a new enough C Driver:So I removed the development library and followed @Ksortakh_Kraxthar’s instructions for installing the mongo-c-driver from source (without installing the client libraries again). I did need to install zlib ( sudo apt-get install zlib1g-dev ) before it would make install, however.THEN I was finally able to make MongoCxx!Thanks to both of you for your help.",
"username": "Willow_Willis"
},
{
"code": "",
"text": "@Willow_Willis then I suspect you are running Debian stretch, or Ubuntu Bionic. Both of those would have rather old C driver versions, which would not be sufficient for building the latest C++ driver. I’m glad you were able to sort it out. If you are interested in building packages that you can install with apt, then this post may interest you: C and C++ Driver for Debian & Ubuntu users - #2",
"username": "Roberto_Sanchez"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Problems trying to compile MongoCxx | 2020-03-27T21:36:28.577Z | Problems trying to compile MongoCxx | 11,604 |
null | [
"node-js"
] | [
{
"code": "appappappappappapp",
"text": "I’m in the early stages of a new project, which is going to be a Koa server (Node) utilizing the MongoDB Node driver. Fair warning: I may not use exactly correct terminology - I feel like I’m fumbling around in the darkness here as I learn how to put all the pieces together.I’m wanting to ensure that my MongoDB connection opens when the server starts, and that it tidily closes during a shutdown sequence.Right now, my server is structured as such:\napp.js starts a Koa server, defines the middleware stack, and the Koa instance is exported as app (an object).\nserver.js imports app and listens on a port. I think this is where I should fire up the Mongo connection - but I can’t conceptualize how I would pass it to app for use.Further, the middleware of app will make calls to handler methods in another directory to handle business logic (including DB interaction) - so it is not even app itself that is using the Mongo connection app's middleware calls functions that use the Mongo connection.I hope I’ve made myself clear - I made every effort to. If what I’m describing seems weird, it may not be because I did a poor job describing it - it may be because this is my first run-through of building a backend from the ground up. All constructive criticism is appreciated.",
"username": "Michael_Jay"
},
{
"code": "index.jsapp.listen()dbConnection.jsSIGTERMSIGINT",
"text": "Hey @Michael_Jay. You are making sense. I have not worked with Koa specifically. However I use express.js all the time. And what I do is launch the server in index.js initializing the server and with the app.listen() part. Then in a different file name dbConnection.js I have a function that connects to Atlas. This function could be imported and executed once your Koa server is listening on whatever port.\nWhen you close the server down with SIGTERM or SIGINT then close the connection on the MongoClient which is returned from the connection function.Here is a tutorial I found. Please note I did not fully read it or check it. However it talks about connecting to mongodb with koa.While the MERN stack is one of the frontrunners in building rich, responsive and extensible Single Page and other web apps, the flexible…\nReading time: 12 min read\nLet me know if you need anymore help or a better explanation. ",
"username": "Natac13"
},
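A minimal sketch of that pattern with Koa and the native driver (file layout and names are illustrative only):

```js
// dbConnection.js
const { MongoClient } = require("mongodb");
const client = new MongoClient(uri); // uri assumed to come from config

async function connect() {
  await client.connect();
  return client.db("myapp");
}

// server.js
const server = app.listen(3000, async () => {
  const db = await connect(); // hand db to your handlers/DAOs from here
  console.log("listening, db connected");
});

process.on("SIGINT", async () => {
  server.close();
  await client.close(); // tidy shutdown of the Mongo connection
  process.exit(0);
});
```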
{
"code": "",
"text": "Also you could check out MongoDB University",
"username": "Natac13"
},
{
"code": "",
"text": "Thanks @Natac13. I completed the entirety of the MongoDB University Developer’s curriculum - but the entirety of the backend was pre-baked for the M220JS class. However, I did start looking back at what they did, and things are beginning to click.That backend is set up such that the MongoClient calls “injectDB” methods on DAOs - so I think that’s how the client is passed to the persistence layer. I hadn’t paid much attention to the DAOs previously, but my backend was basically structured with a DAO layer anyway, so I’m trying to emulate what they did for this first project.They don’t bother with closing the connection, and that Medium article doesn’t either. So I’ve still got some exploring to do there - but this is a learning process so I’m just trying to take it day by day.I didn’t even know about SIGINT and SIGTERM (I’m currently a Windows user), so that’s something to chew on.Thanks again for the help and encouragement.",
"username": "Michael_Jay"
},
{
"code": "",
"text": "This is a re-posting of my answer provided on the MongoDB University. Hopefully the more experienced people from this forum can provide feedback.I would say DAO because API is usually unaware of the schema.The API represents your business logic.The DAO represent the implementation of your business logic.You should let the server enforced the schema because it will be enforced with manual updates. Which might be a bad idea because you won’t be able to fix implementation issues by manually do what was supposed to be done.This being written. I personally stay away from schema enforcement. It is a unnecessary overhead when you are serious about unit testing.",
"username": "steevej"
}
] | Newbie question: Opening Mongo connection in Node application with native Node driver | 2020-04-09T17:59:08.015Z | Newbie question: Opening Mongo connection in Node application with native Node driver | 2,674 |
null | [
"replication"
] | [
{
"code": "2020-04-14T14:30:36.647+0100 I COMMAND [initandlisten] command local.oplog.rs command: getMore { getMore: 29008001961, collection: \"oplog.rs\", $db: \"local\" } originatingCommand: { find: \"oplog.rs\", filter: { ts: { $gte: Timestamp(1584541935, 5) } }, oplogReplay: true, $db: \"local\" } planSummary: COLLSCAN cursorid:29008001961 keysExamined:0 docsExamined:62847 numYields:491 nreturned:62847 reslen:16777354 locks:{ Global: { acquireCount: { r: 492 } }, Database: { acquireCount: { r: 492 } }, oplog: { acquireCount: { r: 492 } } } protocol:op_msg 114ms\n",
"text": "Hi, I have a MongoDB database made of a replica set with 3 members.\nFor various reasons the database got corrupted and whenever I try to start any of the members, they throw various error and crash again.What I would like to do is to save the data.I was thinking about moving from a replica set installation to a Standalone one to simplify things, but since I can’t connect to any of the members I can’t run the Mongo Shell and change the configuration (e.g. rs.remove(“member-host”))I have tried to run “mongo --repair” and it seems to go through the data quite happily (WiredTiger progress WT_SESSION.verify etc.), but when it’s finished and I try to start MongoDB it still doesn’t work. It gets stuck trying to recover from the OpLog e.gMany thanks!",
"username": "Geppo_Rello"
},
{
"code": "",
"text": "Please share some logs, it might help finding the root cause of the problem.",
"username": "steevej"
},
{
"code": "",
"text": "Best to see the log first before taking any action.If you want to start a node standalone. Remove the replicaSet configuration/flags.\nIt is usually a good idea to block external connections or bind to a different port.Once you have found an acceptable node shut it down. You can use the data files to seed a node, no need to export/dump.Much of this is covered extensively in the documentation:",
"username": "chris"
},
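Concretely, the recovery sequence might look like this (paths, ports and names are placeholders):

```bash
# start the member standalone: no --replSet / replication section,
# bound to localhost on a non-standard port
mongod --dbpath /data/db --port 27018 --bind_ip 127.0.0.1

# then dump the collection you need to save
mongodump --port 27018 --db mydb --collection mycoll --out /backup
```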
{
"code": "",
"text": "Thank you Chris!\nI was able to start one node standalone by removing the --replSet parameter. I can’t believe I didn’t think of that: sometimes when you look at something for too long…\nI am now able to connect with MongoDump and I’m exporting the only collection I really need to save.\nThank you again!",
"username": "Geppo_Rello"
}
] | How to recover data | 2020-04-14T15:47:50.946Z | How to recover data | 3,281 |
null | [
"java"
] | [
{
"code": "",
"text": "Dear All,I have an array with multiple objects within and I want to update the value of a particular field in all the objects within the array using java streams.\nCan you guide me on the same ?Thanks,\nJay",
"username": "Jayathirtha_Katti"
},
{
"code": "",
"text": "To update all sub-documents (or objects) with an array - you use the all positional operator $[ ] for specifying the update.I am not sure using Java Streams API will be of any use in this case; but, you can be little more detailed about why you want to use Streams API. What is it that you think needs the usage of Streams in this case (perhaps, you have special case)? Please give an example.MongoDB Java driver has specific API methods to perform the update using the all positional operator. With the Java driver API calls the update on a document (and all the objects within the array) happens atomically.",
"username": "Prasad_Saya"
},
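In the shell, the all-positional update looks like the sketch below; the Java driver accepts the same "$[]" path in its Updates builder (collection and field names are assumptions):

```js
// set the same field on every object inside the "items" array
db.coll.updateMany(
  {},
  { $set: { "items.$[].status": "PROCESSED" } }
)
```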
{
"code": "",
"text": "Thanks Prasad. I will use the MongoDB Java Driver and update.In the meanwhile, I also have a situation, I have to perform some masking strategy on multiple DB’s for specific fields within the collections. There is search criteria that I need to use.\nDo I have to fetch each matching document and update one by one ? Because, the masking strategy differs for each of the fields.Thanks in advance for your help !!\nJay.",
"username": "Jayathirtha_Katti"
},
{
"code": "$redact",
"text": "Do I have to fetch each matching document and update one by one ?An update to a document is atomic. An update for multiple documents can be batched together and submitted to be performed on the database server as a single operation; but, still each document is updated individually.In the meanwhile, I also have a situation, I have to perform some masking strategy on multiple DB’s for specific fields within the collections.I don’t know the masking startegey you have in place or you are intending to implement (something like Field Level Redaction). $redact is an Aggregation pipeline stage.Note that an update operation can be combined with an aggregation pipeline.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "Just to give you some details -\nI have multiple DB’s - DB1, DB2 … and multiple collections within.\nNow, there are few fields/columns that are identified within each of the collections that I need to mask/obfuscate.\nEach field has a different masking algorithm (first 4 and last four as XXXX, or full masking - XXXXXXX and for date - mask everything to XXXX except for year).\nSo, I have to fetch the required documents (using the search criteria), then based on the fields value in that document, perform the masking logic and update back the document into DB.\nThis I have to repeat for every document, in all the collections and DB.\nI am using Java Mongodb Driver as well as Java Reactive Streams driver for this.",
"username": "Jayathirtha_Katti"
}
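As a sketch of the pipeline-update approach for this kind of masking (requires MongoDB 4.2+, assumes the fields are stored as strings; collection and field names are hypothetical, adapt the expressions per field):

```js
db.customers.updateMany(
  { /* your search criteria */ },
  [ {
    $set: {
      fullName: "XXXXXXX", // full masking
      account: {           // keep only the last 4 characters visible
        $concat: [
          "XXXX",
          { $substrCP: [ "$account",
                         { $subtract: [ { $strLenCP: "$account" }, 4 ] },
                         4 ] }
        ]
      },
      dob: {               // keep the year, mask month and day
        $concat: [ { $substrCP: [ "$dob", 0, 4 ] }, "-XX-XX" ]
      }
    }
  } ]
)
```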
] | Updating values in a nested array using java streams | 2020-04-13T01:55:11.456Z | Updating values in a nested array using java streams | 3,507 |
null | [
"indexes"
] | [
{
"code": "",
"text": "I’m looking to figure out whether or not I might need to create a “migrations” collection.The complete task is that I implement a unique constraint on an arbitrary field of a collection. I understand that to do so, a proxy collection is needed. This documentation seems to explain that well; https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/.What I don’t understand is if I need a “migrations” collection in order to keep track of migrations that have occurred for that proxy collection or if I even need migrations. Somehow I’m under the impression that schemas update based on the object that is inserted into the database.I look forward to your response! Cheers!",
"username": "Kate_Pond"
},
{
"code": "$jsonSchema",
"text": "Hi Kate,Welcome to the MongoDB community! The complete task is that I implement a unique constraint on an arbitrary field of a collection. I understand that to do so, a proxy collection is needed. This documentation seems to explain that well; https://docs.mongodb.com/manual/tutorial/unique-constraints-on-arbitrary-fields/. The documentation you referenced is specific to creating additional unique indexes for a sharded collection, although this may not be clear from a direct link to the page. MongoDB does not support unique indexes across shards, except when the unique index contains the full shard key as a prefix of the index.If you are working with an unsharded collection, you can create unique indexes without the need for a proxy collection. What I don’t understand is if I need a “migrations” collection in order to keep track of migrations that have occurred for that proxy collection or if I even need migrations. Somehow I’m under the impression that schemas update based on the object that is inserted into the database. MongoDB does not have a fixed schema catalog, so there isn’t a strict requirement for all documents in a collection to have the same structure (or to keep track of migrations). You can impose schema validation requirements for insert/update operations using JSON Schema, but changing schema validation rules does not perform migrations of existing documents.Depending on your use case, it may make sense to implement a migration strategy if your code is expecting identical schema in all documents. However, with flexible schema you have more control over the impact of migrations (instead of being limited to the “all-or-none” approach of a fixed schema).For example, you could add a schema version to documents and migrate them incrementally when they are next read by your application (or as a background task). There’s also a $jsonSchema query operator if you want to find documents matching a specific schema pattern.Regards,\nStennie",
"username": "Stennie_X"
},
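For an unsharded collection the unique constraint is a one-liner, and a schema-version field then lets you find documents still needing migration (names are hypothetical):

```js
db.users.createIndex({ email: 1 }, { unique: true })

// documents still on the old shape
db.users.find({ schemaVersion: { $lt: 2 } })
```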
{
"code": "",
"text": "This is very helpful. I’m not completely sure what a sharded collection is, but I’m fairly sure that I don’t have one. Thank you for the welcome and the information @Stennie_X! Cheers!",
"username": "Kate_Pond"
},
{
"code": "mongomongos>db.collectionname.getShardDistribution()",
"text": "I’m not completely sure what a sharded collection is, but I’m fairly sure that I don’t have one.Hi Kate,If you are connected to a sharded cluster using the mongo shell, the prompt should change to mongos>. You can also check if a collection is sharded by calling db.collectionname.getShardDistribution(), which will report something like “Collection test.collectionname is not sharded.” for an unsharded collection.Sharding is an approach for distributing data across multiple servers (or “shards”). Sharding is typically used to scale deployments with very large data sets and high throughput operations, but can also be useful for workload isolation (for example Segmenting Data by Location or Tiering Hardware for Varying SLA or SLO). Collections in a sharded cluster can be unsharded (the default) or sharded.A sharded collection is partitioned based on a shard key index that you define based on one or more field values. Sharding enables horizontal scaling since each shard only has to manage a subset of the data for a sharded collection.From an application point of view, a sharded collection is a single logical collection: you can query a sharded collection without being aware of which shard has the relevant results. However, since each shard only has a subset of the data enforcing uniqueness values other than the shard key requires some extra consideration (per the link you originally referenced).If you are interested in learning more about MongoDB, there are free online courses available at MongoDB University and a few learning paths (Developer or DBA/Operations) with recommended courses to take.Regards,\nStennie",
"username": "Stennie_X"
}
] | Unique Constraints on Arbitrary Fields & Migrations Collection? | 2020-04-13T17:27:01.788Z | Unique Constraints on Arbitrary Fields & Migrations Collection? | 2,504 |
null | [
"php"
] | [
{
"code": "",
"text": "I am using PHP 7.3.16 with mongodb driver 1.7.4 , mongodb Library 1,6 connected with composer.simple connect to DВ$mongo = new MongoDB\\Client(“mongodb://:****@serverIP:27017”);And I got error :\" Parse error : syntax error, unexpected ‘function’ (T_FUNCTION), expecting identifier (T_STRING) or \\ (T_NS_SEPARATOR) in /usr/local/bin/vendor/mongodb/mongodb/src/functions.php on line 31\"line 31 is : “use function end;”I can not understand the possible reason",
"username": "Vladimir_Anokhin"
},
{
"code": "uri$uri = sprintf(\"mongodb://%s:%s@server:27017/foo\", rawurlencode(\"user\"), rawurlencode(\"pwd\"));\n$client = new MongoDB\\Client($uri);\n",
"text": "Hi @Vladimir_Anokhin, welcome!$mongo = new MongoDB\\Client(“mongodb:// : ****@serverIP:27017”);Looking at the connection URI, looks like you may have a special characters in the password part. The uri is a URL, hence any special characters in its components need to be URL encoded according to RFC 3986. You can try to encode the user/password of the URI, for example:Regards,\nWan.",
"username": "wan"
}
] | Problem with mongodb PHP library | 2020-04-05T22:36:09.122Z | Problem with mongodb PHP library | 2,868 |
[
"replication"
] | [
{
"code": "",
"text": "The web pages:https://docs.mongodb.com/manual/mongo/Talk about connecting to a replicate set by specifying a list of host:port entries, but they don’t define what it means to connect to a list of host:port entries. Where is that information?After searching for a while, I found something under:https://docs.mongodb.com/manual/replication → Redundancy and Data Availability → Replication in MongoDB → Asynchronous Replication → Automatic Failover → Read Operations → Read Preferencewhich says:By default, clients read from the primary; however, clients can specify a read preference to send read operations to secondariesIs there a better place for getting this info?Seems like this keep piece of information should be included with the connection string info.",
"username": "Benjamin_Slade"
},
{
"code": "",
"text": "Hey @Benjamin_SladeDo these docs help? Select replica setHigh level overview is that the list of servers also the failover to happen if a node goes down. When you connect to a standalone or to one member of a replica set there is no failover.",
"username": "Natac13"
},
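{
"code": "mongodb://host1.example.com:27017,host2.example.com:27017,host3.example.com:27017/?replicaSet=rs0&readPreference=primary
",
"text": "To make the failover point concrete, a sketch of such a connection string (hostnames and replica set name are placeholders). The host:port pairs are only a seed list: the driver contacts any reachable member, discovers the current replica set topology from it, and routes operations to the primary (or to secondaries, per readPreference). If the primary steps down, the driver fails over to the newly elected primary without any change to this string."
},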
{
"code": "",
"text": "Thanks, and I appreciate the response, but that doc specifies the connection string format without even a brief reference to what a list of host/port pairs means. So no, it doesn’t help.My point was that the documentation for the connection string format should include a brief summary of that it means to connect to a list of host/port pairs, and include a link to the “Read Preference” page.If I was grading this for a class, the “Connection String URI Format” page would get a “D”. Missing key information essential to it’s purpose.Is it possible to volunteer to update the manual pages? I would be glad to make suggested changes.Ben",
"username": "Benjamin_Slade"
},
{
"code": "host[:port]",
"text": "Is it possible to volunteer to update the manual pages?Hi Benjamin,We definitely appreciate any feedback on improving the documentation.The bottom right of a documentation page should have a “Was this page helpful” feedback widget. If you click on Yes/No you can provide additional suggestions for improvement:\n \ndocs-helpful-details660×660 29.5 KB\nAggregate feedback information on helpfulness of pages is useful to highlight pages needing improvement to the documentation team.You can also raise suggestions directly in Jira: https://jira.mongodb.org/browse/DOCS.My point was that the documentation for the connection string format should include a brief summary of that it means to connect to a list of host/port pairs, and include a link to the “Read Preference” page.The Connection String page you referenced does include:a summary under Replica Set Option:When connecting to a replica set, provide a seed list of the replica set member(s) to the host[:port] component of the uri.Read Preference Options and links to related pages.However, “seed list” isn’t currently linked to a longer explanation (which should be added to the Glossary). I raised DOCS-13593 with a suggestion to add this to the glossary:The seed list of host:port pairs provided in a connection string is used by drivers/clients for initial discovery of the current replica set configuration. Upon successful connection to a member of the seed list, clients will retrieve a canonical list of replica set members to connect to, which may be different from the original seed list. Per the standard Server Discovery and Monitoring (SDAM) specification, clients will use the hostnames listed in the replica set config, not the seed list.Regards,\nStennie",
"username": "Stennie_X"
}
] | Improving documentation for connecting to a replica set | 2020-04-13T20:28:41.665Z | Improving documentation for connecting to a replica set | 1,864 |
|
null | [
"queries"
] | [
{
"code": "{\n \"_id\" : ObjectId(\"5e6c26153facb910290f0869\"),\n \"attributes\" : [ \n {\n \"k\" : \"first_name\",\n \"v\" : \"John\"\n }, \n {\n \"k\" : \"last_name\",\n \"v\" : \"Doe\"\n }, \n {\n \"k\" : \"email\",\n \"v\" : \"[email protected]\"\n },\n ],\n \"events\" : [ \n {\n \"event\" : \"add_to_cart\",\n \"event_data\" : [ \n {\n \"k\" : \"product_name\",\n \"v\" : \"T-shirt\"\n }, \n {\n \"k\" : \"price\",\n \"v\" : 30\n }\n ],\n \"created_at\" : ISODate(\"2020-03-14T00:32:21.000Z\")\n }, \n ],\n \"created_at\" : ISODate(\"2020-03-14T00:32:21.000Z\"),\n}\ndb.getCollection('clients').aggregate([\n {\n \"$match\": {\n \"deleted_at\": {\n \"$exists\": false\n }\n }\n },\n {\n \"$addFields\": {\n \"event_count\": {\n '$size': {\n \"$filter\" : {\n \"input\" : \"$events\", \n 'as' : 'events',\n 'cond' : {\n '$and': {\n // How to add elemMatch here?**\n } \n }\n } \n }\n }\n }\n },\n])\n",
"text": "Hi everyone,Is it possible to use elemMatch within filter?This is an example of my data:I would like how it is possible to get users who had event “add_to_cart” with specific key/value attributes.How to combine $elemMatch and $filter?",
"username": "jellyx"
},
{
"code": "$elemMatch{ _id: 1, results: [ { product: \"abc\", score: 10 }, { product: \"xyz\", score: 5 } ] }\n{ _id: 2, results: [ { product: \"abc\", score: 8 }, { product: \"xyz\", score: 7 } ] }\n{ _id: 3, results: [ { product: \"abc\", score: 7 }, { product: \"xyz\", score: 8 } ] }\ndb.survey.find(\n { results: { $elemMatch: { product: \"xyz\", score: { $gte: 8 } } } }\n)\n {\n \"$addFields\": {\n \"event_count\": {\n '$size': {\n \"$filter\" : {\n \"input\" : \"$events\", \n 'as' : 'events',\n 'cond' : {\n '$and': [\n { '$$events.k': { $eq: something } },\n { '$$events.v': { $eq: something } }\n ] \n }\n } \n }\n }\n }\n }\n",
"text": "I am not 100% sure. However $elemMatch would need to know the array field to operate on. As seen with the docs exampleI think you might be able to achomplish what you are after with the following.Again I am not 100% sure. I just wanted to share some thoughts",
"username": "Natac13"
},
{
"code": " \"$filter\" : {\n \"input\" : \"$events\", \n \"as\" : \"events\",\n 'cond' : {\n '$and': {\n '$eq': [ '$$events.event.event_data.k', \"product_name\" ]\n } \n }\n } \n",
"text": "Thank you for the answer.Hmm, the problem is that it is possible to have another key-value array in v key. And another.I also tried to use the following filter:However, it doesn’t return anything. Zero results. Although there should be.",
"username": "jellyx"
},
{
"code": "$and",
"text": "If you used that exact example then the $and is incorrect as it takes an array instead of an object.\nSee the docs",
"username": "Natac13"
},
{
"code": " '$and': [\n {'$eq': [ '$$events.event.event_data.k', \"product_name\" ]}\n ]",
"text": "Unfortunately, I haven’t understood you.This also does not work:",
"username": "jellyx"
},
{
"code": "db.inventory.aggregate(\n [\n {\n $project:\n {\n item: 1,\n qty: 1,\n result: { $and: [ { $gt: [ \"$qty\", 100 ] }, { $lt: [ \"$qty\", 250 ] } ] }\n }\n }\n ]\n)\n$and$and'$$events.event_data.k'$$events$filter",
"text": "Check out this example in the docsSee how there are 2 expressions in the $and. If you only need one expression check then no need for the $and operationAnd I think you want '$$events.event_data.k' as you are setting the events array to $$events in the $filter operation",
"username": "Natac13"
},
{
"code": "$$events.event_data.k \"$filter\" : {\n \"input\" : \"$events\", \n \"as\": \"events\",\n \"cond\": { \"$eq\": [ \"$$events.event_data.k\", \"product_name\" ] }\n } \n",
"text": "There will be more filters, but I simplified it here. Yes, you’re right. I should be $$events.event_data.k but still nothing:This seems weird to me. ",
"username": "jellyx"
},
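{
"code": "db.getCollection('clients').aggregate([
    { '$addFields': {
        'event_count': {
            '$size': {
                '$filter': {
                    'input': '$events',
                    'as': 'ev',
                    'cond': {
                        '$and': [
                            { '$eq': [ '$$ev.event', 'add_to_cart' ] },
                            // match a k/v pair inside event_data with an inner $filter
                            { '$gt': [
                                { '$size': { '$filter': {
                                    'input': '$$ev.event_data',
                                    'as': 'kv',
                                    'cond': { '$and': [
                                        { '$eq': [ '$$kv.k', 'product_name' ] },
                                        { '$eq': [ '$$kv.v', 'T-shirt' ] }
                                    ] }
                                } } },
                                0
                            ] }
                        ]
                    }
                }
            }
        }
    } }
])
",
"text": "One likely reason the attempts above return zero results: in aggregation expressions, a path that crosses an array, such as $$events.event_data.k, resolves to an array of values ([ 'product_name', 'price' ] for the sample document), and $eq compares that whole array against the string, which is never equal. A sketch that instead matches a k/v pair with a nested $filter, using values from the sample document at the top of the thread:"
}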
] | Is it possible to use $elemMatch within $filter? | 2020-04-13T22:51:38.359Z | Is it possible to use $elemMatch within $filter? | 5,161 |
null | [
"data-modeling"
] | [
{
"code": "",
"text": "We have internal conventions to always camelCase field and collection names, but developers are prone to forgetfulness and honest mistakes. As per Murphy’s Law, anything that can go wrong will go wrong.Without an enforcement mechanism, snake_case, PascalCase, UPPER_CASE, Space Case, and other strange cases are bound to appear, and once they get coded throughout the system, they become difficult and time consuming to fix. Each individual occurrence doesn’t have a hugely negative impact by itself, but when they add up over time, overall they become a significant barrier to readability and increase mental burden (worst of all it just ain’t pretty to look at!).In the same spirit, we love using ESLint and Prettier in our javascript-based projects because they provide instant feedback, reduce the amount of decisions we have to make, and save us from ever having to think about, watch for, or argue about style violations.",
"username": "Roman_Scher"
},
{
"code": "",
"text": "The post Mongo Schema Diagram or Report has info about ways to get the schema details for the collections. I think you can extract the field names by each collection from the output of a method and check if the field names meet the conventions - by using another script.",
"username": "Prasad_Saya"
},
{
"code": "",
"text": "I see, that’s more of a reactive approach with delayed feedback than proactive enforcement with instant feedback, but could be helpful if paired with some scripts that report deviations to the team",
"username": "Roman_Scher"
},
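{
"code": "db.createCollection('events', {
    validator: {
        $jsonSchema: {
            bsonType: 'object',
            // allow _id plus camelCase top-level field names only
            patternProperties: {
                '^_id$': {},
                '^[a-z][a-zA-Z0-9]*$': {}
            },
            additionalProperties: false
        }
    }
})
",
"text": "For a proactive option with instant feedback, schema validation can reject writes that introduce non-camelCase top-level field names. A sketch (the collection name is hypothetical): patternProperties whitelists _id plus camelCase names, and additionalProperties: false makes anything else fail at insert/update time. It does not cover nested fields or collection names, so a reporting script would still be a useful complement."
}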
] | Is there any way to enforce camelCased field and collection names? | 2020-03-25T22:05:03.449Z | Is there any way to enforce camelCased field and collection names? | 6,154 |
null | [
"java",
"python",
"performance"
] | [
{
"code": "import time\nimport pymongo\nimport multiprocessing.pool\n\nif __name__ == '__main__':\n client = pymongo.MongoClient('mongodb://username:pass@DB_URL2,DB_URL3,DB_URL4,DB_URL5/test-r-demo?replicaSet=rs0&retryWrites=true&readPreference=secondary')\n collection = client['test-r-demo']['fpi_user']\n TOTAL_OPS = 500000\n C_THREADS = 50\n\n def work(collection):\n collection.find_one({'phone_number': '03052506670'}, {'id': 1})\n\n with multiprocessing.pool.ThreadPool(C_THREADS) as p:\n threads = []\n for i in range(TOTAL_OPS):\n threads.append(collection)\n\n start_time = time.time()\n ret = p.map(work, threads)\n end_time = time.time()\n print('Total {} operations, with {} threads, took {}s'.format(TOTAL_OPS, C_THREADS, round(end_time - start_time, 3)))\nmongostatqueryimport com.mongodb.*;\nimport com.mongodb.client.MongoClient;\nimport com.mongodb.client.MongoClients;\n//import com.mongodb.MongoClientURI;\nimport com.mongodb.client.MongoCollection;\nimport com.mongodb.client.MongoDatabase;\nimport org.bson.Document;\nimport org.example.stresstest.StressTestThread;\n\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Executors;\nimport java.util.concurrent.ThreadPoolExecutor;\nimport java.util.concurrent.TimeUnit;\nimport static com.mongodb.client.model.Filters.eq;\n\npublic class StressTest {\n public static void main(String[] args) {\n\n\n int NUM_OPS = 500000;\n int NUM_THREADS = 50;\n /*\n initializing mongo client\n */\n MongoClient client = MongoClients.create(\"mongodb://username:pass@DB_URL2,DB_URL3,DB_URL4,DB_URL5/test-r-demo?replicaSet=rs0&retryWrites=true&readPreference=secondary\");\n\n /*\n initialize database\n */\n MongoDatabase database = client.getDatabase(\"test-r-demo\");\n\n /*\n initialize collection\n */\n MongoCollection<Document> collection = database.getCollection(\"fpi_user\");\n\n /*\n Initialize thread pool executor with fixed threads\n */\n ThreadPoolExecutor threadPool = (ThreadPoolExecutor)Executors.newFixedThreadPool(NUM_THREADS);\n\n /*\n Loop for retrieving data from database using collection\n for this we are creating threads and assign task to each thread\n */\n long startTime = System.currentTimeMillis();\n for (long i=0; i<NUM_OPS; i++) {\n Runnable runnable = new StressTestThread(collection);\n threadPool.execute(runnable);\n }\n\n threadPool.shutdown();\n long totalTime = (System.currentTimeMillis() - startTime) / 1000;\n String out = String.format(\"Completed %1s operations in %2s seconds with a an average of %3s ops/sec.\", NUM_OPS, (totalTime), (totalTime / NUM_THREADS));\n System.out.println(out);\n\n }\n}\npackage org.example.stresstest;\nimport com.mongodb.client.MongoCollection;\nimport org.bson.Document;\nimport static com.mongodb.client.model.Filters.eq;\nimport static com.mongodb.client.model.Projections.*;\n\npublic class StressTestThread implements Runnable {\n\n private MongoCollection<Document> collection;\n\n public StressTestThread(MongoCollection<Document> collection){\n this.collection = collection;\n }\n\n @Override\n public void run() {\n Document myDoc = collection.find(eq(\"phone_number\", \"03052506670\")).projection(fields(include(\"id\"), excludeId())).first();\n //System.out.println(myDoc.get(\"id\"));\n }\n}\n",
"text": "Hi all. I was looking into some performance related tasks for an application that I am developing using Flask, served via Gunicorn. I am using MongoDB as my primary database and PyMongo as the driver to connect with MongoDB server via Python. Lately I felt there was something that was slowing down the overall calls to DB and to application server as well. I looked into it and tried to compare PyMongo with other drivers (Java). I came up with two scripts. My Python script looks like this:While executing the above script I went on the DB machine turned on mongostat to observe the query ops/sec. It barely crosses 1200 mark on each DB machine.However, the results are totally different when using the Java MongoDB driver:StressTest.javaStressTestThread.javaSurprisingly the max ops/sec when using MongoDB’s Java driver, reaches to 13k easily. What could be the potential problem?",
"username": "Ahmed_Dhanani"
},
{
"code": "excludeId() _id def work(collection):\n collection.find_one({'phone_number': '03052506670'}, {'id': 1, '_id': 0})\n",
"text": "I noticed that the Java benchmark is using excludeId() in the query projection but the Python benchmark is not. This allows the server to perform the covered query optimization which may explain some of the difference. Please make this change to exclude the _id field:",
"username": "Shane"
},
{
"code": "excludeId()",
"text": "I also tried removing the excludeId() projection from the Java snippet to see if it causes a degradation or some difference in the figures. But it stays at a ~13K ops/sec.",
"username": "Ahmed_Dhanani"
},
{
"code": "_id",
"text": "Hi Shane. Thanks for looking into it. I tried to exclude the _id field using projection, but that didn’t help. It still doesn’t cross 1200 ops/sec.",
"username": "Ahmed_Dhanani"
},
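{
"code": "import time
import multiprocessing

import pymongo

# URI and workload copied from the thread; PROCESSES is a guess to tune
URI = 'mongodb://username:pass@DB_URL2,DB_URL3,DB_URL4,DB_URL5/test-r-demo?replicaSet=rs0&retryWrites=true&readPreference=secondary'
TOTAL_OPS = 500000
PROCESSES = 8

def worker(num_ops):
    # one client per process: PyMongo clients are not fork-safe
    client = pymongo.MongoClient(URI)
    collection = client['test-r-demo']['fpi_user']
    for _ in range(num_ops):
        collection.find_one({'phone_number': '03052506670'}, {'id': 1, '_id': 0})

if __name__ == '__main__':
    start = time.time()
    with multiprocessing.Pool(PROCESSES) as pool:
        pool.map(worker, [TOTAL_OPS // PROCESSES] * PROCESSES)
    elapsed = time.time() - start
    print('{} ops in {:.1f}s ({:.0f} ops/sec)'.format(TOTAL_OPS, elapsed, TOTAL_OPS / elapsed))
",
"text": "One factor worth ruling out (an assumption, not something confirmed in this thread): CPython's GIL serializes the BSON encoding/decoding done by all 50 threads, so a single Python process often cannot keep up with a JVM thread pool no matter how fast the server is. A sketch that spreads the same workload across processes instead, creating one MongoClient per process as PyMongo's fork-safety guidance requires. Note also that the Java snippet measures elapsed time right after threadPool.shutdown(), which returns without waiting for queued tasks to finish, so its printed average is not meaningful; the mongostat numbers are the better comparison."
}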
] | Massive difference between max number of ops/sec with PyMongo and MongoDB Java driver | 2020-04-13T18:55:25.940Z | Massive difference between max number of ops/sec with PyMongo and MongoDB Java driver | 3,345 |
null | [
"node-js",
"beta"
] | [
{
"code": "",
"text": "The MongoDB Node.js team is pleased to announce version 3.6.0-beta.0 of the driver. This beta release adds support for MongoDB 4.4.Reference: MongoDB Node.js Driver\nAPI: Index\nChangelog: node-mongodb-native/HISTORY.md at 3.6 · mongodb/node-mongodb-native · GitHubWe invite you to try the driver immediately, and report any issues to the NODE project.",
"username": "mbroadst"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Node.js Driver 3.6.0-beta.0 Released | 2020-04-14T12:44:36.208Z | MongoDB Node.js Driver 3.6.0-beta.0 Released | 3,451 |
null | [
"node-js",
"production"
] | [
{
"code": "mapwithTransactionnullreadPreferenceTagsreadPreferenceTags",
"text": "The MongoDB Node.js team is pleased to announce version 3.5.6 of the driver@dobesv helped identify a regression where a map function would be applied twice\nif defined on a cursor, and that cursor was used to stream data.User @linus-hologram originally reported an issue with a TypeError when the lambda passed to the withTransaction helper rejected with a null value. @vkarpov15 submitted the fix.A bug was fixed where readPreferenceTags with a single value in the connection string was not properly interpreted as an array of tags. This prevented the Use Analytics Nodes to Isolate Workload guidance from working correctly.User @sean-daley reported seeing this in an AWS Lambda environment, but has proven to be somewhat of a heisenbug. We are rolling out a fix here that ensures sessions (implicit or not) are not used after they have been explicitly ended.Reference: http://mongodb.github.io/node-mongodb-native/3.5/ \nAPI: http://mongodb.github.io/node-mongodb-native/3.5/api/ \nChangelog: https://github.com/mongodb/node-mongodb-native/blob/3.5/HISTORY.md We invite you to try the driver immediately, and report any issues to the NODE project.Thanks to all who contributed to this release!The MongoDB Node.js team",
"username": "mbroadst"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB Node.js Driver 3.5.6 Released | 2020-04-14T11:55:10.743Z | MongoDB Node.js Driver 3.5.6 Released | 2,318 |
null | [
"replication"
] | [
{
"code": "",
"text": "Hello all,While deploying MongoDB 4.2.0 replica set - lets say a 3 node cluster onto on-prem VM systems. Its quite possible to have occasional network glitches. I was faced with a situation where in we had frequent failovers due to network glitches.I tried to check online docs as to how we can make MongoDB tolerant to network glitches.\nI found that we have to disable “enableElectionHandoff”. Once we do that mongoDB respects\n“settings.electionTimeoutMillis” - default 10 seconds.Lets say node A goes down, then it takes 10 seconds to decide who must be the next primary. So after 10 seconds, lets say node B conducts election and becomes primary. So disabling “enableElectionHandoff” works well.Lets take a situation where NodeA suffers network glitch, and its not visible to Node B and Node C and it comes back online after 5 seconds. Now I expect Node A to become primary automatically. But what happens is that Node A joins back, now all the 3 nodes are in secondary mode. At the end of 10 seconds, Node B becomes primary and not Node A.My initial assumption was that I have made mongoDB tolerant to network failures. But thats not seem to be happening here. Either ways with or without “enableEletionHandoff” we have a situation where Node B becomes primary and we have to manually failback.How do we deal with this? How do we make mongodb tolerant to network errors.",
"username": "Gowtham_Raj_Elangova"
},
{
"code": "",
"text": "If you have a preference for node that become Primary use priority in the replSet config.",
"username": "chris"
},
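{
"code": "cfg = rs.conf()
cfg.members[0].priority = 2   // node A: preferred primary
cfg.members[1].priority = 1
cfg.members[2].priority = 1
rs.reconfig(cfg)
",
"text": "For illustration, setting that in the mongo shell. The member indexes here are hypothetical, so match them against your actual rs.conf() output. Once reconfigured, the highest-priority member calls an election and takes over as primary whenever it is healthy and sufficiently caught up on replication."
},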
{
"code": "",
"text": "Even if i use priority, there happens a case where low priority members become primary for some time - lets say 2 mins, post which election happens again and the member with higher priority takes over.",
"username": "Gowtham_Raj_Elangova"
}
] | How to avoid frequent failovers | 2020-04-09T17:08:16.714Z | How to avoid frequent failovers | 1,373 |
[
"php",
"beta"
] | [
{
"code": "mongodbdeletedeletefindAndModifyhintcomposer require mongodb/mongodb^1.7.0@beta\nmongodb",
"text": "The PHP team is happy to announce that version 1.7.0-beta1 of the MongoDB PHP library is now available. This library is a high-level abstraction for the mongodb extension.Release HighlightsThis beta release provides support for new features in MongoDB 4.4.For authentication, this release adds support for the new MONGODB-AWS authentication mechanism. The SCRAM mechanism now supports shorter conversation when authenticating with the server.The delete command (and its helpers), delete operations in bulk writes, as well as all findAndModify operations now support specifying a hint option.As previously announced, this version drops compatibility with PHP 5.6, limiting support to PHP 7.0 and newer.A complete list of resolved issues in this release may be found at:\nhttps://jira.mongodb.org/secure/ReleaseNote.jspa?projectId=12483&version=26998DocumentationDocumentation for this library may be found at:FeedbackIf you encounter any bugs or issues with this library, please report them via this form:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12483&issuetype=1InstallationThis library may be installed or upgraded with:Installation instructions for the mongodb extension may be found in the PHP.net documentation.",
"username": "Andreas_Braun"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB PHP Library 1.7.0-beta1 Released | 2020-04-14T10:16:46.583Z | MongoDB PHP Library 1.7.0-beta1 Released | 2,760 |
|
null | [
"php",
"beta"
] | [
{
"code": "deletedeletefindAndDeletehintdriverdriverOptionspecl install mongodb-1.8.0beta1\npecl upgrade mongodb-1.8.0beta1\n",
"text": "The PHP team is happy to announce that version 1.8.0beta1 of the mongodb PHP extension is now available on PECL.Release HighlightsThis beta release provides support for new features in MongoDB 4.4.For authentication, this release adds support for the new MONGODB-AWS authentication mechanism. The SCRAM mechanism now supports shorter conversation when authenticating with the server.The delete command (and its helpers), delete operations in bulk writes, as well as findAndDelete operations now support specifying a hint option.For drivers built on top of the extension, there is a new driver key in the driverOptions when creating a manager. This can be used to pass custom metadata for use during the server handshake.As previously announced, this version drops compatibility with PHP 5.6, limiting support to PHP 7.0 and newer.A complete list of resolved issues in this release may be found at: Release Notes - MongoDB JiraDocumentationDocumentation is available on PHP.net:\nPHP: MongoDB - ManualFeedbackWe would appreciate any feedback you might have on the project:\nhttps://jira.mongodb.org/secure/CreateIssue.jspa?pid=12484&issuetype=6InstallationYou can either download and install the source manually, or you can install the extension with:or update with:Windows binaries are available on PECL:\nhttp://pecl.php.net/package/mongodb",
"username": "Andreas_Braun"
},
{
"code": "",
"text": "",
"username": "system"
}
] | MongoDB PHP Extension 1.8.0beta1 Released | 2020-04-14T08:01:46.724Z | MongoDB PHP Extension 1.8.0beta1 Released | 3,163 |
null | [
"atlas-functions",
"stitch"
] | [
{
"code": "",
"text": "Is it possible to expose a function to users, which can read and write to a collection, without exposing the collection directly to users?\nTo get a function accessing a collection i have to add a rule. With that rule, the collection can be accessed by users. What am i missing. How to do this?",
"username": "Corona"
},
{
"code": "",
"text": "Hi @Corona, welcome!Is it possible to expose a function to users, which can read and write to a collection, without exposing the collection directly to users?If I understood your question correctly, yes. In Stitch, you can define Roles and Permissions in a way that a Stitch user can read and write to a collection (or even certain fields). While the user is interacting via Stitch, they are bound by the roles and permissions that you have defined.If you have further questions, could you clarify what you’re trying to achieve?Regards,\nWan.",
"username": "wan"
},
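{
"code": "exports = async function(cartId, item) {
    const coll = context.services
        .get('mongodb-atlas')   // default name for a linked Atlas cluster
        .db('shop')
        .collection('carts');
    // this call is still evaluated against the collection's rules
    return coll.updateOne({ _id: cartId }, { $addToSet: { items: item } });
};
",
"text": "To illustrate, a sketch of a Stitch function wrapping collection access (the service, database, and collection names are placeholders). When a user calls the function, the reads and writes it performs are still evaluated against the roles and permissions described above, so the rules, not the function signature, define what that user can actually touch."
}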
] | Stitch encapsulate collection access with function | 2020-04-01T08:54:10.326Z | Stitch encapsulate collection access with function | 1,750 |
null | [
"dot-net",
"atlas-search"
] | [
{
"code": "",
"text": "I would like to have what MongoDB Compass produced as a BsonArray for the C# driver be just another step more.\nEssentially MongoDB Compass created a BsonArray of BsonDocuments for C# that included within it my $searchBeta query.\nHowever I do not find documentation on how to execute this $searchBeta statement laid out in a bunch of BsonDocuments within my BsonArray.\nHow do I take the next step and take what is essentially what MongoDB Compass created for me and execute and get results back from the MongoDB Driver for C#?",
"username": "Jeff_Lee"
},
{
"code": "var pipeline = new BsonDocument[]{\n new BsonDocument{ {\"$searchBeta\", new BsonDocument{{\"<operator>\", \"<specifications>\"}}} }, \n};\n\nvar agg = collection.Aggregate<BsonDocument>(pipeline);\n",
"text": "Hi @Jeff_Lee, welcome!However I do not find documentation on how to execute this $searchBeta statement laid out in a bunch of BsonDocuments within my BsonArray.$searchBeta is an aggregation pipeline stage, and you should be able to construct aggregation pipeline stages using MongoDB .NET/C# with BsonDocument array. For example:If you have further questions, could you elaborate your questions with examples please ?Regards,\nWan.",
"username": "wan"
},
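{
"code": "var pipeline = new BsonDocument[]
{
    new BsonDocument(\"$searchBeta\", new BsonDocument
    {
        { \"search\", new BsonDocument { { \"query\", \"coffee\" }, { \"path\", \"description\" } } }
    }),
    new BsonDocument(\"$limit\", 10)
};

foreach (var doc in collection.Aggregate<BsonDocument>(pipeline).ToList())
{
    Console.WriteLine(doc.ToJson());
}
",
"text": "To make that concrete, a sketch with a populated $searchBeta stage and result iteration. The operator, query, and path values here are placeholders, so substitute whatever Compass generated against your own search index:"
}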
] | C# and Atlas Search | 2020-04-01T06:02:46.208Z | C# and Atlas Search | 2,802 |
null | [] | [
{
"code": "",
"text": "do I need to put it in my node folder with npm before mongo --nodb works?",
"username": "Jordan_Stafford_20154"
},
{
"code": "",
"text": "It is nice that you highlighted in blue because we can spot right away that you have a space rather than a colon between the mongo part and the rest.",
"username": "steevej"
},
{
"code": "",
"text": "Thank you for that, been looking right past it the whole time!",
"username": "Jordan_Stafford_20154"
},
{
"code": "",
"text": "Closing this thread as the issue has been resolved.",
"username": "Shubham_Ranjan"
}
] | MongoDB shell is in my path, but will not launch | 2020-04-13T23:57:26.078Z | MongoDB shell is in my path, but will not launch | 1,330 |
null | [
"indexes"
] | [
{
"code": "",
"text": "I’ve a Rails application that gets data from Mongo. It returns error when trying to export the data to csv -error\": “internal-server-error”,“exception”: “Mongo::Error::OperationFailure : Executor error during find command :: caused by :: Sort operation used more than the maximum 33554432 bytes of RAM. Add an index, or specify a smaller limit. (96)”Created few indexes on the collection but its still throwing this error.Any suggestions how to fix it?",
"username": "Deepti_Gupta"
},
{
"code": "",
"text": "Please make sure index exists on the sort field",
"username": "Ramachandra_Tummala"
},
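{
"code": "// assuming the CSV export sorts on created_at (substitute your actual sort field)
db.records.createIndex({ created_at: 1 })
",
"text": "For illustration (collection and field names are hypothetical): the index has to match the field used by the query's sort, and for compound sorts the key order and directions too, so the server can stream documents in index order instead of buffering a 32MB+ in-memory sort. For aggregation pipelines, allowDiskUse: true is another escape hatch, at the cost of slower, disk-backed sorting."
},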
{
"code": "",
"text": "Adding the index on the sort field fixed the issue. Thanks",
"username": "Deepti_Gupta"
},
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | Sort operation used more than the maximum 33554432 bytes of RAM | 2020-04-09T20:25:35.878Z | Sort operation used more than the maximum 33554432 bytes of RAM | 30,018 |
null | [
"aggregation"
] | [
{
"code": "",
"text": "Hi! I’m reading the documentation about $merge and was wondering if it outputs anything after it was executed. For example, is there a way to find out how many records were merged or inserted after the $merge stage? When I run it in the shell, it does the merge as expected, but then I have to check to see if the new collection has been created or updated afterwards.",
"username": "Jennifer_Maston"
},
{
"code": "",
"text": "Disappointingly not yet possible.",
"username": "007_jb"
},
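{
"code": "const before = db.target.countDocuments();
db.source.aggregate([
    // ...your pipeline stages...
    { $merge: { into: 'target', whenMatched: 'merge', whenNotMatched: 'insert' } }
]);
const after = db.target.countDocuments();
print('target grew by ' + (after - before) + ' documents');
",
"text": "In the meantime, a crude workaround sketch (collection names hypothetical): compare counts on the output collection before and after the pipeline runs. This only detects net inserts, since documents that $merge updates in place do not change the count, so treat it as a rough signal rather than a real merge report."
},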
{
"code": "",
"text": "This topic was automatically closed 5 days after the last reply. New replies are no longer allowed.",
"username": "system"
}
] | $merge question | 2020-04-13T20:09:42.470Z | $merge question | 1,116 |
null | [
"golang",
"field-encryption"
] | [
{
"code": "Connect error for client with automatic encryption: exec: \"mongocryptd\": executable file not found in $PATH$PATH/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/usr/local/go/bin:/usr/local/lib/",
"text": "Hello, I was trying to test out client field level encryption following this example field-level-encryption-sandbox/golang_fle_install.sh at master · mongodb-labs/field-level-encryption-sandbox · GitHub. I didn’t make any modification as I wanted to see if it will work on my computer. Well, when I run it I get this error:\nConnect error for client with automatic encryption: exec: \"mongocryptd\": executable file not found in $PATH\nI installed the required libmongocrypt following these instructions for Ubuntu GitHub - mongodb/libmongocrypt: Required C library for Client Side and Queryable Encryption in MongoDBI am missing something? my $PATH is:\n/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/usr/local/go/bin:/usr/local/lib/",
"username": "Student_al_St"
},
{
"code": "libmongocryptmongocryptdlibmongocryptmongocryptdmongocryptdlibmongocryptmongomongocryptd",
"text": "Hi,Automatic encryption requires both libmongocrypt and mongocryptd. libmongocrypt is a C library that the driver uses to do encryption, decryption, and key caching. mongocryptd is a server process that is automatically spawned by the driver when performing auto-encryption. When encrypting a command, the driver first sends the command to mongocryptd, which analyzes it and marks the fields that need to be encrypted. The marked version of the command is then sent to libmongocrypt to do the actual encryption.My guess is that you’re running the community version of the server. As noted in the mongo package docs (mongo package - go.mongodb.org/mongo-driver/mongo - Go Packages), automatic encryption is only available for the enterprise version of MongoDB. This is because the mongocryptd binary is only available on enterprise builds.If you are not able to use an enterprise build, you can disable auto encryption and do explicit encryption instead. The data will still be automatically decrypted, as the decryption process does not require use of mongocryptd. See mongo-go-driver/client_side_encryption_examples_test.go at master · mongodb/mongo-go-driver · GitHub and mongo-go-driver/client_side_encryption_examples_test.go at master · mongodb/mongo-go-driver · GitHub for examples of explicit encryption.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "Thanks @Divjot_Arora,\nThe 2 links you shared were useful.Best!",
"username": "Student_al_St"
},
{
"code": "",
"text": "@Divjot_Arora,\nI have some follow up questions\nWhy is that (from the example on GH) it requires to first compile the code with a cse tag? It also creates so many collections, about 3 in total, can’t it just that be saved into a single collection?Unrelated to the above, what is the best approach to have a custom TTL for Gridfs documents? any guidance on this?",
"username": "Student_al_St"
},
{
"code": "cseencryption.testKeyVaultClientEncryption.CreateDataKeylocalMasterKeytest.coll{\"encryptedField\": <encryptedField>}",
"text": "The cse build tag stands for “client-side encryption”. We require this build tag because this feature requires linking against an external C library. The build tag allows users who don’t need this feature to compile and use the driver without worrying about installing libmongocrypt. If you’re curious, this is done by keep two separate copies of the source code, each conditionally compiled based on the build tag. One copy has actual logic to interact with libmongocrypt and implement the feature and the other contains function stubs that panic if called. You can see this by running the script without specifying the build tag.As for the collections created, I believe both examples create two collections:encryption.testKeyVault: This is the key vault collection and is used to store the data key created by ClientEncryption.CreateDataKey. These data keys are used to encrypt/decrypt fields. Note that the key material for these keys is also encrypted using the master key. In the examples I linked above, this is localMasterKey. The key vault is necessary collection for client-side encryption.test.coll: This is the collection where the application stores its data. In the example, this data looks like {\"encryptedField\": <encryptedField>}.For your question about GridFS, can you provide more details? Do you want to delete the entire GridFS file (i.e. delete all of the chunks for the file and any other information related to it) after a TTL?If you have any follow-up questions about client-side encryption, it may be helpful to create a new topic with the #go-driver tag for the GridFS question.– Divjot",
"username": "Divjot_Arora"
},
{
"code": "",
"text": "Thank you,I will keep this thread for client-side encryption only and,\nYou said I will need to install an different (go) drive?",
"username": "Student_al_St"
},
{
"code": "-tags cse",
"text": "For client-side encryption, you do not need to install a different driver. You can install the Go driver using Go modules as documented at GitHub - mongodb/mongo-go-driver: The Official Golang driver for MongoDB and then install libmongocrypt. After everything is installed, you can put either of the examples I linked earlier in your application and everything should work if you compile with -tags cse.",
"username": "Divjot_Arora"
},
{
"code": "-tags cse",
"text": "Yes, that’s exactly how I tested it when you first shared the links. I thought from your previous response I needed a different go driver.I see that forcing the -tags cse isn’t a friendly way of using CSE. I’d imagine how it fits in unit testing. We can’t always compile each time you need to run a certain portion of your code because the whole program is now dependent on the tag to be specified. Wouldn’t it be best to simply provide an API interface of the process that will instantiate the CSE?",
"username": "Student_al_St"
},
{
"code": "#cgogo rungo test-tags csego test -tags cse ...",
"text": "Unfortunately, it’s not that simple. Files like mongo-go-driver/mongocrypt.go at master · mongodb/mongo-go-driver · GitHub need to be behind some sort of build flag or other build constraint. If they’re not, the #cgo lines will try to link against libmongocrypt, which will immediately fail if the user has not installed libmongocrypt on their system. The idea was to make users who wanted this feature opt-in via the build flag so that users who don’t need it didn’t have to install libmonogcrypt for no reason. Additionally, there are certain Docker images that don’t have cgo support at all, so it’s important that any cgo-based features in the driver are not automatically compiled.For your point about always requiring compilation, I don’t think this is true. Both go run and go test support adding -tags cse command line option. We take advantage of this in many of our CI tasks, which call go test -tags cse ... rather than compiling anything.",
"username": "Divjot_Arora"
},
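{
"code": "// build/run with the tag, e.g.: go run -tags cse main.go
// imports assumed: context, log, go.mongodb.org/mongo-driver/bson,
// go.mongodb.org/mongo-driver/mongo, go.mongodb.org/mongo-driver/mongo/options

kmsProviders := map[string]map[string]interface{}{
    \"local\": {\"key\": localMasterKey}, // localMasterKey: 96 random bytes, as in the linked examples
}
ceOpts := options.ClientEncryption().
    SetKeyVaultNamespace(\"encryption.testKeyVault\").
    SetKmsProviders(kmsProviders)
clientEnc, err := mongo.NewClientEncryption(keyVaultClient, ceOpts)
if err != nil {
    log.Fatal(err)
}

// explicitly encrypt a value with an existing data key before inserting it
t, data, err := bson.MarshalValue(\"03052506670\")
if err != nil {
    log.Fatal(err)
}
encryptedField, err := clientEnc.Encrypt(context.TODO(),
    bson.RawValue{Type: t, Value: data},
    options.Encrypt().
        SetAlgorithm(\"AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic\").
        SetKeyID(dataKeyID))
// encryptedField can now be stored in a document with a regular insert
",
"text": "To tie the linked examples together, a condensed sketch of the explicit-encryption path that works on community servers. This is a fragment, not a complete program, and keyVaultClient, localMasterKey, and dataKeyID are assumed to be set up exactly as in the examples above. Only the write path needs the explicit Encrypt call; reads come back automatically decrypted when the regular client is configured with auto-encryption options that set BypassAutoEncryption."
},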
{
"code": "",
"text": "thanks, @Divjot_Arora. I appreciate you clarifying this.",
"username": "Student_al_St"
}
] | Issue with Go driver for client field encryption | 2020-04-04T01:08:34.228Z | Issue with Go driver for client field encryption | 4,287 |
null | [
"mongoose-odm"
] | [
{
"code": "",
"text": "I have an array inside my schema that creates timestamps for each array index created. I am trying to sort my documents so that the most active timestamp inside each function gets displayed first when returned from the find function. But the issue I am having is that I am using Mongoose Pagination and it only returns 10 documents at a time.Example:\nLet’s say I have documents of the user’s login information and inside each document, I have an array that stores every timestamp when they login. When calling the find function, I want to pass a query that goes through each selected document and retrieve the most updated timestamp from that specific array.",
"username": "Jon_Paricien"
},
{
"code": "$project$reduce",
"text": "Hi @Jon_Paricien,Can you describe the pagination problem a bit more? If you have an array inside a single document containing the information you need, the entire array should be returned with the single document, preventing the need to paginate.If you’re looking specifically for the most recent timestamp within an array of a single document, you can use aggregation with $project and $reduce to provide only the most recent timestamp from the array.Thanks,Justin",
"username": "Justin"
},
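{
"code": "db.users.aggregate([
    { $project: {
        lastLogin: {
            $reduce: {
                input: '$logins',
                initialValue: new Date(0),
                in: { $max: ['$$value', '$$this'] }
            }
        }
    } },
    { $sort: { lastLogin: -1 } },
    { $skip: 0 },
    { $limit: 10 }
])
",
"text": "A sketch of that suggestion in the mongo shell. Collection and field names are hypothetical, with logins assumed to be an array of timestamps: $reduce folds the array down to its latest timestamp, $sort puts the most recently active users first, and $skip/$limit reproduce the 10-document page size. When the array holds timestamps directly, { $max: '$logins' } is an even shorter equivalent of the $reduce."
},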
{
"code": "",
"text": "I’ve tried this but I am not getting the results correctly. How can I combine the $filter function with $reduce?",
"username": "Jon_Paricien"
},
{
"code": "",
"text": "Hi @Jon_Paricien,Do you have any examples of data and your desired result? I’ll need some sample data and examples of what you’ve attempted to help.Thanks,Justin",
"username": "Justin"
},
{
"code": "",
"text": "With your help, I was able to figure out the problem. Thank you @Justin",
"username": "Jon_Paricien"
}
] | Sort before returning documents in Mongo Query and Mongoose pagination | 2020-03-13T17:40:25.291Z | Sort before returning documents in Mongo Query and Mongoose pagination | 3,486 |