source_id | question | response | metadata
---|---|---|---|
193,653 | I'm building a REST api where clients are authenticated using client certificates. A client in this case is not an individual user, but some sort of a presentation layer. Users are authenticated using a custom approach and it's the responsibility of the presentation layer to see that this is properly done (note: I know this is not the proper approach, but the api is not public). I would like to pass the user name for each request (not the password), but I'm not sure where to do this. Would it be a good idea to use the Authorization header? | Using the Authorization header seems like the right thing to do. It's the entire purpose of the Authorization header. From https://www.rfc-editor.org/rfc/rfc7235#section-4.2 : The "Authorization" header field allows a user agent to authenticate
itself with an origin server -- usually, but not necessarily, after
receiving a 401 (Unauthorized) response. Its value consists of
credentials containing the authentication information of the user
agent for the realm of the resource being requested. If you have your own auth scheme document it, but there's no need to reinvent the wheel. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193653",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87131/"
]
} |
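A minimal sketch of what the answer above suggests, written in Python with the requests library; the endpoint, the custom scheme name, and the certificate file paths are placeholders, not part of the original question.

```python
import requests

API_URL = "https://api.example.com/orders"  # placeholder endpoint

def fetch_orders(username: str) -> requests.Response:
    # The client certificate authenticates the presentation layer (mutual TLS);
    # the Authorization header carries the name of the already-authenticated user
    # under a custom, documented scheme.
    headers = {"Authorization": f"X-Internal-User {username}"}
    return requests.get(
        API_URL,
        headers=headers,
        cert=("client.crt", "client.key"),  # client certificate and key files
        timeout=10,
    )

# fetch_orders("alice")  # needs a real endpoint and certificate files to run
```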
193,665 | I'm a freelance programmer and recently I finished a website, it all works fine but there was one user that complained to my client that he couldn't log in. This problem was clearly a cookie-restriction/old browser one (I couldn't create the problem myself and hunderds of users are working with the website just fine) Now my client said: I've paid you to make something and somebody complains; so you didn't do your work correctly. What can I do in this situation and how do you guys handle this? | If you haven't already done so, define the minimum system requirements of your website, e.g. supported browsers¹, minimum display size, required cookie permissions, etc. If the user did not satisfy the minimum system requirements, it's not your fault that it didn't work for him. Investigate the issue, prove that the minimum system requirements were not satisfied and send the client an invoice for the time you spent doing this.² Of course, in some cases it's not so easy: You might be convinced that the problem lies "on the user's side", but you might not be able to prove it without putting a lot of effort into it. In that case, you should talk to your customer: I have performed some tests, and I'm pretty sure that the problem is a weird firewall configuration/a buggy IE plugin/etc. However, to prove this, I'll have to put a lot of effort into it. If I do that and it turns out that the fault was not on my side, I will have to send you a bill for the work done by me. Are you sure that you want me to continue investigating this issue? ¹ This doesn't mean it won't work with other browsers, it only limits your warranty to these browsers. Usually, the customer will understand that you cannot test your web site extensively with every browser out there. Ideally this should be cleared up-front: Support for IE8-10, FF12-19 and Safari 5 is included in the offer. IE7 can be included for an additional $xxx, IE6 for an additional $xxxx. ² Sending an invoice is a very powerful sign: Even if the customer complains and you end up canceling the invoice as a goodwill gesture, the client learns that unjustified complaints can cost money. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193665",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87154/"
]
} |
193,802 | One of my teammates is a jack of all trades in our IT shop and I respect his insight. However, sometimes he reviews my code (he's second in command to our team leader, so that's expected) without a heads up. So sometimes he reviews my changes before they complete the end goal and makes changes right away... and has even broken my work once. Other times, he has made unnecessary improvements to some of my code that is 3+ months old. This annoys me for a few reasons: I am not always given a chance to fix my mistakes He has not taken the time to ask me what I was trying to accomplish when he is confused, which could affect his testing or changes I don't always think his code is readable Deadlines are not an issue and his current workload doesn't require any work in my projects other than reviewing my code changes. Anyways, I have told him in the past to please keep me posted if he sees something in my work that he wants to change so that I could take ownership of my code (maybe I should have said "shortcomings") and he's not been responsive. I fear that I may come off as aggressive when I ask him to explain his changes to me. He's just a quiet person who keeps to himself, but his actions continue. I don't want to banish him from making code changes (not like I could), because we are a team--but I want to do my part to help our team. Added clarifications: We share 1 development branch. I do not wait until all my changes complete a single task because I risk losing some significant work--so I make sure my changes build and do not break anything. My concern is that my teammate doesn't explain the reason or purpose behind his changes. I don't think he should need my blessing, but if we disagree on an approach I thought it would be best to discuss the pros and cons and make a decision once we both understand what is going on. I have not discussed this with our team lead yet as I would prefer to resolve personal disagreements without getting management involved unless it is necessary. Since my concern seemed more of personal issue than a threat to our work, I chose to not bother the team lead. I am working on code review process ideas--to help promote the benefits of more organized code reviews without making it all about my pet peeves. | I think most developers find themselves in this position at some point, and I hope that every developer who's felt victimized realizes how frustrating it will be when he or she becomes the senior and feels compelled to clean up code written by juniors. For me, avoiding conflict in this situation comes down to two things: Courtesy . Talking to someone about his/her code allows a dev to know that you're interested and you can discuss it as grown up professionals. Forget about "code ownership" - the team owns the code . Other people wanting to make the changes is a good thing. If a senior dev makes changes that are "unreadable" or worse, then back them out. You don't need to be aggressive, just let an editor know that his/her changes didn't work, and you're more than happy to discuss your reversion. Remember, team ownership of code is great and it cuts both ways. If you see something that doesn't make sense in someone else's code, then fix it. Being overly possessive and inadequately communicative is a surefire way to a create a poisonous development environment. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193802",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53681/"
]
} |
193,821 | We have a URL in the following format /instance/{instanceType}/{instanceId} You can call it with the standard HTTP methods: POST, GET, DELETE, PUT. However, there are a few more actions that we take on it such as "Save as draft" or "Curate" We thought we could just use custom HTTP methods like: DRAFT, VALIDATE, CURATE I think this is acceptable since the standards say "The set of common methods for HTTP/1.1 is defined below. Although this set can be expanded, additional methods cannot be assumed to share the same semantics for separately extended clients and servers." And tools like WebDav create some of their own extensions. Are there problems someone has run into with custom methods? I'm thinking of proxy servers and firewalls but any other areas of concern are welcome. Should I stay on the safe side and just have a URL parameter like action=validate|curate|draft? | One of the fundamental constraints of HTTP and the central design feature of REST is a uniform interface provided by (among other things) a small, fixed set of methods that apply universally to all resources. The uniform interface constraint has a number of upsides and downsides. I'm quoting from Fielding liberally here. A uniform interface: is simpler. decouples implementations from the services that they provide. allows a layered architecture, including things like HTTP load balancers (nginx) and caches (varnish). On the other hand, a uniform interface: degrades efficiency, because information is transferred in a standardized form rather than one which is specific to an application's needs. The tradeoffs are "designed for the common case of the Web" and have allowed a large ecosystem to be built which provides solutions to many of the common problems in web architectures. Adhering to a uniform interface will allow your system to benefit from this ecosystem while breaking it will make it that difficult. You might want to use a load balancer like nginx but now you can only use a load balancer that understands DRAFT and CURATE. You might want to use an HTTP cache layer like Varnish but now you can only use an HTTP cache layer that understands DRAFT and CURATE. You might want to ask someone for help troubleshooting a server failure but no one else knows the semantics for a CURATE request. It may be difficult to change your preferred client or server libraries to understand and correctly implement the new methods. And so on. The correct * way to represent this is as a state transformation on the resource (or related resources). You don't DRAFT a post, you transform its draft state to true or you create a draft resource that contains the changes and links to previous draft versions. You don't CURATE a post, you transform its curated state to true or create a curation resource that links the post with the user that curated it. * Correct in that it most closely follows the REST architectural principles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193821",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9033/"
]
} |
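As a hedged illustration of the "state transformation" alternative described in the answer above, the calls below use only standard methods; the URLs, field names, and Python requests usage are invented for the sketch, not taken from the original API.

```python
import requests

BASE = "https://api.example.com"  # placeholder host

def save_as_draft(instance_type: str, instance_id: str) -> requests.Response:
    # Instead of a custom DRAFT verb, flip the resource's draft state with PATCH.
    url = f"{BASE}/instance/{instance_type}/{instance_id}"
    return requests.patch(url, json={"draft": True}, timeout=10)

def curate(instance_type: str, instance_id: str, curator_id: str) -> requests.Response:
    # Instead of a custom CURATE verb, create a curation resource that links the
    # instance with the curating user.
    url = f"{BASE}/instance/{instance_type}/{instance_id}/curations"
    return requests.post(url, json={"curator": curator_id}, timeout=10)

# save_as_draft("article", "42"); curate("article", "42", "u-7")  # placeholder values
```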
193,824 | Can someone please explain the best way to solve this problem. Suppose I have three classes Person Venue Vehicle I have a DAO method that needs to return some or all of these attributes from each of the classes after doing a query. Please note, by requirements I am using one DAO for all three classes and no frameworks.
Only my own MVC implementation How do I accomplish this? It seems very wrong to make a class PersonVenueVehicle and return that as an object to get the instance field, values. I was taught that the database entities must be reflected by classes, if this is case how is it implemented in such a situation? | One of the fundamental constraints of HTTP and the central design feature of REST is a uniform interface provided by (among other things) a small, fixed set of methods that apply universally to all resources. The uniform interface constraint has a number of upsides and downsides. I'm quoting from Fielding liberally here. A uniform interface: is simpler. decouples implementations from the services that they provide. allows a layered architecture, including things like HTTP load balancers (nginx) and caches (varnish). On the other hand, a uniform interface: degrades efficiency, because information is transferred in a standardized form rather than one which is specific to an application's needs. The tradeoffs are "designed for the common case of the Web" and have allowed a large ecosystem to be built which provides solutions to many of the common problems in web architectures. Adhering to a uniform interface will allow your system to benefit from this ecosystem while breaking it will make it that difficult. You might want to use a load balancer like nginx but now you can only use a load balancer that understands DRAFT and CURATE. You might want to use an HTTP cache layer like Varnish but now you can only use an HTTP cache layer that understands DRAFT and CURATE. You might want to ask someone for help troubleshooting a server failure but no one else knows the semantics for a CURATE request. It may be difficult to change your preferred client or server libraries to understand and correctly implement the new methods. And so on. The correct * way to represent this is as a state transformation on the resource (or related resources). You don't DRAFT a post, you transform its draft state to true or you create a draft resource that contains the changes and links to previous draft versions. You don't CURATE a post, you transform its curated state to true or create a curation resource that links the post with the user that curated it. * Correct in that it most closely follows the REST architectural principles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193824",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87270/"
]
} |
193,834 | Suppose I have a form in my web application where users can upload a profile picture. I've got few requirements about file size, dimensions etc, but when the user uploads the image, how should I name them on my system? I suppose it would need to be consistent and also unique. Maybe a GUID? a5c627bedc3c44b7ae7c06a44fb3fcf8.jpg A timestamp? 129899740140465735.jpg A hash? Ex: md5 b1a9acaf295cf14ffbc5b6538294562c.jpg Is there a standard or recommended way to do this? | You should try to meet two goals: Uniqueness, and usefulness. Using a GUID guarantees uniqueness, but one day the files may become detached from their original source, and then you will be in trouble. My typical solution is to embed crucial information into the filename, such as the userID (if it belongs to a user) or the date and time uploaded (if this is significant), or the filename used when uploading it. This may really save your skin one day, when the information embedded in the filename allows you to, for example, recover from a bug, or the accidental deletion of records. If all you have is GUIDs, and you lose the catalogue, you will have a heck of a job cleaning that up. For example, if a file "My Holiday: Florida 23.jpg" is uploaded, by userID 98765, on 2013/04/04 at 12:51:23 I would name it something like this, adding a random string ad8a7dsf9 : 20130404125123-ad8a7dsf9-98765-my-holiday-florida-23.jpg Uniqueness is ensured by the date and time, and random string (provided it is properly random from /dev/urandom or CryptGenRandom. If the file is ever detached, you can identify the user, the date and time, and the title. Everything is folded to lower case and anything non-alphanumeric is removed and replaced by dashes, which makes the filename easy to handle using simple tools (e.g. no spaces which can confuse badly written scripts, no colons or other characters which are forbidden on some filesystems, and so on). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193834",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81480/"
]
} |
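A small Python sketch of the naming scheme the answer describes (timestamp, random string, user id, slugified original name); the exact field order and lengths are one possible choice, not a standard.

```python
import os
import re
import secrets
from datetime import datetime, timezone

def storage_name(original_name: str, user_id: int) -> str:
    stem, ext = os.path.splitext(original_name)
    # Fold to lower case and replace anything non-alphanumeric with dashes.
    slug = re.sub(r"[^a-z0-9]+", "-", stem.lower()).strip("-")
    stamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    rand = secrets.token_hex(5)  # properly random, like /dev/urandom
    return f"{stamp}-{rand}-{user_id}-{slug}{ext.lower()}"

print(storage_name("My Holiday: Florida 23.jpg", 98765))
# prints something like 20130404125123-9f2ab31c7e-98765-my-holiday-florida-23.jpg
```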
193,859 | I am the maintainer of a project which has a large non-technical userbase. I've been maintaining it for about 4 years now and adding new features as they've been requested. I'd like to move on to other projects now and stop developing for this application. Because of the non-technical nature of the users, there have been very few code contributions in the past. I don't believe I will be able to find anyone else to take over the project in my stead. Bugs, issues, feature requests - these are still coming in. I am still responding to emails for help, as I am not sure if I should ignore them, tell them that I'm not working on the application, or if I should respond to emails in only certain cases. What is the best way to 'abandon' this project, but still let users use the application? Update (July 2016) - It didn't go as planned. I made an announcement in the README and soon after, I started receiving contributions of a more substantial nature. Pull requests with bug fixes, features, documentation, issue activity. Since then, the project has felt 'reinvigorated' and I'm now happily maintaining it along with newer projects. I have collaborators as well. At a guess, it may have been the kind of contributions which were affecting my view of the project and with the quality of contributions improving, it didn't feel like a chore any more. | I'm guessing this is not a project at a workplace where you are a paid employee and something you do in your spare time for free? If you are making no money from this, then clearly there is no incentive for you, and no incentive for anyone else to come in fresh to deal with it. (unless maybe it is for a charity or similar voluntary organisation) As an alternative, why not look at the possibility of adding paid for features. This way you may have some incentive to continue. You might find people willing to pay, especially when the alternative is for the system to stop being actively developed.
(of course people may abandon your system, but what do you care, you already aren't being paid). Another option could be to use the project to learn new technologies? Is it a website? Upgrade to the latest technology? Convert from Asp.Net to MVC4 for example? build a mobile version, make it service based and create an iOS app front end for it? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193859",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87306/"
]
} |
193,953 | Recent QA testing has found some regression bugs in our code. My team lead blames recent refactoring efforts for the regressions. My team lead's stance is "refactor, but don't break too many things", but wouldn't tell me how many is "too many". My stance is it's QA's job to find bugs that we can't find, and refactoring usually introduces breaking changes. So sure, I can be careful, but I don't knowingly release code with bugs to QA. I do it because I don't see them. If the refactoring was necessary, how many regression bugs should be considered too many? | You are right that refactoring code is important. It prevents code rot and improves code. It makes for cleaner code. But good code is not only clean code, it's code that is correct, and thus by definition contains as few bugs as possible (ideally none). The first goal of your code is to produce its expected result. So if your refactoring is introducing bugs you might want to consider the net effect of those refactorings. You should refactor code that is tested. If it's not, add tests and then refactor. This way you know you haven't broken anything. This will help mitigate the risks of a similar situation from happening in the future. As for refactoring introducing bugs, refactoring should not alter the behavior of a program. I will quote from Wikipedia : disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior Sadly nowadays, refactoring has come to mean anything from this definition to total rewrites. I would alter what your team lead said from: refactor but don't break too many things To: refactor and don't break anything As for finding bugs being the job of the QA, quality isn't someone else's problem. The goal should be that QA finds nothing. Realistically finding as little as possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193953",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87366/"
]
} |
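A tiny sketch of the "add tests, then refactor" advice above; the function and its behavior are invented purely for illustration. The tests pin the current observable behavior so a later refactoring that changes any output is caught before QA sees it.

```python
def format_invoice_total(amount_cents: int) -> str:
    # Existing production code that is about to be refactored.
    return "$" + str(amount_cents // 100) + "." + str(amount_cents % 100).zfill(2)

def test_format_invoice_total_is_unchanged():
    # Characterization tests: if a refactoring breaks any of these, external
    # behavior changed and it is no longer "just" a refactoring.
    assert format_invoice_total(0) == "$0.00"
    assert format_invoice_total(5) == "$0.05"
    assert format_invoice_total(123456) == "$1234.56"
```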
193,955 | I'm trying to remember a word, I think it's related to computational or database theory. The closest synonym is atomic but that's not exactly it. Basically it's a kind of computation that should produce the same result even when run multiple times in a row, meaning it doesn't create side effects for itself. I specifically ran across this word in a Stack Overflow answer about a chmod command (or some other permission related operation). Hopefully that's enough to go on. Poking around Wikipedia isn't much help. | You might be thinking of " Idempotent ". Idempotence is the property of certain operations in mathematics and computer science, that they can be applied multiple times without changing the result beyond the initial application. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/193955",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78221/"
]
} |
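A short Python sketch of idempotence, in the spirit of the chmod example mentioned in the question: applying the operation a second time leaves the state exactly as the first application did.

```python
import os
import stat
import tempfile

def make_read_only(path: str) -> None:
    # Setting an absolute permission mask is idempotent; repeating it changes nothing.
    os.chmod(path, stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH)

with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name

make_read_only(path)
first = stat.S_IMODE(os.stat(path).st_mode)
make_read_only(path)  # applying it again...
second = stat.S_IMODE(os.stat(path).st_mode)
print(first == second)  # ...does not change the result beyond the initial application

os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # restore write access so cleanup works everywhere
os.remove(path)
```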
194,083 | In my ASP.net MVC4 web application I use IEnumerables, trying to follow the mantra to program to the interface, not the implementation. Return IEnumerable(Of Student) vs Return New List(Of Student) People are telling me to use List and not IEnumerable, because lists force the query to be executed and IEumerable does not. Is this really best practice? Is there any alternative? I feel strange using concrete objects where an interface could be used. Is my strange feeling justified? | There are times when doing a ToList() on your linq queries can be important to ensure your queries execute at the time and in the order that you expect them to. Those scenarios are however rare and nothing one should worry too much about until they genuinely run into them. Long story short, use IEnumerable anytime you only need iteration, use IList when you need to index directly and need a dynamically sized array (if you need indexing on a fixed size array then just use a standard array). As for the execution time thing, you can always use a list as an IEnumerable variable, so feel free to return an IEnumerable by doing a .ToList(); , or pass in a parameter as an IEnumerable by executing .ToList() on the IEnumerable to force execution right then and there. Just be careful that anytime you force execution with .ToList() you don't hang on to the IEnumerable variable which you just did that to and execute it again, or else you'll end up doubling the iterations in your LINQ query unnecessarily. In regards to MVC, there is really nothing special to note here. It's going to follow the same execution time rules as the rest of .NET, I think you might have someone who was bit by confusion caused by the delayed execution semantics in the past and blamed it on MVC telling you this is somehow related, but it's not. The delayed execution semantics confuse everybody at first (and even for a good while afterwards; they can be a touch tricky). Again though, just don't worry about it until you really care about ensuring a LINQ query doesn't get executed twice or require it executed in a certain order relative to other code, at which point assign your variable to itself.ToList() to force execution and you'll be fine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194083",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81480/"
]
} |
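The answer above is about C#'s IEnumerable and LINQ, but the deferred-execution idea it warns about can be illustrated with Python generators versus lists; treat this only as an analogy, not as the .NET semantics.

```python
def expensive_query():
    # A generator behaves like an unmaterialized IEnumerable: nothing runs until iterated.
    for student_id in range(3):
        print(f"fetching student {student_id}")  # stands in for a real database hit
        yield student_id

materialized = list(expensive_query())  # like .ToList(): the work happens once, right here
lazy = expensive_query()                # no work has happened yet

total = sum(materialized) + len(materialized)  # reusing the list costs nothing extra
first_pass = sum(lazy)                         # the work happens now, on first iteration
print(total, first_pass)
# Iterating a *fresh* generator twice would repeat the "fetching" work - the
# doubled-iteration pitfall the answer warns about when an IEnumerable is enumerated twice.
```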
194,094 | I work in a middle-sized team which shares the same source code, and while we have continuous integration in place, all of us have to work in the same branch, so the build is almost always broken. We also have a rule, introduced recently to alleviate the broken builds, which states that no one is allowed to check in while the build is red. Having said that, during a day everyone has only a handful of 10-15 minute windows in which we are allowed to check in. And as the team grows, the check-in windows shrink even more. That forces developers to accumulate their changes locally, which results in bigger change sets that are even harder to verify as not breaking anything. You can see the vicious cycle. What can you recommend to help me stay effective working in an environment like this?
Also, please keep in mind that I am a developer, not a manager, and can't change the process or other people's behavior much. | To start with, this comment: ... having a branch implies an extra complexity and thus extra work ... is wholly false. I often hear it from people who aren't accustomed to branching, but it's still wrong. If you have many developers accumulating changes locally, their local changes constitute a de-facto branch of the main repository. When they finally push, this is a de-facto merge . The fact that your branches and merges are implicit doesn't remove the extra complexity you're concerned about, it just hides it. The truth is that making this process explicit might help you (plural = the whole team) learn to manage it. Now, you say no one is allowed to check in while the build is red, but in that case the build can never be fixed, so it can't be quite accurate. In general, if it's reasonable to build locally (i.e., the build isn't too big or slow), developers should do that before pushing anyway. I assume this isn't the case, because then they wouldn't push broken builds in the first place, but it'd be great if you could clarify this point. In general, the answer to how to stay efficient when a build is almost always broken is: stop breaking the build . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194094",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
194,286 | This is a problem I've run into a few times. Imagine you have a record that you want to store into a database table. This table has a DateTime column called "date_created". This one particular record was created a long time ago, and you're not really sure about the exact date, but you know the year and month. Other records you know just the year. Other records you know the day, month and year. You can't use a DateTime field, because "May 1978" isn't a valid date. If you split it up into multiple columns, you lose the ability to query. Has anyone else run into this, if so how did you handle it? To clarify the system I'm building, it is a system that tracks archives. Some content was produced a long time ago, and all that we know is "May 1978". I could store it as May 1 1978, but only with some way to denote that this date is only accurate to the month. That way some years later when I'm retrieving that archive, I'm not confused when the dates don't match up. For my purposes, it is important to differentiate "unknown day in May 1978" with "May 1st, 1978". Also, I would not want to store the unknowns as 0, like "May 0, 1978" because most database systems will reject that as an invalid date value. | Store all dates in normal DATE field in the database and have additional accuracy field how accurate DATE field actually is. date_created DATE,
date_created_accuracy INTEGER,
date_created_accuracy: 1 = exact date, 2 = month, 3 = year. If your date is fuzzy (e.g. May 1980), store it at the start of the period (e.g. May 1st, 1980). Or if your date is only accurate to the year (e.g. 1980), store it as January 1st, 1980 with the corresponding accuracy value. This way you can easily query in a somewhat natural way and still have a notion of how accurate the dates are. For example, this allows you to query dates between Jan 1st 1980 and Feb 28th 1981, and still get the fuzzy dates 1980 and May 1980. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194286",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62121/"
]
} |
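A runnable sketch of the date-plus-accuracy idea above, using Python's built-in sqlite3 so the range query can be shown end to end; the table layout and accuracy codes follow the answer, while the sample rows are invented.

```python
import sqlite3

EXACT, MONTH, YEAR = 1, 2, 3  # accuracy codes from the answer above

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE archive (title TEXT, date_created TEXT, date_created_accuracy INTEGER)"
)
con.executemany(
    "INSERT INTO archive VALUES (?, ?, ?)",
    [
        ("tape 1", "1978-05-01", MONTH),  # "May 1978", stored at the start of the month
        ("tape 2", "1980-01-01", YEAR),   # "1980", stored at the start of the year
        ("tape 3", "1980-05-12", EXACT),  # fully known date
    ],
)

# Range queries stay natural; the accuracy column says how literally to read each hit.
query = (
    "SELECT title, date_created, date_created_accuracy FROM archive "
    "WHERE date_created BETWEEN '1980-01-01' AND '1981-02-28'"
)
for title, date, accuracy in con.execute(query):
    print(title, date, accuracy)  # returns the 1980 rows, fuzzy or not
```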
194,340 | Recently I read a lot about noSQL DBMSs. I understand CAP theorem , ACID rules, BASE rules and the basic theory. But didn't find any resources on why is noSQL scalable more easily than RDBMS (e.g. in case of a system that requires lots of DB servers)? I guess that keeping constraints and foreign keys cost resources and when a DBMS is distributed, it is a lot more complicated. But I expect there's a lot more than this. Can someone please explain how noSQL/SQL affects scalability? | noSQL databases give up a massive amount of functionality that a SQL database gives you by it's very nature. Things like automatic enforcement of referential integrity, transactions, etc. These are all things that are very handy to have for some problems, and which require some interesting techniques to scale outside of a single server (think about what happens if you need to lock two tables for an atomic transaction, and they are on different servers!). noSQL databases don't have all that. If you need that stuff, you need to do it yourself, but if you DON'T need it (and there are a lot of applications that don't), then boy howdy are you in luck. The DB doesn't have to do all of these complex operations and locking across much of the dataset, so it's really easy to partition the thing across many servers/disks/whatever and have it work really fast. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194340",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87707/"
]
} |
194,433 | I've been learning more about Big O Notation and how to calculate it based on how an algorithm is written. I came across an interesting set of "rules" for calculating an algorithms Big O notation and I wanted to see if I'm on the right track or way off. Big O Notation: N function(n) {
for (var a = 0; a <= n; a++) { // It's N because it's just a single loop
// Do stuff
}
} Big O Notation: N 2 function(n, b) {
for (var a = 0; a <= n; a++) {
for (var c = 0; c <= b; c++) { // It's N squared because it's two nested loops
// Do stuff
}
}
} Big O Notation: 2N function(n, b) {
for (var a = 0; a <= n; a++) {
// Do stuff
}
for (var c = 0; c <= b; c++) { // It's 2N because the loops are outside each other
// Do stuff
}
} Big O Notation: NLogN function(n) {
n.sort(); // The NLogN comes from the sort?
for (var a = 0; a <= n; a++) {
// Do stuff
}
} Are my examples and the subsequent notation correct? Are there additional notations I should be aware of? | Formally, big-O notation describes the degree of complexity. To calculate big-O notation: identify formula for algorithm complexity. Let's say, for example, two loops with another one nested inside, then another three loops not nested: 2N² + 3N remove everything except the highest term: 2N² remove all constants: N² In other words two loops with another one nested inside, then another three loops not nested is O (N²) This of course assumes that what you have in your loops are simple instructions. If you have for example sort() inside the loop, you'll have to multiply complexity of the loop by the complexity of the sort() implementation your underlying language/library is using. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194433",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4584/"
]
} |
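A quick counting sketch of the rule in the answer above: tally the "do stuff" steps for the example formula 2N² + 3N and watch the highest term dominate while the constant is what big-O drops.

```python
def count_steps(n):
    steps = 0
    for _ in range(2):              # two loops with another one nested inside: 2N^2 steps
        for _ in range(n):
            for _ in range(n):
                steps += 1
    for _ in range(3):              # another three loops, not nested: 3N steps
        for _ in range(n):
            steps += 1
    return steps                    # exactly 2*n*n + 3*n

for n in (10, 100, 1000):
    total = count_steps(n)
    print(n, total, total / n**2)   # the ratio settles near the constant 2, so it's O(N^2)
```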
194,439 | Setup : Suppose you are teaching an introduction to Databases class, the students are CS students that have a working knowledge of tree structures, how they can speed up searches, and have probably implemented a few in their lifetime. Question : How would you describe the way in which a database uses indexes to search a table for a set of keys? What structure is a database index most similar to? Bonus : How does someone write a SQL query where clause to take advantage of the searching capability of the index they design on a given table? Answers should correspond to all database products as a whole. I'm looking for general tips which allow faster searching on all databases. Plain english descriptions please, no code, Big O searching descriptions are fine. This question might be too specific for this site, I considered asking on StackExchange but since I'm requesting a plain english description of a broad concept I thought this site would be Ok. | Database indexes are modeled after textbook indexes, then made more efficient: The non-indented parts are the primary part you're searching on, and the indented part underneath some of them further identifies specific topics. Each indentation level is similar to another column on the index. Taking advantage of indexes is (I think) partially implementation-specific. For example: If you query column food for "chicken" , the index will be utilized. For "chick%" , I would say it depends on the database/type of index, although all the ones I know of will still use it. Similar rules apply for querying columns food and drink for "chicken" and "water" : First it limits results based on the first column in the index, then the second - just as if you used the outer index, then the indented index, in a textbook. Likewise for "chik%" and "wat%" However, "%ken" cannot be searched in an index in the databases I know of, because they index from the front of the word, not the back - same as textbook indexes. So the database will have to scan the whole table. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194439",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22139/"
]
} |
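A small sqlite3 sketch of the textbook-index analogy above: build a two-column index (the outer entry plus the indented entry) and ask the engine for its query plan. The plan text varies by SQLite version, so it is printed rather than asserted.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE menu (food TEXT, drink TEXT, price REAL)")
con.execute("CREATE INDEX idx_food_drink ON menu (food, drink)")

queries = [
    "SELECT * FROM menu WHERE food = 'chicken'",                      # leading index column: index usable
    "SELECT * FROM menu WHERE food = 'chicken' AND drink = 'water'",  # both columns, in index order
    "SELECT * FROM menu WHERE food LIKE '%ken'",                      # suffix match: expect a full scan
]
for q in queries:
    plan = con.execute("EXPLAIN QUERY PLAN " + q).fetchall()
    print(q)
    print("   ", plan)
```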
194,446 | I've worked in some projects where most of the business logic was implemented on the database (mostly through stored procedures). On the other side, I've heard from some fellow programmers that this is a bad practice ("Databases are there to store data. Applications are there to do the rest"). Which of these approaches is the generally better? The pros of implementing business logic in the DB I can think of are: Centralization of business logic; Independency of application type, programming language, OS, etc; Databases are less prone to technology migration or big refactorings (AFAIK); No rework on application technology migration (e.g.: .NET to Java, Perl to Python, etc). The cons: SQL is less productive and more complex for business logic programming, due to the lack of libraries and language constructs the most application-oriented languages offer; More difficult (if possible at all) code reuse through libraries; Less productive IDEs. Note: The databases I'm talking about are relational, popular databases like SQL Server, Oracle, MySql etc. Thanks! | Business logic doesn't go into the database If we're talking about multi-tier applications, it seems pretty clear that business logic, the kind of intelligence that runs a particular enterprise, belongs in the Business Logic Layer, not in the Data Access Layer. Databases do a few things really well: They store and retrieve data They establish and enforce relationships between different data entities They provide the means to query the data for answers They provide performance optimizations. They provide access control Now, of course, you can codify all sorts of things in a database that pertain to your business concerns, things like tax rates, discounts, operation codes, categories and so forth. But the business action that is taken on that data is not generally coded into the database, for all sorts of reasons already mentioned by others, although an action can be chosen in the database and executed elsewhere. And of course, there may be things that are performed in a database for performance and other reasons: Closing out an accounting period Number crunching Nightly batch processes Fail-over Naturally, nothing is engraved in stone. Stored Procedures are suitable for a wide array of tasks simply because they live on the database server and have certain strengths and advantages. Stored Procedures Everywhere? There's a certain allure to coding all of your data storage, management and retrieval tasks in stored procedures, and simply consuming the resulting data services. You certainly would benefit from the maximum possible performance and security optimizations that the database server could provide, and that's no small thing. But what do you risk? Vendor lock-in The need for developers with special skill sets Spartan programming tools, overall Extremely tight software coupling No separation of concerns And of course, if you need a web service (which is probably where this is all heading, anyway), you're still going to have to build that. So what is typical practice? I would say that a typical, modern approach is to use an Object-Relational Mapper (such as Entity Framework) to create classes that model your tables. You can then speak to your database through a repository that returns collections of objects, a situation that is very familiar to any competent software developer. 
The ORM dynamically generates SQL corresponding to your data model and the information requested, which the database server then processes to return query results. How well does this work? Very well, and much more rapidly than writing stored procedures and views. This generally covers about 80% of your data access requirements, mostly CRUD. What covers the other 20%? You guessed it: stored procedures, which all of the major ORMs support directly. Can you write a code generator that does the same thing as an ORM, but with stored procedures? Sure you can. But ORMs are generally vendor-independent, well-understood by everyone, and better supported. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194446",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35382/"
]
} |
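A hedged sketch of the "repository returning collections of objects" idea described above, using sqlite3 and a dataclass; a real project would use an ORM such as Entity Framework or SQLAlchemy, and all names here are invented.

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Person:
    id: int
    name: str

class PersonRepository:
    def __init__(self, con: sqlite3.Connection):
        self.con = con

    def all(self) -> list:
        rows = self.con.execute("SELECT id, name FROM person")
        return [Person(*row) for row in rows]  # collections of objects, not raw rows

    def add(self, name: str) -> None:
        self.con.execute("INSERT INTO person (name) VALUES (?)", (name,))

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
repo = PersonRepository(con)
repo.add("Ada")
print(repo.all())  # business logic consumes Person objects; the SQL stays behind the repository
```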
194,504 | I'm writing Ruby code for a simple encryption exercise and have frequently run across this dilemma (the exercise is a solitaire cipher if you must know). It is a question of whether I should pad out my logic with descriptive variables and single step statements that make the function readable instead of a concise, even dense statements that eliminate repetition and/or minimizes opportunities for errors. My most recent example: My program takes input, and because of rigid format guidelines, it can easily determine if input should be encrypted or decrypted. To simplify, once the encryption key and message are converted/generated to be compatible, it's a matter of subtracting the key from the encrypted message or adding the key to an unencrypted message, to get the desired output(think of the key as encryption, message + encryption = code; code - encryption = message). The DRY position says to me that I should convert my encrypted message differently from my unencrypted message so that the function which takes encryption key and applies it to the message never needs to distinguish. I've found that this means I need some nested if statements in the function but the logic appears to be solid. This code, however, is not easily readable. This would require some commenting to be clear. I could, on the other hand, write two different functions that are called based on a flag that is set when the application determines encryption or decryption. This would be simpler to read but would duplicate the high level function of applying the encryption key to a message (causing it to be encrypted or decrypted). Should I lean toward readable code, or concise code?
Or have I missed another way to get this functionality and satisfy both principles?
Is it a position along a scale in which one must consider the purpose of the project and make the best decisions to serve that purpose? So far, I tend to emphasize concise, DRY code over readable code. | DRY is a guideline, not a religion. Those who take it to the point of DRY above all else, have taken it too far. First and foremost, usable code is paramount. If the code isn't useful and usable, it isn't... and there isn't any point in writing it in the first place. Secondly, some day, someone will have to maintain your code. If your code isn't maintainable, they will break your "beautiful" dense, concise, DRY design while cursing your name. Don't do this. I've been that person and every time I see a certain name in the code annotation, I shudder. Making dense code that lack "descriptive variables" and sticking everything into nested ternary expressions with lambdas without any documentation is clever. It is nice to know that you can do it - but don't. Clever code is very difficult to debug. Avoid writing clever code . Most of the time spent in software is spent in maintenance of the software - not writing it the first time. Write the code so that you (or someone else) can quickly and easily fix bugs and add features as required to it with as few ideal design breaking changes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194504",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/76086/"
]
} |
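One hedged way to get both DRY and readable code in the cipher example from the question (helper and function names are invented, and the arithmetic is simplified to a plain modular shift rather than the actual solitaire rules): keep a single shared routine and give it two intention-revealing entry points.

```python
def _combine(message_values, key_values, sign):
    # message + key encrypts, message - key decrypts; the modulus keeps values in range
    return [(m + sign * k) % 26 for m, k in zip(message_values, key_values)]

def encrypt(message_values, key_values):
    return _combine(message_values, key_values, +1)

def decrypt(code_values, key_values):
    return _combine(code_values, key_values, -1)

print(decrypt(encrypt([7, 4, 11], [3, 9, 1]), [3, 9, 1]))  # round-trips to [7, 4, 11]
```

The shared logic lives in exactly one place, while the names at the call sites keep the intent readable without extra comments.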
194,542 | I'm currently on a medium-sized team of web developers. We're using jira for bug tracking. We're working on a product with frequent layout changes. A lot of times bugs are filed about a bug in the layout in some browser. Sometimes, by the time we get around to dealing with a low priority bug, the layout has already changed and it is no longer relevant. What should we close it as? What I mean is how we should treat these issues? While Jira is the bug tracking software we use, I'm more interested in how to handle these sort of issues in general. Does it even matter? (We might return to the layout later, but it's very unlikely) | We resolve such issues as 'Obsolete'. This is not a default resolution option in JIRA but it is easy enough to add. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/56971/"
]
} |
194,580 | I've been doing a lot of reading online trying to figure out how to write asynchronous JavaScript code. One of the techniques that has come up a lot in my research is to use callbacks. While I understand the process of how to write and execute a callback function, I'm confused why callbacks seem to automagically make the JavaScript execution asynchronous. So, my question is: how does adding in callback functions to my JavaScript code make said code automagically async? | It doesn't. Just taking a callback or passing a callback doesn't mean it's asynchronous. For example, the .forEach function takes a callback but is synchronous. var available = false;
[1,2,3].forEach( function(){
available = true;
});
//code here runs after the whole .forEach has run,
//so available === true here The setTimeout takes a callback too and is asynchronous. function myFunction( fn ) {
setTimeout( function() {
fn(1,2,3);
}, 0 );
}
var available = false;
myFunction( function() {
available = true;
});
//available is never true here Hooking to any asynchronous event in Javascript always requires a callback but that doesn't
mean calling functions or passing them around is always asynchronous. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194580",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29122/"
]
} |
194,614 | Programmers often talk about the time complexity of an algorithm, e.g. O(log n) or O(n^2). Time complexity classifications are made as the input size goes to infinity, but ironically infinite input size in computation is not used. Put another way, the classification of an algorithm is based on a situation that algorithm will never be in: where n = infinity. Also, consider that a polynomial time algorithm where the exponent is huge is just as useless as an exponential time algorithm with tiny base (e.g., 1.00000001^n) is useful. Given this, how much can I rely on the Big-O time complexity to advise choice of an algorithm? | With small n Big O it is just about useless and it's the hidden constants or even actual implementation that will more likely be the deciding factor for which algorithm is better. This is why most sorting functions in standard libraries will switch to a faster insertion sort for those last 5 elements. The only way to figure out which one will be better is benchmarking with realistic data sets. Big O is good for large data sets and discussing on how an algorithm will scale, it's better to have a O(n log n) algorithm than a O(n^2) when you expect the data to grow in the future, but if the O(n^2) works fine the way it is and the input sizes will likely remain constant, just make note that you can upgrade it but leave it as is, there are likely other things you need to worry about right now. (Note all "large" and "smalls" in the previous paragraphs are meant to be taken relatively; small can be a few million and big can be a hundred it all depends on each specific case) Often times there will be a trade-off between time and space: for example quicksort requires O(log n) extra memory while heapsort can use O(1) extra memory, however the hidden constants in heapsort makes it less attractive (there's also the stability issue which make mergesort more attractive if you don't mind payign the extra memory costs). Another thing to consider is database indexes, these are additional tables that require log(n) time to update when a record is added, removed or modified, but lets lookups happen much faster ( O(log n) instead of O(n) ). deciding on whether to add one is a constant headache for most database admins: will I have enough lookups on the index compared to the amount of time I spend updating the index? One last thing to keep in mind: the more efficient algorithms are nearly always more complicated than the naive straight-forward one (otherwise it would be the one you would have used from the start). This means a larger surface area for bugs and code that is harder to follow, both are non-trivial issues to deal with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194614",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86883/"
]
} |
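A sketch of the "benchmark with realistic data sets" advice above: a pure-Python O(n²) insertion sort timed against the built-in O(n log n) sort at a small and a larger size. The point is to measure rather than assume; no particular winner is claimed here.

```python
import random
import timeit

def insertion_sort(items):
    items = list(items)
    for i in range(1, len(items)):
        value, j = items[i], i - 1
        while j >= 0 and items[j] > value:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = value
    return items

for n in (8, 1000):
    data = [random.random() for _ in range(n)]
    t_insertion = timeit.timeit(lambda: insertion_sort(data), number=20)
    t_builtin = timeit.timeit(lambda: sorted(data), number=20)
    print(n, round(t_insertion, 4), round(t_builtin, 4))  # compare the two at each size
```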
194,615 | I'm developing a WordPress theme and am planning to sell it myself. I was thinking of having a 14- or 30-day refund policy for customers, but my concern is that people can essentially get the theme for free, if they: 1) buy it, 2) download the files, and then 3) request a refund. Then they would have both the theme and their money back. I've been looking into software refund policies and noticed there are a few different schools of thoughts on providing customer refunds: School of Thought #1 - Provide refunds. If your software is good quality, not many people will request a refund. My response: But in my case, even if users think the theme is high quality, they can still request a refund and also keep the theme. I have also put a lot of work into the development and testing of the theme, so it is quality work. School of Thought #2 - Provide refunds, but some customers will abuse the refund process, so have an activation code and only provide refunds to people who have not activated the software. My response: This sounds like a good idea, but there isn't an activation mechanism for themes in WordPress, so I don't know how I could implement it. School of Thought #3 - No refunds. My response: This seems really inflexible. I'm not against refunding customer payments for good reason, but I don't want to give away my work, either. Have you heard of any other good options in setting up a refund policy? | With small n Big O it is just about useless and it's the hidden constants or even actual implementation that will more likely be the deciding factor for which algorithm is better. This is why most sorting functions in standard libraries will switch to a faster insertion sort for those last 5 elements. The only way to figure out which one will be better is benchmarking with realistic data sets. Big O is good for large data sets and discussing on how an algorithm will scale, it's better to have a O(n log n) algorithm than a O(n^2) when you expect the data to grow in the future, but if the O(n^2) works fine the way it is and the input sizes will likely remain constant, just make note that you can upgrade it but leave it as is, there are likely other things you need to worry about right now. (Note all "large" and "smalls" in the previous paragraphs are meant to be taken relatively; small can be a few million and big can be a hundred it all depends on each specific case) Often times there will be a trade-off between time and space: for example quicksort requires O(log n) extra memory while heapsort can use O(1) extra memory, however the hidden constants in heapsort makes it less attractive (there's also the stability issue which make mergesort more attractive if you don't mind payign the extra memory costs). Another thing to consider is database indexes, these are additional tables that require log(n) time to update when a record is added, removed or modified, but lets lookups happen much faster ( O(log n) instead of O(n) ). deciding on whether to add one is a constant headache for most database admins: will I have enough lookups on the index compared to the amount of time I spend updating the index? One last thing to keep in mind: the more efficient algorithms are nearly always more complicated than the naive straight-forward one (otherwise it would be the one you would have used from the start). This means a larger surface area for bugs and code that is harder to follow, both are non-trivial issues to deal with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87930/"
]
} |
194,635 | Could someone explain the rationale, why in a bunch of most popular languages (see note below) comparison operators (==, !=, <, >, <=, >=) have higher priority than bitwise operators (&, |, ^, ~)? I don't think I've ever encountered a use where this precedence would be natural. It's always stuff like: if( (x & MASK) == CORRECT ) ... // Chosen bits are in correct setting, rest unimportant
if( (x ^ x_prev) == SET ) // only, and exactly SET bit changed
if( (x & REQUIRED) < REQUIRED ) // Not all conditions satisfied The cases where I'd use: flags = ( x == 6 | 2 ); // set bit 0 when x is 6, bit 1 always. are near to nonexistent. What was the motivation of language designers to decide upon such precedence of operators? For example, all but SQL at the top 12 languages are like that on Programming Language Popularity list at langpop.com: C, Java, C++, PHP, JavaScript, Python, C#, Perl, SQL, Ruby, Shell, Visual Basic. | Languages have copied that from C, and for C, Dennis Ritchie explains that initially, in B (and perhaps early C), there was only one form & which depending on the context did a bitwise and or a logical one. Later, each function got its operator: & for the bitwise one and && for for logical one. Then he continues Their tardy introduction explains an infelicity of C's precedence rules. In B one writes if (a == b & c) ... to check whether a equals b and c is non-zero; in such a conditional expression it is better that & have lower precedence than == . In converting from B to C, one wants to replace & by && in such a statement; to make the conversion less painful, we decided to keep the precedence of the & operator the same relative to == , and merely split the precedence of && slightly from & . Today, it seems that it would have been preferable to move the relative precedences of & and == , and thereby simplify a common C idiom: to test a masked value against another value, one must write if ((a & mask) == b) ... where the inner parentheses are required but easily forgotten. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194635",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30296/"
]
} |
194,646 | Question What are the possible ways to solve a stack overflow caused by an recursive algorithm? Example I'm trying to solve Project Euler problem 14 and decided to try it with a recursive algorithm. However, the program stops with a java.lang.StackOverflowError. Understandably. The algorithm indeed overflowed the stack because I tried to generate a Collatz sequence for a very large number. Solutions So I was wondering: what standard ways are there to solve a stack overflow assuming your recursive algorithm was written correctly and would always end up overflowing the stack? Two concepts that came to mind were: tail recursion iteration Are ideas (1) and (2) correct? Are there other options? Edit It would help to see some code, preferably in Java, C#, Groovy or Scala. Perhaps don't use the Project Euler problem mentioned above so it won't get spoiled for others, but take some other algorithm. Factorial maybe, or something similar. | Tail call optimization is present in many languages and compilers. In this situation, the compiler recognizes a function of the form: int foo(n) {
...
return bar(n);
} Here, the language is able to recognize that the result being returned is the result from another function and change a function call with a new stack frame into a jump. Realize that the classic factorial method: int factorial(n) {
if(n == 0) return 1;
if(n == 1) return 1;
return n * factorial(n - 1);
} is not tail call optimizatable because of the inspection necessary on the return. ( Example source code and compiled output ) To make this tail call optimizeable, int _fact(int n, int acc) {
if(n == 1) return acc;
return _fact(n - 1, acc * n);
}
int factorial(int n) {
if(n == 0) return 1;
return _fact(n, 1);
} Compiling this code with gcc -O2 -S fact.c (the -O2 is necessary to enable the optimization in the compiler, but with more optimizations of -O3 it gets hard for a human to read...) _fact(int, int):
cmpl $1, %edi
movl %esi, %eax
je .L2
.L3:
imull %edi, %eax
subl $1, %edi
cmpl $1, %edi
jne .L3
.L2:
rep ret ( Example source code and compiled output ) One can see in segment .L3 , the jne rather than a call (which does a subroutine call with a new stack frame). Please note this was done with C. Tail call optimization in Java is hard and depends on the JVM implementation (that said, I haven't seen any that do it, because it is hard and implications of the required Java security model requiring stack frames - which is what TCO avoids) -- tail-recursion + java and tail-recursion + optimization are good tag sets to browse. You may find other JVM languages are able to optimize tail recursion better (try clojure (which requires the recur to tail call optimize), or scala). That said, There is a certain joy in knowing that you wrote something right - in the ideal way that it can be done. And now, I'm going to get some scotch and put on some German electronica ... To the general question of "methods to avoid a stack overflow in a recursive algorithm"... Another approach is to include a recursion counter. This is more for detecting infinite loops caused by situations beyond one's control (and poor coding). The recursion counter takes the form of int foo(arg, counter) {
if(counter > RECURSION_MAX) { return -1; }
...
return foo(arg, counter + 1);
} Each time you make a call, you increment the counter. If the counter gets too big, you error out (in here, just a return of -1, though in other languages you may prefer to throw an exception). The idea is to prevent worse things from happening (out of memory errors) when doing a recursion that is much deeper than expected and likely an infinite loop. In theory, you shouldn't need this. In practice, I've seen poorly written code that has hit this because of a plethora of small errors and bad coding practices (multithreaded concurrency issues where something changes something outside the method that makes another thread go into an infinite loop of recursive calls). Use the right algorithm and solve the right problem. Specifically for the Collatz Conjecture, it appears that you are trying to solve it in the xkcd way: You are starting at a number and doing a tree traversal. This rapidly leads to a very large search space. A quick run to calculate the number of iterations for the correct answer results in about 500 steps. This shouldn't be an issue for recursion with a small stack frame. While knowing the recursive solution is not a bad thing, one should also realize that many times the iterative solution is better . A number of ways of approaching converting a recursive algorithm to an iterative one can be seen on Stack Overflow at Way to go from recursion to iteration . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194646",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5611/"
]
} |
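To make the iterative alternative recommended in the answer above concrete, here is a minimal Java sketch (the class and method names are my own, not from the answer) that counts Collatz steps with a plain loop, so stack depth never becomes a concern:

// Iterative Collatz step counter - a sketch of the "prefer iteration over
// deep recursion" advice above. Assumes the input is a positive number.
public final class CollatzSteps {
    // Returns how many steps it takes n to reach 1 under the Collatz rule.
    static long steps(long n) {
        long count = 0;
        while (n != 1) {
            n = (n % 2 == 0) ? n / 2 : 3 * n + 1;   // the Collatz rule
            count++;                                 // constant stack usage
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(steps(27));   // prints 111
    }
}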
194,686 | In my opinion, one of the greatest things about Scala is its interoperability with Java and its similar syntax. One thing that I found strange is the use of the _ operator for package wildcard imports instead of the * operator that is used in Java. Is there a technical reason for using _ instead of * ? If not, then why was this change done? | In Scala, the * is a valid identifier. One could write: val * = "trollin'"
println(*) With the result being: trollin' One could write a class named * as such: class * {
def test():String = {
"trollin'"
}
} So with that being the case, when I have a class * in the package us.hexcoder and I write: import us.hexcoder.* You would be saying that you wish to import a class with the name * . Because of this, Scala needed to use another symbol to indicate a wildcard import. For whatever reason, they decided to use _ as the wildcard symbol. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194686",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35897/"
]
} |
194,764 | What is LPCTSTR and LPCTSTR -like (for instance HDC ), and what does it stand for? | Quoting Brian Kramer on the MSDN forums LPCTSTR = L ong P ointer to a C onst T CHAR STR ing (Don't worry, a long pointer is the same as a pointer. There were two flavors of pointers under 16-bit
Windows.) Here's the table: LPSTR = char* LPCSTR = const char* LPWSTR = wchar_t* LPCWSTR = const wchar_t* LPTSTR = char* or wchar_t* depending on _UNICODE LPCTSTR = const char* or const wchar_t* depending on _UNICODE | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194764",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
194,822 | I'd like to know if it makes sense to divide the project I'm working on into two repositories instead of one. From what I can say: Frontend will be written in html+js Backend in .net The backend doesn't depend on the frontend and the frontend doesn't depend on the backend The frontend will use a restful api implemented in the backend. The frontend could be hosted on any static http server. As of now, the repository has this structure: root: frontend/* backend/* I think it's a mistake to keep both projects in the same repository. Since the two projects do not have dependencies on each other, they should belong in individual repositories and, if needed, a parent repository that has submodules. I've been told that it's pointless and that we won't get any benefit from doing that. Here are some of my arguments: We have two modules that don't depend on each other. Having source history of both projects in the long term may complicate things (try searching in the history for something in the frontend while you have half of the commits that are completely unrelated to the bug you're looking for) Conflict and merging (This shouldn't happen but having someone pushing to the backend will force other developers to pull backend changes to push frontend changes.) One developer might work only on the backend but will always have to pull the frontend or the other way around. In the long run, when it is time to deploy: in some way, the frontend could be deployed to multiple static servers while having one backend server. In every case, people will be forced to either clone the whole backend with it or to make custom scripts to push only the frontend to all servers or to remove the backend. Easier to just push/pull only the frontend or backend than both if only one is needed. Counter argument (one person might work on both projects): Create a third repo with submodules and develop with it. History is kept separated in individual modules and you can always create tags where versions of backend/frontend really work together in sync. Having both frontend/backend together in one repo doesn't mean that they will work together. It's just merging both histories into one big repo. Having frontend/backend as submodules will make things easier if you want to add a freelancer to the project. In some cases, you don't really want to give full access to the codebase. Having one big module will make things harder if you want to restrict what the "outsiders" can see/edit. Bug introduction and fixing bugs: I inserted a new bug in the frontend. Then someone fixes a bug in the backend. With one repository, rolling back before the new bug will also roll back the backend, which could make it difficult to fix. I'd have to clone the backend in a different folder to have the backend working while fixing the bug in the frontend... then trying to remerge things up... Having two repositories will be painless because moving the HEAD of one repo won't change the other. And testing against different versions of the backend will be painless. Can someone give me more arguments to convince them, or at least tell me why it is pointless (more complicated) to divide the project in two submodules? The project is new and the codebase is a couple of days old so it's not too soon to fix. | At my company, we use a separate SVN repository for every component of the system. I can tell you that it gets extremely frustrating. Our build process has so many layers of abstraction.
We do this with Java, so we have a heavy build process with javac compilation, JibX binding compilation, XML validation, etc. For your site, it may not be a big deal if you don't really "build it" (such as vanilla PHP). Downsides to splitting a product into multiple repositories Build management - I can't just checkout code, run a self-contained build script and have a runnable / installable / deployable product. I need an external build system that goes out to multiple repos, runs multiple inner build scripts, then assembles the artifacts. Change tracking - Seeing who changed what, when, and why. If a bug fix in the frontend requires a backend change, there are now 2 divergent paths for me to refer back to later. Administration - do you really want to double the number of user accounts, password policies, etc. that need to be managed? Merging - New features are likely to change a lot of code. By splitting your project into multiple repositories, you are multiplying the number of merges needed. Branch creation - Same deal with branching, to create a branch, you now have to create a branch in each repository. Tagging - after a successful test of your code, you want to tag a version for release. Now you have multiple tags to create, one in each repository. Hard to find something - Maybe frontend/backend is straightforward, but it becomes a slippery slope. If you split into enough modules, developers may have to investigate where some piece of code lives in source control. My case is a bit extreme as our product is split across 14 different repos and each repo is then divided into 4-8 modules. If I remember, we have somewhere around 80 or some "packages" which all need to be checked out individually and then assembled. Your case with just backend/frontend may be less complicated, but I still advise against it. Extreme examples can be compelling arguments for or against pretty much anything :) Criteria I would use to decide I would consider splitting a product into multiple source code repositories after considering the following factors: Build - Do the results of building each component merge together to form a product? Like combining .class files from a bunch of components into a series of .jar or .war files. Deployment - Do you end up with components that get deployed together as one unit or different units that go to different servers? For example, database scripts go to your DB server, while javascript goes to your web server. Co-change - Do they tend to change frequently or together? In your case, they may change separately, but still frequently. Frequency of branching/merging - if everybody checks into trunk and branches are rare, you may be able to get away with it. If you frequently branch and merge, this may turn into a nightmare. Agility - if you need to develop, test, release and deploy a change on a moment's notice (likely with SaaS), can you do it without spending precious time juggling branches and repos? Your arguments I also don't agree with most of your arguments for this splitting. I won't dispute them all because this long answer will get even longer, but a few that stand out: We have two modules that don't depend between each others. Non-sense. If you take your backend away, will your frontend work? That's what I thought. Having source history of both projects in the long term may complicate
things (try searching in the history for something in the frontend
while you have half of the commits that are completely unrelated to
the bug you're looking for) If your project root is broken into frontend/ and backend/, then you can look at the history of those hierarchies independently. Conflict and merging (This shouldn't happen but having someone pushing
to the backend will force other developer to pull backend changes to
push frontend changes.) One developer might work only on the backend
but will always have to pull the backend or the other way around. Splitting your project into different repos doesn't solve this. A frontend conflict and a backend conflict still leaves you with 2 conflicts, whether it's 1 repository times 2 conflicts or 2 repositories times 1 conflict. Somebody still needs to resolve them. If the concern is that 2 repos means a frontend dev can merge frontend code while a backend dev merges backend code, you can still do that with a single repository using SVN. SVN can merge at any level. Maybe that is a git or mercurial limitation (you tagged both, so not sure what SCM you use)? On the other hand With all this said, I have seen cases where splitting a project into multiple modules or repositories works. I even advocated for it once for a particular project where we integrated Solr into our product. Solr of course runs on separate servers, only changes when a changeset is related to search (our product does much more than search), has a separate build process and there are no code artifacts or build artifacts shared. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12039/"
]
} |
194,975 | Many programmers know the joy of whipping up a quick regular expression, these days often with help of some web service, or more traditionally at interactive prompt, or perhaps writing a small script which has the regular expression under development, and a collection of test cases. In either case the process is iterative and fairly quick: keep hacking at the cryptic-looking string until it matches and captures what you want and will reject what you don't want. For a simple case result might be something like this, as a Java regexp: Pattern re = Pattern.compile(
"^\\s*(?:(?:([\\d]+)\\s*:\\s*)?(?:([\\d]+)\\s*:\\s*))?([\\d]+)(?:\\s*[.,]\\s*([0-9]+))?\\s*$"
); Many programmers also know the pain of needing to edit a regular expression, or just code around a regular expression in a legacy code base. With a bit editing to split it up, above regexp is still very easy to comprehend for anyone reasonably familiar with regexps, and a regexp veteran should see right away what it does (answer at the end of the post, in case someone wants the exercise of figuring it out themselves). However, things don't need to get much more complex for a regexp to become truly write-only thing, and even with diligent documentation (which everybody of course does for all complex regexps they write...), modifying the regexps becomes a daunting task. It can be a very dangerous task too, if regexp is not carefully unit tested (but everybody of course has comprehensive unit tests for all their complex regexps, both positive and negative...). So, long story short, is there a write-read solution/alternative for regular expressions without losing their power? How would the above regexp look like with an alternative approach? Any language is fine, though a multi-language solution would be best, to the degree regexps are multi-language. And then, what the earlier regexp does is this: parse a string of numbers in format 1:2:3.4 , capturing each number, where spaces are allowed and only 3 is required. | A number of people have mentioned composing from smaller parts, but no one's provided an example yet, so here's mine: string number = "(\\d+)";
string unit = "(?:" + number + "\\s*:\\s*)";
string optionalDecimal = "(?:\\s*[.,]\\s*" + number + ")?";
Pattern re = Pattern.compile(
"^\\s*(?:" + unit + "?" + unit + ")?" + number + optionalDecimal + "\\s*$"
); Not the most readable, but I feel like it's clearer than the original. Also, C# has the @ operator which can be prepended to a string in order to indicate that it is to be taken literally (no escape characters), so number would be @"([\d]+)"; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/194975",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/70135/"
]
} |
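A possible way to exercise the composed pattern from the answer above (a hedged sketch: the variable names mirror the answer, and the sample input strings are made up for illustration):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ComposedPatternDemo {
    public static void main(String[] args) {
        // Rebuild the pattern from small named pieces, as in the answer above.
        String number = "([\\d]+)";
        String unit = "(?:" + number + "\\s*:\\s*)";
        String optionalDecimal = "(?:\\s*[.,]\\s*" + number + ")?";
        Pattern re = Pattern.compile(
            "^\\s*(?:" + unit + "?" + unit + ")?" + number + optionalDecimal + "\\s*$");

        // Made-up inputs matching the question's 1:2:3.4 format, with optional parts.
        for (String input : new String[] { "1:2:3.4", "42", " 7 : 30 ", "abc" }) {
            Matcher m = re.matcher(input);
            if (m.matches()) {
                System.out.printf("'%s' -> groups: %s %s %s %s%n",
                        input, m.group(1), m.group(2), m.group(3), m.group(4));
            } else {
                System.out.printf("'%s' -> no match%n", input);
            }
        }
    }
}

Running it shows that the composed pattern captures the same four groups as the original single-string regexp, which is a cheap way to check that the refactoring did not change behaviour.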
195,032 | I was writing this code: private static Expression<Func<Binding, bool>> ToExpression(BindingCriterion criterion)
{
switch (criterion.ChangeAction)
{
case BindingType.Inherited:
var action = (byte)ChangeAction.Inherit;
return (x => x.Action == action);
case BindingType.ExplicitValue:
var action = (byte)ChangeAction.SetValue;
return (x => x.Action == action);
default:
// TODO: Localize errors
throw new InvalidOperationException("Invalid criterion.");
}
} And was surprised to find a compile error: A local variable named 'action' is already defined in this scope It was a pretty easy issue to resolve; just getting rid of the second var did the trick. Evidently variables declared in case blocks have the scope of the parent switch , but I'm curious as to why this is. Given that C# does not allow execution to fall through other cases (it requires break , return , throw , or goto case statements at the end of every case block), it seems quite odd that it would allow variable declarations inside one case to be used or conflict with variables in any other case . In other words, variables appear to fall through case statements even though execution cannot. C# takes great pains to promote readability by prohibiting some constructs of other languages that are confusing or easily abused. But this seems like it's just bound to cause confusion. Consider the following scenarios: If I were to change it to this: case BindingType.Inherited:
var action = (byte)ChangeAction.Inherit;
return (x => x.Action == action);
case BindingType.ExplicitValue:
return (x => x.Action == action); I get " Use of unassigned local variable 'action' ". This is confusing because in every other construct in C# that I can think of var action = ... would initialize the variable, but here it simply declares it. If I were to swap the cases like this: case BindingType.ExplicitValue:
action = (byte)ChangeAction.SetValue;
return (x => x.Action == action);
case BindingType.Inherited:
var action = (byte)ChangeAction.Inherit;
return (x => x.Action == action); I get " Cannot use local variable 'action' before it is declared ". So the order of the case blocks appears to be important here in a way that's not entirely obvious -- Normally I could write these in any order I wish, but because the var must appear in the first block where action is used, I have to tweak case blocks accordingly. If were to change it to this: case BindingType.Inherited:
var action = (byte)ChangeAction.Inherit;
return (x => x.Action == action);
case BindingType.ExplicitValue:
action = (byte)ChangeAction.SetValue;
goto case BindingType.Inherited; Then I get no error, but in a sense, it looks like the variable is being assigned a value before it's declared. (Although I can't think of any time you'd actually want to do this -- I didn't even know goto case existed before today) So my question is, why didn't the designers of C# give case blocks their own local scope? Are there any historical or technical reasons for this? | I think a good reason is that in every other case, the scope of a "normal" local variable is a block delimited by braces ( {} ). The local variables that are not normal appear in a special construct before a statement (which is usually a block), like a for loop variable or a variable declared in using . One more exception is local variables in LINQ query expressions, but those are completely different from normal local variable declarations, so I don't think there is a chance of confusion there. For reference, the rules are in §3.7 Scopes of the C# spec: The scope of a local variable declared in a local-variable-declaration is the block in which the declaration occurs. The scope of a local variable declared in a switch-block of a switch statement is the switch-block . The scope of a local variable declared in a for-initializer of a for statement is the for-initializer , the for-condition , the for-iterator , and the contained statement of the for statement. The scope of a variable declared as part of a foreach-statement , using-statement , lock-statement or query-expression is determined by the expansion of the given construct. (Though I'm not completely sure why the switch block is explicitly mentioned, since it doesn't have any special syntax for local variable declarations, unlike all the other mentioned constructs.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195032",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81884/"
]
} |
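For what it's worth, Java's switch uses the same single-block scope rule as the C# answer above describes, so the design is not unique to C#. A small hypothetical Java sketch (names invented for the example; javac reports an analogous error for the commented-out line):

public class SwitchScopeDemo {
    static String describe(int kind) {
        switch (kind) {
            case 1:
                int label = 10;          // declared once, scoped to the whole switch block
                return "one/" + label;
            case 2:
                // int label = 20;       // error: variable label is already defined
                label = 20;              // assigning to the earlier declaration is allowed
                return "two/" + label;
            default:
                return "other";
        }
    }

    public static void main(String[] args) {
        System.out.println(describe(1));  // one/10
        System.out.println(describe(2));  // two/20
    }
}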
195,081 | There is a new hype with the long awaited lambda expressions in Java 8; every 3 day another article appears with them about how cool they are. As far as I have understood a lambda expression is nothing more than an anonymous inner class with a single method (at least at the byte-code level). Besides this it comes with another nice feature - type inference but I believe the equivalent of this can be achieved with generics on some level (of course not in such a neat way as with lambda expressions). Knowing this, are lambda expressions going to bring something more than just a syntactic sugaring in Java? Can I create more powerful and flexible classes or other object-oriented constructs with lambda expressions that aren't possible to be built with current language features? | tl;dr: while it's mostly syntactic sugar, that nicer syntax makes lots of things practical that used to end in endless, unreadable lines of braces and parentheses. Well, it's actually the other way around as lambdas are much older than Java. Anonymous inner classes with a single method are (were) the closest Java came to lambdas. It's an approximation that was "good enough" for some time, but has a very nasty syntax. On the surface, Java 8 lambdas seem to be not much more than syntactic sugar, but when you look below the surface, you see tons of interesting abstractions. For example the JVM spec treats a lambda quite differently from a "true" object, and while you can handle them as if they where objects, the JVM is not required to implement them as such. But while all that technical trickery is interesting and relevant (since it allows future optimizations in the JVM!), the real benefit is "just" the syntactic sugar part. What's easier to read: myCollection.map(new Mapper<String,String>() {
public String map(String input) {
return new StringBuilder(input).reverse().toString();
}
}); or: myCollection.map(element -> new StringBuilder(element).reverse().toString()); or (using a method handle instead of a lambda): myCollection.map(String::toUpperCase); The fact that you can finally express in a concise way which would previously be 5 lines of code (of which 3 are utterly boring) brings a real change of what is practical (but not of what is possible, granted). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195081",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57792/"
]
} |
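The myCollection.map(...) calls in the lambda answer above are pseudocode; a runnable equivalent using the standard java.util.stream API (my own example, not taken from the answer) looks like this:

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class LambdaDemo {
    public static void main(String[] args) {
        List<String> words = Arrays.asList("stack", "overflow");

        // Java 8 lambda: the five-line anonymous class shrinks to one expression.
        List<String> reversed = words.stream()
                .map(w -> new StringBuilder(w).reverse().toString())
                .collect(Collectors.toList());

        // Method reference instead of a lambda:
        List<String> upper = words.stream()
                .map(String::toUpperCase)
                .collect(Collectors.toList());

        System.out.println(reversed); // [kcats, wolfrevo]
        System.out.println(upper);    // [STACK, OVERFLOW]
    }
}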
195,099 | I couldn't understand the reason for it. I always use the String class like other developers, but when I modify the value of it, a new instance of String is created. What might be the reason for the immutability of the String class in Java? I know there are some alternatives like StringBuffer or StringBuilder. It's just curiosity. | Concurrency Java was defined from the start with considerations of concurrency. As has often been mentioned, shared mutables are problematic. One thing can change another behind the back of another thread without that thread being aware of it. There are a host of multithreaded C++ bugs that have cropped up because of a shared string - where one module thought it was safe to change when another module in the code had saved a pointer to it and expected it to stay the same. The 'solution' to this is that every class makes a defensive copy of the mutable objects that are passed to it. For mutable strings, this is O(n) to make the copy. For immutable strings, making a copy is O(1) because it isn't a copy, it's the same object that can't change. In a multithreaded environment, immutable objects can always be safely shared between each other. This leads to an overall reduction in memory usage and improves memory caching. Security Many times strings are passed around as arguments to constructors - network connections and protocols are the two that most easily come to mind. Being able to change this at an undetermined time later in the execution can lead to security issues (the function thought it was connecting to one machine, but was diverted to another, but everything in the object looks like it connected to the first... it's even the same string). Java lets one use reflection - and the parameters for this are strings. The danger of one passing a string that can get modified on the way to another method that reflects. This is very bad. Keys to the Hash The hash table is one of the most used data structures. The keys to the data structure are very often strings. Having immutable strings means that (as above) the hash table does not need to make a copy of the hash key each time. If strings were mutable, and the hash table didn't make this copy, it would be possible for something to change the hash key at a distance. The way Object works in Java is that everything has a hash key (accessed via the hashCode() method). Having an immutable string means that the hashCode can be cached. Considering how often Strings are used as keys to a hash, this provides a significant performance boost (rather than having to recalculate the hash code each time). Substrings By having the String be immutable, the underlying character array that backs the data structure is also immutable. This allows for certain optimizations on the substring method to be done (they aren't necessarily done - it also introduces the possibility of some memory leaks too). If you do: String foo = "smiles";
String bar = foo.substring(1,5); The value of bar is 'mile'. However, both foo and bar can be backed by the same character array, reducing the instantiation of more character arrays or copying it - just using different start and end points within the string. foo | | (0, 6)
v v
smiles
^ ^
bar | | (1, 5) Now, the downside of that (the memory leak) is that if one had a 1k long string and took the substring of the first and second character, it would also be backed by the 1k long character array. This array would remain in memory even if the original string that had a value of the entire character array was garbage collected. One can see this in String from JDK 6b14 (the following code is from a GPL v2 source and used as an example) public String(char value[], int offset, int count) {
if (offset < 0) {
throw new StringIndexOutOfBoundsException(offset);
}
if (count < 0) {
throw new StringIndexOutOfBoundsException(count);
}
// Note: offset or count might be near -1>>>1.
if (offset > value.length - count) {
throw new StringIndexOutOfBoundsException(offset + count);
}
this.offset = 0;
this.count = count;
this.value = Arrays.copyOfRange(value, offset, offset+count);
}
// Package private constructor which shares value array for speed.
String(int offset, int count, char value[]) {
this.value = value;
this.offset = offset;
this.count = count;
}
public String substring(int beginIndex, int endIndex) {
if (beginIndex < 0) {
throw new StringIndexOutOfBoundsException(beginIndex);
}
if (endIndex > count) {
throw new StringIndexOutOfBoundsException(endIndex);
}
if (beginIndex > endIndex) {
throw new StringIndexOutOfBoundsException(endIndex - beginIndex);
}
return ((beginIndex == 0) && (endIndex == count)) ? this :
new String(offset + beginIndex, endIndex - beginIndex, value);
} Note how the substring uses the package level String constructor that doesn't involve any copying of the array and would be much faster (at the expense of possibly keeping around some large arrays - though not duplicating large arrays either). Do note that the above code is for Java 1.6. The way the substring constructor is implemented was changed with Java 1.7 as documented in Changes to String internal representation made in Java 1.7.0_06 - the issue being that memory leak that I mentioned above. Java likely wasn't seen as being a language with lots of String manipulation and so the performance boost for a substring was a good thing. Now, with huge XML documents stored in strings that are never collected, this becomes an issue... and thus the change to the String not using the same underlying array with a substring, so that the larger character array may be collected more quickly. Don't abuse the Stack One could pass the value of the string around instead of the reference to the immutable string to avoid issues with mutability. However, with large strings, passing this on the stack would be... abusive to the system (putting entire xml documents as strings on the stack and then taking them off or continuing to pass them along...). The possibility of deduplication Granted, this wasn't an initial motivation for why Strings should be immutable, but when one is looking at the rationale of why immutable Strings are a good thing, this is certainly something to consider. Anyone who has worked with Strings a bit knows that they can suck memory. This is especially true when you're doing things like pulling data from databases that sticks around for a while. Many times with these strings, they are the same string over and over again (once for each row). Many large-scale Java applications are currently bottlenecked on memory. Measurements have shown that roughly 25% of the Java heap live data set in these types of applications is consumed by String objects. Further, roughly half of those String objects are duplicates, where duplicates means string1.equals(string2) is true. Having duplicate String objects on the heap is, essentially, just a waste of memory. ... With Java 8 update 20, JEP 192 (motivation quoted above) is being implemented to address this. Without getting into the details of how string deduplication works, it is essential that the Strings themselves are immutable. You can't deduplicate StringBuilders because they can change and you don't want someone changing something from under you. Immutable Strings (related to that String pool) mean that you can go through and, if you find two strings that are the same, you can point one string reference to the other and let the garbage collector consume the newly unused one. Other languages Objective C (which predates Java) has NSString and NSMutableString . C# and .NET made the same design choice of the default string being immutable. Lua strings are also immutable. Python as well. Historically, Lisp, Scheme, Smalltalk all intern the string and thus have it be immutable. More modern dynamic languages often use strings in some way that requires that they be immutable (it may not be a String , but it is immutable). Conclusion These design considerations have been made again and again in a multitude of languages. It is the general consensus that immutable strings, for all of their awkwardness, are better than the alternatives and lead to better code (fewer bugs) and faster executables overall. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195099",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78230/"
]
} |
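The "Keys to the Hash" point in the answer above is easy to demonstrate. A small illustrative Java sketch (the MutableKey class is invented purely for the example):

import java.util.HashMap;
import java.util.Map;

public class HashKeyDemo {
    // A deliberately mutable key type, invented for this illustration.
    static final class MutableKey {
        String value;
        MutableKey(String value) { this.value = value; }
        @Override public int hashCode() { return value.hashCode(); }
        @Override public boolean equals(Object o) {
            return o instanceof MutableKey && ((MutableKey) o).value.equals(value);
        }
    }

    public static void main(String[] args) {
        Map<MutableKey, Integer> map = new HashMap<>();
        MutableKey key = new MutableKey("smiles");
        map.put(key, 42);

        key.value = "frowns";               // mutate the key "at a distance"
        System.out.println(map.get(key));   // null: the entry is stranded under the old hash code

        Map<String, Integer> safe = new HashMap<>();
        safe.put("smiles", 42);
        System.out.println(safe.get("smiles")); // 42: a String key cannot change under the map
    }
}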
195,104 | How can I say in English that a programming language can be compiled to other programming languages? The example can be Haxe . On the website the language is described as "Multiplatform": Multiplatform:
Haxe can be compiled to all popular programming platforms with its
fast compiler – JavaScript, Flash, NekoVM, PHP, C++, C# and Java
(soon) but multiplatform can be misunderstood as hardware or software platforms (win, mac, linux). So what is the most suitable word describing this ability? | The usual term is "transcompilation" - not a common term, but the technically correct one. It's not just for trendy new scripting languages (CoffeeScript -> JavaScript) either; the first C++ compilers were in fact transcompilers to C. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195104",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80316/"
]
} |
195,308 | I'm working on a wishlist system, where users can add items to their various wishlists, and I plan to allow users to re-order the items later on. I am not really sure about the best way to go about storing this in a database while remaining fast and not turning in to a mess (this app will be used by a fairly large user base, so I don't want it to go down to clean up stuff). I initially tried a position column, but it seems like that would be quite inefficient having to change every other item's position value when you move them around. I have seen people using a self-reference to refer to the previous (or next) value, but again, it seems like you would have to update a whole lot of other items in the list. Another solution I've seen is using decimal numbers and just sticking items in the gaps between them, which seems like the best solution so far, but I'm sure there has to be a better way. I would say a typical list would contain up to about 20 or so items, and I will probably limit it to 50. The re-ordering would be using drag and drop and will probably be done in batches to prevent race conditions and such from the ajax requests. I'm using postgres (on heroku) if it matters. Does anyone have any ideas? Cheers for any help! | First, don't try to do anything clever with decimal numbers, because they'll spite you. REAL and DOUBLE PRECISION are inexact and may not properly represent what you put into them. NUMERIC is exact, but the right sequence of moves will run you out of precision and your implementation will break badly. Limiting moves to single ups and downs makes the whole operation very easy. For a list of sequentially-numbered items, you can move an item up by decrementing its position and incrementing the position number of whatever the previous decrement came up with. (In other words, item 5 would become 4 and what was item 4 becomes 5 , effectively a swap as Morons described in his answer.) Moving it down would be the opposite. Index your table by whatever uniquely identifies a list and position and you can do it with two UPDATE s inside a transaction that will run very quickly. Unless your users are rearranging their lists at superhuman speeds, this isn't going to cause much of a load. Drag-and-drop moves (e.g., move item 6 to sit between items 9 and 10 ) are a little trickier and have to be done differently depending on whether the new position is above or below the old one. In the example above, you have to open up a hole by incrementing all positions greater than 9 , updating item 6 's position to be the new 10 and then decrementing the position of everything greater than 6 to fill in the vacated spot. With the same indexing I described before, this will be quick. You can actually make this go a bit faster than I described by minimizing the number of rows the transaction touches, but that's a microoptimization you don't need until you can prove there's a bottleneck. Either way, trying to outdo the database with a home-brewed, too-clever-by-half solution doesn't usually lead to success. Databases worth their salt have been carefully written to do these operations very, very quickly by people who are very, very good at it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195308",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/69271/"
]
} |
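A rough JDBC sketch of the single up/down swap described in the answer above. The table and column names (wishlist_item, list_id, position) are assumptions for the example, and it uses a temporary position in a third UPDATE so the same transaction also works when (list_id, position) carries a unique constraint:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class WishlistReorder {
    // Moves the item at 'position' up by one inside the given list by swapping
    // it with its neighbour, all inside one transaction.
    static void moveUp(Connection conn, long listId, int position) throws SQLException {
        if (position <= 1) return;                    // already at the top
        boolean oldAutoCommit = conn.getAutoCommit();
        conn.setAutoCommit(false);
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE wishlist_item SET position = ? WHERE list_id = ? AND position = ?")) {
            // Park the moving row on a temporary slot to avoid a unique-constraint clash.
            ps.setInt(1, -1); ps.setLong(2, listId); ps.setInt(3, position); ps.executeUpdate();
            // Shift the neighbour down into the vacated slot.
            ps.setInt(1, position); ps.setLong(2, listId); ps.setInt(3, position - 1); ps.executeUpdate();
            // Land the moving row in its new slot.
            ps.setInt(1, position - 1); ps.setLong(2, listId); ps.setInt(3, -1); ps.executeUpdate();
            conn.commit();
        } catch (SQLException e) {
            conn.rollback();
            throw e;
        } finally {
            conn.setAutoCommit(oldAutoCommit);
        }
    }
}

With an index on (list_id, position), each UPDATE touches a single row, which is the "very, very fast" case the answer relies on.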
195,386 | Originally, this is a part of another question. Why is sizeof called a compile-time operator? Isn't it actually a run-time operator? And if it is indeed a compile-time operator, how does it help in producing portable code which runs the same in different computers ? Please explain in detail. | sizeof() gives you the size of the data type , not the size of a particular instance of that type in memory. For example, if you had a string data object that allocated a variable size character array at runtime, sizeof() could not be used to determine the size of that character array. It would only give you the size of the pointer. The size of a data type is always known at compile time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195386",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/87293/"
]
} |
195,470 | Let's say we have an abstract class, and say this class has only abstract methods. Is this abstract class different from an interface that has only the same methods? What I am looking to know is whether there are any differences, both philosophically, objectively and in the underlying programming language implementation, between an Abstract Class with only abstract members and an equivalent Interface? | Technically, the differences aren't really significant but, conceptually, they are entirely different things and that leads to the technical differences others have mentioned. An abstract superclass is exactly what it sounds like, it's a common type that is shared by many other types, like Cats and Dogs are Animals. An interface is also exactly what it sounds like, it's an interface through which other classes can communicate with the object. If you want to make a Cat Walk, you're ok, cause Cat implements a CanWalk interface. Same for a Lizard, though they walk very differently. A Snake, on the other hand, does not implement CanWalk, so you can't tell it to Walk. Meanwhile, Lizard and Snake (or possibly more explicit subclasses -- I'm not an expert) might both shed their skin, and thus implement CanShed, while a Cat couldn't do that. But they're all still Animals and have some common properties, like whether they're alive or dead. This is why all methods on an interface must be implemented as public (or explicitly, in C#). Cause what's the point in an interface that's hidden from the class interfacing with the object? It's also why you can have multiple interfaces to an object, even when a language doesn't support multiple inheritance. To go back to your question, when you look at it this way, there's very rarely a reason to have an entirely abstract superclass. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195470",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78230/"
]
} |
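The Cats/Dogs/CanWalk discussion in the answer above maps directly onto code. A hypothetical Java sketch of the same idea (all class names are illustrative):

// Shared identity and state live in the abstract superclass ...
abstract class Animal {
    private boolean alive = true;
    boolean isAlive() { return alive; }
}

// ... while interfaces only describe ways to talk to an object.
interface CanWalk { void walk(); }
interface CanShed { void shedSkin(); }

class Cat extends Animal implements CanWalk {
    public void walk() { System.out.println("pad, pad"); }
}

class Lizard extends Animal implements CanWalk, CanShed {
    public void walk() { System.out.println("scurry"); }
    public void shedSkin() { System.out.println("lizard sheds"); }
}

class Snake extends Animal implements CanShed {
    public void shedSkin() { System.out.println("snake sheds"); }
    // No walk(): you simply cannot ask a Snake to walk.
}

public class Menagerie {
    public static void main(String[] args) {
        CanWalk[] walkers = { new Cat(), new Lizard() };  // adding a Snake here would not compile
        for (CanWalk w : walkers) w.walk();
        System.out.println(new Snake().isAlive());        // shared Animal behaviour
    }
}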
195,541 | In a recent interview I asked the interviewers "how do you go about evaluating new technologies and libraries (such as SignalR) and bringing them in to use?". They said they don't, that instead they write everything themselves so they don't have to rely on anyone else. The firm doesn't work for the government or defence contractors or on any safety-critical projects or anything like that. They were just your average, medium-size software development firm. My question is: how common is it for teams to write everything themselves? Should I be concerned by teams that do? Edit -- Most every response has said this is something to be concerned by. Would a second interview be an appropriate time to ask them to clarify/repeat their position on writing everything in house? | An attitude of never using third-party libraries is preposterous. Writing everything yourself is a horrible use of your company's time, unless there is a strict business requirement that every line in the codebase was written by an employee of the company -- but that is an unusual scenario, especially for a private-sector firm like you've described. A more rational and thorough answer may have been that they would only use third-party libraries that: Meet the needs of the code they would otherwise write themselves Were available under a license compatible with the company's business model Included tests Passed a code review If those criteria were met (and in my experience the code review is very flexible especially in the presence of good tests), you're no longer "relying on anyone else" -- you're relying on existing, available, and preferably robust code. If the code is open source, then in the worst case, the third-party library becomes unmaintained. But who cares? The tests prove that the library is suited for your needs! Moreover, an aversion to established third-party libraries seriously hinders programer productivity. Let's say the company was writing web applications and refused to use (e.g.) jQuery, so instead wrote their own alternative cross-browser library for simplifying DOM manipulation. With near-certainty we can assume that their implementation: Will have an API foreign to developers already familiar to jQuery Will not be as well-documented as jQuery Will not have relevant Google results when encountering problems using the library Will not be as field-tested as jQuery All of those points are major barriers to programmer productivity. How can a business afford to give up productivity like that? You've updated your question to ask whether this is appropriate to bring up in a second interview. It absolutely is. Maybe you misinterpreted your interviewer's answer in the first interview, or maybe the interviewer just incorrectly explained the company's position and a new interviewer can clarify it. If you explain that you're concerned about their stance on external libraries, there are at least two possible outcomes: They're open to change, and your concern about their process makes you look better than some other candidates. They are not open to change, and they think of you as "the kind of developer we wouldn't want to hire." Doesn't matter, that's not the kind of place you want to work anyways. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195541",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31673/"
]
} |
195,571 | I am talking about 20-30+ millions lines of code, software at the scale and complexity of Autodesk Maya for example. If you freeze the development as long as it needs to be, can you actually fix all the bugs until there is simply not a single bug, if such a thing could be verified by computers? What are the arguments for and against the existence of a bug-free system? Because there is some notion that every fix you make creates more bugs, but I don't think that's true. By bugs I meant from the simplest typos in the UI, to more serious preventative bugs that has no workaround. For example a particular scripting function calculates normals incorrectly. Also even when there are workarounds, the problem still has to be fixed. So you could say you can do this particular thing manually instead of using the provided function but that function still has to be fixed. | As Mikey mentioned, writing bugless code is not the goal. If that is what you are aiming for, then I have some very bad news for you. The key point is that you are vastly underestimating the complexity of software. First things first--You're ignoring the bigger picture of how your program runs. It does not run in isolation on a perfect system. Even the most basic of "Hello World" programs runs on an operating system, and therefore, even the most simple of programs is susceptible to bugs that may exist in the operating system. The existence of libraries makes this more complex. While operating systems tend to be fairly stable, libraries are a mixed bag when it comes to stability. Some are wonderful. Others ... not so much ... If you want your code to be 100% bug free, then you will need to also ensure that every library you run against is completely bug free, and many times this simply isn't possible as you may not have the source code. Then there are threads to think about. Most large scale programs use threads all over the place. We try to be careful and write threads in such a way where race conditions and deadlock do not occur, but it simply is not possible to test every possible combination of code. In order to test this effectively, you would need to examine every possible ordering of commands going through the CPU. I have not done the math on this one, but I suspect that enumerating all of the possible games of Chess would be easier. Things go from hard to impossible when we look at the machine itself. CPU's are not perfect. RAM is not perfect. Hard drives are not perfect. None of the components within a machine are designed to be perfect--they're designed to be "good enough". Even a perfect program will eventually fail due to a hiccup by the machine. There's nothing you can do to stop it. Bottom line: Can you write "Bug free software"? NO Anyone who tells you otherwise is clueless. Just try to write software that is easy to understand and maintain. Once you've done that, you can call it a day. EDIT: Some people commented about an excellent point that I had completely overlooked: the compiler. Unless you are writing in assembly, it is entirely possible that the compiler will mess up your code (even if you prove that your code is "perfect"). A list of bugs in GCC, one of the more commonly used compilers: http://gcc.gnu.org/bugzilla/buglist.cgi?product=gcc&component=c%2B%2B&resolution=--- | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195571",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17728/"
]
} |
195,639 | I need some opinions. GCC has always been a very good compiler, but recently it is losing "appeal".
I have just found that on Windows GCC does not have std::thread support, forcing Windows users to use another compiler because the most exciting feature is still missing. But why does GCC still not have thread support under Windows? License problems? ABI incompatibilities? (Well, there are already several cross-platform libraries using multithreading: boost, POCO, SDL, wxwidgets, etc. Wouldn't it be simple to use already existing, MIT/libpng licensed, code to fill this hole instead of shipping GCC releases with no thread support?) Recently, looking at compiler comparisons, GCC has the widest support for C++11 features with respect to other compilers, except for the fact that on Windows this is not true because we are still lacking atomics, mutexes and threads :/ I'd like to know more about this topic, but the only thing I can find is people asking for help because: "thread" does not exist in std namespace Looking at the ticket tracking and mailing list discussions of GCC/TDM-GCC, there have been requests for thread support since 2009. Is it possible that after 4 years there is still no solution? What's really happening? | I understood that GCC is falling out of favour because the people maintaining it have become somewhat arrogant, and now that LLVM is here (and is very good) people are voting with their feet. Slashdot had a discussion about LLVM's new support for C++11 . _merlin says: Oh I don't think anyone thinks it's evil, just that it's pure
self-interest rather than generosity. GCC's phenomenal popularity has
led to its maintainers growing massive egos and behaving like total
[ * *** ]. Bugs are introduced faster than Red Hat and Apple can get
patches for them accepted, and they have a nasty habit of not looking
at bug reports, then closing them due to inactivity without actually
fixing them which chimes in with your observation about the 4-year delay. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195639",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80926/"
]
} |
195,642 | In many languages (a wide list, from C to JavaScript): commas , separate arguments (e.g. func(a, b, c) ), while semicolons ; separate sequential instructions (e.g. instruction1; instruction2; instruction3 ). So why is this mapping reversed in the same languages for for loops : for ( init1, init2; condition; inc1, inc2 )
{
instruction1;
instruction2;
} instead of (what seems more natural to me) for ( init1; init2, condition, inc1; inc2 )
{
instruction1;
instruction2;
} ? Sure, for is (usually) not a function, but arguments (i.e. init , condition , increment ) behave more like arguments of a function than a sequence of instructions. Is it due to historical reasons / a convention, or is there a good rationale for the interchange of , and ; in loops? | We write loops like: for(x = 0; x < 10; x++) The language could have been defined so that loops looked like: for(x = 0, x < 10, x++) However, think of the same loop implemented using a while loop: x = 0;
while(x < 10)
{
x++;
} Notice that the x=0 and x++ are statements, ended by semicolons. They aren't expressions like you would have in a function call. Semicolons are used to separate statements, and since two of the three elements in a for loop are statements, that's what is used there. A for loop is just a shortcut for such a while loop. Additionally, the arguments don't really act like arguments to a function. The second and third are repeatedly evaluated. It's true they aren't a sequence, but they also aren't function arguments. Also, the fact that you can use commas to have multiple statements in the for loop is actually something you can do outside the for loop. x = 0, y= 3; is a perfectly valid statement even outside of a for loop. I don't know of any practical use outside the for loop though. But the point is that commas always subdivide statements; it's not a special feature of the for loop. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195642",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/72479/"
]
} |
195,660 | I started logging failed login attempts on my website with a message like Failed login attempt by qntmfred I've noticed some of these logs look like Failed login attempt by qntmfredmypassword I'm guessing some people had a failed login because they typed their username and their password in the username field. Passwords are hashed in the database, but if somehow the db got compromised, these log messages could be a way for an attacker to figure out passwords for whatever small percentage of people end up having a failed login such as this. Is there a better way to handle this? Should I even worry about this possibility? | Try it like this: If the username exists, log "failed login attempt by username ". If not, log "failed login attempt by IP 123.45.67.89 " instead. That should take care of the problem of having passwords show up in the log accidentally. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195660",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3888/"
]
} |
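A minimal sketch of the logic suggested in the answer above (the method, logger and lookup names are invented for the example; the real user lookup would be whatever the site already has):

import java.util.Set;
import java.util.logging.Logger;

public class LoginAudit {
    private static final Logger LOG = Logger.getLogger(LoginAudit.class.getName());

    static void logFailedLogin(String submittedUsername, String clientIp, Set<String> knownUsernames) {
        if (knownUsernames.contains(submittedUsername)) {
            // Safe: this is a real username, not a password typed into the wrong field.
            LOG.warning("Failed login attempt by " + submittedUsername);
        } else {
            // The submitted value might contain a password, so never write it out.
            LOG.warning("Failed login attempt by IP " + clientIp);
        }
    }
}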
195,708 | Why does Haskell have a built-in if/then/else , which is dependent on the Bool type, instead of having a simple library function? Such as if :: Bool -> a -> a -> a
if True x _ = x
if False _ y = y | It's purely for the nice sugar of the if , then , and else keywords; in fact, GHC (with the RebindableSyntax extension enabled) will desugar the syntax by simply calling whatever ifThenElse function is in scope. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195708",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61231/"
]
} |
195,793 | I am taking Martin Odersky's coursera course on functional programming with Scala, and for now I have learned two things that together don't make sense: Scala doesn't support multiple inheritance Nothing is a subtype of every other type These two statements cannot live together, so how exactly is this done? And what exactly is the meaning of "subtype of every other type"? Edit 1 In the Scala API , Nothing is defined as abstract final class Nothing extends Any ... so how can it extend other classes? | Subtyping and inheritance are two different things! Nothing doesn't extend everything, it's a subtype , it only extends Any . The specification [§3.5.2] has a special case governing the subtyping-relationship of Nothing : §3.5.2 Conformance [...] For every value type T , scala.Nothing <: T <: scala.Any For every type constructor T (with any number of type parameters) scala.Nothing <: T <: scala.Any [...] Where <: basically means "is a subtype of". As for how this is done: We don't know, it's compiler magic and an implementation detail. Quite often a language does things you as a programmer can't. As a counterpart to Nothing : Everything in Scala inherits from Any , everything except Any . Why doesn't Any inherit from something? You can't do that. Why can Scala do that? Well, because Scala sets the rules, not you. Nothing being a subtype of everything is just another instance of this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195793",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35897/"
]
} |
195,813 | I have this idea running around in my head, to generate and evaluate random mathematical expressions. So, I decided to give it a shot and elaborate an algorithm, before coding it to test it. Example: Here are some example expressions I want to generate randomly: 4 + 2 [easy]
3 * 6 - 7 + 2 [medium]
6 * 2 + (5 - 3) * 3 - 8 [hard]
(3 + 4) + 7 * 2 - 1 - 9 [hard]
5 - 2 + 4 * (8 - (5 + 1)) + 9 [harder]
(8 - 1 + 3) * 6 - ((3 + 7) * 2) [harder] The easy and medium ones are pretty straight-forward. Random int s separated by random operators, nothing crazy here. But I'm having some trouble getting started with something that could create one of the hard and harder examples. I'm not even sure a single algorithm could give me the last two. What I am considering: I can't say I tried those ideas, because I didn't really want to waste much time going in a direction that had no chance of working in the first place. But still, I thought of a couple solutions: Using trees Using regular expressions Using a crazy "for-type" loop (surely the worst) What I'm looking for: I'd like to know which way you believe is the best to go, between the solutions I considered, and your own ideas. If you see a good way to start, I'd appreciate a lead in the right direction, e.g. with the beginning of the algorithm, or a general structure of it. Also note that I will have to evaluate those expressions. This can be done either after the expression is generated, or during its creation. If you take that in consideration in your answer, that's great. I'm not looking for anything language-related, but for the record, I'm thinking of implementing it in Objective-C, as that's the language I'm most working with recently. Those examples did not include the : operator, as I only want to manipulate int s, and this operator adds many verifications. If your answer gives a solution handling this one, that's great. If my question needs any clarification, please ask in the comments. Thanks for your help. | Here's a theoretic interpretation of your problem. You are looking to randomly generate words (algebraic expression) from a given language (the infinite set of all syntactically correct algebraic expressions). Here's a formal description of a simplified algebraic grammar supporting only addition and multiplication: E -> I
E -> (E '+' E)
E -> (E '*' E) Here, E is an expression (i.e., a word of your language) and I is a terminal symbol (i.e., it's not expanded any further) representing an integer. The above definition for E has three production rules . Based on this definition, we can randomly build a valid arithmetic expression as follows: (1) Start with E as the single symbol of the output word. (2) Choose uniformly at random one of the non-terminal symbols. (3) Choose uniformly at random one of the production rules for that symbol, and apply it. (4) Repeat steps 2 and 3 until only terminal symbols are left. (5) Replace all terminal symbols I by random integers. Here's an example of the application of this algorithm: E
(E + E)
(E + (E * E))
(E + (I * E))
((E + E) + (I * E))
((I + E) + (I * E))
((I + E) + (I * I))
((I + (E * E)) + (I * I))
((I + (E * I)) + (I * I))
((I + (I * I)) + (I * I))
((2 + (5 * 1)) + (7 * 4)) I assume you would choose to represent an expression with an interface Expression which is implemented by classes IntExpression , AddExpression and MultiplyExpression . The latter two then would have a leftExpression and rightExpression . All Expression subclasses are required to implement an evaluate method, which works recursively on the tree structure defined by these objects and effectively implements the composite pattern . Note that for the above grammar and algorithm, the probability of expanding an expression E into a terminal symbol I is only p = 1/3 , while the probability to expand an expression into two further expressions is 1-p = 2/3 . Therefore, the expected number of integers in a formula produced by the above algorithm is actually infinite. The expected length of an expression is subject to the recurrence relation l(0) = 1
l(n) = p * l(n-1) + (1-p) * (l(n-1) + 1)
= l(n-1) + (1-p) where l(n) denotes the expected length of the arithmetic expression after n applications of production rules. I therefore suggest that you assign a rather high probability p to the rule E -> I such that you end up with a fairly small expression with high probability. EDIT : If you're worried that the above grammar produces too many parenthesis, look at Sebastian Negraszus' answer , whose grammar avoids this problem very elegantly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195813",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71491/"
]
} |
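A compact Java sketch of the answer above: the grammar-driven generator plus the composite Expression classes it mentions. All names are illustrative (the question asked about Objective-C), and the termination probability p is the knob the answer suggests keeping high so expressions stay small:

import java.util.Random;

public class RandomExpressions {
    interface Expression { int evaluate(); }

    static final class IntExpression implements Expression {
        final int value;
        IntExpression(int value) { this.value = value; }
        public int evaluate() { return value; }
        public String toString() { return Integer.toString(value); }
    }

    static final class AddExpression implements Expression {
        final Expression left, right;
        AddExpression(Expression l, Expression r) { left = l; right = r; }
        public int evaluate() { return left.evaluate() + right.evaluate(); }
        public String toString() { return "(" + left + " + " + right + ")"; }
    }

    static final class MultiplyExpression implements Expression {
        final Expression left, right;
        MultiplyExpression(Expression l, Expression r) { left = l; right = r; }
        public int evaluate() { return left.evaluate() * right.evaluate(); }
        public String toString() { return "(" + left + " * " + right + ")"; }
    }

    // Expand the non-terminal E: with probability p produce an integer (rule E -> I),
    // otherwise split into two sub-expressions joined by + or *.
    static Expression generate(Random rnd, double p) {
        if (rnd.nextDouble() < p) {
            return new IntExpression(1 + rnd.nextInt(9));
        }
        Expression left = generate(rnd, p);
        Expression right = generate(rnd, p);
        return rnd.nextBoolean() ? new AddExpression(left, right)
                                 : new MultiplyExpression(left, right);
    }

    public static void main(String[] args) {
        Random rnd = new Random();
        Expression e = generate(rnd, 0.7);   // a high p keeps the expected size finite and small
        System.out.println(e + " = " + e.evaluate());
    }
}

Evaluation happens for free through the composite evaluate() calls, and toString() gives the parenthesized text of the expression, which answers the question's "evaluate during or after creation" concern.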
195,825 | I am reading the book programming in Lua . It said that Closures provide a valuable tool in many contexts. As we have seen, they are
useful as arguments to higher-order functions such as sort. Closures are valuable for functions that build other functions too, like our newCounter example;
this mechanism allows Lua programs to incorporate sophisticated programming
techniques from the functional world. Closures are useful for callback functions,
too. A typical example here occurs when you create buttons in a conventional
GUI toolkit. Each button has a callback function to be called when the user
presses the button; you want different buttons to do slightly different things
when pressed. For instance, a digital calculator needs ten similar buttons, one
for each digit. You can create each of them with a function like this: function digitButton (digit)
return Button{label = tostring(digit),
action = function ()
add_to_display(digit)
end}
end It seems that if I call digitButton , it will return the action (this will create a closure), so I can access the digit passed to digitButton . My question is: Why do we need callback functions? What situations can I apply this to? The author said: In this example, we assume that Button is a toolkit function that creates new
buttons; label is the button label; and action is the callback closure to be
called when the button is pressed. The callback can be called a long time after
digitButton did its task and after the local variable digit went out of scope, but
it can still access this variable. According to the author, I think a similar example is like this: function Button(t)
-- maybe you should set the button here
return t.action -- so that you can call this later
end
function add_to_display(digit)
print ("Display the button label: " .. tostring(digit))
end
function digitButton(digit)
return Button{label = tostring(digit),
action = function ()
add_to_display(digit)
end}
end
click_action = digitButton(10)
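-- even though digitButton has already returned, the closure still holds digit (10),
-- so the call below prints "Display the button label: 10"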
click_action() thus, the callback can be called a long time after digitButton did its task and after the local variable digit went out of scope. | Guy 1 to Guy 2: hey dude I wanna do something when a user clicks in there, call me back when that happens alright? Guy 2 calls back Guy 1 when a user clicks here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195825",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85632/"
]
} |
195,989 | Lately I've been trying to split long methods into several short ones. For example: I have a process_url() function which splits URLs into components and then assigns them to some objects via their methods. Instead of implementing all this in one function, I only prepare the URL for splitting in process_url() , and then pass it over to process_components() function, which then passes the components to assign_components() function. At first, this seemed to improve readability, because instead of huge 'God' methods and functions I had smaller ones with more descriptive names.
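Roughly, the split looks something like this (the bodies are simplified placeholders, not my real code):
def process_url(url):
    # prepare the URL so it is ready to be split
    prepared = url.strip()
    process_components(prepared)
def process_components(prepared_url):
    # split the prepared URL and pass the pieces on
    components = prepared_url.split('/')
    assign_components(components)
def assign_components(components):
    # assign each component to whatever object needs it
    for component in components:
        print(component)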
However, looking through some code I've written that way, I've found that I now have no idea whether these smaller functions are called by any other functions or methods. Continuing previous example: someone looking at the code might think that process_components() functionality is abstracted into a function because it's called by various methods and functions, when in fact it's only called by process_url() . This seems somewhat wrong. The alternative is to still write long methods and functions, but indicate their sections with comments. Is the function-splitting technique I described wrong? What is the preferred way of managing large functions and methods? UPDATE: My main concern is that abstracting code into a function might imply that it could be called by multiple other functions. SEE ALSO: discussions on reddit at /r/programming (provides a different perspective rather than most of the answers here) and /r/readablecode . | Testing code that does lots of things is difficult. Debugging code that does lots of things is difficult. The solution to both of these problems is to write code that doesn't do lots of things. Write each function so that it does one thing and only one thing. This makes them easy to test with a unit test (one doesn't need umpteen dozen unit tests). A co-worker of mine has the phrase he uses when judging if a given method needs to be broken up into smaller ones: If, when describing the activity of the code to another programmer you use the word 'and', the method needs to be split into at least one more part. You wrote: I have a process_url() function which splits URLs into components and then assigns them to some objects via their methods. This should be at least two methods. It is ok to wrap them in one publicly facing method, but the workings should be two different methods. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/195989",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89095/"
]
} |
196,022 | I am in a position where I have been asked to review some code that fixes a problem that I don't believe exists. The fixer, who is more senior than me, insists his fix is necessary but it appears to be no more than C++ sophistry to me. Part of our deployment process is a code review, and as the 2nd highest engineer in a small company I am expected to review changes. I believe that reviewers are just as responsible for code changes as the original coder, and I am unwilling to accept responsibility for this change. How would you go about rejecting this review? | Ask for a test case that fails without the change that succeeds with the change. If he can't produce one, you use that as justification. If he can produce one then you need to explain why the test is invalid. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196022",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36039/"
]
} |
196,038 | I have started a project which will duplicate Dropbox or Google Drive behavior but using Amazon S3 as a backend. The idea is very simple: a Node.js server that watches a directory for file changes and PUTs them to S3. Or it will look at S3 for changes and apply them to the file system structure. I've uploaded a very early version of my app to GitHub. You can find it here . Because I am a web developer, I am using web technologies to solve the problem. I'm afraid of my limited mindset and picking the wrong tools for the job. There are other solutions to this problem. One is S3FS which is a FUSE file system for Unix systems. In my opinion that is very hard to use and limited to one platform. My solution uses Node.js to overcome cross-platform issues. I can pack my Node.js app with App.js and make it easy-to-use software. To clarify, my questions are: Is HTTP/HTTPS good enough for file transfer? Is Node.js good enough for working with the file system? Scalability: can this approach fail with large file sizes? | Ask for a test case that fails without the change that succeeds with the change. If he can't produce one, you use that as justification. If he can produce one then you need to explain why the test is invalid. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196038",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24256/"
]
} |
196,043 | During the current (2013) Google Code Jam contest, there was a problem that took C++ and Java people 200+ lines of code as compared to Python people that solved the same problem only using 40 lines of code. Python is not directly comparable with C++ and Java but the difference in verbosity I thought might perhaps have an influence on the efficiency of the algorithm. How important is knowing the right algorithm compared to the choice of language? Could an excellently implemented Python program be implemented in C++ or Java in a better way (using the same algorithm) and does this have any relation to the natural verbosity of certain programming languages? | Obviously, if you consider this question in the context of something like Google Code Jam, then algorithmic thinking is clearly more important when having to solve algorithmic problems. In everyday life, however, about a million other factors have to be considered as well, which makes the question much less black vs white. Just a counter-example: If you need 200 more lines in Java, but everyone in your company knows Java, this isn't a big deal. If you could write it in 5 lines of Python or any other language, but you would be the only one in the company to know that language - it is a big deal. Such big a deal in fact, that you will not even be allowed to do so and instead have to write it in Java. From a craftman's perspective, we always try to approach with the right tool for the job, but the word right in there is so tricky that one can easily get it wrong. On the contrary, I found algorithmic thinking in companies to be almost absent. Only few select people possess it, whereas the average joe often already has troubles estimating runtime complexities of loops, searches, etc. In terms of algorithmic competitions, however, my personal experience from competing in them for several years, clearly tells me that you should stick to one language. Speed is a major factor and you simply cannot afford to waste time on your tools, when you should dedicate it to solving the problems within the time limit. Also consider that writing 200 lines of Java code without thinking is still much faster than hand-crafting 50 lines of complicated python code requiring a lot of thinking, yet both solving more or less the same problem. Oh and finally, make sure you understand the major differences between algorithmic competition code and company production code. I have seen fantastic algorithmic coders, that wrote horrible code I would not ever accept in a product. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196043",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89138/"
]
} |
196,046 | Dynamic typing newbie here, hoping for some wizened words of wisdom. I'm curious if there is a set of best practices out there for dealing with function arguments (and let's be honest, variables in general) in dynamically typed languages such as Javascript. The issue I often run into is with regards to readability of code: I'm looking at a function I wrote a while ago and I have no clue what the structure of the argument variables actually is. It's usually ok at the moment of development of new code: everything's fresh in my head, every variable and parameter makes sense because I just wrote them. A week later? Not so much. For example, say I'm trying to crunch a bunch of data about user sessions on a website and get something useful out of it: var crunchSomeSessionData = function(sessionsMap, options) {
[...]
} Disregarding the fact that the function name isn't helpful - that obviously is a huge deal - I actually don't know anything at all about what the structure of sessionsMap or options is. Ok.. I have k/v pairs the sessionsMap object, since it's called Map, but are the value a primitive, an array, another hash of stuff? What is options? An array? A whitespace separated string? I have a few options: clarify the structure exactly in the comment header for the function. The problem is that now I have to maintain the code in two places. have as useful of a name as possible. e.g. userIdToArrayOfTimestampsMap or even have some kind of pseudo-Hungarian dialect for variable naming that only I speak that explains what the types are and how they're nested. This leads to really verbose code, and I'm a fan of keeping stuff under 80 col. break functions down until I'm only ever passing around primitives or collections of primitives. I imagine it might work, but then I'd likely end up with micro-functions that have one or two lines at most, functions that exist only for the purpose of readability. Now I have to jump all over the file and recompose the function in my head, which just made readability worse. some languages offer destructuring, which to some extent can almost be thought of as extra documentation for what the argument type is going to contain. could create a "class" for the specific type of object, even though it'd not make a huge difference in a prototypal language like JS, and would probably add more maintenance overhead than necessary. Alternatively, if available, one can try to use protocols, maybe something along the lines of Clojure's deftype/defrecord etc. In the statically typed world this is not nearly as much of an issue. In C# for example you get a: public void DoStuff(Dictionary<string, string> foo) {[...]}; Ok, easy peasy, I know exactly what I'm getting, no need to read the function header, or go back to the caller and figure out what it's concocting etc. What's the solution here? Are all people developing in dynamically typed languages continuously boggled by what types their subroutines are getting? Are there mitigation strategies? | Obviously, if you consider this question in the context of something like Google Code Jam, then algorithmic thinking is clearly more important when having to solve algorithmic problems. In everyday life, however, about a million other factors have to be considered as well, which makes the question much less black vs white. Just a counter-example: If you need 200 more lines in Java, but everyone in your company knows Java, this isn't a big deal. If you could write it in 5 lines of Python or any other language, but you would be the only one in the company to know that language - it is a big deal. Such big a deal in fact, that you will not even be allowed to do so and instead have to write it in Java. From a craftman's perspective, we always try to approach with the right tool for the job, but the word right in there is so tricky that one can easily get it wrong. On the contrary, I found algorithmic thinking in companies to be almost absent. Only few select people possess it, whereas the average joe often already has troubles estimating runtime complexities of loops, searches, etc. In terms of algorithmic competitions, however, my personal experience from competing in them for several years, clearly tells me that you should stick to one language. 
Speed is a major factor and you simply cannot afford to waste time on your tools, when you should dedicate it to solving the problems within the time limit. Also consider that writing 200 lines of Java code without thinking is still much faster than hand-crafting 50 lines of complicated python code requiring a lot of thinking, yet both solving more or less the same problem. Oh and finally, make sure you understand the major differences between algorithmic competition code and company production code. I have seen fantastic algorithmic coders, that wrote horrible code I would not ever accept in a product. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196046",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25883/"
]
} |
196,105 | Reading the comments to this answer , specifically: Just because you can't write a test doesn't mean it's not broken. Undefined behaviour which usually happens to work as expected (C and C++ are full of that), race conditions, potential reordering due to a weak memory model... – CodesInChaos 7 hours ago @CodesInChaos if it cant be reproduced then the code written to 'fix' cant be tested either. And putting untested code into live is a worse crime in my opinion – RhysW 5 hours ago ...has me wondering if there are any good general ways to consistently trigger very infrequently occurring in production problems caused by race conditions in test case. | After having been in this crazy business since about 1978, having spent almost all of that time in embedded real-time computing, working multitasking, multithreaded, multi-whatever systems, sometimes with multiple physical processors, having chased more than my fair share of race conditions, my considered opinion is that the answer to your question is quite simple. No. There's no good general way to trigger a race condition in testing. Your ONLY hope is to design them completely out of your system. When and if you find that someone else has stuffed one in, you should stake him out an anthill, and then redesign to eliminate it. After you have designed his faux pas (pronounced f***up) out of your system, you can go release him from the ants. (If the ants have already consumed him, leaving only bones, put up a sign saying "This is what happens to people who put race conditions into XYZ project!" and LEAVE HIM THERE.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196105",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/16898/"
]
} |
196,125 | Let's say I have a Car class: public class Car
{
public string Engine { get; set; }
public string Seat { get; set; }
public string Tires { get; set; }
} Let's say we're making a system for a parking lot. I'm going to use the Car class a lot, so we make a CarCollection class; it may have a few additional methods like FindCarByModel : public class CarCollection
{
public List<Car> Cars { get; set; }
public Car FindCarByModel(string model)
{
// code here
return new Car();
}
} If I'm making a class ParkingLot , what's the best practice? Option #1: public class ParkingLot
{
public List<Car> Cars { get; set; }
//some other properties
} Option #2: public class ParkingLot
{
public CarCollection Cars { get; set; }
//some other properties
} Is it even a good practice to create a ClassCollection of another Class ? | Prior to generics in .NET, it was common practice to create 'typed' collections so you would have class CarCollection etc for every type you needed to group. In .NET 2.0 with the introduction of Generics, a new class List<T> was introduced which saves you having to create CarCollection etc as you can create List<Car> . Most of the time, you will find that List<T> is sufficient for your purposes, however there may be times that you want to have specific behaviour in your collection, if you believe this to be the case, you have a couple of options: Create a class which encapsulates List<T> for example public class CarCollection { private List<Car> cars = new List<Car>(); public void Add(Car car) { this.cars.Add(car); }} Create a custom collection public class CarCollection : CollectionBase<Car> {} If you go for the encapsulation approach, you should at minimum expose the enumerator so you would declare it as follows: public class CarCollection : IEnumerable<Car>
{
private List<Car> cars = new List<Car>();
public IEnumerator<Car> GetEnumerator() { return this.cars.GetEnumerator(); }
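// note: IEnumerable<Car> also requires the non-generic GetEnumerator, so the sample
// only compiles with this extra member (added here for completeness)
System.Collections.IEnumerator System.Collections.IEnumerable.GetEnumerator() { return this.GetEnumerator(); }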
} Without doing that, you can't do a foreach over the collection. Some reasons you might want to create a custom collection are: You don't want to fully expose all the methods in IList<T> or ICollection<T> You want to perform additional actions upon adding or removing an item from the collection Is it good practice? well that depends on why you are doing it, if it is for example one of the reasons I have listed above then yes. Microsoft do it quite regularly, here are some fairly recent examples: System.Web.Http.Filters.HttpFilterCollection System.Web.Mvc.ModelBinderProviderCollection As for your FindBy methods, I would be tempted to put them in extension methods so that they can be used against any collection containing cars: public static class CarLookupQueries
{
public static Car FindByLicencePlate(this IEnumerable<Car> source, string licencePlate)
{
return source.SingleOrDefault(c => c.LicencePlate == licencePlate);
}
...
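// (illustrative addition, not from the original answer) the question's FindCarByModel
// fits the same pattern, assuming Car exposes a Model property:
public static Car FindByModel(this IEnumerable<Car> source, string model)
{
    return source.SingleOrDefault(c => c.Model == model);
}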
} This separates the concern of querying the collection from the class which stores the cars. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196125",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40390/"
]
} |
196,236 | I don't hate using assembly language, since I have written some in my os course. But obviously, assembly language lacks abstraction, you have to pay more attention to the details. Is assembly language really essential to write TAOCP? | He not only uses MIXAL, his assembly language for MIX, but also MIX, a model for a simple computer (like one which was used in the sixties). This is a model for teaching with which he is, to some extent, independent of development in the field. If he'd used another programming language (which one, by the way, would you think would have been suited?), say NPL (nifty programming language), he would have had to either abandon the idea of using MIX or to introduce a compiler of some computer language of choice (which is a far more complex thing than what he is dealing with in Vol 1). That way it would not have become TAOCP but TAONPLP. The first one is independent of such a choice and, for this reason, timeless in a way few books about programming will ever be. The second one would probably be forgotten by now... Also, as long as computers are working in principle the way his MIX does, it is a good thing to take that into account if you are really interested in learning how to work with them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196236",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85632/"
]
} |
196,416 | The consensus seems to be that one should follow the convention of the platform they're developing for. See: Underscore or camelcase? Naming conventions: camelCase versus underscore_case? However, PHP doesn't seem to strictly follow any convention internally (no surprises there), even for methods and functions (e.g. mysqli::set_local_infile_default , PDOStatement::debugDumpParams ); however, underscores seem to be dominant in function names. However, what I couldn't find was this: what's the dominant naming convention for variables in PHP? | There is no definitive naming convention in PHP, and they differ by framework:
Zend does not permit underscores
Symfony also encourages camelCase
WordPress encourages underscores and does not like camelCase
CodeIgniter also promotes underscores
So: Use whatever your framework uses or create your own naming convention. At least for function names and class methods, there is one thing to consider, but some frameworks discard it: PHP is case insensitive in that case , so aTonalFunction() and atonalFunction() are both calls to the same function. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196416",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89095/"
]
} |
196,441 | I'm a CS student and I have been coding for a few months shy of a year now, and I seem to have developed what I think may be a "bad" habit and I'm wondering if anyone does the same (or whether it's a bad habit at all). When I'm coding / solving a problem with code, I find that my initial implementation is lengthy and overly-complicated; in other words, there is a lot of extraneous code (variables, checks) that is simply not needed. When I finish the initial "draft," and make sure the code actually works, I simplify it and make it easier to understand/less verbose. I think the reason I do this is that I have trouble foreseeing what I will need to complete a task and end up over-compensating and creating complexities that should not or need not exist. Anyone have any tips or advice on how to improve this facet of my coding style, or any input as to whether the habit is actually a bad one? | Sounds like a good habit to me. First rule in coding is to make it work. Once you've done that, clean up your code and make it neat, understandable and simpler if you can. However - if you are spending a lot of time over designing your solution and wasting a lot of time creating stuff that doesn't need to exist, that's possibly a bad habit - like if you were creating a bunch of classes and just going a bit overdesiging mad. But as long as you're working towards "making it work" then it doesn't sound like it's a problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196441",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89506/"
]
} |
196,502 | Now, when I make a programming mistake with pointers in C, I get a nice segmentation fault, my program crashes and the debugger can even tell me where it went wrong. How did they do that in the time when memory protection wasn't available? I can see a DOS programmer fiddling away and crashing the entire OS when he made a mistake. Virtualization wasn't available, so all he could do was restart and retry. Did it really go like that? | I can see a DOS programmer fiddling away and crashing the entire OS when he made a mistake. Yeah, that's pretty much what happened. On most systems that had memory maps, location 0 was marked invalid, so that null pointers could be easily detected, because that was the most common case. But there were lots of other cases, and they caused havoc. At the risk of sounding like a geezer, I should point out that the current focus on debugging is not the way of the past. Much more effort was previously made to write correct programs, rather than to remove bugs from incorrect programs. Some of that was because that was our goal, but a lot was because the tools made things hard. Try writing your programs on paper or on punched cards, not in an IDE, and without the benefit of an interactive debugger. It gives you a taste for correctness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196502",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/70273/"
]
} |
196,604 | Before asking my question, I must explain the situation. I'm working for a company as a junior software engineer. One of the seniors always stops me when I have finished my development and want to commit. He always wants me to wait for him to review it. This is ok, because usually he finds some bugs, and does some optimizations. However, I must commit my code before the deadline. When I finish, I call him and say it is finished. He usually comes late. So my code is late too. My question is, what should I do? Should I wait for him to review it? EDIT: Addition to the question. I'm curious about one more issue. I want to be free when coding. How could I gain the trust for freedom of development? Some explanations: I've talked with him about this. But it didn't help. We use an issue tracker already, but there isn't any task for reviews. There are just development and test tasks. | So my code is late too. No, it is not your code, it is the code of you and the senior. You are working as a team, you have a shared responsibility, and when you two miss a deadline, it is the fault of both of you. So make sure the one who makes the deadlines notices that. If that person sees that as a problem, too, he will surely talk to both of you together - that may help more than a single talk with your coworker. And to your EDIT: I want to be free when coding. How could I gain the trust for development freedom? Reviewing code is one of the most important quality savers. It is virtually impossible to write excellent code without a second pair of eyes, even when you have >20 years of programming experience. So in a good team, everyone's code should be constantly reviewed - your senior's code as well as your code. This has nothing to do with distrust against you in person (or, at least, it should not). As long as you believe that "free coding" without a second pair of eyes is better, you are still a junior programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196604",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78230/"
]
} |
196,628 | When working on code, I face many of the same challenges that my teammates do, and I have written some helpful functions and classes, and so have they. If there is good communication, I'll hear about some great thing someone put together, and six months later when I need it I may remember it and call that function, saving myself time. If I don't remember it, or never knew about it, I will probably re-invent the wheel. Is there a particular practice of documenting these kinds of things? How do you make them easy to find? If your team has no such documentation, how do you find out if your wheel already exists? EDIT: All but one of the answers so far deals with an ideal situation, so let me sum up those solutions: documentation & communication; wikis, stand-up meetings, etc. Those are all great things, but they rely on programmers having the time (and skills) to write up the documentation and attend the meetings and take notes and remember everything. The most popular answer so far (Caleb's) is the only one that could be used by a programmer who is incapable of documentation and meetings, and does only one thing: programming. Programming is what a programmer does, and yes, a great programmer can write up documentation, unit tests, etc., but let's face it - most of us prefer programming to documenting. His solution is one where the programmer recognizes re-usable code and pulls it out to its own class or repository or whatever, and by the fact that it is isolated, it becomes find-able and eases the learning curve for using it.... and this was accomplished by programming. In a way I see it like this: I've just written three functions, and it occurs to me that someone else should know about them. I could document them, write them up, announce them at a meeting, etc. - which I can do, but it's not my strength - or .... I can extract them to a class, name it well, make them function like a black box, and stick it where other class files go. Then a short email announcing it is easy. Other developers can scan the code and understand it better than they could an isolated function used in code they don't fully understand - that context is removed. I like this because it means having a set of well-named class files, with well-named methods, is a good solution that is accomplished by good programming. It does not require meetings, and it softens the need for detailed documentation. Are there any more ideas in this vein ... for isolated and time-pressed developers? | Libraries. Frameworks. Version control. If you've got reusable code, the very last thing you want is for different team members to copy the source code into their project. If they do that, chances are that they'll change a bit here and tweak a bit there, and soon you've got dozens of functions or methods that all have the same name but which each work a little bit differently. Or, perhaps more likely, the original author will continue to refine the code to fix bugs, make it more efficient, or add features, but the copied code will never get updated. The technical name for that is a huge freakin' mess . The right solution is to pull that reusable stuff out of whatever project you built it for in the first place and put it into a library or framework in its own version control repository. That makes it easy to find, but also discourages making changes without consideration for all the other projects that might be using it. 
You might consider having several different libraries: one for well-tested code that's no longer likely to change, one for stuff that seems stable but which hasn't been thoroughly tested and reviewed, one for proposed additions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196628",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/70099/"
]
} |
196,654 | It seems like Python , PHP , and Ruby all use the name "argv" to refer to the list of command line arguments. Where does the name "argv" come from? Why not something like "args"? My guess is that it comes from C, where the v would stand for "vector". Wikipedia has a footnote that says: the vector term in this variable's name is used in traditional sense
to refer to strings. However, there isn't any source for this info. Really, I'm curious if it has roots that trace even farther back. Did C use it because something before that used it? | While the other answers note that argv comes from C, where did C get the idea to call an array a "vector"? Directly, it came from BCPL . Though argv refers to the vector of (string) arguments, BCPL did have strings stored in vectors, but they were string literals and they worked like Pascal strings. The vector had two elements: the length at literal!0 and the characters at literal!1 . According to Clive Feather , strings were manipulated by "unpacking" them into character arrays, transforming the array then "repacking" them into strings: compare this with C where strings are character arrays. So yes, C used v for vector because something else had done so before. Now, did anything before BCPL use vector in this way? BCPL was itself a simplification of the "Cambridge[or Combined] Programming Language": this used vector as a synonym for a 1-dimensional array and matrix as a synonym for a 2-dimensional array. This is consistent with the notation in mathematics of vectors and matrices, though in CPL they're just handy mnemonics and don't have any of the properties associated with the mathematical structures. Can we push back further in time regarding computing languages? One potential branch of our trail runs cold. CPL was heavily influenced by Algol 60 (the 1963 update). Now ALGOL 68 had types that were described as "packed vectors", such as bits and bytes : but these weren't in earlier releases of Algol which just had ARRAY referring to array. As BCPL comes from 1966, CPL must have been before that (but after 1963): ALGOL 68 (standardised in 1968 and 1973) cannot have been a direct influence. On the other hand, Main Features of CPL also makes reference to McCarthy's LISP system . While this doesn't use vector to refer to a data structure in the system itself, those being S-expressions , M-expressions and L-expressions (L-expressions are strings, so any association between vector and string has disappeared), it does use vector in another sense to represent the "values of a number of variables" representing "the state of the machine at any time". So we have evidence for an assumption made in the comments: that use of the word 'vector' to mean 'array' in computing comes from application of the similar term in mathematics. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196654",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57435/"
]
} |
196,706 | I work in a control systems company, where the primary work is SCADA and PLC , along with other control systems stuff. Software development is not really something the company does, apart from little bits here and there, until there was a decision to create an internal project management and appraisal system. This project has been taken on by people who came here as software people originally and we are mostly junior. The project started off small, so we only documented stuff like design, database stuff etc, but we never really agreed upon a coding format/conventions. We started using StyleCop to make sure we had well documented code, but I feel we need an official document for coding conventions/practices so we can continue a good standard and if there is any more major development work in the future, whomever works on it has a good baseplate. Therein lies the problem, I have no idea how to draft up a document for coding conventions and standards, all I can think of is examples of good vs bad practice (for example camel case when naming variables, avoiding Hungarian notation etc) we are all competent enough programmers (apparently) but we just don't have a charter for this kind of stuff. To put a point on it, my question is: What are the key aspects and contents of a good coding standards document? | What are the key aspects and contents of a good coding standards document? Being supported by tools which enable automated checking of the code . If I know that I can't commit to version control any piece of code which doesn't match some rules, I would be encouraged to follow those rules in my code. If, on the other hand, some fellow programmer have written somewhere that I need to follow a rule, I don't give a crap about those rules. Being well thought-out, instead of being your personal opinion . You don't plainly say: "from now on, we don't use regions any longer, because I don't like regions." Rather, you would explain that regions encourage code growth and don't solve any major problem . The reason is that in the first case, your fellow colleague would answer: "well, I like regions, so I would still use them". In the second case, on the other hand, it would force people who disagree to come with constructive criticism, suggestions and arguments, eventually making you change your original opinion. Being well documented . Lack of documentation creates confusion and room for interpretation ; confusion and possibility of interpretation lead to style difference, i.e. the thing standards want to suppress. Being widespread, including outside your company . A "standard" used by twenty programmers is less standard than a standard known by hundreds of thousands of developers all around the world. Since you're talking about StyleCop, I suppose that the application is written in one of the .NET Framework languages. In that case, unless you have serious reasons to do differently, just stick with Microsoft's guidelines. There are several benefits in doing it rather then creating your own standards. Taking the four previous points: You don't need to rewrite StyleCop rules to fit your own standards. I don't say it's hard to write your own rules, but if you can avoid doing it, it means you have more time doing something useful instead. Microsoft's guidelines are very well thought. There are chances that if you disagree with some of them, it might be because you don't understand them. This was exactly my case; when I started C# development, I found a few rules totally dumb. 
Now, I completely agree with them, because I finally understood why they were written this way. Microsoft's guidelines are well documented, so you don't have to write your own documentation. New developers who will be hired in your company later may already be familiar with Microsoft's coding standards. There are some chances that no one will be familiar with your internal coding style. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196706",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81895/"
]
} |
196,789 | The first part of the license implies you can do basically whatever you want with it (copy, modify, sell etc).
But the second part says that these freedoms must propagate into all copies of the software. My interpretation of that is, you can incorporate the software into your proprietary project, but that portion must remain open....so any modifications to the software must keep that license attached, forcing my changes to be open sourced. Isn't this the reason people consider the GPL to be restrictive/viral? Because it forces modifications to be open sourced? Here is a copy of the license: Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions: The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. | Not quite. Here's the basic idea. As you pointed out, "you can incorporate the software into your proprietary project, but that portion must remain open" under the MIT license. If you have 100 features in your proprietary product, and one of them is based on MIT-licensed code, that's fine. However, if you have 100 features in your product, and one of them is based on GPL-licensed code, the GPL forces you to open-source the entire rest of the product . That's why it's called a viral license: it doesn't stay in its own code, but "infects" the rest of your codebase as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196789",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89857/"
]
} |
196,796 | I am maintaining a small code base that I am considering selling– but I want to allow developers that use it to submit code for addition to the trunk. This way they can extend the framework and the community can benefit. I want them to retain copyright to their code, but to allow me to use it however I desire, for commercial and non-commercial purposes. I don't want to be obligated to release any submitted source(although as copyright holders they may). The end user product will be binary distributions of the compiled code base. Is there a good existing license for a project of this nature? I could compromise a little to use an existing license which does not fully match this, if it will save me the legal headache. | Not quite. Here's the basic idea. As you pointed out, "you can incorporate the software into your proprietary project, but that portion must remain open" under the MIT license. If you have 100 features in your proprietary product, and one of them is based on MIT-licensed code, that's fine. However, if you have 100 features in your product, and one of them is based on GPL-licensed code, the GPL forces you to open-source the entire rest of the product . That's why it's called a viral license: it doesn't stay in its own code, but "infects" the rest of your codebase as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196796",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44413/"
]
} |
196,830 | Should boolean methods always take the affirmative form, even when they will only ever be used in the negative form? Say I wanted to check whether an entity exists before creating one, my argument is that the first form below is better than the second form, whether or not the method is ever used in the affirmative form. In summary, I find if(!affirmative) easier to read than if(negative) . I have a colleague who disagrees, thoughts? First Form: int entity_id = 42;
if(!entity_exists(entity_id)) create_entity(entity_id); Second Form: int entity_id = 42;
if(entity_not_exist(entity_id)) create_entity(entity_id); | Should boolean methods always take the affirmative form, even when
they will only ever be used in the negative form? Making rules about such things seems a little much -- I wouldn't want to see a guideline in a coding standards document that says thou shalt not use negative names for boolean properties . But as a matter of personal style, I think trying to keep the names positive could be a fine ideal. However, I think it's also good to avoid the need for that skinny and easily-missed ! . One can often find ways to turn a negative name into a positive one:
accountHasCharges
accountIsClear (same as !accountHasCharges )
Clarity is the most important consideration, and a good reason for avoiding negative method names is that they can lead to double negatives or worse:
isComplete          // okay
isNotComplete       // !isComplete is usually better
isIncomplete        // could make sense if 'incomplete' is a known state of the object
!isNotComplete      // horrible
!isNotComplete == 0 // may lead to permanent vacation | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196830",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89889/"
]
} |
196,929 | The principle is defined as modules having one reason to change . My question is, surely these reasons to change are not known until the code actually starts to change?? Pretty much every piece of code has numerous reasons why it could possibly change but surely attempting to anticipate all of these and design your code with this in mind would end up with very poor code. Isn't it a better idea to only really start to apply SRP when requests to change the code start coming in? More specifically, when a piece of code has changed more than once for more than one reason, thus proving it has more than one reason to change. It sounds very anti-Agile to attempt to guess reasons for change. An example would be a piece of code which prints a document. A request comes in to change it to print to PDF and then a second request is made to change it to apply some different formatting to the document. At this point you have proof of more than a single reason to change (and violation of SRP) and should make the appropriate refactoring. | Of course, the YAGNI principle will tell you to apply SRP not before you really need it. But the question you should ask yourself is: do I need to apply SRP first and only when I have to actually change my code? To my experience, the application of SRP gives you a benefit much earlier: when you have to find out where and how to apply a specific change in your code. For this task, you have to read and understand your existing functions and classes. This gets very much easier when all your functions and classes have a specific responsibility. So IMHO you should apply SRP whenever it makes your code easier to read, whenever it makes your functions smaller and more self-describing. So the answer is yes , it makes sense to apply SRP even for new code. For example, when your printing code reads a document, formats the document and prints the result to a specific device, these are 3 clear separable responsibilities. So make at least 3 functions out of them, give them according names. For example: void RunPrintWorkflow()
{
var document = ReadDocument();
var formattedDocument = FormatDocument(document);
PrintDocumentToScreen(formattedDocument);
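// a later "print to PDF" requirement would only swap the call above
// (e.g. for a hypothetical PrintDocumentToPdf), leaving the other steps untouched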
} Now, when you get a new requirement to change document formatting or another one to print to PDF, you know exactly at which of these functions or locations in code you have to apply changes, and even more important, where not. So, whenever you come to a function you don't understand because the function does "too much", and you are not sure if and where to apply a change, then consider refactoring the function into separate, smaller functions. Don't wait until you have to change something. Code is 10x more often read than changed, and smaller functions are much easier to read. In my experience, when a function has a certain complexity, you can always split the function into different responsibilities, independent of knowing which changes will come in the future. Bob Martin typically goes a step further, see the link I gave in my comments below. EDIT: to your comment: The main responsibility of the outer function in the example above is not to print to a specific device, or to format the document - it is to integrate the printing workflow . Thus, at the abstraction level of the outer function, a new requirement like "docs should not be formatted anymore" or "doc should be mailed instead of printed" is just "the same reason" - namely "printing workflow has changed". If we talk about things like that, it is important to stick to the right level of abstraction . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196929",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85655/"
]
} |
196,990 | class Boxer:
def punch(self, punching_bag, strength):
punching_bag.punch(strength)
class PunchingBag:
def punch(self, strength):
print "Punching bag punched with strength", strength
boxer = Boxer()
punching_bag = PunchingBag()
boxer.punch(punching_bag, 2) No doubt that punch is a good method name in the case of a boxer.
But is the name punch also good for the method of the punching bag?
In both cases I mean punch as a command (i.e. do punch). | A good rule of thumb is that method names should be verbs or predicates such that the object you call them on ( self in standard Python convention, this in most other languages) becomes the subject. By this rule, file.close is kind of wrong, unless you go with the mental model that the file closes itself, or that the file object doesn't represent the file itself, but rather a file handle or some sort of proxy object. A punching bag never punches itself though, so punchingBag.punch() is wrong either way. be_punched() is technically correct, but ugly. receive_punch() might work, or handle_punch() . Another approach, quite popular in JavaScript, is to treat such method calls as events, and the convention there is to go with the event name, prefixed with 'on', so that would be on_punched() or on_hit() . Alternatively, you could adopt the convention that past participles indicate passive voice, and by that convention, the method name would be just punched() . Another aspect to consider is whether the punching bag actually knows what hit it: does it make a difference whether you punch it, beat it with a stick, or run into it with a truck? If so, what is the difference? Can you boil the difference down to an argument, or do you need different methods for different kinds of received punishment? A single method with a generic parameter is probably the most elegant solution, because it keeps the degree coupling low, and such a method shouldn't be called punched() or handle_punch() , but rather something more generic like receive_hit() . With such a method in place, you can implement all sorts of actors that can hit punching bags, without changing the punching bag itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196990",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90036/"
]
} |
196,996 | I am taking an introductory course on python and the instructor says that python is a high level language and C and C++ are low level languages. It's just confusing. I thought that C, C++, Python, Java, etc were all high level languages. I was reading questions at stackoverflow on C, C++, etc and they all seem to refer to those languages as high level. It seems to me that some programmers use those terms interchangeably. | High level and low level are relative terms so the usage has changed over time. In the 70s UNIX made waves because it showed that an operating system could be written primarily in a high level language: C. At the time C was considered high level as in contrast to assembler. Nowadays C is considered a low level language because neither the language nor the standard libraries provide any of the bread and butter data structures like vectors, dictionaries, iterators, and so on. You can have all those structures in a C program, but you'll end up writing them yourself. Python, Java, etc. are high level relative to C because many of those standard data structures are built in to the language or are part of the standard libraries. Having those right out of the box makes it easier to program at a more abstract level. C is low level in a 2nd sense: it enables direct manipulation of the computer hardware (at least as direct as the OS will allow). The most common implementations of Python, Java, etc. are at least one step further removed from the hardware because they run in a VM. If you want to manipulate the hardware from Python you'll have write an extension to the Python VM, usually in C or C++. C++ is an odd case. It provides tons of nice data structures as part of the standard library, but it also allows low-level manipulation of the hardware. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/196996",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66433/"
]
} |
197,107 | In divide and conquer algorithms such as quicksort and mergesort, the input is usually (at least in introductory texts) split in two , and the two smaller data sets are then dealt with recursively. It does make sense to me that this makes it faster to solve a problem if the two halves takes less than half the work of dealing with the whole data set. But why not split the data set in three parts? Four? n ? I guess the work of splitting the data in many, many sub sets makes it not worth it, but I am lacking the intuition to see that one should stop at two sub sets. I have also seen many references to 3-way quicksort. When is this faster? What is used in practice? | It does make sense to me that this makes it faster to solve a problem if the two halves takes less than half the work of dealing with the whole data set. That is not the essence of divide-and-conquer algorithms. Usually the point is that the algorithms cannot "deal with the whole data set" at all. Instead, it is divided into pieces that are trivial to solve (like sorting two numbers), then those are solved trivially and the results recombined in a way that yields a solution for the full data set. But why not split the data set in three parts? Four? n? Mainly because splitting it into more than two parts and recombining more than two results
results in a more complex implementation but doesn't change the fundamental (Big O) characteristic of the algorithm - the difference is a constant factor, and may result in a slowdown if the division and recombination of more than 2 subsets creates additional overhead. For example, if you do a 3-way merge sort, then in the recombination phase you now have to find the biggest of 3 elements for every element, which requires 2 comparisons instead of 1, so you'll do twice as many comparisons overall. In exchange, you reduce the recursion depth by a factor of ln(2)/ln(3) == 0.63, so you have 37% fewer swaps, but 2*0.63 == 26% more comparisons (and memory accesses). Whether that is good or bad depends on which is more expensive in your hardware. I have also seen many references to 3-way quicksort. When is this faster? Apparently a dual pivot variant of quicksort can be proven to require the same number of comparisons but on average 20% fewer swaps, so it's a net gain. What is used in practice? These days hardly anyone programs their own sorting algorithms anymore; they use one provided by a library. For example, the Java 7 API actually uses the dual-pivot quicksort. People who actually do program their own sorting algorithm for some reason will tend to stick to the simple 2-way variant because less potential for errors beats 20% better performance most of the time. Remember: by far the most important performance improvement is when the code goes from "not working" to "working". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197107",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45839/"
]
} |
197,257 | I use Eclipse for coding, and the language we use is Java.
It was once suggested by someone that, to properly format the code,
I should use the auto formatter (CTRL+SHIFT+F).
While this command does format the code, sometimes I feel that the overall
look becomes weird, and it is not actually very readable. So is this a recommended thing to do?
If not, what is a better way of formatting our code in Eclipse? | Strict code formatting rules are useful when several developers work on the same code using a version control system. Merging can be a pain if different developers have different formatting rules, as the same code would look different to the merging tool. Eclipse (or any good IDE for that matter) has code formatting rules that can be customized in the preferences section (Java > Code Style > Formatter). Choose what you like best, but also have a look at the Java standard code conventions . Many open source projects also have their own code conventions that can be enforced with the Eclipse formatter. In addition there are standard tools like Checkstyle, PMD and FindBugs that enforce additional rules and help avoid common (low-level) anti-patterns and mistakes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197257",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/76637/"
]
} |
197,482 | I'm working on a VB.Net WinForms project and found myself writing code like this: this.Fizz.Enabled = this.Buzz.Enabled = someCondition; I couldn't decide whether that was bad code or not. Are there any .NET guidelines for when/when-not to do assignment chaining? | I never do it. I always put each assignment on its own line. Clarity is king. In practice, I seldom use enough state variables to make chaining assignments necessary. If I get to that point, I start looking for ways to trim the amount of state I am using. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197482",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44862/"
]
} |
197,488 | We have a research project with idea->prototype->statistics development cycle.
Anyway, our final product is a prototype, so the statistics collection suite is
not used persistently. Suppose I have the following class: class Transform {
private:
someData;
public:
transformForward (inp_block, outp_block);
transformBackward (inp_block, outp_block);
}; Imagine that it is a part of a big system. I need to periodically collect some statistics on such a transform (internal data could be considered as well). It seems that adding statistical routines would be a violation of the single-responsibility principle. The second smell is that I do not want to touch code that uses Transform instances to explicitly invoke these routines. It would be best if I could just
trigger some kind of switch so that the statistics for that module will be collected. I've met that challenge a number of times and I have a feeling that I'm constantly reinventing the wheel. Is there some good practices for configuring and collecting the statistics suite for a compound system without interfering into it's internal code base? UPDATE: As I can see from the answers proposed, my question is too non-specific, so I'll provide more concrete example. Consider an image-compression system composed by two huge blocks: Predictor and Encoder . There are a lot of various prediction and compression algorithms, during our research we need to explore the behavior of the components under various conditions. We should answer questions like "how many times the pixel is processed within each context", "how well does this predictor works", "how does each predictor affects the Encoder 's internal state" and many others. Anyway, our final product is just a Codec with no statistical suite shipped with it; all kind of statistics collection is used internally during our research. Thus the question arises: how could one build flexible statistics engine that knows the very internals of the system? How could one keep the system itself independent of the statistics engine? | I never do it. I always put each assignment on its own line. Clarity is king. In practice, I seldom use enough state variables to make chaining assignments necessary. If I get to that point, I start looking for ways to trim the amount of state I am using. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197488",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85080/"
]
} |
197,501 | Is it ok to call a webservice from a Domain object?. As I write the question I am thinking that you should never do that, as it is poor design, but the situation is the following: I have a domain object called Postman that works very closely with a Message object. This Message is provided by a web service. Without the Message, the Postman object could not do its business logic. I understand that this code doesn't smell good but, since the Postman depends on the Message, it seems logical to call the web service in order to get the Message. There is no other way to get the Message without the web service. | I never do it. I always put each assignment on its own line. Clarity is king. In practice, I seldom use enough state variables to make chaining assignments necessary. If I get to that point, I start looking for ways to trim the amount of state I am using. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197501",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90484/"
]
} |
197,562 | Exception handling in C++ is limited to try/throw/catch. Unlike Object Pascal, Java, C# and Python, even in C++ 11, the finally construct has not been implemented. I have seen an awful lot of C++ literature discussing "exception safe code". Lippman writes that exception safe code is an important but advanced, difficult topic, beyond the scope of his Primer - which seems to imply that safe code is not fundamental to C++. Herb Sutter devotes 10 chapters to the topic in his Exceptional C++ ! Yet it seems to me that many of the problems encountered when attempting to write "exception safe code" could be quite well solved if the finally construct was implemented, allowing the programmer to ensure that even in the event of an exception, the program can be restored to a safe, stable, leak-free state, close to the point of allocation of resources and potentially problematic code. As a very experienced Delphi and C# programmer I use try.. finally blocks quite extensively in my code, as do most programmers in these languages. Considering all the 'bells and whistles' implemented in C++ 11, I was astonished to find that 'finally' was still not there. So, why has the finally construct never been implemented in C++? It's really not a very difficult or advanced concept to grasp and goes a long ways towards helping the programmer to write 'exception safe code'. | It's really just a matter of understanding the philosophy and idioms of C++. Take your example of an operation that opens a database connection on a persistent class and has to make sure that it closes that connection if an exception is thrown. This is a matter of exception safety and applies to any language with exceptions (C++, C#, Delphi...). In a language that uses try / finally , the code might look something like this: database.Open();
try {
database.DoRiskyOperation();
} finally {
database.Close();
} Simple and straightforward. There are, however, a few disadvantages: If the language doesn't have deterministic destructors, I always have to write the finally block, otherwise I leak resources. If DoRiskyOperation is more than a single method call - if I have some processing to do in the try block - then the Close operation can end up being a decent bit away from the Open operation. I can't write my cleanup right next to my acquisition. If I have several resources that need to be acquired then freed in an exception-safe manner, I can end up with several layers deep of try / finally blocks. The C++ approach would look like this: ScopedDatabaseConnection scoped_connection(database);
database.DoRiskyOperation(); This completely solves all of the disadvantages of the finally approach. It has a couple of disadvantages of its own, but they're relatively minor: There's a good chance you need to write the ScopedDatabaseConnection class yourself. However, it's a very simple implementation - only 4 or 5 lines of code. It involves creating an extra local variable - which you're apparently not a fan of, based on your comment about "constantly creating and destroying classes to rely on their destructors to clean up your mess is very poor" - but a good compiler will optimize out any of the extra work that an extra local variable involves. Good C++ design relies a lot on these sorts of optimizations. Personally, considering these advantages and disadvantages, I find RAII (Resource Acquisition Is Initialization) a much preferable technique to finally . Your mileage may vary. Finally, because RAII is such a well-established idiom in C++, and to relieve developers of some of the burden of writing numerous Scoped... classes, there are libraries like ScopeGuard and Boost.ScopeExit that facilitate this sort of deterministic cleanup. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197562",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26621/"
]
} |
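As a concrete illustration of the RAII answer above, here is a minimal sketch of the ScopedDatabaseConnection it mentions. The Database type, with its Open(), Close() and DoRiskyOperation() members, is a stand-in invented for this example, not a real API.
struct Database {                 // placeholder for whatever connection type the real code uses
    void Open() {}
    void Close() {}
    void DoRiskyOperation() {}
};
class ScopedDatabaseConnection {
public:
    explicit ScopedDatabaseConnection(Database& db) : db_(db) { db_.Open(); }
    ~ScopedDatabaseConnection() { db_.Close(); }    // runs on normal exit and during stack unwinding
    // Non-copyable: exactly one owner is responsible for closing the connection.
    ScopedDatabaseConnection(const ScopedDatabaseConnection&) = delete;
    ScopedDatabaseConnection& operator=(const ScopedDatabaseConnection&) = delete;
private:
    Database& db_;
};
void DoWork(Database& database)
{
    ScopedDatabaseConnection scoped_connection(database);
    database.DoRiskyOperation();  // if this throws, the destructor still closes the connection
}
In real code the same effect is usually obtained with a generic guard such as the ScopeGuard or Boost.ScopeExit utilities mentioned in the answer.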
197,569 | I am reading Clean Code by Uncle Bob. Because I am not a native-English speaker, I couldn't understand following statement: Classes and objects should have noun or noun phrase names like Customer , WikiPage , Account , and AddressParser . Avoid words like Manager , Processor , Data , or Info in the name of a class. A class name
should not be a verb. As I know, none of the Manager , Processor , Data , and Info is a verb, isn't it? What is the actual point he want to emphasize? | The three points are separate: Class names should be nouns or noun phrases . This means that the name of the class should be something that would be the subject of a verb. In the case of object-oriented design, methods would be the verbs that take place on the thing that the class is a representation of. Some words should be avoided. Manager indicates a possible god class . Info and Data may indicate a dummy data container. Words like this may indicate poor modeling of the problem space. Verbs should never be class names. See the first point - classes model things, methods model actions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197569",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45866/"
]
} |
197,575 | I am trying to write a system log parser. It will receive messages from the FreeBSD syslog daemon through stdin. It will use those messages to determine if an IP should be banned or not.
The problem I have is that after x seconds the ban should be removed, but if I don't get any input from stdin, my program will just block waiting for it. So in the meantime I can't do anything. I fixed it using threads, but isn't there a better way to do it? For example something like this: while true:
while <data in stdin>:
handleData
doSomeStuff() So as long nothing comes in from stdin I want to execute doSomeStuff, and if there is data handle it. | The three points are separate: Class names should be nouns or noun phrases . This means that the name of the class should be something that would be the subject of a verb. In the case of object-oriented design, methods would be the verbs that take place on the thing that the class is a representation of. Some words should be avoided. Manager indicates a possible god class . Info and Data may indicate a dummy data container. Words like this may indicate poor modeling of the problem space. Verbs should never be class names. See the first point - classes model things, methods model actions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197575",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89343/"
]
} |
197,584 | I'm trying to make a case for not putting the structure in the parent BaseModule class I've shown below. I'm more for a Strategy Pattern, and minimizing inheritance in favor of has-a relationships, but I am not really sure of what principle of OOAD this pattern is breaking. What principle of OOAD is this pattern breaking? Potential Pattern: public interface IStringRenderer
{
string Render();
}
public abstract class BaseModel : IRequestor
{
public abstract void Init();
public abstract void Load();
}
public abstract class BaseModule<TModel> : IStringRenderer where TModel : BaseModel
{
public TModel Model { get; set; }
public string Render()
{
return
string.Join(
"",
new string[]
{
"<header/>",
"<main-container>",
"<side-nav>",
"<side-navigation/>",
"</side-nav>",
"<main-content>",
RenderMainContent(),
"</main-content>"
});
}
public abstract string RenderMainContent();
}
public class MyModuleModel : BaseModel
{
public List<int> Ints { get; set; }
...
}
public class MyModule : BaseModule<MyModuleModel>
{
public override string RenderMainContent()
{
return
string.Join(",", Model.Ints.Select(s => s.ToString()).ToArray());
}
} Preferred Pattern (duplicated some code to be clear): public interface IStringRenderer
{
string Render();
}
public abstract class BaseModel : IRequestor
{
public abstract void Init();
public abstract void Load();
}
public class MyModuleModel : BaseModel
{
public List<int> Ints { get; set; }
...
}
public class MyModule : IStringRenderer
{
public MyModuleModel Model { get; set; }
public MyModule(MyModuleModel model)
{
this.Model = model;
}
public string Render()
{
return
string.Join(
"",
new string[]
{
"<header/>",
"<main-container>",
"<side-nav>",
"<side-navigation/>",
"</side-nav>",
"<main-content>",
RenderMainContent(),
"</main-content>"
});
}
public string RenderMainContent()
{
return
string.Join(",", Model.Ints.Select(s => s.ToString()).ToArray());
}
} There are parts of this pattern that I'm not trying to focus on, but instead I am thinking about where the 'structure', or the 'tags' are living. In the first example they live in the parent, but in the second example they have been pushed into the derived class. Right now, the examples are pretty simple, but something like 'side-nav' could potentially get complicated, and may need to be controlled by the child anyway. I feel the principle here is that I don't feel like the 'tag' structure shouldn't be in the parent class as in the first example. There are other things I've removed in my 'preferred version' -- namely the generics. (Any suggested reading on good/bad for that choice is welcome). Proponents of the first example's embedded structure like the fact that they can wield changes to large amounts of child classes. I feel offering the flexibility to derived classes is the way to go, and if there needs to be a parent that parent only offers functionality and cross-cutting features, but does not bottle up canned structure. | The three points are separate: Class names should be nouns or noun phrases . This means that the name of the class should be something that would be the subject of a verb. In the case of object-oriented design, methods would be the verbs that take place on the thing that the class is a representation of. Some words should be avoided. Manager indicates a possible god class . Info and Data may indicate a dummy data container. Words like this may indicate poor modeling of the problem space. Verbs should never be class names. See the first point - classes model things, methods model actions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197584",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48325/"
]
} |
197,650 | I am a developer in a 5-member team and I believe our project is headed for disaster. I'll describe why in a moment, but my question is: how should I behave? The deadline is in 1.5 months, and I feel no matter what we do, this project will fail. I'm of the opinion that we should just terminate the project and stop wasting our time, but politically I think it's impossible for our manager to do that. What should I do in this case? Should I put in extra effort, or should I just take it easy? And what should I say to the manager? Reasons this project is headed for failure: With deadline approaching many of the must-have features are not finished Application is unstable and very difficult to use System is very convoluted, code very hard to understand, very difficult to change - Data model is too driven by a complex relational database (100+ tables) Unclear leadership; the manager responds to new information with major changes Almost no automated tests or unit test Heavily depends on other systems, but no integration tests yet In fact, we actually just inherited this project (along with the mess) around 1-2 months ago from another dev team under the same manager, who has worked on it for a few months. | Communicate your concerns in the most concise and non-confrontational way possible up the management ladder. Summarize the risks, but do not impose your conclusion on them. Management must always have the choice of what to do, but it is your job to assess and communicate the situation. Use email, so as to leave a paper trail when things go south. Having done that, keep working on the project in good faith. Keep in mind, you may not know everything there is to know about the project, its backers, and financial decisions behind it. Management decisions that seem stupid to you might actually be based on smart reasoning that isn't visible to you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197650",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8486/"
]
} |
197,675 | I've just been told by my boss that I will receive a negative performance review on Monday. He wants to talk to me about why I am so slow and why my bug fix rate is so low. I love programming and solving problems but I actually do find my job really really hard. I've actually been a programmer for about 10 years. But this is my first multithreading embedded linux job - I've been here 2 years and it's obvious to everyone that I'm still struggling. And I think I've become so demoralised and feel so marginalised that I've lost a lot of the fire that I had at the start of the job. Has anyone ever been in a similar situation and how do you go about increasing your bug fix rate? Update:
I had the review. I have been put on a 3 month 'employee development program' (of the type mentioned by Dunk ). Not sure whether I can turn this around. But even if I do have to move on, I've learned a lot from this experience. Another Update It's now about 6 weeks since the first review.
My advice to anyone facing the same situation is to be humble enough to take criticism and learn from your mistakes. And to not be afraid to look dumb. Ask loads and loads of questions. Let people know you're trying to learn and keep asking until you understand.
But be prepared for it not to work out. I'm constructing a portfolio of code ... as well as giving it my best shot. Yet another update I am hesitant to put this on here, since I'm concerned that I will not be able to refer future employers to my stackoverflow profile... But anyway, it might be of interest for someone reading this question, but I actually lost my job a few weeks ago. I'm in the midst of brushing up on all the skills I need to - I've taken a lot from the advice given here. | Your boss may be correct: you may be "underperforming" (more on that in a minute). But it may not be just your level of competence that's to blame. I don't think it would be a reach to suggest forces outside your control are causing you stress, which is having a negative effect on your performance. Let's have a look at a few of the reasons your boss may now be bringing this up: Culture and Politics There may be forces beyond your control requiring your boss to now voice his concern. It's important to understand the system you are working in. Your job is to make your boss look good. The only way to do that is to understand the pressures he/she is under. Ability It's possible that ability is not up to par, as you say he openly stated. Here is what I would do in this situation: Get specific feedback from your boss on how he measures performance. Are you not closing as many bugs as person X? Is there a set number of bugs you should be solving? If you are working alone then you need to make sure that the people measuring your performance are measuring it fairly and not based on some preconceived idea. If your performance is slow and based on a real gap, identify that gap and put a detailed plan together with your boss with the aim of closing it. This review is also a good opportunity to bring up the fact that you are not happy. It's good that you've identified that you don't love this job. But figure out why. What part of your job do you like and what don't you? It might be that this job isn't for you... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197675",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
197,838 | The advantage of using tabs for indentation is that people can configure their editor to use the tab width they are comfortable with. The only argument against this seems to be that people don't want their code to look differently based on editor settings. However, since code is ultimately being written to be easy to read later, it seems much more beneficial to allow the reader to set the width to their preferred value. On the other hand, because tabs have a different width in different environments, you will get aligning issues when you use it to format multi-line statements and other things. An example of this is: print(name + " is now " + age +
"years old!") But in this case the spacing in front of the second line has a very different purpose than that used to indent blocks. Since the purpose here is to align the second line with the print( characters, the best solution here is to use spaces, since they're guaranteed to be as wide as each character in the line above. To summarize, what is wrong with ...using tabs for indentation where everyone can set their own preferred width ...using spaces for aligning multi-line code where the width does matter Since most of these questions are typically just tabs vs spaces, I hope to gain some more insight here about the advantages/disadvantages of using each for their own separate use cases. | 1. The first downside is that it quickly becomes a mess. One of the most used Visual Studio extensions is Productivity Power Tools. This extension has an option which alerts a person when the file uses both tabs and spaces, and suggests to replace either tabs by spaces, or spaces by tabs. This is because when you open a file, you have no visual clue if it's using spaces or tabs. One developer will use only tabs, another developer - only spaces, including for indentation, and another one will use both, in a way you suggested. When those three developers will work together, given that they all use different style, a fourth developer would have no clue whatsoever if one should use tabs or spaces, and when. 2. The second downside is that today's IDEs are not done for that. Instead of solving, as you believe, the actual issue, it will only increase it. Imagine the well-indented/aligned piece of code using your convention: → Some code here(firstArgument,
→ ···············secondOne,
→ ···············andTheLast); Actually, in practice, what may happen is: → Some code here(firstArgument,
→ → → → ···secondOne,
→ → → → ···andTheLast); because, honestly, I can hardly see myself taping repeatedly Space key, as well as determine exactly the moment where I should stop putting tabs and start using spaces (i.e. one tab, then spaces). After a few modifications, the same code may quickly become: → Some code here(firstArgument,
→ → → → ···secondOne,
→ → → ·······modified); Now, when somebody will open the file in an IDE where a tab is equal to two spaces instead of four, that's what he will see: → Some code here(firstArgument,
→ → → → ···secondOne,
→ → → ·······modified); in other words: Some code here(firstArgument,
secondOne,
modified); 3. Finally, the problem doesn't exist in the first place. You're assuming that the first argument should be on the same line as the method itself. But the easiest way would be to simply put all arguments to a new line when they are too long. The code above may simply be written like this: → Some code here(
→ → firstArgument,
→ → secondOne,
→ → andTheLast); See? It's all aligned, and would be no matter how much spaces the tab measures. Conclusion: Formatting should be the task of the IDE. Developers have already enough work to care about the size of tabs, how much spaces will an IDE insert, etc. The code should be formatted correctly, and displayed correctly on other configurations, without forcing developers to think about it. By introducing the convention where tabs are used for indentation, and spaces, for alignment, you are mitigating the task of caring about tabs/spaces stuff from IDEs to developers. That's wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57949/"
]
} |
197,883 | I would like to know what options are available for documenting a project which has already been developed, as the developers who worked on didn't write even a single page of documentation. The project has no other details other than many pages of scripts with functions written and modified by developers who worked on this project from the past 2 years. All I have is the database schema and project files. I would like to know if there is any way to organize this project and document it so that it could be helpful for the developers who will be working on this project in the future. The project was developed with PHP and MySQL. The functions are poorly commented so I can't get good results when I run it with doxygen. | Who will be reading the documentation? What will the documentation be used for? These are the most important questions to answer. For example, documentation for maintenance developers would focus more on structure whereas documentation for developers integrating with the product would focus more on web services and database structure. In general, do as much documentation as is required and no more. Many organizations require documentation because someone insisted it is best practice but the documentation ends up gathering dust. Assuming that people will actually use the documentation, do not try to capture the code and database to the smallest level. Developers will look at the code for minutiae. Instead, focus on details that are not apparent in the code , for example: The use cases the product meets. This may be difficult considering the age of the product but capturing what the product is meant to do gives vital context to non-technical readers and testers. Who are the competitors in the market (if any)? Is there anything excluded from the product's scope? Any clear non-functional requirements . For example, was the product written to handle a certain volume? How old can the data be? Where is caching used? How are users authenticated? How does access control work? A context diagram showing interaction with other systems, such as the database, authentication sources, backup, monitoring and so on. (If known) Risks and how they were mitigated along with a decision register . This is probably difficult in retrospect but there are often critical decisions that influence a design. Capture any that you know. Common design patterns or design guidelines . For example, is there a standard way of accessing the database? Is there a coding or naming standard? Critical code paths , usually using flow charts or UML activity or sequence diagrams. There may not be any in the project but these are usually ones business users have articulated. Even if all this information is not available, start now . The developers that come after you will thank you. Good automated unit tests or test cases can also be useful documentation, albeit hard to access for less technical people. It also sounds like you need to make a cultural change to include documentation . Start small but, ideally, the project should not be "done" until it has at least a minimal level of documentation. This is probably the hardest step because the above are things you can control. This is something others must buy into. However, it can also be the most rewarding, particularly if the next project you do comes with good documentation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197883",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90858/"
]
} |
197,931 | When Enum is used as below, say if we have enum Designation
{
Manager = 0,
TeamLead = 1,
Associate = 2
} then write the below code if (designation == Designation.TeamLead) //somecode Now if we decide to change the enum element from " TeamLead " to " Lead " then we have to modify the above line of code as well i.e. designation == Designation.TeamLead to designation == Designation.Lead . So what is the best practice. | The most basic answer to your question is this: Most C# IDEs I know of have a Refactoring option that easily lets you rename variables and references, such as enum types. If you highlight TeamLead, anywhere it's used, right-click it and look for a Rename option, you should be able to change references in all code files of your project. If you reference it as a string at some point, it may be good to do a full text search and handle individual cases. However, my full answer, which I can only hope you care about, is that this is the wrong way of going about things in an object-oriented language. You should especially take note of times when you're seperating large blocks of logic by an if-else/switch, and even using part of one block in another, as a time when object-oriented design would help you. Here's how I'd do it, in incomplete pseudo-code: abstract class ProjectMember {
public abstract void reassignTask(Task t);
}
class Manager : ProjectMember {
public void reassignTask(Task t) {
// TODO: manager case
}
}
class TeamLead : ProjectMember {
public void reassignTask(Task t) {
// TODO: TeamLead case
}
}
class Associate : ProjectMember {
public void reassignTask(Task t) {
// TODO: associate case
}
}
// this is the function where you'd be getting rid of a switch statement, by way of the above code.
void reassignMemberTask(ProjectMember mem, Task t) {
mem.reassignTask(t);
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/197931",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/75041/"
]
} |
198,006 | I've been using Python for a little while now and I am enjoying it, but a few of my friends are telling me to start using a language like C# or Java instead and give these reasons: Python is a scripting language Python apps don't port well to people who don't have Python It's hard to make good GUIs in Python since it's a scripting language I like the batteries included approach to Python and the ability to download and upload pre-built modules from PyPI is really useful to me. Is there any specific reason why Python is considered a weak language? | Because people readily dismiss things they don't know much about with pseudo-intelligent rationalizations? I'm not much of a python fan, but those criticisms are bogus. Python is a general purpose programming language that happens to be good for scripting tasks. It's not a weakness. If you want to package software written in python with an all-in-one installer, there's almost nothing stopping you from including Python. It's not hard; you'd have to have a platform-specific installer, but this would be true for most multi-platform apps you could build. There are even tools to make that process pretty painless; see, for example http://hackerboss.com/how-to-distribute-commercial-python-applications/ There are plenty of good GUI solutions for Python, and any other scripting languages. For a long list of options, see http://wiki.python.org/moin/GuiProgramming There are fair criticisms of Python that reasonable people can make, but there's no reason to completely dismiss it based on the fact that it isn't C# or Java. For many people, that's a good reason TO use Python. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198006",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88139/"
]
} |
198,085 | So, we've got a guy who likes to write methods that take Objects as parameters, so they can be 'very flexible.' Then, internally, he either does direct casting, reflection or method overloading to handle the different types. I feel like this is a bad practice, but I can't explain exactly why, except that it makes it more difficult to read. Are there other more concrete reasons why this would be a bad practice? What would those be? So, some folks have asked for an example. He has an interface defined, something like: public void construct(Object input, Object output); And he plans to use these by putting several in a list, so it sort of builds bits and adds them to the output object, like so: for (ConstructingThing thing : constructingThings)
thing.construct(input, output);
return output; Then, in the thing that implements construct , there is a rickety reflection thing that finds the right method that matches the input/output and calls it, passing input/output. I keep telling him, it works, I understand it works, but so does putting everything into a single class. We're talking about reuse and maintainability, and I think he's actually constraining himself and has less reuse than he thinks. The maintainability, while probably high for him right now, will likely be very low for him and anyone else in the future. | From a practical point of view I see these problems: A bloat of possible run type errors -- unless a lot of dynamic type checking which could be avoided with the Java included strong type checker. A lot of unnecessary casts Difficulty understanding what a method does by its signature From a theoretical point of view I see these problems: A lack of contracts of the interface of a class. If all parameters are of Object type then you aren't declaring anything informative to the client classes. A lack of overloading possibilities The incorrectness of override. You are able to override a method and change its parameter types thus breaking everything which is inheritance related. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198085",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91010/"
]
} |
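The question above is about Java's Object, but the same problems can be sketched in C++17 with std::any. This toy example only illustrates the answer's points about missing contracts, casts, and run-time failures; it is not code from the original post.
#include <any>
#include <iostream>
#include <string>
// "Flexible" signature: the declaration promises nothing about what it accepts.
void renderAnything(const std::any& input)
{
    if (input.type() == typeid(int))
        std::cout << std::any_cast<int>(input) << '\n';
    else if (input.type() == typeid(std::string))
        std::cout << std::any_cast<std::string>(input) << '\n';
    // Every new type needs another branch; an unhandled type is silently ignored here,
    // and a blind any_cast without the type check would throw std::bad_any_cast at run time.
}
// Typed signature: the contract is visible and the compiler checks every caller.
void renderText(const std::string& input)
{
    std::cout << input << '\n';
}
int main()
{
    renderAnything(42);
    renderAnything(std::string("hello"));
    renderText("hello");          // a wrong argument type here fails at compile time
}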
198,117 | Our specific situation is that we are creating an agreement between ourselves and another team for shared control or use of a PHP based web application that we have been building. We have a set of standards and conventions documented in our technical specs. However, I have been asked to define them in terms of an industry standard. I am aware of different terminology for coding conventions. Hungarian notation, CamelCase, for example. And some defined standards for very specific things, PSR-0 for example covers namespaces in PHP. It seems the point is to try to give us stronger ground in the agreement, by pointing to industry standards rather than an arbitrary standard we set forth. They're looking for things like the PSR-0 or an ISO#### or IEEE#### that we can point to. My personal concern is that I've spent the time working with the team to show them the standards we have and to get buy in on the bigger parts of it. So I'm worried we'll either end up having to conform to some standard that doesn't match how we like to program, or the better case will be picking and choosing pieces of standards if they exist, to fit our current conventions. Anyway, the core of the question is, Are there any industry standards regarding code quality, coding processes, or other areas that would help us keep their programmers from making disruptive or difficult to maintain code? I realize we can't prevent it, but if we can point to the agreement, at least we have recourse other than refactoring or recoding it ourselves. Any other thoughts or suggestions would be helpful. | From a practical point of view I see these problems: A bloat of possible run type errors -- unless a lot of dynamic type checking which could be avoided with the Java included strong type checker. A lot of unnecessary casts Difficulty understanding what a method does by its signature From a theoretical point of view I see these problems: A lack of contracts of the interface of a class. If all parameters are of Object type then you aren't declaring anything informative to the client classes. A lack of overloading possibilities The incorrectness of override. You are able to override a method and change its parameter types thus breaking everything which is inheritance related. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198117",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50402/"
]
} |
198,284 | This question may sound dumb, but why does 0 evaluates to false and any other [integer] value to true is most of programming languages? String comparison Since the question seems a little bit too simple, I will explain myself a little bit more: first of all, it may seem evident to any programmer, but why wouldn't there be a programming language - there may actually be, but not any I used - where 0 evaluates to true and all the other [integer] values to false ? That one remark may seem random, but I have a few examples where it may have been a good idea. First of all, let's take the example of strings three-way comparison, I will take C's strcmp as example: any programmer trying C as his first language may be tempted to write the following code: if (strcmp(str1, str2)) { // Do something... } Since strcmp returns 0 which evaluates to false when the strings are equal, what the beginning programmer tried to do fails miserably and he generally does not understand why at first. Had 0 evaluated to true instead, this function could have been used in its most simple expression - the one above - when comparing for equality, and the proper checks for -1 and 1 would have been done only when needed. We would have considered the return type as bool (in our minds I mean) most of the time. Moreover, let's introduce a new type, sign , that just takes values -1 , 0 and 1 . That can be pretty handy. Imagine there is a spaceship operator in C++ and we want it for std::string (well, there already is the compare function, but spaceship operator is more fun). The declaration would currently be the following one: sign operator<=>(const std::string& lhs, const std::string& rhs); Had 0 been evaluated to true , the spaceship operator wouldn't even exist, and we could have declared operator== that way: sign operator==(const std::string& lhs, const std::string& rhs); This operator== would have handled three-way comparison at once, and could still be used to perform the following check while still being able to check which string is lexicographically superior to the other when needed: if (str1 == str2) { // Do something... } Old errors handling We now have exceptions, so this part only applies to the old languages where no such thing exist (C for example). If we look at C's standard library (and POSIX one too), we can see for sure that maaaaany functions return 0 when successful and any integer otherwise. I have sadly seen some people do this kind of things: #define TRUE 0
// ...
if (some_function() == TRUE)
{
// Here, TRUE would mean success...
// Do something
} If we think about how we think in programming, we often have the following reasoning pattern: Do something
Did it work?
Yes ->
That's ok, one case to handle
No ->
Why? Many cases to handle If we think about it again, it would have made sense to put the only neutral value, 0 , to yes (and that's how C's functions work), while all the other values can be there to solve the many cases of the no . However, in all the programming languages I know (except maybe some experimental esotheric languages), that yes evaluates to false in an if condition, while all the no cases evaluate to true . There are many situations when "it works" represents one case while "it does not work" represents many probable causes. If we think about it that way, having 0 evaluate to true and the rest to false would have made much more sense. Conclusion My conclusion is essentially my original question: why did we design languages where 0 is false and the other values are true , taking in account my few examples above and maybe some more I did not think of? Follow-up: It's nice to see there are many answers with many ideas and as many possible reasons for it to be like that. I love how passionate you seem to be about it. I originaly asked this question out of boredom, but since you seem so passionate, I decided to go a little further and ask about the rationale behind the Boolean choice for 0 and 1 on Math.SE :) | 0 is false because they’re both zero elements in common semirings . Even though they are distinct data types, it makes intuitive sense to convert between them because they belong to isomorphic algebraic structures. 0 is the identity for addition and zero for multiplication. This is true for integers and rationals, but not IEEE-754 floating-point numbers: 0.0 * NaN = NaN and 0.0 * Infinity = NaN . false is the identity for Boolean xor (⊻) and zero for Boolean and (∧). If Booleans are represented as {0, 1}—the set of integers modulo 2—you can think of ⊻ as addition without carry and ∧ as multiplication. "" and [] are identity for concatenation, but there are several operations for which they make sense as zero. Repetition is one, but repetition and concatenation do not distribute, so these operations don’t form a semiring. Such implicit conversions are helpful in small programs, but in the large can make programs more difficult to reason about. Just one of the many tradeoffs in language design. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198284",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81338/"
]
} |
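A tiny C++ check of the correspondence described in the answer above: false behaves like 0, with XOR (!= on bool) playing the role of addition and AND the role of multiplication. The specific test values are arbitrary.
#include <cassert>
int main()
{
    for (bool b : {false, true}) {
        assert((false != b) == b);       // false XOR b == b : additive identity
        assert((false && b) == false);   // false AND b == false : multiplicative zero
    }
    for (int n : {-3, 0, 7}) {
        assert(0 + n == n);              // 0 is the additive identity
        assert(0 * n == 0);              // 0 is the multiplicative zero
    }
    return 0;
}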
198,292 | I tried to find if there's a similar question but didn't even know what keywords I should use :) I have a method in an interface accepting another interface as a parameter: bool CanDoIt(AnInterface subject)
{
if(!(subject is X)) return false;
else ...
} I know that this is a code smell accepting an interface and then checking it's concrete type - unfortunately for now I got to stick with it. I would like to write a test that would check if the requirement is implemented correctly. So that test would fail if someone would ever add a new type implementing AnInterface and changed the method not to meet the requirement. If writing such a test isn't possible then what can I do to consider the coded tested? Would writing a test using few exemplary types would be enough, or is it better not to write it at all? | 0 is false because they’re both zero elements in common semirings . Even though they are distinct data types, it makes intuitive sense to convert between them because they belong to isomorphic algebraic structures. 0 is the identity for addition and zero for multiplication. This is true for integers and rationals, but not IEEE-754 floating-point numbers: 0.0 * NaN = NaN and 0.0 * Infinity = NaN . false is the identity for Boolean xor (⊻) and zero for Boolean and (∧). If Booleans are represented as {0, 1}—the set of integers modulo 2—you can think of ⊻ as addition without carry and ∧ as multiplication. "" and [] are identity for concatenation, but there are several operations for which they make sense as zero. Repetition is one, but repetition and concatenation do not distribute, so these operations don’t form a semiring. Such implicit conversions are helpful in small programs, but in the large can make programs more difficult to reason about. Just one of the many tradeoffs in language design. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198292",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91144/"
]
} |
198,453 | When doing unit tests the "proper" way, i.e. stubbing every public call and return preset values or mocks, I feel like I'm not actually testing anything. I'm literally looking at my code and creating examples based on the flow of logic through my public methods. And every time the implementation changes, I have to go and change those tests, again, not really feeling that I'm accomplishing anything useful (be it mid- or long-term). I also do integration tests (including non-happy-paths) and I don't really mind the increased testing times. With those, I feel like I'm actually testing for regressions, because they have caught multiple, while all that unit tests do is show me that the implementation of my public method changed, which I already know. Unit testing is a vast topic, and I feel like I'm the one not understanding something here. What's the decisive advantage of unit testing vs integration testing (excluding the time overhead)? | When doing unit tests the "proper" way, i.e. stubbing every public
call and return preset values or mocks, I feel like I'm not actually
testing anything. I'm literally looking at my code and creating
examples based on the flow of logic through my public methods. This sounds like the method you are testing needs several other class instances (which you have to mock), and calls several methods on its own. This type of code is indeed difficult to unit-test, for the reasons you outline. What I have found helpful is to split up such classes into: Classes with the actual "business logic". These use few or no calls to other classes and are easy to test (value(s) in - value out). Classes that interface with external systems (files, database, etc.). These wrap the external system and provide a convenient interface for your needs. Classes that "tie everything together" Then the classes from 1. are easy to unit-test, because they just accept values and return a result. In more complex cases, these classes may need to perform calls on their own, but they will only call classes from 2. (and not directly call e.g. a database function), and the classes from 2. are easy to mock (because they only expose the parts of the wrapped system that you need). The classes from 2. and 3. cannot usually be meaningfully unit-tested (because they don't do anything useful on their own, they are just "glue" code). OTOH, these classes tend to be relatively simple (and few), so they should be adequately covered by integration tests. An example One class Say you have a class which retrieves a price from a database, applies some discounts and then updates the database. If you have this all in one class, you'll need to call DB functions, which are hard to mock. In pseudocode: 1 select price from database
2 perform price calculation, possibly fetching parameters from database
3 update price in database All three steps will need DB access, so a lot of (complex) mocking, which is likely to break if the code or the DB structure changes. Split up You split into three classes: PriceCalculation, PriceRepository, App. PriceCalculation only does the actual calculation, and gets provided the values it needs. App ties everything together: App:
fetch price data from PriceRepository
call PriceCalculation with input values
call PriceRepository to update prices That way: PriceCalculation encapsulates the "business logic". It's easy to test because it does not call anything on its own. PriceRepository can be pseudo-unit-tested by setting up a mock database and testing the read and update calls. It has little logic, hence few codepaths, so you do not need too many of these tests. App cannot be meaningfully unit-tested, because it is glue-code. However, it too is very simple, so integration testing should be enough. If later App gets too complex, you break out more "business-logic" classes. Finally, it may turn out PriceCalculation must do its own database calls. For example because only PriceCalculation knows which data its needs, so it cannot be fetched in advance by App. Then you can pass it an instance of PriceRepository (or some other repository class), custom-tailored to PriceCalculation's needs. This class will then need to be mocked, but this will be simple, because PriceRepository's interface is simple, e.g. PriceRepository.getPrice(articleNo, contractType) . Most importantly, PriceRepository's interface isolates PriceCalculation from the database, so changes to the DB schema or data organisation are unlikely to change its interface, and hence to break the mocks. Note: I recently noticed that this concept is fairly similar to Alistair Cockburn's Hexagonal architecture . So I guess I've just been reinventing the wheel...or maybe great minds think alike? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198453",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91262/"
]
} |
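A short sketch of the split described in the answer above, showing why the "business logic" class is the easy one to unit-test: values in, value out, no database and no mocks. The discount rule and the numbers are invented purely for illustration.
#include <cassert>
class PriceCalculation {
public:
    // Pure calculation: applies a percentage discount and never returns a negative price.
    static double DiscountedPrice(double basePrice, double discountPercent)
    {
        const double price = basePrice * (1.0 - discountPercent / 100.0);
        return price < 0.0 ? 0.0 : price;
    }
};
// A unit test is just a couple of calls -- no PriceRepository mock required.
int main()
{
    assert(PriceCalculation::DiscountedPrice(100.0, 25.0) == 75.0);
    assert(PriceCalculation::DiscountedPrice(40.0, 150.0) == 0.0);
    return 0;
}
PriceRepository would wrap the actual database calls and App would only tie the two together, which is why, as the answer says, integration tests are enough for those parts.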
198,471 | How would you define Continuous Integration and what specific components does a CI server contain? I want to explain to someone in the marketing department what Continuous Integration is. They understand Source control - i.e. they use Subversion. But I'd like to explain to them properly what CI is. The Wikipedia Article never properly defines it, the Martin Fowler article only gives the following, which is basically a tautology followed by a vague explanation of 'integration': Continuous Integration is a software development practice where members of a team integrate their work frequently, usually each person integrates at least daily - leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. Update : I sent them this image, I couldn't find a simpler one. Update 2 : Feed back from the marketing chap (for the time when there was 3 questions): I actually like all 3 answers – for different reasons. I feel like logging in just to thank them all! Obviously he can't - so thanks on his behalf :) Update 3 : I've realised looking at the Wikipedia article that it does contain the principles which, when you take just the headings, is quite a good list: Maintain a code repository Automate the build Make the build self-testing Everyone commits to the baseline every day Every commit (to baseline) should be built Keep the build fast Test in a clone of the production environment Make it easy to get the latest deliverables Everyone can see the results of the latest build Automate deployment | When someone changes the files that make up the software product and then attempts to check them in (in other words, attempts to integrate the changes into the main product code) you want to make sure that the software product can still be successfully built. There is usually an external system, called the CI server , that either periodically or on every change, will grab the source files from version control, and attempt to build the product (compile/test/package). If the CI server can successfully do a build, the changes have been successfully integrated. The CI server also has to be able to broadcast if the build failed or succeeded, so systems like Jenkins (one of the most used CI server today) will have ways to send emails/texts as well as a dashboard-like web interface with a bunch of information about current and past builds, who checked-in code, when things broke, etc. (On your image above, this would be the Feedback Mechanism .) CI is important, because it ensures that on a continuous basis, you have a working product. This is important for all the developers who are working on the software product as well as for all the people who want to have access to daily releases of the software product, like QA. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198471",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27956/"
]
} |
198,481 | I was applying markdown comments in the xml comments of a config file when the XmlParser reported that two hyphens ( -- ) are not allowed in xml comments. Checking the XML Specification , it appears that xml comment isn't designed to contain two hyphens for compatibility reasons with SGML parsers. Why do SGML parsers disallow double hyphens in comments? | This page outlines quite a bit of the HTML/SGML history, and the rather convoluted rules of those two consecutive hyphens (double dash). The relevant part about SGML: To put it simply, the double dash at the start and end of the comment do not start and end the comment. Double dash indicates a change in what the comment is allowed to contain. The first -- starts the comment, and tells the browser that the comment is allowed to contain > characters without ending the comment. The second -- does not end the comment. It tells the browser that if it encounters a > character, it must then end the comment. If another -- is added, then it goes back to allowing the > characters. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198481",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12773/"
]
} |
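As an illustration of the double-dash rule quoted in the answer above, here is a toy C++ scanner that applies it literally: each "--" toggles whether we are inside comment text, and a ">" only ends the declaration while we are outside. This is a simplified sketch of the described behaviour, not a conforming SGML parser.
#include <cstddef>
#include <string>
// Returns the index just past the '>' that ends the comment declaration
// starting at s[pos] (expected to be "<!"), or std::string::npos if it never ends.
std::size_t scanCommentDeclaration(const std::string& s, std::size_t pos)
{
    pos += 2;                                         // skip the "<!"
    bool insideCommentText = false;
    while (pos < s.size()) {
        if (s.compare(pos, 2, "--") == 0) {
            insideCommentText = !insideCommentText;   // "--" toggles the mode
            pos += 2;
        } else if (!insideCommentText && s[pos] == '>') {
            return pos + 1;                           // '>' only ends things outside comment text
        } else {
            ++pos;                                    // everything else, including '>' inside text, is data
        }
    }
    return std::string::npos;
}
With this rule, scanCommentDeclaration("<!-- a > b -->x", 0) skips the '>' inside the text and stops only after the final "-->".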
198,565 | I notice that when I code I often use a pattern that calls a class method and that method will call a number of private functions in the same class to do the work. The private functions do one thing only. The code looks like this: public void callThisMethod(MyObject myObject) {
methodA(myObject);
methodB(myObject);
}
private void methodA(MyObject myObject) {
//do something
}
private void methodB(MyObject myObject) {
//do something
} Sometimes there are 5 or more private methods, in the past I have moved some of the private methods into another class, but there are occasions when doing so would be creating needless complexity. The main reason I don't like this pattern is that is not very testable, I would like to be able to test all of the methods individually, sometimes I will make all of the methods public so can I can write tests for them. Is there a better design I can use? | It's a good pattern. This is what Uncle Bob refers to when he talks about extracting til you drop . It shouldn't affect your tests though. You should only be testing the public interface. Yes, you can reflect your way in and test the private methods, but you're still going to have to test them again when you test your public methods, so it doesn't solve the perceived problem. The "problem" comes when you're testing a private method multiple times because you have multiple public methods using them. Sometimes, in fact more frequently than you might think, this doesn't matter. Particularly if you're testing by behaviour, rather than trying to test each line of code. All you want to know is "Does this public method do what it's supposed to do?" You don't care about the implementation details. It does matter, however, when you're testing the same piece of complex logic repeatedly, because it adds multiple tests to each behaviour. At that point, simply extract it into a service and mock it out. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198565",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86834/"
]
} |
198,652 | I started my career as a .NET developer 3 months ago and after a long training plan on diverse technologies, patterns and concepts the developers who were supervising me have decided that I am ready to join one of the many projects the company handles. I am very excited to finally be able to start coding. The team I have joined is rather small for now because were starting with a new project, which is great because I get to be involved in the entire life cycle of the project. It is a web based SPA project with a backed that uses ASP.NET MVC/ASP.NET Web API and in front-end the Durandal framework and related libraries. My problem is that after having a meeting with my colleagues and establishing the tasks and estimations for the next month I find myself in a position that I do not know if I am capable of taking on any of the tasks. I have never done any of the created tasks and I do not know how should I proceed. For example one of tasks created is creating a generic error handling mechanism for the entire application. How does one usually proceed when faced with tasks that he has never done? | There are several things you can and should do to prepare for the task: Think about the problem and draw some diagrams. Make sure that you know what the problem is that you are trying to solve. Do research on what you are trying to do. The internet is a valuable source of information. I am not saying ask Stack Overflow -- I am saying do research on how other people have already solved a problem like yours or approached it. This what google came up with: "Exception Handling as a System Wide Concern" . Personally, I always try to learn from others. Lastly, and this might the most important, talk the other people on your team to get more clarification and direction on what to do. I always want to see new engineers come ask for guidance on projects. Not knowing how to do something will never really end. Every day I run into problems that I have never tackled before. Having the ability to figure out how to solve new problems is extremely important. Even old problems are never totally old -- in programming, there is almost always a new twist or a request for your solution to work in a new or different way. This is why I am an engineer; I love to figure out new stuff. Never stop learning new things. Learning is what makes you better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198652",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80637/"
]
} |
198,671 | I program in C mostly. However, it is pretty obvious that many more commercial applications are done in C++. As far as I can tell, C++ is a very complex language, with seemingly convoluted syntax and too many constructs. C++ also encourages the abuse of objects where structs and functions will do. In fact, the only significant advantage I see in C++ is the use of templated generic types (though, according to the developers of Go, generics are bad for programs). Basically, my question is, did I miss something? Or is C++ more popular purely by merit of luck or marketing? Edit: I'm sorry that I apparently asked a loaded question; in retrospect I can see that the way I worded it appears to be complete flamebait. What I meant was, since C++ has so many different constructs and paradigms available to it, why hasn't it been replaced by languages that do less but are better at that specific thing? For instance, both Java and C# are much better suited for OOP than C++ is, while C is much simpler for system-level programming, and something like Lisp is more suited for functional programming. Why is C++ used over one or more of these other languages? | Basically, my question is, did I miss something? I believe you did, but it has less to do with programming languages and more to do with the human tendency to denigrate the unfamiliar. We do that. It's natural. Rising above it takes a willingness to endure the cognitive dissonance that comes with the comparisons to the familiar when learning something new. You're doing two things you shouldn't: First, you're looking at one language through the lens of another. One of the things you'll start to understand as your horizons become broader is that programming languages are just toolboxes with variations on the same set of familiar tools. The variations exist to solve specific problems. Some toolboxes contain basic tools that force you to do a lot of things yourself; others give you things to make certain complex tasks easier. My wife, who makes jewelry, has a dozen pairs of pliers that have very unusual jaws designed to solve specific problems. To me they're just funny-looking because her problems aren't mine. Second, you're looking at every tool that's not already in your toolbox as something ripe for abuse because it doesn't fit your concept of "how it is." I have news for you: anyone with a lot of experience in a language can find a way to abuse any of its constructs. C -- and don't get me wrong here, because I've written a ton of it -- is especially rife with opportunities for abuse. Writing C is like owning a chainsaw: it's a great tool for removing unwanted limbs, but the line between use and abuse lies in whether those limbs are attached to your trees or your neighbor's. Constructs that enforce better behavior are the equivalent of adding blade guards and chain brakes to chain saws: they were put there by the more-disciplined to keep them from having to clean up the messy results of the less-disciplined abusing their tools and hurting themselves or others. A little light bulb will go on above your head the first time you realize you can do a complete reimplementation of a class without having to wonder if any other code is getting away with directly writing structure fields instead of calling the setter function you so thoughtfully provided. Or is C++ more popular purely by merit of luck or marketing? None of the above.
C++ is in wide use for the same reason as any other popular language: people have found it a useful tool for getting things done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198671",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91264/"
]
} |
198,783 | In a typical (well-designed) MVC web app, the database is not aware of the model code, the model code is not aware of the controller code, and the controller code is not aware of the view code. (I imagine you could even start as far down as the hardware, or perhaps even further, and the pattern might be the same.) Going the other direction, you can go just one layer down. The view can be aware of the controller but not the model; the controller can be aware of the model but not the database; the model can be aware of the database but not the OS. (Anything deeper is probably irrelevant.) I can intuitively grasp why this is a good idea but I can't articulate it. Why is this unidirectional style of layering a good idea? | Layers, modules, indeed architecture itself, are means of making computer programs easier to understand by humans. The numerically optimal method of solving a problem is almost always an unholy tangled mess of non-modular, self-referencing or even self-modifying code - whether it's heavily optimized assembler code in embedded systems with crippling memory constraints or DNA sequences after millions of years of selection pressure. Such systems have no layers, no discernible direction of information flow, in fact no structure that we can discern at all. To everyone but their author, they seem to work by pure magic. In software engineering, we want to avoid that. Good architecture is a deliberate decision to sacrifice some efficiency for the sake of making the system understandable by normal people. Understanding one thing at a time is easier than understanding two things that only make sense when used together. That is why modules and layers are a good idea. But inevitably modules must call functions from each other, and layers must be created on top of each other. So in practice, it's always necessary to construct systems so that some parts require other parts. The preferred compromise is to build them in such a way that one part requires another, but that part doesn't require the first one back. And this is exactly what unidirectional layering gives us: it is possible to understand the database schema without knowing the business rules, and to understand the business rules without knowing about the user interface. It would be nice to have independence in both directions - allowing someone to program a new UI without knowing anything at all about the business rules - but in practice this is virtually never possible. Rules of thumb such as "No cyclical dependencies" or "Dependencies must only reach down one level" simply capture the practically achievable limit of the fundamental idea that one thing at a time is easier to understand than two things. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198783",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9971/"
]
} |
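A minimal sketch of the one-way dependency described in the answer above, with hypothetical Java classes (not from the original question): the lower layer compiles without any reference to the layer above it.
// Lower layer: knows nothing about controllers or views.
class Account {
    private long balanceInCents;

    void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        balanceInCents += cents;
    }

    long balance() {
        return balanceInCents;
    }
}

// Upper layer: depends on the model, never the other way around.
class AccountController {
    private final Account account;

    AccountController(Account account) {
        this.account = account;
    }

    String handleDeposit(String amountParam) {
        account.deposit(Long.parseLong(amountParam));
        return "balance=" + account.balance();
    }
}
Because Account never mentions AccountController, it can be read, tested and replaced on its own, which is exactly the "one thing at a time" property the answer argues for.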
198,918 | At our company we typically make sure that we write an end-to-end test for our websites/web apps. That means we access a URL, fill in a form, submit the form to another URL and check the results of the page. We do this to test form validation, test that the HTML templates have the correct context variables, etc. We also use it to indirectly test the underlying logic. I was told by a co-worker that the reason for this is that we can rip out and change the underlying implementation at any point as long as the end-to-end tests pass. I'm wondering if this sort of decoupling makes sense or if it's just a way to avoid writing tests for smaller units of code? | End-to-end tests are also necessary. How else can you know you hooked up all the units together correctly? On very simple code, it is possible to test all the paths through the code with only end-to-end tests, but as you get more layers, it becomes prohibitively more costly to do so. For example, say you have three layers, each with five possible paths. To test all the paths through the entire application would require 5³ = 125 end-to-end tests, but you can test all the paths through each unit with only 5·3 = 15 unit tests. If you only do end-to-end testing, a lot of paths end up getting neglected, mostly in error handling and boundary conditions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198918",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
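The last sentence of the answer above is where unit tests earn their keep in practice: error and boundary paths are easy to reach in a unit test and awkward to arrange end to end. A hypothetical JUnit 4 sketch (ChargeService and PaymentGateway are made-up names, not from the original question):
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class ChargeServiceTest {

    // Hypothetical collaborator that a unit test can stub out.
    interface PaymentGateway {
        boolean charge(long cents);
    }

    static class ChargeService {
        private final PaymentGateway gateway;

        ChargeService(PaymentGateway gateway) {
            this.gateway = gateway;
        }

        String charge(long cents) {
            if (cents <= 0) {
                return "rejected: invalid amount"; // boundary condition
            }
            return gateway.charge(cents) ? "ok" : "rejected: declined"; // error path
        }
    }

    @Test
    public void declinedChargeIsReported() {
        // Forcing a decline is a one-liner here; doing it through the whole
        // stack (browser, form, real payment provider) is much harder.
        ChargeService service = new ChargeService(cents -> false);
        assertEquals("rejected: declined", service.charge(500));
    }
}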
198,919 | Many languages like C++, C#, and Java allow you to create objects that represent simple types like integer or float. Using a class interface you can override operators and perform logic like checking whether a value exceeds a business rule of 100. I'm wondering if it's possible in some languages to define these rules as annotations or attributes of a variable/property. For example, in C# you might write: [Range(0,100)]
public int Price { get; set; } Or maybe in C++ you could write: int(0,100) x = 0; I've never seen something like this done, but given how dependent we have become on data validation before storage, it's strange that this feature hasn't been added to languages. Can you give examples of languages where this is possible? | Pascal had subrange types, i.e. a way of restricting the range of values that fit into a variable: TYPE name = val_min .. val_max; Ada also has a notion of ranges: http://en.wikibooks.org/wiki/Ada_Programming/Types/range From Wikipedia: type Day_type is range 1 .. 31;
type Month_type is range 1 .. 12;
type Year_type is range 1800 .. 2100;
type Hours is mod 24;
type Weekday is (Monday, Tuesday, Wednesday, Thursday, Friday, Saturday, Sunday); You can also do:
subtype Weekend is Weekday range Saturday .. Sunday;
subtype WorkDay is Weekday range Monday .. Friday; And here's where it gets cool: Year : Year_type := Year_type'First; -- 1800 in this case C does not have a strict subrange type, but there are ways to mimic one (at least to a limited extent) by using bitfields to minimize the number of bits used: struct { int a : 10; } my_subrange_var; This can work as an upper bound for variable content (in general I would say: don't use bitfields for this; it is just to prove a point). A lot of solutions for arbitrary-length integer types in other languages happen at the library level instead, e.g. C++ allows for template-based solutions. There are languages that allow for monitoring of variable states and attaching assertions to them. For example, in ClojureScript: (defn mytest
[new-val]
(and (< new-val 10)
(<= 0 new-val)))
(def A (atom 0 :validator mytest)) The function mytest is called whenever A is changed (via reset! or swap!) and checks whether the conditions are met. This could be an example of implementing subrange behaviour in late-binding languages (see http://blog.fogus.me/2011/09/23/clojurescript-watchers-and-validators/ ). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/198919",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52871/"
]
} |
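As the answer above notes, in languages without built-in subrange types the same guarantee is usually provided at the library level. A hypothetical Java sketch of such a wrapper (the Percentage name and the 0..100 bounds are illustrative):
// A small value type that enforces its range at construction time.
final class Percentage {
    private final int value;

    private Percentage(int value) {
        this.value = value;
    }

    static Percentage of(int value) {
        if (value < 0 || value > 100) {
            throw new IllegalArgumentException("expected 0..100, got " + value);
        }
        return new Percentage(value);
    }

    int asInt() {
        return value;
    }
}
Unlike a Pascal or Ada subrange, the check happens at run time rather than in the type system, but every Percentage that exists in the program is guaranteed to hold a legal value.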
199,055 | I am a researcher, and in my research I do a lot of programming. I am a big fan of the open-source concept - especially in research, where transparency and reproducibility are already a big part of the culture. I gladly contribute as much as I can to the community, and releasing my code for anyone to use is part of that. However, in research there is always a certain measure of uncertainty about what the stuff you produce will be used for. I fully understand that I can't copyright any results or conclusions - but I can protect how others use my code, and I would like to make sure that there is no (legal) way to incorporate software I produce in military applications. I've read through a few of the shorter ones of the common OSS licenses, and summaries of some more, but they all seem to focus solely on the questions "do you earn money on my code?" and "do you make my code available with your program?" - nothing about what the program actually does with the code. Are there any good open-source licenses that explicitly prohibit all kinds of military applications? Update: After reading up some more on how OSS works, I've realized that a license that meets my needs by definition will not be open-source, since open-source licenses cannot discriminate against fields. Thus, I'm rather looking for a license that is like an open-source license, except that it prohibits military use. I want this license to be already existing, authored or at least reviewed by someone who actually knows licensing, since I don't. Also, in response to a couple of remarks that this will be difficult to enforce: yes, I realize that. But this is more for myself than for the legal implications; if I use a license like this, and a military organization uses my code anyway, they are breaking the law and they are doing it despite my explicit instructions not to. Thus, the potentially gruesome things that they do with applications that include software I've written are no longer "on my conscience", since they stole the software from me. (And somewhere I have a naïve hope that if they need something I've done, and my license prohibits them from using it legally, they'll get someone else's program that does the same thing and allows them to use it. Not that governments always do, but they always should abide by the law...) It's a moral safeguard, so to speak, rather than something I actually expect to bring up in court (if my mediocre code is ever used by the CIA...) | How would one enforce such a license? Would you prohibit any military use? If the software checks air pressure in tires, and someone decides to use it on a military Hummer, is that a prohibited use? Can people in the military industrial complex use it to plan their monthly picnic? Would it be an acceptable use if the software improved ballistic missile trajectories, and the improved accuracy of the weapon prevented civilians from being killed? Or would any use in a weapon be prohibited? These are the kinds of questions you have to ask yourself, if you want to make a software license that satisfies your sensibilities. Nevertheless, I'd try and keep it simple. Yahoo's Terms of Use state that their software must not be used "to operate nuclear facilities, life support or other mission critical applications where human life or property may be at stake." That's probably as good a clause as any, if you add the word "weapons" to the prohibited list of uses. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30307/"
]
} |
199,090 | I'm having a debate with a programmer colleague about whether it is a good or bad practice to modify a working piece of code only to make it testable (via unit tests, for example). My opinion is that it is OK, within the limits of maintaining good object-oriented and software engineering practices of course (not "making everything public", etc.). My colleague's opinion is that modifying code (which works) only for testing purposes is wrong. Just as a simple example, think of this piece of code that is used by some component (written in C#): public void DoSomethingOnAllTypes()
{
var types = Assembly.GetExecutingAssembly().GetTypes();
foreach (var currentType in types)
{
// do something with this type (e.g. read its attributes, process it, etc.)
}
} I have suggested that this code can be modified to call out to another method that will do the actual work: public void DoSomething(Assembly asm)
{
// not relying on Assembly.GetExecutingAssembly() anymore...
} This method takes in an Assembly object to work on, making it possible to pass in your own Assembly to do the testing on. My colleague didn't think this was a good practice. What is considered a good and common practice? | Modifying code to make it more testable has benefits beyond testability. In general, code that is more testable is easier to maintain, easier to reason about, more loosely coupled, and has a better overall design, architecturally. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199090",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25605/"
]
} |
199,196 | My lecturer mentioned today that it was possible to "label" loops in Java so that you could refer to them when dealing with nested loops. So I looked up the feature, as I didn't know about it, and in many places where this feature was explained it was followed by a warning discouraging nested loops. I don't really understand why. Is it because it affects the readability of the code? Or is it something more "technical"? | Nested loops are fine as long as they describe the correct algorithm. Nested loops have performance considerations (see @Travis-Pesetto's answer), but sometimes it's exactly the correct algorithm, e.g. when you need to access every value in a matrix. Labeling loops in Java allows you to prematurely break out of several nested loops when other ways to do this would be cumbersome. E.g. some game might have a piece of code like this: Player chosen_one = null;
...
outer: // this is a label
for (Player player : party.getPlayers()) {
for (Cell cell : player.getVisibleMapCells()) {
for (Item artefact : cell.getItemsOnTheFloor())
if (artefact == HOLY_GRAIL) {
chosen_one = player;
break outer; // everyone stop looking, we found it
}
}
} While code like the example above may sometimes be the optimal way to express a certain algorithm, it is usually better to break this code into smaller functions, and probably use return instead of break. So a break with a label is a faint code smell; pay extra attention when you see it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199196",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/79854/"
]
} |
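A sketch of the refactoring the answer above recommends: the nested search is extracted into its own small method (reusing the hypothetical Player, Party, Cell and Item types from the answer's example), and a plain return replaces the labeled break.
// Same search as the labeled-break example, expressed as its own method.
Player findGrailHolder(Party party) {
    for (Player player : party.getPlayers()) {
        for (Cell cell : player.getVisibleMapCells()) {
            for (Item artefact : cell.getItemsOnTheFloor()) {
                if (artefact == HOLY_GRAIL) {
                    return player; // return exits all three loops; no label needed
                }
            }
        }
    }
    return null; // not found
}
The caller then reads as Player chosenOne = findGrailHolder(party); and the label disappears.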
199,198 | Recently I've been involved in an agile project (using scrum) where management came up with the idea that the team would nominate a developer 'MVP' as well as a QA 'MVP' at the end of each sprint, voted on by the team. The MVP then gets a small monetary reward and free lunch as well as a trophy to display on his desk. We've had two sprints so far with this reward system in place. The good I see from this is the following: More bugs have been fixed (which is what upper management wants to see, a number change in the direction they want), and the MVP from each 'team' gets recognized and gets a self-esteem boost (or is it an ego boost?). I've noticed a few of what I would consider bad sides to doing such a thing (at least from a developer standpoint): There are a few developers who are so concerned with the number that the quality of bug fixes has gone down. Fixes in one area are causing regressions in another. There are a few developers who are cherry-picking 'easier/quicker' bugs to boost their bug count. Could be good or bad here, I guess. Higher priority (which a lot of the time correlates to 'harder/longer to fix') defects actually become lower priority. Blocking defects are not addressed in a timely manner, as usually they take longer and require more coordination with QA. The team aspect within the Dev team has been lost. The team aspect of Dev and QA working together as one hasn't improved either, but hasn't really changed too much from before. Work beyond bug fixes, or working towards THAT number, isn't easily recognizable/tracked. My question is, then, has anyone successfully pulled anything like this off, where you recognize an MVP per sprint? If so, what do you think contributed to that success? | Agile emphasizes team effort, not the effort of individuals. So no, this approach is clearly not agile. Rather than encourage team collaboration, this encourages each team member to focus on his own result. It can even lead to members avoiding helping each other (or worse), which in the long run will stop the team from getting better. I suggest rewarding the team as a whole if they did a good job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199198",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17423/"
]
} |
199,217 | In Rich Hickey's thought-provoking goto conference keynote "The Value of Values", at 29 minutes he's talking about the overhead of a language like Java and makes a statement like, "All those interfaces kill your reuse." What does he mean? Is that true? In my search for answers, I have run across: The Principle of Least Knowledge, AKA the Law of Demeter, which encourages airtight API interfaces. Wikipedia also lists some disadvantages. Kevlin Henney's Imperial Clothing Crisis, which argues that use, not reuse, is the appropriate goal. Jack Diederich's "Stop Writing Classes" talk, which argues against over-engineering in general. Clearly, anything written badly enough will be useless. But how would the interface of a well-written API prevent that code from being used? There are examples throughout history of something made for one purpose being used more for something else. But in the software world, if you use something for a purpose it wasn't intended for, it usually breaks. I'm looking for one good example of a good interface preventing a legitimate but unintended use of some code. Does that exist? I can't picture it. | Haven't watched the full Rich Hickey presentation, but if I understand him correctly, and judging from what he says around the 29-minute mark, he seems to be arguing about types killing reuse. He is using the term "interface" loosely as a synonym for "named type", which makes sense. If you have two entities { "name":"John" } of type Person, and { "name": "Rover" } of type Dog, in Java-land they probably cannot interoperate unless they share a common interface or ancestor (like Mammal, which means writing more code). So the interfaces/types here are "killing your reuse": even though Person and Dog look the same, one cannot be used interchangeably with the other, unless you write additional code to support that. Note Hickey also jokes about projects in Java needing lots of classes ("Who here has written a Java application using just 20 classes?"), which seems to be one consequence of the above. In "value-oriented" languages, however, you won't assign types to those structures; they are just values which happen to share the same structure (in my example, they both have a name field with a String value) and therefore can easily interoperate, e.g. they can be added to the same collection, passed to the same methods, etc. To sum up, all this seems to be about structural equality vs. explicit type/interface equality. Unless I missed something from the portions of the video I haven't watched yet :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199217",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62323/"
]
} |
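To make the Person/Dog point concrete, here is a small hypothetical Java sketch (not from the talk): the two classes are structurally identical yet cannot be used interchangeably without extra code, while plain maps of values can.
import java.util.List;
import java.util.Map;

public class NamedTypesDemo {
    // Nominal typing: identical shape, but no common type to program against.
    static class Person { String name; }
    static class Dog { String name; }

    public static void main(String[] args) {
        // A List<Person> cannot hold a Dog, and a method taking a Person
        // cannot accept a Dog, unless we introduce an interface such as Named.

        // Value-oriented style: both are just data with the same shape.
        List<Map<String, String>> things = List.of(
                Map.of("name", "John"),
                Map.of("name", "Rover"));
        things.forEach(t -> System.out.println(t.get("name")));
    }
}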
199,310 | In languages that don't allow underscores in integer literals, is it a good idea to create a constant for 1 billion? e.g. in C++: const size_t ONE_BILLION = 1000000000; Certainly, we shouldn't create constants for small numbers like 100. But with 9 zeros, it's arguably easy to leave off a zero or add an extra one in code like this: tv_sec = timeInNanosec / 1000000000;
tv_nsec = timeInNanosec % 1000000000; | Create one called NANOSECONDS_IN_ONE_SECOND instead, as that is what it represents. Or a shorter, better name if you can think of one. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199310",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61473/"
]
} |
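A hedged Java sketch of the answer's suggestion (the TimeSplit class and field names are made up): the constant is named for what it represents, and the nine zeros are written and checked exactly once.
class TimeSplit {
    // Nine zeros, written once and verified once.
    private static final long NANOSECONDS_IN_ONE_SECOND = 1000000000L;

    final long seconds;
    final long nanoseconds;

    TimeSplit(long timeInNanoseconds) {
        seconds = timeInNanoseconds / NANOSECONDS_IN_ONE_SECOND;
        nanoseconds = timeInNanoseconds % NANOSECONDS_IN_ONE_SECOND;
    }
}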
199,311 | In any given class definition, I've seen the method definitions ordered in various ways: alphabetical, chronological based on most common usage, alphabetical grouped by visibility, alphabetical with getters and setters grouped together, etc. When I start writing a new class, I tend to just type everything in, then reorder when I'm done writing the entire class. On that note, I've got three questions: Does order matter? Is there a "best" order? I'm guessing there's not, so what are the pros and cons of the different ordering strategies? | In some programming languages, order does matter because you can't utilize things until after they've been declared. But barring that, for most languages it doesn't matter to the compiler. So then, you're left with it mattering to humans. My favorite Martin Fowler quote is: Any fool can write code that a computer can understand. Good programmers write code that humans can understand. So I'd say that the ordering of your class should depend on what makes it easy for humans to understand. I personally prefer the step-down treatment that Bob Martin gives in his Clean Code book. Member variables at the top of the class, then constructors, then all other methods. And you order the methods to be close together with how they are used within the class (rather than arbitrarily putting all public then private then protected). He calls it minimizing the "vertical distance" or something like that (don't have the book on me at the moment). Edit: The basic idea of "vertical distance" is that you want to avoid making people jump all around your source code just to understand it. If things are related, they should be closer together. Unrelated things can be farther apart. Chapter 5 of Clean Code (great book, btw) goes into a ton of detail on how Mr. Martin suggests ordering code. He suggests that reading code should work kind of like reading a newspaper article: the high-level details come first (at the top) and you get more detail as you read down. He says, "If one function calls another, they should be vertically close, and the caller should be above the callee, if at all possible." Additionally, related concepts should be close together. So here's a contrived example which is bad in many ways (poor OO design; never use double for money) but illustrates the idea: public class Employee {
...
public String getEmployeeId() { return employeeId; }
public String getFirstName() { return firstName; }
public String getLastName() { return lastName; }
public double calculatePaycheck() {
double pay = getSalary() / PAY_PERIODS_PER_YEAR;
if (isEligibleForBonus()) {
pay += calculateBonus();
}
return pay;
}
private double getSalary() { ... }
private boolean isEligibleForBonus() {
return (isFullTimeEmployee() && didCompleteBonusObjectives());
}
public boolean isFullTimeEmployee() { ... }
private boolean didCompleteBonusObjectives() { ... }
private double calculateBonus() { ... }
} The methods are ordered so they are close to the ones that call them, working our way down from the top. If we had put all the private methods below the public ones, then you'd have to do more jumping around to follow the flow of the program. getFirstName and getLastName are conceptually related (and getEmployeeId probably is too), so they are close together. We could move them all down to the bottom, but we wouldn't want to see getFirstName at the top and getLastName at the bottom. Hopefully this gives you the basic idea. If you're interested in this kind of thing, I strongly recommend reading Clean Code . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199311",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91272/"
]
} |
199,331 | A certain failure of OOP is shown with a class Square inheriting from Rectangle, where logically Square is a specialization of Rectangle and should therefore inherit from it, but everything falls apart when you attempt to change a Square's length or width. Is there a specific term for describing what is going wrong with that case? | Wikipedia merely refers to it as the Circle-ellipse problem: The circle-ellipse problem in software development (sometimes known as the square-rectangle problem) illustrates a number of pitfalls which can arise when using subtype polymorphism in object modelling. The issues are most commonly encountered when using object-oriented programming. This is the L in the acronym S.O.L.I.D., which is known as the Liskov substitution principle. This problem arises as a violation of that principle. The problem concerns which subtyping or inheritance relationship should exist between classes which represent circles and ellipses (or, similarly, squares and rectangles). More generally, the problem illustrates the difficulties which can occur when a base class contains methods which mutate an object in a manner which might invalidate a (stronger) invariant found in a derived class, causing the Liskov substitution principle to be violated... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/84087/"
]
} |
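The standard illustration of the violation, as a hypothetical Java sketch: code written against Rectangle's contract silently breaks when handed a Square.
class Rectangle {
    protected int width, height;

    void setWidth(int w)  { width = w; }
    void setHeight(int h) { height = h; }
    int area()            { return width * height; }
}

class Square extends Rectangle {
    // A square must keep its sides equal, so each setter mutates both fields...
    @Override void setWidth(int w)  { width = w; height = w; }
    @Override void setHeight(int h) { width = h; height = h; }
}

class Client {
    // ...which breaks any caller relying on Rectangle's contract.
    static int stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(4);
        return r.area(); // 20 for a Rectangle, 16 for a Square
    }
}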