Dataset schema: source_id (int64, values 1 to 4.64M), question (string, 0 to 28.4k characters), response (string, 0 to 28.8k characters), metadata (dict).
206,035
I know there is a difference between INNER JOIN and FULL OUTER JOIN, and I can see it, but what is the difference between the following two: JOIN ... ON ... and INNER JOIN ... ON ..., and likewise JOIN ... ON ... vs. FULL OUTER JOIN ... ON ...? I ask because I think just using JOIN may be messing up a query I am working on that is posted on SO (link to the question HERE). So basically, what is the syntactical difference between the actual set operations themselves? Thank you.
JOIN and INNER JOIN are the same; the INNER keyword is optional, because all joins are considered inner joins unless otherwise specified. The difference between JOIN and FULL OUTER JOIN is therefore the same as the difference between INNER JOIN and FULL OUTER JOIN. An INNER JOIN returns only matched rows: if a row in table A matches many rows in table B, the table A row is repeated for each matching table B row, and vice versa. A FULL OUTER JOIN returns everything an inner join does, plus all unmatched rows from each table. For example, if table A contains keys 1 and 2 and table B contains only key 1, an INNER JOIN returns just the row for key 1, while a FULL OUTER JOIN also returns the unmatched row for key 2 with NULLs in the columns coming from table B.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206035", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59333/" ] }
206,182
I am seeing a lot of programmers turning away from management and administration roles. They want to build stuff. And as a result, a lot of these positions are filled by non-technical people. I fail to see how they add value. Is scheduling meetings, booking offsites and other administrative work enough to justify their role?
You ask whether scheduling meetings, booking offsites and other administrative work is enough to justify their role. Don't underestimate the amount of interaction your manager has with other departments. They handle budgets, training plans, and HR paperwork. They protect the developers from getting sucked into meetings with other departments and provide a unified front for your group. In short, their job is to protect self-motivated developers from all of the other demotivating things that exist in a business.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206182", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16639/" ] }
206,197
I have been doing a lot of code review lately, and I am unsure of the positive and negative effects and professionalism of putting positive and/or funny comments in code reviews. We use Github as our code review platform on my team, so the comments are viewable by anyone. I generally try to use this platform so the entire process from start to finish is visible and historical.
It's important to highlight positives as well as negatives. I know if I were reviewing the refactor of a particular hellish subsystem into something neat and clean, I'd probably buy the programmer a pizza for his efforts. If you're using reviews as training, it's doubly important - highlighting a good piece of code will be helpful for the junior programmers also reviewing that code. They will have a chance to ask questions about why a particular approach or technique is better than another.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206197", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97843/" ] }
206,293
Is there a reason, historical or otherwise, why the modulus operator is part of a small set of standard operators in what seems like many languages? ( +, -, *, / and % , for Java and C, with ** in Ruby and Python). It seems strange to include mod as a "fundamental" (not to knock it, I use it plenty, but I also use exponentiation, absolute value, floor/ceiling or others -- they seem just as useful and necessary). Was this an old decision made in some specification which Java, C, Ruby and Python all follow or a language they are all descended from? As far as I can tell most Lisp dialects only include +, -, / and * . At first I wondered if mod was particularly easy to implement at the binary level (would that even make a difference, regarding decisions about what should be a "fundamental" operator and what shouldn't?) but it seems not to be. Is it just much more commonly used in programming than I think?
I am sure it is common because many CPU architectures implement modulus as a second output of the integer divide instruction. I don't recall it being present in 1970s CPUs (6800, 8080, Z80, 1604, etc.), but by the 1980s the Intel 8086 and 8088, as well as the Motorola 6809, had it. The PDP-11 instruction architecture specified DIV as producing a quotient and a remainder from the beginning (1970); although the MUL and DIV instructions were not present on early designs, they could be transparently emulated via an "instruction not implemented" trap and a handler that did bit twiddling. Probably the PDP-11 feature encouraged the very first edition of the C language to provide the % operator. (Ever notice how a percent sign has a slash in it? That makes it a cleverish choice for a division-related operator.) The presence of modulus in C alone can probably explain its presence in all modern languages. C has a very large family of descendants and was otherwise quite influential.
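As a small illustration of my own (not part of the original answer), the relationship between integer division and the % operator that such divide instructions expose can be shown in a few lines of Java; on architectures where the divide instruction produces both results, computing the remainder alongside the quotient costs essentially nothing extra, which is the point made above.

public class DivRem {
    public static void main(String[] args) {
        int a = 17, b = 5;
        int quotient = a / b;   // integer division: 3
        int remainder = a % b;  // modulus/remainder: 2
        // The identity the hardware relies on: a == quotient * b + remainder
        System.out.println(a == quotient * b + remainder); // prints true
    }
}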
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206293", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/197372/" ] }
206,310
I'm looking into building my first mobile app. One of the core features of the application is that multiple devices/users will have access to the same data -- and all of them will have CRUD rights. I believe the architecture should involve a central server where all the data is stored. The devices will use an API to interact with the server to perform their data operations (e.g. adding a record, editing a record, deleting a record). I imagine a scenario where synchronizing the data will become a problem. Assume the application should work when it is not connected to the Internet, and thus cannot communicate with this central server. So:

- User A is offline and edits record #100
- User B is offline and edits record #100
- User C is offline and deletes record #100
- User C goes online (presumably, record #100 should get deleted on the server)
- Users A and B go online, but the record they edited no longer exists

All sorts of scenarios similar to the above can come up. How is this generally handled? I plan to use MySQL, but am wondering if it's not appropriate for such a problem.
I'm currently working on a mobile/desktop/distributed app with exactly the same requirements and issues. First of all, these requirements are not inherent to mobile apps per se, but to any disconnected/distributed client-server transactions (parallel programming, multithreading, you get the point). As such they are, of course, typical issues to address in mobile apps. Generally, what this all boils down to is that you have a data record that is distributed to n clients, who may edit it at the same time. What you need is:

1. a proper version control/locking mechanism,
2. proper rights/access management,
3. a proper synchronization/caching strategy.

For (1) you may apply some patterns. There are two frequently used locking strategies: Optimistic Offline Locking and Pessimistic Offline Locking. Some of these come applied in different version control "patterns", such as Multiversion Concurrency Control (MVCC), which uses a counter (a sort of very simple "time stamp") for every data record that is updated whenever the record is changed. (2) and (3) are very broad issues themselves, which need to be dealt with independently of (1). Some advice from my experience:

- Use a client-server technology that abstracts away most of the issues for you. I highly recommend a web technology such as CouchDB, which handles (1) via Optimistic Offline Locking + MVCC, (2) via its Web API, and (3) via HTTP caching very well. Try not to invent things yourself if you can rely on proven technologies and approaches. I believe any hour spent researching and comparing existing technologies/patterns is far better spent than trying to implement your own system(s).
- Try to use homogeneous technologies if possible. By "homogeneous" I mean technologies that have been built with the same principles in mind, e.g. web 2.0 usage scenarios. An example: using a proper CouchDB and REST client (Web API) with a local caching strategy is a better choice than using SQL for mobile apps.
- I strongly advise against the use of MySQL, because it is a technology that was not explicitly made for such usage scenarios. It works, but you are much better off with a database system that already embraces the web communication and concurrency style (such as many NoSQL databases).

By the way, I have settled for CouchDB with a custom local client working against the CouchDB APIs, which works and scales beautifully. I switched from using MSQL + (N)Hibernate and paid a high price for not making the right choice (meaning not doing enough research) in the first place.
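To make the optimistic-locking idea concrete, here is a minimal Java sketch of my own (the class and method names are invented for illustration, not taken from the answer): a server-side update carries a per-record version counter in the MVCC spirit described above, and a write is accepted only if the client's last-seen version still matches; otherwise the client must re-sync and resolve the conflict.

import java.util.HashMap;
import java.util.Map;

/** Hypothetical server-side store sketching optimistic offline locking. */
class RecordStore {
    record StoredRecord(long id, long version, String payload) {}

    private final Map<Long, StoredRecord> records = new HashMap<>();

    public synchronized void insert(StoredRecord r) {
        records.put(r.id(), r);
    }

    /** A client submits an edit together with the version it last synced. */
    public synchronized StoredRecord update(long id, long expectedVersion, String newPayload) {
        StoredRecord current = records.get(id);
        if (current == null) {
            throw new IllegalStateException("Record " + id + " was deleted by another client");
        }
        if (current.version() != expectedVersion) {
            // Another client changed the record since this one last synced: reject and let it re-sync.
            throw new IllegalStateException("Conflict on record " + id + ": expected version "
                    + expectedVersion + ", server has " + current.version());
        }
        StoredRecord updated = new StoredRecord(id, current.version() + 1, newPayload);
        records.put(id, updated);
        return updated;
    }
}

In the question's scenario, user C's delete would remove the record (or mark it deleted), so users A and B would get a conflict or missing-record error when they reconnect and could then be prompted to resolve it.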
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206310", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/93973/" ] }
206,321
I have an open source script for a specific site (I'm trying not to call anything by name here) that I and a few other developers recently moved to GitHub. We've gotten several new developers since we moved to the new system, including one very active one in particular. However, this active one has started changing a lot of the project. First of all, he deleted our versioning system (not like Git, but like that -- we called it versions v4.1.16 ) and said it would be better to simply push the code to the site when we think it's ready. Now there's no centralized place to put release notes, which has gotten annoying. The thing that has gotten me just about ready to pack my bags and go was the push script. Another developer on the project wrote a simple Python-based push script. Since we keep multiple versions of the script online in various places, I began coding a larger Java program with a graphical interface that will replace the Python script. I went on IRC to notify everyone about it, and I got a very annoying response from the programmer saying that the old Python-based script can do everything mine can do and is so much more lightweight (he also commented about the fact that he thought Python was better than Java and so on). I looked over the code for the old push script and saw that none of the features he said existed were there. So now I want to know what to do. I've spent a lot of my time on this project, so I don't want to just get up and leave, but I'm finding it hard to work with this new developer. On the flip side, he is now the #1 committer on the project, with even more commits than the lead developer. I'm not really sure what to do about this. Has anybody else experienced this problem? If so, what did you do? UPDATE 1 : I have disabled everyone's commit access and I am requesting people go through pull requests. I also proposed several measures to fix the other issues. Everyone else hasn't shown any support for it. The troublesome dev has simply said that people who don't follow the "commit action" closely can think that the project is disorganized when it really isn't. I obviously don't agree with this, so I am seriously contemplating resigning from the project. UPDATE 2 : The lead developer began ranting about the fact that one of my commits supposedly deleted three newlines in the code (the revert commit showed up just after I posted the discussion, and doesn't even reference my "commit"), and then the two of them began discussing whether to revoke my commit access. So, I have done the logical thing and left the project. Thanks for your help with this everyone!
- You can quit. Not the most constructive thing to do, but sometimes it's the only option. If you do, don't sit around and moan about how you had to give it up; take that energy and put it straight into something else - 'move on', in other words.
- You can fork it. There's no reason why you have to work with anyone. Fork, improve the code and let the others continue to have a little ego-fest of their own. Your new project will simply compete with the old one, and it's up to you whether you make a success of it or the old one beats you in terms of users and features.
- You can engage with the rest of the development team on the project to voice your concerns. Don't make it personal, but make it known that you're unhappy with the code churn, the lack of established quality processes, or the fact that new decisions are just pushed out without agreement from everyone. You'll either be told that nothing's wrong enough to change, or you'll get a few others agreeing with you that the team needs to fix things up. That might end up with the disruptive guy losing his commit access, or maybe you'll all agree that some of the changes are not improvements and the project needs to be reverted. (This latter option is the most likely outcome, unless it turns into a massive argument of entrenched opinions.)

It can be difficult when someone comes along and changes the safe and comfy routines you've become used to, but it could be said that having someone shake up the old, cozy practices is a good thing in itself.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206321", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/88139/" ] }
206,374
Everyone says that I should make my code modular, but isn't it less efficient if I use more method calls rather than fewer, but larger, methods? What is the difference in Java, C, or C++ for that matter? I get that it is easier to edit, read and understand, especially in a group. So is the computation time loss insignificant compared to the code tidiness benefits?
Yes, the cost is irrelevant. Computers are tireless, near-perfect execution engines working at speeds that brains simply cannot match. While there is a measurable amount of time that a function call adds to the execution time of a program, it is nothing compared to the additional time needed by the brain of the next person involved with the code when they have to disentangle an unreadable routine just to begin to understand how to work with it. You can try the calculation as a joke: assume that your code has to be maintained only once, and that the tangled version adds only half an hour to the time someone needs to come to terms with the code. Take your processor clock speed and calculate how many times the code would have to run to even dream of offsetting that. In short, taking pity on the CPU is completely, utterly misguided 99.99% of the time. For the rare remaining cases, use profilers. Do not assume that you can spot those cases - you can't.
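A back-of-the-envelope version of that joke calculation, as a rough Java sketch of my own (the per-call overhead is an assumed figure of a few nanoseconds, not a measured one, and in practice a JIT compiler will often inline small methods away entirely):

public class CallOverhead {
    public static void main(String[] args) {
        double callOverheadSeconds = 5e-9;          // assumed: ~5 ns per extra method call
        double maintenancePenaltySeconds = 30 * 60; // half an hour of a maintainer's time

        double callsToBreakEven = maintenancePenaltySeconds / callOverheadSeconds;
        System.out.printf("The extra call must execute ~%.0e times to cost as much "
                + "as one half-hour of reading time.%n", callsToBreakEven);
        // Prints roughly 4e+11 -- hundreds of billions of executions.
    }
}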
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206374", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97417/" ] }
206,431
I'm a Java developer with a bit more than a year of experience which places me somewhere above a junior, but not among mid-level developers yet. Recently I was offered a long-term project which is about studying an existing banking application code for 4 months and then introducing changes when needed. As a not-so-experienced programmer I'm looking for ways to develop and I wonder what such a project might give. Would you consider dealing with a big and probably not-so-well written application a good practice for a beginner?
Troubleshooting existing code is a super way to develop as a programmer. If the code is bad, you will learn the impact of the mistakes its authors made, and maybe avoid some of them when you are doing design work yourself. If the code is good, you will learn something about how to make a maintainable application. You will also learn to deal with the complexity of a real business application. Since this is in the banking sector, you will learn about things like federal regulation and internal accounting controls that you may never have even thought of. These are good things to know when you get asked to design something else in the financial world. And financial programming can be quite a lucrative sector to work in, so getting banking experience may be very good for you. You may even learn that just because something was written 15 years ago, in a language that you would prefer not to use, it is not necessarily bad. It has been running successfully all this time, after all. If, as with most legacy apps, the application doesn't have unit tests and you need to be really sure a change won't affect something else, you may learn how to add that testing and how to sell management on why adding it is a good idea.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206431", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/98079/" ] }
206,475
It was a freelance job of mine on oDesk. I have finished several earlier jobs on time, but this was the first time I missed a deadline. It was a very lengthy job and I tried my best, but I still missed the deadline. Now I am very scared, because it is my fault that I missed it. My question is: is this a big concern, or are missed deadlines common enough in programming jobs that I shouldn't worry too much about it?
Yes. Missed deadlines are common in software development. Many freelancers meet deadlines by incurring technical debt or hiding the dirt under the rug. Paraphrasing Frederick Brooks' The Mythical Man-Month:

- Deadlines are often missed because project leaders continue to estimate software tasks the same way they estimate civil engineering tasks, which is a flawed approach because software is a novel, handicraft industry with no clear body of norms. This is so true that you cannot revoke a programmer's "permit" to code for malpractice, nor can you sue someone for programming without a title.
- Software development has inherent complexity that other disciplines lack. A big program can have more components than a car, and these components can interact in more, and more varied, ways. Software is hard to visualize, so different kinds of diagrams are used to see different aspects of a project, and these aspects may not be orthogonal. Civil engineering, on the other hand, has blueprints allowing you to see plumbing, wiring, etc. all in the same chart (or layers) in an orthogonal way.
- It's not common, after a bridge or building is half built, for the client to completely change the scope of the project; this is often the case in software projects.
- The state of the art in software development hasn't reached the point where software projects are repeatable and almost risk free. Even the largest software companies, like Microsoft, can miss deadlines by months or years. Most vaporware is nothing but software projects that were cut because of these kinds of problems.

In conclusion: bad estimates and underestimation of complexity, due to the handicraft nature of the software development process, mean it remains an immature discipline.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206475", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
206,536
I am writing a Java web application that consists mainly of a bunch of similar pages, each of which has several tables and a filter that applies to those tables. The data in these tables comes from an SQL database. I am using MyBatis as the ORM, which may not be the best choice in my case, since the database is poorly designed and MyBatis is a rather database-oriented tool. I am finding that I am writing a lot of duplicate code because, due to the database's poor design, I have to write different queries for similar things, as those queries can be very different. That is, I cannot easily parameterise the queries. This propagates into my code, and instead of populating rows and columns in my table with a simple loop I have code like:

get A Data (p1, ..., pi);
get B Data (p1, ..., pi);
get C Data (p1, ..., pi);
get D Data (p1, ..., pi);
...

And this soon explodes when we have different tables with different columns. It also adds to the complexity that I'm using Wicket, which is, in effect, a mapping of objects to HTML elements in the page. So my Java code becomes an adapter between the database and the front end, which has me creating a lot of wiring, boilerplate code with some logic intermingled in it. Would the correct solution be wrapping the ORM mappers with an extra layer that presents a more homogeneous interface to the DB, or is there a better way to deal with this spaghetti code I'm writing?

EDIT: More info about the database. The database holds mainly phone call information. The poor design consists of:

- Tables with an artificial ID as primary key that has nothing to do with the domain knowledge.
- No unique constraints, triggers, checks or foreign keys whatsoever.
- Fields with a generic name that match different concepts for different records.
- Records that can be categorised only by crossing with other tables with different conditions.
- Columns which should be numbers or dates stored as strings.

To sum it up, a messy / lazy design all around.
Object orientation is valuable specifically because these types of scenarios arise; it gives you tools to design reasonable abstractions that allow you to encapsulate complexity. The real question here is: where do you encapsulate that complexity?

Let me step back a moment and speak to what 'complexity' I'm referring to here. Your problem (as I understand it; correct me if I'm wrong) is a persistence model which is not an effectively usable model for the tasks you need to complete with the data. It may be effective and usable for other tasks, but not for yours. So what do we do when we have data that does not present a good model for our needs? Translate. It's the only thing you can do, and that translation is the 'complexity' I refer to above.

Now that we accept we're going to translate the model, we need to decide on a couple of factors. Do we need to translate in both directions? Are both directions going to be translated the same way, as in:

(Tbl A, Tbl B) -> Obj X  (read)
Obj X -> (Tbl A, Tbl B)  (write)

or do insertion/update/delete activities represent a different type of object, such that you read data as Obj X but data is inserted/updated from Obj Y? Which of these two ways you wish to go - or whether update/insert/delete is even possible - is an important factor in where you want to put the translation.

Where do you translate? Back to the first statement I made in this answer: OO allows you to encapsulate complexity, and the point is that you not only should but must encapsulate that complexity if you wish to ensure it doesn't leak out and seep into all of your code. At the same time, it's important to recognize that you can't have a perfect abstraction, so worry less about that than about having a very effective and usable one. So, again: where do you put this complexity? Well, you have choices.

You can do it in the database using stored procedures. This has the drawback of often not playing very well with ORMs, but that's not always true. Stored procedures afford some benefits, often including performance. They can, however, require a lot of maintenance, and it's up to you to analyze your particular scenario and decide whether that maintenance will be more or less than with the other choices. I personally am very skilled with stored procedures, and that available talent reduces the overhead; never underestimate the value of making decisions based on what you already know. Sometimes the suboptimal solution can be more optimal than the "correct" solution because you or your team can create and maintain it better.

Another in-database option is views. Depending on your database server, these may be highly optimal, sub-optimal, or not even effective at all; one of the drawbacks can be query times, depending on what indexing options are available in your database. Views become an even better choice if you never need to make any data modification (insert/update/delete).

Stepping past the database, you have the old standby of the repository pattern. This is a time-tested approach which can be very effective. Its drawbacks tend to include boilerplate, but well-factored repositories can avoid some of this, and even when they do result in unfortunate amounts of boilerplate, repositories tend to be simple code that's easy to understand and maintain while presenting a good API/abstraction. Repositories are also good for unit-testability, which you lose with the in-database options.
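As a rough Java sketch of that repository idea (my own illustration; the Call domain object, the raw row type and the MyBatis-style mapper interface are invented names, not taken from the question), the translation from the messy persistence rows into one clean domain type lives in exactly one place:

import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.List;

/** Clean domain model exposed to the rest of the application. */
record Call(String caller, String callee, LocalDateTime startedAt, int durationSeconds) {}

/** Raw row shape as the legacy schema actually stores it (everything is a string). */
record CallRow(String field1, String field2, String field3, String field4) {}

/** The repository is the single place that knows how ugly the schema is. */
class CallRepository {
    private static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("yyyyMMddHHmmss");

    private final CallRowMapper mapper; // e.g. a MyBatis mapper returning raw rows

    CallRepository(CallRowMapper mapper) {
        this.mapper = mapper;
    }

    /** UI/service code only ever sees Call objects, never CallRow. */
    List<Call> findCallsForNumber(String number) {
        return mapper.selectRowsForNumber(number).stream()
                .map(CallRepository::translate)
                .toList();
    }

    private static Call translate(CallRow row) {
        return new Call(
                row.field1(),                           // generic column actually holds the caller
                row.field2(),                           // ...and this one the callee
                LocalDateTime.parse(row.field3(), FMT), // dates stored as strings
                Integer.parseInt(row.field4()));        // numbers stored as strings
    }

    interface CallRowMapper {
        List<CallRow> selectRowsForNumber(String number);
    }
}

The rest of the code (Wicket pages included) then depends only on Call, so when the schema's quirks change, only the repository's translate step has to follow.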
There are tools like auto-mapper out there that may make using an ORM plausible, because they can do the translation between the database model the ORM exposes and usable domain models. Some of these tools can be tricky to maintain and understand, behaving more like magic, though they generate a minimum of overhead code, resulting in less maintenance overhead when well understood.

Beyond that you're stepping further and further from the database, which means there's going to be a greater amount of code dealing with the un-translated persistence model, and that is going to be genuinely unpleasant. In these scenarios you end up putting the translation layer in your UI, which it sounds like you may be doing now. This is generally a very bad idea, and it decays terribly over time.

Now let's start talking crazy. The object is not the only end-all, be-all abstraction that exists. A profusion of abstractions has been developed over the many years that computer science has been studied, and even before then from the study of math. If we're going to get creative, let's talk about known, well-studied abstractions.

There's the actor model. This is an interesting approach because all you do is send messages to other code, which effectively delegates all the work to that other code and is very effective at encapsulating the complexity away from the rest of your code. It could work like this: you send a message to an actor saying "I need Obj X sent to Y", and you have a receptacle waiting for a response at location Y which then processes Obj X. You could even send a message that says "I need Obj X with computations Y and Z done to it" and then not wait at all; the translation occurs on the other side of that message pass, and you can just move on if you don't need to read its result. This may be a slight abuse of the actor model for your purposes - its main goal is making asynchrony and concurrency easy to handle well - but both of those things are themselves just abstractions which can act as boundaries to encapsulate the complexity we refer to here (or any form of complexity, for that matter).

Another encapsulation boundary is a process boundary. These can be used to segregate complexity very effectively. You could create the translation code as a web service where the communication is plain HTTP, using SOAP, REST, or, if you really want, your own protocol (not suggested) - STOMP isn't altogether a bad newer protocol. Or use a normal daemon service with a system-local, published memory pipe for communicating very quickly, again using whichever protocol you choose. This actually has some pretty good benefits:

- You can have multiple translation processes running at the same time for older and newer version support, allowing you to update the translation service to publish an object model V2 and then, separately and later, update the consuming code to work with the new model.
- You can do interesting things like pinning the process to a core for performance, and you also get a degree of security by making it the only process running with the privileges to touch that data.
- You get a very strong boundary that will stay fixed, ensuring minimal leakage of your abstraction for a long time, because code in the translation space cannot be called from outside it - the two sides don't share process scope - which ensures a fixed set of usage scenarios by contract.
- Asynchronous/non-blocking updates become simpler.

The drawbacks are obviously more maintenance than is commonly necessary, plus communication overhead affecting performance, and that maintenance itself. There is a great variety of ways to encapsulate complexity that may allow it to be placed in ever more strange and curious places in your system. Using forms of higher-order functions (often faked using the strategy pattern or various other object patterns), you can do some very interesting things. That's right, let's start talking about a monad.

You could create this translation layer as a very independent collection of small, specific functions that do the individual translations necessary, but hide all of those translation functions away so they are hardly accessible to outside code. This has the benefit of reducing reliance on them, allowing them to change easily without affecting much external code. You then create a class that accepts higher-order functions (anonymous functions, lambda functions, strategy objects - however you need to structure them) which work on the nice OO model objects, and you let the underlying code that accepts those functions do the actual execution using the appropriate translation methods. This creates a boundary where all the translation not only exists on the other side of the boundary, away from all your code; it is only used on that side, allowing the rest of your code to know nothing about it other than where the entry point for that boundary is. Ok, yeah, that really is talking crazy, but who knows; you might just be that crazy (seriously, do not undertake monads with a craziness rating below 88%, there is real risk of bodily injury).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206536", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41085/" ] }
206,558
I know some people that are currently working on a project for the US military (low security level, non-combat human resources type data). An initial state of the project code was submitted to the military for review, and they ran the program through some sort of security analyzer tool. It returned a report of known security issues in the code and required changes that needed to be implemented before delivery of the final product. One of the items that needed to be resolved was removal of part of the project that was written in Ruby as it is a dynamic language. What is the background/reason for not allowing a dynamic language to be used in a secure setting? Is this the government being slow to adopt new technologies? Or do dynamic languages pose an additional security risk compared to static languages (ala C++ or Java )?
There are a number of 'neat' things that can be done in dynamic languages that can be tucked away in parts of the code where they aren't immediately obvious to another programmer or auditor looking at a given piece of code. Consider this sequence in irb (the interactive Ruby shell):

irb(main):001:0> "bar".foo
NoMethodError: undefined method `foo' for "bar":String
    from (irb):1
    from /usr/bin/irb:12:in `<main>'
irb(main):002:0> class String
irb(main):003:1> def foo
irb(main):004:2> "foobar!"
irb(main):005:2> end
irb(main):006:1> end
=> nil
irb(main):007:0> "bar".foo
=> "foobar!"

What happened there is that I tried to call the method foo on a String constant. This failed. I then opened up the String class, defined the method foo to return "foobar!", and called it again. This worked. This is known as an open class, and it gives me nightmares every time I think of writing code in Ruby that has any sort of security or integrity requirement. Sure, it lets you do some neat things quite fast... but I could make it so that every time someone stored a string, it was also written to a file or sent over the network. And this little bit of redefining String can be tucked anywhere in the code.

Many other dynamic languages have similar things that can be done. Perl has Tie::Scalar, which can change behind the scenes how a given scalar works (this is a bit more obvious and requires a specific command that you can see, but a scalar that is passed in from somewhere else could be a problem). If you have access to the Perl Cookbook, look up Recipe 13.15 - Creating Magic Variables with tie.

Because of these things (and others often part of dynamic languages), many approaches to static analysis of security in code don't work. Perl and Undecidability shows this to be the case and points out even such trivial problems as syntax highlighting (whatever / 25 ; # / ; die "this dies!"; poses challenges because whatever can be defined to take arguments or not at runtime, completely defeating a syntax highlighter or static analyzer).

This can get even more interesting in Ruby with the ability to access the environment that a closure was defined in (see YouTube: Keeping Ruby Reasonable from RubyConf 2011 by Joshua Ballanco). I was made aware of this video by an Ars Technica comment by MouseTheLuckyDog. Consider the following code:

def mal(&block)
  puts ">:)"
  block.call
  t = block.binding.eval('(self.methods - Object.methods).sample')
  block.binding.eval <<-END
    def #{t.to_s}
      raise 'MWHWAHAW!'
    end
  END
end

class Foo
  def bar
    puts "bar"
  end

  def qux
    mal do
      puts "qux"
    end
  end
end

f = Foo.new
f.bar
f.qux
f.bar
f.qux

This code is fully visible, but the mal method could be somewhere else... and with open classes, of course, it could be redefined somewhere else. Running this code:

~/$ ruby foo.rb
bar
>:)
qux
bar
b.rb:20:in `qux': MWHWAHAW! (RuntimeError)
    from b.rb:30:in `'
~/$ ruby foo.rb
bar
>:)
qux
b.rb:20:in `bar': MWHWAHAW! (RuntimeError)
    from b.rb:29:in `'

In this code, the closure was able to access all of the methods and other bindings defined in the class at that scope. It picked a random method and redefined it to raise an exception. (See the Binding class in Ruby to get an idea of what this object has access to.) The variables, methods, value of self, and possibly an iterator block that can be accessed in this context are all retained.
A shorter version that shows the redefinition of a variable:

def mal(&block)
  block.call
  block.binding.eval('a = 43')
end

a = 42
puts a
mal do
  puts 1
end
puts a

Which, when run, produces:

42
1
43

This is more than the open class that I mentioned above that makes static analysis impossible. What is demonstrated here is that a closure that is passed somewhere else carries with it the full environment it was defined in. This is known as a first-class environment (just as functions you can pass around are first-class functions, this is the environment and all of the bindings available at that time). One could redefine any variable that was defined in the scope of the closure. Good or bad, complaining about Ruby or not (there are uses where one would want to be able to get at the environment of a method - see Safe in Perl), the question "why would Ruby be restricted for a government project" really is answered in the video linked above. Given that:

- Ruby allows one to extract the environment from any closure
- Ruby captures all bindings in the scope of the closure
- Ruby maintains all bindings as live and mutable
- Ruby has new bindings shadow old bindings (rather than cloning the environment or prohibiting rebinding)

with the implications of these four design choices, it is impossible to know what any bit of code does. More about this can be read at the Abstract Heresies blog. The particular post is about Scheme, where such a debate was had (related on SO: Why doesn't Scheme support first class environments?):

"Over time, however, I came to realise that there was more difficulty and less power with first-class environments than I had originally thought. At this point I believe that first-class environments are useless at best, and dangerous at worst."

I hope this section shows the danger aspect of first-class environments and why it would be asked to remove Ruby from the provided solution. It's not just that Ruby is a dynamic language (as mentioned in another answer, other dynamic languages have been allowed in other projects), but that there are specific issues that make some dynamic languages even more difficult to reason about.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9479/" ] }
206,668
I am migrating a 10-year-old, big CVS repository to Git. It seemed obvious to split this multiple-projects repository into several Git ones. But the decision-makers are used to CVS, therefore their point of view is influenced by the CVS philosophy. To convince them to migrate from one CVS repo to different Git repositories, I need to give them some arguments. When I speak with colleagues who have worked with Git repositories for years, they say that using multiple Git repos is the way to use Git. I do not really know why (they give me some ideas). I am a newbie in this field, so I ask my question here: what are the arguments for using multiple Git repositories instead of a single one containing different applications and libraries from different teams? I have already listed:

- branches/tags impact the whole Git repository files => pollutes other team projects
- 4 GB limit on Git repo size (but this is wrong)
- git annotate may be slower on a bloated Git repo
- ...

Eamon Nerbonne has pointed out the related question: Choosing between Single or multiple projects in a git repository?

The reason the team managers finally accepted the split: the single Git repo (550 MB) took 13 minutes to clone on Windows (one minute on Linux). The bloated CVS repo was split into 100 Git repositories:

- each dead app in one repo
- each stabilized library in one repo (source code almost never changed any longer)
- related apps/libs kept together in one repo
- large files not used for compilation (config...) moved to other repos (Git does not like large files)
- other irrelevant files skipped (*.jar, *.pcb, *.dll, *.so, *.backup, ...)

We successfully installed the repo tool used by the Android Open Source Project in order to handle all these Git repos:

- easy installation on Linux
- more difficult on Windows because of Cygwin and NTFS native symlink requirements
You're dealing with multiple teams and multiple projects, and likely decades of work went into the codebase. The short answer is that your teams and projects have varying needs and varying dependencies. The monolithic repository approach reduces commits to "Everything is stable in this configuration!!!" (i.e. unrealistic, huge commits sourced from many teams). That, or many intermediate points of incompatibility for many projects. Either way, a lot of energy is wasted supporting configurations which were simply never meant to be. Your repositories should instead be structured independently, and you should have multiple repositories representing the dependencies. The dependencies should be configured, updated, and tested by the project maintainers at appropriate points in development.

- ProjectA saw its last major release 3 years ago. It is in maintenance mode and has "older" system requirements. It should refer to an appropriate set of dependencies. It has 20 dependencies.
- ProjectB was just released. It has more modern system requirements, and was developed and tested by another team. It has 15 dependent libraries (= repos), 10 of which are shared with ProjectA. These projects generally refer to different commits of their dependent libraries. Dependencies are updated at appropriate points in development.
- ProjectC is yet to be released. It is very similar to ProjectB, but includes significant changes and improvements to its dependencies. Developers of ProjectB are only interested in taking the stable releases of the dependencies they share with ProjectC. ProjectB's team makes some commits to the shared dependencies, although they are mostly bugfixes and optimizations at this time.

A monolithic repository would either hold back development of ProjectC in order to maintain support for ProjectA, or ProjectC's changes would break A and B, or developers would just end up not sharing/reusing code. With multiple (distributed) repositories, each team can work independently, minimizing the impact on the other projects while reusing and constantly improving the codebases. This also keeps teams from shifting focus/speed when changes come in from other teams. A centralized monolithic repository makes each team dependent on every other team's moves, and those would all have to be synchronized.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206668", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/81263/" ] }
206,793
I was asked to sell the source code (along with existing users) of small utility app I created years ago. I've investigated how to put a price on the source code but so far haven't come up with a good solution. I've searched the net, but haven't found anything useful. Then I came across a few others who also sold their source code with users, but their prices seem unrealistically high. For example, one person calculated price per user at about $200. He had 80 users and ended up selling the source with users for $30k. How did he come up with this price? Can I find a good price with this formula: (number of users x app price) + (app price x num of new users in one year) ? If this is a good formula, how do you price source that doesn't yet have users?
Selling the source code for an app is very much like selling a business. The standard formula is price = revenue * 3 + assets. The multiplier of 3 is a factor of supply and demand: the more prospective buyers a business has, the higher the multiplier. When we hear in the news about a business being purchased by ABC Corp, it's often for a large figure; those businesses can have a multiplier of 5 or higher. Businesses that don't have a revenue history depend on a valuation instead: the valuation is an estimate of projected revenue, and the multiplier is applied to that. We can calculate the multiplier for your example: revenue was 200 * 80 = 16,000, and 30,000 / 16,000 = 1.875. Assuming he sold all his licenses in one year, he (in your example) would have a multiplier of 1.875 with no additional assets. That's not a very good deal for the programmer, especially when you factor in future upgrades from those users adding to revenue. Why is it not a good deal? The buyer can recover his costs in less than two years; most people take longer to pay off a car loan. When we speak to the buyer about setting a price, we discuss how long the buyer would like to take to recover his investment and start profiting from his purchase. You are saying "I'm giving up this source code, and its future revenue, to you." The price is set based upon an estimate of what that future will be. If you have not received any revenue from your source code yet, then you will have to argue with the buyer over the valuation of its future revenue.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206793", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21538/" ] }
206,816
The experts in clean code advise against using if/else, since it creates unreadable code. They suggest using a plain IF (returning early) rather than waiting until the end of a method without a real need. Now, this if/else advice confuses me. Are they saying that I should not use if/else at all (!), or just that I should avoid if/else nesting? Also, if they mean nesting of if/else, should I avoid even a single level of nesting, or should I limit it to a maximum of two levels (as some recommend)? By single nesting, I mean this:

if (...) {
    if (...) {
    } else {
    }
} else {
}

EDIT: Tools like ReSharper will also suggest reformatting if/else statements. They usually convert them to a stand-alone if statement, and sometimes even to a ternary expression.
I think this advice comes from a software metric called cyclomatic complexity, or conditional complexity; check the wiki page.

Definition: The cyclomatic complexity of a section of source code is the count of the number of linearly independent paths through the source code. For instance, if the source code contains no decision points such as IF statements or FOR loops, the complexity is 1, since there is only a single path through the code. If the code has a single IF statement containing a single condition, there are two paths through the code: one where the IF statement evaluates to TRUE and one where it evaluates to FALSE.

So, do they say that I should not use if/else at all (!) or to avoid if/else nesting? No, you shouldn't avoid if/else at all! But you should be careful about the cyclomatic complexity of your code: more complex code tends to have more defects. According to this, the right limit would be 11. The book Code Complete categorizes the scores like this (Ref.):

- 0-5: the routine is probably fine
- 6-10: start to think about ways to simplify the routine
- 10+: break part of the routine into a second routine and call it from the first routine

How to calculate cyclomatic complexity, "basically": from the example here, consider the following pseudocode:

if (A = 354) {
  if (B > C) { /* decision 1 */ A = B }
  else { /* decision 2 */ A = C }
} /* decision 3, in case A is NOT equal to 354 */
print A

Please note that this metric is concerned with the number of decisions in your program/code, so from the code above we could say that we have 3 decisions (3 possible values of A). To calculate it formally, consider the control flow graph of the code (each statement is a node, each possible transfer of control an edge). The complexity M is then defined (Ref) as:

M = E - N + 2P

where
E = the number of edges of the graph
N = the number of nodes of the graph
P = the number of connected components (exit nodes)

Applying the formula with E (flow lines) = 8, N (nodes) = 7 and P (exit nodes) = 1 gives M = 8 - 7 + (2 * 1) = 3.

(There is also another, closely related definition: M = E - N + P; check the reference for the theory.)

Note that many static code analysis tools can calculate the cyclomatic complexity of your code for you.
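As a hedged illustration of my own (not from the answer), the pseudocode above can be written as a small runnable Java method; a static analysis tool of the kind mentioned would report its cyclomatic complexity as 3, one per linearly independent path:

public class CyclomaticExample {

    // Two decision points (the two if statements) => cyclomatic complexity of 3.
    static int choose(int a, int b, int c) {
        if (a == 354) {
            if (b > c) {   // decision 1
                a = b;
            } else {       // decision 2
                a = c;
            }
        }                  // decision 3: a was not 354, fall through unchanged
        return a;
    }

    public static void main(String[] args) {
        System.out.println(choose(354, 9, 4)); // takes decision 1, prints 9
        System.out.println(choose(354, 2, 7)); // takes decision 2, prints 7
        System.out.println(choose(10, 2, 7));  // takes decision 3, prints 10
    }
}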
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206816", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21538/" ] }
206,832
I will be involved in a project where all the software design is done by a local team and the designs are sent to an offshore team for coding. This is the first time I face a project with these characteristics, and it feels kind of odd to me: the managers expect us to produce very detailed design documents so there's no room for error for the offshore team; from my perspective they are making us code on paper when we could do it in an IDE. So my question is: is this approach good, or proven to work? What are the main considerations our software process needs in order for our project to succeed?
My opinion: if all you give the offshore people is documents and diagrams, you will have a lot of miscommunication and disappointment. My recommendations:

- Don't give them so many documents; give them interfaces and abstract classes instead, in order to straitjacket them into your design goals.
- Require them to use a known naming standard.
- Require them to use unit tests.
- Send one of your designers/architects offshore to their premises to supervise the process; it will still be cheaper than coding in-house, but you will get better results.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206832", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/48738/" ] }
206,840
I was asked the question "How would you find the weight of an airplane?" in an interview, and I am not sure why this was one of only two questions asked. I tried to answer it in every way I could think of but could not give the correct answer (I found the correct answer after a Google search). How much do such questions decide your selection in an interview? Here was my approach:

1. If the measurements of the plane are given, I would calculate its volume and multiply by density, and also account for fuel weight plus other dead weight.
2. Use a water displacement method, if I can put the plane in water and somehow measure how much water is displaced.

But I found from the Google search that the expected approach was to put the plane on a ship and mark the water level on the hull; then remove the plane, so the ship rises, and keep loading weights onto the ship until the mark on the hull reaches the water level again.
"Job Interview 2.0: Now With Riddles!" is an article from TheDailyWTF that notes some of these, including the weight of a 747, which is a type of plane:

"Thankfully, Microsoft realized that the type of people who enjoy these riddles aren't always good programmers, and good programmers aren't always the type who enjoy these riddles. In fact, some of the folks who can solve these riddles are precisely the type of people you don't want as programmers. Would you want to work with the guy who builds a water-displacement scale/barge, taxis a 747 to the docks, and then weighs the jumbo jet using that, instead of simply calling Boeing in the first place? Unfortunately, Microsoft's realization came too late: a whole mini-industry has spawned around the concept of Job Interview 2.0. If Microsoft did it, it must work, right? There are books written on brainteasers in the interview, consultants who will help your company annoy the hell out of candidates with your very own custom brainteasers, and now, everyone from small software firms to big ole' banks are asking stupid riddle questions."

The key point with these questions is that it isn't so much whether there is a correct answer as it is how well you can communicate the way you'd solve the problem and, as the problem is revised, what alternative approaches you would take. For the weight of a plane, I'd probably look at the specifications, which should note it as part of the basics about the plane. Failing that, there are a few other approaches one can take.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206840", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/98434/" ] }
206,937
Are buffer overflows acceptable from a graduate developer? Are we setting the bar too high? What are the expected capabilities of graduate/junior engineers?

Context: We are currently recruiting for a Junior Developer position working mainly in C on Linux. As part of the process, we require candidates to complete a code test at their leisure in C. So far we have rejected two candidates on the basis that their code, although readable and in one case rather idiomatic, suffered from buffer overflow errors due to unbounded buffer writes.

[Edit]: We explicitly ask for error-checked, production quality code. We provide a test & build framework for the candidates.

[Update]: As a result of this thread, and conversations we have had with other developers in person, we are changing the way we carry out code tests and who we target with our recruiting. We decided that a candidate being unable to fix or understand a buffer overflow means that he would be unsuitable for the work we carry out; in particular, he would take more mentoring than we are comfortable with. We will therefore still reject candidates that cannot eventually submit a robust code sample. However, we have put in place some measures to make the recruitment process more productive for both us and the candidates. In particular:

- We make our expectations more explicit, with a clear explanation of what we mean by production quality, and a warning that the code is expected to be robust with respect to input and errors.
- We now link candidates to resources on defensive programming and the C standard library in the description of the code test.
- We changed our target audience from junior developers and graduates to people with some relevant experience.
- In case the submitted code fails in some way but would otherwise be accepted, we now provide a minimum test case that causes the error condition and give the candidates a chance to correct their mistakes (unless the code is rejected for some other reason). We'll also point out problematic lines/functions if appropriate.
- The goal of the test itself has now slightly changed, from a front-end filter to a chance to build a better picture of the candidate; in particular it will inform our phone discussion. That said, we are still willing to reject based solely on code.

[Update 2015-07-09]: Andy Davis from Nujob has written an interesting and relevant article on the use of a code test from the candidate's perspective, and the article is worth looking at. Find it here.
I don't think you've set the bar too high, I think you might need a different bar. I think code tests are useful for determining the competency of a candidate, but they shouldn't be pass/fail. You should use the results of the code test to start a dialog with the candidate. If you see mistakes that they've made (especially if they're junior developers) point them out and ask them what they'd do differently or if they understand why there is a problem.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/206937", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3235/" ] }
207,060
Our Scrum Master keeps referring to bugs as technical debt. Is he right, are bugs considered to be technical debt in the world of Agile?
I think the answer here is fairly simple - the key feature of technical debt is that it's something we incur by choice. We choose to make architectural, design or implementation decisions that we expect will cause us issues later, in order to achieve specific objectives sooner. A bug is not something we choose to have in our code - so de facto it is not technical debt. Of course one can make all kinds of interesting (and possibly valid) arguments about choices made after a bug is discovered, but fundamentally (and particularly in the context of the question) no, bugs are not technical debt - this sounds more like abuse of buzzword bingo to me. As a postscript - I don't agree with the assertion that technical debt will, as a given, lead to bugs in and of itself, as that makes far too many assumptions about the nature of the choices made. For example, you can have well-written, well-structured, test-covered code that still makes - say - architectural compromises for early delivery. Similarly, you could choose not to automate your deployment processes, which won't lead to bugs but will probably lead to a lot of stress and pain. Of course, if the debt is that you've written code that's not SOLID (or whatever) then yes... but that's by no means always the case.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207060", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/86834/" ] }
207,136
What is the difference between a Future and a Promise (in Akka and GPars)? They look the same to me: both block and return the value of the future when get is called, and a promise seems to exist just to obtain the result of a future.
I'll talk about Akka/Scala, because I'm not familiar with GPars or with Akka/Java. In Scala 2.10, which includes the relevant part of Akka in the standard distribution, a Future is essentially a read-only reference to a yet-to-be-computed value. A Promise is pretty much the same, except that you can write to it as well. In other words, you can read from both Futures and Promises, but you can only write to Promises. You can get the Future associated with a Promise by calling the future method on it, but conversion in the other direction is not possible (because it would be nonsensical).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207136", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41843/" ] }
207,232
This question is intended to apply to any OO programming language that supports exception handling; I am using C# for illustrative purposes only. Exceptions are usually intended to be raised when a problem arises that the code cannot immediately handle, and then to be caught in a catch clause in a different location (usually an outer stack frame). Q: Are there any legitimate situations where exceptions are not thrown and caught, but simply returned from a method and then passed around as error objects? This question came up for me because .NET 4's System.IObserver<T>.OnError method suggests just that: exceptions being passed around as error objects. Let's look at another scenario, validation. Let's say I am following conventional wisdom, and that I am therefore distinguishing between an error object type IValidationError and a separate exception type ValidationException that is used to report unexpected errors:

partial interface IValidationError { }

abstract partial class ValidationException : System.Exception
{
    public abstract IValidationError[] ValidationErrors { get; }
}

(The System.ComponentModel.DataAnnotations namespace does something quite similar.) These types could be employed as follows:

partial interface IFoo { } // an immutable type

partial interface IFooBuilder // mutable counterpart to prepare instances of the above type
{
    bool IsValid(out IValidationError[] validationErrors); // true if no validation error occurs
    IFoo Build(); // throws ValidationException if !IsValid(…)
}

Now I am wondering, could I not simplify the above to this:

partial class ValidationError : System.Exception { } // = IValidationError + ValidationException

partial interface IFoo { } // (unchanged)

partial interface IFooBuilder
{
    bool IsValid(out ValidationError[] validationErrors);
    IFoo Build(); // may throw ValidationError or sth. like AggregateException<ValidationError>
}

Q: What are the advantages and disadvantages of these two differing approaches?
Returning exceptions instead of throwing them can make semantic sense when you have a helper method for analyzing the situation and returning an appropriate exception which is then thrown by the caller (you could call this an "exception factory"). Throwing an exception in this error analyzer function would mean that something went wrong during the analysis itself, while returning an exception means that the kind of error was analyzed successfully. One possible use-case could be a function which turns HTTP response codes into exceptions: Exception analyzeHttpError(int errorCode) { if (errorCode < 400) { throw new NotAnErrorException(); } switch (errorCode) { case 403: return new ForbiddenException(); case 404: return new NotFoundException(); case 500: return new InternalServerErrorException(); … default: throw new UnknownHttpErrorCodeException(errorCode); } } Note that throwing an exception means that the method was used wrong or had an internal error, while returning an exception means that the error code was identified successfully.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207232", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4906/" ] }
207,239
I'm managing a team of like 15 developers now, and we are stuck at a point on choosing the technology, where the team is broken into two completely opposite teams, debating over usage of WCF vs. Web API. Team A which supports usage of Web API, brings forward these reasons: Web API is just the modern way of writing services ( Wikipedia ) WCF is an overhead for HTTP. It's a solution for TCP, and Net Pipes, and other protocols WCF models are not POCO, because of [DataContract] & [DataMember] and those attributes SOAP is not as readable and handy as JSON SOAP is an overhead for network compared to JSON (transport over HTTP) No method overloading Team B which supports the usage of WCF, says: WCF supports multiple protocols (via configuration) WCF supports distributed transactions Many good examples and success stories exist for WCF (while Web API is still young) Duplex is excellent for two-way communication This debate is continuing, and I don't know what to do now. Personally, I think that we should use a tool only for its right place of usage . In other words, we'd better use Web API, if we want to expose a service over HTTP, but use WCF when it comes to TCP and Duplex. By searching the Internet, we can't get to a solid result. Many posts exist for supporting WCF, but on the contrary we also find people complaint about it. I know that the nature of this question might sound arguable, but we need some good hints to decide. We're stuck at a point where choosing a technology by chance might make us regret it later. We want to choose with open eyes. Our usage would be mostly for web, and we would expose our services over HTTP. In some cases (say 5 to 10 percent) we might need distributed transactions though. What should I do now? How do I manage this debate in a constructive way?
When both sides have good arguments and the opinions on the issue are too strong to come to a consensus, you as a manager need to make a decision and end the debate. Otherwise it will just turn in circles and fortify the positions of all participants even more. The longer you wait, the harder it will be for the "losing" side to admit defeat and work productively with the outcome. Write down all the arguments, weigh their importance for the project, and then make your decision. When you can't, flip a coin. Your project can likely be completed successfully with either technology, and wasting valuable time with unnecessary debates will just cost unnecessary money.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207239", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31418/" ] }
207,308
I am reading a book on Java Programming, and want to confirm I understand the definition of the word "vector". Wikipedia says vector is "A one-dimensional array", source http://en.wikipedia.org/wiki/Vector . Wouldn't it be simpler to call the array simply an array? Is there any reason we need to use such fancy language as "vector"? Is there a difference between an array and a vector? Source: Cracking the Coding Interview, 4th Edition, by Gayle McDowell, page 47. FAQ Q - Why didn't you post this on english.stackexchange.com? A - Because I think only computer science oriented people will have a good answer.
In typical usage, an "array" can mean either a single-dimensional array, or a multidimensional array. Also, in mathematics, a matrix is a 2-dimensional array while a vector is a 1-dimensional array.
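A minimal Java illustration of the distinction the answer draws (the variable names are made up for this sketch): a one-dimensional array is what is informally called a vector, while a two-dimensional array corresponds to a matrix. Java also happens to ship a java.util.Vector class, which is a legacy, synchronized resizable array rather than a fixed-size one.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Vector;

public class VectorVsArray {
    public static void main(String[] args) {
        int[] vector = {1, 2, 3};           // one-dimensional array: a "vector"
        int[][] matrix = {{1, 2}, {3, 4}};  // two-dimensional array: a "matrix"

        // Resizable counterparts: Vector is the old synchronized class,
        // ArrayList is the usual modern choice.
        Vector<Integer> legacy = new Vector<>();
        legacy.add(42);
        List<Integer> modern = new ArrayList<>();
        modern.add(42);

        System.out.println(vector.length + " " + matrix.length + " "
                + legacy.size() + " " + modern.size());
    }
}
```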
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207308", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97642/" ] }
207,361
I've been working as a software developer for about 2 years by now. It seems obvious that in a field so rapidly evolving as software development, you need to spend time on learning new technologies, frameworks, etc. I always thought I could take it for granted that if I need to learn something to solve a problem at work, I am free to spend the hours needed learning that at work as well. However, I have had discussions about this topic with various colleagues, and we were holding very different opinions, apparently on the span of two extremes: Your employer pays you for knowing stuff. You got hired for having knowledge on the field of expertise required for doing this job, and if the field emerges so that you need to aquire more knowledge, it is only natural that you do so in your free time. And the other extreme being Whatever makes me more productive at work in the long run, is worth spending the time on at work, because the employer will eventually profit from that. This does of course apply to learning new techniques, but also, e.g. learning VIM to get faster, etc. But even when discussing how long to spend time on something with the guys tending to the second extreme, we were of vastly different opinions, ranging from "an hour every now and then is okay" to "however long it takes". Does your workplace encourage learning new skills and if so what processes do they have to encourage this? How much time do you spend learning new things (and not writing production code) during your day as a programmer?
Managers (like me) are hesitant to specify explicit training budgets. According to Parkinson's Law , such a budget would be consumed or even exhausted regardless of the actual needs in knowledge development. If you just call your learning time project work and keep it in reasonable proportion to your overall work and your overall achievements, nobody will object. The percentage varies and depends on your age, experience and working area. I would regard between two and 15 days of training per year as normal. New employees often need more. In a very innovative environment, the learning and researching percentage is typically higher than usual. We have a mentoring scheme for junior developers. Whenever somebody changes his/her working area, additional training is obviously required. The learning issue is a matter of self marketing . No team would tolerate a member who is constantly unavailable due to demonstrative self-study or extensive absence in exotic training courses. Try to appear well-informed without utilizing excessive resources for your learning. The project time needed to experiment and learn is mostly treated discretely. Would you personally pay a craftsman for getting to know your brand of car? For knowledge-deficiencies which are in contrast to your job description , private engagement would be taken for granted. Example: If you are supposed to be a Senior Java Developer, you should not ask for a basic Java training.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207361", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45455/" ] }
207,386
Note : if you need to consider a specific OS to be able to answer, please consider Linux. Whenever I run a program, it will be given a virtual memory space to run in, with an area for its stack and one for its heap. Question 1 : do the stack and the heap have a static size limit (e.g., 2 gigabytes each), or is this limit dynamic, changing according to the memory allocations during the execution of the program (i.e., 4 gigabytes total to be used by both, so if a program only uses the stack, it will be able to have a stack with 4 gigabytes)? Question 2 : How is the limit defined? Is it the total available RAM memory? Question 3 : What about the text (code) and data sections, how are they limited?
There are two different memory limits: the virtual memory limit and the physical memory limit. Virtual Memory The virtual memory is limited by the size and layout of the address space available. Usually at the very beginning are the executable code and static data, and past that grows the heap, while at the end is the area reserved by the kernel and, before it, the shared libraries and the stack (which on most platforms grows down). That gives the heap and stack free space to grow, the other areas being known at process startup and fixed. The free virtual memory is not initially marked as usable, but is marked as such during allocation. While the heap can grow to all available memory, most systems don't auto-grow stacks. IIRC the default stack limit is 8MiB on Linux and 1MiB on Windows, and it can be changed on both systems. The virtual memory also contains any memory-mapped files and hardware. One reason why the stack can't be auto-grown (arbitrarily) is that multi-threaded programs need a separate stack for each thread, so they would eventually get in each other's way. On 32-bit platforms the total amount of virtual memory is 4GiB, with both Linux and Windows normally reserving the last 1GiB for the kernel, giving you at most 3GiB of address space. There is a special version of Linux that does not reserve anything, giving you the full 4GiB. It is useful for the rare case of large databases where the last 1GiB saves the day, but for regular use it is slightly slower due to the additional page table reloads. On 64-bit platforms the virtual memory is 64EiB and you don't have to think about it. Physical Memory Physical memory is usually only allocated by the operating system when the process needs to access it. How much physical memory a process is using is a very fuzzy number, because some memory is shared between processes (the code, shared libraries and any other mapped files), data from files are loaded into memory on demand and discarded when there is a memory shortage, and "anonymous" memory (the kind not backed by files) may be swapped. On Linux, what happens when you run out of physical memory depends on the vm.overcommit_memory system setting. The default is to overcommit. When you ask the system to allocate memory, it gives some to you, but only allocates the virtual memory. When you actually access the memory, it will try to get some physical memory to use, discarding data that can be reread or swapping things out as necessary. If it finds it can't free up anything, it will simply remove the process from existence (there is no way to react, because that reaction could require more memory, and that would lead to an endless loop). This is how processes die on Android (which is also Linux). The logic was later improved to choose which process to remove from existence based on what the process is doing and how old it is. Android processes then simply stop doing anything but sit in the background, and the "out of memory killer" will kill them when it needs memory for new ones.
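The answer above is about OS-level behaviour; purely to make the "fixed stack, growable heap" distinction concrete, here is a hedged JVM-flavoured sketch (not from the original answer; the sizes and names are arbitrary). On the JVM the per-thread stack size is fixed when the thread starts (tunable with -Xss or a Thread constructor argument, which is only a platform-dependent hint), while the heap grows on demand up to the -Xmx limit.

```java
import java.util.ArrayList;
import java.util.List;

public class StackVsHeapSketch {
    public static void main(String[] args) throws InterruptedException {
        // The stack does not grow: deep recursion hits the fixed per-thread limit.
        Thread deepRecursion = new Thread(null, () -> {
            try {
                recurse(0);
            } catch (StackOverflowError e) {
                System.out.println("stack limit reached");
            }
        }, "small-stack", 256 * 1024); // requested stack size is only a hint
        deepRecursion.start();
        deepRecursion.join();

        // The heap grows on demand until the configured maximum (-Xmx) is exhausted.
        try {
            List<long[]> hog = new ArrayList<>();
            while (true) {
                hog.add(new long[1_000_000]);
            }
        } catch (OutOfMemoryError e) {
            System.out.println("heap limit reached");
        }
    }

    private static long recurse(long depth) {
        return recurse(depth + 1) + 1; // unbounded recursion: each call consumes another stack frame
    }
}
```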
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207386", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34340/" ] }
207,391
I would like to know your opinion about my setup right now, how I do things with SVN and if you could find some better solution for me. I'd appreciate it very much if someone could come up with some better solutions for my problems. I would like to do it the best way possible and the most efficiently. My situation is as follows: I'm maintaining a PHP website which is connected to mysql database. I have a subdomain on which I'm testing everything. There are also two databases: first for testing purposes, second for the production server. I have a project in Netbeans and I'm connected to ftp with it so every change I make is directly transferred to the website. Also I'm connected to SVN server and the testing part of the website is in trunk directory. Every time I want to upload the changes to the server I'm commiting my trunk and then I'm copying it to tags. After that I got another project in Netbeans prepared, already connected to the same SVN, but to the last tag copy. After I copy the trunk to tag directory, I switch to the second project and switch to the newest version in tags. Now I have in some another directory every configuration file prepared in proper directory structure - I copy it to my production project. After that I click "Run" in Netbeans and that opens a window which allows me to accept the transfers to the sftp, but this time to the production part of the website. My question is: is it the best way? Aren't there any more intuitive tools or options in SVN, so I don't have to copy the configuration files every time?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207391", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/98863/" ] }
207,401
Suppose one had a relatively large program (say 900k SLOC in C#), all commented/documented thoroughly, well organized and working well. The entire code base was written by a single senior developer who is no longer with the company. All the code is testable as is and IoC is used throughout--except for some strange reason they did not write any unit tests. Now, your company wants to branch the code and wants unit tests added to detect when changes break the core functionality. Is adding tests a good idea? If so, how would one even start on something like this? EDIT OK, so I had not expected answers making good arguments for opposite conclusions. The issue may be out of my hands anyway. I've read through the "duplicate questions" as well and the general consensus is that "writing tests is good"...yeah, but not too helpful in this particular case. I don't think I am alone here in contemplating writing tests for a legacy system. I'm going to keep metrics on how much time is spent and how many times the new tests catch problems (and how many times they don't). I'll come back and update this a year or so from now with my results. CONCLUSION So it turns out that it is basically impossible to just add unit tests to existing code with any semblance of orthodoxy. Once the code is working you obviously cannot red-light/green-light your tests; it is usually not clear which behaviors are important to test, not clear where to begin, and certainly not clear when you are finished. Really even asking this question misses the main point of writing tests in the first place. In the majority of cases I found it actually easier to re-write the code using TDD than to decipher the intended functions and retroactively add in unit tests. When fixing a problem or adding a new feature it is a different story, and I believe that this is the time to add unit tests (as some pointed out below). Eventually most code gets rewritten, often sooner than you'd expect--taking this approach I've been able to add test coverage to a surprisingly large chunk of the existing codebase.
While tests are a good idea, the intention was for the original coder to build them as he was building the application to capture his knowledge of how the code is supposed to work and what may break, which would have then been transferred to you. In taking this approach, there is a high probability that you will be writing the tests that are least likely to break, and miss most of the edge cases that would have been discovered while building the application. The problem is that most of the value will come from those 'gotchas' and less obvious situations. Without those tests, the test suite loses virtually all of its effectiveness. In addition, the company will have a false sense of security around their application, as it will not be significantly more regression proof. Typically the way to handle this type of codebase is to write tests for new code and for the refactoring of old code until the legacy codebase is entirely refactored.
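One common concrete form of "write tests for the refactoring of old code" is the characterization test: before touching a legacy routine, pin down what it currently does. A hedged JUnit 5 sketch (the calculator class is a stand-in invented for the example, not something from the question's codebase):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Stand-in for a real legacy class that is about to be refactored.
class LegacyPriceCalculator {
    double priceAfterDiscount(double basePrice, int yearsAsCustomer) {
        return yearsAsCustomer >= 5 ? basePrice * 0.9 : basePrice;
    }
}

// Characterization test: it documents what the code currently does,
// not what we believe it should do. Written just before refactoring.
class LegacyPriceCalculatorCharacterizationTest {

    @Test
    void discountForLongStandingCustomersMatchesCurrentBehaviour() {
        LegacyPriceCalculator calculator = new LegacyPriceCalculator();

        // The expected value was recorded from a run of the existing code;
        // it pins behaviour so a refactoring cannot silently change it.
        assertEquals(90.0, calculator.priceAfterDiscount(100.0, 5), 0.001);
    }
}
```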
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207401", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1960/" ] }
207,423
We have several bugfix branches that are starting to pile up. They have been merged into master, and deployed to production. Is there a good benchmark for when these branches should be cleaned up? Should they ever be cleaned up, or is it good to have the historical data?
The way git works is that a branch name is just a pointer to a specific commit. Once you merge a hotfix branch into master, your hotfix and master will point to exactly the same place in the commit tree. As you make more commits on master, the hotfix branch will continue pointing at the same place while master will get updated. Your history will always be preserved. So basically the only reason to keep a hotfix branch after a merge is if you plan to make any more changes to the same hotfix, which doesn't make much sense once you release the hotfix. So you should feel perfectly safe deleting the branch after the merge. One more thing you could do, though, is to create a tag on the master branch once the hotfix is merged, identifying that point as the hotfix release.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207423", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/97843/" ] }
207,620
I've been using MVC/MV* since I started actually organizing my code years ago. I've been using it so long that I can't even think of any other way to structure my code, and every job I've had after being an intern was MVC based. My question is, what are the downfalls of MVC? In what cases would MVC be a bad choice for a project and what would be the (more) correct choice? When I look up MVC alternatives, nearly every result is just different types of MVC. To narrow down the scope so this doesn't get closed, let's say for web applications. I do work on the backend and front-end for different projects, so I can't say just front-end or backend.
You should always remember - MVC is a UI-related pattern. If you are building a complex application you should take everything that is not related to the UI out of the MVC triplets and into other classes, subsystems or layers. It was my biggest mistake. I spent a long time understanding that simple rule: Do not spread the MVC pattern across the whole application; limit it to UI-related stuff only. Always check if the code you write is logically in the correct place, meaning it logically fits into the area of responsibility of the class you place it in. If not - move the code away as soon as you understand it. All the patterns that you call MVC alternatives (e.g. Model-View-Presenter, Model-View-ViewModel) are just ways of implementing the general MVC concept.
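A minimal Java sketch of that rule (all names here are invented for illustration, not taken from the answer): the controller stays a thin UI-facing adapter, while the business rule lives in a separate service class outside the MVC triplet.

```java
// Outside MVC: pure business logic, with no knowledge of views, requests or sessions.
class OrderService {
    double totalWithTax(double net, double taxRate) {
        return net * (1.0 + taxRate);
    }
}

// Inside MVC: the controller only translates between the UI and the service.
class OrderController {
    private final OrderService orderService = new OrderService();

    // Imagine this being invoked by the web framework with a request parameter.
    String showTotal(String netParam) {
        double net = Double.parseDouble(netParam);           // UI concern: parsing input
        double total = orderService.totalWithTax(net, 0.2);  // business concern: delegated
        return String.format("Total: %.2f", total);          // UI concern: formatting output
    }
}
```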
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207620", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14558/" ] }
207,710
First of all, in this question I'd like to stay away from the polemic on whether source code commenting is good or bad. I'm just trying to understand more clearly what people mean when they talk about comments that tell you WHY, WHAT or HOW. We often see guidelines like "Comments should tell you WHY; code itself should tell you HOW". It is easy to agree with the statement on an abstract level. However, people usually drop this like a dogma, and leave the room without further explanation. I've seen this used in so many different places and contexts, that it looks like people can agree on the catchphrase, but they seem to be talking about different things entirely. So, back to the question: if comments should tell you WHY, what is this WHY we are talking about? Is this the reason why that piece of code exists in the first place? Is this what that piece of code should be doing? I would really appreciate it if someone could give a clear explanation, and then add some good examples (bad examples are not really needed, but feel free to add them for contrast). There are many questions on whether comments are good or bad, but none that addresses the specific question of what are good examples of comments that tell you WHY.
The most common and most distinctive example is comments around various workarounds. For example this one: https://github.com/git/git/blob/master/compat/fopen.c : /* * The order of the following two lines is important. * * FREAD_READS_DIRECTORIES is undefined before including git-compat-util.h * to avoid the redefinition of fopen within git-compat-util.h. This is * necessary since fopen is a macro on some platforms which may be set * based on compiler options. For example, on AIX fopen is set to fopen64 * when _LARGE_FILES is defined. The previous technique of merely undefining * fopen after including git-compat-util.h is inadequate in this case. */ #undef FREAD_READS_DIRECTORIES #include "../git-compat-util.h" You'll surely find more examples in Git and Linux sources; both projects try to follow this rule. I also recommend following this rule even more strictly with commit logs. For code comments it may happen that you fix the code, but forget to update the comment. With the amount of code in a usual project, it is guaranteed to happen sooner or later. On the other hand the commit log is tied to the particular change and can be recalled using the "annotate"/"blame" functionality of the version control system. Again Git and Linux have some good examples. Look e.g. at this commit (not copying it here, it's too long). It has four paragraphs taking almost a whole page (a bit over a screenful) describing what exactly was wrong and why it was wrong, and then goes on and modifies all of a whopping SIX lines. They use comments like this for two purposes: All submitted changes are reviewed and the commit log is what has to explain the change to the reviewer. When a bug is found, the relevant logs are retrieved using "pickaxe" or "blame" to avoid reverting to earlier, also incorrect, behaviour. (note: it took me at most 10 minutes of random browsing of the git repo to come up with these two examples, so it would surely be easy to find more there)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207710", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/63610/" ] }
207,726
I am writing my first application for Android and will use the SQLite database so will be trying to limit the size as much as possible, but I think the question applies in general to database design. I am planning to store records that will have text and the date of creation. The app is a stand-alone app, i.e. it will not link to the internet and only one user will be updating it, so there is no chance that there will be more than one entry with a given date. Does my table still need an ID column? If so, what are the advantages of using the ID as a record identifier as opposed to the Date?
IMHO, using a date column as a primary key is best avoided. I have worked on systems where a date field is used as a primary key, and writing queries to pull back subsets of the data is a bit of a drag if you're working with date fields. Some other points you might want to consider: You might think that a point in time is unique, but that rather depends on the granularity of the date column. Is it minutes, seconds, milliseconds, etc.? Can you be absolutely sure that you'll never get a primary key violation? Finally, should you wish to migrate the database to another platform, you may, again, encounter problems where the granularity of the date data differs between the platforms. You do of course have to balance the ideal with what you have to work with. If space is really that much of a concern, using the date column could be the lesser of two evils. That is a design decision you'll have to make. Edit: I should point out that in no way does this indicate that it is a poor design decision. Just that there could be issues with the practicalities of the RDBMS in question.
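For the Android/SQLite case in the question, here is a hedged sketch of the safer layout (table and column names are invented for the example): keep a small integer surrogate key and store the creation date as ordinary, indexed data, so date granularity never affects row identity.

```java
import android.content.Context;
import android.database.sqlite.SQLiteDatabase;
import android.database.sqlite.SQLiteOpenHelper;

public class NotesDbHelper extends SQLiteOpenHelper {
    private static final String DB_NAME = "notes.db";
    private static final int DB_VERSION = 1;

    public NotesDbHelper(Context context) {
        super(context, DB_NAME, null, DB_VERSION);
    }

    @Override
    public void onCreate(SQLiteDatabase db) {
        // Surrogate integer key; the creation date is plain data, not the identifier.
        db.execSQL("CREATE TABLE note ("
                + " _id INTEGER PRIMARY KEY AUTOINCREMENT,"
                + " body TEXT NOT NULL,"
                + " created_at INTEGER NOT NULL)");   // epoch millis keeps comparisons simple
        db.execSQL("CREATE INDEX idx_note_created_at ON note(created_at)");
    }

    @Override
    public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
        db.execSQL("DROP TABLE IF EXISTS note");
        onCreate(db);
    }
}
```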
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207726", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99053/" ] }
207,752
Why is the Scala Option type not called Maybe, just as in Haskell? Maybe makes a lot more "semantic sense" to me, but maybe Option has different behaviour I am not aware of. Is there any particular reason why Option in Scala was not called Maybe?
Scala is also inspired by OCaml, which uses Option. Options are an OCaml standard type that can be either None (undefined) or Some x where x can be any value. Options are widely used in OCaml to represent undefined values (a little like NULL in C, but in a type- and memory-safe way)... I think the name chosen is a matter of taste.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207752", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99154/" ] }
207,802
One programmer is testing and comparing the same application which uses the same database structure, and the same data, only in two separate databases, one with Oracle 8 and one with Oracle 9. The app runs a query with no ORDER BY clause. He claims that the ORDER-BY-less query should return the rows in the same order in both databases. I tell him there's no guarantee of the same row order unless you explicitly provide an ORDER BY clause. The database has the same indexes and keys. But the explain plan shows that in one of the databases the engine is using the key of one of the joined tables whereas in the other database it's using another's. He insinuates that the two DB environments are not equal, which is so because they have different statistics, different RDBMS engines, etc., but not because I failed to replicate every index the original database has. I tell him he must explicitly provide an ORDER BY clause if the order is really that important. The question So I can explain it to him better: In what order does a query fetch rows when you don't explicitly provide an ORDER BY clause, and why doesn't that query return the rows in the same order?
From Wikipedia : The ORDER BY clause identifies which columns are used to sort the resulting data, and in which direction they should be sorted (options are ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined. So it's undefined. The SQL specification doesn't state the specific order that records are to be returned, so it's going to be implementation dependent. With no indexes on the table, the sensible order would be the order in which the records were inserted. With a Primary Key defined, the sensible order would be the order of the Primary Key. But since the ANSI spec doesn't require a specific order, it's up to the vendor, and their sensibilities may differ from yours or mine. Since the order is not stated in the specification, it is unwise to rely on the behavior of a particular vendor's implementation, since it can vary from one vendor to another, and the vendor may change the order any time they wish, without warning. As you said, just include the ORDER BY clause, if order is important.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207802", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/61852/" ] }
207,835
On my current project I am responsible for the implementation of a service which involves the consumption of newly created RESTful APIs, documented as solely supporting JSON. The client consistently makes requests with the accept header of 'application/json' and content-type of 'application/json'. However some endpoints send a response with a content-type of HTML, even a HTML body. To me this is clearly the wrong approach and can never be justified. Throughout the project this same practice has been applied across two different vendors and two different services. I found myself having to justify why the services needed to be changed. The vendors stated that the client should cope with this and even my REST library of choice has been questioned (RestEasy) because it doesn't cope with this by default 'out the box'. This has been a major point of frustration. I can't find many references to back up my argument, I assume this is because the point is moot as it's so obvious. The question is, am I missing something? am I being pedantic about this? Is it OK to have a JSON API that doesn't have a content-type of application/json in this scenario? References would be appreciated. How do you resolve this situation from a commercial point of view?
When you are sending an accept header requesting a specific media type, the server should not send back something else, and most certainly not with a 200 OK status code. From Restpatterns.org: If no Accept header field is present, then it is assumed that the client accepts all media types. If an Accept header field is present, and if the server cannot send a response which is acceptable according to the combined Accept field value, then the server SHOULD send a 406 (not acceptable) response. (Emphasis mine) Restpatterns.org takes this from the actual HTTP standard: Header field definitions - Accept. In short: you are not being pedantic. The services are not following the HTTP standard if they are returning HTML when the accept header specifically tells them to return application/json and nothing else.
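A hedged server-side sketch of that behaviour in JAX-RS style (the resource name and path are invented; depending on the version, the package may be jakarta.ws.rs instead of javax.ws.rs): by declaring that the resource only produces application/json, the framework answers an unsatisfiable Accept header with 406 Not Acceptable instead of silently sending HTML with a 200 status.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/orders")
public class OrdersResource {

    // Only application/json is offered. A request with "Accept: application/json"
    // gets JSON; a request the server cannot satisfy gets 406 Not Acceptable
    // from the JAX-RS runtime rather than an HTML body.
    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public String listOrders() {
        return "{\"orders\":[]}";
    }
}
```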
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207835", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99217/" ] }
207,929
I often read that developers must write beautiful code, but for a beginner like me it remains unclear what beautiful code is and how you recognize it. The corollary question is: how do you write beautiful code, and what are some practical habits to improve your code's quality? What should I care about to make the code I write beautiful (and what should I learn)?
"Beauty is bought by judgement of the eye". That said, I think most programmers will agree that beautiful code demonstrates a balance between clarity and transparency, elegance, efficiency and aesthetics. Clarity and Transparency : Clarity is how easily a reader can deduce what the code does. Transparent code does what it seems to do. If code seems to do one thing but actually does something else (or something more), it's not transparent - it's misleading. Elegance : there are many ways to implement most algorithms, but some ways are clumsy while other ways are neat and graceful. Succinctness often adds elegance, but excessive succinctness can reduce clarity. Efficiency : avoiding unnecessary use of resources (such as CPU time, memory, and I/O). Aesthetics : being easy on the eyes. This is quite subjective. It mostly comes down to style. One important consideration is to have a consistent style. Code which changes, for example, indenting style halfway through, is ugly.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207929", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/96621/" ] }
207,987
Bugs happen and sometimes data has to be fixed in production. What is the safest way to go about this from a big company standpoint? Are there tools that can help? Here are some considerations driving this requirement... We need to log who ran the query and what they ran. Ideally we need to give the person access to only run queries against the tables of interest and only for a short time. Whatever is running the queries needs to have some smarts about it to not allow long running and locking SQL to run without explicit permission. This process needs to be DB agnostic or at least understand DB2, Oracle, and SQL Server. We are trying to reduce the risk of ad-hoc prod fix-up queries doing the "wrong thing" and at the same time add some security/audits to the process. Thoughts or ideas?
Never ever update production databases manually. Write scripts. Triple check them, and have multiple people do that, not just a single person doing it three times. Include post-change validation queries in those scripts. Whenever the situation allows, test the whole change within a transaction which is rolled back at the end, after the post-change validation has run. When confident with the results, change the rollback to a commit. Test those scripts ad nauseam against a test database. Make a backup prior to running the script against the production database. Run the scripts. Check, validate and triple check the changed data using the post-change-validation scripts. Do a visual check anyway. If anything seems off, back off and restore the backup. Do not proceed with the changed data as the production data until you are absolutely sure that everything is ok and you have sign off from the (business) managers involved.
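A hedged JDBC sketch of the "test the whole change within a transaction" step described above (the JDBC URL, credentials, table and column names are placeholders): the update and the post-change validation run inside one transaction, and the final rollback is switched to a commit only once everyone has signed off on the results.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ProductionFixScript {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:yourdb://prod-host/appdb", "fixup_user", "secret")) {
            conn.setAutoCommit(false); // everything below is a single transaction
            try (Statement stmt = conn.createStatement()) {
                int updated = stmt.executeUpdate(
                        "UPDATE orders SET status = 'SHIPPED' "
                        + "WHERE status = 'STUCK' AND shipped_at IS NOT NULL");
                System.out.println("rows updated: " + updated);

                // Post-change validation: the bad state must be gone.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT COUNT(*) FROM orders "
                        + "WHERE status = 'STUCK' AND shipped_at IS NOT NULL")) {
                    rs.next();
                    System.out.println("remaining bad rows: " + rs.getLong(1));
                }

                // Dry run: keep the rollback until the results are triple-checked,
                // then change this single line to conn.commit().
                conn.rollback();
            }
        }
    }
}
```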
{ "source": [ "https://softwareengineering.stackexchange.com/questions/207987", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9346/" ] }
208,015
My question is related to the System.in and System.out members of the System class (there might be others like those in the standard library). Why are they exposed directly like that? Isn't that a bad practice in OOP? Shouldn't they be used like System.getIn() and System.getOut()? I've always had this question and I hope I can find a good answer here.
The definitions for the in and out fields within the System class are: public final static PrintStream out; public final static InputStream in; These are constants. They happen to be objects too, but they are constants. It is very much the same as the Math class: public static final double E = 2.7182818284590452354; public static final double PI = 3.14159265358979323846; Or in the Boolean class: public static final Boolean TRUE = new Boolean(true); public static final Boolean FALSE = new Boolean(false); Or in the Color class: public final static Color white = new Color(255, 255, 255); public final static Color black = new Color(0, 0, 0); public final static Color red = new Color(255, 0, 0); When accessing a public constant that doesn't change, there isn't a significant advantage to encapsulating it - conceptually or performance-wise. It's there. It isn't going to change. There is no real difference between Color.white and System.out.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208015", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99390/" ] }
208,062
After reading gnat's answer to Why a static main method in Java and C#, rather than a constructor? I take his answer to mean that the purpose of a Java class with a static main method is to define a program entry point and that it is not meant to be the program itself. There may be a better way to do this, but I usually have the class with the static main method do something simple like this: public class MenuLauncher { public static void main(String[] args) { Menu menu = new Menu(); menu.run(); } } Would the code above be the best practice for OOP where the class with static main doesn't do much more than launch or start program logic contained within a separate non-static object; after all, main is static, so wouldn't the MenuLauncher class itself be very limited? Since main is a starting point I don't see any other purpose for the class other than to be a point of entry. Is there a Java naming convention commonly used for classes that contain a main method and serve the purpose of being a program entry point?
No, there are no widely used naming conventions for this. Examples I have seen are Main, Application, XLauncher or X, where X is the name of the project/application. And yes, I think it's good for this class to contain only the minimum logic/code necessary to set up the application and start it. But I'm sure there are a lot of God Objects and Big Balls of Mud out there with a main method tacked onto a multi-thousand-line monstrosity.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208062", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/91765/" ] }
208,114
I have been coding for a while, but mostly scripts and simple applications. I've moved into a new role where it is all about developing Web Apps and using a proper MVC architecture, so I am desperately trying to learn about all that very quickly. I hope this question is not too similar to " Best Practices for MVC Architecture " but as I am going through a few different tutorials, I noticed that some have multiple controllers for different things. How many controllers does a single web app need? I realize this would be difficult to answer without an example so I'll provide one: Application: User logs in. User can do one of three things: a) Upload a file (stored in a mongodb database with meta data). b) Search for a file. c) Log out. My question is a general one, but I gave the example to help out anyone trying to answer.
For your example I would create two controllers: Sessions Controller for Login and Logout (create and destroy session for REST like layout) Files Controller for everything on files (index=search and create=upload) In general a RESTful approach where you think about everything as a resource that can be displayed, created, edited and destroyed gives you a good idea how to structure things. As you can see from my examples I don't stick too close to every single verb in REST. You would most likely need more controllers for further functionality. For example a Users Controller where users can create new accounts. And in addition to this you would need an admin interface where you can edit the resources with higher privileges. In such a case it is quite common to have nearly every controller duplicated. A very very rough estimate to get an initial idea could be one controller for every table in your database that users can access. But this is really only a very crude measurement.
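A hedged sketch of that split in Spring MVC style (the annotations, paths and return values are illustrative only; any MVC web framework would look similar): one controller per resource, with the REST-ish verbs mapped onto its actions.

```java
import org.springframework.web.bind.annotation.DeleteMapping;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Login/logout modelled as creating and destroying a session resource.
@RestController
class SessionsController {
    @PostMapping("/session")
    public String create(@RequestParam String user, @RequestParam String password) {
        return "logged in";    // real code would authenticate and start a session
    }

    @DeleteMapping("/session")
    public String destroy() {
        return "logged out";
    }
}

// Everything about files: search (index) and upload (create).
@RestController
class FilesController {
    @GetMapping("/files")
    public String index(@RequestParam String query) {
        return "search results for " + query;
    }

    @PostMapping("/files")
    public String create() {
        return "file stored";  // real code would accept a multipart upload and store metadata
    }
}
```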
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208114", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/67306/" ] }
208,182
As I can see, smart pointers are used extensively in many real-world C++ projects. Though some kinds of smart pointers are obviously beneficial to support RAII and ownership transfers, there is also a trend of using shared pointers by default, as a way of "garbage collection", so that the programmer does not have to think about allocation that much. Why are shared pointers more popular than integrating a proper garbage collector like Boehm GC? (Or do you agree at all that they are more popular than actual GCs?) I know about two advantages of conventional GCs over reference counting: Conventional GC algorithms have no problem with reference cycles. Reference counting is generally slower than a proper GC. What are the reasons for using reference-counting smart pointers?
Some advantages of reference counting over garbage collection: Low overhead. Garbage collectors can be quite intrusive (e.g. making your program freeze up at unpredictable times while a garbage collection cycle processes) and quite memory-intensive (e.g. your process's memory footprint unnecessarily grows to many megabytes before garbage collection finally kicks in) More predictable behavior. With reference counting, you are guaranteed that your object will be freed the instant the last reference to it goes away. With garbage collection, on the other hand, your object will be freed "sometime", when the system gets around to it. For RAM this isn't usually a big problem on desktops or lightly loaded servers, but for other resources (e.g. file handles) you often need them to be closed ASAP to avoid potential conflicts later on. Simpler. Reference counting can be explained in a few minutes, and implemented in an hour or two. Garbage collectors, especially ones with decent performance, are extremely complex and not many people understand them. Standard. C++ includes reference counting (via shared_ptr) and friends in the STL, which means that most C++ programmers are familiar with it and most C++ code will work with it. There isn't any standard C++ garbage collector, though, which means that you have to choose one and hope it works well for your use case -- and if it doesn't, it's your problem to fix, not the language's. As for the alleged downsides of reference counting -- not detecting cycles is an issue, but one that I've never personally run into in the last ten years of using reference counting. Most data structures are naturally acyclic, and if you do come across a situation where you need cyclical references (e.g. parent pointer in a tree node) you can just use a weak_ptr or a raw C pointer for the "backwards direction". As long as you are aware of the potential problem when you're designing your data structures, it's a non-issue. As for performance, I've never had a problem with the performance of reference counting. I have had problems with the performance of garbage collection, in particular the random freeze-ups that GC can incur, to which the only solution ("don't allocate objects") might as well be rephrased as "don't use GC".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208182", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
208,238
Nearly all open source software licenses require (or at least lawyers generally suggest they require) users to include the full license in the root of the project that they are protecting. One lawyer I spoke to suggests this is a legacy of the CD age, when it was necessary that a full license be included in a jewel case. But today, we're living in the cloud age. Why can't I, for instance, simply host the full license at my website, and include the title + URL of that license in the header of my source files? Bonus: If it's generally agreed that established licenses must be kept intact in the root, why hasn't the OSI or FSF approved a license that you can refer to by URL, and what is keeping someone from creating that license?
From the GPL FAQ (but the advice is applicable to all licenses): Why does the GPL require including a copy of the GPL with every copy of the program? Including a copy of the license with the work is vital so that everyone who gets a copy of the program can know what his rights are. It might be tempting to include a URL that refers to the license, instead of the license itself. But you cannot be sure that the URL will still be valid, five years or ten years from now. Twenty years from now, URLs as we know them today may no longer exist. The only way to make sure that people who have copies of the program will continue to be able to see the license, despite all the changes that will happen in the network, is to include a copy of the license in the program. (emphasis mine) The moment the site hosting you license goes down or changes its URL paths, people who have copies of your software can no longer verify what rights they may safely exercise. Suppose even that you could somehow guarantee that that exact URL will be forever online: the ability for users to verify that their use of your software is legal still depends upon the ability to connect to that particular URL. While this requirement may not onerous in your particular city/country/planet, it may be onerous elsewhere. You should not impose this requirement, especially when the workaround (including the full license text) is trivial. You might answer this complaint by saying, "So what? If the URL does go down or is not accessible, an unambiguous descriptor like 'GNU GPL v3' should be sufficient. Full-text copies of the GPL are plentiful; users can look up the license themselves." A few problems immediately spring to mind: This doesn't generalize to license identifiers that are less clear (the phrase "BSD license" comes to mind). This doesn't generalize well to licenses that are less common or have been customized ("GPL with linking exceptions" comes to mind: which linking exceptions?). How common does a license need to be before it's reasonable to expect a user to find it reliably by name? This still requires users to have an Internet connection, which may not be the case, even if they had a connection at the time they got the software. (And they may not have had Internet access when they got the software: "the CD age" has not yet ended in many parts of the world. As an additional case, consider national populations that have widespread Internet access but censor large parts of it.) A consequence of freely-redistributable software is that a recipient may not receive a copy of your software directly from you or through a distribution channel you originally anticipated. One final argument against license links is noted by MichaelT's comment below: it could allow you to dynamically, retroactively change the license. This could be done intentionally, but it could also be done by accident, if you changed the license between versions of the software, but used the same license link for both versions, thereby clobbering your old license out of existence. Such a switch would add difficulty for people who need to prove they got their older copy under a different license than the current version. So why do I have to keep the license in the project root? I'm not a lawyer, but I've never seen any compelling argument that you do need to keep licenses in the project root. Even the GPL, which specifies that the license must accompany each copy of the work, is silent on how it must accompany the work. 
(This may be because the GPL could be applied in non-software contexts, where the notion of "root directory" is not meaningful.) Keeping the license in the root directory is probably a good idea because it maximizes the likelihood the user will see it, and thereby minimizes both user frustration and the likelihood of complaints against you for trying to hide the license in some obscure directory. If you have many licenses, it might make more sense to place them all in their own folder, and include an obvious project README that contains file paths to find the license for each component. Placing your license in the directory root is a helpful practice also because it can disambiguate the licenses of modules that are licensed differently than the work as a whole. Suppose my project FooProj uses the stand-alone module BarMod. FooProj might be GPL-licensed, while the standalone module might be MIT-licensed. When I first open FooProj, I see a copy of the GPL in the root and understand that the work as a whole is GPL-licensed. When I descend into the folder for BarMod, I see a new license file there, and I understand that the contents of this folder are MIT-licensed. Of course, this is only a helpful aid; you should always indicate the licensing of your modules explicitly in a README, NOTICE, or similar file. In sum, using the file root is a matter of convenience and clarity. I have not seen any legally binding open-source license text that requires it, nor do I know of any reason why it would be legally required. Your license should be reasonably easy for the recipient to discover; including the license in the project root is sufficient, but not necessary, to satisfy this criterion.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208238", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39006/" ] }
208,271
Suppose a REST API, in response to a HTTP GET request, returns some additional data in a sub-object owner : { id: 'xyz', ... some other data ... owner: { name: 'Jo Bloggs', role: 'Programmer' } } Clearly, we don't want anyone to be able to PUT back { id: 'xyz', ... some other data ... owner: { name: 'Jo Bloggs', role: 'CEO' } } and have that succeed. Indeed, we probably aren't going to even implement a way for that to even potentially succeed, in this case. But this question is not just about sub-objects: what, in general, should be done with data that should not be modifiable in a PUT request? Should it be required to be missing from the PUT request? Should it be silently discarded? Should it be checked, and if it differs from the old value of that attribute, return a HTTP error code in the response? Or should we use RFC 6902 JSON patches instead of sending the whole JSON?
There is no rule, either in the W3C spec or the unofficial rules of REST, that says that a PUT must use the same schema/model as its corresponding GET . It's nice if they're similar , but it's not unusual for PUT to do things slightly differently. For example, I've seen a lot of APIs that include some kind of ID in the content returned by a GET , for convenience. But with a PUT , that ID is determined exclusively by the URI and has no meaning in the content. Any ID found in the body will be silently ignored. REST and the web in general is heavily tied to the Robustness Principle : "Be conservative in what you do [send], be liberal in what you accept." If you agree philosophically with this, then the solution is obvious: Ignore any invalid data in PUT requests. That applies to both immutable data, as in your example, and actual nonsense, e.g. unknown fields. PATCH is potentially another option, but you shouldn't implement PATCH unless you're actually going to support partial updates. PATCH means only update the specific attributes I include in the content ; it does not mean replace the entire entity but exclude some specific fields . What you're actually talking about is not really a partial update, it's a full update, idempotent and all, it's just that part of the resource is read-only. A nice thing to do if you choose this option would be to send back a 200 (OK) with the actual updated entity in the response, so that clients can clearly see that the read-only fields were not updated. There are certainly some people who think the other way - that it should be an error to attempt to update a read-only portion of a resource. There is some justification for this, primarily on the basis that you would definitely return an error if the entire resource was read-only and the user tried to update it. It definitely goes against the robustness principle, but you might consider it to be more "self-documenting" for users of your API. There are two conventions for this, both of which correspond to your original ideas, but I'll expand on them. The first is to prohibit the read-only fields from appearing in the content, and return an HTTP 400 (Bad Request) if they do. APIs of this sort should also return an HTTP 400 if there are any other unrecognized/unusable fields. The second is to require the read-only fields to be identical to the current content, and return a 409 (Conflict) if the values do not match. I really dislike the equality check with 409 because it invariably requires the client to do a GET in order to retrieve the current data before being able to do a PUT . That's just not nice and is probably going to lead to poor performance, for somebody, somewhere. I also really don't like 403 (Forbidden) for this as it implies that the entire resource is protected, not just a part of it. So my opinion is, if you absolutely must validate instead of following the robustness principle, validate all of your requests and return a 400 for any that have extra or non-writable fields. Make sure your 400/409/whatever includes information about what the specific problem is and how to fix it. Both of these approaches are valid, but I prefer the former one in keeping with the robustness principle. If you've ever experienced working with a large REST API, you'll appreciate the value of backward compatibility. If you ever decide to remove an existing field or make it read-only, it is a backward compatible change if the server just ignores those fields, and old clients will still work. 
However, if you do strict validation on the content, it is not backward compatible anymore, and old clients will cease to work. The former generally means less work for both the maintainer of an API and its clients.
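A hedged sketch of the "be liberal in what you accept" option from the answer (the types and field names are invented): the handler applies only the writable fields from the incoming representation, leaves the read-only part untouched, and returns 200 (OK) with the entity as actually stored so the client can see that the protected fields did not change.

```java
// Stored entity: "ownerName" is read-only as far as PUT is concerned.
class Article {
    String id;
    String title;
    String body;
    String ownerName;
}

// Incoming PUT payload, already deserialized from JSON by the framework.
class ArticlePutRequest {
    String title;
    String body;
    String ownerName; // may appear in the request, but will be silently ignored
}

class ArticleUpdater {
    // Full replace of the writable part; read-only fields are preserved as-is.
    Article applyPut(Article current, ArticlePutRequest request) {
        current.title = request.title;
        current.body = request.body;
        // current.ownerName intentionally not touched - it is not writable via PUT.
        return current; // send this back in the 200 (OK) response body
    }
}
```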
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208271", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6877/" ] }
208,458
According to the rules of TDD, unit tests are written before production code, but what about integration tests that exercise the interaction between concrete (non-mock) wired objects? Should they be written before unit tests, or after production code just to test the "wiring"? Note that I'm not talking about acceptance or functional tests, but lower-level integration tests.
The Rspec Book , among other BDD resources, suggests a cycle like this: In essence, the process is: While behaviour required Write an integration test for a specific behaviour While integration test failing Write a unit test to fulfil partial behavior While unit test failing Write code to make unit test pass Commit While refactoring can be done Refactor While unit test failing Write code to make unit test pass Commit Push Disclaimer: There's no doubt in my mind that this leads to the best code and product, but it can be time-consuming. There are all sorts of difficulties around data and determinism, when it comes to saying that integration tests should always pass. It's not appropriate in all circumstances; sometimes you just have to get stuff out of the door. That said, having an ideal process in mind is great. It gives you a point from which to compromise.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208458", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99754/" ] }
208,513
On our team, we use Git as our source control. We have several areas of code that are almost independent but have some overlap. Lately we have been discussing workflows and approaches to using source control. One complaint that comes up when I promote using a feature branch workflow is that people often run into complicated merge conflicts that they incorrectly resolve. By complicated, I mean "not obvious as to how to resolve". In light of this, other workflows are being more actively used, such as a "pull rebase"-based workflow. As an advocate of the feature branch approach, I'm not really getting the complaint. Yes, you have to keep your local feature branches up-to-date from master or wherever, but that's about the only real problem I see. I'm thinking that if your merges are always complicated and may have secondary effects, then that's more of a teamwork problem than a Git problem. Am I correct in thinking this? Are complicated merge conflicts a sign of anything good or bad?
It's not impossible that the problem is your code. If your codebase has a lot of inter-relationships between modules, then every change is going to have tendrils everywhere, and every time a dev interacts with anyone else's code, it's going to be a nightmare. I'd tend to think you'd notice this in other ways first, but it's possible that you're so used to it that you can't see it anymore.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208513", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/36853/" ] }
208,656
I already posted this question on SO and it did OK. It was unfortunately closed, though (it only needs one vote to reopen), but someone suggested I post it here as it is a better fit, so the following is literally a copy-paste of the question. I was reading the comments on this answer and I saw this quote: "Object instantiation and object-oriented features are blazing fast to use (faster than C++ in many cases) because they're designed in from the beginning." and "Collections are fast. Standard Java beats standard C/C++ in this area, even for most optimized C code." One user (with really high rep, I might add) boldly defended this claim, stating that heap allocation in Java is better than C++'s, and added this statement defending the collections in Java: "And Java collections are fast compared to C++ collections due largely to the different memory subsystem." So my question is: can any of this really be true, and if so, why is Java's heap allocation so much faster?
This is an interesting question, and the answer is complex. Overall, I think it is fair to say that the JVM garbage collector is very well designed and extremely efficient. It's probably the best general purpose memory management system. C++ can beat the JVM GC with specialised memory allocators that are designed for specific purposes. Examples might be: Per-frame memory allocators, which wipe the whole memory area at periodic intervals. These are frequently used in C++ games, for example, where a temporary memory area is used once per frame and immediately discarded. Custom allocators managing a pool of fixed-sized objects Stack based allocation (although note that the JVM also does this in various circumstances, e.g. via escape analysis ) Specialised memory allocators are, of course, limited by definition. They usually have restrictions on object lifecycle and/or restrictions on the type of object that can be managed. Garbage collection is much more flexible. Garbage collection also gives you a some significant advantages from a performance perspective: Object instantiation is indeed extremely fast. Because of the way that new objects are allocated sequentially in memory, it often requires little more than one pointer addition, which is certainly faster than typical C++ heap allocation algorithms. You avoid the need for lifecycle management costs - e.g. reference counting (sometimes used as an alternative to GC) is extremely poor from a performance perspective since the frequent incrementing and decrementing of reference counts adds a lot of performance overhead (typically much more than GC). If you use immutable objects, you can take advantage of structural sharing to save memory and improve cache efficiency. This is used heavily by functional languages on the JVM like Scala and Clojure. It is very difficult to do this without GC, because it is extremely hard to manage the lifetimes of shared objects. If you believe (as I do) that immutability and structural sharing are key to building large concurrent applications, then this is arguably the biggest performance advantage of GC. You can avoid copying if all types of object and their respective lifecycles are managed by the same garbage collection system. Contrast with C++, where you often have to take full copies of data because the destination requires a different memory management approach or has a different object lifecycle. Java GC has one major downside: because the work of collecting garbage is deferred and done in chunks of work at periodic intervals, it causes occasional GC pauses to collect garbage, which can affect latency. This is usually not a problem for typical applications, but can rule Java out in situations where hard realtime is a requirement (e.g. robotic control). Soft realtime (e.g. games, multimedia) is typically OK.
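The "little more than one pointer addition" claim is about the JVM's internals, but the idea can be modelled in a few lines of Python. This is purely an illustrative toy; real allocators work on raw memory and the class below is not how any JVM is implemented:

    # Toy model of bump-pointer allocation in a young generation:
    # allocating is just "take the current offset, advance it by size".
    class BumpAllocator:
        def __init__(self, capacity):
            self.heap = bytearray(capacity)   # stands in for the nursery
            self.top = 0                      # next free offset

        def allocate(self, size):
            if self.top + size > len(self.heap):
                raise MemoryError("nursery full; a real JVM would trigger a minor GC here")
            offset = self.top
            self.top += size                  # the whole cost of an allocation
            return offset

        def reset(self):
            # After a copying collection the survivors are moved out and the
            # nursery is reused by simply resetting the pointer.
            self.top = 0

    alloc = BumpAllocator(1024)
    first = alloc.allocate(32)    # returns offset 0
    second = alloc.allocate(64)   # returns offset 32

Contrast this with a general-purpose free-list allocator, which has to search for a suitable hole and may fragment over time; that difference is where the "object instantiation is extremely fast" claim comes from.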
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208656", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99883/" ] }
208,677
I'm in the middle of developing a new programming language to solve some business requirements, and this language is targeted at novice users. So there is no support for exception handling in the language, and I wouldn't expect them to use it even if I added it. I've reached the point where I have to implement the divide operator, and I'm wondering how to best handle a divide by zero error. I seem to have only three possible ways to handle this case:

1. Ignore the error and produce 0 as the result, logging a warning if possible.
2. Add NaN as a possible value for numbers, but that raises questions about how to handle NaN values in other areas of the language.
3. Terminate the execution of the program and report to the user that a severe error occurred.

Option #1 seems the only reasonable solution. Option #3 is not practical as this language will be used to run logic as a nightly cron. What are my alternatives to handling a divide by zero error, and what are the risks of going with option #1?
I would strongly advise against #1, because just ignoring errors is a dangerous anti-pattern. It can lead to hard-to-analyze bugs. Setting the result of a division by zero to 0 makes no sense whatsoever, and continuing program execution with a nonsensical value is going to cause trouble. Especially when the program is running unattended. When the program interpreter notices that there is an error in the program (and a division-by-zero is almost always a design error), aborting it and keeping everything as-is is usually preferred over filling your database with garbage. Also, you will unlikely be successful with thoroughly following this pattern through. Sooner or later you will run into error situations which just can't be ignored (like running out of memory or a stack overflow) and you will have to implement a way to terminate the program anyway. Option #2 (using NaN) would be a bit of work, but not as much as you might think. How to handle NaN in different calculations is well-documented in the IEEE 754 standard, so you can likely just do what the language your interpreter is written in does. By the way: Creating a programming language usable by non-programmers is something we've been trying to do since 1964 (Dartmouth BASIC). So far, we've been unsuccessful. But good luck anyway.
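If option #2 is chosen, the IEEE 754 behaviour the interpreter would need to mimic is already visible in Python's floats. This is illustrative only; the questioner's language is hypothetical, and note that Python itself raises an exception on division by zero rather than producing NaN:

    import math

    nan = float("nan")            # what x / 0 would produce under option #2

    # NaN propagates through arithmetic, so a bad division "poisons"
    # every value derived from it instead of silently becoming 0.
    total = 100 + nan * 3
    print(math.isnan(total))      # True

    # Every ordered comparison with NaN is False, and NaN != NaN,
    # which is how downstream checks can detect that something went wrong.
    print(nan == nan)             # False
    print(nan < 1, nan > 1)       # False False

    # Contrast with option #1: treating x / 0 as 0 gives a plausible-looking
    # number that nothing downstream can distinguish from real data.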
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208677", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52871/" ] }
208,700
I've landed my first contract (hooray, self employment!) and the company is asking for time estimates. Programmers are notoriously bad at time estimates, and I know I've been laughably wrong before. It's fixed bid, so I'm not concerned about charging. I'm just concerned about expectation management. So far I've itemized the work I need to do, estimated the time it would take, then significantly padded that time. I'm still nervous about it, though. Is it acceptable to write in time for "unexpected delays"? I've estimated 4 weeks, and I'd like to add a 5th week for problems I haven't thought of. Is that something people do? Would you balk if someone gave you an estimate with that in it?
First, congratulations on your contract! Ok, enough celebrating, let's get down to business. ;) I've been a consultant for over 15 years -- here's my advice. In project management, what you are talking about it "contingency" planning -- and you absolutely should do it, else you are likely to disappoint your client (and make yourself unhappy throughout the project). However, you should NOT specifically put it into the plan as it's own line item -- as then when you need it (and you most likely will), it only makes you look like a bad planner, and by definition you will be behind schedule. There is a motto you live by: "Under-promise and Over-Deliver". Set expectations (in this case delivery time) low, but not so low that the client would be put off, and then beat the timeline (and demonstrate that you are ahead of schedule). Instead, of one contingency block at the end of the project, you should distribute contingency planning throughout the project. Assuming you haven't already committed to delivering in your estimated 4 weeks, suggest and plan for 6, with your planned 4 weeks of effort spread out evenly over all 6 weeks. This will make you, and your client much happier, as you should generally be slightly "ahead of schedule" throughout. :) Important: You should plan progress updates / partial demos on a frequency that: 1) mitigates the risk of building what they asked for but not what they want 2) builds customer confidence while not being overly burdensome on you. Be SURE to plan for this time working with the customer, giving demos, tweaking things, etc. In a project of that length, that is most likely every three days or so. Finally, when you are planning the work, front-load the most "risky" or "unknown" items first and plan the most contingency for them. Risk takes many forms -- and it's most often not the technical stuff. Generally, the biggest risk is that the customer doesn't TRULY know exactly what they want. You want to get stuff in front of them early and often to ensure you are in alignment. This means prototypes, mockups and such. If there is misalignment or misunderstanding, you want to find it as early as possible! Generally, if you find this out very early (before significant work has been done), you can renegotiate the contract to work for both parties. The biggest mistake freelancers make is delivering what the customer asked for, but not what they want. You need to understand that you are responsible for ensuring that both parties are the same page. On riskier technical items, do enough of a proof-of-concept early so that you know you won't crash into a roadblock late in the game. Fight the tendency most people have to focus on the stuff they already know to "build momentum". Have fun with it and I hope this helps. If you would, let us know how it went after you complete the project. Good luck!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208700", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/51196/" ] }
208,776
I have rewritten an open source project from Java to Haxe, then compiled from Haxe to JavaScript, with a totally different UI. So, the question is: is the code considered to be mine after rewriting it in another language in a closed-source project? Can I use it freely with no worries about the original copyrights?
No. It is derived from the original open-source project, thus a so-called derivative work , still protected by the original copyright. In copyright law , a derivative work is an expressive creation that includes major, copyright-protected elements of an original, previously created first work (the underlying work )... For copyright protection to attach to a later, allegedly derivative work, it must display some originality of its own. It cannot be a rote, uncreative variation on the earlier, underlying work. The latter work must contain sufficient new expression, over and above that embodied in the earlier work for the latter work to satisfy copyright law’s requirement of originality ...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208776", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43303/" ] }
208,862
Where I work, using Properties is discouraged. We have code generation tools to "speed things up". All object data ends up being Public fields. If you request properties anyway, you get something like this:

    Public sFoo As String

    Public Property Foo As String
        Get
            Return sFoo
        End Get
        Set(ByVal value As String)
            sFoo = value
        End Set
    End Property

I can't think of a reasonable argument to do this. Are there any good reasons to use both a Public Property and a Public Field?
No. There's no good reason for this. It: Confuses other programmers by doing something that makes no sense, Exposes the innards of your class unnecessarily, Provides two entry points for the same thing, ?? Use Auto-Implemented Properties instead.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/208862", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59570/" ] }
209,036
Does an Open Source license exist that allows me to retain rights to revoke usage of software/source at any time, for any reason, and without warning? I want to allow others to use my software and source code for free but at the same time, I want the ability to revoke rights to usage if I don't agree with the ways that the software and/or source is being used.
That would not be an Open Source license by the definition of the Open Source Initiative : 5. No Discrimination Against Persons or Groups The license must not discriminate against any person or group of persons. Rationale: In order to get the maximum benefit from the process, the maximum diversity of persons and groups should be equally eligible to contribute to open sources. Therefore we forbid any open-source license from locking anybody out of the process. Some countries, including the United States, have export restrictions for certain types of software. An OSD-conformant license may warn licensees of applicable restrictions and remind them that they are obliged to obey the law; however, it may not incorporate such restrictions itself. 6. No Discrimination Against Fields of Endeavor The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research. Rationale: The major intention of this clause is to prohibit license traps that prevent open source from being used commercially. We want commercial users to join our community, not feel excluded from it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209036", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47475/" ] }
209,099
I used to think that it's not, but yesterday I had to do it. It's an application that uses Akka (an actor system implementation for the JVM) to process asynchronous jobs. One of the actors performs some PDF manipulation, and because the library is buggy, it dies with a StackOverflowError every now and then. The second aspect is that Akka is configured to shutdown its whole actor system if any JVM fatal error (e.g. StackOverflowError) is caught. The third aspect is that this actor system is embedded inside a web app (for WTF-ish, legacy, reasons), so when the actor system is shut down, the web app is not. The net effect is that on a StackOverflowError our job processing application becomes just an empty web app. As a quick fix I had to catch the StackOverflowError being thrown, so that the thread pool of the actor system isn't torn down. This lead me to think that maybe it's sometimes okay to catch such errors especially in contexts like this? When there's a thread pool processing arbitrary tasks? Unlike an OutOfMemoryError I can't imagine how a StackOverflowError can leave an application in an inconsistent state. The stack is cleared after such an error, so computation can go on normally. But maybe I'm missing something important. Also, let it be noted that I'm all for fixing the error in the first place (as a matter of fact I have already fixed an SOE in this same app a few days ago), but I really don't know when this kind of situation might arise. Why would it be better to restart the JVM process instead of catching the StackOverflowError , mark that job as failed, and continue with my business? Is there any compelling reason to never catch SOEs? Except "best practices", which is a vague term that tells me nothing.
As a general rule, if it were absolutely never, ever acceptable to do something, and there was agreement about that, the language implementers would not have allowed it. There are almost no such unanimously clear-cut maxims. (Luckily, because that's what keeps us human programmers in jobs!) It looks very much as if you've found a situation where catching this error is the best option for you: it lets your application work, while all other alternatives don't, and that's what counts in the end. All "best practices" are simply summations of long experiences with many cases that can usually be used in place of a detailed analysis of a specific case to save time; in your case, you've already done the specific analysis and got a different result. Congratulations, you're capable of independent thought! (That said, surely there are situations where a stack overflow might leave an application inconsistent just like a memory exhaustion. Just imagine that some object is constructed and then initialized with the help of nested internal method calls - if one of them throws, the object may very well be in a state not supposed to be possible, just as if an allocation had failed. But that doesn't mean that your solution couldn't still be the best one.)
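The question is JVM/Akka-specific, but the shape of the pragmatic fix (fail the one job, keep the worker alive) looks the same in any job loop. Here is a Python analogue using RecursionError, Python's rough equivalent of a stack overflow; the `queue`, `process`, `job.id` and `job.mark_failed` names are hypothetical:

    import logging

    def worker_loop(queue, process):
        # process() stands in for the buggy PDF routine; queue yields job objects.
        for job in queue:
            try:
                process(job)
            except RecursionError:
                # The stack has already been unwound by the time we get here,
                # so it is reasonably safe to mark this one job failed and
                # carry on, instead of letting the whole worker pool die.
                logging.exception("job %s blew the stack, marking as failed", job.id)
                job.mark_failed()
            except MemoryError:
                # Unlike a stack overflow, we can say very little about the
                # state of the process here; better to let it crash and restart.
                raise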
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209099", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/5546/" ] }
209,123
I was making a Python program to measure the growth of codereview.SE. My approach was to get the "Site stats" shown on the front page and store them on my hard drive. I plan to do this once every day. So far I have made enough to get the stats and append them to a text file. The Python script can be viewed on GitHub. The format I am using is the following:

    22-08-2013 questions 9073 answers 15326 answered 88 users 26102 visitors/day 7407
    22-08-2013 questions 9073 answers 15326 answered 88 users 26102 visitors/day 7407

I just ran the script twice to get the format I would be using in the file. Initially this seemed good to me because I would be storing it myself and the format would be the same, so it would be easily parsed, but now I am not sure. It seems that using a database should be better here because that way retrieving data should be easier. Just a note: I have never used any database and have no knowledge of SQL, MySQL or any other variants of RDBMS. So this brings me to the question. When should a database be preferred for storing the data over storing the data in a text file? Are there some pointers that I can look for when making decisions about whether I need a database or simple text files? PS: If better tags can be added, please do so. I had some doubts about which tags could be added.
When should a database be preferred for storing the data over storing the data in a text file? Wikipedia tells us that a database is an organized collection of data . By that measure, your text file is a database. It goes on to say: The data are typically organized to model relevant aspects of reality in a way that supports processes requiring this information. For example, modeling the availability of rooms in hotels in a way that supports finding a hotel with vacancies. That part is subjective -- it doesn't tell us specifically how the data should be modeled or what operations need to be optimized. Your text file consists of a number of distinct records, one for each day, so you're modeling an aspect of reality in a way that's relevant to your problem. I realize that when you say "database" you're probably thinking of some sort of relational database management system, but thinking of your text file as a database changes your question from "when should I use a database?" to "what kind of database should I use?" Seeing things in that light makes the answer easier to see: use a better database when the one you've got no longer meets your requirements. If your Python script and simple text file work well enough, there's no need to change. With only one new record per day and computers getting faster each year, I suspect that your current solution could be viable for a long time. A decade's worth of data would give you only 3650 records that, once parsed, would probably require less than 75 kilobytes. Imagine that instead of one small record per day, you decided to record every question asked on CodeReview, who asked it, and when. Furthermore, you also collect all the answers and the relevant metadata. You could store all that in a text file, but a flat file would make it difficult to find information when you needed it. There'd be too much data to read the whole thing into memory, so whenever you wanted to find a question or answer, you'd have to scan through the file until you found what you were looking for. When you wanted to find all the questions asked by a given user, you'd have to scan through the entire file. If you wanted to find all the questions that have "bugs" as a tag, you'd have to scan through the file. That'd be horribly slow, so you might decide to speed things up by building some indexes that tell you where to look in the file to find a given record. You could have an index for questions, another for users, a third for answers, and so on. When you wanted to find a question you'd search the (much smaller) question index, get the position of the question in the main data file, and jump quickly to the right spot in the file. That'd be a big performance improvement. Indeed, that's pretty much what a database management system is. So, use a DBMS when it's what you need. Use it when you have a lot of data, when you need to be able to access that data quickly and perhaps in ways that you can't entirely predict at the outset. If you have different kinds of data -- different types of records -- that are connected to each other, use a RDBMS so that you can relate the various records appropriately.
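To make the index idea concrete, here is a small Python sketch. It assumes records like the questioner's, one per line, keyed by the leading date; the file name and field layout are illustrative:

    # Build an in-memory index of byte offsets so a single record can be
    # read with one seek instead of scanning the whole file.
    def build_index(path):
        index = {}
        with open(path, "rb") as f:
            while True:
                offset = f.tell()
                line = f.readline()
                if not line:
                    break
                if line.strip():
                    index[line.split()[0].decode()] = offset   # e.g. "22-08-2013"
        return index

    def read_record(path, index, date):
        with open(path, "rb") as f:
            f.seek(index[date])          # jump straight to the record
            return f.readline().decode().rstrip()

    # index = build_index("stats.txt")
    # print(read_record("stats.txt", index, "22-08-2013"))

A relational database does essentially this for you, plus keeping the indexes up to date, handling concurrent writers, and answering ad-hoc queries you did not plan for.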
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209123", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92918/" ] }
209,295
The computer scientist Peter Norvig argued in his essay Teach Yourself To Program in 10 Years that you need about 10,000 hours of practice. But Jeff Atwood argued in his post How To Become a Better Programmer by Not Programming that he believes the only way to become a better programmer is by not programming. These articles are in opposition. Is there a scientific evidence to prove the Jeff Atwood approach?
You're missing the point. Jeff Atwood is saying that being an excellent programmer requires more than just coding skills. It also requires being a good designer, working well with other people, and in general becoming a better thinker and problem solver. The greatest missing skill is somebody who's both good at understanding the engineering and who has good relationships with the hard-core engineers, and bridges that to working with the customers. -- Bill Gates Peter Norvig's point is that you can't just pick up a copy of "Become a Master Programmer in 24 Hours" and expect that to work. But that's exactly how many folks who ask questions at Stack Overflow seem to approach programming. They think they can load up Eclipse, learn a few keywords, and write the next Angry Birds. It takes a little more than that.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209295", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92949/" ] }
209,311
I am a web and software developer involved in the creation of mobile apps. I am currently working on a project with a looming deadline. I am wondering if I should be committing code rapidly in large chunks and testing later, or making tiny commits and testing each one. Thank you!
"We test later" always means "we test never", because there's never any time left to do it later. Whenever you have something changed that is worth testing, test it now. You won't introduce fewer bugs just because you make more commits and fewer tests between the commits. Instead, you will pile bogus code on bogus code which, after n changes, becomes not n times hard to fix, but 2^n times, because you lose the feedback on which of your changes was the root cause for the bug.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209311", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100113/" ] }
209,376
Explaining the difference between strictness of languages and paradigms to a colleague of mine, I ended up asserting that: Tolerant languages, such as dynamic and interpreted languages, are used best for prototypes and small projects or medium-size web applications. When choosing elegant dynamic languages such as Python or JavaScript with Node.js, the benefits are: Fast development, Reduced boilerplate code, Ability to attract young, creative programmers who flee “corporate languages” like Java. Statically typed/compiled languages are best for applications which require higher strictness such as business-critical apps or apps for medium to large-size apps. Well-known paradigms and patterns developed for decades, Ease of static checking, Ability to find many professional developers with decades of experience. Strict languages such as Haskell, Ada or techniques such as Code contracts in C# are better for systems which favor safety over flexibility (even if Haskell can be extremely flexible), such as life critical systems and systems which are expected to be extremely stable. The benefits are: Ability to catch as many bugs as possible at compile time, Ease of static checking, Ease of formal proofs. However, by looking at the languages and technologies used for large-scale projects by large corporations, it seems that my assertion is wrong . For example, Python is successfully used for large systems such as YouTube or other Google applications which require an important amount of strictness. Is there still a correlation between the scale of the project and the strictness of the language/paradigm which should be used? Is there a third factor that I've forgotten to take in account? Where am I wrong?
An interesting case study on the matters of scaling projects that use dynamic and interpreted language can be found in Beginning Scala by David Pollak. I started searching for a way to express the code in my brain in a simpler, more direct way. I found Ruby and Rails. I felt liberated. Ruby allowed me to express concepts in far fewer lines of code. Rails was so much easier to use than Spring MVC, Hibernate, and the other “streamlined” Java web frameworks. With Ruby and Rails, I got to express a lot more of what was in my head in a shorter period of time. It was similar to the liberation I felt when I moved from C++ to Java... As my Ruby and Rails projects grew beyond a few thousand lines of code and as I added team members to my projects, the challenges of dynamic languages became apparent. We were spending more than half our coding time writing tests, and much of the productivity gains we saw were lost in test writing . Most of the tests would have been unnecessary in Java because most of them were geared toward making sure that we’d updated the callers when we refactored code by changing method names or parameter counts. Also, I found that working on teams where there were mind melds between two to four team members, things went well in Ruby, but as we tried to bring new members onto the team, the mental connections were hard to transmit to new team members . I went looking for a new language and development environment. I was looking for a language that was as expressive as Ruby but as safe and high-performance as Java... As you can see, major challenges in project scaling for author turned out to be in test development and knowledge transfer. In particular, author goes into more details in explaining the differences in test writing between dynamically and statically typed languages in Chapter 7. In section "Poignantly Killing Bunnies: Dwemthy’s Stairs" author discusses Scala port of a particular Ruby example: Why the Lucky Stiff... introduces some of Ruby’s metaprogramming concepts in Dwemthy’s Array in which a rabbit battles an array of creatures. N8han14 updated the example to work in Scala ... Compared to the Ruby code, the library parts of the Scala code were more complex. We had to do a lot of work to make sure our types were correct. We had to manually rewrite Creature’s properties in the DupMonster and the CreatureCons classes. This is more work than method_missing . We also had to do a fair amount of work to support immutability in our Creatures and Weapons. On the other hand, the result was much more powerful than the Ruby version. If we had to write tests for our Ruby code to test what the Scala compiler assures us of, we’d need a lot more lines of code. For example, we can be sure that our Rabbit could not wield an Axe. To get this assurance in Ruby, we’d have to write a test that makes sure that invoking |^ on a Rabbit fails. Our Scala version ensures that only the Weapons defined for a given Creature can be used by that Creature, something that would require a lot of runtime reflection in Ruby... Reading above can make one think that as projects grow even larger, test writing might become prohibitively cumbersome. This reasoning would be wrong, as evidenced by examples of successful very large projects mentioned in this very question ("Python is successfully used for... YouTube"). Thing is, scaling of the projects isn't really straightforward. 
Very large, long-living projects can "afford" different test development process, with production quality test suites, professional test dev teams and other heavyweight stuff. Youtube test suites or Java Compatibility Kit sure live a different life than tests in a small tutorial project like Dwemthy’s Array .
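The "tests you only need because there is no compiler" point is easy to demonstrate. The Rabbit and Axe names below echo the Dwemthy example, but the classes themselves are toy versions written for illustration:

    import pytest

    class Axe:
        pass

    class Rabbit:
        allowed_weapons = ()               # a rabbit gets no weapons

        def wield(self, weapon):
            if not isinstance(weapon, self.allowed_weapons):
                raise TypeError("a Rabbit cannot wield that")
            self.weapon = weapon

    def test_rabbit_cannot_wield_axe():
        # In the Scala version this whole test is unnecessary: the type of
        # Rabbit's weapon slot makes the call illegal at compile time.
        with pytest.raises(TypeError):
            Rabbit().wield(Axe())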
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209376", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
209,421
I am developing a web application which should allow users to schedule appointments based on their time zone. I am storing the user's scheduled datetime as a server datetime in a database field. When showing schedule information, I retrieve the value from the database and convert it into the user's time zone; the conversion is done in the code base, based on the user's time zone. Please suggest: is this best practice, or does an easier way exist?
Welcome to one of the hardest problems in non-computational programming - properly representing dates and times to end users. Realistically, timestamps should be stored in a fixed single representation regardless of how they will be interpreted, because no matter how hard you try, you will always have ambiguous cases, and you can't resolve them without a fixed representation. And you've picked one of the worst of the use cases - scheduling an appointment. The only worse common use case is air travel, where a trip might start in one time zone and end in another, possible at an earlier local time. Always, always, ALWAYS store in UTC and display in the user's preferred or explicitly specified timezone. If at all possible, make the user tell you what timezone they believe the timestamp to be in when they input it ( e.g. , have an explicit timezone field and pre-populate it with their preferred zone).
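A small sketch of that flow with Python's standard library (`zoneinfo` needs Python 3.9+); the function names and the sample zones are illustrative only:

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    def to_storage(local_wall_time: datetime, user_tz: str) -> datetime:
        """User typed a wall-clock time; stamp it with their zone, store as UTC."""
        aware = local_wall_time.replace(tzinfo=ZoneInfo(user_tz))
        return aware.astimezone(timezone.utc)

    def to_display(stored_utc: datetime, user_tz: str) -> datetime:
        """Read the UTC value back and render it in the viewer's zone."""
        return stored_utc.astimezone(ZoneInfo(user_tz))

    appointment = to_storage(datetime(2013, 9, 1, 14, 30), "Asia/Kolkata")
    print(appointment.isoformat())                      # 2013-09-01T09:00:00+00:00
    print(to_display(appointment, "America/New_York"))  # 2013-09-01 05:00:00-04:00

The key point is that the stored value is always UTC; the user's declared zone is just presentation metadata applied on the way in and on the way out.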
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209421", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46506/" ] }
209,524
My school's CS program avoids any mention of object oriented programming, so I've been doing some reading on my own to supplement it -- specifically, Object Oriented Software Construction by Bertrand Meyer. Meyer makes the point repeatedly that classes should hide as much information about their implementation as possible, which makes sense. In particular, he argues repeatedly that attributes (i.e., static, non-computed properties of classes) and routines (properties of classes that correspond to function/procedure calls) should be indistinguishable from each other. For example, if a class Person has the attribute age , he asserts that it should be impossible to tell, from the notation, whether Person.age corresponds internally to something like return current_year - self.birth_date or simply return self.age , where self.age has been defined as a constant attribute. This makes sense to me. However, he goes on to claim the following: The standard client documentation for a class, known as the short form of the class, will be devised so as not to reveal whether a given feature is an attribute or a function (in cases for which it could be either). i.e., he claims that even the documentation for the class should avoid specifying whether or not a "getter" performs any computation. This, I don't follow. Isn't the documentation the one place where it would be important to inform users of this distinction? If I were to design a database filled with Person objects, wouldn't it be important to know whether or not Person.age is an expensive call, so I could decide whether or not to implement some sort of cache for it? Have I misunderstood what he's saying, or is he just a particularly extreme example of OOP design philosophy?
I don't think Meyer's point is that you shouldn't tell the user when you have an expensive operation. If your function is going to hit the database, or make a request to a webserver, and spend several hours computing, other code is going to need to know that. But the coder using your class doesn't need to know whether you've implemented: return currentAge; or: return getCurrentYear() - yearBorn; The performance characteristics between those two approaches is so minimal it shouldn't matter. The coder using your class really shouldn't care which you have. That's meyer's point. But that's not always the case, for example, suppose you have a size method on a container. That could be implemented: return size; or return end_pointer - start_pointer; or it could be: count = 0 for(Node * node = firstNode; node; node = node->next) { count++ } return count The difference between the first two really shouldn't matter. But the last one could have serious performance ramifications. That's why the STL, for example, says that .size() is O(1) . It doesn't document exactly how the size is calculated, but it does give me the performance characteristics. So : document performance issues. Don't document implementation details. I don't care how std::sort sorts my stuff, as long as it does so properly and efficiently. Your class also shouldn't document how it calculates things, but if something has an unexpected performance profile, document that.
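In code-comment form the distinction might look like this (a toy Python sketch; the class names and linked-list layout are made up, and the point is what the docstring promises, not how the count is kept):

    class _Node:
        def __init__(self, value, next_node=None):
            self.value, self.next = value, next_node

    class ArrayBag:
        def __init__(self):
            self._items = []
        def add(self, value):
            self._items.append(value)
        def size(self):
            """Number of elements. O(1) - documented, because callers care."""
            return len(self._items)        # how it is done stays private

    class LinkedBag:
        def __init__(self):
            self._head = None
        def add(self, value):
            self._head = _Node(value, self._head)
        def size(self):
            """Number of elements. O(n) - walks the list, so avoid calling it in a loop."""
            count, node = 0, self._head
            while node is not None:
                count += 1
                node = node.next
            return count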
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209524", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100569/" ] }
209,532
I'm watching a video on C# about variables. The author declares a variable inside a method and names it like this: string MyName = "James"; My question is: which convention is recommended by the .NET Framework? Is it Pascal casing, as in the above example, or is it camel case?
I don't think there's something like an 'official' convention. As far as I know, the following is considered good practice by many experienced C# developers: PascalCase for public member variables (string MyName = "James") camelCase for local variables (string myName = "James") _leadingUnderscore for private member variables (string _myName = "James") With this approach, one can distinguish between local variables as well as public and private members by the case of their first letter. As with any coding convention, this is also subject to personal preferences. Therefore, there is no definite answer. A general goal should be to keep the code as readable and comprehensible as possible.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209532", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100577/" ] }
209,565
Many people use the term Snake Case to describe variables or other symbols with_the_form_of_underscores . In the past week, I've launched several broad searches. I can't find anything about the origin of this term that is more detailed than what Wikipedia says (above) When did the earliest record of this term enter into use?
A person named Jack Dahlgren claims on Quora he invented the term in 2002 when he worked at Intel. Here's what he posted at above link: I believe that I am the one who coined this term back in 2002 when I was at Intel and we were evaluating Sharepoint Team Services. Based on the unfortunate tendency of Sharepoint to escape spaces in names with [underscore] characters (among other things) I recommended a policy of using underscores to replace all spaces so that URLs would be slightly shorter and much more readable. Given the existing "camelCase" name with humps in the middle, I called it "snake_case" or if there were two flat spots, I jokingly called it "road_kill_case". Considering the size of Intel and my interactions with Microsoft product team, it is possible that this is the origin, but it is such a simple phase that I think it could have been invented independently elsewhere too.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209565", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/45150/" ] }
209,678
While looking at C code that divides or multiplies by 2, I found that the shift operator is used rather than the division operator. What is the advantage of using a shift over the division operator?
Multiplication is complex typically... unless one of the multiplicands is the base the numbers are in themselves. When working with base 10 math, multiplying by 10 is trivial: "append the same number of zeros as the 10 has". 2 * 10 = 20 and 3 * 100 = 300. This is very easy for us. The exact same rule exists in binary. In binary, 2 * 3 = 6 is 10 * 11 = 110, and 4 * 3 = 12 is 100 * 11 = 1100. For a system already working with bits (ANDs and ORs), operations such as shift and roll already exist as part of the standard tool set. It just happens that translating N * 2^M into binary becomes "shift N by M places". If we are doing something that isn't a power of 2 in binary, we've got to go back to the old fashioned multiply and add. Granted, binary is a bit 'easier', but a bit more tedious at the same time. 11 * 14 becomes (from Wikipedia on binary multiplier - a good read as it links to other multiplication algorithms for binary... shifting powers of two is still much easier):

         1011   (this is 11 in decimal)
       x 1110   (this is 14 in decimal)
       ======
         0000   (this is 1011 x 0)
        1011    (this is 1011 x 1, shifted one position to the left)
       1011     (this is 1011 x 1, shifted two positions to the left)
    +  1011     (this is 1011 x 1, shifted three positions to the left)
    =========
     10011010   (this is 154 in decimal)

You can see, we're still doing shifts, and adds. But let's change that to 11 * 8 to see how easy it becomes and why we can just skip to the answer:

         1011   (this is 11 in decimal)
       x 1000   (this is 8 in decimal)
       ======
         0000   (this is 1011 x 0)
        0000    (this is 1011 x 0, shifted one position to the left)
       0000     (this is 1011 x 0, shifted two positions to the left)
    +  1011     (this is 1011 x 1, shifted three positions to the left)
    =========
      1011000   (this is 88 in decimal)

By just skipping to that last step, we have drastically simplified the entire problem without adding lots of 0s that are still 0s. Dividing is the same thing as multiplying, just the reverse. Just as 400 / 100 can be summarized as 'cancel the zeros', so too can this be done in binary. Using the example of 88 / 8 from the above example:

            1011 r0
          ________
     1000 )1011000
            1000
            ----
             0110
             0000
             ----
             1100
             1000
             ----
              1000
              1000
              ----
              0000

You can see that the steps in the long way of doing long division in binary are again quite tedious, and for a power of two, you can just skip to the answer by, in effect, canceling the zeros. (As a side note, if this is an interesting area for you, you may find browsing the binary tag on Math.SE, well... interesting.)
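The equivalence is easy to sanity-check in Python; the interesting caveat is signed values:

    for x in range(1000):
        assert x << 3 == x * 8      # shifting left by 3 is multiplying by 2**3
        assert x >> 3 == x // 8     # shifting right by 3 is floor-dividing by 2**3

    # Python's // and >> both floor, so -7 // 2 and -7 >> 1 are both -4.
    # C's integer division truncates toward zero instead (-7 / 2 == -3),
    # which is why compilers only swap a signed division for a bare shift
    # when they can prove the operand is non-negative, or add a fix-up step.
    print(-7 >> 1, -7 // 2)         # -4 -4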
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209678", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43250/" ] }
209,693
I'm working on an enterprise project which will be deployed in many SMBs and enterprises. Support for this project is likely to be a struggle, so I want to create a standard pattern of error codes (like HTTP status codes). This will enable help desk people to refer to documents and troubleshoot problems as quickly as possible. What are the best practices and recommendations for doing this? Any help will be useful.
There is a difference between error codes and error return values. An error code is for the user and help desk. An error return value is a coding technique to indicate that your code has encountered an error. One can implement error codes using error return values, but I would advice against that. Exceptions are the modern way to report errors, and there is no reason why they should not carry an error code within them. This is how I would organize it (Note that points 2-6 are language agnostic): Use a custom exception type with an additional ErrorCode property. The catch in the main loop will report this field in the usual way (log file / error pop-up / error reply). Use the same exception type in all of your code. Do not start at 1 and don't use leading zeros. Keep all error codes to the same length, so a wrong error code is easy to spot. Starting at 1000 usually is good enough. Maybe add a leading 'E' to make them clearly identifiable for users (especially useful when the support desk has to instruct users how to spot the error code). Keep a list of all error codes, but don't do this in your code . Keep a short list on a wiki-page for developers, which they can easily edit when they need a new code. The help desk should have a separate list on their own wiki. Do not try to enforce a structure on the error codes. There will always be hard-to-classify errors and you don't want to discuss for hours whether an error should be in the 45xx group or in the 54xx group. Be pragmatic . Assign each throw in your code a separate code. Even though you think it's the same cause, the help desk might need to do different things in different cases. It's easier for them to have "E1234: See E1235" in their wiki, than to get the user to confess what he has done wrong. Split error codes if the help desk asks for it. A simple if (...) throw new FooException(1234, ".."); else throw new FooException(1235, ".."); line in your code might save half an hour for the help desk. And never forget that the purpose of the error codes is to make life easier for the help desk .
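Points 1 and 2 might look like this in Python; the class name, the code number and the config-file example are illustrative, not prescriptive:

    class AppError(Exception):
        """Single exception type used everywhere; carries the user-facing code."""
        def __init__(self, error_code, message):
            super().__init__(message)
            self.error_code = error_code

    def load_config(path):
        try:
            with open(path) as f:
                return f.read()
        except FileNotFoundError:
            raise AppError(1042, f"configuration file {path} is missing")

    # The one catch in the main loop turns the code into what the help desk sees.
    def main():
        try:
            load_config("/etc/myapp.conf")
        except AppError as e:
            print(f"E{e.error_code}: {e}")   # e.g. "E1042: configuration file ... is missing"

    main()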
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209693", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100162/" ] }
209,760
This question has been bothering me for some time now and today I figured I would Google it. I've read some stuff about it and it seemed very similar to what I've always known as processor cache . Is there a difference between the two or am I right when I think they are the same? Is a register actually required to be inside a CPU for it to work? According to Wikipedia a register is a place in the CPU where memory can be quickly accessed and modified before being sent back to the RAM. Did I understand this wrong or are the cache and register actually the same?
They're not quite the same. The registers are the places where the values that the CPU is actually working on are located. The CPU design is such that it is only able to actually modify or otherwise act on a value when it is in a register. So registers can work logic, whereas memory (including cache) can only hold values the CPU reads from and writes to.

Imagine a carpenter at work. He has a few items in his hands (registers) and then, very close by on his workbench (cache), things he is frequently working on but not using right this moment, and then in the workshop (main memory) things that pertain to the project at hand but that are not immediately important enough to be on the workbench.

EDIT: Here's a simple explanation for how register logic works. Let's imagine we have four registers named R1..R4. If you compile a statement that looks like this: x = y + z * 3; the compiler would output machine code that (when disassembled) looks something like this:

    LOAD R1, ADDRESS_Z   //move the value of Z into register 1
    MUL R1, 3            //multiply the value of register 1 by 3
    LOAD R2, ADDRESS_Y   //move the value of Y into register 2
    ADD R1, R2           //adds the value in R2 to the value in R1
    STORE R1, ADDRESS_X  //move the value of register 1 into X

Since most modern CPUs have registers that are either 32 or 64 bits wide, they can do math on any value up to the size they can hold. They don't need special registers for smaller values; they just use special ASM instructions that tell it to only use part of the register. And, much like the carpenter with only two hands, registers can only hold a small amount of data at once, but they can be reused, passing active data in and out of them, which means that "a lot of registers" don't end up being needed. (Having a lot available does allow compilers to generate faster code, of course, but it's not strictly necessary.)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209760", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/99300/" ] }
209,982
In working with Python for the first time, I've found that I end up writing multiple classes in the same file, as opposed to other languages like Java, which uses one file per class. Usually, these classes are made up of one abstract base class with 1-2 concrete implementations whose use varies slightly. I've posted one such file below:

    class Logger(object):
        def __init__(self, path, fileName):
            self.logFile = open(path + '/' + fileName, 'w+')
            self.logFile.seek(0, 2)

        def log(self, stringToLog):
            self.logFile.write(stringToLog)

        def __del__(self):
            self.logFile.close()


    class TestLogger(Logger):
        def __init__(self, serialNumber):
            Logger.__init__(self, '/tests/ModuleName', serialNumber)

        def readStatusLine(self):
            self.logFile.seek(0, 0)
            statusLine = self.logFile.readline()
            self.logFile.seek(0, 2)
            return statusLine

        def modifyStatusLine(self, newStatusLine):
            self.logFile.seek(0, 0)
            self.logFile.write(newStatusLine)
            self.logFile.seek(0, 2)


    class GenericLogger(Logger):
        def __init__(self, fileName):
            Logger.__init__(self, '/tests/GPIO', fileName)

        def logGPIOError(self, errorCode):
            self.logFile.write(str(errorCode))

As seen above, I have a Logger base class, with a couple of implementation differences below that. The question: Is this standard for Python, or for any language? What problems, if any, could arise from using this approach? Please note: I'm not really looking for guidance on this specific file, but in a more general sense. What if the classes ended up having 3-5 moderately complex methods? Would it make sense to split them then? Where is the cutoff for saying you should split a file up?
It's fine. It's fine in C++ as well, for reference. Keeping tightly-coupled things together is sensible practice. Avoiding inappropriate coupling is also good practice. Striking the right balance isn't a matter of strict rules, but of, well, striking a balance between different concerns. Some rules of thumb: Size Excessively large files can be ugly, but that's hardly the case here. Ugliness is probably a good enough reason to split a file, but developing that aesthetic sense is largely a matter of experience, so it doesn't help you figure out what to do a priori Separation of Concerns If your concrete implementations have very different internal concerns, your single file accumulates all those concerns. For example, implementations with non-overlapping dependencies make your single file depend on the union of all those dependencies. So, it might sometimes be reasonable to consider the sub-classes' coupling to their dependencies outweighs their coupling to the interface (or conversely, the concern of implementing an interface is weaker than the concerns internal to that implementation). As a specific example, take a generic database interface. Concrete implementations using an in-memory DB, an SQL RDBMS and a web query respectively may have nothing in common apart from the interface, and forcing everyone who wants the lightweight in-memory version to also import an SQL library is nasty. Encapsulation Although you can write well-encapsulated classes in the same module, it could encourage unnecessary coupling just because you have access to implementation details that wouldn't otherwise be exported outside the module. This is just poor style I think, but you could enforce better discipline by splitting the module if you really can't break the habit.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/209982", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/93290/" ] }
210,255
When I worked as a freelancer, I encountered lots of cases where customers were protecting their ideas and source code of their projects (such as web applications) as much as possible, no matter how unimportant, uninteresting and unoriginal were the projects and the concepts behind. I've already posted a question about keeping the ideas secret , and received many great answers. Now, my concern is more about source code secrecy. According to my observations of: The codebases I had to work on during my career, My own willingness to keep some of my own source code secret, and: A few articles like, for example, Open response to Simon Stuart by the popular Programmers.SE contributor Mason Wheeler , I conclude that source code is kept secret mostly for those reasons: Because the author is ashamed of the code of such a bad quality, or the company fears losing reputation if somebody sees such bad codebase, or that given the low quality of the codebase, it will not bring anything useful to anybody to open source it: even if somebody would be interested, he would hardly be able to run the solution (or, often, even compile). Because parts of the code are stolen (mostly from open source projects covered by a license which restricts its usage in a given situation), Because the code relies on security by obscurity and the author doesn't care about Kerckhoffs's principle . Because the product is so breakable that showing the code would cause too much harm: if a closed-source app with all those security leaks would withstand a newbie hacker, the same open sourced app would have far smaller chances, because even the beginner hacker would just have to study the code to discover all the holes. If it's not clear what I'm talking about, here's an example: if (credentials.password === 'masterPassword12345') { isLoggedIn = true currentUser = credentials.userName } else { authenticate(credentials) } Because the author over-estimated the source code (and his own skills and expertise). Example: believing that a home-made cryptography-related algorithm (which was never reviewed by anybody) is better than any well-known one. Because the author believes that the idea behind the code is great, and that it would be stolen. Because of the "It's not perfect enough" syndrome. In other words, the developer is willing to release the source code to public when the code is "good enough", but day after day, there are still things to improve, so the code would never be released. All of those reasons give a rather negative image of people who are against publishing the source code. Are there valid cases to not release to the public the high-quality code which follows Kerckhoffs's principle?
Some people and most companies have a strange perception about the value of code. "We spent $100,000 on this project therefore the code must be worth that" and feel a need to protect it. In reality most code is more like paint. You spend $100 on paint and $200 dollars to apply it to your walls. But now the paint is worth nothing, you cannot sell it, nobody wants it, and even if they did you cannot take it of your wall and put it on somebody else's wall. It may enhance the value of the building but you cannot realize this without selling the building. You could "steal" Amazons code base (most of it is freely available from various open source projects) and set up an Ammassons web site but you would not take over much of Amazons business. Code is a necessary part of any modern businesses infrastructure, but, it only has value as part of a process and culture, on its own its worth nothing. I would add there are some situations where the code is vital to the business and would be valuable enough to any competitor that it should be kept secret: To prevent malicious manipulation of your facilities -- a good example would be Google's "page rank" system which is constantly being "gamed" to give web sites an unjustifiably high rank. Automated Trading Algorithms -- an unscrupulous competitor could study the algorithm and fool your system into selling too low and buying too high. A "faster/better" algorithim -- if your software' s unique selling point is a faster better algorithm for sorting/compressing/whatever then it probably pays to keep this a trade secret for as long as possible.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210255", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/6605/" ] }
210,272
I really don't know where else to ask, so here it goes. I'm working at a very tiny company that makes ERP software and websites: 1 developer with 10+ years of experience, 2 developers with 3+ years of experience, and 3 developers with 1+ year of experience. That's it. No team, no DBA, no sysadmin. There is no one around here with expertise in web development, so I happened to end up in charge of web development. But I've only got 3 years of experience as a developer! I know that in a small company you are asked to do lots of different things, but is it too much if I have to do sysadmin work, database architecture, software design and development? Oh, plus I'm asked to do all that across different platforms. I am currently working with JSP, ASP, MSSQL, MySQL, Oracle, Windows Server, and Linux. On the database side, I do everything from writing queries to backup & recovery, plus server setup, system crash recovery, and DB & server migration. Plus HTML, JavaScript and CSS :) Number of projects that I'm in charge of: 5. I'm not an expert at all of them! I have to search the Internet, read tutorials and ask questions on Stack Overflow to get it all done! So I ask you: is this normal? Is this a normal practice? Will I face the same situation at whichever small company I go to? I'm working in South Korea. How is it in your country? P.S. Thank you all for your opinions. I was going to upvote all of you because you all helped me see it in a different way, but apparently you need 15 reputation to upvote :(
In my experience, yes, it is perfectly normal for developers in small companies to be expected to cover a broad range of roles. It is certainly normal for a company so small that it only has three developers to not have a specialized DBA or sysadmin. However, I would find it unusual for such a small company to use such a broad range of technologies. JSP and ASP? Windows and Linux? SQL Server, MySQL and Oracle?? Usually, small companies will focus on one technology platform to avoid spreading themselves too thin. If your work involved full-stack development on one platform - e.g. ASP.NET + SQL Server + Windows, or Java + Oracle + Linux - would you still feel overwhelmed? Anyway, if you want to specialize to a greater extent, yes, you should look to larger companies. The bigger the team, the more plausible and beneficial it is to have specialists.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210272", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/101245/" ] }
210,274
Does Lisp still have any special feature which has NOT been adopted by other programming languages? By Lisp, I mean all the Lisp programming languages as a whole. I've been told how amazing Lisp is and know that many languages have been inspired by Lisp. But does Lisp still have any exclusive design feature that just cannot be done in any other language? The reason I asked the question is that recently, being an amateur programmer myself, I began to learn Clojure just for fun, and the result is that I found lots of Lisp-related posts and comments, saying but one thing: "Lisp is unique", but other modern programming languages have already adopted and stolen lots of ideas from Lisp, like conditionals, recursion, and the function as a first-class citizen. And even metaprogramming can be done by many languages. Did I miss out something and is "Lisp still different"? Or I'm lucky because other modern languages have stolen all the good parts from Lisp so that it's not necessary to dig into the parentheses Lisp world , and "Lisp was different".
A canonical reference for this type of question is Paul Graham's What Made Lisp Different . The two remaining key features of Lisp that are not widely available, according to this article at the time of its writing, are: 8. A notation for code using trees of symbols. 9. The whole language always available. There is no real distinction between read-time, compile-time, and runtime. You can compile or run code while reading, read or run code while compiling, and read or compile code at runtime. The commentary addresses each point and names popular languages where that feature is available. 8, which (with 9) is what makes Lisp macros possible, is so far still unique to Lisp, perhaps because (a) it requires those parens, or something just as bad, and (b) if you add that final increment of power, you can no longer claim to have invented a new language, but only to have designed a new dialect of Lisp ; -) Note that this article was last revised in 2002, and in the past 11 years there have been a wide variety of new languages, some of which may incorporate all these Lisp features into their design.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210274", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/101246/" ] }
210,312
I am visually impaired. With glasses I see well enough to drive, but at the font size I'm comfortable working at I can only see about 15 lines of 100 characters at a time. This has affected my coding style. One thing I do is write shorter functions. My code tends to get good reviews because these short functions with good names make the higher level functions very readable, but in high performance situations some folks make comments about how much space I'm taking up on the stack by passing variables down several layers for processing. A second thing I do is divide classes up between files to make shorter files. This reduces the scrolling distance to get to relevant functions and depending on organization may allow me to put the files up on different monitors to look at them together. Both of these practices make for more documentable units that most coding styles require I document, which further aggravates the issue by extending the length of my file and the distance between related functions. I'm currently using Visual Studio, which allows code folding at the function and comment block level (which I use frequently) but does not fold at the bracket level like Notepad++ does. The editor that offers better code folding doesn't have all the intellisense features of VS. I could use regions in VS, but this looks very cluttered if used every 10 lines. Folding is occasionally helpful to get completed code out of view while I'm working on a different feature of the code. Can anyone recommend better coding practices to help with limited visibility of the code?
Here are a couple of suggestions. If you haven't already, choose a font from these recommendations that makes it easier for you to see. Many monitors support a 90-degree rotation; this is much better for reading and will allow you to get more lines on your screen. You can also undock all of the VS tools and put them on the second monitor, and just have one big code monitor to maximize visibility.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210312", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/101302/" ] }
210,346
I have been working on a system alone for about four years. I have built it from the ground up. It is not a perfect system. It is very complex, it is buggy, and the business is now becoming aware of this. After all this time, other developers at the company are getting interested in the project, and they are becoming more involved. I am a bit worried they will blame me for the problems. Am I being paranoid? Have others experienced a similar situation? How can I soften the glare of the spotlight on my buggy code?
Everybody Loves a Good Code Bash / WTF Session I am now worried that they will find bugs and blame me for the problems. Of course they will find bugs. You said it yourself: it's buggy (you already found bugs) and complex (it's very likely to have more). And yes they'll blame you for it. Because it's a large codebase and they will, over time, get accustomed to the habit of tracking the problem down to your code. And it is your code after all. However , that doesn't mean that everything you did was bad, so they may ( if they are patient and kind enough) come to praise the bits you did well, or recognize value in particularly well-crafted areas of the code ( assuming these exist). Am I being paranoid No , but you seem a bit afraid of criticism , be it fair or not. or is there some logic in this? As stated above, it's pretty common and normal. They'll find problems. Lots of them. You did the thing. It seems logical they'll blame you, as you are, after all, responsible for the code. However it's not necessarily your fault for the way things HAD to be done : the company should have dedicated more resources and eyeballs to the project earlier on, and conducted more regular reviews. But from the standpoint of other developers (and damn, are we the picky and bashing kind...) it will often turn into a case of "oh great, yet another example of X's famous bad design pattern or practice ". Add lots of subjectivity into the mix (design decisions, coding style, etc...) and it's a great recipe for a perpetual code bash. Does anyone else have a similar experience? Pretty much anyone who's ever written code that's been maintained by someone else, or who's maintained code written by someone else. It's good to have been on both sides of the fence. Some Advice on Preparing your Hand-Over Defend your Design Decisions. Take the time to explain the reasoning behind your design decisions , both good and bad. You did things one way at that time, and there were reasons for it. Maybe you'd do it differently now, and maybe you already knew a different way back then, but you picked THAT way. Make sure to say WHY. If, however, you can't find a reason, then... ... Don't Make Excuses. If something's awful because of you, say so. They'll respect you more for that. If a piece of the code sucks because you were green at the time, say so. If it sucks because you didn't know a better way at the time, say so. If it sucks because you didn't have the time, say so. We don't care why you didn't have the time. But it's good to know you couldn't do better at the time. Don't deflect blame where and when it's deserved. Remember the 1st Rule of Damage Control Get it in the open before someone else does. And we do mean all of it and very early . What works for politicians, bankers and press and marketing agencies works for crap code as well (and all aspects of life). If your screw-ups have to come out (and here they likely will), it's best that they come out on your terms and you keep the control . Don't Sweat It You'll get bashed, and you'll bash other developers over the time of your career. Just make sure to keep it light-hearted , positive , and open . It's a 2-way street, so be kind in dispatching severe but justified criticism , and be humble in accepting your share of it. And whatever you do, steer away from holy wars. Personal Experience Personally, I know I've sometimes been very unkind to coworkers or said unkind things about code that had been written by people before my time. 
And while I hope that most of the time my criticism was at least somewhat founded, I'm sure there were times it wasn't, for various (probably bad) reasons. We all do that crap. Don't pretend otherwise, fellow reader frowning at this. I'm on to you! I've also been bashed a few times, and I've stood my ground when I deemed it right (or strategically worth it). But I've also accepted blame more often than not, because I fucked up . And I still do on a daily basis. Because, as mentioned, there's usually a reason for it. As a consequence of this, it's become a tradition to hold informal code bashing sessions with co-workers. Not the bad kind (though it's been debated heartily on this site whether a "good" kind of code bash can exist). Just the kind where you kick back on a Friday afternoon and sip your coffee looking at dark areas of your codebase and highlight the best picks of the week. Then you fix them. And you don't assign blame for them. You don't even say "what a stupid way of doing X". You just chuckle at it, refactor it, review it with coworkers so your refactoring is vetted and history doesn't repeat itself, and you move on. And you know what? Sometimes you'll even hit some shitty code somewhere and realize it was yours, and you'll humbly submit it for bashing. Because you sucked, and what's fair is fair. And you pin it to a virtual or physical wall for all to see and remember to avoid it in the future. For the record, at my company we have formal review meetings required by our processes, and some informal ones (way more frequent) whenever we feel we want other people to vet our stuff. And then we have informal code bashing sessions, which are more for fun. But in the end, they all have the same outcome: we improve code, and that's what matters. As Jeff Atwood puts it in What's Wrong with The Daily WTF (emphasis his): [The Daily WTF] is therapeutic, even educational. But whether the code in question is catastrophically stupid or just plain ill-advised, we have to do something about it . Until we do, we are implicitly perpetuating the painful, costly cycle of bad coders writing bad code, ad infinitum. And that hurts all of us. And in general, if you approach it this way, it comes out as rather positive. You improve code, and people tend to become more forgiving. We occasionally fondly remember some out-of-this-world kind of bug, hack or plainly non-sensical piece of code, but then when a new co-worker shows up and says something like "hang on, how could someone have been so crazy/dumb as to do something like THAT?!" when they run into one of our code bash archives, we just shrug and say "hey, they probably had a good reason at the time, you know!" And then you hope people have the same attitude after you're gone... But you won't be there to know that anyways. Sure, a constant random number generator or an isEmpty(String) implementation duplicated across 73 different packages might seem like utter stupidity. And it is, in a perfect scenario. But, quite possibly, the guy wasn't in that perfect fairy-tale scenario. The guy had a reason at the time. Maybe not a good one, but it doesn't matter. Maybe it was just the wooshing sound of a deadline flying by. Give him a break. I write shitty code all the time because of more reasons than I'd care to list. We all do. If your coworkers are worth 2 cents and aren't total jackasses, they'll give you the same benefit of the doubt. Nobody Gives a Damn! But then again... Who gives a damn? It's your code. Own up to it. Own up to your mistakes. 
Side Note: The Case for Open-Source This is one area where open-source is said to shine. After all, in someone else's wise words: Given enough eyeballs, all bugs are shallow. - Eric S. Raymond's Linus Law , excerpted from The Cathedral and the Bazaar More eyeballs during development phases means a lesser likelihood that bugs will creep up during production phases. You shouldn't be afraid of scrutiny. You'd think the larger the crowd would imply the stricter the scrutiny. But strangely enough, in some large corporations, the level of scrutiny isn't linked to the size of the engineering workforce at all: [GREAT] Companies like Google have a single code repository for most projects, so pretty much anyone can see, review and comment on anything. [BAD] Others, for some good and bad reasons, compartmentalize everything and only a few number of people review code. The result is bugs and duplication. Open-source encourages the embrace of scrutiny. Yes, someone will be able to dig up that stupid padding error you did back in your early days that triggered a memory allocation that could be exploited to cause a pretty big fat booboo with severe consequences. So what? It's gone now, because someone found it. And nobody's to say you'd still do the mistake now. Or maybe you would, because, well, we all have days where we need coffee. Or maybe we just still suck. But it doesn't matter, there's a giant herd of people willing to try your crap code and fix it - if it's worth anything at all for them to use. You need to take that leap of faith : accept that people will be generally benevolent and you won't get scarred for life; And you won't be branded with the seal of the World's Worst Developer Ever . Quite the countrary, this will help you to improve over time. The mythical lone developer can get better, but he won't get the powerful and useful feedback received while working with a team. If you're afraid of scrutiny, you're in the wrong game.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210346", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/65549/" ] }
210,359
tl;dr Some widely used programs, which generate html, will only generate opening paragraph tags, and not closing ones, assuming that the browser will properly close paragraphs. On the face of it, it seems to me that the assumption that browsers will properly close paragraphs is not correct. Is my interpretation correct? More generally, what tradeoffs are involved in this kind of decision? Browsing through moinmoin source code, the following line of code caught my eye: # We only open those tags and let the browser auto-close them: _auto_closing_tags = set(['p']) ( source ) After reading throug the rest of the implementation, I've convinced myself that yes, indeed, when moinmoin generates html code for one of its pages, it will correctly generate paragraph open tags, where appropriate, while at the same time purposefully avoiding any of the paragraph close tags (despite being able to trivially do so). For my specific, rather unusual, use case, this behaviour is not correct. I'm tempted to submit a bug report and/or change the behaviour. However, it looks like this design decision was thoughtfully made. I'm not well enough versed in the intricacies of the html standard, or the various browser implementations, to be able to tell if this is correct behaviour in general, and I have the feeling that my instinct to correct/change this behaviour might be misguided. Is this code making a valid assumption about browser implementations? Is the generated html valid? More generally, what tradeoffs might I be missing here?
End tags for p elements were optional in HTML, and were only required in XHTML. However, the HTML5 draft introduces a set of conditions for when the p end tag is actually optional: A p element's end tag may be omitted if the p element is immediately followed by an address, article, aside, blockquote, dir, div, dl, fieldset, footer, form, h1, h2, h3, h4, h5, h6, header, hgroup, hr, menu, nav, ol, p, pre, section, table, or ul, element, or if there is no more content in the parent element and the parent element is not an a element. Source: HTML5 specification That said, the only argument I've ever heard for omitting the end tags for p elements is document size. It's completely up to you to decide if that makes sense for your document or not. Personally I tend to include all optional end tags, just in case I don't meet the requirements for when the end tag is optional.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210359", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4091/" ] }
210,360
The problem I am on a software project which has about 10 developers, we share source code via Mercurial. We have a development and production branch per release. Repeatedly during the course of the project we have had source code from one branch i.e. v1 getting into patch and maintenance branches for earlier releases of software i.e. v2. This results in either time spent backing out the wrong commit, or wrong (possibly non-QAd) code reaching and getting deployed in the wrong branch if we don't notice that the code has gone into the wrong branch. Our branch and merge design/method v1-test v1-patch1 v1-patch2 ^---------^-----------^ v1-prod / / \ \ -----------------------/ \ \ v1-dev \ \ \ --------------------------\ v2-dev \ \ \ ^-------^------------- v2-prod v2-test v2-patch1 Hence we will work on a release development branch, until it's deemed ready , branch it off for a single testing/UAT/Production branch, where all releases and maintenance is done. Tags are used to build releases of this branch. While v1 is being tested, a branch will have been made for v2 and developers will start working on new features. What tends to happen is that a developer commits work due for v2-dev branch into v1-dev or v1-prod, or worse, they merge v2-dev into v1-prod (or similar such mistakes). We tell most developers not to access the -prod branches, however code still sneaks in. A group of more senior developers `look after' the -prod branch. It should be noted that while v2 has just started development, there may still be some quite hefty patches going into v1 to fix issues. I.e. v1 may not just be getting the odd small patch. What we've tried so far Having a separate -prod branch, with gatekeepers. A -prod branch should raise warnings through its name and most developers don't need to ever be in that branch. This has not really reduced the problem. Raised awareness of this problem amongst the developers, to try and make them more vigilant. Again this has not been very successful. Possible reasons I see for developers committing to the wrong branch Too complex a branch design Having active development in multiple branches in parallel. (The project does exhibit symptoms of using the avalanche-model .) Developers don't understand the DVCS well enough Questions I've read which were somewhat relevant I've read this question on not committing to the wrong branch and I feel that the answers regarding visual cues may be helpful. However I am not entirely convinced that the problems we're experiencing are not symptoms of a more fundamental problem. With the visual clues, we can incorporate them into the command line easily, however about half the team use eclipse which I'm unsure how to incorporate visual cues. Question What methods, in the form of software, project management or governance can we use to reduce (ideally stop) commits to the wrong branch taking up our time or dirtying our deployed code? Specific comment on the reasons I believe may be contributing as outlined above would be appreciated, but this shouldn't limit your reply.
The problem is you are changing what the meaning of a branch is part way through the process. Initially, the v1 dev branch is for development. All new features go there. At some point in the future, it becomes a maintenance branch for the v1 release branch. This is the crux of the problem. Its not that the developers are sloppy, its that the permissions and roles of the branch are sloppy and subject to change. What you need to do is establish what role each branch as, and maintain that role. If the role changes, branch. For example: developer commits | | | | | | | | v v v v v v v v dev +--+---------------------+-------------------> | ^ ^ | ^ ^ | | | | | | v1 +----+------+----+ | | | prod patches | | | | | | | | | v2 +-----+-----+----+ prod patches In this model, developers always commit to dev. If you are building a patch, you check the patch into that release's branch (or better yet, branch the release branch for a patch and then merge it back into the release branch). One article that you should read (and its probably an understatement for 'should') is Advanced SCM Branching Strategies by Stephen Vance. In this paper, I first define branching in a general sense. I then discuss various strategies for branching, starting with the obvious and moving up to several that are more appropriate for larger development efforts. Along the way, I discuss the pros and cons of each strategy, using them to motivate the changes that compose the more complex strategies... In this article, he identifies five roles that branches may have. Sometimes a branch may fill two roles and roles do not necessarily need a new branch as long as the role policies do not change mid branch (you will occasionally see mention of "branch on incompatible policy"). These roles are: Mainline. This is where branches are made from. Always branching from the mainline makes merges easier since the two branches will have a common ancestor that isn't branch upon branch upon branches. Development. This is where developers check in code. One may have multiple development branches to isolate high risk changes from the ones that are routine and mundane. Maintenance. Bug fixes on an existing production environment. Accumulation. When merging two branches, one may not want to risk destabilizing the mainline. So branch the mainline, merge the branches into the accumulator and merge back to the mainline once things are settled. Packaging. Packaging a release happens in the packaging branches. This often becomes the release and serves to isolate the release effort from development. See How to deal with undesired commits that break long-running release builds? for an example of where the packaging conflicts with development. In your example, you've got a cascading mainline (this is a problem - it makes merges more difficult - what happens if you want to merge a fix for v1 into v2 and v3?), a dev branch that becomes a maintenance branch (change of policy, this is a problem). Ok, you say, thats great, but this was written for perforce which is a centralized VCS - I'm using DVCS. Lets look at the git-flow model and see how it applies. The master branch (blue) is the release branch - for tagging. It is not the mainline. The mainline is actually the develop branch (yellow). The release branches (green) are the packaging role. Low risk development happens in the mainline, high risk development happens in the feature branches (pink). In this model, accumulation is done in the develop branch. Maintenance are considered 'hot fixes' which are red. 
While the role policies aren't an exact match (each product has its own slightly different lifecycle), they are a match. Doing this should simplify your branching policy and make it easier for everyone involved.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210360", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/72064/" ] }
210,372
I've seen an increasing trend in the programming world saying that it is good practice to separate code blocks into their own functions. Obviously, if that code block is reusable, you should do that. What I do not understand is this trend of using a function call as essentially a comment that you hide your code behind if the code is not reusable. That's what code folding is for. Personally, I also hate reading code like this because it feels like it has the same problem as the GOTO statement - it becomes spaghetti code where if I'm trying to follow the program's flow I'm constantly jumping around and can't logically follow the code. It is much easier to me to follow code that is linear but has a single comment over sections of code labeling what it does. With code folding, this is essentially the same exact thing, except the code stays in a nice linear fashion. When I try to explain this to my colleagues, they say comments are evil and clutter - how is a comment on top of a block of folded code any different from a function call that will never get called more than once? How is overusing functions different than overusing comments? How are frequent use of functions different from the problems with GOTO statements? Can someone please explain the value of the programming paradigm to me?
Code organization is all about displaying enough information to convey a single idea. The sweet spot is getting your code pared down enough that a single idea can fit in a single unit of code. Your unit of code can be a function, a class, etc. These are merely tools of organization. As with any tool, they can be overused or used incorrectly. Having a one-line function makes no sense unless the function conveys a meaningful idea. Having a large imperative function that conveys many ideas is hard to digest and reuse. It's all about striking the right balance, and even that is subjective.
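To make the "single idea per unit" point concrete, here is a small, purely illustrative Java sketch (the order/customer domain and every name in it are invented for the example; nothing here comes from the question). The top-level method reads like the list of comments the question describes, except each label is now a name the compiler checks and a unit you can test on its own.

    public class ExtractionSketch {
        // Hypothetical type, invented only for this illustration.
        record Customer(int age, boolean blocked) {}

        // Before: one method, several ideas, a comment per idea.
        static void placeOrderBefore(Customer c) {
            // check eligibility
            if (c.age() < 18 || c.blocked()) throw new IllegalStateException("not eligible");
            // ... charge, ship, notify: each its own commented block ...
        }

        // After: the top-level method reads as a list of ideas;
        // each idea lives in a unit small enough to name and test.
        static void placeOrder(Customer c) {
            requireEligible(c);
            // chargeCard(c); shipGoods(c); notifyCustomer(c);
        }

        static void requireEligible(Customer c) {
            if (c.age() < 18 || c.blocked()) throw new IllegalStateException("not eligible");
        }

        public static void main(String[] args) {
            placeOrder(new Customer(30, false));
            System.out.println("order placed");
        }
    }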
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210372", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/89108/" ] }
210,399
I'm still puzzled as why we have new in Go. When you want to instantiate a struct, you do t := Thing{} and you can get a pointer to a new instance by doing t := &Thing{} But there's also this possibility : t := new(Thing) This last one seems a little alien to the rest of the language. &Thing{} is as clear and concise as new(Thing) and it uses only constructs you often use elsewhere. It's also more extensible as you might change it to &Thing{3} or &Thing{Feets:7} . In my opinion, having a supplementary keyword 1 is costly, it makes the language more complex and adds to what you must know. And it might mask to newcomers what's behind instantiating a struct. It also makes one more reserved word. So what's the reasoning behind new ? Is it sometimes useful ? Should we use it ? 1 : Yes, I know it's not a keyword at the grammar level, you can shadow it , but that doesn't change the fact it's, for the reasonable developer, a reserved word.
The best way to ask is probably to the people working on it; exactly what I did ! Tl;dr: it was originally there before make and &{} , and it's still the function to use in some situations. Basically, here are the most important parts quoted: So what's the reasoning behind new ? Is it something useful ? Should we use it ? You cannot do this without new v := new(int) *v++ fmt.Println(*v) new isn't a headline feature of Go, you won't find it used often, but when you need it, it is there. Cheers Dave After another answer showing this kind of solution: vv := 0 v := &vv *v++ fmt.Println(*v) I asked for further clarification: So basically, Dave's point doesn't really stand? There are places where it's inconvenient to sneak in a new variable just to take its address. new(T) has an immediately straightforward meaning, rather than being a multi-step idiom. Dave's point only falls if mere technical possibility (of doing without new ) is compelling on its own. Wasn't this discussed because it was just obvious that Go should have it because almost every language has it? The "shall we keep new ?" discussion pops up from time to time. Since we can't take it out until Go 2, if I understand the Promise correctly, there doesn't seem to be much to be had from going round the loop again; by the time Go 2 is thinkaboutable, we might have some different and better ideas ... Chris It's also there mostly for historical reasons: you need to consider the history of the project. i think new is introduced first before there is make. That is true. In fact we struggled for a while before coming up with the idea of make. If you look at the repository logs you can see that make only shows up in January 2009, revision 9a924177598f. The new builtin function also preceded the idea of &{} for taking the address of a composite literal (and that syntax is in some sense wrong; it probably ought to be (*T){fields of T} but there wasn't enough reason to change it). The new function is not strictly necessary but code does seem to use it in practice. It's hard to get rid of it at this point. Ian
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210399", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52680/" ] }
210,406
Well, while developing a web application that allows users to draw graphs (flow charts, ER diagrams, UML, etc.), the information about the drawn items, their relations to each other and their positions on the canvas is expressed as a JSON object. The question is: 1- Is saving such a graph in the database as raw text (the JSON object) in one field of a table enough? 2- Or is it always required to break the JSON object down into logical pieces and store them in a well-built database structure? Option 1, other than making developers' lives easier, will eliminate the need to recreate the JSON object from small bits and pieces of information when the user opens a graph he built and saved a long time ago. Option 2 will have fine-grained details in a proper structure, but will cost fairly heavy processing when opening a saved graph from the database. Which way is the correct way to go?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210406", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20211/" ] }
210,428
In case of code where you have to do a resource cleanup before exiting a function, is there a major performance difference between these 2 ways of doing it. Cleaning the resource before every return statement void func() { login(); bool ret = dosomething(); if(ret == false) { logout(); return; } ret = dosomethingelse(); if(ret == false) { logout(); return; } dootherstuff(); logout(); } Cleaning the resource in a finally block void func() { login(); try { bool ret = dosomething(); if(ret == false) return; ret = dosomethingelse(); if(ret == false) return; dootherstuff(); } finally { logout(); } } I did some basic tests in sample programs and there doesn't seem to be much of a difference. I so much prefer the finally way of doing this - but I was wondering if it will cause any performance hit in a big project.
As indicated in How slow are Java exceptions?, the slowness of try {} catch {} lies in the instantiation of the exception itself. Creating an exception fetches the entire call stack from the runtime, and that is where the expense is. If you are not creating an exception, the increase in time is only very slight. In the example given in this question there aren't any exceptions, so one wouldn't expect any slowdown from creating them - they aren't created. Instead, what is here is a try {} finally {} to handle resource deallocation within the finally block. So to answer the question: no, there is no real runtime expense in a try {} finally {} structure that doesn't use exceptions. What is possibly expensive is the maintenance time when someone reads the code, sees this less typical code style, and has to get their mind around the fact that something else happens in this method after the return, before control goes back to the previous call. As has been mentioned, maintenance is an argument for both ways of doing this. For the record, after consideration, my preference would be the finally approach. Consider the maintenance time of teaching someone a new language structure: try {} finally {} isn't something one often sees in Java code and thus can be confusing to people, and there is a cost to learning slightly more advanced structures than what people are used to seeing. The finally {} block always runs, and this is why you should use it. Consider also the maintenance time of debugging the non-finally approach when someone forgets to include a logout at the proper time, or calls it at the improper time, or forgets to return / exit after calling it so that it is called twice. There are many possible bugs here that the use of try {} finally {} makes impossible. When weighing these two costs, it is expensive in maintenance time not to use the try {} finally {} approach. While people can dicker about how many fractional milliseconds or additional JVM instructions the try {} finally {} block costs compared to the other version, one must also consider the hours spent debugging the less-than-ideal way of addressing resource deallocation. Write maintainable code first, and preferably in a way that will prevent bugs from being written later.
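If you want to see the cost question for yourself on a particular JVM, a rough sketch of the comparison is below. This is a deliberately naive toy, not a proper benchmark - the JIT can distort loops like these, so use a harness such as JMH for real numbers - but it illustrates the shape of the claim: entering and leaving a try {} finally {} costs next to nothing, while constructing exceptions (which capture the stack) is where measurable time goes.

    public class TryFinallyCost {
        static long counter = 0;

        static void withFinally() {
            try {
                counter++;
            } finally {
                counter++;
            }
        }

        static void withoutFinally() {
            counter++;
            counter++;
        }

        public static void main(String[] args) {
            final int iterations = 10_000_000;

            long t0 = System.nanoTime();
            for (int i = 0; i < iterations; i++) withFinally();
            long t1 = System.nanoTime();

            for (int i = 0; i < iterations; i++) withoutFinally();
            long t2 = System.nanoTime();

            // Exception *creation* is the expensive part: each one captures the stack trace.
            Exception last = null;
            for (int i = 0; i < 100_000; i++) last = new Exception("boom");
            long t3 = System.nanoTime();

            System.out.printf("with finally:    %d ms for %d calls%n", (t1 - t0) / 1_000_000, iterations);
            System.out.printf("without finally: %d ms for %d calls%n", (t2 - t1) / 1_000_000, iterations);
            System.out.printf("new Exception(): %d ms for just 100,000 of them%n", (t3 - t2) / 1_000_000);
            System.out.println(counter + " " + (last != null)); // keep the work observable so it isn't optimized away
        }
    }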
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210428", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/69796/" ] }
210,461
So I wanted to inherit from a sealed class in csharp and got burned. There is just no way to unseal it unless you have access to the source. Then it got me thinking "why sealed even exists?". 4 months ago. I couldn't figure it out, despite reading many things about it, such as: Jon Skeet's wishes " classes were sealed by default in .NET. " Prefer composition over inheritance? "You should not seal all classes (...)" How do you mock a Sealed class? I've tried to digest all that since then, but it's just too much for me. Eventually, yesterday I tried it again. I've scanned over all of them again, plus few more: Why should a class be anything other than "abstract" or "final/sealed"? In over 15 years programming, first time I've heard of SOLID , out of an answer from a question already linked and I obviously didn't read it all 4 months ago Finally, after lots of pondering, I decided to heavily edit the original question based on the new title. The old question were too broad and subjective. It was basically asking: In the old title: One good reason to use sealed In the body: How to properly modify a sealed class? Forget about inheritance? Use composition? But now, understanding (which I didn't yesterday) that all sealed does is preventing inheritance , and we can and should indeed use composition over inheritance , I realized what I needed was practical examples. I guess my question here is (and in fact have always been) exactly what Mr.Mindor suggested me in a chat : How can designing for inheritance cause extra cost?
This is not so difficult to comprehend. sealed was created SPECIFICALLY for Microsoft in order to make their lives easier, save tons of money and help their reputation. Since it is a language feature everyone else can use it also but YOU will probably never ever need to use sealed. Most people are complaining about not being able to extend a class and if they do then they say well everyone knows it is the developers responsibility to make it work correctly. That is correct, EXCEPT those same people have no clue on the history of Windows and one of the problems sealed is trying to solve. Let's suppose a developer extended a .Net core class (because sealed did not exist) and got it to work perfectly. Yay, for the developer. The developer delivers the product to the customer. The developer's app works great. The customer is happy. Life is good. Now Microsoft releases a new operating system, which includes fixing bugs in this particular .Net core class. The customer wants to keep up with the times and chooses to install the new operating system. All of a sudden, the application that the customer likes so much no longer works, because it did not take into account the bugs that were fixed in the .Net core class. Who gets the blame? Those familiar with Microsoft's history know that Microsoft's new OS will get the blame and not the application software that misused windows libraries. So it then becomes incumbent on Microsoft to fix the problem instead of the application company who created the problem. This is one of the reasons why Windows code became bloated. From what I've read, the Windows operating system code is littered with specific if-then checks for if a specific application and version is running and if so then do some special processing to allow the program to function. That's a lot of money spent by Microsoft to fix another company's incompetence. Using sealed doesn't completely eliminate the above scenario, but it does help.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210461", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/4261/" ] }
210,472
I am using the MIT LICENSE in all of my GitHub projects. The second line, at the top, says 2013. For the copyright to hold in the future (i.e. after 2013), does it need to be changed each year, or is it good as it is? Do I add to it, modify it, or leave it as it is? The MIT License (MIT) Copyright (c) 2013 Aseem Bansal <[email protected]> //Rest of the MIT LICENSE
That year in your code is part of a copyright notice. It indicates the effective creation date of your software, which affects the time window of your copyright. It is not, strictly speaking, related to the license (although the MIT license happens to include a provision that the copyright notice must be preserved in all copies of the software). You should update the year if and only if you made changes to your software in that year. Updating your copyright notice to include a year in which you made no copyrightable changes would be a misrepresentation of your copyright term. In the United States, this is currently only relevant if you are a corporation, but may be relevant to non-corporate authors in other countries. (In the U.S., copyright terms for natural individuals are currently a function of how long you live, not when you create a work.) The FSF has some helpful guidance on including a correctly-dated copyright notice in your software (intended for use with the GPL, but applicable to all software): The copyright notice should include the year in which you finished preparing the release (so if you finished it in 1998 but didn't post it until 1999, use 1998). You should add the proper year for each release; for example, “Copyright 1998, 1999 Terry Jones” if some versions were finished in 1998 and some were finished in 1999. If several people helped write the code, use all their names. For software with several releases over multiple years, it's okay to use a range (“2008-2010”) instead of listing individual years (“2008, 2009, 2010”) if and only if every year in the range, inclusive, really is a “copyrightable” year that would be listed individually; and you make an explicit statement in your documentation about this usage. It is not clear from the FSF guidance whether uploading incomplete, in-progress work to a public repository counts as "finishing a release". My guess is yes, if the work was deliberately made available for public download, but I'm not a lawyer. So, to summarize: If you made changes that year, do include the year in a comma-separated list in your copyright notice. If you did not make copyrightable changes that year, do not include that year in your copyright notice.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210472", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/92918/" ] }
210,503
As a member of our company's QA team, I frequently get entirely unenthusiastic feedback from developers in their responses to test results in our agile, web-based software-as-a-service shop. Most of our testing is manual, since automated testing doesn't really make sense for us right now, and developers are usually reluctant to listen to any change suggestions beyond those that prevent javascript/500 errors. I understand that fixes/changes require work, and our developers are rarely short on work to do, but I don't think developers respect QAs input. Unfortunately, our product owners are vacant: acceptance testing doesn't exist, and user stories usually are only one sentence long, and don't provide the developer with much to go off of. There is no other feedback mechanism to development other than from customers x-weeks later, who aren't designers/developers either, of course, and whose suggestions are all over the board. I am technically competent, at worse, and am capable of simple development on our LAMP stack and feel confident that developers respect my knowledge. However, I have for the most part given up on feedback beyond that which prevents critical errors--which affect data integrity, or bottom-line functionality. This has raised the question of whether seniority, or pay grade is a significant factor in how seriously developers value QAs input. In our case, where we don't do automated testing and QA members likely don't have as much technical expertise, it kind of makes sense that we make less than developers (between 60-70%, depending on time in grade). I don't believe in the argument that the opinion of the team member with the biggest pay check is the most important, however I can imagine how it's difficult to take feedback from team members who have a year or two less experience, are not as technically knowledgeable, and make noticeably less. In the end the best idea should win, but unfortunately that might be decided after the enhancement has been on production for several months, and users love it or hate it.
It sounds to me like you have a dysfunctional team with a cowboy culture and you're trying to figure out what the root cause is. You are proposing a hypothesis that maybe developers don't respect test because of some sort of implicit hierarchy or length of service or some other factor, but you're not necessarily presenting evidence for the case, you're essentially asking "could this be what's wrong?" In fact, it sounds like many things are wrong with the organization, and any issues driven by perceptions of status or power are merely symptomatic of poor leadership. You are not an agile shop if you are not practicing any of the enabling mechanisms of agile development. One line stories are not agile; those are mere bullet points on wishlists. A story contains a business motivation, a description of the customer's interaction with the product, and a definition of when the story is done. If you don't have those three things, you don't have enough information to decide what should happen or how you know you've done it right, so the story will never be "done". That's treading water, not making progress. Developers will never be short of work to do in such organizations, because they'll constantly be firefighting, aided only by tiny buckets of their own urine. Some of the "definition of done" can be part of a general team agreement, but specific acceptance criteria for any story, even if terse, are essential. There are very few cases in which "automated testing doesn't really make sense for us right now". It may be the case that the test team isn't the right organizational locus to deliver automated testing, especially early on, but it always makes sense to have automated testing. While it's ok in my book for developers to do a little bit of exploratory coding without formal automated tests (I'm not a TDD or even BDD purist), it seems horrifying to me as a developer that I'd consider releasing code to a test organization with no developer-written automated tests. Unit Tests and BDD tests, written by developers, and scenarios preferably written by Product Owners, are essential parts of agile delivery. Figuring out the best use of a test organization in an agile team is a tricky problem for which there is no single formula for success. In an organization which has no definition of done, it will be highly difficult to demonstrate value, because there's no way of knowing if the test team has contributed to "done." I've worked in old school waterfall teams as well as agile teams with 1) no distinct test organization or 2) moderately integrated test teams, 3) partially integrated test teams with separate stories and work product, and 4) gated release models, where some QA involvement happened alongside ordinary development but there was a distinct "test pass" due to some legacy or regulatory reason. The "right" model for the test team will actually depend on the level of technical sophistication of the test team members. I think having test team members with moderate or better technical sophistication pair with a developer while writing code, to suggest cases for automation, can be a great model. But a test team can be reasonably effective in validating that the stories have measurable acceptance criteria, doing some exploratory testing as developers check in code, and trying to augment developer unit testing with integration scenarios and fleshing out special cases. 
It's even sort of ok to have a throw-the-build-over-the wall approach in some circumstances, as long as there's a way of converting stories into test cases and there's some sort of feedback loop with the Product Owners and Developers. But you won't really get there without active buyoff from your management and product owners on what the organizational priorities are and what test's role should be. I doubt there've been any serious conversations in your team other than "oh, I've worked on other software projects and that means I know we need to have some sort of test effort. Let's hire a test team." Most average and some above average developers will be tolerant of organizational inertia that doesn't demand they engage with the test team. In order for real progress to be made, some management or consensus driven initiative to drive "better" development practices needs to happen. As a developer and as a former STE, STE Lead and SDET, I have nearly zero interest in how senior the test team members are or how much they are paid. What I care about is how they can help me ship better software. I personally like leveraging the skills of people who can work through tons of scenarios that I can't meaningfully explore given the team's desired organizational velocity; I'd be happy to walk through a test team member on how to start from existing unit tests or scenarios and build better coverage, or read test plans and provide feedback. But I might settle for focusing on "just good enough" coverage on my end and let the product owners and maybe just hope that the testers catch what I miss, if that's all the organization appears to value. Somehow, either you are going to need to start selling to your management or to your most sympathetic developers on taking a more, dare I say it, agile approach to development and quality. I can't give you a formula for this, because I've not been that great at driving such things in organizations resistant to change, but the best you can hope for is a business value driven case (talking to the business side) or perhaps a craftsmanship/continuous improvement case on the technical side.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210503", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/98191/" ] }
210,558
How do programming languages define and save functions/methods? I am creating an interpreted programming language in Ruby, and I am trying to figure out how to implement function declaration. My first idea is to save the content of the declaration in a map. For example, if I did something like def a() { callSomething(); x += 5; } Then I would add an entry into my map: { 'a' => 'callSomething(); x += 5;' } The problem with this is that it would become recursive, because I would have to call my parse method on the string, which would then call parse again when it encountered doSomething , and then I would run out of stack space eventually. So, how do interpreted languages handle this?
Would I be correct in assuming that your "parse" function not only parses the code but also executes it at the same time? If you wanted to do it that way, instead of storing the contents of a function in your map, store the location of the function. But there's a better way. It takes a bit more effort up-front, but it yields much better results as complexity increases: use an Abstract Syntax Tree. The basic idea is that you only parse the code once, ever. Then you have a set of data types representing operations and values, and you make a tree of them, like so:

    def a() { callSomething(); x += 5; }

becomes:

    Function Definition: [
        Name: a
        ParamList: []
        Code: [
            Call Operation: [
                Routine: callSomething
                ParamList: []
            ]
            Increment Operation: [
                Operand: x
                Value: 5
            ]
        ]
    ]

(This is just a text representation of the structure of a hypothetical AST. The actual tree would probably not be in text form.) Anyway, you parse your code out into an AST, and then you either run your interpreter over the AST directly, or use a second ("code generation") pass to turn the AST into some output form. In the case of your language, what you would probably do is have a map that maps function names to function ASTs, instead of function names to function strings.
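To show the shape of that last suggestion - a map from function names to already-parsed ASTs - here is a minimal, hypothetical sketch in Java (your interpreter is in Ruby and your real node types will differ; this only illustrates the structure). Each node knows how to evaluate itself, and a call is just a lookup of the callee's parsed body in the table, so nothing is ever re-parsed and there is no recursion over strings.

    import java.util.*;

    public class AstSketch {
        // Every AST node evaluates itself against a variable environment
        // and a table of already-parsed functions.
        interface Node {
            void eval(Map<String, Integer> vars, Map<String, Node> functions);
        }

        // Represents "x += 5;"
        record Increment(String name, int amount) implements Node {
            public void eval(Map<String, Integer> vars, Map<String, Node> functions) {
                vars.merge(name, amount, Integer::sum);
            }
        }

        // Represents "callSomething();" - looks the callee up in the function table.
        record Call(String callee) implements Node {
            public void eval(Map<String, Integer> vars, Map<String, Node> functions) {
                functions.get(callee).eval(vars, functions);
            }
        }

        // Represents "{ stmt; stmt; ... }"
        record Block(List<Node> statements) implements Node {
            public void eval(Map<String, Integer> vars, Map<String, Node> functions) {
                for (Node s : statements) s.eval(vars, functions);
            }
        }

        public static void main(String[] args) {
            Map<String, Node> functions = new HashMap<>();
            // What the parser would build for: def callSomething() { x += 1; }
            functions.put("callSomething", new Block(List.of(new Increment("x", 1))));
            // What the parser would build for: def a() { callSomething(); x += 5; }
            functions.put("a", new Block(List.of(new Call("callSomething"),
                                                 new Increment("x", 5))));

            Map<String, Integer> vars = new HashMap<>();
            functions.get("a").eval(vars, functions);   // run a()
            System.out.println(vars.get("x"));          // prints 6
        }
    }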
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210558", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100175/" ] }
210,668
I'm working on a small team that will begin working on a large new project with another small team. The other team is currently working on a legacy system that they have been working on for years. The manager has decided that the developers from my team will be rotating every few months to replace developers working on the legacy system. That way the other team will have a chance to work on the new project and have a better understanding of the new system. I want to know the benefits and drawbacks (if any) of rotating the developers from the project every 2-3 months. I know that this is a similar question to "Is rotating the lead developer a good or bad idea?" , but that question focuses on a lead developer. This question is about rotating the entire team on and off the project (tech. lead for the new project may or may not be rotated -- I don't know yet).
I'm surprised that everybody thinks this is such a good thing. The authors of Peopleware (which, IMO, is still one of the precious few software project management books actually worth reading) strongly disagree. Almost the entire Part IV of the book is dedicated to this very issue. The software team is an incredibly important functional unit. Teams need to jell to become really productive. It takes time (a lot of time) for team members to earn each others' respect, to learn each others' habits and quirks and strengths and weaknesses. Certainly, from personal experience, I can say that after a year of working with certain people, I've learned to laugh off certain things that used to rile me up, my estimates as team lead are much better, and it's not too difficult to get the work distributed so as to make everyone happy. It wasn't like that in the beginning. Now you might say, "Oh, but we're not breaking up the whole team, just moving a few people." But consider (a) how blindly unproductive their replacements are going to be in the beginning, and (b) how many times you'll find yourself or other teams saying, without even thinking, "I really liked X" or "This would have been easier with Y still around" , subtly and unconsciously offending the new members and creating schisms within the existing team, even sowing discontent among the "old" members. People don't do this on purpose , of course, but it happens almost every time. People do it without thinking. And if they force themselves not to, they end up focusing on the issue even more, and are frustrated by the forced silence. Teams and even sub-teams will develop synergies that get lost when you screw around with the structure. The Peopleware authors call it a form of "teamicide". That being said, even though rotating team members is a horrible practice, rotating teams themselves is perfectly fine. Although well-run software companies should have some concept of product ownership, it's not nearly as disruptive to a team to move that entire team to a different project, as long as the team actually gets to finish the old project or at least bring it to a level they're happy with. By having team stints instead of developer stints, you get all the same benefits you would expect to get with rotating developers (documentation, "cross-pollination", etc.) without any of the nasty side-effects on each team as a unit. To those who don't really understand management, it may seem less productive, but rest assured that the productivity lost by splitting up the team totally dwarfs the productivity lost by moving that team to a different project. P.S. In your footnote you mention that the tech lead might be the only person not to be rotated. This is pretty much guaranteed to mess up both teams. The tech lead is a leader, not a manager, he or she has to earn the respect of the team, and is not simply granted authority by higher levels of management. Putting an entire team under the direction of a new lead whom they've never worked with and who is very likely to have different ideas about things like architecture, usability, code organization, estimation... well, it's going to be stressful as hell for the lead trying to build credibility and very unproductive for the team members who start to lose cohesion in the absence of their old lead. Sometimes companies have to do this, i.e. if the lead quits or gets promoted, but doing it by choice sounds insane.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210668", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8093/" ] }
210,670
When modeling a chat, should a chat object receive messages or a message be sent to a chat object? I'm not sure whether there is a definitive answer but maybe someone can point out benefits of one over the other. message.send(chat); VS. chat.sendMessage(message);
I'm surprised that everybody thinks this is such a good thing. The authors of Peopleware (which, IMO, is still one of the precious few software project management books actually worth reading) strongly disagree. Almost the entire Part IV of the book is dedicated to this very issue. The software team is an incredibly important functional unit. Teams need to jell to become really productive. It takes time (a lot of time) for team members to earn each others' respect, to learn each others' habits and quirks and strengths and weaknesses. Certainly, from personal experience, I can say that after a year of working with certain people, I've learned to laugh off certain things that used to rile me up, my estimates as team lead are much better, and it's not too difficult to get the work distributed so as to make everyone happy. It wasn't like that in the beginning. Now you might say, "Oh, but we're not breaking up the whole team, just moving a few people." But consider (a) how blindly unproductive their replacements are going to be in the beginning, and (b) how many times you'll find yourself or other teams saying, without even thinking, "I really liked X" or "This would have been easier with Y still around" , subtly and unconsciously offending the new members and creating schisms within the existing team, even sowing discontent among the "old" members. People don't do this on purpose , of course, but it happens almost every time. People do it without thinking. And if they force themselves not to, they end up focusing on the issue even more, and are frustrated by the forced silence. Teams and even sub-teams will develop synergies that get lost when you screw around with the structure. The Peopleware authors call it a form of "teamicide". That being said, even though rotating team members is a horrible practice, rotating teams themselves is perfectly fine. Although well-run software companies should have some concept of product ownership, it's not nearly as disruptive to a team to move that entire team to a different project, as long as the team actually gets to finish the old project or at least bring it to a level they're happy with. By having team stints instead of developer stints, you get all the same benefits you would expect to get with rotating developers (documentation, "cross-pollination", etc.) without any of the nasty side-effects on each team as a unit. To those who don't really understand management, it may seem less productive, but rest assured that the productivity lost by splitting up the team totally dwarfs the productivity lost by moving that team to a different project. P.S. In your footnote you mention that the tech lead might be the only person not to be rotated. This is pretty much guaranteed to mess up both teams. The tech lead is a leader, not a manager, he or she has to earn the respect of the team, and is not simply granted authority by higher levels of management. Putting an entire team under the direction of a new lead whom they've never worked with and who is very likely to have different ideas about things like architecture, usability, code organization, estimation... well, it's going to be stressful as hell for the lead trying to build credibility and very unproductive for the team members who start to lose cohesion in the absence of their old lead. Sometimes companies have to do this, i.e. if the lead quits or gets promoted, but doing it by choice sounds insane.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210670", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41578/" ] }
210,737
I am a front-end developer who barely even see a file with .h or .c extension. I know basic C syntax, I've learned it in Unreality but never was interested in such low level programming because it was simply too much setup for simple things. I am very interested in learning all aspects of Computer Science but I want to believe I do not really have to know a specific language in order to understand most of concepts in Computer Science. Yet when I start reading books and articles about fundamental Computer Science concepts like Data Structures and Algorithm Design it seems that I have to learn C, because all examples and even lessons are in C (and sometimes Java). My question is, is C as a programming language essential for Computer Science or we just happened to have all of our resources in CS written in C? Can one learn Computer Science without learning C?
I'm going to go against the flow here and say yes, you do have to learn C. I actually agree with the points in many of the other answers, but you make the very strong statement that I am very interested in learning all aspects of Computer Science but I want to believe I do not really have to know a specific language in order to understand most of concepts in Computer Science. (emphasis mine) Well, operating systems and network stacks are two huge aspects of Computer Science, and all the dominant operating systems and network stacks are written largely in C. If you want to understand those, you should learn C. Yes, some schools do manage to teach their OS classes in Java, but it's like reading Homer in English. Besides, C is not that big of language. If you really want to learn all aspects of computer science you should shrug and say 'meh', 'what's one more language?'
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210737", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24256/" ] }
210,818
I learnt some about pipelining but those were 4-stage and 5-stage and I think that modern pipelining typical is much longer and more complicated in practice. How long are typical pipelines and how much can we expect them to increase and where is the point of reaching diminshing returns in performance gains for longer pipelines?
Intel had 5 pipeline stages in its original Pentium architecture. The number of stages peaked at 31 in the Prescott family, but decreased after that. Today, in the Core series II processors (i3, i5, and i7), there are 14 stages in the processor pipeline. Microarchitecture Pipeline stages P5 (Pentium) 5 P6 (Pentium 3) 10 P6 (Pentium Pro) 14 NetBurst (Willamette) 20 NetBurst (Northwood) 20 NetBurst (Prescott) 31 NetBurst (Cedar Mill) 31 Core 14 Bonnell 16 Sandy Bridge 14 Silvermont 14 to 17 Haswell 14 Skylake 14 Kabylake 14 Prescott achieved only modest gains in performance over its predecessor, and its more complex design demanded substantially more power relative to its performance gains. Although there were other contributing factors to Prescott's disappointing performance, it seems clear that increasing the number of pipelining stages eventually achieves diminishing returns. References Prescott Pushes Pipelining Limits The Intel Architecture Processor Pipeline List of Intel CPU Microarchitectures The Optimum Pipeline Depth for a Microprocessor
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210818", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/12893/" ] }
210,900
Even though I've programmed on a professional level for some years I still do not fully understand error handling. Although my applications work fine, the error handling isn't implemented at a professional level and is a mix and match of a number of techniques. There is no structure behind my error handling. I'd like to learn and understand how it's implemented at a professional level. This is one area where I lack knowledge. When should I use an exceptions and when should I return a success status, to be checked in the logic flow? Is it OK to mix exception and returning a status? I code in C# mainly.
Use exceptions for exceptional things, the things you can't reasonably expect to encounter too often, things which indicate that something goes wrong. For example, if the network is down, it is an exceptional thing for a web server. If the database is unavailable, it means that something is wrong. If the configuration file is missing, it probably means that the user messed up with it. Don't use exceptions to handle incorrect code. In order to check the correctness of the code, you should use either the assertions, or, in .NET Framework 4 and later, Code contracts (which replace assertions and have additional, particularly valuable features). Don't use exceptions in non-exceptional cases. The fact that the user, when asked to enter a number, entered "dog" is not so exceptional to deserve an exception. Be careful when choosing the types of exceptions. Create your own types when needed. Carefully chose the inheritance, keeping in mind that catching parents will catch the children as well. Never throw Exception . Don't use return codes for errors. Error codes are easily masked, ignored, forgotten. If there is a error, either handle it, or propagate it to the upper stack. In cases where a method is expected to return a error and the error is not exceptional, use enums, never error numbers. Example: // Note that the operation fails pretty often, since it deals with the servers which are // frequently unavailable, and the ones which send garbage instead of the actual data. private LoadOperationResult LoadProductsFromWeb() { ... } The meaning of LoadOperationResult.ServerUnavailable , LoadOperationResult.ParsingError , etc. is much more explicit than, say, remembering that code 12 means that the server is down, and code 13 — that the data cannot be parsed. Use error codes when they refer to the common ones, known by every developer who works in the specific domain. For example, don't reinvent an enum value for HTTP 404 Not Found or HTTP 500 Internal Server Error. Beware of booleans. Sooner or later, you will want to know not only whether a specific method succeeded or failed, but why. Exceptions and enums are much more powerful for that. Don't catch every exception (unless you're at the very top of the stack). If you catch an exception, you should be ready to handle it. Catching everything is showing that you don't care if your code runs correctly. This may solve the "I don't want to search right now how to fix this", but will hurt you sooner or later. In C#, never rethrow exceptions like this: catch (SomeException ex) { ... throw ex; } because you're breaking the stack. Do this instead: catch (SomeException) { ... throw; } Make an effort when writing exception messages. How many times I've seen something like throw Exception("wrong data") or throw Exception("shouldn't call this method in this context") . Other developers, including yourself six months later, would have no idea what data is wrong and why or why shouldn't we call some method in a context, nor which context precisely. Don't show exception messages to the user. They are not expected for ordinary people, and often are even unreadable for developers themselves. Don't localize exception messages. Searching the documentation for a localized message is exhausting and pointless: every message should be in English and English only. Don't focus exclusively on exceptions and errors: logs are also extremely important. 
In .NET, don't forget to include exceptions in XML documentation of the method: /// <exception cref="MyException">Description of the exception</exception> Including exceptions in XML documentation makes things much easier for the person who is using the library. There is nothing more annoying than trying to guess which exception could be possibly thrown by a method and why. In this sense¹, Java exception handling provides a stricter, better approach. It forces you to either deal with exceptions potentially thrown by the called methods, or declare in your own method that it can throw the exceptions you don't handle, making things particularly transparent. ¹ This being said, I find Java distinction between exceptions and errors pretty useless and confusing, given that the language has checked and unchecked exceptions. Luckily, .NET Framework has only exceptions, and no errors.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/210900", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/94755/" ] }
211,049
I had a look at XSLT for transforming one XML file into another one (HTML, etc.). Now while I see that there are benefits to XSLT (being a standardized and used tool) I am reluctant for a couple of reasons XSLT processors seem to be quite huge / resource hungry XML is a bad notation for programming and thats what XSLT is all about. It do not want to troll XSLT here though I just want to point out what I dislike about it to give you an idea of what I would expect from an alternative. Having some Lisp background I wonder whether there are better ways for tree-structure transformations based upon some lisp. I have seen references to DSSSL, sadly most links about DSSSL are dead so its already challenging to see some code that illustrates it. Is DSSSL still in use? I remember that I had installed openjade once when checking out docbook stuff. Jeff Atwood's blog post seems to hint upon using Ruby instead of XSLT. Are there any sane ways to do XML transformations similar to XSLT in a non-xml programming language? I would be open for input on Useful libraries for scripting languages that facilitate XML transformations especially (but not exclusively) lisp-like transformation languages, or Ruby, etc. A few things I found so far: A couple of places on the web have pointed out Linq as a possible alternative. Quite generally I any kind of classifications, also from those who have had the best XSLT experience. For scheme http://cs.brown.edu/~sk/Publications/Papers/Published/kk-sxslt/ and http://www.okmij.org/ftp/Scheme/xml.html
It's difficult to assess technologies when you don't have deep experience of them, but of course that's exactly when you have to make your decisions, so there's no simple answer to that dilemma. You cite two concerns: performance and usability. I'll try to address both below. Firstly, performance. Performance of course depends not only on the language but also on the implementation, and also on the expertise of the users. Different XSLT processors can vary widely in performance, and the same processor can vary widely depending on how it is used (with Saxon, for example, people who have performance problems are very often found to be using it with DOM, which is a poor combination, and performance can increase ten-fold if you use Saxon's native tree model instead). So the first advice is don't take performance on hearsay, measure it; and the second advice is to make sure that the person doing the measuring has enough experience not to make silly mistakes. More easily said than done. Crudely, you can separate transformation jobs into two categories: simple and complex. For simple transformations, with a good XSLT processor the time is all spent parsing and serializing and the XSLT processing time hardly comes into the picture. Since any other technology is going to incur the same parsing and serialization costs, the choice of transformation technology isn't going to make a big difference (except perhaps very for very low-level coding using streaming, but not many people can afford the programming time and skills needed to implement that). For complex transformations on large documents, you start to get the same issues as with SQL programming: achieving good performance requires good interaction between the skills and knowledge of the programmer, and the capabilities of the optimizer. As with SQL, it's very easy in such a high-level language to write a few simple statements that result in the processor having to do a very large amount of work. But also as with SQL, programmers who know what they are doing will do much better than novices. Second, usability. The XML-based syntax for XSLT is very off-putting to a lot of people on first encounter with the language. But there are good reasons and real benefits for doing it this way: there is the "template" argument, that a lot of the code consists of XML to be written to the result document, and the best way to write XML is in XML. And there is the "reflection" argument; in large complex systems, it is very common to find stylesheets that generate stylesheets. Then there is the "tools" argument; if you are in an XML shop, you probably have a lot of XML tooling such as syntax-directed editors, and it's good to be able to use the same tools to handle your programs and your data. The disadvantages turn out to be fairly cosmetic in comparison: there's the number of keystrokes involved in editing (easily fixed with a good editing tool), and there's the verbosity of the code (reducing its readability). The verbosity is vastly reduced in XSLT 2.0 with the introduction of features such as regular expressions and stylesheet functions: many stylesheets are reduced to a half or a third in size when they take full advantage of XSLT 2.0. Your mention of DSSSL leaves me with a wry smile. I've never used DSSSL, but the stories I heard were that it was unsuceessful because its syntax was arcane, and unrelated to the syntax of the data (SGML). The use of an XML syntax for XSLT was strongly motivated by experience with DSSSL. 
There are people who love XSLT and there are people who hate it. Unsurprisingly, those who use it a lot tend to fall into the first category. Those who dislike it are generally those who haven't learnt to "think the XSLT way". You could argue that a programming language shouldn't affect the way you think, but it does: writing in a rule-based language takes a different mindset from writing in an imperative language. The first reaction of many programmers is that they feel less in control (describing the problem, rather than telling the computer what to do step by step). It's very similar to the reaction you used to see when people were first introduced to SQL. These days, people learn SQL earlier in their careers so there's less mental readjustment required. Ultimately, you should choose a technology based on objective measurable criteria, not on love/hate reactions. It's difficult to make those measurements. But there are lots of people using XSLT very intensively and very successfully, so there is no doubt that it can be done.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211049", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/80034/" ] }
211,053
Haskell often provides two versions of a given function f, i.e: f :: Int ... genericF :: Integral i => i ... There exist many standard library functions with those two versions: length, take, drop, etc. Quoting the description of genericLength : The genericLength function is an overloaded version of length. In particular, instead of returning an Int, it returns any type which is an instance of Num. It is, however, less efficient than length. My question is: where does the efficiency loss comes from? Can't the compiler detect that we are using genericLength as an Int and therefore us length for better performance? Why isn't length generic by default?
It's difficult to assess technologies when you don't have deep experience of them, but of course that's exactly when you have to make your decisions, so there's no simple answer to that dilemma. You cite two concerns: performance and usability. I'll try to address both below. Firstly, performance. Performance of course depends not only on the language but also on the implementation, and also on the expertise of the users. Different XSLT processors can vary widely in performance, and the same processor can vary widely depending on how it is used (with Saxon, for example, people who have performance problems are very often found to be using it with DOM, which is a poor combination, and performance can increase ten-fold if you use Saxon's native tree model instead). So the first advice is don't take performance on hearsay, measure it; and the second advice is to make sure that the person doing the measuring has enough experience not to make silly mistakes. More easily said than done. Crudely, you can separate transformation jobs into two categories: simple and complex. For simple transformations, with a good XSLT processor the time is all spent parsing and serializing and the XSLT processing time hardly comes into the picture. Since any other technology is going to incur the same parsing and serialization costs, the choice of transformation technology isn't going to make a big difference (except perhaps very for very low-level coding using streaming, but not many people can afford the programming time and skills needed to implement that). For complex transformations on large documents, you start to get the same issues as with SQL programming: achieving good performance requires good interaction between the skills and knowledge of the programmer, and the capabilities of the optimizer. As with SQL, it's very easy in such a high-level language to write a few simple statements that result in the processor having to do a very large amount of work. But also as with SQL, programmers who know what they are doing will do much better than novices. Second, usability. The XML-based syntax for XSLT is very off-putting to a lot of people on first encounter with the language. But there are good reasons and real benefits for doing it this way: there is the "template" argument, that a lot of the code consists of XML to be written to the result document, and the best way to write XML is in XML. And there is the "reflection" argument; in large complex systems, it is very common to find stylesheets that generate stylesheets. Then there is the "tools" argument; if you are in an XML shop, you probably have a lot of XML tooling such as syntax-directed editors, and it's good to be able to use the same tools to handle your programs and your data. The disadvantages turn out to be fairly cosmetic in comparison: there's the number of keystrokes involved in editing (easily fixed with a good editing tool), and there's the verbosity of the code (reducing its readability). The verbosity is vastly reduced in XSLT 2.0 with the introduction of features such as regular expressions and stylesheet functions: many stylesheets are reduced to a half or a third in size when they take full advantage of XSLT 2.0. Your mention of DSSSL leaves me with a wry smile. I've never used DSSSL, but the stories I heard were that it was unsuceessful because its syntax was arcane, and unrelated to the syntax of the data (SGML). The use of an XML syntax for XSLT was strongly motivated by experience with DSSSL. 
There are people who love XSLT and there are people who hate it. Unsurprisingly, those who use it a lot tend to fall into the first category. Those who dislike it are generally those who haven't learnt to "think the XSLT way". You could argue that a programming language shouldn't affect the way you think, but it does: writing in a rule-based language takes a different mindset from writing in an imperative language. The first reaction of many programmers is that they feel less in control (describing the problem, rather than telling the computer what to do step by step). It's very similar to the reaction you used to see when people were first introduced to SQL. These days, people learn SQL earlier in their careers so there's less mental readjustment required. Ultimately, you should choose a technology based on objective measurable criteria, not on love/hate reactions. It's difficult to make those measurements. But there are lots of people using XSLT very intensively and very successfully, so there is no doubt that it can be done.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211053", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/101692/" ] }
211,137
I don't understand why a static method can't use non-static data. Can anybody explain what the problems are and why we can't do it?
In most OO languages, when you define a method inside a class, it becomes an Instance Method . When you create a new instance of that class, via the new keyword, you initialize a new set of data unique to just that instance. The methods belonging to that instance can then work with the data you defined on it. Static Methods , by contrast, are ignorant of individual class instances. The static method is similar to a free function in C or C++. It isn't tied to a specific instantiation of the class. This is why they cannot access instance values. There's no instance to take a value from! Static Data is similar to a static method. A value that is declared static has no associated instance. It exists for every instance, and is only declared in a single place in memory. If it ever gets changed, it will change for every instance of that class. A Static Method can access Static Data because they both exist independently of specific instances of a class. It might help to look at how you invoke a static method, compared to a instance method. Let's say we had the following class (using Java-like pseudocode): class Foo { // This static value belongs to the class Foo public static final string name = "Foo"; // This non-static value will be unique for every instance private int value; public Foo(int value) { this.value = value; } public void sayValue() { println("Instance Value: " + value); } public static void sayName() { println("Static Value: " + name); } } Foo foo1 = new Foo(10); Foo foo2 = new Foo(20); foo1.sayValue(); // Prints "Instance Value: 10" - called on foo1 foo2.sayValue(); // Prints "Instance Value: 20" - called on foo2 Foo.sayName(); // Prints "Static Value: Foo" - called on Foo (not foo1 or foo2) Update As COME FROM points out in the comments, a static method is capable of working with non-static data, but it must be passed explicitly. Let's assume the Foo class had another method: public static Foo Add(Foo foo1, Foo foo2) { return new Foo(foo1.value + foo2.value); } Add is still static, and has no value instances of its own, but being a member of the class Foo it can access the private value fields of the passed-in foo1 and foo2 instances. In this case, we're using it to return a new Foo with the added values of both passed-in values. Foo foo3 = Foo.Add(foo1, foo2); // creates a new Foo with a value of 30
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211137", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/95092/" ] }
211,276
I am trying to learn how to do events in C#, and according to the MSDN Events tutorial , Events are declared using delegates. If you have not yet studied the Delegates Tutorial, you should do so before continuing so now I'm trying to understand Delegates first, and failing horribly. What are they for? What problem are you trying to solve where you delegate something? I THINK they're for passing methods into other methods but what is the point of that? Why not just have a method that does whatever you're doing in the first place?
Passing functions to other functions is a very good way of generalizing code , especially when you have lots of nearly-duplicate code with only the logic in the middle differing. My favorite (simple) example of this is making a Benchmark function, that accepts a delegate containing the code you wish to benchmark. I'll be using lambda syntax and Func/Action objects, because they're much more concise (and common) than seeing explicitly created delegates nowadays. public void Benchmark(int times, Action func) { var watch = new Stopwatch(); double totalTime = 0.0; for (int i = 0; i < times; i++) { watch.Start(); func(); // Execute our injected function watch.Stop(); totalTime += watch.EllapsedTimeMilliseconds; watch.Reset(); } double averageTime = totalTime / times; Console.WriteLine("{0}ms", averageTime); } You can now pass in any block of code to that benchmark function (along with the number of times you want to run it) and get back the average execution time! // Benchmark the amount of time it takes to ToList a range of 100,000 items Benchmark(5, () => { var xs = Enumerable.Range(0, 100000).ToList(); }); // You can also pass in any void Function() by its handler Benchmark(5, SomeExpensiveFunction); If you couldn't inject the code you wanted to benchmark into the middle of that function, you'd probably end up copying and pasting the logic around anywhere that you wanted to use it. Linq makes extensive use of function passing, allowing you to have a whole host of really flexible operations on sets of data. Let's take the Where function as an example. It filter's out elements of a list that return "true" when passed a comparison function. var xs = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; // This function compares each element, and returns true for those that are above 5 // Result = { 6, 7, 8, 9, 10 } var above5 = xs.Where((x) => x > 5); // This function only returns true if an element is even // Result = { 2, 4, 6, 8, 10 } var evens = xs.Where((x) => x % 2 == 0); The Where function itself very generalized. It simply loops over a collection, and yields a new collection containing only the values that match some predicate function. It's up to you to inject the code that tells it precisely what it's looking for.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211276", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25432/" ] }
211,292
I've been reading about law of Demeter and I would like to know how to solve this traversing model properties problem that I see a lot on Objective-C. I know that there is a similar question but in this case I'm not calling a method from the last property that do some calculations, instead, I'm just setting values (Ok, I know that getter are methods but my intention here is just to get the value, not to change the state of some object). For example: self.priceLabel.text = self.media.ad.price.value; Should I change that for something like: self.priceLabel.text = [self.media adPriceValue]; and inside the Media.m - (NSString *)adPriceValue { return [self.ad priceValue]; } and inside the Ad.m - (NSString *)priceValue { return [self.price value]; } Is that a good solution? Or Am I creating unnecessary methods?
Passing functions to other functions is a very good way of generalizing code , especially when you have lots of nearly-duplicate code with only the logic in the middle differing. My favorite (simple) example of this is making a Benchmark function, that accepts a delegate containing the code you wish to benchmark. I'll be using lambda syntax and Func/Action objects, because they're much more concise (and common) than seeing explicitly created delegates nowadays. public void Benchmark(int times, Action func) { var watch = new Stopwatch(); double totalTime = 0.0; for (int i = 0; i < times; i++) { watch.Start(); func(); // Execute our injected function watch.Stop(); totalTime += watch.EllapsedTimeMilliseconds; watch.Reset(); } double averageTime = totalTime / times; Console.WriteLine("{0}ms", averageTime); } You can now pass in any block of code to that benchmark function (along with the number of times you want to run it) and get back the average execution time! // Benchmark the amount of time it takes to ToList a range of 100,000 items Benchmark(5, () => { var xs = Enumerable.Range(0, 100000).ToList(); }); // You can also pass in any void Function() by its handler Benchmark(5, SomeExpensiveFunction); If you couldn't inject the code you wanted to benchmark into the middle of that function, you'd probably end up copying and pasting the logic around anywhere that you wanted to use it. Linq makes extensive use of function passing, allowing you to have a whole host of really flexible operations on sets of data. Let's take the Where function as an example. It filter's out elements of a list that return "true" when passed a comparison function. var xs = new[] { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 }; // This function compares each element, and returns true for those that are above 5 // Result = { 6, 7, 8, 9, 10 } var above5 = xs.Where((x) => x > 5); // This function only returns true if an element is even // Result = { 2, 4, 6, 8, 10 } var evens = xs.Where((x) => x % 2 == 0); The Where function itself very generalized. It simply loops over a collection, and yields a new collection containing only the values that match some predicate function. It's up to you to inject the code that tells it precisely what it's looking for.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211292", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/100745/" ] }
211,319
The this keyword is primarily used in three situations. The first and most common is in setter methods to disambiguate variable references. The second is when there is a need to pass the current class instance as an argument to a method of another object. The third is as a way to call alternate constructors from within a constructor. However someone at work decided too put a PMD/Checkstyle rules at work that force us to put "this" in front of all the variable and method. Is it really viable to do that or it's just to verbose ? Related: What is the accepted style for using the `this` keyword in Java?
This question straddles the boundary of opinion-based vs. factual information; nevertheless I consider it valuable enough to leave open because I find the usual answers of "this is a style question, do what you like" too simplistic. So here's my pointed opinion: Using this to refer to attributes of a class within the class is redundant, and so increases code verbosity with no clear benefit. Not only can you almost always look up the status of a variable via the tools you use; more importantly, if you have to look it up, then your class is too big in the first place . Just like a method that requires you to scroll up to find the declaration of a local variable is too long , if you have to look up a member field while writing code within a class, then that class does too much . In other words, using this.fieldName except for disambiguation is unnecessary cruft, and if you need it for the reason commonly given, you have greater problems than scope issues.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211319", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21557/" ] }
211,337
I recently stumbled upon Microsoft's framework for code contracts. I read a bit of documentation and found myself constantly asking: "Why would I ever want to do this, as it does not and often cannot perform a static analysis." Now, I have a kind of defensive programming style already, with guarding exceptions like this: if(var == null) { throw new NullArgumentException(); } I'm also using NullObject Pattern alot and have rarely any problems. Add Unit Tests to it and you're all set up. I have never used asserts and never missed them. Quite the contrary. I really hate code that has lots of meaningless asserts in it, which is just noise to me and distracts me from what I really want to see. Code contracts, at least the Microsoft way are much the same - and even worse. They add lots of noise and complexity to the code. In 99% an exception will be thrown anyway - so I don't care whether it's from the assert/contract or the actual problem. Just very, very few cases remain in which the program states really becomes corrupted. So frankly, what is the benefit of using code contracts? Is there any at all? If you already use unit tests and code defensively I feel that introducing contracts is just not worth the cost and puts noise in your code that a maintainer will curse when he's updating that method, much like I do when I cannot see what the code is doing due to useless asserts. I have yet to see a good reason to pay that price.
You might just as well have asked when static typing is better than dynamic typing. A debate that has been raging for years with no end in sight. So have a look at questions like Dynamically vs Statically typed languages studies But the basic argument would be the potential to discover and fix issues at compile time that might otherwise have slipped into production. Contracts vs. Guards Even with guards and exceptions you are still faced with the issue that the system will fail to perform it's intended task in some instance. Possibly an instance that will be quite critical and costly to fail at. Contracts vs. Unit tests The usual argument on this one is that tests prove the presence of bugs, while types (contracts) prove the absence. F.ex. using a type you know that no path in the program could possibly supply invalid input, while a test could only tell you that the covered path did supply the correct input. Contracts vs. Null Object pattern Now this is at least in the same ball park. Languages like Scala and Haskell has had great success with this approach to eliminating null references entirely from programs. (Even if Scala formally allows nulls the convention is to never use them) If you already employ this pattern to eliminate NREs you've basically removed the largest source of runtime failures there is in basically the manner contracts allow you to do it. The difference might be that contracts has an option to automatically require all your code to avoid null, and thus force you to use this pattern in more places to pass compilation. On top of that contracts also give you the flexibility to target things beyond null. So if you no longer see any NRE in your bugs you might want to use contracts to strangle the next most common issue you might have. Off by one? Index out of range? But... All that being said. I do agree that the syntactic noise (and even structural noise) contracts add to the code is quite substantial and the impact the analysis has on your buildtime should not be underestimated. So if you decide to add contracts to your system it would probably be wise to do so very carefully with a narrow focus on which class of bugs one tries to address.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211337", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20294/" ] }
211,342
I use yii framework that implements Active Record pattern as ORM base. It has CActiveRecord class that is a table wrapper class with attributes reflecting table columns. So each object of this class represents a database row. Wiki says about Active Record pattern: Active record is an approach to accessing data in a database and A database table or view is wrapped into a class. Thus, an object instance is tied to a single row in the table. So far so good. But where should I put complex raw sql query that retrieves statistics data for example? And, more generally, where should I put methods that retrieve some data that can not be an active record object (like data retrieved with aggregation queries) or if I knowing do not want to retrieve an object but an array instead for example?
You might just as well have asked when static typing is better than dynamic typing. A debate that has been raging for years with no end in sight. So have a look at questions like Dynamically vs Statically typed languages studies But the basic argument would be the potential to discover and fix issues at compile time that might otherwise have slipped into production. Contracts vs. Guards Even with guards and exceptions you are still faced with the issue that the system will fail to perform it's intended task in some instance. Possibly an instance that will be quite critical and costly to fail at. Contracts vs. Unit tests The usual argument on this one is that tests prove the presence of bugs, while types (contracts) prove the absence. F.ex. using a type you know that no path in the program could possibly supply invalid input, while a test could only tell you that the covered path did supply the correct input. Contracts vs. Null Object pattern Now this is at least in the same ball park. Languages like Scala and Haskell has had great success with this approach to eliminating null references entirely from programs. (Even if Scala formally allows nulls the convention is to never use them) If you already employ this pattern to eliminate NREs you've basically removed the largest source of runtime failures there is in basically the manner contracts allow you to do it. The difference might be that contracts has an option to automatically require all your code to avoid null, and thus force you to use this pattern in more places to pass compilation. On top of that contracts also give you the flexibility to target things beyond null. So if you no longer see any NRE in your bugs you might want to use contracts to strangle the next most common issue you might have. Off by one? Index out of range? But... All that being said. I do agree that the syntactic noise (and even structural noise) contracts add to the code is quite substantial and the impact the analysis has on your buildtime should not be underestimated. So if you decide to add contracts to your system it would probably be wise to do so very carefully with a narrow focus on which class of bugs one tries to address.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211342", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/33850/" ] }
211,395
So we present a straightforward coding exercise to new candidates with some well defined requirements. Occasionally we receive solutions which don't really solve the problem at hand, but are over-engineered to solve a perceived problem - often outside the bounds of the exercise. Now my question is, is this a warning sign? EDIT: Quite a lot of the discussion is based on the test being flawed - which is a fair point. As I described in a comment, the basic premise of the test is to show how you can read the data from the file in a sensible way (and you'd be amazed at the variety of approaches we see), and how to match the items before calculating the latency between the updates. Now for this to work, certain assumptions have to be made about the data, and we look for these assumptions, and we also state explicitly that we want to see the approach you take (including OO approach etc.) All this in a two hour time frame. IMHO, when I was interviewing it was the most complete exercise I came across. The particular scenario which I'm pondering about is where a candidate, rather than reading from the file, accepted "network" input in a multi-threaded application, which clearly is not in scope.
The problem is the test is skewed. You're asking someone to demonstrate their ability to write complex, enterprise-level software using a simple exercise taking only a few minutes. There are other interviewers at other companies who complain that candidates don't show enough skill in object oriented design with these exercises, so people tend to overcompensate. It doesn't necessarily mean your candidate is incapable of using simpler code when the situation warrants it. If you want to know if that's the case with your candidate, just ask them to redo it, giving them some specific guidelines. Say, "I can see you were showcasing your object oriented design skills, but it seems overkill for such a simple problem. Can you rewrite it using only two small functions?"
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211395", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/10505/" ] }
211,421
For example, say I want to fetch a User and all of his phone numbers and email addresses. The phone numbers and emails are stored in separate tables, One user to many phones/emails. I can do this quite easily: SELECT * FROM users user LEFT JOIN emails email ON email.user_id=user.id LEFT JOIN phones phone ON phone.user_id=user.id The problem* with this is that it's returning the user's name, DOB, favorite color, and all the other information stored in the user table over-and-over again for each record (users×emails×phones records), presumably eating up bandwidth and slowing down the results. Wouldn't it be nicer if it returned a single row for each user, and within that record there was a list of emails and a list of phones? It would make the data much easier to work with too. I know you can get results like this using LINQ or perhaps other frameworks, but it seems to be a weakness in the underlying design of relational databases. We could get around this by using NoSQL, but shouldn't there be some middle ground? Am I missing something? Why doesn't this exist? * Yes, it's designed this way. I get it. I'm wondering why there isn't an alternative that is easier to work with. SQL could keep doing what it's doing but then they could add a keyword or two to do a little bit of post-processing that returns the data in a nested format instead of a cartesian product. I know this can be done in a scripting language of your choice, but it requires that the SQL server either sends redundant data (example below) or that you to issue multiple queries like SELECT email FROM emails WHERE user_id IN (/* result of first query */) . Instead of having MySQL return something akin to this: [ { "name": "John Smith", "dob": "1945-05-13", "fav_color": "red", "email": "[email protected]", }, { "name": "John Smith", "dob": "1945-05-13", "fav_color": "red", "email": "[email protected]", }, { "name": "Jane Doe", "dob": "1953-02-19", "fav_color": "green", "email": "[email protected]", } ] And then having to group on some unique identifier (which means I need to fetch that too!) client-side to reformat the result set how you want it, just return this: [ { "name": "John Smith", "dob": "1945-05-13", "fav_color": "red", "emails": ["[email protected]", "[email protected]"] }, { "name": "Jane Doe", "dob": "1953-02-19", "fav_color": "green", "emails": ["[email protected]"], } ] Alternatively, I can issue 3 queries: 1 for the users, 1 for the emails, and 1 for the phone numbers, but then the email and phone number result sets need to contain the user_id so that I can match them back up with the users I previously fetched. Again, redundant data and needless post-processing.
It's returning exactly what you asked for: a single record set containing the Cartesian product defined by the joins. There are plenty of valid scenarios where that's exactly what you would want, so saying that SQL is giving a bad result (and thus implying that it would be better if you changed it) would actually screw a lot of queries up. What you're experiencing is known as " Object/Relational Impedance Mismatch, " the technical difficulties that arise from the fact that the object-oriented data model and the relational data model are fundamentally different in several ways. LINQ and other frameworks (known as ORMs, Object/Relational Mappers, not coincidentally,) don't magically "get around this;" they just issue different queries. It can be done in SQL too. Here's how I'd do it: SELECT * FROM users user where [criteria here] Iterate the list of users and make a list of IDs. SELECT * from EMAILS where user_id in (list of IDs here) SELECT * from PHONES where user_id in (list of IDs here) And then you do the joining client-side. This is how LINQ and other frameworks do it. There's no real magic involved; just a layer of abstraction.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211421", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7390/" ] }
211,608
Is it good idea to require to commit only working code? This commit doesn't need to leave the repository in a working state as: ... we are in early design stages, the code is not yet stable. ... you are the sole developer on the project. You know why things aren't working. Furthermore, you are not stopping anyone's work by committing broken code. ... the code currently doesn't work. We are going to make a big change to it. Let's commit, in order to have a point to revert to if things get ugly. ... the chain is long, no trouble if broken code exists in the local branch. I.e. local files staging area commits in local branch commits in remote personal feature branch merge with remote develop branch merge with remote master branch merge with remote release branch ... commit early,commit often. So in the above-linked question, the majority of answers say that committing not-compilable code is no problem in local and feature branches. Why? What is the value of a broken commit? Some of the highly voted comments say that on a local brach one can do whatever one wants. However, I am not interested in the technical side of the question. Rather I would like to learn the best practices - the habits that people who have worked many years in the industry have found most productive. I am amazed at the vast amount of great answers! They lead me to the conclusion that I am not adept enough at using branches to organize my code.
One of the philosophies suggested by Linus Torvalds is that creative programming should be like a series of experiments. You have an idea, and follow it. It doesn't always work out, but at least you tried it. You want to encourage developers to try creative ideas, and to do that, it must be cheap to try that experiment, and cheap to recover. This is the true power of git commits being so cheap (fast and easy). It opens up this creative paradigm that empowers developers to try thing they might not have otherwise. This is the liberation of git.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211608", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54268/" ] }
211,610
So I have this problem : in order to structure my code hierarchically, for every new tiny thing, I create a separate sub-folder, file, class ... and in it one 10-line function. It is an anti-pattern. Lately I have been trying to err on the opposite side. That is, when I need something new, instead of dreaming up grand hierarchies, I just type the code somewhere inside the existing classes. This is beginning to look like another anti-pattern, and I wander which is the lesser evil. So the pattern is the following: write everything here and now , and, when the need arises, refactor the code out to a separate class. Kind of refactorable God. Am I using this as an excuse to write God classes? Or is there a pattern/worklfow, similar to this one, that I can adopt? What are the indicators that things are going well/terrible?
One of the philosophies suggested by Linus Torvalds is that creative programming should be like a series of experiments. You have an idea, and follow it. It doesn't always work out, but at least you tried it. You want to encourage developers to try creative ideas, and to do that, it must be cheap to try that experiment, and cheap to recover. This is the true power of git commits being so cheap (fast and easy). It opens up this creative paradigm that empowers developers to try thing they might not have otherwise. This is the liberation of git.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211610", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54268/" ] }
211,614
Recently I came across a moderately large python codebase with lots of MyClassAbstractFactory , MyClassManager , MyClassProxy , MyClassAdapter etc. classes. While on the one hand those names pointed me to research and learn the corresponding patterns, they were not very descriptive of what the class does . Also, they seem to fall within the forbidden list of words in programming: variable , process_available_information , data , amount , compute : overly broad names, that don't tell us anything about the function when used by themselves . So should there be CommunicationManager or rather PortListener ? Or maybe I do not understand the problem at all...?
AbstractFactory is indeed a poor choice for a name. There is no way to know what is created by this factory , and when you'll look for an entity which creates Animal s, you'll never find the corresponding factory by name. AnimalAbstractFactory is not a wise choice neither, since in most languages, it would be redundant with the abstract keyword in the signature. This being said, there are several good reasons, highlighted by the comments, to actually include Abstract in the name: not only there are several contexts where you don't have the full signature, but just the name, but also, keeping AnimalFactory for an interface may be a wise choice (unless, unfortunately, the convention of the language/framework is to prefix interfaces with I ). AnimalCreationUtility would also be a bad choice: if it's a factory , make things easier for people who will read code, and call it a factory . abstract AnimalFactory is ok. It doesn't have redundancy, and is clear that it is an abstract factory which delegates the creation of animals to its children. So yes, including the name of the design pattern is a good idea, but it should be only a part of the name, and shouldn't be redundant with the other parts of the signature.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211614", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54268/" ] }
211,682
My manager has recently really been pushing to use velocity as a target and measure of productivity. We are currently working at an average velocity of 50 story points. My manager wants us to increase it by 40% to 70 story points (with no increase in team members). If we don't achieve this increase he wants us to deliver a full break down explaining why. The whole idea of measuring team performance by velocity and using it as a target seems wrong to me, but I am finding it difficult to explain why. Any help? Why isn't this the right way to measure and incentivize productivity?
Well, it's perfectly simple to increase velocity by 40% - just add 40% more points to all your estimates and do the same amount of work. Given that this is so, it should be apparent why using velocity as a target is wrong, it just encourages inflated estimates. A less glib answer is that your estimate already assumes you are going as fast as you can while doing everything correctly. The only way to really increase productivity by 40% is either to work overtime or to not do everything correctly. Both of these speed things up in the short term, but slow things down in the long term. And the long term in this case isn't very long, a month at the outside. The optimal long term strategy is to never go faster than your sustainable pace. Peopleware talks eloquently about the issues of trying to force programmers into higher productivity , and is an often cited classic. But in general it won't be easy to change the mind of a manager that is going down the path that yours is. Your project may well be in trouble - this is certainly a red flag.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211682", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60193/" ] }
211,755
Our team has recently inherited a relatively large project from another company (~250k lines). It was developed using C++Builder and we intend to port the Ui side to Qt. Most of the Ui code is separate from the business logic (yay!) but the logic side is quite a mess. There is a lot of diamond inheritance going on (virtual inheritance thankfully) but it makes understanding the code quite difficult to do. We've got very little documentation to go on, what's there is out of date (comments included). I generated class diagrams using Doxygen, here's the most complex one (I had to remove most of the class names but kept some of the more important ones and kept the standard C++ data types and std classes, yes, they inherit from std) So far we've been able to convert the base program to Qt and we're at a point where we can start converting the program's functionality bit by bit. Problem is, is it worth it long term? We would like to be maintaining this software as our own long term. Is there a general approach we should take to untangling this kind of inheritance mess or should we simply redesign from scratch and only keep bits and pieces of the existing code as we go? EDIT: Some more info Zavior posted a link to an article about why we should not start from scratch but Ptolemy also brought up some good questions and I'd like to add some information about our situation. The program we have is not 'bug free.' There are known issues that users have workarounds for most. There is no 'list' of these issues, that is currently being compiled by talking to all the existing users one by one as they tend to keep things to themselves. We are all new developers to this project. Our only resource is a developer who started working on the project about half-way through its lifetime. He is available via e-mail/chat mostly. He has also put together some documentation on how some parts of the code work. The program has only been used as an internal tool so far. We would like to make it commercially viable. EDIT 2: One of the most important things we want to do is have the program be in Qt. It's currently using the VCL framework from C++Builder that no one on our team is familiar with and we only have 1 license for. It's during my work porting from VCL to Qt that I found the messy code structure and question the decision to 'convert' vs redo.
If you re-write & re-design from scratch, you're going to have two MASSIVE problems:

1. You don't have a spec. You might think you have a spec, but it turns out that the REAL spec is the old code. So you're going to have to dig into it to figure out what it really does, just so you know what to make the new system do.
2. You'll have a long period (looking at that class hierarchy, possibly years) in which you won't have a saleable product. Which means you either have to maintain the old one in parallel, or give up on shipping anything for a long time.

It stinks, but you probably need to just try to make the old one better. Write tests for everything you touch, don't hesitate to rip out a big section to make it better, but don't try to re-write it all at once. The project may not survive such an attempt.
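To make the "write tests for everything you touch" advice concrete, here is a rough sketch of a characterization test - one that records what the existing code currently does and fails if a refactoring changes it. It is shown in Python with made-up names purely for brevity; in the actual project the same pattern would be applied with a C++ test framework against the legacy routines.

    import unittest

    # Stand-in for a legacy routine being refactored (hypothetical; in the
    # real project this would be the existing code behind a thin wrapper).
    def calculate_invoice_total(line_items, tax):
        return round(sum(line_items) * (1 + tax), 2)

    class CharacterizationTest(unittest.TestCase):
        """Pin down what the legacy code currently does - right or wrong -
        so any refactoring that changes behaviour is caught immediately."""

        def test_known_inputs_keep_producing_known_outputs(self):
            # Expected values were recorded by running the existing code once,
            # not derived from a spec (there is no spec - the old code IS the spec).
            self.assertEqual(calculate_invoice_total([10.0, 20.0], tax=0.2), 36.0)
            self.assertEqual(calculate_invoice_total([], tax=0.2), 0.0)

    if __name__ == "__main__":
        unittest.main()

The point is that the expected values come from the old code itself rather than from a document, which is exactly why the old code ends up being the only spec you have.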
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211755", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58558/" ] }
211,895
Would you consider it appropriate if you were asked for your Stack Exchange username in a software job interview (or as a pre-interview screening question)? To me, it seems like a very reasonable request, and one that would be extremely informative -- I'm sure I could learn more about a candidate in five minutes by looking at the questions and answers they have posted on Stack Exchange than in a 30-minute interview. But would such a question be bad form? Is it "too personal"? (Likewise for GitHub, or other public/online code-sharing forums.)
Short Answer: Absolutely OK.

A little lengthier answer: At my workplace, we routinely ask for a candidate's Stack Overflow / Stack Exchange username. Contribution to the Stack Exchange community leaves a much clearer trail of where someone is at with their skills. I know others who ask for GitHub accounts and refuse to accept candidates without a GitHub account*. In our case, we won't remove a candidate from consideration if they don't have an account.

Ultimately, it's just one piece of the interviewing puzzle as you're trying to identify a match between the company's needs and the candidate's skills. It's not a make-or-break factor; it merely helps confirm impressions made during the interview.

* To be clear, I don't condone that approach, and I think it causes that team to miss out on otherwise well-qualified candidates. I brought it up to point out having heard of more extreme stances and to show that just asking for a Stack Overflow account name is pretty mild in comparison.

Some additional qualifiers based upon comments:

- We don't look at Meta Stack Overflow and meta-type posts. Meta is different, and we understand that. It's also really easy to miss the context behind those types of posts. IMO, they are closer to noise than signal when it comes to evaluating a candidate.
- Likewise, comments and review activity aren't considered. They lack context and they don't have a meaningful correlation to the candidate's ability to do the job.
- We have found a solid correlation between a candidate's performance in an interview and the level of Q&A that they engage in. Their Stack Overflow / Stack Exchange account becomes a supporting factoid, equivalent to a submitted code sample during the interview process.

Obligatory xkcd strip on interviews. Elaboration.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211895", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50055/" ] }
211,913
I'm a freshman Computer Science student and we just started doing some actual projects in Python. I have found I'm very efficient when I use the pen-and-paper method that my professor suggested in class, but when I can't write my problem down and work my algorithms out on paper, I am really slow.

During labs, I always seem to have to take the assignment back to my dorm. When I get there and write it out, I solve the problem that took me the whole class in like 5 minutes. Maybe it's because I get stressed seeing people solving labs before me, or maybe it's the pen-and-paper method.

I was browsing through forums and someone wrote that if you have to write your programs on paper, then you shouldn't be a programmer. I'm really worried because I'm so much better when I can see what the program is doing and track my way through it before typing actual code. Am I doing something wrong?

Edit: Sorry for being unclear, but when I said writing on paper, I meant my problem-solving approach (e.g. writing examples, making tables with values, etc.), not my actual code. I just use the paper to get my ideas out.
There's nothing wrong with working out your algorithms on paper first. Not so much for everyday coding, but for more complex algorithms, professional programmers work them out on paper or a whiteboard all the time, especially if a graphical format makes it clearer. For a student, every program is complex.

If you want to get better at designing algorithms at a computer, though, there are some techniques you can practice. Don't just start by writing out the code; write the same things you would put on paper as comments, then expand them into real code or more detailed comments one by one. For example, if I'm deleting an element from the middle of a linked list, I might start with something like:

    // find the element
    // point the previous element to the next element
    //     How do I get a pointer to the previous element?
    //     doubly-linked list?
    //     another find?
    //     keep track during the first find?
    // delete the element

Then I might replace // find the element with a function containing more pseudocode, and keep going until I have a complete solution. Don't think code has to be written in a linear manner.
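To show what the expansion step can end up looking like, here is a rough sketch in Python (the language the question is about) of those comments grown into a working function for a singly-linked list; the Node class and all names are made up for illustration:

    class Node:
        def __init__(self, value, next=None):
            self.value = value
            self.next = next

    def delete(head, target):
        """Delete the first node whose value equals target; return the new head."""
        # find the element, keeping track of the previous node during the find
        prev, current = None, head
        while current is not None and current.value != target:
            prev, current = current, current.next
        if current is None:
            return head  # nothing to delete
        # point the previous element to the next element
        if prev is None:
            head = current.next  # the element to delete was the head itself
        else:
            prev.next = current.next
        # "delete the element": the unlinked node is reclaimed by the garbage collector
        return head

Notice that the original outline survives as the comments inside the finished function, which is the whole point of the technique.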
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211913", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/102597/" ] }
211,959
I have some difficulty understanding the concept of a "fixture". I know what a test suite is, a test case, a test run, but what exactly is a "fixture"? A parameterized test case? It seems to me that the meaning or semantics of the term "fixture" can vary slightly by programming language or by testing framework. I think a PHPUnit fixture ("the code to set the world up in a known state and then return it to its original state when the test is complete. This known state is called the fixture of the test.") is slightly different from a "FitNesse fixture", where "Fixtures are a bridge between the Wiki pages and the System Under Test (SUT), which is the actual system to test". Is there an expert in software testing around here who can answer this question? References to other programming languages are welcome.
In the context of the testing tools you mentioned, such as PHPUnit and FitNesse, this term definitely refers to the notion of a test fixture: something used to consistently test some item, device, or piece of software... A software test fixture refers to the fixed state used as a baseline for running tests in software testing. The purpose of a test fixture is to ensure that there is a well-known and fixed environment in which tests are run so that results are repeatable. Some people call this the test context.

Examples of fixtures:

- Loading a database with a specific, known set of data
- Erasing a hard disk and installing a known clean operating system installation
- Copying a specific known set of files
- Preparation of input data and set-up/creation of fake or mock objects
- ...

Use of fixtures: some advantages of fixtures include separation of the test initialization (and destruction) from the testing, reusing a known state for more than one test, and the special assumption by the testing framework that the fixture set-up works...
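The question mentions PHPUnit, but the same xUnit-style fixture shows up in most frameworks. As a rough sketch, here is the idea in Python's built-in unittest module, using an in-memory database as the "known state"; the table and test names are made up for illustration:

    import sqlite3
    import unittest

    class AccountRepositoryTest(unittest.TestCase):
        def setUp(self):
            # The fixture: a known, repeatable starting state built before every test.
            self.db = sqlite3.connect(":memory:")
            self.db.execute("CREATE TABLE accounts (name TEXT, balance REAL)")
            self.db.execute("INSERT INTO accounts VALUES ('alice', 100.0)")

        def tearDown(self):
            # Return the world to its original state when the test is complete.
            self.db.close()

        def test_known_data_is_present(self):
            row = self.db.execute(
                "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()
            self.assertEqual(row[0], 100.0)

    if __name__ == "__main__":
        unittest.main()

Here setUp builds the fixture before each test and tearDown disposes of it afterwards - exactly the "set the world up in a known state and then return it to its original state" definition quoted in the question.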
{ "source": [ "https://softwareengineering.stackexchange.com/questions/211959", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/9589/" ] }