source_id | question | response | metadata
---|---|---|---|
65,181 | Why should a programmer ever fork a library for inclusion in a widely used application? I ask this question because I was reading an article about why Chromium isn't packaged for many Linux distros like Fedora. Apparently its largely due to the fact that Google has forked a number of libraries, modified them, and included them in Chromium. This has driven up the complexity of packaging releases. There are a number of reasons why this can be a bad thing, but how strong a case can you actually make for doing so in a large widely used application such as Chromium? The original article: http://ostatic.com/blog/making-projects-easier-to-package-why-chromium-isnt-in-fedora Isn't it usually worth the effort to make slight modifications to your own program in order to use a popular and well developed library? | Though I think I've heard composition-vs-inheritance discussions long before GoF, I can't put my finger on a specific source. Might have been Booch anyway. <rant> Ah but like many mantras, this one has degenerated along typical lines: it is introduced with a detailed explanation and argument by a well-respected source who coins the catch-phrase as a reminder of the original complex discussion it is shared with a knowing part-of-the-club wink by a few in-the-know for a while, generally when commenting on n00b mistakes soon it is repeated mindlessly by thousands upon thousands who never read the explanation, but love using it as an excuse not to think , and as a cheap and easy way to feel superior to others eventually, no amount of reasonable debunking can stem the "meme" tide - and the paradigm degenerates into religion and dogma. The meme, originally intended to lead n00bs to enlightenment, is now used as a club to bludgeon them unconscious. Composition and inheritance are very different things, and should not be confused with each other. While it is true that composition can be used to simulate inheritance with a lot of extra work , this does not make inheritance a second-class citizen, nor does it make composition the favorite son. The fact that many n00bs try to use inheritance as a shortcut does not invalidate the mechanism, and almost all n00bs learn from the mistake and thereby improve. Please THINK about your designs, and stop spouting slogans. </rant> | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65181",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9287/"
]
} |
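As a footnote to the answer above, here is a minimal Java sketch (class names are made up) of the distinction the rant insists on: inheritance expresses "is-a" and reuses behaviour for free, while using composition to simulate that reuse requires explicit forwarding, which is the "lot of extra work" the answer mentions.

```java
// Inheritance: TimestampLogger *is a* Logger and inherits log() directly.
class Logger {
    void log(String msg) { System.out.println(msg); }
}

class TimestampLogger extends Logger {
    @Override
    void log(String msg) { super.log(System.currentTimeMillis() + " " + msg); }
}

// Composition: AuditTrail *has a* Logger. Reusing its behaviour means writing a
// forwarding method by hand; simulating a whole inherited interface this way is
// exactly the extra work the answer refers to.
class AuditTrail {
    private final Logger logger = new Logger();

    void record(String event) {
        logger.log("AUDIT " + event);   // delegate to the composed object
    }
}
```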
65,216 | When speaking with a non-programmer friend, I happened to mention the concept of "beautiful code" and she wanted to understand what that meant, but I was at a bit of a loss as to how to explain it to someone who would have no context whatsoever. When all code looks like gibberish to someone anyways, how can you explain what makes one piece of code prettier than another? Analogies could be useful too. | Language analogy Think of your most favorite story. It is probably beautifully written. To a non-English speaker, just because they can't understand it or comprehend why it is beautiful doesn't detract from its beauty. Construction analogy Consider a shoddily built house. It has doors and windows that are obviously bricked over. The mortar is cracked in one corner where the foundation was insufficient. And no one can understand why there is a stairwell leading up to a 2nd floor that doesn't exist. The toilet is built next to the kitchen with no exhaust fan and situated such that the prevailing winds will blow the nasty bog odors through the rest of the house. The chimney from the open fireplace is 5cm away from some wood, ready to catch fire. Contrast that with a house where everything is where it should be, it is low maintenance, perfectly insulated so that a bit of solar heat will keep it warm in winter, energy is recovered from the ventilation, so you have both fresh air, comfortable temperature and negligible power bills. It is built of non-flammable materials so that it can't burn down from a fire. It will withstand a 9.0 earthquake and an F5 tornado. It only costs 30% more to build than the other house, but will pay the difference off in 5 years. And it is aesthetically pleasing. The GUI is like the outward appearance of a house. A user can recognize beauty in a GUI, just like they can appreciate a beautiful house. But perhaps even more importantly, the real beauty in the design in both houses and programs are the things that are invisible to the uninitiated, but important or nice to have regardless. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65216",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4080/"
]
} |
65,281 | As I understand, in Java, stack memory holds primitives and method invocations and heap memory is used to store objects. Suppose I have a class class A {
int a ;
String b;
//getters and setters
} Where will the primitive a in class A be stored? Why does heap memory exist at all? Why can't we store everything on the stack? When the object gets garbage collected, is the stack associated with the objected destroyed? | The basic difference between stack and heap is the life cycle of the values. Stack values only exist within the scope of the function they are created in. Once it returns, they are discarded. Heap values however exist on the heap. They are created at some point in time, and destructed at another (either by GC or manually, depending on the language/runtime). Now Java only stores primitives on the stack. This keeps the stack small and helps keeping individual stack frames small, thus allowing more nested calls. Objects are created on the heap, and only references (which in turn are primitives) are passed around on the stack. So if you create an object, it is put on the heap, with all the variables that belong to it, so that it can persist after the function call returns. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65281",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
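To make the stack/heap answer above concrete, here is a minimal Java sketch; the class A mirrors the one in the question, while the method and variable names are invented for illustration.

```java
class A {
    int a;      // stored inside the A object, i.e. on the heap
    String b;   // the reference is part of the object; the String it points to is also on the heap
}

public class StackVsHeapDemo {
    static void demo() {
        int local = 42;     // primitive local variable: lives in this method's stack frame
        A obj = new A();    // 'obj' (a reference) is on the stack; the A instance,
                            // including its primitive field 'a', is allocated on the heap
        obj.a = local;      // copies the value into the heap-allocated object
    }   // when demo() returns, 'local' and 'obj' vanish with the stack frame;
        // the A instance becomes eligible for garbage collection once nothing references it

    public static void main(String[] args) {
        demo();
    }
}
```

Note that garbage collection does not destroy any stack: the frame is already gone when the method returns, and the collector only reclaims the unreachable heap object.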
65,294 | As an employee of a company, when you write code do you feel like you have an attachment to it? Do you feel that you have some ownership of the code? Or do you write it completely detached from it without any concern about what happens to it after you've moved onto something else? EDIT: I'm not talking about writing bad code and then running... | After 30 years as a contractor, it's mixed. It's all disposable. I've worked with hundreds of clients. I'll never see the code again. Why become attached? There's no sense of ownership. It's very visible. It's more expensive than in-house code, so it gets a lot of scrutiny. Since I won't be around to maintain it, it gets a very great deal of scrutiny. Code walkthroughs and handovers are very important.
There is some pride in craftsmanship. But no sense of ownership. My record is 17 years of production. 12 of those years with zero maintenance of any kind. I know because I got a call. They were revising their accounting systems and wanted to know how to replace the clever cost allocation algorithm I had built so many years ago. I looked at the code, and the files were unchanged since the last enhancement 12 years ago. (Not a bug-fix, AFAIK.) The next longest run --that I know about-- was 7 years of flawless operation. That, however, had a serious Y2K issue and required some rework to use file names that had 4-digit years. The internal algorithms were all correct, but the log files would have appeared in the wrong order. Again, I know it was flawless because the files hadn't been touched since the last release I had made. So, yes, there is a great deal of pride in craftsmanship. But no "ownership". It's their code, not mine. I only build it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65294",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19799/"
]
} |
65,384 | Programming languages often come with various bit operators (e.g. bitwise left- and right shift, bitwise AND, OR, XOR...). These don't get used though very much, or at least such has my experience been. They are sometimes used in programming challenges or interview questions, or the solution migh require them, e.g.: Without using any equality operator, create a function which returns true when two values are equal Without using a third variable, swap the value of two variables These then again, probably have few real world uses. I guess that they should be faster because they directly manipulate memory on a low level. Why are such found in most programming languages? Any real world use cases? | No, they have many real-world applications, and are fundamental operations on computers. They are used for Juggling blocks of bytes around that don't fit in the programming languages data types Switching encoding back and forth from big to little endian. Packing 4 6bit pieces of data into 3 bytes for some serial or usb connection Many image formats have differing amounts of bits assigned to each color channel. Anything involving IO pins in embedded applications Data compression, which often does not have data fit nice 8-bit boundaries.\ Hashing algorithms, CRC or other data integrity checks. Encryption Psuedorandom number generation Raid 5 uses bitwise XOR between volumes to compute parity. Tons more In fact, logically, all operations on a computer ultimately boil down to combinations of these low level bitwise operations, taking place within the electrical gates of the processor. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65384",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
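As a small illustration of the image-format and bit-packing uses listed in the answer above, here is a hedged Java sketch (method names are invented) that packs four 8-bit channels into a single int and pulls one back out with a shift and a mask.

```java
public class BitPacking {
    // Pack four 8-bit channels into one int (ARGB layout, as many image formats use).
    static int pack(int a, int r, int g, int b) {
        return (a & 0xFF) << 24 | (r & 0xFF) << 16 | (g & 0xFF) << 8 | (b & 0xFF);
    }

    // Unpack a single channel again: shift it down, then mask off the rest.
    static int red(int argb) {
        return (argb >>> 16) & 0xFF;
    }

    public static void main(String[] args) {
        int pixel = pack(255, 200, 100, 50);
        System.out.println(red(pixel));   // prints 200
    }
}
```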
65,414 | I have worked extensively in developing web applications using PHP and ASP.NET, but one of the questions that I'm constantly asked by customers is whether to move forward with a php website or an asp.net website. So naturally the first thing that comes to mind is to answer the question like this: PHP is open-source and ASP.NET is from Microsoft. Usually after something like that is said the customer has a blank look on there face. Apparently the fact that one is open source and the other isn't doesn't really faze them. And for good reason, because when I first heard it, it really doesn't tell me much. I know from working with both that each have their differences when it comes to developing websites. My question is what are differences between ASP.NET and PHP as far as Features Security Extendability Frameworks Average Development Time I am trying to compile a list of facts to be able to compare with the customer so that an informed choice on the appropriate development platform can be made. | Features , Security , and Extendability are going to be more or less the same. What can be done with PHP can be done with ASP.NET. Frameworks — Again, when it comes to features of frameworks, it will be more or less the same. However, being more specific than the language itself, you'll want to consider: What your developers are most comfortable with. Knowledge = efficiency . On a project-by-project basis, one framework in one language might be a better natural fit than a framework in another. Being more specific than the language itself means a framework cannot help but be well-suited to some tasks and less-well suited to others. Average Development Time — Your average development time for a very small project might be better with PHP since web hosts are so easy to find and dev machines so easy to set up. However, with anything bigger, as long as you have good devs, or are already set up for either, it will probably be a wash. The main consideration you should make is what technology stack your client wants to be tied to going forward. Neither mixes well (easily) with the other. They may have developers who are familiar with one or the other. If your client likes the idea of being connected to Microsoft, then go with ASP.NET. Some clients will have more comfort regarding future support, upgrades, etc. with MS. If they like the idea of open source and Linux servers, go with PHP . This may interest some clients due to transferability of web hosts, free software, etc. And lastly, if they don't care, then go with what you are most comfortable with . There's not much to it beyond that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65414",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22218/"
]
} |
65,467 | What do people generally mean whenever you see XXX in a comment. Occasionally, I'll see a comment like this: # XXX - This widget really should frobulate the whatsit Of course, I can tell what the comment means, but what does the XXX generally mean? Is it saying "This is a hack" or maybe "Perhaps we should revisit this later"? Or is it saying something else entirely? | What XXX represents, depends on the author of the code. In general, it is used as a marker for code that requires attention. However, this web page states a somewhat different train of thought: XXX : used to flag something that is bogus but works FIXME : used to flag something that is bogus and broken I guess this further shows that its meaning is not well defined and is used differently. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65467",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1626/"
]
} |
65,477 | I understand the value of automated testing and use it wherever the problem is well-specified enough that I can come up with good test cases. I've noticed, though, that some people here and on StackOverflow emphasize testing only a unit, not its dependencies. Here I fail to see the benefit. Mocking/stubbing to avoid testing dependencies adds complexity to the tests. It adds artificial flexibility/decoupling requirements to your production code to support mocking. (I disagree with anyone who says this promotes good design. Writing extra code, introducing things like dependency injection frameworks, or otherwise adding complexity to your codebase to make things more flexible/pluggable/extensible/decoupled without a real use case is overengineering, not good design.) Secondly, testing dependencies means that critical low-level code that's used everywhere gets tested with inputs other than those that whoever wrote its tests explicitly thought of. I've found plenty of bugs in low-level functionality by running unit tests on high level functionality without mocking out the low-level functionality it depended on. Ideally these would have been found by the unit tests for the low-level functionality, but missed cases always happen. What's the other side to this? Is it really important that a unit test doesn't also test its dependencies? If so, why? I can understand the value of mocking external dependencies like databases, networks, web services, etc. I am referring to internal dependencies , i.e. other classes, static functions, etc. that don't have any direct external dependencies. | It's a matter of definition. A test with dependencies is an integration test, not a unit test. You should also have an integration test suite. The difference is that the integration test suite may be run in a different testing framework and probably not as part of the build because they take longer. For our product:
Our unit tests are run with each build, taking seconds.
A subset of our integration tests runs with each check-in, taking 10 minutes.
Our full integration suite is run each night, taking 4 hours. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65477",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1468/"
]
} |
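To illustrate the unit/integration split drawn in the answer above, here is a minimal JUnit 4 sketch. The class names (PriceService, TaxRateSource) are hypothetical; the point is only that the unit test swaps the dependency for a trivial stub, while an integration test would wire in the real implementation and live in the slower suite.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical production code: PriceService depends on a TaxRateSource.
interface TaxRateSource { double rateFor(String region); }

class PriceService {
    private final TaxRateSource rates;
    PriceService(TaxRateSource rates) { this.rates = rates; }
    double gross(double net, String region) { return net * (1 + rates.rateFor(region)); }
}

public class PriceServiceTest {
    // Unit test: the dependency is replaced by a stub, so only PriceService's own
    // logic is exercised and the test runs in milliseconds as part of every build.
    @Test
    public void addsTaxFromRateSource() {
        PriceService service = new PriceService(region -> 0.10);
        assertEquals(110.0, service.gross(100.0, "anywhere"), 1e-9);
    }
    // An integration test would construct the real TaxRateSource (database, web
    // service, ...) instead, and would typically run in the nightly suite.
}
```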
65,512 | Edit: How do you manage individual tasks when working on multiple languages In day to day task handling, how many different programming languages do you work with? Do you make a mental shift when working on each language? Do you Prioritize them and keep each task per language separately. Do you code in stages based on progress of each task switching seamlessly between languages/IDE/Environment Do you apply the same coding style/conventions on all the languages (not syntax)? Related: Is it normal for programmer to work on multiple projects simultaneously | Today I've written Java, Python, C++, and SQL (if it counts). And I've been at work for less than 2 hours. If you do this regularly, then the mental shift becomes negligible. It has nothing to do with multitasking. It's just like walking for a while, then driving a car, then driving a bike, then swimming. No problem, because they're sequential things. Regardless, the point is to complete one task before moving to the next. I tend to define tasks in terms of a concrete functionality, fix, or such. Often that task is accomplished with only one language, but it may require several of them. For example, when working with JNI , you'll typically make changes to both the Java and native sides in parallel. Some answers: Do you code in stages based on progress on each task switching seamlessly between languages/IDE/Environment It's important to be able to switch seamlessly between IDEs, editors, environments. Usually I keep them all open all the time. Do you apply the same coding style /conventions on all the languages(Not syntax)? If it's an interface over which two languages are talking, then yes - variable names and such must be similar. Otherwise, I try to apply the typical coding style of that language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65512",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19554/"
]
} |
65,705 | I've been a professional coder for a several years. The comments about my code have generally been the same: writes great code, well-tested, but could be faster . So how do I become a faster coder, without sacrificing quality? For the sake of this question, I'm going to limit the scope to C#, since that's primarily what I code (for fun) -- or Java, which is similar enough in many ways that matter. Things that I'm already doing: Write the minimal solution that will get the job done Write a slew of automated tests (prevents regressions) Write (and use) reusable libraries for all kinds of things Use well-known technologies where they work well (eg. Hibernate) Use design patterns where they fit into place (eg. Singleton) These are all great, but I don't feel like my speed is increasing over time. I do care, because if I can do something to increase my productivity (even by 10%), that's 10% faster than my competitors. (Not that I have any.) Besides which, I've consistently gotten this feeback from my managers -- whether it was small-scale Flash development or enterprise Java/C++ development. Edit: There seem to be a lot of questions about what I mean by fast, and how I know I'm slow. Let me clarify with some more details. I worked in small and medium-sized teams (5-50 people) in various companies over various projects and various technologies (Flash, ASP.NET, Java, C++). The observation of my managers (which they told me directly) is that I'm "slow." Part of this is because a significant number of my peers sacrificed quality for speed; they wrote code that was buggy, hard to read, hard to maintain, and difficult to write automated tests for. My code generally is well-documented, readable, and testable. At Oracle, I would consistently solve bugs slower than other team-members. I know this, because I would get comments to that effect; this means that other (yes, more senior and experienced) developers could do my work in less time than it took me, at nearly the same quality (readability, maintainability, and testability). Why? What am I missing? How can I get better at this? My end goal is simple: if I can make product X in 40 hours today, and I can improve myself somehow so that I can create the same product at 20, 30, or even 38 hours tomorrow, that's what I want to know -- how do I get there? What process can I use to continually improve? I had thought it was about reusing code, but that's not enough, it seems. | I like Jeff Atwood's approach to this which can be found here http://www.codinghorror.com/blog/2008/08/quantity-always-trumps-quality.html . Basically in the article he references a passage from the book Art & Fear by David Bayles and Ted Orland. The passage goes: The ceramics teacher announced on
opening day that he was dividing the class into two groups. All those on the left side of the studio, he said, would be graded solely on the quantity of work they produced, all those on the right solely on its quality. His procedure was simple: on the final day of class he would bring in his bathroom scales and weigh the work of the "quantity" group: fifty pounds of pots rated an "A", forty pounds a "B", and so on. Those being graded on "quality", however, needed to produce only one pot - albeit a perfect one - to get an "A". Well, came grading time and a curious fact emerged: the works of highest quality were all produced by the group being graded for quantity. It seems that while the "quantity" group was busily churning out piles of work - and learning from their mistakes - the "quality" group had sat theorizing about perfection, and in the end had little more to show for their efforts than grandiose theories and a pile of dead clay. Essentially, getting your hands dirty faster and more often improves your skills more than spending your time studying and theorizing about the "perfect" way to do it. My advice: keep practicing, keep up with technology, and study design. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65705",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19993/"
]
} |
65,742 | I'm working at a project in one of the world's top 3 IT consulting firms, and was told by a DBA that company best practice's state stored procedures are not a "best practice". This is so contrary to everything I've learned. Stored procedures give you code reuse, and encapsulation (two pillars of software development), security (you can grant/revoke permissions on an individual stored proc), protect you from SQL injection attacks, and also help with speed (although that DBA said that starting with SQL Server 2008 that even regular SQL queries are compiled if they are run enough times). We're developing a complex app using Agile software development methodology. Can anyone think of good reasons why they wouldn't want to use stored procs? My guess was that the DBAs didn't want to maintain those stored procs, but there seem to be way too many negatives to justify such a design decision. | In my experience working on very large projects, you have to be very clear on where business logic lives. If you allow an environment where individual developers can put business logic in the business object layer or in a stored procedure as they see fit, a large application becomes VERY difficult to understand and maintain. Stored procedures are great for speeding up certain DB operations. My architectural decision is to leave all logic in the business layer of the application and employ stored procedures in a targeted manner to improve performance where benchmarking indicates it is warranted. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65742",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22453/"
]
} |
65,845 | I need to reconcile these 2 facts: I don't feel comfortable working on Linux; I need to develop software for Linux. Some background: I have a 10+ years of programming experience on Windows (almost exclusively C/C++, but some .NET as well), I was a user of FreeBSD at home for about 3 years or so (then had to go back to Windows), and I've never had much luck with Linux. And now I have to develop software for Linux. I need a plan. On Windows, you can get away with just knowing a programming language, an API you're coding against, your IDE (VisualStudio) and some very basic tools for troubleshooting (Depends, ProcessExplorer, DebugView, WinDbg). Everything else comes naturally. On Linux, it's a very different story. How the hell would I know what DLL (sorry, Shared Object) would load, if I link to it from Firefox plugin? What's the Linux equivalent of inserting __asm int 3/DebugBreak() in the source and running the program, and then letting the OS call a debugger? Why the hell release builds use something, called appLoader, while debug builds work somehow different? Worst of all: how to provision Linux development environment? So, taking into account that hatred is usually associated with not knowing enough, what would you recommend? I'm ok with Emacs and GCC. I need to educate myself as a Linux admin/user, and I need to learn proper troubleshooting tools (strace is cool, btw), equivalents to the ones I mentioned above. Do I need to do Linux from Scratch? Or do I need to just read some books (I've read "UNIX programming environment" by Kernighan and "Advanced Programming..." by Stevens, but I need to learn something more practical)? Or do I need to have some Linux distro on my home computer? | You might find the article Dynamic Linking in Linux and Windows interesting which explains how each OS does dynamic linking. The article Shared Library Search Paths explains how the libraries are found. Also Static, Shared Dynamic and Loadable Linux Libraries is very good. A nice thing about Linux libraries is that they have better support for versioning and having several versions of a library around than Windows (AFAIK, I don't do Windows). See Library Interface Versioning in Solaris and Linux for that. These articles should really get you covered with libraries. The GDB is very mighty, a good introduction is probably RMS's gdb Tutorial . You might want to read up on conditional breakpoints. For equivalents to __asm(int 3) see the question Set breakpoint in C or C++ code programmatically for gdb on Linux . The book Advanced Unix Programming from Marc Rochkind is a must-read, IMHO. Has lots of examples and covers all POSIX/SUS topics in a very well explained way. It's the best book about this topic I've read so far. But to make your life easier I actually recommend to use a highlevel API that abstracts stuff for you, like Qt. Also makes writing cross-platform a lot easier. Doing Linux From Scratch helps you get an understanding on how a Linux system is composed, but I think it doesn't much improve your knowledge about it from a developers point of view. It does make you more comfortable with Linux though, as you learn what parts are there in a Linux environment (and partly also why ). A Linux won't be a big blackbox to you after working through Linux From Scratch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65845",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22407/"
]
} |
65,868 | I'm a big fan of 1tbs when it comes to c-like languages. Now that php got decent lambdas and closures, though, I'm not sure the style I'm using for them (fundamentally made up, has something to do with the jquery source code) is the most readable/standard option. What do you think? Somebody has some nice examples of 1tbs + lambdas and functions as parameters? Here's a sample of my code. function bold_search_terms($needle, $haystack) {
return str_replace(
$occurrencies = array_filter (
explode(" ",$haystack),
function ($var) use ($needle) {
return(levenshtein($var, $needle) < 3);
}
),
array_map(
function ($var) {
return "<strong>$var</strong>";
},
$occurrencies
),
$haystack
);
} | You might find the article Dynamic Linking in Linux and Windows interesting which explains how each OS does dynamic linking. The article Shared Library Search Paths explains how the libraries are found. Also Static, Shared Dynamic and Loadable Linux Libraries is very good. A nice thing about Linux libraries is that they have better support for versioning and having several versions of a library around than Windows (AFAIK, I don't do Windows). See Library Interface Versioning in Solaris and Linux for that. These articles should really get you covered with libraries. The GDB is very mighty, a good introduction is probably RMS's gdb Tutorial . You might want to read up on conditional breakpoints. For equivalents to __asm(int 3) see the question Set breakpoint in C or C++ code programmatically for gdb on Linux . The book Advanced Unix Programming from Marc Rochkind is a must-read, IMHO. Has lots of examples and covers all POSIX/SUS topics in a very well explained way. It's the best book about this topic I've read so far. But to make your life easier I actually recommend to use a highlevel API that abstracts stuff for you, like Qt. Also makes writing cross-platform a lot easier. Doing Linux From Scratch helps you get an understanding on how a Linux system is composed, but I think it doesn't much improve your knowledge about it from a developers point of view. It does make you more comfortable with Linux though, as you learn what parts are there in a Linux environment (and partly also why ). A Linux won't be a big blackbox to you after working through Linux From Scratch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/65868",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11146/"
]
} |
66,040 | I need to write documentation for several projects I worked on. I was wondering what, according to your experience, makes documentation useful and complete. What part should I include, how deep should I go in my explanation, etc? My target audience is developers. The purpose of the documentation is to make it easy to update or finish projects. | I like examples. If you have an API that performs a variety of foo operations on bar objects, include practical examples, not just a single line showing how to call the function. Also make sure you include somewhere a high-level "big picture" overview of whatever is being documented. It's great to know the different types of foo operations available, but it's also good to know why there are different variations, and some guides as to know when to use which variant. For some systems, a brief developer-centric user manual is also good. This is important if new developers don't even know to use the existing parts of the project. A setup guide for building and compiling is also very important if the setup is non-trivial (more than just add files to IDE project and click "compile"). This may include database connections, and server configuration. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66040",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22414/"
]
} |
66,138 | I noticed that as of PHP5, interfaces have been added to the language. However, since PHP is so loosely typed, it seems that most of the benefits of using interfaces is lost. Why is this included in the language? | The main advantage of interfaces in PHP is that classes can implement multiple interfaces. This allows you to group classes that share some functionality but do not necessarily share a parent class. Some examples might include caching, output, or accessing properties of the class in a certain way. In your code, you can check if a class implements a given interface instead of checking the class name. Then, your code will still work when new classes are added. PHP provides some predefined interfaces that may come in handy in various situations: http://php.net/manual/en/reserved.interfaces.php . EDIT - Adding an example If you have an interface named MyInterface and you're working with multiple objects of different classes that may or may not share some functionality, interfaces allow you to do something like this: // Assume $objects is an array of instances of various classes
foreach($objects as $obj) {
if($obj instanceof MyInterface) {
$obj->a();
$obj->b();
$obj->c();
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66138",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/190/"
]
} |
66,160 | I would like to hear what kind of design decisions you took and how did they backfire. Because of a bad design decision, I ended up having to support that bad decision forever (I also had a part in it). This made me realize that one single design mistake can haunt you forever. I want to learn from the more experienced people what kind of blunders have they experienced and what did they learn from them. I'm sure this will be a lot of help to other programmers by helping them to not repeat those decisions. Thanks for sharing your experience. | Ignoring YAGNI , again and again ... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66160",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
66,305 | I am developing an iphone app and working through some sample code I saw, which uses libz.dylib. I did some research and found nothing enlightening, other than libz.dylib could be used for zip files, but other frameworks are prefered... I'm dealing with a pdf sample, so why use libz.dylib? In general, what are dylib? And why all the versions? | A file ending in the extension .dylib is a dynamic library: it's a library that's loaded at runtime instead of at compile time. If you're familiar with DLLs from Windows or DSOs, it's more or less the same type of thing with a few twists. The Dynamic Library Programming Topics section of the Mac OS X Developer Library covers all the details about the format and what you should be aware of. libz.dylib is the dynamic library for Zlib , a general compression library. PDFs can (and usually do) use zlib to compress different aspects of the data contained within them, but accessing the PDF data at that level is pretty low-level, and higher-level libraries would abstract most of that type of stuff. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66305",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17948/"
]
} |
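As a rough illustration of what a general-purpose compression library like zlib provides, here is a small Java sketch using java.util.zip, whose Deflater and Inflater classes are built on zlib. The sample string is made up, and nothing here is specific to dylibs; it only shows the compress/decompress round trip that PDF streams typically rely on.

```java
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class ZlibRoundTrip {
    public static void main(String[] args) throws Exception {
        byte[] input = "stream content, e.g. the body of a PDF object".getBytes("UTF-8");

        // Compress with the zlib deflate algorithm.
        Deflater deflater = new Deflater();
        deflater.setInput(input);
        deflater.finish();
        byte[] compressed = new byte[input.length + 64];
        int compressedLen = deflater.deflate(compressed);
        deflater.end();

        // Decompress and confirm we get the original bytes back.
        Inflater inflater = new Inflater();
        inflater.setInput(compressed, 0, compressedLen);
        byte[] restored = new byte[input.length];
        inflater.inflate(restored);
        inflater.end();

        System.out.println(new String(restored, "UTF-8"));
    }
}
```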
66,438 | So I'm sitting down to a nice bowl of c# spaghetti, and need to add something or remove something... but I have challenges everywhere from functions passing arguments that doesn't make sense, someone who doesn't understand data structures abusing strings, redundant variables, some comments are red-hearings, internationalization is on a per-every-output-level, SQL doesn't use any kind of DBAL, database connections are left open everywhere... Are there any tools or techniques I can use to at least keep track of the "functional integrity" of the code (meaning my "improvements" don't break it), or a resource online with common "bad patterns" that explains a good way to transition code? I'm basically looking for a guidebook on how to spin straw into gold. Here's some samples from the same 500 line function: protected void DoSave(bool cIsPostBack) {
//ALWAYS a cPostBack
cIsPostBack = true;
SetPostBack("1");
string inCreate ="~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~";
parseValues = new string []{"","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","","",""};
if (!cIsPostBack) { //.......
//....
//....
if (!cIsPostBack) {
} else {
}
//....
//....
strHPhone = StringFormat(s1.Trim());
s1 = parseValues[18].Replace(encStr," ");
strWPhone = StringFormat(s1.Trim());
s1 = parseValues[11].Replace(encStr," ");
strWExt = StringFormat(s1.Trim());
s1 = parseValues[21].Replace(encStr," ");
strMPhone = StringFormat(s1.Trim());
s1 = parseValues[19].Replace(encStr," ");
//(hundreds of lines of this)
//....
//....
SQL = "...... lots of SQL .... ";
SqlCommand curCommand;
curCommand = new SqlCommand();
curCommand.Connection = conn1;
curCommand.CommandText = SQL;
try {
curCommand.ExecuteNonQuery();
} catch {}
//....
} I've never had to refactor something like this before, and I want to know if there's something like a guidebook or knowledgebase on how to do this sort of thing, finding common bad patterns and offering the best solutions to repair them. I don't want to just nuke it from orbit, | There are three books I would recommend : Refactoring: Improving the Design of Existing Code by Martin Fowler. That's the theory book. It`s worth reading as there are general recipes that can be applied to any code. But you may not find explicit examples of what you will encounter in c#. Working effectively with legacy code by Michael Feathers. It can feel a bit dated sometime, but the general idea of how to deal with code that is hard to maintain is well explained. (Michael Feathers actually recognizes the need for an update to this book, called "Brutal Refactoring" ) Professional Refactoring in C# & ASP.NET by Danijel Arsenovski. It is a really good specialized .net book, and has tons of pratical examples of refactoring. There is also a vb.net version that is as useful for anyone working with vb.net. It will guide you about tools that are available for automatic refactoring as well ( Resharper and DevExpress CodeRush Express ). All of this books emphase on placing a "security nest" before engaging in refactoring when possible. This means having tests that ensure that you don't break the original intent of the code you are refactoring. And always do one refactoring at a time . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66438",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1525/"
]
} |
66,480 | I work in a small company as a solo developer. I'm the only developer at the company in fact. I have several (relatively) large projects I've written and maintain regularly, and none of them have tests to support them. As I begin new projects I often wonder if I should try a TDD approach. It sounds like a good idea, but I honestly can never justify the extra work involved. I work hard to be forward-thinking in my design. I realize that certainly one day another developer will have to maintain my code, or at least troubleshoot it. I keep things as simple as possible and I comment and document things that would be difficult to grasp. And the fact is these projects aren't so big or complicated that a decent developer would struggle to comprehend them. A lot of the examples I've seen of tests get down to the minutiae, covering all facets of the code. Since I'm the only developer and I'm very close to the code in the entire project, it is much more efficient to follow a write-then-manually-test pattern. I also find requirements and features change frequently enough that maintaining tests would add a considerable amount of drag on a project. Time that could otherwise be spent solving the business needs. So I end up with the same conclusion each time. The return on investment is too low. I have occasionally setup a few tests to ensure I've written an algorithm correctly, like calculating the number of years someone has been at the company based on their hire date. But from a code-coverage standpoint I've covered about 1% of my code. In my situation, would you still find a way to make unit testing a regular practice, or am I justified in avoiding that overhead? UPDATE: A few things about my situation that I left out: My projects are all web applications. To cover all my code, I'd have to use automated UI tests, and that is an area where I still don't see a great benefit over manual testing. | A lot of the examples I've seen of tests get down to the minutiae, covering all facets of the code. So? You don't have to test everything . Just the relevant things. Since I'm the only developer and I'm very close to the code in the entire project, it is much more efficient to follow a write-then-manually-test pattern. That's actually false. It's not more efficient. It's really just a habit. What other solo developers do is write a sketch or outline, write the test cases and then fill in the outline with final code. That's very, very efficient. I also find requirements and features change frequently enough that maintaining tests would add a considerable amount of drag on a project. That's false, also. The tests are not the drag. The requirements changes are the drag. You have to fix the tests to reflect the requirements. Whether their minutiae, or high-level; written first or written last. The code's not done until the tests pass. That's the one universal truth of software. You can have a limited "here it is" acceptance test. Or you can have some unit tests. Or you can have both. But no matter what you do, there's always a test to demonstrate that the software works. I'd suggest that a little bit of formality and nice unit test tool suite makes that test a lot more useful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66480",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22500/"
]
} |
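As an example of the "just the relevant things" point in the answer above, the years-at-the-company calculation the questioner mentions is exactly the kind of small, pure function that is cheap to pin down with a couple of unit tests. A hypothetical JUnit 4 sketch (all names invented):

```java
import static org.junit.Assert.assertEquals;
import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import org.junit.Test;

class Tenure {
    // Whole years of service between the hire date and the given day.
    static long yearsAtCompany(LocalDate hireDate, LocalDate today) {
        return ChronoUnit.YEARS.between(hireDate, today);
    }
}

public class TenureTest {
    @Test
    public void countsOnlyCompletedYears() {
        assertEquals(4, Tenure.yearsAtCompany(LocalDate.of(2006, 3, 15),
                                              LocalDate.of(2011, 3, 14)));
    }

    @Test
    public void anniversaryDayCompletesTheYear() {
        assertEquals(5, Tenure.yearsAtCompany(LocalDate.of(2006, 3, 15),
                                              LocalDate.of(2011, 3, 15)));
    }
}
```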
66,523 | In your experience, what is a useful rule of thumb for how many lines of code are too many for one class in Java? To be clear, I know that number of lines is not even close to the real standard to use for what should be in a particular class and what shouldn't. Classes should be designed according to proper OOP philosophies (encapsulation, etc.) in mind. That said, a rule of thumb could provide a useful starting point for refactoring considerations (i.e. "Hmmm, this class has >n lines of code; it's probably unreadable and doing a lousy job of encapsulation, so I might want to see if it should be refactored at some point"). On the flip side, perhaps have you encountered examples of very large classes that still obeyed OOP design well and were readable and maintainable despite their length? Here's a related, non-duplicate question about lines per function . | Some interesting metrics: junit fitnesse testNG tam jdepend ant tomcat
----- -------- ------ --- ------- --- ------
max 500 498 1450 355 668 2168 5457
mean 64.0 77.6 62.7 95.3 128.8 215.9 261.6
min 4 6 4 10 20 3 12
sigma 75 76 110 78 129 261 369
files 90 632 1152 69 55 954 1468
total lines 5756 49063 72273 6575 7085 206001 384026 I use FitNesse as a benchmark because I had a lot to do with writing it. In FitNesse the average class is 77 lines long. None are longer than 498 lines. And the standard deviation is 76 lines. That means that the vast majority of classes are less than 150 lines. Even Tomcat, that has one class in excess of 5000 lines, has most classes less than 500 lines. Given this we can probably use 200 lines as a good guideline to stay below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66523",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11751/"
]
} |
66,565 | I'm sure I am not alone in this issue so I wanted to see what others do to handle this problem. Whenever I have to reinstall my IDE, one of the first things I do is go out and look for addons. It starts out just grabbing whatever I plan on primarily doing but ends up with getting anything that looks interesting or may find useful in the future. "But it's addtional functionality for free! " In the end I just end up with a dev environment with a lot of bloat. Also happens over time so you reinstall only to start the cycle over again... Is there a good way of handling this? | Some interesting metrics: junit fitnesse testNG tam jdepend ant tomcat
----- -------- ------ --- ------- --- ------
max 500 498 1450 355 668 2168 5457
mean 64.0 77.6 62.7 95.3 128.8 215.9 261.6
min 4 6 4 10 20 3 12
sigma 75 76 110 78 129 261 369
files 90 632 1152 69 55 954 1468
total lines 5756 49063 72273 6575 7085 206001 384026 I use FitNesse as a benchmark because I had a lot to do with writing it. In FitNesse the average class is 77 lines long. None are longer than 498 lines. And the standard deviation is 76 lines. That means that the vast majority of classes are less than 150 lines. Even Tomcat, that has one class in excess of 5000 lines, has most classes less than 500 lines. Given this we can probably use 200 lines as a good guideline to stay below. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66565",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12290/"
]
} |
66,590 | Why has Python been backed by google and become so rapidly popular and Lua has not? Do you know why Lua has stayed in background? | I really love Lua, but it does have some real limitations, and as others have mentioned they derive mostly from Lua's origins as a configuration file language and then later as an embedded scripting language. Because of the goal to keep Lua small, there exists only a very tiny standard library, that has only bare bones functionality. This has lead to an unfortunate culture in Lua circles where Lua developers like to re-implement the functionality offered by many other language's standard libraries themselves rather than working collectively on a universally accepted set of core libraries. Things like multi-threading, regular expressions, platform independent file access methods, and even bit operations (until 5.2) ere all "not included" since they would make Lua much larger and slower. Sure you can get libraries do so these things - but then those have independent maintainers and quality levels. Don't get me wrong. I love Lua for the same reasons I have just listed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66590",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17599/"
]
} |
66,614 | Programming isn't alien to me. I first starting doing markup (HTML, now please don't laugh at me) when I was 12 and a little bit of BASIC when I was 13 (I knew much about Flowcharts, Pseudocodes at this point), but then I was admonished into Biology in high school and hence missed out on "real" programming knowledge of languages such as C, Java, etcetera. I took up CS for my UG B.E. (similar to BS, but way more theoretical). I learnt C & C++ (to a lesser extent) on my own (my prof was a total pain and the class was filled with code-jocks (who had already learnt it in school, and hence paid no attention to class and didn't let lesser mortals like me to pay any attention to class either)) and could whip up an awesome addition or multiplication program (ones which now even kinder-gardeners' whip up with way more finesse) and a piss-poor knowledge of Java (which has even grown rusty in recent times). My main problem is that I've always felt inadequate and strangled by my limited programming skills and belittled by the code-jocks (believe me, I've come across this site ages ago, but could just now build up the courage to actually post a question) and have been at times even depressed over said inability. Most people say that Programming isn't necessarily about the language but the state of mind that the person has and the techniques they employ to solve problems/issues. I agree with such sentiments, but can I ever acquire such a "state of mind", and if such how should I approach "Programming/Coding", and if there are any set ways and steps one most go through to attain the "Zen of Coding". How do I do so? Also, it wouldn't hurt if some Saint wanted to mentor this downtrodden piece of $#!^. P.S. I would forever be grateful to any person who considers me worth their time, and as a bonus would name my first piece of Software I ship after them. (If I ever get to ship one, i.e.,) TL;DR: Never really learnt "Programming/Coding", can't solve problems even if I try to. Help me! | I'd argue the best way is to simply spend more time on it (search for the 10000 hour rule). Find something you want to get done and set out to get it done. Pick something that's beyond your current ability, but not so far out that you won't be able to finish in a reasonable amount of time. If you really enjoy it, you'll find yourself repeating this until you're really good at it. If you don't enjoy it, then perhaps it's not the right thing for you. Try to challenge yourself though, you'll probably enjoy it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66614",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22547/"
]
} |
66,708 | Where I work we practice scrum-driven agile with 3-week iterations. Yes, it'd be nice if the iterations were shorter, but changing that isn't an option at the moment. At the end of the iteration, I usually find that the last day goes very slowly. The actual work has already been completed and accepted. There are a couple meetings (the retrospective and the next iteration planning), but other than that not much is going on. What sort of techniques can we as a team use to maintain momentum through the last day? Should we address defects? Get an early start on the next iteration's work anyway? Something else? | I've been struggling with the same question a bit lately. We are starting on the next iteration, but I feel that this removes the satisfaction of an iteration well done. I am thinking about the option of leaving it up to the developers, with the caveat "as long as the intent is to benefit the company." Examples: Spend the day learning something Spend it on an innovation time project Spend it tidying up that annoying piece of code you never get around to refactoring Have a good run through the app with a view to UX (which we never seem to find time to do otherwise) Whatever motivates the programmer, giving them an incentive to deliver the release on time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66708",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/226/"
]
} |
66,755 | Many programmers, upon first encountering Python, are immediately put off by the significance of whitespace. I've heard a variety of reasons that this is inconvenient, but I've never heard a complaint from a Python programmer. Of course, I haven't met a lot of Python programmers, as I have spent my career in the Java world. So my question is for those of you that have participated in a large Python project (more than 3 months, with Python being the primary language used): Did you find the whitespace issue to be inconvenient and continually annoying? Or was it a non-issue once you got in the flow? I'm not asking the question because I'm for or against Python, or for or against its use of whitespace. I happen to like Python, but I've never used it for anything big. Please don't provide speculations if you are not experienced in Python. | I love Python's significant whitespace. To me it's the perfect example of DRY at a syntactic level. The human-readable way to indicate where a block of code begins and ends is with indentation. If you want your code to be readable, you have to indent it regardless of language. It's silly to make the programmer specify this information twice, once for the compiler/interpreter and once for humans. Furthermore, indentation in C-like languages is similar to a comment: It's intended to improve understandability but its meaning is not enforced by the compiler/interpreter and it can get out of sync with the real meaning (where the braces are) very easily, obfuscating rather than clarifying. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/66755",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2329/"
]
} |
67,065 | A few days ago, StackExchange member Anto inquired about valid uses for bit-wise operators. I stated that shifting was faster than multiplying and dividing integers by powers of two. StackExchange member Daemin countered by stating that right-shifting presented problems with negative numbers. At that point, I had never really thought about using the shift operators with signed integers. I primarily used this technique in low-level software development; therefore, I always used unsigned integers. C performs logical shifts on unsigned integers. No attention is paid to the sign bit when performing a logical shift right. Vacated bits are filled with zeros. However, C performs an arithmetic shift operation when shifting a signed integer right. Vacated bits are filled with the sign bit. This difference causes a negative value to be rounded toward infinity instead of being truncated toward zero, which is a different behavior than signed integer division. A few minutes of thought resulted in a first-order solution. The solution conditionally converts negative values to positive values before shifting. A value is conditionally converted back to its negative form after the shift operation has been performed. int a = -5;
int n = 1;
int negative = a < 0;
a = negative ? -a : a;
a >>= n;
a = negative ? -a : a; The problem with this solution is that conditional assignment statements are usually translated to at least one jump instruction, and jump instructions can be expensive on processors that do not decode both instruction paths. Having to re-prime an instruction pipeline twice makes a good dent in any performance gain obtained by shifting over dividing. With the above said, I woke up on Saturday with the answer to the conditional assignment problem. The rounding problem that we experience when performing an arithmetic shift operation only occurs when working with two's complement representation. It does not occur with one's complement representation. The solution to the problem involves converting a two's complement value to a one's complement value before performing the shift operation. We then have to convert the one's complement value back to a two's complement value. Surprisingly, we can perform this set of operations without conditionally converting negative values before performing the shift operation. int a = -5;
int n = 1;
register int sign = (a >> INT_SIZE_MINUS_1) & 1;
a = ((a - sign) >> n) + sign; A two's complement negative value is converted to a one's complement negative value by subtracting one. On the flip side, a one's complement negative value is converted to a two's complement negative value by adding one. The code listed above works because the sign bit is used to convert from two's complement to one's complement and vice versa. Only negative values will have their sign bits set; therefore, the variable sign will equal zero when a is positive. With the above said, can you think of other bit-wise hacks like the one above that have made it into your bag of tricks? What is your favorite bit-wise hack? I am always looking for new performance-oriented bit-wise hacks. | I love Gosper's hack (HAKMEM #175), a very cunning way of taking a number and getting the next number with the same number of bits set. It's useful, for example, in generating combinations of k items from n thusly: int set = (1 << k) - 1;
int limit = (1 << n);
while (set < limit) {
doStuff(set);
// Gosper's hack:
int c = set & -set;
int r = set + c;
set = (((r^set) >>> 2) / c) | r;
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67065",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17687/"
]
} |
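A quick way to check the sign-correction trick from question 67,065 above is to compare it against ordinary signed division. The hedged Java sketch below does that; Java is convenient here because >> (arithmetic) and >>> (logical) make the distinction explicit. The extra parentheses around the shift matter because + binds tighter than >>, and overflow at Integer.MIN_VALUE is ignored.

```java
public class ShiftDivideCheck {
    // Extract the sign bit, pre-subtract it (two's -> one's complement for negatives),
    // shift arithmetically, then add it back. The result rounds toward zero,
    // matching integer division by a power of two.
    static int shiftDiv(int a, int n) {
        int sign = a >>> 31;                // 1 for negative a, 0 otherwise
        return ((a - sign) >> n) + sign;
    }

    public static void main(String[] args) {
        int[] samples = { -8, -7, -5, -1, 0, 1, 5, 7, 8 };
        for (int a : samples) {
            System.out.printf("%3d: shiftDiv=%3d  a/2=%3d%n", a, shiftDiv(a, 1), a / 2);
        }
    }
}
```

Every printed line shows the two results agreeing, which is the branch-free behaviour the question set out to achieve.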
67,168 | If a prospective employer told you they "outsourced bug fixing because developers hate fixing bugs", What would you think? What might be your concerns? | Fixing our own bugs actually makes us a better developer . And it's a pretty enjoyable moment for me. Especially when they are nicely reported. If they don't like to fix bugs, the problem lies elsewhere. I suspect the problem is how bugs are perceived by the management or worse, by bad design decisions and/or no (unit) testing, causing bug fixing painful. Outsourcing bug fixing will probably make thing worse. Developers may be tempted to do reduce quality. Who cares? Some offshore guys are there to clean up their mess. Until the offshore guys replace the onshore guys. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67168",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20799/"
]
} |
67,192 | I have a bug in my application that I'm building. I asked a question on S.O. and one of the users asked me to post or send all the code to him so he could look at it. I totally understand the request. It is valid and understandable. However, I'm having doubts on if I should. Obviously, I give him/her the keys to the kingdom and I will not have any recourse if he/she would do something malicious. I also want to add that I mean no-disrespect to the user on SO that offered their help. I'm just airing a concern. I do want to have my bug fixed but there is no guarantee that this person would be able to fix it. Should I release the entire source code and hope for the best? Or keep it and try to figure it out on my own? What would you do? | Build an SSCCE (short, self contained, correct example). If the bug disappears when you remove some of the extra details for the SSCCE, then you found it. Otherwise you will have an SSCCE that you give or post that ideally eliminates the code that you are concerned about sharing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67192",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7456/"
]
} |
67,594 | I have only heard the term "design pattern" being used for object oriented code, and the GoF patterns include only OOP design patterns, but design patterns are elegant solutions for commonly occurring programming problems, right? There is nothing in there saying that they must be limited to OOP, is there? I would like to see some examples of design patterns outside the realm of object oriented programming. Do you have any? Do such patterns even exist (no book like the GoF book needs to have been written; it is enough that they are actually used)? They can be specific to some programming language(s), but general (paradigm-level) patterns are preferred, from paradigms other than the object oriented one. | Take a look at the Linux Kernel Design Patterns series. The articles deal with a non-object-oriented language (C) and I believe that they are well written: Linux Kernel Design Patterns - Part 1 Linux kernel Design patterns - Part 2 Linux kernel Design patterns - Part 3 Ghosts of Unix Past: a historical search for design patterns Ghosts of Unix past, part 2: Conflated designs Ghosts of Unix past, part 3: Unfixable designs Ghosts of Unix past, part 4: High-maintenance designs | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67594",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
67,707 | I occasionally run into methods where a developer chose to return something which isn't critical to the function. I mean, when looking at the code, it apparently works just as nicely as a void and, after a moment of thought, I ask "Why?" Does this sound familiar? Sometimes I would agree that most often it is better to return something like a bool or int, rather than just do a void. I'm not sure though, in the big picture, about the pros and cons. Depending on the situation, returning an int can make the caller aware of the number of rows or objects affected by the method (e.g., 5 records saved to MSSQL). If a method like "InsertSomething" returns a boolean, I can have the method designed to return true if success, else false. The caller can choose to act or not on that information. On the other hand, may it lead to a less clear purpose of a method call? Bad coding often forces me to double-check the method content. If it returns something, it tells you that the method is of a type where you have to do something with the returned result. Another issue would be, if the method implementation is unknown to you, what did the developer decide to return that isn't function critical? Of course you can comment it. The return value has to be processed, when the processing could have ended at the closing bracket of the method. What happens under the hood? Did the called method get false because of a thrown error? Or did it return false due to the evaluated result? What are your experiences with this? How would you act on this? | In the case of a bool return value to indicate the success or failure of a method, I prefer the Try-prefix paradigm used in various .NET methods. For example, a void InsertRow() method could throw an exception if there already exists a row with the same key. The question is, is it reasonable to assume that the caller ensures their row is unique before calling InsertRow? If the answer is no, then I'd also provide a bool TryInsertRow() which returns false if the row already exists. In other cases, such as db connectivity errors, TryInsertRow could still throw an exception, assuming that maintaining db connectivity is the caller's responsibility. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67707",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20636/"
]
} |
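To make the Try-prefix idea from the answer above concrete, here is a hedged C# sketch; the in-memory HashSet stands in for a real table, and the names (RowStore, InsertRow, TryInsertRow) are illustrative assumptions, not an actual data-access API.

using System;
using System.Collections.Generic;

class RowStore
{
    private readonly HashSet<string> keys = new HashSet<string>();

    // Throws on a duplicate key: suitable when the caller guarantees uniqueness up front.
    public void InsertRow(string key)
    {
        if (!keys.Add(key))
            throw new InvalidOperationException($"A row with key '{key}' already exists.");
    }

    // Returns false for the expected "duplicate key" case instead of throwing.
    // Unexpected failures (e.g. a lost connection in a real store) would still throw.
    public bool TryInsertRow(string key)
    {
        return keys.Add(key);
    }
}

class TryPatternDemo
{
    static void Main()
    {
        var store = new RowStore();
        Console.WriteLine(store.TryInsertRow("42"));   // True
        Console.WriteLine(store.TryInsertRow("42"));   // False, no exception
        store.InsertRow("42");                         // throws InvalidOperationException
    }
}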
67,717 | I've finished a project in PHP of 13000+ lines in Procedural Style [because I'm very familiar with that, though I know OOP], and the project is running perfectly. But should I convert it to OOP? [ because the world is busy with OOP ] My code doesn't need any of the features of OOP [encapsulation, inheritance basically...]! So what should I do? And what kind of benefit will I get if I convert it to OOP? | "My code doesn't need any of the features of OOP" You have answered your own question - if you don't need OOP in this case and your project is working then don't convert it. However, you should look at using OOP for your next project - but only if it's appropriate. There's nothing intrinsically wrong with procedural programming - as long as it's used in the appropriate place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67717",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22837/"
]
} |
67,750 | Someone told me that easiest way to 'destroy' a programmer is to prevent them from programming for a month or so. Is that correct? What should I do to make sure I stay in practice if I'm not in a position to program as part of my job? | I think the person you refer to may have mixed two different levels of knowledge/ability . The first is general problem solving ability. This is not going to fade away , as others have explained with good examples. I myself had two breaks in my career as a software developer, once for a year, and the other was close to a year, during which I did practically no programming. I could come back to the profession without major problems after each of these. However, as Chris put it, my knowledge of specific language/API features became "rusty". That is the other level, which is more short term knowledge, and it can indeed fade away fairly quickly (although IMHO not in a month - you would need several months to notice the difference). Note though that these things often have a shorter half-life anyway - APIs change, preferred language idioms get obsolete and new ways come along , etc. Let's say you have several years of experience in language A, but nowadays you are programming exclusively in language B. Your skills in language A will inevitably get rusty over time. However, you will be able to dust them off fairly quickly. As for the best way to "destroy" a programmer, I am sad to say there are well known, proven and (unfortunately to our industry) widely practiced methods: always push him/her to deliver results to unrealistic schedules demand regular unpaid overtime burden him/her with bureaucracy, e.g. demand that (s)he get approval from your boss' boss' boss for, and/or fill out lengthy documents before/after each code change reject any process/quality improvement idea of him/her with whatever excuse you can find (e.g. "if it ain't broken, don't fix it", or "this is just the latest fad, no need to take notice") initiate a personal bonus system within the team, overtly stating that the team has a fixed amount of total bonus allotted, so team members must compete against each other for it micromanage him/her, retaining the right to make every technical decision yourself by authority give him/her inadequate tools for the job (old PC, small monitor) cram him/her into tiny and noisy open office spaces, preferably together with totally unrelated but noisy people (e.g. sales/marketing) If practiced consistently, in a few year's time these are almost guaranteed to make your developer(s) burn out, killing any desire and enthusiasm in them towards programming. These are some that come to my mind - unfortunately there are more :-((( | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67750",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11158/"
]
} |
67,852 | I'm a new programmer and want to ask senior programmers (programmers who have some experience in the real world). I do my work and after coding, my project gets completed but honestly speaking I don't remember the code, classes and frameworks name and their properties. Sometimes even I doubt myself that did I made this? Is this normal with all programmers or am I the silliest programmer who couldn't remember the code and classes/properties names? Edit: I think many of programmers are getting me wrong here. I said I forget frameworks names, classes' names, property names but I start remembering my own code once I start working on it again. My question is do you remember syntax and classes/property etc. names? | In all honesty, I don't know how one could be a long term developer without the ability to "forget" code you've worked on. The amount of projects and resulting code would eventually lead to pure information overload. However, I see this situation as a supporting argument for clean, logical design. There will come a time when you have to support your own code. If you've chosen strong and logical coding conventions, your familiarization time will be significantly reduced. Additionally, this would theoretically reduce the time required to perform the actual maintenance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67852",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20971/"
]
} |
67,923 | What methods seem to work best to coax requirements out of non-tech business people? I am working with a team that’s trying to get a spec together for a project. Every time we have met and it comes down to expectations for the next meeting, we ask for the business people to bring back their requirements. They usually respond something like this: “Well, do you think you guys could whip up a prototype so we can see what we like next week…you know, not with any data or anything since it’s a prototype, just the functionality.” This is a 6 month plus project so that is obviously infeasible (we would have to develop the entire thing!), and we don’t even know what to prototype without some sort of spec. Frankly, I think like most people, they have some idea of what they want, they just are not thinking about it in the focused sort of way necessary to gather true requirements. As an alternative to simply telling them, “give us what you want or we can’t/won’t do any work” (we do want them to be happy with the results), are there ways to help them decide what they want? For example, we could tell them: “Draw out some screens (in Powerpoint, on a napkin, whatever) that show the UI you would like with all of the data you want to see and a description of the functionality in the margins. From this, we will polish it up and build the backend based on this set of behavior requirements.” OR “Don’t worry about how it will look right now (the number 1 hang up). Just give us a list of all the data you want about each thing the program keeps track of. So for “Customer” you might list: name, address, phone number, orders, etc. It does not have to be a perfect database structure, but we can work something out from this and get an idea of what you are looking for” Do either of these alternative approaches to get business people focused on what they want make sense? Are there alternatives that you have seen in action? | I have spent the last 3 months in an exhaustive - and exhausting - requirements-gathering phase of a major project and have learned, above all else, that there is no one-size-fits-all solution . There is no process, no secret, that will work in every case. Requirements analysis is a genuine skill, and just when you think you've finally figured it all out, you get exposed to a totally different group of people and have to throw everything you know out the window. Several lessons that I've learned: Different stakeholders think at different levels of abstraction. It is easy to say "talk at a business level, not technical", but it's not necessarily that easy to do . The system you're designing is an elephant and your stakeholders are the blind men examining it . Some people are so deeply immersed in process and routine that they don't even realize that there is a business. Others may work at the level of abstraction you want but be prone to making exaggerated or even false claims, or engage in wishful thinking. Unfortunately, you simply have to get to know all of the individuals as individuals and understand how they think, learn how to interpret the things they say, and even decide what to ignore. Divide and Conquer If you don't want something done, send it to a committee. Don't meet with committees. Keep those meetings as small as possible. YMMV, but in my experience, the ideal size is 3-4 people (including yourself) for open sessions and 2-3 people for closed sessions (i.e. when you need a specific question answered). I try to meet with people who have similar functions in the business. 
There's really very little to gain and very much to lose from tossing the marketing folks in the room with the bean counters. Seek out the people who are experts on one subject and get them to talk about that subject. A meeting without preparation is a meeting without purpose. A couple of other answers/comments have made reference to the straw-man technique, which is an excellent one for those troublesome folks that you just can't seem to get any answers out of. But don't rely on straw-men too much, or else people will start to feel like you're railroading them. You have to gently nudge people in the right direction and let them come up with the specifics themselves, so that they feel like they own them (and in a sense, they do own them). What you do need to have is some kind of mental model of how you think the business works, and how the system should work. You need to become a domain expert , even if you aren't an expert on the specific company in question. Do as much research as you can on your business, their competitors, existing systems on the market, and anything else that might even be remotely related. Once at that point, I've found it most effective to work with high-level constructs, such as Use Cases, which tend to be agreeable to everybody, but it's still critical to ask specific questions. If you start off with "How do you bill your customers?" , you're in for a very long meeting. Ask questions that imply a process instead of belting out the process at the get-go: What are the line items? How are they calculated? How often do they change? How many different kinds of sales or contracts are there? Where do they get printed? You get the idea. If you miss a step, somebody will usually tell you. If nobody complains, then give yourself a pat on the back, because you've just implicitly confirmed the process. Defer off-topic discussions . As a requirements analyst you're also playing the role of facilitator, and unless you really enjoy spending all your time in meetings, you need to find a way to keep things on track. Ironically, this issue becomes most pernicious when you finally do get people talking. If you're not careful, it can derail the train that you spent so much time laying the tracks for. However - and I learned this the hard way a long time ago - you can't just tell people that an issue is irrelevant . It's obviously relevant to them , otherwise they wouldn't be talking about it. Your job is to get people saying "yes" as much as possible and putting up a barrier like that just knocks you into "no" territory. This is a delicate balance that many people are able to maintain with "action items" - basically a generic queue of discussions that you've promised to come back to sometime , normally tagged with the names of those stakeholders who thought it was really important. This isn't just for diplomacy's sake - it's also a valuable tool for helping you remember what went on during the meetings, and who to talk to if you need clarification later on. Different analysts handle this in different ways; some like the very public whiteboard or flip-chart log, others silently tap it into their laptops and gently segue into other topics. Whatever you feel comfortable with. You need an agenda This is probably true for almost any kind of meeting but it's doubly true for requirements meetings. As the discussions drag on, people's minds start to wander off and they start wondering when you're going to get to the things they really care about. 
Having an agenda provides some structure and also helps you to determine, as mentioned above, when you need to defer a discussion that's getting off-topic. Don't walk in there without a clear idea of exactly what it is that you want to cover and when . Without that, you have no way to evaluate your own progress, and the users will hate you for always running long (assuming they don't already hate you for other reasons). Mock It If you use PowerPoint or Visio as a mock-up tool, you're going to suffer from the issue of it looking too polished . It's almost an uncanny valley of user interfaces; people will feel comfortable with napkin drawings (or computer-generated drawings that look like napkin drawings, using a tool like Balsamiq or Sketchflow ), because they know it's not the real thing - same reason people are able to watch cartoon characters. But the more it starts to look like a real UI, the more people will want to pick and paw at it, and the more time they'll spend arguing about details that are ultimately insignificant. So definitely do mock ups to test your understanding of the requirements ( after the initial analysis stages) - they're a great way to get very quick and detailed feedback - but keep them lo-fi and don't rush into mocking until you're pretty sure that you're seeing eye-to-eye with your users. Keep in mind that a mock up is not a deliverable , it is a tool to aid in understanding. Just as you would not expect to be held captive to your mock when doing the UI design, you can't assume that the design is OK simply because they gave your mock-up the thumbs-up. I've seen mocks used as a crutch, or worse, an excuse to bypass the requirements entirely; make sure you're not doing that. Go back and turn that mock into a real set of requirements. Be patient. This is hard for a lot of programmers to believe, but for most non-trivial projects, you can't just sit down one time and hammer out a complete functional spec. I'm not just talking about patience during a single meeting; requirements analysis is iterative in the same way that code is. Group A says something and then Group B says something that totally contradicts what you heard from Group A. Then Group A explains the inconsistency and it turns out to be something that Group C forgot to mention. Repeat 500 times and you have something roughly resembling truth . Unless you're developing some tiny CRUD app (in which case why bother with requirements at all?) then don't expect to get everything you need in one meeting, or two, or five. You're going to be listening a lot, and talking a lot, and repeating yourself a lot. Which isn't a terrible thing, mind you; it's a chance to build some rapport with the people who are inevitably going to be signing off on your deliverable. Don't be afraid to change your technique or improvise. Different aspects of a project may actually call for different analysis techniques. In some cases classical UML (Use Case / Activity diagram) works great. In other cases, you might start out with business KSIs, or brainstorm with a mind map, or dive straight into mockups despite my earlier warning. The bottom line is that you need to understand the domain yourself, and do your homework before you waste anyone else's time. If you know that a particular department or component only has one use case, but it's an insanely complicated one, then skip the use case analysis and start talking about workflows or data flows. 
If you wouldn't use the same tool for every part of an app implementation, then why would you use the same tool for every part of the requirements? Keep your ear to the ground. Of all the hints and tips I've read for requirements analysis, this is probably the one that's most frequently overlooked. I honestly think I've learned more eavesdropping on and occasionally crashing water-cooler conversations than I have from scheduled meetings. If you're accustomed to working in isolation, try to get a spot around where the action is so that you can hear the chatter. If you can't, then just make frequent rounds, to the kitchen or the bathroom or wherever. You'll find out all kinds of interesting things about how the business really operates from listening to what people brag or complain about during their coffee and smoke breaks. Finally, read between the lines . One of my biggest mistakes in the past was being so focused on the end result that I didn't take the time to actually hear what people were saying . Sometimes - a lot of the time - it might sound like people are blathering on about nothing or harping about some procedure that sounds utterly pointless to you, but if you really concentrate on what they're saying, you'll realize that there really is a requirement buried in there - or several. As corny and insipid as it sounds, the Five Whys is a really useful technique here. Whenever you have that knee-jerk "that's stupid" reaction (not that you would ever say it out loud), stop yourself, and turn it into a question: Why? Why does this information get retyped four times, then printed, photocopied, scanned, printed again, pinned to a particle board, shot with a digital camera and finally e-mailed to the sales manager? There is a reason , and they may not know what it is, but it's your job to find out. Good luck with that. ;) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67923",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
67,960 | Do you think that its a good idea when a junior programmer needs help to always jump in and try to educate them? Or will they ignore all the "teaching to fish" advice you give them and just focus on the "fish" you just brought them? Do you let them always figure things out on their own, knowing that mistakes are the best way to learn? Or are you afraid that they'll get so burnt and frustrated that they'll lose the desire to come up to speed? When do you choose when to help someone more junior then you and when to stand back and let them learn through their mistakes? | At one of my jobs, I was both learning and teaching(because I of course don't know everything, but I know more than some) Do not at all costs lay your hands on the keyboard. This is frustrating both for you, and the person you are teaching. Even if you give them step by step instructions, when you put your hands on the keyboard it's the equivalent of giving them a piece of code and saying "this fixes it". In what I've learned: Don't type the code for them Try to teach on their level(if they understand the syntax, don't explain it to them. This will just bore them; instead teach the classes/functions used) Don't ignore them or say "figure it out on your own". What you'll end up with is them coming to you later except for now the 3 lines of code they had problems with, is now 50 lines spread across 8 files trying to work around the problem. Teach them to learn on their own. One of the best ways is tell them to use stackoverflow. I sometimes, even knowing the answer, if they asked me. I'd say "well, I'm going to ask this question on stackoverflow". and I'd give them a link to the question. Take a coffee break and look at some different code. When they came back asking "so how do I fix that problem" just tell them to look up their question on SO(using the URL you gave them). I've found that the masses are usually a better teacher than I am. When they copy and paste code from the internet and ask why it doesn't work, ask them to explain what each line does. If they can't, then tell them to research the functions/classes used. If needed, provide explanations for the class and functions Conduct code reviews to make sure they are solving the problem, not just working around it for it to show up later. Be nice. When someone is just starting out in your codebase with no documentation, don't just tell them to read the source code. Give a summarized high level overview of the function in question. Or, better yet, start writing documentation :) Be humble. Don't BS about the problem. If you don't know it, say you don't and help them look it up. Many times, just knowing the domain enough to know what keywords to search for is enough help for you to give them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67960",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/980/"
]
} |
67,982 | Is it ever a good idea to hardcode values into our applications? Or is it always the right thing to call these types of values dynamically in case they need to change? | Yes, but do make it obvious. Do: use constants; use a descriptive variable name. Don't: have any magic numbers floating around the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/67982",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22218/"
]
} |
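A small C# illustration of the answer's advice; the names and the 0.21 rate are invented for the example, not taken from the original.

static class Pricing
{
    // Magic number: hard to read, and the same literal tends to get scattered around the code base.
    public static decimal PriceWithTax(decimal price) => price * 1.21m;

    // Named constant: obvious at the call site, and changeable in exactly one place.
    public const decimal StandardVatRate = 0.21m;
    public static decimal PriceWithNamedTax(decimal price) => price * (1 + StandardVatRate);
}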
68,058 | When designing classes to hold your data model I've read it can be useful to create immutable objects but at what point does the burden of constructor parameter lists and deep copies become too much and you have to abandon the immutable restriction? For example, here is an immutable class to represent a named thing (I'm using C# syntax but the principle applies to all OO languages) class NamedThing
{
private string _name;
public NamedThing(string name)
{
_name = name;
}
public NamedThing(NamedThing other)
{
this._name = other._name;
}
public string Name
{
get { return _name; }
}
} Named things can be constructed, queried and copied to new named things but the name cannot be changed. This is all good but what happens when I want to add another attribute? I have to add a parameter to the constructor and update the copy constructor; which isn't too much work but the problems start, as far as I can see, when I want to make a complex object immutable. If the class contains many attributes and collections, containing other complex classes, it seems to me the constructor parameter list would become a nightmare. So at what point does a class become too complex to be immutable? | When they become a burden? Very quickly (especially if your language of choice does not provide sufficient syntactic support for immutability.) Immutability is being sold as the silver bullet for the multi-core dilemma and all that. But immutability in most OO languages forces you to add artificial artifacts and practices in your model and process. For each complex immutable class you must have an equally complex (at least internally) builder. No matter how you design it, it still introduces strong coupling (thus we had better have a good reason to introduce them.) It is not necessarily possible to model everything in small non-complex classes. So for large classes and structures, we artificially partition them - not because that makes sense in our domain model, but because we have to deal with their complex instantiation and builders in code. It is worse still when people take the idea of immutability too far in a general purpose language like Java or C#, making everything immutable. Then, as a result, you see people forcing s-expression constructs in languages that do not support such things with ease. Engineering is the act of modeling through compromises and trade-offs. Making everything immutable by edict because someone read that everything is immutable in X or Y functional language (a completely different programming model), that is not acceptable. That is not good engineering. Small, possibly unitary things can be made immutable. More complex things can be made immutable when it makes sense. But immutability is not a silver bullet. The ability to reduce bugs, to increase scalability and performance, those are not the sole function of immutability. It is a function of proper engineering practices. After all, people have written good, scalable software without immutability. Immutability gets to become a burden really fast (it adds to accidental complexity) if it is done without a reason, when it is done outside of what makes sense in the context of a domain model. I, for one, try to avoid it (unless I'm working in a programming language with good syntactic support for it.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68058",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15557/"
]
} |
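To make the answer's point about builders concrete, here is a hedged C# sketch of one common way to keep a growing immutable class manageable: small With... methods that copy the object with a single field changed, so callers rarely face the full constructor parameter list. The Description property is an invented second attribute, not part of the original NamedThing.

class NamedThing
{
    public string Name { get; }
    public string Description { get; }

    public NamedThing(string name, string description = "")
    {
        Name = name;
        Description = description;
    }

    // Each "wither" returns a new instance, so the type stays immutable
    // while gaining new attributes without breaking existing callers.
    public NamedThing WithName(string name) => new NamedThing(name, Description);
    public NamedThing WithDescription(string description) => new NamedThing(Name, description);
}

// Usage: var renamed = thing.WithName("new name").WithDescription("now documented");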
68,101 | Do you spend your working hours learning new stuff, reading tech blogs, books on programming etc.? What's your opinion on it? Can an employer have benefits allowing developers to spend about 1-1.5 hrs a day on learning. Will it be repaid in future (with better productivity etc.)? | I am of the mindset that it is essential for a good development environment to allow for an hour or two at most for exploration and learning, barring when it's "crunch time" on an application of course. An environment which doesn't do this is a red flag in my book because it tells me they don't value improvement. EDIT Worst of all is the place that reprimands it's developers for reading blogs/technical sites instead of "writing code". That, to me, indicates an environment that doesn't care about it's developers beyond what they can squeeze out of them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68101",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3189/"
]
} |
68,120 | In the last interview I attended, I was asked to solve a puzzle where I was expected to measure exactly blah liters of water given two buckets with capacities - blah and blah liters respectively. I was unable to solve the puzzle in given time (~5 minutes). The interviewer was a bit disappointed and said a programmer has got to have "these" skills. I didn't get what skills he was talking about. I have always felt strange about these kind of puzzles that are normally asked at programming job interviews. I do not understand what, if at all, is the connection between such puzzles and programming. Exactly what skills do interviewers intend to assess with such puzzles? | Some people ask them in an attempt to gauge your ability and approach to solving problems. Personally, I don't think that such puzzles provide an accurate indicator. In the "real world", you have more than five minutes to figure out if your dealing with a bin packing vs a back pack problem, for instance. Initially, it's sometimes easy to misunderstand the problem at hand until you're in the middle of applying the wrong solution. That happens to people with 1, 5, 10 or even 20 years of experience. The best interview 'puzzles' are the ones where you sit down at a computer to solve a problem in the domain in which you claim expertise. I also dislike the "Well, a programmer should be able to ..." thinking because it doesn't take into consideration that people get anxious when hit with something unexpected in a setting that is already stressful. Sure, you could solve that if you had time to think about it.. and perhaps you could solve it faster if you realized that your life would be over if you didn't. Do you want to work somewhere where your life will be over if you can't solve problems in five minutes ? Will you get fired if you can't ? Should all great programmers also be champion sudoku solvers? I'm sure that plenty are, but it's not like some kind of prerequisite for competency. I'm not saying that you should not be tested on how you approach problems, but the tests should be fun and invite the 'best' that the applicant has to give, given their area of expertise. Proving that you are as smart as a character that Bruce Willis portrays seems kind of pointless, considering that producers spent a pretty sum to get that scene just right. In other words, if you detect that you're being interviewed by someone who has little comprehension over what you'll actually be doing , excuse yourself to go to the restroom and never return. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68120",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1968/"
]
} |
68,134 | I'm creating a few closed-source applications on my own (no big company behind me) and am wondering exactly how to protect them. At the top of all the source code files I have this pretty basic copyright notice: /*******************************************************
* Copyright (C) 2010-2011 {name} <{email}>
*
* This file is part of {project}.
*
* {project} can not be copied and/or distributed without the express
* permission of {name}
*******************************************************/ However I'm really starting to think that's not enough. Without the money to get a lawyer, I'm interested in any closed-source license that essentially says "You can use it, and that's it". Finding one has been extremely difficult as I can only find open source license comparisons or "Find a lawyer" answers. Is there any closed-source license that I can use that says something similar to this? | Something like this is adequate, depending on where you live: /* Copyright (C) YoYoDyne Systems, Inc - All Rights Reserved
* Unauthorized copying of this file, via any medium is strictly prohibited
* Proprietary and confidential
* Written by Elmer Fudd <[email protected]>, September 1943
*/ (2016 update: The phrase "All Rights Reserved" used to be required in some nations but is now not legally needed most places. In some countries it may help preserve some of the "moral rights." ) This means you can't: Copy the file Print the file out, scan it and copy the image Print the file out, take pictures of it and distribute the film etc ... However, take care to note that in some countries, there is no such thing as copyright. This is also completely in addition to a strong license that you ship with your product, which should go into greater detail. These sort of 'license headers' are designed simply to alert someone who happens upon a file that they should not distribute it. We use something very much like that in stuff that we have that needs to stay behind closed doors. For instance, it alerts someone to not post functions on Stack Overflow. Someone who p0wns your dev server to get your code probably isn't going to pay attention to it, however. Note, again, what you're describing is NOT a license, it's a per file assertion of copyright and specifically stating that the code is proprietary. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68134",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66/"
]
} |
68,136 | A coworker was wondering this today: "Why is it that in our industry 'production' means 'final, deliverable product'? You know, like if a movie is 'in production', it means they're currently filming it, not that that it's done and audiences are watching it." | I think the term "production" has come from other industries like automotive or electronics, where once a component/product is ready to be used, it becomes part of producing/usage in something bigger like in a "production line" or "construction pipeline". In software the term "production environment" might hold parallel in the sense that people use this software deployed in production to do something that important etc., | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68136",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4060/"
]
} |
68,251 | The more I learn about different programming paradigms, such as functional programming, the more I begin to question the wisdom of OOP concepts like inheritance and polymorphism. I first learned about inheritance and polymorphism in school, and at the time polymorphism seemed like a wonderful way to write generic code that allowed for easy extensibility. But in the face of duck typing (both dynamic and static) and functional features such as higher-order functions, I've begun to look at inheritance and polymorphism as imposing an unnecessary restriction based on a fragile set of relationships between objects. The general idea behind polymorphism is that you write a function once, and later you can add new functionality to your program without changing the original function - all you need to do is create another derived class which implements the necessary methods. But this is a lot simpler to achieve through duck typing, whether it's in a dynamic language like Python, or a static language like C++. As an example, consider the following Python function, followed by its static C++ equivalent: def foo(obj):
obj.doSomething()
template <class Obj>
void foo(Obj& obj)
{
obj.doSomething();
} The OOP equivalent would be something like the following Java code: public void foo(DoSomethingable obj)
{
obj.doSomething();
} The major difference, of course, is that the Java version requires the creation of an interface or an inheritance hierarchy before it will work. The Java version thus involves more work, and is less flexible. Additionally, I find that most real-world inheritance hierarchies are somewhat unstable. We've all seen the contrived examples of Shapes and Animals, but in the real world, as business requirements change and new features are added, it's difficult to get any work done before you need to really stretch the "is-a" relationship between sub-classes, or else remodel/refactor your hierarchy to include further base-classes or interfaces in order to accommodate new requirements. With duck typing, you don't need to worry about modeling anything - you just worry about the functionality you need. Yet, inheritance and polymorphism are so popular that I doubt it would be much of an exaggeration to call them the dominant strategy for extensibility and code reuse. So why are inheritance and polymorphism so wildly successful? Am I overlooking some serious advantages that inheritance/polymorphism have over duck typing? | I mostly agree with you, but for fun I'll play Devil's Advocate. Explicit interfaces give a single place to look for an explicitly, formally specified contract, telling you what a type is supposed to do. This can be important when you're not the only developer on a project. Furthermore, these explicit interfaces can be implemented more efficiently than duck typing. A virtual function call has barely more overhead than a normal function call, except that it can't be inlined. Duck typing has substantial overhead. C++-style structural typing (using templates) can generate huge amounts of object file bloat (since each instantiation is independent at the object file level) and doesn't work when you need polymorphism at runtime, not compile time. Bottom line: I agree that Java-style inheritance and polymorphism can be a PITA and alternatives should be used more often, but it still has its advantages. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68251",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17252/"
]
} |
68,439 | Is it true that a person with fairly good fundamentals in programming can easily learn any programming language? Well, when I say programming languages, I refer to the agile and dynamic languages like PHP, Perl, Ruby, etc but not the former programming languages of the distant past. I've worked only on java, groovy and flex to some extent. So considering the fact that I am an amateur programmer but a fast learner, on a rough basis, how long would it take to get a foothold on any one of such languages? | Yes, with reservations. Four weeks ago, I would say I had professional-level skill in C and C++, and amateur-level skill in Java. My boss asked me to write some software in JavaScript, with which I had zero experience, and off I went. Over the next two weeks, I read many sample code snippets, found all the cool libraries, and wrote my program. It's done, and it works. Then last week I bought a JavaScript book, and I've been reading it, and boy, I did not know what I was doing. Now I understand why my objects were acting so strangely. So now I say, I know a little JS. I can read it and work with it, but I'm sure what I'm writing is inefficient, hard to read, and does not follow best practices. In general, a fast learner can take a week and start producing low-quality product in a new language. If you know Java, you can pretty quickly pick up C, C++, PHP, Python, JavaScript, but only well enough to modify code or write well defined functions. (Perl may be harder because regex's are complex.) In order to properly architect a system in a new language, you would probably want a year of developing professionally under experienced mentors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68439",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23010/"
]
} |
68,470 | With all the new "modern" languages out today, how is it that C is still heralded as the fastest and "closest to the machine"? I don't really believe in there ever being only one correct way to do things, and C has been around for a really long time (since the early 1970s!). Have we really not come up with anything better than something written nearly 40 years ago? I am aware that modern languages are higher-level and take care of certain tasks like garbage collection and memory allocation and utilize libraries and such. I'm just asking why there has never been a true second option to C. Can it be that C is so perfect that no other way of operating a computer could be possible (developer-adoption aside)? EDIT Look, I'm not trying to knock C or whatever your favorite language is. I'm wondering why C has become the standard and why other alternatives never emerged and C was just "accepted". | C is a very simple language, and it's because of this, along with its longevity, that it's fast and optimized. It's also extraordinarily widely supported, particularly in embedded environments, on microprocessors, etc. It's hard to beat a really simple and fast language. The only thing to improve upon a language like that is usability: decrease the time it takes to make similar, generic code, and make it easier to model with abstractions. This is where C++ comes in. C++ can be just as fast as C. The thing is, C++ is a much more complex language, which means it can definitely increase productivity, as long as people know how to use it. C++ and C are not almost the same language anymore. Now, D was another step up. Same ability for fast code, optional garbage collection, etc., but it never caught on. Hopefully that changes, because it drops what plagues C++: backwards compatibility with C. So to answer your question, "better" is a hard thing to judge. In terms of simplicity and speed, C is probably close to the best we could do. In terms of productivity versus simplicity, C++ is probably the best we could do, though that opinion varies much more. Lastly, in terms of a fleshed-out and cleaned up language, with the speed and simplicity of C, D wins in this context. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68470",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4430/"
]
} |
68,726 | Based on this question. Would you consider it best practice to create a function that does the opposite of an existing function just to give it a different name? Example: If you already have bool In(string input,string[] set) which returns true if the array set contains the string input and false otherwise, should you create a function like bool NotIn(string input,string[] set) which returns false if the string is in the set or true otherwise? | No, certainly not best practice at all. All languages that I've ever used have a "not" operator, so use that. It's very clear, very easy to read and it saves writing essentially duplicate methods. E.g. the meaning and the intention of the code below seem to me to be pretty clear: if (!In(...))
{...} Whilst if I saw a bit of code like this: if (NotIn(...))
{...} I'd think, "this is probably the opposite of In() , but if it was why didn't they just write !In() ". So, I'd end up having to check the docs or the code :( Obviousy it is not syntacticaly wrong to write such a method, it's just not idiomatic (in any langauage I''ve ever used). Edit As Amir mentions on the comments, this is the kind of thing that might well be covered in coding standards, along with how to name a method (or property) that returns a boolean value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22419/"
]
} |
68,740 | The danger of ever suggesting some feature on a product, especially open source, is that you'll get the response, "why don't you do it?". That's valid, and it's cool that you can make the change yourself. But we know practically that products do often improve as programmers listen to the voice of users — even if those users are other programmers. And, the efficient way to make those changes can include someone who's already working on the project taking up the idea and implementing it. There are some common terms used to refer to software development problems. e.g. Bikeshedding . Is there a common term used that essentially replies, "Yes, I know that I can change just about anything in the world — even closed source. I could get hired, and go write that code. But in this case I'm just making an observation that may in fact be useful for another coder already well suited to easily make that change — or just generally discussing possibilities." [p.s. (a few days in) - I should have pointed out that "submit a patch" is often said with wry humor, and I'm seeking an appropriate witty response.] | It's a difficult point: since the user doesn't directly or indirectly pay for a product, she cannot ask for a feature to be implemented. It's not as if you were a stakeholder or a direct customer who ordered the product, and not even an end user of a commercial product. This being said, "submit a patch" is not a valid answer. It's not polite. It's not correct. Even for an open source product. "Submit a patch" is the short version of: "we don't care if you like our product or not. Go and modify it if you want, but don't bother us with your customer requests." What about submitting a patch? Well, it's not so easy. To do it: You must know the language(s) used in the open source project. You must be able to load the source code from the version control to be able to modify it. You must have all the correct versions of any build dependencies installed (including both runtime libraries and build tools). You must be able to compile this source code , which is not so obvious in some cases. Especially, when a huge project takes a few hours to compile and displays 482 errors and thousands of warnings, you may be courageous to go and search for the source of those errors. You should understand very well how the project is done , what are the coding style to use, if any, how to run unit tests, etc. If the project doesn't have a decent documentation (which is often the case for open source projects), it may be really hard. You must adapt yourself to the project and to the habits of the developers who are participating actively to the project. For example, if you use .NET Framework 4 daily, but the project uses .NET Framework 2.0, you can't use LINQ, nor Code Contracts, nor other thousands of new features of the latest versions of the framework. Your patch must be accepted (unless you do the change only for yourself, without the intent to share it with the community). If your intention is to actively participate to the project, then you can do all those things and invest your time for it. If, on the other hand, there is just an annoying minor bug or a simple feature which is missing, spending days, weeks or months studying the project, then doing the work itself in a few minutes is just unreasonable, unless you like it. So is there a canonical retort to "it's open source, submit a patch"? I don't think so. Either you explain to the person that she's impolite, or you just stop talking to her. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68740",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14734/"
]
} |
68,762 | This is a philosophical question. Given a hypothetical desktop application, and a desire to provide automatic updates (rather than forcing people to go to a website, check for an update, download an update, install), which of the two is more of a "best practice" approach? Like iTunes , it checks to see if there is a new version and prompts the user to download the new version. If so, it downloads a full install executable (in this case, a Windows Installer file (.msi)) that installs the full version (not just an upgrade to the previous version - too much to manage if there are multiple versions out there). So, let's say, it's version 10.1.1 - whether you are installing fresh or upgrading, you use the same file. After it downloads, it instructs the user to close the application and run the install file themselves. Similar to the other one, it checks for a new version and prompts user to download it, but instead of just downloading an executable and prompting the user to run it, it actually runs it for them - shutting down the program they have open, acquiring the necessary security to install files. Issues with #2: many issues around closing down program, since the program can open other programs ( Outlook and Excel ), or what if the user was in the middle of something. Also around security, you need local administrator access to install, what if you don't have it? In later versions of Windows, you cannot just override the person's security. Issues with #1: some people believe this will be too hard, too much effort for the end-user. I would strongly prefer to go with #1 because it will save 80-120 hours on my project, and is simpler to implement and maintain. However, we have people who feel strongly on all sides. What is a best practice for this sort of thing? | Personally, I rather like Google Chrome's approach. A base directory with a launcher, and subdirectories for each installed version of the software. The launcher just looks for the highest version number and uses that and deletes older versions as needed. An updater task runs every so often to download and create new directories. When new versions are installed, the running application requests a restart to use the new version. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/68762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23095/"
]
} |
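A hedged C# sketch of the launcher idea in the answer above: scan a base directory for version-named subdirectories, start the executable from the highest one, and prune the rest. The folder layout and the app.exe name are assumptions for illustration, not Chrome's actual implementation.

using System;
using System.Diagnostics;
using System.IO;
using System.Linq;

static class Launcher
{
    static void Main()
    {
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;

        // Subdirectories whose names parse as versions (e.g. "10.1.1"), newest first.
        var versions = Directory.GetDirectories(baseDir)
            .Select(dir => new { Dir = dir, Version = ParseVersion(Path.GetFileName(dir)) })
            .Where(x => x.Version != null)
            .OrderByDescending(x => x.Version)
            .ToList();

        if (versions.Count == 0)
            throw new InvalidOperationException("No installed version found.");

        // Launch the newest version; a separate updater task adds new folders over time.
        Process.Start(Path.Combine(versions[0].Dir, "app.exe"));

        // Prune older versions (a real launcher would do this only after a successful run).
        foreach (var old in versions.Skip(1))
            Directory.Delete(old.Dir, recursive: true);
    }

    static Version ParseVersion(string name)
    {
        Version v;
        return Version.TryParse(name, out v) ? v : null;
    }
}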
69,050 | I have been writing code so far in conventional text editors that come with the OS so far or use an IDE in some cases. I know there are some advanced text editors like Emacs and Vim available solely for the purpose of coders. How important are they really? Should a programmer dealing with PHP , Python etc. learn these editors? What are the advantages that they provide over conventional editors like Notepad++ , Scribes etc.? | Vim is a really good tool once you familiarize yourself with it. It starts up faster than any IDE or text editor I've used, and it has syntax highlighting and it indents the code correctly in most cases. It also helps you focus on the coding process itself, you won't be using the mouse at all to deal with it, that'll save you a lot of time when you're just writing code. It has a wealth of plugins for whatever it is you're doing, as well. I haven't used emacs to be honest, but I'm sure there are people here who like it, I personally don't like having to press Ctrl or Alt all the time. Edit Vim's usefulness also depends on what you're writing. If you're an API developer (Java, C#...etc) you'll most probably be more comfortable with an IDE. But if you write scripts (Bash, Perl...etc), Vim might be the way to go, since you need to write something fast, Vim is fast, and does everything you need. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69050",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
69,078 | I'm having a asp.net website running live with its specific style and specific content. From that site we've built another site with the same logic (in other projects) but with different style and different content (and different files) in the same solution. We've changed the start options from Default.aspx to the root folder for that second site. Now I want to build a more general site which works for several customers, using the same logic and data layers - but with different style and content. The current site(s) relies heavily on SQL Server 2008 R2, where most of the content resides - but not all. All of the content could be placed in the database with some work if we need to - especially if we get a more general site. My idea is to use a different start file for every new customer, such as: Default_Customer1.aspx, Default_Customer2.aspx etc., but I'm far from sure if this is the right way to do it. I don't want to have different solutions for every customer. I know that all of the customers will get all of the compiled code, even if some of the code never will be used by that specific customer (who has its specific needs). So what do I do? Should I try to use different start options (files) for every customer, as mentioned above, or is there a better way handling different customers in the same solution? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69078",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23071/"
]
} |
69,178 | I'm learning git and I've noticed that it has a two-step commit process: git add <files> git commit The first step places revisions into what's called a "staging area" or "index". What I'm interested in is why this design decision is made, and what its benefits are? Also, as a git user do you do this or just use git commit -a ? I ask this as I come from bzr (Bazaar) which does not have this feature. | Split work into separate commits. You've probably many times opened a file to write a single-line fix, but at the same time you spotted that the formatting was wrong, some documentation could be improved, or some other unrelated fix. With other RCS s you'd have to write that down or commit it to memory, finish the fix you came for, commit that, and then return to fix the other stuff (or create a ball-of-mud commit with unrelated stuff). With Git you just fix all of it at once, and stage+commit the single line separately, with git add -i or git-gui . Don't break the build. You're working on a complicated modification. So you try different things, some of which work better than others, some which break things. With Git you'd stage things when the modification made things better, and checkout (or tweak some more) when the modification didn't work. You won't have to rely on the editor's undo functionality, you can checkout the entire repo instead of just file-by-file, and any file-level mistakes (such as removing a file that has not been committed or saving+closing after a bad modification) does not lead to lots of work lost. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69178",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2169/"
]
} |
69,186 | I read an interesting article, 10 reasons for quitting IT. I quote a part of this text: "This misunderstanding of both duty and technology does one thing: It makes your job impossible. When the powers-that-be begin to micromanage your department for you, every single bad element is exacerbated. You know your job and you know you know your job. Management does not know your job, but they don’t know they don’t know your job. It’s all a vicious Mobius strip of stress." This is exactly what is going on in my project at the moment.
The client, that is, the one that pays, wants to be everything. He wants to play every possible role in the project. He even wants a detailed technical explanation even though he doesn't know a single thing about programming. And when something doesn't work he blames it on someone else. Has anybody had a similar experience? Any advice on how to deal with these situations? | I've had my share of jobs in IT. Helpdesk, networking, software development: they all share similar problems. Quitting and starting anew, while refreshing, only brings about a new set of problems to deal with. There are a few things you can do to keep your sanity in check.
Look for the real problem. The client is trying to fully control projects. See if you can find out why. - Is it due to past failures? Empathize, re-assure them of the project's success. - Is the client a control freak? Redirect their attention (the iceberg secret).
Meetings are not as important as results. In every meeting, management says something that makes my stomach curl. But when the meeting is over, everything is forgotten except the results. I'm still the one who solves the problems, and solves them in the way I feel they need to be solved.
Do not carry the weight of the world on your shoulders. The most stressed-out guys in IT are often the best. One of my best friends is always stressing about the endless responsibilities management puts on his shoulders. Other devs come to him to solve their problems. I'll tell you the same thing I told him: don't let them. Find a political way to say no and stop taking on problems you do not own. The company realizes what a valuable asset you are. You may be the only guy who gets anything done. They are probably not going to fire you for stopping some of the abuse, as long as you handle it properly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69186",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18933/"
]
} |
69,215 | Say I've received the specifications for a project from a client, and now it's time to start developing it. Normally, I just start with the first module (usually user registration) and then go from one module to the next. I only plan in my head, just before I'm about to start on a module, how it's going to work, but there's no planning before that. However, I think it would be better if I went over the specs and planned out how the system was going to work before I coded it, e.g. what are the main components, how they're going to interact, etc. I'm just not sure exactly what I should plan. To give a better idea of what I'm asking for, how should I: a) Divide the project into components, b) Plan their interactions,
e.g should I do class diagrams, write unit tests, etc.? Any ideas? | When you have the privilege of starting on a new project you have a blank canvas--which is both exciting and daunting at the same time. I work in iterations, and this is how I divide up the work: Start with the goals for the project. Goals are necessarily the most vague, but helps you focus on what the client or user intends to do with the software. At the end of the day, you want to satisfy those goals--even if that means dropping some really cool features. Then I start breaking the application down into it's subdomains. There's probably a hundred different ways to do this, which is why we start with the project goals. We want to break up the application into some related subsystems that support those goals. This helps us focus on the next task. Identify how and when the subsystems need to interact. We aren't handling the details, just the high level information to make sure we have an integrated system of subsystems. You need a general idea of this so that you can flesh out the details that supports the overall goals for the project. Only supply details for the subsystem I'm working on at the moment (similar to your current strategy). I already know how this subsystem needs to interact with other subsystems, but I may need to work out a couple alternatives so that it makes the most sense. Each subsystem is separated by an interface, so I can adjust the implementation as much as possible without breaking the system as a whole. Review how things are implemented in my current subsystem compared with how it is implemented in other subsystems. Every approach that is not consistent is something the user has to learn. It's OK if we are talking about a brand new concept. For usability's sake we don't want 5 different ways to delete information that are present simply because we were lazy. Re-using the same user interface elements is the quickest way to make the application more intuitive. Learning three concepts is much easier than learning 20. Essentially, this approach of progressively defining a project from very high level to more detailed design has served me well. Even the interactions between subsystems get refined as you actually attempt to implement them. That's a good thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69215",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9803/"
]
} |
69,308 | I know, and use, two version control systems: Subversion and git. Subversion, as of now, gets used for personal projects where I am the only developer and git gets used for open source projects and projects where I believe others will also work on the project. This is mostly because of git's amazing forking and merging capabilities, where everyone may work on their own branch; very handy. Now, I use Subversion for personal projects, as I think git makes little sense there. It seems to be a little bit of overkill. It is OK for me if it is centralized (on my home server, usually) when I am the only developer; I take regular backups anyway. I don't need the ability to make my own branch, the main branch is my branch. Yes, SVN has simple support for branching, but much more powerful support for it makes no sense, I think. Merging can be a pain with it, or at least from my little experience. Is there any good reason for me to use git on personal projects, or is it just simply overkill? | It's not overkill. The main reason why I started using Git and Mercurial over Subversion for personal projects is that initiating a repository is so much more easier. Wanna start a new project? > git init BAM! No need to set up a repository server nor check in a folder structure to support branching and tags into a subversion repository. Sharing your project later is just a matter of: git push (other than having a remote repository). Try to do that quickly with subversion! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69308",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
69,519 | In many respects I really like the idea of Fluent interfaces, but with all of the modern features of C# (initializers, lambdas, named parameters) I find myself thinking, "is it worth it?", and "Is this the right pattern to use?". Could anyone give me, if not an accepted practice, at least their own experience or decision matrix for when to use the Fluent pattern? Conclusion: Some good rules of thumb from the answers so far: Fluent interfaces help greatly when you have more actions than setters, since calls benefit more from the context pass-through. Fluent interfaces should be thought of as a layer over top of an api, not the sole means of use. The modern features such as lambdas, initializers, and named parameters, can work hand-in-hand to make a fluent interface even more friendly. Here is an example of what I mean by the modern features making it feel less needed. Take for example a (perhaps poor example) Fluent interface that allows me to create an Employee like: Employees.CreateNew().WithFirstName("Peter")
.WithLastName("Gibbons")
.WithManager()
.WithFirstName("Bill")
.WithLastName("Lumbergh")
.WithTitle("Manager")
.WithDepartment("Y2K"); Could easily be written with initializers like: Employees.Add(new Employee()
{
FirstName = "Peter",
LastName = "Gibbons",
Manager = new Employee()
{
FirstName = "Bill",
LastName = "Lumbergh",
Title = "Manager",
Department = "Y2K"
}
}); I could also have used named parameters in the constructors in this example. | Writing a fluent interface (I've dabbled with it) takes more effort, but it does have a pay-off because if you do it right, the intent of the resulting user-code is more obvious. It's essentially a form of domain-specific language. In other words, if your code is read a lot more than it's written (and what code isn't?), then you should consider creating a fluent interface. Fluent interfaces are more about context, and are so much more than just ways to configure objects. As you can see in the link above, I used a fluent-ish API to achieve:
Context: when you typically do many actions in a sequence with the same thing, you can chain the actions without having to declare your context over and over.
Discoverability: when you type objectA., IntelliSense gives you lots of hints. In my case above, plm.Led. gives you all the options for controlling the built-in LED, and plm.Network. gives you the things you can do with the network interface. plm.Network.X10. gives you the subset of network actions for X10 devices. You won't get this with constructor initializers (unless you want to have to construct an object for every different type of action, which is not idiomatic).
Reflection (not used in the example above): the ability to take a passed-in LINQ expression and manipulate it is a very powerful tool, particularly in some helper APIs I built for unit tests. I can pass in a property getter expression, build a whole bunch of useful expressions, compile and run those, or even use the property getter to set up my context. One thing I typically do is: test.Property(t => t.SomeProperty)
.InitializedTo(string.Empty)
.CantBeNull() // tries to set to null and asserts ArgumentNullException
.YaddaYadda(); I don't see how you can do something like that as well without a fluent interface. Edit 2:
You can also make really interesting readability improvements, like: test.ListProperty(t => t.MyList)
.ShouldHave(18).Items()
.AndThenAfter(t => testAddingItemToList(t))
.ShouldHave(19).Items();
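For anyone who hasn't built one of these, the mechanics are small: each method updates some state and returns this so the next call can chain. The following stand-alone C# sketch uses made-up names (it is not the test library referred to above) and only illustrates the shape of such an API:

using System;

// Minimal chainable builder: every With* method returns 'this'.
// The names are made up, loosely echoing the question's example.
class EmployeeBuilder
{
    private string _firstName = "";
    private string _lastName = "";
    private string _title = "";

    public EmployeeBuilder WithFirstName(string name) { _firstName = name; return this; }
    public EmployeeBuilder WithLastName(string name) { _lastName = name; return this; }
    public EmployeeBuilder WithTitle(string title) { _title = title; return this; }

    public override string ToString() => _firstName + " " + _lastName + " (" + _title + ")";
}

class FluentDemo
{
    static void Main()
    {
        var employee = new EmployeeBuilder()
            .WithFirstName("Peter")
            .WithLastName("Gibbons")
            .WithTitle("Manager");

        Console.WriteLine(employee); // Peter Gibbons (Manager)
    }
}

Returning this is the whole trick; the real effort goes into choosing method names that read well in a chain, which is where the pay-off mentioned above comes from. | {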
"source": [
"https://softwareengineering.stackexchange.com/questions/69519",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23358/"
]
} |
69,538 | I constantly see people making the claim that "comments tend to become outdated." The thing is, I think I have seen maybe two or three outdated comments my entire career. Outdated information in separate documents happens all the time, but in my experience outdated comments in the code itself are exceedingly rare. Have I just been lucky in who I work with? Are certain industries more prone to this problem than others? Do you have specific examples of recent outdated comments you've seen? Or are outdated comments more of a theoretical problem than an actual one? | Constantly I really can't believe I'm the only one swimming in outdated and misleading comments. In the off chance this helps with understanding: It probably depends most importantly on the age of the code. The next factor would be turnover on the staff. I do equal parts R&D and maintenance work. The R&D is new code, generally stuff that's a little off the beaten path. Many of my colleagues believe in putting in a lot of commented explanation when trying something that there isn't a library already out there for. Since the comment to code ratio is higher than normal there's just more opportunities for things to go out of sync. The maintenance code...I'm an active maintainer on a system that is over 10 years old and another that is over 5. The 10 year old code and comments are atrocious, as you'd expect. Over 10 years you get a lot of hands in the codebase and no one has any idea how the whole thing works anymore. The 5 year old code and comments are pretty good because the turnover on the team has been pretty low. I work almost all services, even our products are highly customized to a particular customer. Specific examples: Comments describing the performance improvement for a particular methodology, like avoiding an in-memory copy. A big deal when a top end machine in a Pentium 2 with MBs of RAM, but hardly a problem now. TODOs Blocks of copy-pasted code including comments. Comment may have made sense in its original location, but hardly makes sense here Comment blocks on top of commented out code (Who knows how many years that's been in there). In all these you see a trend of just not maintaining the comments and code at the same level as the software. IDEs and basic developer habits don't help with this, my eye has been trained to speed past them. I think comment outdated-ness is relatively cheap to avoid in green-field and active projects. If you can keep the code/comment ratio high, it's not a big deal to keep them up to date. It's a little harder to justify hunting these things down when you're budgeted x hours for a bug fix on a production system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69538",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3965/"
]
} |
69,697 | I am comfortable with programming in C and C#, and will explore C++ in the future. I may be interested in exploring functional programming as a different programming paradigm. I am doing this for fun, my job does not involve computer programming, and am somewhat inspired by the use of functional programming, taught fairly early, in computer science courses in college. Lambda calculus is certainly beyond my mathematical abilities, but I think I can handle functional programming. Which of Haskell or Scheme would serve as a good intro to functional programming? I use emacs as my text editor and would like to be able to configure it more easily in the future which would entail learning Emacs Lisp. My understanding, however, is that Emacs Lisp is fairly different from Scheme and is also more procedural as opposed to functional. I would likely be using "The Little Schemer" book, which I have already bought, if I pursue Scheme (seems to me a little weird from my limited leafing through it). Or would use the "Learn You a Haskell for Great Good" if I pursue Haskell. I would also watch the Intro to Haskell videos by Dr Erik Meijer on Channel 9. Any suggestions, feedback or input appreciated. Thanks. P.S. BTW I also have access to F# since I have Visual Studio 2010 which I use for C# development, but I don't think that should be my main criteria for selecting a language. | I would recommend OCaml. In my personal point of view, the main basis of modern¹ functional programmings are higher-order functions, a static type system, and algebraic datatypes and pattern matching. Between a Scheme, a ML and a Haskell, I would choose the ML because I think it's the most relevant to this definition. Scheme doesn't have static typing (there is Typed Racket, but it's not for the scheme beginners), and Haskell has too much other stuff (monads, lazy evaluation...) that, while interesting, may divert attention from the important basis. SML and OCaml are equally interesting; I'm more used to OCaml, and it has a more "practical" feeling that is nice (but if you really want "practical" to the risk of losing your soul, you may as well pick F#). Scheme and Haskell are also very interesting languages. Scheme emphasis on macros is interesting, but not directly related to functional programming (it's another of the things in the world you should definitely try, as well as logic programming, stack-based languages, and capability-oriented E). Haskell is definitely a great language and, I think, a mandatory point for the aspiring gurus of functional programming. But as the core languages of OCaml and Haskell are very much similar (except for lazy evaluation that is distracting for the beginner), it's easy to learn Haskell once you know the basics of OCaml. Or rather, you can concentrate on the weird stuff, and you don't have to assimilate the basics at the same time. Similarly, once you have seen OCaml, and possibly also Haskell, and still want to learn more, you should look at Coq or Agda. Yet few would recommend Coq or Agda for a first introduction to functional programming... To make my point clear : I think that learning OCaml (or SML) then Haskell will make you as good a functional programmer as learning Haskell directly, but more easily (or less painfully). Besides, OCaml and Haskell both have good things differentiating them, and it's interesting to know about the advanced features of both . 
Just learning Haskell is poorer in that aspect (though of course you could learn OCaml after Haskell; I think it's less logical and will get you more frustrated). For learning OCaml I would recommend Jason Hickey's book draft (PDF) . ¹ this definition is controversial. Some Scheme folks will claim static typing has nothing to do with functional programming. Some Haskell people will claim that their definition of purity ("what Haskell does, but no more") is a sine qua non condition for being a functional language. I agree to disagree. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69697",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23606/"
]
} |
69,771 | Does anybody know why was Scala implemented in Java and .NET instead of C or C++? Most languages are implemented with Cor C++ [i.e Erlang, Python, PHP, Ruby, Perl]. What are the advantages for Scala implemented in Java and .NET other than giving access to Java and .NET libraries? UPDATE Wouldn't Scala gain more benefit if it were implemented in C because it can be tuned better rather than relying on JVM? | The question is confusing, as C and C++ are languages , while JVM is a virtual machine and .Net is a platform . Scala could be implemented in C or C++, and it could generate machine code instead of bytecode for a virtual machine. Answering the question that was asked: Scala was not implemented in C or C++ because Scala, the language in which it is actually implemented, is a much better language. Why is it better? Well, go read about Odersky's goals for the Scala language . Answering the question that may have been intended: Scala generates primarily JVM bytecode because that provides great portability as well as features such as a reliable and efficient garbage collector, run-time optimizations and just-in-time compilation by the JVM . Let me repeat that last thing: JVM will compile to machine code hot spots in the code it is running. That's compile just like C and C++ compilers do. There are other virtual machines available, but Odersky, Scala's creator, was already very familiar with JVM. He intended to have CLR as an alternative, but the effort to get that going hasn't achieved success yet. Answering the question that could/should have been asked: Compiling to machine code doesn't provide enough benefits over compiling to JVM bytecode. It is certainly possible to generate microbenchmarks in C or C++ that beat JVM-equivalents. It is also true that extremely optimized code in C or C++ will beat extremely optimized code in Java or Scala. The difference isn't all that great, however, for long-running program. Note that Scala isn't a particularly good scripting language precisely because the overhead for short-running programs is too big. However, in most cases the speed of development and ease of maintenance are more important than the speed of execution . In those cases, where people are more concerned in writing very high level code that is easily understand and change, the run-time optimizations provided by the JVM may easily beat compile-time optimizations made by C or C++ compilers, making JVM (and CLR) the target that will actually execute faster. So, no matter whether the question was about Scala compiler being a machine code executable, or Scala programs being machine code, the potential speed gains do not, necessarily, translate into real speed gains. And, by the way, I'll give you a counter-example: Haskell. Haskell generates machine code, and, yet, Haskell programs fare worse on Debian's shootout than Scala's. Given that, can anyone be sure Scala programs would be faster if compiled directly to machine code? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69771",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9298/"
]
} |
69,788 | Why do many code examples, especially tutorials, use the names "Foo" and "Bar" so often? It is almost a standard. For example: void foo(char* bar) {
printf("%s", bar);
} | I think it's the phonetic pronunciation of fubar, which stands for: F*cked Up Beyond All Repair | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69788",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
69,827 | Yesterday I spent a good part of the afternoon trying to fix a bug, which I thought to be trivial. I was going around in circles, not having a clue what was wrong. Rewriting large parts of the code. Checking on SO. Still no joy. So I went home, walked the dog, watched a little TV and just before I went to sleep, bingo I realized the obvious mistake I was making. This morning it took about 10 minutes to fix. While I was home, I wasn't actively thinking about the problem. Yet taking myself out of the situation enabled me to solve it. It isn't the first time it has happened, and I know that it is a fairly common way to solve a programming problem. I have even heard of people dreaming the answers. Why does this work? Perhaps more importantly, is there a good guide as to when you should take a break from a problem, how long should the break be, and after how long does leaving a problem stop being effective? I suppose I am trying to work out how to optimize this subconscious processing (or whatever is going on) | Being too focused on a problem prevents you from taking a step back. When you debug your code, you tend to needlessly repeat the same tests. The more you try, the more you fail and you become very frustrated. Increased stress and frustration make things worse. That's why quite often, a colleague can by chance, look over your shoulder, and point out the problem (and solution) in a few seconds. They are not in the same mental state as you. I often try to stop looking after a certain period of time and come back with a calmer mind a few hours later. But the most powerful technique is just... asking for help . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69827",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10624/"
]
} |
69,862 | I'm preparing a LINQ section in interview questions for senior programmers. What are the most interesting questions in LINQ to include? And why? | Some of the things you can ask:
Why is the var keyword used, and when is it the only way to get a query result?
What is deferred execution?
Explain query expression syntax, fluent syntax, and mixed queries.
What are interpreted queries?
Use of the IQueryable and IEnumerable interfaces.
Use of the let and into keywords, and how they help in building progressive queries while still keeping deferred execution.
What are expression trees?
Update: For detailed answers see this nice post by Oleksii
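Deferred execution in particular is easy to probe with a short, self-contained snippet; something along these lines (the class and variable names are just illustrative) works well as a follow-up - ask the candidate to predict the output:

using System;
using System.Collections.Generic;
using System.Linq;

class DeferredExecutionDemo
{
    static void Main()
    {
        var numbers = new List<int> { 1, 2, 3 };

        // The query is only described here; no filtering happens yet.
        IEnumerable<int> evens = numbers.Where(n => n % 2 == 0);

        // Mutating the source after building the query still affects the result,
        // because the Where clause runs only when the sequence is enumerated.
        numbers.Add(4);

        Console.WriteLine(string.Join(", ", evens)); // prints: 2, 4
    }
}

A strong candidate will explain why the output includes 4, and how calling ToList() earlier would change that. | {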
"source": [
"https://softwareengineering.stackexchange.com/questions/69862",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23437/"
]
} |
69,892 | Almost every advanced programmer says that it's very useful to read the code of other professionals. Usually they advise open source. Do you read it or not? If you do, how often, and what's the procedure for reading the code? Also, it's a bit difficult for newbies to deal with SVN - a bunch of files. What's the solution? | Do you read it or not? Yes.
If you do, how often? Daily. Constantly. I work with numerous open-source projects (mostly Python-related) and must read the source because it's the most accurate documentation.
And what's the procedure of reading the code? Um. Open and read.
Also, it's a bit difficult for newbies to deal with SVN - a bunch of files. What's the solution? Open and read. Then read more. It's not easy. Nothing makes it easy. There's no Royal Road to understanding. It takes work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69892",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21746/"
]
} |
69,916 | Recently I read the following 5 Types Of Bosses and How To Deal With Them , which describes the attires of the worst boss. I've just started leading a small team of software developers. I would like to know what are the main things a programmer expects from the senior programmer or what are the things we should avoid while managing a team. Also, I would like to know how to keep the programmers satisfied and create a productive & completeness environment for my team. | Things that seem to work well for me: Give meaningful work and encourage ownership - even when a problem arises, don't solve it, talk through it and give the person insights so they can solve it themselves. edit - addition - this was also meant to include - stay the heck out of details. Assume your people know enough to do the assignment without micromanagement or the requirement to constantly check in. Build a set of guidelines for when they should check in - which should only be when the work is either done or so truly messed up that serious intervention is needed. If possible, stay away from even needing to be in the loop on interteam support issues. Be honest - that has several corollaries: Be honest about yourself - "I won't have time until Tuesday", "I've never done that, here's my best guess", etc. Be honest about the team and where they fit in the company - if you know something about the business stuff, tell them if you can, and tell them what you know as the straight facts. Be honest in giving feedback - don't mince words or soft pedal if you have give negative feedback. That's different from "brutally honest" - you can still have compassion, but if something's wrong, say so. Be honest when you know the work is more about redtape than getting something meaningful done. Into everyone's life, some meaningless work will fall. Don't pretend it's meaningful. Call it like is, so you can all focus on getting past it and getting on to something useful. Listen . At least 50% of your job is listening, maybe more. You have suddently become responsible not just for the technical work, but the people doing it. You have to listen to learn not just about the problems the team is having, but also how your people approach the problem and what the team's shortcomings as a group are. Important corollary - listening can directly lead to point #1 - giving meaningful work - engineers are great at coming up with ways to make development easier. You can't approve everything, but where the idea is good, give the engineer the assignment, and they have essentially done you work for you - they created the meaningful work and told you just what it is. Say "thank you" . I know, it seems obvious. While we all love money, better tools, a nicer work environment and promotions - the way to get to these things is by a series of good efforts, each of which deserves a "thank you". "Thank you" is totally free, you'll never run out of them, and knowing that your manager has seen and appreciated your hard work is definitely motivating. Spend time on the big picture , even if it means sacrificing some portion of the day to day work that got you the position. It's probably true that you can code better than some of your people, but if you don't spend a decent set of time on the big picture - the team, the overall project direction, the state of your codebase, the efficiency of your processes, your team's environment - then you won't be doing the job they need you to do. Learn to be a buffer for your team . 
Engineering teams work best when they have the time to do ... engineering. Corporate bureaucracy is not engineering. Anything you can do to take the annoying 1 per year/month/week meetings with external people is better. NOTE: That doesn't mean agile meetings with stake holders - that's engineering, your team needs to be there for that. I mean the meeting with facilities who wants to put a loud shrieking piece of machinery near your team, or the process group that wants your team to fill out papers in triplicate before any code gets checked in. You are the flak absorption system. Assume problem people are not evil , they are people who want to do good but haven't figured out how yet. You're not going to be able to fix everyone, but often the first few complete screw ups are as much a factor of failed communication as they are incompetence or deliberate malice. If you start with the assumption that people are not evil, you have a decent hope of avoiding a number of the evil boss archetypes of the list above. And probably most important... respect . If you honestly can't respect the members of you team, you have to work on changing that (whether that's teaching people or changing your headcount). Give respect day one and you will get it back, treat people with a lack of respect and you will never get respect in return. Taken together, if you do most of these things, most of the time then your team will give you the benefit of the doubt when you show you are human and totally screw something up yourself. :) Every boss has their own drawbacks, and it's as much about working out a relationship with your team where they can help you compensate for your weaknesses as you help them with theirs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/69916",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7429/"
]
} |
70,063 | I had a discussion with one of my teachers the other day. We debated the impact that simpler scripting languages (like Python or Ruby) have on junior programmers. He argued that scripting languages engender sloppy coding techniques because beginners don't understand what is going on "under the hood". He also cited other examples of how scripting languages often cause the programmer to neglect concerns about efficiency, memory management, operational complexity, etc. I argued that lower level languages might be too much for some people and they might give up before they develop a passion for programming. When I started learning my first programming language (C), I got to pointers and gave up because the concepts were too hard (in my defense, I was only 14 years old). If it wasn't for Java, I might not have become a programmer! If I had started with a simpler language and then dug deep, I feel I wouldn't have given up and I would have learned just as much as I have starting with C. The class ended before either side was fully explored. To this point, I have been preaching that beginners should start with scripting languages and then dig deep; but after that discussion, I began wondering if this was erroneous thinking. So, what impact do scripting languages have on junior programmers? | I disagree. First, scripting languages are at a higher level of abstraction, and there is nothing wrong with this. At the beginning one is just trying to learn the principles. Actually I would say that choosing a lower level language may encourage bad coding, since one has to deal with some details before being able to understand them. Instead with a simpler language one can start writing clean and concise code from the start. Second, there is much to learn in these languages. As far as learning the language, I would say that C is easier than Python. One has to deal with pointers or take care of strings, but there are many more concepts to learn in Python. Comprehensions, object orientation, reflection, magic methods, first-class functions, lambdas, iterators and generators, metaclasses: all of this is part of the language. I think that starting with Python allows to learn much more about programming and with a gentler learning curve. A lower level language may have less abstractions - so less general concepts to learn - and overwhelm the beginner with details he may want to do without. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70063",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23461/"
]
} |
70,086 | C++ templates are notorious for generating long, unreadable error messages. I have a general idea of why template error messages in C++ are so bad. Essentially, the problem is that the error isn't triggered until the compiler encounters syntax that is not supported by a certain type in a template. For example: template <class T>
void dosomething(T& x) { x += 5; } If T doesn't support the += operator, the compiler will generate an error message. And if this happens deep within a library somewhere, the error message might be thousands of lines long. But C++ templates are essentially just a mechanism for compile-time duck typing. A C++ template error is conceptually very similar to a runtime type error that might occur in a dynamic language like Python. For example, consider the following Python code: def dosomething(x):
x.foo() Here, if x doesn't have a foo() method, the Python interpreter throws an exception, and displays a stack trace along with a pretty clear error message indicating the problem. Even if the error isn't triggered until the interpreter is deep inside some library function, the runtime-error message still isn't anywhere near as bad as the unreadable vomit spewed by a typical C++ compiler. So why can't a C++ compiler be more clear about what went wrong? Why do some C++ template error messages literally cause my console window to scroll for over 5 seconds? | Template error messages may be notorious, but are by no means always long and unreadable. In this case, the entire error message (from gcc) is: test.cpp: In function ‘void dosomething(T&) [with T = X]’:
test.cpp:11: instantiated from here
test.cpp:6: error: no match for ‘operator+=’ in ‘x += 5’ As in your Python example, you get a "stack trace" of template instantiation points, and a clear error message indicating the problem. Sometimes, template-related error messages can get much longer, for various reasons: The "stack trace" might be much deeper The type names might be much longer, as templates are instantiated with other template instantiations as their arguments, and displayed with all their namespace qualifiers When overload resolution fails, the error message might contain a list of candidate overloads (which might each contain some very long type names) The same error may be reported many times, if an invalid template is instantiated in many places The main difference from Python is the static type system, leading to the necessity of including the (sometimes long) type names in the error message. Without them, it would sometimes be very difficult to diagnose why the overload resolution failed. With them, your challenge is no longer to guess where the problem is, but to decipher the hieroglyphics that tell you where it is. Also, checking at runtime means that the program will stop on the first error it encounters, only displaying a single message. A compiler might display all the errors it encounters, until it gives up; at least in C++, it should not stop on the first error in the file, since that may be a consequence of a later error. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70086",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17252/"
]
} |
70,196 | In the perils of java schools Joel discusses his experience at Penn and the difficulty of "segmentation faults". He says [segfaults are difficult until you] "take a deep breath and really try to force your mind to work at two different levels of abstraction
simultaneously." Given a list of common causes for segfaults, I don't understand how we have to work at 2 levels of abstraction. For some reason, Joel considers these concepts core to a programmers ability to abstract. I don't want to assume too much. So, what is so difficult about pointers/recursion? Examples would be nice. | I first noticed that pointers and recursion were hard in college. I had taken a couple of typical first year courses (one was C and Assembler, the other was in Scheme). Both courses started with hundreds of students, many of whom had years of high-school level programming experience (typically BASIC and Pascal, in those days). But as soon as pointers were introduced in the C course, and recursion was introduced in the Scheme course, a huge number of students--perhaps even a majority--were completely flummoxed. These were kids who had written a LOT of code before and had no problems at all, but when they hit pointers and recursion, they also hit a wall in terms of their cognitive ability. My hypothesis is that pointers and recursion are the same in that they require you to keep two levels of abstraction in your head at the same time. There's something about the multiple-levels-of-abstraction that requires a type of mental aptitude that it's very possible some people will never have. With pointers, the "two levels of abstraction" are "data, address of data, address of address of data, etc.," or what we traditionally call "value vs. reference." To the untrained student, it's very hard to see the difference between the address of x and x itself . With recursion, the "two levels of abstraction" are understanding how it's possible for a function to call itself. A recursive algorithm is sometimes what people call "programming by wishful thinking" and it's very, very unnatural to think of an algorithm in terms of "base case + inductive case" instead of the more natural "list of steps you follow to solve a problem." To the untrained student looking at a recursive algorithm, the algorithm appears to beg the question . I would also be perfectly willing to accept that it is possible to teach pointers and/or recursion to anyone... I don't have any evidence one way or another. I do know that empirically, being able to really understand these two concepts is a very, very good predictor of general programming ability and that in the normal course of undergraduate CS training, these two concepts stand as some of the biggest obstacles. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70196",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11107/"
]
} |
70,291 | I'm curious what are the drawbacks to using the ActiveRecord pattern for data access/business objects. The only one I can think of off the top of my head is that it violates the Single Responsibility Principle, but the AR pattern is common enough that this reason alone doesn't seem "good enough" to justify not using it (of course my view may be skewed since often none of the code I work with follows any of the SOLID principles). Personally I am not a fan of ActiveRecord (with the exception of writing a Ruby on Rails application, where AR feels "natural") because it feels like the class is doing too much, and data access shouldn't be up to the class itself to handle. I prefer to use Repositories that return business objects. Most of the code I work with tends to use a variation of ActiveRecord, in the form of (I do not know why the method is a boolean): public class Foo
{
// properties...
public Foo(int fooID)
{
this.fooID = fooID;
}
public bool Load()
{
// DB stuff here...
// map DataReader to properties...
bool returnCode = false;
if (dr.HasRows)
returnCode = true;
return returnCode;
}
} or sometimes the more "traditional" way of having a public static Foo FindFooByID(int fooID) method for the finders and something along the lines of public void Save() for saving/updating. I get that ActiveRecord is typically much simpler to implement and use, but it seems a little too simple for complex applications, and you could have a more robust architecture by encapsulating your data access logic in a Repository (not to mention making it easier to swap out data access strategies, e.g. maybe you use stored procs + DataSets and want to switch to LINQ or something). So what are other drawbacks to this pattern that should be considered when deciding if ActiveRecord is the best candidate for the job? | The main drawback is that your "entities" are aware of their own persistence, which leads to a lot of other bad design decisions. The other issue is that most active record toolkits basically map 1 to 1 to table fields with zero layers of indirection. This works on small scales but falls apart when you have trickier problems to solve. Having your objects know about their persistence means you need to do things like:
easily have database connections available everywhere, which typically leads to nasty hardcoding or some sort of static connection that gets hit from everywhere;
your objects tend to look more like SQL than objects;
it's hard to do anything in the app disconnected, because the database is so ingrained.
There ends up being a whole slew of other bad decisions on top of this.
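For contrast, the repository shape the question hints at keeps the entity persistence-ignorant and pushes all data access behind an interface. A rough, self-contained C# sketch (the names are illustrative, not a drop-in replacement for the Foo class above):

using System.Collections.Generic;

// The entity holds state and behaviour only - no Load/Save, no DataReader.
public class Foo
{
    public int FooId { get; set; }
    public string Name { get; set; } = "";
}

// Every persistence concern lives behind this interface.
public interface IFooRepository
{
    Foo FindById(int fooId);
    IEnumerable<Foo> FindAll();
    void Save(Foo foo);
}

// One possible implementation; an ADO.NET- or ORM-backed version would
// implement the same interface, so callers never see the database.
public class InMemoryFooRepository : IFooRepository
{
    private readonly Dictionary<int, Foo> _store = new Dictionary<int, Foo>();

    public Foo FindById(int fooId) => _store[fooId];
    public IEnumerable<Foo> FindAll() => _store.Values;
    public void Save(Foo foo) => _store[foo.FooId] = foo;
}

Swapping stored procedures for LINQ then just means writing another implementation of the interface, and working disconnected stops being a fight with the domain objects. | {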
"source": [
"https://softwareengineering.stackexchange.com/questions/70291",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22390/"
]
} |
70,357 | Somewhere I saw a rant against java/javac allegedly using a mix of Windows and Unix style like java -classpath ... -ea ... Something IMHO, it is no mix, it's just like find works as well, isn't it? AFAIK, according to POSIX, the syntax should be like java --classpath ... --ea ... Something and -abcdef would mean specifying 6 short options at once. I wonder which version leads in general to less typing and less errors. I'm writing a small utility in Java and in no case I'm going to use Windows style /a /b since I'm interested primarily in Unix. What style should I choose? | You can find the POSIX argument conventions in the Utility Conventions chapter. The POSIX style consists of options with a single dash followed by a single letter indicating the option, with the argument value separated from the option by a space. There are exceptions to the rules - find , for example - but these are because of the historical Unix precedents. The X Windows (X11) uses find -like single-dash, long name options. The double-dash long name options were pioneered by GNU (after a detour using + as a prefix). See this StackOverflow question for a discussion of the wide variety of known command line argument handling systems - there are lots. ( Since this was written, the powers-that-be decided the question SO 367309 was not a good fit for SO. I've transferred the answer to another question, What is the general syntax of a Unix shell command? . ) You could extend the list of techniques to cover git (and a number of other systems) where you get a structure like: basecommand [ global options ] subcommand [ sub-command options ] [name ...] There may be many sub-commands, each with its own lexicon of options. Of course, Windows uses (used) slash ' / ' to indicate options instead of dash ' - '. JCL (for z/OS, and OS/360, and intermediate systems) tends to use positional parameters separated by commas, and is generally regarded as not being user-friendly or a good interface. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70357",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14167/"
]
} |
70,419 | The single largest factor in what is holding me back from being a stellar developer is my reliance on others. I feel like I ask too many questions because I fear the consequences of breaking everything and holding everyone back. So I'm overly cautious by asking so many questions that I basically get the answers after enough questioning. I've recognized that's bad but I want to stop it. Part of it comes that there are times where I simply don't know the code (either it's a branch I've never worked with or it's a brand new product), but I want to rely on others less. To preface, these kinds of questions are not the ones about generic patterns or languages: usually my questions revolve around how we do code at our company, and how we get things to work in our ecosystem. I want to be able to take specs and roll with them without having to feel like I need to get help every step of the way. Is this normal? Have you been through this, and if so, how did you get over it? | I see some new developers come into a job and immediately feel inadequate. I did the same early in my career. I think there's at least two major issues most smart guys need to overcome: time perception and their own natural ability. Time Perception Smart guys are used to solving problems relatively quickly. I remember being aghast when I had to spend an hour on a single calculus problem. Spending 60 minutes on a problem is nothing anymore. Those days are over...bury them and say goodbye. The complexity and size of most software today is outrageous. People don't understand all the tools they have to use to get things done any longer. One of the key men of the JavaScript language, Douglas Crockford said, "Misapplication of standard tools...is the new standard." There just isn't enough time in the world to learn all the dev tools. Natural Ability Your intelligence, problem solving ability and natural skills got you into the whole developer gig in the first place. There's just no room for anything less in this field. So what do you do with 100,000 lines of code, languages and frameworks you barely know, design patterns and paradigms people are pushing at you, guys who know most of it like the back of their hand, customers who want it yesterday, and a boss who expects the world of you? Freak out as your natural ability fails. Yea, that's normal. I still freak out with some of the stuff that gets thrown my way. What can be done? Its time to improve those natural abilities with good old fashioned hard work. Work on breaking problems down into smaller parts. And realize that unlike a lot of things you may have done in the past, these problems do take a great deal of time to solve. So don't give up after just 15 minutes of examining a complex problem. Instead, break the problems down and stop watching the clock. After a while, 30 minutes of working with a problem really isn't what it used to be. Self confidence plays a big role in ones ability to self-govern. So does the team, especially the more experienced seniors. It is good to be careful about not breaking things, but this doesn't mean you need to ask a constant stream of questions. Instead, make use of the source control. As long as you don't checkin a change you can't break the main product and make other devs angry. Also, make changes that you can understand and test and be sure to test them well before checkin. I even have a little test project that I use to write one-off, simple programs so I don't have to worry about all the goings-on in the main application. 
Finally, remember that every decision comes with some level of give and take. There is no moving forward without making some kind of sacrifice at some level. Don't strive for perfection, strive for awesomeness and be mindful of your actions. Because you always need to be prepared to take criticism and explain your ideas and why you made them. Be proud of the decisions you make. Even when they are wrong there is much to be learned. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70419",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13945/"
]
} |
70,612 | How was it decided that if you have an array/struct or anything similar in a programming language it should be zero-based? Wouldn't it have been easier if it was 1-based? After all, when we are taught to count, we start with one. | All good answers. A good part of my "career" was spent in Fortran, where all arrays are 1-based. It's OK if you're writing math algorithms over vectors and matrices, where indices naturally go 1 .. N. But as soon as you start trying to do computer-science type algorithms, where you have a big array and you are working on pieces of it, as in binary search, or heap sort, or if it is a memory array and you are writing memory allocation and freeing algorithms, or starting to act like parts of it are actually multidimensional arrays that you have to calculate indices in, that 1-based stuff gets to be a real source of confusion. For example, if you have a 1-dimensional array A, and you want to treat it as a 2-dimensional NxM array, where I and J are the index variables, in C you just say: A[ I + N*J ] but in Fortran you say A( (I-1) + N*(J-1) + 1 )
or
A( I + N*(J-1) ) If it was 3-dimensional, you had to do A( I + N*(J-1) + N*M*(K-1) ) (That's if it was column-major order, as opposed to row-major order which is more common in C.) What I learned to do in Fortran, when doing string manipulation algorithms, was never to think of an index I as being the position of an element in an array.
Rather I would think of a "distance" N as being the number of elements coming before the element of interest.
In other words, always think in terms of "number of elements" rather than "index of element". That enabled me to work within what was an unnatural indexing scheme.
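The same arithmetic is easy to play with in any 0-based language; this small C# snippet just mirrors the expressions above, treating a flat array as an N-by-M grid (the concrete numbers are arbitrary):

using System;

class IndexingDemo
{
    static void Main()
    {
        int N = 4, M = 3;
        int[] a = new int[N * M];

        // 0-based coordinates, as in the C expression A[ I + N*J ].
        int I = 2, J = 1;
        int zeroBased = I + N * J;

        // The same cell counted from 1, as in the Fortran expression.
        int i1 = I + 1, j1 = J + 1;
        int oneBased = (i1 - 1) + N * (j1 - 1);

        Console.WriteLine(zeroBased); // 6
        Console.WriteLine(oneBased);  // 6 - identical, minus the extra "-1" noise
        Console.WriteLine(a.Length);  // 12
    }
}

The point is not performance but bookkeeping: with 0-based indexes the "number of elements before this one" view falls out for free. | {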
"source": [
"https://softwareengineering.stackexchange.com/questions/70612",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4275/"
]
} |
70,708 | Last week I was wondering, with compilers getting better and better at optimizing, will there be a point when there is no need for hand-written assembly? Are there still specialized fields where the compilers aren't smart enough to produce code that rivals hand-written assembly? | Will there be a point when there is no need for hand-written assembly? Never. There is always a need for specialized hardware-specific instructions. Are there still specialized fields where the compilers aren't smart enough to produce code that rivals hand-written assembly? The kernel's interrupt management, features of I/O drivers, and locking and thread synchronization often must include some assembler, because they make use of instructions that are outside the standard instruction set used by a compiler. Indeed, as Intel moves forward trying to resolve the memory ordering issues they have created (http://www.mpdiag.com/intel_arch.html) there may be additional or different instructions that may have to be added or adjusted inside the common OS kernels; things that compilers won't normally generate for end-user applications. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70708",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1320/"
]
} |
70,765 | What are the consequences of leaving work on time in software companies? What's the professional way to deal with this? [Editorial addition] The question is about working standard hours and not putting in extra hours without being explicitly asked to. Is working extra hours a norm or an expected work attitude in software companies and/or for a software developer role? How is one going to be perceived by colleagues and peers if one only worked the standard hours specified by local laws or the contract? | The consequence? You get home on time... That's about it. edit In response to some of the commenters: This answer is directly related to a conversation I had with my manager very recently. I have weekly 1x1 meetings with my manager, and a few weeks ago it went something like this: Me: I have a concern about last week. I was doing [some test procedure], like I talked to you about, but some of the people on the team (I didn't name names) were telling me things like "don't bother" and "it's a waste of time" and even gave me some dirty looks for doing it. Boss: Wait, what? Did I say anything? Me: Uhh... no... Boss: Then who cares what they think? Me: Good point... Boss: Yea, don't worry about it. If there's a problem, I'll tell you. Point being: it doesn't matter what your coworkers think about whatever you do; just do good work. You'll get respect, and won't need to stay late. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70765",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20128/"
]
} |
70,831 | I have read the controversial article Teaching FP to freshmen posted by Robert Harper, who is a professor at CMU. He claimed that CMU would no longer teach object-oriented programming in the introductory course since it is "unsuitable for a modern CS curriculum." And he claimed that: Object-oriented programming is eliminated entirely from the introductory curriculum, because it is both anti-modular and anti-parallel by
its very nature. Why consider OOP as anti-modular and anti-parallel? | Please consider, that Harper's needs for teaching an introductory CS curriculum class are very different from the needs of a real life project . His job is to teach fundamental concepts (e.g. modularity, parallelism, induction) to freshmen. As such it is very important, that the language (and paradigm) choosen can express these concepts with as little ceremony (syntactical and conceptual) as possible. Familiarity, tool support, available libraries, execution performance etc. are completely irrelevant in this context. So please keep this in mind when considering the following... The view that OO is anti-modular results from the large number of dependencies to other classes even objects of well designed classes tend to end up with. That this is a problem - even in the eyes of proponents of OO - becomes clear when you look at the proliferation of Dependency Injection frameworks , articles, books and blog posts in the last years (also the rise of mocks and stubs is interesting). Another hint is the importance of Design Patterns and the complexity of implementing them - as compared to some other programming paradigms - e.g. Factories, Builder, Adapter, Bridge, Decorator, Facade, Command, Iterator, Mediator, Observer, Strategy and Template Method and maybe the Composite are all in some way related to improving the modularity of OO code. Inheritance is also problematic (e.g. Fragile Base Class Problem ) and (subtype) polymorphism seduces one to spilt up the implementation of an algorithm between multiple classes, where changes can ripple through the whole inheritance chain (up and down!). The charge of being anti-parallel is related to the emphasis of state compared to computation (aka. mutability vs. immutability). The former makes it more involved to express dependencies of subcomputations (which is Harper's take on parallelism!) as you usually can't infer from the location the state is managed (aka. the file, where the instance variable is declared) which outside actors will change it at what point in time. An emphasis on immutability and computation makes expressing dependencies of subcomputations much easier, as there is no state management, just functions/computations which are combined at the place where you want to express the dependencies of subcomputations. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/70831",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3149/"
]
} |
70,877 | I was reading "Coders at Work" and have faced the fact that some of the professionals interviewed in the book are not so enthusiastic about design patterns. I think that there are 2 main reasons for this: Design patterns force us to think in their terms. In other words, it's almost impossible to invent something new (maybe better). Design patterns don't last forever. Languages and technologies change fast; therefore, design patterns will eventually become irrelevant. So maybe it's more important to learn how to program properly without any particular patterns and not to learn them. The point also was that usually when people face a problem and they don't have much time, they try to use a pattern. This means copying and pasting existing code into your project with minor changes in order to get it working. When it's time to change or add something, a developer doesn't know where to start because it's not his code and he's not deeply familiar with it. | For my money, I think everyone's missing the point of design patterns. It's rare that I sit wondering which pattern I should use in a given situation. Also, I was using most of those patterns long before I knew they had names. The power of design patterns is in communication. It is much quicker for me to say "use a Strategy for that" than to describe in detail what I am suggesting. It is much easier for us to debate the benefits of fat domain models vs. transaction scripts if we all know what those two terms mean. And so on. And most powerfully of all, if I have named a class FooBuilder then you know that I'm using the Builder pattern to generate my Foo. Even if you don't know what I'm talking about when I say "Observer pattern is ideal for that," you will be able to go off and google it pretty easily. In that sense, the power of design patterns will never fade. | {
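For illustration only, here is a minimal sketch of what a class named FooBuilder usually signals to a reader; the Foo and FooBuilder types below are hypothetical, not taken from the question:

// Hypothetical types, shown only to illustrate what the name "FooBuilder" communicates.
public class Foo
{
    public string Name { get; }
    public int Size { get; }

    public Foo(string name, int size)
    {
        Name = name;
        Size = size;
    }
}

public class FooBuilder
{
    private string _name = "default";
    private int _size = 1;

    // Each step returns the builder so calls can be chained.
    public FooBuilder WithName(string name) { _name = name; return this; }
    public FooBuilder WithSize(int size) { _size = size; return this; }

    // Build produces the finished Foo from the accumulated settings.
    public Foo Build() => new Foo(_name, _size);
}

// Usage: var foo = new FooBuilder().WithName("example").WithSize(42).Build();

Seeing the name alone is often enough to tell a colleague how the class is meant to be used, which is exactly the communication value being described.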
"source": [
"https://softwareengineering.stackexchange.com/questions/70877",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21746/"
]
} |
70,996 | Say you have the following for loop*: for (int i = 0; i < 10; ++i) {
// ...
} which it could commonly also be written as: for (int i = 0; i != 10; ++i) {
// ...
} The end results are the same, so are there any real arguments for using one over the other? Personally I use the former in case i for some reason goes haywire and skips the value 10. * Excuse the usage of magic numbers, but it's just an example. | The reason to choose one or the other is because of intent and as a result of this, it increases readability . Intent: the loop should run for as long as i is smaller than 10, not for as long as i is not equal to 10. Even though the latter may be the same in this particular case, it's not what you mean, so it shouldn't be written like that. Readability: a result of writing down what you mean is that it's also easier to understand. For example, if you use i != 10 , someone reading the code may wonder whether inside the loop there is some way i could become bigger than 10 and that the loop should continue (btw: it's bad style to mess with the iterator somewhere else than in the head of the for -statement, but that doesn't mean people don't do it and as a result maintainers expect it). | {
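As a minimal sketch of the failure mode hinted at above (the step of 2 is invented purely for illustration), compare what happens when the increment no longer lands exactly on the terminal value:

// With a step other than 1, "i < 10" still terminates as intended...
for (int i = 0; i < 10; i += 2)
{
    // runs for i = 0, 2, 4, 6, 8 and then stops
}

// ...while "i != 10" can be skipped entirely.
for (int i = 1; i != 10; i += 2)
{
    // i takes the values 1, 3, 5, 7, 9, 11, ... and never equals 10,
    // so this loop never terminates
}

Writing the condition as "less than" expresses the actual intent and stays correct even when the loop body or step changes later.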
"source": [
"https://softwareengineering.stackexchange.com/questions/70996",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2210/"
]
} |
71,148 | Why do many open source projects collaborate primarily through mailing lists rather than through, say, forums? I may be ignorant in my assessment, but I, along with my communication teacher, think mailing lists are rather inefficient: It's hard to reference old messages. You have to wait until an archiving site picks up the message you want to link to, then look it up. It's hard to reply to messages you haven't received from subscription, since you have to manually copy the sender and message contents. It's impossible to moderate threads of discussion. Posts cannot be deleted or modified without supernatural powers. It's tricky to keep threads together (namely, to ensure all participants in the conversation are copied/replied to). It's tricky for users to participate in multiple high-volume mailing lists, as they have to subscribe and set up filters (or just be really involved). What real edge do mailing lists have that didn't occur to me? | In addition to the "because they are used to it" arguments, email has a few other huge advantages: You already have an email address, no need to sign up for yet another messageboard account for every project. With a messageboard, you have to actively visit the page and refresh it to see new messages. On the other hand, most people have their email client (Outlook, Mail, Gmail) open all day and it refreshes automatically as soon as a new message comes in. In short, using a messageboard requires me to change my habits in a significant way. On the other hand, mailing lists fit simply & easily into my existing routine, so adoption is much easier. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71148",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3650/"
]
} |
71,152 | I introduced Mercurial to my department. I love it, but it is my first version control experience. I am using it with NetBeans PHP for web development. Another developer who works on internal company applications likes using Visual Source Safe and does not want to switch. He works in a Visual Studio environment. All the other developers have bought into Mercurial except this one. For the most part though, we all work pretty independently. I am trying to move this department in the right direction, I have set everyone up with an account on Kiln, I was hoping to get everyone using Fogbugz down the road as well (since there is currently no bug database being maintained.) I have never used VSS but I hear very bad things about it. Would it be better to just allow him to continue using VSS if that's what he prefers, or would it be in best interest to get him on board with Mercurial? | would it be better to just allow him to continue using vss if thats what he prefers No. There is no point in running two different source management systems in parallel. That defies the very idea that all developers are connected to the same repository and take full advantages of it. A single developer using a different system alone effectively isolates himself from the team. Even if projects do not cross, it is still a bad thing to do. Doubled maintenance efforts for both systems is another argument here. I think you should use your authority or escalate the issue to the management to quickly migrate the content from VSS to Mercurial and then shut VSS down. P.S. Speaking of VSS, it is notorious for losing check-ins or otherwise damaging code when you least expect it. It does work but it regularly goes on the nerves. If you have a better alternative, avoid VSS. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71152",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
71,190 | Arguing for code generation, I am looking for some examples of ways in which it increases code quality. To clarify what I mean by code generation, I can talk only about a project of mine: We use XML files to describe entity relationships in our database schema, so they help us generate our ORM framework and HTML forms which can be used to add, delete, and modify entities. To my mind, it increases code quality because human error is reduced. If something is implemented incorrectly, it is broken in the model, which is good because the error might appear sooner since more generated code is broken too. Since I was asked for the definition of code quality , let me clarify this, what I meant is software quality . Software Quality :
It is not one attribute but many, e.g. efficiency, modifiability, readability, correctness, robustness, comprehensibility, usability, portability etc. which impact on each other. | Code generators cannot generate better code than the person who wrote the generator. My experience with code generators is that they are just fine as long as you never have to edit the generated code . If you can hold to that rule, then you're good to go. This means you can reliably re-generate that part of the system with confidence and speed, automatically adding more features if needed. I guess that could count for quality. I once heard an argument for code generators that a single programmer can produce so-and-so many lines of code per day and with code generators, they could produce thousands of lines! Obviously that is not the reason we are using generators. | {
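To make the point concrete, here is a rough, hypothetical sketch of the idea (not the XML-driven tooling the question describes): a generator is just code that writes code, so its output can only encode decisions its author already made.

using System.Collections.Generic;
using System.Text;

// A toy generator: given property names and types, emit a C# class as text.
// Real generators (ORM mappers, form builders) are far more elaborate, but the
// principle is the same: the output is only as good as these few lines.
public static class ToyClassGenerator
{
    public static string Generate(string className, IDictionary<string, string> properties)
    {
        var sb = new StringBuilder();
        sb.AppendLine($"public partial class {className}");
        sb.AppendLine("{");
        foreach (var p in properties)
        {
            sb.AppendLine($"    public {p.Value} {p.Key} {{ get; set; }}");
        }
        sb.AppendLine("}");
        return sb.ToString();
    }
}

// Usage: ToyClassGenerator.Generate("Customer",
//     new Dictionary<string, string> { ["Id"] = "int", ["Name"] = "string" });

Marking the emitted class as partial lets hand-written logic live in a separate file, so the generated file is never edited and can be regenerated at any time, which is the rule the answer insists on.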
"source": [
"https://softwareengineering.stackexchange.com/questions/71190",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23813/"
]
} |
71,230 | To extend from the question things like googledocs can handle word, excel and powerpoint so why aren't all programs being moved to web applications with the use of smartphones(blackberries, iphones and androids) and tablet pcs(ipad) increasing why isn't there a move to get every program browser based? With the increased capability of CSS and javascript surely this makes sense no need for clunky java programs which are 10 years old which just write to a database, that a rails/django application could probably do quicker and cost less money to maintain. | Because not all programs require for their operation resources reachable over the Internet. Many are just local applications that in their concepts have nothing to do with accessing the web. Besides, Internet connectivity isn't ubiquitously available in every country, in every town, in a non-inhabited place, in the train, on the plane etc. And even if it were, there is always the risk of the connection going down and interrupting your work at the worst possible moment. And even then, if you were to accept that risk, there is the additional danger that the service loses your files, discloses them to a third party due to a bug or a security issue, suddenly starts to charge [more] for the usage or even goes out of business one day. You want to have the security of a local application, local machine and local data storage. But the general trend holds, many desktop applications are being transformed into web software. You may want to read this: All Programming is Web Programming | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11950/"
]
} |
71,287 | I have noticed lately that functional programming languages are gaining popularity . I recently saw how the Tiobe Index shows an increase in their popularity in comparison to the last year although most of them do not even reach the top 50 most popular languages according to this index. And this has been the case for quite some time. Functional programming simply has not become as popular as other models (i.e., object-oriented programming). I have seen a reborn interest in the power of functional programming, however, and now that multicores are more and more popular, developers have started to show interest in other models of concurrency already explored in the past by languages like Haskell and Erlang. I see with great interest the fact that despite their lack of significant community acceptance, more and more languages of this sort continue to emerge. Clojure (2007), Scala (2003), F# (2002) are just three examples of the recent last decade. I have been, myself, investing some time learning Haskell and Scala. And I find great potential in the paradigm which for me was new despite being out there for so long. And of course, my biggest question is if any of these are going to become popular enough as to consider putting any effort in them, but this is a question that not even Mandrake could answer, despite all the fuss people is making about them. What I do want to ask is: In which scenarios should I consider a functional programming language better suited to do a given task? Besides the so recently popular multicore problem of parallel programming. If I decided to switch to a functional programming language which would you consider being the biggest pitfalls that I would face? (Besides the paradigm change and the difficulty to evaluate performance due to lazy evaluation). With so many functional programming languages out there, how would you choose the one the better suit your needs? Any recommendations for further research will be more than welcome. I have searched the web for opinions, and it appears the all this renew popularity come from the idea that now we're about to hit the wall of Moore's Law and functional programming languages will come and heroically save us. But if this is the case, I would say there are more probabilities of existing popular languages adapting to the paradigm. Some of you, with more experience working every day with these languages perhaps, can offer more insight on the subject. All your opinions will be better appreciated and carefully considered. Thanks in advance! | In which scenarios should I consider a functional programming languages better suited to do a given task? Besides the so recently popular multicore problem of parallel programming. Anything that involves creating sequence of derived data elements using a number of transformation steps. Essentially, the "spreadsheet problem". You have some initial data and set of row-by-row calculations to apply to that data. Our production applications do a number of statistical summaries of data; this is all best approached functionally. One common thing we do is a match-merge between three monstrous data sets. Similar to a SQL join, but not as generalized. This is followed by a number of calculations of derived data. This is all just functional transformations. The application is written in Python, but is written in a functional style using generator functions and immutable named tuples. It's a composition of lower-level functions. Here's a concrete example of a functional composition. 
for line in ( l.split(":") for l in ( l.strip() for l in someFile ) ):
print line[0], line[3] This is one way that functional programming influences languages like Python. Sometimes this kind of thing gets written as: cleaned = ( l.strip() for l in someFile )
split = ( l.split(":") for l in cleaned )
for line in split:
print line[0], line[3] If I decided to switch to a functional programming language which do you consider are the biggest pitfalls that I will face? (Besides the paradigm change and the difficulty to evaluate performance due to lazy evaluation). Immutable objects is the toughest hurdle. Often you'll wind up calculating values that create new objects instead of updating existing objects. The idea that it's a mutable attribute of an object is a hard mental habit to break. A derived property or method function is a better approach. Stateful objects are a hard habit to break. With so many functional programming languages out there, how would you choose the one the better suit your needs? It doesn't matter at first. Pick any language to learn. Once you know something, you're in a position consider picking another to better suit your needs. I've read up on Haskell just to understand the things Python lacks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71287",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23837/"
]
} |
71,424 | We've all seen (and most of us have written) plenty of poorly written code. Why? What makes us adopt poor practices rather than good ones? The most obvious answer (to me) is "ignorance", but I'm sure that isn't the only reason. What others are there? What can we do to overcome the temptation to write bad code? | Resistance to change. That is the major driver behind ignorance, poor management etc. Chapter 30 of Peopleware 2nd Edition is devoted to this topic. And it quotes from a book by another fairly well known consultant, written a bit earlier though: And it should be considered that nothing is more difficult to handle, more doubtful of success, nor more dangerous to manage, then to put oneself at the head of introducing new orders. For the introducer has all those who benefit from the old orders as enemies, and he has lukewarm defenders in all those who might benefit from the new orders. Niccolo Machiavelli: The Prince (1513) DeMarco and Lister go on stating the mantra to keep in mind before asking people to change: The fundamental response to change is not logical, but emotional. The process of change is rarely a straight and smooth drive from the current suboptimal conditions to the new, improved world. For any nontrivial change, there is always a period of confusion and chaos before arriving to the new status quo . Learning new tools, processes and ways of thinking is hard , and takes time. During this transition time productivity drops, morale suffers, people may complain and wish if only it was possible to return to the good old ways of doing things. Very often they do, even with all the problems, because they feel that the good ol' known problems are better than the new, unknown, frustrating and embarrassing problems. This is the resistance which must be tactfully and gently, but decidedly overcome in order to succeed. With patience and perseverance, eventually the team arrives from Chaos to the next stage, Practice and Integration. People, although not completely comfortable with the new tools/processes, start to get the hang of these. There may be positive "Aha" experiences. And gradually, the team achieves a new status quo. It is really important to realize that chaos is an integral, unavoidable part of the process of change . Without this knowledge - and preparation for it -, one may panic upon hitting the Chaos phase, and mistake it with the new status quo. Subsequently the change process is abandoned and the team returns to its earlier miserable state, but with even less hope of ever improving anything... For reference, the phases described above were originally defined in the Satir Change Model (named after Virginia Satir ). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71424",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1928/"
]
} |
71,494 | In my understanding, the partial keyword does nothing but allow a class to be split between several source files. Is there any reason to do this other than for code organization? I've seen it used for that in generated UI classes. It seems a poor reason to create a whole keyword. If a class is big enough to require multiple files, it probably is doing too much. I thought that perhaps you could use it to partially define a class for another programmer somewhere to complete, but it would be better to make an abstract class. | It is very useful in every scenario where one part of a class is generated by some custom tool, because it allows you to add custom logic to the generated code without inheriting the generated class. By the way, there are also partial methods for the same reason. It is not only about UI; other technologies like Linq-To-Sql or Entity Framework also use this quite heavily.
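A small sketch of the typical designer-file split; the file names and the OnNameChanged hook are invented for illustration:

// Person.Designer.cs -- regenerated by a tool; never edited by hand.
public partial class Person
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set { _name = value; OnNameChanged(); }
    }

    // Partial method: the generated half declares the hook...
    partial void OnNameChanged();
}

// Person.cs -- hand-written; survives regeneration of the designer file.
public partial class Person
{
    // ...and the hand-written half may (or may not) implement it.
    // If no implementation exists, the compiler removes the call entirely.
    partial void OnNameChanged()
    {
        System.Console.WriteLine("Name is now " + _name);
    }
}

Because the custom logic lives in its own file, regenerating the designer file cannot wipe it out, which is the whole point of the keyword in code-generation scenarios.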
"source": [
"https://softwareengineering.stackexchange.com/questions/71494",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6415/"
]
} |
71,526 | I have personally never done this. I don't understand why so many sites do, if you do your development on a development server why would you ever need to shut down your production site? I have always wondered about this. What are they doing during this time, what requires doing this? | Big kicker for anything with big scale is that if one is changing database schemas in some way, one typically has some big, nasty maintenance scripts to run. Now, these might take a second or so to run with your development dataset. But when you start measuring data in terabytes and petabytes, even adding a single column to a table can take hours. So no matter how quick and automated the deployment is, you've still got data maintenance issues to get through. If you plan really well, you can put up a read-only mirror of the site while you are undergoing the process, but for many sites read-only is pointless and thus not worth the effort. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71526",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1785/"
]
} |
71,561 | Does Java license allow other companies to create their own versions of Java language or just implement it accurately? Same question about JVM. I heard about Sun suing Microsoft for changing their .NET version of Java implementation and Java for Google Android, but I just can't grasp the whole concept as a totality. | You can write a compiler that implements the Java Language Specification or write a JVM that implements the Java Virtual Machine specification, but when you officially want to call it "Java", you have to prove it is compatible by passing the tests of the TCK (technology compatibility kit) and pay for a license from Oracle. Oracle doesn't make it easy for other parties to do this, though. Apache has their own implementation of the JVM ( Apache Harmony ) but previously Sun, now Oracle, is not cooperating in making the TCK available nor let Apache get a license, which has led to a lot of resentment between Apache and Oracle. Long ago Microsoft had their own version of Java (that was indeed called "Java"). They tried to change it to make it Windows-specific, which Sun of course didn't like. There was a lawsuit, Microsoft lost, quit their own Java version and created .NET, which is a completely different thing that just happens to work a lot like how Java works... The lawsuit about Android isn't based on this at all; Google isn't saying that Android is Java. That lawsuit is about patents; Oracle has patents on a number of ideas and concepts in their own JVM implementation and is claiming that Google is using the same patented ideas in Android without getting a patent license from Oracle. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71561",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9437/"
]
} |
71,593 | I just got hired for my first programming job! I am 25 and have been using Java academically for 6 years. Now that I have been hired I am nervous that my skills will not be what the employer expects. I am afraid I will be assigned to a project and have to ask lots of questions that my coworkers will feel are amateur. Is this a rational fear? What were your first programming job experiences? What should I expect? What advice could you give me? Thanks. | There are too many things you can't learn in college . There are also many things that are specific to the company . In both cases, you have a choice: either you ask your colleagues for explanation, or you don't ask anything to anybody, and take the risk to make a mistake. If I hire someone who doesn't have a professional experience, I would not mind if she asks lots of questions the first weeks or months. On the other hand, if she fears asking for help and wastes hours solving a problem that another developer may solve within seconds or makes mistakes which could be easily avoided by someone more open to communication with peers, it will bother me much more. Don't avoid questions. It is a good way to both learn things and socialize with the people you will work with. But: Don't ask questions just to ask them. Remember that other people have their own work to do and their own deadlines. They have other things to do than spending their time helping you for every task. Don't expect other people doing your job (just like it is never welcome to ask on Stack Overflow to do your job). Note that if you disturb a developer, she loses ten or more minutes to concentrate again. So don't ask questions if you can find yourself an answer within seconds on the internet. Example of bad questions: "Hey, I want to create an array like { 1, 2, 3, ... n-1, n } in PHP. Can you help me?" Here, you just show that not only you don't know how to use PHP documentation, but you don't even bother about searching Google or thinking for a moment. It's ok if you don't know about range method in PHP. It's not ok if you cannot find it yourself. "I'm trying to implement plugins, but I don't know what CAS is in .NET Framework. Can you explain me what's this?" Yes, it's easier to ask for explanation, but what about searching Google for "CAS .NET Framework 4.0" first? "Why are you forcing me to use version control? I always worked without it and I don't understand why would I need it now." Well, your colleagues don't have to explain why you must use it. First, it is a guideline of your company. You're not here to dictate how to work. Second, there are plenty of books, blog articles and answers on SE websites explaining why everyone must use version control. You just have to search. Examples of questions which are welcome: "I want to commit the changes to the version control, but there is a strange error message. It says: [...]. Maybe you know what's this?" Chances are your colleague have seen this message dozens of times before, so it's ok to ask this. "I'm reading the page 9 of the requirements for this project, part 4.2.1, but I'm not sure: is it to me or to the database administrator to do this part?" It's better to ask, than to spend three days to do the work which is already done by the dba. "I need to implement plugins, but after reading this and this, I still don't understand what is a sandbox and how is this related to security. Could you explain me this later when you'll be free?" You searched. You made an effort. You didn't understand. 
It's ok to not understand everything, and it would be better to ask for explanation rather than spending a weekend searching for it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23917/"
]
} |
71,710 | I've had a couple of discussions with a co-worker about the use of single letter variable names in certain circumstances inside our codebase, at which we both disagree. He favours more verbose naming convention for these, and I do not. There are three scenarios in my opinion where I use single letter variable names: Loops - i for(int i = 0; i < 10; i++) { ... } Lambda expressions in C# - x/y/z : .Where(x => x == 5) Exceptions - e : try { ... } catch(ExceptionType e) { /* usage of 'e' */ } These are the only scenarios where I would use it, and I obviously use more verbose naming conventions elsewhere. My colleague put forward the following arguments for exceptions and loops: i - it doesn't mean anything. e - it's the most common letter in the English language. If you wanted to search the solution for exceptions, you'd find lots of undesired instances of e . I accept these arguments, but have retorts that, if one does not know what i means in a for loop, then they probably shouldn't be a programmer. It's a very common term for loops and exceptions, as is e . I have also mentioned that, if one wanted, they could search for catch in the case of the exception. I realise that this is subjective, but then, one could argue that coding standards are just that - opinions, albeit opinions by academics. I would be happy either way, and will forward the results to him, but would rather that we (our company) continue to use a single coding standard, rather than have two developers with different opinions on what to use. | I entirely agree with your opinion, and believe it isn't all that subjective. The name of a variable is only as relevant as its scope. There is no point in endless discussions about a name of a variable which will only be read by a person reading that particular small scoped piece of code. On the other hand, class names and member names need to clearly indicate what is going on. A lot of the expected behavior needs to be explained in a concise name. Conventions improve readability. i and e are common conventions used for those small scoped variables. Their intent is really clear if you know the convention. Conciseness improves readability. A short variable name which represents its intent clearly is to be preferred over a big variable name. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71710",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8113/"
]
} |
71,758 | Some people have the view of programming that it is just repetitive typing on a keyboard. None of this is true. First of all, there is so much more you have to do than actually typing down the code, such as design architecture and so on. Secondly, it could be a greatly varying, non-repetitive task, with new challenges coming all the time. How should you explain that programming is not a repetitive task to non-programmers ? | Give them examples they can relate to. Tennis is repetitive. You just keep hitting the ball all the time over to the other side of the net. Soccer is repetitive. You just keep kicking the ball every time until you find a goal post. Playing the piano is repetitive. You keep on moving your fingers on the board. Damn, all so boring!!! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71758",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
71,847 | Where do we draw the line between delegation and encapsulation of business logic? It seems to me that the more we delegate, the more anemic we become. However, delegation also promotes reuse and the DRY principal. So what is appropriate to delegate and what should remain in our domain models? Take the following concerns as examples: Authorization . Should the domain object be responsible for maintaining its access control rules (such as a CanEdit property) or should that be delegated to another component/service solely responsible for managing access, e.g. IAuthorizationService.CanEdit(object)? Or should it be a combination of the two? Perhaps the domain object has a CanEdit property which delegates to an internal IAuthorizationService to perform the actual work? Validation . The same discussion as above relates to validation. Who maintains the rules and who is responsible for evaluating them? On the one hand, the state of the object should belong to that object and validity is a state but we don't want to rewrite the code used to evaluate rules for every domain object. We could use inheritance in this case... Object Creation . Factory class versus factory methods versus 'newing' up an instance. If we use a separate factory class, we are able to isolate and encapsulate creation logic but at the expense of opening our object's state to the factory. This can be managed if our domain layer is in a separate assembly by exposing an internal constructor used by the factory but this becomes a problem if there are multiple creation patterns. And, if all the factory is doing is calling the right constructor, what's the point of having the factory? Factory methods on the class eliminate the issue with opening up the object's internal state but since they are static, we aren't able to break dependencies through injection of a factory interface like we can with a separate factory class. Persistence . One could argue that if our domain object is going to expose CanEdit while delegating the responsibility to perform the authorization check to another party (IAuthorizationService) why not have a Save method on our domain object that does the same thing? This would allow us to evaluate the internal state of the object to determine if the operation can be performed without breaking encapsulation. Of course it requires that that we inject the repository instance into our domain object, which smells a bit to me, so do we raise a domain event instead and allow a handler to perform the persistance operation? See where I'm going with this? Rockford Lhotka has a great discussion about his reasons for going the Class-in-Charge route for his CSLA framework and I have a bit of history with that framework and can see his idea of business objects paralleling domain objects in many ways. But trying to become more adherent to good DDD ideals, I'm wondering when collaboration becomes too much. If I end up with an IAuthorizationService, IValidator, IFactory and IRepository for my aggregate root, what's left? Is having a Publish method that changes the state of the object from Draft to Published enough to consider the class a non-anemic domain object? Your thoughts? | Most of the confusion seems to be around functionality that should not exist in the domain model at all: Persistence should never be in the domain model. Never ever. 
That's the reason you rely on abstract types such as IRepository if part of the model ever needs to do something like retrieve a different part of the model, and use dependency injection or some similar technique to wire up the implementation. So strike that from the record. Authorization is not generally part of your domain model, unless it is actually part of the domain, e.g. if you're writing security software. The mechanics of who is allowed to perform what in an application are normally handled at the "edge" of the business/domain tier, the public parts that the UI and Integration pieces are actually allowed to talk to - the Controller in MVC, the Services or the messaging system itself in an SOA... you get the picture. Factories (and I assume you mean abstract factories here) aren't exactly bad to have in a domain model but they are almost always unnecessary. Normally you only have a factory when the inner mechanics of object creation might change. But you only have one implementation of the domain model, which means that there will only ever be one kind of factory which always invokes the same constructors and other initialization code. You can have "convenience" factories if you want - classes that encapsulate common combinations of constructor parameters and so on - but honestly, generally speaking, if you've got a lot of factories sitting in your domain model then you're just wasting lines of code. So once you turf all of those, that just leaves validation. That's the only one that's kind of tricky. Validation is part of your domain model but it is also a part of every other component of the application. Your UI and database will have their own, similar yet different validation rules, based on a similar yet different conceptual model. It's not really specified whether or not objects need to have a Validate method but even if they do, they'll usually delegate it to a validator class (not interface - validation is not abstract in the domain model, it is fundamental). Keep in mind that the validator is still technically part of the model; it doesn't need to be attached to an aggregate root because it doesn't contain any data or state. Domain models are conceptual things, usually physically translating to an assembly or a collection of assemblies. Don't stress out over the "anemic" issue if your delegation code resides in very close proximity to the object model; it still counts. What this all really comes down to is that if you're going to do DDD, you have to understand what the domain is . If you're still talking about things like persistence and authorization then you're on the wrong track. The domain represents the running state of a system - the physical and conceptual objects and attributes. Anything that is not directly relevant to the objects and relationships themselves does not belong in the domain model, period. As a rule of thumb, when considering whether or not something belongs in the domain model, ask yourself the following question: "Can this functionality ever change for purely technical reasons?" In other words, not due to any observable change to the real-world business or domain? If the answer is "yes", then it doesn't belong in the domain model. It's not part of the domain. There's a very good chance that, someday, you'll change your persistence and authorization infrastructures. Therefore, they aren't part of the domain, they're part of the application. 
This also applies to algorithms, like sorting and searching; you shouldn't go and shove a binary search code implementation into your domain model, because your domain is only concerned with the abstract concept of a search, not how it works. If, after you've stripped away all the stuff that doesn't matter, you find that the domain model is truly anemic , then that should serve as a pretty good indication that DDD is simply the wrong paradigm for your project. Some domains really are anemic. Social bookmarking apps don't really have much of a "domain" to speak of; all your objects are basically just data with no functionality. A Sales and CRM system, on the other hand, has a pretty heavy domain; when you load up a Rate entity then there is a reasonable expectation that you can actually do stuff with that rate, such as apply it to an order quantity and have it figure out the volume discounts and promo codes and all that fun stuff. Domain objects that just hold data usually do mean that you have an anemic domain model, but that doesn't necessarily mean that you've created a bad design - it might just mean that the domain itself is anemic and that you should be using a different methodology. | {
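To make the split concrete, here is a rough sketch under invented assumptions (the Article type, its states, and the ArticleValidator are made up for illustration and are not taken from the question):

using System;

// Domain entity: holds state and the behaviour that belongs to the domain.
public class Article
{
    public string Title { get; private set; }
    public bool IsPublished { get; private set; }

    public Article(string title)
    {
        Title = title;
    }

    // Domain behaviour lives on the entity; validation is delegated to a
    // concrete validator that is still part of the domain model.
    public void Publish(ArticleValidator validator)
    {
        if (!validator.CanPublish(this))
            throw new InvalidOperationException("Article is not ready to publish.");
        IsPublished = true;
    }
}

// Validation rules kept in one place, stateless, and reusable.
public class ArticleValidator
{
    public bool CanPublish(Article article)
    {
        return !string.IsNullOrWhiteSpace(article.Title) && !article.IsPublished;
    }
}

// Persistence and authorization stay outside the model: an application service
// would load the Article via an IRepository abstraction, check access, call
// Publish, and save it back.

The entity is not anemic because Publish carries real domain behaviour, yet nothing about databases or security leaks into it.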
"source": [
"https://softwareengineering.stackexchange.com/questions/71847",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23991/"
]
} |
71,861 | I heard recently an instructor mentinon that developing games was the best way to learn programming. Besides the fact that everything had to be created in code, he said you really get to fully experience and implement OOP into your programs. In other words everything you make in a game is a literal object both technically and in concept. Is it safe to make this assumption? Or are there always exceptions when learning programming? note it was geared towards .net / xna / c# development. | Game development is some of the most challenging development that there is, in my opinion. Let's look at all the concepts one person creating a first-person shooter needs to know: Graphics 3d calculations Speed optimizations Interfacing with graphics hardware The game Data structures File formats for saving General design, or else you'll create a mess that's impossible to add features to Network knowledge if the game is multiplayer AI algorithms Path finding Battle fighting Publishing OS knowledge such as how installers work Possibly web development for getting the word out You see, one game can cover a lot of areas simultaneously. This makes it an excellent learning tool if you are willing to spend the time and make it good. As for OOP, that's one pattern that works well. You can use a game as a learning tool to practise good design techniques. To answer your question, though, it's not the only way to learn programming. Once you do become an intermediate developer, creating a game will let you touch a lot of programming concepts and levels. Besides, it's fun! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/71861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22218/"
]
} |
72,040 | I'm a maintenance programmer, and I work with a rather large list of technologies on a monthly if not daily basis. For example I'm competent (obviously not expert) in the following languages: Java (including J2EE) C# VB6 ASP Classic When I go to work on my resume, how many of the technologies should I list? It seems pointless, or even a problem, to list them all. I don't like reading vast lists of technologies when I read resumes. How many should I list, and how should I choose? | List them all in a comma (not bullet) separated list in the first or second paragraph of your resume. Interviewers like to see broad experience, so the more the better. And here's a little tip: leave out the "obviously not expert" bit. You are a "maintenance" programmer. Do you know what that means? That means you get the hard problems. Building a system from scratch is the easy stuff. Keeping a system running over the long haul is the hard stuff. Don't undersell yourself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72040",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4422/"
]
} |
72,336 | For background, we are doing desktop engineering applications, with an AutoCAD like UI, something similar to etabs . One thing that really bugs me is, is there any need to hire the very best developers? For starters, we are experiencing great difficulties in recruitment; most resumes we see are either doing simple CRUD apps, or SharePoint customization which I don't think really involves a lot of hardcore programming. Even those whom we call for interview, most can't do Fibonacci sequence and a simple binary search, and we are gracious enough to give out hints and spell out the problems explicitly so that the candidates don't have to lookup a dictionary to check what does "Fibonacci sequence" mean. This got me thinking: Yes, we do need some level of programming aptitude when doing computational geometry/ linear programming stuff, and we do need some level of programming aptitude when designing the software architecture/ or deciding which software pattern to use, but beyond that, a lot of our code are just plumbing code ( I think), which can be done by someone with some familiarity with programming. Given that we really need programming talents now, and given that hiring superstar developers are very hard, I want to lower my standard and hire only the so-so ones, in direct contradiction to what Joel preaches . What do you think? Edit: You don't need to rewrite the whole computational geometry/linear programming libraries; all you need to do, as far as my application is concerned, is to be able to know how to cast the problems at hand into appropriate computational geometrical/linear programming terms and know when/how to use the existing libraries. So it's not as difficult as it seems. | I suggest you stop reading Joel too much. What he has written in his blog contradicts with his responses on this site so I wouldn't really take his word for much. What makes a superstar and why it is necessary to have one opens a long and a nowhere-going discussion. It is elitism and it is not practical. What you need is a person who: Would love to be doing what you are doing Would be a passionate enthusiastic programmer Would have the potential to learn what it takes to do your job right The rest is of no importance. You wouldn't believe how many young graduates are out there who want nothing else but to dive into this kind of a CS-strong project and never ever look at coding CRUD applications. A while back I was one of them, I practically dreamed of joining a project around compiler development but wasn't able to find one. Why not give a chance to one of them? I do not believe AutoCAD was written by supermen. Most of the successful projects were done by people who simply wanted to get the thing done and they really wanted that. most resumes we see are either doing simple CRUD apps, or SharePoint customization What is to expect if most jobs require just that? People might have studied CS at the uni and even have been really good at it, but you can't expect them to remember it if they have never used that in practical programming in 10 years. Obviously nobody is going to be reading over old CS books every year just to keep it fresh if this knowledge is not used anywhere. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72336",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/468/"
]
} |
72,495 | In what situation should I use a Private Set on a property versus making it a ReadOnly property? Take into consideration the two very simplistic examples below. First example: Public Class Person
Private _name As String
Public Property Name As String
Get
Return _name
End Get
Private Set(ByVal value As String)
_name = value
End Set
End Property
Public Sub WorkOnName()
Dim txtInfo As TextInfo = _
Threading.Thread.CurrentThread.CurrentCulture.TextInfo
Me.Name = txtInfo.ToTitleCase(Me.Name)
End Sub
End Class
// ----------
public class Person
{
private string _name;
public string Name
{
get { return _name; }
private set { _name = value; }
}
public void WorkOnName()
{
TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo;
this.Name = txtInfo.ToTitleCase(this.Name);
}
} Second example: Public Class AnotherPerson
Private _name As String
Public ReadOnly Property Name As String
Get
Return _name
End Get
End Property
Public Sub WorkOnName()
Dim txtInfo As TextInfo = _
Threading.Thread.CurrentThread.CurrentCulture.TextInfo
_name = txtInfo.ToTitleCase(_name)
End Sub
End Class
// ---------------
public class AnotherPerson
{
private string _name;
public string Name
{
get { return _name; }
}
public void WorkOnName()
{
TextInfo txtInfo = System.Threading.Thread.CurrentThread.CurrentCulture.TextInfo;
_name = txtInfo.ToTitleCase(_name);
}
} They both yield the same results. Is this a situation where there's no right and wrong, and it's just a matter of preference? | There are a couple reasons to use private set . 1) If you are not using a backing field at all and want a read-only automatic property: public string Name { get; private set; }
public void WorkOnName()
{
TextInfo txtInfo = Thread.CurrentThread.CurrentCulture.TextInfo;
Name = txtInfo.ToTitleCase(Name);
} 2) If you want to do extra work when you modify the variable inside your class and want to capture that in a single location: private string _name = string.Empty;
public string Name
{
get { return _name; }
private set
{
TextInfo txtInfo = Thread.CurrentThread.CurrentCulture.TextInfo;
_name = txtInfo.ToTitleCase(value);
}
} In general, though, it's a matter of personal preference. Far as I know, there are no performance reasons to use one over the other. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72495",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13273/"
]
} |
72,515 | I'm trying to figure out how to do roles/permissions in our application, and I am wondering if anyone knows of a good place to get a list of different permission-based authorization systems (preferably with code samples) and perhaps a list of pros/cons for each method. I've seen examples using simple dictionaries, custom attributes, claims-based authorization, and custom frameworks, but I can't find a simple explanation of when to use one over another and what the pros/cons are to using each method. (I'm sure there's other ways than the ones I've listed...) I have never done anything complex with permissions/authorization before, so all of this seems a little overwhelming to me, and I'm having trouble figuring out what is useful information that I can use and what isn't. What I DO know is that this is for a Windows environment using C#/ WPF and WCF services. Some permission checks are done on the WCF service and some on the client. Some are business rules, some are authorization checks, and others are UI-related (such as what forms a user can see). They can be very generic like boolean or numeric values, or they can be more complex such as a range of values or a list of database items to be checked/unchecked. Permissions can be set on the group-level, user-level, branch-level, or a custom level, so I do not want to use role-based authorization. Users can be in multiple groups, and users with the appropriate authorization are in charge of creating/maintaining these groups. It is not uncommon for new groups to be created, so they can't be hard-coded. |
"source": [
"https://softwareengineering.stackexchange.com/questions/72515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
72,529 | My employer is hiring a programmer - more specifically, I am interviewing and selecting the person who will fill the position. The best candidate right now is far more qualified than I am, older, and a lot more experienced. Other people I've talked to have said that hiring someone more qualified than myself is a really bad idea (my family included). And I get the feeling that other technical staff here have a similar attitude (considering that this applicant is also more qualified than they are). They're saying, or thinking, that hiring someone better qualified is going to hurt my and my current colleague's efforts to move up in the company, or are afraid that a Johnny-come-lately is going to steal the spotlight and current staff will be left behind as promotions are given out and new positions open up. Personally I'd love to work with this applicant, and learn from them. I'm confident enough in myself to not be afraid that someone new and more experienced is going to come in and start making me look bad. I'd like to use this as an opportunity to grow, and I don't think that being afraid of competition or of new people like this is rational or beneficial. Or maybe I'm just being naive. What do you think? And have you ever had an experience similar to this? How did it work out for you? | I was in your exact situation recently. My company wanted to hire another programmer and I specifically wanted someone with more experience than me so I could continue to learn and grow. I was most nervous about the Interviews, so asked a question on here . To summarize, ask questions you know the answer to, are related to problems you have, or are problems you solved in the past. Don't try to ask questions that are out of your depth. Be honest if the interviewee starts talking in terms you don't understand and ask him/her to explain them to you. Afterall, the person you hire will be working with you and you'll want someone who can mentor you. It turned out great. We hired someone with way more experience and knowledge than me and I feel I am learning a lot. I would say it's a win-win situation for you. Worst case scenario is the person you hire takes your job, and you've gained valuable knowledge working with them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72529",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22684/"
]
} |
72,569 | Of course one big pro is the amount of syntactic sugar leading to shorter code in a lot of cases. On http://jashkenas.github.com/coffee-script/ there are impressive examples. On the other hand I have doubts that these examples represent code of complex real world applications. In my code for instance I never add functions to bare objects but rather to their prototypes. Moreover the prototype feature is hidden from the user, suggesting classical OOP rather than idiomatic Javascript. The array comprehension example would look in my code probably like this: cubes = $.map(list, math.cube); // which is 8 characters less using jQuery... | I'm the author of a forthcoming book on CoffeeScript: http://pragprog.com/titles/tbcoffee/coffeescript I was convinced that CoffeeScript was worth using after about a week of playing with it, even though the language was only a few months old at the time and had many more rough edges than it does now. The official site does a great job of listing (most of) the language's features, so I won't repeat those here. Rather, I'll just say that the pros of the language are: Encourages the use of good JavaScript patterns Discourages JavaScript anti-patterns Makes even good JavaScript code shorter and more readable No. 3 gets a lot more attention than the first two (even in my book), but the more I think about it, the more I realize that I didn't make the jump just for the pretty syntax; I made the jump because the language nudged me toward better, less error-prone JavaScript. To give a few quick examples: Because variables are auto-scoped, you can't accidentally overwrite globals by omitting var , shadow a variable with the same name (except with named arguments), or have variables in different files that interact (see https://stackoverflow.com/questions/5211638/pattern-for-coffeescript-modules/5212449 ). Because -> is a heck of a lot easier to write than function(){} , it's easier to use callbacks. Semantic indentation makes it clear when callbacks are nested. And => makes it easier to preserve this when appropriate. Because unless x is easier for English speakers to parse than if (!x) , and if x? is easier than if (x != null) , to give just two examples, you can spend fewer brain cycles on logic syntax and more on the logic itself. A great library like Underscore.js can take care of some of these problems, but not all. Now for the cons: Compilation can be a pain. The syntax errors the CoffeeScript compiler throws are often vague. I expect progress to be made on that track in the future. (In the compiler's defense, it often catches things that—if you wrote them in JavaScript—you wouldn't discover as an error until that line of code ran. Better to catch those bugs sooner than later.) Relatedly, debugging can be a pain. There isn't yet any way to match compiled JS lines to the original CoffeeScript (though the Firefox folks have promised that this is coming). It's prone to change. Your code may run differently, or not run at all, under a future version of CoffeeScript. Of course, this is the case with most languages—moving to a new version of Ruby or Python is similar—but it's not the case with JavaScript, where you can reasonably expect that code that runs fine across major browsers today will run fine across major browsers for as long as the web as we know it exists. It's not as well-known. JavaScript is a lingua franca . CoffeeScript has become very popular in a short amount of time, but it's unlikely that it'll ever enjoy as vast a community as JavaScript. 
Obviously I think the pros outweigh the cons for me personally, but that won't be the case for every person, team, or project. (Even Jeremy Ashkenas writes a lot of JavaScript.) CoffeeScript is best viewed as a fine complement to JavaScript, not a replacement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72569",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9185/"
]
} |
72,593 | I always struggle in abbreviating variable names. Is there any standard for abbreviating variable names? | The standard I use is to not abbreviate variable names unless the abbreviation is more readable than the full version ( i for iteration indices, for instance). We name things so that we can communicate. Abbreviating variable names typically just lessens their ability to communicate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8913/"
]
} |
72,806 | All programmers have their style of programming. But some of the styles are let’s say... let’s not say. So you have code review to try to impose certain rules for good design and good programming techniques. But most of the programmers don’t like code review. They don’t like other people criticizing their work. Who do they think they are to consider themselves better than me and tell me that this is bad design, this could be done in another way. It works right? What is the problem? This is something they might say (or think but not say which is just as bad if not worse). So how do you make people accept code review without starting a war? How can you convince them this is a good thing; that will only improve their programming skills and avoid a lot of work later to fix and patch a zillion times a thing that hey... "it works"? People will tell you how to make code review (peer-programming, formal inspections etc) what to look for in a code review, studies have been made to show the number of defects that can be discovered before the software hits production etc. But how do you convince programmers to accept a code review? | I've found that people who don't like code reviews will do their best to work around whatever you put in place. The best way to make sure that the code you work with is code reviewed properly is to work somewhere that treats that as the normal way of coding, and that only hires developers who are likely to fit into that environment well. If you can't change who you work with, you might have success if you first give your own code for review. Encourage them to find fault with it (dare I suggest adding in a few deliberate mistakes?) so that you can demonstrate that it's not meant to be a way of criticising the developer in question. That's the main problem, IMO: people take code reviews too personally. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72806",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
72,865 | I have read quite a bit about the Go language, and it seems promising. The last important bit of information I am missing before I decide on spending more effort on the language is: how much money/manpower do Google or other companies invest in the development effort? If this information cannot be provided, do you have any other information showing the commitment of Google to the project? Is it being used as the primary language for a new investment or similar (my guess is that it is too early for this, but I do not know)? | I have been using Go for about a year now, and the language has continually improved since then. Things are changing, improving, (somewhat) stabilizing, and generally amazing me with their innovations (e.g. gofix). It is most certainly not dying, and they seem to be putting quite a bit of effort into it. The Google Code page shows 17 people contributing to the project. From the looks of it, all but three of them are likely Google employees: http://code.google.com/p/go/people/list .
It is worth noting that the Go project has notable programmers such as Rob Pike and Ken Thompson, fathers of UNIX, working on it. If Google didn't care about the future of the Go language, it is unlikely they would assign such high-profile programmers to its development. Google is using Go internally: http://golang.org/doc/go_faq.html#Is_Google_using_go_internally The Oracle saga won't happen with Go: see the licence file and the irrevocable patent grant. Even if Google were to stop developing Go (which is unlikely, given my points above), someone else would likely pick it up. In addition to all of the above points, Google Go is pretty much ideal for Google's internal use, due to its built-in parallelization, native library support for the HTTP protocol, and speed. For these reasons alone, you can be pretty confident that Go will be supported by Google for a while to come.
"source": [
"https://softwareengineering.stackexchange.com/questions/72865",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5094/"
]
} |
72,967 | Seeing that so many of my friends are unemployed, some of my friends and I are planning to create a small software company. What are the basic things we should know and do? Are there things specific to running a software company that we'd need to be aware of? | I'll try to list a few things¹ I wish I had thought about when creating my company. The essential thing to know is that either you have to hire people (lawyers, accountants, salesmen, project managers), or you have to learn lots of stuff yourself, given that a trial-and-error approach will often cost you a lot of money. Be aware of the local laws. When you're a small company and you're sued by your customer for thousands of dollars because some mandatory sentence is missing from your invoice, it's not easy to handle. In the same way, when a customer doesn't pay you for months, and you go to a lawyer and learn that the contract you signed doesn't force your customer to pay you, you wish you had consulted a lawyer before signing anything. I spent four years in law college; I'm always surprised by the poor quality of contracts written by people with no knowledge of the law. Most of the contracts I've seen clearly say that the developer may never be paid, or that the customer can request any change at no cost. Remember, some customers will spend a huge amount of time trying not to pay, or to pay less. They will invoke the fact that your product doesn't match their expectations, or that they always thought that the changes you made at their request were for free, or that they don't need the product any longer. Make sure to see F*ck You. Pay Me. by Mike Monteiro, which discusses such situations. This is the job of a lawyer. Lawyers are expensive, but they save you money. Be sure that the taxes will not be higher than your income. In France, for example, when you start you can easily be in a situation where multiple semi-governmental organizations (such as the mandatory insurance company) will claim thousands of dollars per year, yet your income is several hundred dollars per year. Nobody cares about such nonsense, because it's a way for those organizations to make a lot of money. Even when you don't have any income, you still have to pay. Given that some of them are managed as insurance companies and benefit from their monopoly, you find yourself in front of an entity which behaves much like the mafia (i.e. no matter what your situation is, you'll have to pay), but sometimes without the cover benefits. Seeing taxmen arrive at your company and ask to check the accounts, then finding a few mistakes which will cost you a few thousand dollars, is not a nice thing either. This is the job of an accountant: avoiding accounting errors which usually cost too much, and defending the money of your company from the intentional errors of powerful entities. What makes you better than all the freelance developers? What makes you better than all the larger software development companies? How do you explain to the customers that you're better? I had a few discussions with my colleagues who wanted to create their own companies. "What do you have that others don't?", I asked every time. Either they couldn't answer, or they answered something like "I'll ask for a lower price", but they were unable to explain how they would achieve the cost savings. Be sure you know the aspects in which you are better than the competitors. Be sure you are able to market yourself, explaining not only what's better, but also why.
Example: company A ships software at a lower cost because it uses lean management, removing the waste related to tasks which are not needed in order to deliver the product. Another example: company B ships high-quality software by using intensive formal code reviews, testing, formal proof, and other techniques used in companies writing life-critical software. Last example: company C delights its customers by using radical management and Agile. More importantly, how will you find your customers? Do you advertise? Where? How? How much would it cost? Are you ready to answer customers' questions? For example, if somebody asks for the names of companies you worked for before in order to ask those companies for feedback, or if somebody asks to see the software products or web apps you've done, do you have an answer? This is the job of a salesman: somebody who knows your business, knows your strong points, and can quickly, easily and honestly explain why your company is the best. How do you avoid shipping the project late when the customer constantly asks for changes in the features you just delivered? How do you calculate the price the customer has to pay? If you're paid per hour of work, how can the customer be sure that you don't ask to be paid for 213 hours when in fact you worked 186 hours? How do you keep track of a project? How do you know that the project is about to fail, and when you know it, how do you prevent it? This is the job of a project manager. Leading a project from "I have a great idea, it's in my head now" to the fully-featured product requires more than knowing how to write programming code. Are you sure you're ready to deal with customers? What will happen when a customer is not polite? What if a customer says that your product sucks or does not conform to the requirements when in fact it follows them exactly? What if a customer, two months into a three-month project, tells you that you must rewrite your ASP.NET project in PHP? What if the customer doesn't even know what her project is about? This, again, is a task for the project manager, the salesman, or the support staff. Dealing with customers after you've signed the contract requires a lot of tact, patience, professionalism and, often, anger management. ¹ Note: my company is in France, so some points may not apply or may be less important in other countries. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/72967",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24145/"
]
} |
73,065 | Is something like DRY a design pattern, a methodology, or something in between? They do not have specific implementations that could necessarily be demonstrated (even if you can easily demonstrate a case NOT using something like KISS... see The Daily WTF for a plethora of examples), nor do they fully explain a development process like a methodology generally would. Where does that leave these types of "rules of thumb"? | According to Wikipedia it is a principle of software development. In fact, Wikipedia refers to all of them as principles: DRY: In software engineering, Don't Repeat Yourself (DRY) or Duplication is Evil (DIE) is a principle of software development. KISS: KISS is an acronym for the design principle "Keep it simple, Stupid!". SOLID: The principles, when applied together, are intended to make it more likely that a programmer will create a system that is easy to maintain and extend over time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/73065",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3792/"
]
} |
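As a small, made-up illustration of the DRY principle quoted above: when the same rule is written out at every call site, the copies eventually drift apart, so the repeated knowledge is lifted into one place that everything else shares. The pricing rule and names below are assumptions for the sketch, not anything from the answer.

```typescript
// A single authoritative copy of the pricing rule.
const TAX_RATE = 0.2;

function priceWithTax(netAmount: number): number {
  return netAmount * (1 + TAX_RATE);
}

// Both features reuse the rule instead of repeating "netAmount * 1.2" inline,
// so a change to the tax rate happens in exactly one place.
const invoiceTotal = priceWithTax(100);
const quoteTotal = priceWithTax(250);
console.log(invoiceTotal, quoteTotal);
```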
73,175 | I was wondering: what are some really tough books on programming that would make me think? I'm talking about low-level languages such as C, and about algorithms, pointers, functions, etc. Thanks a lot. | The Art of Computer Programming. Donald Knuth. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/73175",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15267/"
]
} |