source_id (int64) | question (string) | response (string) | metadata (dict)
---|---|---|---|
109,990 | Today we were training TDD and found the following point of misunderstanding. The task: for the input "1,2", return the sum of the numbers, which is 3. What I have written (in C#) was: numbers = input.Split(',');
return int.Parse(numbers[0]) + int.Parse(numbers[1]); //task said we have two numbers and input is correct But other guys preferred to do it another way. First, for input "1,2" they added the following code: if (input == "1,2")
return 3; Then they introduced one more test for input "4,5" and changed implementation: if (input == "1,2")
return 3;
else if (input == "4,5")
return 9; And after that they said "Okay, now we see the pattern" and implemented what I initially did. I think the second approach better fits the TDD definition but... should we be so strict about it? For me it is okay to skip trivial baby steps and combine them into "twinsteps" if I am sure enough that I won't skip anything. Am I wrong? Update. I have made a mistake by not clarifying that it was not the first test. There already were some tests, so "return 3" actually wasn't the simplest piece of code to satisfy the requirement. | I think the second way is mind-numbingly stupid. I see the value in making small enough steps, but writing those tiny zygote (can't even call them baby) steps is just asinine and a waste of time. Especially if the original problem you're solving is already very small on its own. I know it's training and it's more about showing the principle, but I think such examples do TDD more harm than good. If you want to show the value of baby steps, at least use a problem where there is some value in it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/109990",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7369/"
]
} |
110,012 | I'm not asking where to learn. I've found lots of good resources online, and books etc. But how the heck do I tackle them? Where is the start of it, the end? When does the regexp processor advance on the text, and when does it hold its position and try another match? etc. I feel like I'm trying to decipher hieroglyphs on the Egyptian pyramids. | I think that knowledge of automata theory is critical for understanding. Once you understand what an automaton is, and how regular languages are defined, understanding regular expressions will be much easier. As to the specific syntax and differences between the various implementations... Well, some things you just have to remember. There are aids for that, too. Edit Some of the comments below raised important points: Don't forget that regular expressions (as implemented in most programming languages) are a superset of regular expressions in automata theory. While a good theoretical background is a useful place to start, it won't tell you everything. (Thanks, David Thornley) Multiple commenters say that it is possible to learn the various regex syntaxes without learning the theoretical basis. While it is true that you can learn the syntax without fully understanding how it works, it was my impression that a full understanding is what the OP was after. The question was about the actual basis: when does the processor advance? When does it stop? How does it decide that it's a match? That's the basis, that's the theory, and it is based on automata theory. Sure, you can drive a car without knowing how the engine works. But if you're being asked "how does the gas actually make it drive" - you have to talk about how the engine is built, don't you? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110012",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37177/"
]
} |
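A minimal illustration of the automata view taken in the answer above: a hand-built DFA for the arbitrarily chosen pattern "ab*c". The class name and state table below are made up for this sketch; real regex engines compile patterns into comparable state machines (or backtracking matchers) before scanning the text.

```java
// A minimal DFA sketch for the regular expression "ab*c".
public class AbStarCDfa {
    // States: 0 = start, 1 = saw 'a' (zero or more 'b's allowed), 2 = accept, -1 = dead
    static int step(int state, char c) {
        switch (state) {
            case 0:  return c == 'a' ? 1 : -1;
            case 1:  return c == 'b' ? 1 : (c == 'c' ? 2 : -1);
            default: return -1;        // accept or dead state: any further input rejects
        }
    }

    static boolean matches(String input) {
        int state = 0;
        for (char c : input.toCharArray()) {   // the "processor" advances one character per step
            state = step(state, c);
            if (state == -1) return false;     // a DFA never backtracks; it just rejects
        }
        return state == 2;                     // match only if we end in the accept state
    }

    public static void main(String[] args) {
        System.out.println(matches("abbbc")); // true
        System.out.println(matches("ac"));    // true
        System.out.println(matches("abcb"));  // false
    }
}
```

Tracing matches("abbbc") by hand shows exactly when the "processor" advances and when it rejects, which is the kind of mechanical understanding the question asks about.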
110,227 | How do you guys know that you are writing the most robust code possible without overengineering? I find myself thinking too much about every possible path that my code can take, and it feels like a waste of time sometimes. I guess it depends on the kind of program you are writing, but I don't want to use too much of my time taking situations into account that will never happen. | How do you guys know that you are writing the most robust code
possible without overengineering? What do you consider robust code? Code that is already future-proof and so powerful that it can deal with any situation? Wrong, no one can predict the future! And wrong again, because it'll be a complicated, unmaintainable mess. I follow various principles: First and foremost YAGNI (you aren't gonna need it - yet) and KISS , so I don't write unnecessary code. That also effectively prevents overengineering. I refactor the application when extensions are needed. Modern refactoring tools let you quite easily create interfaces and exchange implementations afterwards when you need them. Then I try to make the code I write as robust as possible; that includes eliminating as many paths the program can take (and also states) as possible, plus a bit of Spartan programming . A great help are "atomic" functions/methods that do not rely on external state or at least don't leave the program in an inconsistent state when they fail. If you do that well, it's also very unlikely that you'll ever end up with spaghetti code, and it's a blessing for maintainability, too. Also, in object-oriented design, the SOLID principles are a great guide to robust code. I've really found that often you can reduce complexity, for example combinatorial explosions of program paths or states, by thinking deeply about how you could design it as the straightest path possible. Try to keep the possible combinations at a minimum by choosing the best ordering of your subroutines and designing them for this purpose. Robust code is almost always simple and clean code, but simplicity is a trait that is not always easily achieved. Yet, you should strive for it. Always write the simplest code possible and only add complexity when you have no other choice. Simplicity is robust, complexity is fragile. Complexity kills. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110227",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21152/"
]
} |
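A small sketch of the "atomic functions/methods" idea from the answer above, in Java. The Account class and its fields are hypothetical; the point is only that all validation happens before any state is touched, so a failed call never leaves the object half-updated.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical example: validate everything up front, mutate state only afterwards.
public class Account {
    private final List<String> history = new ArrayList<>();
    private long balanceCents = 0;

    public void deposit(long amountCents, String reference) {
        // Fail fast, before any state is touched.
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amount must be positive: " + amountCents);
        }
        if (reference == null || reference.isEmpty()) {
            throw new IllegalArgumentException("reference must not be empty");
        }
        // Only now mutate state; both fields change together or not at all.
        balanceCents += amountCents;
        history.add(reference + ":" + amountCents);
    }

    public long balanceCents() {
        return balanceCents;
    }
}
```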
Should database files (scripts etc.) be on source control?
If so, what is the best method to keep it and update it there? Is there even a need for database files to be on source control since we can put it on a development server where everyone can use it and make changes to it if needed. But, then we can't get it back if someone messes it up. What approach is best used for databases on source-control? | Yes. You should be able to rebuild any part of your system from source control including the database (and I'd also argue certain static data). Assuming that you don't want to have a tool to do it, I'd suggest you want to have the following included: Creation scripts for the basic table structures including schemas, users, tables, keys, defaults and so on. Upgrade scripts (either altering the table structure or migrating data from a previous schema to the new schema) Creation scripts for stored procedures, indexes, views, triggers (you don't need to worry about upgrade for these as you just overwrite what was there with the correct creation script) Data creation scripts to get the system running (a single user, any static picklist data, that sort of thing) All scripts should include the appropriate drop statements and be written so they can be run as any user (so including associated schema / owner prefixes if relevant). The process for updating / tagging / branching should be exactly as the rest of the source code - there's little point in doing it if you can't associate a database version with an application version. Incidentally, when you say people can just update the test server, I'm hoping you mean the development server. If developers are updating the test server on the fly then you're looking at a world of pain when it comes to working out what you need to release. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110253",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18933/"
]
} |
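To make the "upgrade scripts under version control" idea concrete, here is a rough sketch of a migration runner in Java. The file layout (db/migrations/001_*.sql), the schema_version table, and the H2 JDBC URL are assumptions for this sketch, not part of the answer; dedicated tools such as Flyway or Liquibase implement the same idea properly.

```java
import java.nio.file.*;
import java.sql.*;
import java.util.*;

// Applies numbered .sql upgrade scripts kept alongside the code, in order,
// and records the highest version applied so far.
public class MigrationRunner {
    public static void main(String[] args) throws Exception {
        try (Connection db = DriverManager.getConnection("jdbc:h2:./devdb")) { // assumed H2 dev DB
            db.createStatement().execute(
                "CREATE TABLE IF NOT EXISTS schema_version (version INT PRIMARY KEY)");

            int current = 0;
            try (ResultSet rs = db.createStatement()
                    .executeQuery("SELECT MAX(version) FROM schema_version")) {
                if (rs.next()) current = rs.getInt(1);   // 0 when no migration has been applied yet
            }

            // Scripts are named 001_create_tables.sql, 002_add_index.sql, ...
            List<Path> scripts = new ArrayList<>();
            try (DirectoryStream<Path> dir =
                     Files.newDirectoryStream(Paths.get("db/migrations"), "*.sql")) {
                dir.forEach(scripts::add);
            }
            Collections.sort(scripts);

            for (Path script : scripts) {
                int version = Integer.parseInt(script.getFileName().toString().split("_")[0]);
                if (version <= current) continue;        // already applied
                db.createStatement().execute(Files.readString(script));
                db.createStatement().execute("INSERT INTO schema_version VALUES (" + version + ")");
            }
        }
    }
}
```

The important property is the one the answer insists on: a database version recorded alongside the code, so any application version can be rebuilt together with the schema it expects.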
110,380 | Can I (legally) use a program that is released under GPL from another program that I'm writing and not have to respect the GPL (for the program I'm writing)? For example, I have a GUI that uses a program (which is under GPL), can I hide the code in the GUI and even sell it? | You can use a GPLed program from your own program without your program being affected by the GPL, but you cannot link the GPLed code into your own program without your program becoming subject to the GPL's terms. In the example provided in the question, in which you have written a GUI wrapper around an existing command-line program, your GUI is not bound by the terms of the GPL, provided that it is a separate program which runs the GPLed program in a separate process and communicates with it only via the existing interface(s) - e.g., over the command line and/or via stdin/stdout. Some relevant bits from GPL FAQ : Where's the line between two separate programs, and one program with
two parts? This is a legal question, which ultimately judges will
decide. We believe that a proper criterion depends both on the
mechanism of communication (exec, pipes, rpc, function calls within a
shared address space, etc.) and the semantics of the communication
(what kinds of information are interchanged). If the modules are included in the same executable file, they are
definitely combined in one program. If modules are designed to run
linked together in a shared address space, that almost surely means
combining them into one program. By contrast, pipes, sockets and command-line arguments are
communication mechanisms normally used between two separate programs.
So when they are used for communication, the modules normally are
separate programs. But if the semantics of the communication are
intimate enough, exchanging complex internal data structures, that too
could be a basis to consider the two parts as combined into a larger
program. Can I release a non-free program that's designed to load a GPL-covered plug-in? It depends on how the program invokes its plug-ins. For
instance, if the program uses only simple fork and exec to invoke and
communicate with plug-ins, then the plug-ins are separate programs, so
the license of the plug-in makes no requirements about the main
program. If the program dynamically links plug-ins, and they make
function calls to each other and share data structures, we believe
they form a single program, which must be treated as an extension of
both the main program and the plug-ins. In order to use the
GPL-covered plug-ins, the main program must be released under the GPL
or a GPL-compatible free software license, and that the terms of the
GPL must be followed when the main program is distributed for use with
these plug-ins. If the program dynamically links plug-ins, but the
communication between them is limited to invoking the ‘main’ function
of the plug-in with some options and waiting for it to return, that is
a borderline case. Note that the GPL applies in full to the underlying command-line program in any case - if you distribute it (as opposed to having users obtain it from another source), you are responsible for providing a copy of the GPL to users, making it clear to them that the command-line program is under the GPL (even if the GUI wrapper isn't), and making the command-line program's source code available to them on request. From the GPL FAQ again: If people were to distribute GPL-covered software calling it “part of”
a system that users know is partly proprietary, users might be
uncertain of their rights regarding the GPL-covered software. But if
they know that what they have received is a free program plus another
program, side by side, their rights will be clear. Standard disclaimer: I am not a lawyer and, even if I were a lawyer, I'm not your lawyer. If you need a definitive answer, consult an appropriate legal professional who is licensed to practice in your jurisdiction. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110380",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31690/"
]
} |
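A sketch of the "separate programs communicating over existing interfaces" arrangement described above, in Java. The tool name and its --stdin flag are hypothetical; the point is that the GPL-covered program runs in its own process and is spoken to only via arguments and stdin/stdout, never linked into the GUI.

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

// Runs a (hypothetical) GPL-covered command-line tool in a separate process
// and exchanges plain text with it over pipes.
public class GplToolWrapper {
    public static String run(String inputText) throws IOException, InterruptedException {
        Process tool = new ProcessBuilder("gpl-covered-tool", "--stdin")  // assumed CLI name/flag
                .redirectErrorStream(true)
                .start();

        try (OutputStream toTool = tool.getOutputStream()) {
            toTool.write(inputText.getBytes(StandardCharsets.UTF_8));     // plain text over a pipe
        }

        StringBuilder output = new StringBuilder();
        try (BufferedReader fromTool = new BufferedReader(
                new InputStreamReader(tool.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = fromTool.readLine()) != null) {
                output.append(line).append('\n');
            }
        }

        int exitCode = tool.waitFor();
        if (exitCode != 0) {
            throw new IOException("tool exited with status " + exitCode);
        }
        return output.toString();
    }
}
```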
110,487 | Working as a freelancer, I often see strange requests from my customers, some of which can negatively affect my daily work¹, and others trying to set some sort of control. I usually encounter those things during preliminary negotiations, so it's easy enough at this stage to explain to the customer that I do care about my work and productivity and expect my customers to trust my work. Things were much harder² on a project I just accepted, since it's only after the end of the negotiations (the contract has already been signed and did not mention anything about video tracking) and after I started to work on the project that my customer requested that I record a video of all I do on my machine when working on his project , that is, a video which will show that I move the cursor, type a character, open a file, move a window, etc. I work in my own company, using my own PCs. I answered to this customer that such request cannot be accepted, since: Hundreds of hours of work on a dual-screen PC will require a large amount of disk space for the recorded videos. If I don't care about space, I do care about this customer wasting my bandwidth downloading those videos. Recording a video can affect the overall performance and decrease my productivity (which is not actually true, since the machine is powerful enough to record this video without performance loss, but, well, it still looks like a valid argument). I can't always remember to turn the video recording on before starting the work, and off at the end. It may be a privacy concern. What if I switch to my mails when recording the video? What if, to open the directory with the files about this customers project, I first open the parent directory containing the list of all of my customers? Such video cannot be a reliable source to track the cost of a project (I'm paid by the hour), since some work is done with just a pencil and a paper (which is actually true, since I do lots of draft work without using the PC). Despite those points, the customer considers that if I don't want to record the video, it's because I have something to hide and want to lie about the real time spent on his project³. How to explain to him that it is not usual practice for freelancers to record videos of their daily work , and that such extravagant requests must be reserved for exceptional circumstances⁴? ¹ The most frequent example is to be requested to work through Remote Desktop on a more-than-slow server which uses a more-than-slow Internet connection, or to be forced to use outdated software such as Windows Me without serious justification such as legacy support. ² In fact, I already did a lot of management and system design related work, which is essential, but usually misunderstood by customers and perceived as a waste of time and money. Observing the concerned customer, I'm pretty sure that he will refuse to pay a large amount of money for what was already done, since there is actually zero lines of code. Even if legally I can easily prove that there was a lot of work undertaken on the design level, I don't want to end my relation with this customer in a court. ³ Which is not as risky as it could be, since I gave this customer the expected and the maximum cost of the project, so the customer is sure to never be asked to pay more than the maximum amount, specified in the contract, even if the real work costs more. 
⁴ One case when I effectively record on my own initiative the video of actions is when I have to do some manipulations directly on a production server of a customer, especially when it comes to security issues. Recording those steps may be a good idea to know precisely what was done, and also ensure that there were no errors in my work, or see what those errors were. First update Since the question attracted much more attention and had many more answers than I expected, I imagine that it can be relevant to other people, so here is an update. First, to summarize the answers and the comments, it was suggested to (ordered randomly): Suggest other ways of tracking, as shown in Twitter Code Swarm video , or deliver a "short milestone with a simple, clear deliverable, followed by more complex milestones", etc . Explain that video is not a reliable source and can be faked, and that it would be difficult to implement, especially for support. Explain that video is not a reliable source since it shows only a small part of the work: a large amount of work is done without using a computer, not counting the extra hours spent thinking about a solution to a problem. Stick with the contract; if the customer wants to change it, he must expect new negotiations and a higher price. Do the video, "but require that the customer put [the] entire fee into an escrow account", require a lawyer to video tape all billable time, etc., in other words, "operate in an environment void of trust", requiring the customer to support the additional cost. Search for the laws which forbid this. Several people asked in what country I live. I'm in France. Such laws exist to protect the employees of a company (there is a strict regulation about security cameras etc., but I'm pretty sure nothing forbids a freelancer to sign consciously a contract which forces him to record the screen while he works on a project. Just do it and send the videos: the customer will "watch a few ten second snippets of activity he won't understand", then throw those videos away. Say no. After all, it's my business, and I'm the only one to decide how to conduct it. Also, the contract is already signed, and has nothing about video tracking. Say no. The processes and practices I employ in my company can be considered as trade secrets and are or can be classified. Quit. If the relation starts like this, chances are it will end badly sooner or later. Also , "if he's treating you like a thief - and that is what he's suggesting - then it's just going to get worse later when XYZ feature doesn't work exactly the way he envisioned". While all those suggestions are equally valuable, I've personally chosen to say to my customer that I accept to do the videos, but in this case, we must renegotiate the contract , keeping in mind that there will be a considerable cost, including the additional fee for copyright release . The new overall cost would be on average three times the actual cost of the project. Knowing this customer, I'm completely sure that he would never accept to pay so much, so the problem is solved. Second update The customer effectively declined the proposal to renegotiate the original contract, taking into account the considerable additional cost. He agreed to continue the project without video recording. The project continued without video recording. Eventually, the customer seemed satisfied by the final result and cost, so the video recording was not mentioned again. | (Or, the flip-side of my previous advice...) 
You stop giving protestations, and say yes. "Yes, I would be happy to write a new contract for these additional deliverables. Project-complete tutelage in my proprietary tradecraft is valued at (value of my projected income for the next $N years). There will also be a licensing fee $Y, for physical file ownership rights. If you would like to also own the video's content, I'll get back to you shortly with an additional fee for copyright release." Lest you think that preposterous: seriously, what price makes it worthwhile to risk your business? A competitor could use that video to criticize, mimic, or undercut your practices. The client could edit it to make you look dishonest. You've sacrificed the potential to monetize your business through video tutorials if he chooses to post excerpts of this one for free (or heck, what if he sold them?). The value of a work product is not equal to the value of (work product + expertise + work processes). An employer gets to own and direct all of these. A client only gets to ask "Do you offer ___, and if so, what do you charge for it?" So, yep, these are reasonable terms for accommodating an unreasonable request. BUT unless he accepts those terms without further howling, I still say a flat "no" is the most persuasive you can possibly be that what he wants is infeasible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110487",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
110,518 | When developing for embedded devices and other odd worlds, it's very likely your build process will include multiple proprietary binaries, using very specific versions of them.
So the question is, are they part of your source control? My office goes by the rule of "checking out from source control includes everything you need to compile the code" and this has led to some serious arguments. The main arguments I see against this are bloating the source control DB and the lack of diffing for binary files (see prior questions on the subject). This is set against the ability to check out and build, knowing you have the precise environment the previous developer intended, without hunting down the appropriate files (with specific versions no less!) | The idea of VERSION CONTROL (misnomer: source control) is to allow you to roll back through history, recover the effect of changes, and see what changed and why. This is a range of requirements, some of which need binary thingies, some of which don't. Example: For embedded firmware work, you will normally have a complete toolchain: either a proprietary compiler that cost a lot of money, or some version of gcc. In order to get the shipping executable you need the toolchain as well as the source. Checking toolchains into version control is a pain, the diff utilities are horrible (if they exist at all), but there is no alternative. If you want the toolchain preserved for the guy who comes to look at your code in 5 years time to figure out what it does, then you have no choice: you MUST have the toolchain under version control as well. I have found over the years that the simplest method to do this is to make a ZIP or ISO image of the installation CD and check this in. The checkin comment needs to be the specific maker's version number of the toolchain. If gcc or similar, then bundle up everything you are using into a big ZIP and do the same. The most extreme case I've done is Windows XP Embedded where the "toolchain" is a running Windows XP VM, which included (back then) SQL Server and a stack of configuration files along with hundreds and hundreds of patch files. Installing the whole lot and getting it up to date used to take about 2-3 days. Preserving that for posterity meant checking the ENTIRE VM into version control. Seeing as the virtual disk was made up of about 6 x 2GB images, it actually went in quite well. Sounds over the top, but it made life very easy for the person who came after me and had to use it - 5 years later. Summary: Version control is a tool. Use it to be effective, don't get hung up about things like the meaning of words, and don't call it "source control" because it's bigger than that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110518",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10866/"
]
} |
110,606 | A great lathe operator commands several times the wage of an average lathe operator, but a great writer of software code is worth 10,000 times the price of an average software writer. - Bill Gates Say there's a "great" software engineer and an "average" software engineer on the same team. How can you account for one engineer being 10,000 times more productive? I can't quite fathom this, given they're both taking on their share of features, bugs and investigations, and consistently deliver with quality. Would my description possibly justify them to be above "average"? "great"? | The point of the quote isn't that one is 10K times more productive, it's that one is 10K times the worth of the other. Software has the unique condition where a defective design or implementation can lay dormant for years (a part that is machined wrong will usually just "not work" and not make it into the field), well into the life-cycle of the product until one day it rears its head in an intractable situation. Everyone should be familiar with the exponential cost of fixing a defect as it moves from design, to implementation to testing to production to maintenance. When you account for possible liability as well as corporate reputation, it is easy to conclude that the developer who knew enough to avoid the problem is worth 10,000 times the one who ignorantly or naively implemented a poor solution. Edit (Spring 2014): "Heartbleed" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110606",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37380/"
]
} |
110,634 | Sometimes Java outperforms C++ in benchmarks. Of course, sometimes C++ outperforms. See the following links: http://keithlea.com/javabench/ http://blog.dhananjaynene.com/2008/07/performance-comparison-c-java-python-ruby-jython-jruby-groovy/ http://blog.cfelde.com/2010/06/c-vs-java-performance/ But how is this even possible? It boggles my mind that interpreted bytecode could ever be faster than a compiled language. Can someone please explain? Thanks! | First, most JVMs include a compiler, so "interpreted bytecode" is actually pretty rare (at least in benchmark code -- it's not quite as rare in real life, where your code is usually more than a few trivial loops that get repeated extremely often). Second, a fair number of the benchmarks involved appear to be quite biased (whether by intent or incompetence, I can't really say). Just for example, years ago I looked at some of the source code linked from one of the links you posted. It had code like this: init0 = (int*)calloc(max_x,sizeof(int));
init1 = (int*)calloc(max_x,sizeof(int));
init2 = (int*)calloc(max_x,sizeof(int));
for (x=0; x<max_x; x++) {
init2[x] = 0;
init1[x] = 0;
init0[x] = 0;
} Since calloc provides memory that's already zeroed, using the for loop to zero it again is obviously useless. This was followed (if memory serves) by filling the memory with other data anyway (and no dependence on it being zeroed), so all the zeroing was completely unnecessary anyway. Replacing the code above with a simple malloc (like any sane person would have used to start with) improved the speed of the C++ version enough to beat the Java version (by a fairly wide margin, if memory serves). Consider (for another example) the methcall benchmark used in the blog entry in your last link. Despite the name (and how things might even look), the C++ version of this is not really measuring much about method call overhead at all. The part of the code that turns out to be critical is in the Toggle class: class Toggle {
public:
Toggle(bool start_state) : state(start_state) { }
virtual ~Toggle() { }
bool value() {
return(state);
}
virtual Toggle& activate() {
state = !state;
return(*this);
}
bool state;
}; The critical part turns out to be the state = !state; . Consider what happens when we change the code to encode the state as an int instead of a bool : class Toggle {
enum names{ bfalse = -1, btrue = 1};
const static names values[2];
int state;
public:
Toggle(bool start_state) : state(values[start_state])
{ }
virtual ~Toggle() { }
bool value() { return state==btrue; }
virtual Toggle& activate() {
state = -state;
return(*this);
}
}; This minor change improves the overall speed by about a 5:1 margin . Even though the benchmark was intended to measure method call time, in reality most of what it was measuring was the time to convert between int and bool . I'd certainly agree that the inefficiency shown by the original is unfortunate -- but given how rarely it seems to arise in real code, and the ease with which it can be fixed when/if it does arise, I have a difficult time thinking of it as meaning much. In case anybody decides to re-run the benchmarks involved, I should also add that there's an almost equally trivial modification to the Java version that produces (or at least at one time produced -- I haven't re-run the tests with a recent JVM to confirm they still do) a fairly substantial improvement in the Java version as well. The Java version has an NthToggle::activate() that looks like this: public Toggle activate() {
this.counter += 1;
if (this.counter >= this.count_max) {
this.state = !this.state;
this.counter = 0;
}
return(this);
} Changing this to call the base function instead of manipulating this.state directly gives quite a substantial speed improvement (though not enough to keep up with the modified C++ version). So, what we end up with is a false assumption about interpreted byte codes vs. some of the worst benchmarks (I've) ever seen. Neither is giving a meaningful result. My own experience is that with equally experienced programmers paying equal attention to optimizing, C++ will beat Java more often than not -- but (at least between these two), the language will rarely make as much difference as the programmers and design. The benchmarks being cited tell us more about the (in)competence/(dis)honesty of their authors than they do about the languages they purport to benchmark. [Edit: As implied in one place above but never stated as directly as I probably should have, the results I'm quoting are those I got when I tested this ~5 years ago, using C++ and Java implementations that were current at that time. I haven't rerun the tests with current implementations. A glance, however, indicates that the code hasn't been fixed, so all that would have changed would be the compiler's ability to cover up the problems in the code.] If we ignore the Java examples, however, it is actually possible for interpreted code to run faster than compiled code (though difficult and somewhat unusual). The usual way this happens is that the code being interpreted is much more compact than the machine code, or it's running on a CPU that has a larger data cache than code cache. In such a case, a small interpreter (e.g., the inner interpreter of a Forth implementation) may be able to fit entirely in the code cache, and the program it's interpreting fits entirely in the data cache. The cache is typically faster than main memory by a factor of at least 10, and often much more (a factor of 100 isn't particularly rare any more). So, if the cache is faster than main memory by a factor of N, and it takes fewer than N machine code instructions to implement each byte code, the byte code should win (I'm simplifying, but I think the general idea should still be apparent). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110634",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37397/"
]
} |
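For concreteness, this is one way the "call the base function instead of manipulating this.state directly" change mentioned in the answer might look. The benchmark's Java Toggle base class is not shown in the answer, so its shape here is assumed.

```java
// Assumed shape of the benchmark's base class.
class Toggle {
    boolean state;
    Toggle(boolean start_state) { this.state = start_state; }
    public boolean value() { return this.state; }
    public Toggle activate() {
        this.state = !this.state;
        return this;
    }
}

class NthToggle extends Toggle {
    int counter;
    int count_max;
    NthToggle(boolean start_state, int max_counter) {
        super(start_state);
        this.count_max = max_counter;
        this.counter = 0;
    }
    public Toggle activate() {
        this.counter += 1;
        if (this.counter >= this.count_max) {
            super.activate();   // delegate to the base class instead of flipping this.state directly
            this.counter = 0;
        }
        return this;
    }
}
```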
110,645 | I am reading the official Your first NHibernate based application . While the tutorial is good and easy to follow, I am wondering why the Repository pattern is used. In the various Add , Update , Remove methods in the ProductRepository implementation, the code is nearly identical - they are all using transactions, and the difference is in the "meat" i.e. call session.Save int the Add method, session.Delete in the remove method.
( The page lacks HTML anchors, but you can search the page for the relevant code like public void Remove , public void Add ) That code just "feels wrong". Why is the author using the Repository pattern - is it just for demonstration of using NHibernate or is that required or some other reason? Ps. My background is from Ruby on Rails using ActiveRecord so I'm trying to make sense of how NHibernate works/is used. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110645",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5580/"
]
} |
110,730 | Possible Duplicate: Defend zero-based arrays I'm running code that loops through an array of HTML IDs. With the HTML IDs named content1, content2, …, content12, my loop looks like: for (var i = 1; i<13; i++) {
var contentFind = $('#content'+i)[0]; I know that the usual way to run loops is to start at i = 0. Is there any benefit to this, or is it just the traditional, standard or normal way to write loops? | Start your for loops wherever you have to start them. The reason a lot of for loops start at 0 is that they're looping over arrays, and in most languages (including JavaScript) arrays start at index 0. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110730",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37429/"
]
} |
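The question's loop is jQuery, but the same reconciliation works in any language: keep the loop 0-based to match the array, and derive the 1-based ID inside the body. A small Java sketch (the array size and ID prefix are taken from the question; everything else is invented for illustration):

```java
// Bridging a 0-based array and 1-based element IDs.
public class ContentIds {
    public static void main(String[] args) {
        String[] contents = new String[12];
        for (int i = 0; i < contents.length; i++) {
            String id = "content" + (i + 1);   // content1 .. content12
            contents[i] = id;                  // whatever per-ID lookup or work you need
        }
        System.out.println(String.join(", ", contents));
    }
}
```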
110,797 | My web server uses PHP as do 77.7% of web servers according to W3Techs , as of 14/03/2022. The reason I use PHP is an inertia born out of seeing everyone else using it on web servers. What is it about PHP that would make it so ubiquitous on web servers? (Note that this question is similar to the following question but takes it in a different direction: Why isn't Java used for modern web application development? ) | PHP is a language specifically designed for web development with built-in support for MySQL, the most popular open source database. Easy to start with: As a beginner it is easy to start with PHP. The user just has to add a few PHP-tags in their existing HTML files and upload it to the server and see the result. Dynamic typing and associative arrays also make it easier to start using PHP. Easy to use: Compared to other solutions like Java, PHP doesn't need to be compiled, so you just need to write the script and upload it to the server. Integrated database support: PHP has built-in support for some of the most popular databases such as MySQL, PostgreSQL, SQLite, Microsoft SQL Server, IBM, and Oracle. That means it is easy to start using databases as no additional drivers need to be installed. The easy-to-use web-based admin tool PHPMyAdmin (released in 1998) is also a significant reason for PHP's success. Old language with a big user base: PHP became popular early (in 1995) since it was designed for web development. Since then, the user base has grown and now there are many PHP frameworks and libraries, such as WordPress , Woocommerce , Magento , MediaWiki (which powers Wikipedia), Laravel , Bagisto , and Statamic . Cheap hosting: Since PHP has existed for long time and works well on both Linux and Windows, many web servers support it. There is no problem finding hosting services with PHP pre-installed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110797",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37397/"
]
} |
110,804 | A question asked here reminded me of a discussion I had with a fellow programmer. He argued that zero-based arrays should be replaced with one-based arrays since arrays being zero-based is an implementation detail that originates from the way arrays and pointers and computer hardware work, but these sort of stuff should not be reflected in higher level languages. Now I am not really good at debating so I couldn't really offer any good reasons to stick with zero-based arrays other than they sort of feel like more appropriate. Why is zero the common starting point for arrays? | I don't think any of us can provide a stronger argument than Edsger W. Dijkstra's article "Why numbering should start at zero" . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110804",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1091/"
]
} |
110,902 | As I get more and more involved with the theory behind programming, I find myself fascinated and dumbfounded by seemingly simple things..
I realize that my understanding of the majority of fundamental processes is justified through circular logic Q : How does this work? A : Because it does! I hate this realization! I love knowledge, and on top of that I love learning, which leads me to my question (albeit it's a broad one). Question: How are fundamental mathematical operators assessed with programming languages? How have current methods been improved? Example var = 5 * 5; My interpretation: $num1 = 5; $num2 = 5; $num3 = 0;
while ($num2 > 0) {
$num3 = $num3 + $num1;
$num2 = $num2 - 1;
}
echo $num3; This seems to be highly inefficient. With higher factors, this method is very slow, while the standard built-in method is instantaneous. How would you simulate multiplication without iterating addition? var = 5 / 5; How is this even done? I can't think of a way to literally split 5 into 5 equal parts. var = 5 ^ 5; Iterations of iterations of addition?
My interpretation: $base = 5;
$mod = 5;
$num1 = $base;
while ($mod > 1) {
$num2 = 5; $num3 = 0;
while ($num2 > 0) {
$num3 = $num3 + $num1;
$num2 = $num2 - 1;
}
$num1 = $num3;
$mod -=1;
}
echo $num3; Again, this is EXTREMELY inefficient, yet I can't think of another way to do this.
This same question extends to all mathematical related functions that are handled automagically. | To really understand how arithmetic works inside a computer you need to have programmed in assembly language. Preferably one with a small word size and without multiplication and division instructions. Something like the 6502. On the 6502, virtually all arithmetic is done in a register called the Accumulator. (A register is a special memory location inside the processor that can be accessed quickly.) So to add two numbers, you load the first number into the Accumulator, then add the second number to it. But that's oversimplifying. Because the 6502 is an 8-bit processor, it can handle numbers only from 0 to 255. Most of the time you will want to be able to work with larger numbers. You have to add these in chunks, 8 bits at a time. The processor has a Carry flag that is set when the result of adding two numbers overflows the Accumulator. The processor adds that in when doing an addition, so it can be used to "carry the 1" assuming you start with the lowest-order byte of a number. A multi-byte add on the 6502 looks like this: Clear carry flag (CLC) Load lowest-order-byte of first number (LDA, load accumulator) Add lowest-order-byte of second number (ADC, add with carry) Store lowest-order byte of result (STA, store accumulator) Repeat steps 2-4 with successively higher-order bytes If at the end, the carry is set, you have overflowed; take appropriate action, such as generating an error message (BCS/BCC, branch if carry set/clear) Subtraction is similar except you set the carry first, use the SBC instruction instead of ADC, and at the end the carry is clear if there was underflow. But wait! What about negative numbers? Well, with the 6502 these are stored in a format called two's complement. Assuming an 8-bit number, -1 is stored as 255, because when you add 255 to something, you get one less in the Accumulator (plus a carry). -2 is stored as 254 and so on, all the way down to -128, which is stored as 128. So for signed integers, half the 0-255 range of a byte is used for positive numbers and half for negative numbers. (This convention lets you just check the high bit of a number to see if it's negative.) Think of it like a 24-hour clock: adding 23 to the time will result in a time one hour earlier (on the next day). So 23 is the clock's modular equivalent to -1. When you are using more than 1 byte you have to use larger numbers for negatives. For example, 16-bit integers have a range of 0-65536. So 65535 is used to represent -1, and so on, because adding 65535 to any number results in one less (plus a carry). On the 6502 there are only four arithmetic operations: add, subtract, multiply by two (shift left), and divide by two (shift right). Multiplication and division can be done using only these operations when dealing in binary. For example, consider multiplying 5 (binary 101) and 3 (binary 11). As with decimal long multiplication, we start with the right digit of the multiplier and multiply 101 by 1, giving 101. Then we shift the multiplicand left and multiply 1010 by 1, giving 1010. Then we add these results together, giving 1111, or 15. Since we are ever only multiplying by 1 or 0, we don't really multiply; each bit of the multiplier simply serves as a flag which tells us whether to add the (shifted) multiplicand or not. Division is analogous to manual long division using trial divisors, except in binary. 
If you are dividing by a constant, it is possible to do this in a way analogous to subtraction: rather than dividing by X, you multiply by a precalculated rendition of 1/X that produces the desired result plus an overflow. Even today, this is faster than division. Here's a page showing some actual 6502 code for multiplication and division, including a nicely optimized multiplication routine that uses a 2 kilobyte lookup table. http://nparker.llx.com/a2/mult.html Now try doing floating-point math in assembly, or converting floating-point numbers to nice output formats in assembly. And do remember, it's 1979 and the clock speed is 1 MHz, so you must do it as efficiently as possible. Things still work pretty much like this today, except with bigger word sizes and more registers, and of course most of the math is done by hardware now. But it's still done in the same fundamental way. If you add up the number of shifts and adds required for a multiply, for example, it correlates rather well to the number of cycles required for a hardware multiply instruction on early processors having such an instruction, such as the 6809, where it was performed in microcode in much the same way you would do it manually. (If you have a larger transistor budget, there are faster ways to do the shifts and adds, so modern processors do not perform these operations sequentially and can perform multiplications in as little as a single cycle.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110902",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37474/"
]
} |
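A compact version of the shift-and-add multiplication described in the answer, written in Java rather than 6502 assembly so the loop structure is easy to see. Each bit of the multiplier only decides whether the current (shifted) multiplicand gets added; negative inputs and overflow handling are ignored in this sketch.

```java
public class ShiftAddMultiply {
    static long multiply(long multiplicand, long multiplier) {
        long result = 0;
        while (multiplier != 0) {
            if ((multiplier & 1) == 1) {   // low bit set: add the current multiplicand
                result += multiplicand;
            }
            multiplicand <<= 1;            // shift left = multiply by two
            multiplier >>>= 1;             // move on to the next bit of the multiplier
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(multiply(5, 3));   // 15, built from 101 + 1010 in binary
        System.out.println(multiply(25, 25)); // 625
    }
}
```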
110,933 | I was reading a book today called "Clean Code" and I came across a paragraph where the author was talking about the levels of abstraction per function; he classified some code as low/intermediate/high level of abstraction. My question is: what are the criteria for determining the level of abstraction? I quote the paragraph from the book: In order to make sure our functions are doing “one thing,” we need to make sure that the
statements within our function are all at the same level of abstraction. It is easy to see how
Listing 3-1 violates this rule. There are concepts in there that are at a very high level of
abstraction, such as getHtml(); others that are at an intermediate level of abstraction, such
as: String pagePathName = PathParser.render(pagePath); and still others that are remarkably
low level, such as: .append("\n"). | The author explains that in the "Reading Code from Top to Bottom" subsection of the part that talks about abstractions (hierarchical indentation mine): [...] we want to be able to read the program as though it were a set of TO paragraphs, each of which is describing the current level of abstraction and referencing subsequent TO paragraphs at the next level down. To include the setups and teardowns, we include setups, then we include the test page content, and then we include the teardowns. To include the setups, we include the suite setup if this is a suite, then we include the regular setup. To include the suite setup, we search the parent hierarchy for the "SuiteSetUp" page and add an include statement with the path of that page. To search the parent ... The code that'd go along with this would be something like this: public void CreateTestPage()
{
IncludeSetups();
IncludeTestPageContent();
IncludeTeardowns();
}
public void IncludeSetups()
{
if(this.IsSuite())
{
IncludeSuiteSetup();
}
IncludeRegularSetup();
}
public void IncludeSuiteSetup()
{
var parentPage = FindParentSuitePage();
// add include statement with the path of the parentPage
} And so on. Every time you go deeper down the function hierarchy, you should be changing levels of abstraction. In the example above, IncludeSetups , IncludeTestPageContent and IncludeTeardowns are all at the same level of abstraction. In the example given in the book, the author's suggesting that the big function should be broken up into smaller ones that are very specific and do one thing only. If done right, the refactored function would look similar to the examples here. (The refactored version is given in Listing 3-7 in the book.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110933",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13639/"
]
} |
110,936 | When I first started programming Javascript after primarily dealing with OOP in context of class-based languages, I was left confused as to why prototype-based OOP would ever be preferred to class-based OOP. What are the structural advantages to using prototype-based OOP, if any? (e.g. Would we expect it to be faster or less memory intensive in certain applications?) What are the advantages from a coder's perspective? (e.g. Is it easier to code certain applications or extend other people's code using prototyping?) Please don't look at this question as a question about Javascript in particular (which has had many faults over the years that are completely unrelated to prototyping). Instead, please look at it in context of the theoretical advantages of prototyping vs classes. Thank you. | I had quite a lot of experience of the both approaches when writing an RPG game in Java. Originally I wrote the whole game using class-based OOP, but eventually realised that this was the wrong approach (it was becoming unmaintainable as the class hierarchy expanded). I therefore converted the whole code base to prototype-based code. The result was much better and easier to manage. Source code here if you are interested ( Tyrant - Java Roguelike ) Here are the main benefits: It's trivial to create new "classes" - just copy the prototype and change a couple of properties and voila... new class. I used this to define new type of potion for example in 3-6 lines of Java each. Much better than a new class file and loads of boilerplate! It's possible to build and maintain extremely large numbers of "classes" with comparatively little code - Tyrant for example had something like 3000 different prototypes with only about 42,000 lines of code total. That's pretty amazing for Java! Multiple inheritance is easy - you just copy a subset of the properties from one prototype and paste them over the properties in another prototype. In an RPG for example, you might want a "steel golem" to have some of the properties of a "steel object" and some of the properties of a "golem" and some of the properties of an "unintelligent monster". Easy with prototypes, try doing that with an inheritance heirarchy...... You can do clever things with property modifiers - by putting clever logic in the generic "read property" method, you can implement various modifiers. For example, it was easy to define a magic ring that added +2 strength to whoever was wearing it. The logic for this was in the ring object, not in the "read strength" method, so you avoided having to put lots of conditional tests elsewhere in your code base (e.g. "is the character wearing a ring of strength increase?") Instances can become templates for other instances - e.g. if you want to "clone" an object it is easy, just use the existing object as the prototype for the new object. No need to write lots of complex cloning logic for different classes. It's quite easy to change behaviour at runtime - i.e. you can change an properties and "morph" an object pretty much arbitrarily at runtime. Allows for cool in-game effects, and if you couple this with a "scripting language" then pretty much anything is possible at runtime. It's more suited to a "functional" style of programming - you tend to find yourself writing lots of functions that analyse objects an act appropriately, rather than embedded logic in methods attached to specific classes. I personally prefer this FP style. 
Here are the main drawbacks: You lose the assurances of static typing - since you are effectively creating a dynamic object system. This tends to mean that you need to write more tests to ensure behaviour is correct and that objects are of the right "kind" There is some performance overhead - since reads of object properties are generally forced to go through one or more map lookups, you pay a slight cost in terms of performance. In my case it wasn't a problem, but it could be an issue in some cases (e.g. a 3D FPS with a lot of objects being queried in every frame) Refactorings don't work the same way - in a prototype-based system you are essentially "building" your inheritance hierarchy with code. IDEs / refactoring tools can't really help you since they can't grok your approach. I never found this a problem, but it could get out of hand if you are not careful. You probably want tests to check that your inheritance hierarchy is being constructed correctly! It's a bit alien - people used to a conventional OOP style may easily get confused. "What do you mean there's only one class called "Thing"?!?" - "How do I extend this final Thing class!?!" - "You are violating OOP principles!!!" - "It's wrong to have all these static functions that act on any kind of object!?!?" Finally some implementation notes: I used a Java HashMap for properties and a "parent" pointer for the prototype. This worked fine but had the following downsides: a) property reads sometimes had to trace back through a long parent chain, hurting performance b) if you mutated a parent prototype, the change would affect all children that had not overridden the changing property. This can cause subtle bugs if you are not careful! If I were doing this again, I would use an immutable persistent map for the properties (kind of like Clojure's persistent maps), or my own Java persistent hash map implementation. Then you would get the benefit of cheap copying / changes coupled with immutable behaviour, and you wouldn't need to permanently link objects to their parents. You can have fun if you embed functions / methods into object properties. The hack that I used in Java for this (anonymous subtypes of a "Script" class) wasn't very elegant - if doing this again I'd probably use a proper easily-embeddable language for scripts (Clojure or Groovy) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110936",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37397/"
]
} |
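A stripped-down sketch of the implementation notes at the end of the answer (a HashMap of properties plus a parent pointer), in Java. The class and property names are invented for this example; the Tyrant source linked in the answer is the real thing.

```java
import java.util.HashMap;
import java.util.Map;

// Properties live in a map; reads fall back to the parent prototype.
public class ProtoObject {
    private final ProtoObject parent;
    private final Map<String, Object> properties = new HashMap<>();

    public ProtoObject(ProtoObject parent) { this.parent = parent; }

    public void set(String key, Object value) { properties.put(key, value); }

    public Object get(String key) {
        if (properties.containsKey(key)) return properties.get(key);
        return parent != null ? parent.get(key) : null;   // walk the parent chain
    }

    public static void main(String[] args) {
        ProtoObject golem = new ProtoObject(null);
        golem.set("strength", 10);
        golem.set("material", "clay");

        ProtoObject steelGolem = new ProtoObject(golem);  // a new "class" is just a new prototype
        steelGolem.set("material", "steel");              // override a single property

        System.out.println(steelGolem.get("material"));   // steel
        System.out.println(steelGolem.get("strength"));   // 10, inherited from the parent
    }
}
```

Defining a new "class" or cloning an instance is then just creating a ProtoObject with an existing one as its parent and overriding a property or two, which is the low-ceremony extension the answer describes.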
110,979 | I have programmed pretty much exclusively in compiled languages, particularly Java, for most of my career. One of my favourite things about Java is how productive you can be, and how little code you actually have to write, when using tools like Eclipse. You can: Easily and automatically refactor your methods and classes View instantly all the places where a method is invoked, or a constant is used (Open Call Hierarchy/Show References) Static typing means you can use code completion to show all the parameters/functions available on an object Control-click on a function/member/class name to go straight to its definition All these facilities make me feel like the IDE is my best friend. Writing Java code and particularly understanding other peoples' programs becomes far easier. However, I am being called on more and more to use Javascript, and my experience so far has been quite negative. In particular: No immediate way of finding a function's entry point
(other than a plain text search, which may then result in a subsequent searches for methods further up the call hierarchy, after two or three of which you've forgotten where you started) Parameters are passed in to functions, with no way of knowing what properties and functions are available on that parameter
(other than actually running the program, navigating to the point at which the function is called, and using console.logs to output all the properties available) Common usage of anonymous functions as callbacks, which frequently leads to a spaghetti of confusing code paths, that you can't navigate around quickly. And sure, JSLint catches some errors before runtime, but even that's not as handy as having red wavy lines under your code directly in the browser. The upshot is that you pretty much need to have the entire program in your head at all times. This massively increases the cognitive load for writing complex programs. And all this extra stuff to worry about leaves less room in my brain for actual creativity and problem solving. Sure, it's faster to just throw an object together rather than write an entire formal class definition. But while programs may be slightly easier and quicker to write, in my experience they are far harder to read and debug. My question is, how do other programmers cope with these issues? Javascript is clearly growing in popularity, and the blogs I read are about how productive people are being with it, rather than desperately trying to find solutions to these issues. GWT allows you to write code for a Javascript environment in Java instead, but doesn't seem to be as widely used as I would expect; people actually seem to prefer Javascript for complex programs. What am I missing? | The IDE-based niceties are not available* in a dynamic language such as javascript. You have to learn to do without them. You'll have to replace tool support with better design. Use a module pattern -- either by hand, or with a tool like requirejs . Keep the modules small, so that you can reason about them easily. Don't define as many types -- use anonymous objects created close to the point of call. Then you can look at the caller and the callee and know what's going on. Try to avoid coupling your code to the DOM -- Try hard to limit the amount of DOM manipulation you do in your code. If you can pass in selectors or jQuery collections, do that rather than having your code know about the page structure. * If you're using a popular library, you can get fake autocomplete, but it's more like "show all jquery methods" than like "what properties does this object have". It saves typing, but offers no guarantee of correctness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110979",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37453/"
]
} |
110,987 | When you track down and fix a regression—i.e. a bug that caused previously working code to stop working—version control makes it entirely possible to look up who committed the change that broke it. Is it worth doing this? Is it constructive to point this out to the person that made the commit? Does the nature of the mistake (on the scale of simple inattentiveness to fundamental misunderstanding of the code they changed) change whether or not it's a good idea? If it is a good idea to tell them, what are good ways to do it without causing offense or causing them to get defensive? Assume, for the sake of argument, that the bug is sufficiently subtle that the CI server's automated tests can't pick it up. | Be assertive not aggressive. Always favour saying something akin to "this piece of code is not working" vs "your code is not working". Criticise the code, not the person who wrote the code. Better yet, if you can think of a solution, fix it and push to them -- assuming you have a distributed version control system. Then ask them if your fix is valid for the bug they were fixing. Overall, try to increase both your and their knowledge of programming. But do it without your ego getting in the way. Of course, you should be willing to listen to other developers coming to you with the same problem and act how you would have wished they did. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/110987",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17701/"
]
} |
111,021 | While responding to this question , I began to wonder why so many developers believe a good design should not account for performance because doing so would affect readability and/or maintainability. I believe that a good design also takes performance into consideration at the time it is written, and that a good developer with a good design can write an efficient program without adversely affecting readability or maintainability. While I acknowledge that there are extreme cases, Why do many developers insist an efficient program/design will result in poor readability and/or poor maintainability, and consequently that performance should not be a design consideration? | I think such views are usually reactions to attempts at premature (micro-)optimization , which is still prevalent, and usually does way more harm than good. When one tries to counter such views, it is easy to fall into - or at least look like - the other extreme. It is nevertheless true that with the enormous development of hardware resources in recent decades, for most of the programs written today, performance ceased to be a major limiting factor. Of course, one should take into account expected and achievable performance during design phase, in order to identify the cases when performance may be(come) a major issue . And then it is indeed important to design for performance from the beginning. However, overall simplicity, readability and maintainability is still more important . As others noted, performance optimized code is more complex, harder to read and maintain, and more bug-prone than the simplest working solution. Thus any effort spent on optimization must be proven - not just believed - to bring real benefits, while degrading the long term maintainability of the program as little as possible. So a good design isolates the complex, performance intensive parts from the rest of the code , which is kept as simple and clean as possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17203/"
]
} |
111,090 | I consider myself an intermediate Python programmer and have been offered an opportunity to be a trainer for a beginner Python programming class. I was wondering if this would really widen my programming repertoire. Has somebody had an enlightening experience after they successfully trained a group of people? Does it also depend on those people -- whether they're programmers or noob students? (In my case they are intermediate .NET and Java programmers) What should I expect from them? One of my fears is -- what if I choked when one of them asked a tangled question. Is this normal? | In my experience, teaching programming did make me better. It forced me to get a much better understanding of concepts I had previously just accepted or taken for granted. When I had to articulate ideas that were old to me but new to students, in a number of different ways (because not everyone learns the same way from the same examples), it eventually led to a deeper understanding of the material for me. And yes, sometimes students ask questions you don't know the answer to. That's OK, you can tell them you don't know, come up with a possible explanation, and promise to look into it before the next class. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111090",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
111,546 | I have done a fair bit of work with relational databases, and think I understand the basic concepts of good schema design pretty well. I recently was tasked with taking over a project where the DB was designed by a highly-paid consultant. Please let me know if my gut intinct - "WTF??!?" - is warranted, or is this guy such a genius that he's operating out of my realm? DB in question is an in-house app used to enter requests from employees. Just looking at a small section of it, you have information on the users, and information on the request being made. I would design this like so: User table: UserID (primary Key, indexed, no dupes)
FirstName
LastName
Department
Request table
RequestID (primary Key, indexed, no dupes)
<...> various data fields containing request details
UserID -- foreign key associated with User table
Simple, right? Consultant designed it like this (with sample data):
UsersTable
UserID FirstName LastName
234 John Doe
516 Jane Doe
123 Foo Bar
DepartmentsTable
DepartmentID Name
1 Sales
2 HR
3 IT
UserDepartmentTable
UserDepartmentID UserID Department
1 234 2
2 516 2
3 123 1
RequestTable
RequestID UserID <...>
1 516 blah
2 516 blah
3 234 blah
The entire database is constructed like this, with every piece of data encapsulated in its own table, with numeric IDs linking everything together. Apparently the consultant had read about OLAP and wanted the 'speed of integer lookups'. He also has a large number of stored procedures to cross-reference all of these tables. Is this valid design for a small to mid-sized SQL DB? Thanks for comments/answers... | Makes perfect sense to me. It's just very normalized, which imparts a lot of flexibility that you wouldn't have otherwise. De-normalized data is a pain in the butt. (A small worked example of how these tables join back together follows this record.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111546",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21083/"
]
} |
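As a follow-up to the normalized-schema question above, here is a small, hypothetical sqlite3 sketch (not the consultant's actual database) showing how the lookup tables join back into a flat view when you need one; the table and column names roughly follow the sample data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Users          (UserID INTEGER PRIMARY KEY, FirstName TEXT, LastName TEXT);
CREATE TABLE Departments    (DepartmentID INTEGER PRIMARY KEY, Name TEXT);
CREATE TABLE UserDepartment (UserDepartmentID INTEGER PRIMARY KEY,
                             UserID INTEGER REFERENCES Users(UserID),
                             DepartmentID INTEGER REFERENCES Departments(DepartmentID));
CREATE TABLE Requests       (RequestID INTEGER PRIMARY KEY,
                             UserID INTEGER REFERENCES Users(UserID),
                             Details TEXT);
INSERT INTO Users VALUES (234, 'John', 'Doe'), (516, 'Jane', 'Doe');
INSERT INTO Departments VALUES (1, 'Sales'), (2, 'HR');
INSERT INTO UserDepartment VALUES (1, 234, 2), (2, 516, 2);
INSERT INTO Requests VALUES (1, 516, 'blah');
""")

# One join reconstitutes the flat, "denormalized" view when it is needed.
rows = con.execute("""
    SELECT r.RequestID, u.FirstName, u.LastName, d.Name
    FROM Requests r
    JOIN Users u           ON u.UserID = r.UserID
    JOIN UserDepartment ud ON ud.UserID = u.UserID
    JOIN Departments d     ON d.DepartmentID = ud.DepartmentID
""").fetchall()
print(rows)   # [(1, 'Jane', 'Doe', 'HR')]
```

Keeping departments in their own table is what lets a department be renamed, or a user moved between departments, with a single-row change.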
111,633 | No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis , that is , optimizing design according to particular uses cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on the Mac at home, values cross-platform and compiler integration and so chooses Emacs over TextMate , etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. Is this in fact the case with version-control systems (VCS), in particular, centralized VCS ( CVS and SVN ) versus distributed VCS ( Git and Mercurial )? I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to Git (and GitHub) for all of my personal projects. I can think of a number of advantages of Git over Subversion (and which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one contra example--some task (that's relevant and arises in a programmers usual workflow) that Subversion does better than Git. The only conclusion I have drawn from this is that I don't have any data--not that Git is better, etc. My guess is that such counter-examples exist, hence this question. | Subversion is a central repository While many people will want to have distributed repositories for the obvious benefits of speed and multiple copies, there are situations where a central repository is more desirable. For example, if you've got some critical piece of code that you don't want anyone to access, you'd probably not want to put it under Git. Many corporations want to keep their code centralized, and (I guess) all (serious) government projects are under central repositories. Subversion is conventional wisdom This is to say that many people (especially managers and bosses) have the usual way to number the versions and seeing the development as a "single line" along time hardcoded into their brain. No offense, but Git's liberality is not easy to swallow. The first chapter of any Git book tells you to blank out all the conventional ideals from your mind and start anew. Subversion does it one way, and nothing else SVN is a version control system. It has one way to do its job and everybody does it the same way. Period. This makes it easy to transition to/from SVN from/to other centralized VCS. Git is NOT even a pure VCS -- it's a file-system, has many topologies for how to set up repositories in different situations -- and there isn't any standard. That makes it harder to choose one. Other advantages are: SVN supports empty directories SVN has better Windows support SVN can check out/clone a sub-tree SVN supports exclusive access control svn lock which is useful for hard-to-merge files SVN supports binary files and large files more easily (and doesn't require copying old versions everywhere). Adding a commit involves considerably fewer steps since there isn't any pull/push and your local changes are always implicitly rebased on svn update . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111633",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8600/"
]
} |
111,692 | I'm the only developer at a small company. I've slowly moved into development here; until ~4 months ago 50-75% of my time was spent on operations. Now, 50-75% of my time is spent on development, with the rest split between operations and various IT stuff. I regularly end up working 50+ hours a week. I inherited some rather poorly-written applications (they were previously maintained by two people) that much of the business relies upon. Keeping these up and running, working on new, smaller applications, and my other responsibilities already take up all my time. In order to be scalable, the existing software needs significant refactoring and additional functionality. I haven't had the pleasure to work on properly written or architected software before. The complexity of this task is well beyond anything I've done before (this is my first job out of college.) I know there's a feverish devotion to self-learning/learning by doing among many here, but this is so beyond my expertise that I wouldn't be doing my employer or myself any favors trying to tackle it alone. I've been very direct about my inexperience, and in the past have mentioned that hiring another, more experienced developer will probably be necessary...if anything, just for the amount of time required for anyone to do the work as we grow and have more software to develop and maintain. I know that I would greatly benefit from hiring another developer; having someone to learn from and bounce ideas off of would be great. StackOverflow is great for determining approaches to individual coding problems or concepts, but is no replacement for discussions on a wider or more significant scale specific to a certain business domain. When mentioning hiring another developer in casual conversation recently, they didn't seem to think it was that important or necessary. tl;dr : Current patch jobs and other responsibilities already take up all my time at work, work on existing applications that needs to be done is beyond my skillset, little chance of me having any time to work on new products that are being planned. Employer initially seems reluctant about hiring another developer. How can I "sell" hiring another developer without sounding like I'm lazy or incompetent (I'd like to think I'm neither!)? edit : Just wanted to clarify that I'm in no way interested in taking any kind of hostile action to prove a point (i.e. taking a vacation to show them they'd be screwed if I wasn't around.) I'm pretty content working here and consider myself to be fairly compensated, even figuring in the overtime, which is why I'm nowhere near considering a new job yet. That said, I accepted the 'no more overtime' answer - even if I don't mind working over too much I'm not doing anyone any favors by doing so (prone to more errors, wear myself out) and it's not really tenable in the short term much less the long term. I'll be stressing this when discussing the matter with my supervisor, and will probably suggest hiring a contractor part-time as an initial approach that's more financially palatable. Thanks for all the great answers. | I regularly end up working 50+ hours a week To me thats all you need to tell your manager. "Im working 50+ hours a week to make sure the work gets done. Im a hard worker but this is unsustainable long term, you should hire another developer". If that dosent work then I suggest you start looking for a new job. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111692",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5747/"
]
} |
111,756 | I am not sure if both terms can be used interchangeably. Maybe there is some academic distinction in computer science which is not relevant for day-to-day programming? Or can I use both term interchangeably without being wrong? Maybe it depends on the context in which I use both terms? Edit: On reason why I find both terms possibly interchangeable is a Wikipedia entry about Abstraction layer . There you can find David Wheelers quote 'All problems in computer science can be solved by another level of indirection.' | Abstraction deals with simplification, indirection deals with location. Abstraction is a mechanism that "hides" complicated details of a object in terms of simpler, easier to manipulate terms. In programming, a good example is the difference in details between machine code and the various tools for creating applications that are ultimately based on machine code. Consider creating a Windows Form application with the Visual Studio IDE. The IDE lets you think of the application in terms of easy-to-manipulate items in a What-You-See-Is-What-You-Get manner. The position of a screen widget is abstracted out to a visual location in a frame which you can change by dragging the widget around. Internally, the IDE manipulates the widget using another layer of abstraction such as a high level language (such as C#). C# itself is not manipulated using machine code, it is manipulated using a "Common Runtime Environment" which itself is an abstraction of a computer and operating system. Indirection refers to making the location of an item transparent. If you know a web resource's URI, you can access the resource without knowing its precise location. You do not access the resource directly, instead you access through a channel that passes your request through a series of servers, applications and routers. Indirection may be considered to be a special type of abstraction where the location is abstracted. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111756",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37459/"
]
} |
111,837 | What is the relation of BDD and TDD? From what I understood BDD adds two main things over TDD: tests naming (ensure/should) and acceptance tests. Should I follow TDD during development by BDD? If yes, should my TDD unit tests be named in the same ensure/should style? | BDD adds a cycle around the TDD cycle. So you start with a behaviour and let that drive your tests, then let the tests drive the development. Ideally, BDD is driven by some kind of acceptance test, but that's not 100% necessary. As long as you have the expected behaviour defined, you're ok. So, let's say that you're writing a Login Page. Start with the happy path: Given that I am on the login page
When I enter valid details
Then I should be logged into the site
And shown my default page This Given-And-When-And-Then-And syntax is common in behaviour-driven development. One of the advantages of it is that it can be read (and, with training, written) by non-developers -- that is, your stakeholders can view the list of behaviours you have defined for successful completion of a task and see if it matches their expectations long before you release an incomplete product. There is a scripting language, known as Gherkin, which looks a lot like the above and allows you to write test code behind the clauses in these behaviours. You should look for a Gherkin-based translator for your usual development framework. That's out of the scope of this answer. Anyway, back to the behaviour. Your current application doesn't do this yet (if it does then why is someone requesting a change?), so you're failing this test, whether you're using a test runner or simply testing manually. So now it's time to switch to the TDD cycle to provide that functionality. Whether you're writing BDD or not, your tests should be named to a common syntax. One of the most common is the "should" syntax you described. Write a test: ShouldAcceptValidDetails. Go through the Red-Green-Refactor cycle until you're happy with it. Do we now pass the behaviour test? If not, write another test: ShouldRedirectToUserDefaultPage. Red-Green-Refactor til you're happy. Wash, rinse, repeat until you fulfil the criteria set out in the behaviour. And then we move on to the next behaviour. Given that I am on the login page
When I enter an incorrect password
Then I should be returned to the login page
And shown the error "Incorrect Password" Now you shouldn't have preempted this to pass your earlier behaviour. You should fail this test at this point. So drop back down to your TDD cycle. And so on until you have your page. Highly recommend The Rspec Book for learning more about BDD and TDD, even if you're not a Ruby developer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111837",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7369/"
]
} |
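To illustrate the inner TDD loop described in the answer above, here is a hypothetical Python unittest sketch; the answer assumes a Ruby/RSpec stack, so the LoginService class and the should-style test names here are invented for illustration only:

```python
import unittest

class LoginService:
    """Toy implementation, grown one failing test at a time."""
    def __init__(self, users):
        self._users = users            # {username: password}

    def login(self, username, password):
        if self._users.get(username) == password:
            return "default_page"
        return ("login_page", "Incorrect Password")

class LoginBehaviour(unittest.TestCase):
    def setUp(self):
        self.service = LoginService({"jane": "s3cret"})

    def test_should_accept_valid_details(self):
        self.assertEqual(self.service.login("jane", "s3cret"), "default_page")

    def test_should_return_to_login_page_with_error_on_bad_password(self):
        self.assertEqual(self.service.login("jane", "wrong"),
                         ("login_page", "Incorrect Password"))

if __name__ == "__main__":
    unittest.main()
```

Each test corresponds to one clause of the behaviour; you would write the first test, watch it fail, make it pass, and only then move on to the next clause.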
111,846 | One of the main goals of software development companies is to increase their Bus factor This is also advocated in a talk that was organized by Google . That means that you should code and document everything in a way that if you're run over by bus tomorrow, the project can still continue. In other words, you should make yourself easily replaceable by another programmer with a similar skill set to yours. Being replacable, isn't that against the interest of a developer? In the book, 48 laws of power Rule 11 states that you should try to keep people dependent on you, in order to gain power, which then translates into monetary rewards. Apart from the scenario, where you need some documentation for yourself in order to continue a project after 6 months of pause, there seems to be a clear conflict of interest here between the developer and the software company. So as a programmer, should you really write excellent documentation and easily readable code for everyone; or should you write code and documentation in a way that it does the job and you yourself can understand it, but another person may have trouble understanding it? | You should strive to become irreplaceable not by writing code noone else understands, but by gathering more experience and knowledge than others. The former way makes you a developer everyone tries to avoid working with, as they will fear and loath maintaining code you wrote. The latter way you become a sought out team member, whom managers want to have in their team. In my experience writing clean, well documented (and preferably self-documenting) code has always paid off. In fact I took almost every opportunity to help and teach others (as well as learning from them when they knew something better), and I hardly ever felt in danger of getting replaced by someone less capable than me. In fact, this usually helped the whole team work better and solve problems faster, which is every manager's dream - and a sensible manager doesn't want to replace members of a good team. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111846",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22695/"
]
} |
111,863 | I hear a lot from TDD practitioners that one of TDD's advantages is that it forces developers to follow SOLID principles (Single responsibility, Open-closed, Liskov substitution, Interface segregation and Dependency inversion). But as for me it is enough to just write some tests (unit test primarily) to understand it is important to follow SOLID (and thus create testable architecture). Does TDD force developers to follow SOLID more actively than just writing unit tests? | First of all, TDD does not strictly force you to write SOLID code. You could do TDD and create one big mess if you wanted to. Of course, knowing SOLID principles helps, because otherwise you may just end up not having a good answer to many of your problems, and hence write bad code accompanied by bad tests. If you already know about SOLID principles, TDD will encourage you to think about them and use them actively. That said, it doesn't necessarily cover all of the letters in SOLID , but it strongly encourages and promotes you to write at least partly SOLID code, because it makes the consequences of not doing so immediately visible and annoying. For example: You need to write decoupled code so you can mock what you need. This supports the Dependency Inversion Principle . You need to write tests that are clear and short so you won't have to change too much in the tests (which can become a large source of code noise if done otherwise). This supports the Single Responsibility Principle . This may be argued over, but the Interface Segregation Principle allows classes to depend on lighter interfaces that make mocking easier to follow and understand, because you don't have to ask "Why weren't these 5 methods mocked as well?", or even more importantly, you don't have a lot of choice when deciding which method to mock. This is good when you don't really want to go over the whole code of the class before you test it, and just use trial and error to get a basic understanding of how it works. Adhering to the Open/Closed principle may well help tests that are written after the code, because it usually allows you to override external service calls in test classes that derive from the classes under test. In TDD I believe this is not as required as other principles, but I may be mistaken. Adhering to the Liskov substitution rule is great if you want to minimize the changes for your class to receive an unsupported instance that just happens to implement the same statically-typed interface, but it's not likely to happen in proper test-cases because you're generally not going to pass any class-under-test the real-world implementations of its dependencies. Most importantly, SOLID principles were made to encourage you to write cleaner, more understandable and maintainable code, and so was TDD. So if you do TDD properly, and you pay attention to how your code and your tests look (and it's not so hard because you get immediate feedback, API and correctness wise), you can worry less about SOLID principles, in general. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111863",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7369/"
]
} |
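As a concrete illustration of the "write decoupled code so you can mock what you need" point in the answer above, here is a hypothetical Python sketch using unittest.mock; ReportService and its mailer collaborator are invented names:

```python
import unittest
from unittest.mock import Mock

class ReportService:
    """Depends on an abstraction (anything with a send method), not a concrete mailer."""
    def __init__(self, mailer):
        self._mailer = mailer          # dependency injected, per Dependency Inversion

    def publish(self, report):
        self._mailer.send(to="team@example.com", body=report)
        return True

class ReportServiceTest(unittest.TestCase):
    def test_publish_sends_the_report(self):
        mailer = Mock()                       # stands in for the real SMTP client
        service = ReportService(mailer)
        self.assertTrue(service.publish("weekly numbers"))
        mailer.send.assert_called_once_with(to="team@example.com", body="weekly numbers")

if __name__ == "__main__":
    unittest.main()
```

Writing the test first makes the hard-wired alternative (constructing the real mailer inside the class) immediately painful, which is the sense in which TDD nudges the design toward SOLID.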
111,938 | This question is subjective but I was just curious how most programmers approach this. The sample below is in pseudo-C# but this should apply to Java, C++, and other OOP languages as well. Anyway, when writing helper methods in my classes, I tend to declare them as static and just pass the fields if the helper method needs them. For example, given the code below, I prefer to use Method Call #2 . class Foo
{
Bar _bar;
public void DoSomethingWithBar()
{
// Method Call #1.
DoSomethingWithBarImpl();
// Method Call #2.
DoSomethingWithBarImpl(_bar);
}
private void DoSomethingWithBarImpl()
{
_bar.DoSomething();
}
private static void DoSomethingWithBarImpl(Bar bar)
{
bar.DoSomething();
}
} My reason for doing this is that it makes it clear (to my eyes at least) that the helper method has a possible side-effect on other objects - even without reading its implementation. I find that I can quickly grok methods that use this practice and thus help me in debugging things. Which do you prefer to do in your own code and what are your reasons for doing so? | This really depends. If the values your helpers operate on are primitives, then static methods are a good choice, as Péter pointed out. If they are complex, then SOLID applies, more specifically the S , the I and the D . Example: class CookieJar {
function takeCookies(count:Int):Array<Cookie> { ... }
function countCookies():Int { ... }
function resupplyCookies(cookies:Array<Cookie>):Void { ... }
... // lot of stuff we don't care about now
}
class CookieFan {
function getHunger():Float;
function eatCookies(cookies:Array<Cookie>):Smile { ... }
}
class OurHouse {
var jake:CookieFan;
var jane:CookieFan;
var cookies:CookieJar;
function makeEveryBodyAsHappyAsPossible():Void {
//perform a lot of operations on jake, jane and the cookies
}
public function cookieTime():Void {
makeEveryBodyAsHappyAsPossible();
}
} This would be about your problem. You can make makeEveryBodyAsHappyAsPossible a static method, that will take in the necessary parameters. Another option is: interface CookieDistributor {
function distributeCookies(to:Array<CookieFan>):Array<Smile>;
}
class HappynessMaximizingDistributor implements CookieDistributor {
var jar:CookieJar;
function distributeCookies(to:Array<CookieFan>):Array<Smile> {
//put the logic of makeEveryBodyAsHappyAsPossible here
}
}
//and make a change here
class OurHouse {
var jake:CookieFan;
var jane:CookieFan;
var cookies:CookieDistributor;
public function cookieTime():Void {
cookies.distributeCookies([jake, jane]);
}
} Now OurHouse need not know about the intricacies of cookie distribution rules. It must only now an object, which implements a rule. The implementation is abstracted away into an object, who's sole responsibility is to apply the rule. This object can be tested in isolation. OurHouse can be tested with using a mere mock of the CookieDistributor . And you can easily decide to change cookie distribution rules. However, take care that you don't overdo it. For example having a complex system of 30 classes act as the implementation of CookieDistributor , where each class merely fulfills a tiny task, doesn't really make sense. My interpretation of the SRP is that it doesn't only dictate that each class may only have one responsibility, but also that a single responsibility should be carried out by a single class. In the case of primitives or objects you use like primitives (for example objects representing points in space, matrices or something), static helper classes make a lot of sense. If you have the choice, and it really makes sense, then you might actually consider adding a method to the class representing the data, e.g. it's sensible for a Point to have an add method. Again, don't overdo it. So depending on your problem, there are different ways to go about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111938",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37853/"
]
} |
111,962 | I'm taking my second course on Java. We are getting into data structures. I have done an assignment on a linked list, and now a stack. I had a hard time with the linked list. The stack gave me a little trouble, but was much easier. Should I be worried about having a hard time with these algorithms and data structures? I just feel like I didn't really grasp it. | I think, you must not accept not understanding these things, because they are really fundamental. That being said, your not understanding them is nothing to feel bad about.
You can explain a linked list to a child. So if your teacher failed to explain them to you, it is as much their fault. So you shouldn't spend time worrying, but rather try to find people, who can explain it to you. Often a fellow student is a far better teacher than a full-time academic. Think of Trains Imagine, you have a set of railway carriages, where each carriage has enough capacity, to contain one piece of data. Each carriage has some sort of hook at it's end, which can be attached to another carriage's front. This in fact gives you a linked list: the empty list: the train containing no carriages (and therefore carrying no data) adding an element: add a new carriage containing the element in front of the train and hook it to the rest of the train removing an element: find the carriage containing the element. Remove it (you might need a crane here :)), hook the carriage before with the carriage after. replacing an element: find the carriage containing the old element. Exchange the old element with the new element. inserting an element right after another: find the carriage containing the element after which you want to insert. Insert a new carriage after it, which is hooked accordingly (we don't want the train to fall apart) and put the the new element into it. In contrast to that, you could think of an array as a train with a given number of carriages, that cannot be rearranged in any way. All you can do is to change the data within them. This model also explains a lot of the problems arrays have: If you want to insert one element before another, you will have to move all the following elements to the next carriage. If you want to remove one element, you will need to move all the following elements one carriage to the front. If you need a train with more carriages, you will have to construct a new one, because you can't just prepend a carriage.
On the other hand, finding carriages in an array is much easier, because you can simply number them permanently (their order will never change). As for the stack: A "stack" is less a data structure, than an idea. The idea of the stack is, that it acts much like a stack of books. You can only put books on top of the stack and you can only ever take the top book off the stack (at least if the books are sufficiently heavy). That being said, a linked list can be used as a stack, if you think of the data in the carriages as books, and the book in the first most carriage as the top of the stack. So I hope this helped you. Maybe it didn't. Maybe you're more of a visual type. In that case, I suggest you find somebody, who's good at giving visual explanations and explain it to you. It won't take long, but it will absolutely be worth it. It's ok to struggle with this now. But merely accepting it, is not an option in the long run. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/111962",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37834/"
]
} |
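If it helps to see the train-of-carriages analogy from the answer above in code, here is a minimal Python sketch (the course in the question uses Java, but the idea carries over directly); Node and Stack are the usual textbook names:

```python
class Node:
    """One carriage: a piece of data plus a hook to the next carriage."""
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

class Stack:
    """A linked list used as a stack: the front of the train is the top of the pile."""
    def __init__(self):
        self.top = None

    def push(self, data):             # couple a new carriage to the front
        self.top = Node(data, self.top)

    def pop(self):                    # take the front carriage off again
        if self.top is None:
            raise IndexError("pop from empty stack")
        data, self.top = self.top.data, self.top.next
        return data

s = Stack()
s.push("red book")
s.push("blue book")
print(s.pop())   # 'blue book' -- last in, first out
print(s.pop())   # 'red book'
```

Every operation is just re-pointing a hook or two, which is why insertion and removal are cheap compared with shuffling elements around in an array.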
112,098 | People who are used to garbage collected languages are often scared of C++'s memory management. There are tools, like auto_ptr and shared_ptr which will handle many of the memory management tasks for you. Lots of C++ libraries predate those tools, and have their own way to handle the memory management tasks. How much time do you spend on memory management tasks? I suspect that it is highly dependent on the set of libraries you use, so please say which ones your answer applies to, and if they make it better or worse. | Modern C++ makes you not worry about memory management until you have to, that is until you need to organize your memory by hand, mostly for optimization purpose, or if the context forces you to do it (think big-constraints hardware). I've written whole games without manipulating raw memory, only worriing about using containers that are the right tool for the job, like in any language. So it depends on the project but most of the time it's not memory management that you have to handle but only object life-time. That is solved using smart pointers , that is one of idiomatic C++ tool resulting from RAII . Once you understand RAII , memory management will not be a problem. Then when you'll need to access raw memory, you'll do it in very specific, localized and identifiable code, like in pool object implementations, not "everywhere". Outside of this kind of code, you'll not need to manipulate memory, only objects lifetime. The "hard" part is to understand RAII. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112098",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27253/"
]
} |
112,145 | It seems that frequently in large projects the software is still released with the bug tracker full of bugs. Now I can understand feature requests, but several times I've seen large numbers of bugs still unresolved, not reviewed, or not finished but a release is still pushed out. Why? Why would an open source project or a project in general be released with known bugs? Why wouldn't they wait until the bug tracker had 0 opened bugs? | Any number of reasons, including: Company had made commitment to user base to release at a particular time Bugs were not mission-critical, or even major New feature development was viewed as more important (whether correctly or not) To a small extent, this is like asking why you work as a programmer even though your programming knowledge isn't "complete". In most complex projects, there will be many, many bugs. Dealing with them while adding new features is a difficult, complex task. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112145",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/66/"
]
} |
112,270 | It's not really a technical question, but there are several other questions here about source control and best practice. The company I work for (which will remain anonymous) uses a network share to host its source code and released code. It's the responsibility of the developer or manager to manually move source code to the correct folder depending on whether it's been released and what version it is and stuff. We have various spreadsheets dotted around where we record file names and versions and what's changed, and some teams also put details of different versions at the top of each file. Each team (2-3 teams) seems to do this differently within the company. As you can imagine, it's an organised mess - organised, because the "right people" know where their stuff is, but a mess because it's all different and it relies on people remembering what to do at any one time. One good thing is that everything is backed up on a nightly basis and kept indefinitely, so if mistakes are made, snapshots can be recovered. I've been trying to push for some kind of managed source control for a while, but I can't seem to get enough support for it within the company. My main arguments are: We're currently vulnerable; at any point someone could forget to do
one of the many release actions we have to do, which could mean whole
versions are not stored correctly. It could take hours or even days
to piece a version back together if necessary We're developing new features along with bug fixes, and often have to delay the release
of one or the other because some work has not been completed yet. We also have to force
customers to take versions that include new features even if they just want a bug fix,
because there's only really one version we're all working on We're experiencing problems with Visual Studio because multiple
developers are using the same projects at the same time (not the same
files, but it's still causing problems) There are only 15 developers, but we all do stuff differently; wouldn't
it be better to have a standard company-wide approach we all have to
follow? My questions are: Is it normal for a group of this size not to have source control? I have so far been given only vague reasons for not having source control - what
reasons would you suggest could be valid for not implementing source control, given the information above? Are there any more reasons for source control that I could add to my
arsenal? I'm asking mainly to get a feel for why I have had so much resistance, so please answer honestly. I'll give the answer to the person I believe has taken the most balanced approach and has answered all three questions. Thanks in advance | It is absolutely not normal for a group that size to be working without source control—the size of the largest group of programmers that can work effectively without source control is less than or equal to one. It’s absolutely inexcusable to work without version control for a professional team of any size, and perhaps I’m not feeling creative, but I can’t come up with any reason why you would want to forgo it. Version control is just another tool—a particularly powerful one, and one which delivers enormous benefits relative to its minimal cost. It gives you the power to finely manage all of your changes in an organised fashion, with all kinds of other handy things like branching, automated merging, tagging, and so on. If you need to build a version from umpteen versions ago, you can check out the code from that point in time and just build without having to jump through any other hoops. More importantly, if you need to write a bugfix, you can merge it into an update without having to deliver the new features you’re working on—because they’re on another branch, and as far as the rest of the development needs to be concerned, they don’t exist yet. You’re experiencing resistance because you’re challenging the culture of the company. It will take time for them to adjust, no matter what you say. The best you can do is keep pushing for it, and if the company really won’t budge, find another job that’s better suited to your level as a developer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112270",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37945/"
]
} |
112,349 | What is a good attitude from developers when discussing new features, and namely, non critical/questionable features? Say you are developing some sort of Java like language, and the boss says: "We need pointers so that developers could fiddle with object memory directly!" Should the developer shoot down the idea because it adds unimaginable complexity and security vulnerabilities, or should he do what's asked? This may not be a good example, but what about things that are more in a gray area, like adding buttons that break workflow, or goes against internal structure of the program, etc.? What is the optimal "can do" vs. "can't do" distribution for a regular programmer? EDIT: The question is not about a bad boss :D I was more interested how do people approach new problems that add a noticeable amount of problems while maybe being marginally useful. Should the general attitude be: yes we'll do it, screw the complexity maybe no, the general rework, and implications don't justify the change What should be the reaction of a good developer? | Best thing is to have a meeting and lay out the pros and cons as a group, and based on that discuss the best solution. If you have a team, get them to agree on solution. Once a team agrees on something, managers and "bosses" tend to go with the solution. If your boss still does not agree, then you've done all you can do: you've gotten your team and managers together and covered the pros and cons and despite that your boss chose a potentially inferior solution. The key to this is discussing the pros and cons as a group. By doing so you are discussing what the best solution is with your team, and at the same time are pointing out your boss's decision (before he makes it) without the political backlash of going around after the fact telling people why you think your bosses decision was the wrong one. This is a tender situation involving work politics, but it can be handled amicably. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112349",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8029/"
]
} |
112,383 | In the book "Hard Code" by Eric Brechner, he states, Lying is one of a handful of valuable process canaries that can warn
you of trouble. I've heard a dev or two toss around the old "canary". What is it? [Google didn't answer it for me. Perhaps my keywords were a poor choice.] | Canaries were once used in coal mines to find out if any poisonous gasses were around (the canary would die - the miners would get out). It was much safer than an open flame. One assumes in this context that if there is much lying, the process is poisonous. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112383",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11107/"
]
} |
112,402 | I recently had to investigate a field issue for our large enterprise application. I was horrified by the logs that I had to comb through in an attempt to find the problem and at the end of the day the logs did not help at all identifying/isolating the bug. Note: I understand not all bugs are discoverable through logs. This does not change the fact that the logs are horrible. There are some obvious problems with our logging that we can already attempt to fix. I do not want to list those here and I cannot simply show you our log files so you can give advice on what to do. Instead, in order to assess how bad we are doing on the logging front, I would like to know: What are some guidelines , if any, when it comes to logging for an application, especially large application. Are there any patterns we should follow or anti-patterns we should be aware of? Is this an important thing to fix or can it even be fixed or all log files are simply huge and you need supplemental scripts to analyze them? Side note: we use log4j. | A few points that my practice proved useful: Keep all logging code in your production code. Have an ability to enable more/less detailed logging in production, preferably per subsystem and without restarting your program. Make logs easy to parse by grep and by eye. Stick to several common fields at the beginning of each line. Identify time, severity, and subsystem in every line. Clearly formulate the message. Make every log message easy to map to its source code line. If an error happens, try to collect and log as much information as possible. It may take long but it's OK because normal processing has failed anyway. Not having to wait when the same condition happens in production with a debugger attached is priceless. Logs are mostly needed for monitoring and troubleshooting. Put yourself in a troubleshooter's shoes and think what kind of logs you'd like to have when something wrong is happening or has happened in the dead of night. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112402",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/16992/"
]
} |
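A minimal sketch of the "time, severity and subsystem in every line" advice from the answer above, using Python's standard logging module for illustration (the question itself concerns log4j, whose pattern layouts express the same idea); the subsystem names are invented:

```python
import logging

logging.basicConfig(
    level=logging.INFO,
    # Time, severity and subsystem (logger name) lead every line, so grep stays easy.
    format="%(asctime)s %(levelname)-7s %(name)s: %(message)s",
)

log = logging.getLogger("billing.invoices")   # one logger per subsystem
log.info("generating invoice id=%s customer=%s", 42, "ACME")

try:
    raise ValueError("unsupported currency 'XYZ'")
except ValueError:
    # On failure, collect as much context as possible, including the traceback.
    log.exception("invoice generation failed id=%s", 42)

# Detail can be raised or lowered per subsystem without touching other code:
logging.getLogger("billing").setLevel(logging.DEBUG)
```

The same structure maps onto log4j's pattern layout and per-logger levels; the point is that the format and the granularity are decided once, not improvised at every call site.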
112,463 | Here's the syntax for iterators in Java (somewhat similar syntax in C#): Iterator it = sequence.iterator();
while (it.hasNext()) {
System.out.println(it.next());
} Which makes sense. Here's the equivalent syntax in Python: it = iter(sequence)
while True:
try:
value = it.next()
except StopIteration:
break
print(value) I thought Exceptions were supposed to be used only in, well, exceptional circumstances. Why does Python use exceptions to stop iteration? | There's a very Pythonic way to write that expression without explicitly writing a try-except block for a StopIteration : # some_iterable is some collection that can be iterated over
# e.g., a list, sequence, dict, set, itertools.combinations(...)
for value in some_iterable:
print(value) You can read up on the relevant PEPs 234 and 255 if you want to know more about why StopIteration was introduced and the logic behind iterators. A general principle in Python is to have one way to do something (see import this), and preferably it's beautiful, explicit, readable, and simple, which the Pythonic method satisfies. Your equivalent code is only necessary because Python doesn't give iterators a hasNext member function; it prefers people to just loop through the iterators directly (and if you need to do something else, to just try reading and catch the exception). This automatic catching of a StopIteration exception at the end of an iterator makes sense and is an analogue of the EOFError raised if you read past the end of a file. (A small example of a hand-written iterator follows this record.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112463",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3151/"
]
} |
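To show where the StopIteration in the entry above actually comes from, here is a small Python 3 sketch of a hand-written iterator; the for loop swallows the exception exactly as the answer describes. Countdown is an invented example class:

```python
class Countdown:
    """Iterator that yields n, n-1, ..., 1 and then signals exhaustion."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        return self

    def __next__(self):
        if self.n <= 0:
            raise StopIteration        # "end of data", analogous to hitting EOF
        self.n -= 1
        return self.n + 1

for value in Countdown(3):             # the loop catches StopIteration for you
    print(value)                       # 3, 2, 1

it = iter(Countdown(2))
print(next(it), next(it))              # 2 1
# A further next(it) here would raise StopIteration explicitly.
```

The exception is an internal control signal between the iterator and the loop machinery; user code written with a for loop never has to see it.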
112,485 | What is the benefit of studying bitwise operators (Bitwise Not, Bitwise AND, Bitwise OR, Bitwise XOR, Left Shift, Signed Right Shift, Unsigned Right Shift etc.)? Will we really use these operators in programming? | "Yes, we will." Bitwise operations are everywhere. They are perfect for working with bitfields (a practice that is ubiquitous in C and C++), such as a 'flags' field in a data structure or function argument. Basically, | combines flags, ^ flips flags, & checks if a flag is set, and the x &= ~FLAG pattern clears a flag. Bitwise operations are ubiquitous in all things low-level - hardware drivers, network protocols, binary file formats - as well as some higher-level fields like character encodings, cryptography, etc. Bit-shifting can also sometimes double for integer division and multiplication by powers of 2, with a slightly different rounding behavior for negative numbers (sometimes, but not always, more desirable than what regular integer division does). In tight loops, bitwise arithmetic can sometimes be used to avoid conditionals, which is beneficial because modern CPUs use branch prediction, and a misprediction (i.e., the condition in an if statement evaluates differently from the previous time) causes a significant delay. Using bitwise arithmetic, the same calculation can sometimes be expressed without any conditionals. Even if you don't intend to work in any of the above scenarios, it is still a good idea to study and understand bitwise operations - all modern computers are binary, and you definitely need to know the basic principles by which they operate. Numbers in a computer don't behave like numbers in the real world, and studying binary operations will help you understand why. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112485",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13724/"
]
} |
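A short Python sketch of the flag idioms mentioned in the answer above (| to combine, & to test, ^ to flip, x &= ~FLAG to clear); the permission flag names are invented:

```python
# Invented permission flags, one bit each.
READ, WRITE, EXECUTE = 0b001, 0b010, 0b100

perms = READ | WRITE           # combine flags            -> 0b011
print(bool(perms & WRITE))     # check a flag             -> True
perms ^= EXECUTE               # flip a flag on           -> 0b111
perms &= ~WRITE                # clear a flag             -> 0b101
print(bin(perms))              # '0b101'

# Shifts as cheap multiply/divide by powers of two:
print(6 << 1, 6 >> 1)          # 12 3
print(-7 >> 1)                 # -4: note the rounding differs from int(-7 / 2)
```

The same operators drive bitfields in C structs, network protocol headers, and hardware registers; the syntax barely changes between languages.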
112,731 | What does the backslash really escape? It's used as an escaping character. But I always wonder what's escaping, or what's being escaped from. I know "\n" designates a new line. But what where does the escaping come in? Why is it called that? | The backslash is used as a marker character to tell the compiler/interpreter that the next character has some special meaning. What that next character means is up to the implementation. For example C-style languages use \n to mean newline and \t to mean tab. The use of the word "escape" really means to temporarily escape out of parsing the text and into a another mode where the subsequent character is treated differently. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112731",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20052/"
]
} |
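A quick Python illustration of the escape marker described in the entry above; the same \n and \t sequences exist in C-style languages:

```python
print("col1\tcol2\nval1\tval2")    # \t and \n are parsed as tab and newline
print("a literal backslash: \\")   # escaping the escape character itself
print(len("\n"), len(r"\n"))       # 1 2 -- a raw string leaves the backslash alone
```

The backslash briefly switches the parser out of "ordinary character" mode, which is exactly the "escaping" the answer refers to.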
112,747 | I used to blame changing specifications from clients for code rot, not realising that business models do change and it's my job to develop in an adaptable way. I now see that as a sign of a bad developer (I've changed!). But now I see other 'whinges' in myself. A few times recently I've found myself saying 'it's like trying to fit a square peg in a round hole', and also I find myself blaming client indecision for a project not progressing. Are there signs I should look out for where I should change my attitude? Is the client always right, or am I sometimes justified in getting frustrated? | I wouldn't say you are bad developer. Being aware of the issues already moves you beyond this definition. Requirements change. That is a given. A good developer need to take this into account. Many modern programming techniques help coping with that. Staying true to original spec is not realistic. Also not realistic is changing the requirements all the time. The client is definitely not always right. It is 'right' more often than we want him/her to be, though (as in, try to accommodate him if he isn't totally off). But when you see him driving the project in the wrong direction, try to advocate for the things you think are right. There are no hard rules on these things, and even good and experienced developers haven't achieved the perfect 'Zen'. The only wrong approach is not trying to improve on these. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112747",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33723/"
]
} |
112,863 | Last night I was discussing with another programmer that even though something may be O(1), an operation which is O(n) may outperform it if there is a large constant in the O(1) algorithm. He disagreed, so I've brought it here. Are there examples of algorithms which greatly outperform those in the class below them? For example, O(n) being faster than O(1), or O(n^2) being faster than O(n). Mathematically this can be demonstrated for a function with an asymptotic upper bound, when you disregard constant factors, but do such algorithms exist in the wild? And where would I find examples of them? What types of situations are they used for? | Lookups in very small, fixed data tables. An optimized hash table may be O(1) and yet slower than a binary search or even a linear search due to the cost of the hash calculation. (A rough measurement sketch follows this record.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112863",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17065/"
]
} |
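For the small-table example in the answer above, here is a rough, hypothetical way to measure the effect in Python. Absolute numbers depend entirely on the machine and the data, so treat this as a sketch of how to measure rather than a claim about which side wins:

```python
import timeit

keys = ("mon", "tue", "wed", "thu", "fri")
table = {k: i for i, k in enumerate(keys)}   # hash lookup, O(1) amortized
pairs = tuple(table.items())                 # tiny fixed table for a linear scan

def linear(key):                             # O(n), but n is only 5
    for k, v in pairs:
        if k == key:
            return v

print(timeit.timeit(lambda: table["thu"], number=1_000_000))
print(timeit.timeit(lambda: linear("thu"), number=1_000_000))
# On a small, fixed table the constant factors (hashing, call overhead, cache
# behaviour) dominate; measure before assuming the asymptotically better
# structure is actually faster.
```

The asymptotic class only tells you what happens as n grows; for fixed, tiny n it says nothing about which implementation is quicker.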
112,911 | When people mention COBOL, it's usually either met with a snort or groan. I don't know much about COBOL, but I've seen some programs written in it. I can see that it's wordy, and to uninitiated eyes such as mine, unintelligible. But, really, aren't all programming languages complete gibberish to a lay person? I understand that it works, works well, and is still in widespread use in the industries it was designed for. Aren't those the hallmarks of a good language? What's so bad about COBOL? | COBOL was one of the first languages I learned - if you ignore countless versions of Basic, three or four assembler languages and a variant of Forth, then it was in my first five, and learned concurrently with Pascal. IOW, I'm answering from personal experience using the language. EDIT I should say ancient experience. I never used the language after the end of the 80s, though I did buy a new book (to replace the old one I threw away in disgust) so that I had something to refer to so my horror stories wouldn't get too distorted. But I have no idea how the language has evolved in at least the last 20 years. Obviously, for many people, it is just that "old is bad" view that jonsca has already described - and also much more a third-hand pass-me-down attitudes thing. But there are real issues underlying that. Being too wordy is a real problem - there's too much clutter in the way of understanding the code. This is by far the biggest issue. People who look at the MOVE , ADD and MULTIPLY etc statements in horror have a slightly exaggerated view of this, true - the COMPUTE statement is closer to the assignments in other languages. But there's still a lot of clutter in all those divisions and sections. One of the first things I learned in COBOL was to always start by copying a standard page-of-A4-long SKELETON.COB. COBOL does have some interesting features, but those features (e.g. the PIC thing) tend to be things that are now more part of the DBMS rather than the programming language, and that seems to me to usually be a better way to separate those responsibilities. Also, some libraries in other languages use something comparable to PIC (e.g. printf and scanf in the C standard library). Arguably, the best has been kept, but the worst dropped. Also, for every nice feature, there was at least one intolerable one. For example, no matter how trivial a loop is, you have to move the body into a separate procedure. The PERFORM ... UNTIL ... and similar statements are single statements - not block structures. In a sense, COBOL was a taste of structured programming from before structured programming was invented - there was a GO TO , but it's use was discouraged (at least when I used COBOL), but looping in particular just wasn't handled that well. In fact, the language that I used after COBOL that most reminded me of it was... dBase. As in Ashton-Tate dBase III+. These days, people are more likely to remember all the now-dead-or-dying clones (Clipper, FoxPro etc) that led to the generic name xBase - and there is still a living descendant in xHarbour. The point is that these were database languages, but nothing like SQL. Even then, where every COBOL program operating on a particular database needs to include a copy of the specification of that database (and the copies could end up inconsistent), that isn't really the case in xBase where the database knows it's own structure. Taking that into account, then, COBOL is not so terrible if you accept it for what it is. 
But what it isn't is a language for writing data structures. Which may be why COBOL suffered a lot back in the times of the C vs. Pascal holy wars - both sides could agree that COBOL was no good for reinventing the binary tree yet again. Oh - and one thing I'll never forget is how my first COBOL textbook didn't describe the SORT command, saying that it was outside the scope of the book - apparently, either the author couldn't cope with the idea of sorting, or considered it to be more than the tiny little minds of COBOL students could cope with [see edit at end]. That kind of thing made it very difficult to take COBOL seriously. An odd aspect of this was Jackson Structured Programming, which I also was forced to learn at around the same time, and specifically for use with COBOL. Part of this was drawing a structure diagram for the input, then a structure diagram for the output, then drawing the in-between structure diagram for the code. Sorting was clearly expected to be an already-solved problem - you couldn't derive a sorting algorithm in this way. So it was odd to be told by the recommended text-book that the whole concept of sorting was beyond my tiny little mind, while at the same time being taught something like a dozen different sorting algorithms and how to implement them in Pascal. The problems that JSP can handle are probably a good guide for the things that COBOL can do relatively well. But even then, that doesn't necessarily mean that either JSP or COBOL are good ways to handle those problems. EDIT on 30th July 2014 I just got a reputation boost from this, reminding me it's here. As it happens, due to some nostalgia-fueled ancient book collecting, I can now correct a point WRT the SORT command. The book I originally used as the recommended text when learning COBOL was "Methodical Programming in COBOL" by Ray Welland. This doesn't cover COBOL 85 (though there was a later edition "Methodical Programming in COBOL-85" which I've still never seen). kindall comments below that "You were supposed to sort the input files before reading them, or sort the output file after generating it, using the sort utility that came with the OS". From my reply to that, I missed the "came with the OS" point. Kindall was suggesting something akin to the Unix philosophy AFAICT, with COBOL used for the bits it's good for, OS utilities such as a sort utility used for some other things, and presumably using a batch/scripting/shell language to glue the bits together. This makes much more sense in an ancient world where interactive software was rare to non-existent, so you'd be submitting batches of work (hence "batch language") anyway. The following is quoted from page 165-166 of "Methodical Programming in COBOL"... The use of ordered serial files implies that it is necessary to have
a means of sorting records within a file into some specified order by
key. Most larger computer systems have a sort utility which will sort
a file given the position, type and size of each of the data-items
forming the key. There is also a facility for sorting records from within a COBOL
program but this is beyond the scope of this book for two reasons: (a) the interface to the operating system is often quite complex and
varies from system to system, (b) the sort module is an optional part of ANS '74 COBOL and may not be
implemented in COBOL systems for smaller computers. Therefore it will be assumed that facilities exist for sorting files
into a specified order and the problem of updating such files will be
considered. In short, kindall is correct - the assumption was that usually sorting
would be done outside of COBOL. There may even have been a real justification
for excluding sorting from a programming language around 1974 for small computers. What I said above was basically what you get after around 20 years of not
being able to check facts due to throwing away the book. I should still point out, though, that I formally studied COBOL from this
recommended book that covered the 1974 standard (not the 1985
standard) in 1988 and 1989. The third edition of "COBOL for Students" (Parkin, Yorke, Barnes) - the first edition covering COBOL 85 - wasn't published until 1990. I'm not certain, but I think the COBOL 85 edition of "Methodical Programming" wasn't published until 1994. But that doesn't necessarily represent the COBOL world dragging its feet - well, not that much anyway. New standards adoption takes time for any language, even now. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112911",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3790/"
]
} |
112,953 | Throughout my various workplaces I always wrote code which made me think "this would be really useful in other situations". Indeed, I intentionally write code, even if it takes me longer write, which I know will help me in the future (e.g. custom SubString() functions). A good candidate for these snippets are various 'Helper' classes. These snippets I'm sure can probably be found elsewhere online but the point is, I wrote them, and I will use them again later in other jobs or for personal projects. Currently I don't maintain a personal code library, but the question is, is it wrong to take code you have produced at work and re-use it ( a ) for personal projects, and ( b ) in other jobs? | I've always solved this problem by having a personal project where I put all my crazy ideas and generic stuff, and then license it under the BSD license, which allows people to re-use, alter, rebrand, close it and charge money for it. That way, I retain the copyright but can re-use the code as I please for this and that employer, so that I retain the copyright to the original, but the employer retains the copyright to the re-used instance. I figure that if they had a problem with that, then they'd simply have to pay me to rewrite it on work time which makes no sense from their point of view. Furthermore, companies use BSD code all the time, since the idea behind BSD is to allow people and companies to do with it pretty much whatever they want, including rebranding and selling it. Then of course, if additions are made to the code at the work place, I can't re-use it elsewhere without rewriting it on my own time... which is fine because generic stuff tends to be relatively small, unless it's an idea that warrants considerable free-time effort anyway. Writing it on your own time and licensing the code under a BSD-style license should allow you to maintain a library for yourself which you can use pretty much anywhere you want. Now, as for contracts that claim to suck up all your personal projects' copyright... this probably differs radically between jurisdictions, but in at least some western jurisdictions it's my understanding that a contract can't do that. The contract can say that it does, but it wouldn't be enforced in a court of law because copyright has to be explicitly transferred, as opposed to "all your base are belong to us"-kinda deal which would never be upheld (in the jurisdiction where I'm from anyway). There are a number of restrictions on what can be upheld in a court of law via contract, which is why you'll usually (and hopefully) see a clause saying something to the effect that if one part of the contract doesn't work legally, the rest of the contract still holds. But as always, consult a lawyer before you interpret this as accurate legal advice. I've never been taken to court on this so I know none of these things as lawyer-proof facts. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/112953",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38139/"
]
} |
113,019 | Basically, I've learned so far that garbage collection erases forever any data structure that is not currently being pointed to. But this only checks the heap for such conditions. Why doesn't it also check the data section (globals, constants, etc etc) or the stack as well? What is it about the heap that it's the only thing that we want to be garbage collected? | The garbage collector does scan the stack -- to see what things in the heap are currently being used (pointed to) by things on the stack. It makes no sense for the garbage collector to consider collecting stack memory because the stack is not managed that way: Everything on the stack is considered to be "in use." And memory used by the stack is automatically reclaimed when you return from method calls. Memory management of stack space is so simple, cheap and easy that you wouldn't want garbage collection to be involved. (There are systems, such as smalltalk, where stack frames are first-class objects stored in the heap and garbage collected like all other objects. But that's not the popular approach these days. Java's JVM and Microsoft's CLR use the hardware stack and contiguous memory.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113019",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
113,028 | OK, I've learned what a static function is, but I still don't see why they are more useful than private member functions. This might be kind of a newb-ish question here, but why not just replace all private member functions with static functions instead? | Assuming that you're using OOP , use static functions when they don't depend on any class members. They can still be private, but this way they are optimized as they don't depend on any instance of the related object. Other than the above, I find static functions useful when you don't want to create an instance of an object just to execute one public function on it. This is mainly the case for helper classes that contain public functions to do some repetitive and general work, but don't need to maintain any state between calls. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113028",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
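To make the distinction in the answer above concrete, here is a minimal C++ sketch (the Greeter class and its members are invented for illustration): the helper that depends only on its arguments is static, while the method that touches per-object state is not.

#include <iostream>
#include <string>
#include <utility>

class Greeter {
public:
    explicit Greeter(std::string name) : name_(std::move(name)) {}

    // Instance method: needs the per-object state (name_), so it cannot be static.
    void greet() const { std::cout << "Hello, " << name_ << "\n"; }

    // Static helper: depends only on its argument, no instance required.
    static bool is_valid_name(const std::string& candidate) {
        return !candidate.empty();
    }

private:
    std::string name_;
};

int main() {
    // The static function can be called without ever constructing a Greeter.
    if (Greeter::is_valid_name("Ada")) {
        Greeter g("Ada");
        g.greet();
    }
}

Whether such a helper should also be private is a separate decision; static only says "no instance state needed".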
113,160 | I know you can use C# and F# together in the same project; however, I'm not sure if it's a good idea to do so. It seems to me that mixing two very different coding styles (functional vs OOP) could cause a lack of cohesion in the design. Is this correct? | There's nothing wrong with mixing languages in a product as long as you use each appropriately and they "play nice" together. If there is a part of your project that would be best coded using a functional language, then it makes sense to code it in F#. Similarly for C#. What would be pointless (at best) would be mixing languages for the sake of it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113160",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
113,177 | Well, I know that there are things like malloc/free for C, and new/using-a-destructor for memory management in C++, but I was wondering why there aren't "new updates" to these languages that allow the user to have the option to manually manage memory, or for the system to do it automatically (garbage collection)? Somewhat of a newb-ish question, but only been in CS for about a year. | Garbage collection requires data structures for tracking allocations and/or reference counting. These create overhead in memory, performance, and the complexity of the language. C++ is designed to be "close to the metal", in other words, it takes the higher performance side of the tradeoff vs convenience features. Other languages make that tradeoff differently. This is one of the considerations in choosing a language, which emphasis you prefer. That said, there are a lot of schemes for reference counting in C++ that are fairly lightweight and performant, but they are in libraries, both commercial and open source, rather than part of the language itself. Reference counting to manage object lifetime is not the same as garbage collection, but it addresses many of the same kinds of issues, and is a better fit with C++'s basic approach. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113177",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
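As a small illustration of the reference-counting libraries mentioned above, modern C++ (C++11 and later) ships std::shared_ptr in the standard library: the object is destroyed when its last owner goes away, with no tracing collector involved. The Widget type here is just a placeholder.

#include <iostream>
#include <memory>

struct Widget {
    ~Widget() { std::cout << "Widget destroyed\n"; }
};

int main() {
    std::shared_ptr<Widget> a = std::make_shared<Widget>();
    {
        std::shared_ptr<Widget> b = a;        // second owner, reference count is now 2
        std::cout << a.use_count() << "\n";   // prints 2
    }                                         // b leaves scope, count drops back to 1
    std::cout << a.use_count() << "\n";       // prints 1
}                                             // a leaves scope, count hits 0, Widget destroyed

Note that this is still lifetime management by ownership, not garbage collection: reference cycles are not reclaimed unless you break them yourself (for example with std::weak_ptr).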
113,182 | It seems to me that everything that can be done with a stack can be done with the heap, but not everything that can be done with the heap can be done with the stack. Is that correct? Then for simplicity's sake, and even if we do lose a little amount of performance with certain workloads, couldn't it be better to just go with one standard (ie, the heap)? Think of the trade-off between modularity and performance. I know that isn't the best way to describe this scenario, but in general it seems that simplicity of understanding and design could be a better option even if there is a potential for better performance. | Heaps are bad at fast memory allocation and deallocation. If you want to grab many tiny amounts of memory for a limited duration, a heap is not your best choice. A stack, with its super-simple allocation / deallocation algorithm, naturally excels at this (even more so if it is built into the hardware), which is why people use it for things like passing arguments to functions and storing local variables - the most important downside is that it has limited space, and so keeping large objects in it, or trying to use it for long-lived objects, are both bad ideas. Getting rid of the stack completely for the sake of simplifying a programming language is the wrong way IMO - a better approach would be to abstract the differences away, let the compiler figure out which kind of storage to use, while the programmer puts together higher-level constructs that are closer to the way humans think - and in fact, high-level languages like C#, Java, Python etc. do exactly this. They offer almost identical syntax for heap-allocated objects and stack-allocated primitives ('reference types' vs. 'value types' in .NET lingo), either fully transparent, or with a few functional differences which you must understand to use the language correctly (but you don't actually have to know how a stack and a heap work internally). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113182",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
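To put the trade-off above in concrete terms, here is a short C++ (C++14 or later) sketch, with an arbitrary Point type, showing how the two kinds of storage look when the programmer chooses explicitly - exactly the choice the higher-level languages mentioned above abstract away.

#include <memory>
#include <vector>

struct Point { double x = 0.0; double y = 0.0; };

int main() {
    Point p{1.0, 2.0};                    // stack: allocation is a cheap stack-pointer bump,
                                          // reclaimed automatically when main() returns

    auto q = std::make_unique<Point>();   // heap: slower to allocate, but the object's lifetime
    q->x = 3.0;                           // is not tied to this particular stack frame

    std::vector<Point> many(1000);        // heap-backed container: can grow far beyond what
    many.push_back(p);                    // would comfortably fit in the limited stack space
    many.push_back(*q);
}                                         // p, q and many are all cleaned up here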
113,237 | Regular expressions are powerful tool in programmer's arsenal, but - there are some cases when they are not a best choice, or even outright harmful. Simple example #1 is parsing HTML with regexp - a known road to numerous bugs. Probably, this also attributes to parsing in general. But, are there other clearly no-go areas for regular expressions ? p.s.: " The question you're asking appears subjective and is likely to be closed. " - thus, I want to emphasize, that i am interested in examples where usage of regexps is known to cause problems. | Don't use regular expressions: When there are parsers. This doesn't limit to HTML . A simple valid XML cannot be reasonably parsed with a regular expression, even if you know the schema and you know it will never change. Don't try, for example, parse C# source code . Parse it instead, to get a meaningful tree structure or the tokens. More generally, when you have better tools to do your job. What if you must search for a letter, both small and capital? If you love regular expressions, you'll use them. But isn't it easier/faster/readable to use two searches, one after another? Chances are in most languages you'll achieve better performance and make your code more readable. For example the sample code in Ingo's answer is a good example when you must not use regular expressions. Just search for foo , then for bar . When parsing human writing. A good example is an obscenity filter. Not only it is a bad idea in general to implement it, but you may be tempted to do it using regular expressions, and you'll do it wrong. There are plenty of ways an human can write a word, a number, a sentence and will be understood by another human, but not your regular expression. So instead of catching real obscenity, your regular expression will spend her time hurting other users. When validating some types of data. For example, don't validate an e-mail address through a regular expression. In most cases, you'll do it wrong. In a rare case, you'll do it right and finish with a 6 343 characters length coding horror . When you don't have the right tools or don't care about clean code. Without the right tools, you will make mistakes. And you will notice them at the last moment, or maybe never. If you don't care about clean code, you'll write a twenty lines string with no comments, no spaces, no newlines. When your code will be read. And then read again, and again and again, every time by different developers. Seriously, if I take your code and must review it or modify it, I don't want to spend a week trying to understand a twenty lines long string plenty of symbols. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113237",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37580/"
]
} |
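For the "two searches instead of one regex" point above, a minimal sketch (C++ here, names invented): checking that a string contains both words needs nothing more than two plain substring searches.

#include <cassert>
#include <string>

// True if text contains both words, in either order - no regex engine needed.
bool contains_foo_and_bar(const std::string& text) {
    return text.find("foo") != std::string::npos &&
           text.find("bar") != std::string::npos;
}

int main() {
    assert(contains_foo_and_bar("some bar, then some foo"));
    assert(!contains_foo_and_bar("only foo here"));
}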
113,256 | I've noticed on MySQLWorkbench that you can choose how to store your indexes before forward engineering your design. The storage types are: BTREE RTREE HASH Researching this, I found some information that was pretty much over my head, so I'm looking for practical information on what the difference is between these and/or why you should choose one over another. Also, I have never chosen a storage type before, so I assume MySQL is choosing a default storage type (BTREE?) | BTree BTree (in fact B*Tree) is an efficient ordered key-value map. Meaning: given the key, a BTree index can quickly find a record, a BTree can be scanned in order. it's also easy to fetch all the keys (and records) within a range. e.g. "all events between 9am and 5pm", "last names starting with 'R'" RTree RTree is a spatial index which means that it can quickly identify close values in 2 or more dimensions. It's used in geographic databases for queries such as: all points within X meters from (x,y) Hash Hash is an unordered key-value map. It's even more efficient than a BTree: O(1) instead of O(log n) . But it doesn't have any concept of order so it can't be used for sort operations or to fetch ranges. As a side note, originally, MySQL only allowed Hash indexes on MEMORY tables; but I'm not sure if that has been changed over the years. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113256",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
113,262 | I have heard people say that variables should be declared as close to their usage as possible. I don't understand this. For example, this policy would suggest I should do this: foreach (var item in veryLongList) {
int whereShouldIBeDeclared = item.Id;
//...
} But surely this means the overheads of creating a new int are incurred on every iteration. Wouldn't it be better to use: int whereShouldIBeDeclared;
foreach (var item in veryLongList) {
whereShouldIBeDeclared = item.Id;
//...
} Please could somebody explain? | This is one style rule among many, and it isn't necessarily the most important rule of all the possible rules you could consider. Your example, since it includes an int, isn't super compelling, but you could certainly have an expensive-to-construct object inside that loop, and perhaps a good argument for constructing the object outside the loop. However, that doesn't make it a good argument against this rule since first, there are tons of other places it could apply that don't involve constructing expensive objects in a loop, and second, a good optimizer (and you've tagged C#, so you have a good optimizer) can hoist the initialization out of the loop. The real reason for this rule is also the reason you don't see why it's a rule. People used to write functions that were hundreds, even thousands of lines long and they used to write them in plain text editors (think Notepad) without the kind of support Visual Studio provided. In that environment, declaring a variable hundreds of lines away from where it was used meant that the person reading if (flag) limit += factor; didn't have a lot of clues about what flag, limit and factor were. Naming conventions like Hungarian notation were adopted to help with this, and so were rules like declaring things close to where they are used. Of course, these days, it's all about refactoring, and functions are generally less than a page long, making it hard to get very much distance between where things are declared and where they are used. You're operating in a range of 0-20 and quibbling that maybe 7 is ok in this particular instance, while the guy who made the rule would have LOVED to get 7 lines away and was trying to talk someone down from 700. And on top of that, in Visual Studio, you can mouse over anything and see its type, is it a member variable, and so on. That means the need to see the line declaring it is lessened. It's still a reasonably good rule, one that's actually quite hard to break these days, and one that no-one ever advocated as a reason to write slow code. Be sensible, above all. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113262",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14141/"
]
} |
113,289 | This is actually somewhat related to the question I asked yesterday about why both a Stack and a Heap are necessary in the applications we use today (and why we can't just go with a Heap instead of both, in order to have a simple & singular standard to go by). However, many of the responses indicated that a Stack is irreplaceable due to the fact that is many hundreds (or thousands) of times faster than trying to allocate/reference the Heap. I know there is a problem with dynamic storage allocation if we do away with the Heap, but isn't there a way around this, or perhaps, a way to improve on the Stack so that it can handle dynamic memory allocation? | The problem with stacks is that you can't "free" memory unless it is on top of the stack. For instance, say you allocated 3 things of varying sizes: a = allocate(2000000); // 2000000 bytes
b = allocate(1);
c = allocate(5000000); The stack would have a on the bottom, b in the middle, and c on top. This becomes problematic if we want to free b : free(b); // b is not on top! We have to wait until c is freed! The workaround is to move all the data after b and shift it so that it comes after a . This works, but will require 5000000 copies in this case - something that will be much slower than a heap. This is why we have a heap. While allocation may be slower than a stack ( O(log n) vs O(1) ), heaps allow freeing memory at an arbitrary location to be fast - O(log n) , compared to a stack's O(n) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
113,295 | I've been introduced to Computer Science for a little over a year now, and from my experience it seems that C and C++ are both considered to be "ultrafast" languages, whereas others such as Python and such scripting languages are usually deemed somewhat slower. But I've also seen many cases where a software project or even a small one would interleave files where a certain number n of those files would be written in C, and a certain number m of those files would be written in C++. (I also noticed that C++ files almost always have corresponding headers, while C files not so much). But my main point of inquiry is to get a general sense of intuition on when it is appropriate to use C over C++, and when it is better to use C++ over C. Other than the facts that (1) C++ is object-oriented whereas C is not, and (2) the syntaxes are very similar, and C++ was intentionally created to resemble C in many ways, I am not sure what their differences are. It seems to me that they are (almost) perfectly interchangeable in many domains. So it would be appreciated if someone could clear up the situation! Thanks | You pick C when you need portable assembler (which is what C is, really) for whatever reason, your platform doesn't provide C++ (a C compiler is much easier to implement), you need to interact with other languages that can only interact with C (usually the lowest common denominator on any platform) and your code consists of little more than the interface, not making it worth to lay a C interface over C++ code, you hack in an Open Source project (many of which, for various reasons , stick to C), you don't know C++. In all other cases you should pick C++. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113295",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
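The "lay a C interface over C++ code" remark above usually takes the form of an extern "C" facade; the Counter class and the function names below are invented, but the shape is the common one for exposing C++ to languages that can only call C.

// counter.cpp - C++ implementation hidden behind a C-callable facade
#include <cstdint>

class Counter {                            // ordinary C++ class, invisible to C callers
public:
    void add(std::int64_t n) { value_ += n; }
    std::int64_t value() const { return value_; }
private:
    std::int64_t value_ = 0;
};

extern "C" {
    // An opaque handle plus free functions is all a C (or FFI) client ever sees.
    void* counter_create()             { return new Counter(); }
    void  counter_add(void* c, long n) { static_cast<Counter*>(c)->add(n); }
    long  counter_value(void* c)       { return static_cast<long>(static_cast<Counter*>(c)->value()); }
    void  counter_destroy(void* c)     { delete static_cast<Counter*>(c); }
}

If the code behind the facade is only a thin layer like this, writing the whole thing in C can indeed be the simpler choice.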
113,306 | So I've kind of been getting angry about the current position I'm in, and I'd love to get other developers' input on this. I've been at my current place of employment for about 11 months now. When I began, I was working on all new features. I basically worked on an entire new web project for the first 5-6 months I was here. After this, I was moved to more of a service oriented role (which was still great, all new stuff for me), and I was in this role for about the past 5-6 months. Here's where the problem comes in. Basically, a couple of days ago I was made the support/maintenance guy. Now, we have an IT support team, so I'm not talking that kind of support, I'm talking more of a second level support guy (when the guys on the surface can't really get to the root of the issue), coupled with working on maintenance issues that have been lingering in the backlog for a while. To me, a developer with about 3 years of experience, this is kind of disheartening. With the type of work place this is, I wouldn't be surprised if these support issues take up most of my days, and I barely make it to working on maintenance issues. Also, most of these support issues aren't even related to code, they are more or less just knowing the system architecture, working with making sure services are running/getting started properly, handling/fixing bad data, etc. I'm a developer, so this part sucks. Also, even when I do have time to work maintenance, these are basically just bug fixes/improving bad code, so this sucks as well, however at least it's related to coding. Am I wrong for getting angry here? I don't want to really complain about it, but to be honest, I wasn't spoken to about this or anything, I was kind of just sent an e-mail letting me know I'm the guy for this type of thing, and that was that. The entire team took a few minutes to give me their "that sucks" talk, because they know how annoying it is to be on support for the type of work we do, so I know I'm not the only guy that knows it's not that great of an opportunity. I'm just kind of on the fence about how to move forward. Obviously I'm just going to continue working for the time being, no point making a bad impression on anybody, but I'd like to know how you guys would approach this situation, or how you think I should be feeling about it/how you guys would feel. | You can look at this as either a as time in limbo; or you can turn it into an opportunity to grow. The core idea of being a maintenance developer is to put yourself out of a job . Each time you have to fix something; take the time to understand the problem well enough so that your solution (which could come a few weeks after you put out the fire) means you (nor any other living soul) will ever have to solve that problem again. Since it's a legacy system you're supporting, you can usually get a way with some solutions that wouldn't be "ok" for mainline products; you can use cron to periodically restart buggy services; You can hard-code exceptional logic for particular customers since the number of customers using that product won't increase. Another part of it is extracting useful knowledge out of the system; You probably don't want to do this, it's not very much fun; but by documenting what the working system does (even if it does it wrong), you can make the task of mantaining that portion of the application an order of magnitude easier (ten minutes to read two paragraphs that explain a module, instead of three days reading the module code). 
Better yet, this same documentation can be used to implement the functionality in the mainline product, so you can stop supporting the legacy system altogether (for that feature, at least). If you're the "go to guy" for a project, even if that's legacy maintenance, you probably (or at very least, you should) have considerable latitude to approach problems in your own way. That might mean rewriting sections in your favorite language or platform, suggesting workarounds instead of solutions, or just responding to issues with "wont fix, use supported products". Edit: You might be misinterpreting your boss's intent; Maintenance requires a different set of skills from other parts of the development cycle; He might be putting you on a job because he thinks, out of all of the people he has available, you are the best for this role; not because he thinks you're not skillful enough for something else. Maintenance requires a focus on reading code, more than any other role you might have. If you are good at reading, you will be good at maintenance. There are lots of developers, even ones of many more years experience, that just don't have the knack for reading other people's code (or worse, their own), even if they are brilliant at other things. When you put yourself out of the job of maintenance, you aren't 'convincing management that you would be good enough to do something more important', you are literally taking the work of maintenance away from the living. When you stop doing that job, it's because there's nothing left to do. All problems are solved, either because they have been migrated out of the legacy application, because manual tasks are now automated, or you have convinced the customer that they don't want or need it and never will. If your boss interprets that as 'he's good at this, better keep him here', doing nothing at all, then he is a fool. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113306",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
113,375 | Just a quick question, but why are there so many file systems still competing and in use today? (ntfs, fat32, ext3(ffs), etc) It seems that file system designers could agree upon the best aspects of each type of system and implement a "best" filesystem, no? Just a thought, since these filesystems have been around for a while now, and it should be at least somewhat apparent which ones have good qualities over others, and we could just combine the good in each and create an ultimate system that is much better | Let's think about the specifics here for a moment, using examples you've cited: ntfs - Proprietary to Microsoft. Anyone who is not Microsoft cannot use this, therefore would have to use/create something different. Now, if you are Microsoft, you want to use this over FAT because of the issues of the next bullet point. fat32 - Not sufficiently modern. The maximum file size is 4GB. Directory entry lookup is O(n). The allocation table is a linked list, rather than something more efficient like an allocation bitmap (where it's really quick to find contiguous free space). Does not support permissions. Does not support hard links or symbolic links. Does not support journaling. ext3 - This was an extension of ext2 mainly to support journaling. So, it seems there are a few reasons: An earlier filesystem lacks something. In the case of FAT it is lacking a lot: both in terms of (1) features and (2) performance. In the case of ext2 it did not have journalled updates, so recovering from a crash took more time. An existing filesystem would probably do, but it is not yours. (eg. NTFS if you're not Microsoft). In this case you don't really have much choice but to come up with your own. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113375",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
113,381 | Is there an exact, but simple and understandable defintion of the distinction between "use case", "User Story" and "Usage Scenario"? there are quite a bunch of explanation, but right now, I see no one that explains the differences in a single sentence, or two... (e.g. http://c2.com/cgi-bin/wiki?UserStoryAndUseCaseComparison very long and hard to get, full of discussion) | A User Story is a more informal, friendlier and smaller version of a Use Case , minus the UML diagram; it is typically used in iterative scenarios. A Usage Scenario is a Use Case drawn out into a step-by-step procedure, sometimes accompanied by a flowchart. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113381",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/193405/"
]
} |
113,430 | I come from languages like Python or Javascript (and others that are less object-oriented) and I am trying to improve my working knowledge of Java, which I know only in a superficial way. Is it considered a bad practice to always prepend this to the current instance attributes? It feels more natural to me to write ...
private String foo;
public void printFoo() {
System.out.println(this.foo);
}
... than ...
private String foo;
public void printFoo() {
System.out.println(foo);
}
... as it helps me to distinguish instance attributes from local variables. Of course in a language like Javascript it makes more sense to always use this , since one can have more function nesting, hence local variables coming from larger scopes. In Java, as far as I understand, no nesting like this is possible (except for inner classes), so probably it is not a big issue. In any case, I would prefer to use this . Would it feel weird and not idiomatic? | In most IDEs, you can simply mouseover the variable if you want to know. In addition, really, if you're working in an instance method, you should really know all the variables involved. If you have too many, or their names clash, then you need to refactor. It's really quite redundant. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113430",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15072/"
]
} |
113,433 | I am using a DDD-like approach for a greenfield module of an existing application; it's not 100% DDD due to architecture but I'm trying to use some DDD concepts. I have a bounded context (I think that's the proper term - I'm still learning about DDD) consisting of two Entities: Conversation and Message . Conversation is the root, as a Message doesn't exist without the conversation, and all messages in the system are part of a conversation. I have a ConversationRepository class (although it's really more like a Gateway, I use the term "Repository") which finds Conversations in the database; when it finds a Conversation it also creates (via Factories) a list of messages for that Conversation (exposed as a property). This seems to be the correct way of handling things as there doesn't seem to be a need for a full-blown MessageRepository class as it only exists when a Conversation is retrieved. However, when it comes to saving a Message, is this the responsibility of the ConversationRepository, since it's the aggregate root of Message? What I mean is, should I have a method on ConversationRepository called, say, AddMessage that takes a Message as it's parameter and saves it to the database? Or should I have a separate repository for finding/saving Messages? The logical thing seems to be one repository per Entity, but I've also heard "One repository per Context". | The blue book is definitely worth a read if you want to get the best out of the DDD approach. DDD patterns are not trivial and learning the essence of each of them will help you ponder when to use which pattern, how to divide your application in layers, how to define your Aggregates, and so on. The group of 2 entities you're mentioning isn't a Bounded Context - it's probably an Aggregate. Each Aggregate has an Aggregate Root, an Entity that serves as a single entry point to the Aggregate for all other objects. So no direct relation between an Entity and another Entity in another Aggregate that is not the Aggregate Root. Repositories are needed to get hold of Entities that are not easily obtained by traversal of other objects. Repositories usually contain Aggregate Roots, but there can be Repositories of regular Entities as well. In your example, Conversation seems to be the Aggregate Root. Maybe Conversations are the starting point of your application, or maybe you want to query them with detailed criteria so they are not satisfyingly accessible through simple traversal of other objects. In such a case you can create a Repository for them that will give the client code the illusion of a set of in-memory Conversations to query from, add to or delete from directly.
Messages, on the other hand, are easily obtained by traversal of a Conversation, and you might not want to get them according to detailed criteria, just all of a Conversation's Messages at once, so they might not need a Repository. ConversationRepository will play a role in persisting Messages, but not such a direct role as you mention. So, no AddMessage() on ConversationRepository (that method rather belongs in Conversation itself); instead, each time the Repository persists a Conversation, it's a good idea to persist its Messages at the same time, either transparently if you use an ORM framework such as (N)Hibernate, or with ad hoc SQL if you so choose. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113433",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22390/"
]
} |
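A rough sketch of that split of responsibilities (written in C++ purely for illustration; every name, and the existence of a findById method, is invented): the add-message behaviour sits on the aggregate root, and the repository saves the root together with its Messages.

#include <string>
#include <utility>
#include <vector>

struct Message {                                // entity that lives inside the aggregate
    std::string text;
};

class Conversation {                            // aggregate root
public:
    void addMessage(std::string text) {         // behaviour belongs on the root...
        messages_.push_back(Message{std::move(text)});
    }
    const std::vector<Message>& messages() const { return messages_; }
private:
    std::vector<Message> messages_;
};

class ConversationRepository {
public:
    // Persists the Conversation and, as part of the same operation, its Messages.
    // The body is deliberately omitted: it could be an ORM mapping, ad hoc SQL, etc.
    void save(const Conversation& conversation);
    Conversation findById(long conversationId);
};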
113,533 | It seems that C has its own quasi-objects such as 'structs' that can be considered as objects (in the high-level way that we would normally think). And also, C files themselves are basically separate "modules", right? Then aren't modules kind of like 'objects' too? I'm confused as to why C, which seems so similar to C++, is considered a low-level "procedural" language where as C++ is high-level "object-oriented" *edit: (clarification) why and where, is the line drawn, for what an 'object' is, and isn't? | It seems that C has its own quasi-objects such as 'structs' that can be considered as objects Let's together you and I read through the Wikipedia page on object oriented programming and check off the features of C-style structs that correspond to what is traditionally considered to be object-oriented style: (OOP) is a programming paradigm using "objects" – data structures consisting of data fields and methods together with their interactions Do C structs consist of fields and methods together with their interactions ? No. Programming techniques may include features such as data abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance. Do C structs do any of these things in a "first class" way? No. The language works against you every step of the way. the object-oriented approach encourages the programmer to place data where it is not directly accessible by the rest of the program Do C structs do this? No. An object-oriented program will usually contain different types of objects, each type corresponding to a particular kind of complex data to be managed or perhaps to a real-world object or concept Do C structs do this? Yes. Objects can be thought of as wrapping their data within a set of functions designed to ensure that the data are used appropriately No. each object is capable of receiving messages, processing data, and sending messages to other objects Can a struct itself send and receive messages? No. Can it process data? No. OOP data structures tend to "carry their own operators around with them" Does this happen in C? No. Dynamic dispatch ... Encapsulation ... Subtype polymorphism ... Object inheritance ...
Open recursion ... Classes of objects ... Instances of classes ... Methods which act on the attached objects ... Message passing ... Abstraction Are any of these features of C structs? No. Precisely which characteristics of structs do you think are "object oriented"? Because I can't find any other than the fact that structs define types . Now, of course you can make structs that have fields that are pointers to functions. You can make structs have fields that are pointers to arrays of function pointers, corresponding to virtual method tables. And so on. You can of course emulate C++ in C. But that is a very non-idiomatic way to program in C; you'd be better off just using C++. And also, C files themselves are basically separate "modules", right? Then aren't modules kind of like 'objects' too? Again, what characteristics of modules are you thinking of that makes them act like objects? Do modules support abstraction, encapsulation, messaging, modularity, polymorphism, and inheritance? Abstraction and encapsulation are pretty weak. Obviously modules are modular; that's why they're called modules. Messaging? Only in the sense that a method call is a message and modules can contain methods. Polymorphism? Nope. Inheritance? Nope. Modules are pretty weak candidates for "objects". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113533",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
113,545 | A little background: I am one of two programmers for our department of 10 people (the rest are artists and management). The two of us do all of the coding required to make things flow well, and develop any projects that come up. I've been programming for about 4 years now, where this is his first "real" job (as he puts it). We generally are working on different projects at any point in time. A couple months ago I developed a (by no means perfect) set of classes that were to be used for a later project. A large portion of that project was delegated to him (for billing reasons) to design and program a GUI interface. Since he was new, I helped a bit with the designing, and said to ask for help if he needed it with the rest. He finished up the interface a few weeks ago, which he demo'd to show that it worked, although a little slow. The next part of that project has started which I'm working on. I opened up the interface to start with the next steps, and immediately ran into issues (a little slow was a little understatement, errors on common actions, etc.). I looked in to the code for a few issues and am finding O(n^n) on calls that should be O(n) , type assumptions with no error checking (it's in Python), references to the GUI added to the original code, and so on. Now, I definitely would like to teach him what was wrong and how to fix it, but he's already moved on to his next project, and this was a few weeks ago. I'm afraid me saying "Go back and do it right!" (with help of course) is too harsh, and we still have other projects to get done in the meantime. Should I just fix the code myself for now and try to catch things in the future? | Sounds like instituting some sort of code review policy might be beneficial on multiple levels. Some immediate benefits: You can directly influence the quality of his code before code is committed thus keeping the code base quality high Keeps you from making similar mistakes that another set of eyes may catch In the absence of coding guidelines, reviews naturally lead to consistency in coding style Knowledge sharing. If there's only two of you and one gets hit by a bus... Now when you go ahead and start cleaning up his code, use that as a teaching exercise when you seek a review of this code. You will be getting your stuff reviewed, and he may learn how to do it better next time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113545",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29179/"
]
} |
113,556 | I do a lot of work in Python and Java, and both those languages have fairly common (though not universal) conventions on how capitalization should be used in identifiers: both use PascalCase for class names and ALL_CAPS for "global" constants, but for other identifiers a lot of Java code uses mixedCase whereas a lot of Python code uses underscore_delimiters . I know that no language or library enforces any particular capitalization, but I've found that when I stick to the standard conventions for the language I'm using, my code seems much more readable. Now I'm starting a project in C++, and I'd like to apply the same idea. Is there any most common convention for capitalization that I should know about? | Is there any most common convention for capitalization that I should know about? C++ is based on C, which is old enough to have developed a whole bunch of naming conventions by the time C++ was invented. Then C++ added a few, and C hasn't been idle with thinking of new ones either. Add to that the many C-derived languages, which developed their inventor's C naming conventions further, to the point where they back-fertilized on C and C++... In other words: C++ hasn't one, but many of such conventions. However, if you are looking for the one naming convention, you might as well look at the standard library's naming convention , because this is the single one that all C++ developers will have to know and be used to. However, whatever you use the most important rule is: Be consistent! Interestingly, while I started out with a mix of PascalCase and camelCase, and was involved in numerous projects with even more numerous naming conventions, over the years I find I got stuck more and more with the standard_library_convention. Don't ask me why. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113556",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3652/"
]
} |
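For reference, a few lines written in the standard-library style mentioned above (lower_case names with underscores, as in std::vector or std::find_if); the identifiers themselves are made up.

#include <string>
#include <vector>

namespace order_processing {

struct line_item {                        // type names: lower_case, like std::string
    std::string product_code;
    int quantity = 0;
};

int total_quantity(const std::vector<line_item>& items) {   // functions and variables too
    int total = 0;
    for (const auto& item : items) {
        total += item.quantity;
    }
    return total;
}

}  // namespace order_processing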
113,576 | In Eric Lippert's article What's Up With Hungarian Notation? , he states that the purpose of Hungarian Notation (the good kind) is to extend the concept of "type" to encompass semantic information in addition to storage representation information. A simple example would be prefixing a variable that represents an X-coordinate with "x" and a variable that represents a Y-coordinate with "y", regardless of whether those variables are integers or floats or whatever, so that when you accidentally write xFoo + yBar , the code clearly looks wrong. But I've also been reading about Haskell's type system, and it seems that in Haskell, one can accomplish the same thing (i.e. "extend the concept of type to encompass semantic information") using actual types that the compiler will check for you. So in the example above, xFoo + yBar in Haskell would actually fail to compile if you designed your program correctly, since they would be declared as incompatible types. In other words, it seems like Haskell's type system effectively supports compile-time checking equivalent to Hungarian Notation So, is Hungarian Notation just a band-aid for programming languages whose type systems cannot encode semantic information? Or does Hungarian Notation offer something beyond what a static type system such as Haskell's can offer? (Of course, I'm using Haskell as an example. I'm sure there are other languages with similarly expressive (rich? strong?) type systems, though I haven't come across any.) To be clear, I'm not talking about annotating variable names with the data type, but rather with information about the meaning of the variable in the context of the program. For example, a variable may be an integer or float or double or long or whatever, but maybe the variable's meaning is that it's a relative x-coordinate measured in inches. This is the kind of information I'm talking about encoding via Hungarian Notation (and via Haskell types). | I would say "Yes". As you say, the purpose of Hungarian Notation is to encode information in the name that cannot be encoded in the type. However, there are basically two cases: That information is important. That information is not important. Let's start with case 2 first: if that information is not important, then Hungarian Notation is simply superfluous noise. The more interesting case is number 1, but I would argue that if the information is important, it should be checked, i.e. it should be part of the type , not the name . Which brings us back to the Eric Lippert quote: extend the concept of "type" to encompass semantic information in addition to storage representation information. Actually, that's not "extending the concept of type", that is the concept of type! The whole purpose of types (as a design tool) is to encode semantic information! Storage representation is an implementation detail that doesn't usually belong in the type at all . (And specifically in an OO language cannot belong in the type, since representation independence is one of the major prerequisites for OO.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113576",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3640/"
]
} |
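To mirror the xFoo + yBar example from the question in a statically typed setting other than Haskell, here is a deliberately minimal C++ sketch: with distinct wrapper types, mixing the two axes simply fails to compile, which is the "semantic information in the type" described above.

struct XCoordinate { double value; };
struct YCoordinate { double value; };

// Only same-axis addition is defined.
XCoordinate operator+(XCoordinate a, XCoordinate b) { return {a.value + b.value}; }
YCoordinate operator+(YCoordinate a, YCoordinate b) { return {a.value + b.value}; }

int main() {
    XCoordinate xFoo{1.0};
    YCoordinate yBar{2.0};

    XCoordinate ok = xFoo + XCoordinate{3.0};   // fine: same semantic type
    // XCoordinate wrong = xFoo + yBar;         // compile error: no operator+ for (XCoordinate, YCoordinate)
    (void)ok;                                   // silence the unused-variable warning
}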
113,593 | I'm a great believer in clean code and code craftsmanship, though I'm currently at a job where this isn't regarded as a top priority. I sometimes find myself in a situation where a peer's code is riddled with messy design and very little concern for future maintenance, though it's functional and contains little to no bugs. How do you go about suggesting improvements in a code review when you believe there is so much that needs changing, and there's a deadline coming up? Keep in mind that suggesting the improvements be made after the deadline may mean they'll be de-prioritized altogether as new features and bug-fixes come in. | Double-check your motivation. If you think the code should be changed, you ought to be able to articulate some reason why you think it should be changed. And that reason should be more concrete than "I would have done it differently" or "it's ugly." If you can't point to some benefit that comes from your proposed change, then there's not much point in spending time (a.k.a. money) in changing it. Every line of code in the project is a line that has to be maintained. Code should be as long as it needs to be to get the job done and be easily understood, and no longer. If you can shorten the code without sacrificing clarity, that's good. If you can do it while increasing clarity, that's much better. Code is like concrete: it's more difficult to change after it's been sitting a while. Suggest your changes early if you can, so that the cost and risk of changes are both minimized. Every change costs money. Rewriting code that works and is unlikely to need to be changed could be wasted effort. Focus your attention on the sections that are more subject to change or that are most important to the project. Form follows function, and sometimes vice versa. If the code is messy, there's a stronger likelihood that it also contains bugs. Look for those bugs and criticize the flawed functionality rather than the aesthetic appeal of the code. Suggest improvements that make the code work better and make the operation of the code easier to verify. Differentiate between design and implementation. An important class with a crappy interface can spread through a project like cancer. It will not only diminish the quality of the rest of the project, but also increase the difficulty of repairing the damage. On the other hand, a class with a well-designed interface but a lousy implementation shouldn't be a big deal. You can always re-implement the class for better performance or reliability. Or, if it works correctly and is fast enough, you can leave it alone and feel secure in the knowledge that its cruft is well encapsulated. To summarize all the above points: Make sure that your proposed changes add value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22513/"
]
} |
113,632 | I've seen this a lot in our legacy system at work - functions that go something like this: bool todo = false;
if(cond1)
{
... // lots of code here
if(cond2)
todo = true;
... // some other code here
}
if(todo)
{
...
} In other words, the function has two parts. The first part does some sort of processing (potentially containing loops, side effects, etc.), and along the way it might set the "todo" flag. The second part is only executed if the "todo" flag has been set. It seems like a pretty ugly way to do things, and I think most of the cases that I've actually taken the time to understand, could be refactored to avoid using the flag. But is this an actual anti-pattern, a bad idea, or perfectly acceptable? The first obvious refactorization would be to cut it into two methods. However, my question is more about whether there's ever a need (in a modern OO language) to create a local flag variable, potentially setting it in multiple places, and then using it later to decide whether to execute the next block of code. | I don't know about anti-pattern, but I'd extract three methods from this. The first would perform some work and return a boolean value. The second would perform whatever work is performed by "some other code" The third would perform the auxiliary work if the boolean returned was true. The extracted methods would probably be private if it was important that the second only (and always) be called if the first method returned true. By naming the methods well, I hope it would make the code clearer. Something like this: public void originalMethod() {
boolean furtherProcessingRequired = lotsOfCode();
someOtherCode();
if (furtherProcessingRequired) {
doFurtherProcessing();
}
return;
}
private boolean lotsOfCode() {
if (cond1) {
... // lots of code here
if(cond2) {
return true;
}
}
return false;
}
private void someOtherCode() {
... // some other code here
}
private void doFurtherProcessing() {
// Do whatever is needed
} Obviously there is debate to be had over whether early returns are acceptable, but that is an implementation detail (as is the code formatting standard). Point is that the intent of the code becomes clearer, which is good... One of the comments on the question suggests that this pattern represents a smell , and I would agree with that. It is worth looking at it to see if you can make the intent clearer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113632",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38424/"
]
} |
113,800 | I've always thought that referring to the syntax of a language was the same as referring to the semantics of a language. But I've been informed that apparently that's not the case. What's the difference? | Semantics ~ Meaning
Syntax ~ Symbolic representation
So two programs written in different languages could do the same thing (semantics) but the symbols used to write the program would be different (syntax). A compiler will check your syntax for you (compile-time errors), and derive the semantics from the language rules (mapping the syntax to machine instructions say), but won't find all the semantic errors (run-time errors, e.g. calculating the wrong result because the code says add 1 instead of add 2). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113800",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38478/"
]
} |
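A tiny illustration of the distinction, using an arbitrary C++ example: the two functions below differ in syntax but have identical semantics - both return 0 + 1 + ... + (n - 1).

// Same meaning (semantics), different symbols and structure (syntax).
int sum_with_for(int n) {
    int total = 0;
    for (int i = 0; i < n; ++i) total += i;
    return total;
}

int sum_with_while(int n) {
    int total = 0;
    int i = 0;
    while (i < n) { total += i; ++i; }
    return total;
}

Changing "total += i" to "total += 2 * i" in either one would still be perfectly valid syntax, but the semantics, and therefore the result, would no longer match the intent.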
113,844 | I searched the forum, but I couldn't find the answers why it should be avoided, only why it's not a silver bullet. So I don't think this question is a duplicate. Is there a VALID reason why I should unlearn Systems Hungarian I'm used to? So far I see the following benefits in using it: Consistent variable naming You see type without searching (intellisense is dead/indexing half of the time, so it's still a valid reason) Semantics can still be packed into second part of the name And following downsides: It annoys some people (no idea why) If type is changed, the type might not match the naming of the variable (I don't think it's valid reason, types are changed rarely, and you have "rename all") So why: vector<string> vecCityNames;
wstring strCity = L"abc";
//more code here
vecCityNames.push_back(strCity); is worse than: vector<wstring> cityNames;
wstring city = L"abc";
//more code here
cityNames.push_back(city); // Are we pushing back int on a queue? Float on a stack? Something else? | I used to use it (many years ago) and I don't anymore. The main reason is it's superfluous in the OO languages with strong typing (C++, Java) which I happen to have used most of my career. In these languages, if I define my types well, the compiler can and will enforce type safety for me. So any naming prefixes are just clutter which make the names longer, thus harder to read and to search. In any well written OO program, most of your variables are (references to) user defined types. If you prefix these with the same general tag (like o for "object"), you won't get any benefit from it, only the drawbacks. If however you prefix them with type-specific tags, you get into a maze of trying to find different abbreviations for a thousand different types with often similar names*, and to remember to change them all when a type or its name changes (which is not rare at all in a well maintained program). Of course, this doesn't apply to non-OO languages, and may not apply to weakly and/or dynamically typed languages (I have no experience with these, apart from C). Neither to suboptimal editors/IDEs without a usable IntelliSense (or its local equivalent). And this is just my 2 cents. So if Hungarian notation works for your team and your project, go for it. The important thing is to agree on this (as well as on a consistent coding style in general) before the project starts, and keep it consistent at all times. * just a short list from our current project: we have Charge, ChargeBreakdown, ChargeCalculator, ChargeDAO, ChargeDTO, ChargeLineHelper, ChargeMaps, ChargePair and ChargeType, among others. Moreover we also have Contracts, Countries, Checkouts, Checkins... and this is just the letter C, in a project which probably wouldn't even be called "reasonably sized" by the OP. Disclaimer: I am actually a Hungarian, so I believe I can speak on this issue with authority ;-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113844",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8029/"
]
} |
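One way to picture the renaming cost the answer describes, as a hedged C++ sketch (the types and names are invented): the variable below started life as a wide string and was prefixed accordingly, then the design moved to a dedicated type and the prefix quietly became a lie.
struct CityId { int value; };   // hypothetical replacement type

// Before the change: wstring strCity = L"abc";
CityId strCity{42};             // the "str" prefix now misleads every reader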
113,936 | Possible Duplicate: Do people in non-English-speaking countries code in English? I have a development coming up that is intended to be sold across Latin America (Spanish speakers), but I've heard from some partners that it is a good practice to always code in English, I mean just code (methods, classes, page names, etc); labels on the GUI are going to be all in Spanish... Code will be edited in the future by developers of companies across Latin America and just maybe some from outside. What do you think? Any experience with this? | I'm French and working in a French company. However, we do code mostly in English. Here is why: We sometimes have non-French people working with us so it's easier for them. We have some code coming from third parties (lib, framework, etc.), and this code is often in English, so we don't want to end up with a patchwork of French-English code. Generally speaking, this is not a problem because you have to understand English anyway if you want to be a good programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/113936",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38515/"
]
} |
114,002 | Possible Duplicate: Importance of learning to google efficiently for a programmer? Avoiding lengthy discussions, as a senior level student in CS, how can I get away from Googling problems I run into? I find myself using it too much; I seemingly reach for the instant answer and then blindly copy and paste code, hoping it works. Anyone can do that. I've read the related threads about being a better programmer, but mostly those recommend practicing on pet projects, which I have done, but again I feel EVERY wall encountered, from design through completion, was hurdled with Google. Do professionals instantly "research" their problem? Or do you guys step back and try and figure it out yourselves? I'm talking about both 'algorithm/design' problems as well as compiler issues. | I research most problems I encounter. If I encounter an issue, my first assumption is that I am not the first one to have encountered it. I also don't believe in reinventing the wheel - so will look for an existing solution before writing my own. The thing about research is that you need to evaluate the results and how well they fit (or not) with your code base/model/project/team. This means you need to understand the solution, how it works and how to apply it to your situation. My research saves my clients time and money. Before the days of search engines, I asked questions on usenet. The Internet is a great tool - use it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38539/"
]
} |
114,156 | HTML4 / XHTML1 allows only GET and POST in forms, and now it seems like HTML5 will do the same. There is a proposal to add these two but it doesn't seem to be gaining traction. What were the technical or political reasons for not including PUT and DELETE in the HTML5 specification draft? | This is a fascinating question. The other answers here are all speculative, and in some cases flat-out incorrect. Instead of writing my opinion here, I actually did some research and found original sources that discuss why delete and put are not part of the HTML5 form standard. As it turns out, these methods were included in several early HTML5 drafts (!), but were later removed in the subsequent drafts. Mozilla had actually implemented this in a Firefox beta, too. What was the rationale for removing these methods from the draft? The W3C discussed this topic in bug report 10671. Mike Amundsen argued in favor of this support: Executing PUT and DELETE to modify resources on the origin server is straight-forward for modern Web browsers using the XmlHttpRequest object. For unscripted browser interactions this is not so simple. [...] This pattern is required so often that several commonly-used Web frameworks/libraries have created a "built-in" work-around. [...] Other considerations: Using POST as a tunnel instead of using PUT/DELETE can lead to caching mis-matches (e.g. POST responses are cachable, PUT responses are not(6), DELETE responses are not(7)). Using a non-idempotent method (POST) to perform an idempotent operation (PUT/DELETE) complicates recovery due to network failures (e.g. "Is it safe to repeat this action?"). [...] It's worth reading his entire post. Tom Wardrop also makes an interesting point (href): HTML is inextricably bound to HTTP. HTML is the human interface of HTTP. It's therefore automatically questionable why HTML does not support all relevant methods in the HTTP specification. Why can machines PUT and DELETE resources, but humans cannot? [...] It's contradictory that while HTML goes to great lengths to ensure semantic markup, it has to date made no such effort to ensure semantic HTTP requests. The bug was eventually closed as Won't Fix by Ian Hickson, with the following rationale (href): PUT as a form method makes no sense, you wouldn't want to PUT a form payload. DELETE only makes sense if there is no payload, so it doesn't make much sense with forms either. However, that's not the end of the story! The issue was closed in the W3C bug tracker and escalated to the HTML Working Group issue tracker: https://www.w3.org/html/wg/tracker/issues/195 At this point, it seems that the main reason why there is no support for these methods is simply that nobody has taken the time to write a comprehensive specification for it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114156",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35160/"
]
} |
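To illustrate the "built-in work-around" Mike Amundsen's quote refers to: many server-side frameworks accept a plain POST form that carries the intended method in a hidden field and then route it as a DELETE. The field name _method below is only a common framework convention, not part of HTML itself, so check what your own framework expects.
<form action="/articles/42" method="post">
  <!-- browsers only send GET or POST from forms; the framework rewrites this -->
  <input type="hidden" name="_method" value="DELETE">
  <button type="submit">Delete article</button>
</form>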
114,258 | I have multiple projects on Git that I eventually want to bring others into. However, right now it's just me and I use Git and GitHub very simplistically: no branches and basically just using the commits as a backup to my local files. Sometimes I'll go back and look at previous versions of my files for reference, but I haven't needed to do any rollbacks to this point, though I appreciate the option should I need it in the future. As a sole developer, what Git or GitHub features could I take advantage of that would benefit me right now? What should my workflow be like? Also, are there any particular practices that I need to start doing in anticipation of adding others to my projects in the future? | Also, are there any particular practices that I need to start doing in anticipation of adding others to my projects in the future? Of course. There is a simple good practice that you can use even if you don't have a team right now: create a separate branch for development. The idea is that the master branch will contain only released code versions or major changes. This can be adopted easily by new developers that join your project. Besides, branching is useful even if you are working solo. For instance, you find a bug while in the process of coding a new feature. If you don't use branches, you will have to do both: add new features and fix the bug in the same branch. This is not good :P On the other hand, if you had created a new branch for creating your new feature, you can just check out the development branch, fix the bug, and then check out the new feature branch again. This is just a brief example of what you can do as a sole programmer. I am sure there must be more good practices. I highly recommend this article: A successful Git branching model | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114258",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/73/"
]
} |
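A minimal command-line sketch of the branching habit described above (the branch names are just conventions, not anything Git requires):
git checkout -b develop                    # long-lived integration branch off master
git checkout -b feature/export develop     # start work on a new feature
# ...a bug turns up mid-feature: fix it on develop instead of mixing it in
git checkout develop
git commit -am "Fix crash on empty input"
git checkout feature/export                # resume the feature
git merge develop                          # optionally bring the fix into the feature branch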
114,338 | Back in school some 10+ years ago, they were teaching you to use exception specifiers. Since my background is as one of them Torvaldish C programmers who stubbornly avoids C++ unless forced to, I only end up in C++ sporadically, and when I do I still use exception specifiers since that's what I was taught. However, the majority of C++ programmers seem to frown upon exception specifiers. I have read the debate and the arguments from various C++ gurus, like these . As far as I understand it, it boils down to three things: Exception specifiers use a type system that is inconsistent with the rest of the language ("shadow type system"). If your function with an exception specifier throws anything else except what you have specified, the program will get terminated in bad, unexpected ways. Exception specifiers will be removed in the upcoming C++ standard. Am I missing something here or are these all the reasons? My own opinions: Regarding 1): So what. C++ is probably the most inconsistent programming language ever made, syntax-wise. We have the macros, the goto/labels, the horde (hoard?) of undefined-/unspecified-/implementation-defined behavior, the poorly-defined integer types, all the implicit type promotion rules, special-case keywords like friend, auto, register, explicit... And so on. Someone could probably write several thick books of all the weirdness in C/C++.
So why are people reacting against this particular inconsistency, which is a minor flaw in comparison to many other far more dangerous features of the language? Regarding 2): Isn't that my own responsibility? There are so many other ways I can write a fatal bug in C++, why is this particular case any worse? Instead of writing throw(int) and then throwing Crash_t, I may as well claim that my function returns a pointer to int, then make a wild, explicit typecast and return a pointer to a Crash_t. The spirit of C/C++ has always been to leave most of the responsibility to the programmer. What about advantages then? The most obvious is that if your function tries to explicitly throw any type other than what you specified, the compiler will give you an error. I believe that the standard is clear regarding this(?). Bugs will only happen when your function calls other functions that in turn throw the wrong type. Coming from a world of deterministic, embedded C programs, I would most certainly prefer to know exactly what a function will throw at me. If there is something in the language supporting that, why not use it? The alternatives seem to be: void func() throw(Egg_t); and void func(); // This function throws an Egg_t I think there is a big chance that the caller ignores/forgets to implement the try-catch in the second case, less so in the first case. As I understand it, if either one of these two forms decides to suddenly throw another kind of exception, the program will crash. In the first case because it isn't allowed to throw another exception, in the second case because nobody expected it to throw a SpanishInquisition_t and therefore that exception isn't caught where it should have been. In case of the latter, having some last-resort catch(...) at the highest level of the program doesn't really seem any better than a program crash: "Hey, somewhere in your program something threw a strange, unhandled exception.". You can't recover the program once you are that far from where the exception was thrown; the only thing you can do is to exit the program. And from the user's point of view they couldn't care less if they get an evil message box from the OS saying "Program terminated. Blablabla at address 0x12345" or an evil message box from your program saying "Unhandled exception: myclass.func.something". The bug is still there. With the upcoming C++ standard I'll have no other option but to abandon exception specifiers. But I would rather hear some solid argument why they are bad, rather than "His Holiness has stated it and thus it is so". Perhaps there are more arguments against them than the ones I listed, or perhaps there is more to them than I realize? | Exception specs are bad because they're weakly enforced, and therefore don't actually accomplish much, and they're also bad because they force the run-time to check for unexpected exceptions so that it can call terminate() instead of invoking UB; this check can waste a significant amount of performance. So in summary, exception specs aren't enforced strongly enough in the language to actually make code any safer, and implementing them as specified was a big performance drain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114338",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
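A small C++ sketch of the weak enforcement the answer describes (the type names are made up, echoing the question). Note that C++11 deprecated these dynamic exception specifications and later standards removed them; noexcept is the surviving replacement.
struct Egg_t {};
struct SpanishInquisition_t {};

void lay() throw(Egg_t) {          // accepted by pre-C++17 compilers
    throw SpanishInquisition_t();  // still compiles: the specification is only
                                   // checked at run time, where std::unexpected()
                                   // and ultimately terminate() get called
}

void hatch() noexcept;             // the C++11 replacement for "throws nothing"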
114,424 | There are a ton of questions on Programmers.SE about whether or not taking extended time off is a good idea and what to do during that time off to maintain your skill level: https://softwareengineering.stackexchange.com/questions/31536/will-taking-two-years-off-for-school-in-a-related-field-destroy-a-mid-level-devel https://softwareengineering.stackexchange.com/questions/102009/can-i-take-a-year-off-without-hurting-my-career If you take a year or two out from being a developer, is it really that hard to get back into it? https://softwareengineering.stackexchange.com/questions/91176/is-taking-a-break-in-career-to-learn-stuff-a-bad-idea They've been really helpful, but I still have a question about the logistical details of take a self-funded sabbatical. I'm coming to the end of a year off from working as a software developer (after 7 years in the field.) I've taken this year to let myself explore interests that I'd never had enough time for: baking, sewing, photography, and making new and interesting friends. During this time, I've also been working on pet development projects in technologies and disciplines that I would never have had the chance to explore otherwise. in addition, I've read all of those software development books I'd never had time to read and kept up with programming news and blogs. The development projects have all had an eye towards being part of a Micro ISV, but only one of the projects has made it to any sort of production stage. That project is impressive, but not very successful (yet!) I've got a reasonably active programming blog that I believe reflects a high dedication to software development and demonstrates that I haven't just put my programming skills on a shelf for a year. My question is: What is the best way to transmit this information to a potential employer at the resume/cover letter level? I'm reasonably confident that I can explain this sabbatical in an interview setting. I know that at the resume level though, hiring managers will use any excuse they can to throw my resume out (I've done hiring and I would probably have thrown out my own resume if it didn't handle this time off really well, maybe it's karma.) So I feel like the resume/cover letter is the really tricky part in getting a kick-ass new job. I have a few ideas about what to do, but I'm not sure what the larger community sees as acceptable. Here are the approaches I'm considering: Put a special section on my resume for the sabbatical time in which I outline the personal projects. If I do this, what's a good way to label it? Create a personal company name and put these projects in the work
experience section of my resume under that company. This seems like the most go-getter
way to do it, but I'm worried it could be perceived as trickery when
I get to the interview stage. Also if I do this, what do I put as my
title? Leave my resume as-is and explain everything in my cover letter,
with a link to my active blog. There's probably something I'm not thinking of, I'm open to any
other ideas. I know I've included a bunch of special-snowflake information about my personal situation, but I would prefer answers for the more general case. The personal details are here more as an example than as a request for answers designed specifically for me. | Honestly I think you're over thinking it a bit. If you were gaining relevant experience during your time off, then your resume should not even show a break in your work timeline. Instead of a company name just put something like Entrepreneurial Pursuits . You don't even have to explain in detail each project. The important parts are the technologies you worked on and the business problems you helped overcome. In the end, majority of hiring managers are just playing a game of buzz word search anyways. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114424",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38677/"
]
} |
114,453 | Second of all, I was wondering if anyone knew what the difference was between exceptions (in the realm of exception control flow) and Exceptions (such as used in Java). But are they there to basically protect the system from crashing by terminating the user program? | Exceptions exist to allow Exception Handling , which can avoid crashes but more generally prevent unwanted or unpredictable system behavior. For instance if my program's connection to a database times out it's usually not going to crash the system, but if I was depending on data from the database an exception can allow me to treat this data-less situation differently than normal. Say by default my program displays a page of data based on what was returned from the database--well crap, I have no data. Instead of presenting a messed up view or continuing a potentially invalid operation I can catch this exception and fall back to a different database, read from local data, ask the user for data or otherwise return the user or system to a safe state (presumably one that will not immediately cause the same exception!) In addition in systems where user input could be the cause/solution to a problem, exceptions can let a user know detailed and helpful info about the problem. Instead of the too common "An unhandled exception occurred at..." or "Intimidating Error Message Straight from SQL" you can tell the user something helpful or at least understandable like "Could not connect to resource B." | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114453",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35740/"
]
} |
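A hedged Java sketch of the fallback idea in the answer; every name below is invented for illustration rather than taken from a real API.
PageData data;
try {
    data = primaryDatabase.loadPageData(pageId);
} catch (DatabaseTimeoutException e) {
    // Return the system to a safe state instead of crashing:
    // try a replica or locally cached data, and tell the user
    // something understandable if that fails too.
    data = localCache.loadPageData(pageId);
}
renderPage(data);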
114,542 | A friend in academia asked me for advice (I'm a C# business application developer). He has a legacy codebase which he wrote in Fortran in the medical imaging field. It does a huge amount of number crunching using vectors. He uses a cluster (30ish cores) and has now gone towards a single workstation with 500ish GPUs in it. However, where to go next with the codebase, so that: Other people can maintain it over the next 10-year cycle. He can get faster at tweaking the software. It can run on different infrastructures without recompiles. After some research from me (this is a super interesting area) some options are: Use Python and CUDA from Nvidia. Rewrite in a functional language, for example F# or Haskell. Go cloud based and use something like Hadoop and Java. Learn C. What has been your experience with this? What should my friend be looking at to modernize his codebase? UPDATE: Thanks @Mark and everyone who has answered. The reason my friend is asking this question is that it's a perfect time in the project's lifecycle to do a review. Bringing research assistants up to speed in Fortran takes time (I like C#, and especially the tooling, and can't imagine going back to older languages!!) I liked the suggestion of keeping the pure number crunching in Fortran, but wrapping it in something newer. Perhaps Python, as that seems to be gaining a foothold in academia as a general-purpose programming language that is fairly easy to pick up. See Medical Imaging and a guy who has written a Fortran wrapper for CUDA, Can I legally publish my Fortran 90 wrappers to Nvidia's CUFFT library (from the CUDA SDK)? | The demands you have put actually put Fortran at the top of the list, for problems like this: a) number crunching b) parallelizable c) it was and still is the de facto language taught outside of CS studies (to engineers who aren't professional programmers). d) has an incredible(!) industry backing, number-of-industry-grade-compilers-wise, with none of the vendors showing the least signs of abandoning that branch. One of Intel's representatives not long ago revealed that sales of their Fortran products are higher than any other in their development tools. It is also a language which is incredibly easy to pick up. I don't agree that it takes time to bring research assistants up to speed. My first textbook on it had no more than, oh I don't know, 30 (?) pages of sparse printed text. It is a language in which, after learning 10 keywords, one can write medium-sized programs. I would dare say that those 30 pages written in default Word text would make a more than comprehensive "Fortran manual" for most users. If you're interested in CUDA, you might want to check Portland Group's compiler, which supports it. I'm not familiar with the finer details, but people generally talk of it with praise. Apart from that, for parallelizing programs you have available OpenMP, MPI and now the upcoming (and long awaited) co-arrays, which Intel's compiler has recently implemented. To not waste words, Fortran has a very fine gamut of "libraries" for parallelizing programs. Industry standard numerical libraries are developed for it foremost, other languages following more or less in the function/routines portfolio. All that being said, I would however (depends on when it was originally written) recommend, if it is, let's say, F77 code or older, rewriting it partially through time to newer dialects - F90 at least, if possible with F2003 features. A paper / thesis on that topic was recently published (medium-sized PDF file ahead).
Not only can that, if done properly, ensure portability across multiple platforms, but it will also make future maintenance easier. p.s. As far as "future maintenance" goes, just an anecdote which I sometimes like to mention. While writing my thesis, I reused some code from my mentor, written 35 years before the time of writing. It compiled with only one error; a statement missing at the end, due to a copy-paste mistake :) @DaveMateer (reply to comment) - I'm going to make a comment in the following which may be a bit impolite, but please don't take it the wrong way, for it is meant with fair intentions. It seems to me you're tackling this "problem" in the wrong way. What I mean in a
few short points (for it is very late in here, and my ability to make up
readable (let alone comprehensible) sentences leaves me after 10p.m.) a) You mentioned you're trying to minimize extra coding time, yet you're
considering a rewrite from a language specialized for numerical computing to one from a
colorful choice of languages, if you'll pardon my expression, some of which don't have support for multidimensional arrays, amongst other
things; most of them are unsuitable for heavy numerical work (of the parallel processing
capabilities of Haskell and Hadoop I admit, I know nothing about ... but have
never heard them even mentioned in those circles). It possibly has been tried, but I've never heard of a rewrite from Fortran,
a language for discretized problems, to a functional language. There has been a discussion recently on comp.lang.fortran (try searching
through Google Groups) on the aspects of scientific computing "in the cloud" (wouldn't like to demotivate you, but to be fair, no one was really sure what
that term even represents, let alone had an example of a successful application.
Most people agreed that potential exists, but so far they're happy the way things
work for now.). A lot of problems are not suitable for that kind of parallelisation either. b) What would be the costs of such a rewrite? People/hours. c) Correct versions of the libraries to compile...- is a problem in any
language that cannot be avoided, however you look at it. d) I've heard of Python (a nice language really) used in parallel applications
on a few occasions, but its penetration of that market still doesn't seem to be
rising, and its ever changing nature makes it a very poor choice for a long term
project (think backward compatibility). Some people like it very much as a "glue" language. Ugh, if I think of anything else, will add it tomorrow. Gotta get some sleep... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114542",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25850/"
]
} |
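For a flavour of what "rewriting towards newer dialects" plus the standard parallel tooling looks like, here is a generic free-form Fortran 90 loop with an OpenMP directive; it is only a sketch and has nothing to do with the friend's actual code.
program saxpy
  implicit none
  integer, parameter :: n = 100000
  real :: x(n), y(n), a
  integer :: i
  a = 2.0
  x = 1.0
  y = 3.0
  !$omp parallel do
  do i = 1, n
     y(i) = a*x(i) + y(i)
  end do
  !$omp end parallel do
  print *, 'y(1) =', y(1)
end program saxpy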
114,673 | This post by Rob Conery (note the slug) says that the development environment should be run inside a virtual machine. I see what he's saying and tend to agree, but still feel a little uneasy. Now that virtualization is so mature that even production systems run inside VMs, speed is pretty much a non-issue, but as I say something bothers me here. What's your take on virtualizing your development machine? Have you already done so? If you did, any pitfalls or gotchas along the road? | My experience with developing on VMs in a corporate environment is that due to virtualisation of multiple cores being fraught with difficulties, it's difficult to get the kind of performance that many enterprise development machines need. Getting the code-compile-test inner loop to be as fast as possible requires the best machines possible - compilation and the running of tests obviously run faster on machines with more cores, as those activities can quite easily be executed in a concurrent manner*. Until mainstream development OSs can deal with the number of available cores being volatile, and until virtualisation software can intelligently offer some kind of "up to N core" contract, virtualised development machines will not offer the same kind of productivity returns as physical devices. EDIT: This just recounts my personal feelings on developing using corporate-dictated VMs, which are often prescribed to cut hardware costs and which tend to run on servers. Running a local VM seems mostly superfluous provided you're enforcing good source control discipline, unless your project specifically requires you to develop code for multiple OSs. *: by which I mean the subtasks inside the compile stages and test stages can be run concurrently, NOT compiling and testing concurrently :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114673",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4994/"
]
} |
114,681 | I am learning functional programming with Haskell, and I try to grasp concepts by first understanding why I need them. I would like to know the goal of arrows in functional programming languages. What problem do they solve? I checked http://en.wikibooks.org/wiki/Haskell/Understanding_arrows and http://www.cse.chalmers.se/~rjmh/afp-arrows.pdf. All I understand is that they are used to describe graphs for computations, and that they allow easier point-free style coding. The article assumes that point-free style is generally easier to understand and to write. This seems quite subjective to me. In another article (http://en.wikibooks.org/wiki/Haskell/StephensArrowTutorial#Hangman:_Main_program), a hangman game is implemented, but I cannot see how arrows make this implementation natural. I could find a lot of papers describing the concept, but nothing about the motivation. What am I missing? | I realize I'm coming late to the party, but you've had two theoretical answers here, and I wanted to provide a practical alternative to chew over. I'm coming at this as a relative Haskell noob who nonetheless has been recently force-marched through the subject of Arrows for a project I'm currently working on. First, you can productively solve most problems in Haskell without reaching for Arrows. Some notable Haskellers genuinely do not like and do not use them (see here, here, and here for more on this). So if you're saying to yourself "Hey, I don't need these," understand that you may genuinely be correct. What I found most frustrating about Arrows when I first learned them was how the tutorials on the subject inevitably reached for the analogy of circuitry. If you look at Arrow code -- the sugared variety, at least -- it resembles nothing so much as a Hardware Definition Language. Your inputs line up on the right, your outputs on the left, and if you fail to wire them all up properly they simply fail to fire. I thought to myself: Really? Is this where we've ended up? Have we created a language so completely high-level that it once again consists of copper wires and solder? The correct answer to this, as far as I've been able to determine, is: Actually, yes. The killer use case right now for Arrows is FRP (think Yampa, games, music, and reactive systems in general). The problem facing FRP is largely the same problem facing all other synchronous messaging systems: how to wire a continuous stream of inputs into a continuous stream of outputs without dropping relevant information or springing leaks. You can model the streams as lists -- several recent FRP systems use this approach -- but when you have a lot of inputs, lists become almost impossible to manage. You need to insulate yourself from the current. What Arrows allow in FRP systems is the composition of functions into a network while at the same time entirely abstracting away any reference at all to the underlying values being passed by those functions. If you're new to FP, this can be confusing at first, and then mind-blowing when you've absorbed the implications of it. You've only recently absorbed the idea that functions can be abstracted, and how to understand a list like [(*), (+), (-)] as being of type [(a -> a -> a)]. With Arrows, you can push the abstraction one layer further. This additional ability to abstract carries with it its own dangers. For one thing, it can push GHC into corner cases where it doesn't know what to make of your type assumptions.
You'll have to be prepared to think at the type level -- this is an excellent opportunity to learn about kinds and RankNTypes and other such topics. There are also a number of examples of what I'd call "Stupid Arrow Stunts" where the coder reaches for some Arrow combinator just because he or she wants to show off a neat trick with tuples. (Here's my own trivial contribution to the madness.) Feel free to ignore such hot-dogging when you come across it in the wild. NOTE: As I mentioned above, I'm a relative noob. If I've promulgated any misconceptions above, please feel free to correct me. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114681",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34327/"
]
} |
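A very small taste of the plain function instance of Arrow, just to make "composing the network without naming the values" concrete; as the answer says, the real payoff comes with richer arrow types such as FRP signal functions.
import Control.Arrow ((&&&), (>>>))

-- Build the sum and the length side by side, then combine them,
-- without ever binding the intermediate values to names.
mean :: [Double] -> Double
mean = (sum &&& (fromIntegral . length)) >>> uncurry (/)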
114,728 | My T-SQL teacher told us that naming our PK column "Id" is considered bad practice, without any further explanation. Why is naming a table's PK column "Id" considered bad practice? | I'm going to come out and say it: It's not really a bad practice (and even if it is, it's not that bad). You could make the argument (as Chad pointed out) that it can mask errors like in the following query: SELECT *
FROM cars car
JOIN manufacturer mfg
ON mfg.Id = car.ManufacturerId
JOIN models mod
ON mod.Id = car.ModelId
JOIN colors col
ON mfg.Id = car.ColorId but this can easily be mitigated by not using tiny aliases for your table names: SELECT *
FROM cars
JOIN manufacturer
ON manufacturer.Id = cars.ManufacturerId
JOIN models
ON models.Id = cars.ModelId
JOIN colors
ON manufacturer.Id = cars.ColorId The practice of ALWAYS using 3-letter abbreviations seems much worse to me than using the column name id. (Case in point: who would actually abbreviate the table name cars with the abbreviation car? What end does that serve?) The point is: be consistent. If your company uses Id and you commonly make the error above, then get in the habit of using full table names. If your company bans the Id column, take it in stride and use whatever naming convention they prefer. Focus on learning things that are ACTUALLY bad practices (such as multiple nested correlated sub queries) rather than mulling over issues like this. The issue of naming your columns "ID" is closer to being a matter of taste than it is to being a bad practice. A NOTE TO EDITORS: The error in this query is intentional and is being used to make a point. Please read the full answer before editing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114728",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38802/"
]
} |
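For completeness, a sketch of the convention the teacher probably had in mind: naming each key after its table makes both sides of every join read the same, so a copy-paste slip like the one above tends to jump out (table and column names here are illustrative only).
SELECT *
FROM cars
JOIN manufacturers ON manufacturers.ManufacturerId = cars.ManufacturerId
JOIN models ON models.ModelId = cars.ModelId
JOIN colors ON colors.ColorId = cars.ColorId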
114,782 | I'm a self-taught programmer, just in case this question is answered in CS 101. I've learned and used lots of languages, mostly for my own personal use, but occasionally for professional stuff. It seems that I'm always running into the same wall when I run into trouble programming. For example, I just asked a question on another forum about how to handle a pointer-to-array that was returned by a function. Initially I'm thinking that I simply don't know the proper technique that the designers of C++ set up to handle the situation. But from the answers and discussions that follow I see that I don't really get what happens when something is 'returned'. How deep a level of understanding of the programming process must a good programmer achieve? | No. Nobody understands what's going on at the hardware level. Computer systems are like onions -- there are many layers, and each one depends on the layer underneath it for support. If you're the guy working on one of the outer layers, you shouldn't care too much what happens in the middle of the onion. And that's a good thing, because the middle of the onion is always changing. As long as the layer or layers that support your particular layer continue to look the same and support your layer, you're good. But then again... Yes. I mean, you don't need to understand what's really happening inside the onion, but it helps a lot to have a mental model of what the inside of a typical onion looks like. Maybe not the deepest part, where you've got gates made up of transistors and such, or the next layer or two, where you've got microcode, a clock, instruction decoding units etc. The next layers, though, are where you've got registers, the stack, and the heap. These are the deepest layers where you have a lot of influence over what happens -- the compiler translates your code into instructions that run at this level, and if you want you can usually step through these instructions and find out what's "really" happening. Most experienced programmers have a slightly fairy-tale version of these layers in their head. They help you understand what the compiler is talking about when it tells you that there was an "invalid address exception" or a "stack overflow error" or something like that. If you're interested, read a book on computer architecture. It doesn't even need to be a particularly new book -- digital computers have been working in approximately the same way for a long time. The more you learn about the inside of the onion, the more astounded you'll be that any of this stuff works at all! Learning (approximately) what's going on in the lower layers makes programming both less mysterious and, somehow, more magical. And really, more fun. Another thing you might look into is embedded onions. Er, I mean embedded systems. There are a number of embedded platforms that are pretty easy to use: Arduino and BASIC Stamp are two examples. These are basically small microprocessors with a lot of built-in features. You can think of them as onions with fewer layers than your typical desktop PC, so it's possible to get a pretty thorough understanding of just what's going on in the whole system, from the hardware on up to the software. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114782",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8699/"
]
} |
114,819 | I would consider myself a 9 to 5 programmer. What I mean by this, is that I have a programming job, but after I leave work, I leave my work there and do not take it home. I very much enjoy my career choice, and I enjoy the work that I do at my current job. I also enjoy learning new things in my field, such as new technologies and advancements in the programming industry. It's just that outside of my job I have other hobbies that I feel are more important and I'd like to devote more of my time and energies to. I also feel that devoting >40 hours a week to a single subject is a little exhausting, so are there really that many programmers that want to come home from their programming job and do more programming? Maybe it's just my current employer, but I feel like they leave little time for career development. The only way for me to keep up on the newest technologies and programming techniques is to do so on my own time, because my employer does not allocate time during work hours to do these sorts of things (deadlines == $$$). Does anyone else feel the same way about their employer? From your experience, do managers and people who hire programmers see 9 to 5 programmers as a less valuable resource? I know that I could improve my resume by contributing to and open source project etc, but I just feel like I don't have the time to spare. Could the opposite be said, such that devoting your spare time to other subjects such as the arts show a well-rounded-ness that could be a desirable trait to the company? | Let us bring some balance to this argument. For the record, I am a 9-5 programmer in the strictest sense of the word. I have coded for many many years and I will probably be coding for many more. I do have a strong passion for development and love seeing all those classes giving each other hugs and kisses. I'm all for fluffy bunny designs and FOR loops... BUT... and it's a big but... I refuse to sacrifice my other responsibilities as a husband and father to become better at one thing... software development. You see, when you lie on your death bed, you will look deep into your wife's eyes, and think of all those lovely moments you spend in Visio drawing UML diagrams and writing clean, simple and maintainable code... I think not. It's not about balance. If I have to choose, I WILL be poor and be with my family. It's not about the money or job satisfaction or the stuff I want. Agreed, my answer is probably only relevant to some of the married developers out there but for what it's worth, I'll try to represent those of us who are compelled to look after our families as real men do. Taking responsibility. Don't give me the excuse " My wife married me as I am, she knows my passion for programming and willingly sacrifices every last second of my free time for the computer because she loves me ". Dude... I won't even go there. SO, to cut a already long story short. I code from 9 to 5, I occasionally read articles on software development at home. I value time with my family and will not be an absent father or husband. The world has enough of those. You only have 80 odd years to live on this planet, what do you want your scoreboard to look like once you're done. Like this: Software developer - 8/10 Husband - 2/10 Father - 3/10 Go for it. Not me. In fact, I go as far as to not work for companies that expect regular overtime . I am willing to do overtime on occasion although still see it as a lack of managing expectations. Period. 
A delivery date can in most cases be flexible if issues are detected/reported ahead of time. Companies tend to start with the "crunch time" excuse which conveniently turns into a regular occurrence. It makes business sense, unpaid effort. If you give me time in lieu (yay! You know where I'll be spending mine!) I would do crunch time, any time. If not, go get yourself one of those developers who think software development is all there is to life. There are many of them. Regrettably this appears like some sort of rant, which it isn't. Summary: Review your current working hours. Look at your other responsibilities in life and give them appropriate attention. Do not waste your life on becoming great at only one thing in life, it's too huge a sacrifice with too small a pay-off. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17774/"
]
} |
114,846 | My understanding is that in the 1980s, and perhaps in the 1990s too, Pascal and C were pretty much head-to-head as production languages. Is the ultimate demise of Pascal only due to Borland's neglect of Delphi? Or was there more, such as C being a more robust language? If the latter, what were the perceived advantages of C over Pascal? I'm interested in historical facts and observations one can back up, rather than likes and dislikes. | Pascal lost the battle mostly because of: Verbosity (if ... then begin ... end, var A: array[0..15] of Integer). Mutually incomprehensible dialects and the official standard. Less than impressive object-oriented extensions. The most successful and practical dialect - Turbo Pascal - has never been ported to platforms other than DOS/Windows. Plus Borland never opened the sources of the compiler. Pascal's "last hope" - Delphi - was positioned by Borland as a database development platform targeted at corporate environments. This was an unfortunate marketing move (made by marketing people I suppose), because creative engineers hate both databases and corporate environments. Then the failure of Delphi for Linux, Kylix. Apple switched to C and subsequently to Objective-C, and thus it killed Pascal as an OS language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114846",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29029/"
]
} |
114,923 | Many (perhaps most?) database applications today use B-Trees and variations to store data, because this data structure optimizes the read, write and seek operations on a hard disk (and these operations in turn play an important role in the overall efficiency of the databases). Should Solid State Drives (SSDs) completely replace traditional hard disks (HDDs), though, could we say that B-Trees and variations will become obsolete, giving room for data structures that are more efficient operating on direct access memory? If so, what will those structures be? (e.g., hash tables, AVL trees) | B-Trees are most often used for database indexes on hard disk, but they have advantages even as an in-memory data structure, given the modern memory hierarchy with multiple layers of cache and with virtual memory. Even if virtual memory is on an SSD, that won't change. I use an in-memory B+-style multiway tree library that I wrote quite a lot in C++. It can have performance advantages - the reason it was originally written was to try to use cache better - but I have to admit it often doesn't work that way. The problem is the trade-off which means items have to move around within nodes on inserts and deletes, which doesn't happen for binary trees. Also, some of the low-level coding hacks I used to optimise it - well, they probably confuse and defeat the optimiser, truth told. Anyway, even if your databases are stored on an SSD, that's still a block-oriented storage device, and there's still an advantage to using B-Trees and other multiway trees. BUT about ten years ago, cache-oblivious algorithms and data structures were invented. These are oblivious to the size and structure of caches etc - they make (asymptotically) the best possible use of any memory hierarchy. B-Trees need to be "tuned" to a particular memory hierarchy to make the best use (though they work fairly well for quite a wide range of variation). Cache oblivious data structures aren't often seen in the wild yet, if at all, but in time they may well make the usual in-memory binary trees obsolete. And they may also prove worthwhile for hard disks and SSDs as well, since they don't care what the cluster-size or hard-disk cache page size is. Van Emde Boas layout is very important in cache-oblivious data structures. The MIT OpenCourseware algorithms course includes some coverage of cache oblivious data structures. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/114923",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34340/"
]
} |
115,115 | I'm new to the field of programming. I really enjoy it as a career, but I'm not sure I can handle sitting at a desk for eight hours a day. I don't mind it for short stretches of time of course, but I can't do it day in and day out. Is there a field of programming that possibly has jobs that require less time spent at the desk? | You could go into teaching programming. Most of your time would be at the front of the room lecturing. I am not sure how much actual programming would still be involved. Probably as much as you wanted, depending on the style you choose to teach with. More hands-on demonstration rather than just lecturing on theory. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115115",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38979/"
]
} |
115,136 | We currently have the following stack: VS 2005, Web Forms, SQL Server 2005, IIS 6. We are planning on transitioning to this: VS 2010, MVC and Web Forms, SQL Server 2008, IIS 7. My question is, when we move to MVC with VS 2010, should we use Entity Framework (or another ORM), a micro ORM (like Massive), or just plain SQL? All the tutorials I've read about VS 2010 are all geared towards using Entity Framework for data transactions, but is that going to be around for the foreseeable future (5+ years)? If it matters, our client's applications can have anywhere from 10 - 1,000 active users. | I recently switched from using in-line SQL queries to using EF and here's what I've found: Pros: Much faster to build the DAL (love not writing the SQL queries!). Much easier to maintain. No longer need to remember to parse my input before building an in-line SQL statement, which means less chance of a SQL injection attack (of course, it's still possible depending on your queries, but much less likely). Cons: Cannot span multiple databases... at least not easily. All entities (tables, views, etc.) need a primary key. If you want to update a single column in a table with 100+ required columns (not my table design), you have to pull down all 100 columns to make the update. Or use a Stored Procedure. I've had issues with some default values defined on SQL Server not getting pulled into the entity model after a new record gets added. Usually this is with computed values, or values that get added in an INSERT trigger. On occasion, the SQL queries get badly written and are slow to execute. If you have a slow-running query, run a SQL trace to see what EF is doing. It's possible you can re-work that query as a SP or View. This doesn't happen that often though. I've had a few issues with trying to create an association between tables that do not have a Foreign Key defined in SQL Server. Usually it's because I'm trying to create a 1:0-1 relationship where EF wants to use a 1:0-*. I'm no EF expert though, so I probably missed some things. These are just the items I know I've encountered in the past when switching from inline SQL to Entity Framework. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115136",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24713/"
]
} |
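To make the SQL-injection point above concrete, a hedged C# sketch (the context and entity names are invented): the hand-built string is the risky pattern, while the LINQ-to-Entities query is turned into a parameterised command by EF.
// Risky: user input concatenated straight into the SQL text
string sql = "SELECT * FROM Cities WHERE Name = '" + userInput + "'";

// With EF, the predicate is translated into a parameterised query
var cities = context.Cities
                    .Where(c => c.Name == userInput)
                    .ToList();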
115,163 | I am very familiar with the concept of object pooling and I always try to use it as much as possible. Additionally, I always thought that object pooling is the standard norm, as I have observed that Java itself as well as the other frameworks use pooling as much as possible. Recently though I read something that was completely new (and counter-intuitive?) to me: that pooling actually makes program performance worse, especially in concurrent applications, and it is advisable to instantiate new objects instead, since in newer JVMs instantiation of an object is really fast. I read this in the book Java Concurrency in Practice. Now I am starting to wonder if I am misunderstanding something here, since the first part of the book advised using Executors that reuse Threads instead of creating new instances. So has object pooling become deprecated nowadays? | It is deprecated as a general technique, because - as you noticed - creation and destruction of short lived objects per se (i.e. memory allocation and GC) is extremely cheap in modern JVMs. So using a hand-written object pool for your run-of-the-mill objects is most likely slower, more complicated and more error-prone than plain new.* It still has its uses though, for special objects whose creation is relatively costly, like DB / network connections, threads etc. * Once I had to improve the performance of a crawling Java app. Investigation uncovered an attempt to use an object pool to allocate millions of objects... and the clever guy who wrote it used a single global lock to make it thread safe. Replacing the pool with plain new made the app 30 times faster. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115163",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10326/"
]
} |
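A short Java illustration of the split the answer recommends: plain new for cheap, short-lived objects, and pooling only for genuinely expensive resources such as threads (here via the standard Executor framework the question mentions).
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PoolingExample {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newFixedThreadPool(4); // threads are worth reusing
        for (int i = 0; i < 1000; i++) {
            final int taskId = i;
            pool.submit(() -> {
                // Cheap short-lived object: just allocate it and let the GC collect it
                StringBuilder sb = new StringBuilder();
                sb.append("task ").append(taskId);
                System.out.println(sb);
            });
        }
        pool.shutdown();
    }
}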
115,252 | I hear about C, C++, Java every day whenever people start talking about computer science, but in my first computer science class we are asked to write in Scheme (DrRacket). Why is that? What differences will this make to my future understanding of programming? UPDATE: I have finished my first term, but am not completely done with Scheme. In my second term (which is now) we got into C programming. It was frustrating to learn pointers at first, but now I feel much better. There's not much more to say than that. I'm trying to teach myself Java (or C++?) for the OOP part which I'm missing. So far, I still like functional programming best. Lambda is just fascinating. :) | Sounds like a great school! Lisp dialects follow the mathematical paradigm of algorithms much more closely. They force programmers to learn recursion and the functional style. This is excellent experience. Your school is in the ranks with MIT, which still uses Abelson and Sussman for the required CS 6.001. You might find this article encouraging and helpful in understanding the issue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115252",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38936/"
]
} |
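A tiny example of the recursive, functional style such courses emphasise, in the Scheme/Racket dialect the class uses:
; Sum a list by recursion instead of mutating a loop counter.
(define (sum lst)
  (if (null? lst)
      0
      (+ (car lst) (sum (cdr lst)))))

(sum '(1 2 3 4 5)) ; => 15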
115,269 | Ok, so here is my problem: I work for a big company, somehow landed a job (frankly because the interview was easy). It's not that I don't know my stuff; I am pretty good at understanding Java, its libraries, etc. But whenever I try to solve some logic problem, I find it really hard to come up with a solution. For example, conversion of decimal to Roman numerals: when I saw the solution, I found that it's a simple problem. But I was not able to implement it after 1-2 hours of trying! I feel I am dumb and not worthy of being a software engineer. Puzzle-solving abilities should come naturally to a great programmer. But when I try to solve some puzzles, I am not able to find a solution and I just google it up!... and I hate that! When given a problem (like implement xyz feature) at work, I am pretty quick at it and am respected at my workplace for that, but I am not at all proud about it. Because when I try to solve any mathematically or logic-wise challenging problem, I fumble. I still feel I love what I am doing (as an engineer) but feel really sad that I am not able to solve some tough logic problems which my friends come up with. I feel demoralized :( TL;DR: I understand stuff from a practical viewpoint (implementing
features in our product) but when trying to work on problems from, say,
Project Euler, I SUCK badly! And I need to sharpen my brain! So, my questions are: How should I go about fixing it? Should I start with solving (and forcing myself to solve) Project Euler problems? Even if it takes hours for me to solve some basic problems? Or should I go back to basics and study some basic math? I don't really find puzzle solving fun. But I want to make it fun for myself! And I think if I understand them in a better manner, I will like it! PS: I was never educated in CS (my undergrad was electrical). But that's not an excuse to be a sucky developer. Thanks! | First, it's wonderful that you see this as a weakness in your skills. You actually know where you need to improve, which makes for a far easier time doing so, and indicates that you are better than you think. I believe your primary problem, which I've seen many times before, is that you don't have a "problem solving toolset". When faced with a problem, what do you do? How do you go about solving it? I'm slow at math, but because I know how to use the little tools of math together, I aced calculus. So, besides just working on problem solving, you need to look at what tools and skills you bring to the table when you do so. If you were going to work on a new feature at work, do you just sit down with no IDE, no debugger, no documentation, no internet, and no source code? Of course not! There are tools for solving problems, you just don't know them... yet. The Wikipedia article has some links to problem solving techniques. But the most important tool is the scientific method. The Wikipedia article includes a pragmatic approach: define a question; gather information and resources (observe); form an explanatory hypothesis; test the hypothesis by performing an experiment and collecting data
in a reproducible manner; analyze the data; interpret the data and draw conclusions that serve as a starting
point for a new hypothesis; publish results; retest (frequently done by other scientists). All problems can be solved this way! Many people don't go through these steps though. It's like a developer who refuses to test his code. At all. He'll have problems figuring out that bugs even exist. Finally, the other primary tool for problem solving is to break it down into simple steps. For example, the first problem in Project Euler's examples: If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23.
Find the sum of all the multiples of 3 or 5 below 1000. We have two facts, and a problem here. Fact one shows us how to define a multiple of three or five below 10. 3*1,3*2,3*3,5*1 are all valid. 5*2 is not because it equals 10. Then, fact two tells us that we add them together to get 23. So we already have a method of finding values, and we can add them together to get our sum. Of course, we can look at the facts and apply a simple reverse of the order. 3, 5, 6 and 9 are multiples of 3 or 5. That is 3 % 3, 5 % 5, 6 % 3, 9 % 3 all give a mod of zero. So another approach would be to go through 999 to 1 and modulus each number with 3 and 5. Collect the list of values and add them together. I would suggest The art of unix programming as an excellent example of using small tools in the programming world. Chaining them together allows you to solve very complex problems, and I can't count the number of times these concepts have helped me. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115269",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39025/"
]
} |
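A minimal Java sketch of the Project Euler example walked through in the answer above (summing the multiples of 3 or 5 below 1000 with the modulus check). The class and method names are invented for the illustration; it is not part of the original answer.

// Hypothetical illustration of the "check each number with modulus" approach described above.
public class MultiplesOfThreeOrFive {
    // Sums every natural number below the limit that is divisible by 3 or 5.
    static long sumOfMultiples(int limit) {
        long sum = 0;
        for (int n = 1; n < limit; n++) {
            if (n % 3 == 0 || n % 5 == 0) {
                sum += n;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfMultiples(10));   // 23, matching the worked example in the answer
        System.out.println(sumOfMultiples(1000)); // the actual Project Euler problem
    }
}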
115,389 | I have been reviewing several resumes we have for a new position. I noticed that a few of them had many old programming language versions and old applications on their resume (e.g. SQL 4.2, VB5, Lotus 123, Novell). This left their list of computer experience very long. Do you keep it fresh? Do you show your depth of experience even though you will never use that technology again? When should you drop old technology on your resume? Does keeping old technology on your resume do any harm in getting hired? | I drop old technologies from the "technologies" section of my resume when I am no longer interested in working with them, or when they aren't being used anymore. I don't think long lists of technologies do anyone any favors. I think technical depth is best illustrated through your work experience, where you can mention older technologies if you like. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115389",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38562/"
]
} |
115,406 | I've read a few times that when storing passwords, it's good practice to 'double hash' the strings (eg. with md5 then sha1, both with salts, obviously). I guess the first question is, "is this actually correct?" If not, then please, dismiss the rest of this question :) The reason I ask is that on the face of it, I would say that this makes sense. However, when I think about it, every time a hash is rehashed (possibly with something added to it) all I can see is that there is a reduction in the upper bound on the final 'uniqueness'... that bound being related to the initial input. Let me put it another way: we have x number of strings that, when hashed, are reduced to y possible strings. That is to say, there are collisions in the first set. Now coming from the second set to the third, is it not possible for the same thing to occur (ie. collisions in the set of all possible 'y' strings that result in the same hash in the third set)? In my head, all I see is a 'funnel' for each hash function call, 'funneling' an infinite set of possibilities into a finite set and so on, but obviously each call is working on the finite set before it, giving us a set no larger than the input. Maybe an example will explain my ramblings?
Take 'hash_function_a' that will give 'a' and 'b' the hash '1', and will give 'c' and 'd' the hash '2'. Using this function to store passwords, even if the password is 'a', I could use the password 'b'. Take 'hash_function_b' that will give '1' and '2' the hash '3'. If I were to use it as a 'secondary hash' after 'hash_function_a' then even if the password is 'a' I could use 'b', 'c' or 'd'. On top of all of that, I get that salts should be used, but they don't really change the fact that each time we are mapping 'x' inputs to 'less than x' outputs. I don't think. Can someone please explain to me what it is that I am missing here? Thanks! EDIT: for what it's worth, I don't do this myself, I use bcrypt. And I'm not really concerned about whether or not it's useful for 'using up cycles' for a 'hacker'. I genuinely am just wondering whether or not the process reduces 'security' from a hash collision stand point. | This is more suited on security.stackexchange but... The problem with hash1(hash2(hash3(...hashn(pass+salt)+salt)+salt)...)+salt) is that this is only as strong as the weakest hash function in the chain. For example if hashn (the innermost hash) gives a collision, the entire hash chain will give a collision ( irrespective of what other hashes are in the chain ). A stronger chain would be hash1(hash2(hash3(...hashn(pass + salt) + pass + salt) + pass + salt)...) + pass + salt) Here we avoid the early collision problem and we essentially generate a salt that depends on the password for the final hash. And if one step in the chain collides it doesn't matter because in the next step the password is used again and should give a different result for different passwords. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115406",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19222/"
]
} |
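To make the chained construction in the answer above concrete, here is a rough Java sketch contrasting a naive hash-of-a-hash chain with a chain that mixes the password and salt back in at every step. This is purely illustrative (the helper names are invented, and SHA-256 merely stands in for "some hash in the chain"); as the question itself notes, real password storage should use a purpose-built scheme such as bcrypt rather than a hand-rolled chain.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class HashChainSketch {

    // One hash step; SHA-256 stands in for "some hash function in the chain".
    static byte[] hash(byte[] input) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(input);
    }

    // Naive chain: hash(hash(hash(pass + salt))). A collision at any inner step
    // forces a collision for the rest of the chain.
    static byte[] naiveChain(String pass, String salt, int rounds) throws NoSuchAlgorithmException {
        byte[] current = (pass + salt).getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < rounds; i++) {
            current = hash(current);
        }
        return current;
    }

    // Stronger chain from the answer: feed pass + salt back in at every round,
    // so an inner collision no longer forces the final values to collide.
    static byte[] remixedChain(String pass, String salt, int rounds) throws NoSuchAlgorithmException {
        byte[] current = (pass + salt).getBytes(StandardCharsets.UTF_8);
        for (int i = 0; i < rounds; i++) {
            byte[] remix = concat(current, (pass + salt).getBytes(StandardCharsets.UTF_8));
            current = hash(remix);
        }
        return current;
    }

    // Small helper to concatenate two byte arrays.
    static byte[] concat(byte[] a, byte[] b) {
        byte[] out = new byte[a.length + b.length];
        System.arraycopy(a, 0, out, 0, a.length);
        System.arraycopy(b, 0, out, a.length, b.length);
        return out;
    }
}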
115,418 | Are there any object-oriented programming languages that are not based on the class paradigm? | As far as I know, Self is the original language that invented the "class-free" paradigm based on prototypes . It already existed (in an experimental stage) in the 1980s and pushes Smalltalk 's elegant usage of the prototype pattern to the extreme, such that classes are completely eliminated. It influenced all the other "class-free" OO languages I know of: most prominently Javascript, the classical programming language and environment Squeak (which is built on top of Smalltalk) the multi-paradigm script language Lua . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115418",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4942/"
]
} |
115,474 | Concurrent programming is quite difficult to me: even looking at a basic slide seems challenging to me. It seems so abstract. What are the benefits to knowing Concurrent programming concepts well? Will it help me in regular, sequential programming? I know there is a satisfaction to understanding how our programs work, but what else? | Here's a quick and easy motivation: If you want to code for anything but the smallest, weakest systems, you will be writing concurrent code. Want to write for the cloud? Compute instances in the cloud are small. You don't get big ones, you get lots of small ones. Suddenly your little web app is a concurrent app. If you designed it well, you can just toss in more servers as you gain customers. Else you have to learn how while your instance has its load average pegged. OK, you want to write desktop apps? Everything has a dual-or-more-core-CPU. Except the least expensive machines. And people with the least expensive machines probably aren't going to fork over for your expensive software, are they? Maybe you want to do mobile development? Hey, the iPhone 4S has a dual-core CPU. The rest won't be far behind. Video games? Xbox 360 is a multi-CPU system, and Sony's PS3 is essentially a multi-core system. You just can't get away from concurrent programming unless you are working on tiny, simple problems. 2016 update : The current iteration of the $35 Raspberry Pi is built around a quad-core system on a chip intended for cell phones. Dramatic advances in AI have been made in part due to the availability of high-end graphics cards as parallel compute engines. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115474",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29032/"
]
} |
115,520 | Should I reuse variables? I know that many best practices say you should not do it; however, later, when a different developer is debugging the code and has 3 variables that look alike, where the only difference is that they are created in different places in the code, he might be confused. Unit testing is a great example of this. However, I do know that best practices are most of the time against it.
For example, they say not to "override" method parameters. Best practices are even against nulling out previous variables (in Java, Sonar gives a warning when you assign null to a variable, because since Java 6 you have not needed to do that to help the garbage collector. You can't always control which warnings are turned off; most of the time the default is on.) | Your problem appears only when your methods are long and are doing multiple tasks in a sequence. This makes the code harder to understand (and thus maintain) per se. Reusing variables adds an extra element of risk on top of this, making the code even harder to follow and more error prone. IMO best practice is to use short enough methods which do one thing only, eliminating the whole problem (see the sketch after this entry for a before/after illustration). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115520",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/929/"
]
} |
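A small, invented Java illustration of the advice in the answer above: instead of reusing one variable across two tasks inside a long method, split the work into short methods so each value has a single, clearly-scoped purpose.

// Before: one method doing two tasks in sequence, tempting the author to reuse 'total'.
int reportTotals(int[] prices, int[] taxes) {
    int total = 0;
    for (int p : prices) total += p;
    int priceTotal = total;
    total = 0;                       // variable reused for a second, unrelated task
    for (int t : taxes) total += t;
    return priceTotal + total;
}

// After: two short methods that each do one thing; nothing is reused, nothing to confuse a reader.
int reportTotalsSplit(int[] prices, int[] taxes) {
    return sum(prices) + sum(taxes);
}

int sum(int[] values) {
    int total = 0;
    for (int v : values) total += v;
    return total;
}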
115,587 | According to Wikipedia, A software bug is the common term used to describe an error, flaw, mistake, failure, or fault in a computer program or system that
produces an incorrect or unexpected result, or causes it to behave in
unintended ways. Recently I've found a "bug" in StarCraft 2 which produces an unexpected result: http://eu.battle.net/sc2/en/forum/topic/2868627470 The problem is that if I keep StarCraft 2 minimized for a long time, the game does not disconnect or generate any form of timeout. It does disconnect, however, after the first battle, and sometimes also loses game data (match statistics). Unfortunately, according to Blizzard:
StarCraft II is not meant to be minimized for hours. So, is my "bug" really a bug? | To a software team, a bug is a software problem that needs to be fixed. Not all software problems need to be fixed. Updating software is expensive. Blizzard is telling you that your problem is an edge case. In other words, the edge case problem you discovered is not necessarily something they tested for or otherwise care to account for. Fixing the problem will help you, but in all likelihood it will not help many others. Yet, the cost to fix the bug could be high. Instead, they can invest their resources into new features or even finish Diablo III. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115587",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39157/"
]
} |
115,690 | Studying some Android classes, I realized that most of the local variables in methods
are declared as final. Example code taken from the class android.widget.ListView: /**
* @return Whether the list needs to show the top fading edge
*/
private boolean showingTopFadingEdge() {
final int listTop = mScrollY + mListPadding.top;
return (mFirstPosition > 0) || (getChildAt(0).getTop() > listTop);
}
/**
* @return Whether the list needs to show the bottom fading edge
*/
private boolean showingBottomFadingEdge() {
final int childCount = getChildCount();
final int bottomOfBottomChild = getChildAt(childCount - 1).getBottom();
final int lastVisiblePosition = mFirstPosition + childCount - 1;
final int listBottom = mScrollY + getHeight() - mListPadding.bottom;
return (lastVisiblePosition < mItemCount - 1)
|| (bottomOfBottomChild < listBottom);
} What is the intention of using the final keyword in these cases? | I would say that this is due to force of habit . The programmer that wrote this code knew as he was writing it that the values for the final variables should never be changed after assignment, and so made them final. Any attempt to assign a new value to a final variable after assignment will result in a compiler error. As habits go, it's not a bad one to develop. At the least, making a variable final specifies the intent of the programmer at the time of writing. This is important as it might give subsequent programmers who edit the code pause for thought before they start changing how that variable is used. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115690",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39186/"
]
} |
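A tiny sketch of what the answer above means by "any attempt to assign a new value to a final variable after assignment will result in a compiler error". The variable names and values here are made up for illustration.

private boolean example() {
    final int listTop = 10;
    // listTop = 20;           // would not compile: cannot assign a value to final variable listTop
    int childCount = 3;
    childCount = 4;            // fine: a non-final local variable can be reassigned
    return childCount > listTop;
}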
115,704 | I wonder why weakly-typed languages are still being actively developed. For example, what benefit can one draw from being able to write $someVar = 1;
(...) // Some piece of code
$someVar = 'SomeText'; instead of using the much different, strongly-typed version int someInt = 1;
(...)
string SomeString = 'SomeText'; It is true that you need to declare an additional variable in the second example, but does that really hurt? Shouldn't all languages strive to be strongly-typed since it enforces type-safety at compile time, thus avoiding some pitfalls in type-casting? | Strong / weak typing and static / dynamic typing are orthogonal. Strong / weak is about whether the type of a value matters, functionally speaking. In a weakly-typed language, you can take two strings that happen to be filled with digits and perform integer addition on them; in a strongly-typed language, this is an error (unless you cast or convert the values to the correct types first). Strong / weak typing is not a black-and-white thing; most languages are neither 100% strict nor 100% weak. Static / dynamic typing is about whether types bind to values or to identifiers. In a dynamically-typed language, you can assign any value to any variable, regardless of type; static typing defines a type for every identifier, and assigning from a different type is either an error, or it results in an implicit cast. Some languages take a hybrid approach, allowing for statically declared types as well as untyped identifiers ('variant'). There is also type inference, a mechanism where static typing is possible without explicitly declaring the type of everything, by having the compiler figure out the types (Haskell uses this extensively, C# exposes it through the var keyword). Weak dynamic programming allows for a pragmatic approach; the language doesn't get in your way most of the time, but it won't step in when you're shooting yourself in the foot either. Strong static typing, by contrast, pushes the programmer to express certain expectations about values explicitly in the code, in a way that allows the compiler or interpreter to detect a class of errors. With a good type system, a programmer can define exactly what can and cannot be done to a value, and if, by accident, someone tries something undesired, the type system can often prevent it and show exactly where and why things go wrong. (A small illustrative Java sketch follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115704",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38684/"
]
} |
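To ground the distinctions in the answer above, here are a few invented Java lines showing static typing catching a type error at compile time, local type inference via var (available since Java 10), and the kind of mixed-type surprise that can still occur even in a strongly-typed language.

public class TypingSketch {
    public static void main(String[] args) {
        int someInt = 1;             // statically typed: the identifier is bound to int
        // someInt = "SomeText";     // would not compile: incompatible types

        var inferred = "SomeText";   // type inference: inferred is still statically a String
        // inferred = 42;            // would not compile either

        // Java is strongly typed, but '+' on a String and an int is defined as
        // concatenation, which can still surprise people:
        System.out.println("1" + 2);                   // prints "12", not 3
        System.out.println(Integer.parseInt("1") + 2); // explicit conversion: prints 3
    }
}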
115,839 | I'm the lead designer in our team, which means I'm responsible for the quality of the code; functionality, maintainability and readability. How clean should I require my team members' code to be if we are not short on time? In my view, we should clean up old code we modify; adding a line to a method means you clean up that method. But what about new code? We could make it sparkling clean, so that if another coder comes along tomorrow and makes a small modification, she doesn't have to clean it up at all. But that means if no-one ever reads that piece of code again, we've wasted time making it sparkling clean. Should we aim for "almost clean" and then clean it up further on future visits? But that would mean not getting the full value for the understanding we had when we wrote it in the first place. Right now, I'm going for "sparkling clean"; partly as a tutorial for my colleagues who are not as picky as I. | My experience: A piece of code longer than 20 lines, written as a proof-of-concept, will not get re-written when the concept thus proved is needed, but rather bent to somewhat fit the needs of the production code . Usually, this goes with the promise to re-write it later, when there is time to do it all properly. However, this time never comes, so this piece of test code will be stuck in production code forever . Any software project larger than, say, 2 man months stands on the shoulders of at least one piece of such proof-of-concept toying-with-the-idea code, big projects are built on many of them. And nobody is going to replace these pieces of code which usually are underlying vital features of the application, threatening to bring it all down on a single error. Too many big projects sooner or later trip over such code . My solution: Anything over 20 lines of code has to be rock-solid and bullet-proof. Properly design the whole thing, do not use hacks to save time, put effort into picking good identifiers, refactor as soon as you see the need,... The whole enchilada. While 80% of these snippets are indeed thrown way, the rest will end up at the heart of some code that keeps the company afloat a decade down the road. That prospect excuses any amount of effort poured into a small, seemingly one-off, program. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115839",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8494/"
]
} |
115,851 | I know it may have been asked before, but here goes nothing... Is Perl still something that would be considered useful? If someone was a new programmer (either completely new to programming or just a few month/years of experience) would Perl be something to be considered worthwhile to learn? Is Perl still used with frequency? Is it still popular? Or is Perl dying out compared to languages like Python, Ruby, PHP, ASP.NET, etc.? Basically it boils down to this: Is it still used/is it still used frequently? If yes, is it dying? If no, will it make a come back? Is it something that would be worth learning? How does it compare in demand to languages like Python in both popularity and usability/viability? Could languages like Python or Ruby be considered replacements for Perl? Also, will newer versions of Perl really bring a large improvement to the Perl community, and perhaps bring Perl back to centerstage compared to other languages? EDIT: Okay, I suppose here's a better, reworded question: Is Perl still growing, or is it "dying"? Is it still a language worth learning and using? What projects does it really "shine" in compared to other languages? What makes Perl a language to choose? Essentially: is Perl growing obsolete compared to other languages, and if so, do you expect that to change, or to continue? And thank you to everyone who has answered so far, the discussion has been really interesting! | First of all, it's always better to disambiguate . Businesses talk about Perl 5 when talking Perl, but on a far-far
land, beyond deep-thinking island , the design-by-committee tribe is
still cooking a hefty slab of Perl 6 (and it's almost ready, with an
engine written in Haskell and powered by the tears of the gods ) Ok, that said, what is Perl 5 used for, today? legacy web systems / intrawebs - some just won't die data mining / statistical analysis - the perl regex engine, even if slightly outdated , ( PCRE , a spun-off library, tops it up in any possible way and it's the default PHP engine) is still good for simple analysis UNIX system administration - Perl shall always be installed on UNIX. You can count on it being readily available even on Mac OS X. network prototyping - many core network experts learned Perl when it was all the
rage; and they still do their proofs-of-concept with it. security - many security experts, too, need fast prototyping . (and fast automated fixes) Perl can, and does, cover for that. The extensive CPAN collection is very handy , when dealing with prototypes. (Batteries may not be included, but they're still right there, on the shelf ) Remember drawbacks , though: Object support in Perl sucks hard , you bless references and do unholy stuff in the name of objects, then wonder why you took all the trouble in the first place. Reading other people's Perl is more than a craft, it's science , and a painful one, too. Perl is nifty, it makes you think nifty, it makes you feel nifty, you become a programming rockstar . Now, think about getting up, and going to work in a office full of rockstars : it's a "boat that rocks" hard. Expect wild fluctuations. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115851",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39255/"
]
} |
115,903 | A professional full-time programmer can do a great job by continuously learning from their work. How can an amateur programmer train to become a good programmer? ** If you like to play music or sing, you can do it because it is your hobby and you are interested, and you can become a good singer or music player. But you do not need to become a professional singer or do singing for a living. Is this also true for programmers ? Any amateur programmer who is famous? | The road to become good at programming is the same as for singing or playing music: practice, practice, practice. If you spend enough time regularly developing software for several years, chances are you will become good at it - be it inside or outside working hours. Now, apart from spending more time practicing, there is another reason why professionals usually become better than amateurs in a certain sense (in music as well as in programming). If you are a professional, you have to do tasks which you don't necessarily like, but belong to the wider job of developing software (e.g. testing, discussions with customers, writing documentation, setting up dev/build environment, writing build scripts etc). And every now and then you are also pressed to step into unfamiliar areas, to learn new languages or platforms. As an amateur, you aren't forced to do anything you don't want to, which makes it likely that you stay within your comfort zone for most of your life. In other words, you can easily become limited to one or a few specific areas you are most fond of, and miss a lot of opportunities to learn and grow. OTOH many professional developers fall into this as well, staying at the same company doing the same routine job hardly learning anything new for decades... So the key to become better is your attitude. If you keep learning, and consciously look for opportunities to move out of your comfort zone into new, unfamiliar territory, you will eventually outperform those swarms of slowly fossilizing "professionals". A good way to this may be contributing to some open source projects. A recommended reading is The Pragmatic Programmer: From Journeyman to Master , with lots of great and very practical advice on how to keep becoming better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/115903",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37859/"
]
} |
116,005 | Alright I'm new to programming and I admit this is a fairly abstract question. The natural language we speak every day exist because people can understand each other. How can computers understand my code written in a certain language? Let's say Mr. A creates a new language. How is that accepted by machines? Must the creator communicate with the machine using machine language to create a new language? What guarantees that we can written in a language while getting understood by the machine properly? | You can sum up pretty much the entire answer to your set of questions with the word "compiler" . A compiler is a special program whose function is to take source code as input, apply language rules determined by the language designer in order to figure out what the code means, and produce code with the same meaning in another language as output. This is generally machine code or some form of bytecode (the "machine code" for virtual machines), although specialized compilers that translate code into other high-level languages do exist. They're beyond the scope of this question, though. Not all languages have a compiler. Some of them have an interpreter instead, which does all the same things a compiler does, except that instead of producing machine code after determining what the program means, it simply executes the program immediately. But the basic principles of parsing (reading) the code and determining what it means are the same. Answering any more in-depth than this would get into compiler theory, which is a very broad subject. If you're interested in the topic, you should start by reading the Wikipedia article for "compiler" and checking the links from it, and if you have specific questions, feel free to ask them on here. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116005",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38936/"
]
} |
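The answer above describes a compiler or interpreter as something that reads source text, applies language rules to work out what it means, and then translates or executes it. A deliberately tiny, invented Java interpreter for a "language" of integers joined by '+' shows those stages in miniature; it is only a teaching toy, not how real compilers are structured.

public class TinyInterpreter {
    // Interprets a 'language' of integers joined by '+', e.g. "1+2+3".
    static int run(String source) {
        String[] tokens = source.split("\\+");    // 1) read the source and break it into tokens
        int result = 0;
        for (String token : tokens) {             // 2) apply the language rule: '+' means add
            result += Integer.parseInt(token.trim());
        }
        return result;                            // 3) execute the meaning and produce a value
    }

    public static void main(String[] args) {
        System.out.println(run("1+2+3")); // prints 6
    }
}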
116,089 | How do I unit test a web forms site? It seems to me that as so much of it depends on state and user input it wouldn't be feasible. If it's not feasible is there a valid automated alternative? | Yes, you can. You just have to be careful to separate your concerns well. In short, you have to remove all your logic from the code-behind and put it into other classes. There are two common ways to do this. The simple way is to rethink all of your event handlers in terms of "What information does the system give me? What information do I need to populate on the page?" and then provide a service class which does that conversion. In this case, the service layer should know very little about the nature of your presentation layer. You still have to take the data returned from the service and populate the correct components of the WebForm in your code-behind and this remains untested (at least by unit tests, you can still employ integration tests). But this is rarely where code goes wrong, it is much more likely to fail in the logic. A more complicated, but more effective, way is to use the Model View Presenter pattern . When we tried that, we found that the Presenters quickly became very coupled to the framework and, the more we developed MVP, the more clear it was that MVP really wanted to be MVC but couldn't be. That said, others have done this very successfully - there is even a webformsmvp framework available to remove the heavy lifting - so your mileage may vary. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116089",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
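The answer above is about ASP.NET WebForms, but the underlying move (pull the logic out of the UI event handler into a class that knows nothing about the page, then unit test that class) is language-agnostic. Below is a rough Java sketch of the same idea, with invented names; it is not the WebForms MVP framework mentioned in the answer.

// The service knows nothing about the presentation layer; it only answers
// "given this input, what text should be shown?"
class OrderSummaryService {
    String summaryFor(int itemCount, double total) {
        return itemCount + " item(s), total " + String.format(java.util.Locale.ROOT, "%.2f", total);
    }
}

// The UI handler (code-behind, controller, listener...) stays thin:
// it only moves data between the widgets and the service.
class OrderPage {
    private final OrderSummaryService service = new OrderSummaryService();

    void onSubmitClicked(int itemCount, double total) {
        String text = service.summaryFor(itemCount, total);
        // summaryLabel.setText(text);   // the only line left that unit tests cannot reach
    }
}

// The logic is now trivially testable without any web framework
// (run with -ea, or use a proper test framework instead of assert).
class OrderSummaryServiceCheck {
    void summaryMentionsItemCountAndTotal() {
        String s = new OrderSummaryService().summaryFor(2, 10.0);
        assert s.equals("2 item(s), total 10.00");
    }
}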
116,093 | Please excuse the artwork, I know it looks like an elongated imperial shuttle, but it's the way I visualise a multiple team (or multiple stream) version control strategy. The central core is the trunk (or master) repository. If there were lots more streams you can imagine more planes projecting out from the trunk, in which various branching and merging is taking place within those streams (the thinner lines). The red line shows Alice from accounting wanting to take a piece of code that Bob from the backups team has only just forged (she needs it urgently for some reason that I can't think of). Actually, it's not like a one-line change that she can just do herself. Instead you can consider that accounting are heavily blocked and need a large change set from Backups to continue. Now as I understand it, this is against best-practice, and the changes required should go through the trunk, and Alice should pick them up that way, rather than take the short cut illustrated (a version control wormhole). However, although I have seen this stated in various places, I have never seen some deep, lengthy and well-justified reasons why a cross-stream (or circular?) merge is something not to allow. The only thing I can think of is that I guess it can lead to greater manual conflict resolution. So my questions are: Why is allowing a cross-stream merge bad? (I need some well-defined
reasons to convince my various-minded colleagues why it shouldn't be
done) Are there any justifiable reasons to go against best-practice? What if Alice's team is reaching a release deadline, can there be an
exception? | Yes, you can. You just have to be careful to separate your concerns well. In short, you have to remove all your logic from the code-behind and put it into other classes. There are two common ways to do this. The simple way is to rethink all of your event handlers in terms of "What information does the system give me? What information do I need to populate on the page?" and then provide a service class which does that conversion. In this case, the service layer should know very little about the nature of your presentation layer. You still have to take the data returned from the service and populate the correct components of the WebForm in your code-behind and this remains untested (at least by unit tests, you can still employ integration tests). But this is rarely where code goes wrong, it is much more likely to fail in the logic. A more complicated, but more effective, way is to use the Model View Presenter pattern . When we tried that, we found that the Presenters quickly became very coupled to the framework and, the more we developed MVP, the more clear it was that MVP really wanted to be MVC but couldn't be. That said, others have done this very successfully - there is even a webformsmvp framework available to remove the heavy lifting - so your mileage may vary. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116093",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39362/"
]
} |
116,101 | I have a confession to make: Formalized automated testing was never a part of my programming background. I now work in a very large company with many developers (most of them web developers of one sort or another), and it's apparent that most of them don't test* either. (*I'm not going to keep saying formally ; please infer it.) If I wait to have the support of my organization to begin testing it will never happen. If I try to "change things from the inside" by pushing testing at management I will run out of steam before change happens. I need to start testing now. But with TDD and its ilk I'm going to end up with lots of testing code right along with the production code. Our version control systems (all centralized) aren't organized for storing testing code. I'll have to find a place for all that on my workstation. Is it possible to begin a personal practice of software testing in a culture that doesn't value or provide the tools for it? What techniques and tools do you use to enable you to test when the official tools and organization don't have a place for tests, frameworks and automations? | I've personally done this with considerable success. The key factors for success: Get (tentative) management support. The advantages of automates tests are well-documented and should convince any manager to at least try it. That includes finding a spot in the VCS and a build server, because Automated tests only provide their full value if they are run frequently and automatically so that you know about problems soon and don't have to rely on people not forgetting to run them. You need a build server that runs them at least daily. This can be an old workstation. Jenkins takes very little work to get running. Lead by example. Write tests, talk about the benefit they're providing to you, and when they reveal errors introduced by other developers talk about it in terms of how they were protected from potentially much greater embarrassment. Go for the low-hanging fruit. Some parts of the application will be hard to test, others easy. Some will be robust, others brittle. Writing tests for brittle, but easy to test parts provides the most value in the shortest time. See if you can write reusable tests, e.g. that test conventions or features that all modules (web pages, REST services, whatever) must have but which is often forgotten. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/116101",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19033/"
]
} |
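Point 4 of the answer above mentions reusable tests that check conventions every module must follow. A small, invented JUnit 5-style sketch of that idea is below; the module registry and the naming rule are made up, and a real project would enumerate its modules from configuration or reflection rather than a hard-coded list.

import static org.junit.jupiter.api.Assertions.assertTrue;
import java.util.List;
import org.junit.jupiter.api.Test;

class ModuleConventionTest {
    // Stand-in for however your project enumerates its modules/pages/services.
    private final List<String> moduleNames = List.of("Billing", "Reporting", "AdminConsole");

    @Test
    void everyModuleFollowsTheNamingConvention() {
        for (String name : moduleNames) {
            assertTrue(Character.isUpperCase(name.charAt(0)),
                    "Module name should start with an upper-case letter: " + name);
        }
    }
}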