source_id | question | response | metadata
---|---|---|---|
96,933 | In recent years, the hype around Git has grown greatly. Everybody knows about Git; nobody knows about the alternatives. Others like Mercurial seem to go unnoticed. Both were released in 2005, and they provide similar functionality. Moreover, Mercurial is generally considered to be easier to use, more intuitive, and for a long time it had better UIs. Therefore, it could be assumed that it would be a popular alternative, especially for those new to distributed version control. Yet it seems unknown to most people, unlike Git, which succeeded pretty well. The point of this post is to try to understand this phenomenon better. How come Git gets the whole cake? Did they somehow use better marketing? Is it because its community is more ...ahem ..."verbose"? Is it because of the "Linus" name? Is it because of its geeky image? What's your opinion? | Linus Torvalds Linus is a big advocate of Git and promoted it heavily to the core Linux group for years, and it's grown from there. I daresay it's entirely due to Linus's influence over the *nix community. Personally I still use Subversion, but that's from preference rather than utility. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/96933",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19417/"
]
} |
96,947 | I know the syntax and rules that apply to an abstract class, and I want to know the usage of an abstract class. An abstract class cannot be instantiated directly but can be extended by other classes. What is the advantage of doing so? How is it different from an interface? I know that one class can implement multiple interfaces but can only extend one abstract class. Is that the only difference between an interface and an abstract class? I am aware of the usage of an interface; I have learned that from the event delegation model of AWT in Java. In which situations should I declare a class as an abstract class? What are the benefits of that? | This answer does a good job of explaining the differences between an abstract class and an interface, but it doesn't answer why you should declare one. From a purely technical standpoint, there is never a requirement to declare a class as abstract. Consider the following three classes: class Database {
public String[] getTableNames() { return null; } //or throw an exception? who knows...
}
class SqlDatabase extends Database { } //TODO: override getTableNames
class OracleDatabase extends Database { } //TODO: override getTableNames You don't have to make the Database class abstract, even though there is an obvious problem with its implementation: When you are writing this program, you could type new Database() and it would be valid, but it would never work. Regardless, you would still get polymorphism, so as long as your program only makes SqlDatabase and OracleDatabase instances, you could write methods like: public void printTableNames(Database database) {
String[] names = database.getTableNames();
} Abstract classes improve the situation by preventing a developer from instantiating the base class, because a developer has marked it as having missing functionality . It also provides compile-time safety so that you can ensure that any classes that extend your abstract class provide the bare minimum functionality to work, and you don't need to worry about putting stub methods (like the one above) that inheritors somehow have to magically know that they have to override a method in order to make it work. Interfaces are a totally separate topic. An interface lets you describe what operations can be performed on an object. You would typically use interfaces when writing methods, components, etc. that use the services of other components, objects, but you don't care what the actual type of object you are getting the services from is. Consider the following method: public void saveToDatabase(IProductDatabase database) {
database.addProduct(this.getName(), this.getPrice());
} You don't care about whether the database object inherits from any particular object, you just care that it has an addProduct method. So in this case, an interface is better suited than making all of your classes happen to inherit from the same base class. Sometimes the combination of the two works very nicely. For example: abstract class RemoteDatabase implements IProductDatabase {
public abstract String[] connect();
public abstract void writeRow(String col1, String col2);
public void addProduct(String name, Double price) {
connect();
writeRow(name, price.toString());
}
}
class SqlDatabase extends RemoteDatabase {
//TODO override connect and writeRow
}
class OracleDatabase extends RemoteDatabase {
//TODO override connect and writeRow
}
class FileDatabase implements IProductDatabase {
public void addProduct(String name, Double price) {
//TODO: just write to file
}
} Notice how some of the databases inherit from RemoteDatabase to share some functionality (like connecting before writing a row), but FileDatabase is a separate class that only implements IProductDatabase . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/96947",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26473/"
]
} |
96,966 | When did people start writing Readme files? It seems that pretty much all programs have this file, regardless of the format. Is there any documented first use of this document? | I don't know of a canonical first use. The Jargon File describes the README as: Hacker's-eye introduction traditionally included in the top-level directory of a Unix source distribution So I had a look through some early Unix source trees, courtesy of The Unix Tree (provided by the Unix Heritage Society and the Unix Archive ). Some README files found in early unices include: /sys/source/lex/README from PWB 1.0 (July 1977) - the earliest I could find /usr/doc/README from Seventh Edition (Jan 1979) - the earliest I could find for the whole source tree /usr/doc/README from 3BSD (March 1980) - the earliest I could find for a BSD So, advances on July 1977 are welcome! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/96966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1725/"
]
} |
96,973 | This wiki page tells : A schrödinbug is a bug that manifests only after someone reading
source code or using the program in an unusual way notices that it
never should have worked in the first place, at which point the
program promptly stops working for everybody until fixed. The Jargon
File adds: "Though... this sounds impossible, it happens; some
programs have harbored latent schrödinbugs for years." What is being talked about is very vague.. Can someone provide an example of how a schrödinbug is like (like with a fictional / real-life situation)? | In my experience the pattern is this: System works, often for years An error is reported The developer investigates the error and finds a bit of code which seems to be completely flawed and declares that it "could never have worked" The bug gets fixed and the legend of the code that could never have worked (but did for years) grows Let's be logical here. Code that could never have worked... could never have worked . If it did work then the statement is false. So I'm going to say that a bug exactly as described (that is observing the flawed code stops it working) is patently nonsense. In reality what has happened is one of two things: 1) The developer hasn't fully understood the code . In this case the code is usually a mess and somewhere in it has a major but non-obvious sensitivity to some external condition (say a specific OS version or configuration that governs how some function works in some minor but significant way). This external condition is altered (say by a server upgrade or change which is believed to be unrelated) and in doing so causes the code to break. The developer then looks at the code and, not understanding the historical context or having the time to trace through every possible dependency and scenario, declared that it could never have worked and rewrites it. In this situation, the thing to understand here is that the idea that "it could never have worked" is provably false (because it did). That's not to say rewriting it is a bad thing - it's often not, while it's nice to know exactly what was wrong often that's time consuming and rewriting the section of code is often faster and allows you to be sure that you've fixed things. 2) Actually it never worked, just no-one has ever noticed . This is surprisingly common, particularly in large systems. In this instance someone new starts and starts looking at things in a way no-one did before, or a business process changes bringing some previously minor edge case into the main process, and something which never really worked (or worked some but not all of the time) is found and reported. The developer looks at it and declares "it could never have worked" but the users say "nonsense, we've been using it for years" and they're sort of right but something they consider irrelevant (and usually fail to mention until the developer finds the exact condition at which point they go "oh yes, we do do that now and didn't before") has changed. Here the developer is right - it could never have worked and didn't ever work. But in either case one of two things is true: The claim "it could never have worked" is true and it never has worked - people just thought it did It did work and the statement "it could never have worked" is false and down to a (usually reasonable) lack of understanding of the code and its dependencies | {
"source": [
"https://softwareengineering.stackexchange.com/questions/96973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24257/"
]
} |
97,181 | Real programmers can write assembly code in any language. (Larry Wall). As far as I can make out, Mr. Larry Wall is trying to say that to a real programmer any language can have the same functionality as ASM. But I seriously do not understand. How can you write assembly code in high level languages like Perl, Python, Java and C#? Languages like Perl and Python don't even have pointers. Or Does he mean something else? What is Mr. Wall actually trying to say? | It is a tongue-in-cheek mockery of an earlier meme about "real programmers" which is a variation of the " no true Scotsman " fallacy and " real men don't eat quiche " which was a very popular book. http://c2.com/cgi/wiki?RealProgrammer Original thread where Wall stated this . Monty Python version, The Four Yorkshiremen is a mockery of this whole thing. "Real programmers" don't need high level languages, and The Story of Mel is the exemplar of this. And it was uphill both ways! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97181",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31064/"
]
} |
97,187 | I have heard about programmers coding for two days without sleep, drinking coffee and Red Bull . Also, in movies like The Social Network , there is a scene showing that Mark Zuckerberg has been programming for 36 hours. I have also read somewhere that in companies like Facebook, Google, Foursquare , etc. they can code for more than 24 hours without sleep. Is this really true? Can you actually produce high-quality code if you are sleep deprived? Can things like Red Bull make up for sleep? | Simply No . Coding for 36 hours has nothing to do with programming; rather, it's an attribute of humans. Very few people can stay awake for 24 hours, and even when they do, their minds lose problem-solving ability. Drivers who are sleepy simply hit other cars. Accountants who are sleepy simply make mistakes in their calculations. Likewise, many programmers write lower-quality code when sleepy. PS: There is an illness called insomnia that makes you sleep less. But I don't think Google hires people with such an illness. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97187",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32266/"
]
} |
97,207 | I have recently been learning D and am starting to get some sort of familiarity with the language. I know what it offers, I don't yet know how to use everything, and I don't know much about D idioms and so on, but I am learning. I like D. It is a nice language, being, in some sort of ways, a huge update to C, and done nicely. None of the features seem that "bolted on", but actually quite well thought-out and well-designed. You will often hear that D is what C++ should have been (I leave the question whether or not that is true to each and everyone to decide themselves in order to avoid unnecessary flame wars). I have also heard from several C++ programmers that they enjoy D much more than C++. Myself, while I know C, I can not say that I know C++. I would like to hear from someone knowing both C++ and D if they think there is something that C++ does better than D as a language (meaning not the usual "it has more third-party libraries" or "there are more resources" or "more jobs requiring C++ than D exists"). D was designed by some very skilled C++ programmers ( Walter Bright and Andrei Alexandrescu , with the help of the D community) to fix many of the issues that C++ had, but was there something that actually didn't get better after all? Something he missed? Something you think wasn't a better solution? Also, note that I am talking about D 2.0 , not D 1.0 . | Most of the things C++ "does" better than D are meta things: C++ has better compilers, better tools, more mature libraries, more bindings, more experts, more tutorials etc. Basically it has more and better of all the external things that you would expect from a more mature language. This is inarguable. As for the language itself, there are a few things that C++ does better than D in my opinion. There's probably more, but here's a few that I can list off the top of my head: C++ has a better thought out type system There are quite a few problems with the type system in D at the moment, which appear to be oversights in the design. For example, it is currently impossible to copy a const struct to a non-const struct if the struct contains class object references or pointers due to the transitivity of const and the way postblit constructors work on value types. Andrei says he knows how to solve this, but didn't give any details. The problem is certainly fixable (introducing C++-style copy constructors would be one fix), but it is a major problem in language at present. Another problem that has bugged me is the lack of logical const (i.e. no mutable like in C++). This is great for writing thread-safe code, but makes it difficult (impossible?) to do lazy intialisation within const objects (think of a const 'get' function which constructs and caches the returned value on first call). Finally, given these existing problems, I'm worried about how the rest of the type system ( pure , shared , etc.) will interact with everything else in the language once they are put to use. The standard library (Phobos) currently makes very little use of D's advanced type system, so I think it is reasonable the question whether it will hold up under stress. I am skeptical, but optimistic. Note that C++ has some type system warts (e.g. non-transitive const, requiring iterator as well as const_iterator ) that make it quite ugly, but while C++'s type system is a little wrong at parts, it doesn't stop you from getting work done like D's sometimes does. 
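For readers less familiar with the C++ side of the "logical const" point above, here is a minimal sketch (my illustration, not part of the original answer; the Report class and its members are hypothetical) of lazy initialisation inside a const getter using mutable, which is exactly what the answer says transitive const makes hard to express in D:
#include <string>

class Report {
public:
    Report() : cached_(false) {}

    // Logically const: callers observe no change, but the first call fills a cache.
    const std::string& summary() const {
        if (!cached_) {
            summary_ = buildSummary();   // expensive work performed only once
            cached_ = true;
        }
        return summary_;
    }

private:
    std::string buildSummary() const { return "expensive result"; }

    mutable std::string summary_;   // mutable: may be written even from const methods
    mutable bool cached_;
};
The caching is an implementation detail invisible to callers, which is the kind of "lazy initialisation within const objects" the answer refers to.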
Edit: To clarify, I believe that C++ has a better thought out type system -- not necessarily a better one -- if that makes sense. Essentially, in D I feel that there is a risk involved in using all aspects of its type system that isn't present in C++. D is sometimes a little too convenient One criticism that you often hear of C++ is that it hides some low-level issues from you e.g. simple assignments like a = b; could be doing many things like calling conversion operators, calling overload assignment operators etc., which can be difficult to see from the code. Some people like this, some people don't. Either way, in D it is worse (better?) due to things like opDispatch , @property , opApply , lazy which have the potential to change innocent looking code into things that you don't expect. I don't think this is a big issue personally, but some might find this off-putting. D requires garbage-collection This could be seen as controversial because it is possible to run D without the GC. However, just because it is possible doesn't mean it is practical. Without a GC, you lose a lot of D's features, and using the standard library would be like walking in a minefield (who knows which functions allocate memory?). Personally, I think it is totally impractical to use D without a GC, and if you aren't a fan of GCs (like I am) then this can be quite off-putting. Naive array definitions in D allocate memory This is a pet peeve of mine: int[3] a = [1, 2, 3]; // in D, this allocates then copies
int a[3] = {1, 2, 3}; // in C++, this doesn't allocate Apparently, to avoid the allocation in D, you must do: static const int[3] staticA = [1, 2, 3]; // in data segment
int[3] a = staticA; // non-allocating copy These little 'behind your back' allocations are good examples of my previous two points. Edit: Note that this is a known issue that is being worked on. Edit: This is now fixed. No allocation takes place. Conclusion I've focussed on the negatives of D vs C++ because that's what the question asked, but please don't see this post as a statement that C++ is better than D. I could easily make a larger post of places where D is better than C++. It's up to you to make the decision of which one to use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97207",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12750/"
]
} |
97,247 | All characters in ASCII can be encoded using UTF-8 without an increase in storage (both require a byte of storage). UTF-8 has the added benefit of character support beyond "ASCII-characters". If that's the case, why would we ever choose ASCII encoding over UTF-8? Is there a use case where we would choose ASCII instead of UTF-8? | In some cases it can speed up access to individual characters. Imagine string str='ABC' encoded in UTF8 and in ASCII (and assuming that the language/compiler/database knows about the encoding). To access the third ( C ) character from this string using the array-access operator featured in many programming languages, you would do something like c = str[2] . Now, if the string is ASCII encoded, all we need to do is fetch the third byte from the string. If, however, the string is UTF-8 encoded, we must first check whether the first character is a one-byte or multi-byte char, then perform the same check on the second character, and only then can we access the third character. The longer the string, the bigger the performance difference. This is an issue, for example, in some database engines, where to find the beginning of a column placed 'after' a UTF-8 encoded VARCHAR, the database not only needs to check how many characters there are in the VARCHAR field, but also how many bytes each one of them uses. (A minimal sketch of this scanning cost is shown right after this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97247",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24257/"
]
} |
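To make the cost described above concrete, here is a minimal C++ sketch (my addition, not from the original answer; it assumes well-formed UTF-8 and omits bounds checking for brevity). In ASCII, the i-th character is just the i-th byte; in UTF-8, you must walk the string and skip a variable number of bytes per character, which is linear work:
#include <cstddef>
#include <string>

// ASCII: constant-time access, the i-th character is the i-th byte.
char asciiCharAt(const std::string& s, std::size_t i) {
    return s[i];
}

// UTF-8: find the byte offset of character i by decoding the length
// of each preceding character from its leading byte.
std::size_t utf8OffsetOf(const std::string& s, std::size_t i) {
    std::size_t byte = 0;
    for (std::size_t ch = 0; ch < i; ++ch) {
        unsigned char b = static_cast<unsigned char>(s[byte]);
        if      (b < 0x80)         byte += 1;  // 0xxxxxxx: 1-byte (ASCII range)
        else if ((b >> 5) == 0x6)  byte += 2;  // 110xxxxx: 2-byte sequence
        else if ((b >> 4) == 0xE)  byte += 3;  // 1110xxxx: 3-byte sequence
        else                       byte += 4;  // 11110xxx: 4-byte sequence
    }
    return byte;  // linear in i, versus constant time for the ASCII case
}
This is the same kind of scan the answer describes a database engine doing to locate data stored after a UTF-8 encoded VARCHAR.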
97,295 | After almost 4 years of experience, I haven't seen code where the yield keyword is used. Can somebody show me a practical usage (along with an explanation) of this keyword, and if so, aren't there other, easier ways to accomplish what it does? | Efficiency The yield keyword effectively creates a lazy enumeration over collection items that can be much more efficient. For example, if your foreach loop iterates over just the first 5 items of 1 million items then that's all yield returns, and you didn't build up a collection of 1 million items internally first. Likewise you will want to use yield with IEnumerable<T> return values in your own programming scenarios to achieve the same efficiencies. Example of efficiency gained in a certain scenario Not an iterator method, potentially inefficient use of a big collection (an intermediate collection holding lots of items is built): // Method returns all million items before anything can loop over them.
List<object> GetAllItems() {
List<object> millionCustomers;
database.LoadMillionCustomerRecords(millionCustomers);
return millionCustomers;
}
// MAIN example ---------------------
// Caller code sample:
int num = 0;
foreach(var itm in GetAllItems()) {
num++;
if (num == 5)
break;
}
// Note: One million items returned, but only 5 used. Iterator version, efficient (No intermediate collection is built) // Yields items one at a time as the caller's foreach loop requests them
IEnumerable<object> IterateOverItems() {
for (int i = 0; i < database.Customers.Count(); ++i)
yield return database.Customers[i];
}
// MAIN example ---------------------
// Caller code sample:
int num = 0;
foreach(var itm in IterateOverItems()) {
num++;
if (num == 5)
break;
}
// Note: Only 5 items were yielded and used out of the million. Simplify some programming scenarios In another case, it makes some kinds of sorting and merging of lists easier to program because you just yield items back in the desired order rather than sorting them into an intermediate collection and swapping them in there. There are many such scenarios. Just one example is the merging of two lists: IEnumerable<object> EfficientMerge(List<object> list1, List<object> list2) {
foreach(var o in list1)
yield return o;
foreach(var o in list2)
yield return o;
} This method yields back one contiguous list of items, effectively a merge with no intermediate collection needed. More Info The yield keyword can only be used in context of an iterator method (having a return type of IEnumerable , IEnumerator , IEnumerable<T> , or IEnumerator<T> .) and there is a special relationship with foreach . Iterators are special methods. The MSDN yield documentation and iterator documentation contains lots of interesting information and explanation of the concepts. Be sure to correlate it with the foreach keyword by reading about it too, to supplement your understanding of iterators. To learn about how the iterators achieve their efficiency, the secret is in the IL code generated by the C# compiler. The IL generated for an iterator method differs drastically from that generated for a regular (non-iterator) method. This article (What Does the Yield Keyword Really Generate?) provides that kind of insight. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97295",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31418/"
]
} |
97,437 | I'm a long-time Java developer and finally, after majoring, I have time to study it decently in order to take the certification exam... One thing that has always bothered me is String being "final". I do understand it when I read about the security issues and related stuff... But, seriously, does anyone have a true example of that? For instance, what would happen if String weren't final? Like it's not in Ruby. I haven't heard any complaints coming from the Ruby community... And I'm aware of the StringUtils and related classes that you have to either implement yourself or search the web for, just to get that behavior (4 lines of code) you want. | The main reason is speed: final classes can't be extended, which allows the JIT to do all kinds of optimizations when handling strings - there is never a need to check for overridden methods. Another reason is thread safety: immutables are always thread-safe because a thread has to completely build them before they can be passed to someone else - and after building, they can't be changed anymore. Also, the inventors of the Java runtime always wanted to err on the side of safety. Being able to extend String (something I often do in Groovy because it's so convenient) can open a whole can of worms if you don't know what you're doing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97437",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32892/"
]
} |
97,490 | I have a method that looks like this: def foobar(mock=False, **kwargs):
# ... snipped foobar actually makes several calls to Amazon S3 and returns a composed result.
In order to make this testable, I introduced the mock parameter to turn off making live network connections. It feels like a code smell for me but testability is also very important. What else can I do if I want to do away with the parameter ? | Why not have a connection class instead? class Connection(object):
def retrieve(self, resource):
return something_from_s3()
class MockConnection(Connection):
def retrieve(self, resource):
return 42
def foobar(connection = Connection(), **kwargs):
whatever = connection.retrieve("foobar") Not only is this cleaner, but you can trivially test it with various mock connections. If you ever decide to support a different kind of service instead of S3, you can easily extend your product to support it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52830/"
]
} |
97,538 | I'm developing sites based on .NET platform. I usually deploy these sites on my local IIS, so that I can test them and see their functionality before going live. However, each time I restart windows, it seems that sites take a long time to run for the first time. I know about JIT and I'm also aware of this question , but it doesn't answer my question. Does JIT happens every time you restart windows? Is it related to creation of w3wp.exe process? Why sites are so slow for the first request after each restart? | This problem is the JIT compile. The application pool needs time to build the libraries before it can begin processing them. This can be sped up by using a warmup script, but it's something that needs to happen. It also depends on whether you're using a website or a web application project. A website is JIT for every page so the very first hit is slow and each new page hit has an extra compile time as well. Web application projects are precompiled so should not suffer this hit as badly, but the libraries still need to be loaded up. The more libraries/tools you have the worse this hit tends to be. Here are some links that discuss the warm up: http://weblogs.asp.net/gunnarpeipman/archive/2010/01/22/iis-application-warm-up-module.aspx http://blogs.iis.net/steveschofield/archive/2009/05/30/application-pool-warm-up.aspx https://stackoverflow.com/questions/2063461/iis-web-applications-warmup http://sharepoint.smayes.com/2011/06/application-pool-specific-warm-up-scripts/ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97538",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31418/"
]
} |
97,541 | I use unsigned ints everywhere, and I'm not sure if I should. This can be anywhere from database primary key id columns to counters, etc. If a number should never be negative, then I will always use an unsigned int. However, I notice from others' code that no one else seems to do this. Is there something crucial that I'm overlooking? Edit: Since asking this question, I've also noticed that in C, returning negative values for errors is commonplace rather than throwing exceptions as in C++. | Is there something crucial that I'm overlooking? When calculations involve both signed and unsigned types as well as different sizes, the rules for type promotion can be complex and lead to unexpected behaviour . I believe this is the main reason why Java omitted unsigned int types. (A minimal illustration of such a surprise follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97541",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19705/"
]
} |
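A minimal C++ illustration (my addition, not part of the original answer) of the promotion surprise mentioned above: when a signed and an unsigned int meet in a comparison, the signed value is converted to unsigned, so an obviously true-looking comparison can evaluate to false:
#include <iostream>

int main() {
    int i = -1;
    unsigned int u = 1;

    // The usual arithmetic conversions turn i into a huge unsigned value
    // (4294967295 with a 32-bit int), so the condition below is false.
    if (i < u)
        std::cout << "-1 < 1, as expected\n";
    else
        std::cout << "surprise: -1 is NOT less than 1u here\n";

    return 0;
}
Most compilers will at least warn about the signed/unsigned comparison, which hints at why mixing the two is considered risky.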
97,615 | While threads can speed up execution of code, are they actually needed? Can every piece of code be done using a single thread or is there something that exists that can only be accomplished by using multiple threads? | First of all, threads cannot speed up execution of code. They do not make the computer run faster. All they can do is increase the efficiency of the computer by using time that would otherwise be wasted. In certain types of processing this optimization can increase efficiency and decrease running time. The simple answer is yes. You can write any code to be run on a single thread. Proof: A single processor system may only run instructions linearly. Having multiple lines of execution is done by the operating system processing interrupts, saving the state of the current thread, and starting another one. The complex answer is ... more complex! The reason that multithreaded programs may often be more efficient than linear ones is because of a hardware "problem". The CPU can execute calculations more quickly than memory and hard storage IO. So, an "add" instruction, for example, executes far more quickly than a "fetch". Caches and dedicated program instruction fetching (not sure of the exact term here) can combat this to some extent, but the speed issue remains. Threading is a way of combating this mismatch by using the CPU for CPU bound instructions while IO instructions are completing. A typical thread execution plan probably would be: Fetch data, process data, write data. Assume that fetching and writing take 3 cycles and processing takes one, for illustrative purposes. You can see that while the computer is reading or writing, it's doing nothing for 2 cycles each? Clearly it's being lazy, and we need to crack our optimization whip! We can rewrite the process using threading to use this wasted time: #1 fetch no operation #2 fetch #1's done, process it write #1 #1 fetch #2's done, process it write #2 fetch #2 And so on. Obviously this is a somewhat contrived example, but you can see how this technique can utilize the time that would otherwise be spent waiting for IO. Note that threading as shown above can only increase efficiency on heavily IO bound processes. If a program is mainly calculating things, there's not going to be a lot of "holes" we could do more work in. Also, there is an overhead of several instructions when switching between threads. If you run too many threads, the CPU will spend most of it's time switching and not much actually working on the problem. This is called thrashing . That all is well and good for a single core processor, but most modern processors have two or more cores. Threads still serve the same purpose - to maximize CPU use, but this time we have the ability to run two separate instructions at the same time. This can decrease running time by a factor of however many cores are available, because the computer is actually multitasking, not context switching. With multiple cores, threads provide a method of splitting work between the two cores. The above still applies for each individual core though; A program that runs a max efficiency with two threads on one core will most likely run at peak efficiency with about four threads on two cores. (Efficiency is measured here by minimum NOP instruction executions.) The problems with running threads on multiple cores (as opposed to a single core) are generally taken care of by hardware. The CPU will be sure that it locks the appropriate memory locations before reading/writing to it. 
(I've read that it uses a special flag bit in memory for this, but this could be accomplished in several ways.) As a programmer working in higher-level languages, you don't have to worry about anything more on two cores than you would with one. TL;DR: Threads can split work up to allow the computer to process several tasks asynchronously. This allows the computer to run at maximum efficiency by utilizing all the processing time available, rather than blocking while a process is waiting for a resource. (A minimal sketch of overlapping I/O waits with useful work is shown after this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17476/"
]
} |
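To make the "use the wait time" idea above concrete, here is a small C++ sketch (my addition, not from the original answer; the slow fetch is simulated with a sleep) that starts the next fetch asynchronously and processes the current chunk while that I/O-like wait is in flight:
#include <chrono>
#include <future>
#include <iostream>
#include <string>
#include <thread>

// Simulated slow I/O: takes far longer than the CPU work below.
std::string fetchChunk(int id) {
    std::this_thread::sleep_for(std::chrono::milliseconds(300));
    return "chunk " + std::to_string(id);
}

void processChunk(const std::string& chunk) {
    std::cout << "processing " << chunk << "\n";  // stand-in for CPU-bound work
}

int main() {
    std::future<std::string> next = std::async(std::launch::async, fetchChunk, 0);
    for (int id = 1; id <= 3; ++id) {
        std::string current = next.get();                       // wait for the fetch in flight
        next = std::async(std::launch::async, fetchChunk, id);  // kick off the next fetch...
        processChunk(current);                                  // ...and process while it runs
    }
    processChunk(next.get());                                   // last chunk
    return 0;
}
With the overlap, each chunk is processed during the next chunk's wait instead of after it, which is the efficiency gain the answer describes for IO-bound work (it does not make the CPU itself any faster).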
97,660 | I am a Web application developer who also is responsible for project managing some projects, I sometimes have to manage remote developers, who work for me under a contract basis. I feel that sometimes it is really difficult to manage. I am facing some strange situations. For example: Once a developer didn't respond for two days while we had to deliver a project. (didn't attend calls or respond to emails ) Once a developer templated a CMS with static solution (this one had less expertise I think). Once I asked a developer to complete search functionality, the next day he said done, but when I looked, it wasn't done. Upon asking I came to know that, he had done the search but not search result listing formatting :) . (I guess he is unable to manage his time while working from home) So does this mean that I don't have right people? Or there is a problem in project management that I need to cover? I understand that sometimes misunderstandings can happen, and therefore we need to communicate in writing using a tool like unfuddle or basecamp, but what we can do for instance in the above situations? The person who did has at least 2 years of experience as a developer. So I actually want to know: where is the problem? I am a programmer and know that programmers understand these things, then what should I do in such cases? | I don't know your precise situation, so it's very difficult to tell what's happening. Nevertheless, here a few elements which may or may not apply to you: 1. Communication Do you communicate clearly your ideas? I mean, are you sure that those developers understand correctly what is the work to do? 2. Management Asking to do something may not be enough. On large projects, you have functional/non-functional requirements, acceptance testing, etc. All this stuff is not just to waste time; it is useful to be clear on the expectations, and to be sure that those expectations are met. Maybe you should use more the techniques used in real project management. 3. Staying connected with the developers If you let a developer in the wild, chances are he will not be under pressure to end the work. It's like those stories about the interns on Daily WTF: you ask the intern to do something, you forget about this intern, and a few weeks later, you discover that the work is not done and the person is currently doing something strangely different because he was afraid to ask for explanation. 4. Tracking It is a good idea not only to stay connected, but also to keep an eye regularly on what's done. Asking the developers to commit to your version control server is a good idea: not only do you get the updates daily and can quickly take the required measures if you see that something goes wrong, but it also forces the developers to work from the beginning, and not keeping the work for the last day. Another technique is to ask the developer to report you daily about the work which was done by phone or through a video conference. I hate this and would never work for somebody who asks me to do that, but it may work very well with other developers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97660",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14774/"
]
} |
97,691 | It seems logical to me that one could define a context for static source code analysis that included rules to produce a relative value of complexity. I know it is not like in the physical sense because source code doesn't have "Energy", but I'm betting there have been efforts, at least academic, to draw a parallel. Does anyone have any knowledge of this and, if so, to what end has it produced useful results? | There are already a number of measures of code complexity: Cyclomatic complexity Class length Method length Number of fields Number of method parameters N-path complexity Fan-in and fan-out Data flow analysis (DU/DD chains) Work has been done to correlate these to defect density, effort to maintain, and ease of understanding. Some are more meaningful than others, depending on what you are trying to learn from your analysis. I'm not that familiar with the concept of entropy from the physical sciences, but I wonder if tracking measurements and metrics like the ones I named over time, and relating them to defects over time, would be similar to what you are looking for. You might also be interested in Ivar Jacobson's definition of software entropy and software rot . The general idea of these topics is that over time, as the code as well as the execution environment changes, the software system begins to degrade. Refactoring is seen as a method of minimizing entropy or rot, and, at least in my experience, the metrics and measurements that I mentioned above would be indicators that refactoring might be necessary in a system or subsystem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20108/"
]
} |
97,716 | What would you do if you were in a situation where the project you are working on is obviously built poorly and will have failures in the future and be a nightmare to maintain...but it is considered a "success" by management because the clients are happy? Should I just not care? Is it ok that the clients don't even realize they could have a better application than this? At what point do I stop caring about building it right and just go with the flow? | If the clients are happy, you're doing something right. Lots of people enjoy hot dogs without knowing how they are made... If the app is a good solution to the problem but you're worried that the foundation is faulty, figure out how to improve things incrementally and pitch a plan to implement those improvements as you update the product. Incremental is key: if you're itching to rewrite whole parts of it, your manager is going to rightly say that's unreasonable. The perfect can be enemy of the good. Look up jwz's story of how Netscape let IE take the lead because they "had to" rewrite Navigator. If the app's UI is itself a mess, the clients may still be happy because they are comparing it to "the hard way" and even a buggy program can be miles better than that. You are comparing it to an ideal that you can imagine because of your background and skills. Again, consider how you can improve things in incremental ways, and pitch that as part of the plan. Don't stop caring: you want your work to be the best that it can be. But also remember it's the customer that pays your bills, and you're writing software for them, not you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7274/"
]
} |
97,785 | So my Dad bought me 5 books on programming (C++, Java, PHP, Javascript, Android) about a month ago. He's an architect and he knows NOTHING about programming. He bought me them because I told him programming was fun and I wanted to learn it. As you might know, being a kid (I'm 14) and being told to learn programming out of dull books isn't the easiest thing. I'm always getting distracted.. I told him before that I didn't need to buy books and I could just watch online tutorials.. but no, he's so old-fashioned. He's only letting me use the books. Recently, he started asking me what I've done with it, and I showed him a C++ program I made that takes what you type in, then assigns values to each letter (A is the first letter in the alphabet so it gets the value of 1).. and so on. It then adds up all the values and tells you it. So the word "add" would have a value of 9. ^^ That wasn't very impressive to him. He yelled at me and told me all I've been doing is screwing around. That's not true. He is extremely traditional and stubborn and doesn't listen to anything I had to say. What should I tell him? PS: If you have any tips on zoning in on a book, let me know EDIT: Thank you so much everyone, you have no idea how much it means to know that there are some people that understand my situation. I've read every one and I'll consider everyone's opinion. ¡Gracias! | I showed him a C++ program I made that takes what you type in, then assigns values to each letter (A is the first letter in the alphabet so it gets the value of 1).. and so on. It then adds up all the values and tells you it. So the word "add" would have a value of 9. I don't know what you should do with your dad. But: If you did this all by yourself, starting from scratch, learning from books, in a month , it's damn impressive. And you did it in C++, which is one of the scariest programming languages in existence. There are quite a few people out there taking interviews, seriously trying to get programming jobs, who would struggle with that. See this story. I can only suggest: keep doing what you enjoy. Ignore your dad in this context; he doesn't know what he's talking about. You have talent in programming and willingness to learn - the main ingredients in becoming a great programmer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97785",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33123/"
]
} |
97,879 | I am starting a new job in a company with many developers and media people. The layout of the place is open, with computers around a skinny oval. I have worked in small teams programming embedded C; this job is for Objective-C. I'm still at an intermediate stage, so I know what I don't know (haha), which means I have to google it and then implement it. So the question is: how bad does it look if the guy next to you does a lot of searching while coding? I mean, at the end of the day I will get the job done, but I want to look professional too! | Programming makes you a good coder ; reading can make you a good developer : Browse API documentation to make sure you don't reinvent the wheel or use the APIs incorrectly or inefficiently. Look up language documentation to make sure you don't continue programming in language Foo when starting to work with language Bar. Read and understand best practices and patterns to know when to use them. Look up code samples, then use them as templates instead of copying verbatim. That way you're sure to have understood at least the general structure of the code. If someone else doesn't appreciate just how much you have to read to be able to write good code, s/he is not a developer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97879",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33161/"
]
} |
97,880 | I am trying to re-architect a web application I developed to use the MVC pattern, but I'm not sure if validation should be handled in the model or not. For example, I'm setting up one of my models like this: class AM_Products extends AM_Object
{
public function save( $new_data = array() )
{
// Save code
}
} First Question: So I'm wondering if my save method should call a validation function on $new_data or assume that the data has already been validated? Also, if it were to offer validation, I'm thinking some of the model code to define data types would look like this: class AM_Products extends AM_Object
{
protected function init() // Called by __construct in AM_Object
{
// This would match up to the database column `age`
register_property( 'age', 'Age', array( 'type' => 'int', 'min' => 10, 'max' => 30 ) );
}
} Second Question: Every child class of AM_Object would run register_property for each column in the database of that specific object. I'm not sure if this is a good way of doing it or not. Third Question: If validation should be handled by the model, should it return an error message or an error code and have the view use the code to display an appropriate message? | First Answer: A key role of the model is to maintain integrity. However processing user input is a responsibility of a controller. That is, the controller must translate user data (which most of the time is just strings) into something meaningful. This requires parsing (and may depend on such things as the locale, given that for example, there are different decimal operators etc.). So the actual validation, as in "is the data well formed?", should be performed by the controller. However the verification, as in "does the data make sense?" should be performed within the model. To clarify this with an example: Assume your application allows you to add some entities, with a date (an issue with a dead-line for example). You might have an API, where dates might be represented as mere Unix time stamps, while when coming from a HTML page, it will be a set of different values or a string in the format of MM/DD/YYYY. You don't want this information in the model. You want each controller to individually try to figure out the date. However, when the date is then passed to the model, the model must maintain integrity. For example, it might make sense to not allow dates in the past, or dates, that are on holidays/sundays, etc. Your controller contains input (processing) rules. Your model contains business rules. You want your business rules to always be enforced, no matter what happens. Assuming you had business rules in the controller, then you'd have to duplicate them, should you ever create a different controller. Second Answer: The approach does make sense, however the method could be made more powerful. Instead of the last parameter being an array, it should be an instance of IContstraint which is defined as: interface IConstraint {
function test($value);//returns bool
} And for numbers you could have something like: class NumConstraint implements IConstraint {
var $grain;
var $min;
var $max;
function __construct($grain = 1, $min = NULL, $max = NULL) {
if ($min === NULL) $min = INT_MIN;
if ($max === NULL) $max = INT_MAX;
$this->min = $min;
$this->max = $max;
$this->grain = $grain;
}
function test($value) {
return ($value % $this->grain == 0 && $value >= $this->min && $value <= $this->max);
}
} Also I don't see what 'Age' is meant to represent, to be honest. Is it the actual property name? Assuming there's a convention by default, the parameter could simple go to the end of the function and be optional. If not set, it would default to the to_camel_case of the DB column name. Thus the example call would look like: register_property('age', new NumConstraint(1, 10, 30)); The point of using interfaces is that you can add more and more constraints as you go and they can be as complicated as you want. For a string to match a regular expression. For a date to be at least 7 days ahead. And so on. Third Answer: Every Model entity should have a method like Result checkValue(string property, mixed value) . The controller should call it prior to setting data. The Result should have all the information about whether the check failed, and in case it did, give reasons, so the controller can propagate those to the view accordingly. If a wrong value is passed to the model, the model should simply respond by raising an exception. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97880",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3382/"
]
} |
97,912 | Possible Duplicate: What does it mean to write “good code”? In a discussion on coding quality, and how you identify it, I came across a discussion on testing people's coding ability by getting them to show how they would swap two values using a piece of code to achieve the objective. Two key solutions were produced: Introduce a spare variable to do some pass the parcel of the values or: Use some bitwise operators. There then ensued an argument on which was in fact the better solution (I'd be leaning towards the first option while being aware that the second one exists, but may not always evaluate as expected depending on the values in question). Bearing in mind the story of Mel the Real Programmer , I am interested in knowning how you evaluate code as being elegant or not, and is succinctness a key feature of elegant code. | Good code should be clean, simple and easy to understand first of all. The simpler and cleaner it is, the less the chance of bugs slipping in. As Saint-Exupery coined, "Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." Moreover, elegant code is usually the result of careful analysis of the problem, and finding an algorithm and design which simplifies the code greatly (and often speeds it up too). E.g. Programming Pearls shows several examples where an insight gained during analysis gave a totally different angle of attack, resulting in a very simple, elegant and short solution. Showing how clever the author is, only comes after these ;-) Performance micro-optimization (like using the bitwise operations you mention) should be used only when one can prove (with concrete measurements) that the piece of code in question is the bottleneck, and that the change actually improves performance (I have seen examples to the contrary). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97912",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17561/"
]
} |
97,985 | I was thinking today about Paul Graham's book "Hackers and Painters." More specifically, these two paragraphs : "I was taught in college that one ought to figure out a program
completely on paper before even going near a computer. I found that I
did not program this way. I found that I liked to program sitting in
front of a computer, not a piece of paper. Worse still, instead of
patiently writing out a complete program and assuring myself it was
correct, I tended to just spew out code that was hopelessly broken,
and gradually beat it into shape. Debugging was a kind of final pass
where you caught typos and oversights... [It] seemed like programming
consisted of debugging. ... As far as I can tell, the way they taught me to program in college
was all wrong. You should figure out programs as you're writing them,
just as writers and painters and architects do." That's how it's taught in my college and I'm pretty sure most other colleges as well. You figure out what your program will do, and then you figure out how to do it, then you type and debug. Sometimes you make a basic version and add functionality, but the idea is that you think through and then type. This sort of reminds of that chapter in Feynman's book called "He Solves Radios By Thinking!" where he paced around thinking of how the radio could be broken, and then fixes it. To me, that's what programming is about - thinking and then finding a solution. Is this the prevalent approach to coding? If so, why don't more people just hack away and put a program together without having a preconceived idea of what it's going to look like? What are the advantages and disadvantages of think & type vs. spew & beat? | This is a perfect example of the excluded middle fallacy. Yes, writing out the whole program on paper before you touch the actual keyboard is a bad idea. But that doesn't make the opposite extreme--immediately jumping into the coding and starting to hack away--a good idea. In fact, it's even worse. It's very important to understand what you're trying to write before you start writing it. When I've got a new feature to implement at work, I make sure I've got a spec that describes what needs done before I start. I look it over, and if there's something on there that doesn't make sense, I talk with the people who wrote the spec and work over the issue until we're in agreement. Sometimes I hadn't understood the requirements and they can set me straight; other times the PM folks didn't understand the technical details, and they end up modifying the spec. Just about anyone who's done this can tell you from personal experience that it's a whole lot easier to fix problems in the spec than it is to find a problem in your code halfway through the implementation, rip it all out, and replace it with something else. So having a plan for what you write before you start writing the code is very, very important. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/97985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27757/"
]
} |
98,048 | I like a technology (including programming language) but its platform is closed sourece and many times I meet people who ask me, "why do you use a closed source platform, why not use an open source alternative? If there is something wrong it should be with the closed source not with the open source, (as they say)". Actually I don't know how to answer their question. Could anyone tell me a good answer? Why do you use a closed source platform? | You've already answered the question: you like the technology you're working with. If that's not a good enough answer to satisfy these people, then they're not interested in being satisfied. I've been doing this for over 20 years. I've worked on VMS, MPE, MPX, Unix, Linux, Windows, MacOS, etc. I've used open source and proprietary tools on the major desktops. All that really matters to me is, which toolset allows me to accomplish a given task in the least amount of time with the least amount of frustration (assuming I get to make the choice)? Sometimes the OSS solution falls short of the proprietary equivalent (e.g., GIMP vs. Photoshop). Sometimes it's superior. Sometimes it's a wash. OSS has its advantages, but it's not a panacea. Unfortunately, some people have embraced it with a religious fervor, and like many religious fanatics tend to be jerks about it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98048",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26782/"
]
} |
98,083 | What's the difference between the two UpdateSubject methods below? I felt using static methods is better if you just want to operate on the entities. In which situations should I go with non-static methods? public class Subject
{
public int Id {get; set;}
public string Name { get; set; }
public static bool UpdateSubject(Subject subject)
{
//Do something and return result
return true;
}
public bool UpdateSubject()
{
//Do something on 'this' and return result
return true;
}
} I know I will be getting many kicks from the community for this really annoying question, but I could not stop myself from asking it. Does this become impractical when dealing with inheritance? Update: It's happening at our workplace now.
We are working on a 6 month asp.net web application with 5 developers. Our architect decided we use all static methods for all APIs. His reasoning being static methods are light weight and it benefits web applications by keeping server load down. | I'll go with the most obvious problems of static methods with explicit "this" parameters: You lose virtual dispatch and subsequently polymorphism. You can never override that method in a derived class. Of course you can declare a new ( static ) method in a derived class, but any code that accesses it has to be aware of the entire class hierarchy and do explicit checking and casting, which is precisely what OO is supposed to avoid . Sort of an extension of #1, you can't replace instances of the class with an interface , because interfaces (in most languages) can't declare static methods. The unnecessary verbosity. Which is more readable: Subject.Update(subject) or just subject.Update() ? Argument checking. Again depends on the language, but many will compile an implicit check to ensure that the this argument is not null in order to prevent a null reference bug from creating unsafe runtime conditions (kind of a buffer overrun). Not using instance methods, you'd have to add this check explicitly at the beginning of every method. It's confusing. When a normal, reasonable programmer sees a static method, they are naturally going to assume that it doesn't require a valid instance (unless it takes multiple instances, like a compare or equality method, or is expected to be able to operate on null references). Seeing static methods used this way is going to make us do a double or perhaps triple take, and after the 4th or 5th time we are going to be stressed and angry and god help you if we know your home address. It's a form of duplication. What actually happens when you invoke an instance method is that the compiler or runtime looks up the method in the type's method table and invokes it using this as an argument. You are basically re-implementing what the compiler already does. You're violating DRY , repeating the same parameter again and again in different methods when it's not needed. It's hard to conceive of any good reason to replace instance methods with static methods. Please don't. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98083",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28720/"
]
} |
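A minimal sketch of the virtual-dispatch point from the static-methods answer above, written in Python rather than C# for brevity; the class and method names here are invented for illustration, and the same reasoning applies to the Subject class in the question.
class Subject:
    def __init__(self, name):
        self.name = name

    def update(self):
        # Instance method: dispatched through the object's runtime type.
        return "updated " + self.name


class AuditedSubject(Subject):
    def update(self):
        # The override is picked up automatically by every caller of s.update().
        return super().update() + " (audit logged)"


def update_subject(subject):
    # Static-style helper with an explicit parameter: every caller is pinned to
    # this one implementation, so AuditedSubject never gets a chance to add its
    # behaviour unless call sites start checking types by hand.
    return "updated " + subject.name


subjects = [Subject("math"), AuditedSubject("physics")]
print([s.update() for s in subjects])         # polymorphic: second entry is audited
print([update_subject(s) for s in subjects])  # static-style: the audit is silently skipped
Which implementation runs is decided by the object rather than the call site; that is the polymorphism the answer says you give up when instance methods are replaced by static methods taking explicit parameters.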
98,358 | I am developing the UI for a .NET MVC application that will require international localization of all content in the near future. I am very familiar with .NET in general but have never had a project that required such a significant focus on international accessibility. The project is initially being done in English. What measures should I take at this point to make it easier to implement localization in the future? | You are developing an ASP.Net MVC application, aren't you? Other answers seem to be specific to desktop applications. Let me capture common things: Locale detection It is quite important that your application detect the user's locale correctly. In a desktop application, CultureInfo.CurrentCulture holds preferred formatting locale (the one that should be used to format numbers, dates, currencies, etc.) whereas CultureInfo.CurrentUICulture holds preferred User Interface locale (the one that should be used to display localized messages). For web applications, you should set both cultures to auto (to automatically detect locale from AcceptLanguage header) unless you want to implement some fancy locale detection workflow (i.e. want to support changing language on demand). Externalize strings All strings should come from resources, that is Resx files. In Winforms App it is easily achievable by setting form Localizable property to true. You would also need to manually (unfortunately) externalize strings that come from your models. It is also relatively simple. In Asp.Net you would need to externalize everything manually... Layouts You definitely need to allow for string expansion. In Winforms world it is achievable via TableLayoutPanel which should be used to make sure that layout will adjust automatically to accommodate longer text. In web world, you are a bit out of luck. You might need to implement CSS Localization Mechanism - a way to modify (override) CSS definitions. This would allow Localization folks to modify style issues on demand. Make sure that each HTML element in rendered page has unique id - it will allow to target it precisely. Culture specific issues Avoid using graphics, colors and sounds that might be specific for western culture. If you really need it, please provide means of Localization. Avoid direction-sensitive graphics (as this would be a problem when you try to localize to say Arabic or Hebrew). Also, do not assume that whole world is using the same numbers (i.e. not true for Arabic). ToString() and Parse() Be sure to always pass CultureInfo when calling ToString() unless it is not supported. That way you are commenting your intents. For example: if you are using some number internally and for some reason need to convert it to string use: int i = 42;
var s = i.ToString(CultureInfo.InvariantCulture); For numbers that are going to be displayed to user use: var s = i.ToString(CultureInfo.CurrentCulture); // formatting culture used The same applies to Parse(), TryParse() and even ParseExact() - some nasty bugs could be introduced without proper use of CultureInfo. That is because some poor soul in Microsoft, full of good intentions decided that it is a good idea to treat CultureInfo.CurrentCulture as default one (it would be used if you don't pass anything) - after all when somebody is using ToString() he/she want to display it to user, right? Turn out it is not always the case - for example try to store your application version number in database and then convert it to instance of Version class. Good luck. Dates and time zones Be sure to always store and instantiate DateTime in UTC (use DateTime.UtcNow instead DateTime.Now). Convert it to local time in local format upon displaying: DateTime now = DateTime.UtcNow;
var s = now.ToLocalTime().ToString(CultureInfo.CurrentCulture); If you need to send emails with time reference in body, be sure to include time zone information - include both UTC offset and list of cities: DateTime someDate; // i.e. from database
var formattedDate = String.Format("{0} {1}",
someDate.ToLocalTime().ToString(CultureInfo.CurrentCulture),
TimeZoneInfo.Local.DisplayName); Compound messages You already have been warned not to concatenate strings. Instead you would probably use String.Format() as shown above. However, I must state that you should minimize use of compound messages. That is just because target grammar rules are quite commonly different, so translators might need not only to re-order the sentence (this would be resolved by using placeholders and String.Format()), but translate the whole sentence in different way based on what will be substituted. Let me give you some examples: // Multiple plural forms
English: 4 viruses found.
Polish: Znaleziono 4 wirusy. **OR** Znaleziono 5 wirusów.
// Conjugation
English: Program encountered incorrect character | Application encountered incorrect character.
Polish: Program napotkał nieznaną literę | Aplikacja napotkała nieznaną literę. Other concatenation issues Concatenation is not restricted to strings. Avoid laying out controls together, say: Remind me again in [text box with number] days. This should be re-designed to something like: Remind me again in this number of days: [text box]. Character encoding and fonts Always save, transfer, whatever text in Unicode (i.e. in UTF-8). Do not hard-code fonts - Localization might need to modify them and it will turn off default font fall-back mechanism (in case of Winforms).
Remember to allow "strange" characters in most fields (i.e. user name). Test You will probably need to implement so called pseudo translation, that is create resources for say German culture and copy your English strings adding prefix and suffix. You may also wrap placeholders to easily detect compound strings. The purpose of pseudo translation is to detect Localizability issues like hard-coded strings, layout issues and excessive use of compound messages. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98358",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29044/"
]
} |
98,406 | I began teaching a friend programming just recently (we're using Python), and when we began discussing variable creation and the assignment operator, she asked why the value on the right is assigned to the name on the left, and not vice-versa. I had not thought about it too much before, because it seemed natural to me, but she said that left-to-right seemed more natural to her, since that's how most of us read natural languages. I thought about it, and concluded that it makes code much easier to read, since the names that are assigned to (which the programmer will need to reuse) are easily visible, aligned on the left. aligned = 2
on = 'foo' + 'bar' + 'foobar'
the = 5.0 / 2
left = 2 + 5 As opposed to: 2 = aligned
'foo' + 'bar' + 'foobar' = on
5.0 / 2 = the
2 + 5 = right
# What were the names again...? Now I wonder if there are other reasons as well for this standard. Is there a history behind it? Or is there some technical reason why this is a good option (I don't know much about compilers)? And are there any programming languages that assign to the right side? | Ditto @paxdiablo. The early programming languages were written by mathematicians--actually all of them were. In mathematics, by her own principle--reading left to right-- it makes sense in the way it works. x = 2y - 4. In mathematics, you would say this: Let x be equal to 2y -4. Also, even in algebra you do this. When you solve an equation for a variable, you isolate the variable you are solving for to the left side. i.e. y = mx + b; Furthermore, once an entire family of languages-- such as the C family-- has a certain syntax, it is more costly to change. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98406",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24014/"
]
} |
98,485 | What is a negative side of your TDD experience? Do you find baby steps (the simplest fix to make the test green) annoying and useless? Do you find no-value tests (when a test makes sense initially but in the final implementation checks the same logic as another test) maintenance-critical? etc. The questions above are about things which I am uncomfortable with during my TDD experience. So I am interested whether other developers have similar feelings and what they think about them. I would be thankful for links to articles describing negative sides of TDD (Google is full of positive and often fanatical articles). | Like everything that comes under the "Agile" banner, TDD is something that sounds good in theory, but in practice it's not so clear how good it is (and also like most "Agile" things, you are told that if you don't like it, you are doing it wrong). The definition of TDD is not etched in stone: guys like Kent Beck demand that a non-compiling test must be written before a single line of code and every single line of code should be written to pass a failing test. Up front design is minimal and everything is driven by the tests. It just doesn't work. I've seen a big enterprise app developed using that methodology and I hope that it is the worst code I see in my career (it won't be far off; and that was despite having some talented developers working on it). From what I've seen it results in a huge number of poorly thought out tests that mainly validate that function calls occur, that exceptions are thrown when variables are null and the mocking framework gets a thorough workout (whoop-de-whoop); your production code gets heavily coupled to these tests and the dream of constant and easy refactoring does not appear - in fact people are even less likely to fix bad code because of all the tests it will break. In this kind of environment software managers would rather have bad software with passing tests and high code coverage than good software with fewer tests. Conversely I've heard people argue that TDD means designing the tests up front on a high level as part of the planning phase - alongside the architectural design. These tests may change during development as more information becomes available, but they have been carefully considered and offer a good guide as to what the code should actually do. To me that makes perfect sense. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98485",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7369/"
]
} |
98,509 | I was taught in university to write comments for our programs. As I write comments, I feel like I have better organization and understand the programs better. However, I am in a company where most of the code I encounter doesn't have comments. Why would a programmer not write comments? Are there any objective reasons? Do you think comments are annoying when you read others' code? I don't write comments either, and I can think of some reasons why not: Because I can already easily understand the program. My boss won't care how my program gets the job done. Nobody is likely to pick up my program after I am gone. The purpose of comments is usually to include explanations for the program; however, I found that in my company, comments are used to cover obsolete code instead. For example: /* Obsolete code */
New code What is your opinion on commenting? Should we do it? Or does it depend on the situation? | Too many comments are much worse than too few. They take time to write, and even more time to maintain. Throughout the life of a project, code will always reflect what the program does, comments will be inaccurate the instant the code changes. Once the intent to the code does not match the comment, you are on a slippery downhill slope. Comments are not tested by compilers for correctness, so incorrect comments cannot be detected and corrected by the programmer unless huge amounts of attention to detail, where it makes little difference, is made. In my experience new programmers often overcomment code with useless comments that add no value. Extreme cases such as this are common int x ; // Integer x
// Copy the string bob into fred making sure not to overflow the buffer.
strncpy(bob, fred,sizeof(bob)); (The error is intentional to make a point) However devs putting in too few comments is just as common...... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98509",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15309/"
]
} |
98,580 | I come to you as a newbie programmer who's been working on his own project (which is progressing nicely). My co-founder has also been learning how to program and has reached a point where he could probably start fixing some things and making some things happen. He asked a very good question, which was "how will this work". Something I could only theorize about as I've never programmed with someone else. Could you advise me on the best work flow. We use git. Should we own specific parts of the system? Checking code in? Code review? How do you work with >1 dev? | I work in a team which uses git, where 40+ developers are working on multiple code repositories(100+) at any given point of time. We also started out with very few developers, growing the team size in a span of few years. In the beginning though with few people you can get away with knowing only a bare minimum of git. Over time you will improve your git fu, discovering powerful features. You'll need a place to host your code. Consider using github or gitorious . Both are free to use, but your repositories will be public and visible to others. If you would like private repositories you can host them on github for free or install and host your own gitorious server . In the beginning it's better not to worry about advanced workflows which involve forking, pull requests. You can begin by using git in a centralized manner (shudder!). Treat your hosted copy as the authoritative copy of your source code. Lets call this repository upstream . One of you commit all the code to a local git repository and push it to this upstream repository. The other team member can clone this repository. A set of minimum commands you'll need to learn are clone , pull , push , add , commit , log , status , diff , branch , stash , apply , reset , format-patch , branch . Learn more about them from gittutorial . Either of you can now work on any part of the code. Do not worry what happens when both of you edit the same file. Git is really good at handling merges and fixing conflicts. Make small atomic commits and write good log messages . Use the present tense for commit logs. You can make any number of commits as you like to your local copy as it does not affect the other person's work. When you think your code is ready to be shared with others, publish it to the upstream repository. A good practice is to always pull before you push . This way you keep your repository in sync with others changes. Repeat steps 7 and 8 . Once you are comfortable with this workflow you can progress into more advanced stuff like - topical branches, forking, pull requests, merging, interactively rebasing commits etc. If you really want code reviews, it's doable with git and email alone. When your team size grows beyond 10+ this is ideally done better with some kind of online tool. So in practice there are many ways of doing this, and this is just one simple way: Create a set of commits to be reviewed with git format-patch . This will generate a set of patch files. Email these patches to the reviewer. The reviewer can apply the patches with git apply . This applies the patch but does not create a commit. Review the code and email back with suggestions. Repeat 1-2-3 until satisfactory. The reviewer confirms that the patches can be pushed upstream . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98580",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33766/"
]
} |
98,631 | Have you always been fundamentally correct in the software designs you proposed? When you give out some design that was fundamentally wrong, you tend to lose the respect of your fellow team members. No matter what you do after that you end up being crosschecked for everything thing you propose after that incident. This is especially worse when you are new to a team and they don't know your past where you have had some good success stories. Maybe the reason you gave a bad design was because of lack of experience or knowledge or both in that area. How have you who have faced such a situation dealt with it? Is this like a one time thing in your career or does it happen on and off? Does one put this behind or does one in such a situation need to look for a new line of work? Some honest feedback please... Thank you. | Once, the vp of a fortune 500 cost the company 1 million dollars with a bad business decision. When he turned in his resignation to the C.E.O the response he was given was, "I just invested One Million dollars in your education and now you are trying to leave? I do not accept." I grow tired of managers and other workers who are quick to blame a mistake on someone being a rookie or assuming that they are incompetent. There is only one way to become a good designer and that is to f@$% a few up. I don't care if my employees make a mistake, I care if they make the same one multiple times. The question is, how humble and how teachable are you? When someone presents your error to you, do you defend yourself first, or hear them out? If you are one of the rare guys who can swallow his pride and learn from it, then you are worth hanging on to. Anyone who you lose respect from for making an error once, is not someone who deserves your respect. I personally had to rewrite the first two projects I designed at least twice, but you know what? I learned a ton, and though my employers were perturbed at the time, that was quickly offset by the efficiency I gained over time by being willing to learn from my mistakes. As to the humiliation aspect and how to recover, I have two pieces of advice. First, people forget over time. Also, when someone else has the spotlight on them, they will screw up too. Then all will be equal again. Second, don't be an asshole to others when they make honest, learning, mistakes. In fact, you should encourage them unless they just really need a firm kick in the ass. You can over time help change the culture of your team by remembering how you felt when you made an honest mistake. You will eventually inspire people to be better programmers, designers, and human beings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98631",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30757/"
]
} |
98,691 | While I understand what the final keyword is used for in the context of classes and methods as well as the intent of its use for in regards to variables; however, the project I just started working on seems to have an excessive number of them and I'm curious as to the logic behind it. The following snippet of code is just a short example as I don't see much point in the final keyword for the key and value variables: private <K, V> Collection<V> getValuesForKeys(
final Map<K, V> map, final Collection<K> keys)
{
final Collection<V> values = new ArrayList<V>(keys.size());
for (final K key : keys) {
final V value = map.get(key);
if (value != null) {
values.add(value);
}
}
return values;
} I have been doing a bit of reading the usage through articles I have found via Google; however, does the pattern really do things such as help the compiler optimize the code? | There are many references suggesting a liberal use of final . The Java Language Specification even has a section on final variables . Various rules in static analysis tools also support this - PMD even has a number of rules to detect when final can be used . The pages that I linked to provide a number of points as to what final does and why you should use it liberally. For me, the liberal use of final accomplished two things in most code, and these are probably the things that drove the author of your code sample to use it: It makes the intent of the code much more clear, and leads to self-documenting code. Using final prevents the value of a primitive object from changing or a new object being made and overwriting an existing object. If there's no need to change the value of a variable and someone does, the IDE and/or compiler will provide a warning. The developer must either fix the problem or explicitly remove the final modifier from the variable. Either way, thought is necessary to ensure the intended outcome is achieved. Depending on your code, it serves as a hint for the compiler to potenitally enable optimizations. This has nothing to do with compile time, but what the compiler can do during compilation. It's also not guaranteed to do anything. However, signaling the compiler that the value of this variable or the object referred to by this variable will never change could potentially allow for performance optimizations. There are other advantages as well, related to concurrency. When applied at a class or method level, having to do with ensuring what can be overridden or inherited. However, these are beyond the scope of your code sample. Again, the articles I linked to go far more in-depth into how you can apply final . The only way to be sure why the author of the code decided to use final is to find the author and ask for yourself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/2471/"
]
} |
98,692 | I just banged my head against the table for some 20 minutes looking at a totally weird bug in PHP, and then I realized there's octal. The <%(*&#> octal. In short, I padded some literals with zeros so the code would be aligned, I know, big mistake. Forgot about octals. The question is, does anyone use octals for anything other than file permissions? (I personally prefer chmod ugo+rwx but I understand that if they have to be programmaticaly generated, it's useful to use octals.) But are they useful in any other situation? | According to Wikipedia , octals aren't as common as they used to be. As others have already mentioned, in the past, systems used to have a 12/24/36-bit word, which is more easily represented in octal than hexadecimal, but currently, the x86 and i64 architectures use a 16/32/64 bit word, which is more easily represented in hexadecimal and downright ugly in octal. Current uses, however, include: "Real" real-world use: the Yuki people and in the native Mexican Pamean languages use octal counting because they count the spaces between their fingers (see here ) The Romans had an eight-day week (called Nundical cycle) prior to the introduction of the Julian calendar. Thus, counting in weeks and days is essentially octal. Historically in the 1950's, one of the oldest debuggers, UT-3 for the TX-0 computer at MIT (an 18-bit system), could only be operated by using commands written in octal notation (see page 20 of Hackers, Heroes of the Computer Revolution ). The original ASCII encoding was often represented as 4-bits + 3-bits, i.e. a nibble (0-15) for one row and a single octal digit for the column, as this 1972 chart shows . Representation of UTF8 numbers (any start byte is \3nn and any continuation byte \2nn ) Representation of file and other permissions in Unix-like systems ( chmod ), here's an online octal permission calculator TAR files store some information in octal representation according to this . Representation of IP addresses (rare, sometimes used by spammers to obscure addresses). Microsoft accepts octal IP numbers for Ping and FTP . When fields are naturally divided into three or six bits, octal representation comes in handy then, see here . From the same link, you find that the FAA uses octals in transponders and in the venerable Arinc 429 bus standard . Integers, but also fractions on the Honeywell and other legacy systems were represented as octal. This PDF explains how to go from Honeywell octal fractions to decimal . A whole lot of legacy (CDC machines, DEC PDP-8 etc), because they used multiples of 3 bits, like 6-bit or 12-bit word sizes In 1971, octal numbers were proposed to replace the decimal system (Really? Really! see reference ). And finally, most trivially: you use it almost everyday when you write down the number 0 in some programming language that supports octals, as this user wrote . Note that modern programming languages sometimes don't support octals anymore because of the lack of use-cases and the potential for bugs. C# is an example, as Eric Lippert wrote this in the TCPL 3rd and 4th edition : C# does not support octal literals, for two reasons. First, hardly
anyone uses octal literals these days. Second, if C# supported octal
in the standard "leading zero means octal" format, then it would be a
potential source of errors. Consider this code: FlightNumber = 0541; Clearly this expression is intended as a decimal literal not an octal
literal. Related, and just for reference and curiosity, Code, The Hidden Language of Computer Hardware has an excellent easy-to-follow explanation on octal and other number systems on page 55 - 63 . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98692",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/16586/"
]
} |
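A short Python sketch to make the leading-zero trap and the file-permission use case from the answer above concrete; the temporary file exists only so the chmod call has something to act on.
import os
import stat
import tempfile

mode = 0o755                  # octal literal: rwxr-xr-x
assert mode == 493            # the same value written in decimal
assert int("541", 8) == 353   # parsing a string as octal is always explicit

# The padding trap from the question: a leading zero means octal in C, PHP and
# Python 2, so 0541 is 353 rather than 541. Python 3 rejects the bare leading
# zero outright and requires the unambiguous 0o prefix instead.

# File permissions are the one place most code still reaches for octal, because
# each digit maps onto one rwx triple (user / group / other).
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, mode)
print(stat.filemode(stat.S_IFREG | mode))  # '-rwxr-xr-x'
os.remove(path)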
98,800 | Dividing a problem to smaller ones until the individual problems can be solved independently and then combining them to answer the original question is known as the divide and conquer algorithm design technique. [See: Introduction to Algorithms by CLR] Recently, this approach to solve computational problems especially in the domain of very large data sets has been referred to as MapReduce rather than divide and conquer. My question is as follows: Is MapReduce anything more than a proprietary framework that relies on the divide and conquer approach, or are there details to it that make it unique in some respect? | If you're asking about the MapReduce architecture, then it is very much just a divide and conquer technique. However, any useful MapReduce architecture will have mountains of other infrastructure in place to efficiently "divide", "conquer", and finally "reduce" the problem set. With a large MapReduce deployment (1000's of compute nodes) these steps to partition the work, compute something, and then finally collect all results is non-trivial. Things like load balancing, dead node detection, saving interim state (for long running problems), are hard problems by themselves. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98800",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17372/"
]
} |
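A toy word count in Python to make the answer above concrete: the map and reduce steps below are plain divide and conquer running in one process, and what turns this into MapReduce proper is the distributed shuffle, load balancing and fault tolerance that a framework wraps around them.
from collections import defaultdict
from functools import reduce

documents = ["the cat sat", "the dog sat", "the cat ran"]

# Map: each input chunk is processed independently and emits (key, value) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle: group the intermediate pairs by key (a framework does this across nodes).
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: combine each key's values into a final result.
word_counts = {word: reduce(lambda a, b: a + b, counts) for word, counts in groups.items()}
print(word_counts)  # {'the': 3, 'cat': 2, 'sat': 2, 'dog': 1, 'ran': 1}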
98,833 | I am a computer engineering student. I've been thinking about
how I can handle a big project. What should be my first step to reach my goal in a more efficient and effective way? When I come up with a project, I don't know how I should start working on it. Many times, I just ignore it. However, I don't want to ignore my project ideas anymore. Now, I am asking to all of you, can anyone share his/her experiences? How should I start a project when all I have is an idea? | Forget coding and setting up a development environment for a moment. If you want to embark on a big project, the first thing you need to do is get a handle on the purpose and scope of the project. What I recommend is opening up a word processor, and writing out a 'project goals' document. Describe what the idea is all about, and the general purpose of the software you want to write. Then list out the functionality goals of the project. I don't mean spec it out, but rather describe the different pieces of functionality that the finished product should support. So, if you were writing software to run a school, you might list 'teachers management' as a piece of functionality, and then describe what that functionality would include (track contact info, class schedule, etc). Then the toughest part: It's not something you need to do right up front, but as you go along. Every bit as important as listing features you want to add is reviewing the functionality you described in your goals document, and note those features you can live without in the first version of the program. This is key to managing scope. One of the main reasons people fail at larger projects is that they don't know when to stop working on it. They don't feel it is 'done' because the ideas keep coming, and it never gets released. Eventually they lose interest, and you have yet another half finished masterpiece. So you want to make sure you have a good handle on the functionality that is truly important to achieve the basic part of your goal. That is your first target. This is how I start all non-trivial projects now. It helps me keeping the focus, and helps keeping the scope and purpose from 'evolving' during development. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98833",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
98,986 | Programming is a highly cerebral job, and one of the biggest problems I have is bringing my work home with me. It's so easy to do: whenever I get the chance to think, my mind naturally wanders to work-related matters. I find so many of the other things in my day rather mentally unstimulating and as they say, nature abhors a vacuum. Unfortunately, constantly thinking about work is stressing me out and I can't seem to just flip that switch at the end of the day. It's been causing me quite a bit of insomnia. To make matters worse, nearly all of my friends are co-workers, and many of our conversations do very little to make me forget about the week. They're fun people to be around generally, and once we've had our bitch-fest we all stop talking about work, but suffice it to say it does little to encourage me to forget about work-related matters. So what can I do to leave my programming projects behind me at the end of the day? Or failing that, what type of mentally stimulating activities could I do to occupy my non-work hours with (that are non-stressful and don't involve mind-altering drugs)? | During my last project, I used to have the same problem. I was thinking about the code during my commutes to home, before going to sleep and even as I was alone in the room with my girlfriend. That's when I knew I had to stop. I have pretty much figured it out now, and here's my advice to you. Get Positive First, accept that you can't just stop thinking about the work out of the blue. This is a habit you have acquired, and habits don't disappear at your immediate will. However you can start by reshaping your experience into a positive one . Consider this example of negative thinking: I don't know which database design is better. The deadline is coming next week and I feel I can't make the right decision at the moment. We can't afford to hesitate and If I fail now, I will be the one to blame. This leads you nowhere. It may sound trivial but if you suspect you don't work well enough, or worse, procrastinate, your conscience will revenge on you. It's five times as hard to think about work when you feel like you fail at it . If you're in this kind of situation, there is something you can do about it right now . Essentially, there are two points to add to your workflow: Make sure you have something to be proud of every evening. Make sure you have something you strive to work on every morning. The ultimate goal of this is to switch your obsession to a positive tone . You know the feeling when you go to sleep thinking about that awesome code you wrote in just about three hours that solved all the world's problems and made the bunnies happy ? You're still obsessed but now you've made a major and very important shift. Get Productive Once you're in the positive stream, you'll find it easier to effectively constrain your tasks to the working hours. Try to plan them in such way that thinking about the problems in your free time doesn't add any value . Consider this example of positive thinking: This database design problem is an interesting challenge and I'll try my best to solve it. I know I'm usually more productive in the afternoon so I'll just have some tea now and fix a couple of bugs so I can give it my full attention when I'm at my best. Before leaving, I'll evaluate my results, and if I don't make a considerable progress, next morning I will ask for some advice from the more experienced colleagues and post a question on StackOverflow as well. I'll make the final decision by tomorrow evening. 
What has changed? Now you pick your challenges and organize your working time in a way that makes sense to you . These eight hours are not just eight hours in your life, they are special , and you need to take advantage of them. Specifically, you need to: Turn these eight hours into the Perfect Time™ for problem solving . Make sure that physically being in the office empowers you . The second point is a kind of trick you can play on yourself. Ask your company to provide you with the best hardware. Do you have three monitors yet? I keep my favorite teacup at work, and I just love my armchair. I'd never want to solve a problem without it again. Okay, I made up the armchair thingie but the point is: If you learn how to get really productive at the office, you will see that the habit of bringing work home eventually fades because nothing really justifies it anymore . Get a Life! There is a great answer by pydave that does a better job of suggesting after-job activities. You must check it out . No, seriously. Thinking about a database late night? How about going to a club instead? Watching a movie? If you're not the type of person to know how to spend time, ask your friends to take you out. I can't possibly remember how many times I was initially resistant to my friends calling me somewhere and then realized what a great time it had been and how I could've easily missed it out of the passivity. Now, even when work-related thoughts are buzzing in my head, if somebody calls me in my spare time, I just say “I'm in!” and get going. A great relief comes when you realize you're still going to do what you love the next morning and there isn't a single reason to think about it right now . So go ahead and find something else to muse on! Even when you're in love, after some time, you stop thinking about your significant other every single hour. This would have exhausted you. Instead, you split your free hours so there is a time for her (or him), there is a time for your friends and there is a time for you to be alone. There is absolutely no reason why you shouldn't apply the same principle to work. Divide and conquer! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/98986",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22976/"
]
} |
99,120 | So, I am here at Assembly 2011 and there was this demo played: http://www.youtube.com/watch?v=69Xjc7eklxE&feature=player_embedded It's one single file only; it says that in the rules. So I repeat, how did they make this fit into such a small file? | It's procedurally generated. The content is not included in the exe, only the rules of how to draw it. When launched, the program draws what it needs to at runtime; it's not pre-rendered or pre-saved in any form. This is the same method used by Elite to create a vast universe of star systems, etc. It's pretty amazing what is possible today using procedural generation, and I think games will feature more of this in the future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99120",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33608/"
]
} |
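A tiny Python sketch of the idea in the answer above: the program ships only a seed and the generation rules, and the "content" is recomputed identically on every run instead of being stored, which is how a 64k demo or Elite's galaxy can describe far more data than fits in the executable. The star-system fields here are invented purely for illustration.
import random

def star_system(index, seed=42):
    # Only the seed and these rules are stored; the generated data never is.
    rng = random.Random(seed * 1_000_003 + index)
    return {
        "planets": rng.randint(1, 12),
        "radius_au": round(rng.uniform(0.5, 60.0), 2),
        "has_ring": rng.random() < 0.3,
    }

# A galaxy's worth of systems, generated on demand and never saved to disk.
galaxy = [star_system(i) for i in range(5)]
print(galaxy[3] == star_system(3))  # True: same seed and rules give the same content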
99,195 | Javascript has a feature called Automatic Semicolon Insertion where basically if the parser encounters an invalid token, and the last token before that was a line break, then the parser will insert a semicolon where the linebreak is. This enables you to basically write all your javascript code without semicolons, but you have to be aware of some edge cases, mostly if you have a return keyword and then the value you want to return on a new line. function test(){
// This will return 'undefined', because return is a valid statement
// and "john" is a valid statement on its own.
return
"john"
} Because of these gotchas there are dozens of articles with titles like 'Automatic semicolon insertion is Evil', 'Always use semicolons in Javascript' etc. But in Python no one ever uses semicolons and it has exactly the same gotchas. def test():
# This will return None, because return is a valid statement
# and "john" is a valid statement on its own.
return
"john" Works exactly the same, and yet no-one is deadly afraid of Pythons behaviour. I think the cases where the javascript behaves badly are few enough that you should be able to avoid them easily. Return + value on a new line? Do people really do that a lot? What are considered the best practices? Do you use semicolons in javascript and why? | The reason is that in Python, newlines are an unambiguous way of separating code lines; this is by design, and the way this works has been thoroughly thought through. As a result, python code is perfectly readable and unambiguous without any special end-of-statement markers (apart from the newline). JavaScript, on the other hand, was designed with a C-like syntax in mind, where statements are always terminated with a semicolon. To make the language more tolerant to errors, it tries to guess where extra semicolons should go to make the code correct. Since this was sort of retro-fitted onto the C-like syntax, it doesn't always work as expected (sometimes, the script interpreter guesses wrong), and can make for fairly counter-intuitive code. Or, arguing in terms of "explicit is better than implicit": In Python, a newline is already completely explicit, while in JavaScript, it is ambiguous, so you add the semicolon to make it explicit. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99195",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33630/"
]
} |
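A small Python illustration of the "newline is already explicit" point in the answer above: a statement ends at the newline unless the code explicitly opts into continuation with brackets or a backslash, so the parser never has to guess.
def test():
    # Ends at the newline exactly as written: the function returns None and the
    # string below is just an unreachable expression statement.
    return
    "john"

def test_continued():
    # Spanning lines is an explicit choice, for example with parentheses...
    return (
        "john"
    )

def test_backslash():
    # ...or with a trailing backslash; either way the continuation is visible.
    return \
        "john"

print(test(), test_continued(), test_backslash())  # None john john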
99,201 | Do people still use Ada -- (it was mostly used in the Defense Department) Are all applications written in Ada "Legacy"? Does Ada knowledge still sell? | Do people still use Ada -- (it was mostly used in the Defense Department) It appears that Ada was last updated in 2005 and there's work toward an Ada 2012 , so the language itself is still alive and kicking. As far as use, Ada isn't mandated for use in the Department of Defense anymore. Most of the work that I've seen and done has been in Java, C, and C++, but there's also use of the .NET framework and I've even heard of projects running other JVM languages such as Scala, depending on how the system will be used. There's probably a lot of code out there in Ada, so I wouldn't be surprised if there's a lot of code reuse and maintenance happening. Given the nature of defense projects, it's hard to come up with specific numbers as to its use. There is a list of Ada projects and users , but it looks like the last update was in June 2008. There might be more recent lists out there, but I couldn't quickly find any. Are all applications written in Ada "Legacy"? If you're asking if there is new Ada development, I wouldn't be surprised if new systems are being written from the ground up in Ada. I would suspect there wouldn't be too many, but there are probably some out there. However, I would suspect that most of the Ada work out there is maintenance or upgrades on existing systems, not new development. Does Ada knowledge still sell? Knowledge in anything sells, if you can find someone who is looking for that skill set. Even if you aren't using Ada in development, I've found that knowing a particular language or framework has changed my opinion and how I use other languages or frameworks. I would suspect that knowing Ada would give you an insight into other methods to design and construct software in other languages as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99201",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
99,229 | Why does the Lisp community prefer to accumulate all the parentheses at the end of the function: (defn defer-expensive [cheap expensive]
(if-let [good-enough (force cheap)]
good-enough
(force expensive))) Why not employ a convention like C or Java? Well ok, Lisp is much more older than those languages, but I'm talking about the contemporary Lispers. (defn defer-expensive [cheap expensive]
(if-let [good-enough (force cheap)]
good-enough
(force expensive)
)
) Note: Code snippet is from the book "The Joy of Clojure". | One reason Algol-based languages encourage the braces on their own line is to encourage adding more lines in between the delimiting braces without having to move the braces. That is, if one starts out with if (pred)
{
printf("yes");
} it's easy to come along and add another statement within the braces: if (pred)
{
printf("yes");
++yes_votes;
} Had the original form been if (pred)
{ printf("yes"); } then we'd have to have "moved" two braces, but my example is more concerned with the latter. Here, the braces are delimiting what's intended to be a sequence of statements , mostly invoked for side effect. Conversely, Lisp lacks statements; every form is expression , yielding some value—even if in some rare cases (thinking of Common Lisp), that value is deliberately chosen to be "no values" via an empty (values) form. It's less common to find sequences of expressions , as opposed to nested expressions . The desire to "open up a sequence of steps until the closing delimiter" doesn't arise as often, because as statements go away and return values become more common currency, it's more rare to ignore the return value of an expression, and hence more rare to evaluate a sequence of expressions for side effect alone. In Common Lisp, the progn form is an exception (as are its siblings): (progn
(exp-ignored-return-1)
(exp-ignored-return-2)
(exp-taken-return)) Here, progn evaluates the three expressions in order, but discards the return values of the first two. You could imagine writing that last closing parenthesis on its own line, but note again that since the last form is special here (not in the Common Lisp sense of being special , though), with distinct treatment, it's more likely that one would add new expressions in the middle of the sequence, rather than just "adding another one on to the end," as callers would then be impacted not just by any new side effects but rather by a likely change in return value. Making a gross simplification, the parentheses in most parts of a Lisp program are delimiting arguments passed to functions—just like in C-like languages—and not delimiting statement blocks. For the same reasons we tend to keep the parentheses bounding a function call in C close around the arguments, so too do we do the same in Lisp, with less motivation to deviate from that close grouping. The closing of the parentheses is of far less import than the indentation of the form where they open. In time, one learns to ignore the parentheses and write and read by shape—much like Python programmers do. However, don't let that analogy lead you to think that removing the parentheses entirely would be worthwhile. No, that's a debate best saved for comp.lang.lisp . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99229",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10097/"
]
} |
99,243 | Can someone explain the concrete reasons why BDFL choose to make Python lambdas single line? This is good: lambda x: x**x This results in an error: lambda x:
x**x I understand that making lambda multi-line would somehow "disturb" the normal indentation rules and would require adding more exceptions, but isn't that worth the benefits? Look at JavaScript, for example. How can one live without those anonymous functions? They're indispensable. Don't Pythonistas want to get rid of having to name every multi-line function just to pass it as an argument? | Guido van van Rossum answered it himself: But such solutions often lack "Pythonicity" -- that elusive trait of a good Python feature. It's impossible to express Pythonicity as a hard constraint. Even the Zen of Python doesn't translate into a simple test of Pythonicity... In the example above, it's easy to find the Achilles heel of the proposed solution: the double colon, while indeed syntactically unambiguous (one of the "puzzle constraints"), is completely arbitrary and doesn't resemble anything else in Python... But I'm rejecting that too, because in the end (and this is where I admit to unintentionally misleading the submitter) I find any solution unacceptable that embeds an indentation-based block in the middle of an expression. Since I find alternative syntax for statement grouping (e.g. braces or begin/end keywords) equally unacceptable, this pretty much makes a multi-line lambda an unsolvable puzzle. http://www.artima.com/weblogs/viewpost.jsp?thread=147358 Basically, he says that although a solution is possible, it's not congruent with how Python is. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99243",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
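To make the workaround behind the question and the answer above concrete: when the body outgrows a single expression, you give it a name and pass the name, since functions are first-class values either way. The order data and the helper name below are made up for illustration.
orders = [("apples", 3), ("pears", 10), ("bananas", 7)]

# Fine as a lambda: the key is a single expression.
by_quantity = sorted(orders, key=lambda order: order[1])

# Needs statements (a conditional, unpacking, perhaps logging), so it becomes a
# named def passed by name, which is Python's answer to the multi-line lambda.
def bulk_discount_key(order):
    name, quantity = order
    if quantity >= 5:
        return (0, name)   # bulk items sort first, alphabetically
    return (1, name)

by_discount = sorted(orders, key=bulk_discount_key)
print(by_quantity)   # [('apples', 3), ('bananas', 7), ('pears', 10)]
print(by_discount)   # [('bananas', 7), ('pears', 10), ('apples', 3)]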
99,389 | We need to create an API to our system. How do I convince my boss that REST is a better option than SOAP (or XML-RPC)? I say REST is... easier to implement and maintain not much new to learn -- plain old HTTP lot of people have chosen it Yahoo ~ Facebook ~ Twitter will be lot quicker to code My boss says SOAP is... richer and more expressive it's all standard XML (SOAP, WSDL, UDDI) -- and so will be easier to consume well standardized than REST Google uses a lot of SOAP it is important to adhere to SOAP standards than to create a custom XML schema in REST | From a guy who's used both SOAP and REST extensively... BOSS says SOAP is... richer and more expressive Anytime someone says a product is "rich" I want to become violently ill. I can't think of a more cliche comment to make about a technology or platform. Basically you're saying "I think this product is great, but I don't have any actual facts to back it up ." I don't know what he means by "expressive" so I can't really comment on it... it's all standard XML (SOAP, WSDL, UDDI) -- and so will be easier to
consume This is patently false. SOAP can be finicky, especially when you get into things like complex types and authentication headers. This is especially true when you start doing cross-language communication - getting PHP to properly consume and communicate with a .NET SOAP service that was using complex types and authentication was an exercise in keyboard-snapping horror that makes me wake up in a cold sweat to this day. REST is definitely easier to consume - you just provide the URL and done! You have your data! There are some drawbacks to this, depending on your needs, but for many web services this all that's needed. well standardized than REST It is "standardized" by the fact that it has a schema. That's it. Aside from that, you're still going to have to work with somebody else's data , which is never a picnic no matter what communication protocol you use. And REST has a standard - its called HTTP . It works pretty well. Google uses a lot of SOAP They used to use SOAP (they may for some products still, but not many). The majority of their web services are solidly REST-based. Here's a link showing that they abandoned a SOAP service in favor of REST. it is important to adhere to SOAP standards than to create a custom
XML schema in REST This sounds like one of those comments made by superiors with limited understanding of the actual technology at hand. There will only be one part of the SOAP packet that is standardized - the message header, and the body wrapper. Everything in between is your own XML. You still have to create your own message. The message itself does not conform to a specific standard. It is still a serialized object or group of objects. As a closing note, SOAP versus REST is a big topic, one without a concrete answer and you'll probably get different answers depending on who you talk to. In fact, I can't say for certain that in your particular case REST WILL be better, but I can say that your boss's arguments are weak and are indicative of a lack of understanding about the distinction between the two. I've used both technologies and my hard and fast conclusion is this: there is no hard and fast conclusion and, like so many other tech decisions, it depends on the needs of the organization. The best solution, truly, is thoughtful research, an open discussion between the people working on the project to find the best solution, and an honest look at your needs. Here are a few links to existing discussions that may be beneficial. https://stackoverflow.com/questions/3285704/should-a-netflix-or-twitter-style-web-service-use-rest-or-soap http://www.prescod.net/rest/rest_vs_soap_overview/ https://stackoverflow.com/questions/209905/rest-and-soap | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99389",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
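As a sketch of the "you just provide the URL" claim in the answer above, this is what consuming a typical JSON-over-HTTP endpoint looks like with nothing but the Python standard library; the URL and field name are placeholders rather than a real service.
import json
from urllib import request

url = "https://api.example.com/v1/orders/42"  # hypothetical endpoint

with request.urlopen(url, timeout=10) as response:
    order = json.load(response)

# The payload is plain JSON: no envelope, no WSDL, no generated proxy class
# sitting between the caller and the data.
print(order.get("status"))
A SOAP client for the same data would typically involve a WSDL and generated stubs, which is the trade-off the answer is weighing.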
99,445 | I recently asked a question on Stack Overflow to find out why isset() was faster than strlen() in PHP . This raised questions around the importance of readable code and whether performance improvements of micro-seconds in code were worth even considering. My father is a retired programmer, and I showed him the responses. He was absolutely certain that if a coder does not consider performance in their code even at the micro level, they are not good programmers. I'm not so sure - perhaps the increase in computing power means we no longer have to consider these kind of micro-performance improvements? Perhaps this kind of considering is up to the people who write the actual language code? (of PHP in the above case). The environmental factors could be important - the Internet consumes 10% of the world's energy. I wonder how wasteful a few micro-seconds of code is when replicated trillions of times on millions of websites? I'd like to know answers preferably based on facts about programming. Is micro-optimisation important when coding? My personal summary of 25 answers, thanks to all. Sometimes we need to really worry about micro-optimisations, but only in very rare circumstances. Reliability and readability are far more important in the majority of cases. However, considering micro-optimisation from time to time doesn't hurt. A basic understanding can help us not to make obvious bad choices when coding such as if (expensiveFunction() || counter < X) Should be if (counter < X || expensiveFunction()) ( Example from @zidarsk8 ) This could be an inexpensive function and therefore changing the code would be micro-optimisation. But, with a basic understanding, you would not have to, because you would write it correctly in the first place. | I both agree and disagree with your father. Performance should be thought about early, but micro-optimization should only be thought about early if you actually know that a high percent of time will be spent in small CPU-bound sections of code. The problem with micro-optimization is that it is usually done without having any concept of how programs actually spend more time than necessary. This knowledge comes from experience doing performance tuning, as in this example , in which a seemingly straightforward program, with no obvious inefficiencies, is taken through a series of diagnosis and speedup steps, until it is 43 times faster than at the beginning. What it shows is that you cannot really guess or intuit where the problems will be. If you perform diagnosis, which in my case is random-pausing , lines of code responsible for a significant fraction of time are preferentially exposed. If you look at those, you may find substitute code, and thereby reduce overall time by roughly that fraction. Other things you didn't fix still take as much time as they did before, but since the overall time has been reduced, those things now take a larger fraction, so if you do it all again, that fraction can also be eliminated. If you keep doing this over multiple iterations, that's how you can get massive speedups, without ever necessarily having done any micro-optimization . After that kind of experience, when you approach new programming problems, you come to recognize the design approaches that initially lead to such inefficiencies. In my experience, it comes from over-design of data structure, non-normalized data structure, massive reliance on notifications, that sort of thing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99445",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33747/"
]
} |
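A small Python sketch of the "diagnose before you tune" point in the answer above: measure where the time actually goes, then compare candidate rewrites of the hot spot; the workload here is invented purely to have something to profile.
import cProfile
import timeit

def parse(row):
    return [int(x) for x in row.split(",")]

def load(rows):
    return [parse(r) for r in rows]

rows = ["1,2,3,4,5"] * 50_000

# Step 1: see which functions dominate the runtime instead of guessing.
cProfile.run("load(rows)", sort="cumulative")

# Step 2: once a hot spot is known, time alternative implementations head to head.
print(timeit.timeit(lambda: load(rows), number=5))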
99,450 | New to the site so sorry if this is the wrong section. I'm starting app development and wondering what is the best practice when initially releasing my app. Do developers tend to keep some of the features for future updates to keep users active, or do they try to release the most complete app possible? Basically, is it advised to release an app as soon as possible, and then periodically update it to the complete app you have in mind, or wait until you have it fully developed and release it with fewer update prospects? EDIT: Thanks for the answers. I am currently just designing the app and writing down all the features I can think of and trying to prioritize which to include to the initial launch. Based on the answers given, I think I will get a MVP (thanks for the term) out as soon as it is ready, and then update with new features as soon as they are built. I am not holding back built features, was just torn between if I should build them all before launch or just the necessary ones, release, and then build the others. As far as I am aware this isn't a clone. It is my first app though and I will be using it as a learning experience | Most people producing their first app (at least those who turn out to have a successful product) release what they call an MVP first. MVP is Minimum Viable Product - the app at this point contains the bare minimum amount of features necessary to be a useful product. Then, based on user/customer feedback, you can work on new features. The idea is, you'll only know what matters to your customers once people start using it. Some of the plans you had prior to launch may be thrown away entirely, or revised, in the light of the feedback you receive. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99450",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33751/"
]
} |
99,543 | I have read the Wikipedia article on Indent Styles , but I still don't understand. What is the difference between K&R and 1TBS? | The biggest difference between K&R and the One True Brace Style (1TBS) is that in the 1TBS, all if , else , while , and for statements have opening and closing braces, even if they aren't necessary. The purpose is to make it easy to insert new statements and know exactly how they will be grouped. As an example: K&R: int i;
for (i = 0; i < 10; i++)
printf("Hi."); 1TBS: int i;
for (i = 0; i < 10; i++) {
printf("Hi");
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99543",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20570/"
]
} |
99,564 | I've been in my current position for a long time (10 years) and in that time, I feel like I've performed well as a designer, system architect, and programmer. However, all that work has been on internal projects that aren't accessible from the outside world. I see a lot of advice like this that suggests 'If you can literally point to something and say "I wrote this" it's very impressive'. What about if you can 'literally point to' nothing at all, because while you're a passionate programmer who (as the classic Joel-ism puts it) "is smart and gets things done", all those things are invisible? Do I need to start frantically committing to open-source projects? Start a "real world" (not corporate-internal) blog? Frankly, I spent most of my 10 years happy here, and only recently have considered leaving for greener pastures. Am I going to be sunk before I start looking because of my focus on work my current employer, at the expense of my "public presence"? | Showing external projects is helpful but it's never been an blocker for me hiring or getting hired in the past. If you can talk about the projects you worked on and explain to whoever is interviewing you some detail about what you did, what went well, how it provided value to your organization. Getting excited about what you did and programming in general is a good way to score points in a lot of places. Showing interest in open source stuff, having a github account, even if all you do is follow some projects, maybe a small patch, does show some value. I've found most employers don't actually try and look at the details of my open source projects on github, they are just excited to see it ;) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99564",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6413/"
]
} |
99,680 | I used to commit files which I wanted to group. But one of my colleagues said that committing the whole working project is better than committing files. I think it makes sense, but if I commit like that I sometimes have to commit several different pieces of work at once. What is the best practice for committing? | Commit a single unit of work. Otherwise reverting the commit or remerging it elsewhere (different branch) will be painful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99680",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22362/"
]
} |
99,692 | C\C++ specifications leave out a large number of behaviors open for compilers to implement in their own way. There are a number of questions that always keep getting asked here about the same and we have some excellent posts about it: https://stackoverflow.com/questions/367633/what-are-all-the-common-undefined-behaviour-that-a-c-programmer-should-know-abo https://stackoverflow.com/questions/4105120/what-is-undefined-behavior https://stackoverflow.com/questions/4176328/undefined-behavior-and-sequence-points My question is not about what undefined behavior is, or is it really bad. I do know the perils and most of the relevant undefined behavior quotes from the standard, so please refrain from posting answers about how bad it is. This question is about the philosophy behind leaving out so many behaviors open for compiler implementation. I read an excellent blog post that states that performance is the main reason. I was wondering if performance is the only criteria for allowing it, or are there any other factors which influence the decision to leaving things open for compiler implementation? If you have any examples to cite about how a particular undefined behavior provides sufficient room for compiler to optimize, please list them. If you know of any other factors other than performance, please back your answer with sufficient detail. If you do not understand the question or do not have sufficient evidences/sources to back your answer, please do not post broadly speculating answers. | First, I'll note that although I only mention "C" here, the same really applies about equally to C++ as well. The comment mentioning Godel was partly (but only partly) on point. When you get down to it, undefined behavior in the C standards is largely just pointing out the boundary between what the standard attempts to define, and what it doesn't. Godel's theorems (there are two) basically say that it's impossible to define a mathematical system that can be proven (by its own rules) to be both complete and consistent. You can make your rules so it can be complete (the case he dealt with was the "normal" rules for natural numbers), or else you can make it possible to prove its consistency, but you can't have both. In the case of something like C, that doesn't apply directly -- for the most part, "provability" of the completeness or consistency of the system isn't a high priority for most language designers. At the same time, yes, they probably were influenced (to at least some degree) by knowing that it's provably impossible to define a "perfect" system -- one that's provably complete and consistent. Knowing that such a thing is impossible may have made it a bit easier to step back, breathe a little, and decide on the bounds of what they would try to define. At the risk of (yet again) being accused of arrogance, I'd characterize the C standard as being governed (in part) by two basic ideas: The language should support as wide a variety of hardware as possible (ideally, all "sane" hardware down to some reasonable lower limit). The language should support writing as wide a variety of software as possible for the given environment. 
The first means that if somebody defines a new CPU, it should be possible to provide a good, solid, usable implementation of C for that, as long as the design falls at least reasonably close to a few simple guidelines -- basically, if it follows something on the general order of the Von Neumann model, and provides at least some reasonable minimum amount of memory, that should be enough to allow a C implementation. For a "hosted" implementation (one that runs on an OS) you need to support some notion that corresponds reasonably closely to files, and have a character set with a certain minimum set of characters (91 are required). The second means it should be possible to write code that manipulates the hardware directly, so you can write things like boot loaders, operating systems, embedded software that runs without any OS, etc. There are ultimately some limits in this respect, so nearly any practical operating system, boot loader, etc., is likely to contain at least a little bit of code written in assembly language. Likewise, even a small embedded system is likely to include at least some sort of pre-written library routines to give access to devices on the host system. Although a precise boundary is difficult to define, the intent is that the dependency on such code should be kept to a minimum. The undefined behavior in the language is largely driven by the intent for the language to support these capabilities. For example, the language allows you to convert an arbitrary integer to a pointer, and access whatever happens to be at that address. The standard makes no attempt at saying what will happen when you do (e.g., even reading from some addresses can have externally visible affects). At the same time, it makes no attempt at preventing you from doing such things, because you need to for some kinds of software you're supposed to be able to write in C. There is some undefined behavior driven by other design elements as well. For example, one other intent of C is to support separate compilation. This means (for example) that it's intended that you can "link" pieces together using a linker that follows roughly what most of us see as the usual model of a linker. In particular, it should be possible to combine separately compiled modules into a complete program without knowledge of the semantics of the language. There is another type of undefined behavior (that's much more common in C++ than C), which is present simply because of the limits on compiler technology -- things that we basically know are errors, and would probably like the compiler to diagnose as errors, but given the current limits on compiler technology, it's doubtful that they could be diagnosed under all circumstances. Many of these are driven by the other requirements, such as for separate compilation, so it's largely a matter of balancing conflicting requirements, in which case the committee has generally opted to support greater capabilities, even if that means lack of diagnosing some possible problems, rather than limiting the capabilities to ensure that all possible problems are diagnosed. These differences in intent drive most of the differences between C and something like Java or a Microsoft's CLI-based systems. The latter are fairly explicitly limited to working with a much more limited set of hardware, or requiring software to emulate the more specific hardware they target. 
They also specifically intend to prevent any direct manipulation of hardware, instead requiring that you use something like JNI or P/Invoke (and code written in something like C) to even make such an attempt. Going back to Godel's theorems for a moment, we can draw something of a parallel: Java and CLI have opted for the "internally consistent" alternative, while C has opted for the "complete" alternative. Of course, this is a very rough analogy -- I doubt anybody's attempting a formal proof of either internal consistency or completeness in either case. Nonetheless, the general notion does fit fairly closely with the choices they've taken. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99692",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33851/"
]
} |
99,876 | I want to know, how important is it to program in your spare time? Is it necessary to work your 9-5 as a programmer and then get home and work on your hobby to become a better programmer? This said, I know you only get better at programming by, well, programming. Do prospective employers take hobby programming into account in an interview or do they ask this just out of curiosity? I feel guilty for not having a hobby project, but everything I can think of doing has already been done. So I am kind of in two minds about this, start something that has already been done or leave it until I come up with something original? | I feel guilty for not having a hobby project Feeling guilty is a crazy reason to embark on a programming project. Probably a good way to start hating programming, too. Work on something because you want to , not because you think you're supposed to . but everything I can think of doing has already been done. Bah! Who cares if it's already been done? Do it again! Do it better! Or, accept that you may not be able to do it better and do it anyway. Where would Microsoft be if they said "well, someone has already created a database/spreadsheet/word processor/operating system/IDE/project manager/money manager/C-based single-inheritance dynamic object-oriented language/web browser/web server/music player/mobile platform/search engine, so we'll look for something else to do..."? Seriously, if you write a web server, it's probably not going to out-perform Apache, but you'll definitely learn valuable lessons in the process. You're unlikely to outsell Angry Birds, but writing a simple little video game will teach you a lot too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99876",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33908/"
]
} |
99,894 | HTTP redirects are done via HTTP codes 301, and 302 (maybe other codes also) and a header field known as "Location" which has the address of the new place to go. However, browsers always send a "GET" request to that URL. However, many times you need to redirect your user to another domain via POST (bank payments for example). This is a common scenario, and really a requirement. Does anybody know why such a common requirement has been neglected in HTTP specification? The workaround is to send a form (with parameters in hidden fields) with action set to the target location (the value of the Location header field) and use setTimeout to submit the form to the target location. | In HTTP 1.1, there actually is a status code ( 307 ) which indicates that the request should be repeated using the same method and post data . As others have said, there is a potential for misuse here which may be why many frameworks stick to 301 and 302 in their abstractions. However, with proper understanding and responsible usage, you should be able to accomplish what you're looking for. Note that according to the W3.org spec , when the METHOD is not HEAD or GET , user agents should prompt the user before re-executing the request at the new location. You should also provide a note and a fallback mechanism for the user in case old user agents aren't sure what to do with a 307. Using this form: <form action="Test307.aspx" method="post">
<input type="hidden" name="test" value="the test" />
<input type="submit" value="test" />
</form> And having Test307.aspx simply return 307 with the Location: http://google.com , Chrome 13 and Fiddler confirm that "test=the test" is indeed posted to Google. Of course the further response is a 405 since Google doesn't allow the POST, but it shows the mechanics. For more information see List of HTTP status codes and the W3.org spec . 307 Temporary Redirect (since HTTP/1.1) In this occasion, the request
should be repeated with another URI, but future requests can still use
the original URI. In contrast to 303, the request method should not
be changed when reissuing the original request. For instance, a POST
request must be repeated using another POST request. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99894",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31418/"
]
} |
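As a companion to the ASP.NET form example in the answer above, here is a minimal Java servlet sketch that issues the same kind of 307 response; the class name and the target URL are assumptions made up for illustration, not part of the original answer.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet: answers a POST with 307 so the client repeats the POST (same method and body) at the new location.
public class Redirect307Servlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest request, HttpServletResponse response) throws IOException {
        response.setStatus(307); // Temporary Redirect: the method must not change on re-issue
        response.setHeader("Location", "https://example.com/payment"); // assumed target endpoint
    }
}
As the answer notes, user agents are expected to ask the user before re-submitting a non-GET request, so older or stricter clients may still need a fallback.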
99,980 | Possible Duplicates: Frankly, do you prefer Cowboy coding? Prototyping vs. Clean Code at the early stages Good design: How much hackyness is acceptable? Does craftsmanship pay off? Which is better: Coding fast, not caring about possible errors and limits, maybe forgetting to check the input, NULL returns etc, just to complete the task or to get to the milestone, and then correct all possible errors. Coding slow, checking every line you write many times, writing tests, and checking every possible input to make a code as bug free as you can but taking weeks to write a working program. Actually I'm using the 2nd way but it's frustrating to work, work, work and see only small improvements every day... | This depends ENTIRELY on the type of work that you're doing. For a lot of situations Test-Driven-Development, like you're currently doing, is definitely the way to go. Overall you'll spend less time on the project since you're not having to constantly go back and fix bugs and edge cases that you didn't account for the first time around. With the first option, yes, you'll finish in record time but then you'll spend a good chunk of time going back and fixing all the mistakes. That being said , if you're in a situation where you need to get a product out the door as fast as possible (maybe you're trying to beat the competition for "first to market" advantage), toss TDD out the window and get something working and out there. A lot of great products have happened this way. Your product won't get you anywhere if you're endlessly polishing and fixing bugs if a competitor with a "good enough" product is eating your lunch. Consider Facebook. Mark Zuckerberg created Facebook in his dorm room with PHP and MySQL, at a time when probably twenty other people or organizations were planning or already releasing social networking sites. Part of the reason for Facebook's success was that he got out the door in a hurry and beat a good part of the competition solely because he was first to the college market. If you have time: unit test . If you are in a race: code , and get it working - worry about the mistakes later. EDIT: Obviously, you can't release a product that is so buggy its unusable. When using the "out the door" method, you need to realize when hunting and fixing additional bugs will only add marginal value to your product when compared with releasing now. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/99980",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33962/"
]
} |
100,031 | When you work across multiple programming languages, there is a problem you encounter... A valid name (identifier) in one language is invalid in another. For example... var new function this are keywords in JavaScript, but you can use them freely in Python. Similarly list dict def can be used in JavaScript without problems. This is very common, and something programmers generally quickly become acquainted with when they program in multiple languages. However, when you're working in collaboration, you have to lay out some rules/guidelines for your team members to ensure consistency and uniformity in the code. With teams, this issue becomes more important than simply remembering what's valid and what's not while you program. So, my question is, what strategies do you adopt... simply take a union of all the reserved words present in all the languages you use, hand out a list to everybody and abstain from their use? accept the diversity and take extra pains when "context switching" adopt an intermediate ground where one language can use the other's, but not vice-versa (Note: I am only talking about Python and JavaScript in this question ... but please answer the question more broadly) -- UPDATE -- Thanks for all the answers. So the general consensus I see emerging is to let programmers use any name regardless of what they do in other languages -- as long as names are descriptive, it doesn't hurt. | Having programmed in quite a few languages over the 30+ years of my experience, I would say that trying to find naming standards that will work in any language is probably a pie in the sky idea. Early on in my experience, I tried to use #define macros in C to create things that would make my C code look like the Pascal code that I was using before that. I was so used to programming in Pascal that I figured if I could just make C work like Pascal it would make me more productive. I soon discovered that I was wrong. What made me more productive was to learn C and to not try to leverage Pascal syntax into another language just because it made me more comfortable. I think you will be potentially constraining your programmers by preventing them from doing something in one language, just because it is wrong to do it in another language you are using. If you limit your naming conventions to things that make sense to explain the variable use, then you will probably create good code, in whatever language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100031",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
100,095 | Should organizations penalize developers for defect reports filed against their work products? I was having a discussion with my friend where he asks if a manager taking the number of defects filed against a developer is justified. My take is no, because developers can introduce defects, but by holding those against him can cause unnecessary negative feelings in the developer's mind. What are the problems with penalizing developers based on defects that they inject? Does your organization penalize developers for creating defects in work products? | Sounds like it would do more harm than good. Ignoring for a moment whether it is fair for a manager to do that, let's look at the logistics... Problem 1: Are all bugs created equal? Developer 1 introduces a bug: Erases all customer data and curses at them. Developer 2 introduces two bugs: Form labels are not left aligned, and the calendaring feature is off by 1 second if an event is created that spans two leap years. So clearly developer 2 deserves more grief from their manager because they have double the bug rate. Of course not, so you come up with a bug rating system so developers with trivial bugs don't get dinged so hard. But wait, should the system factor in a modifier for a developer who is clearly making the same trivial mistake repeatedly and wasting the tester's time because they never learn from their mistakes? Maybe, hmmm. This is complicated. Problem 2: What counts as a bug? Manager - This report was supposed to include a running total, that's one bug for you! Developer - That wasn't in the requirements, that's a FEATURE not a bug. Problem 3: How do you group bugs? Developer - "[Manager's Name], the testers filed 10 bugs against me because the velocities were incorrect on 10 different screens, but that was all related to a single bug in the getVelocity function. We argued for 3 hours about it, but they won't budge. We would like a sit down meeting with you to decide how many bugs should be filed. Oh and by the way, there is no way we are going to hit the code complete deadline tomorrow." Problem 4: More SLOC probably means more bugs Developer 1 sits on his butt all day, but manages to write 3 bug-free lines of code between arguments on Reddit over Arizona's immigration law. Developer 2 works hard all day and churns out a fully functional AI that won't kill John Connor the first chance it gets." So obviously you want to penalize the developer who makes more progress and/or takes more risks by innovating, right? Summary There are probably workable solutions to several of these, but as a manager of a programming team trying to meet a deadline do you really want to have everyone spending time arguing about what counts as a bug, what counts as a discrete bug, the importance of a bug, etc.? None of these things move your project forward and this will be poison for teams who will be forced to compete on issues that have no meaningful impact on the actual software being created. Not to mention what it does to your employee culture to focus this much effort on finding ways to make sure that every employee's mistakes are meticulously recorded so they can be thrown back in their face later. Inevitably you will have developers cajoling testers to work around your bug tracking system and report issues directly so they can fix them without it going in their "PERMANENT FILE". Then you don't even have an accurate accounting of bugs or what people are really working on. Then there is the issue of adverse impact. 
That is HR talk for, you better have pretty good documentation before you start penalizing employees, especially financially. And if any of them are a protected class (minorities, veterans, women, handicapped, etc.) you better be triple sure that whatever system you have set up doesn't discriminate against one of them based on membership in that class (or that a judge could be convinced as such), even if it is just an unintended side-effect of the plan. So ultimately, you are not creating incentives to create less bugs, which is hard, but rather to negotiate away bugs by minimizing their importance or blaming them on someone else. Short Version No. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100095",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1028/"
]
} |
100,214 | I was just reviewing some code I wrote a while back, and I can see that I have a couple of private methods that throw ArgumentNullExceptions and/or ArgumentExceptions if there are issues with the method's parameters. I guess my rationale is that it helps future-proof the application if someone attempts to "misuse" the method in future. However, given it is a private method and the people who are likely to call this method can see the associated comments and code, it is arguably unnecessary to throw these. It certainly doesn't hurt to have them, although it does add clutter. My feeling is that these exceptions are generally more useful on something like an API that is going to be exposed publicly. | Normally, for private methods you do not throw exceptions since, as you wrote, the developer is supposed to know how and where he is calling the method from. As such, the variables passed as parameters to the private method should be checked outside of the method, that is, before calling it.
Throwing "IllegalArgumentException" and other such exceptions is considered good practice for public methods (whether you are writing an "API" or not). For those cases where you want to throw "IllegalArgumentException" it's worth mentioning that there has been an Assert class in the Spring API for Java since version 1.1.2. It has been very helpful - to me at least - in writing less code to perform checks. You may, however, use "asserts" to check parameters in private methods. That is one of their true purposes. There are more reasons to use them; check out the following link, which also explains thoroughly when to use asserts and when to use exceptions.
Asserts are not meant to run in production code and are stripped or disabled by default: the .NET compiler removes Debug.Assert calls from Release builds, and the JVM skips Java assertions unless they are explicitly enabled. So they are what you are looking for: helping the developers, invisible to the users. In Java you have to pass a special flag ("-ea") to the JVM to enable assertions at runtime. You may consider them "debugging" friends. Here is how to use asserts in: Java: http://download.oracle.com/javase/1.4.2/docs/guide/lang/assert.html .NET: http://msdn.microsoft.com/en-us/library/system.diagnostics.debug.assert(v=VS.100).aspx | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100214",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33922/"
]
} |
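To make the split described in the answer above concrete, here is a small Java sketch; the class and method names are invented for illustration. The public method validates its arguments with exceptions, while the private helper only asserts, relying on the caller having checked already.
import java.util.Objects;

// Hypothetical example: argument checks live at the public boundary, asserts guard internal assumptions.
public class AccountService {

    // Public API: fail fast with meaningful exceptions when callers misuse the method.
    public void credit(String accountId, long amountInCents) {
        Objects.requireNonNull(accountId, "accountId must not be null");
        if (amountInCents <= 0) {
            throw new IllegalArgumentException("amountInCents must be positive: " + amountInCents);
        }
        applyDelta(accountId, amountInCents);
    }

    // Private helper: callers inside this class are expected to pass valid data,
    // so an assert documents and checks that assumption cheaply.
    private void applyDelta(String accountId, long delta) {
        assert accountId != null && delta != 0 : "applyDelta called with invalid arguments";
        // ... update the balance here ...
    }
}
Running the JVM with -ea turns the assert on during development and testing; without the flag it costs nothing in production.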
100,480 | Possible Duplicate: Should newbies use IDE autocomplete (Intellisense)? I was having a conversation with another developer the other night about the pros and cons of Visual Studio. He was of the opinion that Intellisense reduces productivity. Of course I thought that was insane but I could be wrong. Is there any evidence to support the idea that Intellisense reduces productivity? | Your friend was probably implying that intellisense allows developers to never memorize all of the properties and methods of every type of object, which in turn reduces the speed at which they write code. But for anyone who has ever used a type, control, class, or object with which he or she was unfamiliar, intellisense is infinitely useful in reducing wasted time due to reading through the entire class. So, basically, according to me, your friend is generally wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100480",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4379/"
]
} |
100,488 | I know there are similar questions on here, which I've read, but I recently read this post by Joel Spolsky: How can I teach a bright person, with no programming experience, how to program? And it got me thinking about my way of learning and whether it might actually be harmful in the long run. I've dabbled with various languages but C# is my first serious one, I've read "Head First C#" and created a few projects. But after reading the post above I've found it a bit disheartening that I may be going about it all wrong, obviously I respect Joel's opinion which is what has thrown me a bit. I've started reading "Code" as recommended in the reading list and I'm finding it pretty hard going, although enjoyable. I feel like it's taken the shine off of my "noobish hacking about" in Visual Studio. So now I'm unsure as to what path I should take? Should I take a step back and follow Joel's advice and start reading? I guess my main aim is just to become a good programmer, like everyone else, but I don't want to be going into bad practice by learning a .NET language when someone who's opinion I respect thinks that it is harmful. Thoughts? | I've dabbed with various languages but C# is my first serious one,
I've read "Head First C#" and created a few projects. But after
reading the post above I've found it a bit disheartening that I may be
going about it all wrong, obviously I respect Joel's opinion which is
what has thrown me a bit. I respect Joel's opinions too, but they are just that: opinions. There's absolutely nothing wrong with using C# as a starting language. The biggest advice I can give you, or anyone doing any programming (even if they're starting in vanilla C!) is to not be stagnant, and don't be religious. I don't care what programming language you're starting with, or how pure or righteous that language is - in this day and age you cannot afford to sit in only one world of programming. For example, I started programming with PHP3 back when I was a teenager. I built some small web apps and a few web sites with it; I thought that I was a genius programmer and that I could do anything with PHP3, and frowned on people who were all about ASP or BASIC. Boy, was I wrong. I didn't start to truly blossom as a developer until I began expanding my horizons and studying other programming languages and concepts. During high school I learned some RealBASIC, and then later Visual Basic. After business school, when I became a professional developer, I started learning C# and Javascript in earnest. Now, don't misunderstand me here - I'm not advocating that you try to be a Jack of All Trades. At heart, and in trade, I'm still a PHP programmer. PHP is my bread and butter, and I know it inside and out. However, my PHP skills didn't become what they were by just doing PHP. Here are some highly important concepts that I didn't grasp from PHP, despite working in it professionally: Javascript: Closures; jQuery (yes, separate): the DOM and Ajax; Visual Basic: Object-oriented programming; C#: Generics and closures; Ruby (on Rails): The power of MVC design. I could go on, and so could many others on this site as well, for days. Even though I'm a PHP programmer I was able to bring all of these other wonderful concepts back with me into the work I do every day. What's my point? Learn C#. Become a master of C# - you'll have a long, successful career and you'll probably accomplish some amazing things. But don't pigeon-hole yourself. Journey, and taste other languages and environments and concepts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100488",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14293/"
]
} |
100,499 | I'm thinking of using iTextSharp, which is licensed under the Affero GPL, in an internal closed-source WinForms project. No one outside my company will be using it. The GPL (and the Affero GPL as well) typically demands that the source be provided with the binary. Given that this is an internal project, do I need to provide my employees with the source code of the project? | If you confine use of the library to within the walls of your corporation, you do not have to distribute the source (even to your employees), because you are not redistributing (selling or giving away a software product that includes the library) outside of your organization. The GPL allows you to freely use the code inside a corporation without restrictions, and that includes (by necessity) your ability to prevent your employees (as a matter of company policy) from distributing the source code outside the organization. From the GNU Licensing FAQ:
company “distribution”? No, in that case the organization is
just making the copies for itself. As a consequence, a company or
other organization can develop a modified version and install that
version through its own facilities, without giving the staff
permission to release that modified version to outsiders. However, when the organization transfers copies to other organizations
or individuals, that is distribution. In particular, providing copies
to contractors for use off-site is distribution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100499",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3516/"
]
} |
100,534 | What are the most common mistakes and anti-patterns NHibernate user programmers make? Please explain why those are bad practices or give link to resource for further reading. For example: One anti-pattern common for new NHibernate programmers is to use identity/native POID's instead of ORM style onces. Read more here... | My personal "frequently explained" issues: Anti-Patterns Messing around with detached objects (SaveOrUpdate or Merge plus some messy code) instead of using DTO's. The more complex the entities are, the messier the code is. (It also means that it works quite well with trivial entities.) Ayende also calls it the Stripper Pattern and explains the encapsulation issue. Not understanding persistence ignorance and writing NH applications as when using explicit SQL. Symptom of that: calling Update after changing an object, wondering why changes are persisted even if Update had not been called, wondering how to avoid changes to be persisted. I tried to explain it in this SO answer Read how flushing works in the reference documentation . A blog post by kurtharriger who is criticizing exactly what is actually one of
the main features (as proof that it is a common misconception about NH) Not understanding transactions and the unit of work pattern. Frequent anti-patterns: implicit transactions, session-per-operation and session-per-application. Some more reading: Fabio Maulo: Conversation-per-Business-Transaction On nhforge.org: Effective NHibernate Session management for web apps Using NH events to put application logic in (eg. change tracking in insert and update triggers) Create one class per table. Some people don't understand OOD, others don't understand relational design. Mistakes: use of one-to-one instead of many-to-one. I tried to explain it in this answer. Using join fetch in combination with SetMaxResult. My latest answers related to that topic: Why doesnt NHibernate eager fetch my data (with some more notes about side effects in the comments) Hibernate - How to make associations eager NHIbernate 1.2 And Lazy Loading Pagination with Hibernate criteria and FetchMode.JOIN Writing self changing entities. When an entity doesn't exactly return the value that had been set by NH, it is considered dirty and gets updated in every session. For instance: replacing the NH persistent collection in a property setter. IList<Address> Addresses
{
get { return addresses; }
// will cause the addresses collection to be built up from scratch
// in the database in every session, even when just reading the entity.
set { addresses = new List<Address>(value); }
}
int Whatever
{
// will make the entity dirty after reading negative values from the db.
// this causes unexpected updates after just reading the entity.
get { if (whatever < 0) return 0; return whatever; }
set { whatever = value; }
} Maybe more will follow. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10433/"
]
} |
100,685 | I'm currently a student (Applied Information Technology) and we do most of our programming in C# and Java. I was wondering how can I as a developer, make money with open source. I know there is the story of give support, or provide services, but I'm not a sysadmin and I don't like sysadmin work. Actually I would love to get my hands on some C and C++ in the future and do some low level development. So my actual question is: Is there money to be made with the development of Open Source software, and how? Edit: Just pointing out that starting my own Open Source project is not a requirement. | This answer summarises many of the answers to the following questions, along with some additional research and opinion. Open Source: Balancing Altruism and making a wage Making money with Open Source as a developer? Synopsis Open source can be a viable primary or auxiliary business model, both directly through specific project work and indirectly through the acquisition of skills, experience and reputation. There may also be additional, motivations; the satisfaction of producing software that is useful to others, the "scratching of one's personal itch" (the first step towards any good piece of open source software, according to Eric S. Raymond ), or philosophical reasons , normally based around the notion of free software , either the copyleft approach advocated by Richard Stallman , or the more permissive approach of the BSD licenses . Ways to earn money through Open Source 1. Sponsorship by a company This can happen in several ways. Permanent job to continue work on high-profile project. This is probably the rarest case. If you are a senior member of a major open source project, someone like Linus Torvalds , Guido van Rossum or Theo de Raadt , then you will probably be able to continue working on your project while being financially supported by a major company such as Google or IBM . Although this mode of support is relatively uncommon, you don't necessarily have to be an open source superstar to secure this type of funding; many Linux kernel developers are partially or wholly funded by companies like Red Hat . Paid for specific features or extensions. Some companies offer bounties to have specific features implemented in open source software that they use for business functions. Often there is no need for the feature to remain closed source, so significant code is contributed back to the community. This has been described as the beekeeper model of open source development. In some cases the additional features are required to remain proprietary, but are based upon an open source codebase. In both cases, open source expertise is a clear advantage for a developer. Your day job code can be open-sourced. A related case is where aspects of the code you write for a company in the course of your day-to-day job may be open-sourced without harm to the company. The code may or may not be based on an existing FOSS project. Generically useful tools and libraries may often be released in this way, and anecdotal evidence suggests such projects can often accelerate once they become volunteer-driven. 2. Add value to existing projects An individual or company can position themselves as a primary provider that adds value to an existing open-source project or projects. There are many examples of companies who provide a service by packaging, layering, combining or extending existing projects. They broadly fall into two categories. Support. 
Enthought adds value by packaging a custom Python distribution focusing on scientific libraries. Redhat and the other Linux distributions add value by collating and testing many disparate open source projects, and providing easy-to-use install and upgrade mechanisms. These companies sell support services in the same way as many proprietary providers do. Freemium model. Under this model, a basic version of the software or service is free; additional 'premium' features normally cost extra. Sleepycat software provided extra features for the Berkeley DB under a proprietary license. Cedega provides a reimplementation of the Windows API under Linux, released as a mixture of free and proprietary code. This model need not be open source; Gmail for organizations is one example of a service that offers both free (as in beer) and premium options. 3. Offer code under a dual-licensing model A powerful approach is to offer software under two alternative licenses , a copyleft license requiring modifications to be released back to the community if the software is distributed, and a commercial license allowing the use of the software without open-source restrictions. This approach has been successfully applied by large projects such as Qt and Open Office , as well as to small one-off projects . 4. Consult Open-source work can provide a way to gain valuable community visibility. Showcasing of abilities. Being able to verify a developer's work and competence by looking at open source projects they have been involved in is a powerful draw for prospective employers . Reputation building. Having a high profile reputation in an open source community can lead to speaking engagements , training requests or book writing offers based on your expertise. Being the expert. Being a significant player in a technology that companies need, means being in demand for custom consulting, support and training in that technology. This can lead to the creation of a specific job niche in your area of expertise. 5. Auxiliary channels Finally, income can be derived through auxiliary channels such as advertising (as Stackoverflow does), donations , or through the use of nagware techniques in the software itself that aim to annoy a user into providing financial contributions to the author. These techniques are not specific to Open Source development models. For example, they are often used by non-free shareware products. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100685",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
100,903 | I'm not talking about working overtime for a project, but R&D, or test bed applications that benefit the company. These would become teaching tools, and eventually sales tools. I know it sounds crazy to even ask, but I'm seeing a trend in this industry with regard to rapidly changing technology, and a problem with getting programmers to bring the after hours knowledge gained in house. You would think it would spill over naturally, but I find most leads holding back because the work would then be "managed" by the company, and now the property of the company. Are there solid programs or initiatives that stimulate a back-and-forth, where you can actually bring something to the table and be rewarded for it? EDIT Can anyone provide additional feedback on this: Are there solid programs or initiatives that stimulate a back-and-forth, where you can actually bring something to the table and be rewarded for it? There appears to be a miscommunication here, where some users are under the impression I'm trying to figure out how to get free work out of colleagues. Just the opposite is true. I want to know if there are programs that exist, or ideas that you have that would motivate you, which doesn't necessarily have to be money. | and a problem with getting programmers to bring the after hours
knowledge gained in house. The problem is that you're not paying them to do that. You would think it would spill over naturally, No I wouldn't, free time is free time. If somebody chooses to spend their free time studying instead of going kite surfing or whatever, then of course they should be the ones to benefit from what they did during their free time. Why should you? What have you contributed to that time and effort that they put in? but I find most leads holding back because the work would then be "managed" by the company, and now the property of the company. Yup Here's the thing, if you find value in the things that programmers do while not working for you, then why don't you have them working on those things during work hours. Google understands this and that's why they have 20% time. But it's too easy to mess this up by trying to keep control over what employees work on during this time. A smart developer knows many things you could be doing to improve your business if only they are given a chance. Sure you now have one day a week less but after some time you will find that the work on the fifth day makes the work on the next four days a lot more productive and effective. It also makes smart devs love working for you as they no longer have to deal with bad decisions that affect their every day work, they can actually do something to positively change the situation. If you want benefit from free time the only way you will get it is if the devs really, really like you and the company they work for. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100903",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
100,939 | I don't have any serious experience in SQL and I even hate to write SQL instead of LINQ. I am happy enough with ORMs. From the employer's and sector's point of view, is it important to know SQL? Do I have to master it? Are companies that prefer pure SQL over ORM frameworks "dinosaurs" in the programming world? | Absolutely! SQL is still the lingua franca of databases and although you may do a lot with ORMs you have to understand SQL to understand the decisions ORMs make and the SQL they generate. Also, there are still lots of things that you have to do with custom SQL and stored procedures as well. Sorry, no free lunch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100939",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24802/"
]
} |
100,959 | I am working on a java project. I am new to unit testing. What is the best way to unit test private methods in java classes? | You generally don't unit test private methods directly. Since they are private, consider them an implementation detail. Nobody is ever going to call one of them and expect it to work a particular way. You should instead test your public interface. If the methods that call your private methods are working as you expect, you then assume by extension that your private methods are working correctly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/100959",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
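A minimal JUnit 4 sketch of the approach in the answer above; the PriceFormatter class and its private helper are hypothetical, chosen only to show a private method being exercised through the public one.
import org.junit.Test;
import static org.junit.Assert.assertEquals;

// Hypothetical class under test: the formatting detail is private, only the public method is exercised.
class PriceFormatter {
    public String format(long cents) {
        return "$" + toDollars(cents); // public entry point
    }
    private String toDollars(long cents) { // implementation detail
        return cents / 100 + "." + String.format("%02d", cents % 100);
    }
}

public class PriceFormatterTest {
    @Test
    public void formatsWholeAndFractionalAmounts() {
        PriceFormatter formatter = new PriceFormatter();
        // If these public expectations hold, the private toDollars helper is covered indirectly.
        assertEquals("$12.34", formatter.format(1234));
        assertEquals("$0.05", formatter.format(5));
    }
}
If a private helper grows so complex that it cannot be covered comfortably this way, that is usually a hint it wants to become the public method of its own class.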
101,064 | According to Martin Fowler , code refactoring is (emphasis mine): Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior . Its heart is a series of small behavior preserving transformations. Each transformation (called a 'refactoring') does little, but a sequence of transformations can produce a significant restructuring. Since each refactoring is small, it's less likely to go wrong. The system is also kept fully working after each small refactoring, reducing the chances that a system can get seriously broken during the restructuring. What is "external behaviour" in this context? For example, if I apply move method refactoring and move some method to other class, it looks like I change external behaviour, doesn't it? So, I'm interested in figuring out at what point does a change stop being a refactor and becomes something more. The term "refactoring" may be misused for larger changes: is there a different word for it? Update. A lot of interesting answers about interface, but wouldn't move method refactoring change the interface? | "External" in this context means "observable to users". Users may be humans in case of an application, or other programs in case of a public API. So if you move method M from class A to class B, and both classes are deep inside an application, and no user can observe any change in the behaviour of the app due to the change, then you can rightly call it refactoring. If, OTOH, some other higher level subsystem/component changes its behaviour or breaks due to the change, that is indeed (usually) observable to users (or at least to sysadmins checking logs). Or if your classes were part of a public API, there may be 3rd party code out there which depends on M being part of class A, not B. So neither of these cases are refactoring in the strict sense. there is a tendency to call any code rework as refactoring which is, I guess, incorrect. Indeed, it is a sad but expected consequence of refactoring becoming fashionable. Developers have been doing code rework in an ad hoc manner for ages, and it is certainly easier to learn a new buzzword than to analyse and change ingrained habits. So what is the right word for reworks which change external behaviour? I would call it redesign . Update A lot of interesting answers about interface, but wouldn't move method refactoring change the interface? Of what? The specific classes, yes. But are these classes directly visible to the outside world in any way? If not - because they are inside your program, and not part of the external interface (API / GUI) of the program - no change made there is observable by external parties (unless the change breaks something, of course). I feel that there is a deeper question beyond this: does a specific class exist as an independent entity by itself? In most cases, the answer is no : the class only exists as part of a larger component, an ecosystem of classes and objects, without which it can't be instantiated and/or is unusable. This ecosystem does not only include its (direct/indirect) dependencies, but also other classes / objects which depend on it. This is because without these higher level classes, the responsibility associated with our class may be meaningless/useless to the users of the system. E.g. in our project which deals with car rentals, there is a Charge class. 
This class has no use to the users of the system by itself, because rental station agents and customers can't do much with an individual charge: they deal with rental agreement contracts as a whole (which include a bunch of different kinds of charges). The users are mostly interested in the sum total of these charges, that they are to pay in the end; the agent is interested in the different contract options, the length of the rental, the vehicle group, insurance package, extra items etc. etc. selected, which (via sophisticated business rules) govern what charges are present and how the final payment is calculated out of these. And country representatives / business analysts care about the specific business rules, their synergy and effects (on the income of the company, etc.). A single charge by itself has no meaning without the bigger picture. Recently I refactored this class, renaming most of its fields and methods (to follow the standard Java naming convention, which was totally neglected by our predecessors). I also plan further refactorings to replace String and char fields with more appropriate enum and boolean types. All this will certainly change the interface of the class, but (if I do my job correctly) none of it will get visible to the users of our app. None of them cares about how individual charges are represented, even though they surely know the concept of charge . I could have selected as example a hundred other classes not representing any domain concept, so being even conceptually invisible to the end users, but I thought it is more interesting to pick an example where there is at least some visibility at the concept level. This shows nicely that class interfaces are only representations of domain concepts (at best), not the real thing*. The representation can be changed without affecting the concept. And users only have and understand the concept; it is our task to do the mapping between concept and representation. * And one can easily add that the domain model, which our class represents, is itself only an approximate representation of some "real thing"... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101064",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7369/"
]
} |
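A tiny Java illustration of the distinction drawn in the answer above: the two classes below stand for the same hypothetical Charge class before and after an internal rename, and a caller of getAmount() cannot tell them apart, which is what keeps the change a refactoring.
// Snapshot before the refactoring: the private field ignores Java naming conventions.
class ChargeBefore {
    private final double AMNT_VAL;
    ChargeBefore(double value) { this.AMNT_VAL = value; }
    public double getAmount() { return AMNT_VAL; }
}

// Snapshot after the refactoring: only the internal representation changed; observable behaviour is identical.
class ChargeAfter {
    private final double amount;
    ChargeAfter(double value) { this.amount = value; }
    public double getAmount() { return amount; }
}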
101,133 | I have an initial phone interview coming up with a company for a Senior Web Developer position and in the email they asked to prepare by signing up on their site because they'd like to hear my thoughts about it and suggestions for improvement. I've heard stories before of companies that do this. They interview 10 people and, if everyone gives advice, they've got a lot of free consultation. I believe this is the first time I've ever been asked to do something like this, so I wanted to hear other people's thoughts on whether it is free consulting or if I'm just being paranoid. | I think you are being paranoid. If they are phone screening 10 people, that's roughly 10 man-hours they are spending on this phase of the interviewing. Plus the cost of advertising the job, reading a bunch of resumes, etc. And they are getting random ideas from 10 developers, many of whom are probably "also rans" in the employment race. Then ask yourself: What is an hour of your time really worth? How much are your ideas really worth? Do you want the job or not? So, they might end up using some ideas from people who they don't employ. It could be construed as free (albeit slap-dash, second rate, etc) consulting ... But so what?! Ideas are easy to have. Successful execution of the ideas is the hard (and expensive) part. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101133",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9963/"
]
} |
101,163 | I am aware that floating point arithmetic has precision problems. I usually overcome them by switching to a fixed decimal representation of the number, or simply by neglecting the error. However, I do not know what the causes of this inaccuracy are. Why are there so many rounding issues with float numbers? | This is because some fractions need a very large (or even infinite) number of places to be expressed without rounding. This holds true for decimal notation as much as for binary or any other. If you limit the number of decimal places to use for your calculations (and avoid making calculations in fraction notation), you have to round even a simple expression such as 1/3 + 1/3. Instead of writing 2/3 as a result you would have to write 0.33333 + 0.33333 = 0.66666, which is not identical to 2/3. In the case of a computer, the number of digits is limited by the technical nature of its memory and CPU registers. The binary notation used internally adds some more difficulties. Computers normally can't express numbers in fraction notation, though some programming languages add this ability, which allows those problems to be avoided to a certain degree. What Every Computer Scientist Should Know About Floating-Point Arithmetic | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101163",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34226/"
]
} |
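A short Java demonstration of the rounding the answer above describes; nothing here is specific to Java, since the same behaviour appears in any language using IEEE 754 doubles.
import java.math.BigDecimal;

public class FloatingPointDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 have no finite binary representation, so the sum is not exactly 0.3.
        System.out.println(0.1 + 0.2);                            // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3);                     // prints false

        // A decimal type sidesteps the binary rounding, at the cost of speed and convenience.
        BigDecimal exact = new BigDecimal("0.1").add(new BigDecimal("0.2"));
        System.out.println(exact);                                // prints 0.3
        System.out.println(exact.equals(new BigDecimal("0.3"))); // prints true
    }
}
This is why fixed decimal representations, as mentioned in the question, are a common workaround for money and other values that are decimal by nature.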
101,187 | I don't know why, but I always feel like I am "cheating" when I use reflection - maybe it is because of the performance hit I know I am taking. Part of me says, if it is part of the language you are using and it can accomplish what you are trying to do, then why not use it. The other part of me says, there has to be a way I can do this without using reflection. I guess maybe it depends on the situation. What are the potential issues I need to look out for when using reflection and how concerned should I be about them? How much effort is it worth spending to try to find a more conventional solution? | No, it's not cheating - it is a way to solve problems in some programming languages. Now, it is often not the best (cleanest, simplest, easiest to maintain) solution. If there is a better way, use that one indeed. However, sometimes there isn't. Or if there is, it is just so much more complex, involving a lot of code duplication etc. which makes it infeasible (difficult to maintain in the long run). Two examples from our current project (Java): some of our testing tools use reflection to load configuration from XML files. The class to be initialized has specific fields, and the config loader uses reflection to match the XML element named fieldX to the appropriate field in the class, and to initialize the latter. In some cases, it can build a simple GUI dialog box out of the identified properties on the fly. Without reflection, this would take hundreds of lines of code across several applications. So reflection helped us put together a simple tool quickly, without much fuss, and enabled us to focus on the important part (regression testing our web app, analysing server logs, etc.) rather than the irrelevant. one module of our legacy web app was meant to export/import data from DB tables to Excel sheets and back. It contained a lot of duplicated code, where of course the duplications were not exactly the same, some of them contained bugs etc. Using reflection, introspection and annotations, I managed to eliminate most of the duplication, cutting down the amount of code from over 5K to below 2.4K, while making the code more robust and way easier to maintain or extend. Now that module ceased to be a problem to us - thanks to the judicious use of reflection. The bottom line is, like any powerful tool, reflection too can be used to shoot yourself in the foot. If you learn when and how (not) to use it, it can bring you elegant and clean solutions to otherwise difficult problems. If you abuse it, you can turn an otherwise simple problem into a complex and ugly mess. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101187",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18497/"
]
} |
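A stripped-down Java sketch of the kind of reflective config loader the answer above describes; the Settings class, field names and values are all invented for illustration, and real code would need proper error handling.
import java.lang.reflect.Field;
import java.util.Map;

public class ReflectionConfigDemo {

    // Hypothetical target object: each config key is expected to match a field name.
    static class Settings {
        String host;
        String user;
    }

    static Settings load(Map<String, String> values) throws ReflectiveOperationException {
        Settings settings = new Settings();
        for (Map.Entry<String, String> entry : values.entrySet()) {
            Field field = Settings.class.getDeclaredField(entry.getKey());
            field.setAccessible(true);           // allow writing even if the field were private
            field.set(settings, entry.getValue());
        }
        return settings;
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        Settings settings = load(Map.of("host", "db.example.org", "user", "app"));
        System.out.println(settings.host + " / " + settings.user);
    }
}
The trade-off is the usual one for reflection: a few generic lines replace a lot of repetitive field-by-field code, at the cost of compile-time checking and some runtime overhead.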
101,248 | I've been in the financial engineering arena (after BA Math and BA Computer Science) for about 5 years (20% analysis/programming, 80% communicating) and take pride in my ability to communicate with people and discuss technical problems (i.e. interacting with a team). I love this part of my job. Going to the white board to draw abstract ideas and brainstorm. However, for many reasons, I want to transition my career into a technology company (software engineering) but I'm deeply afraid that I will fall into a stereotypical programming job where programmers code with big headphones on. I certainly know this is only a stereotype but I've witnessed similar environments before (at startups) and it scares me to think that I would be migrating to a career of isolation. I love coding and thinking algorithmically, but I don't want to give up interacting with people. I understand that having communication skills is only a positive, but am I setting myself up for career-happiness failure by transitioning into software engineering. I'd love to hear any clarifications and/or advice. | Here's the secret about programming: it is almost 100% communication . A significant part of that is communicating with a human; the rest is communicating what you've just learned to a computer. The latter part is the easier of the two. Computers do exactly what they're told and you are always in a position to test that what you told it is correct. The former is something else. Differences in terminologies, in understanding, in priorities, from person to person make it very difficult to get the correct message to feed to the computer. Miscommunications at this level are much more common than miscommunications between a programmer and a computer. Good programmers are good at communicating with the computer; great programmers are good at communicating with people too, by one medium or another. Those programmers you see that never come out of their headphones? They're still doing this communication, whether it be via email or a bug-tracker, or even messenger. It's all happening, otherwise they can't possibly know what to tell the computer to do. So yes, your communication skills will serve you well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101248",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8950/"
]
} |
101,273 | I have had a discussion with someone about unit/integration testing of web applications, and we disagree on one core idea. The issue is that the person I am talking to thinks that the database the unit tests work off of should have pre-populated data in it, while I think it should be completely empty before and after the tests are executed. My concern with pre-populated data in the database is that there is no way to make sure that data is maintained in a good state. The tests themselves are going to be creating, deleting, and modifying data in the database, so I really don't see how having data in the database before you start the tests is a good thing. It seems the best way of testing database functionality would be the following setup: In a "setup" phase before the tests actually run, you first truncate all the tables in the database Then you insert all the data needed for the test cases you are about to run Then you run and validate the test cases Then in a "teardown" phase you once again truncate all the tables in the database I don't see any better way of ensuring that the data you are testing against is in a good, testable state. Am I missing something here? Is this not the best way to test database-related functionality? Is there some benefit to having pre-populated data that always exists in the database (even before you start the tests or after the tests are done)? Any ideas that help me explain my process differently and get my point across would also be great (that is, if my point has merit). | For me, unit tests should not deal with the database; integration tests deal with the database. Integration tests that deal with the database should in practice use an empty database with a setup and teardown approach, and a transaction-based approach is quite a good way to go (i.e. create a transaction on setup and roll it back on teardown). What your friend sounds like they want to do is test from a 'regression' point of view, i.e. have real data there and see how the system reacts; after all, no system is perfect and there can usually be bad data lying around somewhere that provides some quirks to your domain model. Your best practices are the way to go, and what I tend to do is: if I find a scenario for bad data, write an integration test with a setup and teardown for that exact scenario. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101273",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20237/"
]
} |
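A minimal sketch of the transaction-per-test approach the answer above recommends, written with JUnit 4 and plain JDBC; the in-memory H2 URL, the customer table and the assumption that the schema already exists are all placeholders of mine.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import org.junit.After;
import org.junit.Before;
import org.junit.Test;
import static org.junit.Assert.assertTrue;

public class CustomerRepositoryIT {
    private Connection connection;

    @Before
    public void beginTransaction() throws Exception {
        connection = DriverManager.getConnection("jdbc:h2:mem:testdb"); // placeholder URL; schema assumed to exist
        connection.setAutoCommit(false); // everything this test does stays inside one transaction
        try (Statement s = connection.createStatement()) {
            s.executeUpdate("INSERT INTO customer VALUES (1, 'Alice')"); // only the rows this test needs
        }
    }

    @Test
    public void findsTheInsertedCustomer() throws Exception {
        try (Statement s = connection.createStatement();
             ResultSet rs = s.executeQuery("SELECT name FROM customer WHERE id = 1")) {
            assertTrue(rs.next());
        }
    }

    @After
    public void rollBack() throws Exception {
        connection.rollback(); // the table is exactly as empty as it was before the test
        connection.close();
    }
}
Because nothing is ever committed, each test starts and ends with the same empty tables, which is the guarantee the question is after.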
101,292 | Lots of big companies, from Apple to Microsoft to Google, are putting more and more money into creating tools that will allow anybody to create a website with a WYSIWYG editor. For example, this email I just got from Adobe: Build websites as quickly and easily as an Adobe® InDesign® layout. Currently in beta form, the application code-named "Muse" is a new technology that enables graphic designers to use familiar, free-form tools to design and publish HTML and CSS websites—all without writing code or being restricted to templates. Be part of this incredible free preview and experience how Muse will revolutionize the way you create for the web. As a web developer, I can't foresee any way that Adobe or any other company will be to create some solution which allows a user with no HTML/CSS/JS knowledge to build a useful website design for these reasons: The code generated will almost certainly be a mess, which makes it difficult for a programmer who wants to write the backend for the site to work with it. They may even be required to change the code themselves to structure it as they need it. HTML is not pixel-based, so it is very difficult to develop a tool that can easily design templates which can flow with changes in text size, etc. In addition, elements should follow each other in reasonable order, not in some random order (e.g. as dictated by when an element is added). Code generated in one tool would likely not be portable to other tools easily, which would lock you in to the original tool. (I am assuming that the tool would allow complete control of a website; as Adobe said, "as quickly and easily as an Adobe InDesign layout". Programs which let you use professionally-designed templates are a different story.) Do you think it will ever be possible for a person unskilled in HTML to create quality (both behind-the-scenes and appearance) web designs/sites? | Not anytime soon. The era of WYSIWYG editors is long over (like the dinosaurs) but companies continue to pump it out. I remember the days of using Dreamweaver and having dozens of spacer.gif images to put the layout in the same way. Software like this is fool's gold - it's meant to appeal to people who want something quick and dirty (tomorrow as opposed to in three months) and who don't know or care about quality. It's not a real solution, it just provides that illusion; in the immortal words of Admiral Ackbar: It's a trap! To be perfectly honest, and I'm going to adopt a ranty tone for this so be warned, the fact that snake oil like this is perpetuated disgusts me because it fosters and encourages the idea that you don't have to do things correctly. Whether it's some WYSIWYG editor to let the receptionist create a web page or some nifty wizard that looks like it will create a full CRUD application for you in a couple of clicks, it's the attitude that I hate - it makes businesses think that quality doesn't matter and you can just toss out garbage as quickly as possible, so when the time comes that the shoddy design falls apart there's too much invested in it to do it properly and you're left trying to monkey patch a leaky pipe because nobody wants to replace the thing. It's completely the wrong attitude to have, but it gets pushed more. To go back to the Star Wars references, it's the path to the Dark Side, and once you start down that path forever will it dominate your destiny. To flat out answer your question, yes someday there will be a way to create a good website without using raw HTML, but that day is far off. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101292",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34351/"
]
} |
101,316 | There seem to be a lot of developers who write their SQL by capitalising the keywords: SELECT column
FROM table
INNER JOIN table
ON condition
WHERE condition
GROUP BY clause
HAVING condition I'm wondering why people stick to this approach. Clearly, it's a long-established convention - but I've never run into an RDBMS that requires capitalisation. Personally, I find KEYWORDS THAT SHOUT to be calling attention to exactly the wrong part of the query, which is why I write the keywords in lowercase. Still, enough people use this convention that I figure I might be missing something, hence this question. | Capitalization makes the keywords stand out from the other characters in the query window. The reason I don't do this is that it's a huge time waster. You can do one of two things: 1) Hold your shift key down while typing out the word: way too error-prone and just haphazard. 2) Put caps lock on for the duration of the word: a bit too time-consuming. I use SQL Server, and the environment (SSMS) has great syntax highlighting, so I don't personally believe keyword capitalization is as prevalent these days as it used to be (if at all). It is good practice in books and online tutorials, though, so it is evident what the reserved keywords are. It's just one of those unwritten things. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101316",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4099/"
]
} |
101,337 | The books and documentation on MVC lean heavily on the terms Stateful and Stateless. To be honest, I am just unable to grasp what the books are talking about. They don't give an example to help understand either state; they just say that HTTP is stateless and that with ASP.NET MVC Microsoft is going along with it. Am I missing some fundamental knowledge? I can't understand what stateful is and why it is stateful, and the same goes for stateless. A simple and short example that talks about a control like a button or textbox could simplify the understanding, I suppose. | Stateless - There's no memory (state) that's maintained by the program Stateful - The program has a memory (state) To illustrate the concept of state I'll define a function which is stateful and one which is stateless Stateless //The state is derived by what is passed into the function
function int addOne(int number)
{
return number + 1;
} Stateful //The state is maintained by the function
private int _number = 0; //initially zero
function int addOne()
{
_number++;
return _number;
} As others have said http is inherently stateless. So state must be built into your applications. Imagine a request over the web where you have a client browser communicating to a server process. To maintain state over the stateless http protocol the browser will send typically send a session identifier to the server on each request. For each request the server will be like "ah, its this guy". State information can then be looked up in server side memory or in a database based on this session id. In a purely stateless environment you wouldn't need this session id. Each request would contain all the information the server would need to process. But many applications need to maintain state to keep track of whether or not a session is authenticated to view certain content or to keep track of what a user is doing. You wouldn't want to send user credentials over the wire for each request. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101337",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32803/"
]
} |
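To make the closing paragraphs of the answer above (a session id carrying state over stateless HTTP) concrete, here is a hedged servlet sketch of mine; the attribute name and the visit counter are invented purely for illustration.
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class VisitCounterServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException {
        // The browser sends a session cookie with each request; the container
        // uses it to look up this particular user's server-side state.
        HttpSession session = request.getSession(true);
        Integer visits = (Integer) session.getAttribute("visits");
        visits = (visits == null) ? 1 : visits + 1; // stateful: remembered between requests
        session.setAttribute("visits", visits);
        response.getWriter().println("You have made " + visits + " requests in this session.");
    }
}
Each individual HTTP request is still stateless; the memory lives on the server and is found again via the id the client sends back.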
101,346 | Sometimes (rarely), it seems that creating a function that takes a decent amount of parameters is the best route. However, when I do, I feel like I'm often choosing the ordering of the parameters at random. I usually go by "order of importance", with the most important parameter first. Is there a better way to do this? Is there a "best practice" way of ordering parameters that enhances clarity? | In general: use it . Write a test for your function, a real world test. Something you would actually like to do with that function . And see in what order you did put those down. Unless you already have (or know of) some functions that do something similar. In that case: conform to what they do already, at least for the first arguments. e.g. Do they all take a document/object/file-pointer/series-of-values/coordinates as the first argument(s)? For god's sake conform to those arguments . Avoid confusing your coworkers and your future self . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101346",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21497/"
]
} |
101,352 | I was browsing through the Amazon.com Bookstore and I came across the book "Functional Programming for Java Developers" . I know some very basic Functional Programming and have been programming in Java for 3 years. I would like to know is Functional Programming even possible in Java? | It depends what you mean by "functional programming" and by "possible". You can obviously implement things following a functional paradigm. However the Java language doesn't provide the syntactic sugar for it, so some things will be tedious at best, and some other ones will be extremely arcane. Similarly, you can very well write object-oriented code in a language recognized as being non-OO, like C. Java Libraries There are libraries that can help you do this, by already doing the legwork for you and hiding the arcane things: mature/established libraries: Functional Java Google guava LambdaJ more obscure/experimental libraries : Fun4J (also comes with a lisp to bytecode compiler) JCurry OCaml-Java Jambda Bolts Functional Java (swensen.functional) These will allow you to write Java code with a more functional approach and possibly more familiar syntax and semantic, as you'd expect from an FP-competent language. Within reason, that is. JVM Languages And obviously, you can implement a functional language on top of Java. So that you can then use that one as your FP language. Which is a bit of a higher-level of abstraction than what you asked for, but relatively within context (though I'm cheating a bit here, granted). For instance, check out: quite mature languages: Clojure Scala less mature or active / more obscure languages: Haskell -based: Frege Jaskell Scheme -based: Bigloo (targets R5RS ) Kawa (targets R6RS ) SISC (targets R5RS ) JScheme (targets R4RS ) ML -based: Yeti More-or-Less Functional JVM Languages While they may not be exactly what you want, there are a number of other languages that have been ported to the Java Platform and that might free you from Java's relatively not so fun-oriented (yes, pun intended) nature and already give you more flexibility. Notable contenders like JRuby , Jython and Rhino (respectively for Ruby , Python and JavaScript / ECMAScript ) also offer interesting potential for functional programming, though they arguably aren't really functional programming languages by nature. JetBrains' Kotlin , while clearly acknowledging it isn't a functional language, does support some functional constructs and is also worth a look. Further Reading You may also want to read or watch these articles or videos: Functional Progamming in the Java Language , IBM DeveloperWorks (2004) Functional Programming Java , Lambda the Ultimate (2004) Functional Programming: a Pragmatic Introduction , InfoQ /CodePalousa (2011) and related StackOverflow questions like this one | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101352",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
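The answer above notes that plain Java makes functional style "tedious at best"; Java 8 later added lambdas to the language itself. A small sketch of mine (not from the answer) contrasting the two forms of passing behaviour as data:
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class SortExample {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Charlie", "alice", "Bob");

        // Pre-Java-8: passing behaviour means writing an anonymous class - verbose, but possible.
        Collections.sort(names, new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return a.compareToIgnoreCase(b);
            }
        });

        // Java 8 and later: the same "function as argument" idea with far less ceremony.
        names.sort((a, b) -> a.compareToIgnoreCase(b));

        System.out.println(names);
    }
}
The libraries listed in the answer existed largely to paper over the verbosity of the first form.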
101,409 | I'm a web developer working in a team of three developers and one designer. It's now about five months that we've implemented the agile scrum software development methodology. But I have a weird feeling I just wanted to share in this site. One important factor in human life is decision-making process. However, there is a big difference in decisions you make. Some decisions are just the outcome of an internal or external force, while other decisions are completely based on your free will, and some decisions are simply something in between. The more freedom you have in making decisions, the more self-driven your work would become. This seems to be a rule. Because we tend to shape our lives ourselves. There is a big difference between you deciding what to do , or being told what to do . Before scrum, I felt like having more freedom in making the decisions which were related to development, analysis, prioritizing implementation, etc. I had more feeling like I'm deciding what I'm doing . However, due to the scrum methodology, now many decisions simply come from the product owner. He prioritizes PBIs , he analyzes how the software should work, even sometimes how the UI and functionality should be implemented. I know that this is part of the scrum methodology, and I also know that this may result in better sales of product in future. However, I now feel like I'm always getting told to do something, instead of deciding to do something . This syndrome now has made me more passive towards the work. I tend to search less to find a better solution, approach, or technique I don't wake up in the morning expecting to get to an enjoyable work. Rather, I feel like being forced to work in order to live I have more hunger to work on my own hobby projects after work I won't push the team anymore to get to the higher technological levels I spend more time now on dinner, or tea-times and have less enthusiasm to get back to work I'm now willing more for the work to finish sooner, so that I can get home The big problem is, I see and diagnose this behavior in my colleagues too. Is it the outcome of scrum? Does scrum really makes the development team feel like they have no part in forming the overall software, thus making the passive to the project?
How can I overcome this feeling? | However, I now feel like I'm always getting told to do something, instead of deciding to do something. This is a serious indicator that something has gone off the rails. An agile project should not feel like this. That "people over process" rhetoric should include "we don't force our people to do things that suck." Here are some ideas: Are you doing "scrum but"? That is, part scrum, part some other thing. (ie: "We're doing scrum, but all our stories have to come from our PMO, not a product owner.") Lots of crazy crap is called Scrum these days. Are you, personally, not involved in the process where you should be? I've known a number of people to be upset at the contents of stories, and it turns out that they only get involved once the story is in the sprint backlog. Talk with the product owner early on in the development of the story, and get your feedback in. (As the PO, they have the final say, but that doesn't mean they have to do it alone.) In Scrum, the team is supposed to own the process, and it's expected that the process will change over time to suit the team's needs. Bring up your concerns at the retrospective. If you can come up with a process tweak to suggest, that tends to make it easier to sell for some teams. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101409",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31418/"
]
} |
101,513 | I've noticed posts on here demonstrating the use of delegates\lambda functions to solve the hole in the middle idea without a lot of repetition: http://www.markhneedham.com/blog/2009/04/04/functional-c-the-hole-in-the-middle-pattern/ The problem seems to be that junior developers and others don't necessarily understand what the function pointer\delegate\lambda function concept is, which seems to make reading (and possibly debugging) the code more difficult. Should we avoid or severely limit the use of this tool in writing business software, especially in small team or sole developer shops? Or is it acceptable to use it with appropriate comments and expect that when I'm no longer around that the next developer will understand or learn about lambda functions? | Yes, use them. I am a junior developer, and I know/understand lambdas (and the other concepts you mentioned). There is nothing I could forsee preventing a junior developer from learning all of those concepts in a very short amount of time. Juniors may not have the same amount of experience/expertise when it comes to many gotchas of software development, however, concepts like lambdas can be just as easily understood by anyone with a basic understanding of programming and an internet connection. Also, it seems odd that you chose lambdas as the example, considering if you are using C# it is likely you do not even have that much of a head start on learning lambdas over the junior developers (they were only introduced in 2008, if I remember correctly). In short, there is no need to dumb down your code for us neophytes. We will be just fine, and would actually prefer to work on the best possible implementation you can come up with. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101513",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14294/"
]
} |
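A hedged sketch of the "hole in the middle" pattern the question links to, using a Java 8 lambda; the timing example and all names are mine, chosen to keep the sketch free of checked-exception noise.
import java.util.function.Supplier;

public class Template {
    // The shared before/after steps are written once; callers pass in only the varying middle part.
    public static <T> T timed(String label, Supplier<T> body) {
        long start = System.nanoTime();                    // "before" half of the sandwich
        try {
            return body.get();                             // the hole: caller-specific work
        } finally {
            long elapsed = System.nanoTime() - start;      // "after" half of the sandwich
            System.out.println(label + " took " + elapsed + " ns");
        }
    }

    public static void main(String[] args) {
        int sum = timed("summing", () -> {
            int total = 0;
            for (int i = 0; i < 1000; i++) {
                total += i;
            }
            return total;
        });
        System.out.println(sum);
    }
}
At the time of the question the same thing was written with an anonymous inner class or a C# delegate; the shape of the solution is identical, only the ceremony differs, which is why it is reasonable to expect a junior developer to pick it up quickly.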
101,528 | I have been working on a new project. The project works like this: The end user can access a webapp using a link and he can add multiple systems on his network and manage that particular systems details. My part involves the front end and the webserver, which is done in python. My python actually communicates with another project which is entirely done in c & c++. The c/c++ project is the main app which does all the functionality. My python sends the user request to it and displays the response from it to the user. I am very familiar with my work and I will finish it soon. Since that's not much work in it. And I am a person who loves to work. I spends most of the time in office and only go home when I feel sleepy. The c/c++ app is managed by another colleague who has 5+ year experience and can do things much faster than me, but he never does it. May be he doesn't like to do it. His app crashes often when my python communicate with it or returns wrong values. It's full of bugs. Since my app depends on it, I am having a hard time building it. Instead of fixing the bugs, he asks me to slow down my work. He asks me to tell manager that my work needs a lot of time. He is asking me to fool the manager and even forcing me to work slowly like him. During project meeting, when manager asks him about the bugs he says that he fixed everything and it works fine. Since he is my colleague, I couldn't tell anything to the manager. I obviously need to have a good relationship with my colleagues more than my manager, since most of the time we will be with our colleagues, not with the manager. I am not able to tell the manager anything regarding this, since if manager asks him why, then he may think I complained about him to the manager. And he keeps on lying in the meeting. And since he fixes the bug slowly, it even slows down my work. Now I thought of working on the front-end part of my app and finishing it off so that in the mean time he can make his project stable. Now he is asking me to tell the manager that my front end part require a lot of work and I may need more and more time, simply so that he can drag the project down. And the sad thing is our actual manager has gone to the US, so we have a temporary manager and this guy doesn't know about the project much, so the c,c++ just fools him. Can anyone suggest me how I deal with this?
I wanted to finish off the project soon. How can I make him work even by maintaining a good relationship with him? Responses to comments: If he's really deliberately misleading the company, you should report him to management. I am new to this company and the other guy has been there for many years. And I have just started knowing my colleagues. If I directly go and complaint him, I don't think so I can make good relationship with my other colleagues. Even he has the power to mislead them. I am not telling he is a bad guy, he can do the work, but he is not doing it. Doesn't your company have any kind of bug tracking system ? Here actual bug tracking system isn't there. The company tries to finish off the project as soon as possible and gives it to the QA. And then fixes the bugs reported by QA. This is why companies should give employees stock / options or some sort of ownership. That way you can literally tell the guy "You are costing me monetary growth... don't you want to make money also?". The company has the stock options they have given me a 2500 share, mostly he too would have got some more. Seniority does deserve some benefit of a doubt. You really need to speak to him first and try to understand the problem. He may be out of his depth, you may be able to help him, there could easily be variables you are unaware of. It may be hard now, but you could easily make the situation a lot worse by jumping the gun. I even does it, first his app wasn't handling multiple requests at a time, he was using a queue to handle the requests I sent to him. I even suggested to him some of my ideas on it. He said he already had these ideas, and will be executing them. His explanations was: "Everything require certain time to do and this is a project which may need two years to complete and we are asked to finish it in two months". I used to have a hard time coding during first few weeks because of this bug. But now he fixed it. But he is using a single queue for a user requests and that is now slowing down the app, since it processes one request at a time. What is QA doing this whole time? Why aren't they reporting/confirming the status of the project(s)? The manager is the person who decides when to give to the QA. As of now it has not yet given to QA. He said we should give it by this month end. | You're in a bad situation, I wouldn't want to be in your shoes. It's unlikely that you could to sort it out without getting into conflict with your colleague. This is what I would do: Don't become his partner in crime. Refuse to lie about the status of your project or his project. Implement (in your spare time if necessary) bug reporting to your application, so all bugs are sent by email to your coworkers and to your manager. If the bug is caused by his application, make it visible in the email (put [XYZ APP BUG] in email subject or something). Maintain a bug database (besides sending bugs by email). You can say that its primary purpose is tracking your bugs, when in fact you'll be tracking mostly his bugs. Among other things, it should track how long it takes to fix specific bug. Have all inter-process communication with his app covered with tests ("when I sent you this, you should return me that" style). You could set up a cron task which runs these tests every day and if they fail, email is sent to everyone. Basically, try not to waste your time arguing with him about bugs and focus on your work instead. 
If his app is broken and thus you can't work on your app, and the manager doesn't do anything about it - well, that's a management problem, and you're covered by the bug database, the emails and the test reports. However, watch out and don't underestimate him. A long-time slacker like him might have a trick or two up his sleeve. He could turn the whole team against you or something, but that depends on your specific situation and it's kinda out of scope for this question. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101528",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34092/"
]
} |
101,649 | I've been programming for a few years and I began in Java, and in my time I've found many different sources claiming Java to be an inferior language in some way or another. I'm well aware that each language has it's strengths and weaknesses, but a lot of things I've read about Java seem to be dated. The most often cited reason for Java being inferior is that it is much slower than other natively compiled languages, like C++ for example. Many people criticize the game designer Notch (who developed Minecraft) for using Java because of its apparent lack in the performance department. I know Java was much slower back in the day, but there have been many improvements since, especially JIT compilation. I would like to get some objective opinions of Java as a language today. So my question has 4 parts. Performance. a. How does Java's speed today compare to C++? b. Would it be possible to create a modern AAA title using Java? c. In what areas specifically is Java slower than C++, if at all? (i.e. Number-crunching, graphics, or just all around) Is Java now considered a compiled language or interpreted language? What are some major shortcomings of Java that have been addressed since the early days? What are some major shortcomings of Java that have yet to be addressed? Edit: Just for clarification purposes I'm not making this Java vs C++, obviously on average c++ will be a little faster than Java. I simply need something to compare Java to in terms of maturity as a language at this point in time. Since c++ has been around forever I thought I would be a good point of comparison. | a. How does Java's speed today compare to C++? Difficult to measure. It's worth noting that a major part of the speed of an implementation, it's memory allocator, are very different algorithms in Java and C++. The non-deterministic nature of the collector makes it extremely difficult to obtain meaningful performance data in comparison to the deterministic memory management of C++, because you can never be certain what state the collector is in. This means that it's very hard to write a benchmark that might meaningfully compare them. Some memory allocation patterns run much faster with a GC, some run much faster with a native allocator. What I would say, however, is that the Java GC has to run fast in every situation. A native allocator, however, can be swapped out for one that's more appropriate. I recently fielded a question on SO about why a C# Dictionary could execute in (0.45 ms on my machine) compared to an equivalent std::unordered_map which executed on (10ms on my machine). However, by simply swapping out the allocator and hasher for more appropriate ones, I cut that execution time to 0.34ms on my machine- a thirtieth of the original run-time. You could never, ever hope to perform that kind of custom optimization with Java. An excellent example of where this can make a real difference is threading. Native thread libraries like TBB provide thread-caching allocators which are massively faster than traditional allocators when dealing with many allocations on many threads. Now, many people will talk about JIT improvements and how the JIT has more information. Sure, that's true. But it's still not even remotely close to what a C++ compiler can pull- because the compiler has, comparatively, infinite time and space in which to run, from the perspective of the run-time of the final program. 
Every cycle and every byte that the JIT spends thinking about how best to optimize your program is a cycle that your program isn't spending executing and can't use for it's own memory needs. In addition, there will always be times where compiler and JIT optimizations cannot prove certain optimizations- especially in the case of things like escape analysis. In C++, then as the value is on the stack anyway , the compiler doesn't need to perform it. In addition, there are simple things, like contiguous memory. If you allocate an array in C++, then you allocate a single, contiguous array. If you allocate an array in Java, then it's not contiguous at all, because the array is only filled with pointers which could point anywhere. This is not only a memory and time overhead for the double indirections, but cache overheads as well. This kind of thing is where the language semantics of Java simply enforce that it must be slower than equivalent C++ code. Ultimately, my personal experience is that Java could be about half the speed of C++, on average. However, there's realistically no way to back up any performance statements without an extremely comprehensive benchmark suite, because of the fundamentally different algorithms involved. b. Would it be possible to create a modern AAA title using Java? I assume that you mean "game", here, and not a chance. Firstly, you'd have to write everything from scratch yourself as nearly all the existing libraries and infrastructure target C++. Whilst not making it impossible per se, it could certainly contribute solidly towards unfeasible. Secondly, even the C++ engines can hardly fit in the tiny memory constraints of existing consoles- if JVMs even exist for those consoles- and PC gamers expect a little more for their memory. Creating performant AAA games is hard enough in C++, I don't see how it could be achieved in Java. Nobody has ever written an AAA game with significant time spent in a non-compiled language. More than that, it would simply be extremely error-prone. Deterministic destruction is essential when dealing with, for example, GPU resources- and in Java, you'd basically have to malloc() and free() them. c. In what areas specifically is Java slower than C++, if at all?
(i.e. Number-crunching, graphics, or just all around) I'd definitely go for all-around. The enforced-reference nature of all Java objects mean that Java has far more indirection and references in it than C++ does- an example I gave earlier with arrays, but also applies to all member objects, for example. Where a C++ compiler can look up a member variable in constant time, a Java run-time has to follow another pointer. The more accesses you do, the slower this is gonna get, and there's nothing the JIT can do about it. Where C++ can free and re-use a piece of memory almost instantly, in Java you have to wait for the collection, and I hope that piece didn't go out of cache, and inherently requiring more memory means lower cache and paging performance. Then look at the semantics for things like boxing and unboxing. In Java, if you want to reference an int, you have to dynamically allocate it. That's an inherent waste compared to the C++ semantics. Then you have the generics problem. In Java, you can only operate on generic objects through run-time inheritance. In C++, templates have literally zero overhead- something Java can't match. This means that all generic code in Java is inherently slower than a generic equivalent in C++. And then you come to Undefined Behaviour. Everyone hates it when their program exhibits UB, and everyone wishes that it didn't exist. However, UB fundamentally enables optimizations that can never exist in Java. Take a look at this post describing optimizations based on UB. Not defining behaviour means that implementations can do more optimizations and reduce the code required to check for conditions that would be undefined in C++ but defined in Java. Fundamentally, the semantics of Java dictate that it is a slower language than C++. Is Java now considered a compiled language or interpreted language? It doesn't really fit into either of those groups. I'd say that managed is really a separate category on it's own, although I'd say it's definitely more like an interpreted language than a compiled language. More importantly, there pretty much only are two major managed systems, the JVM and the CLR, and when you say "managed" it's sufficiently explicit. What are some major shortcomings of Java that have been addressed
since the early days? Automatic boxing and unboxing is the only thing I know of. The generics solve some issues, but far from all of them. What are some major shortcomings of Java that have yet to be
addressed? Their generics are very, very weak. C#'s generics are considerably stronger- although of course, neither is quite C++ templates. Deterministic destruction is another major lack. Any form of lambda/closure is also a major problem- you can forget about a functional API in Java. And, of course, there's always the issue of performance, for those areas that need it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101649",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29987/"
]
} |
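To illustrate the boxing and double-indirection points in the answer above, a deliberately tiny example of mine - not a benchmark, which the answer itself rightly says is hard to do well:
public class BoxingExample {
    public static void main(String[] args) {
        int[] primitives = new int[1000];       // one contiguous block of 1000 ints
        Integer[] boxed = new Integer[1000];    // 1000 references, each to a separately allocated Integer object

        for (int i = 0; i < primitives.length; i++) {
            primitives[i] = i;
            boxed[i] = i;                       // autoboxing: allocates (or reuses a cached) Integer
        }

        long primitiveSum = 0;
        long boxedSum = 0;
        for (int i = 0; i < primitives.length; i++) {
            primitiveSum += primitives[i];      // direct read from the array
            boxedSum += boxed[i];               // extra pointer dereference plus unboxing
        }
        System.out.println(primitiveSum + " " + boxedSum);
    }
}
The second array is the "array of pointers which could point anywhere" the answer describes, and it is the cache behaviour of that layout, more than any single instruction, that costs the time.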
101,716 | The section entitled Algorithmic Implementation has the following code: // Return RC low-pass filter output samples, given input samples,
// time interval dt, and time constant RC
function lowpass(real[0..n] x, real dt, real RC)
var real[0..n] y
var real α := dt / (RC + dt)
y[0] := x[0]
for i from 1 to n
y[i] := α * x[i] + (1-α) * y[i-1]
return y what does := mean? | := is the assignment operator for languages that use single equals sign equality testing. The most well known of those languages is Pascal. Due to C's influence most languages switched to = for assignment and == for testing. Some older texts and authors that were trained in such styles use := for pseudocode. You sometimes see arrows <- as well for assignment . From the article: input: an array a of length n with array elements numbered 0 to n − 1
inc ← round(n/2)
while inc > 0 do:
for i = inc .. n − 1 do:
temp ← a[i]
j ← i
while j ≥ inc and a[j − inc] > temp do:
a[j] ← a[j − inc]
j ← j − inc
a[j] ← temp
inc ← round(inc / 2.2) Some modern languages use arrows for assignment; most notably R , which uses it for global assignment whilst using the single equals ( = ) for local assignment. From Sebesta's Concepts of Programming Languages and the class notes of Dr. K. N. King we learn that the assignment standards go back much farther than C or Pascal. It appears that in 1958 when Algol was being designed, it was decided to use := for assignment. The commitee was composed of American and European representatives. Some of the Germans on the committee were familiar with Konrad Zuse's Plankalkul language (which was drafted during World War II but not published until 1972 and not implemented until 2005) and wanted the assignment to follow that language's proposed assignment method which was b+c => a where b+c is assigned to a. The committee changed this to =: on the grounds that the method of entering programs at the time called a keypunch, did not have a ">" to use. So they compromised on the equals colon. However, the Americans being familiar with FORTRAN (it didn't have lower case until 1990) wanted the assignment to operate to the left since that was how FORTRAN did it. So they managed to get it changed to := instead and had the assignment operate toward the left rather than the right in the style of FORTRAN (being a known implemented language) rather than Plankalkul (a virtually unknown language outside of Germany and not implemented). Algol 60 strongly influenced all major subsequent imperative languages including Pascal and C. Thus Pascal kept ALGOL's syntax for assignment and both kept the lefthandedness of assignment. ALGOL was designed to be easy to read and close to mathematical notation. It was the de facto (and basically de jure) standard for writing algorithms in journals for next 20+ years. Therefore, instructors and computer scientists educated from 1960 to around 1980 would have been familiar with that style of notation. The release of the IBM 029 Keypunch in 1964 allowed for > and < characters, thus prompting their inclusion in C among others. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101716",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
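For readers more at home with C-family syntax, here is the question's low-pass filter transliterated into Java, with every := becoming a plain =; double stands in for the pseudocode's real type.
public class LowPass {
    // RC low-pass filter: y[i] = alpha * x[i] + (1 - alpha) * y[i-1]
    public static double[] lowpass(double[] x, double dt, double rc) {
        double[] y = new double[x.length];
        double alpha = dt / (rc + dt);   // the pseudocode's "alpha := dt / (RC + dt)"
        y[0] = x[0];                     // the pseudocode's "y[0] := x[0]"
        for (int i = 1; i < x.length; i++) {
            y[i] = alpha * x[i] + (1 - alpha) * y[i - 1];
        }
        return y;
    }
}
Nothing about the algorithm changes; := and = (and the arrow in the shellsort pseudocode) all mean "store the value on the right into the name on the left".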
101,762 | Possible Duplicate: How do you keep track of the authors of code? One of my colleagues is in the habit of putting his name and email address in the head of each source file he works on, as author metadata. I am not; I prefer to rely on source control to tell me who I should be speaking to about a given set of functionality. Should I also be signing files I work on for any other reasons? Do you? If so, why? To be clear, this is in addition to whatever metadata for copyright and licensing information is included, and applies to both open sourced and proprietary code. | Not really, no. There are a couple of reasons why: Your version control system (VCS) stores this metadata already . E.g. each commit in git has a field for the name who made the commit. Competent version control systems allow you to see who made a change on a specific lines of code as well. That functionality is usually called blame which is a misnomer as, instead of finding someone to do actual blaming on, it is most useful for finding someone to talk with about a problem you have in the context of the piece of code). I've seen some header comments that have history log as well, but that can easily be extracted from a VCS as well. It discourages code collaboration . "Hey, someone made this... maybe we shouldn't touch his code". If there is no sole code ownership on some source code file then the higher chance that someone else will change it, which in turn facilitates and enables refactoring. What about copyright notices? Though for the sake of copyright you might want to add the company's name as the copyright holder in the header comments (aka copyright banner). If it's an open source project then it is often the name of the maintainer organization or just one maintainer instead. However it is now common practice to put this information in text files at the root of the project. You can see these text files done on open source projects with names such as README , LICENSE , and/or CONTRIBUTORS . What about database scripts? You could argue you need to do history logs in header comments with database SQL scripts though lately with database migration tools has made this an obsolete practice. Database migration tools usually keep track of which migrations were run and can do rollbacks as needed. These tools are more flexible to use than a text file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14728/"
]
} |
101,822 | This is for the freelancers. In my past life as a full-time employee, I was involved in hiring and interviewed a good number of developers. I learned that it was much more valuable to see a developer work through a problem, however small, than to talk about experience, etc. Now I'm working as a freelancer/consultant. One of the nicer things is the freedom to choose what I want to work on. But now, a few years in, I've learned (the hard way) that some clients are not worth the money. They can be fickle, unreasonable, demanding, and difficult to work with in a very demoralizing way. So I've been wondering: is there a better way to "interview" clients? Obviously any method would have to be subtle, as the client probably isn't expecting to be tested. But is there an analogue to " FizzBuzz " that will tell me whether I should sign on, or back away slowly? | For many of my former clients, I often scheduled initial meetings at their office. Ostensibly this creates less heartache for them because you're saving them the trip, but more importantly it keeps them from seeing the "man behind the curtain" and it gives you a good close up look at how their office operates. I would typically show up anywhere from 30 minutes to an hour before the scheduled appointment. I would bring a technical book that I could skim and read, and I would patiently wait in some out of the way area. This would afford me a good opportunity to gauge how the client reacted to difference in the time. Often I would offer to wait eagerly because I wanted to see how the office interacted. If a client treats the people who work for him every day like dirt, chances are good that he'll treat you like dirt once he no longer needs to worry about being nice to you. If that didn't work, I would offer a meeting place where I could buy the prospective client(s) lunch. Then I would watch to see how they treated the wait staff, and I would be very attentive to their comments about the other patrons, etc. Since the relationship with the client is exactly that, the only way to test the truth of a potential relationship is to see how the person behaves towards people they don't have to treat with respect. Observation is your best friend, and unfortunately in most client situations, you are in a carefully controlled meeting environment that prevents this very important data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101822",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5213/"
]
} |
101,830 | I am new to static analysis of code. My application has a Cyclomatic complexity of 17,754. The application itself is only 37,672 lines of code. Is it valid to say that the complexity is high based on the lines of code? What exactly is the Cyclomatic complexity saying to me? | What exactly is the Cyclomatic complexity saying to me? Cyclomatic complexity is not a measure of lines of code, but the number of independent paths through a module. Your cyclomatic complexity of 17,754 means that your application has 17,754 independent paths through it. This has a few implications, typically in terms of how difficult it is to understand and test your application. For example, the cyclomatic complexity is an upper bound on the number of test cases needed to achieve 100% branch coverage, assuming well-written tests. A good starting point might be the Wikipedia article on cyclomatic complexity . It has a couple of snippets of pseudocode and some graphs that show what cyclomatic complexity is all about. If you want to know more, you could also read McCabe's paper where he defined cyclomatic complexity . My application has a Cyclomatic complexity of 17,754. The application itself is only 37,672 lines of code. Is it valid to say that the complexity is high based on the lines of code? Not at all. An application with few lines of code and a high number of conditionals nested within loops could have an extremely high cyclomatic complexity. On the other hand, an application with few conditions might have a low cyclomatic complexity. That's oversimplifying it a bit, but I think it gets the idea across. Without knowing more about what your application does, it might be normal to have a higher cyclomatic complexity. I would suggest measuring cyclomatic complexity on a class or method level, however, instead of just an application level. This is a little more manageable, conceptually, I think - it's easier to visualize or conceptualize the paths through a method than paths through a large application. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101830",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17476/"
]
} |
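A small made-up method to make the "independent paths" idea in the answer above concrete:
public class Shipping {
    // Cyclomatic complexity here is 3 (two decisions + 1): three linearly
    // independent paths through the method, even though it is only a few lines long.
    public static double shippingCost(double orderTotal, boolean express) {
        double cost = 5.0;
        if (orderTotal > 100.0) {   // decision 1
            cost = 0.0;
        }
        if (express) {              // decision 2
            cost += 10.0;
        }
        return cost;
    }
}
Stacking a few more conditionals, or nesting them inside loops, grows this number quickly - which is how a 37,672-line application can end up with a complexity of 17,754.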
101,873 | I was looking through various APIs and SDKs, when I realized that I couldn't really tell the difference between something called an API and something called an SDK . Both of them are, conceptually, a way for your program to interface with and control the resources provided by another piece of software, whether that other software is a web service, an end-user app, an OS service or daemon, or a kernel device driver. So, what is the semantic difference between an SDK and an API? | I think it rather falls along the lines of "All SDKs are APIs but not all APIs are SDKs". An SDK seems to be a complete set of APIs that allow you to perform most any action you would need to for creating applications. In addition an SDK may include other tools for developing for the platform/item that it is for. An API on the other hand is just a series of related methods that may be good for a specific purpose. As an example, the JDK (Java Development Kit) contains the API as well as the compilers, runtimes, and other miscellaneous tools. The Java API is simply all the libraries that make up the core language that you can work with out of the box. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101873",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19295/"
]
} |
101,931 | I am sure this question has been asked many times. However, I like to ask it again with the intention of what is the future of these languages. I was first introduced to Groovy and really liked it. I felt the syntax was simpler and it was much closer to Java and I was able to quickly learn Grails . Then there was Scala , and the web frame work Lift . I am still learning Scala and I find the syntax very difficult at times. However, I still wonder what is the future of Groovy. When the author of Groovy says he would have never created groovy if he knew about Scala, then it makes me wonder if there is a future at all. Of course Groovy has came a long way and Grails is used today by many large companies. If one was to look at Grails vs Lift today, then Grails would be clear winner. More companies are using it. But given everything I have said so far, I am interested to know if one should invest in Groovy? Is Groovy going away and Scala the better choice? If the CEO of BMW says he drives a Mercedes then one would wonder why shouldn't we all drive Mercedes too, right? (I understand if this question is really broad and might be closed. I hope to make it an open Wiki for others though.) | I don't think Groovy is going to disappear, it is a self sustained entity now. Grails framework is built around Groovy language, which it is a good reason to stay alive. Groovy got its killer application. I'm not saying Grails is the only vital Groovy project, Gradle is also pretty good. GORM is really handy. Gant is a nice improvement over Ant. Not mention that Groovy/Grails are now under the umbrella of SpringSource and they are committed to the both. If the CEO of BMW says he drives a Mercedes then one would wonder why shouldn't we all drive Mercedes too, right? Guess what James Gosling chose? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101931",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34539/"
]
} |
101,944 | I currently have an open source project that is licensed under GPLv3. I'd like to dual license it, so I can offer it for commercial use. Are there any resources for choosing or creating a commercial software license? | I think Binpress' license generator is excellent: Adjust the options below to fit your business needs. Once you hit "Generate License" at the bottom, you will be given a unique address to your generated license. If you leave an Email address and name, you can edit your license at any time (optional). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101944",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22470/"
]
} |
101,952 | I am doing unit testing. I am trying to test one function. I call it from my test component. But if the remote function cannot handle the exception, then my tester component will also get the exception, I guess. So should I worry about getting an exception in my tester component? Thanks. EDIT: PS: Throwing an error is good, but only for other functions, not to end users, unless it's a last option! OMG, I wrote a programming quote!! | Short answer: NO. Don't catch exceptions in unit tests. You are unit testing to find errors and situations where exceptions are raised. The unit test framework should handle exceptions in a sane manner. Most (if not all) xUnit frameworks have a construct to expect certain exceptions, which you use when you want to induce a particular exception condition in the system under test and have a test pass if the expected exception is raised but fail if it does not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/101952",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9421/"
]
} |
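As a concrete example of the "expect certain exceptions" construct mentioned in the answer above, in JUnit 4 syntax; the validator class under test is hypothetical.
import org.junit.Test;

public class AgeValidatorTest {
    // Hypothetical class under test.
    static class AgeValidator {
        void validate(int age) {
            if (age < 0) {
                throw new IllegalArgumentException("age must not be negative");
            }
        }
    }

    // No try/catch anywhere: the framework fails the test on any unexpected exception,
    // and passes this one only if the expected exception is actually thrown.
    @Test(expected = IllegalArgumentException.class)
    public void rejectsNegativeAge() {
        new AgeValidator().validate(-1);
    }

    @Test
    public void acceptsZero() {
        new AgeValidator().validate(0); // any exception here simply fails the test
    }
}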
102,023 | Should the usage of Dependency Injection and an Inversion of Control container remove all occurrences of the " new " keyword from your code? In other words, should every object/dependency, no matter how simple or short-lived, be "registered" within your IoC container and injected into the method/class that needs to use them? If no, where do you draw the line between which dependencies/objects get registered in the IoC container, versus what gets created "in-line" as a concrete reference, via the new keyword ? | Avoid dogma. Do what feels right. I prefer to use "new" for data structures which have no behavior. If the class does have behavior, I then look at the scope of that behavior. If it is stateless and has no dependencies, I lean towards "new". I only begin refactoring towards DI when I need to add a dependency to stateful resources (such as a database or a file), or to other classes with such resources. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102023",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4351/"
]
} |
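A brief sketch of the line the answer above draws - inject stateful or external dependencies, but keep using new for simple value objects; all class names here are illustrative, not a recommendation of any particular container.
public class OrderService {
    private final PaymentGateway gateway; // stateful/external dependency: injected

    public OrderService(PaymentGateway gateway) {
        this.gateway = gateway;           // constructor injection; an IoC container (or a test) supplies it
    }

    public Receipt checkout(double amount) {
        // Plain data with no behaviour worth abstracting: just use new.
        Receipt receipt = new Receipt(amount);
        gateway.charge(amount);
        return receipt;
    }
}

interface PaymentGateway {
    void charge(double amount);
}

class Receipt {
    final double amount;
    Receipt(double amount) {
        this.amount = amount;
    }
}
Registering Receipt in the container would buy nothing; swapping PaymentGateway for a fake in tests is exactly what the injection is for.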
102,041 | I must confess that I was not so strong in data structures when I graduated out of college. Throughout the campus placements during my graduation, I've witnessed that most of the biggie tech companies like Amazon, Microsoft etc focused mainly on data structures. It appears as if data structures is the only thing that they expect from a graduate. To be honest, I felt bad about that. I write good code. I follow standard design patterns of coding, I do use data structures but at the superficial level as in Java exposed APIs like ArrayList, LinkedList etc. But the companies usually focused on the intricate aspects of Data Structures like pointer based memory manipulation and time complexities. Probably because of my Java background, back then, I understood code efficiency and logic only when talked in terms of Object Oriented Programming like objects, instances, etc but I never drilled down into the level of bits and bytes. I did not want people to look down upon me for this knowledge deficit of mine in Data Structures. So really why all this emphasis on Data Structures? | most of the biggie tech companies like Microsoft focus mainly on data structures. It appears as if data structures is the only thing that they expect from a graduate. No, there's more. For example, we also expect that you be a quick learner who can learn new frameworks, APIs or even programming languages within a short amount of time. That's a bare minimum bar. Someone who takes a long time to learn a new framework, API or language will not be a successful developer on most teams at Microsoft. And of course there are many more aspects that we focus on in interviews other than just raw knowledge of data structures. Ability to deal with ambiguous specifications, for example, or ability to recognize coding patterns that produce insecure code, or a dozen other things. But ability to understand data structures certainly is a very big one. It is particularly the case that interviews are biased towards testing knowledge of data structures for recent CS graduates. Recent graduates, most of whom do not have a lot of real-world experience, are not expected to be good at the same sorts of things that someone with fifteen years of industry experience would be good at. I must confess that I was not so strong in data structures It's good that you know that about yourself. If you're unable or unwilling to change that about yourself then my recommendation is that you don't apply for a job that requires facility with data structures. there is this general perspective that a good programmer is necessarily a one with good knowledge about data structures. It's tautological that a good programmer is a programmer who is good at building the sorts of programs that need to be built. Lots of programmers work on tasks that do not require deep knowledge of data structures. Some of them work on tasks that require a deep knowledge of user interface design, for example. Or database normalization. Or whatever. Those people can still be "good programmers" in their domains. why all this emphasis on Data Structures? I ask interview questions about data structures because on my team the developers design, implement and manipulate complex data structures all day every day. Yesterday we had four hours of meetings in which a half-dozen developers argued the pros and cons of adding single Boolean field to a particular tree node. There is probably no skill on my team more important than ability to understand data structures at a deep level. 
It would be foolish to not ask interview questions about it, since that's what we do. Does not having knowledge in Data Structures really affect one's career in programming? Well it certainly will prevent you from getting a job on my team. But like I said before, programming is a huge field. There are lots of kinds of computer programming that don't require knowledge of data structures. is the knowledge in this subject really a sufficient basis to differentiate a good and a bad programmer? No. But it is almost always sufficient to detect developers who are unlikely to be successful at Microsoft. Since that is what I am primarily interested in detecting, knowledge of data structures is one of the factors I test for in interviews. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102041",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23010/"
]
} |
102,090 | As a professional Java programmer, I've been trying to understand - why the hate toward Java for modern web applications? I've noticed a trend that out of modern day web startups, a relatively small percentage of them appears to be using Java (compared to Java's overall popularity). When I've asked a few about this, I've typically received a response like, "I hate Java with a passion." But no one really seems to be able to give a definitive answer. I've also heard this same web startup community refer negatively to Java developers - more or less implying that they are slow, not creative, old. As a result, I've spent time working to pick up Ruby/Rails, basically to find out what I'm missing. But I can't help thinking to myself, "I could do this much faster if I were using Java," primarily due to my relative experience levels. But also because I haven't seen anything critical "missing" from Java, preventing me from building the same application. Which brings me to my question(s) : Why is Java not being used in modern web applications? Is it a weakness of the language? Is it an unfair stereotype of Java because it's been around so long (it's been unfairly associated with its older technologies, and doesn't receive recognition for its "modern" capabilities)? Is the negative stereotype of Java developers too strong? (Java is just no longer "cool") Are applications written in other languages really faster to build, easier to maintain, and do they perform better? Is Java only used by big companies who are too slow to adapt to a new language? | Modern day startups need to hit the market as soon as possible. They don't need to spend about six months in order to release their Java web application. Twitter for example was built using Rails/Ruby but once it became unscalable, they migrated to the JVM. Not to mention that the development process isn't productive: code -> compile -> deploy while it is in frameworks like (Rails/Django/Grails): run testing server -> code -> change things and see what happens. The good news is that JRebel lets you see code changes instantly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102090",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34605/"
]
} |
102,205 | I'm going to ask what is probably quite a controversial question: "Should one of the most
popular encodings, UTF-16, be considered harmful?" Why do I ask this question? How many programmers are aware of the fact that UTF-16 is actually a variable length encoding? By this I mean that there are code points that, represented as surrogate pairs, take more than one element. I know; lots of applications, frameworks and APIs use UTF-16, such as Java's String, C#'s String, Win32 APIs, Qt GUI libraries, the ICU Unicode library, etc. However, with all of that, there are lots of basic bugs in the processing of characters outside the BMP (characters that should be encoded using two UTF-16 elements). For example, try to edit one of these characters: 𝄞 (U+1D11E) MUSICAL SYMBOL G CLEF, 𝕥 (U+1D565) MATHEMATICAL DOUBLE-STRUCK SMALL T, 𝟶 (U+1D7F6) MATHEMATICAL MONOSPACE DIGIT ZERO, 𠂊 (U+2008A) Han Character. You may miss some, depending on what fonts you have installed. These characters are all outside of the BMP (Basic Multilingual Plane). If you cannot see these characters, you can also try looking at them in the Unicode Character reference. For example, try to create file names in Windows that include these characters; try to delete these characters with a "backspace" to see how they behave in different applications that use UTF-16. I did some tests and the results are quite bad: Opera has problems editing them (deleting required 2 presses of backspace). Notepad can't deal with them correctly (deleting required 2 presses of backspace). File name editing in Windows dialogs is broken (deleting required 2 presses of backspace). QT3 applications can't deal with them at all - they show two empty squares instead of one symbol. Python encodes such characters incorrectly when they are used directly: u'X'!=unicode('X','utf-16') on some platforms when X is a character outside the BMP. Python 2.5's unicodedata fails to get properties of such characters when Python is compiled with UTF-16 Unicode strings. StackOverflow seems to remove these characters from the text if they are edited directly as Unicode characters (these characters are shown using HTML Unicode escapes). A WinForms TextBox may generate an invalid string when limited with MaxLength. It seems that such bugs are extremely easy to find in many applications that use UTF-16. So... Do you think that UTF-16 should be considered harmful? | This is an old answer. See UTF-8 Everywhere for the latest updates. Opinion: Yes, UTF-16 should be considered harmful. The very reason it exists is that some time ago there was a misguided belief that widechar was going to be what UCS-4 now is. Despite the "anglo-centrism" of UTF-8, it should be considered the only useful encoding for text. One can argue that the source code of programs, web pages and XML files, OS file names and other computer-to-computer text interfaces should never have existed. But when they do, text is not only for human readers. On the other hand, the UTF-8 overhead is a small price to pay while it has significant advantages, such as compatibility with unaware code that just passes strings with char*. This is a great thing. There are few useful characters which are SHORTER in UTF-16 than they are in UTF-8. I believe that all other encodings will die eventually. This means MS-Windows, Java, ICU and Python will eventually stop using it as their favorite. After long research and discussions, the development conventions at my company ban using UTF-16 anywhere except OS API calls, and this despite the importance of performance in our applications and the fact that we use Windows.
Conversion functions were developed to convert always-assumed-UTF8 std::string s to native UTF-16, which Windows itself does not support properly . To people who say " use what needed where it is needed ", I say: there's a huge advantage to using the same encoding everywhere, and I see no sufficient reason to do otherwise. In particular, I think adding wchar_t to C++ was a mistake, and so are the Unicode additions to C++0x. What must be demanded from STL implementations though is that every std::string or char* parameter would be considered unicode-compatible. I am also against the " use what you want " approach. I see no reason for such liberty. There's enough confusion on the subject of text, resulting in all this broken software. Having above said, I am convinced that programmers must finally reach consensus on UTF-8 as one proper way. (I come from a non-ascii-speaking country and grew up on Windows, so I'd be last expected to attack UTF-16 based on religious grounds). I'd like to share more information on how I do text on Windows, and what I recommend to everyone else for compile-time checked unicode correctness, ease of use and better multi-platformness of the code. The suggestion substantially differs from what is usually recommended as the proper way of using Unicode on windows. Yet, in depth research of these recommendations resulted in the same conclusion. So here goes: Do not use wchar_t or std::wstring in any place other than adjacent point to APIs accepting UTF-16. Don't use _T("") or L"" UTF-16 literals (These should IMO be taken out of the standard, as a part of UTF-16 deprecation). Don't use types, functions or their derivatives that are sensitive to the _UNICODE constant, such as LPTSTR or CreateWindow() . Yet, _UNICODE always defined, to avoid passing char* strings to WinAPI getting silently compiled std::strings and char* anywhere in program are considered UTF-8 (if not said otherwise) All my strings are std::string , though you can pass char* or string literal to convert(const std::string &) . only use Win32 functions that accept widechars ( LPWSTR ). Never those which accept LPTSTR or LPSTR . Pass parameters this way: ::SetWindowTextW(Utils::convert(someStdString or "string litteral").c_str()) (The policy uses conversion functions below.) With MFC strings: CString someoneElse; // something that arrived from MFC. Converted as soon as possible, before passing any further away from the API call:
std::string s = str(boost::format("Hello %s\n") % Convert(someoneElse));
AfxMessageBox(MfcUtils::Convert(s), _T("Error"), MB_OK); Working with files, filenames and fstream on Windows: Never pass std::string or const char* filename arguments to fstream family. MSVC STL does not support UTF-8 arguments, but has a non-standard extension which should be used as follows: Convert std::string arguments to std::wstring with Utils::Convert : std::ifstream ifs(Utils::Convert("hello"),
std::ios_base::in |
std::ios_base::binary); We'll have to manually remove the convert, when MSVC's attitude to fstream changes. This code is not multi-platform and may have to be changed manually in the future See fstream unicode research/discussion case 4215 for more info. Never produce text output files with non-UTF8 content Avoid using fopen() for RAII/OOD reasons. If necessary, use _wfopen() and WinAPI conventions above. // For interface to win32 API functions
std::string convert(const std::wstring& str, unsigned int codePage /*= CP_UTF8*/)
{
// Ask me for implementation..
...
}
std::wstring convert(const std::string& str, unsigned int codePage /*= CP_UTF8*/)
{
// Ask me for implementation..
...
}
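// --- Added sketch, not part of the original answer: one way the two elided
// --- bodies above could be implemented on Windows, using the WinAPI calls
// --- WideCharToMultiByte / MultiByteToWideChar (requires <windows.h>).
// --- The "_sketch" names are ours, chosen to avoid clashing with the
// --- author's declarations; error handling is intentionally minimal.
std::string convert_sketch(const std::wstring& str, unsigned int codePage = CP_UTF8)
{
    if (str.empty()) return std::string();
    // First call asks for the required buffer size, second call converts.
    int len = ::WideCharToMultiByte(codePage, 0, str.data(), (int)str.size(),
                                    NULL, 0, NULL, NULL);
    std::string out(len, '\0');
    ::WideCharToMultiByte(codePage, 0, str.data(), (int)str.size(),
                          &out[0], len, NULL, NULL);
    return out;
}
std::wstring convert_sketch(const std::string& str, unsigned int codePage = CP_UTF8)
{
    if (str.empty()) return std::wstring();
    int len = ::MultiByteToWideChar(codePage, 0, str.data(), (int)str.size(),
                                    NULL, 0);
    std::wstring out(len, L'\0');
    ::MultiByteToWideChar(codePage, 0, str.data(), (int)str.size(),
                          &out[0], len);
    return out;
}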
// Interface to MFC
std::string convert(const CString &mfcString)
{
#ifdef UNICODE
return Utils::convert(std::wstring(mfcString.GetString()));
#else
return mfcString.GetString(); // This branch is deprecated.
#endif
}
CString convert(const std::string &s)
{
#ifdef UNICODE
return CString(Utils::convert(s).c_str());
#else
Exceptions::Assert(false, "Unicode policy violation. See W569"); // This branch is deprecated as it does not support unicode
return s.c_str();
#endif
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102205",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35048/"
]
} |
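To make the surrogate-pair behaviour described in the question above concrete, here is a minimal, self-contained Java sketch (java.lang.String is UTF-16 internally); the class name and printed text are made up for illustration:

public class SurrogatePairDemo {
    public static void main(String[] args) {
        String clef = "\uD834\uDD1E"; // U+1D11E MUSICAL SYMBOL G CLEF as a surrogate pair

        // length() counts UTF-16 code units, not characters: prints 2.
        System.out.println("code units : " + clef.length());

        // codePointCount() counts real characters: prints 1.
        System.out.println("code points: " + clef.codePointCount(0, clef.length()));

        // Naive per-char iteration sees two halves of one character, which is
        // exactly the kind of logic behind "backspace must be pressed twice" bugs.
        for (int i = 0; i < clef.length(); i++) {
            System.out.printf("char[%d] = U+%04X%n", i, (int) clef.charAt(i));
        }
    }
}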
102,245 | Is there an encyclopedia of algorithms similar in style to the Handbook of Mathematics? It seems useful to have large numbers of them available in one place. I know the Art of Computer Programming is considered a good source but it does not seem encyclopedic so much as instructive. Moderator Note: We're looking for long answers that provide some explanation and context. Don't just list a book: please explain why you're recommending a book or resource. Answers that don't explain anything will be deleted. See Good Subjective, Bad Subjective for more information. | I'm not sure if this is what you're looking for, but NIST has the Dictionary of Algorithms and Data Structures. It's a pretty comprehensive dictionary for data structures and algorithms (doh) and usually a good place to look when I find something I've never heard of before. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102245",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
102,381 | In another question I asked recently about best practices for whiteboarding , there was general consensus that thinking out loud while coming up with the answer was the best strategy. Indeed, long moments of silence are awkward. However after recent interviews I have noticed that if my thinking out loud leads to wrong solutions or down the wrong path, that with further consideration I would have seen, interviewers tend to quickly jump in and point out problems with my approach, especially if I stop to pause for a minute. This was not an isolated case, and happened during more than one interview with more than one interviewer. The other thing is that after the interview, on a problem I absolutely bombed, when I sat down and sketched out the problem on a piece of paper in silence I was able to sketch out the solution pretty quickly. Thinking out loud ends up with me spending brain cycles on reflecting on how what I say must be registering with the interviewer and in addition there's a fear of recognizing that I've gone down the wrong path and starting over after having written something on the board wastes a lot of time. Once you've started down one path and realize you've written a lot of junk, you can't undo it, whereas if you've thought quietly about it the interviewer wouldn't have seen the mess and it would have been quicker since whiteboarding a bad idea takes up more time than simply considering a bad idea. I don't want moments of silence but at the same time speaking takes more time, leads to self-consciousness and can lead to interviewer intervention on something I might have figured out myself with just a little more time. | It may not the best strategy for you, but it surely is nice for the interviewers , as long as you don't go "Full Metal Jacket"-crazy on them. Most interviewers appreciate that (at least for programming positions), as it allows them to: evaluate your thought-process , and guide you if you're on the wrong track . But feel free to say "hang on, let me have a think about this" and think things through before rambling on too much. Take your time ; but don't let them hanging for ages. They are anxious to see if you're stuck or not. Also, being on the wrong path at first is not a bad thing: it is your throught process . It's incremental and you need encounter issues along the way. Fairly normal. It's only bad if you don't see that you're on the wrong track, or refuse to see it when told so, and then don't manage to find the right way. It helps to get the conversation flowing and going forward. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102381",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/15053/"
]
} |
102,396 | Do software developers who choose not to put code optimization, standards and best practices as a top priority, create more useful code than those developers who want to worry about optimization, implementation of coding standards and practices above completing tasks on time? How do these differing methodologies compare when it comes to individual performance reviews? How do these styles compare in peer reviews? What is the best way to influence your team to implement more best practices during the SDLC? | No, they'll only get respect from the project's owner of the moment. They'll get trashed for years by: future maintainers, future testers, future project owners, future managers, and pretty much anyone involved with the codebase in the future. They might even get the same treatment from: the current testers, the current technical documentation writers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102396",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12139/"
]
} |
102,507 | We're a scrum team of 3 developers, 1 designer, the scrum master, and the product owner. However, we don't have official tester in our team. A problem that is always with us, is that, testing the application and passing those tests and removing bugs has been defined as one of the criteria to consider a PBI (Product Backlog Item) as done. But the problem is that, no matter how much we (3 developers and 1 designer) try to test the application and implemented use cases, still some bugs are not seen and ruin our presentation ( Murphy's law ) to stakeholders. As a remedy, we recommended that the company hire a new tester. Someone who's job would be testing, and testing only. An official professional tester. However, the problem is that, scrum master and stakeholders believe that a developer (or a designer) should also be a tester. Are they right? Should we developers (also designers) be testers too? | Ex ante: There seems to be a lot of confusion on what is regarded as testing what is not. Sure, every developer needs to test his code as he creates it, he/she needs to verify it works. She/he can't hand it to a tester before he/she thinks it's done and good enough. But developers don't see everything. They might not recognize bugs. These bugs can only be found later in the development cycle when thorough testing is conducted. The question is whether developers should conduct that kind of testing or not, and in my humble opinion this needs to be looked at from a project manager's point of view: Developers can be testers, but they shouldn't be testers . Developers tend to unintentionally/unconciously avoid to use the application in a way that might break it. That's because they wrote it and mostly test it in the way it should be used. A good tester on the other hand, tries to torture the application. His/her primary intention is to break it. They often use the application in ways developers wouldn't have imagined. They're closer to the users than the developer and often times have a different approach to test a workflow. Also, using developers as testers increases development costs and does not benefit the quality of the product as much as having a dedicated tester. I wouldn't let developers cross-test their works when I can have it done better by a tester for cheap. Only if the feedback loop between developers and testers became too expensive, I'd have developers crosstest each other's code, but in my experience that is rarely the case and it highly depends on the process. That does not mean a developer should be sloppy and leave everything to the tester. The software should be backed up by unit tests and technical errors should be reduced to a minimum before handing the software to the tester. Still, sometimes you have fix here, break there problems or other bugs that no developer could forsee, that's ok. Also, integration testing should be done mostly by the developers. The tester's main objective is to verify that the requirements are met. In such a small team (and also depending on the size of the application), I can also see the tester in a hybrid role, writing unit tests and UI tests. You should definitely hire one . But more important than the tester are regular freezes/branches. Don't present anything that hasn't been properly tested. When you've added a feature or changed something, everything surrounding it has to be verified again. You'll only get a bad reputation if your company doesn't. Don't release something unstable. 
When the customer wants to have the software by a certain date, then stop developing early enough and test it properly, so you have enough time for bug fixes. Often it's better to decline last-minute feature-requests than to implement them poorly or release without proper testing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102507",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31418/"
]
} |
102,617 | I've heard stories of this from senior coders and I've seen some of it myself. It seems that there are more than a few instances of programmers writing pointless code. I will see things like: Method or function calls that do nothing of value. Redundant checks done in a separate class file, object or method. if statements that always evaluate to true. Threads that spin off and do nothing of note. Just to name a few. I've been told that this is because programmers want to intentionally make the code confusing to raise their own worth to the organization or make sure of repeat business in the case of contractual or outsourced work. My question is: has anyone else seen code like this? What was your conclusion as to why that code was there? If anyone has written code like this, can you share why? | I haven't seen code like this, but I have seen code that looks pointless or is pointless for other reasons: Backward compatibility. You found a much better way to do things, but you must keep the old (and by now not very useful) API/function because some third-party module out there may be using it for something. Even if the function doesn't do anything useful, its absence might break some code. Defensive coding. You know the checks in this code are pointless because this was already checked elsewhere. But what if somebody changes that other code and removes or changes the checks so that they no longer match your preconditions? Organic growth. In big projects, over the years many things change, and it turns out some methods that were used before aren't used anymore, but nobody bothered to remove them since nobody kept track of whether a specific method was still in use; people just refactored their own pieces of code and by chance they all stopped using this method. Or conditions that once had meaning, but the application was refactored in other places so that the condition became always true, and nobody bothered to remove it. Over-designing. People might code some things "just in case we'd need it" and never actually need it. Like "let's spawn a thread in case we'd have to do some work offline", and then nobody asks to do anything offline, the programmer forgets about it and moves on to other projects (or maybe even another company), and that code remains there forever because nobody knows why it's there or whether it's safe to remove. So while I have never seen it done out of malice or as a misguided approach to job security, I've seen tons of times when it happens as a natural result of software development. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102617",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34203/"
]
} |
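A tiny Java illustration of the "defensive coding" and "backward compatibility" points above; the class, method and caller behaviour are invented for the example:

import java.util.Objects;

public class InvoicePrinter {
    // Looks pointless today: every current caller already validates its input.
    // It is defensive coding, cheap insurance against a future caller (or a
    // refactoring elsewhere) that stops validating.
    void print(String customerName) {
        Objects.requireNonNull(customerName, "customerName must not be null");
        System.out.println("Invoice for " + customerName);
    }

    // Looks pointless too: nobody in this codebase calls it anymore, but a
    // third-party module might. That is the backward-compatibility case.
    @Deprecated
    void printLegacy(String customerName) {
        print(customerName);
    }
}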
102,677 | I'm 21 years old and a first year master's computer science student. Whether or not to continue with my PhD has been plaguing me for the past few months. I can't stop thinking about it and am extremely torn on the issue. I have read http://www.cs.unc.edu/~azuma/hitch4.html and many, many other masters vs phd articles on the web. Unfortunately, I have not yet come to a conclusion. I was hoping that I could post my ideas about the issue on here in hopes to 1) get some extra insight on the issue and 2) make sure that I am correct in my assumptions. Hopefully people who have experience in the respective fields can tell me if I am wrong so I don't make my decision based on false ideas. Okay, to get this topic out of the way - money. Money isn't the most important thing to me, but it is still important. It's always been a goal of mine to make 6 figures, but I realize that will probably take me a long time with either path. According to most online salary calculating sites, the average starting salary for a software engineer is ~60-70k. The PhD program here is 5 years, so that's about 300k I am missing out on by not going into the workforce with a masters. I have only ever had ~1k at one time in my life so 300k is something I don't think I can accurately imagine. I know that I wouldn't have all of that at once obviously, but knowing I would be earning that is kinda crazy to me. I feel like I would be living quite comfortably by the time I'm 30 years old (but risk being too content too soon). I would definitely love to have at least a few years of my 20s to spend with that kind of money before I have a family to spend it all on. I haven't grown up very financially stable so it would be so nice to just spend some money…get a nice car, buy a new guitar or two, eat some good food, and just be financially comfortable. I have always felt like I deserved to make good money in my life, even as a kid growing up, and I just want to have it be a reality. I know that either path I take will make good money by the time I'm ~40-45 years old, but I guess I'm just sick of not making money and am getting impatient about it. However, a big idea pushing me towards a PhD is that I feel the masters path would give me a feeling of selling out if I have the capability to solve real questions in the computer science world. (pretty straight-forward - not much to elaborate on, but this is a big deal) Now onto other aspects of the decision. I originally got into computer science because of programming. I started in high school and knew very soon that it was what I wanted to do for a career. I feel like getting a masters and being a software engineer in the industry gives me much more time to program in my career. In research, I feel like I would spend more time reading, writing, trying to get grant money, etc than I would coding. A guy I work with in the lab just recently published a paper. He showed it to me and I was shocked by it. The first two pages was littered with equations and formulas. Then the next page or so was followed by more equations and formulas that he derived from the previous ones. That was his work - breaking down and creating all of these formulas for robotic arm movement. And whenever I read computer science papers, they all seem to follow this pattern. I always pictured myself coding all day long…not proving equations and things of that nature. I know that's only one part of computer science research, but that part bores me. 
A couple cons on each side - Phd - I don't really enjoy writing or feel like I'm that great at technical writing. Whenever I'm in groups to make something, I'm always the one who does the large majority of the work and then give it to my team members to write up a report. Presenting is different though - I don't mind presenting at all as long as I have a good grasp on what I am presenting. But writing papers seems like such a chore to me. And because of this, the "publish or perish" phrase really turns me off from research.
Another bad thing - I feel like if I am doing research, most of it would be done alone. I work best in small groups. I like to have at least one person to bounce ideas off of when I am brainstorming. The idea of being a part of some small elite group to build things sounds ideal to me. So being able to work in small groups for the majority of my career is a definite plus. I don't feel like I can get this doing research. Masters - I read a lot online that most people come in as engineers and eventually move into management positions. As of now, I don't see myself wanting to be a part of management. Lets say my company wanted to make some new product or system - I would get much more pride, enjoyment, and overall satisfaction to say "I made this" rather than "I managed a group of people that made this." I want to be a big part of the development process. I want to make things. I think it would be great to be more specialized than other people. I would rather know everything about something than something about everything. I always have been that way - was a great pitcher during my baseball years, but not so good at everything else, great at certain classes in school, but not so good at others, etc. To think that my career would be the same way sounds okay to me. Getting a PhD would point me in this direction. It would be great to be some guy who is someone that people look towards and come to ask for help because of being such an important contributor to a very specific field, such as artificial neural networks or robotic haptic perception. From what I gather about the software industry, being specialized can be a very bad thing because of the speed of the new technology. When it comes to being employed, I have pretty conservative views. I don't want to change companies every 5 years. Maybe this is something everyone wishes, but I would love to just be an important person in one company for 10+ (maybe 20-25+ if I'm lucky!) years if the working conditions were acceptable. I feel like that is more possible as a PhD though, being a professor or researcher. The more I read about people in the software industry, the more it seems like most software engineers bounce from company to company at rapid paces. Some even work like a hired gun from project to project which is NOT what I want AT ALL. But finding a place to make great and important software would be great if that actually happens in the real world. I'm a very competitive person. I thrive on competition. I don't really know why, but I have always been that way even as a kid growing up. Competition always gave me a reason to practice that little extra every night, always push my limits, etc. It seems to me like there is no competition in the research world. It seems like everyone is very relaxed as long as research is being conducted. The only competition is if someone is researching the same thing as you and its whoever can finish and publish first (but everyone seems to careful to check that circumstance). The only noticeable competition to me is just with yourself and your own discipline. I like the idea that in the industry, there is real competition between companies to put out the best product or be put out of business. I feel like this would constantly be pushing me to be better at what I do. One thing that is really pushing me towards a PhD is the lifetime of the things you make. 
I feel like if you make something truly innovative in the industry…just some really great new application or system…there is a shelf-life of about 5-10 years before someone just does it faster and more efficiently. But with research work, you could create an idea or algorithm that last decades. For instance, the A* search algorithm was described in 1968 and is still widely used today. That is amazing to me. In the words of Palahniuk, "The goal isn't to live forever, its to create something that will." Over anything, I just want to do something that matters. I want my work to help and progress society. Seriously, if I'm stuck programming GUIs for the next 40 years…I might shoot myself in the face. But then again, I hate the idea that less than 1% of the population will come into contact with my work and even less understand its importance. So if anything I have said is false then please inform me. If you think I come off as a masters or PhD, inform me. If you want to give me some extra insight or add on to any point I made, please do. Thank you so much to anyone for any help. | The choice is simple: If you want to do computer science research, get a PhD. If you want to learn advanced topics to be a better programmer, get a Masters. Now, there are some misconceptions you have about PhD programs. Given your long entry, let me reply with bullet points: You will be financially supported in a PhD program in the United States. This include tuition and money for rent, food, and a car. The pay, however, is mostly subsistence level. Once you do have a PhD, you will be able to command a higher salary. Job openings for Google and Apple mention PhDs as "preferred," even when it's not a research position. [This could be that graduate school is the new bachelors, the signal that you are capable of completing significant and intense work.] Yes, as a researcher you will be chasing money, writing papers, and reading related works. The amount that you code, however, entirely depends on the kind of school or company you end up at. In programs with few or no PhD students, you will be coding almost everything yourself. In lower-ranked or not-ranked PhD programs, you will be doing a large share of the coding. In top tier institutions, however, your students will be independent enough that you will not need to program at all. It all depends on what aspect of programming that you like. If it's creating and thinking creatively and problem finding that you love, you will get plenty of that conducting research. How good or bad you are at technical writing now doesn't matter. Why? Because you will improve. Your advisor will whip you into shape. You'll also be reading books like George Gopen's The Sense of Structure: Writing from the Reader's Perspective . I've never heard of any researcher being "too specialized" that their skills became completely obsolete due to technological change. A PhD knows enough of the fundamentals that they can forge new paths in the latest areas, always. The research world is highly competitive! This is usually a downside of research, having to deal with rejection upon rejection, as reviewers miss all of the points of your brilliance, sometimes to hyper-macho degrees. You think it's a plus, however. You may or may not hit a "home run" with your PhD. Yes, you may do something that changes the world, but most likely you will make something that shows you are competent at research. 
Overall, you seem like the type who wants to run on all of your cylinders, at full capacity, covering all of your skills. When you are a researcher, you are your own boss and you decide what is worthy of study. That is academic freedom. However, there is so much doubt in your message, I don't think you'd have the motivation to pull through the darker times of a six+ year program. Just to be sure, why don't you go for a Master's Thesis option and find an advisor. If you find you like research, then you will have just the right things to say when you are writing your personal statement for applying to a PhD program. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102677",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34931/"
]
} |
102,689 | One of the things I struggle with is not using Hungarian notation. I don't want to have to go to the variable definition just to see what type it is. When a project gets extensive, it's nice to be able to look at a variable prefixed by 'bool' and know that it's looking for true/false instead of a 0/1 value. I also do a lot of work in SQL Server. I prefix my stored procedures with 'sp' and my tables with 'tbl', not to mention all of my variables in the database respectively. I see everywhere that nobody really wants to use Hungarian notation, to the point where they avoid it. My question is, what is the benefit of not using Hungarian notation, and why does the majority of developers avoid it like the plague? | Because its original intention (see http://www.joelonsoftware.com/articles/Wrong.html and http://fplanque.net/Blog/devblog/2005/05/11/hungarian_notation_on_steroids ) has been misunderstood and it has been (ab)used to help people remember what type a variable is when the language they use is not statically typed. In any statically typed language you do not need the added ballast of prefixes to tell you what type a variable is. In many untyped script languages it can help, but it has often been abused to the point of becoming totally unwieldy. Unfortunately, instead of going back to the original intent of Hungarian notation, people have just made it into one of those "evil" things you should avoid. Hungarian notation in short was intended to prefix variables with some semantics. For example if you have screen coordinates (left, top, right, bottom), you would prefix variables with absolute screen positions with " abs " and variables with positions relative to a window with " rel ". That way it would be obvious to any reader when you passed a relative coordinate to a method requiring absolute positions. update (in response to comment by delnan) IMHO the abused version should be avoided like the plague because: it complicates naming. When (ab)using Hungarian notation there will always be discussions on how specific the prefixes need to be. For example: listboxXYZ or MyParticularFlavourListBoxXYZ . it makes variable names longer without aiding the understanding of what the variable is for. it sort of defeats the object of the exercise when in order to avoid long prefixes these get shortened to abbreviations and you need a dictionary to know what each abbreviation means. Is a ui an unsigned integer? an unreferenced counted interface? something to do with user interfaces? And those things can get long . I have seen prefixes of more than 15 seemingly random characters that are supposed to convey the exact type of the var but really only mystify. it gets out of date fast. When you change the type of a variable people invariably ( lol ) forget to update the prefix to reflect the change, or deliberately don't update it because that would trigger code changes everywhere the var is used... it complicates talking about code because as "@g ." said: Variable names with Hungarian notation are typically difficult-to-pronounce alphabet soup. This inhibits readability and discussing code, because you can't 'say' any of the names. ... plenty more that I can't recall at the moment. Maybe because I have had the pleasure of not having to deal with the abused Hungarian notation for a long while... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102689",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
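A short Java sketch of the original, semantic flavour of the notation described in the answer above (often called "Apps Hungarian"); the values and names are invented for illustration:

public class CoordinatePrefixDemo {
    public static void main(String[] args) {
        int absWindowLeft = 200; // absolute screen coordinate of the window
        int relMouseX = 35;      // mouse position relative to the window

        // Converting first, then working in one "kind" of coordinate,
        // reads as obviously right:
        int absMouseX = absWindowLeft + relMouseX;
        System.out.println("mouse at absolute x = " + absMouseX);

        // Mixing the two kinds without converting would read as obviously
        // wrong, even though the compiler is perfectly happy with it:
        // int width = absWindowLeft - relMouseX;
    }
}

The prefixes carry meaning a type system cannot check here (both are plain int), which is exactly the point of the original notation.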
102,699 | We have a project that everyone says we will be doing in a agile way but I doubt we have clearly understood what agile is. In previous projects we had planning meetings, then defined the product back log and allocated the work to developers in 2 to 3 week sprints. Every morning we had scrum meetings (which seemed to go on for 1/2 an hour each time) and each developer got on with it after that. Hardly anyone wrote any tests until at the end of sprint and work that was not completed was added on to the next sprint. Developers hardly spoke to each other and there was no TDD involved in development. In fact most developers had a spec at the start and just got on with it for the 2 or 3 weeks the sprint was arranged for. There was hardly any communication with the client/stake holder. QA got involved usually a few months later and by then we found missing requirements which further increased the amount of work we had to do. Clearly there was no feedback loop. So my question is, where did we go wrong and how can I prevent the team from making the same mistakes. | What you are describing isn't Agile by definition (Agile Manifesto) it is Waterfall with daily status meetings. Agile means easily adapting to change, if there is no interactive feedback loop with the product owner and thus the customers, then what change is occurring? Agile is about rapid failures, through constant communication with the product owner/customers. It is better to fail sooner than later, less work is done, and less is "lost". And you don't get stuck with the argument, that "we don't have time to do it correctly, since we spent so much time doing it wrong, we just need to continue on this same path, even though it leads to failure". Sounds like your managment is doing "SCRUM, but ..." where the "but" is where they throw out all the SCRUM stuff that they don't understand or agree with and just do things the same haphazard waterfall way as always, but with new shiny buzzword names to it all. In SCRUM the daily stand up is NOT about delivering status to management, it is to force developer interaction, so you know what your fellow team members are doing and can help each other out and not duplicate work. If it takes more than 45 seconds per person you are doing it wrong. It is about transparency for the team, if one person is giving the same status multiple days on something that should be a single days worth of work, the team can resolve the persons problem sooner than later. If you aren't testing each others code as it is written, then you aren't doing it correctly either. Testing should be embedded into the process not an after thought. QA should be included in the planning sessions and give estimates on how long things will take to test. If you are not meeting Sprint commitments and rolling things over, you aren't doing it correctly. Sprints are about commitments if you are committing to too much work, stop doing that, there is no way you can introduce any predictibility or repeatability if you can't accurately commit to deliverables. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102699",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34116/"
]
} |
102,771 | Are there scenarios where polling for events would be better than using the observer pattern? I have a fear of using polling and would only start using it if someone gave me a good scenario. All I can think of is how the observer pattern is better than polling. Consider this scenario: You are programming a car simulator. The car is an object. As soon as the car turns on, you
want to play a "vroom vroom" sound clip. You can model this in two ways: Polling : Poll the car object every second to see if it's on. When it's on, play the sound clip. Observer pattern : Make the car the Subject of the observer pattern. Have it publish the "on" event to all observers when itself turns on. Create a new sound object that listens to the car. Have it implement the "on" callback, which plays the sound clip. In this case, I think the observer pattern wins. Firstly, polling is more processor intensive. Secondly, the sound clip does not immediately fire when the car turns on. There can be up to a 1 second gap because of the polling period. | Imagine you want to get notified about every engine cycle, e.g. to display an RPM measurement to the driver. Observer pattern: The engine publishes an "engine cycle" event to all observers for each cycle. Create a listener that counts events and updates the RPM display. Polling: The RPM display asks the engine at regular intervals for an engine cycle counter, and updates the RPM display accordingly. In this case, the observer pattern would probably loose: the engine cycle is a high-frequency, high-priority process, you don't want to delay or stall that process just to update a display. You also don't want to thrash the thread pool with engine cycle events. PS: I also use the polling pattern frequently in distributed programming: Observer pattern: Process A sends a message to process B that says "each time an event E occurs, send a message to Process A". Polling pattern: Process A regularly sends a message to process B that says "if you event E occured since the last time I've polled, send me a message now". The polling pattern produces a bit more network load. But the observer pattern has downsides, too: If process A crashes, it will never unsubscribe, and process B will try to send notifications to it for all eternity, unless it can reliably detect remote process failures (not an easy thing to do) If event E is very frequent and/or the notifications carry a lot of data, then process A might get more event notifications than it can handle. With the polling pattern, it can just throttle the polling. In the observer pattern, high load can cause "ripples" through the whole system. If you use blocking sockets, these ripples can go both ways. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102771",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25498/"
]
} |
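A compact Java sketch of the two approaches from the question and answer above; the types and timings are invented purely for illustration:

import java.util.ArrayList;
import java.util.List;

interface EngineObserver {
    void onEngineStarted();
}

class Car {
    private final List<EngineObserver> observers = new ArrayList<>();
    private volatile boolean on;

    void addObserver(EngineObserver o) { observers.add(o); }
    boolean isOn() { return on; }

    // Observer style: the event is pushed the instant it happens.
    void turnOn() {
        on = true;
        for (EngineObserver o : observers) {
            o.onEngineStarted();
        }
    }
}

class ObservingSoundPlayer implements EngineObserver {
    @Override
    public void onEngineStarted() {
        System.out.println("vroom vroom (immediate)");
    }
}

class PollingSoundPlayer implements Runnable {
    private final Car car;
    PollingSoundPlayer(Car car) { this.car = car; }

    // Polling style: occupies a thread and can lag up to one full interval.
    @Override
    public void run() {
        try {
            while (!car.isOn()) {
                Thread.sleep(1000);
            }
            System.out.println("vroom vroom (up to 1s late)");
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

For the high-frequency RPM case in the answer, the roles flip: the display polls a counter on its own schedule instead of forcing the engine to notify it on every cycle.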
102,819 | There's a bug in my program. Doesn't really matter what the platform is. Every so often, a row in a ListView is the wrong color. I tried setting a watchpoint for the variable that is supposed to dictate the color of the row, and it doesn't change... I am guessing it means the problem could be in the framework code. I've only witnessed it happen once.. My client has noticed it though, and wants it fixed. No idea where to start. Someone told me to artificially increase the load of the program. What are some methods for hunting down difficult to find bugs? | Logging. Add a bunch of logging to the related module, set it to debug and get the user to send you a copy of the log when the error occurs again. If you can reproduce the error, set your logger to go to the console, and try to reproduce it. Look for anomalies that only occur when the colour is wrong. Back track. Find all references find any line of code that could possibly cause this issue. Then think about whether the code is doing what it should. Don't trust that your code works like you think it does. Challenge yourself - "this variable should be set to x" now test that it is. Set a conditional breakpoint. Or a print line. Or a log. Grab a friend. Do some pair programing, another pair of eyes may help you. Google the frame work, ask on SO, see if other people have had a similar issue. Maybe its not a bug in the framework, but a trap that people occasionally fall into. This is framework dependent, but in WPF you should display the output and look for binding errors. Go have a look over your clients shoulder. See if they are using the program in a different way to the way you test. Maybe they can reproduce it but aren't documenting their steps. Maybe you are assuming usage flow that is different to what the client is actually doing. Take careful notes, or even better run a screen recording programing. Once the bug occurs, restart the app, and follow exactly what you did before. if you can reproduce it then you're half way there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102819",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19204/"
]
} |
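A minimal Java sketch of the "add logging around the suspect code" advice above; the class and method are hypothetical, since the original post does not name a UI framework:

import java.util.logging.Level;
import java.util.logging.Logger;

public class RowColorChooser {
    private static final Logger LOG = Logger.getLogger(RowColorChooser.class.getName());

    String pickRowColor(int rowIndex, boolean highlighted) {
        String color = highlighted ? "YELLOW" : "WHITE";
        // Log every decision; when the client sends the log after the bug
        // strikes, you can see exactly which inputs produced the wrong color.
        LOG.log(Level.FINE, "row={0} highlighted={1} -> {2}",
                new Object[] { rowIndex, highlighted, color });
        return color;
    }
}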
102,856 | I'm a junior developer and I find it hard to estimate how much time it takes to finish a bigger software project. I know how to structure the architecture in general, but it's hard for me to know what details I have to do and what problems I have to solve. So it's hard to estimate how much time it will take to finish a bigger project, because I don't know what problems I need to solve and how long it takes to solve them. How do I explain this to a person that is not a software developer ? | You could ask him/her to estimate how long it would take for her to access some far away location in an uninhabited corner of the world. As an extreme example, let's choose some lesser known peak in the Himalayas, where very few (if any) people have ever climbed on. She would need an awful lot of preparation plus practice before even starting the journey, plus a bunch of permits, each of which can delay the trip for months to years... and a good support team... then once up on the hill slope, she would need to wait and pray for good weather to start climbing towards the peak... etc. etc. Most of these are hard to impossible to estimate, even with prior experience. And the point is: each software project is a bit like climbing a new mountain, where no one has been before, so no one has direct prior experience. Seasoned developers may have gathered experience on more or less similar projects, but there will always be new elements and surprises - otherwise, if a software project were exactly like some previous one, there would be absolutely no point doing it . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18/"
]
} |
102,869 | I saw an article that put forth this statement: Developers love to optimize code and with good reason. It is so
satisfying and fun. But knowing when to optimize is far more
important. Unfortunately, developers generally have horrible intuition
about where the performance problems in an application will actually
be. How can a developer avoid this bad intuition? Are there good tools to find which parts of your code really need optimization (for Java)? Do you know of some articles, tips, or good reads on this subject? | Use a good profiler to identify expensive methods. Document how long the hot spots actually took. Write a faster implementation of the hot spots. Document how long the hot spots now take, hopefully not making them hotspots anymore. Essentially you need to be able to prove to others where the problem was, and that this change made it go away. Not being able to prove an improvement qualifies - in my personal opinion - for immediate rollback to the original version. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/102869",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22941/"
]
} |
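A small Java sketch of the "document how long the hot spots actually took" step from the answer above; a real profiler is the right tool for finding the hot spot in the first place, and the workload below is made up:

public class HotspotTiming {
    static long suspectedHotspot(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += (long) Math.sqrt(i);
        }
        return sum;
    }

    public static void main(String[] args) {
        long start = System.nanoTime();
        suspectedHotspot(50_000_000);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        // Record this number before and after the change so the improvement
        // can be proven rather than claimed.
        System.out.println("suspectedHotspot took " + elapsedMs + " ms");
    }
}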