source_id | question | response | metadata
---|---|---|---|
145,832 | What is the fastest way to find the first (smallest) integer that doesn't exist in a given list of unsorted integers (and that is greater than the list's smallest value)? My primitive approach is sorting them and stepping through the list; is there a better way? | Assuming that you mean "integer" when you say "number", you can use a bitvector with one bit per value in the possible range (say your range includes integers between 1 and 256; then you can use a 256-bit, or 32-byte, bitvector). When you come across an integer in position n of your range, set the nth bit. When you're done enumerating the collection of integers, you iterate over the bits in your bitvector, looking for the positions of any bits still set to 0. These match the positions n of your missing integer(s). This is O(2*N), therefore O(N), and faster than sorting the entire list. (A short sketch of this approach follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48128/"
]
} |
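A minimal sketch of the bitvector approach described in the answer above, using java.util.BitSet. The method name and the assumption that the value range [min, max] is known up front are illustrative choices, not part of the original answer.

```java
import java.util.BitSet;

public class FirstMissing {
    // Marks every value seen, then reports the first position still clear.
    // Assumes all interesting values fall in the known range [min, max].
    static int firstMissing(int[] values, int min, int max) {
        BitSet seen = new BitSet(max - min + 1);
        for (int v : values) {
            if (v >= min && v <= max) {
                seen.set(v - min);              // set the bit for this value
            }
        }
        return min + seen.nextClearBit(0);      // first unset bit = first missing value
    }

    public static void main(String[] args) {
        int[] values = {3, 4, 7, 1, 2, 8};
        System.out.println(firstMissing(values, 1, 8)); // prints 5
    }
}
```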
145,941 | Should catch blocks be used for writing logic, i.e., to handle flow control, etc.? Or just for throwing exceptions? Does it affect the efficiency or maintainability of the code? What are the side effects (if there are any) of writing logic in a catch block? EDIT: I have seen a Java SDK class in which they have written logic inside the catch block. For example (snippet taken from java.lang.Integer class): try {
result = Integer.valueOf(nm.substring(index), radix);
result = negative ? new Integer(-result.intValue()) : result;
} catch (NumberFormatException e) {
String constant = negative ? new String("-" + nm.substring(index))
: nm.substring(index);
result = Integer.valueOf(constant, radix);
} EDIT2: I was going through a tutorial where they count it as an advantage that you can write the logic of exceptional cases separately: Exceptions enable you to write the main flow of your code and to deal
with the exceptional cases elsewhere. Any specific guidelines on when to write logic in a catch block and when not to? | The example you cite is due to poor API design (there is no clean way to check whether a String is a valid integer except trying to parse it and catching the exception). At the technical level, throw and try/catch are control flow constructs that allow you to jump up the call stack, nothing more and nothing less. Jumping up the call stack implicitly connects code that is not close together in the source, which is bad for maintainability. So it should only be used when you need to do that and the alternatives are even worse. The widely accepted case where the alternatives are worse is error handling (special return codes that need to be checked and passed up each level of the call stack manually). If you have a case where the alternatives are worse (and you really have considered all of them carefully), then I'd say using throw and try/catch for control flow is fine. Dogma is not a good substitute for judgement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145941",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51251/"
]
} |
146,021 | I'm refreshing my CS theory, and I want to know how to identify that an algorithm has O(log n) complexity. Specifically, is there an easy way to identify it? I know with O(n), you usually have a single loop; O(n^2) is a double loop; O(n^3) is a triple loop, etc. How about O(log n)? | I know with O(n), you usually have a single loop; O(n^2) is a double
loop; O(n^3) is a triple loop, etc. How about O (log n)? You're really going about it the wrong way here. You're trying to memorize which big-O expression goes with a given algorithmic structure, but you should really just count up the number of operations that the algorithm requires and compare that to the size of the input. An algorithm that loops over its entire input has O(n) performance because it runs the loop n times, not because it has a single loop. Here's a single loop with O(log n) performance: for (i = 0; i < log2(input.count); i++) {
doSomething(...);
} So, any algorithm where the number of required operations is on the order of the logarithm of the size of the input is O(log n). The important thing that big-O analysis tells you is how the execution time of an algorithm changes relative to the size of the input: if you double the size of the input, does the algorithm take 1 more step (O(log n)), twice as many steps (O(n)), four times as many steps (O(n^2)), etc. Does it help to know from experience that algorithms that repeatedly partition their input typically have 'log n' as a component of their performance? Sure. But don't look for the partitioning and jump to the conclusion that the algorithm's performance is O(log n) -- it might be something like O(n log n), which is quite different. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146021",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47174/"
]
} |
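To make the "halve the remaining work each step" idea from the answer above concrete, here is a standard iterative binary search; each pass discards half of the remaining range, so the number of iterations grows roughly as log2(n). This example is an illustration added here, not part of the original answer.

```java
public class BinarySearchDemo {
    // Binary search over a sorted array: the search window halves every
    // iteration, so at most about log2(n) iterations are needed.
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;   // midpoint without integer overflow
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                lo = mid + 1;               // discard the lower half
            } else {
                hi = mid - 1;               // discard the upper half
            }
        }
        return -1;                          // not found
    }

    public static void main(String[] args) {
        int[] sorted = {2, 3, 5, 8, 13, 21, 34};
        System.out.println(binarySearch(sorted, 13));  // 4
        System.out.println(binarySearch(sorted, 7));   // -1
    }
}
```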
146,127 | In the first line of a git commit message I have a habit of mentioning the file that was modified if a change doesn't span multiple files, for example: Add [somefunc] to [somefile] Is this a good thing to do or is it unnecessary? | Version control tools are powerful enough to let the person see what files were modified, and what methods were added. It means that in general, log messages which plainly duplicate what already exists are polluting the log. You added the somefunc method to fulfill a requirement, i.e., to add a feature, to fix a bug, or to refactor the source code. This means that your log messages should instead explain what features/bugs were affected or what the purpose of the refactoring was. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146127",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28114/"
]
} |
146,140 | After reading this interesting question, I felt like I had a good idea of which insecure hashing algorithm I'd use if I needed one, but no idea why I might use a secure algorithm instead. So what is the distinction? Isn't the output just a random number representing the hashed thing? What makes some hashing algorithms secure? | There are three properties one wants from every cryptographic hash function H : preimage resistance : Given h , it should be hard to find any value x with h = H(x) . second preimage resistance : Given x1 , it should be hard to find x2 != x1 with H(x1) = H(x2) . collision resistance : It should be hard to find two values x1 != x2 with H(x1) = H(x2) . With hash functions as used in common programming languages for hash tables (of strings), usually none of these is given, they only provide for: weak collision resistance : For randomly (or "typically") selected values of the domain, the chance of collision is small. This says nothing about an attacker intentionally trying to create collisions, or trying to find preimages. The three properties above are (among) the design goals for every cryptographic hash function. For some functions (like MD4, SHA-0, MD5) it is known that this failed (at least partially). The current generation (SHA-2) is assumed to be secure, and the next one ("Secure Hash Algorithm 3") is currently in the process of being standardized , after a competition . For some uses (like password hashing and key derivation from passwords), the domain of actually used values x is so small that brute-forcing this space becomes feasible with normal (fast) secure hash functions, and this is when we also want: slow execution : Given x , it takes some minimum (preferably configurable) amount of resources to calculate the value H(x) . But for most other uses, this is not wanted, one wants instead: fast execution : Given x , calculating the value of H(x) is as fast as possible (while still secure). There are some constructions (like PBKDF2 and scrypt) to create a slow hash function from a fast one by iterating it often. For some more details, have a look at the hash tag on our sister site Cryptography Stack Exchange. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146140",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5109/"
]
} |
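To illustrate the "slow execution" property mentioned in the answer above, here is a deliberately naive sketch that slows down a fast hash (SHA-256 via Java's MessageDigest) by feeding its output back into itself many times. It only demonstrates the iteration idea; for real password hashing, use a vetted construction such as PBKDF2, bcrypt, or scrypt, as the answer suggests.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class SlowHashSketch {
    // Repeatedly re-hash the previous digest; more iterations mean more work
    // for an attacker brute-forcing a small input space (e.g., passwords).
    static byte[] slowHash(String input, byte[] salt, int iterations)
            throws NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        sha256.update(salt);
        byte[] digest = sha256.digest(input.getBytes(StandardCharsets.UTF_8));
        for (int i = 0; i < iterations; i++) {
            digest = sha256.digest(digest);   // digest() also resets the instance
        }
        return digest;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        byte[] salt = "per-user-random-salt".getBytes(StandardCharsets.UTF_8);
        byte[] hash = slowHash("correct horse battery staple", salt, 100_000);
        System.out.println("digest length: " + hash.length);   // 32 bytes for SHA-256
    }
}
```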
146,255 | We are in a small company with around 10 developers. I am the team leader and responsible for the development process. Supervisors and salesmen are close to us since we are a small team, but have no clue how software is developed. When they ask me how much time I want for a change (bugfixes/features) in a product, my response is 'let me calculate it'. After giving them the schedule, they start by saying 'OK, you can do it in XX time', which differs a lot from my plan. We are using a model close to Agile basic principles and run cycles (iterations) of a week or of three days. Of course I argue and say that this cannot be done. They seem to have no idea of the effort involved. They do not want to see WHY my schedule is for that amount of time. I know this behavior is stupid, but how can I make them see the problem? | If the salesmen are also the ones who are in charge, you can say, "Ok, I can go with your schedule. Which features or responsibilities would you like me to sacrifice in order to make your deadline?" That way you're not saying "no" to the people in charge but you're not committing to impossible things. It is their decision how to run the business. If they want to axe other things to make time for the changes, let them. EDIT:
We need to respect and submit to those who are in authority, while still doing our jobs with excellence. The only way to do this is with humility. I'll work on whatever my boss wants me to work on, but I can only do so much. When you tell him like it is with an attitude of submission, he is in a better position to make better decisions and he'll want more employees like you. Make sure these things are documented too in order to explain why the commitments are unreasonable and how the situation was resolved. It can help coworkers deal with similar situations in the future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146255",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35446/"
]
} |
146,397 | Can somebody explain to me in which use cases I should consider using AMQP (e.g., RabbitMQ)? What are the pros and cons? | Imagine that you have a web service that can accept many requests per second. You also have an accounting system that does a lot of things, one of which is processing the requests coming from the web service. If you put a queue between the web service and the accounting system, you will be able to: have less coupling between the two applications, because now both applications only have to know the configuration parameters of the queue management system and the name of the queue (the catch is that you are usually more likely to move an application to another server than to move the queue management system); absorb bursts, because if you have a lot of requests coming in a short amount of time, the accounting system will still be able to process them all; and persist some requests if their number becomes really huge. Of course, you could have more complex situations where the number of your applications is much bigger than two and you need to manage the communication between them. (A small in-process sketch of the buffering idea follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21735/"
]
} |
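The buffering benefit described in the answer above can be shown in miniature with an in-process queue from java.util.concurrent standing in for a broker such as RabbitMQ: the producer can burst faster than the consumer drains, and nothing is lost as long as the queue has capacity. This is only a conceptual sketch; a real AMQP setup involves a separate broker process and its client library.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueDecouplingSketch {
    public static void main(String[] args) {
        // The "web service" and the "accounting system" only share the queue.
        BlockingQueue<String> requests = new LinkedBlockingQueue<>(10_000);

        Thread producer = new Thread(() -> {
            for (int i = 0; i < 100; i++) {
                requests.offer("request-" + i);      // fast bursts are absorbed
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    String req = requests.take();    // blocks until work arrives
                    Thread.sleep(5);                 // simulate slow processing
                    System.out.println("processed " + req);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}
```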
146,633 | On my projects where the repository is shared between me and other programmers, I always write commit messages even if I'm the primary developer. But on those projects where I'm the solo developer working on a project, and the repository is hosted on my personal laptop, and is not even hosted by the client, hence no one except myself would see the commits, should I still write commit messages? So far I have been writing them, but I've found that I've never gone back and viewed my commit messages. I take time off development to write down the messages, but then they're never seen again even by me. Are there any good reasons for writing commit messages as a solo developer, or should you just skip them in favor of staying focused on development? | I try to always. How many times do you look back and think, "Man what was I doing when I made this change." I do all the time. 30 seconds of writing a message can save you 20 minutes worth of work trying to remember. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146633",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9803/"
]
} |
146,780 | I work for a small company. The software development arm of the company before I was hired consisted of one self-taught overworked guy. Now that I've been writing software for the company for a few years, I have been tasked with establishing formal company-wide software development practices. We currently have no guidelines, other than Write code, test it, put it in a .zip file and send it to the client. Bonus points for TDD and version control. My boss wants me to write a software developer's handbook which defines the general processes, protocols, tools, and guidelines we use to get things done. In other words, he wants a "This is what we do here" book to make it easier to get a new employee familiar with the way we do things, as well as to help my boss understand what his minions are doing and how they do it. The way I see it, I'm laying a foundation and it needs to be done right. How would you go about choosing topics for such a handbook? Can you provide some example topics? Side Note: If it matters, we are primarily a Microsoft .NET shop. And we are looking at agile practices such as XP and Scrum, but we may have to heavily modify them to make them work in our company. | I would break it down into sections like Current staff - names and titles (ideally with photos) Applications, logins to them, data to know and permission requests to have submitted Bookmarks to company sites and key external sites relevant to the business Applications that the company uses for comms, email, conference room booking, sharescreen Procedures for company related activities such as Expensing receipts, booking travel Developer Machine Setup. Describe the process of a setting up a new developers machine in detail. This is usually 'expected' to only take a day, but often it take 3-5 days in reality. The development process, how work is tracked, assigned and updated and what tools are used. How to test, what to test, when to test, where to test. Coding standards including file naming conventions and language specific standards. How to handle bugs, where to document them, how to go about fixing them. deployment process, what are the key things to know for production pushes. How to document, what to document, When to document. Where stuff 'is', e.g. location(s) for Code, Data, Standards, Documentation, Links and other assets. Making it modular will also let you or others update pieces separately, for instance the employees names and positions will change frequently as people come and go. For each section I'd try hard to write it from a 'newbie' point of view. Most important will be making sure it really makes sense to a newbie. Your boss obviously is not the right person to review this as he is not the intended audience. He's right to want it, just make sure the content doesn't end up being tested by him. Also a 'newbie' both only has "1 week" as being a newbie... and only has one point of view. So it's likely (and recommended) that the document will be refined with each new employee. In fact it's a pretty good task to also assign them for their first week, i.e. "Update the newbie manual". For Agile/SCRUM: The hardest part of doing Agile and SCRUM is 'really' doing it. For reading I would start at http://agilemanifesto.org/ and go from there. I would also read the well-known http://www.halfarsedagilemanifesto.org/ which adds weight to the fact that you really have to embrace all the aspects for it to work. 
If you have to heavily modify Agile for your organization, it's likely that people want the benefits - without using the correct processes. This fact itself should be presented to ward off any half-assedness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146780",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7935/"
]
} |
146,823 | I saw many issue numbers in comments in the jQuery code. (Actually, there were 69 issue numbers in the jQuery code.) I think it would be good practice, but I've never seen any guidelines. If it is a good practice, what are the guidelines for this practice? | In general, I would not consider it good practice. But in exceptional cases, it can be very useful, namely when the code has to do something unintuitive to fix a complex issue, and without any explanation there would be a risk that someone might want to "fix" this strange code and thereby break it, while explaining the reasoning would result in a huge comment that duplicates information from the issue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/146823",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22362/"
]
} |
147,036 | Kind of a yes/no question - and why? Is it the responsibility of the software developer to understand what the customer meant with his/her request, or is it the responsibility of the customer to properly explain his/her request to the developer? The situation at work is currently "the customer already explained us, what he wants. It's your responsibility to understand the request, not ask more questions". While English is not my strong suit, all the requests are written in obscure English with misplaced words and hard to understand sentences, and some requests assume previous understanding of the system on my part. I'm the 3rd or 4th developer of the system (the last developers quit the job) and that might be the reason the customer expects some understanding on the developer's side. The system itself is quite messy at both the UI and source code level. This looks like monkey coding to me - code and hope you get the request right, while not actually understanding the request. I'm actually thinking about quitting the job, but haven't yet, given I'm not sure about who's right and who's wrong. | If it's your job to understand, it is your job to ask questions until you do. The person you ask may be someone who is not the customer (I often talked to an intermediary, who was in contact with the customer), so the ones who forbid you to talk to the customer should instead answer the questions themselves or refer you to someone who can. But, in the end, there has to be SOME kind of communication. If they deny it (and providing some documents that you don't understand is effectively denying communication), you should do as your predecessors did: run away, quickly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147036",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43990/"
]
} |
147,055 | It seems to be generally assumed (on Stack Overflow at least) that there should always be unit tests, and they should be kept up to date. But I suspect the programmers making these assertions work on different kinds of projects from me - they work on high quality, long-lasting, shipped software where bugs are A Really Bad Thing. If that's the end of the spectrum where unit testing is most valuable, then what's the other end? Where are unit tests least valuable? In what situations would you not bother? Where would the effort of maintaining tests not be worth the cost? | I will personally not write unit tests for situations where: The code has no branches and is trivial. A getter that returns 0 doesn't need to be tested, and changes will be covered by tests for its consumers. The code simply passes through into a stable API. I'll assume that the standard library works properly. The code needs to interact with other deployed systems; then an integration test is called for. If the test of success/fail is something that is so difficult to quantify as to not be reliably measurable, such as steganography being unnoticeable to humans. If the test itself is an order of magnitude more difficult to write than the code. If the code is throw-away or placeholder code. If there's any doubt, test. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37877/"
]
} |
147,059 | I've been pondering this problem for a while now and find myself continually finding caveats and contradictions, so I'm hoping someone can produce a conclusion to the following: Favour exceptions over error codes As far as I'm aware, from working in the industry for four years, reading books and blogs, etc. the current best practice for handling errors is to throw exceptions, rather than returning error codes (not necessarily an error code, but a type representing an error). But - to me this seems to contradict... Coding to interfaces, not implementations We code to interfaces or abstractions to reduce coupling. We don't know, or want to know, the specific type and implementation of an interface. So how can we possibly know what exceptions we should be looking to catch? The implementation could throw 10 different exceptions, or it could throw none. When we catch an exception surely we're making assumptions about the implementation? Unless - the interface has... Exception specifications Some languages allow developers to state that certain methods throw certain exceptions (Java for example, uses the throws keyword.) From the calling code's point of view this seems fine - we know explicitly which exceptions we might need to catch. But - this seems to suggest a... Leaky abstraction Why should an interface specify which exceptions can be thrown? What if the implementation doesn't need to throw an exception, or needs to throw other exceptions? There's no way, at an interface level, to know which exceptions an implementation may want to throw. So... To conclude Why are exceptions preferred when they seem (in my eyes) to contradict software best practices? And, if error codes are so bad (and I don't need to be sold on the vices of error codes), is there another alternative? What is the current (or soon to be) state of the art for error handling that meets the requirements of best practices as outlined above, but doesn't rely on calling code checking the return value of error codes? | First of all, I would disagree with this statement: Favour exceptions over error codes This is not always the case: for example, take a look at Objective-C (with the Foundation framework). There the NSError is the preferred way to handle errors, despite the existence of what a Java developer would call true exceptions: @try, @catch, @throw, NSException class, etc. However it is true that many interfaces leak their abstractions with the exceptions thrown. It is my belief that this is not the fault of the "exception"-style of error propagating/handling. In general I believe the best advice about error handling is this: Deal with the error/exception at the lowest possible level, period I think if one sticks to that rule of thumb, the amount of "leakage" from abstractions can be very limited and contained. On whether exceptions thrown by a method should be part of its declaration, I believe they should: they are part of the contract defined by this interface: This method does A, or fails with B or C. For example, if a class is an XML Parser, a part of its design should be to indicate that the XML file provided is just plain wrong. In Java, you normally do so by declaring the exceptions you expect to encounter and adding them to the throws part of the declaration of the method. On the other hand, if one of the parsing algorithms failed, there's no reason to pass that exception above unhandled. It all boils down to one thing: Good interface design. 
If you design your interface well enough, no amount of exceptions should haunt you.
Otherwise, it's not just exceptions that would bother you. Also, I think the creators of Java had very strong security reasons to include exceptions to a method declaration/definition. One last thing: Some languages, Eiffel for example, have other mechanisms for error handling and simply do not include throwing capabilities. There, an 'exception' of sort is automatically raised when a postcondition for a routine is not satisfied. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147059",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11393/"
]
} |
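A small illustration of the "exceptions are part of the contract" point from the answer above, using its XML parser example. The interface and exception names are hypothetical, invented for this sketch.

```java
import org.w3c.dom.Document;

// Hypothetical contract: parsing either succeeds or fails with a declared,
// domain-specific exception, so callers know exactly what can go wrong.
interface XmlParser {
    Document parse(String xml) throws InvalidXmlException;
}

// Checked exception meaning "the input is not well-formed XML"; how an
// implementation detects that (and any internal failures) stays hidden.
class InvalidXmlException extends Exception {
    InvalidXmlException(String message, Throwable cause) {
        super(message, cause);
    }
}
```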
147,089 | I have many times seen various benchmarks that show how a bunch of languages perform on a given task. These benchmarks always reveal that Python is slower than Java, and faster than PHP, and I wonder why that's the case. Java, Python, and PHP run inside a virtual machine All three languages convert their programs into their custom byte codes that run on top of the OS -- so none is running natively Both Java and Python can be "compiled" ( .pyc for Python) but the __main__ module for Python is not compiled Python and PHP are dynamically typed, and Java statically -- is this the reason Java is faster, and if so, please explain how that affects speed. And, even if the dynamic-vs-static argument is correct, this does not explain why PHP is slower than Python -- because both are dynamic languages. You can see some benchmarks here and here , and here | JVM code can be JIT-compiled efficiently, using a trivial (and fast) ad hoc compiler. But the same would be exceptionally hard for PHP and Python, because of their dynamically typed nature. JVM translates to a fairly low level and straightforward native code, quite similar to what would a C++ compiler produce, but for the dynamic languages you'd have to generate dynamic dispatch for literally all the basic operations and for all the method calls. This dynamic dispatch is the primary bottleneck for all the languages of this kind. In some cases it is possible to eliminate the dynamic dispatch (as well as the virtual calls in Java) using a much more complicated tracing JIT compiler. This approach is still in its infancy, not doing too much of an abstract interpretation, and such a compiler is likely to choke on eval calls (which are very typical for the dynamic languages). As for the difference between Python and PHP, the latter is just of a much lower quality. It could run faster in theory, but it never will. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147089",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
147,111 | I have often heard that I should not use the Unlicense because of issues regarding putting things into the public domain. However, I do not understand why this would be an issue for the Unlicense. The Unlicense attempts to put whatever is being unlicensed into the public domain, and if that works, awesome! However, the author of the Unlicense understands that putting something into the public domain is not so simple, it may even be impossible, and therefore the Unlicense contains a backup clause (the 2nd paragraph) which clearly states that everyone is free to do whatever they want with the Unlicensed software. The Unlicense even includes a disclaimer containing the usual "this software is provided as-is blah blah" legalese. Is the Unlicense bad because it is short and doesn't define who the "unlicensor", the "unlicensee" and Santa Claus is? If yes, then what about the MIT/BSD-style licenses? They are generally considered to be valid, so why isn't the Unlicense? Is the opposition to public domain waivers with permissive license backup clauses, such as the Unlicense, and even the Creative Commons CC0, just FUD or are there really major legal issues with them? Here is the full text of the Unlicense: This is free and unencumbered software released into the public
domain. Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means. In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE. For more information, please refer to http://unlicense.org/ | (Disclaimer: IANAL - for reliable advice on legal issues, ask a lawyer.) See the discussion on the OSI mailing list for some of the immediate issues with the license. My interpretation: It's not global. It doesn't make sense outside of a commonwealth ecosystem, is explicitly illegal in some places (Germany), and of unclear legality in others (Australia) It's inconsistent. Some of the warranty terms cannot, logically, co-exist, given the current legal ecosystem, as written, with the licensing terms. Its applicability is unpredictable. The license is short, clearly expressing intent, at the cost of not carefully addressing common license, copy-right and warranty issues. It leaves a lot of leeway interpretation - meaning that, in the US, it will take a few trials before you can reliably know when the license is applicable, and how. Personally, I think of the license as having been written in human-readable pseudo-code, without having been properly compiled yet to a given set of legal systems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147111",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42791/"
]
} |
147,128 | If I'm using a switch statement to handle values from an enum (which is owned by my class) and I have a case for each possible value - is it worth adding code to handle the "default" case? enum MyEnum
{
MyFoo,
MyBar,
MyBat
}
MyEnum myEnum = GetMyEnum();
switch (myEnum)
{
case MyFoo:
DoFoo();
break;
case MyBar:
DoBar();
break;
case MyBat:
DoBat();
break;
default:
Log("Unexpected value");
throw new ArgumentException()
} I don't think it is, because this code can never be reached (even with unit tests).
My co-worker disagrees and thinks this protects us against unexpected behavior caused by new values being added to MyEnum. What say you, community? | Including the default case doesn't change the way your code works, but it does make your code more maintainable. By making the code break in an obvious way (log a message and throw an exception), you're including a big red arrow for the intern that your company hires next summer to add a couple of features. The arrow says: "Hey, you! Yes, I'm talking to YOU! If you're going to add another value to the enum, you'd better add a case here too." That extra effort might add a few bytes to the compiled program, which is something to consider. But it'll also save someone (maybe even the future you) somewhere between an hour and a day of unproductive head-scratching. Update: The situation described above, i.e. protecting against values added to an enumeration at some later time, can also be caught by the compiler. Clang (and gcc, I think) will by default issue a warning if you switch on an enumerated type but don't have a case that covers every possible value in the enumeration. So, for example, if you remove the default case from your switch and add a new value MyBaz to the enumeration, you'll get a warning that says: Enumeration value 'MyBaz' not handled in switch. The nice thing about letting the compiler detect uncovered cases is that it largely eliminates the need for that unreachable default case that inspired your question in the first place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147128",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27895/"
]
} |
147,134 | Consider a method to randomly shuffle elements in an array. How would you write a simple yet robust unit test to make sure that this is working? I've come up with two ideas, both of which have noticeable flaws: Shuffle the array, then make sure its order differs from before. This sounds good, but fails if the shuffle happens to shuffle in the same order. (Improbable, but possible.) Shuffle the array with a constant seed, and check it against the predetermined output. This relies on the random function always returning the same values given the same seed. However, this is sometimes an invalid assumption . Consider a second function which simulates dice rolls and returns a random number. How would you test this function? How would you test that the function... never returns a number outside the given bounds? returns numbers in a valid distribution? (Uniform for one die, normal for large numbers of dice.) I'm looking for answers offering insight into testing not only these examples but random elements of code in general. Are unit tests even the right solution here? If not, what sort of tests are? Just to ease everyone's mind I'm not writing my own random number generator. | I don't think unit tests are the right tool for testing randomness. A unit test should call a method and test the returned value (or object state) against an expected value. The problem with testing randomness is that there isn't an expected value for most of the things you'd like to test. You can test with a given seed, but that only tests repeatability . It doesn't give you any way to measure how random the distribution is, or if it's even random at all. Fortunately, there are a lot of statistical tests you can run, such as the Diehard Battery of Tests of Randomness . See also: How to unit test a pseudo random number generator? Steve Jessop recommends that you find a tested implementation of the same RNG algorithm that you're using and compare its output with selected seeds against your own implementation. Greg Hewgill recommends the ENT suite of statistical tests. John D. Cook refers readers to his CodeProject article Simple Random Number Generation , which includes an implementation of the Kolmogorov-Smirnov test mentioned in Donald Knuth's volume 2, Seminumerical Algorithms. Several people recommend testing that the distribution of the numbers generated is uniform, the Chi-squared test, and testing that the mean and standard deviation are within the expected range. (Note that testing the distribution alone is not enough. [1,2,3,4,5,6,7,8] is a uniform distribution, but it's certainly not random.) Unit Testing with functions that return random results Brian Genisio points out that mocking your RNG is one option for making your tests repeatable, and provides C# sample code. Again, several more people point to using fixed seed values for repeatability and simple tests for uniform distribution, Chi-squared, etc. Unit Testing Randomness is a wiki article that talks about many of the challenges already touched on when trying to test that which is, by its nature, not repeatable. One interesting bit that I gleaned from it was the following: I've seen winzip used as a tool to measure the randomness of a file of values before (obviously, the smaller it can compress the file the less random it is). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147134",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30039/"
]
} |
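As one concrete example of the statistical checks listed in the answer above, here is a chi-squared uniformity test for a simulated six-sided die; 11.07 is the standard 5%-significance critical value for 5 degrees of freedom. This sketch is illustrative and is not taken from the cited test suites.

```java
import java.util.Random;

public class DieUniformityCheck {
    public static void main(String[] args) {
        int rolls = 60_000;
        int sides = 6;
        int[] counts = new int[sides];
        Random rng = new Random();

        for (int i = 0; i < rolls; i++) {
            counts[rng.nextInt(sides)]++;        // simulate one die roll
        }

        // Chi-squared statistic: sum of (observed - expected)^2 / expected.
        double expected = (double) rolls / sides;
        double chiSquared = 0.0;
        for (int count : counts) {
            double diff = count - expected;
            chiSquared += diff * diff / expected;
        }

        // 11.07 is the 95% critical value for 5 degrees of freedom (sides - 1).
        System.out.printf("chi-squared = %.2f (reject uniformity if > 11.07)%n", chiSquared);
    }
}
```

Note that even a correct generator will exceed the threshold about 5% of the time, which is one reason the answer argues that ordinary pass/fail unit tests are an awkward fit for randomness.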
147,205 | Reading about the Google v Oracle case, I came across these questions (apparently from the presiding Judge) ... Is it agreed that the following is true, at least as of 1996? The following were the core Java Application Programming Interface: java.lang, java.util and java.io. Does the Java programming language refer to or require any method, class or package outside the above three? ... source: Groklaw There are obviously lots of legal ramifications, Google and Oracle probably disagree on some points, and I don't care . Leave law to the lawyers. However, I suspect there's an interesting bit of history in here. My question is (as someone who first did any Java coding around 2001 in version 1.3), in version 1.0 of Java was anything required outside of java.lang , java.util , and java.io to compile a valid Java program? As an example (using C# 5.0), the await keyword is dependent upon Task<T> GetAwaiter() (amongst other things). The compiler couldn't function to spec without that class. Equivalently, were there any core runtime features (like ClassLoader *) that were dependent on other packages? I'll admit I ask out of curiosity, exactly what is necessary for minimum-viable Java (the language, ignoring all the legal bits around it) is interesting. *I am assuming that ClassLoader was even a feature in Java 1.0, it's part of the spec in 7.0 and presumably many earlier versions. | Per Wikipedia , the first formally released version of Java was 1.0.2, on Jan 23 1996. The first stable version was the JDK 1.0.2. is called Java 1 There's an archive of Java 1.0.2 and all related documentation here : JDK 1.0.2 API reference (book format) JDK 1.0.2 API reference (javadoc format) Java tutorial Java language specification (link broken, wayback'd here ) Java virtual machine specification There appears to be a download of the JDK 1.0.2 bits here http://www.pascal-man.com/download/download-jdk.shtml It works for me at the time of writing. BEHOLD THE RAW UNMITIGATED POWER OF JAVA 1.0.2 In the language spec, the following classes are referred to (single citation, not exhaustive citations): Class (section 4.3.1) String (section 4.3.1) Object (section 4.3.2) Random (section 4.4) Thread (section 17.2) ThreadGroup (section 17.2) Throwable (section 11) Error (section 11.2) loads and loads of errors, all under java.lang (section 11.5.2.1 - 11.5.2.2) RuntimeException (section 11.2.1) the "Array classes", [I , and so on (section 10.8 ) ... at which point I stopped looking because, technically , [I , et. al. aren't in the java.lang , java.util , or java.io packages. Example: class Test {
// Compare namespaces of built-ins object and int[]
public static void main(String[] args){
int[] arr = new int[0];
Object obj = new Object();
Class arrClass = arr.getClass();
Class objClass = obj.getClass();
Class arrSuper = arrClass.getSuperclass();
System.out.println("plain jane Object - " + objClass.getName());
System.out.println();
System.out.println("int[] - "+arrClass.getName());
System.out.println("super of int[] - "+arrSuper.getName());
}
} Output (shown as a screenshot in the original; a transcription follows this entry): behavior is consistent between modern Java and 1.0.2. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147205",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32/"
]
} |
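For reference, running the Test class above is expected to print the following (the JVM reports int[] under its internal name [I, and an array class has java.lang.Object as its superclass); the original answer showed this as a screenshot.

```
plain jane Object - java.lang.Object

int[] - [I
super of int[] - java.lang.Object
```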
147,214 | I was reading this article and was wondering, do we get rid of all switch statements by replacing them with a Dictionary or a Factory so that there are no switch statements at all in my projects. Something did not quite add up. The question is, do switch statements have any real use or do we go ahead and replace them with either a dictionary or a factory method (in using a factory method, of course, there will be a minimum use of switch statements for creating the objects using the factory...but that is about it). | Both switch statements and polymorphism have their use. Note though that a third option exists too (in languages which support function pointers / lambdas, and higher-order functions): mapping the identifiers in question to handler functions. This is available in e.g. C which is not an OO language, and C# which is*, but not (yet) in Java which is OO too*. In some procedural languages (having no polymorphism nor higher-order functions) switch / if-else statements were the only way to solve a class of problems. So many developers, having accustomed to this way of thinking, continued to use switch even in OO languages, where polymorphism is often a better solution. This is why it is often recommended to avoid / refactor switch statements in favour of polymorphism. At any rate, the best solution is always case dependent. The question is: which option gives you cleaner, more concise, more maintainable code in the long run? Switch statements can often grow unwieldy, having dozens of cases, making their maintenance hard. Since you have to keep them in a single function, that function can grow huge. If this is the case, you should consider refactoring towards a map based and/or polymorphic solution. If the same switch starts to pop up in multiple places, polymorphism is probably the best option to unify all these cases and simplify code. Especially if more cases are expected to be added in the future; the more places you need to update each time, the more possibilities for errors. However, often the individual case handlers are so simple, or there are so many of them, or they are so interrelated, that refactoring them into a full polymorphic class hierarchy is overkill, or results in a lot of duplicated code and/or tangled, hard to maintain class hierarchy. In this case, it may be simpler to use functions / lambdas instead (if your language allows you). However, if you have a switch in a single place, with only a few cases doing something simple, it may well be the best solution to leave it like it is. * I use the term "OO" loosely here; I am not interested in conceptual debates over what is "real" or "pure" OO. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147214",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5070/"
]
} |
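A minimal sketch of the third option the answer above mentions - mapping identifiers to handler functions - written with Java's Map and lambdas (Java has since gained lambdas, so the technique the answer marked as "not yet" available is now straightforward). The command names are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

public class HandlerMapSketch {
    public static void main(String[] args) {
        // Each identifier maps directly to the code that handles it,
        // replacing a switch over the same identifiers.
        Map<String, Runnable> handlers = new HashMap<>();
        handlers.put("start", () -> System.out.println("starting"));
        handlers.put("stop",  () -> System.out.println("stopping"));
        handlers.put("pause", () -> System.out.println("pausing"));

        String command = "stop";
        Runnable handler = handlers.getOrDefault(
                command, () -> System.out.println("unknown command: " + command));
        handler.run();
    }
}
```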
147,252 | Often when the syntax of the language requires me to name a variable that is never used, I'll name it _ . In my mind, this reduces clutter and lets me focus on the meaningful variables in the code. I find it to be unobtrusive so that it produces an "out of sight, out of mind" effect. A common example of where I do this is naming subqueries in SQL. SELECT *
FROM
(
SELECT *
FROM TableA
JOIN TableB
ON TableA.ColumnB = TableB.ColumnB
WHERE [ColumnA] > 10
) _ --This name is required, but never used here
ORDER BY ColumnC Another example is a loop variable that isn't used. array = [[] for _ in range(n)] # Defines a list of n empty lists in Python I use this technique very sparingly, only when I feel that a descriptive name adds nothing to the code, and in some sense takes away from it by adding more names to remember. In some ways I view it as similar to the var keyword in C#, which I also use sparingly. My coworkers disagree. They say that even having a single (alphabetic) character name is better than _ . Am I wrong? Is it bad practice to do this? | All names should be meaningful. If _ was a well known standard at your company or in the wider community, then it would be meaningful as a "name that does not matter". If it's not, I would say it's bad practice. Use a descriptive name for what you refer to, especially since the name might matter in the future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147252",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33445/"
]
} |
147,423 | I come from an ASP.NET forms background and have found server side coding very powerful in the past. More recently, however, I have been wanting to phase out the server side code of the front-end and replace it with pure HTML/JavaScript, which accesses the data through JSON webservices. I have no real experience in this, and so I would like to hear whether this is a tried and tested model. Also, what are the pitfalls surrounding it? I find ASP.NET user controls very useful, so I would like to keep the theory behind it by storing markup templates in separate HTML files on the server. These will be retrieved and used through jQuery AJAX and the jQuery HTML templates plugin respectively. Any input will be extremely appreciated. P.S. Sorry for the noob question, but is this type of Web architecture what is referred to as web-2.0 or am I completely off-track? | I have used this technique exclusively for a web application we're working on. My backend is hosted on Google App Engine using the Java SDK, and my frontend uses HTML, CSS, and JavaScript (with jQuery). The project is a smaller one with just myself and a Web designer, and we both feel that this method has helped us work a lot faster and get something to market much sooner. Advantage: Working with Web Designers The major advantage of this technique is that the Web designer, who knows some PHP but does not consider himself a programmer, can work unencumbered in the HTML and CSS without having to wade through countless lines of JSP, taglib tags, and other server-side markup that we've been told for years is supposed to make a front-end developer's life much easier. Without all of the server-side markup, we've been more agile. The web designer has directly swapped out and revised his original design 3 or 4 times, with very few changes on my part. His comment to me was that he felt like the HTML was alive in that he could edit it and then immediately see the changes on his machine with dynamic data. We've both benefit by this in that the integration is mostly automatic. Server-side code and HTML/CSS Handoffs In past projects, he's had to handoff the HTML and CSS to Java developers who would then take his HTML and CSS and completely rewrite it using JSP technology. This would take lots of time, and would usually result in subtle yet important differences in the actual rendering of the pages as well as it's validation in the W3C validator. Overall, we're both quite happy with this technique, and I still have zero JSP pages or server-side code in my HTML pages. Pitfalls of the REST/JSON Technique Perhaps the biggest pitfalls are ones that we haven't encountered yet. I fully expect to have some disagreements with more experienced Java developers who have been brainwashed by what the Apache foundation and the Spring team have told them regarding how tag libraries make it easier for frontend developers to work with the code. I fully expect there to be a learning curve as this project expands and we take on more developers who might have to unlearn these outdated techniques that, in my experience, have made the Web designers' job more difficult . Another pitfall is that the JavaScript code has become very massive. This is more of a problem perhaps because I'm using this technique for the first time, and because we've introduced some slight technical debt in working towards a rapid release. Perhaps picking a better framework would have helped alleviate a lot of the bulk of the code. 
In my opinion, none of this has been a showstopper, and I'm encouraged to continue using this technique and refine my skills in this area. Advantage: Other Applications Can Be Built On the Platform Lastly, I should mention a hidden advantage. Because there is a nice degree of separation between my backend RESTful Web services and my frontend, I've also created a platform that I can easily extend. One of our operations guys wanted to try a proof of concept in another application, and thanks to my RESTful services, we were able to create an entirely different frontend to the application to solve a completely different problem. The rapidly developed proof of concept used it's own HTML, CSS, and JavaScript, but it used the RESTful services as the backend and datasource. In the end, another project manager saw what I had done, and it became immediately clear that the feature needed to be more than just a proof of concept, so his team implemented it. I can't emphasize enough how reusable this architecture is, both at the application level as well as the HTML/CSS/JavaScript level, and I would definitely encourage you to try this in your next project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147423",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53408/"
]
} |
147,480 | Last week, we had a heated argument about handling nulls in our application's service layer. The question is in the .NET context, but it will be the same in Java and many other technologies. The question was: should you always check for nulls and make your code work no matter what, or let an exception bubble up when a null is received unexpectedly? On one side, checking for null where you are not expecting it (i.e. have no user interface to handle it) is, in my opinion, the same as writing a try block with empty catch. You are just hiding an error. The error might be that something has changed in the code and null is now an expected value, or there is some other error and the wrong ID is passed to the method. On the other hand, checking for nulls may be a good habit in general. Moreover, if there is a check the application may go on working, with just a small part of the functionality not having any effect. Then the customer might report a small bug like "cannot delete comment" instead of much more severe bug like "cannot open page X". What practice do you follow and what are your arguments for or against either approach? Update: I want to add some detail about our particular case. We were retrieving some objects from the database and did some processing on them (let's say, build a collection). The developer who wrote the code did not anticipate that the object could be null so he did not include any checks, and when the page was loaded there was an error and the whole page did not load. Obviously, in this case there should have been a check. Then we got into an argument over whether every object that is processed should be checked, even if it is not expected to be missing, and whether the eventual processing should be aborted silently. The hypothetical benefit would be that the page will continue working. Think of a search results on Stack Exchange in different groups (users, comments, questions). The method could check for null and abort the processing of users (which due to a bug is null) but return the "comments" and "questions" sections. The page would continue working except that the "users" section will be missing (which is a bug). Should we fail early and break the whole page or continue to work and wait for someone to notice that the "users" section is missing? | The question is not so much whether you should check for null or let the runtime throw an exception; it is how you should respond to such an unexpected situation. Your options, then, are: Throw a generic exception ( NullReferenceException ) and let it bubble up; if you don't do the null check yourself, this is what happens automatically. Throw a custom exception that describes the problem on a higher level; this can be achieved either by throwing in the null check, or by catching a NullReferenceException and throwing the more specific exception. Recover by substituting a suitable default value. There is no general rule which one is the best solution. Personally, I would say: The generic exception is best if the error condition would be a sign of a serious bug in your code, that is, the value that is null should never be allowed to get there in the first place. Such an exception may then bubble all the way up into whatever error logging you have set up, so that someone gets notified and fixes the bug. The default value solution is good if providing semi-useful output is more important than correctness, such as a web browser that accepts technically incorrect HTML and makes a best effort to render it sensibly. 
Otherwise, I'd go with the specific exception and handle it somewhere suitable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147480",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17857/"
]
} |
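A compact sketch of options 2 and 3 from the answer above: translate an unexpected null into an exception that names the real problem, or substitute a sensible default when partial output is acceptable. The data shapes here are simple stand-ins for the repository/service objects discussed in the question.

```java
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class NullHandlingSketch {
    // Option 2: fail fast with an exception that names the real problem.
    static String requireComment(Map<Long, String> commentsById, long id) {
        String comment = commentsById.get(id);
        if (comment == null) {
            throw new IllegalStateException("Comment " + id + " was expected but not found");
        }
        return comment;
    }

    // Option 3: recover with a sensible default when partial output is acceptable.
    static List<String> commentsOrEmpty(Map<Long, List<String>> commentsByPost, long postId) {
        List<String> comments = commentsByPost.get(postId);
        return (comments != null) ? comments : Collections.emptyList();
    }

    public static void main(String[] args) {
        Map<Long, String> comments = Map.of(1L, "first!");
        System.out.println(requireComment(comments, 1L));           // "first!"
        System.out.println(commentsOrEmpty(Map.of(), 42L).size());  // 0
    }
}
```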
147,489 | I'm 16. I started programming about a year ago when I was about to start high-school. I'm going for a career in programming, and I'm doing my best to learn as much as I can. When I first started, I learned the basics of C++ from a book and I started to learn things by myself from there on. Nowadays I'm much more experienced than I was a year ago. I knew I had to study by myself because high-school won't (likely) teach me anything valuable about programming, and I want to be prepared. The question here is: how important is it to study programming by oneself? | It's critical. I don't think I've ever known a good programmer who wasn't self-taught at some level. As a hiring manager at a large company, I can say that a candidate who describes personal projects and a desire to learn will trump one with an impressive degree every time. (Though it's best to have both.) Here's the thing about college: Computer Science courses teach theory, not technology. They will teach you the difference between a hash table and a B-tree, and the basics of how an operating system works. They will generally not teach you computer languages, operating systems or other technologies beyond a shallow level. I remember back in the mists of time when I took my first data structures class and we got a thin manual for this new language called "C++" that they'd decided to start learning. We had two weeks to pick it up enough to write code. That was a good lesson in and of itself. That's the way your career will go. Your school will likely not teach you what you need to get a good job. Schools often trail what's hot in the industry by many years. Then you'll get a job. Whatever company you go to will almost certainly not spend any particular effort to train you. The bad companies are too cheap, and frankly the good companies will only hire people smart enough to pick it up as they go. I graduated college in 1987. I went to work as a C programmer with expertise in DOS, NetBIOS and "Terminate-and-Stay-Resident" programs. In the years since, I have had little if any actual training. Look at the job ads... not much call for those skills! The only reason I can be employed today is because I've spent the intervening years constantly learning. To succeed as an engineer, you have to have the habit of learning. Hell, I'd go beyond that: you have to have the love of learning. You need to be the sort of person who messes around with WebGL or Android or iOS because it looks fun. If you are that sort of person, and maintain the habit of learning, you'll go far in the industry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147489",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
147,645 | According to Wikipedia, a stack : is a last in, first out (LIFO) abstract data type and linear data structure. While an array : is a data structure consisting of a collection of elements (values or variables), each identified by at least one array index or key. As far as I understand, they are fairly similar. So, what are the main differences? If they are not the same, what can an array do that a stack can't and vice-versa? | Well, you can certainly implement a stack with an array. The difference is in access. In an array, you have a list of elements and you can access any of them at any time. (Think of a bunch of wooden blocks all laid out in a row.) But in a stack, there's no random-access operation; there are only Push , Peek and Pop , all of which deal exclusively with the element on the top of the stack. (Think of the wooden blocks stacked up vertically now. You can't touch anything below the top of the tower or it'll fall over.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147645",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34364/"
]
} |
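To make the answer above concrete, here is a minimal Java sketch (illustrative only, not part of the original answer) of a stack built on top of an array: the backing array allows random access internally, but the type exposes only push, peek and pop.
import java.util.EmptyStackException;
// A tiny array-backed stack of ints. The backing array supports random
// access, but the public API deliberately restricts callers to the top element.
class IntStack {
    private int[] items = new int[8];
    private int size = 0;
    public void push(int value) {
        if (size == items.length) {                  // grow the backing array when full
            int[] bigger = new int[items.length * 2];
            System.arraycopy(items, 0, bigger, 0, size);
            items = bigger;
        }
        items[size++] = value;
    }
    public int peek() {
        if (size == 0) throw new EmptyStackException();
        return items[size - 1];                      // look at the top without removing it
    }
    public int pop() {
        if (size == 0) throw new EmptyStackException();
        return items[--size];                        // remove and return the top element
    }
}
The missing items[i] accessor is the whole point: restricting access to the top element is what turns the array into a stack.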
147,664 | I've heard swarming mentioned in the context of Agile or Extreme Programming. It seems to be a complement to pairing. What exactly is it? When should it be applied? How do you do it well? | The idea is that everyone on your team works on the same story at the same time. Instead of everyone focusing on different tasks, everyone focuses on one task at a time until it's completed. Then they move on to the next thing, where they all work together on it. This helps teams that struggle to complete stories before the end of the sprint. Often teams finish 80% of all the stories, but none are complete. This is less useful than completely finishing 80% of the stories, since unfinished stories have (effectively) no value to an end user. It's easier to get stories completed when everyone on the team is focusing on one story at a time. This is the motivation behind swarming. There are some difficulties here. For instance, QA can't always test things before they are built (or even designed). In this case, you should establish a design together early on, and then QA can write (initially failing) tests against the design and not the actual implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147664",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/883/"
]
} |
147,667 | I'm fairly familiar with where to use Stacks, Queues, and Trees in software applications but I've never used a Deque (Double Ended Queue) before. Where would I typically encounter them in the wild? Would it be in the same places as a Queue but with extra gribbilies? | One way a deque is used is to "age" items. It is typically used as an undo or history feature. A new action is inserted into the deque. The oldest items are at the front. A limit on the size of the deque forces the items at the front to be removed at some point as new items are inserted (aging the oldest items). The deque then provides fast access to both ends of the structure: you instantly know the oldest and the newest items, so you can remove the front and commit the oldest action in O(1), or take the newest action off the back to undo it in O(1). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147667",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
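A small Java sketch of the aging pattern described above, assuming java.util.ArrayDeque as the backing structure (the class and its capacity rule are made up for illustration):
import java.util.ArrayDeque;
import java.util.Deque;
// Bounded undo history: newest actions go on the back, and the oldest action
// falls off the front once the configured capacity is exceeded.
class UndoHistory {
    private final Deque<String> actions = new ArrayDeque<>();
    private final int capacity;
    UndoHistory(int capacity) {
        this.capacity = capacity;
    }
    void record(String action) {
        actions.addLast(action);          // O(1): newest at the back
        if (actions.size() > capacity) {
            actions.removeFirst();        // O(1): age out (commit) the oldest
        }
    }
    String undo() {
        return actions.pollLast();        // O(1): take back the newest, or null if empty
    }
}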
147,713 | I want to learn about null values or null references. For example I have a class called Apple and I created an instance of it. Apple myApple = new Apple("yummy"); // The data is stored in memory Then I ate that apple and now it needs to be null, so I set it as null. myApple = null; After this call, I forgot that I ate it and now want to check. bool isEaten = (myApple == null); With this call, where is myApple referencing? Is null a special pointer value? If so, if I have 1000 null objects, do they occupy 1000 object memory space or 1000 int memory space if we think a pointer type as int? | In your example myApple has the special value null (typically all zero bits), and so is referencing nothing. The object that it originally referred to is now lost on the heap. There is no way to retrieve its location. This is known as a memory leak on systems without garbage collection. If you originally set 1000 references to null, then you have space for just 1000 references, typically 1000 * 4 bytes (on a 32-bit system, twice that on 64). If those 1000 references originally pointed to real objects, then you allocated 1000 times the size of each object, plus space for the 1000 references. In some languages (like C and C++), pointers always point to something, even when "uninitialized". The issue is whether the address they hold is legal for your program to access. The special address zero (aka null ) is deliberately not mapped into your address space, so a segmentation fault is generated by the memory management unit (MMU) when it is accessed and your program crashes. But since address zero is deliberately not mapped in, it becomes an ideal value to use to indicate that a pointer is not pointing to anything, hence its role as null . To complete the story, as you allocate memory with new or malloc() , the operating system configures the MMU to map pages of RAM into your address space and they become usable. There are still typically vast ranges of address space that are not mapped in, and so lead to segmentation faults, too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147713",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53568/"
]
} |
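A tiny Java illustration of the memory point made above (the Apple class here is hypothetical): an array of 1000 null references costs only the reference slots, and no Apple objects exist until one is assigned.
class Apple {
    private final String taste;
    Apple(String taste) { this.taste = taste; }
}
class NullDemo {
    public static void main(String[] args) {
        // 1000 reference slots, all null: roughly 1000 * 4 (or 8) bytes,
        // and no Apple object has been allocated yet.
        Apple[] apples = new Apple[1000];
        apples[0] = new Apple("yummy");   // now exactly one Apple exists on the heap
        apples[0] = null;                 // the reference is cleared; in Java the object
                                          // becomes unreachable and eligible for garbage collection
        boolean isEaten = (apples[0] == null);
        System.out.println(isEaten);      // prints "true"
    }
}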
147,880 | I've got a senior developer with eight years of .NET experience starting tomorrow to work on a 11,000-lines-of-code application. In the team there's myself and another programmer. We've both got about three years experience each. It's my first project as a manager (I'm also a developer on the project) and this is the first time I've ever had to introduce someone to an already established code base. Obviously I'll be going over each module, the deployment process, etc., and handing them the location of the source control repository, documentation (which isn't the best), etc. How long should I give them before they're ready to start writing new features and fixing bugs? | I would assign a couple of low priority bugs the first day, that way no one is screaming if they aren't done right away giving the new developer some time to get familiar with the code base. The most critical thing to do is to have a code review of all of his work the first couple of weeks. You don't want to find out that the guy is going in the wrong direction or not following company coding standards months into things. It is better to make sure he knows what is expected from the start, and code reviews ensures this. Of course I think code reviews are good for all employees (We review 100% of our code before deployment), but they are critical for new employees and should be done in person where you can answer questions and refer them to documentation they may not have seen yet if need be. What you don't want is a new guy coming in and using a different style from the rest of you. People often try to keep using the code style of their previous job even when it conflicts with the code style used at the new place which can create confusion and annoyance on the part of the other developers. One thing I have noticed even with experienced developers is that some of them are not as good as they seemed to be in the interview, code review will help you find this out fast, so you can fix it. It will also encourage them to actually get something done, I have seen new employees who are not code reviewed drag out a project without showing what they were doing to anybody and then leave a week before the deadline they knew they were not going to hit because they were in over their heads and had not actually completed any part of the project. Better to check early and often with new people until you are really sure that they are working out. Also, it is normal for the new guy to be appalled at the state of your legacy project. It's not designed the way he thinks it should have been. Expect this, hear him out and don't automatically dismiss everything he says. In particular, this person appears to have more experience than you or the other developers, he may see things you hadn't considered. However, as a manager, you have to balance the proposed changes against the current workload and deadlines. You all may want to invest some time in learning how to refactor existing code and invest some hours in your time estimates to do that especially if the new guy has some valid concerns. You probably can't support a total re-write (many people who come in new think we should start over and do it better), but you can create a refactoring plan to fix the worst of the problems if there are any that he brings up. 
If you have some time where he is not expected to be fully contributing (and not yet fully billing his time to a client), it might also be a time when he can start on some of those refactoring things that you have wanted to do but haven't had time to do. Sometimes, it is a good thing to use the new person's training period to address some things that aren't in the project plan. They can learn the code base, and if what they want to do doesn't work, you haven't affected the existing schedules because you hadn't factored those tasks into the existing schedule yet. And if it does work, you might have a big win making future maintenance easier or security better or whatever the problem is. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147880",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26765/"
]
} |
147,977 | I have seen a practice from time to time that "feels" wrong, but I can't quite articulate what is wrong about it. Or maybe it's just my prejudice. Here goes: A developer defines a method with a boolean as one of its parameters, and that method calls another, and so on, and eventually that boolean is used, solely to determine whether or not to take a certain action. This might be used, for example, to allow the action only if the user has certain rights, or perhaps if we are (or aren't) in test mode or batch mode or live mode, or perhaps only when the system is in a certain state. Well there is always another way to do it, whether by querying when it is time to take the action (rather than passing the parameter), or by having multiple versions of the method, or multiple implementations of the class, etc. My question isn't so much how to improve this, but rather whether or not it really is wrong (as I suspect), and if it is, what is wrong about it. | Yes, this is likely a code smell, which would lead to unmaintainable code that is difficult to understand and that has a lower chance of being easily re-used. As other posters have noted context is everything (don't go in heavy-handed if it's a one off or if the practice has been acknowledged as deliberately incurred technical debt to be re-factored later) but broadly speaking if there is a parameter passed into a function that selects specific behaviour to be executed then further step-wise refinement is required; Breaking up this function in to smaller functions will produce more highly cohesive ones. So what is a highly cohesive function? It's a function that does one thing and one thing only. The problem with a parameter passed in as you describe, is that the function is doing more than two things; it may or may not check the users access rights depending on the state of the Boolean parameter, then depending on that decision tree it will carry out a piece of functionality. It would be better to separate the concerns of Access Control from the concerns of Task, Action or Command. As you have already noted, the intertwining of these concerns seems off. So the notion of Cohesiveness helps us identify that the function in question is not highly cohesive and that we could refactor the code to produce a set of more cohesive functions. So the question could be restated; Given that we all agree passing behavioural selection parameters is best avoided how do we improve matters? I would get rid of the parameter completely. Having the ability to turn off access control even for testing is a potential security risk. For testing purposes either stub or mock the access check to test both the access allowed and access denied scenarios. Ref: Cohesion (computer science) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/147977",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18171/"
]
} |
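As an illustration of the refactoring this answer points toward (all names below are hypothetical), the flag-driven method is split so that access control and the task itself become separate, cohesive concerns:
// Hypothetical names throughout; the point is the shape of the refactoring.
class Report { }
interface AccessPolicy {
    void assertMayExport();               // throws SecurityException if not allowed
}
// Before: one method does two things, selected by a flag that every caller
// (and every intermediate layer) has to thread through.
class ReportExporterBefore {
    void export(Report report, boolean checkAccess) {
        if (checkAccess && !userMayExport()) {
            throw new SecurityException("not allowed");
        }
        writeToDisk(report);
    }
    private boolean userMayExport() { return true; }    // stub
    private void writeToDisk(Report report) { /* stub */ }
}
// After: access control and the task itself are separate, cohesive concerns;
// there is no behaviour-selecting parameter, and tests can pass a stub policy.
class ReportExporterAfter {
    private final AccessPolicy policy;
    ReportExporterAfter(AccessPolicy policy) { this.policy = policy; }
    void export(Report report) {
        policy.assertMayExport();         // concern 1: may the current user do this?
        writeToDisk(report);              // concern 2: do the actual work
    }
    private void writeToDisk(Report report) { /* stub */ }
}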
148,081 | When correcting bugs, it is encouraged where I work to first write a test that fails with the given bug, and then to fix the code until the test passes.
This follows TDD practices, and is supposed to be good practice, but I noticed it tends to produce cryptic tests that come really close to the implementation. For instance, we had a problem when a job was sent, reached a certain state, was aborted, and retried. To reproduce this bug, a massive test was written with thread synchronization in it, lots of mocking and stuff... It did the job, but now that I am refactoring the code, I find it very tempting to just remove this mammoth, since it would really require a lot of work (again) to fit the new design. And it's just testing one small feature in a single specific case. Hence my question: how do you test for bugs that are tricky to reproduce? How do you avoid creating things that test the implementation, and hurt refactoring and readability? | Yes, in general you should. As with all guidelines, you'll need to use your best judgement when they run against other guidelines. For this scenario, the severity of the bug needs to be weighed against the work needed to implement the test and the quality of that test in being targeted to the business problem and catching regression of the bug's state. I would tend to favor writing tests over not, since interruptions to run down bugs tend to have more overhead than simply developing and maintaining a unit test. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148081",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43136/"
]
} |
148,108 | Before we start this, let me say I'm well aware of the concepts of Abstraction and Dependency Injection. I don't need my eyes opened here. Well, most of us say, (too) many times without really understanding, "Don't use global variables", or "Singletons are evil because they are global". But what really is so bad about the ominous global state? Let's say I need a global configuration for my application, for instance system folder paths, or application-wide database credentials. In that case, I don't see any good solution other than providing these settings in some sort of global space, which will be commonly available to the entire application. I know it's bad to abuse it, but is the global space really THAT evil? And if it is, what good alternatives are there? | Very briefly, it makes program state unpredictable. To elaborate, imagine you have a couple of objects that both use the same global variable. Assuming you're not using a source of randomness anywhere within either module, then the output of a particular method can be predicted (and therefore tested) if the state of the system is known before you execute the method. However, if a method in one of the objects triggers a side effect which changes the value of the shared global state, then you no longer know what the starting state is when you execute a method in the other object. You can now no longer predict what output you'll get when you execute the method, and therefore you can't test it. On an academic level this might not sound all that serious, but being able to unit test code is a major step in the process of proving its correctness (or at least fitness for purpose). In the real world, this can have some very serious consequences. Suppose you have one class that populates a global data structure, and a different class that consumes the data in that data structure, changing its state or destroying it in the process. If the processor class executes a method before the populator class is done, the result is that the processor class will probably have incomplete data to process, and the data structure the populator class was working on could be corrupted or destroyed. Program behaviour in these circumstances becomes completely unpredictable, and will probably lead to epic lossage. Further, global state hurts the readability of your code. If your code has an external dependency that isn't explicitly introduced into the code then whoever gets the job of maintaining your code will have to go looking for it to figure out where it came from. As for what alternatives exist, well it's impossible to have no global state at all, but in practice it is usually possible to restrict global state to a single object that wraps all the others, and which must never be referenced by relying on the scoping rules of the language you're using. If a particular object needs a particular state, then it should explicitly ask for it by having it passed as an argument to its constructor or by a setter method. This is known as Dependency Injection. It may seem silly to pass in a piece of state that you can already access due to the scoping rules of whatever language you're using, but the advantages are enormous. Now if someone looks at the code in isolation, it's clear what state it needs and where it's coming from. It also has huge benefits regarding the flexibility of your code module and therefore the opportunities for reusing it in different contexts. 
If the state is passed in and changes to the state are local to the code block, then you can pass in any state you like (if it's the correct data type) and have your code process it. Code written in this style tends to have the appearance of a collection of loosely associated components that can easily be interchanged. The code of a module shouldn't care where state comes from, just how to process it. If you pass state into a code block then that code block can exist in isolation, that isn't the case if you rely on global state. There are plenty of other reasons why passing state around is vastly superior to relying on global state. This answer is by no means comprehensive. You could probably write an entire book on why global state is bad. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148108",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38223/"
]
} |
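A short Java sketch of the dependency-injection alternative described above; the configuration classes are invented for illustration:
// Global-state version: the dependency is hidden, so readers and tests
// cannot tell from the constructor what this class actually relies on.
class GlobalConfig {
    static String databaseUrl = "jdbc:postgresql://localhost/app";
}
class OrderRepositoryGlobal {
    String describe() {
        return "orders stored at " + GlobalConfig.databaseUrl;   // hidden dependency
    }
}
// Injected version: the needed state is an explicit constructor argument.
// Any caller (including a unit test) can supply its own value.
class AppConfig {
    private final String databaseUrl;
    AppConfig(String databaseUrl) { this.databaseUrl = databaseUrl; }
    String databaseUrl() { return databaseUrl; }
}
class OrderRepository {
    private final AppConfig config;
    OrderRepository(AppConfig config) { this.config = config; }
    String describe() {
        return "orders stored at " + config.databaseUrl();       // explicit dependency
    }
}
The injected version reads the way it tests: anything the class relies on is visible in its constructor.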
148,146 | Years ago somebody created a bunch of really awesome and popular scripts. But they have not been updated for a long time, and now they no longer work (the target platform was updated, and some changes are needed). He hasn't released them under any license. I want to fix the bug (currently, many of the target users can't use them), and post them on GitHub, preferably under a public-domain style OSS license. I wonder what the legal ramifications might be? I have sent an email to the author, but (let's say) he didn't reply to my email. What should we do in the following 2 cases? If the script is posted on a private website (without any source control). If the script is posted on GitHub (without any licensing hints). However, one can clearly see that it seems open source - intended to be used/modified/whatever. | Short answer: absolutely not. Everything a person writes, whether it is software or text, is automatically under copyright. The default state of any text is that it is completely owned by the author and no one has rights to do anything with it without express permission of the author. A few decades ago, an author used to have to assert copyright in order to retain it, but this is no longer the case. You can even see, on sites like this one, the legal text down there stating that I agree that this post I am typing is available under a certain license. If that wasn't there, I'd retain all rights under the law. Thus, if you cannot find any license information, then you cannot copy or modify it for any reason other than personal use. Making something "open source" is a deliberate act and for you to treat it as such, you have to have found a license that tells you explicitly what your rights to the software are. This is even true of "public domain" software. That is, something is only "public domain" if it has either expired copyright (which mostly means it was written decades ago) or if the author has explicitly placed it in the public domain in writing. In the case you describe, your only recourse is to contact the author and request that he allow you to do what you ask. To do otherwise is flatly illegal and in theory could lead to damages. (In practice, of course, you'd have to get caught.) Edit: IANAL. Talk to one if you intend to do this. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148146",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40631/"
]
} |
148,213 | I am trying to determine some good interview questions to assess the ability of people coming in for a Html/CSS job, however that topic is extremely broad, and I'm not sure what sort of questions I can ask to properly evaluate someone's HTML/CSS knowledge. What sort of questions can I ask to evaluate a candidate's Html/CSS abilities during an interview? Ideally I would like to ask few questions and then give them a real life scenario to combat. | HTML and CSS are difficult to interview for a few reasons: They are too basic, compared, for example, to a programming language, They depend very much on the context of the job. Examples: If you create Google scale, hugely fast and optimized websites, the people you interview for the job cannot ignore what CSS sprites are. If you create XHTML W3C valid websites, you should ensure that the candidates know the difference between XHTML 1.0 and XHTML 1.1, or what are the mandatory attributes for <img/> , etc. If you create terrible websites full of hacks, you should ask the people you interview about how they will do such or such hack, how they serve different CSS for different browsers, etc. etc. If it's a pure HTML and CSS job, the person will have to work with designers on one hand, and developers on other hand. They must know HTML and CSS, but what is much more valuable is their ability to interact with those people , and to understand both the needs of the designers, the requirements of the developers, and the constraints of HTML and CSS. For example, they must know how to structure their HTML in a such way that it would be easy for a JavaScript developer to add interactivity later. You may want to start by some basic questions: What is your favorite browser? If the person answers "Internet Explorer", stop the interview immediately: you don't need somebody like that. No, I'm kidding. The answer is irrelevant. Instead, you can ask the following: Tell me about the debug tools you use in your favorite browser. Primary using Chrome, I work daily with Developer Tools. Those tools allow me to: View the requests made from a page, Study the time it takes for a page and the related resources to load, especially the DNS lookup, waiting and receiving times, Study the headers of the elements sent, as well as the cache indicator, View the DOM and study how CSS selectors are applied, I also use YSlow which serves me as a checklist for optimization of a website which require high scalability. YSlow is also a good tool when it comes to determining if the server is configured correctly (sending correct headers, etc.). In Firefox, I use Firebug, the tool very similar to Developer Tools from Chrome. Developer tools are also available in new versions of Internet Explorer, and also enable me to switch to IE7 to IE10 compatibility views. This last feature is very helpful, since without it, I would be forced to install several virtual machines just for legacy testing, or to use much more often the paid services like Litmus . Please, explain me what <dl/> tag is about? What was the intended use for this tag? How is it used in practice? What do you think about this extended usage? Here, you want the person to be able to explain that <dl/> is for dictionaries, associating one key, <dt/> , with one or several values, <dd/> . While the primary use of this tag was purely related to semantics, in practice it was extensively used to replace tables, a good example being PHPBB3. 
This is a good thing when tables are slowing the rendering of the page, but it must be used with caution: not only tables are still appropriate in lots of cases to better describe the data, but also there may be other means, such as ordinary lists, to describe the content without using <dl/> . What is the difference between fixed and fluid layouts? What are the pros and cons of each? The fixed layout has predefined widths of the elements. The elements of a fluid layout depend on the width of the page. The fixed layout makes it easier to design the page, especially when there are lots of full-width graphics. Even without graphics, it's still easier, because you care only for a precise case. For example, Programmers.SE being a fixed layout website, the column which displays the questions and the answers has always the same size. If a fluid layout would be used for this column, this would create an issue: on small screens, the text would be unreadable, because the lines would be too short, while on large screens, the lines would be extremely large, so the text would be unreadable too. The problem with the fixed layout is that it works well for a few, most used resolutions, but fails more or less for everything else. It becomes especially important since the adoption of very large, wide monitors, and the increasing usage the internet on small, mobile devices. The fluid layout helps with that, but the design is more difficult to do for such website. In some scenarios on badly managed projects, this may lead to HTML and CSS hacks, large pages, low maintainability and, during development, to higher costs and missed deadlines. On a page with a fluid layout, how can you avoid the situation where a column of text becomes too large to stay readable? You can limit the width of a zone of text by using max-width property. What do you think about this piece of code: <p color="Red" align="Center">Text here</p> ? The piece of code has a flaw to mix presentation logic inside HTML. Presentation logic must be put in CSS for several reasons: It helps the separation of concerns and clean code, meaning cheaper maintenance later, It makes the styles reusable from page to page, which (outside maintainability concerns) helps ensuring that you're using the same styles on the whole website, It helps reducing the bandwidth, since CSS files will be cached. After a few basic questions like that, you may ask some more tricky ones: How do you avoid duplicating colors or fonts in CSS, when those colors or fonts are applied to multiple elements which cannot be targeted by a single selector? Are there drawbacks? You do that by using CSS preprocessors, like Sass or LESS. They allow to define colors, fonts and other parts of the style inside variables that you can use later in your styles. The drawbacks of CSS preprocessors are that: They sometimes require to change the development and deployment workflow, in order to have the up-to-date CSS code in the browser, They are known only by a few developers, which makes it harder for a new person to join or maintain the project later, There are no both good and fast IDEs for Sass or LESS, and the integration inside the most popular IDEs is rather disappointing. Give me an example of a href value of an image which is on CDN, given that this image is displayed on a website which may be accessed both through HTTP and HTTPS. 
Since HTTPS needs every called resource to be on HTTPS too (otherwise, a security warning will be displayed to the user in many cases), it is not possible to specify the link as http://cdn.example.com/image.png . To properly link to the image, //cdn.example.com/image.png must be used; the browser will then prepend http: or https: depending on the context. Given that the size of the pages and the number of requests on a website cannot be optimized and the content cannot be changed nor AJAX be added, how do you give the impression to the user that the website is faster? What it involves from HTML perspective? If HTTP 1.1 is used, the page may be chunked . This means that the first parts will appear faster, giving an impression that the website is faster than it is in reality. Chunked transfer encoding is impossible in HTTP 1.0, which means that there is nothing to do in this case. Being able to serve the chunked content requires from HTML perspective to reorder the elements, putting the most critical ones at the top of the file (which doesn't mean that they would have to appear at the top of the page). For example, on an e-commerce website, when the user wants to see the details of the product, the first chunk may contain the <head/> and the product details. The next chunk may contain the primary elements like the logo of the website, the main menu, the copyright, etc. Finally, the last chunk may contain the "People who bought this also bought" section, the comments and ratings of the product, the "Share on Facebook", etc. Finally, you may ask the candidate to work on a real-world scenario. It may be anything, like the easiest one below, to the complex scenarios where the person has to deal with CSS sprites or other advanced optimization techniques, with browser inconsistencies, etc. Please, can you create an XHTML page with two zones: the left one, with a list, and the right one, with text. Two zones are separated by a vertical line, which extends from the very top to the very bottom of the page. List and text varying in size, you can't predict which one will have the biggest height. You cannot use <table/> s. Actually, it's pretty simple but shows if the person has the reflex to think about heights. An inexperienced candidate will create the float:left zone and the border-left:solid 1px #ccc; zone, but forget adding the border to the left zone and extending it so that two borders will be at the same place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148213",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50766/"
]
} |
148,230 | I asked myself why we don't have to import a package when we use String functions such as toUpperCase()? How do they get in there without importing packages? | Java tutorials > Learning the Java Language > Packages: For convenience, the Java compiler automatically imports three entire packages for each source file: (1) the package with no name, (2) the java.lang package, and (3) the current package (the package for the current file)... Class String is in the java.lang package, hence it is imported automatically per the above rule. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26782/"
]
} |
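A tiny example of the rule quoted above: types from java.lang need no import, while types from other packages do.
import java.util.ArrayList;   // java.util is NOT imported automatically
import java.util.List;
class AutoImportDemo {
    public static void main(String[] args) {
        // String, Integer and Math live in java.lang, which the compiler
        // imports implicitly for every source file.
        String greeting = "hello".toUpperCase();
        int parsed = Integer.parseInt("42");
        double root = Math.sqrt(parsed);
        // ArrayList/List live in java.util and would not compile
        // without the explicit imports above.
        List<String> words = new ArrayList<>();
        words.add(greeting);
        System.out.println(words + " " + root);
    }
}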
148,250 | I'm not actually sure that "maze" is the correct term. Basically users start in a single Room that has 4 doors (N, S, E, and W). They can go in any direction, and each subsequent room contains anywhere from 1 to 4 doorways that go to other rooms. The "maze" is supposed to be unlimited in size, and to grow as you move through rooms. There is a limited number of Rooms available, however the available number is dynamic and can change. My problem is, I'm not sure of the best data structure for this type of pattern. I first thought about just using a [X][X] array of Room objects, but I'd really rather avoid that since the thing is supposed to grow in any direction, and only rooms that are "visited" should be built. The other thought was to have each Room class contain 4 linked Room properties for N, S, E, and W, and just link to the previous Room, but the problem with that is that I don't know how to identify whether a user goes into a room that has an adjacent room already "built". For example: --- --- ----------
| | | |
Start 5 4
| | | |
--- --- --- ---
--- --- ---------- --- ---
| | | | | |
| 1 2 3
| | | | | |
--- --- --- --- ---------- If the user moves from Start > 1 > 2 > 3 > 4 > 5, then Room #5 needs to know that W contains the starting room, S is room #2 and in this case should not be available, and N can be either a new Room or a wall (nothing). Perhaps I need a mix of the array and the linked rooms, or maybe I'm just looking at this the wrong way. Is there a better way of building the data structure for this type of "maze"? Or am I on the right track with my current thought process, and am just missing a few pieces of information? (In case you're interested, the project is a game very similar to Munchkin Quest) | Give each Room coordinates (start would be (0,0)) and store each generated Room in a dictionary/hashmap by coordinates. It's trivial to determine the adjacent coordinates for each Room, which you can use to check if a Room already exists. You could insert null values to represent locations where it is already determined that no Room exists. (Or, if that's not possible - I'm not sure at the moment - a separate dictionary/hashmap for coordinates that do not contain a Room.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1130/"
]
} |
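A rough Java sketch of this coordinate-keyed approach (hypothetical names; it uses a record for the key, which needs Java 16+, but a small class with equals and hashCode plays the same role on older versions):
import java.util.HashMap;
import java.util.Map;
// Coordinate key with value semantics so it can be used as a HashMap key.
record Coord(int x, int y) {
    Coord north() { return new Coord(x, y + 1); }
    Coord south() { return new Coord(x, y - 1); }
    Coord east()  { return new Coord(x + 1, y); }
    Coord west()  { return new Coord(x - 1, y); }
}
class Room {
    final Coord position;
    Room(Coord position) { this.position = position; }
}
class Maze {
    private final Map<Coord, Room> built = new HashMap<>();
    Maze() {
        Coord start = new Coord(0, 0);
        built.put(start, new Room(start));           // the starting room
    }
    // Returns the existing room at the target coordinate, or builds a new one.
    Room enter(Coord target) {
        return built.computeIfAbsent(target, Room::new);
    }
    boolean exists(Coord c) {
        return built.containsKey(c);                 // e.g. room 5 can ask whether its W/S/N neighbours exist
    }
}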
148,317 | In the project, I found a file, math.c , with a big GPL header and ... //------------------------------------------------------------------------------
/// Returns the minimum value between two integers.
/// \param a First integer to compare.
/// \param b Second integer to compare.
//------------------------------------------------------------------------------
unsigned int min(unsigned int a, unsigned int b)
{
if (a < b) {
return a;
}
else {
return b;
}
} OK, cool, so I need to get a min value and ... this file!? So I need to open the whole project because of it? Or do I need to reinvent mathematics? I don't believe it, it's just insane, so the question is: when can we just remove the GPL header? Must I be a weirdo and do it?: unsigned int min( unsigned int
JEIOfuihFHIYEFHyigHUEFGEGEJEIOFJOIGHE,
unsigned int hyrthrtRERG ) { if
(JEIOfuihFHIYEFHyigHUEFGEGEJEIOFJOIGHE
< hyrthrtRERG ) { return JEIOfuihFHIYEFHyigHUEFGEGEJEIOFJOIGHE; }
else {return hyrthrtRERG ; } } Seriously, do they want me to write code like the above? | Unlike many of the users here, I would simply suggest: Copy it! Make sure the formatting of the code fits your coding standard and also you should probably remove or rewrite the comment. No one will ever know you copied it - when a piece of code is this simple, you might as well have written it from scratch. If your coding standard somehow requires the function to look exactly as it does in the snippet, so be it - as long as it looks as it would look if you had written it from scratch. Think about it, this is hardly(!) the first time this exact piece has been written - when something is this trivial, there is little reason not to copy it, if you do not feel like writing it yourself. Even having this discussion seems a little superfluous to me - we need to be pragmatic if we are to get any real work done! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148317",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12033/"
]
} |
148,460 | I work at a newly-minted startup of five people. We have a Ph. D in machine learning, a former member of the RSpec core team, and the guy who compiles the Git binary for OS X. That's just the employees; the founder has a Ph. D and was CTO for a multi-billion-dollar corporation before leaving to start a (successful) startup, and has now left that to start this one. We also might get a guy with a Ph. D in math. Aaaaaaaaand then there's me, college-dropout intern. I think I'm pretty smart and I'm reading non-stop, but the delta of experience, skill, and knowledge between me and my co-workers is just breathtaking. So put yourself in their shoes: you've got a bright young intern who has a lot to learn but is at least energetic. What would be annoying? What use would you hope to get out of him in the here and now? What would be pleasantly surprising if it happened? | Most important thing: Don't be impressed by the titles. In a short time, you will realize that your Ph. D coworkers, too, are just humans. And some people with a Ph. D never really created anything practically useful. Always remember that, don't feel inferior. What would I expect of you?
To write good code and get things done. Chances are that you are someone who is really working, as you describe yourself as energetic. I've seen lots of people with degrees who took like forever to achieve simple tasks because they were focussing too much on details etc. Put that to good use and deliver good code in a reasonable time and soon everyone will respect you. But don't disrespect the others. They're most likely older and you can probably learn valuable things from them. But don't take anything over mindlessly. Always try to understand and think for yourself. I'd expect you to copy the behaviours and knowledge from them that really work. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148460",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/32733/"
]
} |
148,471 | When I am writing small scripts for myself, I stack my code high with comments (sometimes I comment more than I code). A lot of people I talk to say that I should be documenting these scripts, even though they are personal, so that if I ever do sell them, I would be ready. But aren't comments a form of documentation? Wouldn't this: $foo = "bar"; # this is a comment
print $foo; # this prints "bar" be considered documentation, especially if a developer is using my code? Or is documentation considered to be outside of the code itself? | Comments are definitely documentation. For most projects, comments are (unfortunately) the primary (if not only) form of project documentation. For this reason, it's very important to get it right. You need to make sure that this documentation stays accurate despite code changes. This is a common problem with comments. Developers often "tune" them out when they're working in familiar code, so they forget to update comments to reflect code. This can create out-of-date, and misleading comments. A lot of people suggest making the code self-documenting. This means that instead of comments, you restructure your code to remove the need for them. This can get rid of most of the "what" and "how" comments, but doesn't really help with the "why" comments. While this might work effectively to get rid of most comments, there are still plenty of times where writing a comment is the simplest and most efficient way to document a piece of code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148471",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34364/"
]
} |
148,522 | According to http://dictionary.reference.com push verb (used with object) to press upon or against (a thing) with force in order to move it away. to move (something) in a specified way by exerting force; shove; drive: to push something aside; to push the door open . to effect or accomplish by thrusting obstacles aside: to push one's way through the crowd. to cause to extend or project; thrust. to press or urge to some action or course: His mother pushed him to get a job. This IMO fits to FIFO queues. Is there an explanation for this? | According to legend, the original stack received its name by analogy to the stacks of dishes in the university cafeteria: you put one on top, and the (spring-loaded) stack of dishes goes down a bit, you take one away and it pops up a bit. Therefore 'pushing' received a connotation of operating downwards, even though you don't actually push down on the plate - you just set it down and gravity does the work. "Pushdown stack" is still a common phrase, and stacks tend to grow downwards in memory (i.e. with decreasing memory addresses), although it is doubtful whether that has anything to do with dish stacks or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148522",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49035/"
]
} |
148,677 | Why is 80 characters the "standard" limit for code width? Why 80 and not 79, 81 or 100? What is the origin of this particular value? | You can thank the IBM punch card for this limit - it had 80 columns: | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148677",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14102/"
]
} |
148,747 | It's quite difficult for me to understand these terms. I searched on Google and read a little on Wikipedia, but I'm still not sure. I've determined so far that: an Abstract Data Type is a definition of a new type; it describes its properties and operations. A Data Structure is an implementation of an ADT. Many ADTs can be implemented as the same Data Structure. If I understand correctly, an array as an ADT means a collection of elements, and as a Data Structure it means how they are stored in memory. A stack is an ADT with push and pop operations, but can we speak of a stack data structure if I mean a stack implemented as an array in my algorithm? And why isn't a heap an ADT? It can be implemented as a tree or an array. | Simply put, an ADT (Abstract Data Type) is more of a logical description, while a Data Structure is concrete. Think of an ADT as a picture of the data and the operations to manipulate and change it. A Data Structure is the real, concrete thing. It can be implemented and used within an algorithm. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148747",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
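In Java terms, the split might be sketched like this (an illustration, not from the answer): the interface plays the role of the ADT, describing only operations, while each implementing class is a concrete data structure behind it.
import java.util.LinkedList;
// The ADT: a logical description of a bag of items and its operations.
// Nothing here says how the items are stored.
interface Bag<T> {
    void add(T item);
    int size();
}
// One concrete data structure behind the ADT: a fixed-capacity array.
class ArrayBag<T> implements Bag<T> {
    private final Object[] items;
    private int count = 0;
    ArrayBag(int capacity) { this.items = new Object[capacity]; }
    public void add(T item) {
        if (count == items.length) throw new IllegalStateException("full");
        items[count++] = item;
    }
    public int size() { return count; }
}
// Another data structure behind the very same ADT: a linked list.
class LinkedBag<T> implements Bag<T> {
    private final LinkedList<T> items = new LinkedList<>();
    public void add(T item) { items.add(item); }
    public int size() { return items.size(); }
}
The heap from the question fits the same pattern: a priority-queue ADT that can be realised as a binary tree or as an array.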
148,754 | 80x24 characters seems to be a very common default for terminal windows. This answer provides a very good historical reason as to why the width is 80 characters. But why is the height commonly 24 (or 25) lines? | Early terminals were built around the same cathode ray tubes that were used for televisions. In the 1960's and 1970's these were all 4:3 aspect ratio. If the display needs to fit 80 characters across the width, then, given the aspect ratio of the standard characters (which was taller than 3:4, if I remember correctly) and allowing for a larger space between lines than between characters, you get to fit 24 or 25 lines on the display. I haven't done the exact maths because I can't remember (or find) the exact character aspect ratio or line spacing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148754",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9742/"
]
} |
148,790 | Python's Zen states on line 14 that: Although that way may not be obvious at first unless you're Dutch. Is this a reference to the famous Dutch computer scientist Edsger W. Dijkstra ? | Although that way may not be obvious at first unless you're Dutch. refers to the previous line: There should be one-- and preferably only one --obvious way to do it. And it has been argued that it's in reference to Dijkstra's thoughts on language design as expressed in his comments for the GREEN language (an early ADA): I thought that it was a firm principle of language design --out of concern for programming as a human activity-- that in all respects equivalent programs should have few possibilities for different representations (possibility for differences ideally not going beyond the arbitrary choice of identifiers and the arbitrary ordering of syntactically unordered components). Otherwise completely different styles of programming arise unnecessarily, thereby hampering maintainability, readability and what have you. This requires from the language designers the courage to make up their minds! The designers of the GREEN language have repeatedly lacked that courage, and have provided multiple ways of doing the same thing. The quote has been used to point the antithesis between Python's design (There’s only one way to do it) to Perl's ( There's more than one way to do it ) Slogans, semi-official and unofficial: Perl: "There's more than one way to do it." "There's more ways to do it than you can remember, probably more than you can even recognize." Python: "There should be one -- and preferably only one -- obvious way to do it." At least we tried to pick the right way.
(I have seen a progenitor of this remark attributed to Dijkstra: "I thought..." - Edsger W. Dijkstra on GREEN, an early version of Ada) Further digging revealed this old thread on a Python mailing list, appropriately named "Dijkstra on Python". The thread is centered around the same quote, and the philosophical differences between Python and Perl. But, the Dutch is indeed Guido van Rossum, as Tim Peters (author of the Zen of Python) reveals : In context, "Dutch" means a person from the Netherlands, or one imbued with
Dutch culture (begging forgiveness for that abuse of the word). I would
have said French, except that every French person I asked "how do you make a
shallow copy of a list?" failed to answer alist[:] so I guess that's not obvious to them. It must be obvious to the Dutch,
though, since it's obvious to Guido van Rossum (Python's creator, who is
Dutch), and a persistent rumor maintains that everyone who posts to
comp.lang.python is in fact also Dutch. The French people I asked about
copying a list weren't Python users, which is even more proof (as if it
needed more). Or, in other words, "obvious" is in part a learned, cultural judgment.
There's really nothing universally obvious about any computer language,
deluded proponents notwithstanding. Nevertheless, most of Python is obvious
to the Dutch. Others sometimes have to work a bit at learning the one
obvious way in Python, just as they have to work a bit at learning to
appreciate tulips, and Woody Woodpecker impersonations. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148790",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54164/"
]
} |
148,836 | I am preparing for a test and I can't find a clear answer to the question: what would be the impact of proving that PTIME = NPTIME? I checked Wikipedia and it just mentioned that it would have a "profound impact on maths, AI, algorithms..." etc. Can anyone give me an answer? | The first thing that comes to mind is that the security of public-key cryptography currently depends on certain problems in NP (such as factoring large integers) having no known efficient solution. If P = NP, those problems could be solved in polynomial time, and everything that depends on PKC (including HTTPS, which means the entire modern, worldwide ecommerce infrastructure) would have to be reworked! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148836",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22660/"
]
} |
148,856 | Say there is a team of ten agile developers. Every day they each pick a task from the board and commit several changes against it, until (by the end of the day) they have completed the task. All developers check in directly against trunk (Google-style, every commit is a release candidate, using feature toggles etc.). If they were using a centralized VCS like SVN, every time one of them commits, the build server will integrate and test their changes against the other nine developers' work. The build server will be pretty much running continuously all day. But if they were using a DVCS like git, the developer may wait until they complete the task before pushing all their local commits together up to the central repository. Their changes will not be integrated until the end of the day. In this scenario, the SVN team is continuously integrating more frequently, and discovering integration problems much faster than the git team. Does this mean DVCSs are less suitable for continuous teams than older centralized tools? How do you guys get around this deferred-push issue? | Disclaimer: I work for Atlassian. DVCS does not discourage Continuous Integration as long as the developer pushes remotely on a regular basis to their own branch and the CI server is set up so that it builds the known active branches. Traditionally there are two problems with DVCS and CI: Uncertainty of integration state - unless the developer has been merging regularly from master and running the build, you don't know what the state of the combined changes is. If the developer has to do this manually, chances are it won't be done often enough to pick up problems early enough. Duplication and drift of build configuration - if the build configuration has to be copied from a 'master' build to create a branch build, the configuration for the branch can quickly become out of sync with the build it was copied from. In Bamboo, we introduced the ability for the build server to detect new branches as they are created by developers and automatically set up builds for the branch based off the build configuration for master (so if you change master's build config, it also changes the branches' config to reflect the change). We also have a feature called Merge Strategies that can be used to either update the branch with changes from master before the branch build runs or automatically push the changes from a successful branch build to master, ensuring changes between branches are tested together as soon as possible. Anyhow, if you're interested in learning more, see my blog post "Making Feature Branches effective with Continuous Integration" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54213/"
]
} |
148,959 | In Java, there are multiple languages that compile to Java bytecode and can run on the JVM -- Clojure, Groovy, and Scala being the main ones I can remember off the top of my head. However, Python also turns into bytecode (.pyc files) before being run by the Python interpreter. I might just be ignorant, but why aren't there any other programming languages that compile to python bytecode? Is it just because nobody bothered to, or is there some kind of inherent restriction or barrier in place that makes doing so difficult? | Simple - last time I checked, Python had no formal specification, including its bytecode. CPython is the spec, and bytecode portability is IIRC not required. Thus, it's a moving, undocumented target designed for a specific language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148959",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45762/"
]
} |
148,983 | Looking for the recent and powerful upcoming programming languages over net, I came across Ceylon. I dropped in at ceylon-lang.org and it says: Ceylon is deeply influenced by Java. You see, we're fans of Java, but we know its limitations inside out. Ceylon keeps the best bits of Java but improves things that in our experience are annoying, tedious, frustrating, difficult to understand, or bugprone. What are the advantages of Ceylon over Java? | Ceylon seems like a nice fun language but I'd argue it has relatively few "advantages" over Java. I think it has a nicer syntax and some more "modern" language features - though this is subjective and I'd argue should be relatively minor factors in choosing a programming language. Much more important factors when choosing a language / platform for a serious project: Does it enable you to develop in a better paradigm for your given problem? (no - Ceylon is clearly yet another language in the over-crowded statically-typed Java-like OOP space. Contrast with e.g. Clojure which is targeting the functional language space or Groovy which is a very dynamic OOP JVM language so they are addressing different niches) Has it got a better library ecosystem? (no chance.... Java is unmatched in this regard. At best you'd probably just end up using the Java libraries from Ceylon) Can you get more skilled developers? (unlikely, few people are currently using Ceylon and even if they did there would be a big learning curve to climb) Has it got better tools? (no - Java tooling is very comprehensive and mature) Does it make you more productive? (debatable - it has some nice productive language features, but combined with learning curve and tooling effects it might actually end up behind) Does it provide better performance? (no - the JVM is extremely well optimised for Java, it's a tough call for any other JVM language to beat it. Scala comes close, but that's after many years of fine-tuning...) Does it support more target platforms? (no - it's a JVM language so exactly the same as Java) Is the code going to be more maintainable? (probably not - Java has stood the test of time here precisely because it is relatively stable, mature and doesn't have a lot of advanced language features that might confuse future maintainers) Is there a large, active and helpful community? (no, at least not compared to Java or the other big JVM languages like Scala, Clojure, Groovy etc.) Overall I'd certainly encourage people to experiment with Ceylon and have fun with it from a learning perspective. But I don't currently see any compelling advantages that would make large numbers of people want to switch to it (or choose it ahead of other JVM languages like Clojure, Scala, JRuby or Groovy). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/148983",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43175/"
]
} |
149,056 | In implementations of the Scheme programming language (R6RS standard) I can import a module as follows: (import (abc def xyz)) The system will try to look for a file $DIR/abc/def/xyz.sls where $DIR is some directory where you keep your Scheme modules. xyz.sls is the source code for the module and it is compiled on the fly if necessary. The Ruby, Python, and Perl module systems are similar in this respect. C# on the other hand is a little more involved. First, you have dll files that you must reference on a per project basis. You must reference each one explicitly. This is more involved than say, dropping dll files in a directory and having C# pick them up by name. Second, there isn't a one-to-one naming correspondence between the dll filename and the namespaces offered by the dll. I can appreciate this flexibility, but it can also get out of hand (and has). To make this concrete, it would be nice if, when I write using abc.def.xyz; in my code, C# would try to find a file abc/def/xyz.dll in some directory that C# knows to look in (configurable on a per project basis). I find the Ruby, Python, Perl, Scheme way of handling modules more elegant. It seems that emerging languages tend to go with the simpler design. Why does the .NET/C# world do things this way, with an extra level of indirection? | The following annotation in the Framework Design Guidelines Section 3.3. Names of Assemblies and Dlls offers insight into why namespaces and assemblies are separate. BRAD ABRAMS Early in the design of the CLR we decided to separate the
developer view of the platform (namespaces) from the packaging and
deployment view of the platform (assemblies). This separation allows
each to be optimized independently based on its own criteria. For
example, we are free to factor namespaces to group types that are
functionally related (e.g., all the I/O stuff in System.IO) while the
assemblies can be factored for performance (load time), deployment,
servicing, or versioning reasons. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149056",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45105/"
]
} |
149,167 | TL;DR : Do functional languages handle recursion better than non-functional ones? I am currently reading Code Complete 2. At some point in the book, the author warns us about recursion. He says it should be avoided when possible and that functions using recursion are generally less effective than a solution using loops. As an example, the author wrote a Java function using recursion to compute the factorial of a number like so (it may not be exactly the same since I do not have the book with me at the moment): public int factorial(int x) {
if (x <= 0)
return 1;
else
return x * factorial(x - 1);
} This is presented as a bad solution. However, in functional languages, using recursion is often the preferred way of doing things. For example, here is the factorial function in Haskell using recursion: factorial :: Integer -> Integer
factorial 0 = 1
factorial n = n * factorial (n - 1) And is widely accepted as a good solution. As I have seen, Haskell uses recursion very often, and I did not see anywhere that it is frowned upon. So my question basically is: Do functional languages handle recursion better than non-functional ones? EDIT : I am aware that the examples I used are not the best to illustrate my question. I just wanted to point out that Haskell (and functional languages in general) uses recursion much more often than non-functional languages. | Yes, they do, but not only because they can , but because they have to . The key concept here is purity : a pure function is a function with no side effects and no state. Functional programming languages generally embrace purity for many reasons, such as reasoning about code and avoiding non-obvious dependencies. Some languages, most notably Haskell, even go so far as to allow only pure code; any side effects a program may have (such as performing I/O) are moved to a non-pure runtime, keeping the language itself pure. Not having side effects means you can't have loop counters (because a loop counter would constitute mutable state, and modifying such state would be a side effect), so the most iterative a pure functional language can get is to iterate over a list (this operation is typically called foreach or map ). Recursion, however, is a natural match with pure functional programming - no state is needed to recurse, except for the (read-only) function arguments and a (write-only) return value. However, not having side effects also means that recursion can be implemented more efficiently, and the compiler can optimize it more aggressively. I haven't studied any such compiler in depth myself, but as far as I can tell, most functional programming languages' compilers perform tail call optimization, and some may even compile certain kinds of recursive constructs into loops behind the scenes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149167",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38148/"
]
} |
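To make the tail-call point in the answer above a little more concrete, here is a rough sketch in Java (chosen only for familiarity; the class name, the accumulator parameter and the use of BigInteger are my own additions, not part of the original question or answer). The first method is the accumulator-style, tail-recursive form of factorial; the second is the loop that a tail-call-optimizing compiler can effectively turn it into. The JVM itself does not perform this optimization, which is part of why the recursive Java version is discouraged while the Haskell version is idiomatic; a compiler like GHC is free to treat the recursive definition as the loop, so it carries no stack-growth penalty.
import java.math.BigInteger;
public class FactorialSketch {
    // Tail-recursive form: the recursive call is the very last action,
    // so a tail-call-eliminating compiler can reuse the current stack frame.
    static BigInteger factorialAcc(BigInteger n, BigInteger acc) {
        if (n.signum() <= 0) {
            return acc;
        }
        return factorialAcc(n.subtract(BigInteger.ONE), acc.multiply(n));
    }
    // What such a compiler effectively produces: an ordinary loop.
    static BigInteger factorialLoop(int n) {
        BigInteger acc = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            acc = acc.multiply(BigInteger.valueOf(i));
        }
        return acc;
    }
    public static void main(String[] args) {
        System.out.println(factorialAcc(BigInteger.valueOf(10), BigInteger.ONE)); // 3628800
        System.out.println(factorialLoop(10)); // 3628800
    }
}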
149,190 | So I have a function that takes two dates. The SQL query gets all the results between the two dates. I currently have my function setup as such: function myFunc(start, end) Where start is the most recent date and end is the oldest date. Is this intuitive/correct or is there another naming/ordering convention I should be using? | Typically, you'd think that start <= end, so not really. Consider renaming it to give a hint to the business logic you are trying to implement. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149190",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36051/"
]
} |
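A tiny illustration of the renaming advice above; the repository and method names and the use of java.time are hypothetical, invented only to show how the business meaning and the expected ordering can be pushed into the signature instead of a generic myFunc(start, end) where "start" is the newer date:
import java.time.LocalDate;
import java.util.List;
interface OrderRepository {
    // The parameter names state the intent and imply from <= to,
    // which the original start/end naming hid from the caller.
    List<String> findOrdersBetween(LocalDate from, LocalDate to);
}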
149,232 | The company where I work is aiming for a 10% maximum error margin. They expect analysts not to miss the estimate by more than 10% in either direction. I don't know what to think about it, since I have nothing to compare it to. What would be a good baseline for judging whether our estimates are too far off, in either direction?
How much (in %) do you think is okay to miss? | Unless you are estimating something very similar to that which you and your co-workers have done before, +/-10% is ridiculously optimistic. Your management either doesn't have a lot of experience with software, or they're not aware of Large Limits to Software Estimation . That paper has some accompanying supporting material , and a lot of punditry can be found. Let's examine a far simpler system than a typical software project: Rubik's Cube. You can solve any position in 20 moves , max. But since you're estimating, you can only look at a given cube for a few minutes before giving the solution. Can you give a good estimate? No, sometimes estimating a process takes longer than doing the process. Another simple system: Pinocchio. A wooden automaton, his nose-piece grows when he utters a lie. What happens when Pinocchio is at rest, and then says "My nose is growing"? Some systems aren't amenable to prediction, they're undecidable. These two problems are built into most software systems. Because of that, you'll never get estimates close to +/-10%. My advice is to give a heavily padded estimate, work like a slave to get the project done as fast as you can, and then look busy until you're within 10% under or over. At that point, announce a spectacular success. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54388/"
]
} |
149,303 | There are 3 important naming conventions: with_underscores PascalCased camelCased Other variants are not important because they are not commonly used. For variables it seems that the one with underscores is the most used by developers so I'll stick with that. I think it's the same for functions. But what about class, and method names? Which of these 3 is the most used by developers for such constructs? (personally, it's 3. for methods and 2. for classes) Please do not post things like "use what you feel is right", because the code I'm writing is API for other developers, and I'd like to adopt the most popular coding style :) | I had the same question about a year ago so I looked at some code myself. Here is what I found (constants were ALL_CAPS in every project, by the way): ╔═══════════════════════╦═════════════╦════════════╦══════════════╦════════════╦════════════╗
║ PHP Project ║ Classes ║ Methods ║ Properties ║ Functions ║ Variables ║
╠═══════════════════════╬═════════════╬════════════╬══════════════╬════════════╬════════════╣
║ Akelos Framework ║ PascalCase ║ camelCase ║ camelCase ║ lower_case ║ lower_case ║
║ CakePHP Framework ║ PascalCase ║ camelCase ║ camelCase ║ camelCase ║ camelCase ║
║ CodeIgniter Framework ║ Proper_Case ║ lower_case ║ lower_case ║ lower_case ║ lower_case ║
║ Concrete5 CMS ║ PascalCase ║ camelCase ║ camelCase ║ lower_case ║ lower_case ║
║ Doctrine ORM ║ PascalCase ║ camelCase ║ camelCase ║ camelCase ║ camelCase ║
║ Drupal CMS ║ PascalCase ║ camelCase ║ camelCase ║ lower_case ║ lower_case ║
║ Joomla CMS ║ PascalCase ║ camelCase ║ camelCase ║ camelCase ║ camelCase ║
║ modx CMS ║ PascalCase ║ camelCase ║ camelCase ║ camelCase ║ lower_case ║
║ Pear Framework ║ PascalCase ║ camelCase ║ camelCase ║ ║ ║
║ Prado Framework ║ PascalCase ║ camelCase ║ Pascal/camel ║ ║ lower_case ║
║ SimplePie RSS ║ PascalCase ║ lower_case ║ lower_case ║ lower_case ║ lower_case ║
║ Symfony Framework ║ PascalCase ║ camelCase ║ camelCase ║ camelCase ║ camelCase ║
║ WordPress CMS ║ ║ ║ ║ lower_case ║ lower_case ║
║ Zend Framework ║ PascalCase ║ camelCase ║ camelCase ║ camelCase ║ camelCase ║
╚═══════════════════════╩═════════════╩════════════╩══════════════╩════════════╩════════════╝ So after looking at all this, I decided to go with: ClassName methodName propertyName function_name (meant for global functions) $variable_name | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149303",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52792/"
]
} |
149,457 | Many of us, including me, started their programming life with programs written on home computers , something like 10 PRINT "ENTER RADIUS"
20 INPUT R
30 PRINT "CIRCUMFERENCE="; 2 * R * PI
40 PRINT "AGAIN?"
50 INPUT A$
60 IF A$="Y" THEN GOTO 10
70 END Of course, the line-number based BASIC was prone for creating spagetti code, also because most BASIC dialects missed structural statements like WHILE , doing everything but the FOR -loop with IF , GOTO and GOSUB . I'm talking about BASIC dialects before 1991, when QBASIC and Visual Basic appeared. While BASIC dialects may have promoted bad style amongst aspiring programmers, were there larger commercial projects created in such a BASIC dialect? If so, how did they manage to live with and workaround the obvious shortcomings? By "serious", I mean: Not a game (I know some commercial games were written in BASIC, for example, Pimania ) Not freeware Not trivial, that is, reasonably large (say: at least 1500 LOC ) Sold to several customers (not an in-house development) "Mission critical" is a plus | Sure. Before the Altair/MITS/SWTPC/Kim/Sinclair/Pet/RadioScrap/OSI/Apple things happened, there was a delightful little machine known as the IBM 5100 . It had BASIC in ROM , a big cassette tape drive (or two), 8 KB of memory. a 24 line screen, and a printer, all for a measly USD 10,000 -- an order of magnitude cheaper than your typical mini. Originally built for scientists ( APL in ROM was also an option), but then a few accounting types discovered it, and started a craze: every small business wanted one. With custom software, of course. The 5110 followed that, with the tape drives replaced by 8" floppies. Any commercial software? Galoons . Can you say general ledger, payroll, accounts payable, accounts receivable, inventory control, and invoicing? I have been there, done that -- in BASIC. Utility bills, new and used car inventory, garbage truck pickup and beverage delivery scheduling? Yup -- BASIC. Want to track iron ore from mines onto trains onto ships... BASIC. Everything that wasn't raised floor was likely getting done in BASIC. Commercially, I mean. (Because RPG II doesn't count ;-). How did one work around the limitations? Well, the first thing you did was send the customer back to IBM for more memory, Because who could write anything serious in 8 KB? You simply had to have 16. And two tape drives, if possible, because automata theory aside, merge sorting on a single tape is, well, a tad slow. Oh, sorry - you meant the limitations of BASIC. Well, you had to manage your resources pretty carefully -- things like line numbers -- because you didn't want to run out of those; real pain in the behind to have to renumber a whole section, and type it all back in, without accidentally losing a line or two of code. Nah - just kidding. We didn't actually have that problem until micro---er, home computers showed up, with a BASIC interpreter that couldn't do renumbering by itself. We also used modularity - where you called a new program, ran it till it quit and returned back to the calling program. A gosub on steroids (because you got more memory to use), but way slower (because it took a while for the machine to find the program on the tape, and load it in, and then rewind and find the original program and load that back...). A lot like a fork and exec, but without the fork, only better because the whole memory space was shared. Rigorous use of conventions also helped -- you know, like "you MUST always target a GOSUB at a comment line that says what this routine does, and you SHOULD do the same for a GOTO when possible. Stuff like that. Oh, and structured programming , a little later -- "by convention" again. 
Some even went a little to the extreme: OAOO , YAGNI , TSTTCPW , pairing, refactor mercilessly, that sort of stuff. Not by those names, of course. (See also: Ecclesiastes ;-) The glory days. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149457",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6617/"
]
} |
149,465 | while (1) {
if (1+1==2) {
print "Yes, you paid attention in Preschool!";
} else {
print "Wait... I thought 1+1=2";
}
} As a developer, we all have to use loops very frequently. We know that. What I was wondering was, who thought of the idea to have loops? What language introduced loops? What was the first loop construct? Was it a while loop? A for loop? etc? | As mouviciel and Emilio Garavaglia noted, the concept predates computing. However, the first instance of a software loop was the loop Ada Lovelace used to calculate Bernoulli numbers , as described in Note G of her translation of the Sketch of the Analytical Engine Invented by Charles Babbage , by L. F. Menabrea . The Analytical Engine's ability to loop is noted early on by Menabrea: This being understood, let us, at the beginning of the series of operations we wish to execute, place the needle C on the division 2, the needle B on the division 5, and the needle A on the division 9. Let us allow the hammer of the dial C to strike; it will strike twice, and at the same time the needle B will pass over two divisions. The latter will then indicate the number 7, which succeeds the number 5 in the column of first differences. If we now permit the hammer of the dial B to strike in its turn, it will strike seven times, during which the needle A will advance seven divisions; these added to the nine already marked by it will give the number 16, which is the square number consecutive to 9. If we now recommence these operations, beginning with the needle C, which is always to be left on the division 2, we shall perceive that by repeating them indefinitely, we may successively reproduce the series of whole square numbers by means of a very simple mechanism. The Analytical Engine's looping mechanism is directly inherited from Joseph Marie Jacquard's mechanical loom (1801), as noted in the memoir by Menabrea: It will now be inquired how the machine can of itself, and without having recourse to the hand of man, assume the successive dispositions suited to the operations. The solution of this problem has been taken from Jacquard's apparatus, used for the manufacture of brocaded stuffs, in the following manner:— Two species of threads are usually distinguished in woven stuffs; one is the warp or longitudinal thread, the other the woof or transverse thread, which is conveyed by the instrument called the shuttle, and which crosses the longitudinal thread or warp. When a brocaded stuff is required, it is necessary in turn to prevent certain threads from crossing the woof, and this according to a succession which is determined by the nature of the design that is to be reproduced. Formerly this process was lengthy and difficult, and it was requisite that the workman, by attending to the design which he was to copy, should himself regulate the movements the threads were to take. Thence arose the high price of this description of stuffs, especially if threads of various colours entered into the fabric. To simplify this manufacture, Jacquard devised the plan of connecting each group of threads that were to act together, with a distinct lever belonging exclusively to that group. All these levers terminate in rods, which are united together in one bundle, having usually the form of a parallelopiped with a rectangular base. The rods are cylindrical, and are separated from each other by small intervals. The process of raising the threads is thus resolved into that of moving these various lever-arms in the requisite order. To effect this, a rectangular sheet of pasteboard is taken, somewhat larger in size than a section of the bundle of lever-arms. 
If this sheet be applied to the base of the bundle, and an advancing motion be then communicated to the pasteboard, this latter will move with it all the rods of the bundle, and consequently the threads that are connected with each of them. But if the pasteboard, instead of being plain, were pierced with holes corresponding to the extremities of the levers which meet it, then, since each of the levers would pass through the pasteboard during the motion of the latter, they would all remain in their places. We thus see that it is easy so to determine the position of the holes in the pasteboard, that, at any given moment, there shall be a certain number of levers, and consequently of parcels of threads, raised, while the rest remain where they were. Supposing this process is successively repeated according to a law indicated by the pattern to be executed, we perceive that this pattern may be reproduced on the stuff. For this purpose we need merely compose a series of cards according to the law required, and arrange them in suitable order one after the other; then, by causing them to pass over a polygonal beam which is so connected as to turn a new face for every stroke of the shuttle, which face shall then be impelled parallelly to itself against the bundle of lever-arms, the operation of raising the threads will be regularly performed. Thus we see that brocaded tissues may be manufactured with a precision and rapidity formerly difficult to obtain. Jacquard's loom is a very early application of a loop in the context of ordering a machine to produce a repeated output : The idea behind the Jacquard-loom was a system of punch cards and hooks. The cards were made very thick and had rectangular holes punched in them. The hooks and needles used in weaving were guided by these holes in the cardboard. When the hooks came into contact with the card they were held stationary unless it encountered one of the punched holes. Then the hook was able to pass through the hole with a needle inserting another thread, thus forming the desired pattern. Intricate patterns were achieved by having many cards arranged one after the other and/or used repeatedly. Jacquard's loom is also recognized as a very early form of a stored program : If the impetus behind much of the development of calculating machines discussed so far had arisen from numerical computation, the motivation that led to the earliest form of `stored program' was to come from a very different source: the textile industry. We have seen earlier that one of the fundamental aspects of computational systems is the concept of representing information and, although we have not done so explicitly, the application of this idea can be discerned in all of the artefacts that we have examined up to now: in the development of written representations for numeric values and the mechanical parallels that sprung from these. Thus, the alignment of pebbles on an abacus frame, the juxtaposition of moving scales on a slide-rule, and the configuration of cogged gears on the devices of Schickard, Pascal and Leibniz, are all examples of representational techniques that seek to simplify the complex processes underlying arithmetic tasks. There are, however, categories of information, and representations thereof, other than number upon which computational processes can be performed. The weaving technology developed by Joseph-Marie Jacquard in 1801 illustrates one example of such a category. 
Charles Babbage also adapted Jacquard's storing procedure into the Analytical Engine , the presence or absence of a hole communicated a simple on-off command to the machine: The Analytical Engine has many essential features found in the modern digital computer. It was programmable using punched cards, an idea borrowed from the Jacquard loom used for weaving complex patterns in textiles. The Engine had a 'Store' where numbers and intermediate results could be held, and a separate 'Mill' where the arithmetic processing was performed. It had an internal repertoire of the four arithmetical functions and could perform direct multiplication and division. It was also capable of functions for which we have modern names: conditional branching, looping (iteration), microprogramming, parallel processing, iteration, latching, polling, and pulse-shaping, amongst others, though Babbage nowhere used these terms. It had a variety of outputs including hardcopy printout, punched cards, graph plotting and the automatic production of stereotypes - trays of soft material into which results were impressed that could be used as molds for making printing plates. The Analytical Engine's conditional branches combined with the Jacquard inspired mechanical loops and storing procedure are dauntingly similar (conceptually) to your example, especially if we add Babbage's printer to the mix, for the print "..."; parts. Obviously mechanical loops predate Jacquard's loom, the first known device to work in a loop fashion being the Antikythera mechanism (100 BCE), and if we look even further into history (and venture horribly off topic), sundials are probably the oldest man made mechanisms where an understanding of loops is evident, following of course the repeating pattern of the sun's and other stellar bodies' orbits. However I think that in context of computing (and not calculating or anything else), the Analytical Engine and Ada's Bernoulli numbers calculating algorithm can be credited for introducing loops, sharing at least some of the credit with Jacquard's loom, having directly adapted the concept from it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149465",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34364/"
]
} |
149,493 | ...and unmarshalling/deserializing? Wikipedia's explanation leaves me none-the-wiser! I'm a Java programmer, in case the terminology is used differently in different languages. | Semantics are important here: Marshalling implies moving the data, it does not imply transforming the data from its native representation or storage. Java Objects can be Marshalled over the wire in their native representation. Serializing implies transforming the data to some non-native intermediate representation. For example: transforming a Java Object to JSON or XML. Of course, most systems that Marshal data, Serialize it to some non-native format before they transport it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149493",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36339/"
]
} |
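A rough Java sketch of that distinction; the Account class, its fields and the socket usage are invented for illustration. Marshalling here simply moves the object over the wire in the representation Java already gives it, while serializing produces a non-native intermediate form (JSON) first:
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.net.Socket;
class Account implements Serializable {
    final String id;
    final long balance;
    Account(String id, long balance) { this.id = id; this.balance = balance; }
}
public class MarshalVsSerialize {
    // Marshalling: push the object onto the wire without any hand-written transformation step.
    static void marshal(Account account, Socket socket) throws Exception {
        ObjectOutputStream out = new ObjectOutputStream(socket.getOutputStream());
        out.writeObject(account);
        out.flush();
    }
    // Serializing: transform the object into a non-native intermediate representation.
    static String serializeToJson(Account account) {
        return String.format("{\"id\":\"%s\",\"balance\":%d}", account.id, account.balance);
    }
}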
149,563 | I was told by a colleague that in Java object creation is the most expensive operation you could perform. So I can only conclude to create as few objects as possible. This seems somewhat to defeat the purpose of object oriented programming. If we aren't creating objects then we are just writing one long class C style, for optimization? | Your colleague has no idea what they are talking about. Your most expensive operation would be listening to them . They wasted your time mis-directing you to information that is over a decade out of date (as of the original date this answer was posted) as well as you having to spend time posting here and researching the Internet for the truth. Hopefully they are just ignorantly regurgitating something they heard or read from more than a decade ago and don't know any better. I would take anything else they say as suspect as well, this should be a well known fallacy by anyone that keeps up to date either way. Everything is an Object ( except primitives ) Everything other than primitives ( int, long, double , etc ) are Objects in Java. There is no way to avoid Object creation in Java. Object creation in Java due to its memory allocation strategies is faster than C++ in most cases and for all practical purposes compared to everything else in the JVM can be considered "free" . Early as in late 1990's early 2000s JVM implementations did have some performance overhead in the actual allocation of Objects. This hasn't been the case since at least 2005. If you tune -Xms to support all the memory you need for your application to run correctly, the GC may never have to run and sweep most of the garbage in the modern GC implementations, short lived programs may never GC at all. It doesn't try and maximize free space, which is a red herring anyway, it maximizes performance of the runtime. If that means the JVM Heap is almost 100% allocated all the time, so be it. Free JVM heap memory doesn't give you anything just sitting there anyway. There is a misconception that the GC will free memory back to the rest of the system in a useful way, this is completely false! The JVM heap doesn't grow and shrink so that the rest of the system is positively affected by free memory in the JVM Heap . -Xms allocates ALL of what is specified at startup and its heuristic is to never really release any of that memory back to the OS to be shared with any other OS processes until that instance of the JVM quits completely. -Xms=1GB -Xmx=1GB allocates 1GB of RAM regardless of how many objects are actually created at any given time. There are some settings that allow for percentages of the heap memory to be release, but for all practical purposes the JVM never is able to release enough of this memory for this to ever happen so no other processes can reclaim this memory, so the rest of the system doesn't benefit from the JVM Heap being free either. An RFE for this was "accepted" 29-NOV-2006, but nothing has ever been done about it. This is behavior is not considered a concern by anyone of authority. There is a misconception that creating many small short lived objects causes the JVM to pause for long periods of time, this is now false as well Current GC algorithms are actually optimized for creating many many small objects that are short lived, that is basically the 99% heuristic for Java objects in every program. Attempts at Object Pooling will actually make the JVM perform worse in most cases. 
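To illustrate the allocation pattern being described, a small, purely hypothetical Java sketch (the Point class and the loop count are made up). This many-small-short-lived-objects shape is exactly what modern generational collectors are tuned for, and wrapping such objects in a hand-rolled pool would only add bookkeeping and contention:
final class Point {
    final double x;
    final double y;
    Point(double x, double y) { this.x = x; this.y = y; }
}
public class AllocationSketch {
    public static void main(String[] args) {
        double sum = 0;
        for (int i = 0; i < 1_000_000; i++) {
            Point p = new Point(i, i * 0.5); // becomes garbage almost immediately
            sum += p.x + p.y;
        }
        System.out.println(sum); // the collector reclaims the dead Points cheaply, in bulk
    }
}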
The only Objects that need pooling today are Objects that refer to finite resources that are external to the JVM (Sockets, Files, Database Connections, etc.) and can be reused. Regular objects cannot be pooled in the same sense as in languages that allow you direct access to memory locations. Object caching is a different concept, and may or may not be what some people naively call pooling; the two concepts are not the same thing and should not be conflated. The modern GC algorithms don't have this problem because they don't deallocate on a schedule; they deallocate when free memory is needed in a certain generation. If the heap is big enough, then no deallocations happen long enough to cause any pauses. Object Oriented Dynamic languages are beating C even nowadays on compute-sensitive tests. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149563",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40531/"
]
} |
149,708 | So I have a webservice that has something like a getAccount where it would return an identifier to the account if it got it, else throw an exception. The client will always want to create an account if an exception is thrown with the same info the get is done with. I am creating a convenience library for clients that will be handling all of the webservice calls inside so they don't need to know how to do the calls themselves. What I am wondering is in this library if I were to create a getAccount(accountName) that will get the account if it exists, and if it does not then create it and return the info, is that a bad thing to do? Should I leave it to the client to handle the exceptions or simply name it something like getOrCreateAccount? Does it matter? Is it bad practice to create something in a get operation? | Yes, it matters. In my opinion, it's a generally a bad practice to create something in a procedure that is not documented as having the powers of creation. Either name the procedure getOrCreate... or have a separate create... procedure and then if you really want, have a getOrCreate... that first attempts get... , and if that fails, calls create... and then calls get... . The user of the library will probably not expect the get... procedure to create if the get operation fails. If they suddenly find out that their test calls to get... created a whole tonne of data, they will probably be rather surprised. And how do they clean it up? What if they write code thinking that they will get an error if get... fails and they want to handle that their way? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149708",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21894/"
]
} |
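A short sketch of the separation described in the answer above, with invented class, method and exception names: get never creates, create is explicit, and the combined behaviour gets an honest name of its own.
public class AccountClient {
    /** Returns the account id, or throws; never creates anything. */
    public String getAccount(String accountName) throws AccountNotFoundException {
        // ... the real webservice lookup would go here ...
        throw new AccountNotFoundException(accountName);
    }
    /** Creates the account on the webservice. */
    public void createAccount(String accountName) {
        // ... the real webservice creation call would go here ...
    }
    /** Explicitly named: try the get, and only if that fails, create and get again. */
    public String getOrCreateAccount(String accountName) throws AccountNotFoundException {
        try {
            return getAccount(accountName);
        } catch (AccountNotFoundException notFound) {
            createAccount(accountName);
            return getAccount(accountName);
        }
    }
}
class AccountNotFoundException extends Exception {
    AccountNotFoundException(String accountName) {
        super("No account named " + accountName);
    }
}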
149,762 | I have hardly a year's experience in coding. After I started working, most of the time I would be working on someone else's code, either adding new features over the existing ones or modifying the existing features. The guy who has written the actual code doesn't work in my company any more. I am having a hard time understanding his code and doing my tasks. Whenever I tried modifying the code, I have in some way messed with the working features. What all should I keep in mind, while working over someone else's code? | Does the code have unit tests? If not, I strongly suggest you start adding them. This way, you can write new features/bug fixes as failing tests and then modify the code that the test passes. The more of those you build, the more confidence you will have that your added code has not broken something else. Writing unit tests for code you do not fully understand will help you understand said code. Of course, functional tests should be added if they do not already exist. My impression was that those already did exist form the OP question. If I am wrong on that point, then these functional tests should be your first step. Eagle76dk makes a great point about getting your manager on board for doing this work -- more details in Eagle76dk's post. In addition as you write these tests I'd encourage you to try to write the tests so that they verify business behavior that the method may have tried to accomplish, not code behavior. Also, don't at all assume the business behaviors you see in the code are the correct ones - If you have someone who could tell you what the application should be doing that is in many cases more valuable than what the code might tell you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53710/"
]
} |
149,792 | Let it be known that I am a big fan of dependency injection (DI) and automated testing. I could talk all day about it. Background Recently, our team just got this big project that is to built from scratch. It is a strategic application with complex business requirements. Of course, I wanted it to be nice and clean, which for me meant: maintainable and testable. So I wanted to use DI. Resistance The problem was in our team, DI is taboo. It has been brought up a few times, but the gods do not approve. But that did not discourage me. My Move This may sound weird but third-party libraries are usually not approved by our architect team (think: "thou shalt not speak of Unity , Ninject , NHibernate , Moq or NUnit , lest I cut your finger"). So instead of using an established DI container, I wrote an extremely simple container. It basically wired up all your dependencies on startup, injects any dependencies (constructor/property) and disposed any disposable objects at the end of the web request. It was extremely lightweight and just did what we needed. And then I asked them to review it. The Response Well, to make it short. I was met with heavy resistance. The main argument was, "We don't need to add this layer of complexity to an already complex project". Also, "It's not like we will be plugging in different implementations of components". And "We want to keep it simple, if possible just stuff everything into one assembly. DI is an uneeded complexity with no benefit". Finally, My Question How would you handle my situation? I am not good in presenting my ideas, and I would like to know how people would present their argument. Of course, I am assuming that like me, you prefer to use DI. If you don't agree, please do say why so I can see the other side of the coin. It would be really interesting to see the point of view of someone who disagrees. Update Thank you for everyone's answers. It really puts things into perspective. It's nice enough to have another set of eyes to give you feedback, fifteen is really awesome! This are really great answers and helped me see the issue from different sides, but I can only choose one answer, so I will just pick the top voted one. Thanks everyone for taking the time to answer. I have decided that it is probably not the best time to implement DI, and we are not ready for it. Instead, I will concentrate my efforts on making the design testable and attempt to present automated unit testing. I am aware that writing tests is additional overhead and if ever it is decided that the additional overhead is not worth it, personally I would still see it as a win situation since the design is still testable. And if ever testing or DI is a choice in future, the design can easily handle it. | Taking a couple of the counter arguments: "We want to keep it simple, if possible just stuff everything into one assembly. DI is an uneeded complexity with no benefit". "its not like we will be plugging in different implementations of components". What you want is for the system to be testable. To be easily testable you need to be looking at mocking the various layers of the project (database, communications etc.) and in this case you will be plugging in different implementations of components. Sell DI on the testing benefits it gives you. If the project is complex then you're going to need good, solid unit tests. 
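To back the testing argument just made with something concrete, a small hedged sketch (written in Java purely for illustration; the C# shape is identical, and every name below is invented). Constructor injection plus a hand-written fake is enough to unit test the business logic without a database or network, and it needs neither a DI container nor a mocking library, which also sidesteps the "no third-party libraries" rule:
interface PaymentGateway {
    boolean charge(String accountId, long amountInCents);
}
class BillingService {
    private final PaymentGateway gateway;
    // Constructor injection: no container required, the dependency is simply passed in.
    BillingService(PaymentGateway gateway) { this.gateway = gateway; }
    boolean billMonthlyFee(String accountId) { return gateway.charge(accountId, 499); }
}
// A hand-written fake used by the tests; no mocking framework involved.
class RecordingFakeGateway implements PaymentGateway {
    String lastAccount;
    long lastAmount;
    @Override
    public boolean charge(String accountId, long amountInCents) {
        lastAccount = accountId;
        lastAmount = amountInCents;
        return true;
    }
}
public class BillingServiceSketch {
    public static void main(String[] args) {
        RecordingFakeGateway fake = new RecordingFakeGateway();
        BillingService service = new BillingService(fake);
        System.out.println(service.billMonthlyFee("acct-42") + " charged " + fake.lastAmount);
    }
}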
Another benefit is that, as you are coding to interfaces, if you come up with a better implementation (faster, less memory hungry, whatever) of one of your components, using DI makes it a lot easier to swap out the old implementation for the new. What I'm saying here is that you need to address the benefits that DI brings rather than arguing for DI for the sake of DI. By getting people to agree to the statement "We need X, Y and Z", you then shift the problem. You need to make sure that DI is the answer to this statement. By doing so, your co-workers will own the solution rather than feeling that it's been imposed on them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149792",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25679/"
]
} |
149,827 | Is there some consensus among historians on who was the first programmer ever? If so, who was it and what were they programming on? I find it more interesting to know more about the pioneers of programming, regardless if they programmed on a programmable machine or if they've designed the machine themselves to do some computing task. | Augusta Ada King , Countess of Lovelace (1815 - 1852) is credited by most as the first programmer. The first program was an algorithm to calculate Bernoulli numbers for Charles Babbage's Analytical Engine , and it appeared in her translation notes of Luigi Menabrea's memoir "Sketch of the Analytical Engine Invented by Charles Babbage" , more specifically Note G . That said, the math necessary for calculating Bernoulli numbers were known long before Ada's time, however Ada's algorithm is the first instance of a calculating algorithm designed to be executed by a (at the time still hypothetical) machine. Konrad Zuse (1910 – 1995) is also a solid candidate for the "first programmer" moniker, having invented a floating point binary mechanical calculator with limited programmability, the Z1 (1936) but more importantly the Z3 (1941), a Turing complete electro-mechanical computer. When it comes to electronic computers, the Atanasoff–Berry Computer (conceived in 1937, operational by 1942) is credited as the first electronic digital computing device, so it's reasonable to think of its designers, John Vincent Atanasoff and Clifford Berry as programming pioneers. The Atanasoff–Berry Computer wasn't programmable though, the first programmable electronic computer was ENIAC (1946). Although ENIAC's designers John Mauchly and J. Presper Eckert probably did a fair share of programming, most of ENIAC's programming were done by these lovely ladies : Their names from left to right are Kathy Kleiman 1 , Jean Bartik , Marlyn Meltzer , Kay Mauchly Antonelli and Betty Holberton at the front. Two of the ENIAC's female programmers, Fran Bilas and Ruth Lichterman , are missing from the photo. When it comes to digital computers, the first one was Colossus (operational by December 1943), and the project's lead Tommy Flowers (1905 – 1998) should also be considered a programming pioneer, along with Max Newman (1897 – 1984) who was responsible for formulating the requirements for the machine and of course Alan Turing (1912 – 1954), who had designed Bletchley Park's earlier electromechanical cryptanalytical machine, the Bombe (1939), and was influential in Colossus design 2 . 1 Kathy Kleiman is the founder of the ENIAC Programmers Project and obviously not an ENIAC programmer (too young :) 2 A History of Computing in the Twentieth Century: The Colossus - B. Randell, Newcastle University (PDF) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149827",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1683/"
]
} |
149,839 | So what happened to XHTML5? http://www.w3.org/TR/html5/ That page is a draft for both xhtml5 and html5?
So there's no difference between these doctypes? | In 2012 at the moment of writing, it was clear that W3C decided to abandon XHTML for HTML 5. This decision was motivated by several reasons: Only few people were really interested in XHTML. Most of the websites were written in plain HTML. Even fewer really understood what XHTML is about and how to use it. Too many websites which pretended to serve XHTML used wrong headers, instead of Content-Type: application/xhtml+xml . Even when you fully understand what XHTML is and what must be the headers, the thing is really tricky with some crappy browsers not accepting/supporting application/xhtml+xml content type. This meant that you had to change the header according to the browser. The XML part of XHTML also caused some weird situations the developers had to solve. One is INVALID_STATE_ERR: DOM Exception 11 message appearing when you assign the text containing HTML characters (like é ) to an element within the XHTML page. When you encounter this error with its very helpful message in a large web application after doing an AJAX request, you have really no idea if it's the fault of JQuery, AJAX, or something else. Writing HTML 5 code doesn't mean mixing up tags all around. If you're passionate about XML and XHTML, you can still write HTML 5 code which will look very close to XML. In the early days of mobile phones, XHTML was interesting for the mobile devices which were not very powerful. Parsing XML is much easier than HTML. Now, with dual-core mobile devices, it really doesn't matter if they have to parse clean valid XML or dirty HTML full of hacks and mixed tags. The spec of October 2014 mentions XHTML syntax . For the moment, it is unclear whether there is such a thing as the new XHTML language (not syntax ), and if there is, what will be the position of XHTML, nor the adoption of the new XHTML standard by the mainstream browsers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149839",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54704/"
]
} |
149,868 | Everybody nowadays does SOA , even if some don't actually understand what is all about. So they do it wrong. Using that as an analogy I know what REST is (or at least I think I do) and want to do some of it. But I want to do it right. So my question is what's the proper way to do REST? | Well, there are a lot of ways to learn how to build a RESTful web application and no, there isn't a unique right way. RESTful is not a standard but it uses a set of standards (HTTP, URI, Mime Type, ...). Start with this: How I Explained REST to my wife Then, proceed with this: RESTful Web Services Cookbook And then put your entire effort to develop web applications because the best way to learn is doing experiments and you can learn so much from your mistakes ;) Don't worry if your first web apps won't be completely RESTful: you will find the way to do it! So, quoting Obi-Wan Kenobi, "may the force be with you!" ;) EDIT Ok, let me be more specific. You wanna make some RESTful webapp, huh? Well, as I said there are many ways to do it but this is the main guideline. Definition REST (Representational State Transfer) is a software architecture's style for distributed system (like WWW).
It is not a standard but it uses a set of standards: HTTP, AJAX, HTML, URI, Mime Type, etc.
We are talking about representation of a resource, not about a resource itself. Taken from 'How I explained REST to my wife': Wife : A web page is a resource? Ryan : Kind of. A web page is a representation of a resource. Resources are just concepts. Architecture's Constraints Client-Server : client and server are separated by the Uniform Interface (described below). Stateless : server-client communication is done without saving a particular client state on the server. Cachable : the client might have a cache of responses of requests already made. Layered System : the client doesn't know if it's directly connected with an end-server or if the communication is done through intermediates. Uniform Interface Resources' Identification : each resource has to be identified by a URI. Protocol : in order to get in communication client and server, a protocol has to be done before. Each request might have the right MIME Type (application/xml, text/html, application/rdf+xml, etc.), the right headers and the right HTTP method (see CRUD description below). CRUD Ok, we saw that to identify resources we can use URI, but we need something else for the actions (add, modify, delete, etc): a great welcome to CRUD (Create, Read, Update and Delete). Create { HTTP: POST } { SQL: INSERT } => create a new resource Read { HTTP: GET } { SQL: SELECT } => get a resource Update { HTTP: PUT } { SQL: UPDATE } => modify a resource Delete { HTTP: DELETE } { SQL: DELETE } => delete a resource Now, concerning PUT and DELETE, some tech problems could appear (you'll get them with HTML form): often developers bypass this problem using POST for each 'PUT' and 'DELETE' request. Officially, you have to use PUT and DELETE. By the way, do what you want. My experience pushes me to use POST and GET every time. --- Next part should be used but it isn't a REST's bond: it concerns Linked Data --- URI Abstract URI from technical details! Say goodbye to URI as follows: http://www.example.com/index.php?query=search&id=9823&date=08272012 Re-design URI! Take the link above and change it as follows: http://www.example.com/search/2012/08/27/9823 That's much better, huh?
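For instance, a minimal sketch of how the clean URIs and the CRUD table above can meet in a single routing entry point; it is plain Java with no framework, and the /search paths, status strings and handler names are only assumptions for the example:
public class SearchRouter {
    String route(String httpMethod, String path) {
        // e.g. path = "/search/2012/08/27/9823"
        String[] parts = path.split("/");
        if (path.startsWith("/search/") && parts.length == 6) {
            String year = parts[2], month = parts[3], day = parts[4], id = parts[5];
            switch (httpMethod) {
                case "GET": return read(year, month, day, id);      // Read   -> SELECT
                case "PUT": return update(year, month, day, id);    // Update -> UPDATE
                case "DELETE": return delete(year, month, day, id); // Delete -> DELETE
            }
        }
        if (path.equals("/search") && httpMethod.equals("POST")) {
            return create();                                        // Create -> INSERT
        }
        return "404 Not Found";
    }
    private String create() { return "201 Created"; }
    private String read(String y, String m, String d, String id) { return "200 OK"; }
    private String update(String y, String m, String d, String id) { return "200 OK"; }
    private String delete(String y, String m, String d, String id) { return "204 No Content"; }
}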
It could be done by: server application : a root file that routes each request. server web : .htaccess file plus rewrite rules client application : HTML5 history object or fragments (also Twitter uses fragments: http://www.twitter.com/#!/__wilky__ ) Another thing: use different URI to represent different resources: http://www.example.com/about (that's the resource) http://www.example.com/about.html (that's the HTML representation of the resource) http://www.example.com/about.rdf (that's the RDF representation of the resource) Pay attention : about.html and about.rdf are not files! They could be the result of an XSLT transformation! Content Negotiation If you've reached this point, congratulations! Probably, you're ready to get more abstract concepts because we're entering in the Semantic Web technical details ;)
Well, when your client wants a resource, it typically makes the following request: GET http://www.example.com/about
Accept: application/rdf+xml But the server won't respond with the about.rdf because it has a different URI ( http://www.example.com/about.rdf ).
So, let's have a look at the 303 pattern!
The server will return this: 303 See Other
Location: http://www.example.com/about.rdf And the client will follow the link returned as follows: GET http://www.example.com/about.rdf
Accept: application/rdf+xml Finally, the server will return the resource requested: 200 OK
about.rdf Don't worry: your client application won't have to do any of this itself!
The 303 pattern must be done by the server application and your browser will do the rest ;) Conclusion Often the theory is far, far away from the practice.
Yeah, now you know how to design and develop a RESTful application, but the guideline above is just a hint.
You will find your best way to build web applications and probably it won't be the same as theory wants. Don't give it a damn :D! Bibliography RESTful Web Services, Sameer Tyagi REST APIs must be hypertext-driven, Roy Thomas Fielding RESTful Web services: The basics, Alex Rodriguez Webber REST Workflow | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149868",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38417/"
]
} |
149,970 | I feel that I am good at writing code in bits and pieces, but my designs really suck. The question is, how do I improve my designs - and in turn become a better designer? I think schools and colleges do a good job of teaching people how to become good at mathematical problem solving, but let's admit the fact that most applications created at school are generally around 1000 - 2000 lines long, which means that it is mostly an academic exercise which doesn't reflect the complexity of real world software - on the order of a few hundred thousand to millions of lines of code. This is where I believe that even projects like topcoder / project euler also won't be of much help, they might sharpen your mathematical problem solving ability - but you might become an academic programmer; someone who is more interested in the nice, clean stuff, who is utterly un-interested in the day to day mundane and hairy stuff that most application programmers deal with. So my question is how do I improve my design skills? That is, the ability to design small/medium scale applications that will go into a few thousand of lines of code? How can I learn design skills that will help me build a better html editor kit, or some graphics program like gimp? | The only way to become really good at something is to try, fail spectacularly, try again, fail again a little less than before, and over time develop the experience to recognize what causes your failures so that you can manage potential failure situations later on. This is as true of learning to play a musical instrument, drive a car, or earn some serious PWN-age in your favourite first person shooter, as it is of learning any aspect of software development. There are no real shortcuts, but there are things you can do to avoid having problems get out of hand while you are gaining experience. Identify a good mentor . There is nothing better than being able to talk about your issues with someone who has already paid their dues. Guidance is a great way to help fast-track learning. Read , read some more, practice what you've been reading, and repeat for the entire lifetime of your career. I've been doing this stuff for more than 20 years, and I still get a kick out learning something new every day. Learn not just about up front design, but also emergent design, testing, best practices, processes and methodologies. All have varying degrees of impact on how your designs will emerge, take shape, and more importantly, how they last over time. Find time to tinker . Either get involved with a skunkwork project through your workplace, or practice on your own time. This goes hand-in-hand with your reading, by putting your new knowledge into practice, and seeing how such things will work. This is also the stuff that makes for a good discussion with your mentor. Get involved with something technical outside of your workplace. This could be a project, or a forum. Something that will allow you to test out your theories and ideas outside of your immediate circle of peers in order to maintain a fresh perspective on things. Be patient . Recognize that earning experience takes time, and learn to accept that you need to back off for a while in order to learn why and where you have failed. Keep a diary or a blog of your tasks, your thoughts, your failures and your successes. This isn't strictly necessary, however I have found that it can be of great benefit to you to see how you have developed over time, how your skills have grown and your thoughts have changed. 
I come back to my own journals every few months and look at the stuff I wrote 4-5 years ago. It's a real eye-opener discovering just how much I'd learned in that time. It's also a reminder that I got things wrong from time to time. It's a healthy reminder that helps me improve. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/149970",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43149/"
]
} |
150,045 | At the company I work at, every service class has a corresponding interface . Is this necessary? Notes: Most of these interfaces are only used by a single class We are not creating any sort of public API With modern mocking libraries able to mock concrete classes and IDEs able to extract an interface from a class with two or so clicks, is this just a holdover from earlier times? | Your company is following the SOLID principles and targeting an interface rather than concrete class adds zero overhead to the program. What they're doing is a very moderate amount of extra work that can pay back volumes when the assumptions that you are making end up being false...and you can never tell what assumption that's going to be. What you're recommending while slightly less work, can cost many, many, many hours of refactoring that could have just been done ahead of time by simply following basic OO design principles. You should be learning from these guys and consider yourself lucky. Wish I worked in such an environment where solid design was a priority goal. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150045",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9274/"
]
} |
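A short, hypothetical sketch of what that buys in practice (every name below is invented): callers are written only against the interface, so replacing the implementation, whether with a redesigned version or with a trivial test double, never touches the calling code.
interface AccountService {
    double balanceFor(String accountId);
}
// Today's implementation; it can be swapped out without touching any caller.
class DatabaseAccountService implements AccountService {
    @Override
    public double balanceFor(String accountId) {
        // ... the real lookup would go here ...
        return 0.0;
    }
}
// A caller that only knows about the interface.
class StatementPrinter {
    private final AccountService accounts;
    StatementPrinter(AccountService accounts) { this.accounts = accounts; }
    String statementFor(String accountId) { return accountId + ": " + accounts.balanceFor(accountId); }
}
// In a test, a trivial stand-in implementation is enough; no mocking of concrete classes needed.
class FixedBalanceService implements AccountService {
    @Override
    public double balanceFor(String accountId) { return 42.0; }
}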
150,050 | I have a collection of normal functions and self-calling functions within a javascript file. In my comments i want to say something along the lines of "This script can contain both self-calling and XXX functions", where XXX is the non-self-calling functions. "Static" springs to my mind, but feel that would be incorrect because of the use of static methods in PHP, which is something completely different. Anyone any ideas? Thanks! | First, the "self-calling functions" aren't actually self-calling. I know that's what people in the Javascript community call them, but it's really misleading; the functions never reference themselves, and in fact a lot of the time there's no way to call that particular function more than once. If they were actually self-calling functions, then recursive functions would have been the best way to name them. What you really have are (usually) immediately executed lambdas whose results are stored in a variable in order to limit the scope of their internals. "IEL" isn't as catchy as "self-calling functions", so I guess that's why the real name never caught on. The thing is though, that's entirely too low-level; it's an implementation detail that nobody cares about (it's like saying "here be for loops"). Generally, when you're using those immediate-execution functions, the reason why you're using them is because you're making some sort of a module, which needs its own namespace. If that's the case, then instead of saying "self-executing functions", you should say "this script contains modules that do <stuff>". Otherwise, you should figure out what you're trying to do with the functions, and say that's what's in your script. Now, the reason why you use modules in Javascript is because otherwise everything goes into the global scope. Those other functions you're writing, that aren't going to be inside the modules (or whatever you decide they are), are going to end up there. So use that - "this script file contains both modules that do <stuff> and global functions that do <more stuff>". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150050",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54838/"
]
} |
150,159 | I don't know if this question is strictly related to software development, but still I'll give it a try: Like a lot of programmers, I love to work on hobby projects. Sometimes, seemingly good ideas turn out to be not so good, so I drop the project. But sometimes, something useful comes out of the project. So, I could release it, present it to the world, right? Wrong. Somehow, I don't seem to be able to make this step. I fear that my code is not good enough, I can always think of things which are suboptimal, of features which could be added. So, I don't release anything, I lose interest, and at one point abandon the project. Is this normal? How do you overcome such a situation? | First of all, remember: shipping is a feature . It's better to release something imperfect than to release nothing at all. The other thing to note is that these are Hobby projects. If you don't meet deadlines or lose interest it's not a big deal. You're doing the project for fun after all. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150159",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1320/"
]
} |
150,165 | When using method chaining like: var car = new Car().OfBrand(Brand.Ford).OfModel(12345).PaintedIn(Color.Silver).Create(); there may be two approaches: Reuse the same object, like this: public Car PaintedIn(Color color)
{
this.Color = color;
return this;
} Create a new object of type Car at every step, like this: public Car PaintedIn(Color color)
{
var car = new Car(this); // Clone the current object.
car.Color = color; // Assign the values to the clone, not the original object.
return car;
} Is the first one wrong, or is it rather a personal choice of the developer? I believe that the first approach may quickly lead to unintuitive/misleading code. Example: // Create a car with neither color, nor model.
var mercedes = new Car().OfBrand(Brand.MercedesBenz).PaintedIn(NeutralColor);
// Create several cars based on the neutral car.
var yellowCar = mercedes.PaintedIn(Color.Yellow).Create();
var specificModel = mercedes.OfModel(99).Create();
// Would `specificModel` car be yellow or of neutral color? How would you guess that if
// `yellowCar` were in a separate method called somewhere else in code? Any thoughts? | I'd put the fluent api in its own "builder" class, separate from the object it is creating. That way, if the client doesn't want to use the fluent api you can still use it manually and it doesn't pollute the domain object (adhering to the single responsibility principle). In this case the following would be created: Car, which is the domain object, and CarBuilder, which holds the fluent API. The usage would be like this: var car = CarBuilder.BuildCar()
.OfBrand(Brand.Ford)
.OfModel(12345)
.PaintedIn(Color.Silver)
.Build(); The CarBuilder class would look like this (I'm using C# naming convention here): public class CarBuilder {
private Car _car;
/// Constructor
public CarBuilder() {
_car = new Car();
SetDefaults();
}
private void SetDefaults() {
this.OfBrand(Brand.Ford);
// you can continue the chaining for
// other default values
}
/// Starts an instance of the car builder to
/// build a new car with default values.
public static CarBuilder BuildCar() {
return new CarBuilder();
}
/// Sets the brand
public CarBuilder OfBrand(Brand brand) {
_car.SetBrand(brand);
return this;
}
// continue with OfModel(...), PaintedIn(...), and so on...
// that returns "this" to allow method chaining
/// Returns the built car
public Car Build() {
return _car;
}
} Note that this class will not be thread safe (each thread will need it's own CarBuilder instance). Also note that, even though fluent api is a really cool concept, it probably is overkill for the purpose of creating simple domain objects. This deal is more useful if you're creating an API for something much more abstract and has more complex set up and execution, which is why it works great in unit testing and DI frameworks. You can see some other examples under the Java section of the wikipedia Fluent Interface article with persistance, date handling and mock objects. EDIT: As noted from the comments; you could make the Builder class a static inner class (inside Car) and Car could be made immutable. This example of letting Car be immutable seems a bit silly; but in a more complex system, where you absolutely don't want to change the contents of the object that is built, you might want to do it. Below is one example of how to do both the static inner class and how to handle an immutable object creation that it builts: // the class that represents the immutable object
public class ImmutableWriter {
// immutable variables
private int _times; private string _write;
// the "complex" constructor
public ImmutableWriter(int times, string write) {
_times = times;
_write = write;
}
public void Perform() {
for (int i = 0; i < _times; i++) Console.Write(_write + " ");
}
// static inner builder of the immutable object
public class ImmutableWriterBuilder {
// the variables needed to construct the immutable object
private int _ii = 0; private string _is = String.Empty;
public ImmutableWriterBuilder Times(int i) { _ii = i; return this; }
public ImmutableWriterBuilder Write(string s) { _is = s; return this; }
// The stuff is all built here
public ImmutableWriter Build() {
return new ImmutableWriter(_ii, _is);
}
}
// factory method to get the builder
public static ImmutableWriterBuilder GetBuilder() {
return new ImmutableWriterBuilder();
}
} The usage would be the following: var writer = ImmutableWriter
.GetBuilder()
.Write("peanut butter jelly time")
.Times(2)
.Build();
writer.Perform();
// console writes: peanut butter jelly time peanut butter jelly time Edit 2: Pete in the comments made a blog post about using builders with lambda functions in the context of writing unit tests with complex domain objects. It is an interesting alternative to make the builder a bit more expressive. In the case of CarBuilder you need to have this method instead: public static Car Build(Action<CarBuilder> buildAction = null) {
var carBuilder = new CarBuilder();
if (buildAction != null) buildAction(carBuilder);
return carBuilder._car;
} Which can be used as this: Car c = CarBuilder
.Build(car =>
car.OfBrand(Brand.Ford)
.OfModel(12345)
.PaintedIn(Color.Silver)); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150165",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
150,254 | I need a proper explaination of Jamie Zawinski's Law of Software Envelopment : Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can. | All the answers (and comments) so far seem to focus entirely on the first half of the statement, making it into a comment about "bloat," when the important half is the second half: Those programs which cannot so expand are replaced by ones which can. This is not about software bloat, it's about the realities of the market. People may say they want a simple product, but when you look at actual usage, the things that get used are the things that allow users to do more, and they end up replacing less-capable tools. Part of the problem is that "simple" is a confusing word. Like "cleave," it can mean two almost completely opposite things. What people want is something that simplifies complex tasks. That's "the good simple," and it requires a great deal of complexity to do right. What some people interpret it as, though, is that people want something simplistic , or minimalist. This concept may have some niche appeal, but on the whole it's the wrong kind of "simple" to focus on when designing a product. No matter how good your work is, the new feature requests keep coming in. To give an example, there's the program I work on at work. You've probably never heard of it, but we're the market leader in a specialized industry: media control. Our program most likely runs your favorite TV and/or radio station. The customers love it, they say it's so much better than anything else they've worked with. It's also enormous . The EXE is over 65 MB in size, with around 4 million lines of code, backed by a database with over 150 tables, built up over the course of more than a decade of work. And yet it seems like every time we try to get it installed at some new station or network, there's one or two things that are absolutely essential to their workflow, that we don't have any support for. So we end up adding the new features because otherwise the customers wouldn't want to switch from the system they're already used to. And let me repeat, the customers love it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150254",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24256/"
]
} |
150,289 | During an interview I was asked to implement a random generator in java without using any existing random number libraries that takes as an argument int n, and returns a random number between 0 and n. This was the implementation I provided: public static int random(int n) {
int r = 0;
for (int i =0; i <n;i++) {
r+=helper();
}
return r;
}
// helper that returns 0 or 1
private static int helper() {
long t = System.nanoTime();
if (t%2 == 0) {
return 1;
} else {
return 0;
}
} He said it's not right but he wouldn't tell me what he was expecting. Why did he say it's wrong? How would you have done it differently? | Main issues with your approach: System.nanoTime() isn't (on its own) a useful source of random bits - it's highly likely to produce the same value multiple times in a row if you call it in quick succession because many systems don't actually have a sufficiently accurate timer. Even if it was nano-second-accurate, you are likely to get predictable patterns from the lowest bits if you sample it in a tight loop. Valid uses of System.nanoTime in random number generation might be: a) one-off initialisation of a seed value or b) occasionally adding some extra randomness into an entropy pool (not guaranteed to be beneficial, but it can't hurt) Even if the bits were truly random, by adding the 0/1 values n times you would be creating a binomial-style distribution with a mean of n/2, i.e. not a uniform distribution which is presumably what the interviewer was expecting. Your algorithm is O(n) - not good for generating random numbers with a large value of n! You ideally want a PRNG that produces new pseudo-random bits from an internal state. Here's the one I use: private static volatile long state = 0xCAFEBABE; // initial non-zero value
public static final long nextLong() {
long a=state;
state = xorShift64(a);
return a;
}
public static final long xorShift64(long a) {
a ^= (a << 21);
a ^= (a >>> 35);
a ^= (a << 4);
return a;
}
public static final int random(int n) {
if (n<0) throw new IllegalArgumentException();
long result=((nextLong()>>>32)*n)>>32;
return (int) result;
} This is based on George Marsaglia's XORShift algorithm. It produces good pseudorandom numbers and is very fast (typically even faster than a Linear Congruential Generator since the xors and shifts are cheaper than multiplies and divides on most hardware). Having said that, I wouldn't expect people to memorise this kind of algorithm for an interview unless you are specifically applying for a role as a crypto programmer! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54970/"
]
} |
150,294 | I was wondering why C++ is a good choice to write a compiler. Of course C is good for this purpose too, because many compilers are written either in C or C++ but I am more interested in C++ this time. Any good reasons ? I was looking for that in the Internet, but I cannot find any good reasons. | C++ has two sides to it. It has a low-level development side which makes it seem like a natural language for doing low level thing like code generation. It also has a high-level side (which C does not) that lets you structure a complex application (like a compiler) in a logical, object oriented way, while still maintaining performance. Because it has both the low and high level aspects to it, it's a good choice for large application which require low-level features or performance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150294",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54972/"
]
} |
150,356 | I'd like to reach out to the community on this one. As a software developer, I'm not an expert salesperson or marketing guru - I think in code and not much else. Most developers I come across are like this and also tend to be serious penny-pinchers. Let's say, as a developer, I recently released a new software product that I'm pretty sure will be a hit IF people only knew about it. Assume a budget of $0.00 and limited time each day (i.e. 30 to 60 minutes). What can I do, within those limitations, to maximize exposure? If possible, please back up your reply with at least two working examples. | Your Own Site Build your OWN site to distribute your software. It needs to have a home.
This can be the code hosting repository where you host it and its development, but you could have a more customer facing site, and have them link to each other. Your own site comes with additional elements: your own chatroom(s), your own newsgroup(s), your own mailing list(s), your own social network business page(s), feeds ( RSS / Atom ) for your update channels (and some of previous points). Notice that you can have several ones for different purposes: to talk to developers, make announcement, take care of customer support... One point though: it's better to have one active point of communication than to get dispersed and have no content and no activity at all. It's the chicken and egg thing, but people are less enclined to ask questions on an empty forum. It's understandable to want to reach out to as many users as you'd like (we all prefer one medium to another), but wait a bit before you set up that Gopher site and an IRC channel. Search Engines Search Engines are the key element here: that's what everybody uses to find you. In the good ol' days (actually, the dark ages, really :)), you used to have search engines that were actually mostly keyword-based directories, and you had to submit your site to them individually/manually, or using so-called "search-engine auto-submitters". Some were relatively good, some would get you blacklisted easily. Nowadays, I'd recommend you do 3 things: Create a decent site with good, sensible, readable and easily-indexible markup you may want to read these Google Webmaster Guidelines Create one (or more) sitemaps for your site(s) and define robots.txt rules (if needed); Submit your site to at least: Google (via Google Webmaster Tools ), Bing (via Bing Webmaster Toolbox ), and Yahoo! (via Yahoo! Site Explorer this seems to have merged with Bing's Webmaster Toolbox, actually ). Surprisingly, even Google still has pages to let your "submit" a site for inclusion, but usually that won't be needed. Feel free to also look for other directories and less known search engines to check for your inclusion in their databases. It's a good thing to regularly check where you are. Software Distribution Sites Software distribution sites: Softpedia , CNet (and their acquisition Download.com ), Softonic ... OSS distribution sites: Freshmeat / Freecode , etc... Direct-install app stores (generally specific to a target platform and type of apps): Apple App Store , Google Play , Google Apps Marketplace , Amazon App Store, Google Chrome WebStore , Ubuntu Software Center and many more... As mentioned by stmax in comments, the easiest way to start promoting an app that targets known mobile devices would usually be to use their dedicated app stores. It's rather quick and easy. Depending on your platform of choice, and whether you want to sell your app or not (and if it supports in-app payments or not), you may want to to look at package management systems. This somewhat similar to software distribution sites (in that they aggregate software distribution in one place and) and app stores (in that they allow one-click install), but usually you only use them directly from you system (and not from the web). A famous example is the debian packaging format, and its mainy repositories and front-ends (which includes the Ubuntu Software Center, for instance). Social Networks End-User Social Networks: Facebook , Twitter , Google+ , etc... 
to: generate buzz, re-route users to your site, Professional Social Networks: LinkedIn , Xing|OpenBC Programmer-Oriented Social Networks: Ohloh , CIA , ... You can use social aggregators to make things easier to deal with, or at least to make it easier for your users to then enhance your popularity on several networks, for instance with ShareThis or AddThis . Communicate Actively This can take some time, but not this much if you're efficient and have things well prepared. communicate on forums, chat rooms, newsgroups... DO NOT be spammy, DO answers that relate to your software, give full disclosure in a proper way, and kindly point people to your software when they request alternatives or solutions. broadcast updates and news to your different communication streams above, tweet about them, tell your friends on FB, publish an announcement on appropriate mailing-lists: when you publish a minor revision, when you have a potential project or feature in mind and need feedback, when you reach a milestone (# of downloads, # of users...), anything, really. Of course, broadcast these to your communication channels described above. Write Support Material Write user and developgment guides accordingly. Publish video tutorials or demonstrations (create a Youtube and/or Vimeo channel). Write tutorials on how to use your software. Publish a (tentative) roadmap for future features. Get Reviewed Friends can review you on their blogs and social network pages. Users can review you and you can facilitate that by adding "talk about MY_PROJECT on SOCIAL_NETWORK" link. Professionals (bloggers, writers, developers...) can review your app, for free or for a compensation (this is a possibly spammy route, beware to contact the right people). Contact newspapers and technical magazines, online and offline (print is NOT dead). Some might want to write an article on you, some will just write a small column, some won't but will remember your name and product later, and some might just talk about your product to some friends at the bar. Engage your Users Request feedback, and permission to publish it, via: User feedback websites: GetSatisfaction , UserVoice ... Survey systems: SurveyMonkey , built-in services in your blogging / CMS platform... Hallway testing !! Listen to feature requests. Request your users' help in promoting your software. Request your users' help in identifying flaws and troubleshooting in your software. Personally, I'm not a fan of the user feedback sites like GetSatisfaction and UserVoice. They tend to slow down your site or web-app, you need to rely on them and if they break they may break parts of your site, and are generally more prone to downtimes than a good old mailing system. So I prefer a mailing-list/newsgroup, maybe with a web-interface as well (like a Google Group), and a simple contact form for the basic user. An issue- and/or bug-tracker is good to have for more advanced users (use one hosted on Google Code Project Hosting, BitBucket, GitHub, Sourceforge, Assembla... depending on your licensing terms, of course) and to let them know about the progress of a feature request and vote for the most requested features or bugfixes). Advertize All of the above is advertisement, really, but obviously some more professional advertising can help. And even a 75USD AdWords voucher can go a long way, if you play it right. You can go further and contact some services that manufacture and sell promotional items for you (mugs, t-shirts, caps, ...). 
This seems a bit nutty, but some users are happy to have some, and this does sometimes help to reach out to new users. Just make sure to pick the right services, where you won't need to pay much, or anything (some just take a commission on sales of articles). Stay Up to Date Publish updates often and communicate about them. Before you know it people will follow suit. Publuish beta-testing versions of upcoming releases, for advanced users only. Also keep up with competitors and eventually review and compare them. DO NOT be derogatory or pejorative, be fair, do not twist numbers, and point our where you fare better. We don't expect you to to point ou your flaws, but to state what's the small "plus" you have over them. Zero Budget, 30 minutes All of this looks like a lot of time and even like it involves some money. But you can do most of it for no cost at all, or very low cost. If you register for AdWords / AdSense / Google Webmaster Tools , you might eventually get a free voucher, or some friends might have one to spare. Technically this is money, but you didn't actually pay it, you're not down anything. You can find free hosting services (even Blogger would do) for simple sites with (originally) low to medium traffic, and domain names can be found for very cheap value per year. And all the communication, while it can be expensive in terms of time, gets better over time: Write out templates for your release and update announcements for your mailing-list, your tweets, etc.. Make sure to program said updates to be broadcasted automatically to your different commication channels. Automate this as much as possible. It will be worth the time saved over the longer run. Giving a little of your time every day or every week amounts to a lot in the end, and it's generating constant noise that matters to keep conversations going. And your friends and die-hard fans can help with this as well. It's important to remember that every single new visitor and new recommendation counts. Whether it's someone publishing a full-page article about you, or just a friend sending a link to your app to another friend or talking about your product over a drink in a bar. Learn Put these 30 minutes a day to good use by learning the tools of the trade and the techniques of SEO experts, marketers and advertisers. They are, in the end, valuable skills and knowledge to have. I remember omeone saying on another StackExchange site you should set apart 5 years of your life to learn them. Though I'd say it really doesn't take this long, there's obviously a lot to learn and various levels of expertise to obtain, but you can learn a great deal. I'm sure as a developer you'll be happy to learn the more technical bits (like how to create pages that are SEO-friendly), relatively less happy to learn less technical bits (how to produce user-friendly page layouts, based on actual and tested HCI concepts and marketing research, not just programmer's instincts), and a lot less happy to learn the "annoying" bits that relate to marketing and advertising (picking keyword lists, writing good announcements, etc...). The motivator, for me, is to always view it as something technical, in the end: what you want is optimize the visibility, and all this because purely a game of numbers. Learning to write and design decently is just a mean to get these numbers up. 
Plus I find it interesting to learn UI and UX concepts, for which "lambda" users often have very different expectations than the programmers of an application (hence the need to request a lot of user feedback, and to listen to it ). Stand on the Shoulders of Giants... Be a Copy-Cat You're not the first person to try to promote a product. Pick a famous product, and look how they did it. How do you get access to this product when you start from 0? Ideally, you want to be able to allow users to do the same with yours. That's what you aim for. Maybe look at some influential commercial or free software project, and look how they created a community, how they communicate around their product. You can try to find innovative ways of promoting yourself (and it's usually good to innovate, to stand out of the crowd), but the good old and tested ways work well, obviously. Measure, Measure, Measure I said two things I need to repeat here: Listen to your users; It's all about data, not about what you think you know as a programmer. You can't improve things if you don't know what doesn't work or what is a better alternative. Learn (see above ;)) to use analytics systems (like Google Analytics ) to track basic stats about your visitors (population demographics, origins, platforms ...) and more advanced reports (conversion rates, funnels...). Use such tools to measure the impact of changes you make to your site, and get real hard data to be able to know whether a change is beneficial or not. I've done personal mistakes like this at first, believing my vision was better, and I've had (and still have...) to deal with startup founders who always start 83% of their sentences with "I think that...". No you don't. If you really "thought", you wouldn't say that. You assumed , and that's a bad habit. Usually, when someone says "I think", I now follow up with "prove it", or if I can't and don't believe their claim, I will go do my own hallway testing to prove or disprove their assumption. A/B testing just works. Of course, all this also takes time. I'm giving you the tools here, but just do with what you can with your own constraints. You don't need to A/B test every single scenario, and you don't need to re-evaluate every week every single little thing you do. But the more you do it, the better. All of this meant to consolidate the prevalence of your software's own distribution site. Your goal is to promote it, and to then allow users to find all the necesasry and relevant information on your site, and to minimize the path to a download. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150356",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55007/"
]
} |
150,549 | Note more discussion at http://news.ycombinator.com/item?id=4037794 I have a relatively simple development task, but every time I try to attack it, I end up spiraling in deep thoughts - how could it extending the future, what are the 2nd generation clients going to need, how does it affect "non functional" aspects (e.g. Performance, authorization...), how would it best to architect to allow change... I remember myself a while ago, younger and, perhaps, more eager. The "me" I was then wouldn't have given a thought about all that - he would've gone ahead and wrote something, then rewrote it, then rewrote it again (and again...). The "me" today is more hesitant, more careful. I find it much easier today to sit and plan and instruct other people on how to do things than to actually go ahead and do them myself - not because I don't like to code - the opposite, I love to! - but because every time I sit at the keyboard, I end up in that same annoying place. Is this wrong? Is this a natural evolution, or did I drive myself into a rut? Fair disclosure - in the past I was a developer, today my job title is a "system architect". Good luck figuring what it means - but that's the title. Wow. I honestly didn't expect this question to generate that many responses. I'll try to sum it up. Reasons: Analysis paralysis / Over engineering / gold plating / (any other "too much thinking up-front can hurt you"). Too much experience for the given task. Not focusing on what's important. Not enough experience (and realizing that). Solutions (not matched to reasons): Testing first. Start coding (+ for fun) One to throw away (+ one API to throw away). Set time constraints. Strip away the fluff, stay with the stuff. Make flexible code (kinda opposite to "one to throw away", no?). Thanks to everyone - I think the major benefit here was to realize that I'm not alone in this experience. I have, actually, already started coding and some of the too-big things have fallen off, naturally. Since this question is closed, I'll accept the answer with most votes as of today. When/if it changes - I'll try to follow. | Thinking about these things is definitely good, but don't let it stop your progress. One approach that works really well (especially with iterative development) is to implement a simple solution and then refactor as necessary later. This keeps the code as simple as possible, and avoids over-engineering. Most of the performance or architecture changes you are contemplating probably aren't going to be required anyway, so don't bother writing them until they have officially become necessary. For example, don't worry about performance until a profiler has told you that it's time to improve performance. One thing you can do to help you adjust is to set a hard time limit on how long you think about something before writing code. Most of the time, the code will turn out better if you think for a little, write it, realize your mistakes, and then fix them by refactoring. There is a balance to be struck here. You shouldn't just jump in head-first and not think about the consequences, but you also shouldn't try to over-engineer your code either. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150549",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5293/"
]
} |
150,570 | I find that some software developers are very adept at this, and often times are praised for their ability to deliver a working concept with abstract requirements. Frankly, this drives me crazy, and I don't like "making it up" as I go. I used to think this was problematic, but I've started to sense a shift, and I'm wondering if I need to adjust my thought (and programming) process when given very little direction. Should I begin to acquire this ability as a skill, or stick to the idea that requirement's gathering and business rules are the first priority? | The skill isn't to write software without requirements. It is instead to elicit requirements from the project owner regardless of whether there is a formal requirements documentation or not. Gathering requirements is definitely your first priority, but you don't necessarily need to get all of the customer's needs noted up front. The risk is of course is that you might miss some vital piece of information that renders your system architecture useless if you haven't managed to have the right sort of conversations with your customer, however it is not unusual to define a product and even get much of the development out of the way, while deferring the major system architecture decisions until the last possible moment. This is a lean development approach which is meant to ensure that you don't commit to a potentially incompatible architecture too early in your product development until you have more solid information. In the situations the OP has described in his question, this lean approach would be quite important IMHO to avoid major rework and cost blow-out later on, which is when you've finally managed to learn what it was your customer really needed. Yes, you do sometimes need to crystal-ball-gaze a little to get to the heart of what it is the customer really is asking for, which is where prototyping spikes and the slow - and yes sometimes painful - incremental drawing out of requirements requires that you really develop good customer relationship skills, and also the patience to realize that with any complex software idea, that in the beginning the customer doesn't often know much more than you about what the software actually needs to do. Most often, the customer calls you in early to depend on your expertise to define their requirements as the customer doesn't always have the necessary expertise or knowledge of the software development process. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150570",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
150,615 | This is a repost of a question on cs.SE by Janoma . Full credits and spoils to him or cs.SE. In a standard algorithms course we are taught that quicksort is O(n log n) on average and O(n²) in the worst case. At the same time, other sorting algorithms are studied which are O(n log n) in the worst case (like mergesort and heapsort ), and even linear time in the best case (like bubblesort ) but with some additional needs of memory. After a quick glance at some more running times it is natural to say that quicksort should not be as efficient as others. Also, consider that students learn in basic programming courses that recursion is not really good in general because it could use too much memory, etc. Therefore (and even though this is not a real argument), this gives the idea that quicksort might not be really good because it is a recursive algorithm. Why, then, does quicksort outperform other sorting algorithms in practice? Does it have to do with the structure of real-world data ? Does it have to do with the way memory works in computers? I know that some memories are way faster than others, but I don't know if that's the real reason for this counter-intuitive performance (when compared to theoretical estimates). | I wouldn't agree that quicksort is better than other sorting algorithms in practice. For most purposes, Timsort - the hybrid between mergesort/insertion sort which exploits the fact that the data you sort often starts out nearly sorted or reverse sorted. The simplest quicksort (no random pivot) treats this potentially common case as O(N^2) (reducing to O(N lg N) with random pivots), while TimSort can handle these cases in O(N). According to these benchmarks in C# comparing the built-in quicksort to TimSort, Timsort is significantly faster in the mostly sorted cases, and slightly faster in the random data case and TimSort gets better if the comparison function is particularly slow. I haven't repeated these benchmarks and would not be surprised if quicksort slightly beat TimSort for some combination of random data or if there is something quirky in C#'s builtin sort (based on quicksort) that is slowing it down. However, TimSort has distinct advantages when data may be partially sorted, and is roughly equal to quicksort in terms of speed when the data is not partially sorted. TimSort also has an added bonus of being a stable sort, unlike quicksort. The only disadvantage of TimSort uses O(N) versus O(lg N) memory in the usual (fast) implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150615",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55137/"
]
} |
150,616 | I have, for example, this table +-----------------+
| fruit | weight |
+-----------------+
| apple | 4 |
| orange | 2 |
| lemon | 1 |
+-----------------+ I need to return a random fruit. But apple should be picked 4 times as frequent as Lemon and 2 times as frequent as orange . In more general case it should be f(weight) times frequently. What is a good general algorithm to implement this behavior? Or maybe there are some ready gems on Ruby? :) PS I've implemented current algorithm in Ruby https://github.com/fl00r/pickup | The conceptually simplest solution would be to create a list where each element occurs as many times as its weight, so fruits = [apple, apple, apple, apple, orange, orange, lemon] Then use whatever functions you have at your disposal to pick a random element from that list (e.g. generate a random index within the proper range). This is of course not very memory efficient and requires integer weights. Another, slightly more complicated approach would look like this: Calculate the cumulative sums of weights: intervals = [4, 6, 7] Where an index of below 4 represents an apple , 4 to below 6 an orange and 6 to below 7 a lemon . Generate a random number n in the range of 0 to sum(weights) . Go through the list of cumulative sums from start to finish to find the first item whose cumulative sum is above n . The corresponding fruit is your result. This approach requires more complicated code than the first, but less memory and computation and supports floating-point weights. For either algorithm, the setup-step can be done once for an arbitrary number of random selections. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150616",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23219/"
]
} |
150,644 | Another team in my company has started to document their stand-up meetings, but I believe it is a waste of time. As far as I know, stand-up meetings are for communication, not for status reporting (please correct me if I am wrong). So, should we document stand-up meetings? | One of the benefits of Agile is that each team can determine what works best for them, and go with it. However: should you take notes? No; in my experience, the minute a team decides to start backtracking on some of the core principles such as: individuals and interactions over processes and tools (documented scrums that begin to be non-scrum-like)
AND working software over comprehensive documentation (scrum minutes one day, who knows what would be next) then that team will begin to move away from Agile and toward a more documentation-heavy and low-velocity approach to work. In Martin Fowler's "It's Not Just Standing Up" , there's nary a mention of note-taking or documenting the "minutes" of a stand-up meeting. All you should take away from that stand-up meeting are GIFTS: To help start the day well To support improvement To reinforce focus on the right things To reinforce the sense of team To communicate what is going on As a mnemonic device, think of GIFTS:
Good Start, Improvement, Focus, Team, Status However , if someone has a blocker that you need to help him or her work through and you take a note on what they are saying, that's totally different. -- knock yourself out with that. As a point of reference, I'm a Product Owner and am mentoring the ScrumMaster right now in my company, and of all the Agile meetings we have (scrum, sprint planning, sprint review, sprint retrospective), the only one we take official minutes of is the retrospective, because that gives the team something concrete to work toward and refer to in the next sprint (and those "minutes" are a couple sets of short bullet points). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150644",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11517/"
]
} |
150,654 | Me and a friend are looking to team up for some freelance web designing. We both are very strong PHP coders and website designers, but I am better at design and he is better at PHP. So, I'll be controlling the stuff the user sees, he'll be controlling the stuff the user does. In order to speed up development, I'm wondering if anyone has experience or recommendations for asynchronous web designing and coding. So far my best idea is to create a doctrine of which tags and classes every element should be wrapped in (if any), then have us developing via FTP. He would create the code that spits out snippets of HTML, and I would simply style those snippets and wrap it in the page. Does anyone know of a better way? So, any ideas? | One of the benefits of Agile is that each team can determine what works best for them, and go with it. However: should you take notes? No; in my experience, the minute a team decides to start backtracking on some of the core principles such as: individuals and interactions over processes and tools (documented scrums that begin to be non-scrum-like)
AND working software over comprehensive documentation (scrum minutes one day, who knows what would be next) then that team will begin to move away from Agile and toward a more documentation-heavy and low-velocity approach to work. In Martin Fowler's "It's Not Just Standing Up" , there's nary a mention of note-taking or documenting the "minutes" of a stand-up meeting. All you should take away from that stand-up meeting are GIFTS: To help start the day well To support improvement To reinforce focus on the right things To reinforce the sense of team To communicate what is going on As a mnemonic device, think of GIFTS:
Good Start, Improvement, Focus, Team, Status However , if someone has a blocker that you need to help him or her work through and you take a note on what they are saying, that's totally different. -- knock yourself out with that. As a point of reference, I'm a Product Owner and am mentoring the ScrumMaster right now in my company, and of all the Agile meetings we have (scrum, sprint planning, sprint review, sprint retrospective), the only one we take official minutes of is the retrospective, because that gives the team something concrete to work toward and refer to in the next sprint (and those "minutes" are a couple sets of short bullet points). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150654",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55187/"
]
} |
150,669 | I am currently creating a web application that allows users to store and share files, 1 MB - 10 MB in size. It seems to me that storing the files in a database will significantly slow down database access. Is this a valid concern? Is it better to store the files in the file system and save the file name and path in the database? Are there any best practices related to storing files when working with a database? I am working in PHP and MySQL for this project, but is the issue the same for most environments ( Ruby on Rails , PHP , .NET ) and databases (MySQL, PostgreSQL ). | Reasons in favor of storing files in the database: ACID consistency including a rollback of an update which is complicated when the files are stored outside the database. This isn't to be glossed over lightly. Having the files and database in sync and able to participate in transactions can be very useful. Files go with the database and cannot be orphaned from it. Backups automatically include the file binaries. Reason against storing files in the database: The size of a binary file differs amongst databases. On SQL Server, when not using the FILESTREAM object, for example, it is 2 GB. If users need to store files larger (like say a movie), you have to jump through hoops to make that magic happen. Increases the size of the database. One general concept you should take to heart: The level of knowledge required to maintain a database goes up in proportion to the size of the database. I.e., large databases are more complicated to maintain than small databases. Storing the files in the database can make the database much larger. Even if say a daily full backup would have sufficed, with a larger database size, you may no longer be able to do that. You may have to consider putting the files on a different file group (if the database supports that), tweak the backups to separate the backup of the data from the backup of the files etc. None of these things are impossible to learn, but do add complexity to maintenance which means cost to the business. Larger databases also consume more memory as they try to stuff as much data into memory as possible. Portability can be a concern if you use system specific features like SQL Server's FILESTREAM object and need to migrate to a different database system. The code that writes the files to the database can be a problem. One company for whom I consulted not so many moons ago at some point connected a Microsoft Access frontend to their database server and used Access' ability to upload "anything" using its Ole Object control. Later they changed to use a different control which still relied on Ole. Much later someone changed the interface to store the raw binary. Extracting those Ole Object's was a new level of hell. When you store files on the file system, there isn't an additional layer involved to wrap/tweak/alter the source file. It is more complicated to serve up the files to a website. In order to do it with binary columns, you have to write a handler to stream the file binary from the database. You can also do this even if you store file paths but you don't have to do this. Again, adding a handler is not impossible but adds complexity and is another point of failure. You cannot take advantage of cloud storage. Suppose one day you want to store your files in an Amazon S3 bucket. If what you store in the database are file paths, you are afforded the ability to change those to paths at S3. As far as I'm aware, that's not possible in any scenario with any DBMS. 
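To make the "store the path, keep the bytes on the file system" option concrete, here is a minimal sketch (in Java/JDBC rather than the PHP/MySQL of the question, purely for illustration; the files table, its columns and the storage root are all made up):
import java.nio.file.*;
import java.sql.*;
public class FileStore {
    // Copy the upload to disk, then record only its metadata and path in the database.
    public static void saveUpload(Connection db, String fileName, Path uploadedTmp) throws Exception {
        Path target = Paths.get("/var/app/uploads", fileName);   // assumed storage root
        Files.copy(uploadedTmp, target, StandardCopyOption.REPLACE_EXISTING);
        try (PreparedStatement stmt = db.prepareStatement(
                "INSERT INTO files (name, path, size_bytes) VALUES (?, ?, ?)")) {
            stmt.setString(1, fileName);
            stmt.setString(2, target.toString());
            stmt.setLong(3, Files.size(target));
            stmt.executeUpdate();
        }
    }
} Note how this sketch runs straight into the ACID caveat above: if the INSERT fails after the copy succeeded, nothing rolls the file back, so you need your own cleanup or reconciliation step.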
IMO, deeming the storage of files in the database or not as "bad" requires more information about the circumstances and requirements. Are the size and/or number of files always going to be small? Are there no plans to use cloud storage? Will the files be served up on a website or a binary executable like a Windows application? In general, my experience has found that storing paths is less expensive to the business even accounting for the lack of ACID and the possibility of orphans. However, that does not mean that the internet is not legion with stories of lack of ACID control going wrong with file storage but it does mean that in general that solution is easier to build, understand and maintain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150669",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30000/"
]
} |
150,760 | I'm working on a team where the team leader is a virulent advocate of SOLID development principles. However, he lacks a lot of experience in getting complex software out of the door. We have a situation where he has applied SRP to what was already quite a complex code base, which has now become very highly fragmented and difficult to understand and debug. We now have a problem not only with code fragmentation, but also encapsulation, as methods within a class that may have been private or protected have been judged to represent a 'reason to change' and have been extracted to public or internal classes and interfaces which is not in keeping with the encapsulation goals of the application. We have some class constructors which take over 20 interface parameters, so our IoC registration and resolution is becoming a monster in its own right. I want to know if there is any 'refactor away from SRP' approach we could use to help fix some of these issues. I have read that it doesn't violate SOLID if I create a number of empty coarser-grained classes that 'wrap' a number of closely related classes to provide a single-point of access to the sum of their functionality (i.e. mimicking a less overly SRP'd class implementation). Apart from that, I cannot think of a solution which will allow us to pragmatically continue with our development efforts, while keeping everyone happy. Any suggestions ? | If your class has 20 parameters in the constructor, it doesn't sound like your team quite knows what SRP is. If you have a class that does only one thing, how does it have 20 dependencies? That's like going on a fishing trip and bringing along a fishing pole, tackle box, quilting supplies, bowling ball, nunchucks, flame thrower, etc.... If you need all that to go fishing, you're not just going fishing. That said, SRP, like most principles out there, can be over-applied. If you create a new class for incrementing integers, then yeah, that may be a single responsibility, but come on. That's ridiculous. We tend to forget that things like the SOLID principles are there for a purpose. SOLID is a means to an end, not an end in itself. The end is maintainability . If you are going to get that granular with the Single Responsibility Principle, it's an indicator that zeal for SOLID has blinded the team to the goal of SOLID. So, I guess what I'm saying is... The SRP isn't your problem. It's either a misunderstanding of the SRP, or an incredibly granular application of it. Try to get your team to keep the main thing the main thing. And the main thing is maintainability. Get people to design modules in a way that encourages ease of use. Think of each class as a mini API. Think first, "How would I like to use this class," and then implement it. Don't just think "What does this class need to do." The SRP does have a great tendency to make classes harder to use, if you don't put much thought into usability. If you're looking for tips on refactoring, you can start doing what you suggested - create coarser-grained classes to wrap several others. Make sure the coarser-grained class is still adhering to the SRP , but on a higher level. Then you have two alternatives: If the finer-grained classes are no longer used elsewhere in the system, you can gradually pull their implementation into the coarser-grained class and delete them. Leave the finer-grained classes alone. Perhaps they were designed well and you just needed the wrapper to make them easier to use. I suspect this is the case for much of your project. 
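To make the wrapping idea concrete, here is a rough sketch (in Java, since the shape is the same in C#; all class and method names are invented for illustration, not taken from your code base):
// The fine-grained classes keep their single responsibilities;
// callers depend on one coarse-grained entry point.
interface OrderValidator  { void validate(String order); }
interface PriceCalculator { int totalFor(String order); }
interface OrderRepository { void save(String order, int total); }
public class OrderService {               // the coarse-grained "mini API" wrapper
    private final OrderValidator validator;
    private final PriceCalculator calculator;
    private final OrderRepository repository;
    public OrderService(OrderValidator validator,
                        PriceCalculator calculator,
                        OrderRepository repository) {
        this.validator = validator;
        this.calculator = calculator;
        this.repository = repository;
    }
    // One call replaces wiring three collaborators at every call site.
    public void placeOrder(String order) {
        validator.validate(order);
        repository.save(order, calculator.totalFor(order));
    }
} The point is that most consumers (and the IoC registration) only need to know about the coarse-grained OrderService, while the fine-grained collaborators keep their single responsibilities without leaking into every constructor signature.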
When you're finished refactoring (but before committing to the repository), review your work and ask yourself if your refactoring was actually an improvement to maintainability and ease of use. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150760",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55380/"
]
} |
150,800 | Why was it decided to call the kill command "kill"? I mean, yes, this utility is often used to terminate processes, but it can actually be used to send any signal. Isn't it slightly confusing? Maybe there are some historical reasons. All I know from man kill that this command appeared in Version 3 AT&T UNIX. | Originally, the kill command could only kill a process, only later was kill enhanced to allow you to send any signal. Since version 7 of Unix (1979) the default has been to signal the process in a way which can be caught and either handled gracefully or ignored (by sending a SIGTERM signal), but it can also be used to pull the rug out from under a process (a kill -9 sends a SIGKILL signal which cannot be caught and thus cannot be ignored). Background Computing, and Unix in particular, is rife with metaphor. The main metaphor for processes is that of a living thing which is born, lives and dies. In Unix all processes except init have parents , and any process which spawns other processes has children . Processes may become orphaned (if their parent dies) and can even become zombies , if they hang around after their death. Thus, the kill command fits in with this metaphor. Unix Archaeology From the manual page from version 4 of Unix (the version where kill was introduced, along with ps ) we find: NAME
kill - do in an unwanted process
SYNOPSIS
kill processid ...
DESCRIPTION
Kills the specified processes.
The processid of each asynchronous process
started with `&' is reported by the shell.
Processid's can also be found by using ps (I).
The killed process must have
been started from the same typewriter
as the current user, unless
he is the superuser.
SEE ALSO
ps(I), sh(I) I particularly like the final section of this man page: BUGS
Clearly people should only be allowed to kill
processes owned by them, and having the same typewriter
is neither necessary nor sufficient. By the time fifth edition had come around, the kill command had already been overloaded to allow any signal to be sent. From the Unix Programmers Manual, Fifth Edition (p70): If a signal number preceded by "-" is given
as an argument, that signal is sent instead of
kill (see signal (II)). The default though was to send a signal 9, as signal 15 did not yet exist (see p150). With version 6 the kill man page no longer mentioned the same typewriter bug. It was only with version 7 of Unix that signal 15 was introduced (see the signal(2) and kill(1) man pages for v7) and kill switched to that rather than using signal 9. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150800",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34613/"
]
} |
150,837 | I wonder what are the advantages of Maybe monad over exceptions? It looks like Maybe is just explicit (and rather space-consuming) way of try..catch syntax. update Please note that I'm intentionally not mentioning Haskell. | Using Maybe (or its cousin Either which works basically the same way but lets you return an arbitrary value in place of Nothing ) serves a slightly different purpose than exceptions. In Java terms, it's like having a checked exception rather than a runtime exception. It represents something expected which you have to deal with, rather than an error you did not expect. So a function like indexOf would return a Maybe value because you expect the possibility that the item is not in the list. This is much like returning null from a function, except in a type-safe way which forces you to deal with the null case. Either works the same way except that you can return information associated with the error case, so it's actually more similar to an exception than Maybe . So what are the advantages of the Maybe / Either approach? For one, it's a first-class citizen of the language. Let's compare a function using Either to one throwing an exception. For the exception case, your only real recourse is a try...catch statement. For the Either function, you could use existing combinators to make the flow control clearer. Here are a couple of examples: First, let's say you want to try several functions that could error out in a row until you get one that doesn't. If you don't get any without errors, you want to return a special error message. This is actually a very useful pattern but would be a horrible pain using try...catch . Happily, since Either is just a normal value, you can use existing functions to make the code much clearer: firstThing <|> secondThing <|> throwError (SomeError "error message") Another example is having an optional function. Let's say you have several functions to run, including one that tries to optimize a query. If this fails, you want everything else to run anyhow. You could write code something like: do a <- getA
b <- getB
optional (optimize query)
execute query a b Both of these cases are clearer and shorter than using try..catch , and, more importantly, more semantic. Using a function like <|> or optional makes your intentions much clearer than using try...catch to always handle exceptions. Also note that you do not have to litter your code with lines like if a == Nothing then Nothing else ... ! The whole point of treating Maybe and Either as a monad is to avoid this. You can encode the propagation semantics into the bind function so you get the null/error checks for free. The only time you have to check explicitly is if you want to return something other than Nothing given a Nothing , and even then it's easy: there are a bunch of standard library functions to make that code nicer. Finally, another advantage is that a Maybe / Either type is just simpler. There is no need to extend the language with additional keywords or control structures--everything is just a library. Since they're just normal values, it makes the type system simpler--in Java, you have to differentiate between types (e.g. the return type) and effects (e.g. throws statements) where you wouldn't using Maybe . They also behave just like any other user-defined type--there is no need to have special error-handling code baked into the language. Another win is that Maybe / Either are functors and monads, which means they can take advantage of the existing monad control flow functions (of which there is a fair number) and, in general, play nicely along with other monads. That said, there are some caveats. For one, neither Maybe nor Either replace unchecked exceptions. You'll want some other way to handle things like dividing by 0 simply because it would be a pain to have every single division return a Maybe value. Another problem is having multiple types of errors return (this only applies to Either ). With exceptions, you can throw any different types of exceptions in the same function. with Either , you only get one type. This can be overcome with sub-typing or an ADT containing all the different types of errors as constructors (this second approach is what is usually used in Haskell). Still, over all, I prefer the Maybe / Either approach because I find it simpler and more flexible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/150837",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38810/"
]
} |
151,004 | Is there a metric analogous to the McCabe Complexity measure to measure how cohesive a routine is and also how loosely (or tightly) coupled the routine is to other code in the same code base? | I think the metric you are looking for is LCOM4, although it applies more to classes. Sonar explains it nicely here : ...metric : LCOM4 (Lack Of Cohesion Methods) to measure how cohesive classes are. Interpreting this metric is pretty simple as value 1 means that a class has only one responsibility (good) and value X means that a class has probably X responsibilities (bad) and should be refactored/split. There is not any magic here, only common sense. Let’s take a simple example with class Driver. This class has two fields : Car and Brain, and five methods : drive(), goTo(), stop(), getAngry() and drinkCoffee(). Here is the dependency graph between those components. There are three blocks of related components, so LCOM4 = 3, so the class seems to have three different responsibilities and breaks the Single Responsibility Principle. ... It's a great tool, if you can use it. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151004",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7912/"
]
} |
151,055 | To make it more clear, this is a quick example: class A implements Serializable { public B b; }
class B implements Serializable { public A a; }
A a = new A();
B b = new B();
a.b = b;
b.a = a; So what happens if we serialize a and b objects into a file and deserialize from that file? I thought we get 4 objects, 2 of each. Identical objects but different instances. But I'm not sure if there's anything else or is it right or wrong. If any technology needed to answer, please think based on Java. Thank you. | Java keeps track of the objects that have been written to the stream, and subsequent instances are written as an ID, not an actual serialized object. So, for your example, if you write instance "a" to the stream, the stream gives that object a unique ID (let's say "1"). As part of the serialization of "a", you have to serialize "b", and the stream gives it another id ("2"). If you then write "b" to the stream, the only thing that is written is the ID, not the actual object. The input stream does the same thing in reverse: for each object that it reads from the stream, it assigns an ID number using the same algorithm as the output stream, and that ID number references the object instance in a map. When it sees an object that was serialized using an ID, it retrieves the original instance from the map. This is how the API docs describe it: Multiple references to a single object are encoded using a reference sharing mechanism so that graphs of objects can be restored to the same shape as when the original was written This behavior can cause problems: because the stream holds a hard reference to each object (so that it knows when to substitute the ID), you can run out of memory if you write a lot of transient objects to the stream. You solve that by calling reset() . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21573/"
]
} |
151,080 | I was talking with a colleague today. We work on code for two different projects. In my case, I'm the only person working on my code; in her case, multiple people work on the same codebase, including co-op students who come and go fairly regularly (between every 8-12 months). She said that she is liberal with her comments, putting them all over the place. Her reasoning is that it helps her remember where things are and what things do since much of the code wasn't written by her and could be changed by someone other than her. Meanwhile, I try to minimize the comments in my code, putting them in only in places with a unobvious workaround or bug. However, I have a better understanding of my code overall, and have more direct control over it. My opinion in that comments should be minimal and the code should tell most of the story, but her reasoning makes sense too. Are there any flaws in her reasoning? It may clutter the code but it ultimately could be quite helpful if there are many people working on it in the short- to medium-run. | Comments don't clutter the code. And when they do, well, every half decent IDE can hide / fold comments. Ideally the story should be told by your code, your requirements document, your commit history and your unit tests, and not by comments. However excessive commenting can only hurt when comments are concentrated on the how and not the why, however that's a different discussion . I think both you and your colleague are "right", the difference being, of course, that you work alone and she in a team, that oftenly includes inexperienced developers. You don't have so much a different philosophy on comments, but vastly different requirements on communicating your code. The "it helps me remember" argument could also stem from the fact that she deals with a lot more code than you, and more importantly code produced by different people, each one with their own personal preferences and quirks. At the end of the day, code comments, albeit their obvious flaws, are the simplest and quickest way of communicating your code. Depending on team composition and organization, it might even be the only way that applies to the lowest common denominator. I usually find myself following your commenting philosophy when working alone and adjusting to your colleague's when working in a team, especially if it's an unbalanced team skill wise. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151080",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36853/"
]
} |
151,169 | Comparing software engineering with civil engineering, I was surprised
to observe a different way of thinking: any civil engineer knows that
if you want to build a small hut in the garden you can just get the
materials and go build it whereas if you want to build a 10-storey house
(or, e.g., something like this ) you need to do quite some maths
to be sure that it won't fall apart. In contrast, speaking with some programmers or reading blogs or forums
I often find a wide-spread opinion that can be formulated more or less as follows: theory and formal methods are for mathematicians / scientists
while programming is more about getting things done . What is normally implied here is that programming is something very
practical and that even though formal methods, mathematics, algorithm theory,
clean / coherent programming languages, etc, may be interesting topics,
they are often not needed if all one wants is to get things done . According to my experience, I would say that while you do not need much
theory to put together a 100-line script (the hut), in order to develop
a complex application (the 10-storey building) you need a structured
design, well-defined methods, a good programming language, good text
books where you can look up algorithms, etc. So IMO (the right amount of) theory is one of the tools
for getting things done . My question is why do some programmers think that there is a contrast
between theory (formal methods) and practice (getting things done)? Is software engineering (building software) perceived by many as easy compared to, say, civil engineering (building houses)? Or are these two disciplines really different (apart from mission-critical
software, software failure is much more acceptable than building failure)? I try to summarize, what I have understood from the answers so far. In contrast to software engineering, in civil engineering it is much clearer what amount of theory (modelling, design) is needed for a certain task. This is partly due to the fact that civil engineering is as old as mankind while software engineering has been around for a few decades only. Another reason is the fact that software is a more volatile kind of artefact, with more flexible requirements (it may be allowed to crash), different marketing strategies (good design can be sacrificed in order to get it on the market quickly), etc. As a consequence, it is much more difficult to determine what the right amount
of design / theory is appropriate in software engineering (too little -> messy code, too much -> I can never get finished)
because there is no general rule and only (a lot of) experience can help. So if I interpret your answers correctly, this uncertainty about how much theory is really needed contributes to the mixed love / hate feelings
some programmers have towards theory. | I think the main difference is that with civil engineering, real world physics act as a constant, powerful reality check that keeps theory sane and also limits bad practices, whereas in software engineering there is no equally strong force to keep impractical ivory tower concepts as well as shoddy workmanship in check. Many programmers have had bad experiences with runaway theory becoming an active impediment to getting things done (e.g. "executable UML", super-bureaucratic development processes). Conversely, dirty hacks and patches can get you pretty damn far, albeit slowly in the end. And as you observe in your last paragraph: failures are usually not as final and thus not as problematic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151169",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29020/"
]
} |
151,215 | First, I'm an entry level programmer; In fact, I'm finishing an A.S. degree with a final capstone project over the summer. In my new job, when there isn't some project for me to do (they're waiting to fill the team with more new hires), I've been given books to read and learn from while I wait - some textbooks, others not so much (like Code Complete). After going through these books, I've turned to the internet to learn as much as possible, and started learning about SOLID and DI (we talked some about Liskov's substitution principle, but not much else SOLID ideas). So as I've learned, I sat down to do to learn better, and began writing some code to utilize DI by hand (there are no DI frameworks on the development computers). Thing is, as I do it, I notice it feels familiar... and it seems like it is very much like work I've done in the past using composition of abstract classes using polymorphism. Am I missing a bigger picture here? Is there something about DI (at least by hand) that goes beyond that? I understand the possibility of having configurations not in code of some DI frameworks having some great benefits as far as changing things without having to recompile, but when doing it by hand, I'm not sure if it's any different than stated above... Some insight into this would be very helpful! | The fundamental idea in DI is simply to push dependencies into objects, rather than letting them pull dependencies from outside. This was also called "inversion of control" earlier. This allows one to control dependencies much better, and - last but not least - to write code which is easy to unit test. DI frameworks only build on top of this idea, and (partly) automate the often tedious part of wiring object dependencies together. This may otherwise require a significant amount of code in a larger project when done by hand. To define dependencies, apart from external configuration files, it is also possible to use code annotations nowadays, e.g. in languages like Java and C#. These may bring the best of both worlds: wiring dependencies in a declarative style, but right within the code itself, where it logically belongs to. To me, the possibility of changing dependencies without having to recompile is not that valuable, as it is rarely done in real life (apart from some special cases), but introduces an often artifical separation of dependencies from the classes and objects defined within the code. To return to your title question, DI is not an alternative to composition nor polymorphism; after all, it is made possible by these two. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151215",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54833/"
]
} |
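A minimal hand-wired sketch of the constructor injection described in the DI answer above, using made-up names (MessageSender, SmtpSender, ReportService); it illustrates the general idea, not any particular framework. The composition root builds the object graph explicitly, which is exactly the tedious wiring that DI containers later automate.

    // The dependency is expressed as an interface, so callers decide what to inject.
    interface MessageSender {
        void send(String to, String body);
    }

    class SmtpSender implements MessageSender {
        @Override
        public void send(String to, String body) {
            System.out.println("SMTP -> " + to + ": " + body);
        }
    }

    // The class declares what it needs; it never does `new SmtpSender()` itself.
    class ReportService {
        private final MessageSender sender;

        ReportService(MessageSender sender) {   // dependency is pushed in
            this.sender = sender;
        }

        void sendDailyReport(String recipient) {
            sender.send(recipient, "daily report");
        }
    }

    public class ManualWiring {
        public static void main(String[] args) {
            // "DI by hand": the composition root wires the graph explicitly.
            ReportService service = new ReportService(new SmtpSender());
            service.sendDailyReport("ops@example.com");

            // In a unit test you would inject a stub instead:
            ReportService testable = new ReportService((to, body) ->
                    System.out.println("captured message for " + to));
            testable.sendDailyReport("test@example.com");
        }
    }

Because ReportService only sees the interface, swapping the real sender for a fake requires no change to the class itself -- that testability is the practical payoff of pushing dependencies in.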
151,231 | Rob Napier, the author of iOS 5 Programming Pushing the Limits, mentioned there are several models of selling apps on the App Store: Write an app and sell it Publish a free and a full version Ad supported by third party or by iAd In App purchase Surprisingly, the author said that the most workable model is (1) in terms of sales. I would think that (2) with fairly limiting ability for the free version can bring more sales, as people without trying, might not plunge down $0.99 or $1.99 for something they haven't tried? I for one, might not have purchased Angry Birds if I didn't try their free version first. Also, I think it also depends on the situation: for example, if the app is an alarm clock, and there are already 5 alarm clocks in App Store that are free, then your app that is $0.99 might not be that eagerly purchased (or they may end up never trying your app). But if yours is also free, and users really like it out of all the other ones, then they may think, $0.99 is nothing to get a good alarm clock, and gladly pay you the $0.99 in exchange for a full version of the alarm clock, for something that they can't get with the free version. (such as the full version can let you choose a song from your Music Library for the alarm). Could (1) work only if the user definitely want it and have no substitute? How might it work the best? | The fundamental idea in DI is simply to push dependencies into objects, rather than letting them pull dependencies from outside. This was also called "inversion of control" earlier. This allows one to control dependencies much better, and - last but not least - to write code which is easy to unit test. DI frameworks only build on top of this idea, and (partly) automate the often tedious part of wiring object dependencies together. This may otherwise require a significant amount of code in a larger project when done by hand. To define dependencies, apart from external configuration files, it is also possible to use code annotations nowadays, e.g. in languages like Java and C#. These may bring the best of both worlds: wiring dependencies in a declarative style, but right within the code itself, where it logically belongs to. To me, the possibility of changing dependencies without having to recompile is not that valuable, as it is rarely done in real life (apart from some special cases), but introduces an often artifical separation of dependencies from the classes and objects defined within the code. To return to your title question, DI is not an alternative to composition nor polymorphism; after all, it is made possible by these two. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151231",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/5487/"
]
} |
151,285 | Background: I'm currently part of a team of four: 1 manager, 1 senior developer and 2 developers. We do a range of bespoke in-house systems / projects (e.g. 6-8 weeks) for an organisation of around 3500 staff, as well as all the maintenance and support required from the systems that have been created before. There is not enough of us to do all the work that is potentially coming our way - we're understaffed. Management acknowledge this, but budget restraints limit our ability to recruit additional members to the team (even if we make the salary back in savings). The Change This leaves us where we are now. Our manager is due to leave his role for pastures new, leaving a vacancy in the team. Management are using this opportunity to restructure our team which would see the team manager role replaced by another developer and another senior developer. Their logic being that we need more developers, so here's a way of funding it (one of the roles is partially funded from another vacant post). The team would have no direct line manager and the roles and responsibilities would be divided up between the seniors and the (relatively new to post) service manager (a non-technical role with little-to-no development knowledge/experience whose focus is shared amongst a number of other teams and individuals) - who would be our next actual manager up the food chain. I guess the final question is: Is it possible to run a development team without an manager? Have you had experience of this? And what things could go wrong / could be of benefit to us? I'd ideally like to "see the light" and the benefits of doing things this way, or come up with some points for argument against it. | The greater the risks, the more you need "air cover". This is what a manager is really supposed to provide. While the team does the work, the manager is supposed to ensure that there is nothing that will keep the team from achieving team goals. Whether it's tweaking the schedule, running interference between the team and the sales staff, or simply making sure the team are paid on time and that the coffee machine remains in working order. A really great manager allows the team to function almost as if the manager isn't there. The reality of course is that most managers utterly fail at this. They either micromanage, or they are rendered obsolete so that the upper echelons of the company can control things more directly, and the truly great managers are a rare bird indeed. As far as a software team is concerned, there are some pros and cons both ways when it comes to having a hierarchical or flat team structure. If the team is very small, and the work done requires very little overlap (and by that I mean everyone has an independent project), then it's been my experience that a flat (aka unmanaged) team structure can work very well if all of the team members are disciplined. It's also been my experience however that where there is a great deal of overlap in the work that the team members do, where there are two or more relatively strong personalities, or where there is a relatively stressful working environment with a busy workload, then having a team leader or a manager with clearly defined responsibilities is generally essential. There are a lot of factors involved, however it really boils down to the personalities involved, their individual motivations and career objectives, and the example and guidance provided by upper management that will determine how necessary a manager or team leader position is. 
Generally, if there is any chaos, and when the team is asking for it, then the team clearly needs leadership. If things generally tick along ok without management input, then perhaps the team can manage within a non-hierarchical structure for a time... at least until the workload and schedule becomes too difficult to manage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151285",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1031/"
]
} |
151,375 | Years ago, I was surprised when I discovered that Intel sells Visual Studio compatible compilers. I tried it in particular for C/C++ as well as fantastic diagnostic tools. But the code was simply not that computationally intensive to notice the difference. The only impression was: did Intel really do it for me just now, wow, amazing tools with nanoseconds resolution, unbelievable. But the trial ended and the team never seriously considered a purchase. From your experience, if license cost does not matter, which vendor is the winner? It is not a broad or vague question or attemt to spark a holy war. This sort of question is about two very visible tools. Nobody likes when tools have any mysteries or surprises. And choices between best and best are always the pain. I also understand the grass is always greener argument. I want to hear all "what ifs" stories. What if Intel just locally optimizes it for the chip stepping of the month, and not every hardware target will actually work as well as Microsoft compiled? What if AMD hardware is the target and everything will slow down for no reason? Or on the other hand, what if Intel's hardware has so many unnoticable opportunities, that Microsoft compiler writers are too slow to adopt and never implement it in the compiler? What if both are the same exactly, actually a single codebase just wrapped into two different boxes and licensed to both vendors by some third-party shop? And so on. But someone knows some answers. | WARNING: Answer based on own experience - YMMV If the code is really computationally expensive, yes, definitely . I have seen an improvement of over 20x times with the former Intel C++ Compiler (now Intel Studio if I recall correctly) vs the standard Microsoft Visual C++ Compiler. It's true the code was very far from perfect and that may have played a role (actually that's why we bothered using the Intel compiler, it was easier than refactoring the giant codebase), also the CPU used to run the code was an Intel Core 2 Quad, which is the perfect CPU for such a thing, but the results were shocking. The compiler itself contains myriads of ways to optimize code, including targeting a specific CPU in terms of, say, SSE capabilities. It really makes -O2 / -O3 run away ashamed. And that was before using the profiler. Note that, however, turning on really aggressive optimizations will make the compilation take quite some time, two hours for a large project is not impossible at all. Also, with high levels of optimizations, there's a higher chance of an error in the code to manifest itself (this can be observed with gcc -O3 , too). To a project you know well, this might be a plus, since you'll find and fix any eventual bugs you didn't catch earlier, but when compiling a hairy mess, you just cross your fingers and pray to the x86 gods. Something about performance on AMD machines: It's not as good as Intel CPUs, but it's still way better than the MS C++ compiler (again, from my experience). The reason is that you can also target a generic CPU with SSE2 support (for example). Then AMD CPUs with SSE2 will not be discriminated much. Intel compiler on Intel CPU really steals the show, though. It's not all double rainbows and shiny unicorns, however. There have been some heavy accusations about binaries not-running at all on non-GenuineIntel CPUs and (this one is admitted) artificially induced inferior performance on CPUs by other vendors . 
Also note this is information from at least 3 years ago and its validity as of now is unknown, BUT the new product descriptions give binaries carte blanche to run as slowly as Intel sees fit on non-Intel CPUs. I don't know what it is about Intel and why they make such good numeric computation tools, but have a look at this, too: http://julialang.org/ . There is a comparison, and if you look at the last row, MATLAB shines by defeating both C code and Julia; what strikes me is that the authors think the reason is Intel's Math Kernel Library . I realize this sounds a lot like an advertisement for the Intel Compiler toolkit, but in my experience it really did the job well, and even simple logic dictates that the guys who make CPUs should know best how to program for them. IMO, the Intel C++ compiler squeezes every last bit of performance gain possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151375",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
151,472 | For example, stackexchange.com , without asking the site owner or Google their information about developing the website, is this possible to know what language is used in the back end? Seems, the website don't have .extension bar, for example .php that can indicated which is developed in PHP , but without the extension, how can I know that? | There are indicators. Some are easier to find, others are harder. file extensions: .php indicates that the site is written in PHP, .asp indicates classic ASP, .aspx indicates ASP.NET, .jsp indicates Java JSPs, ... cookie names: JSESSIONID is a widely used cookie name in Java servers headers: some systems add HTTP headers to their responses specific HTML content: patterns such as lots of div-wrappers with a consistent class-naming scheme as used by CMSes like Drupal. comments in the HTML or meta tags in the head directly/indirectly indicating tool usage Default error messages or error page design (e.g. pinging a fake URL to see their 404) Sometimes comment tags are placed in the page for versioning purposes which provide a clue ... But all of those can be remove/changed/faked. Some are easier to change than others, but none are 100% reliable. There are various reasons to change those indicators: You change the underlying technology but don't want to change your URLs You want to give as little information about your technology as possible (related to previous) You'd rather not be the first stop for the script kiddie bus when known platform-wide vulnerabilities are discovered/publicized You want to seem "in" (even 'though that currently means having extension-less REST-style URLs). ... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151472",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37859/"
]
} |
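For the header and cookie indicators mentioned in the answer above, here is a rough Java sketch (Java 11+ HttpClient); the target URL is a placeholder and the header names are just common examples -- as the answer notes, a site can strip or fake all of them.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class TechSniffer {
        public static void main(String[] args) throws Exception {
            String url = args.length > 0 ? args[0] : "https://example.com/"; // placeholder target
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());

            // Headers that often leak the server-side stack (when not stripped).
            for (String name : new String[] {"server", "x-powered-by", "x-aspnet-version", "set-cookie"}) {
                response.headers().allValues(name)
                        .forEach(value -> System.out.println(name + ": " + value));
            }
            // e.g. "set-cookie: JSESSIONID=..." hints at a Java servlet container,
            // "x-powered-by: PHP/..." hints at PHP -- but any of this can be removed or faked.
        }
    }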
151,518 | I've moved all our company Git repositories to GitHub and now I want to add employees to the projects. Since most employees already have personal GitHub accounts, I'm wondering whether I should ask them to create a work GitHub account. The reason that I'm thinking of doing this is to decrease the chances of unauthorized access to our code base since their personal accounts may be well publicized through their personal activity on the site, increasing chances of targeted attacks. Furthermore, if their personal account is ever compromised it won't mean the whole company code is accessible to the hijacker. Since this will bring the burden of maintaining two accounts for the employees I'm wondering whether it is the correct approach and whether it even makes sense. I would love to hear your opinions on this. Update Thanks for all the useful insights. I won't set an answer as accepted because of the subjective nature of the question/answers and since I took the best points from several different answers. I have decided to go forward this way: I will remind employees that work-related GitHub e-mail notifications will have to be sent to their work e-mail accounts for practical reasons. Therefore it would make more sense to create work GitHub accounts. If they are willing to use their personal GitHub accounts and connect it to their work e-mail accounts then that's fine. In any case , employees will have to agree in written form to a number of conditions tied to using GitHub. These are related to account security: choosing a secure password using a secure random password generator that is not used with any other account, not accessing GitHub through computers not owned or administered by them, etc. At the end of the day employees will have to decide themselves whether a work account makes more sense for them or not. | If there was a benefit, it would merely be painful. But nothing sucks worse than painful and pointless. Just have the single personal account. Two reasons: Github has incredibly good access control in their organizations. If an employee leaves, you can instantly remove their access. If they had a company account, you'd have to reclaim the account somehow to get the stated benefits. In practice, you'd probably just remove the account access, same as if they had a personal account. Having more than one account is painful. Logging in and logging out between accounts hurts, and adding comments, following, and all the social stuff when you use different accounts. References: I make a CI server that has GitHub integration , so I have about a lot of test accounts, and I've talked to customers with all sorts of weird configurations, including separate work accounts and personal accounts. It always leads to trouble. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151518",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/55797/"
]
} |
151,541 | I've always agreed with Mercurial's mantra 1 ; however, now that Mercurial comes bundled with the rebase extension and it is a popular practice in git, I'm wondering if it could really be regarded as a "bad practice", or at least bad enough to avoid using. In any case, I'm aware of rebasing being dangerous after pushing. OTOH, I see the point of trying to package 5 commits into a single one to make it look niftier (especially in a production branch), but personally I think it would be better to be able to see the partial commits to a feature where some experimentation was done, even if it is not as nifty. Seeing something like "Tried to do it way X but it is not as optimal as Y after all, doing it Z taking Y as base" would IMHO have good value for those studying the codebase and following the developer's train of thought. My very opinionated (as in dumb, visceral, biased) point of view is that programmers like rebase to hide mistakes... and I don't think this is good for the project at all. So my question is: have you really found it valuable to have such "organic commits" (i.e. untampered history) in practice? Or, conversely, do you prefer to run into nifty well-packed commits and disregard the programmers' experimentation process? Whichever one you chose, why does that work for you? (Having other team members keep history, or alternatively, rebasing it.) 1 per Google DVCS analysis , in Mercurial "History is Sacred". | The History is sacred, the Present is not. You can split your DVCS "tree" into two parts: The past /history, which contains an accurate view of how you reached the current state of the code; this part of the history grows over time. The present, the part you are currently working on to make your code evolve; this tip-most part of the history stays about the same size. Any code you released or used somehow is part of the past . The past is sacred because you need to be able to reproduce a setup or understand what introduced a regression. You shall never ever rewrite the past . In git you usually never rewrite anything once it is in master: master is the past part of the history. In Mercurial you have the public phase concept that keeps track of the past part of your "tree" and enforces its immutability. The present part of the code is the changesets you are currently working on: the feature branch that you are trying to make usable, bug-free and properly refactored. It is perfectly fine to rewrite it; it is even a good idea, because it makes the past part prettier, simpler and more usable. Mercurial tracks this in the draft phase. So yes, please rebase if it improves your history. Mercurial will prevent you from shooting yourself in the foot if you are rebasing stuff you should not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151541",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10869/"
]
} |
151,619 | It seems to me that many bigger C++ libraries end up creating their own string type. In the client code you either have to use the one from the library ( QString , CString , fbstring etc., I'm sure anyone can name a few) or keep converting between the standard type and the one the library uses (which most of the time involves at least one copy). So, is there a particular misfeature or something wrong about std::string (just like auto_ptr semantics were bad)? Has it changed in C++11? | Most of those bigger C++ libraries were started before std::string was standardized. Others include additional features that were standardized late, or still not standardized, such as support for UTF-8 and conversion between encodings. If those libraries were implemented today, they would probably choose to write functions and iterators that operate on std::string instances. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151619",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3530/"
]
} |
151,661 | I've come across this PHP tag <?= ?> recently and I am reluctant to use it, but it itches so hard that I wanted to have your take on it. I know it is bad practice to use short tags <? ?> and that we should use full tags <?php ?> instead, but what about this one : <?= ?> ? It would save some typing and it would be better for code readability, IMO. So instead of this: <input name="someVar" value="<?php echo $someVar; ?>"> I could write it like this, which is cleaner : <input name="someVar" value="<?= $someVar ?>"> Is using this operator frowned upon? | History Before the misinformation train goes too far out of the station, there are a bunch of things you need to understand about PHP short tags. The primary issue with PHP's short tags is that XML managed to choose a tag ( <? ) that was already used by PHP. With the option enabled, you weren't able to raw output the xml declaration without getting syntax errors: <?xml version="1.0" encoding="UTF-8" ?> This is a big issue when you consider how common XML parsing and management is. What about <?= ? Although <? causes conflicts with xml, <?= does not . Unfortunately, the options to toggle it on and off were tied to short_open_tag , which meant that to get the benefit of the short echo tag ( <?= ), you had to deal with the issues of the short open tag ( <? ). The issues associated with the short open tag were much greater than the benefits from the short echo tag, so you'll find a million and a half recommendations to turn short_open_tag off, which you should . With PHP 5.4, however the short echo tag has been re-enabled separate from the short_open_tag option. I see this as a direct endorsement of the convenience of <?= , as there's nothing fundamentally wrong with it in and of itself. The problem is that you can't guarantee that you'll have <?= if you're trying to write code that could work in a wider range of PHP versions. ok, so now that that's all out of the way Should you use <?= ? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151661",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38148/"
]
} |
151,733 | If immutable objects¹ are good, simple and offer benefits in concurrent programming why do programmers keep creating mutable objects²? I have four years of experience in Java programming and as I see it, the first thing people do after creating a class is generate getters and setters in the IDE (thus making it mutable). Is there a lack of awareness or can we get away with using mutable objects in most scenarios? ¹ Immutable object is an object whose state cannot be modified after it is created. ² Mutable object is an object which can be modified after it is created. | Both mutable and immutable objects have their own uses, pros and cons. Immutable objects do indeed make life simpler in many cases. They are especially applicable for value types, where objects don't have an identity so they can be easily replaced. And they can make concurrent programming way safer and cleaner (most of the notoriously hard to find concurrency bugs are ultimately caused by mutable state shared between threads). However, for large and/or complex objects, creating a new copy of the object for every single change can be very costly and/or tedious. And for objects with a distinct identity, changing an existing objects is much more simple and intuitive than creating a new, modified copy of it. Think about a game character. In games, speed is top priority, so representing your game characters with mutable objects will most likely make your game run significantly faster than an alternative implementation where a new copy of the game character is spawned for every little change. Moreover, our perception of the real world is inevitably based on mutable objects. When you fill up your car with fuel at the gas station, you perceive it as the same object all along (i.e. its identity is maintained while its state is changing) - not as if the old car with an empty tank got replaced with consecutive new car instances having their tank gradually more and more full. So whenever we are modeling some real-world domain in a program, it is usually more straightforward and easier to implement the domain model using mutable objects to represent real-world entities. Apart from all these legitimate reasons, alas, the most probable cause why people keep creating mutable objects is inertia of mind, a.k.a. resistance to change. Note that most developers of today have been trained well before immutability (and the containing paradigm, functional programming) became "trendy" in their sphere of influence, and don't keep their knowledge up to date about new tools and methods of our trade - in fact, many of us humans positively resist new ideas and processes. "I have been programming like this for nn years and I don't care about the latest stupid fads!" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151733",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17887/"
]
} |
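A small sketch of the immutable value-type case described in the answer above; the Money class is hypothetical, but the pattern -- final fields, no setters, "modifications" returning new instances -- is what gives the thread-safety and simplicity benefits mentioned there.

    public final class Money {
        private final String currency;
        private final long cents;

        public Money(String currency, long cents) {
            this.currency = currency;
            this.cents = cents;
        }

        // No setters: "changing" a Money yields a new instance,
        // so a shared reference can never be observed in a half-updated state.
        public Money plus(long moreCents) {
            return new Money(currency, cents + moreCents);
        }

        public long cents() { return cents; }
        public String currency() { return currency; }

        public static void main(String[] args) {
            Money price = new Money("EUR", 1_000);
            Money withTax = price.plus(190);
            System.out.println(price.cents());    // 1000 -- the original is untouched
            System.out.println(withTax.cents());  // 1190
        }
    }

For a large, frequently mutated object (the game-character case in the answer), the copy per change is exactly the cost that pushes people back toward mutable designs.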
151,920 | Since Java 1.6 the JVM can run a myriad of programming languages on top of it, instead of just Java. I conceptually understand how Java is run on the Java VM, but not how other languages can run on it as well. To me, it all looks like black magic. Do you have any articles to point me to so I can better understand how this all fits together? | The key is the native language of the JVM: the Java bytecode. Any language can be compiled into bytecode which the JVM understands - all you need for this is a compiler emitting bytecode. From then on, there is no difference from the JVM's point of view. So much so that you can take a compiled Scala, Clojure, Jython etc. class file and decompile it (using e.g. JAD ) into normal-looking Java source code. (A small sketch of inspecting bytecode follows after this record.) You can find more details about this in the following articles / threads: Java virtual machine (Wikipedia) Why do we need other JVM languages (Stack Overflow) I am not aware of any fundamental changes in the Java 5 or 6 JVMs which would have made it possible or easier for (code compiled from) other languages to run on it. In my understanding, the JVM 1.4 was more or less as capable in that respect as JVM 6 (there may be differences though; I am not a JVM expert). It was just that people started to develop other languages and/or bytecode compilers in the first half of the decade, and the results started to appear (and become more widely known) around 2006 when Java 6 was published. However, all these JVM versions share some limitations: the JVM is statically typed by nature and, up to release 7, did not support dynamic languages. This has changed with the introduction of invokedynamic , a new bytecode instruction which enables method invocation relying on dynamic type checking. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151920",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27417/"
]
} |
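To see the "bytecode is the common language" point concretely, here is a tiny class you can compile and disassemble yourself; the commands and the instruction listing in the comments are only indicative (exact javap output varies by compiler and version).

    // Greeter.java -- compile with `javac Greeter.java`, then inspect the class file
    // with `javap -c Greeter`. Whether a .class file was produced from Java, Scala,
    // Kotlin, etc., the JVM only ever sees bytecode instructions like these.
    public class Greeter {
        public int add(int a, int b) {
            return a + b;
        }
        // javap -c prints roughly (indicative):
        //   iload_1
        //   iload_2
        //   iadd
        //   ireturn
    }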
151,955 | Is there one? All the definitions I can find describe the size, complexity / variety or velocity of the data. Wikipedia's definition is the only one I've found with an actual number Big data sizes are a constantly moving target, as of 2012 ranging from a few dozen terabytes to many petabytes of data in a single data set. However, this seemingly contradicts the MIKE2.0 definition , referenced in the next paragraph, which indicates that "big" data can be small and that 100,000 sensors on an aircraft creating only 3GB of data could be considered big. IBM despite saying that: Big data is more simply than a matter of size. have emphasised size in their definition . O'Reilly has stressed "volume, velocity and variety" as well. Though explained well, and in more depth, the definition seems to be a re-hash of the others - or vice-versa of course. I think that a Computer Weekly article title sums up a number of articles fairly well "What is big data and how can it be used to gain competitive advantage" . But ZDNet wins with the following from 2012 : “Big Data” is a catch phrase that has been bubbling up from the high
performance computing niche of the IT market... If one sits through
the presentations from ten suppliers of technology, fifteen or so
different definitions are likely to come forward. Each definition, of
course, tends to support the need for that supplier’s products and
services. Imagine that. Basically "big data" is "big" in some way shape or form. What is "big"? Is it quantifiable at the current time? If "big" is unquantifiable is there a definition that does not rely solely on generalities? | There isn't one; it's a buzzword. The delineator though is that your data is beyond the capabilities of traditional systems. The data is too large to store on the largest disk, the queries take tons too long without special optimization, the network or disk can't support the incoming traffic flow, a plain old dataview isn't going to handle visualization for the shape/size/breadth of data... Basically, that your data is beyond some ill-defined tipping point where "just add more hardware" isn't going to cut it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/151955",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35424/"
]
} |
152,020 | I am a CS student with several years of experience in C and C++, and for the last few years I've been constantly working with Java/Objective C doing app development and now I have switched to web development and am mainly focused on ruby on rails and I came to the realization that (as with app development , really) I reference other code way too much. I constantly Google functionality for lots of things I imagine I should be able to do from scratch and it's really cracked my confidence a bit. Basic fundamentals are not an issue, I hate to use this as an example but I can run through javabat in both java/python at a sprint - obviously not an accomplishment and but what I mean to say is I have a strong base for the fundamentals I think? I know what I need to use typically but reference syntax constantly. Would love some advice and input on this, as it has been holding me back pretty solidly in terms of looking for work in this field even though I'm finishing my degree. My main reason for asking is not really about employment, but more that I don't want to be the only guy at a hackathon not hammering out nonstop code and sitting there with 20 Google/github tabs open, and I have refrained from attending any due to a slight lack of confidence... Is a person a bad developer by constantly looking to code examples for moderate to complex tasks? | Copy-paste blindly: bad. Look up documentation, read code examples to get a better understanding: good. I'd rather work with someone who looks up things all the time and makes sure everything works as intended than someone over-confident who thinks he knows it all but doesn't. But the worst is someone who doesn't bother understanding how things work, and just uncritically copies code from the web (and then when the bug reports start raining down is unable to fix anything properly). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/152020",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/56140/"
]
} |
152,094 | Attribution: This grew out of a related P.SE question My background is in C / C++, but I have worked a fair amount in Java and am currently coding C#. Because of my C background, checking passed and returned pointers is second-hand, but I acknowledge it biases my point of view. I recently saw mention of the Null Object Pattern where the idea is that an object is always returned. Normal case returns the expected, populated object and the error case returns empty object instead of a null pointer. The premise being that the calling function will always have some sort of object to access and therefore avoid null access memory violations. So what are the pros / cons of a null check versus using the Null Object Pattern? I can see cleaner calling code with the NOP, but I can also see where it would create hidden failures that don't otherwise get raised. I would rather have my application fail hard (aka an exception) while I'm developing it than have a silent mistake escape into the wild. Can't the Null Object Pattern have similar problems as not performing a null check? Many of the objects I have worked with hold objects or containers of their own. It seems like I would have to have a special case to guarantee all of the main object's containers had empty objects of their own. Seems like this could get ugly with multiple layers of nesting. | You wouldn't use a Null Object Pattern in places where null (or Null Object) is returned because there was a catastrophic failure. In those places I would continue to return null. In some cases, if there's no recovery, you might as well crash because at least the crash dump will indicate exactly where the problem occurred. In such cases when you add your own error handling, you are still going to kill the process (again, I said for cases where there's no recovery) but your error handling will mask very important information that a crash dump would've provided. Null Object Pattern is more for places where there's a default behavior that could be taken in a case where object isn't found. For example consider the following: User* pUser = GetUser( "Bob" );
if( pUser )
{
pUser->SetAddress( "123 Fake St." );
} If you use NOP, you would write: GetUser( "Bob" )->SetAddress( "123 Fake St." ); Note that this code's behavior is "if Bob exists, I want to update his address". Obviously if your application requires Bob to be present, you don't want to silently succeed. But there are cases where this type of behavior would be appropriate. And in those cases, doesn't NOP produce a much cleaner and concise code? In places where you really can't live without Bob, I would have GetUser() throw an application exception (i.e. not access violation or anything like that) that would be handled at a higher level and would report general operation failure. In this case, there's no need for NOP but there's also no need to explicitly check for NULL. IMO, those checks for NULL, only make the code bigger and take away from readability. Check for NULL is still the right design choice for some interfaces, but not nearly as many as some people tend to think. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/152094",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
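The GetUser example in the answer above is C++-flavoured; here is a rough Java rendition of the same Null Object idea with invented names (User, RealUser, UserRepository). The no-op NULL_USER gives callers the "update the address only if the user exists" behaviour without an explicit null check.

    import java.util.HashMap;
    import java.util.Map;

    interface User {
        void setAddress(String address);
    }

    class RealUser implements User {
        private final String name;
        private String address;

        RealUser(String name) { this.name = name; }

        @Override
        public void setAddress(String address) {
            this.address = address;
            System.out.println(name + " now lives at " + address);
        }
    }

    class UserRepository {
        // The Null Object: same interface, deliberately does nothing.
        static final User NULL_USER = address -> { /* no-op */ };

        private final Map<String, User> users = new HashMap<>();

        void add(String name) { users.put(name, new RealUser(name)); }

        User getUser(String name) {
            return users.getOrDefault(name, NULL_USER);
        }
    }

    public class NullObjectDemo {
        public static void main(String[] args) {
            UserRepository repo = new UserRepository();
            repo.add("Bob");

            // "If Bob exists, update his address" -- no null check needed:
            repo.getUser("Bob").setAddress("123 Fake St.");
            // Unknown user: silently does nothing instead of throwing a NullPointerException.
            repo.getUser("Alice").setAddress("456 Nowhere Rd.");
        }
    }

As the answer stresses, this silent default is only appropriate where "do nothing" really is acceptable; where the user must exist, failing loudly is still the right choice.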
152,152 | I would like to know if it is a good practice to generalize variables (use a single variable to store all the values). Consider a simple example: Strings querycre,queryins,queryup,querydel;
querycre = 'Create table XYZ ...';
execute querycre ;
queryins = 'Insert into XYZ ...';
execute queryins ;
queryup = 'Update XYZ set ...';
execute queryup;
querydel = 'Delete from XYZ ...';
execute querydel ; and Strings query;
query= 'Create table XYZ ... ';
execute query ;
query= 'Insert into XYZ ...';
execute query ;
query= 'Update XYZ set ...';
execute query ;
query= 'Delete from XYZ ...';
execute query ; In first case I use 4 strings each storing data to perform the actions mentioned in their suffixes. In second case just 1 variable to store all kinds the data. Having different variables makes it easier for someone else to read and understand it better. But having too many of them makes it difficult to manage. Also does having too many variables hamper my performance? P.S: please don't answer w.r.t the code in example it was just to convey what I really mean. | Having to ask yourself this question is a pretty strong smell that you are not following DRY (Don't Repeat Yourself). Suppose you have this, in a hypothetical curly-brace language: function doFoo() {
query = "SELECT a, b, c FROM foobar WHERE baz = 23";
result = runQuery(query);
print(result);
query = "SELECT foo, bar FROM quux WHERE x IS NULL";
result = runQuery(query);
print(result);
query = "SELECT a.foo, b.bar FROM quux a INNER JOIN quuux b ON b.quux_id = a.id ORDER BY date_added LIMIT 10";
result = runQuery(query);
print(result);
} Refactor that into: function runAndPrint(query) {
result = runQuery(query);
print(result);
}
function doFoo() {
runAndPrint("SELECT a, b, c FROM foobar WHERE baz = 23");
runAndPrint("SELECT foo, bar FROM quux WHERE x IS NULL");
runAndPrint("SELECT a.foo, b.bar FROM quux a INNER JOIN quuux b ON b.quux_id = a.id ORDER BY date_added LIMIT 10");
} Notice how the need for deciding whether or not to use different variables goes away, and how you can now change the logic for running a query and printing the result in one place, rather than having to apply the same modification three times. (For example, you might decide you want to pump the query result through a template system instead of printing it right away). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/152152",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42599/"
]
} |
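Since the answer's refactoring is deliberately written in a hypothetical curly-brace language, here is a compilable Java rendition of the same before/after idea; runQuery is a stub standing in for real database access, which is an assumption of this sketch.

    import java.util.List;

    public class DryDemo {
        // Stub standing in for real database access -- an assumption for the sketch.
        static List<String> runQuery(String query) {
            return List.of("result of: " + query);
        }

        // The duplicated "run it, then print it" logic now lives in one place.
        static void runAndPrint(String query) {
            runQuery(query).forEach(System.out::println);
        }

        public static void main(String[] args) {
            runAndPrint("SELECT a, b, c FROM foobar WHERE baz = 23");
            runAndPrint("SELECT foo, bar FROM quux WHERE x IS NULL");
            runAndPrint("SELECT a.foo, b.bar FROM quux a INNER JOIN quuux b ON b.quux_id = a.id");
        }
    }

With the repetition factored out, the question of how many query variables to declare largely disappears, which is the answer's point.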