source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
221,762 | The Java team has done a ton of great work removing barriers to functional programming in Java 8. In particular, the changes to the java.util Collections do a great job of chaining transformations into very fast streamed operations. Considering how good a job they have done adding first class functions and functional methods on collections, why have they completely failed to provide immutable collections or even immutable collection interfaces? Without changing any existing code, the Java team could at any time add immutable interfaces that are the same as the mutable ones, minus the setter methods and make the existing interfaces extend from them, like this: ImmutableIterable
  Iterable extends ImmutableIterable
    Collection extends Iterable
  ImmutableCollection extends ImmutableIterable
    ImmutableList, ImmutableSet, ImmutableMap, ... extend ImmutableCollection
  List, Set, Map, ... extend Collection plus their respective Immutable* interfaces
Sure, operations like List.add() and Map.put() currently return a boolean or the previous value for the given key to indicate whether the operation succeeded or failed. Immutable collections would have to treat such methods as factories and return a new collection containing the added element - which is incompatible with the current signature. But that could be worked around by using a different method name like ImmutableList.append() or .addAt() and ImmutableMap.putEntry(). The resulting verbosity would be more than outweighed by the benefits of working with immutable collections, and the type system would prevent errors of calling the wrong method. Over time, the old methods could be deprecated. Wins of immutable collections: Simplicity — reasoning about code is simpler when the underlying data does not change. Documentation — if a method takes an immutable collection interface, you know it isn't going to modify that collection. If a method returns an immutable collection, you know you can't modify it. Concurrency — immutable collections can be shared safely across threads. As someone who has tasted languages which assume immutability, it is very hard to go back to the Wild West of rampant mutation. Clojure's collections (sequence abstraction) already have everything that Java 8 collections provide, plus immutability (though maybe using extra memory and time due to synchronized linked-lists instead of streams). Scala has both mutable and immutable collections with a full set of operations, and though those operations are eager, calling .iterator gives a lazy view (and there are other ways of lazily evaluating them). I don't see how Java can continue to compete without immutable collections. Can someone point me to the history or discussion about this? Surely it's public somewhere. | Because immutable collections absolutely require sharing to be usable. Otherwise, every single operation drops a whole other list into the heap somewhere. Languages that are entirely immutable, like Haskell, generate astonishing amounts of garbage without aggressive optimizations and sharing. Having a collection that's only usable with <50 elements is not worth putting in the standard library. Furthermore, immutable collections often have fundamentally different implementations than their mutable counterparts. Consider ArrayList, for example: an efficient immutable ArrayList wouldn't be an array at all! It should be implemented with a balanced tree with a large branching factor; Clojure uses 32, IIRC. Making mutable collections be "immutable" by just adding a functional update is a performance bug just as much as a memory leak is. Furthermore, sharing isn't viable in Java. Java provides too many unrestricted hooks to mutability and reference equality to make sharing "just an optimization". It'd probably irk you a bit if you could modify an element in a list, and realize you just modified an element in the other 20 versions of that list you had. This also rules out huge classes of very vital optimizations for efficient immutability: sharing, stream fusion, you name it, mutability breaks it. (That'd make a good slogan for FP evangelists) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62323/"
]
} |
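To make the answer's performance point concrete, here is a minimal, hypothetical Java sketch (not from the original post) of the naive approach it warns against: an "immutable" list whose functional update simply copies the backing array. Every append costs O(n) time and garbage, which is exactly why real persistent collections such as Clojure's vectors use wide trees with structural sharing instead.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Naive persistent list: every "update" copies the whole backing list.
// This is the performance trap the answer describes, not a recommended design.
final class NaiveImmutableList<T> {
    private final List<T> elements;

    NaiveImmutableList() {
        this.elements = Collections.emptyList();
    }

    private NaiveImmutableList(List<T> elements) {
        this.elements = elements;
    }

    // "Functional update": returns a new list and leaves this one untouched,
    // but pays an O(n) copy for every single append.
    NaiveImmutableList<T> append(T value) {
        List<T> copy = new ArrayList<>(elements);
        copy.add(value);
        return new NaiveImmutableList<>(Collections.unmodifiableList(copy));
    }

    T get(int index) {
        return elements.get(index);
    }

    int size() {
        return elements.size();
    }
}
```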
221,836 | Let's say I wanted to start an open source project that I hope/expect to have many people submit patches and whatnot. Is it viable to take a strict TDD approach? Can/should I expect/trust collaborators to write quality tests whenever they submit a patch? One thing I've been thinking about is writing test suites for individual bug reports and feature requests and requiring that all patches/pull requests make the tests pass, but at that point it seems like it would be better just to write the feature/bugfix myself. As far as I can tell, most of the major open source projects that use TDD (or at least write tests) seem to be mostly written purely by an individual or team, where it's easy to enforce practices such as TDD. | You can't really enforce a TDD (test first) approach on an open source project where patches can be submitted by the general public. What you can enforce is that all patches must have a set of test cases for the fixes included in the patch and that those test cases, as well as all the existing test cases, must pass. You can enforce this by only giving commit rights to a few trusted developers who are known to use and agree to the policies of the project and by publicly stating that submissions/pull-requests will only be incorporated if they come with passing test cases (with sufficient coverage). This doesn't ensure that the test is written first , but it does ensure that the test is written . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221836",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18948/"
]
} |
221,892 | I'm working on a windows form to calculate UPC for item numbers. I successfully create one that will handle one item number/UPC at a time, now I want to expand and do it for multiple item numbers/UPCs. I have started and tried using a list, but I keep getting stuck. I created a helper class: public class Codes
{
private string incrementedNumber;
private string checkDigit;
private string wholeNumber;
private string wholeCodeNumber;
private string itemNumber;
public Codes(string itemNumber, string incrementedNumber, string checkDigit, string wholeNumber, string wholeCodeNumber)
{
this.incrementedNumber = incrementedNumber;
this.checkDigit = checkDigit;
this.wholeNumber = wholeNumber;
this.wholeCodeNumber = wholeCodeNumber;
this.itemNumber = itemNumber;
}
public string ItemNumber
{
get { return itemNumber; }
set { itemNumber = value; }
}
public string IncrementedNumber
{
get { return incrementedNumber; }
set { incrementedNumber = value; }
}
public string CheckDigit
{
get { return checkDigit; }
set { checkDigit = value; }
}
public string WholeNumber
{
get { return wholeNumber; }
set { wholeNumber = value; }
}
public string WholeCodeNumber
{
get { return wholeCodeNumber; }
set { wholeCodeNumber = value; }
}
} Then I got started on my code, but the issue is that the process is incremental, meaning I get the item number from a gridview via checkboxes and put them in the list. Then I get the last UPC from the database, strip the checkdigit, then increment the number by one and put it in the list. Then I calculate the checkdigit for the new number and put that in the list. And here I already get an Out of Memory Exception.
Here is the code I have so far: List<Codes> ItemNumberList = new List<Codes>();
private void buttonSearch2_Click(object sender, EventArgs e)
{
//Fill the datasets
this.immasterTableAdapter.FillByWildcard(this.alereDataSet.immaster, (textBox5.Text));
this.upccodeTableAdapter.FillByWildcard(this.hangtagDataSet.upccode, (textBox5.Text));
this.uPCTableAdapter.Fill(this.uPCDataSet.UPC);
string searchFor = textBox5.Text;
int results = 0;
DataRow[] returnedRows;
returnedRows = uPCDataSet.Tables["UPC"].Select("ItemNumber = '" + searchFor + "2'");
results = returnedRows.Length;
if (results > 0)
{
MessageBox.Show("This item number already exists!");
textBox5.Clear();
//clearGrids();
}
else
{
//textBox4.Text = dataGridView1.Rows[0].Cells[1].Value.ToString();
MessageBox.Show("Item number is unique.");
}
}
public void checkMarks()
{
for (int i = 0; i < dataGridView7.Rows.Count; i++)
{
if ((bool)dataGridView7.Rows[i].Cells[3].FormattedValue)
{
{
ItemNumberList.Add(new Codes(dataGridView7.Rows[i].Cells[0].Value.ToString(), "", "", "", ""));
}
}
}
}
public void multiValue1()
{
_value = uPCDataSet.UPC.Rows[uPCDataSet.UPC.Rows.Count - 1]["UPCNumber"].ToString();//get last UPC from database
_UPCNumber = _value.Substring(0, 11);//strip out the check-digit
_UPCNumberInc = Convert.ToInt64(_UPCNumber);//convert the value to a number
for (int i = 0; i < ItemNumberList.Count; i++)
{
_UPCNumberInc = _UPCNumberInc + 1;
_UPCNumberIncrement = Convert.ToString(_UPCNumberInc);//assign the incremented value to a new variable
ItemNumberList.Add(new Codes("", _UPCNumberIncrement, "", "", ""));//**here I get the OutOfMemoreyException**
}
for (int i = 0; i < ItemNumberList.Count; i++)
{
long chkDigitOdd;
long chkDigitEven;
long chkDigitSubtotal;
chkDigitOdd = Convert.ToInt64(_UPCNumberIncrement.Substring(0, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(2, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(4, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(6, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(8, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(10, 1));
chkDigitOdd = (3 * chkDigitOdd);
chkDigitEven = Convert.ToInt64(_UPCNumberIncrement.Substring(1, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(3, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(5, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(7, 1)) + Convert.ToInt64(_UPCNumberIncrement.Substring(9, 1));
chkDigitSubtotal = (300 - (chkDigitEven + chkDigitOdd));
_chkDigit = chkDigitSubtotal.ToString();
_chkDigit = _chkDigit.Substring(_chkDigit.Length - 1, 1);
ItemNumberList.Add(new Codes("", "",_chkDigit, "", ""));
} Is this the right way to go about it, using a list, or should I be looking at a different way? | I'll expand my comment: ... if you're adding or removing elements, you want a list (or other flexible data structure). Arrays are only really good when you know exactly how many elements you need at the start. A Quick Breakdown Arrays are good when you have a fixed number of elements that is unlikely to change, and you wish to access it in a non-sequential fashion. Fixed Size Fast Access - O(1) Slow Resize - O(n) - needs to copy every element to a new array! Linked-Lists are optimized for quick additions and removals at either end, but are slow to access in the middle. Variable Size Slow Access at middle - O(n) Needs to traverse each element starting from the head in order to reach the desired index Fast Access at Head - O(1) Potentially fast access at Tail O(1) if a reference is stored at the tail end (as with a doubly-linked list) O(n) if no reference is stored (same complexity as accessing a node in the middle) Array Lists (such as List<T> in C#!) are a mixture of the two, with fairly fast additions and random access. List<T> will often be your go-to collection when you're not sure what to use. Uses an array as a backing structure Is smart about its resizing - allocates the double of its current space when it runs out of it. This leads to O(log n) resizes, which is better than resizing every time we add/remove Fast Access - O(1) How Arrays Work Most languages model arrays as contiguous data in memory, of which each element is the same size. Let's say we had an array of int s (shown as [address: value], using decimal addresses because I'm lazy) [0: 10][32: 20][64: 30][96: 40][128: 50][160: 60] Each of these elements is a 32-bit integer, so we know how much space it takes up in memory (32 bits!). And we know the memory address of the pointer to the first element. It's trivially easy to get to the value of any other element in that array: Take the address of the first element Take the offset of each element (its size in memory) Multiply the offset by the desired index Add your result to the address of the first element Let's say our first element is at '0'. We know our second element is at '32' (0 + (32 * 1)), and our third element is at 64 (0 + (32 * 2)). The fact that we can store all these values next to each other in memory means our array is as compact as it can possibly be. It also means that all our elements need to stay together for things to continue working! As soon as we add or remove an element, we need to pick up everything else, and copy them over to some new place in memory, to make sure there are no gaps between elements, and everything has enough room. This can be very slow , especially if you're doing it every time you want to add a single element. Linked Lists Unlike arrays, Linked Lists don't need all their elements to be next to each other in memory. They are composed of nodes, that store the following info: Node<T> {
Value : T // Value of this element in the list
Next : Node<T> // Reference to the node that comes next
} The list itself keeps a reference to the head and tail (first and last nodes) in most cases, and sometimes keeps track of its size. If you want to add an element to the end of the list, all you need to do is get the tail , and change its Next to reference a new Node containing your value. Removing from the end is equally simple - just dereference the Next value of the preceding node. Unfortunately, if you have a LinkedList<T> with 1000 elements, and you want element 500, there's no easy way to jump right to the 500th element like there is with an array. You need to start at the head , and keep going to the Next node, until you've done it 500 times. This is why adding and removing from a LinkedList<T> is fast (when working at the ends), but accessing the middle is slow. Edit : Brian points out in the comments that Linked Lists have a risk of causing a page fault, due to not being stored in contiguous memory. This can be hard to benchmark, and can make Linked Lists even a bit slower than you might expect given just their time complexities. Best of Both Worlds List<T> compromises for both T[] and LinkedList<T> and comes up with a solution that's reasonably fast and easy to use in most situations. Internally, List<T> is an array! It still has to jump through the hoops of copying its elements when resizing, but it pulls some neat tricks. For starters, adding a single element doesn't usually cause the array to copy. List<T> makes sure there's always enough room for more elements. When it runs out, instead of allocating a new internal array with just one new element, it will allocate a new array with several new elements (often twice as many as it currently holds!). Copy operations are expensive, so List<T> cuts down on them as much as possible, while still allowing fast random access. As a side effect, it may end up wasting slightly more space than a straight-up array or linked list, but it's usually worth the tradeoff. TL;DR Use List<T> . It's normally what you want, and it seems to be correct for you in this situation (where you're calling .Add()). If you're unsure of what you need, List<T> is a good place to start. Arrays are good for high-performance, "I know I need exactly X elements" things. Alternatively, they're useful for quick, one-off "I need to group these X things I've already defined together so I can loop over them" structures. There are a number of other collection classes. Stack<T> is like a linked list that only operates from the top. Queue<T> works as a first-in-first-out list. Dictionary<T, U> is an unordered, associative mapping between keys and values. Play with them and get to know the strengths and weaknesses of each. They can make or break your algorithms. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/221892",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108550/"
]
} |
222,309 | When you have to iterate a reader where the number of items to read is unknown, the only way to do it is to keep reading until you hit the end. This is often the place where you need an endless loop. There is the always-true loop, which indicates there must be a break or return statement somewhere inside the block. int offset = 0;
while(true)
{
Record r = Read(offset);
if(r == null)
{
break;
}
// do work
offset++;
} There is the double read for loop method. Record r = Read(0);
for(int offset = 0; r != null; offset++)
{
r = Read(offset);
if(r != null)
{
// do work
}
} There is the single read while loop. Not all languages support this method . int offset = 0;
Record r = null;
while((r = Read(offset++)) != null)
{
// do work
} I'm wondering which approach is the least likely to introduce a bug, most readable and commonly used. Every time I have to write one of these I think "there has to be a better way" . | I would take a step back here. You're concentrating on the picky details of the code but missing the larger picture. Let's take a look at one of your example loops: int offset = 0;
while(true)
{
Record r = Read(offset);
if(r == null)
{
break;
}
// do work
offset++;
} What is the meaning of this code? The meaning is "do some work to each record in a file". But that is not what the code looks like . The code looks like "maintain an offset. Open a file. Enter a loop with no end condition. Read a record. Test for nullity." All that before we get to the work! The question you should be asking is " how can I make this code's appearance match its semantics? " This code should be: foreach(Record record in RecordsFromFile())
DoWork(record); Now the code reads like its intention. Separate your mechanisms from your semantics. In your original code you mix up the mechanism -- the details of the loop -- with the semantics -- the work done to each record. Now we have to implement RecordsFromFile(). What's the best way of implementing that? Who cares? That's not the code that anyone is going to be looking at. It's basic mechanism code and it's ten lines long. Write it however you want. How about this? public IEnumerable<Record> RecordsFromFile()
{
int offset = 0;
while(true)
{
Record record = Read(offset);
if (record == null) yield break;
yield return record;
offset += 1;
}
} Now that we are manipulating a lazily computed sequence of records all sorts of scenarios become possible: foreach(Record record in RecordsFromFile().Take(10))
DoWork(record);
foreach(Record record in RecordsFromFile().OrderBy(r=>r.LastName))
DoWork(record);
foreach(Record record in RecordsFromFile().Where(r=>r.City == "London"))
DoWork(record); And so on. Any time you write a loop, ask yourself "does this loop read like a mechanism or like the meaning of the code?" If the answer is "like a mechanism", then try to move that mechanism to its own method, and write the code to make the meaning more visible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/222309",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52871/"
]
} |
222,403 | We concatenate CSS and JavaScript files to reduce the number of HTTP requests, which improves performance. The result is HTML like this: <link rel="stylesheet" href="all-my-css-0fn392nf.min.css">
<!-- later... -->
<script src="all-my-js-0fn392nf.min.js"></script> If we've got server-side/build logic to do all this for us, why not take it one step further and embed those concatenated styles and scripts in the HTML? <style>.all{width:100%;}.my{display:none;}.css{color:white;}</style>
<!-- later... -->
<script>var all, my, js;</script> That's two fewer HTTP requests, yet I've not seen this technique in practice. Why not? | Because saving HTTP requests is of little use when you achieve it by breaking caching. If the stylesheets and scripts are served separately, they can be cached very well and amortized over many, many requests to wildly different pages. If they're mushed in the same HTML page, they have to be re-transmitted with every. Single. Request. This page's HTML, for example, is 13 KB right now. The 180 KB of CSS hit the cache, and so did the 360 KB of JS. Both cache hits took minuscule amounts of time and consumed practically no bandwidth. Whip out your browser's network profiler and try it on some other sites. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/222403",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92609/"
]
} |
222,528 | Which one is considered better: having a directive that interacts with services directly, or having a directive that exposes certain hooks to which a controller may bind behaviour (involving services)? | A directive is best (as a rule of thumb) when it's short (code-wise), (potentially) re-usable, and has a limited scope in terms of functionality. Making a directive that includes UI and depends on a service (that I assume handles connection to the backend) not only gives it 2 functional roles, namely: Controlling the UI for display/entry of data for the widget. Submitting to the backend (via the service). but also makes it less re-usable, as you then can't use it again with another service, or with a different UI (at least not easily). When making these decisions, I often compare to the built-in HTML elements: for example <input> , <textarea> or <form> : they are completely independent of any specific backend. HTML5 has given the <input> element a few extra types, e.g. date , which is still independent of the backend, and of where exactly the data goes or how it is used. They are purely interface elements. Your custom widgets, built using directives, should I think follow the same pattern, if possible. However, this isn't the end of the story. Going beyond the analogy with the built-in HTML elements, you can create re-usable directives that both call services and use a purely UI directive, just as they might use a <textarea> . Say you want to use some HTML as follows: <document document-url="'documents/3345.html'">
<document-data></document-data>
<comments></comments>
<comment-entry></comment-entry>
</document> To code up the commentEntry directive, you could make a very small directive that just contains the controller that links up a service with a UI-widget. Something like: app.directive('commentEntry', function (myService) {
return {
restrict: 'E',
template: '<comment-widget on-save="save(data)" on-cancel="cancel()"></comment-widget>',
require: '^document',
link: function (scope, iElement, iAttrs, documentController) {
// Allow the controller here to access the document controller
scope.documentController = documentController;
},
controller: function ($scope) {
$scope.save = function (data) {
// Assuming the document controller exposes a function "getUrl"
var url = $scope.documentController.getUrl();
myService.saveComments(url, data).then(function (result) {
// Do something
});
};
}
};
}); Taking this to an extreme, you might not ever need to have a manual ng-controller attribute in the HTML: you can do it all using directives, as long as each directive has a clear "UI" role, or a clear "data" role. There is a downside I should mention: it gives more "moving parts" to the application, which adds a bit of complexity. However, if each part has a clear role, and is well (unit + E2E) tested, I would argue it's worth it and an overall benefit in the long term. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/222528",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/112969/"
]
} |
222,555 | This claim by Aleks Bromfield states: Almost every language with a static type system also has a dynamic type system. Aside from C, I can't think of an exception Is this a valid claim? I understand that with Reflection or Loading classes at runtime Java gets a bit like this - but can this idea of 'gradual typing' be extended to a large number of languages? | Original tweeter here. :) First of all, I'm somewhat amused/shocked that my tweet is being taken so seriously! If I had known it was going to be this widely disseminated, I would have spent more than 30 seconds writing it! Thiago Silva is correct to point out that "static" and "dynamic" more accurately describe type checking , rather than type systems . In fact, it isn't really accurate to say that a language is statically or dynamically typed, either. Rather, a language has a type system, and an implementation of that language might enforce the type system using static checking, or dynamic checking, or both, or neither (though that would not be a very appealing language implementation!). As it happens, there are certain type systems (or features of type systems) which are more amenable to static checking, and there are certain type systems (or features of type systems) which are more amenable to dynamic checking. For example, if your language allows you to specify in the text of a program that a particular value must always be an array of integers, then it's reasonably straightforward to write a static checker to verify that property. Conversely, if your language has subtyping, and if it permits downcasting, then it's reasonably straightforward to check the validity of a downcast at runtime, but extremely difficult to do so at compile time. What I really meant by my tweet is simply that the vast majority of language implementations perform some amount of dynamic type checking. Or, equivalently, the vast majority of languages have some features that are difficult (if not impossible) to check statically. Downcasting is one example. Other examples include arithmetic overflow, array bounds checking, and null checking. Some of these can be statically checked in some circumstances, but by and large, you'd be hard-pressed to find a language implementation that doesn't do any checking at runtime. This is not a bad thing. It's just an observation that there are many interesting properties that we would like our languages to enforce, and that we don't really know how to check statically. And it's a reminder that distinctions like "static types" versus "dynamic types" are not nearly as clear-cut as some people would have you believe. :) One final note: the terms "strong" and "weak" aren't really used in the programming language research community, and they don't really have a consistent meaning. In general, I've found that when someone says that a language has "strong typing" and some other language has "weak typing", they're really saying that their favorite language (the one with "strong typing") prevents them from making some mistake that the other language (the one with "weak typing") doesn't -- or conversely, that their favorite language (the one with "weak typing") allows them to do some cool thing that the other language (the one with "strong typing") does not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/222555",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13382/"
]
} |
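A small Java illustration of the downcasting point (mine, not the answerer's): the cast below satisfies the static checker, and the actual check is deferred to runtime, where it fails with a ClassCastException. Array bounds and null checks are handled the same way in mainstream implementations.

```java
public class DowncastDemo {
    public static void main(String[] args) {
        Object o = "hello";              // upcast: accepted statically, always safe

        // The compiler accepts this downcast because it *might* be valid;
        // the real check happens dynamically, when the cast is executed.
        try {
            Integer n = (Integer) o;
            System.out.println(n);
        } catch (ClassCastException e) {
            System.out.println("dynamic check caught it: " + e.getMessage());
        }

        // Array bounds are another property checked at runtime rather than statically.
        int[] xs = new int[2];
        try {
            System.out.println(xs[5]);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("dynamic check caught it: " + e);
        }
    }
}
```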
223,027 | I work in embedded systems. Right now, my organization has two full-time programmers and two occasional programmers. It's rare that the same project is worked on by two programmers. All code is stored on a network drive. There are folders for the current production code, another folder for the source for every released version throughout history, and a third folder for active work. We have an automated system (Mercurial being abused) that makes backups of every changed code file in Working every fifteen minutes, so we can revert to previous states. My question is this: is it worth the trouble to set up a formal versioning system in an environment like this? Why or why not? | As you describe it, you already have some sort of version control, though currently there are some issues with it compared to a typical version control: An intentional commit in version control indicates that the developer strongly believes that the current state of the system would build successfully. (There are exceptions, as suggested by Jacobm001's comment . Indeed, several approaches are possible, and some teams would prefer not trying to make every commit possible to build. One approach is to have nightly builds, given that during the day, the system may receive several commits which don't build.) Since you don't have commits, your system will often result in a state which doesn't build. This prevents you from setting Continuous Integration . By the way, a distributed version control system has a benefit: one can do local commits as much as needed while bringing the system to a state where it cannot build, and then do a public commit when the system is able to build. Version control lets you enforce some rules on commit . For example, for Python files, PEP 8 can be run, preventing the commit if the committed files are not compliant. Blame is extremely hard to do with your approach. Exploring what changes were made when, and by who is hard too. Version control logs , the list of changed files and a diff is an excellent way to find exactly what was done. Any merge would be a pain (or maybe developers wouldn't even see that their colleagues were modifying the files before they save the changes). You stated that: It's rare that the same project is worked on by two programmers Rare doesn't mean never, so merges would occur sooner or later. A backup every fifteen minutes means that developers may lose up to fifteen minutes of work . This is always problematic: it's hard to remember exactly what changes were done meanwhile. With source control you can have meaningful commit messages. With backups all you know is that it was x minutes since last backup. A real version control ensures that you can always revert to the previous commit; this is a huge advantage. Reverting a backup using your system would be slightly more difficult than doing a one-click rollback , which you can do in most version control systems. Also, in your system Branching is impossible. There's a better way to do version control, and you should certainly consider changing the way you currently do it. Especially since, like Eric Lippert mentions , your current system is probably a lot more painful to maintain than any common version control system is. Having a Git or Mercurial repository on a network drive is pretty easy for example. Note: Even if you switch to a common version control system, you should still have a daily/weekly backup of the repositories. 
If you're using a distributed system it's less important though, since then every developer's working copy is also a backup. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223027",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/101501/"
]
} |
223,086 | In some code I'm writing right now, I have something like this: if (uncommon_condition) {
do_something_simple();
} else {
do();
something();
long();
and();
complicated();
} Part of me thinks "It's fine the way it's written. Simple cases should go first and more complicated cases should go next." But another part says: "No! The else code should go under the if , because if is for dealing with unusual cases and else is for dealing with all other cases." Which is correct or preferable? | Order by their likelihood of being executed. The most common, most likely, etc. condition(s) should go first. The "difficulty of dealing with them" should be dealt with by code structure, abstraction, etc. in the example the else block could be refactored to single method call. You want your if clauses to be at the same abstract level. if ( ! LotteryWinner ) {
GoToWorkMonday();
} else {
PlanYearLongVacation();
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223086",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113582/"
]
} |
223,250 | I recently interviewed with some companies that do Agile, Scrum to be more precise, and there are some things that don't quite seem like Agile to me. I'll take one case that particularly interests me right now, that of Scrum sprints. One particular project manager I talked to (yes, I said project manager) proudly stated that people in her team understand ("were told" is what I picked up from the context) that you don't go home when the working hours are over, you go home when the job is done, no matter how long it takes. What I've read between the lines is that we pack as many features as possible into a sprint and work overtime to make it happen. Now, I haven't done Agile myself so far (I've worked with financial and governmental institutions, most of which still prefer waterfall), but my understanding is that: a sprint in Scrum is the name for the generic iteration in Agile; the team should work at a sustainable pace and try to avoid long-term overtime, as overtime pays off only in the short term and its effects are dwarfed by the problems it incurs in the long term. Are my statements right? And should I take the manager's presentation as a red flag? | You don't have to search far to see that these practices go contrary to the principles behind Agile. One of the principles behind the Agile Manifesto states: Agile processes promote sustainable development. The sponsors,
developers, and users should be able to maintain a constant pace
indefinitely. A few years ago, Scrum made a subtle but important change. Instead of teams committing to the work that can be achieved, they forecast what they think they can get done. The change comes because of abuse, which sounds very much like the don't-go-home-until-it's-done attitude you describe. In development, there are many factors outside the team's control that they can't commit to - to use a weather analogy, you can't "commit" that it will be rainy tomorrow. To directly answer your questions: yes, Sprint is the name for an iteration in Scrum (see this answer for the difference); yes, teams should work at a sustainable pace - the only certainty of overtime is that it will reduce the team's productivity long term; and yes, it is a red flag! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38417/"
]
} |
223,384 | I recently ran across an idea put forth by Jaron Lanier called "phenotropic programming." The idea is to use 'surface' interfaces instead of single point interfaces in computer programs utilizing statistics to winnow out minor errors that would typically cause a "classical" program to catastrophically crash. The two-line description is here: According to Jaron, the 'real difference between the current idea of
software, which is protocol adherence, and the idea [he is]
discussing, pattern recognition, has to do with the kinds of errors
we're creating' and if 'we don't find a different way of thinking
about and creating software, we will not be writing programs bigger
than about 10 million lines of code no matter how fast our processors
become.' The slightly longer explanation is here .
And the even longer explanation is here . So, the question, looking past the obvious robot-overlord connotations that people tend to pick out, how would one actually design and write a "phenotropic program?" | Lanier has invented a 50 cent word in an attempt to cast a net around a specific set of ideas that describe a computational model for creating computer programs having certain identifiable characteristics. The word means: A mechanism for component interaction that uses pattern recognition or
artificial cognition in place of function invocation or message
passing. The idea comes largely from biology. Your eye interfaces with the world, not via a function like See(byte[] coneData) , but through a surface called the retina. It's not a trivial distinction; a computer must scan all of the bytes in coneData one by one, whereas your brain processes all of those inputs simultaneously. Lanier claims that the latter interface is more fault tolerant, which it is (a single slipped bit in coneData can break the whole system). He claims that it enables pattern matching and a host of other capabilities that are normally difficult for computers, which it does. The quintessential "phenotropic" mechanism in a computer system would be the Artificial Neural Network (ANN). It takes a "surface" as input, rather than a defined Interface. There are other techniques for achieving some measure of pattern recognition, but the neural network is the one most closely aligned with biology. Making an ANN is easy; getting it to perform the task that you want it to perform reliably is difficult, for a number of reasons: What do the input and output "surfaces" look like? Are they stable, or do they vary in size over time? How do you get the network structure right? How do you train the network? How do you get adequate performance characteristics? If you are willing to part with biology, you can dispense with the biological model (which attempts to simulate the operation of actual biological neurons) and build a network that is more closely allied with the actual "neurons" of a digital computer system (logic gates). These networks are called Adaptive Logic Networks (ALN). The way they work is by creating a series of linear functions that approximate a curve. The process looks something like this: ... where the X axis represents some input to the ALN, and the Y axis represents some output. Now imagine the number of linear functions expanding as needed to improve the accuracy, and imagine that process occurring across n arbitrary dimensions, implemented entirely with AND and OR logic gates, and you have some sense of what an ALN looks like. ALNs have certain, very interesting characteristics: They are fairly easily trainable, They are very predictable, i.e. slight changes in input do not produce wild swings in output, They are lightning fast, because they are built in the shape of a logic tree, and operate much like a binary search. Their internal architecture evolves naturally as a result of the training set So a phenotropic program would look something like this; it would have a "surface" for input, a predictable architecture and behavior, and it would be tolerant of noisy inputs. Further Reading An Introduction to Adaptive Logic Networks
With an Application to
Audit Risk Assessment "Object Oriented" vs "Message Oriented," by Alan Kay | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223384",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113950/"
]
} |
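As a rough, hypothetical illustration of the "series of linear functions" idea (my toy sketch, not Armstrong's actual ALN training algorithm): a convex curve such as y = x² can be approximated from below by taking the maximum of a handful of tangent lines; ALNs generalize this by learning the linear pieces and combining them with AND/OR (min/max) nodes.

```java
// Toy piecewise-linear approximation of f(x) = x^2 on [-1, 1]:
// the maximum of tangent lines at a few anchor points. Real ALNs learn the
// linear pieces and the min/max tree from data; this only shows the shape of the idea.
public class PiecewiseLinearSketch {
    // Tangent of x^2 at point a: y = 2a*x - a^2
    static double tangent(double a, double x) {
        return 2 * a * x - a * a;
    }

    static double approx(double x) {
        double[] anchors = {-1.0, -0.5, 0.0, 0.5, 1.0};
        double best = Double.NEGATIVE_INFINITY;
        for (double a : anchors) {
            best = Math.max(best, tangent(a, x));   // OR-like node over linear pieces
        }
        return best;
    }

    public static void main(String[] args) {
        for (int i = 0; i <= 8; i++) {
            double x = -1.0 + i * 0.25;
            System.out.printf("x=%5.2f  x^2=%5.3f  approx=%5.3f%n", x, x * x, approx(x));
        }
    }
}
```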
223,400 | Whenever a new project starts, it usually makes sense to start by committing straight to master until you've got something "stable", and then you start working in branches. At least, this is how I normally do it. Is there a way to immediately start branches from the second commit? Does it make sense to do it this way? Obviously, "Initial Commit" will always be on master, but after that, when will I know it's the right time to start making branches for new features? | Immediately. The key is the question of what the policy for Master is. With git, typically, the branch policy on Master is the buildable stable release. Sometimes, Master is the 'mainline' where branches are made from and merged to prior to merging to a Release branch. These are two different role/policy approaches. It is often a source of errors for people to change the role or the policy of a branch part way through the project. It is easier for a solo developer to communicate these changes out to contributors, but trying to get a dozen programmers to all recognize "Master is now at 1.0, please branch features rather than everyone pushing to it" is much harder. I touched on the policy approach above. The policy for Master is that it is the buildable stable release. Checking in small incremental changes into this means you don't have something buildable and stable at all times. Not checking in small changes goes against the "lots of small (but complete) checkins" policy that tends to be the best one (and is encouraged by easy branching). From a role-based perspective, you've started out with master carrying the mainline, release, maintenance, and development roles, and then at some point down the road the development and maintenance roles move to branches. This again means a change in what is allowed on master and can confuse contributors as to where things belong. It can also (slightly) confuse the branch history, encouraging large commits that mean bigger and harder-to-understand merges. Keep the roles and policies of the branches simple and consistent from the start. This "branch on policy change" problem can be seen in the Branching Patterns . The idea of each branch having roles can be read about in Advanced SCM Branching Strategies . Both of these are very good reads. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223400",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40984/"
]
} |
223,471 | In C++ a reference argument to a function allows the function to make the reference refer to something else: int replacement = 23;
void changeNumberReference(int& reference) {
reference = replacement;
}
int main() {
int i = 1;
std::cout << "i=" << i << "\n"; // i = 1;
changeNumberReference(i);
std::cout << "i=" << i << "\n"; // i = 23;
} Analogously, a constant reference argument to a function will throw a compile time error if we try to change the reference: void changeNumberReference(const int& reference) {
reference = replacement; // compile-time error: assignment of read-only reference 'reference'
} Now, with Java, the docs say that function arguments of non-primitive types are references. Example from the official docs: public void moveCircle(Circle circle, int deltaX, int deltaY) {
// code to move origin of circle to x+deltaX, y+deltaY
circle.setX(circle.getX() + deltaX);
circle.setY(circle.getY() + deltaY);
// code to assign a new reference to circle
circle = new Circle(0, 0);
} Then circle is assigned a reference to a new Circle object with x = y
= 0. This reassignment has no permanence, however, because the reference was passed in by value and cannot change. To me this doesn't look at all like C++ references. It doesn't resemble regular C++ references because you cannot make it refer to something else, and it doesn't resemble C++ const references because in Java, the code that would change (but really doesn't) the reference does not throw a compile-time error. This is more similar in behavior to C++ pointers. You can use it to change the pointed object's values, but you cannot change the pointer's value itself in a function. Also, as with C++ pointers (but not with C++ references), in Java you can pass "null" as the value for such an argument. So my question is: Why does Java use the notion of "reference"? Is it to be understood that they don't resemble C++ references? Or do they indeed really resemble C++ references and I'm missing something? | Why? Because, although consistent terminology is generally good for the entire profession, language designers don't always respect the language use of other language designers, particularly if those other languages are perceived as competitors. But really, neither use of 'reference' was a very good choice. "References" in C++ are simply a language construct to introduce aliases (alternative names for exactly the same entity) explicitly. Things would have been much clearer if they had simply called the new feature "aliases" in the first place. However, at that time the big difficulty was to make everyone understand the difference between pointers (which require dereferencing) and references (which don't), so the important thing was that it was called something other than "pointer", and not so much specifically what term to use. Java doesn't have pointers, and is proud of it, so using "pointer" as a term was not an option. However, the "references" that it does have behave quite a bit as C++'s pointers do when you pass them around - the big difference is that you can't do the nastier low-level operations (casting, adding...) on them, but they result in exactly the same semantics when you pass around handles to entities that are identical vs. entities that merely happen to be equal. Unfortunately, the term "pointer" carries so many negative low-level associations that it's unlikely ever to be accepted by the Java community. The result is that both languages use the same vague term for two rather different things, both of which might profit from a more specific name, but neither of which is likely to be replaced any time soon. Natural language, too, can be frustrating sometimes! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223471",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38218/"
]
} |
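A compact Java demonstration of the "references behave like pointers passed by value" point (added here for illustration; it is not part of the original answer): mutations through the parameter are visible to the caller, but rebinding the parameter is not.

```java
public class ReferenceDemo {
    static void mutateAndRebind(StringBuilder sb) {
        sb.append(" world");                       // visible to the caller: same object
        sb = new StringBuilder("something else");  // rebinds only the local copy of the reference
        sb.append("!");                            // affects the new, local object only
    }

    public static void main(String[] args) {
        StringBuilder text = new StringBuilder("hello");
        mutateAndRebind(text);
        System.out.println(text);   // prints "hello world", not "something else!"
    }
}
```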
223,634 | There is a popular quote by Jamie Zawinski : Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. How is this quote supposed to be understood? | Some programming technologies are not generally well-understood by programmers ( regular expressions , floating point , Perl , AWK , IoC ... and others ). These can be amazingly powerful tools for solving the right set of problems. Regular expressions in particular are very useful for matching regular languages. And there is the crux of the problem: few people know how to describe a regular language (it's part of computer science theory / linguistics that uses funny symbols - you can read about it at Chomsky hierarchy ). When dealing with these things, if you use them wrong it is unlikely that you've actually solved your original problem. Using a regular expression to match HTML (a far too common occurrence) will mean that you will miss edge cases. And now, you've still got the original problem that you didn't solve, and another subtle bug floating around that has been introduced by using the wrong solution. This is not to say that regular expressions shouldn't be used, but rather that one should work to understand what the set of problems they can solve and can't solve and use them judiciously. The key to maintaining software is writing maintainable code. Using regular expressions can be counter to that goal. When working with regular expressions, you've written a mini computer (specifically a non-deterministic finite state automaton ) in a special domain specific language. It's easy to write the 'Hello world' equivalent in this language and gain rudimentary confidence in it, but going further needs to be tempered with the understanding of the regular language to avoid writing additional bugs that can be very hard to identify and fix (because they aren't part of the program that the regular expression is in). So now you've got a new problem; you chose the tool of the regular expression to solve it (when it is inappropriate), and you've got two bugs now, both of which are harder to find, because they're hidden in another layer of abstraction. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223634",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/103266/"
]
} |
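To make the "right set of problems" point concrete, here is a small Java sketch of my own (not part of the original answer): the first pattern matches a genuinely regular token and is a fine use of regexes; the second looks like it matches HTML elements but quietly mishandles nesting, which is exactly the kind of subtle second problem the quote warns about.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class RegexScope {
    // Regular language: a fixed-shape token. A good fit for a regex.
    static final Pattern ISO_DATE = Pattern.compile("\\d{4}-\\d{2}-\\d{2}");

    // Looks plausible for HTML, but HTML is not a regular language.
    static final Pattern NAIVE_TAG = Pattern.compile("<([a-zA-Z]+)>(.*?)</\\1>");

    public static void main(String[] args) {
        System.out.println(ISO_DATE.matcher("2013-12-01").matches());   // true

        Matcher m = NAIVE_TAG.matcher("<b>outer <b>inner</b> trailing</b>");
        if (m.find()) {
            // Prints "outer <b>inner": the match stops at the first closing tag,
            // silently losing the nesting instead of failing loudly.
            System.out.println(m.group(2));
        }
    }
}
```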
223,714 | We all use if...else if...else. But I'm still confused as to why we use else if, when if does the same thing as else if. So why are we using else if? Any specific reasons behind this? Is there any algorithm where it's mandatory to use else if? | The main reason to use else if is to avoid excessive indentation. For example: if(a) {
} else {
if(b) {
} else {
if(c) {
} else {
}
}
} Can become: if(a) {
} else if(b) {
} else if(c) {
} Of course both of the pieces of code above are equivalent (which means it's impossible for the latter to be mandatory other than in style guides). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223714",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/114166/"
]
} |
223,918 | If I'm designing a one-page website, is it better to create an external file for my JS code, or just put it in the HTML?
Is putting it on the page faster to load?
Can I change the permissions to deny users' requests for the code, while the HTML page can still call the code? | You should put your JS code in a separate file because this makes it easier to test and develop. The question of how you serve the code is a different matter. Serving the HTML and the JS separately has the advantage that a client can cache the JS. This requires you to send appropriate headers so that the client does not issue a new request each time. Caching is problematic if you want to perform an update and thus want to invalidate the client caches. One method is to include a version number in the filename, e.g. /static/mylibrary-1.12.2.js . If the JS is in a separate file you cannot restrict access to it: It is difficult (technically: impossible) to tell if a request to a JS file was made because you referenced it on your HTML page, or because somebody wants to download it directly. You can however use cookies and refuse to serve clients that don't transmit certain cookies (but that would be silly). Serving the JS inside the HTML increases the size of each page – but this is OK if a client is unlikely to view multiple pages. Because the client does not issue a separate request for the JS, this strategy loads the page faster – for the first time at least, but there is a break-even point where caching is better. You can include the JS e.g. via PHP. Here the client does not need separate access to the JS file, which can be hidden if you like. But anyone can still view the JS code inside the HTML. Other strategies to minimize load times include JS minification, which reduces the size of the JS file you serve. As minification only happens once when deploying the code, this is a very efficient method to save bytes. OTOH this makes your code harder to understand for interested visitors. Related to minification is the practice of combining all your JS files into a single file. This reduces the number of necessary requests. Compression adds a computational overhead for each request on both the client and the server. However the time spent (de-)compressing is usually smaller than the time spent transmitting the uncompressed data. Compression is usually handled transparently by the server software. These techniques also apply to other resources like images. Images can be inlined into HTML or CSS with data-URLs. This is only practical for small, simple images as the base64 encoding inflates the size. This can still be faster than another request. Multiple small images (icons, buttons) can be combined into a single image, and then extracted as sprites. Images can be reduced by the server to the size in which they are actually used on the website, which saves bandwidth. Compare thumbnail images. For some graphics, text-based image formats like SVG can be a lot smaller. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223918",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/114467/"
]
} |
223,991 | Short introduction to this question. I have used TDD, and lately BDD, for over a year now. I use techniques like mocking to make writing my tests more efficient. Lately I started a personal project to write a little money management program for myself. Since I had no legacy code, it was the perfect project to start with TDD. Unfortunately I did not experience the joy of TDD so much. It even spoiled my fun so much that I gave up on the project. What was the problem? Well, I used the TDD-like approach of letting the tests / requirements evolve the design of the program. The problem was that over one half of the development time went to writing / refactoring tests. So in the end I did not want to implement any more features, because I would have needed to refactor and write too many tests. At work I have a lot of legacy code. Here I write more and more integration and acceptance tests and fewer unit tests. This does not seem to be a bad approach, since bugs are mostly detected by the acceptance and integration tests. My idea was that I could, in the end, write more integration and acceptance tests than unit tests. Like I said, for detecting bugs the unit tests are not better than integration / acceptance tests. Unit tests are also good for the design. Since I used to write a lot of them, my classes are always designed to be easily testable. Additionally, the approach of letting the tests / requirements guide the design leads in most cases to a better design. The last advantage of unit tests is that they are faster. I have written enough integration tests to know that they can be nearly as fast as the unit tests. After looking through the web I found out that there are very similar ideas to mine mentioned here and there . What do you think of this idea? Edit Responding to the questions, one example where the design was good, but I needed a huge refactoring for the next requirement: At first there were some requirements to execute certain commands. I wrote an extendable command parser which parsed commands from some kind of command prompt and called the correct one on the model. The results were represented in a view model class: There was nothing wrong here. All classes were independent from each other and I could easily add new commands and show new data. The next requirement was that every command should have its own view representation - some kind of preview of the result of the command. I redesigned the program to achieve a better design for the new requirement: This was also good, because now every command has its own view model and therefore its own preview. The thing is that the command parser was changed to use token-based parsing of the commands and was stripped of its ability to execute the commands. Every command got its own view model, and the data view model only knows the current command view model, which then knows the data which has to be shown. All I wanted to know at this point was whether the new design broke any existing requirement. I did not have to change ANY of my acceptance tests. I had to refactor or delete nearly EVERY unit test, which was a huge pile of work. What I wanted to show here is a common situation which happened often during the development. There was no problem with the old or the new designs; they just changed naturally with the requirements - as I understood it, this is one advantage of TDD, that the design evolves. Conclusion Thanks for all the answers and discussions.
In summary of this discussion, I have thought of an approach which I will test in my next project. First of all, I write all tests before implementing anything, like I always did. For requirements, I first write some acceptance tests which test the whole program. Then I write some integration tests for the components where I need to implement the requirement. If there is a component which works closely together with another component to implement this requirement, I would also write some integration tests where both components are tested together. Last but not least, if I have to write an algorithm or any other class with a high permutation - e.g. a serializer - I would write unit tests for these particular classes. All other classes are not covered by any unit tests. For bugs the process can be simplified. Normally a bug is caused by one or two components. In this case I would write one integration test for the components which tests the bug. If it is related to an algorithm, I would only write a unit test. If it is not easy to detect the component where the bug occurs, I would write an acceptance test to locate the bug - this should be the exception. | It's comparing oranges and apples. Integration tests, acceptance tests, unit tests, behaviour tests - they are all tests and they will all help you improve your code, but they are also quite different. I'm going to go over each of the different tests in my opinion and hopefully explain why you need a blend of all of them: Integration tests: Simply, test that different component parts of your system integrate correctly - for example - maybe you simulate a web service request and check that the result comes back. I would generally use real (ish) static data and mocked dependencies to ensure that it can be consistently verified. Acceptance tests: An acceptance test should directly correlate to a business use case. It can be huge ("trades are submitted correctly") or tiny ("filter successfully filters a list") - it doesn't matter; what matters is that it should be explicitly tied to a specific user requirement. I like to focus on these for test-driven development because it means we have a good reference manual of tests to user stories for dev and qa to verify. Unit tests: For small discrete units of functionality that may or may not make up an individual user story by itself - for example, a user story which says that we retrieve all customers when we access a specific web page can be an acceptance test (simulate hitting the web page and checking the response) but may also contain several unit tests (verify that security permissions are checked, verify that the database connection queries correctly, verify that any code limiting the number of results is executed correctly) - these are all "unit tests" that aren't a complete acceptance test. Behaviour tests: Define what the flow of an application should be in the case of a specific input. For example, "when a connection cannot be established, verify that the system retries the connection." Again, this is unlikely to be a full acceptance test but it still allows you to verify something useful. These are all in my opinion through much experience of writing tests; I don't like to focus on the textbook approaches - rather, focus on what gives your tests value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/223991",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91641/"
]
} |
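To make the distinction between unit and acceptance tests in the answer above concrete, here is a minimal, self-contained Python sketch; the ShoppingCart/checkout names are invented purely for illustration, and the tests are meant to be run with pytest.

class ShoppingCart:
    def __init__(self):
        self._items = []
    def add_item(self, name, price, quantity=1):
        self._items.append((name, price, quantity))
    def total(self):
        return sum(price * quantity for _, price, quantity in self._items)

def checkout(cart, pay):
    # Business use case: charge the cart total and hand back a receipt.
    amount = cart.total()
    pay(amount)
    return {"status": "paid", "amount": amount}

# Unit test: one small unit of logic, no collaborators involved.
def test_total_sums_price_times_quantity():
    cart = ShoppingCart()
    cart.add_item("book", price=10.0, quantity=2)
    assert cart.total() == 20.0

# Acceptance-style test: tied directly to the business use case "a customer can pay".
def test_customer_can_check_out():
    cart = ShoppingCart()
    cart.add_item("book", price=10.0, quantity=2)
    charged = []
    receipt = checkout(cart, pay=charged.append)  # fake payment gateway records the charge
    assert receipt == {"status": "paid", "amount": 20.0}
    assert charged == [20.0]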
224,063 | I am having a discussion with my co-worker on how much work a constructor can do. I have a class, B, that internally requires another object, A. Object A is one of a few members that class B needs to do its job.
All of its public methods depend on the internal object A. Information about object A is stored in the DB, so I try to validate and fetch it by looking it up in the DB in the constructor. My co-worker pointed out that the constructor should not do much work other than capture the constructor parameters. Since all of the public methods would fail anyway if object A is not found using the inputs to the constructor, I argued that instead of allowing an instance to be created and later fail, it is actually better to throw early in the constructor. Not all classes do this, but I found that the StreamWriter constructor also throws if it has a problem opening a file and doesn't delay the validation until the first call to Write. What do others think? I am using C# if that makes any difference. Reading Is there ever a reason to do all an object's work in a constructor? I wonder whether fetching object A by going to the DB is part of "any other initialization necessary to make the object ready to use", because if the user passed in wrong values to the constructor, I wouldn't be able to use any of its public methods. Constructors should instantiate the fields of an object and do any
other initialization necessary to make the object ready to use. This
generally means constructors are small, but there are scenarios
where this would be a substantial amount of work. | Your question is composed of two completely separate parts: Should I throw an exception from the constructor, or should I have the method fail? This is clearly an application of the fail-fast principle. Making the constructor fail is much easier to debug compared to having to find out why the method is failing. For example, you might get an instance that was already created in some other part of the code and get errors when calling its methods. Is it obvious that the object was created wrong? No. As for the "wrap the call in try/catch" problem: exceptions are meant to be exceptional . If you know some code will throw an exception, you do not wrap the code in try/catch; you validate the parameters of that code before you execute the code that can throw the exception. Exceptions are only meant as a way to ensure the system doesn't get into an invalid state. If you know some input parameters can lead to an invalid state, you make sure those parameters never happen. This way, you only have to do try/catch in places where you can logically handle the exception, which is usually at the system's boundary. Can I access "other parts of the system, like the DB" from the constructor? I think this goes against the principle of least astonishment . Not many people would expect a constructor to access a DB. So no, you should not do that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224063",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/114632/"
]
} |
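A small Python sketch of the fail-fast point in the answer above; the Account and repository names are hypothetical, and the repository is passed in rather than reached for globally, which also keeps the constructor free of surprising DB access.

class AccountNotFound(Exception):
    pass

class Account:
    # Fail fast: refuse to construct an object that could never work.
    def __init__(self, account_id, repository):
        record = repository.find(account_id)
        if record is None:
            raise AccountNotFound(account_id)  # thrown here, not on the first method call
        self._record = record
    def balance(self):
        return self._record["balance"]

class InMemoryRepository:
    # Trivial stand-in for the database, only to make the sketch runnable.
    def __init__(self, rows):
        self._rows = rows
    def find(self, key):
        return self._rows.get(key)

repo = InMemoryRepository({"42": {"balance": 100}})
print(Account("42", repo).balance())  # 100
try:
    Account("missing", repo)
except AccountNotFound as exc:
    print("rejected at construction time:", exc)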
224,132 | I've read Where are octals useful? and it seems like octals are something that were once upon a time useful. Many languages treat numbers prefixed with a 0 as octal, so the literal 010 is actually 8. A few among these are JavaScript, Python (2.7), and Ruby. But I don't really see why these languages need octal, especially when the more likely use of the notation is to denote a decimal number with a superfluous 0. JavaScript is a client-side language, so octal seems pretty useless. All three are pretty modern in other senses, and I don't think that there would be much code using octal notation that would be broken by removing this "feature". So, my questions are: Is there any point in these languages supporting octal literals? If octal literals are necessary, why not use something like 0o10 ? Why copy an old notation that overrides a more useful use case? | Blind copying of C, just like ratchet freak said in his comment The vast majority of "language designers" these days have never seen anything but C and its copies (C++, Java, Javascript, PHP, and probably a few dozen others I never heard of). They have never touched FORTRAN, COBOL, LISP, PASCAL, Oberon, FORTH, APL, BLISS, SNOBOL, to name a few. Once upon a time, exposure to multiple programming languages was MANDATORY in the computer science curriculum, and that didn't include counting C, C++, and Java as three separate languages. Octal was used in the earlier days because it made reading binary instruction values easier. The PDP-11, for example, BASICALLY had a 4-bit opcode, 2 3-bit register numbers, and 2 3-bit access mechanism fields. Expressing the word in octal made everything obvious. Because of C's early association with the PDP-11, octal notation was included, since it was very common on PDP-11s at the time. Other machines had instruction sets that didn't map well to hex. The CDC 6600 had a 60-bit word, with each word containing typically 2 to 4 instructions. Each instruction was 15 or 30 bits. As for reading and writing values, this is a solved problem, with a well-known industry best practice, at least in the defense industry. You DOCUMENT your file formats. There is no ambiguity when the format is documented, because the document TELLS you whether you are looking at a decimal number, a hex number, or an octal number. Also note: If your I/O system defaults to a leading 0 meaning octal, you have to use some other convention on your output to denote hexadecimal values. This is not necessarily a win. In my personal opinion, Ada did it best: 2#10010010#, 8#222#, 16#92#, and 146 all represent the same value. (That will probably get me at least three downvotes right there, just for mentioning Ada.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224132",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47255/"
]
} |
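As a quick illustration of the modern prefix the question asks about, this is how Python 3 handles it (it dropped the bare leading-zero form entirely):

print(0o10)               # 8   -- explicit octal prefix, unambiguous
print(0x92, 0b10010010)   # 146 146 -- hex and binary follow the same prefix pattern
print(int("222", 8))      # 146 -- parsing octal text explicitly
# A literal written as 010 is a SyntaxError in Python 3, precisely to avoid the
# "decimal number with a superfluous 0" trap described in the question.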
224,146 | As a "new" programmer (I first wrote a line of code in 2009), I've noticed it's relatively easy to create a program that exhibits quite complex elements today with things like .NET framework for example. Creating a visual interface, or sorting a list can be done with very few commands now. When I was learning to program, I was also learning computing theory in parallel. Things like sorting algorithms, principles of how hardware operates together, boolean algebra, and finite-state machines. But I noticed if I ever wanted to test out some very basic principle I'd learned in theory, it was always a lot more difficult to get started because so much technology is obscured by things like libraries, frameworks, and the OS. Making a memory-efficient program was required 40/50 years ago because there wasn't enough memory and it was expensive, so most programmers paid close attention to data types and how the instructions would be handled by the processor. Nowadays, some might argue that due to increased processing power and available memory, those concerns aren't a priority. My question is if older programmers see innovations like these as a godsend or an additional layer to abstract through, and why might they think so? And do younger programmers benefit more learning low-level programming BEFORE exploring the realms of expansive libraries? If so then why? | it just isn't necessary because of the increased amount of processing power and memory available. Having cheap memory, enormous disks and fast processors isn't the only thing that has freed people from the need to obsess over every byte and cycle. Compilers are now far, far better than humans at producing highly optimized code when it matters. Moreover, let's not forget what we're actually trying to optimize for, which is value produced for a given cost. Programmers are way more expensive than machines. Anything we do that makes programmers produce working, correct, robust, fully-featured programs faster and cheaper leads to the creation of more value in the world. My question though is how do people feel about this "hiding" of lower-level elements. Do you older programmers see it as a godsend or an unnecessary layer to get through? It is absolutely necessary to get any work done. I write code analyzers for a living; if I had to worry about register allocation or processor scheduling or any of those millions of other details then I would not be spending my time fixing bugs, reviewing performance reports, adding features, and so on. All of programming is about abstracting away the layer below you in order to make a more valuable layer on top of it. If you do a "layer cake diagram" showing all the subsystems and how they are built on each other you'll find that there are literally dozens of layers between the hardware and the user experience. I think in the Windows layer cake diagram there's something like 60 levels of necessary subsystems between the raw hardware and the ability to execute "hello world" in C#. Do you think younger programmers would benefit more learning low-level programming BEFORE exploring the realms of expansive libraries? You put emphasis on BEFORE, so I must answer your question in the negative. I'm helping a 12 year old friend learn to program right now and you'd better believe I'm starting them in Processing.js and not x86 assembler. If you start a young programmer in something like Processing.js they'll be writing their own shoot-em-up games in about eight hours. 
If you start them in assembler they'll be multiplying three numbers together in about eight hours. Which do you think is more likely to engage the interest of a younger programmer? Now if the question is "do programmers who understand layer n of the cake benefit from understanding layer n - 1?" the answer is yes, but that's independent of age or experience; it's always the case that you can improve your higher level programming by understanding better the underlying abstractions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224146",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/114728/"
]
} |
224,174 | I'm working on a project in which we are trying to apply both domain-driven design and REST to a service-oriented architecture. We aren't worrying about 100% REST compliance; it would probably be better to say we are trying to build resource-oriented HTTP APIs (~ Level 2 of Richardson's REST maturity model). Nevertheless, we are trying to stay away from RPC-style use of HTTP requests, i.e. we attempt to implement our HTTP verbs according to RFC2616 rather than using POST to do IsPostalAddressValid(...) , for example. However, an emphasis on this seems to be at the expense of our attempt to apply domain-driven design. With only GET , POST , PUT , DELETE and a few other rarely used methods, we tend to build CRUDdy services, and CRUDdy services tend to have anemic domain models. POST : Receive the data, validate it, dump it to the database. GET : Retrieve the data, return it. No real business logic there. We also use messages (events) between the services, and it seems to me that most of the business logic ends up being built around that. Are REST and DDD in tension at some level? (Or am I misunderstanding something here? Are we maybe doing something else wrong?) Is it possible to build a strong domain model in a service-oriented architecture while avoiding RPC-style HTTP calls? | Martin Fowler's first law of distributed systems: "Don't distribute your objects!" Remote interfaces should be coarse-grained and internal interfaces fine-grained. Often a rich domain model only applies within a bounded context . A REST API separates two different contexts, each having its own internal model. The contexts communicate through a coarse-grained interface (the REST API) using "anemic" objects (DTOs). In your case it sounds like you are trying to spread a context over a boundary, that boundary being the REST API. This can lead to a fine-grained remote interface or an anemic model. Depending on your project, it may or may not be a problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224174",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47624/"
]
} |
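A small Python sketch of the split described in the answer above — a rich model inside the bounded context, and a flat "anemic" DTO crossing the REST boundary; the Order names are invented for illustration.

from dataclasses import dataclass

class Order:
    # Inside the bounded context: behaviour and invariants live here.
    def __init__(self, order_id, lines):
        if not lines:
            raise ValueError("an order needs at least one line")
        self.order_id = order_id
        self.lines = lines  # list of (sku, quantity, unit_price)
    def total(self):
        return sum(qty * price for _, qty, price in self.lines)

@dataclass
class OrderDto:
    # Crossing the boundary: coarse-grained, behaviour-free, maps directly to the JSON payload.
    order_id: str
    total: float

def to_dto(order):
    return OrderDto(order_id=order.order_id, total=order.total())

order = Order("o-1", [("book", 2, 10.0)])
print(to_dto(order))  # OrderDto(order_id='o-1', total=20.0)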
224,197 | Is it ok to just use git only locally? I don't want to have to pay for a service that provides private repositories (such as Github) but I think git is a great way to organize my closed-source project. | While it's entirely reasonable and possible to use git locally, it's better to have backup. You can arbitrarily push repos to basically anywhere. Github just happens to be easy hosting and collaboration. There are other options such as using Google Drive or Dropbox if you want remote storage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/86887/"
]
} |
224,371 | I need help creating view models for the following scenario: Deep, hierarchical data Multiple views for the same set of data Each view is a single, dynamically-changing view, based on the active selection Depending on the value of a property, display different types of tabs in a tab control My questions: Should I create a view-model representation for each view (VM1, VM2, etc)? 1. Yes:
a. Should I model the entire hierarchical relationship? (ie, SubVM1, HouseVM1, RoomVM1)
b. How do I keep all hierarchies in sync? (e.g, adding/removing nodes)
2. No:
a. Do I use a huge, single view model that caters for all views? Here's an example of a single view Figure 1: Multiple views updated based on active room. Notice Tab control Figure 2: Different active room. Multiple views updated. Tab control items changed based on object's property. Figure 3: Different selection type. Entire view changes | To answer the question, Yes, each view should have its own View Model.
But there is no need to model the entire hierarchy. Only what the view needs. The problem I had with most online resources regarding MVVM: in most examples, the View is almost a 1-to-1 mapping of the Model.
But in my scenario, where there are different views for different facets of the same Model, I find myself stuck between two choices: One monolithic view model that is used by all other view models Or one view model for each view But neither is ideal. The Model-oriented View Model (MVM), while low in code duplication, is a nightmare to maintain. The View-oriented View Model (VVM) produces highly-specialised classes for each view, but contains duplicates. In the end, I decided that having one VM per View is easier to maintain and code for, so I went with the VVM approach. Once the code was working, I began refactoring all common properties and operations into the current, final form: In this final form, the common view model class is composed into each VVM. Of course, I still have to decide what is considered common/specialised. And when a view is added/merged/deleted, this balance changes. But the nice thing about this is that I am now able to push members up/down from common to VVM and vice versa easily. And a quick note regarding keeping the objects in sync: having a Common View Model takes care of most of this. Each VVM can simply have a reference to the same Common View Model. I also tend to start with simple callback methods, and evolve to events/observers if the need for multiple listeners arises. And for really complex events (ie, unexpected cascading updates), I would switch over to using a Mediator. I do not shy away from code where a child has a back reference to its parent. Anything to get the code working. And if the opportunity to refactor arises, I would take it. The lessons I learnt: Ugly/Working code > Beautiful/Non-working code It is easier to merge multiple small classes than it is to break up a huge class | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224371",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113997/"
]
} |
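A compact Python sketch of the final shape described in the answer above — one view model per view, each composing the same shared common view model; the class names are illustrative only.

class CommonViewModel:
    # State shared by every view: the active selection plus a way to observe changes.
    def __init__(self):
        self._listeners = []
        self.active_room = None
    def on_change(self, callback):
        self._listeners.append(callback)
    def select_room(self, room):
        self.active_room = room
        for notify in self._listeners:
            notify(room)

class RoomDetailsViewModel:
    def __init__(self, common):
        self.common = common
        common.on_change(self.refresh)
    def refresh(self, room):
        print("details view now showing:", room)

class RoomListViewModel:
    def __init__(self, common):
        self.common = common
        common.on_change(self.refresh)
    def refresh(self, room):
        print("list view highlights:", room)

common = CommonViewModel()
RoomDetailsViewModel(common)
RoomListViewModel(common)
common.select_room("Kitchen")  # both view models refresh from the one shared state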
224,434 | I understand why floats served a purpose in the past. And I think I can see why they're useful in some simulation examples today. But I think those example are more exceptional than common. So I don't understand why floats are more prevalent in simple simulations rather than very high precision integers. A classic argument is that floats provide a greater range, but high precision integers can meet this challenge now. For example: with modern 64-bit processors, we can do fast integer calculations up to 2^64. The solar system is a little less than 10 billion km in width. 10 billion km divided by 2^64 is about 5 microns. Isn't being able to represent position within the solar system to the precision of half a human hair enough? On the flip-side, rounding errors from floating calculations can present problems. You need to consider the scale of the calculations to make certain that you're not inadvertently introducing error to your simulation. So why do personal computers even need FPUs anymore? Why not just leave floats to the supercomputers? | Your argumentation against floating point numbers is very fragile,
probably because of naivety. (No offense here, I find your question is
actually very interesting, I hope my answer will also be.) A classic argument is that floats provide a greater range, but high
precision integers can meet this challenge now. For example: with
modern 64-bit processors, we can do fast integer calculations up to
2^64. The solar system is a little less than 10 billion km in
width. 10 billion km divided by 2^64 is about 5 microns. Isn't being
able to represent position within the solar system to the precision
of half a human hair enough? You seem to make an implicit statement, according to which once we know
the scale of our problem, we can use fixed point arithmetic with respect
to this scale to solve that problem. Sometimes, this is a valid approach, and this is the one picked by
Knuth to implement distance computations in TeX. What makes the use
of fixed point arithmetic pertinent in this case is that all
quantities appearing within a computation are either integers or distances
occurring in a typesetting problem. Because the field of applications
is so narrow, it makes sense to choose a very small unit length, much
smaller than what the human eye can perceive, and to convert all
quantities into multiples of this unit. This leads to a very important result: in the typographical problems relying on this representation of numbers, we never need to multiply two lengths together, so the loss of precision caused by multiplications in fixed point arithmetic does not occur. Most of the time, however, it is a terrible approach; here are a few
reasons why: There exist physical constants and you cannot always adapt their
units in a sensible way. Consider your solar system setting. The gravitational constant is
6.67×10−11 N·(m/kg)², the speed of light is 3.00×10+8 m/s, the mass
of the Sun is 1.9891×10+30 kg and the mass of the Earth is
5.97219×10+24 kg. In your fixed point setting, you will not be able
to represent the gravitational constant to a satisfying precision.
So you will change the unit. But by doing so, you have to replace
each number—replacing well-known, familiar quantities, by cryptic
values. Furthermore, it is very likely that finding a system to
appropriately represent all constants you need
might not even be possible. Think of quantum physicists working
with infinitely small particles whose speed is near the speed of light. There exist unitless mathematical constants. The value of Pi is 3.1415 (up to the 4th decimal place), without any unit attached. There are
actually a lot of similar useful constants that cannot be
accurately represented in an arbitrary fixed point system. In the
solar system setting you described, we can represent Pi with 6 decimal places,
which gives terrible accuracy when computing the circumference of
a planet orbit, for instance. In a fixed point system, we need to know in advance the size of
the quantity you are computing. Assume that we still do not know the value of the gravitational constant. We would
make a lot of measurements and write a computer program to find an approximation of that
constant. Unfortunately, in the solar system setting you described, the gravitational
constant is represented by 0, which would be the rather useless result of our computation. Some mathematical functions will not work well with fixed precision
arithmetic, because of their growth rate. The most important ones are the exponential and the gamma function, which practically
means that every program working with anything other than polynomials will be flawed. In fixed point arithmetic, it is very hard to multiply and divide
numbers correctly. This is because if we do not know a priori the size of the numbers, we cannot tell
if their product will fit in the representation. That is, we would have to check
manually for precision underflow before each multiplication. Conclusion: While the conclusion of your question implies that fixed point arithmetic could be
sufficient for all-purpose computations and that floating point
arithmetic should be reserved for supercomputers, it is precisely the
converse which is true: floating point arithmetic is a very good and
very sensible tool for all-purpose computations, while fixed point
will only do well in very specific, well-analysed cases. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224434",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
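A short Python sketch of two of the failure modes described in the answer above, using the roughly 5-micron solar-system unit from the question; the numbers are the ones already quoted, and the code itself is only an illustration.

UNIT_M = 5e-6  # one fixed-point step of about 5 microns, as in the question

def to_fixed(value_in_m):
    return round(value_in_m / UNIT_M)  # store everything as an integer count of units

# 1. Physical constants collapse at this scale.
G = 6.67e-11  # magnitude of the gravitational constant
print(to_fixed(G))  # 0 -- unrepresentable, exactly the "rather useless" result above

# 2. Multiplying two fixed-point quantities silently changes the unit and explodes the range.
r = to_fixed(6.371e6)  # Earth's radius (~6371 km) in 5-micron units
area_raw = r * r       # now in units of (5 microns)^2, and far too big for a 64-bit integer
print(r, area_raw)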
224,798 | I am a big fan of agile development and used XP on a very successful project a few years ago. I loved everything about it: the iterative development approach, writing code around a test, pair programming, having a customer on site to run things by. It was a highly productive work environment and I never felt like I was under pressure. However, the last few places I have worked use/used Scrum. I know it's the poster child for agile development these days but I'm not 100% convinced it is agile. Below are the two main reasons why it just doesn't feel agile to me. Project Managers Love It Project managers, who by their very nature are obsessed with timelines, all seem to love Scrum. In my experience they seem to use the Sprint Backlog as a means to track time requirements and keep a record of how much time was spent on a given task. Instead of using a whiteboard they all use an Excel sheet, which each developer is required to fill out, religiously. In my opinion this is way too much documentation/time tracking for an agile process. Why would I waste time estimating how long a task is going to take me when I can just get on with the task itself? Or similarly, why would I waste time documenting how long a task took when I can move on to the next task at hand? Standup Meetings The standup meetings in the previous place I worked were a nightmare. Every day we had to explain what we had done yesterday and what we were going to do that day. If we went over on our time "estimate" for a task the project manager would kick up a stink, and reference the Sprint Backlog as a means of showing how incompetent you are for not adhering to the timeline. Now I understand the need for communication, but surely the tone of daily meetings should be lighthearted and focus on knowledge sharing. I don't think it should turn into a where's-your-homework style charade. Also, surely the whole point of agile is that timelines change; they shouldn't be set in stone. Conclusion The idea of agile is to make the software better by making the developers' lives easier. Therefore, in my opinion, any agile process used by a team should be developer led. I don't think having a project manager use a process they have labeled "agile" to track a project has anything to do with agile development. Thoughts anyone? | Yes. Even one of the "fathers" of agile doesn't agree that Scrum is really agile : youtube.com/watch?v=hG4LH6P8Syk – Euphoric I think this link from one of the comments above really says it all. It's worth a watch: Uncle Bob gives a brief history of Scrum and basically says Scrum is not an Agile development process because Scrum has evolved over time to become a management process . The reason behind this appears to be that it was project managers, and not developers, who were taking the Scrum courses. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224798",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
224,914 | A colleague of mine works as a consultant. He was recently asked to help a software development company improve their processes. The company is aware of some of the issues, like the number of bugs or the fact that they can't release their lead product more often than once per year, but nobody there knows what to do. My colleague identified that they have a primitive in-house process which they call Scrum, but which isn't. My colleague wants to suggest moving to “the real Scrum”. If he does that, he will encounter two issues: Managers will reply that they already use Agile, and that it wasn't helpful. It would be particularly tricky¹ to explain both to the managers and to the CEO of the company that it was not the real Scrum which was used until now, but rather a mix between Scrumfall and Scrumbutt. What to do? ¹ In this company, two of the three managers claim they know Agile and Scrum perfectly well and apply them flawlessly to their respective teams. Claiming that they know nothing about either of those two subjects would create a hard-to-handle situation. My colleague's role is also not only to give his opinion, but to apply the “good” methodology for the next three months within this company. Starting by claiming that the managers are incompetent in this context is not a solution. | Don't fight them on a systemic level. What they are doing is what Scrum means to them, so using the word Scrum to mean anything but what they're doing will not make any sense to them. Don't even fight the system. Some of the things they do might be working for them, even if they're not Scrum. Scrum is not a one-size-fits-all solution; I haven't even read two books on Scrum that can agree on all the details. It's all just advice, which may or may not work in the context of any given company. As a consultant, the job is to sit down and take stock of all the things that are going wrong and find the solutions. You can pull those solutions from Scrum or XP or Kanban, or you can just make them up. It doesn't matter. But don't try to teach them something they believe they already know. Don't tell them that they're flat-out wrong, even if they are. Just figure out what they're doing wrong -- by which I mean, very specifically, what isn't working for them -- and teach them to do it right. For example, why can't they release monthly, weekly or whenever they choose? Do they need a better (D)VCS? Do they need a continuous delivery system? Do they need to improve their deployment process? Teach them how to do those things, rather than pushing a single label on them for their entire development process. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224914",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
224,929 | Is CSV considered a good option against XML and JSON for programming languages? I generally use XML and JSON (or sometimes a plain text file) as flat file storage. However, recently I came across a CSV implementation in PHP . I have generally seen CSV used for Excel file inputs, but I have never used it in programming. Would it be better than XML or JSON in any way? | The answer is, it depends. CSV is great for certain use cases. For example, as a "streaming" format for large datasets, it's easier to stream than XML/JSON, and CSV files take much less storage space. I use it to stream datasets in the gigabyte range where other formats are impractical. It's also really common in certain industries when dealing with legacy systems and workflows. Try importing JSON into MS Excel. The ODI recently commented about CSV, calling 2014 "The year of CSV". For "proper" CSV formatting, consider using the CSV MIME type in your HTTP responses. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/224929",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50008/"
]
} |
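A small Python illustration of the streaming point in the answer above; the file path and the "amount" column are hypothetical.

import csv

def total_sales(path):
    # CSV can be consumed row by row, so a multi-gigabyte export never has to fit in memory.
    total = 0.0
    with open(path, newline="") as handle:
        for row in csv.DictReader(handle):  # yields one dict per row, lazily
            total += float(row["amount"])
    return total

# By contrast, json.load() has to parse the whole document before you see the first
# record, which is a large part of what makes CSV attractive for big tabular exports.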
225,207 | Many Builder Pattern examples make the Builder an inner class of the object it builds. This makes some sense since it indicates what the Builder builds. However, in a statically typed language we know what the Builder builds. On the other hand if the Builder is an inner class, you should know what class the Builder builds without looking inside of the Builder . Also, having the builder as an inner class reduces the number of imports since it can be referenced by the outer class--if you care about that. And then there are practical examples where the Builder is in the same package, but not an inner class, like StringBuilder . You know that the Builder should build a String because it is named so. That being said, the only good reason I can think of for making a Builder an inner class is that you know what the class' Builder is without knowing its name or relying on naming conventions. For example, if StringBuilder was an inner class of String I probably would have known it existed sooner than I did (speculative). Are there any other reasons to make the Builder an inner class or does it just come down to preference and ritual? | I think that the reason for doing this is so that the inner class (the Builder) can access private members of the class that it is building. From http://docs.oracle.com/javase/tutorial/java/javaOO/nested.html Non-static nested classes (inner classes) have access to other members
of the enclosing class, even if they are declared private. ... Consider two top-level classes, A and B, where B needs access to
members of A that would otherwise be declared private. By hiding class
B within class A, A's members can be declared private and B can access
them. ... As with instance methods and variables, an inner class is associated
with an instance of its enclosing class and has direct access to that
object's methods and fields. Here's some code to try to illustrate this: class Example {
private int x;
public int getX() { return this.x; }
public static class Builder {
public Example Create() {
Example instance = new Example();
instance.x = 5; // Builder can access Example's private member variable
return instance;
}
}
} As a static class, Builder doesn't have a specific instance of Example that it is tied to. However, given an instance of Example (or one that it creates itself) Builder can still access the private members of that instance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225207",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/115840/"
]
} |
225,238 | This is less a question about the nature of duck typing and more about staying pythonic, I suppose. First of all - when dealing with dicts, in particular when the structure of the dict is fairly predictable and a given key is not typically present but sometimes is, I first think of two approaches: if myKey in dict:
do_some_work(dict[myKey])
else:
pass And of course Ye Olde 'forgiveness vs permission' approach. try:
do_some_work(dict[myKey])
except KeyError:
pass As a journeyman Python guy, I feel like I see the latter preferred a lot, which only feels odd I guess because in the Python docs try/excepts seem to be preferred when there is an actual mistake, as opposed to an, um… absence of success? If the occasional dict does not have key in myDict, and it is known that it will not always have that key, is a try/except contextually misleading? This isn't a programming error, it's just a fact of the data - this dict just didn't have that particular key. This seems particularly important when you look at the try/except/else syntax, which looks to be really useful when it comes to making sure that try isn't catching too many errors. You're able to do something like: try:
foo += bar
except TypeError:
pass
else:
return some_more_work(foo) Isn't that going to lead to swallowing all kinds of weird errors that are probably the result of some bad code? The above code might just be preventing you from seeing that you're trying to add 2 + {} and you may never realize that some part of your code has gone horribly wrong. I don't suggest that we should Check All The Types, that's why it's Python and not JavaScript - but again with the context of try/except, it seems like it's supposed to catch the program doing something it shouldn't be doing, instead of enabling it to carry on. I realize the above example is something of a straw-man argument, and is in fact intentionally bad. But given the pythonic creed of better to ask forgiveness than permission, I can't help but feel like it begs the question of where the line in the sand actually is between correct application of if/else vs try/except, in particular when you know what to expect out of the data you're working with. I'm not even talking about speed concerns or best practice here, I'm just sort of quietly confused by the perceived Venn diagram of cases where it looks like it could go either way, but people err on the side of a try/except because 'someone somewhere said it was Pythonic'. Have I drawn the wrong conclusions about the application of this syntax? | Use the get method instead: some_dict.get(the_key, default_value) ...where default_value is the value returned if the_key is not in some_dict . If you omit default_value , None is returned if the key is missing. In general, in Python, people tend to prefer try/except to checking something first - see the EAFP entry in the glossary . Note that many "test for membership" functions use exceptions behind the scenes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225238",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
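A quick sketch of the three equivalent spellings discussed above:

inventory = {"apples": 3}

# 1. Look before you leap
if "pears" in inventory:
    count = inventory["pears"]
else:
    count = 0

# 2. Easier to ask forgiveness than permission
try:
    count = inventory["pears"]
except KeyError:
    count = 0

# 3. The get() method from the answer: the default is returned when the key is missing
count = inventory.get("pears", 0)
print(inventory.get("pears"))  # None when no default is given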
225,343 | Java 8 has a whole new library for dates and times in the package java.time which is a very welcome thing to anyone who has had to use JodaTime before or to hassle with making their own date processing helper methods. Many classes in this package represent timestamps and have helper methods like getHour() to get hours from a timestamp, getMinute() to get minutes from a timestamp, getNano() to get nanos from a timestamp etc... I noticed that they don't have a method called getMillis() to get the millis of the timestamp. Instead one would have to call the method get(ChronoField.MILLI_OF_SECOND) . To me it seems like an inconsistency in the library. Does anyone know why such a method is missing, or as Java 8 is still in development is there a possibility that it will be added later? https://docs.oracle.com/javase/8/docs/api/java/time/package-summary.html The classes defined here represent the principal date-time concepts, including instants, durations, dates, times, time-zones and periods. They are based on the ISO calendar system, which is the de facto world calendar following the proleptic Gregorian rules. All the classes are immutable and thread-safe. Each date time instance is composed of fields that are conveniently made available by the APIs. For lower level access to the fields refer to the java.time.temporal package. Each class includes support for printing and parsing all manner of dates and times. Refer to the java.time.format package for customization options... Example of this kind of class: https://docs.oracle.com/javase/8/docs/api/java/time/LocalDateTime.html A date-time without a time-zone in the ISO-8601 calendar system, such as 2007-12-03T10:15:30. LocalDateTime is an immutable date-time object that represents a date-time, often viewed as year-month-day-hour-minute-second. Other date and time fields, such as day-of-year, day-of-week and week-of-year, can also be accessed. Time is represented to nanosecond precision. For example, the value "2nd October 2007 at 13:45.30.123456789" can be stored in a LocalDateTime ... | JSR-310 is based on nanoseconds, not milliseconds. As such, the minimal set of sensible methods is based on hour, minute, second and nanosecond. The decision to have a nanosecond base was one of the original decisions of the project, and one that I strongly believe to be correct. Adding a method for millis would overlap that of nanosecond in a non-obvious way. Users would have to think about whether the nano field was nano-of-second or nano-of-milli, for example. Adding a confusing additional method is not desirable, so the method was omitted. As pointed out, the alternative get(MILLI_OF_SECOND) is available. FWIW, I would oppose adding the getMillis() method in the future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225343",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/115940/"
]
} |
225,400 | What are the main benefits of Facebook's React over the upcoming Web Components spec and vice versa (or perhaps a more apples-to-apples comparison would be to Google's Polymer library)? According to this JSConf EU talk and the React homepage, the main benefits of React are: Decoupling and increased cohesion using a component model Abstraction, Composition and Expressivity Virtual DOM & Synthetic events (which basically means they completely re-implemented the DOM and its event system) Enables modern HTML5 event stuff on IE 8 Server-side rendering Testability Bindings to SVG, VML, and <canvas> Almost everything mentioned is being integrated into browsers natively through Web Components except this virtual DOM concept (obviously). I can see how the virtual DOM and synthetic events can be beneficial today to support old browsers, but isn't throwing away a huge chunk of native browser code kind of like shooting yourself in the foot in the long term? As far as modern browsers are concerned, isn't that a lot of unnecessary overhead/reinventing of the wheel? Here are some things I think React is missing that Web Components will take care of for you. Correct me if I'm wrong. Native browser support (read "guaranteed to be faster") Write script in a scripting language, write styles in a styling language, write markup in a markup language. Style encapsulation using Shadow DOM React instead has this , which requires writing CSS in JavaScript. Not pretty. Two-way binding | Update: this answer seems to be pretty popular so I took some time to clean it up a little bit, add some new info and clarify few things that I thought were not clear enough. Please comment if you think anything else needs clarification or updates. Most of your concerns are really a matter of opinion and personal preference but I'll try to answer as objectively as I can: Native vs. Compiled Write JavaScript in vanilla JavaScript, write CSS in CSS, write HTML
in HTML. Back in the day there were hot debates whether one should write native Assembly by hand or use a higher level language like C to make the compiler generate Assembly code for you. Even before that people refused to trust assemblers and preferred to write native machine code by hand ( and I'm not joking ). Meanwhile, today there are a lot of people who write HTML in Haml or Jade , CSS in Sass or Less and JavaScript in CoffeeScript or TypeScript . It's there. It works. Some people prefer it, some don't. The point is that there is nothing fundamentally wrong in not writing JavaScript in vanilla JavaScript, CSS in CSS and HTML in HTML. It's really a matter of preference. Internal vs. External DSLs Style encapsulation using Shadow DOM
React instead has this, which requires writing CSS in JavaScript. Not pretty. Pretty or not, it is certainly expressive. JavaScript is a very powerful language, much more powerful than CSS (even including any of CSS preprocessors). It kind of depends on whether you prefer internal or external DSLs for those sorts of things. Again, a matter of preference. (Note: I was talking about the inline styles in React that was referenced in the original question.) Types of DSLs - explanation Update: Reading my answer some time after writing it I think that I need to explain what I mean here. DSL is a domain-specific language and it can be either internal (using syntax of the host language like JavaScript - like for example React without JSX, or like the inline styles in React mentioned above) or it can be external (using a different syntax than the host language - like in this example would be inlining CSS (an external DSL) inside JavaScript). It can be confusing because some literature uses different terms than "internal" and "external" to describe those kinds of DSLs. Sometimes "embedded" is used instead of "internal" but the word "embedded" can mean different things - for example Lua is described as "Lua: an extensible embedded language" where embedded has nothing to do with embedded (internal) DSL (in which sense it is quite the opposite - an external DSL) but it means that it is embedded in the same sense that, say, SQLite is an embedded database. There is even eLua where "e" stands for "embedded" in a third sense - that it is meant for embedded systems ! That's why I don't like using the term "embedded DSL" because things like eLua can be "DSLs" that are "embedded" in two different senses while not being an "embedded DSL" at all! To make things worse some projects introduce even more confusion to the mix. Eg. Flatiron templates are described as "DSL-free" while in fact it is just a perfect example of an internal DSL with syntax like: map.where('href').is('/').insert('newurl'); That having been said, when I wrote "JavaScript is a very powerful language, much more powerful than CSS (even including any of CSS preprocessors). It kind of depends on whether you prefer internal or external DSLs for those sorts of things. Again, a matter of preference." I was talking about those two scenarios: One: /** @jsx React.DOM */
var colored = {
color: myColor
};
React.renderComponent(<div style={colored}>Hello World!</div>, mountNode); Two: // SASS:
.colored {
color: $my-color;
}
// HTML:
<div class="colored">Hello World!</div> The first example uses what was described in the question as: "writing CSS in JavaScript. Not pretty." The second example uses Sass. While I agree that using JavaScript to write CSS may not be pretty (for some definitions of "pretty") but there is one advantage of doing it. I can have variables and functions in Sass but are they lexically scoped or dynamically scoped? Are they statically or dynamically typed? Strongly or weakly? What about the numeric types? Type coersion? Which values are truthy and which are falsy? Can I have higher-order functions? Recursion? Tail calls? Lexical closures? Are they evaluated in normal order or applicative order? Is there lazy or eager evaluation? Are arguments to functions passed by value or by reference? Are they mutable? Immutable? Persistent? What about objects? Classes? Prototypes? Inheritance? Those are not trivial questions and yet I have to know answers to them if I want to understand Sass or Less code. I already know those answers for JavaScript so it means that I already understand every internal DSL (like the inline styles in React) on those very levels so if I use React then I have to know only one set of answers to those (and many similar) questions, while when I use for eg. Sass and Handlebars then I have to know three sets of those answers and understand their implications. It's not to say that one way or the other is always better but every time you introduce another language to the mix then you pay some price that may not be as obvious at a first glance, and this price is complexity. I hope I clarified what I originally meant a little bit. Data binding Two-way binding This is a really interesting subject and in fact also a matter of preference. Two-way is not always better than one-way. It's a question of how do you want to model mutable state in your application. I always viewed two-way bindings as an idea somewhat contrary to the principles of functional programming but functional programming is not the only paradigm that works, some people prefer this kind of behavior and both approaches seem to work pretty well in practice. If you're interested in the details of the design decisions related to the modeling of the state in React then watch the talk by Pete Hunt (linked to in the question) and the talk by Tom Occhino and Jordan Walke who explain it very well in my opinion. Update: See also another talk by Pete Hunt: Be predictable, not correct: functional DOM programming . Update 2: It's worth noting that many developers are arguing against bidirectional data flow, or two-way binding, some even call it an anti-pattern. Take for example the Flux application architecture that explicitly avoids the MVC model (that proved to be hard to scale for large Facebook and Instagram applications) in favor of a strictly unidirectional data flow (see the Hacker Way: Rethinking Web App Development at Facebook talk by Tom Occhino, Jing Chen and Pete Hunt for a good introduction). 
Also, a lot of critique against AngularJS (the most popular Web framework that is loosely based on the MVC model, known for two-way data binding) includes arguments against that bidirectional data flow, see: Things I Wish I Were Told About Angular.js by Ruoyu Sun You have ruined HTML by Danny Tuppeny AngularJS: The Bad Parts by Lars Eidnes What’s wrong with Angular.js by Jeff Whelpley ("Two way databinding is an anti-pattern.") Why you should not use Angular.js by Egor Koshelko ("Two-way data binding is not how to handle events in principle.") Update 3: Another interesting article that nicely explains some of the issues disscussed above is Deconstructing ReactJS's Flux - Not using MVC with ReactJS by Mikael Brassman, author of RefluxJS (a simple library for unidirectional data flow application architecture inspired by Flux). Update 4: Ember.js is currently going away from the two-way data binding and in future versions it will be one-way by default. See: The Future of Ember talk by Stefan Penner from the Embergarten Symposium in Toronto on November 15th, 2014. Update 5: See also: The Road to Ember 2.0 RFC - interesting discussion in the pull request by Tom Dale : "When we designed the original templating layer, we figured that making all data bindings two-way wasn't very harmful: if you don't set a two-way binding, it's a de facto one-way binding! We have since realized (with some help from our friends at React), that components want to be able to hand out data to their children without having to be on guard for wayward mutations. Additionally, communication between components is often most naturally expressed as events or callbacks . This is possible in Ember, but the dominance of two-way data bindings often leads people down a path of using two-way bindings as a communication channel . Experienced Ember developers don't (usually) make this mistake, but it's an easy one to make." [emphasis added] Native vs. VM Native browser support (read "guaranteed to be faster") Now finally something that is not a matter of opinion. Actually here it is exactly the other way around. Of course "native" code can be written in C++ but what do you think the JavaScript engines are written in? As a matter of fact the JavaScript engines are truly amazing in the optimizations that they use today - and not only V8 any more, also SpiderMonkey and even Chakra shines these days. And keep in mind that with JIT compilers the code is not only as native as it can possibly be but there are also run time optimization opportunities that are simply impossible to do in any statically compiled code. When people think that JavaScript is slow, they usually mean JavaScript that accesses the DOM. The DOM is slow. It is native, written in C++ and yet it is slow as hell because of the complexity that it has to implement. Open your console and write: console.dir(document.createElement('div')); and see how many properties an empty div element that is not even attached to the DOM has to implement. These are only the first level properties that are "own properties" ie. 
not inherited from the prototype chain: align, onwaiting, onvolumechange, ontimeupdate, onsuspend, onsubmit, onstalled, onshow, onselect, onseeking, onseeked, onscroll, onresize, onreset, onratechange, onprogress, onplaying, onplay, onpause, onmousewheel, onmouseup, onmouseover, onmouseout, onmousemove, onmouseleave, onmouseenter, onmousedown, onloadstart, onloadedmetadata, onloadeddata, onload, onkeyup, onkeypress, onkeydown, oninvalid, oninput, onfocus, onerror, onended, onemptied, ondurationchange, ondrop, ondragstart, ondragover, ondragleave, ondragenter, ondragend, ondrag, ondblclick, oncuechange, oncontextmenu, onclose, onclick, onchange, oncanplaythrough, oncanplay, oncancel, onblur, onabort, spellcheck, isContentEditable, contentEditable, outerText, innerText, accessKey, hidden, webkitdropzone, draggable, tabIndex, dir, translate, lang, title, childElementCount, lastElementChild, firstElementChild, children, nextElementSibling, previousElementSibling, onwheel, onwebkitfullscreenerror, onwebkitfullscreenchange, onselectstart, onsearch, onpaste, oncut, oncopy, onbeforepaste, onbeforecut, onbeforecopy, webkitShadowRoot, dataset, classList, className, outerHTML, innerHTML, scrollHeight, scrollWidth, scrollTop, scrollLeft, clientHeight, clientWidth, clientTop, clientLeft, offsetParent, offsetHeight, offsetWidth, offsetTop, offsetLeft, localName, prefix, namespaceURI, id, style, attributes, tagName, parentElement, textContent, baseURI, ownerDocument, nextSibling, previousSibling, lastChild, firstChild, childNodes, parentNode, nodeType, nodeValue, nodeName Many of them are actually nested objects - to see second level (own) properties of an empty native div in your browser, see this fiddle . I mean seriously, onvolumechange property on every single div node? Is it a mistake? Nope, it's just a legacy DOM Level 0 traditional event model version of one of the event handlers "that must be supported by all HTML elements , as both content attributes and IDL attributes" [emphasis added] in Section 6.1.6.2 of the HTML spec by W3C - no way around it. Meanwhile, these are the first level properties of a fake-DOM div in React: props, _owner, _lifeCycleState, _pendingProps, _pendingCallbacks, _pendingOwner Quite a difference, isn't it? In fact this is the entire object serialized to JSON ( LIVE DEMO ), because hey you actually can serialize it to JSON as it doesn't contain any circular references - something unthinkable in the world of native DOM ( where it would just throw an exception ): {
"props": {},
"_owner": null,
"_lifeCycleState": "UNMOUNTED",
"_pendingProps": null,
"_pendingCallbacks": null,
"_pendingOwner": null
} This is pretty much the main reason why React can be faster than the native browser DOM - because it doesn't have to implement this mess . See this presentation by Steven Luscher to see what is faster: native DOM written in C++ or a fake DOM written entirely in JavaScript. It's a very fair and entertaining presentation. Update: Ember.js in future versions will use a virtual DOM heavily inspired by React to improve perfomance. See: The Future of Ember talk by Stefan Penner from the Embergarten Symposium in Toronto on November 15th, 2014. To sum it up: features from Web Components like templates, data binding or custom elements will have a lot of advantages over React but until the document object model itself gets significantly simplified then performance will not be one of them. Update Two months after I posted this answers there was some news that is relevant here. As I have just written on Twitter , the lastest version of the Atom text editor written by GitHub in JavaScript uses Facebook's React to get better performance even though according to Wikipedia "Atom is based on Chromium and written in C++" so it has full control of the native C++ DOM implementation (see The Nucleus of Atom ) and is guaranteed to have support for Web Components since it ships with its own web browser. It is just a very recent example of a real world project that could've used any other kind of optimization typically unavailable to Web applications and yet it has chosen to use React which is itself written in JavaScript, to achieve best performance, even though Atom was not built with React to begin with, so doing it was not a trivial change. Update 2 There is an interesting comparison by Todd Parker using WebPagetest to compare performance of TodoMVC examples written in Angular, Backbone, Ember, Polymer, CanJS, YUI, Knockout, React and Shoestring. This is the most objective comparison that I've seen so far. What is significant here is that all of the respective examples were written by experts in all of those frameworks, they are all available on GitHub and can be improved by anyone who thinks that some of the code could be optimized to run faster. Update 3 Ember.js in future versions will include a number of React's features that are discussed here (including a virtual DOM and unidirectional data binding, to name just a few) which means that the ideas that originated in React are already migrating into other frameworks. See: The Road to Ember 2.0 RFC - interesting discussion in the pull request by Tom Dale (Start Date: 2014-12-03): "In Ember 2.0, we will be adopting a "virtual DOM" and data flow model that embraces the best ideas from React and simplifies communication between components." As well, Angular.js 2.0 is implementing a lot of the concepts discussed here. Update 4 I have to elaborate on few issues to answer this comment by Igwe Kalu: "it is not sensible to compare React (JSX or the compilation output) to
plain JavaScript, when React ultimately reduces to plain JavaScript.
[...]
Whatever strategy React uses for DOM insertion can be applied without
using React. That said, it doesn't add any special benefits when
considering the feature in question other than the convenience."
(full comment here ) In case it wasn't clear enough, in part of my answer I am comparing the performance of operating directly on the native DOM (implemented as host objects in the browser) vs. React's fake/virtual DOM (implemented in JavaScript). The point I was trying to make is that the virtual DOM implemented in JavaScript can outperform the real DOM implemented in C++ and not that React can outperform JavaScript (which obviously wouldn't make much sense since it is written in JavaScript). My point was that "native" C++ code is not always guaranteed to be faster than "not-native" JavaScript. Using React to illustrate that point was just an example. But this comment touched an interesting issue. In a sense it is true that you don't need any framework (React, Angular or jQuery) for any reason whatsoever (like performance, portability, features) because you can always recreate what the framework does for you and reinvent the wheel - if you can justify the cost, that is. But - as Dave Smith nicely put it in How to miss the point when comparing web framework performance : "When comparing two web frameworks, the question is not can my app be fast with framework X. The question is will my app be fast with framework X." In my 2011 answer to: What are some empirical technical reasons not to use jQuery I explain a similar issue, that it is not impossible to write portable DOM-manipulation code without a library like jQuery, but that people rarely do so. When using programming languages, libraries or frameworks, people tend to use the most convenient or idiomatic ways of doing things, not the perfect but inconvenient ones. The true value of good frameworks is making easy what would otherwise be hard to do - and the secret is making the right things convenient. The result is still having exactly the same power at your disposal as the simplest form of lambda calculus or the most primitive Turing machine, but the relative expressiveness of certain concepts means that those very concepts tend to get expressed more easily or at all, and that the right solutions are not just possible but actually implemented widely. Update 5 React + Performance = ? article by Paul Lewis from July 2015 shows an example where React is slower than vanilla JavaScript written by hand for an infinite list of Flickr pictures, which is especially significant on mobile. This example shows that everyone should always test performance for specific use case and specific target platforms and devices. Thanks to Kevin Lozandier for bringing it to my attention . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225400",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60485/"
]
} |
225,523 | I'm trying to get to grips with unit testing. Say we have a die which has a default number of sides equal to 6 (but can be 4, 5 sided, etc.): import random
class Die():
def __init__(self, sides=6):
self._sides = sides
def roll(self):
return random.randint(1, self._sides) Would the following be valid/useful unit tests? test a roll in the range 1-6 for a 6 sided die test a roll of 0 for a 6 sided die test a roll of 7 for a 6 sided die test a roll in the range 1-3 for a 3 sided die test a roll of 0 for a 3 sided die test a roll of 4 for a 3 sided die I'm just thinking that these are a waste of time as the random module has been around for long enough, but then I think if the random module gets updated (say I update my Python version) then at least I'm covered. Also, do I even need to test other variations of die rolls e.g. the 3 in this case, or is it good to cover another initialized die state? | You are right, your tests should not verify that the random module is doing its job; a unittest should only test the class itself, not how it interacts with other code (which should be tested separately). It is of course entirely possible that your code uses random.randint() wrong; or you might be calling random.randrange(1, self._sides) instead and your die never throws the highest value, but that'd be a different kind of bug, not one you could catch with a unittest. In that case, your die unit is working as designed, but the design itself was flawed. In this case, I'd use mocking to replace the randint() function, and only verify that it has been called correctly. Python 3.3 and up comes with the unittest.mock module to handle this type of testing, but you can install the external mock package on older versions to get the exact same functionality: import unittest
try:
from unittest.mock import patch
except ImportError:
# < python 3.3
from mock import patch
@patch('random.randint', return_value=3)
class TestDice(unittest.TestCase):
def _make_one(self, *args, **kw):
from die import Die
return Die(*args, **kw)
def test_standard_size(self, mocked_randint):
die = self._make_one()
result = die.roll()
mocked_randint.assert_called_with(1, 6)
self.assertEqual(result, 3)
def test_custom_size(self, mocked_randint):
die = self._make_one(sides=42)
result = die.roll()
mocked_randint.assert_called_with(1, 42)
self.assertEqual(result, 3)
if __name__ == '__main__':
unittest.main() With mocking, your test is now very simple; there are only 2 cases, really. The default case for a 6-sided die, and the custom sides case. There are other ways to temporarily replace the randint() function in the global namespace of Die , but the mock module makes this easiest. The @mock.patch decorator here applies to all test methods in the test case; each test method is passed an extra argument, the mocked random.randint() function, so we can test against the mock to see if it it indeed has been called correctly. The return_value argument specifies what is returned from the mock when it is called, so we can verify that the die.roll() method indeed returned the 'random' result to us. I've used another Python unittesting best practice here: import the class under test as part of the test. The _make_one method does the importing and instantiation work within a test , so that the test module will still load even if you made a syntax error or other mistake that'll prevent the original module to import. This way, if you made a mistake in the module code itself, the tests will still be run; they'll just fail, telling you about the error in your code. To be clear, the above tests are simplistic in the extreme. The goal here is not to test that random.randint() has been called with the right arguments, for example. Instead, the goal is to test that the unit is producing the right results given certain inputs, where those inputs include the results of other units not under test. By mocking the random.randint() method you get to take control over just another input to your code. In real world tests, the actual code in your unit-under-test is going to be more complex; the relationship with inputs passed to the API and how other units are then invoked can be interesting still, and mocking will give you access to intermediate results, as well as let you set the return values for those calls. For example, in code that authenticates users against a 3rd party OAuth2 service (a multi-stage interaction), you want to test that your code is passing the right data to that 3rd party service, and lets you mock out different error responses that that 3rd party service would return, letting you simulate different scenarios without having to build a full OAuth2 server yourself. Here it is important to test that information from a first response have been handled correctly and have been passed on to a second stage call, so you do want to see that the mocked service is being called correctly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225523",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116090/"
]
} |
225,541 | A few days ago, I appeared in an interview in a software company for .net developer. There I was asked the following question: If we close browser window without getting logout, what will happen when
we provide same URL in the browser? My answer was that user will be able to view the home page with login details instead of being prompted to provide username and password as session of the user has not ended. Login information will actually be retrieved from cookies. But it depends upon the logic we have implemented for Login. But the interviewer didn't seem be satisfied with my answer and didn't accept my answer. I am wondering what might be answer of this question. So, I need your kind guidance regarding correct answers. So please explain what I was supposed to reply. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225541",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116111/"
]
} |
225,674 | In most Java code, I see people declare Java objects like this: Map<String, String> hashMap = new HashMap<>();
List<String> list = new ArrayList<>(); instead of: HashMap<String, String> hashMap = new HashMap<>();
ArrayList<String> list = new ArrayList<>(); Why is there a preference to define the Java object using the interface rather than the implementation that is actually going to be used? | The reason is that the implementation of these interfaces is usually not relevant when handling them, therefore if you oblige the caller to pass a HashMap to a method, then you're essentially obliging which implementation to use. So as a general rule, you're supposed to handle its interface rather than the actual implementation and avoid the pain and suffering which might result in having to change all method signatures using HashMap when you decide you need to use LinkedHashMap instead. It should be said that there are exceptions to this when implementation is relevant. If you need a map when order is important, then you can require a TreeMap or a LinkedHashMap to be passed, or better still SortedMap which doesn't specify a specific implementation. This obliges the caller to necessarily pass a certain type of implementation of Map and strongly hints that order is important. That said, could you override SortedMap and pass an unsorted one? Yes, of course, however expect bad things to happen as a result. However best practice still dictates that if it isn't important, you shouldn't use specific implementations. This is true in general. If you're dealing with Dog and Cat which derive from Animal , in order to make best use of inheritance, you should generally avoid having methods specific to Dog or Cat . Rather all methods in Dog or Cat should override methods in Animal and it will save you trouble in the long run. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225674",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60732/"
]
} |
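The advice in the answer above is language-agnostic; as a hedged illustration of the same "program to the interface" idea outside Java, here is a minimal Python sketch (the function and variable names are invented for the example) that accepts the abstract Mapping type from collections.abc rather than a concrete dict:

from collections.abc import Mapping

def describe_users(users: Mapping) -> list:
    # Only Mapping operations are used, so any mapping implementation
    # (dict, OrderedDict, a read-only proxy, ...) can be passed in, and the
    # signature never has to change if the concrete type does.
    return ["{}: {}".format(user_id, name) for user_id, name in users.items()]

print(describe_users({"u1": "Alice", "u2": "Bob"}))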
225,893 | I am currently working on a software project that performs compression and indexing on video surveillance footage. The compression works by splitting background and foreground objects, then saving the background as a static image, and the foreground as a sprite. Recently, I have embarked on reviewing some of the classes that I have designed for the project. I noticed that there are many classes that only have a single public method. Some of these classes are: VideoCompressor (with a compress method that takes in an input video of type RawVideo and returns an output video of type CompressedVideo ). VideoSplitter (with a split method that takes in an input video of type RawVideo and returns a vector of 2 output videos, each of type RawVideo ). VideoIndexer (with an index method that takes in an input video of type RawVideo and returns a video index of type VideoIndex ). I find myself instantiating each class just to make calls like VideoCompressor.compress(...) , VideoSplitter.split(...) , VideoIndexer.index(...) . On the surface, I do think the class names are sufficiently descriptive of their intended function, and they are actually nouns. Correspondingly, their methods are also verbs. Is this actually a problem? | No, this is not a problem, quite the opposite. It is a sign of modularity and clear responsibility of the class. The lean interface is easy to grasp from the viewpoint of a user of that class, and it will encourage loose coupling. This has many advantages but almost no drawbacks. I wish more components would be designed that way! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225893",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116555/"
]
} |
225,956 | I am writing a script that does something to a text file (what it does is irrelevant for my question though). So before I do something to the file I want to check if the file exists. I can do this, no problem, but the issue is more that of aesthetics. Here is my code, implementing the same thing in two different ways. def modify_file(filename):
assert os.path.isfile(filename), 'file does NOT exist.'
Traceback (most recent call last):
File "clean_files.py", line 15, in <module>
print(clean_file('tes3t.txt'))
File "clean_files.py", line 8, in clean_file
assert os.path.isfile(filename), 'file does NOT exist.'
AssertionError: file does NOT exist. or: def modify_file(filename):
if not os.path.isfile(filename):
return 'file does NOT exist.'
file does NOT exist. My question is: which method is better for letting the user know that the file does not exist? Using the assert method seems somehow more pythonic. | You'd go with a third option instead: use raise and a specific exception. This can be one of the built-in exceptions , or you can create a custom exception for the job. In this case, I'd use IOError , but a ValueError might also fit: def modify_file(filename):
if not os.path.isfile(filename):
raise IOError('file does NOT exist.') Using a specific exception allows you to raise other exceptions for different exceptional circumstances, and lets the caller handle the exception gracefully. Of course, many file operations (like open() ) themselves raise OSError already; explicitly first testing if the file exists may be redundant here. Don't use assert ; if you run python with the -O flag, all assertions are stripped from the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/225956",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116625/"
]
} |
226,002 | I have a web service. Right now, I have passwords stored in plain text in a MySQL table on my server. I know this isn't the best practice, and that is why I am working on it. Why should passwords be encrypted if they are being stored in a secure database? I realize that if someone hacks in to my database they will get everyone's password. But I have other problems if someone gets in my database, for example, deleting data. The scenario I can think of is that you are hacked. You restore a database from a couple of hours ago and everything is well. However, if your passwords are plaintext... The thief has all the passwords and you have to reset them all. Hassle to your users. If the passwords were encrypted, you could just restore to previous database. Is this correct thinking? | First up, you should be more free with read-only access rights than read-write. It might be possible that a hacker has access to your data but isn't able to edit it. But, much more importantly, this is not about you. The fact that you might be screwed if someone has full access to your database is irrelevant. Much more important is your user's data. If you recover your database, the hacker still has access to your user's account. And who knows what else? What if they use the same password at Google? Or PayPal? What if that gives a hacker access to their mother's maiden name, or the last 4 digits of their credit card? What if that gets them into other accounts? Don't put it past a hacker to go through a user support system and get more info . Just ... just don't. That's your user's private information and you don't need to be able to see it. It's also your reputation. Encrypt it. EDIT: One extra comment, to save any future reader from reading every answer and comment ... If you're going to encrypt (in the strictest sense) then you need to use a public / private key pair , which is fine but makes life a little bit more difficult for both you and your user. A simpler, and just as effective, solution is to random-salt and hash the password. Hashing alone is not enough; if your user uses a common password, it will appear in reverse-hashing tables, which are readily available with a simple internet search. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226002",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116683/"
]
} |
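To make the "random-salt and hash" advice at the end of the answer above concrete, here is a minimal Python sketch using only the standard library; the function names and the iteration count are my own illustrative choices, not something prescribed by the answer:

import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative PBKDF2 work factor; tune for your hardware

def hash_password(password):
    # Store (salt, key), never the plain-text password.
    salt = os.urandom(16)  # a per-user random salt defeats precomputed tables
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, key

def verify_password(password, salt, stored_key):
    # Recompute with the stored salt and compare in constant time.
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_key)

salt, key = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, key))  # True
print(verify_password("wrong guess", salt, key))                   # False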
226,155 | I have a very large in memory node tree and need to traverse the tree. Passing the returned values of each child node to their parent node. This has to be done until all the nodes have their data bubble up to the root node. Traversal works like this. private Data Execute(Node pNode)
{
Data[] values = new Data[pNode.Children.Count];
for(int i=0; i < pNode.Children.Count; i++)
{
values[i] = Execute(pNode.Children[i]); // recursive
}
return pNode.Process(values);
}
public void Start(Node pRoot)
{
Data result = Execute(pRoot);
} This works fine, but I'm worried that the call stack limits the size of the node tree. How can the code be rewritten so that no recursive calls to Execute are made? | Here is a general purpose tree traversal implementation that doesn't use recursion: public static IEnumerable<T> Traverse<T>(T item, Func<T, IEnumerable<T>> childSelector)
{
var stack = new Stack<T>();
stack.Push(item);
while (stack.Any())
{
var next = stack.Pop();
yield return next;
foreach (var child in childSelector(next))
stack.Push(child);
}
} In your case you can then call it like so: IEnumerable<Node> allNodes = Traverse(pRoot, node => node.Children); Use a Queue instead of a Stack for a breadth-first, rather than depth-first, search. Use a PriorityQueue for a best-first search. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226155",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52871/"
]
} |
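The answer above notes that swapping the Stack for a Queue turns the traversal into a breadth-first search. A rough Python transliteration of that idea (not the original C# code; the sample tree is made up) could look like this, with collections.deque playing the role of the queue:

from collections import deque

def traverse(item, child_selector):
    # Iterative breadth-first traversal: no recursion, so no call-stack limit.
    queue = deque([item])
    while queue:
        node = queue.popleft()        # FIFO order gives breadth-first
        yield node
        for child in child_selector(node):
            queue.append(child)

tree = {"value": 1, "children": [{"value": 2, "children": []},
                                 {"value": 3, "children": []}]}
print([n["value"] for n in traverse(tree, lambda n: n["children"])])  # [1, 2, 3]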
226,229 | If my code contains a known defect which should be fixed, but isn't yet, and won't be fixed for the current release, and might not be fixed in the forseeable future, should there be a failing unit test for that bug in the test suite? If I add the unit test, it will (obviously) fail, and getting used to having failing tests seems like a bad idea. On the other hand, if it is a known defect, and there is a known failing case, it seems odd to keep it out of the test suite, as it should at some point be fixed, and the test is already available. | The answer is yes, you should write them and you should run them. Your testing framework needs a category of "known failing tests" and you should mark these tests as falling into that category. How you do that depends on the framework. Curiously, a failing test that suddenly passes can be just as interesting as a passing test that unexpectedly fails. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226229",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11068/"
]
} |
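For what it's worth, Python's unittest ships with exactly the "known failing tests" category the answer above asks for (pytest has a similar xfail marker). A small hypothetical example; the bug number and the normalize() helper are invented:

import unittest

def normalize(name):
    # Stand-in for a production function with a known, not-yet-fixed defect.
    return name

class KnownDefectTests(unittest.TestCase):

    @unittest.expectedFailure
    def test_bug_1234_accented_filenames(self):
        # Runs on every build: reported as an expected failure while the bug
        # exists, and as an unexpected success the moment it starts passing.
        self.assertEqual(normalize("héllo.txt"), "hello.txt")

if __name__ == "__main__":
    unittest.main()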
226,440 | Recently, I came across a number of open source Ruby (or majority of it was Ruby) projects on GitHub that when checked with a code analyzing tool like Rubocop , create a lot of offenses . Now, most of these offenses include using double quotation marks instead of single quotes (when not interpolation), not following the 2 spaces per level rule, exceeding the 80 character line length rule, or using { and } for multi-line blocks. [The] Ruby style guide recommends best practices so that real-world
Ruby programmers can write code that can be maintained by other
real-world Ruby programmers. ~ Source: Ruby Style Guide Although they are small and easy to fix, is it appropriate to change the coding style of an open source project by fixing the offenses and making a Pull Request? I acknowledge that some projects, like Rails, do not accept cosmetic changes and some are just too large to "fix" all at once (Rails for example generates over 80,000 offenses when Rubocop is run - regardless, they have their own small set of coding conventions to follow when contributing). After all, the Ruby Style Guide is there for a reason together with tools like Rubocop. People appreciate consistency so making these kinds of changes is kind of doing a good thing for the Ruby community in general, right? [The author(s) of the Ruby Style Guide] didn't come up with all the
rules out of nowhere - they are mostly based on my extensive career as
a professional software engineer, feedback and suggestions from
members of the Ruby community and various highly regarded Ruby
programming resources, such as "Programming Ruby 1.9" and "The Ruby
Programming Language". ~ Source: Ruby Style Guide Isn't not following community coding style conventions and best practices basically encouraging bad practices? | Ask the maintainers. Coding style is a quite subjective discussion, and rules like maximum line length of 80 characters are fairly subjective - while general agreement should be that shorter lines are better to read, 80 might be too restrictive for some with today's screen sizes and IDE's. Other rules can be ignored on purpose, too. For instance, a developer might consider global use of double quotes better for him and be willing to accept the "risk" of accidental interpolation and an extremely small increase on parsing time. Many maintainers also don't like large coding style changes as they are boring to review and there is a chance that it might introduce errors. For example, a string could be switched to single quote, even though it contained a deliberate interpolation and should have been using double quotes. Maintainers prefer to do style cleanups while working on that actual code so they can verify style changes don't introduce new bugs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226440",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
226,521 | Most Java applications don't look the same as C/C++ applications. Swing might have been designed on purpose to have a distincitve look, but based on what I've read, SWT for example tried to 'look native', and doesn't completley succeed. My question is: Why is it hard for the developers of the Java language to design a GUI system that copies exactly the look of native GUIs? What's different in native GUIs? Isn't it only a matter of designing buttons that look like 'native' buttons? Or does it go deeper than that? | Isn't it only a matter of designing buttons that look like 'native' buttons? Well - sort of, for buttons. But this might be harder than you imagine. These days the graphics used to represent GUI components aren't as simple as random bitmaps that are stretched (since these don't scale very well at all) - they're often vector based graphics with a lot of corner cases programmed into them (so when the button reaches the edge of the screen it may look slightly different, for instance.) And of course, you'll need different graphics when a button is clicked. For copyright reasons, developers often can't just use these existing graphics outright, so they have to be recreated - and while they do a good job for the most part, inevitably some things get missed given the huge array of graphical components out there. This is much less severe than it used to be - these days if I set the default platform look and feel in Swing, I notice very little that looks odd. I'm saying all the above based on Swing, which is of course a lightweight, non-native GUI toolkit. You specifically mention SWT not looking native, which is a bit odd, because SWT is native. It's a toolkit that uses JNI underneath to call native components - so if something doesn't look right there, it's not going to be because of the look and feel. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226521",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/117390/"
]
} |
226,534 | My manager wants me to refactor a gigantic amount of terribly-coupled, macroed, full of private namespace methods, hierarchy-perverted (even 10+ levels of inheritance) code which hasn't been (indeed) designed with a "whole-vision" concept. I tried to explain to him that this code needs to be rewritten, not just refactored but he didn't listen and stated explicitly that I need to refactor it.. somehow . I could dive into all the "dealing with legacy code" books but I think nothing useful would come out. How can I convince him that a rewriting, instead of a refactoring, is actually needed? | He's probably right. If the codebase is so monstrous, so gigantically complicated, so difficult to understand... what makes you think you can write something that does the same thing correctly? Generally a big refactoring is the best place to start - start ripping bits out and combining them into reusable chunks; tidy up the code so its easier to view; flatten the inheritance hierarchy and remove the dangling edge-cases; combine the namespaces; unroll the macros; whatever it takes to turn the big mess into a reasonably understandable system. You have to understand it before you can rewrite it - otherwise you will just end up with a different mess. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22929/"
]
} |
226,573 | Tried to search the web and couldn't find an answer. It might have something to do with "load", but that doesn't make much sense to me. Obviously, "ln" was already taken, but where does that "d" come from? | Linkers in Linux were originally called loaders. See Assembly Language Step-by-Step: Programming with Linux by Jeff Duntemann: Linking the Object code File ...Linux comes with its own linker, called ld. (The name is actually short for "load", and "loader" was what linkers were originally called, in the First Age of Unix, back in the 1970s.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/226573",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78254/"
]
} |
227,639 | I have two plug-ins. One has the GNU LGPL 3 license and the other has the Apache Software License, Version 2.0. Can I use them in my commercial app?
And if yes, what precautions should I take? | Can I use them in my commercial app? It depends on what you intend to do with the software that you produce. Firstly, neither ASL 1 , GPL or LGPL make any restrictions on what you can use software to do inside your organization. The restrictions are all on code that is distributed outside of your organization. For GPL the restriction is that if you incorporate GPL'ed code into your own software, AND you then distribute your software outside of your organization, THEN you must make the source code available under the terms of the GPL or a compatible open source license. So if you use GPL'ed code in your application and you distribute it, then your application must be open source ... or else you are violating the license. For LGPL, the restriction (see above) only applies to the source-code of the LGPL'ed library itself; i.e. if you change the library. If you just use the library, you are not required to make your source code available. There is also a restriction that the LGPL code in your application must be replaceable by the user of your code. That means (in effect) that if you distribute your code as binaries only, then you cannot statically link your code against that the library. You must use dynamic linking. For ASL, the only significant restriction is that you must say if you have changed anything from the original version the ASL'ed code that you using. Finally, just to make it clear, neither GPL, LPGL or ASL places any restriction on your purpose in using the software. And that includes whether your purpose is to make money. They just constrain the way you can make money ... and in the case of LGPL and ASL, the constraint is pretty minimal. And if yes, what precautions should I take? For LGPL and ASL, no precautions are necessary. IANAL - I am not a lawyer. If you need to be sure, ask a real, qualified expert; i.e. a lawyer who specializes in software IP law. 1 - For the purposes of this answer, ASL == Apache Software License version 2. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/227639",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/118490/"
]
} |
227,868 | I've always recognized the importance of utilizing design patterns. I'm curious as to how other developers go about choosing the most appropriate one. Do you use a series of characteristics (like a flowchart) to help you decide? For example: If objects are related, but we do not want to specify concrete class, consider Abstract When instantiation is left to derived classes, consider Factory Need to access elements of an aggregate object sequentially, try Iterator or something similar? | A key misconception in today's coding world is that patterns are building blocks. You take an AbstractFactory here and a Flyweight there and maybe a Singleton over there and connect them together with XML and presto, you've got a working application. They're not. Hmm, that wasn't big enough. Patterns are not building blocks That's better. A pattern is something that you use when you find that you've got a problem - you need some flexibility that the pattern provides, or that you've stumbled across when you are making a little language in the config file and you say "wait a moment, stop, this is its own interpreter that I'm writing — this is a known and solved problem, use an Interpreter pattern ." But note there, that it's something that you discover in your code, not something you start out with. The creators of Java didn't say "Oh, we'll put a Flyweight in the Integer" at the start, but rather realized a performance issue that could be solved by a flyweight . And thus, there's no "flow chart" that you use to find the right pattern. The pattern is a solution to a specific type of problem that has been encountered again and again and the key parts of it distilled into a Pattern. Starting out with the Pattern is like having a solution and looking for a problem. This is a bad thing: it leads to over engineering and ultimately inflexibility in design. As you are writing code, when you realize that you're writing a Factory, you can say "ah ha! that's a factory I'm about to write" and use your knowledge of knowing the Factory pattern to rapidly write the next bit of code without trying to rediscover the Factory pattern. But you don't start out with "I've got a class here, I'll write a factory for it so that it can be flexible" — because it won't. Here's an excerpt from an interview with Erich Gamma (of Gamma, Helm, Johnson, and Vissides ): How to Use Design Patterns : Trying to use all the patterns is a bad thing, because you will end up with synthetic designs—speculative designs that have flexibility that no one needs. These days software is too complex. We can't afford to speculate what else it should do. We need to really focus on what it needs. That's why I like refactoring to patterns. People should learn that when they have a particular kind of problem or code smell, as people call it these days, they can go to their patterns toolbox to find a solution. The best help for the "what to use, when" is likely the Wikipedia page for software design pattern - the "Classification and list" section describes the category each pattern is in and what it does. There's no flowchart; the description there is probably the best you'll find as a short snippet for "what to use, when." Note that you'll find different patterns in different areas of programming. Web design has its own set of patterns while JEE (not web design) has another set of patterns. The patterns for financial programming are completely different to those for stand alone application UI design. 
So any attempt to list them all is inherently incomplete. You find one, figure out how to use it and then it eventually becomes second nature and you don't need to think about how or when to use it ever again (until someone asks you to explain it). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/227868",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/118762/"
]
} |
227,893 | I usually try to follow the advice of the book Working Effectively with Legacy Cod e . I break dependencies, move parts of the code to @VisibleForTesting public static methods and to new classes to make the code (or at least some part of it) testable. And I write tests to make sure that I don't break anything when I'm modifying or adding new functions. A colleague says that I shouldn't do this. His reasoning: The original code might not work properly in the first place. And writing tests for it makes future fixes and modifications harder since devs have to understand and modify the tests too. If it's GUI code with some logic (~12 lines, 2-3 if/else block, for example), a test isn't worth the trouble since the code is too trivial to begin with. Similar bad patterns could exist in other parts of the codebase, too (which I haven't seen yet, I'm rather new); it will be easier to clean them all up in one big refactoring. Extracting out logic could undermine this future possibility. Should I avoid extracting out testable parts and writing tests if we don't have time for complete refactoring? Is there any disadvantage to this that I should consider? | Here's my personal unscientific impression: all three reasons sound like widespread but false cognitive illusions. Sure, the existing code might be wrong. It might also be right. Since the application as a whole seems to have value to you (otherwise you'd simply discard it), in the absence of more specific information you should assume that it is predominantly right. "Writing tests makes things harder because there's more code involved overall" is a simplistic, and very wrong, attitude. By all means expend your refactoring, testing and improvement efforts in the places where they add the most value with the least effort. Value-formatting GUI subroutines are often not the first priority. But not testing something because "it's simple" is also a very wrong attitude. Virtually all severe errors are committed because people thought they understood something better than they actually did. "We will do it all in one big swoop in the future" is a nice thought. Usually the big swoop stays firmly in the future, while in the present nothing happens. Me, I'm firmly of the "slow and steady wins the race" conviction. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/227893",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/118798/"
]
} |
227,922 | Is saving SQL statements in a MySQL table for executing later a bad idea? The SQL statements will be ready for execution, i.e. there will be no parameters to swap or anything, for example DELETE FROM users WHERE id=1 . I guess I'm being lazy, but I thought of this idea because I'm working on a project that will require quite a few cron jobs that will execute SQL statements periodically. | It's safe, if that's what you're asking. As long as you're as careful about your security as you are with your data's security. But don't reinvent the wheel, Stored Procedures ARE bits of SQL stored in a table. And they support, nay encourage, parameterisation. Also note, you can make your security simpler AND reduce the number of points of failure AND reduce the network communications by using the MySql Event Scheduler instead of cron. Other databases have equivalents to these, for good reason. You're not the first to need this functionality. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/227922",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47519/"
]
} |
227,991 | I came across this question a second ago, and I'm pulling some of the material off of there: Is there a name for the 'break n' construct? This appears to be a needlessly complex way for people to have to instruct the program to break out of a double-nested for loop: for (i = 0; i < 10; i++) {
bool broken = false;
for (j = 10; j > 0; j--) {
if (j == i) {
broken = true;
break;
}
}
if (broken)
break;
} I know textbooks like to say goto statements are the devil, and I'm not fond of them at all myself, but does this justify an exception to the rule? I'm looking for answers that deal with n-nested for loops. NOTE: Whether you answer yes, no, or somewhere in between, completely close-minded answers are not welcome. Especially if the answer is no, then provide a good, legitimate reason why (which is not too far from Stack Exchange regulations anyway). | The apparent need for a go-to statement arises from choosing poor conditional expressions for the loops. You state that you wanted the outer loop to continue as long as i < 10 and the innermost one to continue as long as j > 0. But in reality that's not what you wanted: you simply didn't tell the loops the real condition you wanted them to evaluate, and then you try to solve that by using break or goto. If you tell the loops your true intentions to begin with, then there is no need for breaks or goto. bool alsoThis = true;
for (i = 0; i < 10 && alsoThis; i++)
{
for (j = 10; j > 0 && alsoThis; j--)
{
if (j == i)
{
alsoThis = false;
}
}
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/227991",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100669/"
]
} |
228,093 | Is it a good practice to put bug numbers in the file itself inside a header comment? The comments would look something like this: MODIFIED (MM/DD/YY)
abc 01/21/14 - Bug 17452317 - npe in drill across in dashboard edit mode
cde 01/17/14 - Bug 2314558 - some other error description It seems helpful, but is it considered bad practice? | I've seen this done before, both manually by authors and automatically by scripts and triggers integrated with version control systems to add author, check-in comment, and date information to the file. I think both methods are pretty terrible for two primary reasons. First, it adds clutter and noise to the file, especially as these comments age and become irrelevant to the current state of the file. Second, it's duplicate information from what's already maintained in the version control system, and if you are using a modern version control system that supports change-sets, then it's actually losing information about changes. If anything, consider integration with your defect tracking system. Some tools allow you to link a defect or task ID number in a check-in comment to an item in the tracking tool. If you have all of your defects, enhancement requests, and work tasks in the tool, you can provide linkage that way. Of course, this comes with the downside of a dependency on those tools for the project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/228093",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60189/"
]
} |
228,095 | In Java and other high-level languages, a reference leads to an object. In C++, as far as I know, a pointer can also lead to an object. So what's the difference between a reference leading to an object, and a pointer leading to an object? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/228095",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/117390/"
]
} |
228,569 | I've started working as a developer fairly recently, having worked as a systems administrator before. My understanding of how a software development team using Agile functions is that the "what we need to implement" communication happens mostly in one direction, from the product owner to the developers. Developers can express their concerns to the product owner about technical debt, but coming up with feature ideas should not be one of their main responsibilities. The company I'm working at has a different view. To them, developers should not only go to the product owners of their own team to suggest feature ideas, but also to the product owners of other teams if they think they have something to contribute to that team's product. The idea is that we're all one big Team <company name>, and all developers should use their expertise to push features they think will be useful. Is such an approach "normal", for lack of a better word? Am I being too passive, should I take the initiative and start pushing ideas to product owners? Conversely, has the company got it completely wrong and I should look for employment elsewhere? | The reason a lot of developers are "passive," as you put it, is because it takes a certain amount of domain knowledge and experience before good product ideas come to you. But if they do come, there's no reason not to suggest them and champion them. Keep in mind - developers, product owners, sales people, etc., are all on the same team, with the same goal: building a successful product. Work towards that goal however you can. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/228569",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119455/"
]
} |
228,758 | We are using some utility methods in our company to simplify programming. So we have the following string extension: public static bool IsNoE(this string s)
{
return string.IsNullOrEmpty(s);
} This is just for convenience and the intention of this question is not to discuss the sense or senselessness of such methods (maintaining, ...) The question is about the naming. Some people think the method should be named in this way: public static bool IsNullOrEmpty(this string s)
{
return string.IsNullOrEmpty(s);
} The argument for the first naming is: It would not make sense to make extensions if they are not shorter to write. The argument for the second naming is: The only reason to add such extensions is just to support programming flow. (Writing a variable and then set the cursor back to the start to surround the variable with string.IsNull ) So why should I prefer one version over the other? Are there any naming conventions we can refer to? | The first naming is just plainly wrong. Want a proof? What does the next piece of code do? if (this.ah == PcX.Def)
{
this.Z.SecN.Coll();
} The second naming is ok. It's explicit enough, but not too long. It's the one which is used by .NET Framework, so other developers won't be lost. This being said, don't create aliases: you create additional code which has to be tested and maintained, while it doesn't bring anything useful to the project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/228758",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43587/"
]
} |
228,851 | I'm developing a language with which I intend to replace both Javascript and PHP. (I can't see any problem with this. It's not like either of these languages has a large install base.) One of the things I wanted to change was to turn the assignment operator into an assignment command, removing the ability to make use of the returned value. x=1; /* Assignment. */
if (x==1) {} /* Comparison. */
x==1; /* Error or warning, I've not decided yet. */
if (x=1) {} /* Error. */ I know that this would mean that those one-line functions that C people love so much would no longer work. I figured (with little evidence beyond my personal experience) that the vast majority of times this happened, it was really intended to be a comparison operation. Or is it? Are there any practical uses of the assignment operator's return value that could not be trivially rewritten? (For any language that has such a concept.) | Technically, some syntactic sugar can be worth keeping even if it can trivially be replaced, if it improves readability of some common operation.
But assignment-as-expression does not fall under that. The danger of typo-ing it in place of a comparison means it's rarely used (sometimes even prohibited by style guides) and provokes a double take whenever it is used. In other words, the readability benefits are small in number and magnitude. A look at existing languages that do this may be worthwhile. Java and C# keep assignment an expression but remove the pitfall you mention by requiring conditions to evaluate to booleans. This mostly seems to work well, though people occasionally complain that this disallows conditions like if (x) in place of if (x != null) or if (x != 0) depending on the type of x . Python makes assignment a proper statement instead of an expression. Proposals for changing this occasionally reach the python-ideas mailing list, but my subjective impression is that this happens more rarely and generates less noise each time compared to other "missing" features like do-while loops, switch statements, multi-line lambdas, etc. However, Python allows one special case, assigning to multiple names at once: a = b = c . This is considered a statement equivalent to b = c; a = b , and it's occasionally used, so it may be worth adding to your language as well (but I wouldn't sweat it, since this addition should be backwards-compatible). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/228851",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3912/"
]
} |
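Since the answer above uses Python as its example, a short snippet may help show the behaviour it describes; the last few lines also note, as a side remark not taken from the answer, that Python 3.8 later added a separate := operator for the expression form, spelled so that it cannot be typed in place of == by accident:

# Assignment is a statement, so using it where an expression is expected is
# rejected at compile time instead of silently acting as a comparison:
#     if x = 1:   ->  SyntaxError

a = b = c = [1, 2]        # the special-cased chained form: one object, three names
a.append(3)
print(a is b is c, c)     # True [1, 2, 3]

# Python 3.8+ assignment expression (the "walrus" operator):
import re
if (match := re.match(r"\d+", "42 apples")) is not None:
    print(match.group())  # 42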
229,232 | When Murray Gell-Mann was asked how Richard Feynman managed to solve so many hard problems Gell-Mann responded that Feynman had an algorithm: Write down the problem. Think real hard. Write down the solution. Gell-Mann was trying to explain that Feynman was a different kind of problem solver and there were no insights to be gained from studying his methods. I kinda feel the same way about managing complexity in medium/large software projects. The people that are good are just inherently good at it and somehow manage to layer and stack various abstractions to make the whole thing manageable without introducing any extraneous cruft. So is the Feynman algorithm the only way to manage accidental complexity or are there actual methods that software engineers can consistently apply to tame accidental complexity? | When you see a good move, look for a better one. —Emanuel Lasker, 27-year world chess champion In my experience, the biggest driver of accidental complexity is programmers sticking with the first draft, just because it happens to work. This is something we can learn from our English composition classes. They build in time to go through several drafts in their assignments, incorporating teacher feedback. Programming classes, for some reason, don't. There are books full of concrete and objective ways to recognize, articulate, and fix suboptimal code: Clean Code , Working Effectively with Legacy Code , and many others. Many programmers are familiar with these techniques, but don't always take the time to apply them. They are perfectly capable of reducing accidental complexity, they just haven't made it a habit to try . Part of the problem is we don't often see the intermediate complexity of other people's code, unless it has gone through peer review at an early stage. Clean code looks like it was easy to write, when in fact it usually involves several drafts. You write the best way that comes into your head at first, notice unnecessary complexities it introduces, then "look for a better move" and refactor to remove those complexities. Then you keep on "looking for a better move" until you are unable to find one. However, you don't put the code out for review until after all that churn, so externally it looks like it may as well have been a Feynman-like process. You have a tendency to think you can't do it all one chunk like that, so you don't bother trying, but the truth is the author of that beautifully simple code you just read usually can't write it all in one chunk like that either, or if they can, it's only because they have experience writing similar code many times before, and can now see the pattern without the intermediate stages. Either way, you can't avoid the drafts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
229,264 | In a situation where there are multiple teams making changes to some Maven projects with dependencies between them (otherwise unrelated projects i.e. no super POM or anything like that), with each team working in a separate working branch on all of these projects and merging them back to trunk only after a story is complete and tested, how should one handle the dependencies between snapshot builds that each team produces? Note that any of the teams could be making changes in any of the projects: they work on different pieces of functionality of the same system, and the projects are more like different layers. Before the split into multiple teams, artifacts were deployed to a central Nexus repository. Of course, it would be possible to set up separate Nexus repositories for each team, and an additional one for trunk, but I'm hoping there would be a better way to deal with this. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229264",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/98343/"
]
} |
229,284 | I'm somewhat new to programming (I'm a mechanical engineer by trade), and I'm developing a small program during my downtime that generates a (solidworks) part based on input from various people from around the plant. Based on only a few inputs (6 to be exact), I need to make hundreds of API calls that can take up to a dozen parameters each; all generated by a set of rules I've gathered after interviewing everyone that handles the part. The rules and parameters section of my code is 250 lines and growing. So, what is the best way to keep my code readable and manageable? How do I compartmentalize all my magic numbers, all the rules, algorithms, and procedural parts of the code? How do I deal with a very verbose and granular API? My main goal is to be able to hand someone my source and have them understand what I was doing, without my input. | Based on what you describe, you're probably going to want to explore the wonderful world of databases. It sounds like many of the magic numbers you describe - particularly if they are part dependent - are really data, not code. You'll have much better luck, and find it far easier to extend the application in the long run, if you can categorize how the data relates to the parts and define a database structure for it. Keep in mind, 'databases' don't necessarily mean MySQL or MS-SQL. How you store the data is going to depend a lot on how the program is used, how you are writing it, etc. It may mean an SQL type database, or it may simply mean a formatted text file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229284",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/116512/"
]
} |
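A tiny sketch of the "rules and parameters are data, not code" point from the answer above; the part names, columns and numbers are entirely made up, and an in-memory SQLite table stands in for whatever storage (text file, spreadsheet, real database) actually fits the workflow:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE part_rules (
                    part_name       TEXT PRIMARY KEY,
                    thickness_mm    REAL,
                    hole_spacing_mm REAL,
                    material        TEXT)""")
conn.executemany("INSERT INTO part_rules VALUES (?, ?, ?, ?)", [
    ("BRACKET-A", 3.2, 12.5, "steel"),
    ("BRACKET-B", 4.0, 15.0, "aluminum"),
])

def rules_for(part_name):
    # The generating code only reads parameters; adding a part means adding a
    # row, not editing control flow full of magic numbers.
    row = conn.execute(
        "SELECT thickness_mm, hole_spacing_mm, material "
        "FROM part_rules WHERE part_name = ?", (part_name,)).fetchone()
    if row is None:
        raise KeyError(part_name)
    return {"thickness_mm": row[0], "hole_spacing_mm": row[1], "material": row[2]}

print(rules_for("BRACKET-A"))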
229,544 | What data structures can you use so you can get O(1) removal and replacement? Or how can you avoid situations when you need said structures? | There is a vast array of data structures exploiting laziness and other tricks to achieve amortized constant time or even (for some limited cases, such as queues ) constant time updates for many kinds of problems. Chris Okasaki's PhD thesis "Purely Functional Data Structures" and book of the same name is a prime example (perhaps the first major one), but the field has advanced since . These data structures are typically not only purely functional in interface, but can also be implemented in pure Haskell and similar languages, and are fully persistent. Even without any of these advanced tools, simple balanced binary search trees give logarithmic-time updates, so mutable memory can be simulated with at worst a logarithmic slow down. There are other options, which may be considered cheating, but are very effective with regard to implementation effort and real-world performance. For example, linear types or uniqueness types allow in-place updating as implementation strategy for a conceptually pure language, by preventing the program from holding on to the previous value (the memory that would be mutated). This is less general than persistent data structures: For example, you can't easily build an undo log by storing all previous versions of the state. It's still a powerful tool, though AFAIK not yet available in the major functional languages. Another option for safely introducing mutable state into a functional setting is the ST monad in Haskell. It can be implemented without mutation, and barring unsafe* functions, it behaves as if it was just a fancy wrapper around passing a persistent data structure implicitly (cf. State ). But due to some type system trickery that enforces order of evaluation and prevents escaping, it can safely be implemented with in-place mutation, with all the performance benefits. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229544",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33996/"
]
} |
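The sharing idea behind those persistent structures can be sketched in a few lines of Java (my illustration, not from the answer; real Okasaki-style structures add laziness and rebalancing on top of this): every "update" builds a new version that reuses the old one, so the previous version stays valid and removing the head is O(1).

public final class PStack<T> {
    private final T head;
    private final PStack<T> tail;
    private PStack(T head, PStack<T> tail) { this.head = head; this.tail = tail; }

    // The empty stack is modelled as null purely to keep the sketch short.
    public static <T> PStack<T> empty() { return null; }
    public static <T> PStack<T> push(PStack<T> s, T value) { return new PStack<>(value, s); }
    public static <T> PStack<T> pop(PStack<T> s) { return s.tail; }  // old version s is untouched
    public static <T> T peek(PStack<T> s) { return s.head; }

    public static void main(String[] args) {
        PStack<Integer> v1 = push(PStack.<Integer>empty(), 1);
        PStack<Integer> v2 = push(v1, 2);   // v2 shares v1 as its tail, no copying
        PStack<Integer> v3 = pop(v2);       // O(1) "removal"; v2 is still usable
        System.out.println(peek(v2) + " " + peek(v3));  // prints: 2 1
    }
}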
229,549 | It's very common in .NET for an exception to be wrapped in several layers of "outer exceptions" which give marginally more contextual data. For example, in EF if your update fails, you get exceptions wrapped similar to this: EntityException DbUpdateException SqlException The data I need to understand what failed is almost always in the SqlException, so what's the advantage to the other two? What if I was using EF inside a custom library, should I wrap this exception with one of my own? Like MyCustomLibraryException: Could not update the data. See inner exception for details. | There are a few reasons to wrap: to hide what the core cause was in places where it doesn't matter; because the top level only needs to know that a storage exception occurred instead of an SQLException, which may not happen if you decide to migrate to a non-sql data store; and because not wrapping also leaks the abstraction and requires reimplementation of the top level when migrating a lower level, which will then use different exceptions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229549",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60527/"
]
} |
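The same pattern looks like this in Java terms (a sketch with invented class names, since the question itself is about EF/.NET): the repository catches the low-level SQLException at the boundary and rethrows it wrapped in a storage-level exception, keeping the original as the cause, so callers depend on the abstraction rather than on the database driver.

public class DataStoreException extends Exception {
    public DataStoreException(String message, Throwable cause) {
        super(message, cause);  // the original stays available as the inner/"caused by" exception
    }
}

class CustomerRepository {
    public void save(String customerName) throws DataStoreException {
        try {
            // ... JDBC work that can fail ...
            throw new java.sql.SQLException("unique constraint violated");
        } catch (java.sql.SQLException e) {
            // Wrap: callers only see "storage failed", not which database was used.
            throw new DataStoreException("Could not save customer " + customerName, e);
        }
    }
}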
229,761 | With bytecode-based virtual machine languages like Java, VB.NET, C#, ActionScript 3.0, etc., you hear sometimes about how easy it is to just go download some decompiler off the Internet, run the bytecode through it one good time, and oftentimes, come up with something not too far from the original source code in a matter of seconds. Supposedly this sort of language is particularly vulnerable to that. I've recently started wondering why you don't hear more about this concerning native binary code, when you at least know which language it was written in originally (and thus, which language to try to decompile into). For a long time, I figured it was just because native machine language is so much crazier and more complex than typical bytecode. But what does bytecode look like? It looks like this: 1000: 2A 40 F0 14
1001: 2A 50 F1 27
1002: 4F 00 F0 F1
1003: C9 00 00 F2 And what does native machine code look like (in hex)? It, of course, looks like this: 1000: 2A 40 F0 14
1001: 2A 50 F1 27
1002: 4F 00 F0 F1
1003: C9 00 00 F2 And the instructions come from a somewhat similar frame of mind: 1000: mov EAX, 20
1001: mov EBX, loc1
1002: mul EAX, EBX
1003: push ECX So, given the language to try to decompile some native binary into, say C++, what's so hard about it? The only two ideas that immediately come to mind are 1) it really is that much more intricate than bytecode, or 2) something about the fact that operating systems tend to paginate programs and scatter their pieces causes too many problems. If one of those possibilities is correct, please explain. But either way, why do you never hear of this basically? NOTE I'm about to accept one of the answers, but I want to kind of mention something first. Almost everybody is referring back to the fact that different pieces of original source code might map to the same machine code; local variable names are lost, you don't know what type of loop was originally used, etc. However examples like the two that were just mentioned are kind of trivial in my eyes. Some of the answers though tend to state that the difference between machine code and the original source is drastically much more than something this trivial. But for example, when it comes down to things like local variable names and loop types, bytecode loses this information as well (at least for ActionScript 3.0). I've pulled that stuff back through a decompiler before, and I didn't really care whether a variable was called strMyLocalString:String or loc1 . I could still look in that small, local scope and see how it's being used without much trouble. And a for loop is pretty much the same exact thing as a while loop, if you think about it. Also even when I would run the source through irrFuscator (which, unlike secureSWF, doesn't do much more than just randomize member variable and function names), it still looked like you could just start isolating certain variables and functions in smaller classes, figure out how they're used, assign your own names to them, and work from there. In order for this to be a big deal, the machine code would need to lose a lot more information than that, and some of the answers do go into this. | At every step of compilation you lose information that is irrecoverable. The more information you lose from the original source, the harder it is to decompile. You can create a useful de-compiler for byte-code because a lot more information is preserved from the original source than is preserved when producing the final target machine code. The first step of a compiler is to turn the source into some for of intermediate representation often represented as a tree. Traditionally this tree does not contain non-semantic information such as comments, white-space, etc. Once this is thrown away you cannot recover the original source from that tree. The next step is to render the tree into some form of intermediate language that makes optimizations easier. There are quite a few choices here and each compiler infrastructure has there own. Typically, however, information like local variable names, large control flow structures (such as whether you used a for or while loop) are lost. Some important optimizations typically happen here, constant propagation, invariant code motion, function inlining, etc. Each of which transform the representation into a representation that has equivalent functionality but looks substantially different. A step after that is to generate the actual machine instructions which might involve what are called "peep-hole" optimization that produce optimized version of common instruction patterns. 
At each step you lose more and more information until, at the end, you lose so much it becomes impossible to recover anything resembling the original code. Byte-code, on the other hand, typically saves the interesting and transformative optimizations until the JIT phase (the just-in-time compiler) when the target machine code is produced. Byte-code contains a lot of meta-data, such as local variable types and class structure, to allow the same byte-code to be compiled to multiple target machine codes. All this information is not necessary in a C++ program and is discarded in the compilation process. There are decompilers for various target machine codes but they often do not produce useful results (something you can modify and then recompile) as too much of the original source is lost. If you have debug information for the executable you can do an even better job; but, if you have debug information, you probably have the original source too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229761",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100669/"
]
} |
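A tiny Java illustration of that information loss (my example, not from the answer): once compiled down to optimized machine code, these two methods are essentially indistinguishable - the loop keyword and the variable names are gone, so a decompiler can only guess which one was originally written.

public class LoopDemo {
    static int sumFor(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += i;
        }
        return total;
    }

    static int sumWhile(int n) {
        int t = 0;
        int i = 0;
        while (i < n) {
            t += i;
            i++;
        }
        return t;
    }
}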
229,762 | Sorry if the title is hard to understand, it is easier if I describe the scenario to get the question across. I have a service that collects data, performs business logic and does all the normal stuff. I now have to perform an integration that will provide a set of data (for sage accounts) via an endpoint ( http://example.net/import-feed ). This endpoint can be polled by the external service at any time to pull the latest changes. When a piece of data is updated, say a payment or a date this needs to be added to the feed. This payment or date could be changed multiple times before the service gets polled. The question is what is the best methods to keep track of all or the last change (probably best) made to the record/row/data and pull all of these changes into the feed. The external service will then send notifications back to my service letting me know if adding that update was successful or not. Currently the only method I can think of is track the last modified time and the last time it was imported, and a status saying if it needs to be imported/updated. These imports/updates would then be pulled from the feed. I hope you understand the problem, I have had a hard time trying to find information on this type of integration. It may just be me not using the correct terms. So, what are the methods for this? | At every step of compilation you lose information that is irrecoverable. The more information you lose from the original source, the harder it is to decompile. You can create a useful de-compiler for byte-code because a lot more information is preserved from the original source than is preserved when producing the final target machine code. The first step of a compiler is to turn the source into some for of intermediate representation often represented as a tree. Traditionally this tree does not contain non-semantic information such as comments, white-space, etc. Once this is thrown away you cannot recover the original source from that tree. The next step is to render the tree into some form of intermediate language that makes optimizations easier. There are quite a few choices here and each compiler infrastructure has there own. Typically, however, information like local variable names, large control flow structures (such as whether you used a for or while loop) are lost. Some important optimizations typically happen here, constant propagation, invariant code motion, function inlining, etc. Each of which transform the representation into a representation that has equivalent functionality but looks substantially different. A step after that is to generate the actual machine instructions which might involve what are called "peep-hole" optimization that produce optimized version of common instruction patterns. At each step you lose more and more information until, at the end, you lose so much it become impossible to recover anything resembling the original code. Byte-code, on the other hand, typically saves the interesting and transformative optimizations until the JIT phase (the just-in-time compiler) when the target machine code is produced. Byte-code contains a lot of meta-data such as local variable types, class structure, to allow the same byte-code to be compiled to multiple target machine code. All this information is not necessary in a C++ program and is discarded in the compilation process. 
There are decompilers for various target machine codes but they often do not produce useful results (something you can modify and then recompile) as too much of the original source is lost. If you have debug information for the executable you can do an even better job; but, if you have debug information, you probably have the original source too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120059/"
]
} |
229,879 | I have recently joined a young hackerspace still in the process of setting itself up. We're fortunate because the space has a few internal projects that need working on and no shortage of volunteers to work on them. There have been some discussions on how to organize these projects. My most recent professional experience has been with Scrum so I'm considering pitching a Scrum approach for our software projects, but I'm not sure it will be a good fit. Although I've seen Scrum work well for small full-time teams, the nature of this organisation is different: The members are volunteers . Some are full time students. Others work jobs full time. We can't expect a constant level of contribution from anyone as their real lives take priority. While pretty much everyone has years of experience writing software, not many members have done so professionally or in teams. There is no Product Owner . The requirements for these projects are determined by a committee. The members of this committee will also be working on the implementation. This means we will have no single, dedicated, Product Owner. We have no deadlines (soft or hard). The projects will get done when they get done. These are pretty significant differences, but I'm not convinced they will be blockers to applying Scrum. I think some minor tweaking could get us over this hurdle: If we change Sprints to have a fixed story-point size, but fluid duration (time), we can still benefit from iterative releases without putting unrealistic delivery pressure on volunteer devs. We can ditch burndown charts and velocity calculation. If I understand correctly, these are tools and metrics that work as a bridge between the dev team and the Management. They serve to report progress in a form that is meaningful to both the developers and the stakeholders. Considering we have no one to report to (no Project Manager, no Product Owner, and no outside stakeholders) I believe we can drop this altogether. Things I think we could benefit from which won't require tweaking: The Requirements Gathering meeting(s). Where everyone sits around a table and discusses User Stories, sketches up UI mocks, and builds up a Product Backlog. Sprint Retrospectives . This will be an interesting way for us to converge on a development process that works for us as a team of volunteers. Things I'm not sure about: How should daily Stand-ups be treated? I wonder if they would have much value at all in our setting. My understanding of the stand-up ritual is that it helps communication by naturally disseminating information throughout the team. Considering the fact that our Sprints will likely be delivering much less complexity than an average Sprint there might be less need to be abreast of all the other team members' progress/developments. Should I push for XP things like Continuous Integration, Code Reviews, and TDD? I'm concerned this will be asking for a lot. I'd be more tempted to bring these concepts in on future projects once people are more familiar with Scrum and working as a team. My Questions: Can Scrum be adapted to a volunteer-based environment? And, is my planned approach so far going in the right direction? | Look into Kanban. It's more appropriate than SCRUM for your constraints. Edit: SCRUM is (very roughly) an ordered backlog with sprints and ceremonies to ensure that the volume of work 'in progress' stays under control and have something solid at the end of every sprint. 
If you ditch the ceremonies and the sprints cadence you end up with Kanban: an ordered backlog and a strong emphasis on limiting work 'in progress' directly and by making sure everything marked 'done' is done rather than by imposing stability at the end of each sprint. You still get the agile benefits: release anytime, flexibility, some measure of predictability - although SCRUM can get you slightly further on that aspect - and without the ceremonies or aspects of SCRUM that don't fit well a loose, distributed team with no commitment. The catch? Ditching the ceremonies require more discipline, so you REALLY need to pay attention to tests, code quality, the current work in progress, and ensure the top of the backlog (stuff ready to be picked up by people) is sufficiently elaborated. You could have a vote based backlog, although in a volunteer setting some people just work on whatever they want to. (And yes, all the TDD, CI, reviews and peer programming ideas are good ideas). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/229879",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81495/"
]
} |
230,131 | When doing the Red, Green & Refactor cycle we should always write the minimum code to pass the test. This is the way I have been taught about TDD and the way almost all books describe the process. But what about the logging? Honestly I have rarely used logging in an application unless there was something really complicated that was happening, however, I have seen numerous posts that talk about the importance of proper logging. So other than logging an exception I couldn't justify the real importance of logging in a proper tested application (unit/integration/acceptance tests). So my questions are: Do we need to log if we are doing TDD? won't a failing test reveal
what's wrong with the application? Should we add tests for the logging process in each method in each class? If some log levels are disabled in the production environment, for example, won't that introduce a dependency between the tests and the environment? People talk about how logs ease debugging, but one of the main advantages about TDD is that I always know what's wrong due to a failing test. Is there something I am missing out there? | 1) Do we need to log if we are doing TDD? Won't a failing test reveal what's wrong with the application? That assumes you have every possible test your application needs, which is rarely true. Logs help you track down bugs you did not write tests for yet. 2) Should we add tests for the logging process in each method in each class? If the logger itself is tested, it would not need to be retested in each class, similar to other dependencies. 3) If some log levels are disabled in the production environment, for example, won't that introduce a dependency between the tests and environment? Humans (and log aggregators) depend on the logs; the tests should not depend on them. Typically there are several log levels, and some are used in production, and some additional levels are used in development, similar to: "Rails log level is info in production mode and debug in development and test" - http://guides.rubyonrails.org/debugging_rails_applications.html Other applications use a similar approach. 4) People talk about how logs ease debugging, but one of the main advantages about TDD is that I always know what's wrong due to a failing test. Production bugs will have passed all the tests, so you may need some other reference to investigate those issues. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230131",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50440/"
]
} |
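A small sketch of point 3 using java.util.logging (the class and messages are invented): the code always contains the logging calls, while configuration decides which levels actually appear, so production can run at INFO while developers run at FINE - and the unit tests assert on return values and exceptions, not on log output.

import java.util.logging.Level;
import java.util.logging.Logger;

public class BillingService {
    private static final Logger LOG = Logger.getLogger(BillingService.class.getName());

    public int charge(int cents) {
        LOG.fine(() -> "charge() called with cents=" + cents);  // development-level detail
        if (cents < 0) {
            LOG.warning("Negative amount received: " + cents);
            throw new IllegalArgumentException("cents must be >= 0");
        }
        LOG.info("Charged " + cents + " cents");                // production-level event
        return cents;
    }

    public static void main(String[] args) {
        LOG.setLevel(Level.INFO);  // e.g. the production setting; FINE messages are skipped
        new BillingService().charge(250);
    }
}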
230,158 | I am having a problem in understanding how to apply camelCase syntax to some of my variable names. For example, how should I correctly write a word like "phonenumber" in camel case? Is it phoneNumber or phonenumber ? Similarly with "username", is it username or userName ? I think it doesn't look right with camel case like motorCycle , passWord , sunDay , setUp or waveLength since these are just one word each. I think that could be why it's called hashMap but also hashtable in camel case without the capital in the last case because hashtable is one word while hash map is two words. But if the motorcycle has a color then would it be motorcycleColor since a word is concatenated? Is that correct or should it be phoneNUmber , waveLength , sunBlock and even sunDay for the Sunday of the week? Why for instance is the method called getISOCountries while it says HttpHeaders e.g. it's not clear what becomes lowercase if we have a method like String camelCaseString = dog.toCamelCase() or interface CamelCase . Related: https://english.stackexchange.com/questions/889/when-should-compound-words-be-written-as-one-word-with-hyphens-or-with-spaces | You should capitalize the letter after where there would "normally" be a space (when you would write a letter instead of sourcecode). Your first assumption is correct: It's a "motorcycle", not a "motor cycle", so your variable should be named motorcycle , not motorCycle . It's a "hash map", so you go with hashMap , not hashmap . "Sunday" is one word -> sunday , not sunDay . While "day of week" would be dayOfWeek , not dayofweek . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230158",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
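Applying those rules to Java identifiers gives something like the sketch below (the example values are mine); the capital goes exactly where a space would appear in normal writing, and single words stay entirely lowercase.

import java.util.HashMap;
import java.util.Map;

public class NamingDemo {
    int motorcycle;                                  // one word, so no internal capital
    int motorcycleColor;                             // "motorcycle" + "color": capital at the word boundary
    Map<String, Integer> hashMap = new HashMap<>();  // "hash map" is two words
    String dayOfWeek = "Monday";                     // "day of week" -> dayOfWeek
    String sunday = "rest day";                      // "Sunday" is a single word -> sunday
}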
230,401 | I am trying to understand the definition of 'abstraction' in OOP. I have come across a few main definitions. Are they all valid? Is one of them wrong? I'm confused. (I re-wrote the definition with my own words). Definition 1: Abstraction is the concept of taking some object from the real world, and converting it to programming terms. Such as creating a Human class and giving it int health , int age , String name , etc. properties, and eat() etc. methods. Definition 2: A more general definition. Abstraction is a concept that takes place anywhere in a software system where 'making things more general/simpler/abstract' is involved. A few examples: An inheritance hierarchy, where the higher classes are simpler or more general,
and define more general and abstract implementation.
While the lower classes in the hierarchy are more concrete and define
more detailed implementations. Using encapsulation to hide the details of implementation of a class from other classes, thus making the class more 'abstract' (simpler) to the outside software world. Definition 3 Another general definition: Abstraction is the concept of moving the focus from the details and concrete implementation of things, to the types of things (i.e. classes), the operations available (i.e. methods), etc, thus making the programming simpler, more general, and more abstract. (This can take place anywhere and in any context in the software system).
It takes place for example when encapsulating, because encapsulation means to hide the details of implementation and only show the types of things and their more general and abstract definitions. Another example would be using a List object in Java. This object actually uses the implementation details of an ArrayList or a LinkedList, but this information is abstracted using the more general name List. Is any of these definitions correct?
(I am referring to the most conventional and accepted definition). | Abstraction is one of the 4 pillars of Object Oriented Programming (OOP). It literally means to perceive an entity in a system or context from a particular perspective. We take out unnecessary details and only focus on aspects that are necessary to that context or system under consideration. Here is a good explanation: You as a person have different relationships in different roles. When you are at school, you are a "Student". When you are at work, you are an "Employee". When you are at a government institution, you can be viewed as a "Citizen". So it boils down to the context in which we are looking at an entity/object.
So if I am modelling a Payroll System, I will look at you as an Employee (PRN, Full Time/Part Time, Designation). If I am modelling a Course Enrollment System, then I will consider your aspects and characteristics as a Student (Roll Number, Age, Gender, Course Enrolled). And if I am modelling a Social Security Information System, then I will look at your details as a Citizen (DOB, Gender, Country Of Birth, etc.). Remember that Abstraction (focusing on necessary details) is different from Encapsulation (hiding details from the outer world). Encapsulation means hiding the details of the object and providing a decent interface for entities in the outer world to interact with that object or entity. For example, if someone wants to know my name, they cannot directly access my brain cells to find out what my name is. Instead, that person will have to ask me for my name. If a driver wants to speed up a vehicle, there is an interface (accelerator pedal, gear, etc.) for that purpose. The 1st definition is not very clear. Def 2 is good but it tends to confuse the newbie as it tries to link Abstraction with Encapsulation and Inheritance. Def 3 is the best of the three definitions as it clearly defines what Abstraction is precisely. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230401",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121368/"
]
} |
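Here is a minimal Java sketch of the "same person, different contexts" idea from the answer (interface and method names are invented): each system programs against only the abstraction it needs, even though one object stands behind both views.

interface Student  { String rollNumber(); }
interface Employee { String designation(); }

class Person implements Student, Employee {
    public String rollNumber()  { return "R-1024"; }
    public String designation() { return "Junior Engineer"; }
}

public class AbstractionDemo {
    // The enrollment system only cares about the Student view...
    static void enroll(Student s) { System.out.println("Enrolling " + s.rollNumber()); }
    // ...while payroll sees the very same object purely as an Employee.
    static void pay(Employee e)   { System.out.println("Paying a " + e.designation()); }

    public static void main(String[] args) {
        Person p = new Person();
        enroll(p);
        pay(p);
    }
}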
230,438 | I have a project with a git branching model that roughly follows that of nvie's git-flow . Our release branches are named in a SemVer format, e.g. v1.5.2 Once a release branch is given the green light for production, we close the branch, by merging it into master, applying a tag, and then deleting the branch. As we immediately delete the release branch, we've been using the same identifier for tagging the branch, e.g. v1.5.2 Here's the commands we'd use to close a release branch: $ git checkout master
$ git merge v1.5.2
$ git tag -a v1.5.2 -m "Version 1.5.2 - foo bar, baz, etc"
$ git branch -d v1.5.2
$ git branch -dr origin/v1.5.2
$ git push origin :v1.5.2
$ git push
$ git push --tags This seems to work in the majority of cases, however it's causing an issue in the scenario where another instance of the git repo (e.g. another dev machine, or staging environment) has a local checkout of the v1.5.2 branch. The git push origin :v1.5.2 command will delete the branch in the remote, but does not delete the local version of the branch (if it exists) in all repos. This leads to an ambiguous reference, when trying checkout v1.5.2 in those repos: $ git checkout v1.5.2
warning: refname 'v1.5.2' is ambiguous. Can this be avoided without using a different syntax for the branches, e.g. release-v1.5.2 , or v1.5.2-rc ? Or is it unavoidable, and therefore a fundamentally bad idea to create a tag with the same name as a deleted branch? | If you absolutely want to keep this naming scheme, you might: Decide that you don't care about these warnings That is, if you're happy with the fact that: git checkout <ref> will check out refs/heads/<ref> over refs/tags/<ref> (see git-checkout ) other commands will use refs/tags/<ref> over refs/heads/<ref> (see gitrevisions ) For example, in this test repository, the v1.5.2 branch points to commit B, but the v1.5.2 tag points to commit A. % git log --oneline --decorate
8060f6f (HEAD, v1.5.2, master) commit B
0e69483 (tag: v1.5.2) commit A git checkout prefers branch names: % git checkout v1.5.2
warning: refname 'v1.5.2' is ambiguous.
Switched to branch 'v1.5.2'
% git log --decorate --oneline -1
8060f6f (HEAD, v1.5.2, master) commit B but git log will use the tag name: % git log --decorate --oneline -1 v1.5.2
warning: refname 'v1.5.2' is ambiguous.
0e69483 (tag: v1.5.2) commit A This could be confusing. Train people to delete their local branches when they see a new tag This might be hard/awkward depending on the size of your organisation. Write a wrapper around "git pull" and "git fetch" That is, write a wrapper that checks if there are any tags that shadow branch names, and warn about (or delete) those branches. This sounds painful, and it could be undesirable if the shadowed branch is currently checked out. Unfortunately, it sounds like the easiest way to solve this problem might be to change the way you name your branches. The link you posted uses different naming schemes for tags and branches: if you're already mostly following that method, adopting its naming scheme might be the easiest solution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230438",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121395/"
]
} |
230,534 | I had my first programming exam recently...and well I pretty much flopped it. Did not do great at all. I have only myself to blame as outside of college time, I pretty much did nothing. Now I have another coming up near summer time and I'm not allowing this to happen again. For a couple of weeks now I've been reading, reading and reading some more. I keep going over the older things I missed and the newer things we're doing. So, obviously I can notice a huge difference in my understanding of the language. However, that's about it. I can read the code and I now have an idea of what's going on in the code... but when it comes to me writing the code myself I'm just clueless. It's like I never know what approach to take and can never really fully comprehend the questions. I have done a fair amount of reading (been doing around 5-6 hours for the past month or so) each day... But when opening my IDE I always feel doomed it's really demotivating. Especially because I have knowledge of nodes, lists, arraylists, interfaces ect ect but besides from reading them on a page that's about it. I can point out exactly everything that's going on in a program so annotating a presample code I find fine... but writing my own code is another story.. | You learn how to write programs by writing programs. But you gotta start small, man. public class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello World!");
}
} From there, begin building... public class HelloWorld {
static String test = "This is a test";
public static void main(String[] args) {
System.out.println(test);
}
} and then... public class HelloClass {
String test;
public void setTest(String str)
{
test = str;
}
public String getTest()
{
return test;
}
}
public class HelloWorld {
public static void main(String[] args) {
HelloClass myHelloInstance = new HelloClass();
myHelloInstance.setTest("Hello World.");
String myResult = myHelloInstance.getTest();
System.out.println(myResult);
}
} ... and so on. Once you understand the basics of how objects work, it will be much easier to write larger programs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230534",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121483/"
]
} |
230,538 | I have a general idea of how the processor handles instructions but spend my time working in mostly high level languages. Maybe somebody who works closer to the iron can provide some valuable insight. Assuming that programming languages are basically very high level abstractions of a processor's instruction set, what is the most basic set of instructions necessary to create a turing complete machine? Note: I don't know anything about the diversity of hardware architectures but -- for the sake of simplicity -- lets assume it's a typical processor with an ALU (if necessary) and instruction stack. * | It turns out you only need one instruction to build a machine capable of Turing-computation. This class of machines that have only one instruction and are Turing-complete is called One Instruction Set Computers or also somewhat jokingly Ultimate RISC . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230538",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1256/"
]
} |
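To make "one instruction" concrete, here is a tiny interpreter (my sketch, not part of the answer) for subleq, the classic single-instruction machine: each instruction is three memory cells (a, b, c) meaning "subtract mem[a] from mem[b]; if the result is less than or equal to zero, jump to c". Copying, adding and looping are all built out of this one operation.

public class Subleq {
    static void run(int[] mem) {
        int pc = 0;
        while (pc >= 0 && pc + 2 < mem.length) {
            int a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
            mem[b] -= mem[a];
            pc = (mem[b] <= 0) ? c : pc + 3;  // a negative c halts this sketch
        }
    }

    public static void main(String[] args) {
        int[] mem = {
            9, 9, 3,    // clear cell 9 (mem[9] -= mem[9]), continue at 3
            10, 9, 6,   // mem[9] -= mem[10]
            10, 9, -1,  // mem[9] -= mem[10] again, then halt
            0,          // cell 9: result
            7           // cell 10: input value
        };
        run(mem);
        System.out.println(mem[9]);  // prints -14, i.e. -2 * mem[10]
    }
}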
230,582 | Most probably my boss didn't write a single word on a keyboard in his entire life. He saw computers and he has a cell phone. He is a smart man. How can I explain to him what a 'virtual machine' is? (VM as in VMWare + virtual Box, not the VM as in Java, LLVM, .NET) One of the reasons for asking this is that I want to tell him that having a Linux VM to connect to internet with a Windows host is safer than connecting directly to Windows. | Well, that depends on why you want to explain it, and in how much details. A good first attempt could be A program to simulate a computer inside a computer. It is useful for
testing programs that need a separate computer, without actually
having a second computer. Then, you can refine your explanation depending on his questions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230582",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/114420/"
]
} |
230,604 | I am programming in Java in a very object-oriented (OO) style. OOP comes very intuitively to me, but I have very little knowledge about other kinds of programming. What exactly is procedural programming ? How exactly is it different from OOP? Is it the same thing as functional programming ? I used to think that all programming that isn't OO is procedural. But I'm starting to think this isn't true. | Wikipedia has good explanations for these terms. Regardless, here's the summary: Imperative programming models computation as a sequence of statements that alter mutable state. Procedural programming is imperative programming that breaks down the code into subroutines. Structured programming is a more disciplined approach to procedural programming that forbids arbitrary jumps (e.g. goto) and global state changes. Declarative programming is the opposite of imperative programming - it specifies what to calculate rather than how (e.g. SQL, regexes). Functional programming models computation as expressions that (may) yield values. Functions are values and can be passed to or returned from other functions. Mutation is discouraged; all variables are immutable by default. As a result, it's more declarative than imperative, since it emphasizes what is being computed rather than the sequence of state changes needed to achieve it. Purely functional programming disallows mutation altogether (though contrary to popular belief still has mechanisms for achieving side effects). Total functional programming additionally forbids exceptions and infinite looping. (A total function in mathematics is a function that returns a value for all of its inputs.) Object-oriented programming emphasizes the use of objects/interfaces to achieve abstraction and modularity. Their relationships are a bit complicated because OOP is a pretty loaded term. You can use objects in both functional languages and procedural languages, but the languages that advertise themselves as OO are procedural. To further confound the issue: Most people don't know the difference between an object and an abstract data type Mainstream OOP languages make no mention of ADTs, provide very poor support for them, and tout objects as The One True Way. No one says Abstract Data Type-Oriented Programming (because it'd be a silly thing to do; you need both ADTs and objects.) This causes people to think OOP is the only way to achieve abstraction, and that functional programming and OOP are somehow opposites or mutually exclusive. A lot of people also think all functional languages are pure and disallow mutation. Additionally, people generally toss around imperative/procedural interchangeably, sometimes contrasting it with OOP (implying abstraction-less code, generally C) and sometimes contrasting it to functional programming. The term structured programming has mostly fallen out of use as far as I can tell (likely because at this point most people take for granted that goto and globals are considered harmful.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230604",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121368/"
]
} |
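A short Java contrast of the imperative and functional flavours described above (the example is mine): the first version spells out a sequence of state changes, the second declares what is computed and leaves no visible mutation.

import java.util.Arrays;
import java.util.List;

public class ParadigmDemo {
    // Imperative/procedural: statements mutating a local accumulator.
    static int sumImperative(List<Integer> xs) {
        int total = 0;
        for (int x : xs) {
            total += x;
        }
        return total;
    }

    // Functional flavour: an expression describing the result.
    static int sumFunctional(List<Integer> xs) {
        return xs.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> xs = Arrays.asList(1, 2, 3, 4);
        System.out.println(sumImperative(xs) + " " + sumFunctional(xs));  // 10 10
    }
}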
230,624 | I was naughty... Too much "cowboy coding," not enough committing. Now, here I am with an enormous commit. Yes, I should have been committing all along, but it's too late now. What is better? Do one very large commit listing all the things I changed Try to break it into smaller commits that likely won't compile, as files have multiple fixes, changes, additional method names, etc. Try to do partial reversions of files just for appropriate commits, then put the new changes back. Note: as of right now I am the only programmer working on this project; the only person who will look at any of these commit comments is me, at least until we hire more programmers. By the way: I am using SVN and Subclipse. I did create a new branch before doing any of these changes. More information : I asked a separate question related to how I got into this situation in the first place: How to prepare for rewriting an application's glue | To answer, you have to ask yourself how you expect to use the results of these commits in the future. The most common reasons are: To see what release a bug was introduced. To see why a certain line of code is present. To merge into another branch. To be able to check out a previous version for troubleshooting an issue a customer or tester is seeing in that version. To be able to include or exclude certain parts from a release. The first two reasons can be served just as well with one big check-in, assuming you can create a check-in message that applies equally well to each line of changed code. If you're the only programmer, then smaller commits aren't going to make your merge any easier. If you don't plan on doing a release or testing with only part of your unsubmitted changes, then the last two reasons don't apply. There are other reasons for making small commits, but they are for while you are in the middle of the changes, and that time is past. Those reasons are making it easier to back out a mistake or an experimental change, and making it easier to keep synced up with colleagues without huge scary merges. From my understanding of your situation as you described it, it seems there's little to no benefit to splitting your commit at this point. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230624",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88986/"
]
} |
230,674 | You see this phrase or similar kicked around from time to time, generally referring to a program that claims they were not designed to take full advantage of multi-core processors. This is common especially with video game programming. (of course a lot of programs have no concurrency and do not need it, such as basic scripts, etc). How can this be? A lot of programs (especially games) inherently use concurrency, and since the OS is in charge of task scheduling on the CPU, then are these programs not inherently taking advantage of the multiple cores available? What would it mean in this context to "take advantage of multiple cores"? Are these developers actually forbidding OS task scheduling and forcing affinity or their own scheduling? (Sounds like a major stability issue). I'm a Java programmer, so maybe I have not had to deal with this due to abstractions or whatnot. | Good concurrency requires a lot more than throwing a few threads in an application and hoping for the best. There's a range in how concurrent a program can be going from embarrassingly parallel to pure sequential. Any given program can use Amdahl's law to express how scalable a problem or algorithm is. A couple qualifications for a embarrassingly parallel application would be: No shared state, every function only depends on the parameters passed in No access to physical devices (graphic cards, hard drives, etc) There are other qualifications, but with just these two we can understand why games in particular are not as easy as you might think to take advantage of multiple cores. For one, the model of the world that will be rendered has to be shared as different functions calculate physics, movement, apply artificial intelligence etc. Second, each frame of this game model has to be rendered on screen with a graphics card. To be fair, many game makers use game engines that are produced by third parties. It took a while, but these third party game engines are now much more parallel than they used to be. There are bigger architectural challenges in dealing with effective concurrency Concurrency can take many forms, from running tasks in the background to a full architectural support for concurrency. Some languages give you very powerful concurrency features such as ERLANG , but it requires you to think very differently about how you construct your application. Not every program really needs the complexity of full multicore support. One such example is tax software, or any form driven application. When most of your time is spent waiting on the user to do something, the complexity of multithreaded applications are just not that useful. Some applications lend themselves to a more embarrassingly parallel solution, such as web applications. In this case, the platform starts out embarrassingly parallel and it's up to you not have to impose thread contention. The bottom line: Not all applications are really hurt by not taking advantage of multiple threads (and thus, cores). For the ones that are hurt by that, sometimes the computations are not friendly to parallel processing or the overhead to coordinate it would make the application more fragile. Unfortunately, parallel processing is still not as easy as it should be to do well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230674",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/84157/"
]
} |
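Amdahl's law, mentioned above, can be made concrete with a few lines of Java (my illustration; the fractions are invented): with p the parallelisable fraction of the program and n the number of cores, the best possible speedup is 1 / ((1 - p) + p / n), which is why a program that is only partly parallel gains little from extra cores.

public class Amdahl {
    static double speedup(double parallelFraction, int cores) {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction / cores);
    }

    public static void main(String[] args) {
        // A loop that is only 60% parallelisable gets nowhere near 8x on 8 cores.
        System.out.printf("p=0.60, 8 cores: %.2fx%n", speedup(0.60, 8));  // ~2.11x
        System.out.printf("p=0.95, 8 cores: %.2fx%n", speedup(0.95, 8));  // ~5.93x
    }
}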
230,679 | Let us assume I have a basic object with a handful of self-relevant points of data. class A(object):
def __init__(self, x, y, width, length):
self.x = x
self.y = x
self.width = x
self.length = x And so we go on to instantiate several A objects and shove them into a list, as programmers are wont to do. Now let us assume that these objects need to be checked for a particular state every so often, but we needn't check them on every pass, because their state doesn't change particularly quickly. So a checking method is written somewhere: def check_for_ticks(object_queue):
for obj in object_queue:
if obj.checked == 0 and has_ticks(obj):
do_stuff(obj)
obj.checked -= 1 Ah, but wait - it wasn't designed with obj.checked attribute! Well no big deal, Python is kind and lets you add attributes whenever, so we change the method a bit. def check_for_ticks(object_queue):
for obj in object_queue:
try:
obj.checked -= 1
except AttributeError:
obj.checked = 0
if obj.checked == 0 and has_ticks(obj):
do_stuff(obj) This works, but it gives me pause. My thoughts are mixed, because while it's a functional attribute to give an object and it solves a problem, the attribute isn't really used by the object. A will probably never modify its own self.checked attribute, and it isn't really an attribute that it 'needs to know about' - it's just incredibly convenient to give it that attribute in passing. But -- what if there are multiple methods that might want to flag an object? Surely one does not create the class with every attribute that some other method might maybe want to flag it for? Should I be giving objects new attributes in this way? If I should not, what is the better alternative to just giving them new attributes? | Good concurrency requires a lot more than throwing a few threads in an application and hoping for the best. There's a range in how concurrent a program can be going from embarrassingly parallel to pure sequential. Any given program can use Amdahl's law to express how scalable a problem or algorithm is. A couple qualifications for a embarrassingly parallel application would be: No shared state, every function only depends on the parameters passed in No access to physical devices (graphic cards, hard drives, etc) There are other qualifications, but with just these two we can understand why games in particular are not as easy as you might think to take advantage of multiple cores. For one, the model of the world that will be rendered has to be shared as different functions calculate physics, movement, apply artificial intelligence etc. Second, each frame of this game model has to be rendered on screen with a graphics card. To be fair, many game makers use game engines that are produced by third parties. It took a while, but these third party game engines are now much more parallel than they used to be. There are bigger architectural challenges in dealing with effective concurrency Concurrency can take many forms, from running tasks in the background to a full architectural support for concurrency. Some languages give you very powerful concurrency features such as ERLANG , but it requires you to think very differently about how you construct your application. Not every program really needs the complexity of full multicore support. One such example is tax software, or any form driven application. When most of your time is spent waiting on the user to do something, the complexity of multithreaded applications are just not that useful. Some applications lend themselves to a more embarrassingly parallel solution, such as web applications. In this case, the platform starts out embarrassingly parallel and it's up to you not have to impose thread contention. The bottom line: Not all applications are really hurt by not taking advantage of multiple threads (and thus, cores). For the ones that are hurt by that, sometimes the computations are not friendly to parallel processing or the overhead to coordinate it would make the application more fragile. Unfortunately, parallel processing is still not as easy as it should be to do well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230679",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
230,682 | I am currently developing a math-physics program and I need to calculate if two vectors overlap one another, the one is vertical and the other is horizontral. Is there any fast algorithm to do this because what I came up so far, has a lot of repeats. Eg lets say we have vector V1((0,1),(2,1)) and a V2((1,0),(1,2)) where the first parenthesis is the coordinates starting and the second coordinates that the vector reaches. I want as a result to take that they overlap at (1,1) So far the only idea I came up is to ''expand'' each vector to a list of points and then compare the lists e.g for V1 its list would be (0,1) (1,1) (2,1) | Good concurrency requires a lot more than throwing a few threads in an application and hoping for the best. There's a range in how concurrent a program can be going from embarrassingly parallel to pure sequential. Any given program can use Amdahl's law to express how scalable a problem or algorithm is. A couple qualifications for a embarrassingly parallel application would be: No shared state, every function only depends on the parameters passed in No access to physical devices (graphic cards, hard drives, etc) There are other qualifications, but with just these two we can understand why games in particular are not as easy as you might think to take advantage of multiple cores. For one, the model of the world that will be rendered has to be shared as different functions calculate physics, movement, apply artificial intelligence etc. Second, each frame of this game model has to be rendered on screen with a graphics card. To be fair, many game makers use game engines that are produced by third parties. It took a while, but these third party game engines are now much more parallel than they used to be. There are bigger architectural challenges in dealing with effective concurrency Concurrency can take many forms, from running tasks in the background to a full architectural support for concurrency. Some languages give you very powerful concurrency features such as ERLANG , but it requires you to think very differently about how you construct your application. Not every program really needs the complexity of full multicore support. One such example is tax software, or any form driven application. When most of your time is spent waiting on the user to do something, the complexity of multithreaded applications are just not that useful. Some applications lend themselves to a more embarrassingly parallel solution, such as web applications. In this case, the platform starts out embarrassingly parallel and it's up to you not have to impose thread contention. The bottom line: Not all applications are really hurt by not taking advantage of multiple threads (and thus, cores). For the ones that are hurt by that, sometimes the computations are not friendly to parallel processing or the overhead to coordinate it would make the application more fragile. Unfortunately, parallel processing is still not as easy as it should be to do well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230682",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/108883/"
]
} |
230,905 | We have a "typical" SCRUM team and we commit to work for a sprint, and also maintain a backlog. Recently we have run into a problem of trying to integrate/handle the work of an overachieving developer doing out of band work (choosing to work outside of the normal working hours/sprint). To give an example, if the team takes in 50 points of work, let's say that they will complete all that work within the SCRUM framework by the end of the sprint and they and the company is happy. One of the team members decides to work on their own, on a backlog item, on their own free time. They do not check in this work, but instead save it (we use TFS and it is in a shelveset). How to handle this? A few of the problems .. During the next sprint this team members says the programming work is 99% done and just needs code review and testing. How do you deal with this in the SCRUM and agile methodology? Other developers complain about not being involved in design decisions related to these stories, since the work was done out of band. Our product owner is tempted to pull in this "free" work and the overachieving members is likely doing this on purpose in order to get more features into the product that the team otherwise would not be able to accomplish in the sprint(s). There is a view that this is breaking the "process". Obviously QA, UI and documentation work still need to be done on this work. I see a lot of discussion about not forcing a SCRUM team to work overtime, but what about a member of the team working above and beyond the expectations put forth during planning and execution of sprints? I would hesitate to reign this person in and say you cannot work extra (cautioning on burn out of course), but at the same time it seems to be causing some issues with certain members of the team (but not all). How to integrate work done by an overachieving member into the SCRUM and agile process for software development? | Alright, so someone's enthusiastically writing great code that needs to be done, just not in order. With all due emphasis: LET THEM It's causing some complications in your scrum sprints. Does it really matter in the grand scheme of things? If he's accomplishing what he's supposed to, then let him go on and build great things for you. I know several amazing programmers who have left companies because they did not let the programmers outside of the constraints of an artificial system like Scrum (I myself left my last job after being treated like nothing more than glorified QA). If there are complaints from other developers about input (perfectly valid complaints, I may add), it may be best to introduce a "20% time" program to let him (and others) do what they do best with minimal interference. Instead of future stories (that may require input from others), let the developer experiment with new tech or features. You may find a great new opportunity that never would have been explored otherwise. I'm sure this developer has a few things they'd like to try out if you just let them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230905",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121825/"
]
} |
230,928 | Why does the operator -- not exist for bool whereas it does for operator ++ ? I tried in C++, and I do not know if my question apply to another language. I will be glad to know also. I know , I can use the operator ++ with a bool. It makes any bool equal to true. bool b = false;
b++;
// Now b == true. Why can't we use the operator -- in an opposite way? bool b = true;
b--;
// Now b == false; It is not very useful, but I am curious. | In the old days of C, there was no boolean type. People used the int for storing boolean data, and it worked mostly. Zero was false and everything else was true. This meant if you took an int flag = 0; and later did flag++ the value would be true. This would work no matter what the value of flag was (unless you did it a lot, it rolled over and you got back to zero, but lets ignore that) - incrementing the flag when its value was 1 would give 2, which was still true. Some people used this for unconditionally setting a boolean value to true. I'm not sure it ever became idiomatic , but its in some code. This never worked for -- , because if the value was anything other than 1 (which it could be), the value would still not be false. And if it was already false ( 0 ) and you did a decrement operator on it, it wouldn't remain false. When moving code from C to C++ in the early days, it was very important that C code included in C++ was still able to work. And so in the specification for C++ (section 5.2.6 (its on page 71)) it reads: The value obtained by applying a postfix ++ is the value that the operand had before applying the operator. [Note: the value obtained is a copy of the original value ] The operand shall be a modifiable lvalue. The type of the operand shall be an arithmetic type or a pointer to a complete object type. After the result is noted, the value of the object is modified by adding 1 to it, unless the object is of type bool , in which case it is set to true. [Note: this use is deprecated, see annex D. ] The operand of postfix -- is decremented analogously to the postfix ++ operator, except that the operand shall not be of type bool . This is again mentioned in section 5.3.2 (for the prefix operator - 5.2.6 was on postfix) As you can see, this is deprecated (Annex D in the document, page 709) and shouldn't be used. But thats why. And sometimes you may see the code. But don't do it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/230928",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121841/"
]
} |
231,057 | There are many well-known best practices about exception handling in isolation. I know the "do's and don'ts" well enough, but things get complicated when it comes to best practices or patterns in larger environments. "Throw early, catch late" - I've heard many times and it still confuses me. Why should I throw early and catch late, if at a low-level-layer a null pointer exception is thrown? Why should I catch it at a higher layer? It doesn't make sense for me to catch a low-level exception at a higher level, such as a business layer. It seems to violate the concerns of each layer. Imagine the following situation: I have a service which calculates a figure. To calculate the figure the service accesses a repository to get raw data and some other services to prepare calculation. If something went wrong at the data retrieval layer, why should I throw a DataRetrievalException to higher level? In contrast I would prefer to wrap the exception into a meaningful exception, for example a CalculationServiceException. Why throw early, why catch late? | In my experience, its best to throw exceptions at the point where the errors occur. You do this because it's the point where you know the most about why the exception was triggered. As the exception unwinds back up the layers, catching and rethrowing is a good way to add additional context to the exception. This can mean throwing a different type of exception, but include the original exception when you do this. Eventually the exception will reach a layer where you are able to make decisions on code flow (e.g a prompt the user for action). This is the point where you should finally handle the exception and continue normal execution. With practice and experience with your code base it becomes quite easy to judge when to add additional context to errors, and where it's most sensible to actually, finally handle the errors. Catch → Rethrow Do this where you can usefully add more information that would save a developer having to work through all the layers to understand the problem. Catch → Handle Do this where you can make final decisions on what is an appropriate, but different execution flow through the software. Catch → Error Return Whilst there are situations where this is appropriate, catching exceptions and returning an error value to the caller should be considered for refactoring into a Catch → Rethrow implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231057",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100346/"
]
} |
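In Java terms the three cases look roughly like this (class names are invented for the sketch): the lowest layer throws at the point of failure, the middle layer catches and rethrows with the context it knows about while keeping the cause, and only the top layer decides how execution continues.

class CalculationServiceException extends Exception {
    CalculationServiceException(String message, Throwable cause) { super(message, cause); }
}

public class FigureService {
    private int loadRawValue() {
        // Throw early: fail at the point where the most is known about the problem.
        throw new IllegalStateException("raw data missing");
    }

    public int calculateFigure() throws CalculationServiceException {
        try {
            return loadRawValue() * 2;
        } catch (IllegalStateException e) {
            // Catch -> Rethrow: add this layer's context, preserve the original cause.
            throw new CalculationServiceException("Figure calculation failed", e);
        }
    }

    public static void main(String[] args) {
        try {
            new FigureService().calculateFigure();
        } catch (CalculationServiceException e) {
            // Catch -> Handle (late): here we can choose an alternative flow or report to the user.
            System.err.println(e.getMessage() + " / cause: " + e.getCause().getMessage());
        }
    }
}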
231,060 | Delphi RAD libraries implement many thread friendly objects in a way which forces one to declare another object for storing the reference returned by that object's locking method. The object thus essentially encapsulates the prime object which is returned on locking. For example; List := ThreadList.Unlocklist;
// do something with List
ThreadList.UnlockList;
['{5B6DE5FA-847B-42D5-8BF4-9EB20A452C54}']
procedure BeginRead;
function BeginWrite: Boolean;
procedure EndRead;
procedure EndWrite;
end;
TThreadList = class ( TList, IMREWS )
private
FLock : TMREWSync;
public
constructor Create;
destructor Destroy; override;
property Lock : TMREWSync read FLock implements IMREWS;
end;
constructor TThreadList.Create;
begin
FLock := TMREWSync.Create;
inherited;
end;
destructor TThreadList.Destroy;
begin
inherited;
FLock.Free;
end;
// Usage:
ThreadList.BeginWrite;
// Do something with ThreadList
ThreadList.EndWrite; | In my experience, its best to throw exceptions at the point where the errors occur. You do this because it's the point where you know the most about why the exception was triggered. As the exception unwinds back up the layers, catching and rethrowing is a good way to add additional context to the exception. This can mean throwing a different type of exception, but include the original exception when you do this. Eventually the exception will reach a layer where you are able to make decisions on code flow (e.g a prompt the user for action). This is the point where you should finally handle the exception and continue normal execution. With practice and experience with your code base it becomes quite easy to judge when to add additional context to errors, and where it's most sensible to actually, finally handle the errors. Catch → Rethrow Do this where you can usefully add more information that would save a developer having to work through all the layers to understand the problem. Catch → Handle Do this where you can make final decisions on what is an appropriate, but different execution flow through the software. Catch → Error Return Whilst there are situations where this is appropriate, catching exceptions and returning an error value to the caller should be considered for refactoring into a Catch → Rethrow implementation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231060",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119287/"
]
} |
231,092 | It seems like F# code often pattern matches against types. Certainly match opt with
| Some v -> Something(v)
| None -> Different() seems common. But from an OOP perspective, that looks an awful lot like control-flow based on a runtime type check, which would typically be frowned on. To spell it out, in OOP you'd probably prefer to use overloading: type T =
abstract member Route : unit -> unit
type Foo() =
interface T with
member this.Route() = printfn "Go left"
type Bar() =
interface T with
member this.Route() = printfn "Go right" This is certainly more code. OTOH, it seems to my OOP-y mind to have structural advantages: extension to a new form of T is easy; I don't have to worry about finding duplication of the route-choosing control flow; and route choice is immutable in the sense that once I have a Foo in hand, I need never worry about Bar.Route() 's implementation Are there advantages to pattern-matching against types that I'm not seeing? Is it considered idiomatic or is it a capability that is not commonly used? | You are correct in that OOP class hierarchies are very closely related to discriminated unions in F# and that pattern matching is very closely related to dynamic type tests. In fact, this is actually how F# compiles discriminated unions to .NET! Regarding extensibility, there are two sides of the problem: OO lets you add new sub-classes, but makes it hard to add new (virtual) functions FP lets you add new functions, but makes it hard to add new union cases That said, F# will give you a warning when you miss a case in pattern matching, so adding new union cases is actually not that bad. Regarding finding duplications in root choosing - F# will give you a warning when you have a match that is duplicate, e.g.: match x with
| Some foo -> printfn "first"
| Some foo -> printfn "second" // Warning on this line as it cannot be matched
| None -> printfn "third" The fact that "route choice is immutable" might also be problematic. For example, if you wanted to share the implementation of a function between Foo and Bar cases, but do something else for the Zoo case, you can encode that easily using pattern matching: match x with
| Foo y | Bar y -> y * 20
| Zoo y -> y * 30 In general, FP is more focused on first designing the types and then adding functions. So it really benefits from the fact that you can fit your types (domain model) in a couple of lines in a single file and then easily add the functions that operate on the domain model. The two approaches - OO and FP are quite complementary and both have advantages and disadvantages. The tricky thing (coming from the OO perspective) is that F# usually uses the FP style as the default. But if there really is more need for adding new sub-classes, you can always use interfaces. But in most systems, you equally need to add types and functions, so the choice really does not matter that much - and using discriminated unions in F# is nicer. I'd recommend this great blog series for more information. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231092",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9865/"
]
} |
231,336 | There are many useful functions in <algorithm> , but all of them operate on "sequences" - pairs of iterators. E.g., if I have a container and like to run std::accumulate on it, I need to write: std::vector<int> myContainer = ...;
int sum = std::accumulate(myContainer.begin(), myContainer.end(), 0); When all I intend to do is: int sum = std::accumulate(myContainer, 0); Which is a bit more readable and clearer, in my eyes. Now I can see that there might be cases where you'd only want to operate on parts of a container, so it's definitely useful to have the option of passing ranges. But at least in my experience, that's a rare special case. I'll usually want to operate on whole containers. It's easy to write a wrapper function which takes a container and calls begin() and end() on it, but such convenience functions are not included in the standard library. I'd like to know the reasoning behind this STL design choice. | ... it's definitely useful to have the option of passing ranges. But at least in my experience, that's a rare special case. I'll usually want to operate on whole containers It may be a rare special case in your experience , but in reality the whole container is the special case, and the arbitrary range is the general case. You've already noticed that you can implement the whole container case using the current interface, but you can't do the converse. So, the library-writer had a choice between implementing two interfaces up front, or only implementing one which still covers all cases. It's easy to write a wrapper function which takes a container and calls begin() and end() on it, but such convenience functions are not included in the standard library True, especially since the free functions std::begin and std::end are now included. So, let's say the library provides the convenience overload: template <typename Container>
void sort(Container &c) {
sort(begin(c), end(c));
} now it also needs to provide the equivalent overload taking a comparison functor, and we need to provide the equivalents for every other algorithm. But we at least covered every case where we want to operate on a full container, right? Well, not quite. Consider std::for_each(c.rbegin(), c.rend(), foo); If we want to handle operating backwards on containers, we need another method (or pair of methods) per existing algorithm. So, the range-based approach is more general in the simple sense that: it can do everything the whole-container version can the whole-container approach doubles or triples the number of overloads required, while still being less powerful the range-based algorithms are also composable (you can stack or chain iterator adaptors, although this is more commonly done in functional languages and Python) There's another valid reason, of course, which is that it was already a lot of work to get the STL standardized, and inflating it with convenience wrappers before it had been widely used wouldn't be a great use of limited committee time. If you're interested, you can find Stepanov & Lee's technical report here As mentioned in comments, Boost.Range provides a newer approach without requiring changes to the standard. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231336",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121080/"
]
} |
231,397 | I like to raise a NotImplementedError for any method that I want to implement, but where I haven't gotten around to doing it yet. I might already have a partial implementation, but prepend it with raise NotImplementedError() because I don't like it yet. On the other hand, I also like to stick to conventions, because this will make it easier for other people to maintain my code, and conventions might exist for a good reason. However Pythons documentation for NotImplementedError states: This exception is derived from RuntimeError. In user defined base classes, abstract methods should raise this exception when they require derived classes to override the method. That is a much more specific, formal use case than the one I describe. Is it a good, conventional style to raise a NotImplementedError simply to indicate that this part of the API is a work in progress? If not, is there a different standardised way of indicating this? | It's worth noting that, while the Python documentation provides a use case (and probably the canonical one) for this exception, it doesn't specifically exclude its use in other scenarios. I would consider it appropriate to raise a NotImplementedError exception if you haven't overridden a method in a base class yet (to satisfy the "interface"). A cursory check on Google suggests that people will understand what you mean if you use the exception in this fashion. There are no side effects or unintended consequences that I know of; the method will simply throw an exception if it is called, and it will throw an exception that is well-understood by everyone. The documentation for Python 3 reflects this exact usage: In user defined base classes, abstract methods should raise this exception when they require derived classes to override the method, or while the class is being developed to indicate that the real implementation still needs to be added . [Emphasis added] | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47323/"
]
} |
231,427 | I just wrote a function that spans approximately 100 lines. Hearing that, you are probably tempted to tell me about single responsibilities and urge me to refactor. This is my gut instinct as well, but here is the issue: The function does one thing. It performs a complex string manipulation, and the function body consists mostly of one verbose regex, broken up into many lines that are documented. If I broke up the regex into multiple functions, I feel like I would actually lose readability, since I am effectively switching languages, and won't be able to take advantage of some features regexes offer. Here now is my question: When it comes to string manipulation with regular expressions, are large function bodies still an anti-pattern? It seems like named capture groups serve a very similar purpose to functions. By the way, I have tests for every flow through the regex. | What you're encountering is the cognitive dissonance that comes from listening to people who favor slavish adherence to guidelines under the guise of "best practices" over reasoned decision making. You've clearly done your homework: The function's purpose is understood. The workings of its implementation are understood (i.e., readable). There are full-coverage tests of the implementation. Those tests pass, meaning you believe the implementation to be correct. If any of those points weren't true, I'd be first in line to say that your function needs work. So there's one vote for leaving the code as-is. The second vote comes from looking at your options and what you get (and lose) from each: Refactor. This gains you compliance with someone's idea of how long a function should be and sacrifices readability. Do nothing. This maintains existing readability and sacrifices compliance with someone's idea of how long a function should be. This decision comes down to which you value more: readability or length. I fall into the camp that believes length is nice but readability is important and will take the latter over the former any day of the week. Bottom line: if it isn't broken, don't fix it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231427",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/75344/"
]
} |
231,637 | My new boss has been working on this project for many years. I've only been here a few weeks, but I am not sure it's possible. He would like to design a system that is "100% data driven". So if we put in enough data, we can define and generate any application. I've managed to at least get him to concede some things like users, or apps should have predefined values, but he likes the concept of the structure of the system, the user interface and the logic all being stored as data. There are some demos of simple things and he's basically rediscovered some simple ideas of object oriented programming and your basic template systems, but I think overall that this goal might be actually impossible. I don't know how you can define logic using data without the system becoming so complex that you are doing actual programming anyway. I think theoretically it isn't because the thing that interprets the data ends up needing to become turing complete to describe the application so you've just shifted the problem one level higher to no net benefit. Is such a 100% Data Driven Application possible? | Your boss should read this piece: Bad Carma: The "Vision" project, a cautionary tale about inner platform effect or second system effect. Abstract Those of us who work in Information Technology (IT) have all been on a project where something important is just not right. We know it, most everyone knows it, but nobody is quite able to put his or her finger on the problem in a convincing way. This story is about such an IT project, the most spectacular failure I
have ever experienced. It resulted in the complete dismissal of a
medium-sized IT department, and eventually led to the destruction of a
growing company in a growing industry. The company, which we'll call "Upstart," was a successful and profitable subscription television business. The project occurred in the early 1990s, and it was a custom-built
order-entry and customer-service application, closely resembling what
is now referred to as Customer-Relationship Management or CRM. The
core functionality of the system included: Order entry and inventory Customer service, help desk General ledger, accounts receivable, billing, and accounts payable The application was
called "Vision" and the name was both its officially stated promise
for Upstart as well as a self-aggrandizing nod to its architect. The
application was innovative, in that it was built to be flexible enough
to accommodate any future changes to the business. Not just any
foreseeable future changes to the business, but absolutely any changes
to the business, in any form. It was quite a remarkable claim, but
Vision was intended to be the last application ever built. It achieved
this utter flexibility by being completely data-driven, providing
limitless abstraction, and using object-oriented programming
techniques that were cutting-edge at the time. Like many such projects that set out to create a mission-critical
application, the development effort spanned two years, about a year
longer than originally projected. But that was acceptable, because
this was the application that would last forever, adapting to any
future requirements, providing unlimited Return On Investment (ROI).
When the application finally went "live," almost everybody in the
company had invested so much in it that literally the fate of the
company hinged on its success. However, in the event of total project malfunction, mission-critical
applications running the core business of multinational corporations
are not permitted the luxury of the type of fast flameout demonstrated
by thousands of "dot-com" companies in the era of the Internet bubble. Within a month of Vision going "live," it was apparent to
all but those most heavily vested in its construction that it was a
failure. See Also http://en.wikipedia.org/wiki/Inner-platform_effect | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231637",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/62089/"
]
} |
231,759 | I was wondering what are some techniques to locate which code implemented a specific feature in a desktop application. I am a junior developer, with my only professional programming experience lying in web programming. On the Web it is easier to do that. For example, you "inspect" a button with the browser tools, and you can see what is being done when you click it. And then, presuming you have the full source code, you can drill down the hierarchy of the calls. But how do you do this in desktop applications? At least, without having to dive into the full codebase? | Back Tracing Back tracing is locating an endpoint to an event associated with the feature (see below). Once there, a breakpoint is placed in the debugger. The feature is triggered, and when the debugger stops, the call stack is reviewed to back trace the calling path. While walking up the call stack you can take notes on variable states, or place new breakpoints to inspect the event again. The feature is triggered again and the debugger stops at the new breakpoints. You can then repeat back tracing or perform forward tracing until the goal is found. Pros & Cons It's always easier to walk up the call stack and see how you got somewhere. There could be millions of conditions that need to be true before reaching an endpoint. If you know the endpoint already you've saved yourself lots of work. If the feature is broken, you may never reach the endpoint, and time can be wasted trying to figure out why. Endpoint Discovery To debug a feature you have to know where in the source code the final goal is achieved. Only from this point can you backtrace to see how the code got there. An example: to understand how undo is performed, you know where in the code things are undone, but you don't know how things get there. This would be a candidate for backtracing to figure out how the feature works. Forward Tracing Forward tracing is locating a start point for an event associated with a feature (see below). Once there, logging messages are inserted into the source code or breakpoints are set. This process is repeated as you progress further away from the start point until you discover the goal for the feature. Pros & Cons It's the easiest starting point for finding a feature. Code complexity reduces the effectiveness of forward tracing. The more conditions there are in the code the greater the chance you'll go in the wrong direction. Forward tracing often results in setting breakpoints that will be triggered by unrelated events. Interrupting the debugging process and interfering with your search. Start Point Discovery You can use keywords, user interface identifiers (button IDs, window names) or easy to find event listeners associated with the feature. For example, you might start with the button used to trigger an undo feature. Process Of Elimination You can think of this as the middle point compared to start point and end point positions. You perform a process of elimination when you already know a piece of code is used in a feature, but it is neither the start nor the end of the feature. The direction you take from the middle point depends upon the number of entries and exits. If the code chunk is used in many places, then back tracing from this position could be very time consuming as they all have to be inspected. You then employ a process of elimination to reduce this list. Alternatively, you can perform a forward trace from this point, but again if the code chunk branches out to many places this can also be a problem. 
You have to reduce the possible directions by not following paths that clearly wouldn't be executed for the feature. You move past this code, only placing breakpoints where they are likely related to the feature. Middle point debugging often requires more advanced IDE features, such as the ability to see code hierarchy and dependencies. Without those tools it's difficult to do. Pros & Cons Middle points are often the first piece of code that pops into your head when you think of the feature. You say to yourself "Ah, that has to use XXXX to work." Middle points can reveal start points the easiest. Middle points can be an easy way to pick up the trail to a feature when lost by synchronization or threading changes. Middle points can take you to code you are not familiar with, costing you time to learn what is going on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231759",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12443/"
]
} |
231,870 | Generally, on many platforms, I'm writing my string resources to a .resx or .xml file, and then I'm getting them using some platform-dependent approach. That is, on iOS, I'm getting them via NSBundle.MainBundle , and by using Context.Resources on Android. What are the advantages of this approach, and why not having it directly accessible in the code, so, for example: In a cross-platform project, any platform can access it directly, with no integration. There are no concerns during building about whether or not the resources built well. The coder can use functionalities such as multilanguage handling Long story short: what is the reason why string resources are structured that way? [Edit] Let's say that my file is part of a "core" project shared between other project. (Think about a PCL, cross-platform project file structure.) And suppose that my file is just totally similar to a .resx/.xml file, looking like this (I'm not a pro in xml, sorry!):
Parameters
Paramètres So, this is basically a custom xml, where you point to the key/language to get the proper string. The file would be part of the application just like you add any accessible file inside an app, and the system to access the string resources, coded using PCL. Would this add an overhead to the applications? | Localization and internationalization, Keeping the strings external allows them to change (read: translated) without needing to recompile (just a relink at most, and just dropping in a new folder at best). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231870",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53920/"
]
} |
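As one concrete illustration of why externalized strings pay off, here is a small sketch using Java's standard ResourceBundle mechanism rather than the poster's custom XML format; the bundle name and key are invented for the example.

import java.util.Locale;
import java.util.ResourceBundle;

public class Messages {
    public static void main(String[] args) {
        // Messages.properties:    settings=Parameters
        // Messages_fr.properties: settings=Paramètres
        ResourceBundle english = ResourceBundle.getBundle("Messages", Locale.ENGLISH);
        ResourceBundle french = ResourceBundle.getBundle("Messages", Locale.FRENCH);

        System.out.println(english.getString("settings")); // Parameters
        System.out.println(french.getString("settings"));  // Paramètres
    }
}

Supporting a new language is then just dropping in another .properties file next to the application; nothing is recompiled, which is exactly the point made above.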
231,915 | Let's say I have an Enemy class, and the constructor would look something like: public Enemy(String name, float width, float height, Vector2 position,
float speed, int maxHp, int attackDamage, int defense... etc.){} This looks bad because the constructor has so many parameters, but when I create an Enemy instance I need to specify all of these things. I also want these attributes in the Enemy class, so that I can iterate through a list of them and get/set these parameters. I was thinking maybe subclassing Enemy into EnemyB, EnemyA, while hardcoding their maxHp, and other specific attributes, but then I'd lose access to their hardcoded attributes if I wanted to iterate through a list of Enemy (consisting of EnemyA's, EnemyB's, and EnemyC's). I'm just trying to learn how to code cleanly. If it makes a difference, I work in Java/C++/C#. Any point in the right direction is appreciated. | The solution is to bundle up the parameters into composite types. Width and Height are conceptually related - they specify the dimensions of the enemy and will usually be needed together. They could be replaced with a Dimensions type, or perhaps a Rectangle type that also includes the position. On the other hand, it might make more sense to group position and speed into a MovementData type, especially if acceleration later enters the picture. From context I assume maxHp , attackDamage , defense , etc also belong together in a Stats type. So, a revised signature might look something like this: public Enemy(String name, Dimensions dimensions, MovementData movementData, Stats stats) The fine details of where to draw the lines will depend on the rest of your code and what data is commonly used together. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/231915",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/122844/"
]
} |
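A minimal Java sketch of the grouping suggested in the answer above; the exact field choices are illustrative guesses, and Vector2 is the question's own type.

class Dimensions {
    final float width, height;
    Dimensions(float width, float height) { this.width = width; this.height = height; }
}

class MovementData {
    final Vector2 position;
    final float speed;
    MovementData(Vector2 position, float speed) { this.position = position; this.speed = speed; }
}

class Stats {
    final int maxHp, attackDamage, defense;
    Stats(int maxHp, int attackDamage, int defense) {
        this.maxHp = maxHp; this.attackDamage = attackDamage; this.defense = defense;
    }
}

class Enemy {
    final String name;
    final Dimensions dimensions;
    final MovementData movement;
    final Stats stats;

    Enemy(String name, Dimensions dimensions, MovementData movement, Stats stats) {
        this.name = name;
        this.dimensions = dimensions;
        this.movement = movement;
        this.stats = stats;
    }
}

Iterating over a List<Enemy> still reaches every attribute through the grouped objects, so nothing is lost compared to the flat constructor; the groups can also grow (say, acceleration in MovementData) without touching the Enemy signature.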
232,030 | I'm a junior developer (~3 years' exp.) and at my job we're in the process of architecting a new system. My lead developer will be the principal architect, however he's challenged me to try architecting the system myself (in parallel). Over the course of a few iterations of brainstorming ideas and proposing what I saw as architecture suggestions, my lead has given me the feedback that most of what I've been doing was "designing" and not "architecting". He described the difference as architecture being implementation-agnostic whereas a design is the description of an implementation. He said I need to take off my designer hat and put on my architect hat. He gave me a little bit of advice on how to do so, but I would like to ask you as well: How do I get out of software designer mode and start thinking more like an architect? Here are some examples of "designs" I came up with that weren't seen as relevant to the architecture by my lead: I came up with an algorithm for loading and unloading resources from our system and my lead said that algorithms are categorically not architecture. I came up with a set of events the system should be raising and in what order it should raise them, but this too didn't seem to cut it as architecture. I seem to be getting caught up in the details and not stepping back far enough. I find that even when I come up with something that is at an architecture level, I often got there by trying out various implementations and mucking around in the details then generalizing and abstracting. When I described this to my lead, he said that I was taking the wrong approach: I needed to be thinking "top down" and not "bottom up". Here are some more specific details about the project : The project we're architecting is a web application. I'm estimating around 10-100 thousand lines of code. We're a start up. Our engineering team is about 3-5 people. The closest thing I could compare our application to is a lightweight CMS. It has similar complexity and deals largely with component loading and unloading, layout management, and plug-in style modules. The application is ajax-y. The user downloads the client once then requests data as it needs it from the server. We will be using the MVC pattern. The application will have authentication. We aren't very concerned about old browser support (whew!), so we're looking to leverage the latest and greatest that is out there and will be coming out. (HTML5, CSS3, WebGL?, Media Source Extensions, and more!) Here are some goals of the project : The application needs to scale. In the near term our users will be on the order of hundreds to thousands, but we're planning for tens of thousands to millions and beyond. We hope the application will be around forever. This isn't a temporary solution. (Actually we already have a temporary solution, and what we're architecting is the long-term replacement for what we have). The application should be secure as it may have contact with sensitive personal information. The application needs to be stable. (Ideally, it'd be stable around the level of gmail but it doesn't need to be at the extreme of a Mars rover.) | First of all I'd say that the difference between architecture and design is mainly semantics. Some teams have check points between the two. Your technical lead defines the architecture as before the design and architecture as implementation agnostic. 
From that I assume we are talking about design as in the waterfall model and not Industrial Design which would help you design the product with a view to users before you get to the software architecture. I think architecture often slips into design and that is not necessarily a bad thing, it is often very helpful for the architect to have a deep knowledge of what is possible within the system at hand. Having said all that, you need some advice for the situation you are in. There is a whole world of software architecture out there, papers, books, conferences but you are generally looking for patterns and abstractions. Without more details on the project I can only give a broad example. For instance, if you were working in integration there is the Service Oriented Architecture ( SOA ) pattern where you split up parts of the system into 'services' so you can work with each part in a defined way, in web program this is often then implemented as Web Services (although shouldn't be though of as limited to that) and more recently the rise of RESTful APIs with JSON, again I would say this is a design coming from the architecture of SOA. I would say Model, View, Controller (MVC) is another example of an architecture pattern in common use, splitting up responsibility of the components of a system to allow for parts to be swapped out, to contain errors and testing. From a 10,000ft level, if you can draw it on a whiteboard and explain it to a competent programmer who doesn't work in your field and doesn't know your programming language and current implementation details then it is probably architecture. If you can write a book about it that anyone outside of your company would care about then it is probably architecture. If you find your self explaining detail and can't generalise it to other code-bases / companies / industries then it is probably design. I would agree that the two examples you give are code design and not architecture. The first because I think when you say you came up with an 'algorithm' for loading resources I think you mean you designed a set of instructions to accomplish that task, and not that you designed a new algorithm that they will be teaching in 1st year COMSC next year. In the second example, again I agree it is design. If you showed me either of these ideas I wouldn't be able to use them in my random software project. You have to go to a 'higher level', Object Oriented (OO) in Java rather than I want the Customer Class to be a sub-class of the Person Class. Even talking about Exceptions in general could be considered too low level (too close to the implementation). To try to address the specifics that you list, I think what you should be thinking about is how to architect a web based CMS. Wordpress has a site architecture codex where they talk a lot about design implementation details but it is clear from the post that their main architecture centers around making Wordpress extensible with themes. Architecting a clear interface for a theme such that it could be written by someone out side of the company was clearly an architecture decision they took. These are the kinds of things it is good to get down on paper when architecting your 'long-term' (not temporary) solution so that all the design and implementation decisions that are made during development (by all the developers not just the architect) are in-line with this idea. 
Other examples of architecture for your situation: Putting the whole thing on virtual machines, hosted on a cloud provider or in house and having stateless machine instances, so that any machine failing can be replaced with a new instance of a virtual machine without having to copy across any state or losing any information. Building in live production failure testing from the beginning with chaos simians . Maybe try drawing the whole system on a whiteboard. Try it at different levels of detail, first board could be GUI->dispatcher->backend->DB or something, and then drill down until you start using pronouns. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232030",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/80789/"
]
} |
232,107 | I see that some software has the version number included as part of its file name, while other software does not. I am more used to the latter type, and I think that is more popular, but I see the former type sometimes in javascript libraries. For example, jQuery's file name is like jquery-2.1.0.js instead of jquery.js . Whenever I update these types of files, I have to look for the places in other programs that load these files, and change the file name they refer to, and manually delete the older version of these libraries. That is inconvenient to me, so I'd rather rename the file to exclude the version number, and keep the referenced file name free of the version number. I suspect that these numbers are for some kind of version control, but am not clear when and how they are used. What are the pros and cons of including version numbers in the file name? Is there de facto consensus on which areas of software or languages use version numbers in the file name, and which areas/languages do not? If so, is there any rationale for that? | It makes sense to specify the version you require. Behavior you may rely on could have changed, so newer is not always better. First, test whether a new version of a library works for you. Then, update explicitly. In the case of web resources, having the version be part of the filename is important in the context of caching. For static resources like jquery.js you will want a very long cache time before it's re-fetched. However, during an update you want your code to use the new version immediately, rather than having clients switch over to the new version during the next day or so. As foo-1.2.3.js is a different resource from foo-1.2.4.js, no caches will get in the way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232107",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43137/"
]
} |
232,130 | I'm designing a pragmatic REST API and I'm a little stuck on how best to add existing entities to a collection. My domain model includes a Project that has a collection of Sites. This is a strict many-to-many relationship and I have no need to create an entity that explicitly models the relationship (i.e. ProjectSite). My API will allow consumers to add an existing Site to a Project. Where I'm getting hung up is that the only data I really need are ProjectId and SiteId. My initial idea was: 1. POST myapi/projects/{projectId}/sites/{siteId} But I also thought about 2. POST myapi/projects/{projectId}/sites with a Site entity sent as JSON content. Option 1 is simple and works but doesn't feel quite right, and I have other relationships that cannot follow this pattern so it adds inconsistency to my API. Option 2 feels better but leads to two concerns: Should I create a Site or throw an exception if a new Site is posted (SiteId = 0)? Because I only need ProjectId and SiteId to create the relationship, the Site could be posted with wrong or missing data for other properties. A 3rd option is to provide a simple endpoint solely for creating and deleting the relationship. This endpoint would expect a JSON payload containing just ProjectId and SiteId. What do you think? | POST is the "append" verb, and also the "processing" verb. PUT is the "create/update" verb (for known identifiers), and almost looks like the right choice here, because the full target URI is known. projectId and siteId already exist, so you don't need to "POST to a collection" to produce a new ID. The problem with PUT is that it requires the body be the representation of the resource you're PUTting. But the intent here is to append to the "project/sites" collection resource, rather than updating the Site resource. What if someone PUTs a full JSON representation of an existing Site? Should you update the collection and update the object? You could support that, but it sounds like that's not the intent. As you said, the only data I really need are ProjectId and SiteId Rather, I'd try POSTing the siteId to the collection, and rely on the "append" and "process" nature of POST: POST myapi/projects/{projectId}/sites {'id': '...' } Since you're modifying the sites collection resource and not the Site resource , that's the URI you want. POST can know to "append/process" and add the element with that id to the project's sites collection. That still leaves the door open to creating brand new sites for the project by fleshing out the JSON and omitting the id. "No id" == "create from scratch". But if the collection URI gets an id and nothing else, it's pretty clear what needs to happen. Interesting question. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232130",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120162/"
]
} |
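Purely as an illustration of the endpoint described above, here is a hedged JAX-RS-style sketch; the resource and service names are hypothetical and error handling is omitted.

import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/projects/{projectId}/sites")
public class ProjectSitesResource {

    public static class SiteReference { public Long id; } // JSON body: { "id": ... }

    public interface ProjectService { void addSiteToProject(long projectId, long siteId); }

    private final ProjectService projectService;

    public ProjectSitesResource(ProjectService projectService) { this.projectService = projectService; }

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response addExistingSite(@PathParam("projectId") long projectId, SiteReference body) {
        // Appends an existing Site to the project's sites collection resource.
        projectService.addSiteToProject(projectId, body.id);
        return Response.noContent().build();
    }
}

A request body that omits the id could be routed to a "create a brand new Site for this project" path instead, matching the "no id == create from scratch" idea in the answer.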
232,179 | At the office we just got out of a long period where we released patches on a too-frequent basis. Near the end of that period we were doing almost three patches per week on average. Besides the fact that this was very demotivating for the developers, I was wondering what the customer would think about this. I asked myself the question and concluded that I never knew software that was updated that frequently. However, in the case that comes closest, I do not really care, since the patches are applied pretty quickly. The customers which received these patches differ a lot from each other. Some were really waiting for the patch, while others did not really care, yet they all got the same patches. 
The time to update the customers software is less than 30 seconds, so I do not expect any problems concerning time. They do need to be logged out though. So my question in more detail: Is getting updates frequently giving a 'negative' message to the receiver? Of course, I could ask the customers, but I'm not in that position nor do I want to 'Awaken the sleeping dogs'. PS: If there is anything I could do to improve my question, please leave a comment. | As with many things in computing, it depends. If the patches are a response to customer requests for new features or improvements, then your company will be viewed as responsive. If, on the other hand, your patches are a response to bug reports, then your company will be viewed as incompetent. Testing software on your customers is by far the most expensive possible way to detect bugs, no matter what anyone says. It's a false economy; the free labor that you think you're getting is more than offset by customer service effort, breaking the software development life-cycle, and losing customer confidence. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232179",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/64096/"
]
} |
232,213 | Should I always use is as a prefix for boolean variables? What about booleans that indicate something in the past? Should I write isInitialized or wasInitialized ? For properties, should I write IsManyMembers or HasManyMembers ? Are there any best practices? Or should I just write in accordance with English rules? | Not really, as booleans are not always used to indicate that an object "is" something. "has" is an equally valid prefix; "was" and "can" are also valid in particular circumstances, and I have also seen the suffix "Able" used. So Object herring:-
isFish = true
isCat = false
hasScales = true
hasFur = false
canSwim = true
wasEgg = true
eatAble = true
Object moggy:-
isFish = false
isCat = true
hasScales = false
hasFur = true
canSwim = false
wasEgg = false
eatAble = false It all depends on what makes the program readable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232213",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/98882/"
]
} |
232,228 | We are going to hold a meeting where everybody will speak in clockwise direction around a table. There are n people with n spots. Each person has a position preference (e.g. some want to go first, some last, etc). Everyone is seated randomly and cannot move from their position. How shall we compute the best starting position on the table to satisfy the most people? I have an O(n^2) solution:
See how many people would be satisfied having assumed each of the positions 1..n as start positions; then return the position that gave the maximum value. | Not really, as booleans are not always used to indicate that an object "is" something. "has" is an equally valid prefix "was", "can" are also valid in particular circumstances, also, I have seen the suffix "Able" used. So Object herring:-
isFish = true
isCat = false
hasScales = true
hasFur = false
canSwim = true
wasEgg = true
eatAble = true
Object moggy:-
isFish = false
isCat = true
hasScales = false
hasFur = true
canSwim = false
wasEgg = false
eatAble = false It all depends on what makes the program readable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232228",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123158/"
]
} |
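The brute-force O(n^2) approach described in the question can be sketched in Java as follows; prefs[i] is an assumed representation holding the 0-based speaking slot that the person in seat i would prefer.

static int bestStartingSeat(int[] prefs) {
    int n = prefs.length;
    int bestSeat = 0;
    int bestSatisfied = -1;
    for (int start = 0; start < n; start++) {     // try every seat as the first speaker
        int satisfied = 0;
        for (int seat = 0; seat < n; seat++) {
            int slot = (seat - start + n) % n;    // speaking order going clockwise from start
            if (slot == prefs[seat]) satisfied++;
        }
        if (satisfied > bestSatisfied) {
            bestSatisfied = satisfied;
            bestSeat = start;
        }
    }
    return bestSeat;                              // seat whose occupant should speak first
}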
232,229 | I'm reading about dependency injection (DI). To me, it is a very complicated thing to do; as I was reading, it referenced inversion of control (IoC) as well, so I felt I was going to be in for a journey. This is my understanding: Instead of creating a model in the class which also consumes it, you pass (inject) the model (already filled with interesting properties) to where it is needed (to a new class which could take it as a parameter in the constructor). To me, this is just passing an argument. I must have misunderstood the point? Maybe it becomes more obvious with bigger projects? My understanding is non-DI (using pseudo code): public void Start()
{
MyClass class = new MyClass();
}
...
public MyClass()
{
this.MyInterface = new MyInterface();
} And DI would be public void Start()
{
MyInterface myInterface = new MyInterface();
MyClass class = new MyClass(myInterface);
}
...
public MyClass(MyInterface myInterface)
{
this.MyInterface = myInterface;
} Could someone shed some light, as I'm sure I'm in a muddle here? | Well yes, you inject your dependencies, either through a constructor or through properties. One of the reasons for this is not to encumber MyClass with the details of how an instance of MyInterface needs to be constructed. MyInterface could be something that has a whole list of dependencies by itself, and the code of MyClass would become ugly very fast if you had to instantiate all the MyInterface dependencies inside of MyClass. Another reason is for testing. If you have a dependency on a file reader interface and you inject this dependency through a constructor in, say, ConsumerClass, that means that during testing, you can pass an in-memory implementation of the file reader to the ConsumerClass, avoiding the need to do I/O during testing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232229",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119594/"
]
} |
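To make the testing argument above concrete, here is a small Java sketch; the FileReader interface, DiskFileReader and ReportService names are invented for the example.

interface FileReader {
    String read(String path);
}

class DiskFileReader implements FileReader {
    @Override
    public String read(String path) {
        // A real implementation would do I/O here.
        throw new UnsupportedOperationException("omitted in this sketch");
    }
}

class ReportService {
    private final FileReader reader;

    // The dependency is injected; ReportService never constructs a concrete reader itself.
    ReportService(FileReader reader) { this.reader = reader; }

    int countLines(String path) {
        return reader.read(path).split("\n").length;
    }
}

class ReportServiceTest {
    void countsLinesWithoutTouchingTheDisk() {
        // In a test, pass an in-memory fake instead of the disk-backed implementation.
        ReportService service = new ReportService(path -> "a\nb\nc");
        assert service.countLines("ignored.txt") == 3;
    }
}

The production wiring (new ReportService(new DiskFileReader())) then lives in one composition spot, which is the "injection" part; the class itself only declares what it needs.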
232,260 | I am working on project on which I need to open-source part of my code in order to simplify extension by the enduser. What I want, is to make an npm module, that will expose part of my code, so my users can build extensions for the product (in JavaScript), but I want a guarantee that this code will not be used for commercial or other development work, beside extensions for my product. I found Creative Commons Attribution NonCommercial NoDerivs license to be a fit. My problem: Can I , as the author of this code, use it in a commercial closed-source application? Disclaimer: I know this is kind of a legal question, but please, state what you think, no one is holding you accountable/liable for it. Thanks. | Copyright licenses only specify what others can do with your code. If you are the copyright holder of the code, then you have all the rights to do with that code as you like and that includes using the code in ways that is not permitted for others. You are the copyright holder if you wrote the code yourself and it was not written as part of your job or under contract. Regarding your choice of license: Using Creative Commons licenses for software is not recommended . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232260",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123198/"
]
} |
232,359 | I have come across the term "programming to an interface instead of an implementation" a lot, and I think I kind of understand what it means. But I want to make sure I understand its benefits and its possible implementations. "Programming to an interface" means that, when possible, one should refer to a more abstract level of a class (an interface, abstract class, or sometimes a superclass of some sort), instead of referring to a concrete implementation. A common example in Java is to use: List myList = new ArrayList(); instead of ArrayList myList = new ArrayList();. I have two questions regarding this: I want to make sure I understand the main benefits of this approach. I think the benefits are mostly flexibility. Declaring an object as a more high-level reference, rather than a concrete implementation, allows for more flexibility and maintainability throughout the development cycle and throughout the code. Is this correct? Is flexibility the main benefit? Are there more ways of 'programming to an interface'? Or is "declaring a variable as an interface rather than a concrete implementation" the only implementation of this concept? I'm not talking about the Java construct Interface. I'm talking about the OO principle "programming to an interface, not an implementation". In this principle, the word "interface" refers to any "supertype" of a class - an interface, an abstract class, or a simple superclass which is more abstract and less concrete than its more concrete subclasses. | "Programming to an interface" means that, when possible, one should refer to a more abstract level of a class (an interface, abstract class, or sometimes a superclass of some sort), instead of referring to a concrete implementation. This is not correct. Or at least, it is not entirely correct. The more important point comes from a program design perspective. Here, "programming to an interface" means focusing your design on what the code is doing, not how it does it. This is a vital distinction that pushes your design towards correctness and flexibility. The main idea is that domains change far slower than software does. Say you have software to keep track of your grocery list. In the 80's, this software would work against a command line and some flat files on floppy disk. Then you got a UI. Then you maybe put the list in the database. Later on it maybe moved to the cloud or mobile phones or Facebook integration. If you designed your code specifically around the implementation (floppy disks and command lines) you would be ill-prepared for changes. If you designed your code around the interface (manipulating a grocery list) then the implementation is free to change. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232359",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/121368/"
]
} |
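A small Java sketch of the answer's grocery-list example; the interface and class names are invented purely to illustrate designing around what the code does rather than how it does it.

import java.util.ArrayList;
import java.util.List;

// The interface captures WHAT we need: managing a grocery list.
interface GroceryList {
    void add(String item);
    List<String> items();
}

// HOW it is stored can change freely: memory today, a database or the cloud tomorrow.
class InMemoryGroceryList implements GroceryList {
    private final List<String> items = new ArrayList<>();
    @Override public void add(String item) { items.add(item); }
    @Override public List<String> items() { return new ArrayList<>(items); }
}

class ShoppingPlanner {
    private final GroceryList list; // depends only on the abstraction

    ShoppingPlanner(GroceryList list) { this.list = list; }

    int itemsLeftToBuy() { return list.items().size(); }
}

Swapping InMemoryGroceryList for, say, a database-backed implementation later leaves ShoppingPlanner untouched, which is the answer's point about domains changing far more slowly than implementations.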
232,407 | Is there an anti pattern that describes a historically grown software system where multiple developers just added new features to the system but no one really kept an eye on the overall architecture nor were refactorings ever done? I think this happens when management/customer asks for new features constantly and no one ever refactors anything but just adds on to what other developers have done before. A reason could also be that the developer is just overwhelmed with the software system and does not really understand how it currently works and then just adds/glues his code at the very end (instead of refactoring the code and changing it.) So over time it's getting harder and harder to maintain the system. (I wonder if there are pictures for this kind of anti-pattern to make this more clear to non-programming folks - like a car that was built by just adding more and more features without thinking of the overall design. Like when someone has the need to tow trailers while riding backwards, and an engineer just welds a tow bar to the front of the car. Job done. But now the cowl doesn't open anymore.) | You refer to technical debt . We all accrue technical debt in the products we develop over time; refactoring is one of the very common and effective ways of reducing this technical debt, though many companies never pay down their technical debt. These companies tend to find their software extremely unstable years down the road, and the technical debt becomes so gruesome that you can't pay it down incrementally, because it would take too long to pay it down that way. Technical debt has that name because it follows the same behaviours as debt. You get the debt, and as long as you continue spending (creating features) and not paying down that debt, it'll only grow. Much like debt, when it gets too large you get to points where you may wish to shed it entirely with dangerous tasks like full-out rewrites. Also like real debt, as it accrues to a certain point, it hinders your ability to spend (creating features) altogether. Just to throw another term in the mix, cohesion refers to how well a system, micro to the line level, or macro to the system level, fits together. A highly cohesive system will have all of its pieces fit together very well and look like one engineer wrote all of it. Your reference above to somebody just gluing their code to the end would be violating the cohesion of that system. Managing Technical Debt There are a lot of ways to manage technical debt, though like real debt the best approach is to pay it down frequently. Unfortunately like real debt it's occasionally a better idea to accrue more for a short period, where for instance time to market for a feature may double or triple your revenue. The tricky part is weighing these competing priorities as well as identifying when the debt's ROI isn't worth it for the given feature vs. when it is. So sometimes it is worth it to accrue the debt for a short period, but that's rarely the case, and as with all debt, the shorter the period the better. So eventually (preferably quickly) after you accrue technical debt, you have to pay it down; these are common approaches: Refactoring This allows you to take bits of code that were only realized to be misplaced partway through or after implementation was complete, and put them in their correct place (or a more correct one anyway). Rewrite This is like a bankruptcy. 
It wipes the slate clean, but you start with nothing and have every opportunity to make the same mistakes, or even bigger ones. High risk high reward approach to technical debt, but sometimes it's your only option. Though that's more rarely the case than many will tell you. Architecture Overview This is more of an active technical debt pay-down approach. This is done by having someone with authority over the technical details to halt an implementation regardless of project plans and schedules to ensure it accrues less technical debt. Code Freeze Freezing the code of changes can allow you breathing room where your debt doesn't go up or down. This gives you time to plan your approach to reducing the technical debt with hopes of having the highest ROI on your approach. Modularization This is like a tier-2 approach only available when you employ either Architecture Overview to have a modular system already, or Refactoring to move towards one. When you have a modular system, you can then pay down debt in whole pieces of the system in an isolated way. This allows you to do partial re-writes, partial refactoring, as well as minimizing the rate technical debt accrues because the isolation keeps the debt only in those areas where the features went in, as opposed to spread around the system. Automated Tests Automated testing can aid in managing your technical debt, because they can help you identify trouble spots in the system, hopefully before the debt in those areas has grown very large, but even after the fact they can still make engineers aware of those dangerous areas they may not have already realized. Furthermore, once you've got automated tests, you can more freely refactor things without concern for breaking too much. Not because developers won't break things, but because they'll find out when they do break things , relying on manual testers in highly indebted systems tends to have a poor track record for finding issues. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232407",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93307/"
]
} |
232,562 | Most popular applications nowadays require account activation by email. I've never done it with apps that I've developed, so am I missing some crucial security feature? By email activation I mean that when you register with a site, they send you an email that contains a link that you have to click before your account gets activated. | Activation confirms the email is yours. It's not so much about the address being bogus or non-existent as it is about it being yours, and they need it as an alternative/plan-B way of authentication in the first place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232562",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35806/"
]
} |
232,606 | (I'm talking about HTML / CSS code (not programming languages) but I think we also face the same issue as with programmers.) I am the senior front-end designer in a team and I often have to re-work my juniors' output in tight deadlines. I am faced with 2 problems: Their coding style is a bit of a mess. The aesthetics are not good. Their coding style, I find, is a mixed bag with no proper convention / standard. I am torn between cleaning up the code or just dealing with their code (even copying how they do things). I do find it frustrating to follow their coding style as I feel I might learn bad habits. But then, that is the fastest way of meeting the deadline. For those with much more experience, which is more effective? Should I save the clean-up for later? Or clean-up along the way as I make the changes? (I don't want to sound arrogant though but such is the reality. It will take them more years to write better code. I know, I wrote messy code when I was starting.) | I believe you are looking at the problem the wrong way - you are missing a great opportunity of teaching the juniors how to write better code. If you habitually re-write their code, you might give your juniors the impression that you don't value their work, which will lower their morale, and not help them code better the next time. A better approach, I believe, is to add to your team's development process a code-review task. It doesn't have to be about every piece of committed code, and it doesn't (I would argue that it shouldn't) have to be conducted only by you - whenever a member of your team finishes a big enough task he should pair with one (or more) of his team-mates, explain the code to them, and receive constructive opinion and criticism about his design, coding-style, possible bugs and security issues, etc. When the code-reviewing team-mate is you they will learn from your expertise much more then when you simply re-write their code (they get a chance to hear the reason the code should be changed), and might take less offense. Giving them a chance to also conduct code-reviews will further enhance their abilities - seeing how other people write code and why - and will raise their self-esteem. They will also learn a lot if you give them a chance to review your code. You might learn something too - so don't do it just for show! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232606",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
232,711 | In most OOP languages, objects are generally mutable, with a limited set of exceptions (e.g. tuples and strings in Python). In most functional languages, data is immutable. Both mutable and immutable objects bring a whole list of advantages and disadvantages of their own. There are languages that try to marry both concepts, e.g. Scala, where you have (explicitly declared) mutable and immutable data (please correct me if I am wrong; my knowledge of Scala is more than limited). My question is: Does complete (sic!) immutability - i.e. no object can mutate once it has been created - make any sense in an OOP context? Are there designs or implementations of such a model? Basically, are (complete) immutability and OOP opposites or orthogonal? Motivation: In OOP you normally operate on data, changing (mutating) the underlying information while keeping references between those objects. E.g. an object of class Person has a member father referencing another Person object. If you change the name of the father, this is immediately visible to the child object with no need for an update. With immutability you would instead need to construct new objects for both father and child. But you would have a lot less kerfuffle with shared objects, multi-threading, the GIL, etc. | OOP and immutability are almost completely orthogonal to each other. However, imperative programming and immutability are not. OOP can be summarized by two core features: Encapsulation: I will not access the contents of objects directly, but rather communicate via a specific interface ("methods") with this object. This interface can hide internal data from me. Technically, this is specific to modular programming rather than OOP. Accessing data via a defined interface is roughly equivalent to an abstract data type. Dynamic Dispatch: When I call a method on an object, the executed method will be resolved at run time. (E.g. in class-based OOP, I might call a size method on an IList instance, but the call might be resolved to an implementation in a LinkedList class.) Dynamic dispatch is one way to allow polymorphic behavior. Encapsulation makes less sense without mutability (there is no internal state that could be corrupted by external meddling), but it still tends to make abstractions easier even when everything is immutable. An imperative program consists of statements which are executed sequentially. A statement has side effects, like changing the state of the program. With immutability, state cannot be changed (of course, a new state could be created). Therefore, imperative programming is fundamentally incompatible with immutability. It now happens that OOP has historically always been connected with imperative programming (Simula is based on Algol), and all mainstream OOP languages have imperative roots (C++, Java, C#, … are all rooted in C). This does not imply that OOP itself is imperative or mutable; it just means that the implementation of OOP by these languages allows mutability. (A sketch of the father/child example with immutable objects follows this entry.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232711",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123697/"
]
} |
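To make the father/child example in the question concrete, here is a minimal Java sketch of a fully immutable Person, written for this document rather than taken from any library: every "change" is a factory method that returns a new object, so renaming the father also means building a new child that points at the new father. The names withName and withFather are illustrative conventions, not a standard API.

```java
// Minimal sketch of the Person/father example from the question; names are illustrative.
public final class Person {
    private final String name;
    private final Person father; // may be null when unknown

    public Person(String name, Person father) {
        this.name = name;
        this.father = father;
    }

    public String name() { return name; }
    public Person father() { return father; }

    // "Mutation" as a factory: returns a new Person instead of changing this one.
    public Person withName(String newName) {
        return new Person(newName, father);
    }

    public Person withFather(Person newFather) {
        return new Person(name, newFather);
    }

    public static void main(String[] args) {
        Person dad = new Person("Bob", null);
        Person kid = new Person("Alice", dad);

        // With mutable objects, renaming dad would be visible through kid.father().
        // With immutable objects, the affected part of the object graph is rebuilt:
        Person renamedDad = dad.withName("Robert");
        Person updatedKid = kid.withFather(renamedDad);

        System.out.println(kid.father().name());        // still "Bob"
        System.out.println(updatedKid.father().name()); // "Robert"
    }
}
```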
232,746 | Many years ago, I was talking with an Economics professor about design patterns: how they establish a common language for programmers, how they solve well-known problems in a nice manner, etc. He replied that this was exactly the opposite of the approach he would use with his Economics students. He usually presented a problem and asked them to find a solution on their own, so they could think it through and try their own ways of solving it, and only after that did he present the "classical" solution. So I was wondering whether the "design pattern" approach really makes programmers smarter or dumber, since much of the time they just get handed the "right solution for this problem" instead of using creativity and imagination to solve the problem in a new and innovative way. What do you think? | Your economics professor is absolutely correct. Software Design Patterns are primarily a way for experienced software developers to communicate with each other. They are a shorthand for established solutions to known problems. But they should only be used by people who understand how to solve the problem without the pattern, or who have come up with a similar pattern on their own. Otherwise, they will have the same problem as the copy/paste coder; they will have code, but they won't understand how it works, and therefore will be unable to troubleshoot it. Also, many of the design patterns are enterprise patterns, patterns intended to be used in large, corporate software systems. If you learn the wonders of the Inversion of Control container, you will want to use it in every program you write, even though most programs don't actually need it (there are better ways to inject your dependencies in smaller programs, ways that don't require an IoC container; a sketch of plain constructor injection follows this entry). Learn the patterns. Understand the patterns, and their appropriate use. Know how to solve the same problem without the pattern (all software patterns are abstractions over fundamental algorithms). Then you will be able to use software patterns with confidence, when it makes sense to do so. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232746",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/123737/"
]
} |
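To back up the parenthetical claim about smaller programs, here is a hedged Java sketch of plain constructor injection wired by hand in main, with no IoC container involved. Every interface and class name in it (MessageSource, Greeter, ManualWiring) is invented for the example.

```java
// Minimal sketch of constructor injection wired by hand; no IoC container needed.
interface MessageSource {
    String message();
}

class FixedMessageSource implements MessageSource {
    public String message() { return "hello"; }
}

class Greeter {
    private final MessageSource source;

    // The dependency is passed in, not looked up or constructed internally.
    Greeter(MessageSource source) { this.source = source; }

    String greet(String name) { return source.message() + ", " + name; }
}

public class ManualWiring {
    public static void main(String[] args) {
        // The "composition root" is just a few lines in main.
        Greeter greeter = new Greeter(new FixedMessageSource());
        System.out.println(greeter.greet("world"));
    }
}
```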
232,843 | I follow TDD religiously. My projects typically have 85% or better test coverage, with meaningful test cases. I do a lot of work with HBase, and the main client interface, HTable, is a real pain to mock. It takes me 3 or 4 times longer to write my unit tests than it does to write tests that use a live endpoint. I know that, philosophically, tests that use mocks should take priority over tests that use a live endpoint. But mocking HTable is a serious pain, and I'm not really sure it offers much of an advantage over testing against a live HBase instance. Everyone on my team runs a single-node HBase instance on their workstation, and we have single-node HBase instances running on our Jenkins boxes, so it's not an issue of availability. The live endpoint tests obviously take longer to run than the tests that use mocks, but we don't really care about that. Right now, I write live endpoint tests AND mock-based tests for all my classes. I'd love to ditch the mocks, but I don't want quality to decline as a result. What do you all think? | My first recommendation would be to not mock types you don't own. You mentioned HTable being a real pain to mock - maybe you should wrap it instead in an Adapter that exposes the 20% of HTable's features you need, and mock the wrapper where needed. (A sketch of such an adapter, with a mock-based consumer test, follows this entry.) That being said, let's assume we're talking about types you do own. If your mock-based tests are focused on happy-path scenarios where everything goes smoothly, you won't lose anything by ditching them, because your integration tests are probably already covering the exact same paths. However, isolated tests become interesting when you start thinking about how your system under test should react to every little thing that could happen as defined in its collaborator's contract, regardless of the actual concrete object it's talking to. That's part of what some call basic correctness. There could be many of those little cases and many more combinations of them. This is where integration tests start getting unwieldy, while isolated tests remain fast and manageable. To be more concrete: what happens if one of your HTable adapter's methods returns an empty list? What if it returns null? What if it throws a connection exception? The Adapter's contract should define whether any of those things can happen, and any of its consumers should be prepared to deal with these situations - hence the need for tests for them. To sum up: you won't see any quality decline by removing your mock-based tests if they tested the exact same things as your integration tests. However, trying to imagine additional isolated tests (and contract tests) can help you think out your interfaces/contracts more thoroughly and increase quality by tackling defects that would have been hard to anticipate and/or slow to test with integration tests. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/232843",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24173/"
]
} |
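As a hedged sketch of the adapter advice in the answer (not the answerer's actual code), the example below defines a narrow RowStore interface owned by the team; a production implementation would delegate to HTable behind it, while consumers such as the hypothetical UserRepository are unit-tested against a Mockito mock of the adapter, including the "row missing" case that the contract says can happen. All class, method, and column names are invented for illustration, and the test assumes JUnit 4 and Mockito are on the classpath.

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Adapter owned by us is mocked; the HTable it would wrap in production is not.
public class UserRepositoryTest {

    // The ~20% of table behaviour this application actually needs.
    interface RowStore {
        byte[] readCell(String rowKey, String column); // null when the row is missing
    }

    static class UserRepository {
        private final RowStore store;

        UserRepository(RowStore store) { this.store = store; }

        String displayName(String userId) {
            byte[] value = store.readCell(userId, "info:name");
            // Part of the contract: a missing row must not blow up the caller.
            return value == null ? "(unknown)" : new String(value);
        }
    }

    @Test
    public void fallsBackWhenRowIsMissing() {
        RowStore store = mock(RowStore.class);
        when(store.readCell("u1", "info:name")).thenReturn((byte[]) null);

        assertEquals("(unknown)", new UserRepository(store).displayName("u1"));
    }

    @Test
    public void decodesStoredName() {
        RowStore store = mock(RowStore.class);
        when(store.readCell("u1", "info:name")).thenReturn("Ada".getBytes());

        assertEquals("Ada", new UserRepository(store).displayName("u1"));
    }
}
```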