source_id (int64) | question (string) | response (string) | metadata (dict) |
---|---|---|---|
293,868 | I have three views in my program (iOS app). Only one of them is ever active at the same time so I set the visibility off for two of them and switch visibility as the user presses buttons. The views are initialized as visible so I set the visibility off in code before the main view shows. I can do [view1 setAlpha:0.0f];
[view2 setAlpha:0.0f]; for two of the views, but now the third one (the one that should be visible at the start of the app) is not addressed. I put a [view3 setAlpha:1.0f]; after the first two, because I think it keeps it clear that there are in fact three views, not two as one might think when seeing the code. How do other programmers do this? Is it purely preference or are there some conventions? If the call is very heavy, it's obviously better not to call it when that isn't necessary, but I was wondering about small things like my example. | You have an invariant: Only a single view (out of 3) is ever active (and visible). Then, I suggest that you provide a function to switch the activity and visibility of ALL views at once: [setActiveView viewID:2] This function will: check if the view is already active, avoiding unnecessary work set the view as active, and visible set the other 2 views as inactive and invisible It has multiple advantages over a raw call to setVisibility : friendly: calling it unnecessarily does not create a performance issue defensive: its single parameter is much harder to botch, whereas for setVisibility it's harder to remember that the range of values is 0.0f - 1.0f and that only one must be set to 1.0f resilient: the next guy cannot accidentally forget one of the views adaptable: adding/removing a view does not require scrutinizing all the application code to find where the switches are; only a single function (this one) needs to be updated Ideally, to help enforce the invariant, no other function should be able to mess with this setting... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293868",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/144888/"
]
} |
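A minimal, language-agnostic sketch of the single switching function proposed in the answer to 293,868 above, written in Python rather than Objective-C; the `ViewSwitcher` class and the `alpha` attribute on the stand-in views are illustrative, not part of the original question.

```python
from types import SimpleNamespace

class ViewSwitcher:
    """Owns all views and enforces the invariant: exactly one view is visible."""

    def __init__(self, views):
        self._views = list(views)
        self._active_index = None

    def set_active_view(self, index):
        if index == self._active_index:
            return                      # already active: calling this again is harmless and cheap
        for i, view in enumerate(self._views):
            view.alpha = 1.0 if i == index else 0.0
        self._active_index = index

views = [SimpleNamespace(alpha=1.0) for _ in range(3)]   # three hypothetical views
switcher = ViewSwitcher(views)
switcher.set_active_view(2)   # the third view becomes visible, the other two are hidden
```

Adding or removing a view only touches the list passed to the switcher, which is exactly the "adaptable" advantage the answer describes.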
293,918 | A product version, such as v1.0.0.100 , represents not only a unique production release of software, but helps identify feature sets and hotfix stages for said product. Right now I see two ways to maintain the final package/build/binary version of a product: Version Control. A file somewhere stores the version number. Continuous Integration (CI) build server will have a script to build the software that uses this checked-in version number to apply it to all areas of the software needed (binaries, installer packages, help pages, documentation, etc). Environment and/or build parameters. These are maintained outside of version control (i.e. they are not tied to the snapshot/tag/branch). The build scripts distribute and use the number in the same way, however they just obtain the value differently (it is provided to the build script, instead of having the script know where to get it relative to the source tree). The problem with the first approach is that it can complicate merges across mainline branches. If you still maintain 2 parallel releases of the same software, you will resolve conflicts when merging between the two mainlines if the version has changed on both since the last merge. The problem with the second approach is reconciliation. When you go back to a release 1 year ago, you will rely solely on the tag information to identify its release number. In both cases, there might be certain aspects of the version number that are not known prior to the CI build. For example, a CI build may programmatically put in a 4th component that is really the automated build number (e.g. 140th build on the branch). It might also be a revision number in VCS. What is the best way to keep up with a software's version number? Should the "known" parts always be maintained in VCS? And if so, are the conflicts across mainline branches an issue? Right now we maintain our version number via parameters specified and maintained in the CI build plan (Atlassian Bamboo). We have to be careful before merging to our master branch that the version numbers are properly setup in advance of the CI build kicking off . With regards to the Gitflow workflow, I feel that if the version number were tracked in source control, we could guarantee it is setup properly when we create our release branch in preparation of the release. QA would perform final integration/smoke/regression testing on this branch and upon signoff, a merge to master takes place which signals commitment to release. | Personally, I choose option 3: keep versioning information in VCS metadata , specifically, tags. Git makes it very easy to do so, because there is a command git describe , which can uniquely describe a commit based on a tag. Here's how it works: If the current commit is tagged, output the name of the tag. Otherwise, walk the history backwards until you find a tag and then output a description in the following format: <tag>-<number of commits since the tag>-g<abbreviated commit hash> . If there are uncommitted changes in the workingtree, append -dirty . So, if you are doing a release build, and have the commit tagged 1.2.3 , it will output 1.2.3 . If you are currently working on 1.2.4 and you made 4 commits since 1.2.3, and have uncommitted changes in the tree, it will output 1.2.3-4-gdeadbee-dirty . This is guaranteed to be unique and monotonic, as well as human-readable, and thus can be used directly as a version string. The only thing you have to ensure is a proper naming convention for tags. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293918",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31950/"
]
} |
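A sketch of how a build script might pick up the version string described in the answer to 293,918 above. `git describe --tags --dirty` is a real Git invocation; the helper function and how the result is used downstream are illustrative assumptions.

```python
import subprocess

def version_from_git() -> str:
    """Return a version string such as '1.2.3' or '1.2.3-4-gdeadbee-dirty'.

    Assumes the script runs inside a Git working copy whose releases are tagged.
    """
    return subprocess.run(
        ["git", "describe", "--tags", "--dirty"],
        check=True, capture_output=True, text=True,
    ).stdout.strip()

if __name__ == "__main__":
    # The CI build would stamp this into binaries, installers, help pages, etc.
    print(version_from_git())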
293,921 | Consider this: public function polynominal($x, $a, $b, $c, $d)
{
return $a * pow($x, 3) + $b * pow($x, 2) + $c * $x + $d;
} Suppose you write various tests for the above function and prove to yourself and others that "it works". Why not then remove those tests, and live happily ever after? My point is that some functions do not need to be tested continuously after they have been proven to work. I am looking for counter-points that state, yes these functions still need to be tested, because: ... Or that yes, these do not need to be tested... | Regression testing It's all about regression testing . Imagine the next developer looking at your method and noticing that you are using magical numbers. He was told that magical numbers are evil, so he creates two constants, one for the number two, the other one for the number three—there is nothing wrong with making this change; it's not like he was modifying your already correct implementation. Being distracted, he inverts the two constants. He commits the code, and everything seems to work fine, because there are no regression tests running after each commit. One day (it could be weeks later), something breaks elsewhere. And by elsewhere, I mean in the completely opposite location of the code base, which seems to have nothing to do with the polynominal function. Hours of painful debugging lead to the culprit. During this time, the application continues to fail in production, causing a lot of issues for your customers. Keeping the original tests you wrote could prevent such pain. The distracted developer would commit the code, and nearly immediately see that he broke something; such code won't even reach production. Unit tests will additionally be very precise about the location of the error . Solving it wouldn't be difficult. A side effect... Actually, most refactoring is heavily based on regression testing. Make a small change. Test. If it passes, everything is fine. The side effect is that if you don't have tests, then practically any refactoring becomes a huge risk of breaking the code. Given that in many cases it's already difficult to explain to the management that refactoring should be done, it would be even harder to do so after your previous refactoring attempts introduced multiple bugs. By having a complete suite of tests, you are encouraging refactoring, and so better, cleaner code. Risk-free, it becomes very tempting to refactor more, on a regular basis. Changes in requirements Another essential aspect is that requirements change. You may be asked to handle complex numbers , and suddenly, you need to search your version control log to find the previous tests, restore them, and start adding new tests. Why all this hassle? Why remove tests only to add them back later? You could have kept them in the first place. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/293921",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119333/"
]
} |
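To make the "keep the tests as regression tests" argument in 293,921 concrete, here is a hypothetical pytest-style suite for a Python translation of the polynomial function; the function and test names are illustrative, not from the original post.

```python
def polynomial(x, a, b, c, d):
    return a * x**3 + b * x**2 + c * x + d

# Keeping these trivial tests around means a later "harmless" refactoring
# (e.g. accidentally swapping the two exponents) fails on the next CI run
# instead of surfacing weeks later somewhere else in the code base.

def test_cubic_term():
    assert polynomial(2, 1, 0, 0, 0) == 8

def test_quadratic_term():
    assert polynomial(2, 0, 1, 0, 0) == 4

def test_constant_term():
    assert polynomial(0, 5, 7, 11, 42) == 42
```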
294,228 | A colleague of mine told me that he is thinking in making our CI server to revert commits that failed the build, so the HEAD in master is always stable (as in passing the build at least). Is this a best practice or it may be more problematic than just leaving master broken until the developer fixes it? My thinking is that reverting the commit will make more complex the task of readding the commit and fix (developer will have to revert the revert and then commit the fix, which will also clutter the git log ) and we should just leave the commit and then commit the fix. Although I see some advantages in having master stable, this revert of failing commits does not convince me. edit: Doesn't matter if it is master or any other development branch, but the question stays the same: should the CI system revert a commit that failed the build? another (lenghty) edit: Ok, we're using git in a strange way. We believe that the concept of branches goes against real CI, because committing to a branch isolates you from the other developers and their changes, and adds time when you have to reintegrate your branch and deal with possible conflicts. If everyone commits to master this conflicts are reduced to the minimum and every commit passes all tests. Of course, this forces you to push only stable (or you break the build) and program more carefully to not break backwards compatibility or do feature-toggling when introducing new features. There are tradeoffs when doing CI this or that way, but that is out of the scope of question (see related question for this). If you prefer, I may reword the question: a small team of developers work together in a feature branch. If one developer commits something that breaks the build for that branch, should the CI system revert the commit or not? | I would be against doing this for the following reasons: Any time you set up an automated tool to change code on your behalf , there is the risk that it will get it wrong, or that a situation will arise where you need it to stop making that change (e.g., the latest version of Google Mock had a bug in it, so it's not your code failing) and you have to waste time reconfiguring it. Plus, there's always a slight risk that the build will fail because of a bug in the build system, rather than a bug in your code. For me, CI is about gaining confidence that my code is correct; this would merely turn it into another source of potential problems for me to worry about. The kinds of bugs that break "the build" should be silly mistakes that take very little time to fix (as you've indicated in a comment, this is true for you). If more subtle and complicated bugs are regularly making it onto master, then the correct solution is not to "fix it faster", it's to be more careful when reviewing feature branches before they get merged. Leaving master unbuildable for a few minutes while the bug gets fixed properly doesn't hurt anyone. It's not like the CEO will personally check out master and publish the code straight to clients at any random moment (at least, hopefully not without your involvement). In the highly unlikely event that you need to release something before you can fix the bug, then you can easily make the decision to revert manually before publishing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294228",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/19962/"
]
} |
294,320 | We have a lot of code in our internal codebase that calls our libraries internally - these libraries often have a lot of arguments (think matplotlib) and our code is often doing only a specific task and simply passes the **kwargs on to the next function called. E.g.: def our_method(dataframe, **kwargs):
result = do_something_with_data(dataframe)
external_module.draw(result, **kwargs) While **kwargs prevents us from repeating all the parameters in our method declaration, it also makes it extremely opaque which arguments are valid when calling our_method - I have to know which method is called, which I often don't want to know. What is your take on this? | How is your code used by developers? In other words, what exactly do they do to determine which arguments should be used and how? If they rely on documentation automatically generated from your code, and the generator has no clue what to do with **kwargs , this is indeed problematic. Instead of finding the list of arguments and their meaning in the documentation, they have absolutely no information except the vague “it takes some arguments”. This problem may probably be solved by documenting the method manually, replacing the documentation generated automatically. This requires extra work from the implementer of the method, but remember, code (and its documentation) is read much more frequently than it's written. If code is their documentation, the developers who use the method with **kwargs need two additional steps: they need not only to look at the signature of the method, but also at its actual implementation, in order to find the other method it actually calls. Then, they need to go to this other method to finally find what they were looking for. This doesn't involve a lot of effort, but still, the effort should be repeated, again and again. The worst part is that you can't help them by adding documentation: if you comment your method, listing the actual arguments, there is a big risk that the next version of the library your method calls will have different arguments, and your documentation will be outdated, since nobody will recall that it needs to be kept up to date. My recommendation is to rely on **kwargs only for methods which have a reduced scope. Private methods (and by private in a context of Python, I mean methods starting by _ ) which are used in few places in the class are good candidates, for example. On the other hand, methods which are used by dozens of classes all over the code base are very bad candidates. After all, it shouldn't take too much effort to rewrite the arguments of a method you call within the method you write. Hopefully, most methods don't take more than six to eight arguments, and if they do, ask yourself if you shouldn't be refactoring the code. In all cases: Making arguments explicit within your method doesn't require a lot of effort, You may want, later, to validate the arguments anyway (although if you rely only on this point to make arguments explicit, you violate YAGNI). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294320",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81286/"
]
} |
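A sketch of the trade-off discussed in 294,320: the first wrapper hides the real signature behind **kwargs, while the second spells out the parameters it actually forwards, so callers and documentation generators can see them. The helper names and the particular keyword arguments are illustrative stand-ins, not the poster's real API.

```python
def summarize(dataframe):
    """Stand-in for the real data-processing step."""
    return dataframe

def plot(result, *, title=None, color="blue", grid=True):
    """Stand-in for the external drawing call (a matplotlib-like API is assumed)."""
    print(f"plotting {result!r} title={title} color={color} grid={grid}")

# Opaque: callers cannot tell from the signature which keyword arguments are valid.
def draw_summary(dataframe, **kwargs):
    plot(summarize(dataframe), **kwargs)

# Explicit: the forwarded parameters are visible, documentable, and checkable.
def draw_summary_explicit(dataframe, *, title=None, color="blue", grid=True):
    plot(summarize(dataframe), title=title, color=color, grid=grid)

draw_summary_explicit([1, 2, 3], title="demo")
```

As the answer suggests, the explicit form is worth the small amount of repetition for widely used public methods, while **kwargs forwarding is more defensible for narrowly scoped private helpers.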
294,443 | We have this code which, when simplified, looks like this: public class Room
{
public Client Client { get; set; }
public long ClientId
{
get
{
return Client == null ? 0 : Client.Id;
}
}
}
public class Client
{
public long Id { get; set; }
} Now we have three viewpoints. 1) This is good code because the Client property should always be set (i.e. not null) so the Client == null will never occur and the Id value 0 denotes a false Id anyway (this is the opinion of the writer of the code ;-)) 2) You can not rely on the caller to know that 0 is a false value for Id and when the Client property should always be set you should throw an exception in the get when the Client property happens to be null 3) When the Client property should always be set you just return Client.Id and let the code throw a NullRef exception when the Client property happens to be null. Which of these is most correct? Or is there a fourth possibility? | It smells like you should limit the number of states your Room class can be in. The very fact that you're asking about what to do when Client is null is a hint that Room 's state space is too large. To keep things simple I wouldn't allow the Client property of any Room instance to ever be null. That means the code within Room can safely assume the Client is never null. If for some reason in the future Client becomes null resist the urge to support that state. Doing so will increase your maintenance complexity. Instead, allow the code to fail and fail fast. After all, this is not a supported state. If the application gets itself into this state you've already crossed a line of no return. The only reasonable thing to do then is to close the application. This might happen (quite naturally) as the result of an unhandled null reference exception. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294443",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23264/"
]
} |
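The "constrain the state space and fail fast" advice from 294,443, sketched in Python instead of C#; the class and field names mirror the question, but the validation style shown here is only one illustrative way to enforce the invariant.

```python
class Client:
    def __init__(self, client_id: int):
        self.id = client_id

class Room:
    def __init__(self, client: Client):
        if client is None:
            # Refuse to construct the unsupported state instead of handling it everywhere.
            raise ValueError("Room requires a non-null Client")
        self.client = client

    @property
    def client_id(self) -> int:
        # No null checks needed here: the invariant holds by construction.
        return self.client.id

room = Room(Client(42))
print(room.client_id)   # 42
```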
294,588 | I have searched for this question, but I haven't found a good answer. Even the Wikipedia Article on URIs does not explain it thoroughly. I thought it was the protocol for accessing a web page. e.g. HTTP/HTTPS/FTP, but the wiki article says otherwise. Some URI schemes are not associated with any specific protocol (e.g.
"file") and many others do not use the name of a protocol as their
prefix (e.g. "news"). I know what part of the URL is the scheme . But my real question was what does it do? | Okay, I know what part of the URL is the scheme. But my real question was what does it do? It simply tells you how to interpret the part of the URL after the colon. For example, in file://usr/share/doc , the file tells me the part after the colon should be interpreted as a locally-available filesystem path. This isn't identical to a protocol, because there is no transport layer or encoding - a client just uses regular local system calls to access it. Conversely, https://programmers.stackexchange.com specifies a scheme ( https , which in turn means "HTTP over TLS"), but still requires the client to make its own choices about the physical transport used to reach it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294588",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/186734/"
]
} |
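A small illustration of "the scheme tells you how to interpret the rest of the URI" from 294,588, using Python's standard urllib.parse; the example URIs echo the ones in the answer, plus a hypothetical mailto address.

```python
from urllib.parse import urlparse

for uri in ("file://usr/share/doc",
            "https://programmers.stackexchange.com",
            "mailto:someone@example.com"):
    parts = urlparse(uri)
    # The scheme only names the interpretation rule; it does not by itself
    # choose a transport (local file access, HTTP over TLS, an email client, ...).
    print(parts.scheme, "->", parts.netloc or parts.path)
```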
294,748 | What advantage(s) of string literals being read-only justify(-ies/-ied) the: Yet another way to shoot yourself in the foot char *foo = "bar";
foo[0] = 'd'; /* SEGFAULT */ Inability to elegantly initialize a read-write array of words in one line: char *foo[] = { "bar", "baz", "running out of traditional placeholder names" };
foo[1][2] = 'n'; /* SEGFAULT */ Complicating the language itself. char *foo = "bar";
char var[] = "baz";
some_func(foo); /* VERY DANGEROUS! */
some_func(var); /* LESS DANGEROUS! */ Saving memory? I've read somewhere (couldn't find the source now) that long time ago, when RAM was scarce, compilers tried to optimize memory usage by merging similar strings. For example, "more" and "regex" would become "moregex". Is this still true today, in the era of digital blu-ray quality movies? I understand that embedded systems still operate in environment of restricted resources, but still, the amount of memory available has increased dramatically. Compatibility issues? I assume that a legacy program that would try to access read-only memory would either crash or continue with undiscovered bug. Thus no legacy program should try to access string literal and therefor allowing to write to string literal would not harm valid, non-hackish, portable legacy programs. Are there any other reasons? Is my reasoning incorrect? Would it be reasonable to consider a change to read-write string literals in new C standards or at least add an option to compiler? Was this considered before or are my "problems" too minor and insignificant to bother anyone? | Historically (perhaps by rewriting parts of it), it was the contrary. On the very first computers of the early 1970s (perhaps PDP-11 ) running a prototypical embryonic C (perhaps BCPL ) there was no MMU and no memory protection (which existed on most older IBM/360 mainframes). So every byte of memory (including those handling literal strings or machine code) could be overwritten by an erroneous program (imagine a program changing some % to / in a printf(3) format string). Hence, literal strings and constants were writable. As a teenager in 1975, I coded in the Palais de la Découverte museum in Paris on old 1960s era computers without memory protection: IBM/1620 had only a core memory -which you could initialize thru the keyboard, so you had to type several dozens of digits to read the initial program on punched tapes; CAB/500 had a magnetic drum memory; you could disable writing some tracks thru mechanical switches near the drum. Later, computers got some form of memory management unit (MMU) with some memory protection. There was a device forbidding the CPU to overwrite some kind of memory. So some memory segments, notably the code segment (a.k.a. .text segment) became read-only (except by the operating system which loaded them from disk). It was natural for the compiler and the linker to put the literal strings in that code segment, and literal strings became read only. When your program tried to overwrite them, it was bad, an undefined behavior . And having a read-only code segment in virtual memory gives a significant advantage: several processes running the same program share the same RAM ( physical memory pages) for that code segment (see MAP_SHARED flag for mmap(2) on Linux). Today, cheap microcontrollers have some read-only memory (e.g. their Flash or ROM), and keep their code (and the literal strings and other constants) there. And real microprocessors (like the one in your tablet, laptop or desktop) have a sophisticated memory management unit and cache machinery used for virtual memory & paging . So the code segment of the executable program (e.g. in ELF ) is memory mapped as a read-only, shareable, and executable segment (by mmap(2) or execve(2) on Linux; BTW you could give directives to ld to get a writable code segment if you really wanted to). Writing or abusing it is generally a segmentation fault . 
So the C standard is baroque: legally (only for historical reasons), literal strings are not const char[] arrays, but only char[] arrays that are forbidden to be overwritten. BTW, few current languages permit string literals to be overwritten (even Ocaml which historically -and badly- had writable literal strings has changed that behavior recently in 4.02, and now has read-only strings). Current C compilers are able to optimize and have "ions" and "expressions" share their last 5 bytes (including the terminating null byte). Try to compile your C code in file foo.c with gcc -O -fverbose-asm -S foo.c and look inside the generated assembler file foo.s by GCC At last, the semantics of C is complex enough (read more about CompCert & Frama-C which are trying to capture it) and adding writable constant literal strings would make it even more arcane while making programs weaker and even less secure (and with less defined behavior), so it is very unlikely that future C standards would accept writable literal strings. Perhaps on the contrary they would make them const char[] arrays as they morally should be. Notice also that for many reasons, mutable data is harder to handle by the computer (cache coherency), to code for, to understand by the developer, than constant data. So it preferable to have most of your data (and notably literal strings) stay immutable . Read more about functional programming paradigm . In the old Fortran77 days on IBM/7094, a bug could even change a constant: if you CALL FOO(1) and if FOO happened to modify its argument passed by reference to 2, the implementation might have changed other occurrences of 1 into 2, and that was a really naughty bug, quite hard to find. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294748",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
294,983 | A doubly linked list has minimal overhead (just another pointer per cell), and allows you to append to both ends and go back and forth and generally have a lot of fun. | Well, if you look a bit deeper, both actually include arrays in the base language as well: The 5th revised Scheme Report (R5RS) includes the vector type , which are fixed-size integer-indexed collections with better than linear time for random access. The Haskell 98 Report has an array type as well. Functional programming instruction, however, has long emphasized single-linked lists over arrays or double-linked lists. Quite likely overemphasized, in fact. There are several reasons for it, however. First one is that single-linked lists are one of the simplest and yet most useful recursive data types. A user-defined equivalent of Haskell's list type can be defined like this: data List a -- A list with element type `a`...
= Empty -- is either the empty list...
| Cell a (List a) -- or a pair with an `a` and the rest of the list. The fact that lists are a recursive data type means that the functions that work on lists generally use structural recursion . In Haskell terms: you pattern match on the list constructors, and you recurse on a subpart of the list. In these two basic function definitions, I use the variable as to refer to the tail of the list. So note that the recursive calls "descend" down the list: map :: (a -> b) -> List a -> List b
map f Empty = Empty
map f (Cell a as) = Cell (f a) (map f as)
filter :: (a -> Bool) -> List a -> List a
filter p Empty = Empty
filter p (Cell a as)
| p a = Cell a (filter p as)
| otherwise = filter p as This technique guarantees that your function will terminate for all finite lists, and also is a good problem-solving technique—it tends to naturally splits problems into simpler, more tenable subparts. So single-linked lists are probably the best data type to introduce students to these techniques, which are very important in functional programming. The second reason is less of a "why single-linked lists" reason, but more of a "why not double-linked lists or arrays" reason: those latter data types often call for mutation (modifiable variables), which functional programming very often shies away from. So as it happens: In an eager language like Scheme you can't make a double-linked list without using mutation. In a lazy language like Haskell you can make a double-linked list without using mutation. But whenever you make a new list based off that one, you are forced to copy most if not all of the structure of the original. Whereas with single-linked lists you can write functions that use "structure sharing"—new lists can reuse the cells of old lists when appropriate. Traditionally, if you used arrays in an immutable manner it meant that every time you wanted to modify the array you had to copy the whole thing. (Recent Haskell libraries like vector , however, have found techniques that greatly improve on this problem). The third and final reason applies to lazy languages like Haskell primarily: lazy single-linked lists, in practice, are often more similar to iterators than to in-memory lists proper. If your code is consuming the elements of a list sequentially and throwing them out as you go, the object code will only materialize the list cells and its contents as you step forward through the list. This means that the whole list doesn't need to exist in memory at once, only the current cell. Cells before the current one can be garbage collected (which wouldn't be possible with a double-linked list); cells later than the current one don't need to be computed until you get there. It goes even further than that. There's technique used in several popular Haskell libraries, called fusion , where the compiler analyzes your list-processing code and spots intermediate lists that are being generated and consumed sequentially and then "thrown away." With this knowledge, the compiler can completely eliminate the memory allocation of those lists' cells. This means that a single-linked list in a Haskell source program, after compilation, might actually get turned into a loop instead of a data structure. Fusion is also the technique that the aforementioned vector library uses to generate efficient code for immutable arrays. Same goes for the extremely popular bytestring (byte arrays) and text (Unicode strings) libraries, that were built as a replacement for Haskell's not-very-great native String type (which is the same as [Char] , single-linked list of character). So in modern Haskell there is a trend where immutable array types with fusion support are becoming very common. List fusion is facilitated by the fact that in a single-linked list you can go forward but never backwards . This brings up a very important theme in functional programming: using the "shape" of a data type to derive the "shape" of a computation. If you want to process elements sequentially a single-linked list is a data type that, when you consume it with structural recursion, gives you that access pattern very naturally. 
If you want to use a "divide and conquer" strategy to attack a problem, then tree data structures tend to support that very well. A lot of people drop out of the functional programming wagon early on, so they get exposure to the single-linked lists but not to the more advanced underlying ideas. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294983",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/168958/"
]
} |
294,985 | Java is object oriented, but why do we need to create an object from the Scanner class to get input? Couldn't next() methods, for example, just be static? C looks much simpler to me, as you just use scanf() , gets() or fgets() .
I'm sure there's a reason for Java developers for creating the Scanner class, but how is it better than just having a normal function to do the work? I've found this link that may seem asking the same question, but answers are just about "you need to create an object because is not static"... My guess is: since Java is Object Oriented, they decided to put all input methods in a class. They didn't do static methods so you can have all kind of different sources (keyboard Input, file Input...) in different objects ? I would appreciate if someone can edit the question to make it sound more clear! | The answer is "because a scanner has state." Looking at the code for java.util.Scanner , you will see a number of private fields such as a buffer and its associated information, a Matcher, a Pattern, an input source, information about if the source is closed or not, the type of the last thing matched, information about if the last thing was a valid match or not, the radix used for numbers, the locale (information about if you are using . or , as a thousands separator), and its own LRU cache for recently used patterns, the information about the last exception that was encountered, some information about parsing numbers, some information about parsing booleans, quite a bit more information about parsing integers... and I think thats about it. As you can see, thats a fairly large block of text there. That is the state of the Scanner. In order to make the Scanner into a static class, that state would need to be stored somewhere else. The C way of doing it really doesn't have that much state with it. You've got a fscanf . The FILE maintains some state about the position it is at (but that needs to be passed in for each invocation of fscanf ). If there was an error, you have to process it (and then you start writing code that looks like this ) - and that doesn't tell you information like "I was expecting an Integer, but found a String." When one looks at the theoretically static Scanner - all of the state is maintained outside of the class, it isn't encapsulated within the class. Other bits of code could tinker with those variables. When other code can tinker with the state of the class, it becomes very difficult to reason about what the class will do in any given situation. You could, maybe, write something like ScannerState { Locale loc; ... } and have code that results in: ScannerState state = new ScannerState(a whole lot of arguments);
int foo = Scanner.nextInt(state); But then, this is much more cumbersome than having the state encapsulated within a Scanner object in the first place (and not needing to pass in the state). Lastly, the Scanner implements the interface of Iterator<String> which means that one can use it in code such as: Scanner in = new Scanner(someFile);
while(in.hasNext()) { ... } Without being able to get an instance of the Scanner class, this type of structure becomes more cumbersome within an object-oriented language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/294985",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/193442/"
]
} |
295,119 | I'm actually studying the Flux pattern and there's something that I can't understand concerning the stores . What are they exactly? I have read many articles, and it seems that it concerns the domain. Does it mean that this is the "abstract" part related to api calls or backend calls? It's not very clear to me. Edit: Could it be the same thing as the angular factory? Fetching remote data, performing a business task, or storing some app state (the currently connected user, for example)? | OK, let me explain it step by step. 1. What is Flux? A pattern Centralized dispatcher Unidirectional data flows They call it Flux for a reason too. Flux Implementations Facebook’s Flux Alt Reflux Flummox NuclearJS Fluxible A Chat with Flux React : Hey Action, someone clicked this “Save Course” button. Action : Thanks React! I registered an action creator with the dispatcher, so the dispatcher should take care of notifying all the stores that care. Dispatcher : Let me see who cares about a course being saved. Ah! Looks like the Store has registered a callback with me, so I’ll let her know. Store : Hi dispatcher! Thanks for the update! I’ll update my data with the payload you sent. Then I’ll emit an event for the React components that care. React : Ooo! Shiny new data from the store! I’ll update the UI to reflect this! Flux API register(function callback) –“Hey dispatcher, run me when actions happen. -Store” unregister(string id) –“Hey dispatcher, stop worrying about this action. -Store” waitFor(array ids) –“Update this store first. –Store” dispatch(object payload) -“Hey dispatcher, tell the stores about this action. -Action” isDispatching() –“I’m busy dispatching callbacks right now.” So the question that arises is: Is Flux a Publish-Subscribe Model? Not quite. It differs in two ways: 1. Every payload is dispatched to all registered callbacks. 2. Callbacks can wait for other callbacks. Summary Flux is a pattern for unidirectional data flows
Actions encapsulate events
Dispatcher is a central hub that holds callbacks
Stores hold app state
Many implementations | {
"source": [
"https://softwareengineering.stackexchange.com/questions/295119",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/160326/"
]
} |
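A toy Python sketch of the dispatcher/store/action flow described in 295,119 (register, dispatch, store updates its state and notifies views); it only mimics the shape of the Flux API listed in the answer and is not any real Flux implementation.

```python
class Dispatcher:
    """Central hub that holds callbacks registered by stores."""

    def __init__(self):
        self._callbacks = []

    def register(self, callback):
        self._callbacks.append(callback)      # "run me when actions happen" (Store)

    def dispatch(self, payload):
        for callback in self._callbacks:      # every payload goes to every registered store
            callback(payload)

class CourseStore:
    """Holds app state and reacts to actions it cares about."""

    def __init__(self, dispatcher):
        self.courses = []
        dispatcher.register(self.on_action)

    def on_action(self, payload):
        if payload["type"] == "SAVE_COURSE":
            self.courses.append(payload["course"])
            print("store changed, views would re-render:", self.courses)

dispatcher = Dispatcher()
store = CourseStore(dispatcher)
# The "Save Course" button's action creator ends up dispatching something like this:
dispatcher.dispatch({"type": "SAVE_COURSE", "course": "Flux 101"})
```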
295,202 | We recently had a problem whereby a feature for our webapp (automatic signup) was postponed by management because they felt the start was too "cold" but they wanted all the other features we had been working on to go live. The problem is that this functionality had been merged into develop when it was finished along with all the other features that we expected to push live on the next release, so we couldn't just merge dev -> test -> master like we usually do. How could we have avoided this issue? | One approach is feature flagging it. It can live in the code base but be disabled by configuration. Another option is to make a revert commit that reverts the feature merge so that it's not in develop any more. A new branch can be made which reverts the revert, and be left pending to merge later. If you're using GitHub pull requests, you can do this easily with the "revert merge" button on a merged pull request. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/295202",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175261/"
]
} |
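A minimal sketch of the feature-flag option from the answer to 295,202; the flag name, configuration file, and signup functions are hypothetical.

```python
import json

def load_flags(path="feature_flags.json"):
    """Read flags from configuration; a missing file means every flag is off."""
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}

FLAGS = load_flags()

def automatic_signup(user):      # the postponed feature: merged, but kept dark
    return f"auto-signed up {user}"

def manual_signup(user):         # the existing behaviour
    return f"please sign up manually, {user}"

def signup(user):
    # The feature ships in the release but stays disabled until the flag is flipped.
    if FLAGS.get("automatic_signup", False):
        return automatic_signup(user)
    return manual_signup(user)

print(signup("alice"))
```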
295,230 | I have seen, in many places, that it is canonical wisdom 1 that it is the responsibility of the caller to ensure you are on the UI thread when updating UI components (specifically, in Java Swing, that you are on the Event Dispatch Thread ). Why is this so? The Event Dispatch Thread is a concern of the view in MVC / MVP / MVVM; to handle it anywhere but the view creates a tight coupling between the view's implementation, and the threading model of that view's implementation. Specifically, let's say I have an MVC architected application that uses Swing. If the caller is responsible for updating components on the Event Dispatch Thread, then if I try to swap out my Swing View implementation for a JavaFX implementation, I must change all the Presenter / Controller code to use the JavaFX Application thread instead . So, I suppose I have two questions: Why is it the caller's responsibility to ensure UI component thread safety? Where is the flaw in my reasoning above? How can I design my application to have loose coupling of these thread safety concerns, yet still be appropriately thread-safe? Let me add some MCVE Java code to illustratrate what I mean by "caller responsible" (there are some other good practices here that I'm not doing but I'm trying on purpose to be as minimal as possible): Caller being responsible: public class Presenter {
private final View;
void updateViewWithNewData(final Data data) {
EventQueue.invokeLater(new Runnable() {
public void run() {
view.setData(data);
}
});
}
} public class View {
void setData(Data data) {
component.setText(data.getMessage());
}
} View being responsible: public class Presenter {
private final View;
void updateViewWithNewData(final Data data) {
view.setData(data);
}
} public class View {
void setData(Data data) {
EventQueue.invokeLater(new Runnable() {
public void run() {
component.setText(data.getMessage());
}
});
}
} 1: The author of that post has the highest tag score in Swing on Stack Overflow. He says this all over the place and I have also seen it being the caller's responsibility in other places, too. | Toward the end of his failed dream essay , Graham Hamilton (a major Java architect) mentions if developers "are to preserve the equivalence
with an event queue model, they will need to follow various
non-obvious rules," and having a visible and explicit event queue model "seems to help people to more reliably follow the model and thus construct GUI programs that work reliably." In other words, if you try to put a multithreaded facade on top of an event queue model, the abstraction will occasionally leak in non-obvious ways that are extremely difficult to debug. It seems like it will work on paper, but ends up falling apart in production. Adding small wrappers around single components probably isn't going to be problematic, like to update a progress bar from a worker thread. If you try to do something more complex that requires multiple locks, it starts getting really difficult to reason about how the multithreaded layer and the event queue layer interact. Note that these kinds of issues are universal to all GUI toolkits. Presuming an event dispatch model in your presenter/controller isn't tightly coupling you to only one specific GUI toolkit's concurrency model, it's coupling you to all of them . The event queueing interface shouldn't be that hard to abstract. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/295230",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88986/"
]
} |
296,445 | I've been discussing this with colleagues, and we couldn't figure out what the use is of .Any for any given List<> , in C#. You can check the validity of an element in the array like the following statement: if (MyList.Any()){ ...} //Returns true or false Which is exactly the same as if (MyList.Count() != 0) { ... } and is much more common, readable and clear about the intent of the if statement. In the end, we were stuck with this thought: .Any() can be used, will work just as well, but is less clear about
the intent of the programmer, and in that case it should not be used. But we feel like this can't be right; we must be missing something. Are we? | Keep in mind that Any doesn't operate on a List ; it operates on an IEnumerable , which represents a concrete type that may or may not have a Count property. It's true that it's not necessarily the best thing to use on a List , but it definitely comes in handy at the end of a LINQ query. And even more useful than the standalone version is the override that takes a predicate, just like Where . There's nothing built in on List that's anywhere near as convenient or expressive as the predicate- Any extension method. Also, if you're using Count() (the LINQ extension method for IEnumerable), rather than Count (the property on List ), it may have to enumerate the entire sequence if it can't optimize this away by detecting that your underlying data type has a Count property . If you have a long sequence, this can be a noticeable performance hit when you don't really care about what the count is, and just want to know if there are any items in the collection. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/296445",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/151303/"
]
} |
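The same short-circuiting point from 296,445, restated in Python terms purely as an analogy (this is not C#): asking "is there anything?" with any() can stop at the first element of a lazily produced sequence, while counting forces full enumeration.

```python
def expensive_items(n):
    """A lazily evaluated sequence, loosely analogous to an IEnumerable from a query."""
    for i in range(n):
        yield i * i

# Stops after the first element: we only wanted to know whether anything is there.
has_items = any(True for _ in expensive_items(100_000))

# Forces the whole sequence to be generated just to compare the count with zero.
has_items_slow = sum(1 for _ in expensive_items(100_000)) != 0

print(has_items, has_items_slow)
```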
296,721 | My instructor once told me that I should not define a variable inside a loop , but I honestly still do not understand why. What are the disadvantages of that? Could anybody explain that to me? | It's not a problem to define a variable within a loop. In fact, it's good practice, since identifiers should be confined to the smallest possible scope. What's bad is to assign a variable within a loop if you could just as well assign it once before the loop runs. Depending on how complex the right-hand side of the assignment is, this could become rather expensive and might even dominate the run time of the loop. If you write a loop that uses the same computed value in all iterations, you should definitely compute it above the loop - that is more important than minimizing its scope. To clarify: as long as compute() always returns the same value, this int value = compute();
while (something) {
doSomething(value);
} is smarter than this: while (something) {
int value = compute();
doSomething(value);
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/296721",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/192185/"
]
} |
296,726 | I have a complex object which routinely needs to compute a sub-object representing various aspects of the parent's state as a bundle. For example, imagine the object represents information about an aircraft and the sub-object is a summary of various key parameters of the aircraft at some particular time such as its heading, speed, etc. The object needs to store the last computed version of the sub-object. Currently the way I do this is that I have a void method in the object that does the computation and then sets a module-level variable to be equal to the newly computed sub-object. The outside user of this sub-object then retrieves the current sub-object via a getter. So, for the client methods the operation looks like this: main_object.computeState();
State new_state = main_object.getState(); To handle possible errors, however, in the computation, which is currently not done, I am thinking of changing to a compute method that returns the state: State new_state = main_object.computeState( error_msg );
if( new_state == null ){
print( error_msg );
goto failure continuation
}
.... [everything ok, continue normally ] Is there a better strategy for constructing this pattern? | It's not a problem to define a variable within a loop. In fact, it's good practice, since identifiers should be confined to the smallest possible scope. What's bad is to assign a variable within a loop if you could just as well assign it once before the loop runs. Depending on how complex the right-hand side of the assignment is, this could become rather expensive and might even dominate the run time of the loop. If you write a loop that uses the same computed value in all iterations, you should definitely compute it above the loop - that is more important than minimizing its scope. To clarify: as long as compute() always returns the same value, this int value = compute();
while (something) {
doSomething(value);
} is smarter than this: while (something) {
int value = compute();
doSomething(value);
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/296726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/148612/"
]
} |
296,752 | PHP is a mixed paradigm language, allowing to use and return non-object data types, such as arrays. I pose a question to try to clarify some guidelines for selection of arrays vs objects when deciding upon what programming construct to use in a particular situation. This is really a question about ways to encode data using PHP language constructs and when one way is to be more likely picked over another for data passing purposes (i.e. Service-Oriented Architecture or web services). Example Suppose you have an item type consisting of {cost, name, part_number, item_count}. Your program calls for display of several such item types, to where you decide to use an array as an outer container to hold each of the item types. [You can also use PHP's ArrayObject for OO paradigm, but my question is not about that (outer) array]. My question is about how to encode the item type data, and about which paradigm to use. PHP allows you to use PHP Native Arrays or PHP Objects . I can encode such data, in two ways here, like so: //PHP's associative arrays:
$ret = array(
0 => array(
'cost' => 10.00,
'name' => 'item1',
'part_number' => 'zyz-100',
'item_count' => 15
),
1 => array(
'cost' => 34.00,
'name' => 'item2',
'part_number' => 'abc-230',
'item_count' => 42
),
); vs //here ItemType is encapsulated into an object
$ret = array(
0 => new ItemType(10.00, 'item1', 'zyz-100', 15),
1 => new ItemType(34.00, 'item2', 'abc-230', 42),
);
class ItemType
{
private $price;
private $name;
private $partNumber;
private $itemCount;
function __construct($price, $name, $partNumber, $itemCount) {..}
} What I am thinking Array encoding is light-weight, and more JSON-ready, but can be easier to mess up. Misspell one of the associative array keys and you may have an error that is more difficult to catch. But it is also easier to change on a whim. Say I don't want to store item_count anymore, I can use any text-processing software to easily remove all item_count instances in the array and then update other functions that use it accordingly. It may be a more tedious process, but it is simple. Object oriented encoding calls upon IDE and PHP language facilities and makes it easier to catch any errors beforehand, but is harder to program and code up in the first place. I say harder, because you have to think a bit about your objects, think ahead, and OO coding takes a bit higher cognitive load than typing up array structures. That said, once it is coded up, some changes maybe easier to implement, in a sense, that removing item_count , for example, will require changing less lines of code. But changes themselves may still require a higher cognitive load in comparison with the array method, since higher-level OO facilities are involved. Question In some cases it is clear, like cases where I will need to perform manipulations on the data. But in some cases, where I need to just store a few lines of "Item Type" data, I don't have clear guidelines or considerations to lean on when trying to decide whether to use arrays or whether to construct objects. It seems I can just toss a coin and pick one. Is that the case here? | The way I see this, it depends on what you intend to do with the data afterwards. Based on a few simple checks you can determine which of the two data structures is better for you: Does this data have any logic associated with it? For example, is $price stored as an integer number of cents, so a product with a price of $9.99 would have price = 999 and not price = 9.99 ? (Probably, yes) Or does partNumber need to match a specific regex? Or, do you need to be able to easily check if the itemCount is available in your inventory? Will you need to do these these functions in the future? If so, then your best bet is to create a class now. This means that you can define constraints and logic built into the data structure: private $myPrice is set to 999 but $item->getPriceString() returns $9.99 and $item->inStock() is available to be called in your application. Are you going to be passing this data to multiple PHP functions? If so, then use a class. If you're generating this data once to perform some transformations on it, or just to send as JSON data to another application (JavaScript or otherwise) then an array is an easier choice. But if you have more than two PHP functions which accept this data as a parameter, use a class. If nothing else, that lets you define someFunction(MyProductClass $product) { and it's very clear what your functions expect as input. As you scale out your code and have more functions it will be much easier to know what type of data each function accepts. Seeing someFunction($someArrayData) { is not nearly as clear. Also, this does not enforce type consistency and means that (as you said) the flexible structure of the array can cause development pain later on Are you building a library or shared code base? If so, use a class! Think about some new developer who is using your library, or another developer somewhere else in the company who has never used your code before. 
It will be much easier for them to look at a class definition and understand what that class does, or see a number of functions in your library which accept objects of a certain class, than to try and guess what structure they need to generate in a number of arrays. Also, this touches on the data consistency issues with #1: if you're developing a library or shared code base, be nice to your users: give them classes which enforce data consistency and protect them from making errors with the design of your data. Is this a small part of an application or just a transformation of data? Do you not fit into any of the above? A class might be too much; use an array if it suits you and you find it easier. As mentioned above, if you're just generating structured data to send as JSON or YAML or XML or whatever, don't bother with a class unless there's a need to. If you are writing a small module in a larger application, and no other modules/teams need to interface with your code, maybe an array is sufficient. Ultimately, consider the scaling needs of your code and consider than a structured array might be a quick fix, but a class is a much more resilient and scalable solution. Also, consider the following: if you have a class, and you want to output to JSON, there's no reason you can't define a json_data() method of your class which returns a JSON-ifiable array of the data in the class. This is what I did in my PHP applications where I needed to send class data as JSON. As an example: class Order {
private $my_total;
private $my_lineitems;
public function getItems() { return $this->my_lineitems; }
public function addItem(Product $p) { $this->my_lineitems[] = $p; }
public function getTotal() { return $this->my_total; }
public function forJSON() {
$items_json = array();
foreach($this->my_lineitems as $item) $items_json[] = $item->forJSON();
return array(
'total' => $this->getTotal(),
'items' => $items_json
);
}
}
$o = new Order();
// do some stuff with it
$json = json_encode($o->forJSON()); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/296752",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119333/"
]
} |
296,803 | Is there a difference between these two versions of code? foreach (var thing in things)
{
int i = thing.number;
// code using 'i'
// pay no attention to the uselessness of 'i'
} int i;
foreach (var thing in things)
{
i = thing.number;
// code using 'i'
} Or does the compiler not care? When I'm speaking of difference I mean in terms of performance and memory usage. ...Or basically just any difference, or do the two end up being the same code after compilation? | TL;DR - they're equivalent examples at the IL layer. DotNetFiddle makes this pretty easy to answer as it allows you to see the resulting IL. I used a slightly different variation of your loop construct in order to make my testing quicker. I used: Variation 1: using System;
public class Program
{
public static void Main()
{
Console.WriteLine("Hello World");
int x;
int i;
for(x=0; x<=2; x++)
{
i = x;
Console.WriteLine(i);
}
}
} Variation 2: Console.WriteLine("Hello World");
int x;
for(x=0; x<=2; x++)
{
int i = x;
Console.WriteLine(i);
} In both cases, the compiled IL output rendered the same. .class public auto ansi beforefieldinit Program
extends [mscorlib]System.Object
{
.method public hidebysig static void Main() cil managed
{
//
.maxstack 2
.locals init (int32 V_0,
int32 V_1,
bool V_2)
IL_0000: nop
IL_0001: ldstr "Hello World"
IL_0006: call void [mscorlib]System.Console::WriteLine(string)
IL_000b: nop
IL_000c: ldc.i4.0
IL_000d: stloc.0
IL_000e: br.s IL_001f
IL_0010: nop
IL_0011: ldloc.0
IL_0012: stloc.1
IL_0013: ldloc.1
IL_0014: call void [mscorlib]System.Console::WriteLine(int32)
IL_0019: nop
IL_001a: nop
IL_001b: ldloc.0
IL_001c: ldc.i4.1
IL_001d: add
IL_001e: stloc.0
IL_001f: ldloc.0
IL_0020: ldc.i4.2
IL_0021: cgt
IL_0023: ldc.i4.0
IL_0024: ceq
IL_0026: stloc.2
IL_0027: ldloc.2
IL_0028: brtrue.s IL_0010
IL_002a: ret
} // end of method Program::Main So to answer your question: the compiler optimizes out the declaration of the variable, and renders the two variations equivalent. To my understanding, the .NET IL compiler moves all variable declarations to the beginning of the function but I couldn't find a good source that clearly stated that 2 . In this particular example, you see that it moved them up with this statement: .locals init (int32 V_0,
int32 V_1,
bool V_2) Wherein we get a bit too obsessive in making comparisons.... Case A, do all variables get moved up? To dig into this a bit further, I tested the following function: public static void Main()
{
Console.WriteLine("Hello World");
int x=5;
if (x % 2==0)
{
int i = x;
Console.WriteLine(i);
}
else
{
string j = x.ToString();
Console.WriteLine(j);
}
} The difference here is that we declare either an int i or a string j based upon the comparison. Again, the compiler moves all the local variables to the top of the function 2 with: .locals init (int32 V_0,
int32 V_1,
string V_2,
bool V_3) I found it interesting to note that even though int i won't be declared in this example, the code to support it is still generated. Case B: What about foreach instead of for ? It was pointed out that foreach has different behavior than for and that I wasn't checking the same thing that had been asked about. So I put in these two sections of code to compare the resulting IL. int declaration outside of the loop: Console.WriteLine("Hello World");
List<int> things = new List<int>(){1, 2, 3, 4, 5};
int i;
foreach(var thing in things)
{
i = thing;
Console.WriteLine(i);
} int declaration inside of the loop: Console.WriteLine("Hello World");
List<int> things = new List<int>(){1, 2, 3, 4, 5};
foreach(var thing in things)
{
int i = thing;
Console.WriteLine(i);
} The resulting IL with the foreach loop was indeed different from the IL generated using the for loop. Specifically, the init block and the loop section changed. .locals init (class [mscorlib]System.Collections.Generic.List`1<int32> V_0,
int32 V_1,
int32 V_2,
class [mscorlib]System.Collections.Generic.List`1<int32> V_3,
valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32> V_4,
bool V_5)
...
.try
{
IL_0045: br.s IL_005a
IL_0047: ldloca.s V_4
IL_0049: call instance !0 valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::get_Current()
IL_004e: stloc.1
IL_004f: nop
IL_0050: ldloc.1
IL_0051: stloc.2
IL_0052: ldloc.2
IL_0053: call void [mscorlib]System.Console::WriteLine(int32)
IL_0058: nop
IL_0059: nop
IL_005a: ldloca.s V_4
IL_005c: call instance bool valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>::MoveNext()
IL_0061: stloc.s V_5
IL_0063: ldloc.s V_5
IL_0065: brtrue.s IL_0047
IL_0067: leave.s IL_0078
} // end .try
finally
{
IL_0069: ldloca.s V_4
IL_006b: constrained. valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32>
IL_0071: callvirt instance void [mscorlib]System.IDisposable::Dispose()
IL_0076: nop
IL_0077: endfinally
} // end handler The foreach approach generated more local variables and required some additional branching. Essentially, on the first time in it jumps to the end of the loop to get the first iteration of the enumeration and then jumps back to almost the top of the loop to execute the loop code. It then continues to loop through as you'd expect. But beyond the branching differences caused by using the for and foreach constructs, there was no difference in the IL based upon where the int i declaration was placed. So we're still at the two approaches being equivalent. Case C: What about different compiler versions? In a comment that was left [1], there was a link to an SO question regarding a warning about variable access with foreach and using closure . The part that really caught my eye in that question was that there may have been differences in how the .NET 4.5 compiler worked versus earlier versions of the compiler. And that's where the DotNetFiddle site let me down - all they had available was .NET 4.5 and a version of the Roslyn compiler. So I brought up a local instance of Visual Studio and started testing out the code. To make sure I was comparing the same things, I compared locally built code at .NET 4.5 to the DotNetFiddle code. The only difference that I noted was with the local init block and variable declaration. The local compiler was a bit more specific in naming the variables. .locals init ([0] class [mscorlib]System.Collections.Generic.List`1<int32> things,
[1] int32 thing,
[2] int32 i,
[3] class [mscorlib]System.Collections.Generic.List`1<int32> '<>g__initLocal0',
[4] valuetype [mscorlib]System.Collections.Generic.List`1/Enumerator<int32> CS$5$0000,
[5] bool CS$4$0001) But with that minor difference, it was so far, so good. I had equivalent IL output between the DotNetFiddle compiler and what my local VS instance was producing. So I then rebuilt the project targeting .NET 4, .NET 3.5, and for good measure .NET 3.5 Release mode. And in all three of those additional cases, the generated IL was equivalent. The targeted .NET version had no effect on the IL that was generated in these samples. To summarize this adventure: I think we can confidently say that the compiler does not care where you declare the primitive type and that there is no effect upon memory or performance with either declaration method. And that holds true regardless of using a for or foreach loop. I considered running yet another case that incorporated a closure inside of the foreach loop. But you had asked about the effects of where a primitive type variable was declared, so I figured I was delving too far beyond what you were interested in asking about. The SO question I mentioned earlier has a great answer that provides a good overview about closure effects on foreach iteration variables.
[1] Thank you to Andy for providing the original link to the SO question addressing closures within foreach loops.
[2] It's worth noting that the ECMA-335 spec addresses this with section I.12.3.2.2 'Local variables and arguments'. I had to see the resulting IL and then read the section for it to be clear regarding what was going on. Thanks to ratchet freak for pointing that out in chat. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/296803",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154465/"
]
} |
296,867 | So I had to write some code related to splitting Bezier curves into parts. I read through several references and particularly referred to this rather detailed one . The final code outcome is, however, around 20-30 LOC. But without this kind of background, it would be really difficult for someone to figure out what the code is doing. Explaining it in detail would require me to write too many comments as the function's explanation. Putting a link to this document into the comments did not seem a very good idea (links might break in the future). Q. Should I rather generate this as a doc, keep it locally with the project docs, and give a reference to it in the comments? Q. Is there any other, nicer way in general to give comments about some rather complex/large area of work associated with a particular piece of functionality? P.S. I don't want somebody reading this code later to curse me for what it is, so, you see :p | Having a big comment section explaining the "whys" and "hows" of a complicated algorithm is a good idea. And it is better to have it close to the code, so that the developer does not need to switch context to read about it (even worse - switching back and forth between the algorithm and a document). Just remember to include a sort of TL;DR on top of the lengthy comment, for those who need to get just the idea/outline without the details of the implementation. P.S. I was porting a project with such comment blocks a few months ago - they were very helpful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/296867",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59554/"
]
} |
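To make the advice above concrete, here is one possible shape for such a comment block (a Python sketch; the function body, the algorithm wording and the docs/ path are illustrative, not taken from the original poster's code). The TL;DR line serves skimmers, the details serve maintainers, and the reference points at a copy kept inside the repository so a dead external link cannot take the explanation with it.
def split_bezier(points, t):
    """Split a Bezier curve at parameter t into two curves.

    TL;DR: De Casteljau's algorithm - repeatedly interpolate between
    neighbouring control points at ratio t; the first point of every
    interpolation level gives the left curve, the last point of every
    level gives the right curve.

    Full derivation and diagrams: docs/bezier_splitting.md (kept in the
    repository so the explanation survives even if external links rot).
    """
    left, right = [points[0]], [points[-1]]
    while len(points) > 1:
        points = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                  for p, q in zip(points, points[1:])]
        left.append(points[0])
        right.append(points[-1])
    return left, right[::-1]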
297,090 | This question is somewhat language-agnostic, but not completely, since Object Oriented Programming (OOP) is different in, for example, Java , which doesn't have first-class functions, than it is in Python . In other words, I feel less guilty for creating unnecessary classes in a language like Java, but I feel like there might be a better way in the less boilerplate-y languages like Python. My program needs to do a relatively complex operation a number of times.
That operation requires a lot of "bookkeeping", has to create and delete some temporary files, etc. That's why it also needs to call a lot of other "suboperations" - putting everything into one huge method isn't very nice, modular, readable, etc. Now these are approaches that come to my mind: 1. Make a class that has one public method only and keeps the internal state needed for the suboperations in its instance variables. It would look something like this: class Thing:
def __init__(self, var1, var2):
self.var1 = var1
self.var2 = var2
self.var3 = []
def the_public_method(self, param1, param2):
self.var4 = param1
self.var5 = param2
self.var6 = param1 + param2 * self.var1
self.__suboperation1()
self.__suboperation2()
self.__suboperation3()
def __suboperation1(self):
# Do something with self.var1, self.var2, self.var6
# Do something with the result and self.var3
# self.var7 = something
# ...
self.__suboperation4()
self.__suboperation5()
# ...
def __suboperation2(self):
# Uses self.var1 and self.var3
# ...
# etc. The problem I see with this approach is that the state of this class makes sense only internally, and it can't do anything with its instances except call their only public method. # Make a thing object
thing = Thing(1,2)
# Call the only method you can call
thing.the_public_method(3,4)
# You don't need thing anymore 2. Make a bunch of functions without a class and pass the various internally needed variables between them (as arguments). The problem I see with this is that I have to pass a lot of variables between functions. Also, the functions would be closely related to each other, but they wouldn't be grouped together. 3. Like 2. but make the state variables global instead of passing them. This would be no good at all, since I have to do the operation more than once, with different input. Is there a fourth, better, approach? If not, which one of these approaches would be better, and why? Is there something I'm missing? | Make a class that has one public method only and keeps the internal state needed for the suboperations in its instance variables. The problem I see with this approach is that the state of this class makes sense only internally, and can't do anything with its instances except call their only public method. Option 1 is a good example of encapsulation used correctly. You want the internal state to be hidden from outside code. If that means your class only has one public method, then so be it. It'll be that much easier to maintain. In OOP if you have a class that does exactly 1 thing, has a small public surface, and keeps all its internal state hidden, then you are (as Charlie Sheen would say) WINNING . Make a bunch of functions without a class and pass the various internally needed variables between them (as arguments). The problem I see with this is that I have to pass a lot of variables between functions.
Also, the functions would be closely related to each other, but wouldn't be grouped together. Option 2 suffers from low cohesion . This will make maintenance more difficult. Like 2. but make the state variables global instead of passing them. Option 3, like option 2, suffers from low cohesion, but much more severely! History has shown that the convenience of global variables is outweighed by the brutal maintenance cost it brings. That's why you hear old farts like me ranting about encapsulation all the time. The winning option is #1 . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297090",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/58748/"
]
} |
297,160 | Mergesort is a divide and conquer algorithm and is O(log n) because the input is repeatedly halved. But shouldn't it be O(n) because even though the input is halved each loop, each input item needs to be iterated on to do the swapping in each halved array? This is essentially asymptotically O(n) in my mind. If possible please provide examples and explain how to count the operations correctly! I haven't coded anything up yet but I've been looking at algorithms online. I've also attached a gif of what wikipedia is using to visually show how mergesort works. | It's O(n * log(n)), not O(log(n)). As you've accurately surmised, the entire input must be iterated through, and this must occur O(log(n)) times (the input can only be halved O(log(n)) times). n items iterated log(n) times gives O(n log(n)). It's been proven that no comparison sort can operate faster than this. Only sorts that rely on a special property of the input such as radix sort can beat this complexity. The constant factors of mergesort are typically not that great though so algorithms with worse complexity can often take less time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297160",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71718/"
]
} |
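To make the counting in the answer above concrete, here is a minimal merge sort sketch (Python, purely illustrative); the comments mark where the log n and the n come from.
def merge_sort(items):
    # Halving: the recursion is O(log n) levels deep,
    # because the input can only be halved O(log n) times.
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merging: every element is touched once per level, so each level
    # costs O(n). O(n) work per level * O(log n) levels = O(n log n).
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]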
297,162 | for (Canvas canvas : list) {
} NetBeans suggests that I use "functional operations": list.stream().forEach((canvas) -> {
}); But why is this preferred? If anything, it is harder to read and understand. You are calling stream() , then forEach() using a lambda expression with parameter canvas . I don't see how that is any nicer than the for loop in the first snippet. Obviously I am speaking about aesthetics only. Perhaps there is a technical advantage here that I am missing. What is it? Why should I use the second method instead? | Streams provide a much better abstraction for composing the different operations you want to perform on top of collections or streams of incoming data, especially when you need to map, filter and convert elements. Your example is not very practical. Consider the following code from Oracle's site . List<Transaction> groceryTransactions = new ArrayList<>();
for(Transaction t: transactions){
if(t.getType() == Transaction.GROCERY){
groceryTransactions.add(t);
}
}
Collections.sort(groceryTransactions, new Comparator<Transaction>(){
public int compare(Transaction t1, Transaction t2){
return t2.getValue().compareTo(t1.getValue());
}
});
List<Integer> transactionIds = new ArrayList<>();
for(Transaction t: groceryTransactions){
transactionIds.add(t.getId());
} can be written using streams: List<Integer> transactionsIds =
transactions.stream()
.filter(t -> t.getType() == Transaction.GROCERY)
.sorted(comparing(Transaction::getValue).reversed())
.map(Transaction::getId)
.collect(toList()); The second option is much more readable. So when you have nested loops or various loops doing partial processing, it's a very good candidate for Streams/Lambda API usage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297162",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13833/"
]
} |
297,276 | JVM developer here. Lately I've seen banter on IRC chat rooms and even in my own office about so-called " shaded " Java libraries. The context of the use will be something like: " Such and so provides a "shaded" client for XYZ. " Perfect example is this Jira issue for HBase : " Publish a client artifact with shaded dependencies " So I ask: What is a shaded JAR, what does it mean to be "shaded"? | Shading dependencies is the process of including and renaming dependencies (thus relocating the classes & rewriting affected bytecode & resources) to create a private copy that you bundle alongside your own code . The concept is usually associated with uber-jars (aka fat jars ). There is some confusion about the term , because of maven shade plugin, which under that single name does 2 things (quoting their own page): This plugin provides the capability to package the artifact in an uber-jar, including its dependencies and to shade - i.e. rename - the packages of some of the dependencies. So the shading part is actually optional: the plugin allows to include dependencies in your jar (fat jar), and optionally rename (shade) dependencies . Adding another source : To Shade a library is to take the contents files of said library, put them in your own jar, and change their package . This is different from packaging which is simply shipping the libraries files in side your own jar without relocating them to a different package. Technically speaking, dependencies are shaded. But it's common to refer to a fat-jar-with-shaded-dependencies as "shaded jar", and if that jar is a client for another system, it can be referred to as "shaded client". Here is the title of the Jira issue for HBase that you linked in your question: Publish a client artifact with shaded dependencies So in this post I'm trying to present the 2 concepts without conflating them. The Good Uber-jars are often used to ship an application as a single file (makes it easy to deploy and run). They can also be used to ship libraries along with some (or all) of their dependencies shaded , in order to avoid conflicts when used by other applications (which might use different versions of those libraries). There are several ways to build uber-jars, but maven-shade-plugin goes one step further with its class relocation feature: If the uber JAR is reused as a dependency of some other project, directly including classes from the artifact's dependencies in the uber JAR can cause class loading conflicts due to duplicate classes on the class path. To address this issue, one can relocate the classes which get included in the shaded artifact in order to create a private copy of their bytecode. (Historical note: Jar Jar Links offered that relocation feature before) So with this you can make your library dependencies an implementation detail , unless you expose classes from those libraries in your API. Let's say I have a project, ACME Quantanizer™, which provides DecayingSyncQuantanizer class, and depends on Apache commons-rng (because of course to properly quantanize you need a XorShift1024Star , duh). If I use the shade maven plugin to produce a uber-jar, and I look inside, I see these class files: com/acme/DecayingSyncQuantanizer.class
org/apache/commons/rng/RandomProviderState.class
org/apache/commons/rng/RestorableUniformRandomProvider.class
...
org/apache/commons/rng/core/source64/XorShift1024Star.class
org/apache/commons/rng/core/util/NumberFactory.class Now if I use the class-relocating feature: <plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.0.0</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<relocations>
<relocation>
<pattern>org.apache.commons</pattern>
<shadedPattern>com.acme.shaded.apachecommons</shadedPattern>
</relocation>
</relocations>
</configuration>
</execution>
</executions>
</plugin> The content of uber-jar looks like this: com/acme/DecayingSyncQuantanizer.class
com/acme/shaded/apachecommons/rng/RandomProviderState.class
com/acme/shaded/apachecommons/rng/RestorableUniformRandomProvider.class
...
com/acme/shaded/apachecommons/rng/core/source64/XorShift1024Star.class
com/acme/shaded/apachecommons/rng/core/util/NumberFactory.class It's not just renaming files, it rewrites bytecode that references relocated classes (so, my own classes & commons-rng classes are all transformed). In addition, Shade plugin will also generate a new POM ( dependency-reduced-pom.xml ) wherein shaded dependencies are removed from the <dependencies> section. This helps use the shaded jar as a dependency for another project. So you can publish that jar instead of the base one, or both (using a qualifier for the shaded jar). So that can be very useful... The Bad ...but it also poses a number of issues. Aggregating all dependencies into a single "namespace" within the jar can get messy, and require shading & messing with resources. For example: how to deal with resource files that include class or package names? Resource files such as service provider descriptors which all live under META-INF/services ? The shade plugin offers resource transformers that can help with that: Aggregating classes/resources from several artifacts into one uber JAR is straight forward as long as there is no overlap. Otherwise, some kind of logic to merge resources from several JARs is required. This is where resource transformers kick in. But it's still messy, and the problems are almost impossible to anticipate (quite often you discover the issues the hard way in production). See why-we-stopped-building-fat-jars . All in all, deploying a fat jar as a standalone app/service is still very common, you just need to be aware of the gotchas, and for some of those you might need shading or other tricks. The Ugly There are many more difficult issues (debugging, testability, compatibility with OSGi & exotic classloaders...). But more importantly, when you produce a library, the various issues that you thought you could control are now getting infinitely more complicated, because your jar will be used in many different contexts (unlike a fat jar that you deploy as a standalone app/service in a controlled environment). For example, ElasticSearch used to shade some dependencies in the jars they shipped, but they decided to stop doing that : Before version 2.0, Elasticsearch was provided as a JAR with some (but not all) common dependencies shaded and packaged within the same artifact. This helped Java users who embed Elasticsearch in their own applications to avoid version conflicts of modules like Guava, Joda, Jackson, etc. Of course, there was still a list of other unshaded dependencies like Lucene that could still cause conflicts. Unfortunately, shading is a complex and error prone process which solved problems for some people while creating problems for others. Shading makes it very difficult for developers and plugin authors to write and debug code properly because packages are renamed during the build. Finally, we used to test Elasticsearch unshaded then ship the shaded jar, and we don’t like to ship anything that we aren’t testing. We have decided to ship Elasticsearch without shading from 2.0 onwards. Please note they too refer to shaded dependencies , not shaded jar | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297276",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
297,289 | I use a class that just extracts data from one known object, and distributes it to other known objects. No persistent configuration or such is needed in that class instance. How should I decide whether to set up that class as a singleton , or just instantiate it every time I need it? | If the class has no state, you could consider turning it into a function or static method depending on your language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/192574/"
]
} |
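A minimal sketch of that suggestion (Python; the field names are invented for illustration): when the extractor keeps no state between calls, a plain module-level function does the job, and the singleton-versus-new-instance question simply disappears.
def distribute(source, targets, keys=("id", "name", "value")):
    # Stateless: reads the known fields from the source record and
    # copies them onto each target. Nothing to construct, nothing to share.
    for target in targets:
        for key in keys:
            if key in source:
                target[key] = source[key]

src = {"id": 7, "name": "pump", "value": 3.2}
sinks = [{}, {}]
distribute(src, sinks)
print(sinks)  # both targets now carry the extracted fields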
297,327 | This is a hard enough idea to wrap my head around and I would greatly appreciate any edits/help to get it more readable for those in-the-know. Is it theoretically possible to have a hard drive that has saved on it one copy of every possible binary permutation of one kilobyte and then have the rest of the system simply create pointers to these locations? Would a system made such a way be any faster than simply having information stored directly? To explain another way, say instead of having sentences: "Hello, I'm Bob." and "That sandwich looks delicious." ...stored on the hard drive, we would have all permutations of the alphabet and other characters up to some number (say, 1000 characters or so), and then store our sentences as something like: [Pointer#21381723] | There are 2^8192 possible different 1K blocks. Storing them all would take 2^8202 bytes of storage. Since the universe contains only about 10^80 (or ~2^266) particles, it's a safe bet that it isn't possible to store them all, and you don't have to wonder about whether it would save time or not. But there is, in fact, a more interesting way of answering this. You are suggesting creating an index into a huge pool of constants. But how would you know which index to dereference? Imagine for the sake of argument that you want to store only 1-character blocks: a , b , c ... Presumably your indices would be 0, 1, 2 etc., since that's the most efficient layout of storing those blocks. Do you notice something about the arrangement? Your index is, in fact, a coded representation of the stored data ! In other words, you don't have to dereference at all, you just have to transform the index into the data you want. When you store all possible values of something in a table, this always happens: your index becomes merely an encoded version of the data itself, so storing the data becomes unnecessary in the first place. This is why, in the real world, indices are only useful for sparse data (e.g. all web pages you've visited, not all web pages that could exist , or even all that do exist). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297327",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196631/"
]
} |
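The "your index is a coded representation of the data" point can be checked in a couple of lines (Python). If the imaginary table held every 1 KiB block in numeric order, the index of a block would simply be the block's bytes read as one big integer - so the "pointer" is exactly as large as the block it points to.
block = b"Hello, I'm Bob." + bytes(1024 - 15)      # pad the sentence out to 1 KiB
index = int.from_bytes(block, "big")               # its position in the imaginary table
assert index.to_bytes(1024, "big") == block        # the index already encodes the block
print(index.bit_length())                          # ~8191 bits - no saving over storing the block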
297,373 | The specific use case I'm interested in here is authenticating REST clients against publicly-available server endpoints (such as a public REST API). The simplest solution here is Basic Auth . But I often hear OAuth2 touted as a superior auth solution in almost all circumstances. The thing is, the only OAuth2 grant type that is feasible for a REST client authenticating against a REST server is Resource Owner Password Credentials (ROPC) , because Code Grants and Implicit Grants require a UI/webpage (hosted by the Auth Server) for the user to login to and manually authorize the client app. The way ROPC works is, by sending the resource owner's username/password, and the client ID as query string params ?!? This is even less secure (IMHO) then Basic Auth, which at least base-64 encodes the credentials and sends them inside a header which can be encrypted by TLS! So I ask: in the context of public REST APIs, is OAuth2 ROPC really any better than Basic Auth? What is more secure than OAuth2 ROPC? Update I just read this excellent article which explains Amazon's non-OAuth2 based REST security for AWS. It is essentially a private key-based solution where hashes of each REST request are generated and sent as sidecars along-side the normal (un-encrypted) request. Only the client and the server know the private key, so when the server receives the request (again, containing the normal request + the hashed request), the server looks up the client's private key, applies the same hash to the normal request, and then compares the two hashes. This sounds way more complicated, complex and secure than OAuth2's ROPC! Unless I'm missing something major here, OAuth2 ROPC is just sending client_id , username and password as query string params...totally and utterly unsecure! This HMAC/hashed-based solution seems to be much more impressive and secure. The thing is, even the author of that article goes on to say: You [will] also slowly realize and accept that at some point you will have to implement OAuth... Ba-ba-bwhat?!?! If OAuth2 is less secure than this clever HMAC/hash-based solution, why does the author of this article feel OAuth needs to be embraced at some point. I'm so confused. | The answer to your question can be at the code level, protocol level or architecture level. I will attempt to summarize here most of the protocol level issues since that is usually critical in pros and cons analysis. Keep in mind that OAuth2 is much more than Resource Owner Password Credentials which, according to the specification, exists for "legacy or migration reasons", is considered "higher risk than other grant types" and the specification explicitly states that the clients and authorization servers "SHOULD minimize use of this grant type and utilize other grant types whenever possible". There are still many advantages of using ROPC over basic authentication but before we get into that, let's understand the basic protocol difference between OAuth2 and basic authentication. Please bear with me as I explain these and will come to ROPC later. User authentication flows There are four roles defined in OAuth2 specification. With examples, they are: Resource owner: The user who has access to some resource, e.g. 
in your case, different users may have different access level to the REST API; The client: usually the application the user is using, and needs access to the resource to provide services to the user; Resource server: the REST API in your case; and Authorization server: the server to which user's credentials are presented and which will authenticate the user. When a client application runs, it is granted access to the resources based on the user. If a user has administrator privileges, the resources and operations available to the user in REST API may be far more than a user without administrator privileges. OAuth2 also allows the possibility of using a single authorization server with multiple clients and for multiple resources. As an example, a resource server can accept user's authentication with Facebook (which can act as authorization server in such a case). So when the user runs an application (i.e. the client), it sends the user to Facebook. User types their credentials in Facebook, and the client gets back a "token" which it can present to the resource server. Resource server looks at the token and accepts it after verifying that Facebook in fact issued it and allow the user access to the resource. In this case, the client never sees user's credentials (i.e. their Facebook credentials). But let's say you are managing your user's identities (and have an authorization server) instead of Facebook, which grants tokens to your client already. Now, let's say you also have a partner and you want to allow their application (i.e. client) to access your REST API. With basic authentication (or even ROPC), the user will provide credentials to that client which will send it to the authorization server. Authorization server will then provide a token that can be used by the client to access the resources. Unfortunately, this means that user's credentials are now visible to that client too. However, you would not want a partner's application (who could be external to your organization) to even know a user's password. That's a security issue now. In order to achieve that goal, you would want to use another flow (such as the authorization code grant) in which the user directly provides credentials to the authorization server. Thus, with OAuth2, one would ideally not use ROPC in such cases rather use a different one, such as authorization code flow. This protects any application from knowing the user's credentials which are presented only to the authorization server. Thus, a user's credentials are not leaked. The same issues apply with basic authentication, but in the next section, I will explain how ROPC is still better because the user's credentials still do not need to be stored by the client in ROPC for persistent access by the clients. Note that when the user goes to the authorization server, the authorization server can also ask user to confirm that they want to allow the client to access the resources on their behalf or not. That is why it is called the authorization server because the process of authorizing a client to access resources is entailed in the process. If the user does not authorize the client, it will not get access to the resources. Likewise, if the user themselves do not have access to the resources, the authorization server can still deny access and not issue a token. In basic authentication, even the authorization server and resource server are combined into a single entity. Thus, the resource server wants to authorize the user, so asks the credentials from client. 
The client furnishes those credentials which are used by the resource server to authenticate the user. This means that multiple resource servers will essentially be requiring credentials from the user. Token issuance The clients get tokens from authorization server, keep them around and use those to access the resources (more details on tokens themselves below). The clients never know the user's password (in flows other than ROPC) and do not need to store it. In ROPC, even though the clients do know the user's password, they still do not need to store it because they use these tokens to access resources. By contrast, in basic authentication, if a client does not want to have user to provide credentials in every session, then the client has to store the user's password so they can furnish it the next time around. This is a major drawback to using basic authentication unless the client is only a web application in which case, cookies can address some of these concerns. With native applications, that's usually not an option. There is another aspect of OAuth2 which is entailed in how tokens are issued and they work. When a user furnishes credentials to the authorization server (even in ROPC), the authorization server can give one or more of the two types of tokens: 1) access token, and 2) refresh token. Access tokens are sent to the resource server which will grant access to the resources after validating it, and usually they have a short lifetime, e.g. 1hr. Refresh tokens are sent to the authorization server by the client to get another access token when it expires, and usually have a large lifetime (e.g. a few days to months or even years). When the client provides the access token to the resource server, it looks at the token and after validating, looks inside the token to determine whether to allow access or not. As long as access token is valid, the client can keep using it. Let's say the user closes the application and starts it the next day, and the access token is expired. Now the client will make a call to the authorization server and present the refresh token assuming it's not expired. Authorization server, since it already issued the token, verifies it and can determine that the user does not need to provide the credentials again and thus gives another access token to the client. The client now has access to the resource server again. This is how typically the client applications for Facebook and Twitter ask for credentials one time and then do not require the user to provide credentials again. These applications never need to know the users credentials and yet can access resources every time user starts the application. Now the user can go into the authorization server (e.g. in their Facebook user profile), change password without impacting any client applications. They will all continue to function properly. If the user loses a device on which they already had an application with refresh tokens, they can tell authorization server (e.g. Facebook) to "log them out" of those applications which the authorization server (i.e. Facebook) will accomplish by not honoring any existing refresh tokens and forcing the user to provide credentials again when they try to access resources through those applications. JWT is simply the token format that is usually used with OAuth2 and OpenID Connect. The methods of signing the token and validating it is also standardized with libraries available for those instead of every resource server implementing yet another solution. 
Thus, the advantage lies in reusability of code that has been vetted and continues to be supported. Security implications Basic authentication will be weaker when any of the above scenarios are in the picture. There is also an extensive threat model for OAuth2 available for developers who can use the suggestions in it to avoid common vulnerabilities in their implementations. If you go through the threat model, you will see that many implementation related vulnerabilities (such as open redirector and CSRF) are also covered in it. I did not go through comparison of those against basic authentication in this response. The last major advantage of OAuth2 is that the protocol is standardized and multiple authorization servers, clients and resource servers honor it. Numerous libraries are available to developers, which are maintained so as security issues are found in implementations, the libraries are updated while allowing interoperability. Conclusion If you are writing a new application, IMO, the ideal case would be to avoid both the basic authentication and ROPC because of the issues inherent in them. However, each application has different needs, timelines, developer proficiency etc. so the decision is case-by-case. But even if you did not have any more need than basic authentication, by choosing it, you could lock yourself into an architecture that may not be easy to extend (e.g. if you have multiple servers in the future, you would not necessarily want to have the user provide credentials to each one of them rather just provide to authorization server once, which can hand out tokens, etc.) Note that I did not address your comment about how the credentials are sent over the wire because those can be secured using TLS or a similar protocol, or proof of possession etc. As someone already suggested, base 64 encoding is 0 security, please do not be deluded by that. The differences mentioned above are usually at the architectural level and thus that is where I focused because architecture is the hardest to change once implemented. Azure Active Directory B2C Basic , a service which I work on and was recently released for public preview, allows third party application to use AAD as the authorization server with interoperability with social IDPs (such as Facebook, Google, etc.). It also allows users to create their own accounts instead of using social IDPs and those can later be used for authentication purposes. There are a few other services also like that (e.g. another one I know of is auth0 ) which can be used by developers to completely outsource authentication and user management for their applications and resources. The same protocols characteristics that I mentioned above is used by developers to decouple authorization server (AAD), a resource (e.g. their REST APIs), the client (e.g. their mobile applications), and users. I hope this explanation helps somewhat. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
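For reference, a minimal sketch of the ROPC exchange described above (Python with the requests library; the endpoints, client id and credentials are placeholders). Note that in RFC 6749 the credentials travel in the POST body over TLS rather than in the query string, and after this one exchange the client holds only tokens, never the password.
import requests

TOKEN_URL = "https://auth.example.com/oauth2/token"   # placeholder authorization server

# One-time exchange: the user's password is sent once, in the request body over TLS.
tokens = requests.post(TOKEN_URL, data={
    "grant_type": "password",
    "username": "alice",
    "password": "correct horse battery staple",
    "client_id": "my-client",
}).json()                                              # access_token (+ optional refresh_token)

# Normal API calls: only the short-lived access token is presented.
requests.get("https://api.example.com/reports",
             headers={"Authorization": "Bearer " + tokens["access_token"]})

# When the access token expires, trade the long-lived refresh token for a new one;
# the password is never stored or sent again.
tokens = requests.post(TOKEN_URL, data={
    "grant_type": "refresh_token",
    "refresh_token": tokens["refresh_token"],
    "client_id": "my-client",
}).json()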
297,598 | In functional programming, it is considered bad practice (at least from my observations) to use state changes. Since computers operate in an imperative-language-like matter (performing one operation at a time, changing states of RAM), isn't functional programming contradictory to the way a computer operates? Note: I'm not trying to say that a functional language cannot exist, because obviously there are several out there that do exist and work very well. Further note: Besides the difference in processes, wouldn't a functional language be inherently extremely slow, due to the amount of operations involved? | All programming languages and programs are generalized abstractions. We simulate those abstractions on a machine that does things its own way. Were we confined to the way a computer works, we would all be using languages like C or Forth, which are "close to the metal" languages. Computers are general-purpose machines; we are free to create whatever paradigm over the processor's instruction set that we see fit. It is this quality of computers that is what makes them so powerful: the ability to adapt them to any problem domain. In other words, functional abstractions properly belong to the programming language being used, not the underlying machine that executes it. In the 1980's Xerox and MIT built machines that could execute Lisp, a functional language, directly. We don't do that anymore because we have processors that are fast enough to translate from one language (the one you write programs in) to another (the processor instruction set), and it's better to have a general-purpose machine that can execute any programming language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297598",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196988/"
]
} |
297,847 | I've recently been using some build tools for a Nodejs project at work when I realized that most languages' main build tool/system uses a different language than the underlying programming language itself. For example, make does not use C or C++ to write scripts, and neither ant nor Maven uses Java as its language for scripting. Newer languages like Ruby do use the same language for build tools like rake , which makes sense to me. But why hasn't this always been the case? What's the advantage to having a build tool that uses a different language from the underlying language? | The choice of which programming language to use for a given task should depend on the specific demands of that task and not on the other tasks related to the project. A build tool does a very specific job, and no matter what language you're using for the main project, a build tool is a piece of software in its own right. Trying to tie the main project to its build tool can be a very bad decision. What you mainly need from a build tool is fast processing of simple rules, with fast handling of files and IO. If languages like Ruby do use themselves to implement their build tools, that has more to do with the features of those languages than with any supposed need to keep everything written in the same language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297847",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36853/"
]
} |
297,915 | I'm pretty sure there is a special name for the 'spec' of a function/method. It's a word that refers to: how many arguments it takes, the order of arguments, and which arguments are optional. Is there a name for that? | Usually this is called a type signature . A type signature includes the function's return type, the number of
arguments, the types of arguments, or errors it may pass back. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297915",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30385/"
]
} |
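As a small illustration (Python type hints; the function is made up), the def line below is the type signature: it states the number and order of the arguments, which of them are optional, their types, and the return type. In some languages the signature also lists the errors/exceptions the function may pass back, as the answer notes.
from typing import Optional

def connect(host: str, port: int = 443, timeout: Optional[float] = None) -> bool:
    # Everything a caller needs to know about calling this function is in the line above.
    return True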
297,926 | There are plenty of reasons why globals are evil in OOP. If the number or size of the objects needing sharing is too large to be efficiently passed around in function parameters, usually everyone recommends Dependency Injection instead of a global object. However, in the case where almost everyone needs to know about a certain data structure, why is Dependency Injection any better than a global object? Example (a simplified one, to show the point generally, without delving too deep in a specific application) There is a number of virtual vehicles which have a huge number of properties and states, from type, name, color, to speed, position, etc. A number of users can remote control them, and a huge number of events (both user-initiated and automatic) can change a lot of their states or properties. The naive solution would be to just make a global container of them, like vector<Vehicle> vehicles; which can be accessed from anywhere. The more OOP-friendly solution would be to have the container be member of the class which handles the main event loop, and be instantiated in its constructor. Every class which needs it, and is member of the main thread, will be given access to the container via a pointer in their constructor.
For example, if an external message comes in via a network connection, a class (one for each connection) handling the parsing will take over, and the parser will have access to the container via a pointer or reference. Now if the parsed message results in either a change in an element of the container, or requires some data out of it to perform an action, it can be handled without the need of tossing around thousands of variables through signals and slots (or worse, storing them in the parser to be later retrieved by the one who called the parser). Of course, all classes which receive access to the container via dependency injection, are part of the same thread. Different threads will not directly access it, but do their job and then send signals to the main thread, and the slots in the main thread will update the container. However, if the majority of classes will get access to the container, what makes it really different from a global? If so many classes need the data in the container, isn't the "dependency injection way" just a disguised global? One answer would be thread safety: even though I take care not to abuse the global container, maybe another developer in the future, under the pressure of a close deadline, will nevertheless use the global container in a different thread, without taking care of all the collision cases.
However, even in the case of dependency injection, one could give a pointer to someone running in another thread, leading to the same problems. | in the case where almost everyone needs to know about a certain data structure, why is Dependency Injection any better than a global object? Dependency injection is the best thing since sliced bread , while global objects have been known for decades to be the source of all evil , so this is a rather interesting question. The point of dependency injection is not simply to ensure that every actor who needs some resource can have it, because obviously, if you make all resources global, then every actor will have access to every resource, problem solved, right? The point of dependency injection is: To allow actors to access resources on a need basis , and To have control over which instance of a resource is accessed by any given actor. The fact that in your particular configuration all actors happen to need access to the same resource instance is irrelevant. Trust me, you will one day have the need to reconfigure things so that actors will have access to different instances of the resource, and then you will realize that you have painted yourself in a corner. Some answers have already pointed such a configuration: testing . Another example: suppose you split your application into client-server. All actors on the client use the same set of central resources on the client, and all actors on the server use the same set of central resources on the server. Now suppose, one day, that you decide to create a "standalone" version of your client-server application, where both the client and the server are packaged in a single executable and running in the same virtual machine. (Or runtime environment, depending on your language of choice.) If you use dependency injection, you can easily make sure that all the client actors are given the client resource instances to work with, while all the server actors receive the server resource instances. If you do not use dependency injection, you are completely out of luck, as only one global instance of each resource can exist in one virtual machine. Then, you have to consider: do all actors really need access to that resource? really? It is possible that you have made the mistake of turning that resource into a god object, (so, of course everyone needs access to it,) or perhaps you are grossly overestimating the number of actors in your project that actually need access to that resource. With globals, every single line of source code in your entire application has access to every single global resource. With dependency injection, each resource instance is only visible to those actors that actually need it. If the two are the same, (the actors that need a particular resource comprise 100% of the lines of source code in your project,) then you must have made a mistake in your design. So, either Refactor that great big huge god resource into smaller sub-resources, so different actors need access to different pieces of it, but rarely an actor needs all of its pieces, or Refactor your actors to in turn accept as parameters only the subsets of the problem that they need to work on, so they do not have to be consulting some great big huge central resource all the time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297926",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47197/"
]
} |
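A compact sketch of the reconfiguration argument above (Python; the class and field names are invented): with constructor injection the same actor classes can be wired to one shared container today, and to separate client/server or test instances tomorrow, without touching the actors themselves.
class VehicleContainer:
    def __init__(self):
        self.vehicles = {}

class MessageParser:
    def __init__(self, container):        # the dependency is injected, not reached for globally
        self.container = container

    def handle(self, vehicle_id, state):
        self.container.vehicles[vehicle_id] = state

# Today: every parser shares one container.
shared = VehicleContainer()
parsers = [MessageParser(shared) for _ in range(3)]

# Tomorrow: the client half, the server half and the test suite each get
# their own instance - with zero changes to MessageParser.
client_parser = MessageParser(VehicleContainer())
test_parser = MessageParser(VehicleContainer())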
297,949 | While I've never used a language that had built-in variable privacy, a book I'm reading by Douglas Crockford explains a way to create privacy in JavaScript; however, it doesn't make sense to me. The term "private" isn't something I think of as even being possible in a programming environment, since everyone working on the code can potentially change any mutable object or method, right? So the two things that don't make sense to me are: (1) how privacy is helpful, and (2) how privacy is defined - fundamentally one developer can't keep something in source code absolutely private ("untouchable", as I understand it) from another developer, so where does one consider the variable "private"? When it's hard to reach? Can someone explain this, possibly with a "real-world" example? | Within software development, privacy (of software entities) is usually defined as restricting access to that variable/function/method.
If a variable is private, then only functions or methods that belong to the same class or module are allowed/supposed to access that variable. As you see, privacy here is completely unrelated to which developer is writing the code; rather, it is about which parts of the code have access to which other parts of the code. The advantage of private variables or functions is that, by restricting which parts of the code have access, you can more easily reason about how the variable gets used and who will be affected if you change it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/297949",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100972/"
]
} |
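A small illustration of "restricting which parts of the code have access" (a Python sketch, where privacy is by convention and name mangling rather than strict enforcement, but the idea is the same): outside code is steered through the class's methods, so every change to the hidden field goes through the checks those methods impose.
class Account:
    def __init__(self, opening_balance):
        self.__balance = opening_balance   # "private": intended for this class only

    def deposit(self, amount):             # the sanctioned way in
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.__balance += amount

    def balance(self):
        return self.__balance

acct = Account(100)
acct.deposit(50)
print(acct.balance())   # 150
# acct.__balance        # AttributeError - callers cannot casually bypass deposit()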
298,117 | If we look at the vintage program Netscape Navigator or an early version of Microsoft Word, those programs were less than 50 MB in size. Now when I install google chrome it is 200 MB and desktop version of Slack is 300 MB. I read about some rule that programs will take all available memory no matter how much it is but why? Why are the current sizes of programs so large compared to 10 or 15 years ago? The programs are not doing significantly more functions and do not look very different. What is it that is the resource hog now? | "Looking very different" is a matter of perception. Today's graphics have to look good at totally different screen resolutions than they used to, with the result that a 100x100 image that used to be more than good enough for a logo would now look horribly tacky. It has had to be replaced with a 1000x1000 image of the same thing, which is a factor of 100 right there. (I know you can use vector graphics instead, but that just emphasizes the point - vector graphics rendering code has had to be added to systems that didn't need it before, so this is just a trade-off from one kind of size increase to another.) "Working differently" is likewise a matter of perception. Today's browser does massively more things than one from 1995. (Try surfing the internet with a historic laptop one rainy day - you'll find it's almost unusable.) Not many of them are used very much, and uses may be completely unaware of 90% of them, but they're there. On top of that, of course, is the general tendency to spend less time on optimizing things for space and more on introducing new features. This is a natural side-effect of larger, faster, cheaper computers for everyone. Yes, it would be possible to write programs that are as resource-efficient as they were in 1990, and the result would be stunningly fast and slick. But it wouldn't be cost-effective anymore; your browser would take ten years to complete, by which time the requirements would have completely changed. People used to program with extreme attention to efficiency because yesteryear's slow, small machines forced them to, and everyone else was doing it as well. As soon as this changed, the bottleneck for program success shifted from being able to run at all to running more and more shiny things , and that's where we are now. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298117",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
298,145 | I came across with following statements while reading the Clean Code book of Robert C. Martin. Chapter : 7 : Error Handling Page No : 109 ..In fact, wrapping third-party APIs is a best practice . When you wrap
a third-party API, you minimize your dependencies upon it: You can
choose to move to a different library in the future without much
penalty. Wrapping also makes it easier to mock out third-party calls
when you are testing your own code. One final advantage of wrapping is
that you aren’t tied to a particular vendor’s API design choices. You
can define an API that you feel comfortable with. I am quite confused about the bold part. I am totally clueless about how we can move to a different library so easily if we wrap the third-party library, and how testing becomes easier with it? | Imagine you have a complicated library that you depend on. Say the library exposes four different calls to its functionality, and your code base uses each of them three times. If you use the raw vendor API in your code, then if the vendor library changes or you have to replace it, you'll have to change twelve API calls in your code base. If you had an abstraction layer between your code and their code, you'd have to change four places in the abstraction layer and none at all in your code base. You can see that this saves effort - the more calls the API offers, the more effort it saves. And most real-world APIs offer more than four entry points. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298145",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/131715/"
]
} |
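To make the counting concrete, here is one possible shape for such an abstraction layer (a Python sketch that happens to wrap the requests library; the class and method names are my own). Application code calls only HttpClient, so replacing or upgrading the underlying library means editing this one class, and tests can substitute a stub for HttpClient instead of mocking the vendor API.
# http_client.py - the only module in the code base that touches the
# third-party HTTP library directly.
import requests

class HttpClient:
    def get_json(self, url, timeout=10):
        response = requests.get(url, timeout=timeout)
        response.raise_for_status()
        return response.json()

    def post_json(self, url, payload, timeout=10):
        response = requests.post(url, json=payload, timeout=timeout)
        response.raise_for_status()
        return response.json()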
298,247 | As someone who's worked effectively with Agile before, I am trying to convince my current employers of its benefits. However, management are insistent that we retain the ability to make upfront estimates in order to assess the business value of projects. Most of my customers are internal, and I was recently tasked with going round teams and asking them for ideas on business processes to automate. I was then to find out how much time this was taking them, work out how much time the solution would save and estimate the total development time. That way, managers could attempt to measure how effective a solution was likely to be in terms of time saved. However, it looks to me like there's no way to approach this requirement in an "Agile" way. Flexible requirements means that not only will estimates of time taken be wrong, so will estimates of potential time saved. I explained as much, explained why it was likely to be problematic, but was told it was non-negotiable. The question How to sell Agile development to (waterfall) clients has some useful advice on how to "sell" Agile to external customers. I'm not trying to sell it to external clients: I'm trying to work out how I can best reconcile the demands of internal management while retaining a methodology I believe works well. Is there any way to approach this task in a flexible manner which allows me to retain at least some Agile benefits? | As other answers have stated, Management has every right to get a high level estimate upfront of a project. They are not unreasonable for trying to determine ROI. One of the approaches that I like about Agile however is that the scope of a project is not fixed. It can be initially sized out at the Feature and Epic level, then business can determine ROI based on what are the most important features. Maybe the fancy UI with bells and whistles has low business value, but the workflow engine for handling claims has a high ROI. When you lump the whole project together then it harder to meet ROI than if you focus on the critical business functionality that is desired. Here is a way that I have done this: Take your WBS milestones and turn each of these into a deliverable feature This allows you to categorize your project into mini subprojects that have varying business value. Each of these should stand on their own in terms of business value. T-Shirt Size the Effort on Features This is a very easy way to get a rough idea about how big or involved a particular feature might be. Perhaps low value features still have a great ROI if they look like easy wins. Break Down a Feature into Stories Go through the exercise to find a small feature that is well understood and break it down into stories initially. Estimate these stories by points. Now you have a basis where Small -> 40 points This will be a basis of comparison to other features Associate story point effort to all Features Compare your Small Feature to other features. For example, Medium Feature Y feels like it is twice the size and effort of Small Feature X of 40 story points. Medium Feature Y is probably 80 story points. Continue this until you have story points estimated at a high level for all features. Estimate your Team Velocity Looking at your development team, try to determine how many story points could this team effectively deliver in a given sprint. If you have previous Agile projects as an example with this team that is a great place to start. 
If you do not have such history behind the team then go through a mock Sprint Planning with your team where you start looking at your Small feature that you have detailed out. What kinds of hourly estimates are people giving for their tasks on these stories? Based on how much work the team thinks they can deliver in 2 weeks, use that total story point number as the average potential velocity of your team! Find your Projected Completion Date If your team in mock sprint planning feels comfortable delivering 25 story points in a sprint, and your total backlog looks like 300 story points for the gold Cadillac version of your project, then it looks like your team would ideally take 12 sprints or 24 weeks to complete everything. Now it is trivial to turn cost of resources on your team into dollars per week to arrive at a cost for ROI vs. Business Value. The negotiation can continue on what the most important features are and then your project management becomes basically a Knapsack Problem. | {
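To make the forecasting arithmetic in that last step concrete, here is a minimal C# sketch; the 25-point velocity and 300-point backlog are the illustrative numbers from the answer, while the per-week cost figure and the class name are made-up placeholders:
using System;
static class ReleaseForecast
{
    static void Main()
    {
        int backlogPoints = 300;           // total story points across all features
        int velocityPerSprint = 25;        // points the team expects to deliver per two-week sprint
        decimal teamCostPerWeek = 10000m;  // assumed blended cost of the team per week (placeholder)
        int sprints = (int)Math.Ceiling(backlogPoints / (double)velocityPerSprint); // 12 sprints
        int weeks = sprints * 2;                                                    // 24 weeks
        decimal projectedCost = weeks * teamCostPerWeek;
        Console.WriteLine($"{sprints} sprints (~{weeks} weeks), projected cost {projectedCost:C}");
    }
}
The output of a run like this is exactly the number the business needs to weigh against the value of the features in the backlog.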
"source": [
"https://softwareengineering.stackexchange.com/questions/298247",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22742/"
]
} |
298,472 | Suppose I have 4 types of services I offer (they are unlikely to change often): Testing Design Programming Other Suppose I have 60-80 actual services that each fall into one of the above categories. For example, 'a service' can be "Test Program using technique A" and it is of type "Testing". I want to encode them into a database. I came up with a few options: Option 0: Use VARCHAR to encode the service type directly as a string Option 1: Use a database enum . But, enum is evil Option 2: use two tables: service_line_item (id, service_type_id INT, description VARCHAR);
service_type (id, service_type VARCHAR); I can even enjoy referential integrity: ALTER TABLE service_line_item
ADD FOREIGN KEY (service_type_id) REFERENCES service_type (id); Sounds good, yes? But I still have to encode things and deal with integers, i.e. when populating the table. Or I have to create elaborate programming or DB constructs when populating or dealing with the table. Namely, JOINs when dealing with the database directly, or creating new object-oriented entities on the programming side, and making sure I operate them correctly. Option 3: Don't use enum , do not use two tables, but just use an integer column service_line_item (
id,
service_type INT, -- use 0, 1, 2, 3 (for service types)
description VARCHAR
); This is like a 'fake enum' that requires more overhead on the code side of things, like i.e. knowing that {2 == 'Programming'} and dealing with it appropriately. Question: Currently I have implemented it using Option 2 , guided under concepts do not use enum (option 1) avoid using a database as a spreadsheet (option 0) But I can't help to feel that seems wasteful to me in terms of programming and cognitive overhead -- I have to be aware of two tables, and deal with two tables, vs one. For a 'less wasteful way', I am looking at Option 3 . IT is lighter and requires essentially the same code constructs to operate (with slight modifications but complexity and structure is basically the same but with a single table) I suppose ideally it is not always wasteful, and there are good cases for either option, but is there a good guideline as to when one should use Option 2 and when Option 3? When there are only two types (binary) To add a bit more to this question... in the same venue, I have a binary option of "Standard" or "Exception" Service, which can apply to the service line item. I have encoded that using Option 3 . I chose not to create a new table just to hold values {"Standard", "Exception"}. So my column just holds {0, 1} and my column name is called exception , and my code is doing a translation from {0, 1} => {STANDARD, EXCEPTION} (which I encoded as constants in programming language) So far not liking that way either..... (not liking option 2 nor option 3).
I do find option 2 superior to 3, but with more overhead, and still I cannot escape encoding things as integers no matter which option I use out of 2, and 3. ORM To add some context, after reading answers - I have just started using an ORM again (recently), in my case Doctrine 2. After defining DB schema via Annotations, I wanted to populate the database. Since my entire data set is relatively small, I wanted to try using programming constructs to see how it works. I first populated service_type s, and then service_line_item s, as there was an existing list from an actual spreadsheet. So things like 'standard/exception' and 'Testing' are all strings on the spreadsheet, and they have to be encoded into proper types before storing them in DB. I found this SO answer: What do you use instead of ENUM in doctrine2? ,
which suggested to not use DB's enum construct, but to use an INT field and to encode the types using the 'const' construct of the programming language. But as pointed out in the above SO question, I can avoid using integers directly and use language constructs -- constants -- once they are defined.... But still .... no matter how you turn it, if I am starting with a string as a type, I have to first convert it to a proper type, even when using an ORM. So if, say, $str = 'Testing'; , I still need to have a block somewhere that does something like: switch($str)
{
case 'Testing': $type = MyEntity::TESTING; break;
case 'Other': $type = MyEntity::OTHER; break;
} The good thing is you are not dealing with integers/magic numbers [instead, dealing with encoded constant quantities], but the bad thing is you can't auto-magically pull things in and out of the database without this conversion step, to my knowledge. And that's what I meant, in part, by saying things like "still have to encode things and deal with integers". (Granted, now, after Ocramius' comment, I won't have to deal directly with integers, but deal with named constants and some conversion to/from constants, as needed). | Option #2, using reference tables, is the standard way of doing it. It has been used by millions of programmers, and is known to work. It is a pattern , so anyone else looking at your stuff will immediately know what is going on. There exist libraries and tools that work on databases, saving you from lots and lots of work, that will handle it correctly. The benefits of using it are innumerable. Is it wasteful? Yes, but only slightly. Any half-decent database will always keep such frequently joined small tables cached, so the waste is generally imperceptible. All other options that you described are ad hoc and hacky, including MySQL's enum , because it is not part of the SQL standard. (Other than that, what sucks with enum is MySQL's implementation, not the idea itself. I would not mind seeing it one day as part of the standard.) Your final option #3 with using a plain integer is especially hacky. You get the worst of all worlds: no referential integrity, no named values, no definitive knowledge within the database of what a value stands for, just arbitrary integers thrown all over the place. By this token, you might as well quit using constants in your code, and start using hard-coded values instead. circumference = radius * 6.28318530718; . How about that? I think you should re-examine why you find reference tables onerous. Nobody else finds them onerous, as far as I know. Could it be that it is because you are not using the right tools for the job? Your sentence about having to "encode things and deal with integers", or having to "create elaborate programming constructs", or "creating new object oriented entities on the programming side", tells me that perhaps you may be attempting to do object-relational mapping (ORM) on the fly dispersed throughout the code of your application, or in the best case you may be trying to roll your own object-relational mapping mechanism, instead of using an existing ORM tool for the job, such as Hibernate. All these things are a breeze with Hibernate. It takes a little while to learn it, but once you have learned it, you can really focus on developing your application and forget about the nitty gritty mechanics of how to represent stuff on the database. Finally, if you want to make your life easier when working directly with the database, there are at least two things that you can do, that I can think of right now: Create views that join your main tables with whatever reference tables they reference, so that each row contains not only the reference ids, but also the corresponding names. Instead of using an integer id for the reference table, use a CHAR(4) column, with 4-letter abbreviations. So, the ids of your categories would become "TEST", "DSGN", "PROG", "OTHR". (Their descriptions would remain proper English words, of course.) It will be a bit slower, but trust me, nobody will notice. Finally, when there are only two types, most people just use a boolean column. 
So, that "standard/exception" column would be implemented as a boolean and it would be called "IsException". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298472",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119333/"
]
} |
298,564 | Programmers all seem to agree that readability of code is far more important than short-syntaxed one-liners which work, but require a senior developer to interpret with any degree of accuracy - but that seems to be exactly the way regular expressions were designed. Was there a reason for this? We all agree that selfDocumentingMethodName() is far better than e() . Why should that not apply to regular expressions as well? It seems to me that rather than designing a syntax of one-line logic with no structural organization: var parse_url = /^(?:([A-Za-z]+):)?(\/{0,3})([0-9.\-A-Za-z]+)(?::(\d+))?(?:\/([^?#]*))?(?:\?([^#]*))?(?:#(.*))?$/; And this isn't even strict parsing of a URL! Instead, we could make some pipeline structure organized and readable, for a basic example: string.regex
.isRange('A-Z' || 'a-z')
.followedBy('/r'); What advantage does the extremely terse syntax of a regular expression offer other than the shortest possible operation and logic syntax? Ultimately, is there a specific technical reason for the poor readability of regular expression syntax design? | There is one big reason why regular expressions were designed as terse as they are: they were designed to be used as commands to a code editor, not as a language to code in. More precisely, ed was one of the first programs to use regular expressions, and from there regular expressions started their conquest for world domination. For instance, the ed command g/<regular expression>/p soon inspired a separate program called grep , which is still in use today. Because of their power, they subsequently were standardized and used in a variety of tools like sed and vim But enough for the trivia. So why would this origin favor a terse grammar? Because you don't type an editor command to read it even one more time. It suffices that you can remember how to put it together, and that you can do the stuff with it that you want to do. However, every character you have to type slows down your progress editing your file. The regular expression syntax was designed to write relatively complex searches in a throw-away fashion, and that is precisely what gives people headaches who use them as code to parse some input to a program. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298564",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/100972/"
]
} |
298,628 | As I understand, the BIOS code/bitstream that is held in the ROM should be generic (work alongside with multiple CPU types or ISAs). In addition, I saw mentioned on the web that it is possible to dump its code (and to "disassemble" it). So, in which language, instruction set or machine code is it written? Doesn't it need any kind of processor to perform its operations? If so, I guess that it will use the external CPU, then how does it know the specific instruction set of the employed one? Maybe it has an internal processor? | BIOSes used to be written exclusively in assembly language, but the transition was made a long time ago to write the majority of the code in some higher level language, and leave written in assembly as few portions of it as possible, preferably only the bootstrapper, (the very first few hundreds of instructions that the CPU jumps to after a start / reset,) and whatever routines deal with specific quirks of the underlying architecture. BIOSes were already being written primarily in C as early as the early nineties. (I wrote a BIOS in 90% C, 10% assembly in the early nineties.) What has also helped greatly in this direction is: C libraries that target a specific architecture and include functions for dealing with peculiarities of that architecture, for example, functions for reading/writing bytes to/from I/O ports of the x86 architecture. Microsoft C has always offered library functions for that kind of stuff. C compilers that not only target a specific CPU architecture but even offer extensions to the C language that you can use in order to write code which makes use of special CPU features. For example, the x86 architecture supports things known as interrupts, which invoke routines known as interrupt handlers, and it requires them to have special entry/exit instruction sequences. From the very early days, Microsoft C supported special keywords that you could use to mark a function as an interrupt handler, so it could be invoked directly by a CPU interrupt, so you did not have to write any assembly for it. Nowadays I would assume that most of the BIOS is written in C++, if not in any higher level language. The vast majority of the code that makes up a BIOS is specific to the underlying hardware, so it does not really need to be portable: it is guaranteed that it will always run on the same type of CPU. The CPU may evolve, but as long as it maintains backwards compatibility with previous versions, it can still run the BIOS unmodified. Plus, you can always recompile the parts of the BIOS written in C to run natively on any new CPU that comes up, if the need arises. The reason why we write BIOSes in languages of a higher level than assembly is because it is easier to write them this way, not because they really need to be portable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298628",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/107284/"
]
} |
298,665 | I understand C and C++ are different languages but when I was learning C++ I was always told that C is a subset of C++ or C++ is C with classes. And that was quite true until the appearance of C++x0, C++11 (or the modern C++ 11/14/17 in general). In fact (especially when working on embedded systems) it's very likely to find code written in C++ but with a lot of parts written entirely in pure C language. Here I have several questions: Should I stop using the term C/C++? If the answer to #1 is yes, how would I call a program that use a mix of C and C++? Given that both of them are 'different' languages is it likely that at some point C++ compilers stop supporting code written in the C language (since modern c++ is diverging from the C mentality for basic stuff like pointers, dynamic memory handling, etc) Is there right now any collaboration between the people who makes the standards of C/C++ to keep the compatibility If #4 is yes, such collaboration could end up in the near future with the appearance of the modern c++ (11/14/17) I know that there already similar questions, but I'm sure that a lot of people share these questions so I'm very interested to get good answers especially for the points that have to do with the C++ tendency in the near future. | C was never a subset of C++. The most obvious example of this is int new; . This has been true since C89 and C++98, and the languages have only grown further from each other as new standards have come out. Should I stop using the term C/C++ Yes If the answer to #1 is yes, how would I call a program that use a mix of C and C++? A source file is written in one language or the other. A program can consist of code from multiple languages working together, or an executable produced by linking different compiled objects. You would say the program was written in C and C++, "C/C++" is not a language. Given that both of them are 'different' languages is it likely that at some point C++ compilers stop supporting code written in the C language They never did. char *a = malloc(10); . C and C++ have never been fully compatible for at least as long as they've had ISO standards (I don't know all the details about the pre-standardized days). click the links or see below for a file that is fine with C89 and up, but isn't valid under any C++ standard. afaik no, but I don't know much about the C working group. /* A bunch of code that compiles and runs under C89 but fails under any C++ */
/* type aliases and struct names occupy separate namespaces in C, not in C++ */
struct S { int i; };
typedef int S;
struct Outer { struct Inner { int i; } in; };
/* struct Inner will be Outer::Inner in C++ due to name scope */
struct Inner inner;
/* default return type of int in C, C++ functions need explicit return types */
g() {
return 0;
}
/* C sees this as two declarations of the same integer,
* C++ sees it as redefinition */
int n;
int n;
/* K&R style argument type declarations */
void h(i) int i; { }
/* struct type declaration in return type */
struct S2{int a;} j(void) { struct S2 s = {1}; return s; }
/* struct type declaration in argument, stupid and useless, but valid */
/*void dumb(struct S3{int a;} s) { } */
/* enum/int assignment */
enum E{A, B};
enum E e = 1;
void k() {
goto label; /* C allows jumping past an initialization */
{
int x = 0;
label:
x = 1;
}
}
/* () in declaration means unspecified number of arguments in C, the definition
* can take any number of arguments,
* but means the same as (void) in C++ (definition below main) */
void f();
int main(void) {
f(1); /* doesn't match declaration in C++ */
{
/* new is a keyword in C++ */
int new = 0;
}
/* no stdio.h include results in implicit definiton in C. However,
* as long as a matching function is found at link-time, it's fine.
* C++ requires a declaration for all called functions */
puts("C is not C++");
{
int *ip;
void *vp = 0;
ip = vp; /* cast required in C++, not in C */
}
return 0;
}
/* matches declaration in C, not in C++ */
void f(int i) { } I always feel it's worth mentioning that C is a subset of Objective-C. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298665",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198524/"
]
} |
298,677 | I just noticed that the universal newline feature of file operations seems to be on its way out. The documentation for Python 3.5 open 's mode parameter indicates that it's deprecated: 'U' universal newlines mode (deprecated) At least as far back as Python 3.2, open contains a similar "backwards compatibility only" warning when documenting the usage of the mode argument: 'U' universal newlines mode (for backwards compatibility; should not be used in new code) Even in Python 2.7, a similar warning is placed in the documentation of io.open . What's the reason for this? | The open() function in the Python 3 library has a newline argument. Setting it to None enables universal newlines. This is the accepted way to do it, rendering the mode='U' argument redundant. Use newline=None to enable universal newlines mode (this is the default). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/298677",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92517/"
]
} |
298,720 | All examples of semantic versioning I've seen show 3 components in use. No more than 2 period characters. At $DAYJOB , we use 4 components in our release numbers: 5.0.1.2 Does Semantic Versioning allow for this? And as a higher-level and more arguable side question, does it really even matter? I started to think it might be a good idea to enforce semantic versioning, but ultimately entities like PCI override it. I should have clarified on my PCI comment. The issue is that audits and their cost influence when the major and minor components change, not necessarily a true new feature. For example, if a feature related to payments is introduced, we bump the minor number for PCI. But if we add a brand new feature related to something in the gui, it doesn't. Only the patch changes. So in this case we don't really get a say in the matter as developers since someone else makes those decisions. | It sounds like you are bypassing normal conventions just to avoid process overhead/audits. That... strikes me as concerning. What you are doing is effectively making an extra version number (your minor PCI digit) somewhat intentionally in order to move your feature/minor version numbers back a place, to no longer trigger your internal audit criteria. Anyways, getting to your question about semantic versioning, the spec for Semantic Versioning states: Given a version number MAJOR.MINOR.PATCH, increment the: MAJOR version when you make incompatible API changes, MINOR version when you add functionality in a backwards-compatible manner, and PATCH version when you make backwards-compatible bug fixes. Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format . Emphasis mine. So the question is, are you using the fourth character for pre-release/build metadata? Or is it basically another version indication that you are releasing? If "yes" then semantic versioning's spec does allow for it. If "no" then you technically are not following semantic versioning. And as a higher-level and more arguable side question, does it really even matter? Whether you want to rigidly follow it or not is a decision you and your team have to make. The purpose of semantic versioning is to help with API compatibility: Bug fixes not affecting the API increment the patch version, backwards compatible API additions/changes increment the minor version, and backwards incompatible API changes increment the major version. I call this system "Semantic Versioning." Under this scheme, version numbers and the way they change convey meaning about the underlying code and what has been modified from one version to the next. It's a system that helps make it more clear when versioning affects downstream users of the API. As long as your API is similarly clear it's not a huge deal which way you choose. Semantic versioning just happens to be straightforward, for example if I'm using 3.4.2 and need to upgrade to 3.4.10 I know I can do so without breaking anything. If the new version is 3.5.1 I know it's backwards compatible. And I know version 4.0.1 would be a breaking change. That's all part of what the version numbers mean. @enderland Yes basically. MAJOR(PCI).MINOR(PCI).FEATURE.HOTFIX+BUILD. We're basically only allowed to change the 3rd and 4th component without getting PCI (and subsequently the PCI overlords at the company) involved. 
To me it feels like this is a bit contrived, I am not sure they are justified in the way they manage the version number, but I do not know enough about PCI and audit process to say otherwise. Ok, this is fine. You have a system which works for you and meets your needs. That's the point of versioning. If your API is private (only internally facing) it really doesn't matter how you version as long as it makes sense to you and everyone using it. Where versioning in a standard format matters is when you have many other consumers of your API that need to know "what does this version mean?" Having an arbitrary versioning system will confuse people who are used to other systems, such as semantic versioning. But if no-one is really using your versioning system except the people creating it - it doesn't really matter. | {
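As an illustration of the compatibility rules described above, here is a small C# sketch built on System.Version (which also happens to parse four-component numbers like the question's 5.0.1.2); the helper name is made up, and the sketch deliberately ignores semver's special-casing of 0.x versions and pre-release tags:
using System;
static class SemVerCheck
{
    // True when 'candidate' should be a safe upgrade from 'current' under semantic
    // versioning: same major version, and equal or newer otherwise.
    static bool IsBackwardsCompatibleUpgrade(Version current, Version candidate) =>
        candidate.Major == current.Major && candidate >= current;
    static void Main()
    {
        var current = Version.Parse("3.4.2");
        Console.WriteLine(IsBackwardsCompatibleUpgrade(current, Version.Parse("3.4.10"))); // True  (patch bump)
        Console.WriteLine(IsBackwardsCompatibleUpgrade(current, Version.Parse("3.5.1")));  // True  (minor bump)
        Console.WriteLine(IsBackwardsCompatibleUpgrade(current, Version.Parse("4.0.1")));  // False (breaking change)
    }
}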
"source": [
"https://softwareengineering.stackexchange.com/questions/298720",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31950/"
]
} |
298,973 | I am still trying to find the best security solution for protecting REST API, because the amount of mobile applications and API is increasing every day. I have tried different ways of authentication, but still has some misunderstandings, so I need advice of someone more experienced. Let me tell, how I understand all this stuff. If I understand something incorrectly, please let me know. As far as REST API is stateless as well as WEB in general, we need to send some auth data in each request(cookies, token....). I know three widely used mechanisms to authenticate user Token with HTTPS. I have used this approach a lot of times it is good enough with HTTPS. If user provides correct password and login, he will receive token in response, and will use it for the further requests. Token is generated by the server and stored, for instance in the table separate or the same where user info is stored. So for each request server checks if user has token and it is the same as in the database. Everything is pretty straightforward. JWT Token. This token is self-descriptive, it contains all necessary information about the token itself, user cannot change for example expiration date or any other claim, because this token is generated (signed) by the server with secret keyword. This is also clear. But one big problem, personally for me, how to invalidate token. OAuth 2. I don't understand why this approach should be used when communication is established directly between server and client. As far as I understand, OAuth server is used to issue token with restricted scope to allow other applications access user information without storing password and login. This is great solution for the social networks, when user wants to sign up on some page, server can request permissions to get user info, for instance from twitter or facebook, and fill registration fields with user data and so on. Consider mobile client for online store. First question should I prefer JWT over first type token ? As far as I need login/logout user on mobile client, I need to store somewhere token or in case of JWT, token should be invalidated on logout. Different approaches are used to invalidate token one of the is to create invalid token list (black list). Hmm. Table/file will have much bigger size than if token was stored in table and associated with user, and just removed on logout. So what are benefits of JWT token ? Second question about OAuth , should I use it in case of direct communication with my server? What is the purpose of one more layer between client and server only to issue token, but communication will be not with oauth server but with the main server. As I understand OAuth server is responsible only for giving third-party apps permissions (tokens) to access user private information. But my mobile client application is not third-party. | Consider the first case. Each client gets a random ID that lasts for the duration of the session - which could be several days if you like. Then you store the information relevant to that session somewhere server side. It could be in a file or a database. Let's suppose you pass the ID via a cookie but you could use the URL or an HTTP header. Session IDs/ Cookies Pros: Easy to code both the client and server. Easy to destroy a session when someone logs out. Cons: The server side periodically needs to delete expired sessions where the client didn't logout. Every HTTP request requires a lookup to the data store. Storage requirements grow as more users have active sessions. 
If there are multiple front end HTTP servers the stored session data needs to be accessible by all of them. This could be a bit more work than storing it on one server. The bigger issues are the data store becomes a single point of failure and it can become a bottleneck. JSON Web Tokens (JWT) In the second case the data is stored in a JWT that is passed around instead of on the server. Pros: The server side storage issues are gone. The client side code is easy. Cons: The JWT size could be larger than a session ID. It could affect network performance since it is included with each HTTP request. The data stored in the JWT is readable by the client. This may be an issue. The server side needs code to generate, validate, and read JWTs. It's not hard but there is a bit of a learning curve and security depends on it. Anyone who gets a copy of the signing key can create JWTs. You might not know when this happens. There was (is?) a bug in some libraries that accepted any JWT signed with the "none" algorithm so anyone could create JWTs that the server would trust. In order to revoke a JWT before it expires you need to use a revocation list. This gets you back to the server side storage issues you were trying to avoid. OAuth Often OAuth is used for authentication (i.e. identity) but it can be used to share other data like a list of content the user has purchased and is entitled to download. It can also be used to grant access to write to data stored by the third party. You might use OAuth to authenticate users and then use server side storage or JWT for the session data. Pros: No code for users to signup or reset their password. No code to send an email with a validation link and then validate the address. Users do not need to learn/write-down another username and password. Cons: You depend on the third party in order for your users to use your service. If their service goes down or they discontinue it then you need to figure something else out. Eg: how do you migrate the user's account data if their identity changes from "[email protected]" to "[email protected]"? Usually you have to write code for each provider. eg Google, Facebook, Twitter. You or your users might have privacy concerns. The providers know which of their users use your service. You are trusting the provider. It is possible for a provider to issue tokens that are valid for one user to someone else. This could be for lawful purposes or not. Miscellaneous Both session IDs and JWTs can be copied and used by multiple users. You can store the client IP address in a JWT and validate it but that prevents clients from roaming from say Wi-Fi to cellular. | {
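To make the JWT option concrete, here is a minimal C# sketch that signs a payload with HMAC-SHA256 using only the base class library; in practice you would reach for a maintained JWT library and also validate expiry and claims on every request, and the key and payload shown here are placeholders:
using System;
using System.Security.Cryptography;
using System.Text;
static class TinyJwt
{
    // Base64url as required by the JWT format: '+' -> '-', '/' -> '_', no '=' padding.
    static string B64Url(byte[] bytes) =>
        Convert.ToBase64String(bytes).TrimEnd('=').Replace('+', '-').Replace('/', '_');
    static string Sign(string payloadJson, byte[] secretKey)
    {
        string header = B64Url(Encoding.UTF8.GetBytes("{\"alg\":\"HS256\",\"typ\":\"JWT\"}"));
        string payload = B64Url(Encoding.UTF8.GetBytes(payloadJson));
        using (var hmac = new HMACSHA256(secretKey))
        {
            string signature = B64Url(hmac.ComputeHash(Encoding.UTF8.GetBytes(header + "." + payload)));
            return header + "." + payload + "." + signature;
        }
    }
    static void Main()
    {
        byte[] key = Encoding.UTF8.GetBytes("replace-with-a-long-random-secret");
        string token = Sign("{\"sub\":\"user-42\",\"exp\":1735689600}", key);
        Console.WriteLine(token); // hand to the client; verify by recomputing the signature on each request
    }
}
Because the signature is recomputed from the header and payload, the server needs no per-session storage, which is exactly the trade-off discussed in the pros and cons above.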
"source": [
"https://softwareengineering.stackexchange.com/questions/298973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/181882/"
]
} |
299,152 | I keep seeing recruiters, developers, etc. refer to Node.js as a framework. In my opinion, this is out of ignorance for what Node.js really is. Oftentimes, in job descriptions, Node.js is grouped in as a library among AngularJS , React , etc. Generally, I see it as being entered by someone that doesn't know the difference (HR, a recruiter, etc.). In my opinion, Node.js is a platform , or a runtime environment; it switches out the DOM API (JavaScript in the browser) for various other APIs, like the file system (since it runs as a server, and not in the browser). Why is it that people think Node.js is a framework; am I wrong? Is it actually a framework? | It's a bit tough to say because these words are not well-defined. In common parlance, I think it's a bit atypical to call Node.js a framework, sure, but I'd have a hard time arguing as to why exactly it is not. This all gets dicey, and I often see really poor uses of language, so I'll be explicit and start from the bottom JavaScript is a computer language, which is to say, narrowly, a set of conventions which allow us to read and interpret a bunch of text as having execution semantics —a fancy word for "way of interpreting the language as a set of instructions". Classes of programs called interpreters , compilers , transpilers , linters , highlighters , etc. all take text in and attempt to do something with this conventional understanding of how to execute the code. Interpreters actually perform the execution semantics by operating some machine—usually your computer. You can think of them as a little man inside your computer flipping switches like "print this character" based on instruction written in your JavaScript program. Compilers try to convert the JavaScript text to a new set of text which has execution semantics for a different language—perhaps one with the special property that computers can directly execute it. Transpilers are a generalized form of compiler in that they take JavaScript text in and output text of some other language. The difference is thus a little subjective, but usually one thinks of a compiler as outputting very low level code and a transpiler as outputting high level code. Linters , highlighters , type checkers , etc. all take in JavaScript text and output some kind of analytical product, highlighted text e.g., which is influenced by the execution semantics, but not actually representative of it. Now, let's dig into execution semantics a bit. Generally, execution semantics involves a process of reading language text and arriving at either a description of an abstract machine or a description of observable side effects . What I'd like to suggest is that both of these assume the need for there to be some kind of "low-level API" either to operate the machine or to perform the observable effects. These are usually considered to be part of the runtime environment The runtime environment or runtime is a set of assumed primitives that the language convention requires to exist in order to operate. As far as the language goes, there may be some assumption about their behavior, but they are unobservable. In the imagery of the interpreter above, the "man inside" just flicks the runtime's switches---he cannot personally inspect what they're doing. The word runtime is usually abused to mean both the set of assumed primitives themselves and an actual instantiation of them. So, now we get to something hairy. 
A language is a set of conventions which assumes the existence of a runtime in order to provide meaning to its execution semantics. It never "probes into them" as they are out of scope. In order to actually use a language you want something like a compiler or interpreter alongside a runtime implementation. The compiler/interpreter and this runtime go hand in hand in actually executing your code. Chrome's V8 , often called an engine , is a package deal containing an interpreter, compiler, runtime implementation compatible with the runtime interface demanded by the ECMA standard JavaScript conventions. So where does Node.js fit into this? We have to break it into parts: Node.js expands the JavaScript language by providing a larger set of runtime environment primitives—those which are outside the scope of ECMA's standards. These include things like file I/O . This means that Node.js changes the language and is in some sense a new language: "Node.js JavaScript" Node.js, as a package, contains an interpreter and a compiler. It just steals these from V8. Node.js provides an implementation of the Node.js runtime environment which allows "Node.js JavaScript" to be executed. Node.js provides a set of standard libraries built atop the new primitives which make them more accessible to end users of "Node.js JavaScript". So Node.js is a lot of things! But is it a framework? This is where terminology totally falls apart—nobody has a good, consistent, meaningful definition of what a framework actually is. There are debates that rage: "what is a framework versus a library" and they end on unsatisfactory things like "a library is something you call and a framework is something that calls you". I don't even really want to give such a sad explanation the light of day—but JavaScript, and Node.js JavaScript in particular, is an enormous blow to this definition since the whole callback-passing technique means you are constantly switching between calling and being called. In my personal opinion, there is something substantial here. I don't want to draw a bright line, but so I'll merely say A set of code is library-like if it works like a set of legos : divisible and made for assembly. While there might be some examples for how to use the library, it is generally on the user themselves to assemble it toward their needs. A set of code is framework-like if it is non-divisible and implies conventions*: pulling pieces of it apart can cause many assumptions to fail so you have to understand conventional use in order to use a framework properly. This is a hand-wavey line to be sure, but I want to draw out a really interesting point about frameworks: Frameworks imply a set of conventions of how to interpret code; they are therefore a language in their own right. This might be something that people want to argue about as well, but if you bought my earlier definition that a language is just a set of conventions which give life to a block of text, then whenever you lay down a new layer of conventions you've built a new language. Perhaps with frameworks the raw materials are the semantic interpretations of their host language instead of raw text files, but the idea is the same! So with all that said, I'm totally happy to call Node.js a framework even if it goes a bit against the norm! Node.js adds functionality to raw JavaScript in the way of expanding the language . With it brings new assumptions and tools for working in this expanded language. 
Functionally, these ideas are the same as the ideas of other well-accepted frameworks like Ruby on Rails . I would argue that if at this moment you feel a bit queasy and want to argue that there's a huge divide between Ruby on Rails and Node.js in this way of things then I'm right there with you , of course. The kind of conceptual worlds that the two live in are dramatically different —I merely want to say that they're the same kind of thing: sets of conventions for expanding the powers of a base language within a particular domain. I'm also happy to suggest that Node.js's domain is tiny and tight and thus the conventions it adds are simple to reason about and relatively easy to make correct. OTOH, Ruby on Rails lives in a complex, poorly defined domain of "business web applications" which means that the conventions it lays are certainly fuzzy and broken. But all of this is a long way of saying, yeah, recruiters probably have no idea what they mean when the say that. I'm guessing "framework" just sounds like a better, more grokkable word than "runtime" or "engine". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299152",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176468/"
]
} |
299,157 | I am working with the following system: Network Data Feed -> Third Party Nio Library -> My Objects via adapter pattern We recently had an issue where I updated the version of the library I was using, which, among other things, caused timestamps (which the third party library returns as long ), to be changed from milliseconds after the epoch to nanoseconds after the epoch. The Problem: If I write tests that mock the third party library's objects, my test will be wrong if I have made a mistake about the third party library's objects. For example, I didn't realize that the timestamps changed precision, which resulted in a need for change in the unit test, because my mock returned the wrong data. This is not a bug in the library , it happened because I missed something in the documentation. The problem is, I cannot be sure about the data contained in these data structures because I cannot generate real ones without a real data feed. These objects are big and complicated and have a lot of different pieces of data in them. The documentation for the third party library is poor. The Question: How can I set up my tests to test this behavior? I'm not sure I can solve this issue in a unit test, because the test itself can easily be wrong. Additionally, the integrated system is large and complicated and it's easy to miss something. For example, in the situation above, I had correctly adjusted the timestamp handling in several places, but I missed one of them. The system seemed to be doing mostly the right things in my integration test, but when I deployed it to production (which has a lot more data), the problem became obvious. I do not have a process for my integration tests right now. Testing is essentially: try to keep the unit tests good, add more tests when things break, then deploy to my test server and make sure things seem sane, then deploy to production. This timestamp issue passed the unit tests because the mocks were created wrong, then it passed the integration test because it didn't cause any immediate, obvious problems. I do not have a QA department. | It sounds like you're already doing due diligence. But ... At the most practical level, always include a good handful of both "full-loop" integration tests in your suite for your own code, and write more assertions than you think you need. In particular, you should have a handful of tests that perform a full create-read-[do_stuff]-validate cycle. [TestMethod]
public void MyFormatter_FormatsTimesCorrectly() {
// this test isn't necessarily about the stream or the external interpreter.
// but ... we depend on them working how we think they work:
var stream = new StreamThingy();
var interpreter = new InterpreterThingy(stream);
stream.Write("id-123, some description, 12345");
// this is what you're actually testing. but, it'll also hiccup
// if your 3rd party dependencies introduce a breaking change.
var formatter = new MyFormatter(interpreter);
var line = formatter.getLine();
Assert.equal(
"some description took 123.45 seconds to complete (id-123)", line
);
} And it sounds like you're already doing this sort of thing. You're just dealing with a flaky and/or complicated library. And in that case, it's good to throw in a few "this is how the library works" types of tests that both verify your understanding of the library and serve as examples of how to use the library. Suppose you need to understand and depend on how a JSON parser interprets each "type" in a JSON string. It's helpful and trivial to include something like this in your suite: [TestMethod]
public void JSONParser_InterpretsTypesAsExpected() {
String datastream = "{nbr:11,str:"22",nll:null,udf:undefined}";
var o = (new JSONParser()).parse(datastream);
Assert.equal(11, o.nbr);
Assert.equal(Int32.getType(), o.nbr.getType());
Assert.equal("22", o.str);
Assert.equal(null, o.nll);
Assert.equal(Object.getType(), o.nll.getType());
Assert.isFalse(o.KeyExists("udf")); // documents how the parser treats 'undefined' entries
} But secondly, remember that automated testing of any kind, and at almost any level of rigor, will still fail to protect you against all bugs. It's perfectly common to add tests as you discover problems. Not having a QA department, this means a lot of those problems will be discovered by end-users. And to a significant degree, that's just normal. And thirdly, when a library changes the meaning of a return-value or field without renaming the field or method or otherwise "breaking" dependent code (maybe by changing its type), I'd be pretty damn unhappy with that publisher. And I'd argue that, even though you should probably have read the changelog if there is one, you should probably also pass some of your stress onto the publisher. I'd argue they need the hopefully-constructive criticism ... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299157",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88986/"
]
} |
299,249 | I am struggling with testing a method that uploads documents to Amazon S3, but I think this question applies to any non-trivial API/external dependecy. I've only come up with three potential solutions but none seem satisfactory: Do run the code, actually upload the document, check with AWS's API that it has been uploaded and delete it at the end of the test. This will make the test very slow, will cost money every time the test is run and won't alway return the same result. Mock S3. This is super hairy because I have no idea about that object's internals and it feels wrong because it's way too complicated. Just make sure that MyObject.upload() is called with the right arguments and trust that I am using the S3 object correctly. This bothers me because there is no way to know for sure I used the S3 API correctly from the tests alone. I checked how Amazon tests their own SDK and they do mock everything. They have a 200 lines helper that does the mocking. I don't feel it's practical for me to do the same. How do I solve this? | There are two issues we have to look at here. The first is that you seem to be looking at all of your tests from the unit test perspective. Unit tests are extremely valuable, but are not the only kinds of tests. Tests can actually be divided into several different layers, from very fast unit tests to less fast integration tests to even slower acceptance tests . (There can be even more layers broken out, like functional tests .) The second is that you are mixing together calls to third-party code with your business logic, creating testing challenges and possibly making your code more brittle. Unit tests should be fast and should be run often. Mocking dependencies helps to keep these tests running fast, but can potentially introduce holes in coverage if the dependency changes and the mock doesn't. Your code could be broken while your tests still run green. Some mocking libraries will alert you if the dependency's interface changes, others cannot. Integration tests, on the other hand, are designed to test the interactions between components, including third-party libraries. Mocks should not be used at this level of testing because we want to see how the actual object interact together. Because we are using real objects, these tests will be slower, and we will not run them nearly as often as our unit tests. Acceptance tests look at an even higher level, testing that the requirements for the software are met. These tests run against the entire, complete system that would get deployed. Once again, no mocking should be used. One guideline people have found valuable regarding mocks is to not mock types you don't own . Amazon owns the API to S3 so they can make sure it doesn't change beneath them. You, on the other hand, do not have these assurances. Therefore, if you mock out the S3 API in your tests, it could change and break your code, while your tests all show green. So how do we unit test code that uses third-party libraries? Well, we don't. If we follow the guideline, we can't mock objects we don't own. But… if we own our direct dependencies, we can mock them out. But how? We create our own wrapper for the S3 API. We can make it look a lot like the S3 API, or we can make it fit our needs more closely (preferred). We can even make it a little more abstract, say a PersistenceService rather than an AmazonS3Bucket . 
PersistenceService would be an interface with methods like #save(Thing) and #fetch(ThingId) , the types of methods we might like to see (these are examples, you might actually want different methods). We can now implement a PersistenceService around the S3 API (say a S3PersistenceService ), encapsulating it away from our calling code. Now to the code that calls the S3 API. We need to replace those calls with calls to a PersistenceService object. We use dependency injection to pass our PersistenceService into the object. It's important not to ask for a S3PersistenceService , but to ask for a PersistenceService . This allows us to swap out the implementation during our tests. All the code that used to use the S3 API directly now uses our PersistenceService , and our S3PersistenceService now makes all the calls to the S3 API. In our tests, we can mock out PersistenceService , since we own it, and use the mock to make sure that our code makes the correct calls. But now that leaves how to test S3PersistenceService . It has the same problem as before: we can't unit test it without calling to the external service. So… we don't unit test it. We could mock out the S3 API dependencies, but this would give us little-to-no additional confidence. Instead, we have to test it at a higher level: integration tests. This may sound a little troubling saying that we shouldn't unit test a part of our code, but let's look at what we accomplished. We had a bunch of code all over the place we couldn't unit test that now can be unit tested through the PersistenceService . We have our third-party library mess confined to a single implementation class. That class should provide the necessary functionality to use the API, but does not have any external business logic attached to it. Therefore, once it is written, it should be very stable and should not change very much. We can rely on slower tests that we don't run that often because the code is stable. The next step is to write the integration tests for S3PersistenceService . These should be separated out by name or folder so we can run them separately from our fast unit tests. Integration tests can often use the same testing frameworks as unit tests if the code is sufficiently informative, so we don't need to learn a new tool. The actual code to the integration test is what you would write for your Option 1. | {
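A minimal C# sketch of that wrapper and the dependency-injection seam might look as follows; the interface mirrors the #save(Thing)/#fetch(ThingId) idea above (with a C#-style I prefix), Thing and ThingId are placeholder types, and the actual AWS SDK calls are deliberately left as comments rather than guessed at:
using System;
public class Thing { public string Id { get; set; } public byte[] Content { get; set; } }
public class ThingId { public string Value { get; set; } }
// The seam our own code depends on. We own this type, so we may mock it in unit tests.
public interface IPersistenceService
{
    void Save(Thing thing);
    Thing Fetch(ThingId id);
}
// Thin adapter around the S3 API. No business logic lives here, so it changes rarely
// and is covered by integration tests rather than unit tests.
public class S3PersistenceService : IPersistenceService
{
    public void Save(Thing thing)
    {
        // translate 'thing' into an S3 put request via the AWS SDK (omitted in this sketch)
    }
    public Thing Fetch(ThingId id)
    {
        // issue an S3 get request via the AWS SDK and map the result back (omitted in this sketch)
        throw new NotImplementedException();
    }
}
// Business code asks for the interface, so tests can inject a mock instead of hitting S3.
public class DocumentArchiver
{
    private readonly IPersistenceService persistence;
    public DocumentArchiver(IPersistenceService persistence) { this.persistence = persistence; }
    public void Archive(Thing document) => persistence.Save(document);
}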
"source": [
"https://softwareengineering.stackexchange.com/questions/299249",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/199400/"
]
} |
299,286 | The Law of Demeter states the following: Each unit should have only limited knowledge about other units: only units "closely" related to the current unit. Each unit should only talk to its friends; don't talk to strangers. Only talk to your immediate friends. C# 6.0 introduced a new operator called the null-conditional operator . IMHO, it makes coding easier and improves readability.
But it also makes it easier to write more coupled code, as it is easier to navigate through class fields, already checking for nullity (something like var x = A?.B?.C?.D?.E?.F? ). Is it correct to state that this new operator goes against the Law of Demeter? | Is it correct to state that this new operator goes against the Law of Demeter? No * * The null conditional operator is a tool within the language and the .NET framework. Any tool has the ability to be abused and used in ways that could harm the maintainability of a given application. But the fact that a tool can be abused doesn't necessarily mean that it has to be abused, nor that the tool violates any particular principle(s) that may be held. The Law of Demeter and others are guidelines about how you should write your code. It's targeted to humans, not the tools. So the fact that the C# 6.0 language has a new tool within it doesn't necessarily affect how you should be writing and structuring your code. With any new tool, you need to evaluate it as ... if the guy who ends up maintaining your code will be a violent psychopath ... . Note again, that this is guidance to the person writing the code and not about the tools being used. | {
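For illustration, here is a small C# sketch contrasting the "train wreck" chain from the question with a shape that keeps the operator but only talks to an immediate friend; the Order, Customer and Address types are invented for the example:
public class Address  { public string City { get; set; } }
public class Customer { public Address Address { get; set; } }
public class Order
{
    public Customer Customer { get; set; }
    // The order exposes what callers actually need, so they talk only to their immediate friend.
    public string ShippingCity() => Customer?.Address?.City ?? "unknown";
}
public static class Demo
{
    public static void Main()
    {
        var order = new Order();
        // Chaining through strangers couples this code to Customer *and* Address.
        string city1 = order?.Customer?.Address?.City;
        // Asking the immediate friend: the ?. operator is still useful, just closer to home.
        string city2 = order.ShippingCity();
        System.Console.WriteLine(city1 ?? "unknown");
        System.Console.WriteLine(city2);
    }
}
The operator itself is neutral; it is the shape of the object graph the caller walks through that decides whether the guideline is respected.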
"source": [
"https://softwareengineering.stackexchange.com/questions/299286",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157212/"
]
} |
299,347 | I am reading C# Design Pattern Essentials . I'm currently reading about the iterator pattern. I fully understand how to implement, but I don't understand the importance or see a use case. In the book an example is given where someone needs to get a list of objects. They could have done this by exposing a public property, such as IList<T> or an Array . The book writes The problem with this is that the internal representation in both of these classes has been exposed to outside projects. What is the internal representation? The fact it's an array or IList<T> ? I really don't understand why this is a bad thing for the consumer (the programmer calling this) to know... The book then says this pattern works by exposing its GetEnumerator function, so we can call GetEnumerator() and expose the 'list' this way. I assume this patterns has a place (like all) in certain situations, but I fail to see where and when. | Software is a game of promises and privileges. It is never a good idea to promise more than you can deliver, or more than your collaborator needs. This applies particularly to types. The point of writing an iterable collection is that its user can iterate over it - no more, no less. Exposing the concrete type Array usually creates many additional promises, e.g. that you can sort the collection by a function of your own choosing, not to mention the fact that a normal Array will probably allow the collaborator to change the data that's stored inside it. Even if you think this is a good thing ("If the renderer notices that the new export option is missing, it can just patch it right in! Neat!"), overall this decreases the coherence of the code base, making it harder to reason about - and making code easy to reason about is the foremost goal of software engineering. Now, if your collaborator needs access to a number of thingies so that they are guaranteed not to miss any of them, you implement an Iterable interface and expose only those methods that this interface declares. That way, next year when a massively better and more efficient data structure appears in your standard library, you'll be able to switch out the underlying code and benefit from it without fixing your client code everywhere . There are other benefits to not promising more than is needed, but this one alone is so big that in practice, no others are needed. | {
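To make that concrete, here is a short C# sketch of a class that promises only iteration; the ServiceCatalogue name and its contents are invented for the example:
using System.Collections.Generic;
public class ServiceCatalogue
{
    // Internal representation: free to change to an array, a set, a lazy database query, ...
    private readonly List<string> services = new List<string> { "one", "two", "three" };
    // The only promise made to callers: "you can iterate over these".
    public IEnumerable<string> Services
    {
        get
        {
            foreach (var s in services)
                yield return s;   // the iterator cannot be cast back to the List and mutated
        }
    }
}
// Consumer code depends only on IEnumerable<string>:
// foreach (var s in new ServiceCatalogue().Services) { ... }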
"source": [
"https://softwareengineering.stackexchange.com/questions/299347",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/119594/"
]
} |
299,365 | The existing coding standards on a large C# project include a rule that all type names be fully qualified, forbidding employment of the 'using' directive. So, rather than the familiar: using System.Collections.Generic;
.... other stuff ....
List<string> myList = new List<string>(); (It's probably no surprise that var also is prohibited.) I end up with: System.Collections.Generic.List<string> myList = new System.Collections.Generic.List<string>(); That's a 134% increase in typing, with none of that increase providing useful information. In my view, 100% of the increase is noise (clutter) that actually impedes understanding. In 30+ years of programming, I've seen such a standard proposed once or twice, but never implemented. The rationale behind it escapes me. The person imposing the standard is not stupid, and I don't think he's malicious. Which leaves misguided as the only other alternative unless I'm missing something. Have you ever heard of such a standard being imposed? If so, what was the reason behind it? Can you think of arguments other than "it's stupid," or "everybody else employs using " that might convince this person to remove this prohibition? Reasoning The reasons for this prohibition were: It's cumbersome to hover the mouse over the name to get the fully qualified type. It's better to always have the fully qualified type visible all the time. Emailed code snippets don't have the fully qualified name, and therefore can be difficult to understand. When viewing or editing the code outside of Visual Studio (Notepad++, for example), it's impossible to get the fully qualified type name. My contention is that all three cases are rare, and that making everybody pay the price of cluttered and less-understandable code just to accommodate a few rare cases is misguided. Potential namespace conflict issues, which I expected to be the primary concern, weren't even mentioned. That's especially surprising because we have a namespace, MyCompany.MyProject.Core , which is an especially bad idea. I learned long ago that naming anything System or Core in C# is a quick path to insanity. As others have pointed out, namespace conflicts are easily handled by refactoring, namespace aliases, or partial qualification. | The broader question: Have you ever heard of such a standard being imposed? If so, what was the reason behind it? Yes, I've heard of this, and using fully qualified object names prevents name collisions. Though rare, when they happen, they can be exceptionally thorny to figure out. An Example: That type of a scenario is probably better explained with an example. Let's say we have two Lists<T> belonging to two separate projects. System.Collections.Generic.List<T>
MyCorp.CustomCollections.Optimized.List<T> When we use the fully qualified object name, it's clear as to which List<T> is being used. That clarity obviously comes at the cost of verbosity. And you might be arguing, "Wait! No one would ever use two lists like that!" Which is where I'll point out the maintenance scenario. You've written module Foo for your corporation which uses the corporation approved, optimized List<T> . using MyCorp.CustomCollections.Optimized;
public class Foo {
List<object> myList = ...;
} Later on, a new developer decides to extend the work you've been doing. Not being aware of the company's standards, they update the using block: using MyCorp.CustomCollections.Optimized;
using System.Collections.Generic; And you can see how things go bad in a hurry. It should be trivial to point out that you could have two proprietary classes of the same name but in different namespaces within the same project. So it's not just a concern about colliding with .NET Framework supplied classes. MyCorp.WeightsAndLengths.Measurement();
MyCorp.TimeAndSpace.Measurement(); The reality: Now, is this likely to occur in most projects? No, not really. But when you're working on a large project with a lot of inputs, you do your best to minimize the chances of things exploding on you. Large projects with multiple contributing teams are a special kind of beast in the application world. Rules that seem unreasonable for other projects become more pertinent due to the input streams to the project and the likelihood that those contributing haven't read the project's guidelines. This can also occur when two large projects are merged together. If both projects had similarly named classes, then you'll see collisions when you start referencing from one project to the other. And the projects may be too large to refactor or management won't approve the expense to fund the time spent on refactoring. Alternatives: While you didn't ask, it's worth pointing out that this is not a great solution to the problem. It's not a good idea to be creating classes that will collide without their namespace declarations. List<T> , in particular, ought to be treated as a reserved word and not used as the name for your classes. Likewise, individual namespaces within the project should strive to have unique class names. Having to try and recall which namespace's Foo() you're working with is mental overhead that is best avoided. Said another way: having MyCorp.Bar.Foo() and MyCorp.Baz.Foo() is going to trip your developers up and best avoided. If nothing else, you can use partial namespaces in order to resolve the ambiguity. For example, if you absolutely can't rename either Foo() class you could use their partial namespaces: Bar.Foo()
Baz.Foo() Specific reasons for your current project: You updated your question with the specific reasons you were given for your current project following that standard. Let's take a look at them and really digress down the bunny trail. It's cumbersome to hover the mouse over the name to get the fully qualified type. It's better to always have the fully qualified type visible all the time. "Cumbersome?" Um, no. Annoying perhaps. Moving a few ounces of plastic in order to shift an on-screen pointer is not cumbersome . But I digress. This line of reasoning seems more like a cover-up than anything else. Offhand, I'd guess that the classes within the application are weakly named and you have to rely upon the namespace in order to glean the appropriate amount of semantic information surrounding the class name. This is not a valid justification for fully qualified class names, perhaps it's a valid justification for using partially qualified class names. Emailed code snippets don't have the fully qualified name, and therefore can be difficult to understand. This (continued?) line of reasoning reinforces my suspicion that classes are currently poorly named. Again, having poor class names is not a good justification for requiring everything to use a fully qualified class name. If the class name is difficult to understand, there's a lot more wrong than what fully qualified class names can fix. When viewing or editing the code outside of Visual Studio (Notepad++, for example), it's impossible to get the fully qualified type name. Of all the reasons, this one nearly made me spit out my drink. But again, I digress. I'm left wondering why is the team frequently editing or viewing code outside of Visual Studio? And now we're looking at a justification that's pretty well orthogonal to what namespaces are meant to provide. This is a tooling backed argument whereas namespaces are there to provide organizational structure to the code. It sounds like the project you own suffers from poor naming conventions along with developers who aren't taking advantage of what the tooling can provide for them. And rather than resolve those actual issues, they attempt to slap a band-aid over one of the symptoms and are requiring fully qualified class names. I think it's safe to categorize this as a misguided approach. Given that there are poorly named classes, and assuming you can't refactor, the correct answer is to use the Visual Studio IDE to its full advantage. Possibly consider adding in a plugin like the VS PowerTools package. Then, when I'm looking at AtrociouslyNamedClass() I can click on the class name, press F12 and be taken directly to the definition of the class in order to better understand what it's trying to do. Likewise, I can press Shift-F12 to find all the spots in the code currently suffering from having to use AtrociouslyNamedClass() . Regarding the outside tooling concerns - the best thing to do is to just stop it. Don't email snippets back and forth if they aren't immediately clear what they refer to. Don't use other tools outside of Visual Studio as those tools don't have the intelligence surrounding the code that your team needs. Notepad++ is an awesome tool, but it's not cut out for this task. So I agree with your assessment regarding the three specific justifications you were presented with. That said, what I think you were told was "We have underlying issues on this project that can't / won't address and this is how we 'fixed' them." 
And that obviously speaks to deeper issues within the team that may serve as red flags for you. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299365",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/199541/"
]
} |
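The collision argument above is easy to reproduce outside the thread's C# codebase; here is a hedged Java stand-in using java.util.Date versus java.sql.Date, which are not from the original discussion. The same "import the common one, qualify the rare one" move corresponds to partial qualification or a namespace alias in C#.
import java.util.Date;

class CollisionDemo {
    public static void main(String[] args) {
        Date now = new Date();                                    // java.util.Date via the import
        java.sql.Date sqlDate = new java.sql.Date(now.getTime()); // the rarer type is qualified only where used
        System.out.println(now + " / " + sqlDate);
    }
}
Qualifying only at the point of ambiguity keeps the clutter local instead of spreading fully qualified names across every line.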
299,468 | For a company I used to work for, I had to implement a socket receiver that mostly took data in UDP form over a local connection from some specialized sensor hardware. The data in question was a well-formed UDP packet, but interestingly, the data payload always ended with a CRC16 checksum formed using the rest of the data. I implemented the check on my end, as per the spec, but I always wondered if this was necessary. After all, doesn't the UDP protocol itself carry a 16-bit CRC? Therefore, although UDP packets can be lost or out-of-order, I was under the impression that they can not be corrupted without being discarded by the network hardware before they reach the OS's processes. Or is there some special use-case I'm missing? It's worth adding that I was working in the defence industry, which as I'm sure you can imagine, likes to be super-explicit about everything like this, so I'm wondering whether it was just a case of "security OCD"... | The UDP protocol does not guarantee that messages are delivered in order or delivered at all, but it does ensure that those messages which do get delivered are complete and unchanged by automatically including a 16-bit checksum. That means adding another 16-bit checksum on the application layer is usually redundant. ...usually.... First, with IPv4 (not IPv6), the checksum is optional . That means you might be using an exotic configuration which doesn't do checksum generation and validation (but in that case you should rather fix your network stack instead of jury-rigging this on the application layer). Second, with a 16bit checksum there is a one in 65536 chance that a completely random message happens to have a valid checksum. When this margin of error is too large for your use-case (and in the defense industry I could imagine several where it is), adding another CRC-16 checksum would further reduce it. But in that case you might consider to use a proper message digest like SHA-256 instead of CRC-16. Or go all the way and use a real cryptographic signature. This protects not just against random corruption but also intentional corruption by an attacker. Third, depending on where the data comes from and where it goes to, it might be corrupted before or after being sent over the network. In that case the additional checksum inside the message might protect the integrity of the message further than just between the two network hosts. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299468",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/157461/"
]
} |
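A small Java sketch of layering an application-level integrity check on top of whatever the transport already provides; the payload contents and the choice of CRC32 versus SHA-256 here are illustrative assumptions, not taken from the original system.
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.zip.CRC32;

class PayloadIntegrityDemo {
    public static void main(String[] args) throws Exception {
        byte[] payload = "sensor-frame-0001".getBytes(StandardCharsets.UTF_8);

        // Short CRCs are cheap and catch most random corruption, but a random frame
        // still has a small chance of passing the check.
        CRC32 crc = new CRC32();
        crc.update(payload);
        System.out.printf("CRC32  : %08x%n", crc.getValue());

        // A real digest shrinks that chance enormously; an HMAC or signature would also cover tampering.
        byte[] digest = MessageDigest.getInstance("SHA-256").digest(payload);
        System.out.println("SHA-256: " + String.format("%064x", new java.math.BigInteger(1, digest)));
    }
}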
299,505 | At work, we have a large internal application which has been under development for close to 2 years now; I've just recently joined the project and some of the architecture has me slightly perplexed, so I'm hoping someone here can provide some advice before I go out to ask the architects these same questions (so I can have an informed discussion with them). My apologies if the below is a little long, I just want to try to paint a good picture of what the system is before I ask my question :) The way the system is setup is that we have one main web application (asp.net, AngularJS) which does mostly just aggregates data from various other services. So basically it is a host for an AngularJS application; there is literally one MVC controller that bootstraps the client side, and then every other controller is a WebAPI controller. Calls from the client-side are handled by these controllers, which is always deployed to boxes that do nothing but host the Web Application. We currently have 4 such boxes. However, the calls are then ultimately routed through to yet another set of WebAPI applications (typically these are per business area, such as security, customer data, product data, etc). All of these WebAPIs get deployed together to dedicated boxes as well; we also have 4 of these boxes. With a single exception, these WebAPIs are not used by any other parts of our organisation. Finally these WebAPIs make yet another set of calls to the "back end" services, which are typically legacy asmx or wcf services slapped on top of various ERP systems and Data stores (over which we have no control). Most of our application's business logic is in these WebApis, such as transforming legacy data, aggregating it, executing business rules, the usual type of thing. What has me confused is what possible benefit there is in having such a separation between the WebApplication and the WebAPIs that serve it. Since nobody else is using them, I don't see any scalability benefit (i.e there's no point in putting in another 4 API boxes to handle increased load, since increased load on the API servers must mean there is increased load on the Web servers - therefore there has to be a 1:1 ratio of Web server to Api server) I also don't see any benefit at all of having to make an extra HTTP call
Browser=>HTTP=>WebApp=>HTTP=>WebAPI=>HTTP=>Backend services. (that HTTP call between WebApp and WebAPI is my problem) So I am currently looking to push to have the current WebAPIs moved from separate solutions, to just separate projects within the WebApplication solution, with simple project references in between, and a single deployment model. So they would ultimately just become class libraries. Deployment-wise, this means we would have 8 "full stack" web boxes, as opposed to 4+4. The benefits I see of the new approach are Increase in performance because there is one less cycle of serialisation/deserialisation between the Web application and the WebAPI servers Tons of code that can be deleted (i.e. no need to maintain/test) in terms of DTOs and mappers at the outgoing and incoming boundaries of the Web Application and WebApi servers respectively. Better ability to create meaningful automatied Integration Tests, because I can simply mock the back-end services and avoid the messiness around the mid-tier HTTP jumps. So the question is: am I wrong? Have I missed some fundamental "magic" of having separated WebApplication and WebAPI boxes? I have researched some N-Tier architecture material but can't seem to find anything in them that can give a concrete benefit for our situation (since scalabilty isn't an issue as far as I can tell, and this is an internal app so security in terms of the WebAPI applications isn't an issue.) And also, what would I be losing in terms of benefits if I were to re-organise the system to my proposed setup? | One reason is security - if (haha! when ) a hacker gains access to your front-end webserver, he gets access to everything it has access to. If you've placed your middle tier in the web server, then he has access to everything it has - ie your DB, and next thing you know, he's just run "select * from users" on your DB and taken it away from offline password cracking. Another reason is scaling - the web tier where the pages are constructed and mangled and XML processed and all that takes a lot more resource than the middle tier which is often an efficient method of getting data from the DB to the web tier. Not to mention transferring all that static data that resides (or is cached) on the web server. Adding more web servers is a simple task once you've got past 1. There shouldn't be a 1:1 ratio between web and logic tiers - I've seen 8:1 before now (and a 4:1 ratio between logic tier and DB). It depends what your tiers do however and how much caching goes on in them. Websites don't really care about single-user performance as they're built to scale, it doesn't matter that there is an extra call slowing things down a little if it means you can serve more users. Another reason it can be good to have these layers is that it forces more discipline in development where an API is developed (and easily tested as it is standalone) and then the UI developed to consume it. I worked at a place that did this - different teams developed different layers and it worked well as they had specialists for each tier who could crank out changes really quickly because they didn't have to worry about the other tiers - ie a UI javscript dev could add a new section to the site by simply consuming a new webservice someone else had developed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299505",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78953/"
]
} |
299,729 | I'm trying to understand the inherent tradeoff between roles and permissions when it comes to access control (authorization). Let's start with a given: in our system, a Permission will be a fine-grained unit of access (" Edit resource X ", " Access the dashboard page ", etc.). A Role will be a collection of 1+ Permissions. A User can have 1+ Roles. All these relationships (Users, Roles, Permissions) are all stored in a database and can be changed on the fly and as needed. My concerns: (1) What is so "bad" about checking Roles for access control? What benefits are gained by checking for permissions instead? In other words, what's the difference between these two snippets below: if(SecurityUtils.hasRole(user)) {
// Grant them access to a feature
}
// vs.
if(SecurityUtils.hasPermission(user)) {
// Grant them access to a feature
} And: (2) In this scenario what useful value do Roles even provide? Couldn't we just assign 1+ Permissions to Users directly? What concrete value of abstraction do Roles offer (can someone give specific examples)? | (1) What is so "bad" about checking Roles for access control? What
benefits are gained by checking for permissions instead? At the moment of checking, the calling code only needs to know "does user X have permission to perform action Y?" . The calling code does not care about and should not be aware of relationships between roles and permissions. The authorization layer will then check if the user has this permission, typically by checking if the user's role has this permission. This allows you to change authorization logic without updating the calling code. If you directly check for role at the call site, you are implicitly forming role ⇄ permission relationships and injecting authorization logic into the calling code, violating separation of concerns. Should you later decide that role foo should not have permission baz , you would have to change every code which checks if the user is a foo . (2) In this scenario what useful value do Roles even provide? Couldn't
we just assign 1+ Permissions to Users directly? What concrete value
of abstraction do Roles offer (can someone give specific examples)? Roles conceptually represent a named collection of permissions. Let's say you are adding a new feature which allows a user to edit certain settings. This feature should be available to administrators only. If you are storing permissions per user, you would have to find all users in your database which you somehow know are administrators (If you're not storing role information for users, how would you even know which users are administrators?) , and append this permission to their list of permissions. If you use roles, you only have to append the permission to the Administrator role, which is both easier to perform, more space efficient, and is less prone to mistakes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299729",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
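A compact Java sketch of the "ask for the permission, not the role" idea from the answer above. The Permission and Role members and the in-memory GRANTS map are assumptions standing in for whatever the real database holds.
import java.util.EnumSet;
import java.util.Map;
import java.util.Set;

enum Permission { VIEW_DASHBOARD, EDIT_SETTINGS }
enum Role { ADMIN, VIEWER }

class AccessControl {
    // The role -> permission mapping lives in exactly one place (in a real system, the database).
    private static final Map<Role, Set<Permission>> GRANTS = Map.of(
            Role.ADMIN, EnumSet.allOf(Permission.class),
            Role.VIEWER, EnumSet.of(Permission.VIEW_DASHBOARD));

    static boolean hasPermission(Set<Role> userRoles, Permission permission) {
        return userRoles.stream().anyMatch(role -> GRANTS.get(role).contains(permission));
    }

    public static void main(String[] args) {
        Set<Role> roles = EnumSet.of(Role.VIEWER);
        // Call sites ask about the permission only; changing what VIEWER may do never touches them.
        System.out.println(hasPermission(roles, Permission.EDIT_SETTINGS));  // false
        System.out.println(hasPermission(roles, Permission.VIEW_DASHBOARD)); // true
    }
}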
299,796 | I find tests a lot trickier and harder to write than the actual code they are testing. It's not unusual for me to spend more time writing the test than the code it is testing. Is that normal or am I doing something wrong? The questions “ Is unit testing or test-driven development worthwhile? ”, “ We are spending more time implementing functional test than implementing the system itself, is this normal? ” and their answers are more about whether testing is worth it (as in "should we skip writing tests altogether?"). While I'm convinced tests are important, I'm wondering if my spending more time on tests than actual code is normal or if it's only me. Judging by the number of views, answers and upvotes my question received, I can only assume its a legitimate concern that isn't addressed in any other question on the website. | I remember from a software engineering course, that one spends ~10% of development time writing new code, and the other 90% is debugging, testing, and documentation. Since unit-tests capture the debugging, and testing effort into (potentially automate-able) code, it would make sense that more effort goes into them; the actual time taken shouldn't be much more than the debugging and testing one would do without writing the tests. Finally tests should also double as documentation! One should write unit-tests in the way the code is intended to be used; i.e. the tests (and usage) should be simple, put the complicated stuff in the implementation. If your tests are hard to write then, the code they test is probably hard to use! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299796",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/199400/"
]
} |
299,825 | I have a medium-sized python program (~5000 lines of code), which I've built up over time, with no particular plan as I went ahead. The architecture I've ended up with consists of 5-6 large Singleton objects, each of which handle a certain aspect of my program, such as database comms, a web-server, data gathering client, and internal calculations. I feel that using several singletons like this doesn't really make use of the true benefit of OO-programming, namely that it easily allows you to create numerous related objects. My singletons are also fairly dependent on each other, and if I grew the program a lot (say ~50 000 lines), I can see my approach becoming spaghetti code. So I'm wondering if there is a better way of structuring my program. E.g. should my singletons actually be separate modules? But then how would I approach the attributes that are very neatly organised in my singleton objects? Are there other architecture options? I'm not very familiar with design patterns (the GOF book is on my todo-list) so perhaps there are design and/or architectural patterns that would be better for my program? | Singletons are considered 'evil' by many, and while a singleton pattern has its uses they are few and far between... I've worked with several large codebases and have pretty much always managed to move them away from singletons. The easiest way to eliminate singletons is: Introduce an interface on top of your singleton : this allows you to separate the contract from the implementation and reduces the coupling you have to the actual implementation Provide a setter on your singleton so you can set the singleton instance from a unittest : this allows you to 'swap out' the singleton instance in a unittest. Create unittests on the class you want to change : using a mock of the interface introduced in 1 and the setter introduced in 2 you can now write unittests on the class. Inject the singleton instead of getting it statically : in the class that depends on your singleton, have a way to inject that singleton (via the interface created in 1). You can use constructor injection, method-based setters.... Whatever works for you. This is the first time you actually touch the class you are refactoring! The tests introduced in 3 will help you refactor without breaking anything You can avoid step 2 if you want to do it 'all in', but this approach is the most pure in the sense that you avoid changing any classes that do not have tests to check that nothing breaks. Personally, I find this to be a bit overkill but YMMV. The basic premise is that you move towards a class that has dependencies injected instead of fetched statically. Also, even IF you have legitimate uses for a singleton you should STILL inject them. Singletons are about ensuring that only a single instance of a given class exists. The classic pattern with a globally-accessible static function achieves this, but unfortunately also moves us away from injecting that instance due to bad example code floating about the web. There are plenty of frameworks out there that can handle dependency injection for you and most of them will have a way to configure an object so that the same instance is reused throughout the application, effectively making this a singleton. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/299825",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/165102/"
]
} |
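The refactoring steps above are language-agnostic; the question's program is Python, but the end state is easiest to show compactly in Java, and the same shape maps directly onto Python classes. Database, ReportService and the composition root are invented names for the sketch.
// The collaborator declares what it needs; the single composition root decides which instance it gets.
interface Database {
    void save(String record);
}

class RealDatabase implements Database {
    @Override
    public void save(String record) {
        System.out.println("saved " + record);
    }
}

class ReportService {
    private final Database database;

    ReportService(Database database) { // injected, so a test can pass a fake implementation
        this.database = database;
    }

    void archive(String report) {
        database.save(report);
    }
}

class CompositionRoot {
    public static void main(String[] args) {
        Database database = new RealDatabase();        // still effectively one instance for the whole app
        new ReportService(database).archive("monthly report");
    }
}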
300,043 | The usual instinct is to remove any code duplication that you see in the code. However, I found myself in a situation where the duplication is illusory . To describe the situation in more details: I am developing a web application, and most views are basically the same - they display a list of items which the user can scroll and choose from, a second list that contains selected items, and a "Save" button to save the new list. It seemed to me that the problem is easy. However, each and every view has its own quirks - sometimes you need to recalculate something, sometimes you must store some additional data etc. These, I solved by inserting callback hooks in the main logic code. There are so many minute differences between the views that it is becoming less and less maintainable, because I need to provide callbacks for basically all functionality, and the main logic starts to look like a huge sequence of callback invocations. In the end I am not saving any time or code, because every view has its own code that is executed - all in callbacks. The problems are: the differences are so minute that the code looks almost exactly alike in all views, there are so many differences that when you look at the details, to code is not a bit alike How should I handle this situation? Is having core logic composed entirely of callback calls a good solution? Or should I rather duplicate the code and drop the complexity of callback-based code? | Ultimately you have to make a judgment call about whether to combine similar code to eliminate duplication. There seems to be an unfortunate tendency to take principles like "Don't repeat yourself" as rules that must be followed by rote at all times. In fact, these are not universal rules but guidelines that should help you think about and develop good design. As everything in life, you must consider the benefits versus the costs. How much duplicated code will be removed? How many times is the code repeated? How much effort will it be to write a more generic design? How much are you likely to develop the code in the future? And so on. Without knowing your specific code, this is unclear. Perhaps there is a more elegant way to remove duplication (such as that suggested by LindaJeanne). Or, perhaps there simply isn't enough true repetition to warrant abstraction. Insufficient attention to design is a pitfall, but also beware over-design. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300043",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200203/"
]
} |
300,080 | Dilemma I've been reading a lot of best practice books about object oriented practices, and almost every book I've read had a part where they say that enums are a code smell. I think they've missed the part where they explain when enums are valid. As such, I am looking for guidelines and/or use-cases where enums are NOT a code smell and in fact a valid construct. Sources: "WARNING As a rule of thumb, enums are code smells and should be refactored
to polymorphic classes. [8]"
Seemann, Mark, Dependency Injection in .Net, 2011, p. 342 [8] Martin Fowler et al., Refactoring: Improving the Design of Existing Code (New York: Addison-Wesley, 1999), 82. Context The cause of my dilemma is a trading API. They give me a stream of Tick data by sending thru this method: void TickPrice(TickType tickType, double value) where enum TickType { BuyPrice, BuyQuantity, LastPrice, LastQuantity, ... } I've tried making a wrapper around this API because breaking changes is the way of life for this API. I wanted to keep track of the value of each last received tick type on my wrapper and I've done that by using a Dictionary of ticktypes: Dictionary<TickType,double> LastValues To me, this seemed like a proper use of an enum if they are used as keys. But I am having second thoughts because I do have a place where I make a decision based on this collection and I can't think of a way how I could eliminate the switch statement, I could use a factory but that factory will still have a switch statement somewhere. It seemed to me that I'm just moving things around but it still smells. It's easy to find the DON'Ts of enums, but the DOs, not that easy, and I'd appreciate it if people can share their expertise, the pros and cons. Second thoughts Some decisions and actions are based on these TickType and I can't seem to think of a way to eliminate enum/switch statements. The cleanest solution I can think of is using a factory and return an implementation based on TickType . Even then I will still have a switch statement that returns an implementation of an interface. Listed below is one of the sample classes where I'm having doubts that I might be using an enum wrong: public class ExecutionSimulator
{
Dictionary<TickType, double> LastReceived;
void ProcessTick(TickType tickType, double value)
{
//Store Last Received TickType value
LastReceived[tickType] = value;
//Perform Order matching only on specific TickTypes
switch(tickType)
{
case BidPrice:
case BidSize:
MatchSellOrders();
break;
case AskPrice:
case AskSize:
MatchBuyOrders();
break;
}
}
} | Enums are intended for use cases when you have literally enumerated every possible value a variable could take. Ever. Think use cases like days of the week or months of the year or config values of a hardware register. Things that are both highly stable and representable by a simple value. Keep in mind, if you're making an anti-corruption layer, you can't avoid having a switch statement somewhere , because of the design you're wrapping, but if you do it right you can limit it to that one place and use polymorphism elsewhere. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300080",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200567/"
]
} |
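One hedged way to keep the enum yet confine the switch-like decision to a single registration point, sketched in Java rather than the question's C#: the TickRouter class, the Handler interface and the specific wiring are assumptions layered on the question's TickType, not the asker's real code.
import java.util.EnumMap;
import java.util.Map;

enum TickType { BID_PRICE, BID_SIZE, ASK_PRICE, ASK_SIZE }

class TickRouter {
    interface Handler {
        void handle(double value);
    }

    private final Map<TickType, Handler> handlers = new EnumMap<>(TickType.class);

    TickRouter(Handler matchSellOrders, Handler matchBuyOrders) {
        // The type-to-behaviour decision is made exactly once, here.
        handlers.put(TickType.BID_PRICE, matchSellOrders);
        handlers.put(TickType.BID_SIZE, matchSellOrders);
        handlers.put(TickType.ASK_PRICE, matchBuyOrders);
        handlers.put(TickType.ASK_SIZE, matchBuyOrders);
    }

    void processTick(TickType type, double value) {
        handlers.getOrDefault(type, v -> { }).handle(value); // unknown tick types are ignored
    }

    public static void main(String[] args) {
        TickRouter router = new TickRouter(
                v -> System.out.println("match sell orders at " + v),
                v -> System.out.println("match buy orders at " + v));
        router.processTick(TickType.BID_PRICE, 101.5);
    }
}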
300,127 | I've been reading up more on the Inversion of Control principle and Dependency Injection as an implementation of it and am pretty sure I understand it. It seems to be basically saying 'don't declare your class members' instantiations within the class'. Rather that the instantiations should be passed in and assigned through the constructor; 'injected' into the class from an outside source. If it's this simple, which it seems to be, why do we need frameworks like spring or guice that implement this with annotations? Am I missing something fundamental here? I'm really struggling to understand what the use of Dependency Injection frameworks are. Edit: About the possible duplicate, I believe my question is more unique as it is asking about DI frameworks in general, not just Spring. Spring is not just a DI framework, so there are many reasons that someone would want to use Spring that aren't related to DI. | We don't need the frameworks. It is entirely possible to implement dependency injection manually, even for a relatively large project. On the other hand, using a framework makes it easier, particularly one based on annotations or automatic detection of dependencies, as it makes the process simpler: if I decide that I need a new dependency in a class, all I have to do is add it to the constructor (or declare a setter) and the object is injected - I don't need to change any instantiation code. Also, frameworks often contain other useful functionality. Spring, for example, contains a useful framework for aspect oriented programing, including declarative transaction demarcation (which is extremely handy) and a variety of adapter implementations that make many 3rd party libraries easier to integrate. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300127",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200661/"
]
} |
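A sketch of the manual alternative the answer mentions: constructor injection wired by hand in one composition root, sometimes called "pure DI". EmailSender and SignupService are made-up examples of what a container would otherwise instantiate for you.
// "Pure DI": the object graph is wired by hand in one place, with no container or annotations.
class EmailSender {
    void send(String to, String body) {
        System.out.println("to " + to + ": " + body);
    }
}

class SignupService {
    private final EmailSender emailSender;

    SignupService(EmailSender emailSender) {
        this.emailSender = emailSender;
    }

    void register(String address) {
        emailSender.send(address, "welcome");
    }
}

class ManualWiring {
    public static void main(String[] args) {
        // Adding a dependency to SignupService means editing this one spot -
        // the bookkeeping a container such as Spring or Guice automates.
        SignupService service = new SignupService(new EmailSender());
        service.register("user@example.com");
    }
}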
300,373 | I have a client that is looking to get a website/mobile apps/desktop apps built that deal with very sensitive data (more sensitive than bank/card details). Because of the sensitive nature of the data, they do not want to save it in a central database but they still want their apps to synchronise (let's say I add some data into my mobile app, I then want to be able to go to my desktop app and see the same data). I cannot think of a nice, reliable way of doing this and I am not sure there is one. Which is why I am here. Does anyone know how I could deal with this data? One solution I was thinking about was to have a client side database on each app that would somehow sync between apps, I can see this being very unreliable and getting messy though. | Plenty of sensitive information gets stored in databases. In fact, a central database is probably the most secure way to store this data. Large enterprise databases have tons of functionality to do things like encrypt sensitive information, to audit who accesses it, to limit or prevent people including DBAs from viewing the data, etc. You can have professional security experts monitoring the environment and professional DBAs overseeing backups so that you don't lose data. It would almost certainly be much easier to compromise data stored on some random user's mobile device or laptop than to penetrate a well designed security infrastructure and compromise a proper central database. You could design the system with a central database that stores only encrypted data and store the user's private key on the user's device. That way even if the central database is completely compromised, the data is usable only by the user. Of course, that means that you can't restore the user's data if they lose their key (say the only copy was on their phone and their phone was damaged). And if someone compromises the key and, presumably, their login credentials, they would be able to see the data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200999/"
]
} |
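A minimal Java sketch of the "central database stores only ciphertext, the key stays with the user" option from the answer; key storage, rotation and backup are deliberately left out, and the record text is a placeholder.
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

class ClientSideEncryptionDemo {
    public static void main(String[] args) throws Exception {
        // The key would live only on the user's device; the server sees ciphertext plus the IV.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();

        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal("very sensitive record".getBytes(StandardCharsets.UTF_8));

        System.out.println("store centrally: " + ciphertext.length + " encrypted bytes + 12-byte IV");
        // Losing the device-held key means losing the data - exactly the trade-off noted above.
    }
}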
300,381 | I am having some trouble writing a variant of sub-string search. Essentially the goal is to write a method that can perform sub-string search except that the source data is in an array of Strings rather the one String. I have looked around and can't find anyone who has managed to solve this elegantly. Consider some input data such as: final List<String> source = new ArrayList<String>();
source.add("abc");
source.add("def");
source.add("ghi");
source.add("jkl");
source.add("mnop"); Now let's say I want to write a method that can return a Pair of the first location of where the target String appears. This Pair represents the first index of the String in the source array where the target appears and its index within that String where the target starts. Examples with 0 based indices: subStringArray(source, "def"); //returns Pair(1,0) - 2nd string - 1st index
subStringArray(source, "ef"); //returns Pair(1,1) - 2nd string - 2nd index
subStringArray(source, "fgh"); //returns Pair(1,2) - 2nd string - 3rd index
subStringArray(source, "hijklmno"); //returns Pair(2, 1) - 3rd string - 2nd index
subStringArray(source, "abcf"); //returns null or Pair(-1,-1); I know it would involve three for loops but I'm not sure how to handle the edge cases, i.e where the target String takes up multiple Strings in the source array. I should note that I can't allocate more memory. | Plenty of sensitive information gets stored in databases. In fact, a central database is probably the most secure way to store this data. Large enterprise databases have tons of functionality to do things like encrypt sensitive information, to audit who accesses it, to limit or prevent people including DBAs from viewing the data, etc. You can have professional security experts monitoring the environment and professional DBAs overseeing backups so that you don't lose data. It would almost certainly be much easier to compromise data stored on some random user's mobile device or laptop than to penetrate a well designed security infrastructure and compromise a proper central database. You could design the system with a central database that stores only encrypted data and store the user's private key on the user's device. That way even if the central database is completely compromised, the data is usable only by the user. Of course, that means that you can't restore the user's data if they lose their key (say the only copy was on their phone and their phone was damaged). And if someone compromises the key and, presumably, their login credentials, they would be able to see the data. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300381",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/79037/"
]
} |
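For reference, a straightforward sketch of the cross-boundary search the question above describes, treating the list of strings as one logical string and allocating no extra buffers; the helper names are illustrative, and a production version would likely want a smarter algorithm than this simple scan.
import java.util.Arrays;
import java.util.List;

class SubStringArraySearch {
    // Returns {stringIndex, offsetInThatString} for the first match, or null if absent.
    static int[] subStringArray(List<String> source, String target) {
        int total = source.stream().mapToInt(String::length).sum();
        for (int start = 0; start + target.length() <= total; start++) {
            if (matchesAt(source, start, target)) {
                return locate(source, start);
            }
        }
        return null;
    }

    private static boolean matchesAt(List<String> source, int start, String target) {
        for (int i = 0; i < target.length(); i++) {
            if (charAt(source, start + i) != target.charAt(i)) {
                return false;
            }
        }
        return true;
    }

    private static char charAt(List<String> source, int index) {
        for (String s : source) {
            if (index < s.length()) {
                return s.charAt(index);
            }
            index -= s.length();
        }
        throw new IndexOutOfBoundsException();
    }

    private static int[] locate(List<String> source, int index) {
        for (int i = 0; i < source.size(); i++) {
            if (index < source.get(i).length()) {
                return new int[] {i, index};
            }
            index -= source.get(i).length();
        }
        throw new IndexOutOfBoundsException();
    }

    public static void main(String[] args) {
        List<String> source = Arrays.asList("abc", "def", "ghi", "jkl", "mnop");
        System.out.println(Arrays.toString(subStringArray(source, "hijklmno"))); // [2, 1]
        System.out.println(Arrays.toString(subStringArray(source, "abcf")));     // null
    }
}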
300,593 | I study the topics of compilers and interpreters intensively. I want to check if my base understanding is right, so let's assume the following: I have a language called "Foobish" and its keywords are <OUTPUT> 'TEXT', <Number_of_Repeats>; So if I want to print to the console 10 times, I would write OUTPUT 'Hello World', 10; Hello World.foobish-file. Now I write an interpreter in the language of my choice - C# in this case: using System;
namespace FoobishInterpreter
{
internal class Program
{
private static void Main(string[] args)
{
analyseAndTokenize("Hello World.foobish-file"); // pseudocode: analyse the script and fill the Token array
int repeats = int.Parse(Token[1]);
string outputString = Token[0];
for (var i = 0; i < repeats; i++)
{
Console.WriteLine(outputString);
}
}
}
} On a very easy interpreter level, the interpreter would analyze the script-file, etc. and execute the foobish-language in the way of the interpreter's implementation. Would a compiler create machine language which runs on the physical hardware directly? So an interpreter doesn't produce machine language, but does a compiler do it for its input? Do I have any misunderstandings in the basic way how compilers and interpreters work? | The terms "interpreter" and "compiler" are much more fuzzy than they used to be. Many years ago it was more common for compilers to produce machine code to be executed later, while interpreters more or less "executed" the source code directly. So those two terms were well understood back then. But today there are many variations on the use of "compiler" and "interpreter." For example, VB6 "compiles" to byte code (a form of Intermediate Language ), which is then "interpreted" by the VB Runtime. A similar process takes place in C#, which produces CIL that is then executed by a Just-In-Time Compiler (JIT) which, in the old days, would have been thought of as an interpreter. You can "freeze-dry" the output of the JIT into an actual binary executable by using NGen.exe , the product of which would have been the result of a compiler in the old days. So the answer to your question is not nearly as straightforward as it once was. Further Reading Compilers vs. Interpreters on Wikipedia | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/199646/"
]
} |
300,606 | Our current queues publish messages that consumed by 3rd party services with rate limits. Currently the messages are retried with exponential back-off. However there could be cases where data is coming in so fast that the retries will never catch up. Most of the 3rd party services offer alternative batch imports, and the solution I've come up with so far is to write the data to file(s) to be processed out of band. Are there any design patterns for storing overflowing data? | The terms "interpreter" and "compiler" are much more fuzzy than they used to be. Many years ago it was more common for compilers to produce machine code to be executed later, while interpreters more or less "executed" the source code directly. So those two terms were well understood back then. But today there are many variations on the use of "compiler" and "interpreter." For example, VB6 "compiles" to byte code (a form of Intermediate Language ), which is then "interpreted" by the VB Runtime. A similar process takes place in C#, which produces CIL that is then executed by a Just-In-Time Compiler (JIT) which, in the old days, would have been thought of as an interpreter. You can "freeze-dry" the output of the JIT into an actual binary executable by using NGen.exe , the product of which would have been the result of a compiler in the old days. So the answer to your question is not nearly as straightforward as it once was. Further Reading Compilers vs. Interpreters on Wikipedia | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300606",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34513/"
]
} |
300,634 | So I need a bootloader for an avr chip and after some research, I selected this one . License OpenSource, base on GPL license. It needed minor modifications to compile without the dedicated AvrStudio. Namely: conformance to c99 writing a makefile custom configuration for my PCB and chip. I would like to push the modified project to GitHub, mainly as means to archive it in case of hard disk failure. The bootloader is going to be a part of my final year project, which I would like to publish as MIT/BSD/any-non-copyleft license. The project seems to have been abandoned during 2012 and the author is not returning my emails. After this background, my question. What is supposed to be the content of the LICENSE file, that GitHub recommends? Should I put a large "I DO NOT OWN THIS CODE", should I select a GPL version or do I have more options? And another question. How is my project supposed to be called? Do I have the right to name it the same as the original? Do I have the right to change the name? Related, but the original project is MIT: Licensing on forked projects | The terms "interpreter" and "compiler" are much more fuzzy than they used to be. Many years ago it was more common for compilers to produce machine code to be executed later, while interpreters more or less "executed" the source code directly. So those two terms were well understood back then. But today there are many variations on the use of "compiler" and "interpreter." For example, VB6 "compiles" to byte code (a form of Intermediate Language ), which is then "interpreted" by the VB Runtime. A similar process takes place in C#, which produces CIL that is then executed by a Just-In-Time Compiler (JIT) which, in the old days, would have been thought of as an interpreter. You can "freeze-dry" the output of the JIT into an actual binary executable by using NGen.exe , the product of which would have been the result of a compiler in the old days. So the answer to your question is not nearly as straightforward as it once was. Further Reading Compilers vs. Interpreters on Wikipedia | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300634",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |
300,649 | We have a new piece of functionality coming up. It comprises of 9 different user stories, covering the functions needed. Three of these are listed below: As a user I should be able to add a person so that... As a user I should be able to edit a person so that... As a user I should be able to delete a person so that... These three issues make out about 20 sp's each. The challenge is that these three issues are in practice treated as one issue. So, instead of having a nice burndown with 20 sp's burned on Tuesday, 20 sp's burned on Thursday and 20 on the following Monday, I get to see 60 sp's burned on the last day (e.g. Monday). In theory, (I believe) this is wrong. Each issue implemented should be "atomic" in the way that you can deploy this in production once the issue is done. This makes sense, and in my scenario above, you can deploy "add" to production without having edit and delete. However, from a technical perspective, it makes sense to "combine" the implementation of the three. Or at least add and edit. They share a lot of logic, and you might have to "pay more" if you first complete "add" and then start on "edit" the day after. Question is if we should continue as this, and have a rotten burndown in these cases, or if we should allow for more overhead and focus on the theoretical part - e.g. "atomic" issues. Or should we consider something else? | The terms "interpreter" and "compiler" are much more fuzzy than they used to be. Many years ago it was more common for compilers to produce machine code to be executed later, while interpreters more or less "executed" the source code directly. So those two terms were well understood back then. But today there are many variations on the use of "compiler" and "interpreter." For example, VB6 "compiles" to byte code (a form of Intermediate Language ), which is then "interpreted" by the VB Runtime. A similar process takes place in C#, which produces CIL that is then executed by a Just-In-Time Compiler (JIT) which, in the old days, would have been thought of as an interpreter. You can "freeze-dry" the output of the JIT into an actual binary executable by using NGen.exe , the product of which would have been the result of a compiler in the old days. So the answer to your question is not nearly as straightforward as it once was. Further Reading Compilers vs. Interpreters on Wikipedia | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300649",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/201384/"
]
} |
300,682 | Since the purity of an input parameter is an unknown until runtime, is a function immediately considered impure if it takes a function as an input parameter? Related: if a function applies a pure function that is defined outside of the function, but is not passed in as a parameter, is it still pure if it fulfills the criteria of having no side effects and output depends solely on input? For context, I'm writing functional code in JavaScript. | As long as all values used in the function are defined solely by its parameters, it's a pure function. The facet that output is the same each time for the same input is controlled by whether the parameters are pure. If you assume the parameters (like a function argument) are also pure, then it is pure. In a language like Javascript where purity isn't enforced, this means that it's possible to make an otherwise pure function have impure behavior by invoking an impure function passed as a parameter. This effectively means that for languages that don't enforce purity (ie almost all), it's impossible to define a pure function which invokes functions passed as arguments. It's still useful to write them as pure as possible, and to reason about them as pure functions, but you have to exercise caution because the assumption that it's pure will be broken if you pass in the wrong arguments. In my experience in practice this isn't usually a big deal - I find it rare to have impure functions be used as function arguments to pure functions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300682",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9702/"
]
} |
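A small illustration of the point above, in Java rather than the question's JavaScript: applyTwice is written as a pure function, but passing an impure operator breaks the guarantee. The names and the counter trick are invented for the demo.
import java.util.function.IntUnaryOperator;

class PurityDemo {
    // Pure as long as the operator passed in is pure: the result depends only on the arguments.
    static int applyTwice(IntUnaryOperator f, int x) {
        return f.applyAsInt(f.applyAsInt(x));
    }

    public static void main(String[] args) {
        System.out.println(applyTwice(n -> n + 3, 1));     // always 7: pure argument, pure call

        int[] counter = {0};
        IntUnaryOperator impure = n -> n + (++counter[0]); // reads and writes external state
        System.out.println(applyTwice(impure, 1));         // 4 this time...
        System.out.println(applyTwice(impure, 1));         // ...8 the next, so the purity is gone
    }
}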
300,701 | Must all the controllers, models and views be placed in the private folders? If so, what are the roles of the public folder? What are the correct terminologies for such roles in computer science? How does the GUI relate with such concepts? Otherwise, does the controllers and/or views have to be distributed across the private and public folders? In that case, the scripts that are directly invoked via POST, are just interfaces, or controller interfaces, or view interfaces? | As long as all values used in the function are defined solely by its parameters, it's a pure function. The facet that output is the same each time for the same input is controlled by whether the parameters are pure. If you assume the parameters (like a function argument) are also pure, then it is pure. In a language like Javascript where purity isn't enforced, this means that it's possible to make an otherwise pure function have impure behavior by invoking an impure function passed as a parameter. This effectively means that for languages that don't enforce purity (ie almost all), it's impossible to define a pure function which invokes functions passed as arguments. It's still useful to write them as pure as possible, and to reason about them as pure functions, but you have to exercise caution because the assumption that it's pure will be broken if you pass in the wrong arguments. In my experience in practice this isn't usually a big deal - I find it rare to have impure functions be used as function arguments to pure functions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300701",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/161954/"
]
} |
300,706 | I know this is a hot debate and the opinions tend to change over time as to the best approach practice. I used to use exclusively field injection for my classes, until I started reading up on different blogs (exs: petrikainulainen and schauderhaft and fowler ) about the benefits of constructor injection. I have since switched my methodologies to use constructor injection for required dependencies and setter injection for optional dependencies. However, I have recently gotten into a debate with the author of JMockit - a mocking framework - in which he sees constructor & setter injection as bad practice and indicates that the JEE community agrees with him. In today's world, is there a preferred way of doing injection? Is Field injection preferred? Having switched to constructor inject from field injection in the last couple of years, I find it a lot clearer to use, but I am wondering if I should re-examine my viewpoint. The author of JMockit (Rogério Liesenfeld) , is clearly well versed in DI, so I feel obligated to review my approach given that he feels so strongly against constructor/setter injection. | Field injection is a bit too "spooky action at a distance" for my taste. Consider the example you provided in your Google Groups post: public class VeracodeServiceImplTest {
@Tested(fullyInitialized=true)
VeracodeServiceImpl veracodeService;
@Tested(fullyInitialized=true, availableDuringSetup=true)
VeracodeRepositoryImpl veracodeRepository;
@Injectable private ResultsAPIWrapper resultsApiWrapper;
@Injectable private AdminAPIWrapper adminApiWrapper;
@Injectable private UploadAPIWrapper uploadApiWrapper;
@Injectable private MitigationAPIWrapper mitigationApiWrapper;
static { VeracodeRepositoryImpl.class.getName(); }
...
} So basically what you are saying is that "I have this class with private state, to which I have attached @injectable annotations, which means that the state can be automagically populated by some agent from the outside, even though my state has all been declared private. " I understand the motivations for this. It is an attempt to avoid much of the ceremony that is inherent in setting up a class properly. Basically, what one is saying is that "I'm tired of writing all of this boilerplate, so I'm just going to annotate all of my state, and let the DI container take care of setting it for me." It's a perfectly valid point of view. But it's also a workaround for language features that arguably should not be worked around. Also, why stop there? Traditionally, DI has relied on each class having a companion Interface. Why not eliminate all those interfaces with annotations as well? Consider the alternative (this is going to be C#, because I know it better, but there's probably an exact equivalent in Java): public class VeracodeService
{
private readonly IResultsAPIWrapper _resultsApiWrapper;
private readonly IAdminAPIWrapper _adminApiWrapper;
private readonly IUploadAPIWrapper _uploadApiWrapper;
private readonly IMitigationAPIWrapper _mitigationApiWrapper;
// Constructor
public VeracodeService(IResultsAPIWrapper resultsApiWrapper, IAdminAPIWrapper adminApiWrapper, IUploadAPIWrapper uploadApiWrapper, IMitigationAPIWrapper mitigationApiWrapper)
{
_resultsApiWrapper = resultsApiWrapper;
_adminApiWrapper = adminApiWrapper;
_uploadApiWrapper = uploadApiWrapper;
_mitigationApiWrapper = mitigationApiWrapper;
}
} Already I know some things about this class. It's an immutable class; state can only be set in the constructor (references, in this particular case). And because everything derives from an interface, I can swap out implementations in the constructor, which is where your mocks come in. Now all my DI container has to do is reflect over the constructor to determine what objects it needs to new up. But that reflection is being done on a public member, in a first-class way; i.e. the metadata is already part of the class, having been declared in the constructor, a method whose express purpose is to provide the class with the dependencies it needs. Granted this is a lot of boilerplate, but this is how the language was designed. Annotations seem like a dirty hack for something that should have been built into the language itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300706",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/201440/"
]
} |
300,842 | How should a product owner in scrum deal with very detailed questions from the team regarding the features they are implementing that he cannot instantly answer himself? When it would clearly be the faster solution for the developer to directly talk to the customer himself? I wonder if direct communication between the team and the customer undermines the role of the product owner. I feel like the PO should exclusively represent the customer and therefore answer all questions regarding the requirements - even if that takes longer. Bypassing him seems to weaken him and eventually make him superfluous... Is there a best practice in scrum? | It is always a good idea (especially in so-called Agile projects) not to stick to some cargo cult or text book telling you "who should (not) talk to whom", but switch on your brain and do whatever works best in a project. Though the communication between PO and the customer should be the standard (because of the reasons scetched by @PatrickHughes in his comment), you may face a situation where a complex business requirement has to be clarified, and the direct communication between a dev and a business expert will speed up things a lot. In such a situation, one should avoid playing "chinese whisper" with the PO in the middle, and let the dev and the business expert directly talk to each other - for this restricted context. However, the PO should never be bypassed. Ideally, he takes part in that conversation, probably as a moderator. He can verify the customer does not bring up completly new requirements on the table during the talk, or requirements contrary to what was agreed upon before. This depends also on the people involved, and the situation. The PO might have enough trust in the specific dev and the customer's expert, to let the two talk alone about a specific topic, and let him or her report what was said afterwards. In another situation, with other people involved, he might prefer to take a more active part. To get this decisions right is the core of good project management. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/300842",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/201612/"
]
} |
301,015 | Can you think of any specific reason why deletion is usually significantly harder to implement than insertion for many (most?) data structures? Quick example: linked lists. Insertion is trivial, but deletion has a few special cases that make it significantly harder. Self-balancing binary search trees such as AVL and Red-black are classic examples of painful delete implementation. I would like to say it has to do with the way most people think: it is easier for us to define things constructively, which leads nicely to easy insertions. | It's more than just a state of mind; there are physical (i.e. digital) reasons why deletion is harder. When you delete, you leave a hole where something used to be. The technical term for the resulting entropy is "fragmentation." In a linked list, this requires you to "patch around" the removed node and deallocate the memory it is using. In binary trees, it causes unbalancing of the tree. In memory systems, it causes memory to go unused for awhile if newly-allocated blocks are larger than the blocks left behind by deletion. In short, insertion is easier because you get to choose where you are going to insert. Deletion is harder because you can't predict in advance which item is going to get deleted. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301015",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/197349/"
]
} |
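A short Java sketch of the "patch around the hole" point above: prepending to a singly linked list has no special cases, while deletion has to handle the empty list, the head, and a missing value. The Node class and method names are illustrative.
// Singly linked list: prepending is one pointer write; deleting a value means finding the
// predecessor first so it can be patched around the removed node.
class LinkedListDemo {
    static final class Node {
        int value;
        Node next;
        Node(int value, Node next) { this.value = value; this.next = next; }
    }

    static Node insertFront(Node head, int value) {
        return new Node(value, head);               // no special cases at all
    }

    static Node delete(Node head, int value) {
        if (head == null) return null;              // edge case: empty list
        if (head.value == value) return head.next;  // edge case: deleting the head
        Node prev = head;
        while (prev.next != null && prev.next.value != value) {
            prev = prev.next;
        }
        if (prev.next != null) {
            prev.next = prev.next.next;             // patch around the removed node
        }
        return head;
    }

    public static void main(String[] args) {
        Node head = insertFront(insertFront(insertFront(null, 3), 2), 1); // 1 -> 2 -> 3
        head = delete(head, 2);                                           // 1 -> 3
        for (Node n = head; n != null; n = n.next) System.out.println(n.value);
    }
}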
301,086 | I've read this article: How to Write an Equality Method in Java . Basically, it provides a solution for an equals() method that supports inheritance: Point2D twoD = new Point2D(10, 20);
Point3D threeD = new Point3D(10, 20, 50);
twoD.equals(threeD); // true
threeD.equals(twoD); // true But is it a good idea? These two instances appear to be equal but may have two different hash codes. Isn't that a bit wrong? I believe this would be better achieved by casting the operands instead. | This should not be equality because it breaks transitivity . Consider these two expressions: new Point3D(10, 20, 50).equals(new Point2D(10, 20)) // true
new Point2D(10, 20).equals(new Point3D(10, 20, 60)) // true Since equality is transitive, this should mean that the following expression is also true: new Point3D(10, 20, 50).equals(new Point3D(10, 20, 60)) But of course - it isn't. So, your idea of casting is correct - except that in Java, casting simply means casting the type of the reference. What you really want here is a conversion method that'll create a new Point2D object from a Point3D object. This would also make the expression more meaningful: twoD.equals(threeD.projectXY()) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301086",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/165064/"
]
} |
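A minimal sketch of the conversion approach suggested in the answer above, written with Java records (16+) purely for brevity; the record-based implementation is an assumption, and only the projectXY() idea comes from the answer itself.

```java
// Equality stays within one type; Point3D offers an explicit projection instead.
record Point2D(int x, int y) { }            // value-based equals/hashCode generated

record Point3D(int x, int y, int z) {
    Point2D projectXY() {                    // conversion, not cross-type equality
        return new Point2D(x, y);
    }
}

class Demo {
    public static void main(String[] args) {
        Point2D twoD = new Point2D(10, 20);
        Point3D threeD = new Point3D(10, 20, 50);
        System.out.println(twoD.equals(threeD));             // false: different types
        System.out.println(twoD.equals(threeD.projectXY())); // true: compare like with like
    }
}
```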
301,114 | The plus sign + is used for addition and for string concatenation, but its companion: the minus sign, - , is generally not seen for trimming of strings or some other case other than subtraction. What could be the reason or limitations for that? Consider the following example in JavaScript: var a = "abcdefg";
var b = "efg";
a-b == NaN
// but
a+b == "abcdefgefg" | In short, there aren’t any particularly useful subtraction-like operations on strings that people have wanted to write algorithms with. The + operator generally denotes the operation of an additive monoid , that is, an associative operation with an identity element: A + (B + C) = (A + B) + C A + 0 = 0 + A = A It makes sense to use this operator for things like integer addition, string concatenation, and set union because they all have the same algebraic structure: 1 + (2 + 3) == (1 + 2) + 3
1 + 0 == 0 + 1 == 1
"a" + ("b" + "c") == ("a" + "b") + "c"
"a" + "" == "" + "a" == "a" And we can use it to write handy algorithms like a concat function that works on a sequence of any “concatenable” things, e.g.: def concat(sequence):
return sequence.reduce(+, 0) When subtraction - gets involved, you usually talk about the structure of a group , which adds an inverse −A for every element A, so that: A + −A = −A + A = 0 And while this makes sense for things like integer and floating-point subtraction, or even set difference, it doesn’t make so much sense for strings and lists. What is the inverse of "foo" ? There is a structure called a cancellative monoid , which doesn’t have inverses, but does have the cancellation property, so that: A − A = 0 A − 0 = A (A + B) − B = A This is the structure you describe, where "ab" - "b" == "a" , but "ab" - "c" is not defined. It’s just that we don’t have many useful algorithms that use this structure. I guess if you think of concatenation as serialisation, then subtraction could be used for some kind of parsing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301114",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/105798/"
]
} |
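As a rough Java illustration of the monoid point made above (the stream-based code is an assumption, not part of the answer): the same fold works for both integer addition and string concatenation because each pairs an associative operation with an identity element, while no analogous fold exists for string subtraction.

```java
import java.util.List;

class MonoidDemo {
    public static void main(String[] args) {
        // reduce(identity, associativeOperation) — the "additive monoid" shape.
        int sum = List.of(1, 2, 3).stream().reduce(0, Integer::sum);
        String joined = List.of("a", "b", "c").stream().reduce("", String::concat);
        System.out.println(sum);     // 6
        System.out.println(joined);  // "abc"
        // There is no comparable reduce for string subtraction, because "foo"
        // has no additive inverse — which is the point made above.
    }
}
```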
301,400 | Git internally stores objects (blobs, trees) in the .git/objects/ folder. Each object can be referenced by a SHA1 hash that is computed from the contents of the object. However, objects are not stored inside the .git/objects/ folder directly. Instead, each object is stored inside a folder that starts with the prefix of its SHA1 hash. So an object with the hash b7e23ec29af22b0b4e41da31e868d57226121c84 would be stored at .git/objects/b7/e23ec29af22b0b4e41da31e868d57226121c84 Why does Git subdivide its object storage this way? The resources I could find, such as the page on Git's internals on git-scm, only explained how, not why. | It is possible to put all the files in one directory, though sometimes that can become a bit large. Many file systems have a limit . You want to put a git repository on a FAT32 formatted drive on a USB stick? You can only store 65,535 files in a single directory. This means that it is necessary to subdivide the directory structure so that filling a single directory is less likely. This would even become a problem with other file systems and larger git repositories. A relatively small git repo that I've got hanging around (about 360 MiB) has 181,546 objects for 11k files. Pull the Linux repo and you've got 4,374,054 objects. If you were to put these all in one directory, it would be impossible to check out and would crash (for some meaning of 'crash') the file system. So? You split it up by byte. Similar approaches are done with applications such as Firefox: ~/Li/Ca/Fi/Pr/7a/Cache $ ls
0/ 4/ 8/ C/ _CACHE_001_
1/ 5/ 9/ D/ _CACHE_002_
2/ 6/ A/ E/ _CACHE_003_
3/ 7/ B/ F/ _CACHE_MAP_ Beyond this, it also goes to a question of performance. Consider NTFS Performance with Numerous Long Filenames : Windows NT takes a long time to perform directory operations on Windows NT file system (NTFS) formatted drives that contain a large number of files with long file names (names that do not conform to the 8.3 convention) in a single directory. When NTFS enumerates files in a directory, it has to look up the 8.3 names associated with the long file names. Because an NTFS directory is maintained in a sorted state, corresponding long file names and 8.3 names are generally not next to one another in the directory listing. So, NTFS uses a linear search of the directory for every file present. As a result, the amount of time required to perform a directory listing increases with the square of the number of files in the directory. For small numbers of files (less than a few hundred) the time delay is negligible. But as the number of files in a directory increases to several thousand, the time required to perform a listing can increase to minutes, hours, or even days. The problem is aggravated if the long file names are very similar -- differing only in the last few characters. With files named after SHA1 checksums, this could be a recipe for disaster and abysmal performance. While the above is from a tech note from Windows NT 3.5 (and NTFS 1.2 - commonly used from 1995 to the early 2000s) this can also be seen in things such as EXT3 with implementations of the filesystem being linked lists requiring O(n) lookup. And even with that B-tree change: While the HTree algorithm significantly improved lookup times, it could cause some performance regressions for workloads that used readdir() to perform some operation of all of the files in a large directory. ... One potential solution to mitigate this performance issue, which has been suggested by Daniel Phillips and Andreas Dilger, but not yet implemented, involves the kernel choosing free inodes whose inode numbers meet a property that groups the inodes by their filename hash. Daniel and Andreas suggest allocating the inode from a range of inodes based on the size of the directory, and then choosing a free inode from that range based on the filename hash. This should in theory reduce the amount of thrashing that results when accessing the inodes referenced in the directory in readdir order. In it is not clear that this strategy will result in a speedup, however; in fact it could increase the total number of inode blocks that might have to be referenced, and thus make the performance of readdir() + stat() workloads worse. Clearly, some experimentation and further analysis is still needed. Incidentally, this bit on how to improve performance was from 2005, the same year git was released. As seen with Firefox and many other applications that have lots of hash cached files, the design of splitting up the cache by byte. It has negligible performance cost, and when used cross platform with systems that may be a bit on the old side, could very well be the difference between the program working or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301400",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41643/"
]
} |
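A small sketch of the fan-out scheme described above, computing where a loose object would live. The helper is hypothetical and only mirrors the path layout shown in the question; packfiles, which Git also uses, are a separate mechanism.

```java
class GitObjectPath {
    // First two hex characters become the directory, the remaining 38 the file name.
    static String objectPath(String sha1) {
        if (sha1.length() != 40) throw new IllegalArgumentException("expected 40 hex chars");
        return ".git/objects/" + sha1.substring(0, 2) + "/" + sha1.substring(2);
    }

    public static void main(String[] args) {
        System.out.println(objectPath("b7e23ec29af22b0b4e41da31e868d57226121c84"));
        // .git/objects/b7/e23ec29af22b0b4e41da31e868d57226121c84
        // 256 possible prefixes keep each directory to roughly 1/256 of all objects.
    }
}
```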
301,479 | Some people maintain that integration tests are all kinds of bad and wrong - everything must be unit-tested, which means you have to mock dependencies; an option which, for various reasons, I'm not always fond of. I find that, in some cases, a unit-test simply doesn't prove anything. Let's take the following (trivial, naive) repository implementation (in PHP) as an example: class ProductRepository
{
private $db;
public function __construct(ConnectionInterface $db) {
$this->db = $db;
}
public function findByKeyword($keyword) {
// this might have a query builder, keyword processing, etc. - this is
// a totally naive example just to illustrate the DB dependency, mkay?
return $this->db->fetch("SELECT * FROM products p"
. " WHERE p.name LIKE :keyword", ['keyword' => $keyword]);
}
} Let's say I want to prove in a test that this repository can actually find products matching various given keywords. Short of integration testing with a real connection object, how can I know that this is actually generating real queries - and that those queries actually do what I think they do? If I have to mock the connection object in a unit-test, I can only prove things like "it generates the expected query" - but that doesn't mean it's actually going to work ... that is, maybe it's generating the query I expected, but maybe that query doesn't do what I think it does. In other words, I feel like a test that makes assertions about the generated query, is essentially without value, because it's testing how the findByKeyword() method was implemented , but that doesn't prove that it actually works . This problem isn't limited to repositories or database integration - it seems to apply in a lot of cases, where making assertions about the use of a mock (test-double) only proves how things are implemented, not whether they're going to actually work. How do you deal with situations like these? Are integration tests really "bad" in a case like this? I get the point that it's better to test one thing, and I also understand why integration testing leads to myriad code-paths, all of which cannot be tested - but in the case of a service (such as a repository) whose only purpose is to interact with another component, how can you really test anything without integration testing? | Write the smallest useful test you can. For this particular case, an in-memory database might help with that. It is generally true that everything that can be unit-tested should be unit-tested, and you're right that unit tests will take you only so far and no further—particularly when writing simple wrappers around complex external services. A common way of thinking about testing is as a testing pyramid . It's a concept frequently connected with Agile, and many have written about it, including Martin Fowler (who attributes it to Mike Cohn in Succeeding with Agile ), Alistair Scott , and the Google Testing Blog . /\ --------------
/ \ UI / End-to-End \ /
/----\ \--------/
/ \ Integration/System \ /
/--------\ \----/
/ \ Unit \ /
-------------- \/
Pyramid (good) Ice cream cone (bad) The notion is that fast-running, resilient unit tests are the foundation of the testing process. There should be more focused unit tests than system/integration tests, and more system/integration tests than end-to-end tests. As you get closer to the top, tests tend to take more time/resources to run, tend to be subject to more brittleness and flakiness, and are less-specific in identifying which system or file is broken ; naturally, it's preferable to avoid being "top-heavy". To that point, integration tests aren't bad , but heavy reliance on them may indicate that you haven't designed your individual components to be easy to test. Remember, the goal here is to test that your unit is performing to its spec while involving a minimum of other breakable systems : You may want to try an in-memory database (which I count as a unit-test-friendly test double alongside mocks) for heavy edge-case testing, for instance, and then write a couple of integration tests with the real database engine to establish that the main cases work when the system is assembled. As you noted, it's possible for tests to be too narrow: you mentioned that the mocks you write simply test how something is implemented, not whether it works . That's something of an antipattern: A test that is a perfect mirror of its implementation isn't really testing anything at all. Instead, test that every class or method behaves according to its own spec , at whatever level of abstraction or realism that requires. In that sense your method's spec might be one of the following: Issue some arbitrary SQL or RPC and return the results exactly
(mock-friendly, but doesn't actually test the query you care about) Issue exactly the SQL query or RPC and return the results exactly
(mock-friendly, but brittle, and assumes SQL is OK without testing it) Issue an SQL command to a similar database engine and check that it
returns the right results (in-memory-database-friendly,
probably the best solution on balance) Issue an SQL command to a staging copy of your exact DB engine
and check that it returns the right results
(probably a good integration test, but may be prone to infrastructure
flakiness or difficult-to-pinpoint errors) Issue an SQL command to your real production DB engine and check that
it returns the right results
(may be useful to check deployed behavior, same issues as #4 plus
the dangers of modifying production data or overwhelming your server) Use your judgment: Pick the quickest and most resilient solution that will fail when you need it to and give you confidence that your solution is correct. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48470/"
]
} |
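As a rough illustration of the "in-memory database" suggestion in the answer above — translated to Java/JDBC rather than PHP, and assuming the H2 driver is on the classpath; the table, data and query are invented stand-ins for the repository in the question.

```java
import java.sql.*;

class FindByKeywordTest {
    public static void main(String[] args) throws SQLException {
        // An in-memory database: a real SQL engine, but no external server needed.
        try (Connection db = DriverManager.getConnection("jdbc:h2:mem:test")) {
            try (Statement s = db.createStatement()) {
                s.execute("CREATE TABLE products (name VARCHAR(100))");
                s.execute("INSERT INTO products VALUES ('red widget'), ('blue gadget')");
            }
            try (PreparedStatement q = db.prepareStatement(
                    "SELECT name FROM products p WHERE p.name LIKE ?")) {
                q.setString(1, "%widget%");
                try (ResultSet rs = q.executeQuery()) {
                    rs.next();
                    // The query actually ran, yet the test stays fast and repeatable.
                    System.out.println(rs.getString("name")); // red widget
                }
            }
        }
    }
}
```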
301,483 | I'm working on upgrading an application written by a former developer at my work and I've been converting all the SQL queries in the code into Stored Procedures. I'm doing this with the idea that it can be easily called whenever it is needed instead of having the SQL query string repeated several times in the code. Also, I can easily use parameters with the stored procedure and avoid SQL injection vulnerabilities in my code. I was talking with another developer on my team who asked why I did it that way and told me that " calling stored procedures a lot can impact the SQL Server's performance and that it is better performance-wise to just use the SQL string in the application's code. " (Please note that this is a small VB.NET WinForms development team that doesn't use MVVM or other similar design patterns) Is that an accurate statement? I've not found anything in my googling prior to posting this question that would indicate running stored procedures has a bigger impact to performance than a regular SQL query. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/200248/"
]
} |
301,484 | Is there a name for this pattern where an API offers several variations of a method (differing in number of parameters) and internally forwards them with default parameters like this: public void doSomething(Object one) {
doSomething(one, null);
}
public void doSomething(Object one, Object two) {
doSomething(one, two, null);
}
public void doSomething(Object one, Object two, Object three) {
// real implementation here
} | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301484",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/84069/"
]
} |
301,547 | In a C# or VB.NET project, should we include the PACKAGE folder (NuGet package folder that is created in the root of my project that contains the nupkg files and other content) to our source control repository (Git for instance)? | It depends. Check out Bart van Ingen Schenau's answer to determine if it's possible to ignore the packages folder at all. Basically: yes, NuGet is designed so that you can ignore the packages folder and NuGet will pull everything from the Internet if it's missing. But should you ignore it? I say: it depends. IMO it's a question of "can we continue working in case the package repository is not available" (be it temporarily or permanently) For my personal OSS projects, I have the packages folder ignored in all of them. When nuget.org is offline, I'll just wait and continue another day. But it's something different at work. Sure, you probably still have the packages locally on some machine, but is saving some space worth the hassle when your builds are breaking because your build server can't reach nuget.org? We decided that space is cheap and we don't want the hassle, that's why we're committing the packages folder to source control. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301547",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21623/"
]
} |
301,551 | I've written a linux application that interfaces with a lot of GPL software. My application is written in javascript/HTML/CSS/python and uses MariaDB on a CentOS 7 platform (it also uses jquery and a few simple third party tools); we would like to sell this on our hardware as an appliance . The GPL software we are interfacing with (usually via command line or configuration files) include those mentioned above along with lots of little linux tools (e.g. yum, hdparm, psutils and lots of python modules etc...). We do not want to publicly release the code we have written, even though it would be pretty easy to reverse engineer a lot of it due to our programming language selections (obviously). We intend to lock down the command line, its to dangerous for our clients to access. We will happily direct the users to where they can download all the GPL software, that we aren't modifying, if we are required to; but we don't want to direct them to our software... why would they pay for software updates if they can get it for free? We couldn't survive like that. So, assuming you say yes I have to issue the software I write under GPL, how can this be avoided and how much will it cost per distribution... do I have to pay the developer of every little tool I read from or write to? It would seem impossible to manage. MySQL seemed to indicate that I do have to pay them in one area of their site without telling me the price, then that I didn't have to pay them in another area. I have read GPL many times, and have been researching for months... I still don't understand what I need to do, if anything, to achieve my intent of not having to give my software away (other than get a lawyer we can't afford to tell me). For example, could I sell the appliance with the GPL software on it (using the GPL license), then sell our software separate under whatever license we please? If so, does that mean I only need to make sure they are separate line items on our quotations? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301551",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/202412/"
]
} |
301,591 | In a discussion about static and instance methods, I always think that Sqrt() should be an instance method of number types instead of a static method. Why is that? It obviously works on a value. // looks wrong to me
var y = Math.Sqrt(x);
// looks better to me
var y = x.Sqrt(); Value types obviously can have instance methods, as in many languages, there is an instance method ToString() . To answer some questions from the comments: Why should 1.Sqrt() not be legal? 1.ToString() is. Some languages do not allow to have methods on value types, but some languages can. I am talking about these, including Java, ECMAScript, C# and Python (with __str__(self) defined). The same applies to other functions like ceil() , floor() etc. | Suppose we're designing a new language and we want Sqrt to be an instance method. So we look at the double class and begin designing. It obviously has no inputs (other than the instance) and returns a double . We write and test the code. Perfection. But taking the square root of an integer is valid, too, and we don't want to force everyone to convert to a double just to take a square root. So we move to int and start designing. What does it return? We could return an int and make it work only for perfect squares, or round the result to the nearest int (ignoring the debate about the proper rounding method for now). But what if someone wants a non-integer result? Should we have two methods - one that returns an int and one that returns a double (which is not possible in some languages without changing the name). So we decide that it should return a double . Now we implement. But the implementation is identical to the one we used for double . Do we copy-and-paste? Do we cast the instance to a double and call that instance method? Why not put the logic in a library method that can be accessed from both classes. We'll call the library Math and the function Math.Sqrt . Why is Math.Sqrt a static function?: Because the implementation is the same regardless of the underlying numeric type Because it does not affect a particular instance (it takes in one value and returns a result) Because numeric types do not depend on that functionality, therefore it makes sense to have it in a separate class We haven't even addressed other arguments: Should it be named GetSqrt since it returns a new value rather than modifying the instance? What about Square ? Abs ? Trunc ? Log10 ? Ln ? Power ? Factorial ? Sin ? Cos ? ArcTan ? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301591",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20580/"
]
} |
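A sketch of the answer's core argument: the algorithm does not depend on which numeric type the caller started from, so it naturally lives once as a static function instead of being re-implemented on int, long, float and double. The Newton-iteration helper below is a hypothetical stand-in, not how Math.Sqrt is actually implemented.

```java
class SqrtDemo {
    // One shared implementation serves every numeric call site.
    static double sqrt(double a) {
        if (a < 0) return Double.NaN;
        double x = a;
        for (int i = 0; i < 50 && x != 0; i++) {
            x = 0.5 * (x + a / x);   // Newton's method
        }
        return x;
    }

    public static void main(String[] args) {
        int n = 9;          // simply widened to double at the call site
        double d = 2.0;
        System.out.println(sqrt(n));   // ~3.0
        System.out.println(sqrt(d));   // ~1.4142
    }
}
```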
301,595 | When designing exceptions, should I write messages that a user or a developer should understand? Who should actually be the reader of exception messages? I find exception messages aren't useful at all and I always have a hard time writing them. By convention the type of the exception should already tell us why something didn't work, and custom properties might add even more information like file names, indexes, keys etc., so why repeat it in the message itself? An autogenerated message could also do, and all it would have to contain is the name of the exception with a list of additional properties. This would be exactly as useful as a handwritten text. Wouldn't it be better to not write messages at all but instead have special exception renderers that take care of creating meaningful messages, maybe in different languages, rather than hardcoding them in code? I've been asked whether either of those questions provides an answer to my question: How to write a good exception message Why do many exception messages not contain useful details? I've read them both and I wasn't happy with their answers. They talk about users in general and focus on the content of the message itself rather than on the addressee, and it turns out there can be at least two of them: the end-user and the developer. I never know which one I should speak to when writing exception messages. I even think that the famous message doesn't have any real value at all, as it just repeats the name of the exception type in different words, so why bother writing them at all? I could perfectly well generate them automatically. To me, the exception message lacks a differentiation of its readers. A perfect exception would need to provide at least two versions of the message: one for the end-user and one for the developer. Calling it just a message is too generic. A developer message should then be written in English, but the end-user's message might need to be translated into other languages. It is not possible to achieve all this with only one message, so an exception would need to provide some identifier for the end-user message which, as I just said, might be available in different languages. When I read all the other linked questions I get the impression that an exception message is indeed intended to be read by an end-user and not a developer... a single message is like trying to have your cake and eat it too. | Those messages are for other developers Those messages are expected to be read by developers to help them debug the application. This can take two forms: Active debugging. You're actually running a debugger while writing code and trying to figure out what happens. In this context, a helpful exception will guide you by making it easy to understand what's going wrong, or eventually suggesting a workaround (although this is optional). Passive debugging. The code runs in production and fails. The exception is logged, but you only get the message and the stack trace. In this context, a helpful exception message will help you quickly localize the bug. Since those messages are often logged, it also means that you shouldn't include sensitive information there (such as private keys or passwords, even if it could be useful to debug the app). For instance, an IOSecurityException thrown when writing a file is not very explicit about the problem: is it because we don't have permissions to access a file? Or maybe we can read it, but not write? Or maybe the file doesn't exist and we don't have permissions to create files there?
Or maybe it's locked (hopefully, the type of the exception will be different in this case, but in practice, types can sometimes be cryptic). Or maybe Code Access Security prevents us from doing any I/O operation? Instead: IOSecurityException: the file was found but the permission to read its contents was denied. is much more explicit. Here, we immediately know that permissions on the directory are set correctly (otherwise, we won't be able to know that the file exists), but permissions at file level are problematic. This also means that if you cannot provide any additional information which is not already in the type of the exception, you can keep the message empty. DivisionByZeroException is a good example where the message is redundant. On the other hand, the fact that most languages let you throw an exception without specifying its message is done for a different reason: either because the default message is already available, or because it will be generated later if needed (and the generation of this message is enclosed within the exception type, which makes perfect sense, "OOPly" speaking). Note that for technical (often performance) reasons, some messages end up being much more cryptic than they should be. .NET's NullReferenceException : Object reference not set to an instance of an object. is an excellent example of a message which is not helpful. A helpful message would be: Object reference product not set to an instance of an object when called in product.Price . Those messages are not for end users! End users are not expected to see exception messages. Never. Although some developers end up showing those messages to the users, this leads to poor user experience and frustration. Messages such as: Object reference not set to an instance of an object. mean absolutely nothing to an end user, and should be avoided at all costs. The worst case scenario is to have a global try/catch which throws the exception to the user and exits the app. An application which cares about its users: Handles exceptions in the first place. Most ones can be handled without disturbing the user. Network is down? Why not wait for a few seconds and try again? Prevents a user from leading the application to an exceptional case. If you ask the user to enter two numbers and divide the first one by the second one, why would you let the user to enter zero in the second case in order to blame him a few seconds later? What about highlighting the text box in red (with a helpful tool tip telling that the number should be different than zero) and disabling the validation button until the field remains red? Invites a user to perform an action in a form which is not an error. There are no enough permissions to access a file? Why not ask the user to grant administrative permissions, or pick a different file? If nothing else works, shows a helpful, friendly error which is specifically written to reduce user's frustration, help the user to understand what went wrong and eventually solve the problem, and also help him prevent the error in the future (when applicable). In your question, you suggested to have two messages in an exception: a technical one for developers, and the one for end users. While this is a valid suggestion in some minor cases, most exceptions are produced at a level where it is impossible to produce a meaningful message for the users. Take DivisionByZeroException and imagine that we couldn't prevent the exception from happening and can't handle it ourselves. 
When the division occurs, does the framework (since it's the framework, and not business code, which throws the exception) know what will be a helpful message for a user? Absolutely not: The division by zero occurred. [OK] Instead, one can let it throw the exception, and then catch it at a higher level where we knew the business context and could act accordingly in order to actually help the end user, thus showing something like: The field D13 cannot have the value identical to the one in field E6, because the subtraction of those values is used as a divisor. [OK] or maybe: The values reported by ATP service are inconsistent with the local data. This may be caused by the local data being out of sync. Would you like to synchronize the shipping information and retry? [Yes] [No] Those messages are not for parsing Exception messages are not expected to be parsed or used programmatically either. If you think that additional information can be needed by the caller, include it in the exception side by side with the message. This is important, because the message is subject to change without notice. Type is a part of the interface, but the message is not: never rely on it for exception handling. Imagine the exception message: Connecting to the caching sever timed out after waiting for 500 ms. Either increase the timeout or check the performance monitoring to identify a drop in server performance. The average wait time for caching server was 6 ms. for the last month, 4 ms. for the last week and 377 ms. for the last hour. You want to extract the “500”, “6”, “4” and “377” values. Think a bit about the approach you will use to do the parsing, and only then continue reading. You have the idea? Great. Now, the original developer discovered a typo: Connecting to the caching sever timed out after waiting [...] should be: ↓
Connecting to the caching server timed out after waiting [...] Moreover, the developer considers that month/week/one hour are not particularly relevant, so he also does an additional change: The average wait time for caching server was 6 ms. for the last month, 5 ms. for the last 24 hours and 377 ms. for the last hour. What happens with your parsing? Instead of parsing, you could be using exception properties (which can contain whatever one wants, as soon as the data can be serialized): {
message: "Connecting to the caching [...]",
properties: {
"timeout": 500,
"statistics": [
{ "timespan": 1, "unit": "month", "average-timeout": 6 },
{ "timespan": 7, "unit": "day", "average-timeout": 4 },
{ "timespan": 1, "unit": "hour", "average-timeout": 377 },
]
}
} How easy is to use this data now? Sometimes (such as within .NET), the message can even be translated into user's language (IMHO, translating those messages is absolutely wrong, since any developer is expected to be able to read in English). Parsing such messages is close to impossible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301595",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/160257/"
]
} |
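To illustrate the "side by side with the message" advice above, here is a hypothetical Java sketch (the class and field names are invented): callers read typed properties instead of parsing the message, so the wording can change freely.

```java
class CacheTimeoutException extends RuntimeException {
    private final long timeoutMillis;
    private final double lastHourAverageMillis;

    CacheTimeoutException(long timeoutMillis, double lastHourAverageMillis) {
        super("Connecting to the caching server timed out after " + timeoutMillis + " ms.");
        this.timeoutMillis = timeoutMillis;
        this.lastHourAverageMillis = lastHourAverageMillis;
    }

    long getTimeoutMillis() { return timeoutMillis; }
    double getLastHourAverageMillis() { return lastHourAverageMillis; }
}

class Caller {
    void handle(CacheTimeoutException e) {
        // No string parsing: the caller reads typed fields, while the message
        // stays free to change its wording without breaking anyone.
        if (e.getLastHourAverageMillis() > e.getTimeoutMillis() / 2.0) {
            // e.g. raise the timeout or alert operations
        }
    }
}
```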
301,691 | I have just forked a project in Github, made my changes etc. This got me wondering: I see mostly README.txt in opensource projects and the file I edited was Readme.txt. Is this some sort of standartisation or should I have left it as is? | All-uppercase letters stand out and make the file easily visible which makes sense because it is probably the first thing a new user would want to look at. (Or, at least, should have looked at…) As others have already said, file names starting with a capital letter will be listed before lower-case names in ASCIIbetical sorting ( LC_COLLATE=C ) which helps make the file visible at a first glance. The README file is part of a bunch of files a user of a free software package would normally expect to find. Others are INSTALL (instructions for building and installing the software), AUTHORS (list of contributors), COPYING (license text), HACKING (how to get started for contributing, maybe including a TODO list of starting points), NEWS (recent changes) or ChangeLog (mostly redundant with version control systems). This is what the GNU Coding Standards have to say about the README file. The distribution should contain a file named README with a general overview of the package: the name of the package; the version number of the package, or refer to where in the package the version can be found; a general description of what the package does; a reference to the file INSTALL , which should in turn contain an explanation of the installation procedure; a brief explanation of any unusual top-level directories or files, or other hints for readers to find their way around the source; a reference to the file which contains the copying conditions. The GNU GPL, if used, should be in a file called COPYING . If the GNU LGPL is used, it should be in a file called COPYING.LESSER . Since it is always good to strive for the least surprise of your users, you should follow this convention unless there are compelling reasons for a deviation. In the UNIX world, file name extensions were traditionally used sparingly so the canonical name of the file is README without any suffix. But most users probably would have no troubles understanding that a file named README.txt has the same meaning. If the file is written in Markdown , a file name like README.md might also be reasonable. Avoid using more complicated markup languages like HTML in the README file, however, because it should be convenient to read on a text-only terminal. You can point users to the manual of the software or its on-line documentation, that might be written in a more sophisticated format, for details from the README file. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301691",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/202757/"
]
} |
301,867 | We have a large website made in PHP running under Apache on Linux. There are programmers as well as non-technical users modifying the website every day, and we do it directly on the test server. Is there any alternative to modifying the files directly on the server? Sure, the programmers could all set up an Apache instance on their machines and manage it, but it's a bit of trouble, as sometimes we add a new extension to PHP or change the configuration in Apache, which means that everyone needs to change those things manually. Also, it's not realistic to expect our non-technical users to manage their own Apache instance. Another problem is that we are all on Windows computers, but the website runs under Linux. The code is not compatible with Windows, so that is another issue. Switching completely to Linux is not an option either, as we are using many programs that only run on Windows for other tasks, so it would need to be a dual boot or a VM. It feels strange to all work directly on the server, but is it the best way to go in our case? | Everyone altering the application should be doing it in source control, and you should have an automated process for deploying a specific version from source control to your test server. It may require some persuasion to get your non-technical people to use source control, but there really is no alternative to keeping a non-toy system running reliably. If the step to check in and deploy the code is simple enough, it won't matter that no one is working directly on the server. People like using git for this because git is fast and has developed an ecosystem of good end-user tools. Having people work directly on the server is a recipe for disaster. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301867",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/171358/"
]
} |
301,947 | Are spinlocks and polling the same thing? Wikipedia: a spinlock is a lock which causes a thread trying to acquire it to simply wait in a loop ("spin") while repeatedly checking if the lock is available. This sounds an awful lot like: while(!ready); I was taught to avoid polling whenever possible as it was thoroughly sub-optimal. So, is a spinlock a fancy name for bad old polling? How is a spinlock different from polling? | Polling refers to repeatedly checking whether a resource (any kind of resource) is ready. A spinlock is when the resource you are polling is a lock. Note that polling is not bad. In particular, polling is efficient when there is usually data ready when you poll. Polling is only inefficient if you do it without then getting any data in return. On the other hand, interrupts are inefficient if there is so much data that you constantly get interrupted. They are efficient if data arrives rarely enough that you can actually get some useful work done before getting interrupted. I can give you a real-life example from my own experience: 15 years ago, I had my email program set up to interrupt me every time a new email came in, which happened once or twice a week. Constantly checking my inbox would have been a colossal waste of time. Nowadays, I have all notifications turned off. I know that whenever I look into my inbox, there'll be new emails there. Polling is much more efficient now. Spinlocks are efficient when a) the likelihood that the lock is taken is low, and b) if the lock is taken, it will only be held for a short time. In other words: they are efficient for mostly uncontended fine-grained locks, but inefficient for highly contended coarse-grained locks. (And of course, spinlocks only work when there is true parallelism, otherwise the other thread won't have a chance to release the lock. I guess that's kind of obvious, but I wanted to state it anyway.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/301947",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27217/"
]
} |
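A minimal Java sketch of the definition above — the "resource being polled" is the lock flag itself. This is only an illustration; real code should normally prefer the locks in java.util.concurrent.

```java
import java.util.concurrent.atomic.AtomicBoolean;

class SpinLock {
    private final AtomicBoolean locked = new AtomicBoolean(false);

    void lock() {
        // Spin (poll) until we win the race to flip false -> true.
        while (!locked.compareAndSet(false, true)) {
            Thread.onSpinWait();   // CPU hint, Java 9+
        }
    }

    void unlock() {
        locked.set(false);
    }
}
```

This only pays off when the critical section is very short and the hardware is genuinely parallel; otherwise the spinning thread just burns cycles while the holder cannot run.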
302,147 | Currently we have one master branch for our PHP application in a shared repository. We have more than 500 clients who are subscribers of our software, most of whom have some customization for different purposes, each in a separate branch. The customization could be a different text field name, a totally new feature or module, or new tables/columns in the database. The challenge we face is that as we maintain these hundreds of customized branches and distribute them to clients, from time to time we add new features and update our master branch, and we would like to push master branch changes to the custom branches in order to update them to the latest version. Unfortunately this often results in many conflicts in the custom code, and we spend many hours going through every single branch to solve all the conflicts. This is very inefficient, and we've found that mistakes are not uncommon when solving these conflicts. I am looking for a more efficient way to keep our client release branches up to date with the master branch that will result in less effort during merging. | You are completely abusing branches! You should have the customisation powered by flexibility in your application, not flexibility in your version control (which, as you have discovered, is not intended/designed for this sort of use). For example, make text field labels come from a text file, not be hardcoded into your application (this is how internationalisation works). If some customers have different features, make your application modular , with strict internal boundaries governed by stringent and stable APIs, so that features can be plugged in as needed. The core infrastructure, and any shared features, then only need be stored, maintained and tested once . You should have done this from the start. If you already have five hundred product variants (!), fixing this is going to be a huge job … but no more so than ongoing maintenance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302147",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/203422/"
]
} |
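A tiny sketch of the "flexibility in the application, not in version control" idea above: per-client wording comes from a properties file chosen at deployment time, so every client can run the same branch. The file path and key names are invented for illustration.

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

class Labels {
    public static void main(String[] args) throws IOException {
        Properties labels = new Properties();
        // e.g. conf/acme/labels.properties is shipped per client, outside the code branch
        try (FileInputStream in = new FileInputStream("conf/acme/labels.properties")) {
            labels.load(in);
        }
        // Fall back to a default when a client has not overridden the text.
        String fieldName = labels.getProperty("customer.field.name", "Customer name");
        System.out.println(fieldName);
    }
}
```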
302,198 | I am writing a program that involves working with both polar and Cartesian coordinates. Does it make sense to create two different structs, one for each kind of point: one with X and Y members and one with R and Theta members? Or is that too much, and is it better to have just one struct with first and second as members? What I am writing is simple and it won't change much. But I am curious what is better from a design point of view. I am thinking the first option is better. It seems more readable and I will get the benefit of type checking. | Yes, it makes a lot of sense. The value of a struct is not just that it encapsulates data under a handy name. The value is that it codifies your intentions so that the compiler can help you verify that you don't violate them some day (e.g. mistake a polar coordinate set for a cartesian coordinate set). People are bad at remembering such niggling details, but good at creating bold, inventive plans. Computers are good at niggling details and bad at creative plans. Therefore it is always a good idea to shift as much niggling-detail maintenance as possible to the computer, leaving your mind free to work on the grand plan. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302198",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/103901/"
]
} |
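A short sketch of the two-type approach the answer favours, using Java records (16+) as stand-ins for structs: the compiler now rejects passing a polar pair where a Cartesian one is expected, and conversions between the two are explicit.

```java
record Cartesian(double x, double y) {
    Polar toPolar() {
        return new Polar(Math.hypot(x, y), Math.atan2(y, x));
    }
}

record Polar(double r, double theta) {
    Cartesian toCartesian() {
        return new Cartesian(r * Math.cos(theta), r * Math.sin(theta));
    }
}

class Distance {
    // Cannot be handed a Polar by mistake — the type system enforces the intent.
    static double fromOrigin(Cartesian p) {
        return Math.hypot(p.x(), p.y());
    }
}
```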
302,217 | Introduction Many "C-like" programming languages use compound statements (code blocks specified with "{" and "}") to define a variable's scope. Here is a simple example. for (int i = 0; i < 100; ++i) {
int value = function(i); // Here 'value' is local to this block
printf("function(%d) == %d\n", i, value);
} This is good because it limits the scope of the value to where it is used. It is hard for programmers to use value in ways that they are not meant to, because they can only access it from within its scope. I am sure almost all of you are aware of this and agree that it is good practice to declare variables in the block they are used in, to limit their scope. But even though it is an established convention to declare variables in their smallest possible scope, it is not very common to use a naked compound statement (that is, a compound statement that is not attached to an if , for or while statement). Swapping the values of two variables Programmers often write the code like this: int x = ???
int y = ???
// Swap `x` and `y`
int tmp = x;
x = y;
y = tmp; Would it not be better to write the code like this: int x = ???
int y = ???
// Swap `x` and `y`
{
int tmp = x;
x = y;
y = tmp;
} It looks quite ugly, but I think that this is a good way to enforce variable locality and make the code safer to use. This does not only apply to temporaries. I often see similar patterns where a variable is used only once in a function: Object function(ParameterType arg) {
Object obj = new Object(obj);
File file = File.open("output.txt", "w+");
file.write(obj.toString());
// `obj` is used more here but `file` is never used again.
...
} Why don't we write it like this? RET_TYPE function(PARAM_TYPE arg) {
Object obj = new Object(obj);
{
File file = File.open("output.txt", "w+");
file.write(obj.toString());
}
// `obj` is used more here but `file` is never used again.
...
} Summary of question It is hard to come up with good examples. I am sure that there are better ways to write the code in my examples, but that is not what this question is about. My question is why we do not use "naked" compound statements more to limit the scope of variables. What do you think about using a compound statement like this {
int tmp = x;
x = y;
y = tmp;
} to limit the scope of tmp ? Is it good practice? Is it bad practice? Explain your thoughts. | It is indeed a good practice to keep your variable's scope small. However, introducing anonymous blocks into large methods only solves half the problem: the scope of the variables shrinks, but the method (slightly) grows! The solution is obvious: what you wanted to do in an anonymous block, you should be doing in a method. The method gets its own block and its own scope automatically, and with a meaningful name you get better documentation out of it, too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302217",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154627/"
]
} |
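Applying the answer above to the question's file-writing example, as a hypothetical Java sketch (the types are simplified stand-ins): the temporary gets its own scope by living in a named method rather than in a bare block.

```java
import java.io.IOException;
import java.io.PrintWriter;

class Report {
    Object build(String arg) throws IOException {
        Object obj = new Object();
        writeSnapshot(obj);          // the writer's scope ends with the method
        // ... keep working with obj here
        return obj;
    }

    private void writeSnapshot(Object obj) throws IOException {
        // try-with-resources also closes the file, which the bare block would not do.
        try (PrintWriter file = new PrintWriter("output.txt")) {
            file.write(obj.toString());
        }
    }
}
```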
302,289 | Here is the deal, I have joined a new company and have been asked to finish off the work on a branch which hasn't been touched for almost a year. In the meanwhile, the master branch has been growing with a steady pace. Ideally I would like to merge all of the changes from the master branch into the feature branch and continue the work from there, but I'm not too sure how to approach this. How do I perform this merge safely while preserving important changes on both sides of the branch? | At its heart, how to combine two (possibly non-compatible) pieces of code is a development problem , not a version control problem. The Git merge command may help in this process, but it depends on the shape of the problem. Comparing both versions with the base first makes the most sense. This will give you an idea of the best strategy for taking this forward. Your approach might be different based on the nature and overlap of the changes in each branch. Imagine the ideal scenario: you would discover that the main branch and the feature branch each only modified mutually exclusive parts of the code, so you could just commit all the changes in and be good to go. Of course, that will almost certainly not be the case, but the question is: how far removed from this ideal scenario will it be? i.e. how intermingled are the changes? Also, how mature was the old feature branch? Was it in a good working state, or not (or unknown)? How much of the feature was finished? If the relevant code in the main branch has changed a lot in the past year, or the feature is not in a very mature state, I might consider creating a new fork of the latest and manually incorporating the old feature in again. This will allow you to take an incremental approach to getting it working. If you do a messy merge of lots of code and it doesn't work, it will be quite hard to debug. If the main branch has changed a lot over the past year, major design changes may be needed to the feature to get it working. It would not be appropriate to make these changes via "resolve conflicts", since this would require making all the changes at once and hoping it works. This problem would be compounded by the possibility of bugs in the old partially finished branch. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302289",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/76933/"
]
} |
302,421 | I fooled around with for-loops, remembered the with keyword from Delphi, and came up with the following pattern (defined as a live template in IntelliJ IDEA): for ($TYPE$ $VAR$ = $VALUE$; $VAR$ != null; $VAR$ = null) {
$END$
} Should I use it in production code? I think it might be handy for creating temporary shortcut variables, like the one-character variables in lambdas and counting for loops. Plus, it checks whether the variable you are going to use in the block is null first. Consider the following case in a Swing application: // init `+1` button
for (JButton b = new JButton("+1"); b != null; add(b), b = null) {
b.setForeground(Color.WHITE);
b.setBackground(Color.BLACK);
b.setBorder(BorderFactory.createRaisedBevelBorder());
// ...
b.addActionListener(e -> {
switch (JOptionPane.showConfirmDialog(null, "Are you sure voting +1?")) {
case JOptionPane.OK_OPTION:
// ...
break;
default:
// ...
break;
}
});
} | It's not an anti-pattern, because that would mean it is a commonly used technique that's problematic somehow. This code fails to meet the "commonly used" criterion. However, it is problematic. Here are some of its problems: Misleading: it uses a loop structure, but never executes more than once. Hiding a check: the != null check is easy to miss where it is placed. Hidden code is harder to understand. It's also not clear whether the condition is actually necessary or just there to terminate the loop after the first iteration. (Your statements about the check indicate that you have situations where it's necessary, whereas in your example it's not, as new never returns null.) Hiding an action: the add(b) statement is hidden even better. It's not even sequentially at the position where you'd expect it. Unnecessary: You can just declare a local variable. If you don't want its name to be visible, you can use a block statement to limit its scope, although that would be an anti-pattern (or at least a code smell), indicating that you should extract a function. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302421",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/203827/"
]
} |
302,730 | As a good programmer one should write robust code that will handle every single outcome of his program. However, almost all functions from the C library will return 0 or -1 or NULL when there's an error. It's sometimes obvious that error checking is needed, for example when you try to open a file. But I often ignore error checking in functions such as printf or even malloc because I don't feel it's necessary. if(fprintf(stderr, "%s", errMsg) < 0){
perror("An error occurred while displaying the previous error.");
exit(1);
} Is it a good practice to just ignore certain errors, or is there a better way to handle all the errors? | In general, code should deal with exceptional conditions wherever it is appropriate. Yes, this is a vague statement. In higher level languages with software exception handling this is often stated as "catch the exception in the method where you can actually do something about it." If a file error occurred, maybe you let it bubble up the stack to the UI code that can actually tell the user "your file failed to save to disk." The exception mechanism effectively swallows up "every little error" and implicitly handles it at the appropriate place. In C, you do not have that luxury. There are a few ways to handle errors, some of which are language/library features, some of which are coding practices. Is it a good practice to just ignore certain errors, or is there a
better way to handle all the errors? Ignore certain errors? Maybe. For example, it is reasonable to assume that writing to standard output will not fail. If it does fail, how would you tell the user, anyway? Yes, it is a good idea to ignore certain errors, or code defensively to prevent them. For example, check for zero before dividing. There are ways to handle all, or at least most, errors: You can use jumps, similar to gotos, for error handling . While a contentious issue among software professionals, there are valid uses for them especially in embedded and performance-critical code (e.g. Linux kernel). Cascading if s: if (!<something>) {
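/* the first operation failed: report it and bail out before attempting the next step */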
printf("oh no 1!");
return;
}
if (!<something else>) {
printf("oh no 2!");
return;
} Test the first condition, e.g. opening or creating a file, then assume subsequent operations succeed. Robust code is good, and one should check for and handle errors. Which method is best for your code depends on what the code does, how critical a failure is, etc. and only you can truly answer that. However, these methods are battle-tested and used in various open source projects where you can take a look to see how real code checks for errors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302730",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153287/"
]
} |
302,763 | I have noticed a pattern while working on several software projects: the vast majority of the bugs reported had a high/very-high priority. I asked some colleagues about why this may be happening, and they mentioned that if a bug doesn't have that level of priority it is very rare that the bug gets developer attention, which indeed makes sense. So, I wanted to know if this problem is common or if I just had bad luck. I did a quick Google search, and I found that some teams implement Bug Reporting guidelines or have a separate "Bug Triage" team. If you have faced and solved this issue, what was the approach that worked for you? This question is specifically about the "Priority Inflation" problem: whether you have faced this scenario and which measures proved effective against it. | If you have this problem where users are assigning ever-higher priority bugs then the only realistic solution is a triage mechanism. All bugs get reported with whatever priority they like, but some poor manager will have to go through every newly reported bug and reset its priority to a sensible level. After a while your users will either get the message, or you can change the reporting system so that every bug has a default priority. If they want it escalated they will have to contact someone to bump it, which will require some justification. This fact alone will cause 99% of all bugs to be left un-escalated by the user. Obviously you have more bugs than you can process, so maybe you need to embark on a bug-fix roundup to clear the backlog. This will show the users that their bugs will get fixed without needing them to be marked as super-super-dooper-really-no-honest-this-time-important. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302763",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48738/"
]
} |
302,772 | I am trying to generate a table which would show all possible combinations based on the key. To make it more clear, let's say I have a kind of key-value pair set (it's mentioned as "kind of" because there are unique keys and different values). I need to fill the table with all possible combinations of the values with different keys. Now I need to make the key the heading of the table and add the values accordingly so that there are no duplicate rows and all combinations are present. I am doing it in C#. I could only retrieve these key-value pairs from my database based on conditions, but I have no idea how the table can be built. Are there any examples someone could provide? Or a basic approach I should follow to achieve it? So far my idea is to create an array with size equal to the number of distinct parameters of the key, add values to the array and then pass a list of the array to the View. P.S.: My data model is complex. Reference to this question would give an idea of a part of my data model. The value I am talking about comes from the SetValue described in the model and the key from another class which is not described (I feel this information is irrelevant as I already have the list of my key-value pairs). I think I have posted the question in the right forum, as I am not asking for any code. If not, please let me know the right forum where this question belongs. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302772",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/197004/"
]
} |
302,780 | Why does this ShapeFactory use conditional statements to determine what object to instantiate? Don't we have to modify ShapeFactory if we want to add other classes in the future? Why doesn't this violate the open/closed principle? | The conventional object-oriented wisdom is to avoid if statements and replace them with dynamic dispatch of overridden methods in subclasses of an abstract class. So far, so good. But the point of the factory pattern is to relieve you from having to know about the individual subclasses and work only with the abstract superclass. The idea is that the factory knows better than you which specific class to instantiate, and you will be better off working only with the methods published by the superclass. This is often true and a valuable pattern. Therefore, there is no way that writing a factory class can forgo the if statements; doing so would shift the burden of choosing a specific class to the caller, which is exactly what the pattern is supposed to avoid. Not all principles are absolute (in fact, no principle is absolute), and if you use this pattern you'd assume that the benefit from it is greater than the benefit of not using an if. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302780",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/153300/"
]
} |
302,811 | I'm a CS student. I am currently attending lectures where we're taught Objective Analysis and Design. It consists mostly of writing use cases, analysing the problems that we can face when writing some application for a client, and how to design the project so that it's extensible, clear to developers, and doesn't spawn problems when the client argues about some features. Since it's 'objective', we're learning it from an OOP point of view (classes and such). Now we're using UML as a helper tool. I believe I have a good grasp of OOP, but I have also learned the functional paradigm and used it successfully in some of my smaller projects. Our teacher, when confronted with the "what about the functional paradigm?" question, answered that he hasn't programmed any larger project in functional languages and doesn't know what tools functional programmers may be using. So, what would they use? Is there some methodology for this? Or maybe there's no need for such a thing? | I can't speak for all functional programmers, but those I know all start out by writing the type signatures of the top-level functions, then as they need more detail, they write the type signatures of the helper functions, and so forth. This works because of the lack of side effects in functional programming, so functions are all specified in terms of only their inputs and outputs. This makes their type signatures much more useful as a design tool than in imperative programming. That's one reason you see them used even when the compiler could infer them. As far as diagramming tools go, with all due respect to your professor, I haven't used those to any significant degree in any paradigm since I left school. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302811",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/120783/"
]
} |
302,892 | Through the years I've worked in web development it's been ingrained in me that client-side validation is absolutely and completely necessary in all web applications. Seems to me like all the people in the profession are very adamant on using it every single time. The most commonly mentioned benefits are: Instantaneous showing of form errors Less unnecessary requests to the server But as we're moving towards the modern web, more and more applications are AJAX based. The simple task of submitting forms takes a lot less time now than it did before and validating on the server and returning errors barely impacts the user experience. Could it be justifiable to completely avoid client-side validation (for example) on an application that only contains small and simple forms? EDIT: I would argue that the linked duplicate question doesn't really capture the same example I'm giving here. A lot has changed since 2011. All of the answers on the related question are arguing for user experience whereas in my question I'm describing a scenario where the user experience would be unaffected. When client-side validation no longer improves the user experience does it continue to rank in the same level of importance as before? | Server side validation is absolutely necessary. Client side validation is purely a user experience improvement since the same validation should always happen on the server anyway. After all, you can always disable JavaScript or simply post arbitrary data directly via HTTP. If you can provide server side validation which gives just as smooth a user experience as client side validation, you don't need client side validation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/302892",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154465/"
]
} |
303,018 | We have a few givens: Developers need a replica of the production database on their machines. Developers have the password to said database in the App.config files. We don't want the data in said database compromised. A few suggested solutions and their drawbacks: Full-disk encryption. This solves all problems, but degrades the laptop's performance, and we are a start-up, so we don't have money for powerful machines. Creating a VM with an encrypted hard disk and storing the database on it. It works well, but it doesn't help too much, since there's a password in Web.Config. Solution number 2 + requiring the developer to type the database password every time he runs anything. It solves all problems, but it is really cumbersome for developers who sometimes fire up the application multiple times a minute. Also, we have multiple applications that connect to the same database, and the implementation of a password screen will have to differ in each. So, my question is whether there's any common solution to such a problem, or any suggestions on how to make any of the above solutions workable? | Not only do you not want a copy of the production database, it may actually be illegal. For example, in the US, you cannot move production data out of the production environment if it contains regulated information like personal health data, financial data, or even data that could be used in identity theft. If you do, you could be fined, lose your compliance standing and therefore be subject to more aggressive audits, or even be named in a lawsuit. If you need production-scale data for testing, you have a couple of options: Generate all dummy data. This is trickier than it sounds. It's surprisingly difficult and labor-intensive to generate sensible imaginary data. Anonymize your production data. This may be easier, but proceed with caution. For option #2: In the production environment, an authorized database admin makes a copy of the production data. Still in the production environment, the same authorized admin runs a routine that anonymizes all sensitive data. If in doubt, anonymize. Only then should the data be moved to another environment. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303018",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/204707/"
]
} |
303,082 | I have been programming in a number of languages like Java, Ruby, Haskell and Python. I have to switch between many languages per day due to the different projects I work on. Now, the issue is that I often forget to write self as the first parameter in function definitions in Python; the same goes for calling methods on the same object. That said, I am quite amazed by this approach of Python. Basically we have to type more to get things done, while in languages like Java and Ruby things are made simple by automatically referencing the variables in the current object. My question is: why is this self necessary? Is it purely a style choice, or is there a reason why Python can't let you omit self the way Java and C++ let you omit this ? | 1) Why is self required as an explicit parameter in method signatures? Because methods are functions and foo.bar(baz) is just syntactic sugar for bar(foo, baz). Classes are just dictionaries where some of the values are functions. (Constructors are also just functions, which is why Python doesn't need new.) You can say that Python makes it explicit that objects are built from simpler components. This is in accordance with the "explicit is better than implicit" philosophy. In contrast, in Java objects really are magic and cannot be reduced to simpler components in the language. In Java (at least until Java 8) a function is always a method owned by an object, and this ownership cannot be changed due to the static nature of the language. Therefore there is no ambiguity about what this refers to, so it makes sense to have it implicitly defined. JavaScript is an example of a language that has an implicit this like Java, but where functions can exist separately from objects like in Python. This leads to a lot of confusion about what this refers to when functions are passed around and called in different contexts. Many instinctively think this must refer to some intrinsic property of the function, while it is actually purely determined by the way the function is called. I believe having this as an explicit parameter like in Python would make this much less confusing. Some other benefits of the explicit self parameter: Decorators are just functions which wrap other functions. Since methods are just functions, decorators work just as well on methods. If there were some kind of implicit self, decorators would not work transparently on methods. Classmethods and static methods do not take an instance parameter. Classmethods take a class as the first argument (typically called cls ). The explicit self or cls parameters make it much clearer what is going on, and what you have access to in the method. 2) Why must instance variables always be qualified with "self." ? In Java, you don't need to prefix member variables with "this." , but in Python "self." is always required. The reason is that Python does not have an explicit syntax for declaring variables, so there would be no way to tell if x = 7 is supposed to declare a new local variable or assign to a member variable. Specifying self. solves this ambiguity. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303082",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81282/"
]
} |
303,207 | Note: The code sample is written in C#, but that shouldn't matter. I've put C# as a tag because I can't find a more appropriate one. This is about the code structure. I'm reading Clean Code and trying to become a better programmer. I often find myself struggling to follow the Single Responsibility Principle (classes and functions should do only one thing), especially in functions. Maybe my problem is that "one thing" is not well-defined, but still... An example: I have a list of Fluffies in a database. We don't care what a Fluffy is. I want a class to recover fluffies. However, fluffies can change according to some logic. Depending on some logic, this class will return the data from the cache or get the latest from the database.
We could say that it manages fluffies, and that is one thing.
To make it simple, let's say loaded data is good for an hour, and then it must be reloaded. class FluffiesManager
{
private Fluffies m_Cache;
private DateTime m_NextReload = DateTime.MinValue;
// ...
public Fluffies GetFluffies()
{
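// Serve from the cache, refreshing it first if the loaded data has gone stale.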
if (NeedsReload())
LoadFluffies();
return m_Cache;
}
private bool NeedsReload()
{
return (m_NextReload < DateTime.Now);
}
private void LoadFluffies()
{
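// Fetches fresh data from the database and also schedules the next reload.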
GetFluffiesFromDb();
UpdateNextLoad();
}
private void UpdateNextLoad()
{
m_NextReload = DateTime.Now + TimeSpan.FromHours(1);
}
// ...
} GetFluffies() seems ok to me. The user asks for some fluffies, we provide them. Going to recover them from the DB if needed, but that could be considered a part of getting the fluffies (of course, that's somewhat subjective). NeedsReload() seems right, too. Checks if we need to reload the fluffies.
UpdateNextLoad() is fine. It updates the time for the next reload; that's definitely one single thing. However, I feel that what LoadFluffies() does can't be described as one single thing. It's getting the data from the database, and it's scheduling the next reload. It's hard to argue that calculating the time for the next reload is part of getting the data. However, I can't find a better way to do it (renaming the function to LoadFluffiesAndScheduleNextLoad may be better, but it just makes the problem more obvious). Is there an elegant solution to really write this class according to the SRP?
Am I being too pedantic? Or maybe my class isn't really doing just one thing? | One general-purpose mechanism for handling a broad range of cases where we want to add value without violating the Single Responsibility Principle is the Decorator Pattern . This mechanism is suitable when the adding of value can be done without changing the existing interface, which is typically what caches do. The beautiful thing with this mechanism is that it takes the handling out of the code and into the design. Specifically: We begin by declaring an interface for what we want to do; in your case, a FluffiesProvider . Then we write a class which implements FluffiesProvider and offers the fundamental functionality, in your case a DatabaseFluffiesProvider which reads the fluffies from the database without worrying at all about caching. Then we write a decorator of FluffiesProvider which does nothing but caching and has no idea where the fluffies come from. Finally, we wire them together as one, so the final FluffiesProvider that we end up with is a cached database fluffies provider. Here is some example code: /// Provides Fluffies.
interface FluffiesProvider
{
Fluffies GetFluffies();
}
/// Implements FluffiesProvider using a database.
class DatabaseFluffiesProvider : FluffiesProvider
{
public Fluffies GetFluffies()
{
// ... load fluffies from DB ...
// (the entire implementation of "GetFluffiesFromDb()" goes here.)
}
}
/// Decorates FluffiesProvider to add caching.
class CachingFluffiesProvider : FluffiesProvider
{
private FluffiesProvider decoree;
private DateTime m_NextReload = DateTime.MinValue;
private Fluffies m_Cache;
public CachingFluffiesProvider( FluffiesProvider decoree )
{
Assert( decoree != null );
this.decoree = decoree;
}
public Fluffies GetFluffies()
{
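// On the first call, or once the hour is up, pull fresh data from the wrapped provider and cache it.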
if( DateTime.Now >= m_NextReload )
{
m_Cache = decoree.GetFluffies();
m_NextReload = DateTime.Now + TimeSpan.FromHours(1);
}
return m_Cache;
}
} and here is the instantiation and wiring together of the classes: FluffiesProvider provider = new DatabaseFluffiesProvider();
provider = new CachingFluffiesProvider( provider );
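// callers now see a plain FluffiesProvider, while caching happens transparently underneath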
...go ahead and use provider... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303207",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/129422/"
]
} |
303,242 | There are times where using recursion is better than using a loop, and times where using a loop is better than using recursion. Choosing the "right" one can save resources and/or result in fewer lines of code. Are there any cases where a task can only be done using recursion, rather than a loop? | Yes and no. Ultimately, there's nothing recursion can compute that looping can't, but looping takes a lot more plumbing. Therefore, the one thing recursion can do that loops can't is make some tasks super easy. Take walking a tree. Walking a tree with recursion is stupid-easy. It's the most natural thing in the world. Walking a tree with loops is a lot less straightforward. You have to maintain a stack or some other data structure to keep track of what you've done. Often, the recursive solution to a problem is prettier. That's a technical term, and it matters. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303242",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90773/"
]
} |
303,438 | In the event an alien invasion occurred and we were forced to support their languages in all of our existing computer systems, is UTF-8 designed in a way to allow for their possibly vast amount of characters? (Of course, we do not know if aliens actually have languages, if or how they communicate, but for the sake of the argument, please just imagine they do.) For instance, if their language consisted of millions of newfound glyphs, symbols, and/or combining characters , could UTF-8 theoretically be expanded in a non-breaking way to include these new glyphs and still support all existing software? I'm more interested in if the glyphs far outgrew the current size limitations and required more bytes to represent a single glyph. In the event UTF-8 could not be expanded, does that prove that the single advantage over UTF-32 is simply size of lower characters? | The Unicode standard has lots of space to spare. The Unicode codepoints are organized in “planes” and “blocks”. Of 17 total planes, there are 11 currently unassigned . Each plane holds 65,536 characters, so there's realistically half a million codepoints to spare for an alien language (unless we fill all of that up with more emoji before first contact). As of Unicode 8.0, only 120,737 code points have been assigned in total (roughly 10% of the total capacity), with roughly the same amount being unassigned but reserved for private, application-specific use. In total, 974,530 codepoints are unassigned. UTF-8 is a specific encoding of Unicode, and is currently limited to four octets (bytes) per code point, which matches the limitations of UTF-16. In particular, UTF-16 only supports 17 planes. Previously, UTF-8 supported 6 octets per codepoint, and was designed to support 32768 planes. In principle this 4 byte limit could be lifted, but that would break the current organization structure of Unicode, and would require UTF-16 to be phased out – unlikely to happen in the near future considering how entrenched it is in certain operating systems and programming languages. The only reason UTF-16 is still in common use is that it's an extension to the flawed UCS-2 encoding which only supported a single Unicode plane. It otherwise inherits undesirable properties from both UTF-8 (not fixed-width) and UTF-32 (not ASCII compatible, waste of space for common data), and requires byte order marks to declare endianness. Given that despite these problems UTF-16 is still popular, I'm not too optimistic that this is going to change by itself very soon. Hopefully, our new Alien Overlords will see this impediment to Their rule, and in Their wisdom banish UTF-16 from the face of the earth . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303438",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47136/"
]
} |
303,478 | BACKGROUND : I'm trying to use Uncle Bob's clean architecture in my Android app. I studied many open source projects that are trying to show the right way to do it, and I found an interesting implementation based on RxAndroid. WHAT I NOTICED : In every layer (presentation, domain and data), there's a model class for the same entity (talking UML). Plus, there are mapper classes that take care of object transformation whenever the data crosses the boundaries (from one layer to another). QUESTION : Is it required to have model classes in every layer when I know that they'll all end up with the same attributes if all CRUD operations are needed? Or is it a rule or a best practice when using the clean architecture? | In my opinion, that's absolutely not how it's meant. And it's a violation of DRY. The idea is that the entity / domain object in the middle is modeled to represent the domain as well and as conveniently as possible. It is in the center of everything and everything can depend on it since the domain itself doesn't change most of the time. If your database on the outside can store those objects directly, then mapping them to another format for the sake of separating layers is not just pointless but also creates duplicates of the model, and that is not the intention. To begin with, the clean architecture was made with a different typical environment / scenario in mind: business server applications with behemoth outer layers that need their own types of special objects. For example, databases that produce SQLRow objects and need SQLTransactions in return to update items. If you were to use those in the center, you would violate the dependency direction because your core would depend on the database. With lightweight ORMs that load and store entity objects, that's not the case. They do the mapping between their internal SQLRow and your domain. Even if you need to put an @Entity annotation of the ORM into your domain object, I'd argue that this does not establish a "mention" of the outer layer. Because annotations are just metadata, no code that isn't specifically looking for them will see them. And more importantly, nothing needs to change if you remove them or replace them with a different database's annotation (a small sketch of such a class follows below). In contrast, if you do change your domain and you made all those mappers, you have to change a lot. Amendment: Above is a little oversimplified and could even be wrong, because there is a part in clean architecture that wants you to create a representation per layer. But that has to be seen in the context of the application. Namely the following, here: https://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html The important thing is that isolated, simple, data structures are passed across the boundaries. We don’t want to cheat and pass Entities or Database rows. We don’t want the data structures to have any kind of dependency that violates The Dependency Rule. Passing entities from the center towards the outer layers does not violate the dependency rule, yet they are mentioned. But this has a reason in the context of the envisioned application. Passing entities around would move the application logic towards the outside. Outer layers would need to know how to interpret the inner objects; they would effectively have to do what inner layers like the "use case" layer are supposed to do. Besides that, it also decouples layers so that changes to the core don't necessarily require changes in outer layers (see SteveCallender's comment).
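Coming back to the ORM annotation point from above, here is a minimal sketch of what I mean (the class name and its fields are invented for illustration, and the annotations are JPA-style; substitute whatever your ORM uses): @Entity
public class Photo {
    @Id
    private long id;
    private String title;
    private String fileName;

    // Domain behaviour lives here; nothing in this class calls the ORM,
    // and removing the two annotations would not break any caller.
    public boolean isTitled() {
        return title != null && !title.isEmpty();
    }
}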
In the context of that kind of application, it's easy to see how objects should represent specifically the purpose they are used for. Also that layers should talk to each other in terms of objects that are made specifically for the purpose of this communication. This can even mean that there are 3 representations: 1 in each layer, 1 for transport between layers. And there is https://blog.8thlight.com/uncle-bob/2011/11/22/Clean-Architecture.html which addresses the above: Other folks have worried that the net result of my advice would be lots of duplicated code, and lots of rote copying of data from one data structure to another across the layers of the system. Certainly I don’t want this either; and nothing I have suggested would inevitably lead to repetition of data structures and an inordinate amount of field copying. That IMO implies that plain 1:1 copying of objects is a smell in the architecture because you're not actually using the proper layers and/or abstractions. He later explains how he imagines all the "copying": You separate the UI from the business rules by passing simple data structures between the two. You don’t let your controllers know anything about the business rules. Instead, the controllers unpack the HttpRequest object into a simple vanilla data structure, and then pass that data structure to an interactor object that implements the use case by invoking business objects. The interactor then gathers the response data into another vanilla data structure and passes it back to the UI. The views do not know about the business objects. They just look in that data structure and present the response. In this application, there is a big difference between the representations. The data that flows isn't just the entities. And this warrants and demands different classes. However, applied to a simple Android application like a photo viewer where the Photo entity has about 0 business rules and the "use case" that deals with them is nearly non-existent and is actually more concerned about caching & downloading (that process should IMO be represented more explicitly), the point of making separate representations of a photo starts to vanish. I even get the feeling that the photo itself is the data transfer object while the real business-logic-core-layer is missing. There is a difference between "separate the UI from the business rules by passing simple data structures between the two" and "when you want to display a photo rename it 3 times on the way". Besides that, the point where I see those demo applications fail at representing the clean architecture is that they add huge emphasis on separating layers for the sake of separating layers but effectively hide what the application does. That is in contrast to what is said in https://blog.8thlight.com/uncle-bob/2011/09/30/Screaming-Architecture.html - namely that the architecture of a software application should scream about the use cases of the application. I don't see that emphasis on separating layers in the clean architecture. It's about dependency direction and focusing on representing the core of the application - entities and use cases - ideally in plain Java without dependencies towards the outside. It's not so much about dependencies towards that core. So if your application actually has a core that represents business rules and use cases, and/or different people work on different layers, please separate them in the intended way. If, on the other hand, you're just writing a simple app all by yourself, don't overdo it.
2 layers with fluent bounds may be more than enough.
And layers can be added later on as well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303478",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/205342/"
]
} |
303,553 | After a discussion with some of my colleagues, I have a 'philosophical' question about how to treat the char data type in Java, following best practices. Suppose a simple scenario (obviously this is only a very simple example in order to give practical meaning to my question) where, given a String 's' as input, you have to count the number of numeric characters present in it. These are the 2 possible solutions: 1) for(int i=0; i<s.length(); i++) {
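// 48 and 57 are the ASCII/Unicode code points of '0' and '9'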
if(s.charAt(i) >= 48 && s.charAt(i) <= 57) {
n++;
}
} 2) for(int i=0; i<s.length(); i++) {
if(s.charAt(i) >= '0' && s.charAt(i) <= '9' ) {
n++;
}
} Which of the two is more 'clean' and compliant with the Java best practices? | Both are horrible, but the first is more horrible. Both ignore Java's built-in capability to decide what characters are "numeric" (via methods in Character ). But the first one not only ignores the Unicode nature of strings, assuming that there can be only 0123456789, it also obscures even this invalid reasoning by using character codes that make sense only if you know something about the history of character encodings. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303553",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/203660/"
]
} |
303,854 | Suppose I have a REST API that is also used to set/reset passwords. Let's also suppose that this works over an HTTPS connection. Is there any good reason not to put that password in the call path? Let's also say I will encode it in BASE64. An example would be to reset a password like this: http://www.example.com/user/joe/resetpassword/OLDPASSWD/NEWPASSWD I understand BASE64 is not encryption, but I only want to protect the password against shoulder surfing in this case. | A good server logs all requests sent to it, including URLs (often without the variable part after '?'), source IP, execution time... Do you really want this log (potentially read by a wide group of admins) to contain security-critical info such as passwords? Base64 isn't going to stop them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303854",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/70273/"
]
} |
303,980 | I understand what they determine, but is it really useful to assign those to the issues found? I mean, an issue either needs to be fixed quickly or it doesn't. I know how to set them, categorize them, etc. I know IEEE/ISO require doing that. I just do not see why. | It is absolutely possible to have those values differ. If you have a sale to make to an important government agency that requires high performance but won't ever use module X, then it makes a lot of business sense to fix a minor database availability error sooner than a severe error in the X module. Basically, technical reasons are not the only factor when you run a software business. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/303980",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/197285/"
]
} |
304,169 | I guess this is another question about hard coding and best practices. Say I have a list of values, let's say fruit, stored in the database (it needs to be in the database as the table is used for other purposes such as SSRS reports), with an ID: 1 Apple
2 Banana
3 Grapes I may present them to the user, he selects one, it gets stored in his profile as FavouriteFruit and the ID stored in his record in the database. When it comes to business rules / domain logic, what are the recommendations for assigning logic to specific values. Say if the user has selected Grapes I want to perform some extra task, what's the best way to reference the Grapes value: // Hard coded name
if (user.FavouriteFruit.Name == "Grapes")
// Hard coded ID
if (user.FavouriteFruit.ID == 3) // Grapes
// Duplicate the list of fruits in an enum
if (user.FavouriteFruit.ID == (int)Fruits.Grapes) or something else? Because of course the FavouriteFruit will be used throughout the application, the list may be added to, or edited. Someone may decide that they want 'Grapes' renamed to 'Grape' and this would of course break the hardcoded string option. The hardcoded ID isn't completely clear although, as shown you could just add a comment to quickly identify which item it is. The enum option involves duplicating data from the database which seems wrong as it may get out of sync. Anyway, thanks in advance for any comments or suggestions. | Avoid strings and magic constants at all costs. They are completely out of the question, they should not even be considered as options. This appears to leave you with only one viable option: identifiers, that is, enums. However, there is also one more option, which in my opinion is the best. Let's call this option "Preloaded Objects". With Preloaded Objects, you can do the following: if( user.FavouriteFruit.ID == MyApplication.Grape.ID ) What has just happened here is that I have obviously loaded the entire row of Grape into memory, so I have its ID ready to use in comparisons. If you happen to be using Object-Relational Mapping (ORM), it looks even better: if( user.FavouriteFruit == MyApplication.Grape ) (That's why I call it "Preloaded Objects".) So, what I do is that during startup I load all of my "enumeration" tables (small tables like days of the week, months of the year, genders, etc.) into the main application domain class. I load them by name, because obviously, MyApplication.Grape must receive the row called "Grape", and I assert that each and every one of them is found. If not, we have a guaranteed run-time error during startup, which is the least malignant of all run-time errors. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304169",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/197597/"
]
} |