source_id | question | response | metadata |
---|---|---|---|
304,244 | The stack pointer points to the top of the stack, which stores data on what we call a "LIFO" basis. To steal someone else's analogy, it's like a stack of dishes in which you put and take dishes at the top. The stack pointer, OTOH, points to the top "dish" of the stack. At least, that's true for x86. But why does the computer/program "care" what the stack pointer's pointing to? In other words, what purpose does having the stack pointer and knowing where it points to serve? An explanation understandable by C programmers would be appreciated. | What purpose does this stack actually serve, as opposed to explaining its structure? You have many answers which accurately describe the structure of the data stored on the stack, which I note is the opposite of the question you asked. The purpose that the stack serves is: the stack is part of the reification of continuation in a language without coroutines . Let's unpack that. Continuation is simply put, the answer to the question "what is going to happen next in my program?" At every point in every program something is going to happen next. Two operands are going to be computed, then the program continues by computing their sum, and then the program continues by assigning the sum to a variable, and then... and so on. Reification is just a highfalutin word for making a concrete implementation of an abstract concept. "What happens next?" is an abstract concept; the way the stack is laid out is a part of how that abstract concept is turned into a real machine that really computes things. Coroutines are functions that can remember where they were, yield control to another coroutine for a while, and then resume where they left off later, but not necessarily immediately after the just-called coroutine yields. Think of "yield return" or "await" in C#, which must remember where they were when the next item is requested or the asynchronous operation completes. Languages with coroutines or similar language features require more advanced data structures than a stack in order to implement continuation. How does a stack implement continuation? Other answers say how. The stack stores (1) values of variables and temporaries whose lifetimes are known to be not greater than the activation of the current method, and (2) the address of the continuation code associated with the most recent method activation. In languages with exception handling the stack may also store information about the "error continuation" -- that is, what the program will do next when an exceptional situation occurs. Let me take this opportunity to note that the stack does not tell you "where did I come from?" -- though it is often so used in debugging. The stack tells you where you are going to next , and what the values of the activation's variables will be when you get there . The fact that in a language without coroutines, where you are going next is almost always where you came from makes this kind of debugging easier. But there is no requirement that a compiler store information about where control came from if it can get away without doing so. Tail-call optimizations for example destroy information about where the program control came from. Why do we use the stack to implement continuation in languages without coroutines? 
Because the characteristic of synchronous activation of methods is that the pattern of "suspend the current method, activate another method, resume the current method knowing the result of the activated method" when composed with itself logically forms a stack of activations. Making a data structure that implements this stack-like behaviour is very cheap and easy. Why is it so cheap and easy? Because chip sets have been for many decades specifically designed to make this sort of programming easy for compiler writers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304244",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/164475/"
]
} |
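Following up on the answer above, here is a minimal C sketch of what an activation holds. The function names are invented and the frame layout is deliberately simplified (real layouts are ABI- and compiler-specific), but it shows the two things the answer lists: the activation's locals and the address of the code to continue at.

```c
#include <stdio.h>

/* When caller() invokes square(), the machine pushes a frame that holds,
   roughly: (1) space for square()'s locals/temporaries (here `result`), and
   (2) the address of the instruction in caller() to continue at -- the
   "what happens next". Returning pops the frame and jumps to that address. */
static int square(int x) {
    int result = x * x;   /* lives only as long as this activation */
    return result;        /* resume caller() at its saved continuation */
}

static void caller(void) {
    int y = square(7);    /* continuation: "store into y, then print it" */
    printf("%d\n", y);
}

int main(void) {
    caller();
    return 0;
}
```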
304,270 | From : http://www.artima.com/lejava/articles/designprinciples4.html Erich Gamma: I still think it's true even after ten years. Inheritance is a cool way to change behavior. But we know that it's brittle, because the subclass can easily make assumptions about the context in which a method it overrides is getting called. There's a tight coupling between the base class and the subclass, because of the implicit context in which the subclass code I plug in will be called. Composition has a nicer property. The coupling is reduced by just having some smaller things you plug into something bigger, and the bigger object just calls the smaller object back. From an API point of view defining that a method can be overridden is a stronger commitment than defining that a method can be called. I don't understand what he means. Could anyone please explain it? | A commitment is something that reduces your future options. Publishing a method implies that users will call it, therefore you can't remove this method without breaking compatibility. If you'd kept it private , they couldn't (directly) call it, and you could some day refactor it away without problems. Therefore, publishing a method is a stronger commitment than not publishing it. Publishing an overridable method is an even stronger commitment. Your users can call it, and they can create new classes where the method doesn't do what you think it does! For instance, if you publish a clean-up method, you can ensure that resources are properly deallocated as long as users remember to call this method as the last thing they do.
But if the method is overridable, someone might override it in a subclass and not call super . As a result, a third user might use that class and cause a resource leak even though they dutifully called cleanup() at the end ! This means that you can no longer guarantee the semantics of your code, which is a very bad thing. Essentially, you can no longer rely on any code running in user-overridable methods, because some middleman might override it away. This means that you have to implement your clean-up routine entirely in private methods, with no help from the user. Therefore it's usually a good idea to publish only final elements unless they are explicitly intended for overriding by API users. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304270",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/204027/"
]
} |
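A hedged Java illustration of the clean-up scenario described in the answer above; the class names are invented. The subclass overrides cleanup() without calling super, so the caller's dutiful cleanup() call no longer guarantees anything.

```java
class Resource {
    private boolean open = true;

    public void cleanup() {      // overridable: the stronger commitment
        open = false;            // supposed to release the resource
    }

    public boolean isOpen() { return open; }
}

class FancyResource extends Resource {
    @Override
    public void cleanup() {
        // forgets to call super.cleanup(), so the resource is never released
        System.out.println("doing my own cleanup");
    }
}

class CommitmentDemo {
    public static void main(String[] args) {
        Resource r = new FancyResource();
        r.cleanup();                       // the caller did the right thing...
        System.out.println(r.isOpen());    // ...yet this prints true: a leak
    }
}
```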
304,335 | Liskov's work in this area focused on behavioral subtyping, which besides the type system safety discussed in this article also requires that subtypes preserve all invariants guaranteed by the supertypes in some contract.[3] This definition of subtyping is generally undecidable, so it cannot be verified by a type checker. From : http://www.wikiwand.com/en/Subtyping#/Function_types | Let the contract of operation o of Type T be that it halts for all inputs. Now decide whether operation o of subtype S <: T satisfies that contract: you have just solved the Halting Problem . More generally, S::o must compute the same function as T::o if S <: T . Deciding whether two programs compute the same function is called the Function Problem and is equivalent to solving the Halting Problem. In general, statically deciding any non-trivial runtime property is almost always equivalent to the Halting Problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304335",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/204027/"
]
} |
304,445 | I understand that we should use %s to concatenate a string rather than + in Python. I could do any of: hello = "hello"
world = "world"
print hello + " " + world
print "%s %s" % (hello, world)
print "{} {}".format(hello, world)
print ' '.join([hello, world]) But why should I use anything other than the + ? It's quicker to write concatenation with a simple + . Then if you look at the formatting string, you specify the types, e.g. %s and %d and such. I understand it could be better to be explicit about the type. But then I read that using + for concatenation should be avoided even though it's easier to type. Is there a clear reason that strings should be concatenated in one of those other ways? | Readability. The format string syntax is more readable, as it separates style from the data. Also, in Python, the %s syntax will automatically coerce any non-str types to str, while concatenation only works with str, and you can't concatenate str with int. Performance. In Python str is immutable, so the left and right strings have to be copied into the new string for every pair of concatenations. If you concatenate four strings of length 10, you will be copying (10+10) + ((10+10)+10) + (((10+10)+10)+10) = 90 characters, instead of just 40 characters. And things get quadratically worse as the number and size of the strings increase. Java optimizes this case some of the time by transforming a series of concatenations to use StringBuilder , but CPython doesn't. For some use cases, the logging library provides an API that uses a format string to create the log entry string lazily ( logging.info("blah: %s", 4) ). This is great for performance if the logging library decides that the current log entry will be discarded by a log filter, so it doesn't need to format the string. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304445",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
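To make the coercion and lazy-logging points above concrete, a small sketch in Python 3 syntax (the question's snippets use Python 2 print statements):

```python
import logging

count = 4

# print("blah: " + count)        # TypeError: can only concatenate str (not "int") to str
print("blah: " + str(count))     # + needs an explicit str()
print("blah: %s" % count)        # %s coerces the int for you
print("blah: {}".format(count))  # so does str.format

# The lazy-logging idiom mentioned above: the message is only formatted
# if the record is actually emitted.
logging.basicConfig(level=logging.INFO)
logging.info("blah: %s", count)
```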
304,453 | I've seen some programmers use this: foreach (var item in items)
{
if (item.Field != null)
continue;
if (item.State != ItemStates.Deleted)
continue;
// code
} instead of where I'd normally use: foreach (var item in items.Where(i => i.Field != null && i.State != ItemStates.Deleted))
{
// code
} I've even seen a combination of both. I really like the readability with 'continue', especially with more complex conditions. Is there even a difference in performance? With a database query I'm assuming there would be. What about regular lists? | I would regard this as an appropriate place to use command/query separation . For example: // query
var validItems = items.Where(i => i.Field != null && i.State != ItemStates.Deleted);
// command
foreach (var item in validItems) {
// do stuff
} This also allows you to give a good self-documenting name to the query result. It also helps you see opportunities for refactoring, because it's much easier to refactor code that only queries data or only mutates data than mixed code that tries to do both. When debugging, you can break before foreach to quickly check whether the contents of validItems resolve to what you expect. You don't have to step into the lambda unless you need to. If you do need to step into the lambda, then I suggest factoring it out into a separate function, then step through that instead. Is there a difference in performance? If the query is backed by a database, then the LINQ version has the potential to run faster, because the SQL query may be more efficient. If it's LINQ to Objects, then you won't see any real performance difference. As always, profile your code and fix the bottlenecks that are actually reported, rather than trying to predict optimisations in advance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304453",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/113569/"
]
} |
304,574 | There seems to be a recent trend in JavaScript towards treating data structures as immutable. For example, if you need to change a single property of an object, better to just create a whole new object with the new property, and just copy over all the other properties from the old object, and let the old object be garbage collected. (That's my understanding anyway.) My initial reaction is, that sounds like it would be bad for performance. But then libraries like Immutable.js and Redux.js are written by smarter people than me, and seem to have a strong concern for performance, so it makes me wonder if my understanding of garbage (and its performance impact) is wrong. Are there performance benefits to immutability I'm missing, and do they outweigh the downsides of creating so much garbage? | For example, if you need to change a single property of an object, better to just create a whole new object with the new property, and just copy over all the other properties from the old object, and let the old object be garbage collected. Without immutability, you might have to pass an object around between different scopes, and you do not know beforehand if and when the object will be changed. So to avoid unwanted side effects, you start creating a full copy of the object "just in case" and pass that copy around, even if it turns out no property has to be changed at all. That will leave a lot more garbage than in your case. What this demonstrates is - if you create the right hypothetical scenario, you can prove anything, especially when it comes to performance. My example, however, is not so hypothetical as it might sound. I worked last month on a program where we stumbled over exactly that problem because we initially decided against using an immutable data structure, and hesitated to refactor this later because it did not seem worth the hassle. So when you look at cases like this one from an older SO post , the answer to your questions becomes probably clear - it depends . For some cases immutability will hurt performance, for some the opposite might be true, for lots of cases it will depend on how smart your implementation is, and for even more cases the difference will be negligible. A final note: a real world problem you might encounter is you need to decide early for or against immutability for some basic data structures. Then you build a lot of code upon that, and several weeks or months later you will see if the decision was a good or a bad one. My personal rule of thumb for this situation is: If you design a data structure with only a few attributes based on primitive or other immutable types, try immutability first. If you want to design a data type where arrays with large (or undefined) size, random access and changing contents are involved, use mutability. For situations between these two extremes, use your judgement. But YMMV. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30385/"
]
} |
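A small JavaScript sketch of the "copy just in case" point made in the answer above; the function names are invented.

```javascript
// Mutable world: the caller cannot know whether draw() will modify the state,
// so it clones defensively on every call -- garbage even when nothing changes.
function renderDefensively(state) {
  const copy = JSON.parse(JSON.stringify(state)); // deep copy "just in case"
  return draw(copy);
}

// Immutable convention: state is never modified in place, so it can be shared.
function renderShared(state) {
  return draw(state); // no copy, no extra garbage
}

// Updating one property produces a new object; the old one is left for the GC,
// exactly as described in the question.
function withVolume(state, volume) {
  return { ...state, volume };
}

function draw(state) {
  return `volume is ${state.volume}`;
}

console.log(renderDefensively({ track: 3, volume: 5 }));           // "volume is 5"
console.log(renderShared(withVolume({ track: 3, volume: 5 }, 9))); // "volume is 9"
```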
304,593 | I am trying to understand how to properly store ordered information in a relational database. An example: Say I have a Playlist, consisting of Songs. Inside my Relational Database, I have a table of Playlists , containing some metadata (name, creator, etc). I also have a table called Songs , containing the playlist_id , as well as song-specific info (name, artist, duration, etc). By default, when a new Song is added to a Playlist, it is appended to the end. When ordering on Song-ID (ascending), the order will be the order of addition. But what if a user should be able to re-order songs in the playlist? I came up with a couple of ideas, each with their advantages and disadvantages: A column called order , which is an integer . When a song is moved, the order of all songs between its old and new position is changed to reflect the move. The drawback of this is that a lot of queries need to be done each time a song is moved, and the moving algorithm is not as trivial as with the other options. A column called order , which is a decimal ( NUMERIC ). When a song is moved, it is assigned the floating-point value between the two adjacent numbers. Drawback: decimal fields take more space, and it might be possible to run out of precision unless care is taken to re-distribute the range after every few changes. Another way would be to have a previous and a next field that reference other Songs (or are NULL in the case of the first, resp. last, song in the playlist; basically you create a linked list ). Drawback: queries like 'find the Xth Song in the list' are no longer constant-time, but instead linear-time. Which of these procedures is most often used in practice? Which of these procedures is fastest on medium to large databases? Are there any other ways to achieve this? EDIT: For simplicity's sake, in the example a Song only belongs to one Playlist (a many-to-one relationship). Of course, one could also use a Junction Table so song⟷playlist is a many-to-many relationship (and apply one of the above strategies to that table). | Databases are optimized for certain things. Updating lots of rows quickly is one of them. This becomes especially true when you let the database do its work. Consider: order song
1 Happy Birthday
2 Beat It
3 Never Gonna Give You Up
4 Safety Dance
5 Imperial March If you want to move Beat It to the end, you would need two queries: update table
set order = order - 1
where order >= 2 and order <= 5;
update table
set order = 5
where song = 'Beat It' And that's it. This scales up very well with very large numbers. Try putting a few thousand songs in a hypothetical playlist in your database and see how long it takes to move a song from one location to another. As these have very standardized forms: update table
set order = order - 1
where order >= ? and order <= ?;
update table
set order = ?
where song = ? You have two prepared statements that you can reuse very efficiently. This provides some significant advantages - the order of the table is something that you can reason about. The third song has an order of 3, always. The only way to guarantee this is to use consecutive integers as the order. Using pseudo-linked lists or decimal numbers or integers with gaps won't let you guarantee this property; in these cases the only way to get the nth song is to sort the entire table and get the nth record. And really, this is a lot easier than you think it is. It is simple to figure out what you want to do, to generate the two update statements and for other people to look at those two update statements and realize what is being done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304593",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41643/"
]
} |
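The answer above shows a move toward the end of the list; here is a hedged sketch of the opposite move, using an invented playlist_song table and a position column (sidestepping the reserved word ORDER used in the answer's schematic code).

```sql
-- Move 'Imperial March' from position 5 up to position 2:
UPDATE playlist_song
SET position = position + 1
WHERE position >= 2 AND position <= 4;   -- shift the displaced block down by one

UPDATE playlist_song
SET position = 2
WHERE song = 'Imperial March';           -- drop the moved song into the gap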
304,878 | I have heard the story of how Douglas Mcllroy came up with the concept and how Ken Thompson implemented it in one night. As far as I understand, pipe is a system call which shares a piece of memory between two processes where one process writes and other reads from. As someone who is not a familiar with OS internals or concepts, I was wondering what exactly is the "genius" in the story?
Is it the idea of two processes sharing memory? Or is it the implementation? Or both? PS: I am aware of the utility of the pipe or how to use it in shell. The question is about concept and implementation of the | | As far as I understand, pipe is a system call which shares a piece of memory between two processes where one process writes and other reads from. Actually, there is no shared memory involved. The reader and writer are NOT sharing any part of their address space, and they are not using any explicit synchronization. The reading and writing processes are making read and write system calls exactly as they would if they were reading from / writing to a file. THAT is the genius... the innovation: the notion that (simple) interprocess communication and file I/O can be handled the same way... from the perspective of the application programmer and the user. Once the pipe has been set up, the OS (not application code, or libraries in user-space) takes care of the buffering and the coordination. Transparently. By contrast, before the invention of the pipe concept, if you needed to do "pipeline" processing, you would typically have one application write output to a file, and then when it is finished, you would run the second application to read from the file. Alternatively, if you wanted a true pipeline you could code both applications to set up a (real) shared memory segment and use semaphores (or something) to coordinate the reading / writing. Complicated... and as a consequence not often done. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304878",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207378/"
]
} |
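A minimal POSIX C sketch of the point in the answer above: once the pipe exists, the writer just calls write() and the reader just calls read(), exactly as they would on a file, while the kernel does the buffering and coordination.

```c
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;       /* fd[0]: read end, fd[1]: write end */

    pid_t pid = fork();
    if (pid == 0) {                     /* child acts as the writer */
        close(fd[0]);
        const char *msg = "hello through the pipe\n";
        write(fd[1], msg, strlen(msg)); /* ordinary write(), no shared memory */
        close(fd[1]);
        return 0;
    }

    close(fd[1]);                       /* parent acts as the reader */
    char buf[128];
    ssize_t n = read(fd[0], buf, sizeof buf - 1);   /* ordinary read() */
    if (n > 0) {
        buf[n] = '\0';
        fputs(buf, stdout);
    }
    close(fd[0]);
    wait(NULL);                         /* the OS handled all the coordination */
    return 0;
}
```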
304,886 | I was discussing unit/integration testing with a colleague, and he made an interesting case against writing unit tests. I'm a big unit test (JUnit primarily) proponent, but am interested to hear others' takes, as he made some interesting points. To sum up his points: When major code changes occur (new set of POJOs, major application
refactoring, etc.), unit tests tend to be commented out rather than
reworked. Time is better spent on integration tests covering use cases,
which make the smaller-scoped tests less/not-at-all important. Thoughts on this? I'm still pro-unit test (as I consistently see it producing improved code), although integration tests sound at least as valuable. | I tend to side with your friend because all too often, unit tests are testing the wrong things . Unit tests are not inherently bad. But they often test the implementation details rather than the input/output flow. You end up with completely pointless tests when this happens. My own rule is that a good unit test tells you that you just broke something; a bad unit test merely tells you that you just changed something. An example off the top of my head is one test that got tucked into WordPress a few years back. The functionality being tested revolved around filters that called one another, and the tests were verifying that callbacks would then get called in the correct order. But instead of (a) running the chain to verify that callbacks get called in the expected order, the tests focused on (b) reading some internal state that arguably shouldn't have been exposed to begin with. Change the internals and (b) turns red; whereas (a) only turns red if changes to the internals break the expected result while doing so. (b) was clearly a pointless test in my view. If you have a class that exposes a few methods to the outside world, the correct thing to test in my view are the latter methods only . If you test the internal logic as well, you may end up exposing the internal logic to the outside world, using convoluted testing methods, or with a litany of unit tests that invariably break whenever you want to change anything. With all that said, I'd be surprised if your friend is as critical about unit tests per se as you seem to suggest. Rather I'd gather he's pragmatic. That is, he observed that the unit tests that get written are mostly pointless in practice. Quoting: "unit tests tend to be commented out rather than reworked". To me there's an implicit message in there - if they tend to need reworking it is because they tend to suck. Assuming so, the second proposition follows: developers would waste less time writing code that is harder to get wrong - i.e. integration tests. As such it's not about one being better or worse. It's just that one is a lot easier to get wrong, and indeed very often wrong in practice. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304886",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/150047/"
]
} |
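A hedged Java sketch of the (a)-versus-(b) distinction in the answer above, with invented names: the first check observes behaviour (callback order), while the commented-out alternative would peek at internal state and turn red on any refactoring.

```java
import java.util.ArrayList;
import java.util.List;

class FilterChain {
    private final List<Runnable> callbacks = new ArrayList<>();
    void add(Runnable callback) { callbacks.add(callback); }
    void run() { callbacks.forEach(Runnable::run); }

    public static void main(String[] args) {
        // (a) good: run the chain and verify the observable order of calls.
        List<String> calls = new ArrayList<>();
        FilterChain chain = new FilterChain();
        chain.add(() -> calls.add("first"));
        chain.add(() -> calls.add("second"));
        chain.run();
        if (!calls.equals(List.of("first", "second")))
            throw new AssertionError("callbacks ran out of order");

        // (b) bad: asserting on the private `callbacks` field (e.g. via
        // reflection) only tells you that something changed, not that
        // something broke.
        System.out.println("behavioural check passed");
    }
}
```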
304,921 | I have been using GitHub for quite some time now, and I have usually pushed my feature-branches and then started a Pull Request which I myself merged. I found it helped me keep track of where I merged branches. But recently I have been reading more and more about how Git works, and I realised that I can use the merge-commits to refer to when I merged branches. So, what should I do when merging a feature-branch into master: perform a merge-commit on master and then push it upstream, OR push the local branch and start a Pull Request? I have read Introducing Pull Requests for a 2 person team - merge my own requests? and Whats the work flow with 2 people on a project and Should I open pull requests from a branch on the official repo or my fork? but none of them seem to answer what I am looking for. | git-merge mechanism: Using git merge feature while on master merges the branch feature into master and produces a merge-commit (if the branch cannot be fast-forwarded) in the git history. To force a merge-commit to be made, use the --no-ff option with merge . Merge Pull Request mechanism: When we start a Pull Request on GitHub, it creates a GitHub Issue where people can talk and discuss the commits in the PR before merging it. When a PR is merged on GitHub, it does the exact same thing as git merge feature . What should I do? So, as far as history is concerned, there is no difference between the two. And as far as contribution goes, your contributors will not have to do anything different in the two situations. They are the same (minus the nice little chat). Best practices: I was unable to find any established best practice, but logic says that PRs are not much help if there is only a single contributor to a repository. @lxrec and @amon helped me reach this conclusion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/304921",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/168944/"
]
} |
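For concreteness, the two routes from the answer above as commands, assuming a remote named origin and a default branch named master as in the question:

```sh
# 1) Merge locally, then push:
git checkout master
git merge --no-ff feature     # --no-ff forces a merge commit even when a fast-forward is possible
git push origin master

# 2) Pull Request route: push the branch and open the PR on GitHub instead.
git push origin feature
```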
305,006 | In languages like C and C++, while using pointers to variables we need one more memory location to store that address. So isn't this a memory overhead? How is this compensated? Are pointers used in time critical low memory applications? | Actually, the overhead does not really lie in the extra 4 or 8 bytes needed to store the pointer. Most times pointers are used for dynamic memory allocation , meaning that we invoke a function to allocate a block of memory, and this function returns to us a pointer which points to that block of memory. This new block in and of itself represents a considerable overhead. Now, you don't have to engage in memory allocation in order to use a pointer: You can have an array of int declared statically or on the stack, and you can use a pointer instead of an index to visit the int s, and it is all very nice and simple and efficient. No memory allocation needed, and the pointer will usually occupy exactly as much space in memory as an integer index would. Also, as Joshua Taylor reminds us in a comment, pointers are used to pass something by reference. E.g., struct foo f; init_foo(&f); would allocate f on the stack and then call init_foo() with a pointer to that struct . That's very common. (Just be careful not to pass those pointers "upward".) In C++ you might see this being done with a "reference" ( foo& ) instead of a pointer, but references are nothing but pointers that you may not alter, and they occupy the same amount of memory. But the main reason why pointers are used is for dynamic memory allocation, and this is done in order to solve problems that could not be solved otherwise. Here is a simplistic example: Imagine you want to read the entire
contents of a file. Where are you going to store them? If you try
with a fixed-size buffer, then you will only be able to read files
that are not longer than that buffer. But by using memory
allocation, you can allocate as much memory as necessary to read the
file, and then proceed to read it. Also, C++ is an object-oriented language, and there are certain
aspects of OOP like abstraction that are only
achievable using pointers. Even languages like Java and C# make extensive use of pointers, they just don't allow you to directly manipulate the pointers, so as to prevent you from doing dangerous
stuff with them, but still, these languages only begin to make sense
once you have realized that behind
the scenes everything is done using pointers. So, pointers are not only used in time-critical, low-memory applications, they are used everywhere . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305006",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207545/"
]
} |
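A hedged C sketch of the "read a whole file of unknown size" example from the answer above: the buffer is sized at run time with malloc, which is exactly what a fixed array cannot do. Error handling is kept minimal.

```c
#include <stdio.h>
#include <stdlib.h>

char *read_whole_file(const char *path, long *out_size) {
    FILE *f = fopen(path, "rb");
    if (!f) return NULL;

    fseek(f, 0, SEEK_END);            /* find out how big the file is */
    long size = ftell(f);
    rewind(f);

    char *buf = malloc(size + 1);     /* allocate exactly as much as needed */
    if (buf) {
        fread(buf, 1, size, f);
        buf[size] = '\0';
        if (out_size) *out_size = size;
    }
    fclose(f);
    return buf;                       /* the caller frees the buffer */
}

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    long size = 0;
    char *text = read_whole_file(argv[1], &size);
    if (!text) return 1;
    printf("read %ld bytes\n", size);
    free(text);
    return 0;
}
```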
305,093 | So, yes, diagrams can be inappropriate at times. When are they inappropriate? When you create them
without code to validate them, and then intend to follow them. There is nothing wrong with drawing a diagram to
explore an idea. Agile Software Development: Principles, Patterns, and Practices - Robert C. Martin What exactly does he mean by this? Isn't UML designed to help plan out how to structure your code before "diving in"? What's the point of using it if you don't follow the diagrams you came up with? Context: In this chapter, Uncle Bob makes a UML diagram for the Score Keeper of a Bowling game. Then he goes on to develop the program in a test-driven manner, without consulting the UML diagram. The resulting program is nothing like the UML diagram, and Uncle Bob comes to the conclusion quoted above. | To properly explain this, we need a short history lesson. In the early days of software engineering, an often-used analogy was building a house. An architect and structural engineer discuss plans with a customer and come up with a design. Builders then follow that design to build the actual house. Writing code was seen as the equivalent of building the actual house. Thus, there was a perceived need for up-front design before that build could take place. Various graphical design tools were created, with UML being one of them. The original idea with UML was that one would fully design a system with UML, then hand it over to coders to translate that design into code. In reality, this just doesn't work, and it led to years of programmers being seen as "implementers" rather than "designers", projects being late, the designs having to change constantly after they were supposed to be complete, etc. The reason is simple. Coding is design . With the house analogy, the code is the architect's drawings. The compiler is the builder who takes those designs and builds a program from them. This realisation then led to agile techniques, TDD, etc. being born: tools to help improve the quality of that code design. Just as an architect might produce preliminary sketches to help her and her team visualise the overall design, so a developer might use UML, or other tools, to help visualise the design needed. Just as those sketches aren't blindly followed, so the UML should not be blindly followed. The code design should evolve out of agile iterations and using TDD. Likewise, just as an architect might build a model of the house to help her and her team visualise the drawings, so UML can be used to help visualise the code structure. As Uncle Bob says, you can't validate the UML, you can only validate the code. Therefore the code is the prime design documentation, and UML, if used, is secondary documentation only. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305093",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/204027/"
]
} |
305,148 | I've seen a number of questions, like this , asking for advice on how to store enums in DB. But I wonder why would you do that. So let's say that I have an entity Person with a gender field, and a Gender enum. Then, my person table has a column gender. Besides the obvious reason of enforcing correctness, I don't see why I would create an extra table gender to map what I already have in my application. And I don't really like having that duplication. | Let's take another example that is less fraught with conceptions and expectations. I've got an enum here, and it is the set of priorities for a bug. What value are you storing in the database? So, I could be storing 'C' , 'H' , 'M' , and 'L' in the database. Or 'HIGH' and so on. This has the problem of stringly-typed data. There's a known set of valid values, and if you aren't storing that set in the database, it can be difficult to work with. Why are you storing the data in the code? You've got List<String> priorities = {'CRITICAL', 'HIGH', 'MEDIUM', 'LOW'}; or something to that effect in the code. It means that you've got various mappings of this data to the proper format (you're inserting all caps into the database, but you're displaying it as Critical ). Your code is now also difficult to localize. You have bound the database representation of the idea to a string that is stored in the code. Anywhere you need to access this list, you either need to have code duplication or a class with a bunch of constants. Neither of which are good options. One should also not forget that there are other applications that may use this data (which may be written in other languages - the Java web application has a Crystal Reports reporting system used and a Perl batch job feeding data into it). The reporting engine would need to know the valid list of data (what happens if there's nothing marked in 'LOW' priority and you need to know that that is a valid priority for the report?), and the batch job would have the information about what the valid values are. Hypothetically, you might say "we're a single-language shop - everything is written in Java" and have a single .jar that contains this information - but now it means that your applications are tightly coupled to each other and that .jar containing the data. You'll need to release the reporting part and the batch update part along with the web application each time there is a change - and hope that that release goes smoothly for all parts. What happens when your boss wants another priority? Your boss came by today. There's a new priority - CEO . Now you have to go and change all the code and do a recompile and redeploy. With an 'enum-in-the-table' approach, you update the enum list to have a new priority. All the code that gets the list pulls it from the database. Data rarely stands alone With priorities, the data keys into other tables that might contain information about workflows, or who can set this priority or whatnot. Going back to the gender as mentioned in the question for a bit: Gender has a link to the pronouns in use: he/his/him and she/hers/her ... and you want to avoid hard coding that into the code itself. And then your boss comes by and you need to add you've got the 'OTHER' gender (to keep it simple) and you need to relate this gender to they/their/them ... and your boss sees what Facebook has and... well, yeah. 
By restricting yourself to a stringly-typed bit of data rather than an enum table, you've now needed to replicate that string in a bunch of other tables to maintain this relationship between the data and its other bits. What about other datastores? No matter where you store this, the same principle exists. You could have a file, priorities.prop , that has the list of priorities. You read this list in from a property file. You could have a document store database (like CouchDB ) that has an entry for enums (and then write a validation function in JavaScript ): {
"_id": "c18b0756c3c08d8fceb5bcddd60006f4",
"_rev": "1-c89f76e36b740e9b899a4bffab44e1c2",
"priorities": [ "critical", "high", "medium", "low" ],
"severities": [ "blocker", "bad", "annoying", "cosmetic" ]
} You could have an XML file with a bit of a schema: <xs:element name="priority" type="priorityType"/>
<xs:simpleType name="priorityType">
<xs:restriction base="xs:string">
<xs:enumeration value="critical"/>
<xs:enumeration value="high"/>
<xs:enumeration value="medium"/>
<xs:enumeration value="low"/>
</xs:restriction>
</xs:simpleType> The core idea is the same. The data store itself is where the list of valid values needs to be stored and enforced. By placing it here, it is easier to reason about the code and the data. You don't have to worry about defensively checking what you have each time (is it upper case? or lower? Why is there a critical type in this column? etc...) because you know what you are getting back from the datastore is exactly what the datastore is expecting you to send otherwise - and you can query the datastore for a list of valid values. The takeaway The set of valid values is data , not code. You do need to strive for DRY code - but the issue of duplication is that you are duplicating the data in the code, rather than respecting its place as data and storing it in a database. It makes writing multiple applications against the datastore easier and avoids having instances where you will need to deploy everything that is tightly coupled to the data itself - because you haven't coupled your code to the data. It makes testing applications easier because you don't have to retest the entire application when the CEO priority is added - because you don't have any code that cares about the actual value of the priority. Being able to reason about the code and the data independently from each other makes it easier to find and fix bugs when doing maintenance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305148",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/183242/"
]
} |
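A hedged SQL sketch of the "enum lives in the database" approach argued for in the answer above; the table and column names are invented.

```sql
CREATE TABLE priority (
    id   INTEGER PRIMARY KEY,
    name VARCHAR(20) NOT NULL UNIQUE   -- 'CRITICAL', 'HIGH', 'MEDIUM', 'LOW', later 'CEO'
);

CREATE TABLE bug (
    id          INTEGER PRIMARY KEY,
    title       VARCHAR(200) NOT NULL,
    priority_id INTEGER NOT NULL REFERENCES priority (id)  -- only known priorities allowed
);

-- The boss's new priority is a data change, not a code release:
INSERT INTO priority (id, name) VALUES (5, 'CEO');

-- Any application (web app, report, batch job) can query the valid values:
SELECT name FROM priority ORDER BY id;
```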
305,250 | Several servers I have dealt with will return HTTP 200 for requests that the client ought to consider a failure, with something like 'success : false' in the body. This does not seem like a proper implementation of HTTP codes to me, particularly in cases of failed authentication. I have read HTTP error codes pretty succinctly summed up as, '4xx' indicates that the request should not be made again until changed, while '5xx' indicates that the request may or may not be valid and can be retried, but was unsuccessful. In this case 200: login failed, or 200: couldn't find that file, or 200: missing parameter x, definitely seem wrong. On the other hand, I could see the argument being made that '4xx' should only indicate a structural issue with the request. So that is proper to return 200: bad user/password rather than 401 unauthorized because the client is permitted to make the request, but it happens to be incorrect. This argument could be summarized as, if the server was able to process the request and make a determination at all, the response code ought to be 200, and it's up to the client to check the body for further information. Basically, this seems to be a matter of preference. But that is unsatisfying, so if anyone has a reason why either one of these paradigms is more correct, I would like to know. | Interesting question. Basically, we can reduce this down to the right way to classify things in terms analogous to OSI layers. HTTP is commonly defined as an Application Level protocol, and HTTP is indeed a generic Client/Server protocol. However, in practice, the server is almost always a relaying device, and the client is a web browser, responsible for interpreting and rendering content: The server just passes things on to an arbitrary application, and that applications sends back arbitrary scripts which the browser is responsible for executing. The HTTP interaction itself--the request/response forms, status codes, and so on--is mostly an affair of how to request, serve, and render arbitrary content as efficiently as possible, without getting in the way. Many of the status codes and headers are indeed designed for these purposes. The problem with trying to piggyback the HTTP protocol for handling application-specific flows, is that you're left with one of two options: 1) You must make your request/response logic a subset of the HTTP rules; or 2) You must reuse certain rules, and then the separation of concerns tends to get fuzzy. This can look nice and clean at first, but I think it's one of those design decisions you end up regretting as your project evolves. Therefore, I would say it is better to be explicit about the separation of protocols. Let the HTTP server and the web browser do their own thing, and let the app do its own thing. The app needs to be able to make requests, and it needs the responses--and its logic as to how to request, how to interpret the responses, can be more (or less) complex than the HTTP perspective. The other benefit of this approach, which is worth mentioning, is that applications should, generally speaking, not be dependent upon an underlying transport protocol (from a logical point of view). HTTP itself has changed in the past, and now we have HTTP 2 kicking in, following SPDY. If you view your app as no more than an HTTP functionality plugin, you might get stuck there when new infrastructures take over. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207878/"
]
} |
305,464 | I am working on a design, but keep hitting a roadblock. I have a particular class (ModelDef) that is essentially the owner of a complex node tree built by parsing an XML schema (think DOM). I want to follow good design principles (SOLID), and ensure that the resulting system is easily testable. I have every intention of using DI to pass dependencies into the constructor of ModelDef (so that these can easily be swapped out, if need be, during testing). What I'm struggling with, though, is the creation of the node tree. This tree is going to be made up entirely of simple "value" objects which will not need to be independently tested. (However, I may still pass an Abstract Factory into ModelDef to assist with the creation of these objects.) But I keep reading that a constructor should not do any real work (e.g. Flaw: Constructor does Real Work ). This makes perfect sense to me if "real work" means constructing heavy-weigh dependent objects that one might later want to stub out for testing. (Those should be passed in via DI.) But what about light-weight value objects such as this node tree? The tree has to be created somewhere, right? Why not via the constructor of ModelDef (using, say, a buildNodeTree() method)? I don't really want to create the node tree outside of ModelDef and then pass it in (via constructor DI), because creating the node tree by parsing the schema requires a significant amount of complex code -- code that needs to be thoroughly tested. I don't want to relegate it to "glue" code (which should be relatively trivial, and will likely not be directly tested). I have thought of putting the code to create the node tree in a separate "builder" object, but hesitate to call it a "builder", because it doesn't really match the Builder Pattern (which seem to be more concerned with eliminating telescoping constructors). But even if I called it something different (e.g. NodeTreeConstructor), it still feels like a bit of a hack just to avoid having the ModelDef constructor build the node tree. It has to be built somewhere; why not in the object that's going to own it? | And, besides what Ross Patterson suggested, consider this position which is the exact opposite: Take maxims such as "Thou Shalt Not Do Any Real Work In Thy Constructors" with a grain of salt. A constructor is, really, nothing but a static method. So, structurally, there is really not much difference between: a) a simple constructor and a bunch of complex static factory methods, and b) a simple constructor and a bunch of more complex constructors. A considerable part of the negative sentiment towards doing any real work in constructors comes from a certain period of the history of C++ when there was debate as to precisely what state the object will be left in if an exception is thrown within the constructor, and whether the destructor should be invoked in such an event. That part of the history of C++ is over, and the issue has been settled, while in languages like Java there never was any issue of this kind to begin with. My opinion is that if you simply avoid using new in the constructor, (as your intention to employ Dependency Injection indicates,) you should be fine. I laugh at statements like "conditional or looping logic in a constructor is a warning sign of a flaw". Besides all that, personally, I would take the XML parsing logic out of the constructor, not because it is evil to have complex logic in a constructor, but because it is good to follow the "separation of concerns" principle. 
So, I would move the XML parsing logic into some separate class altogether, not into some static methods that belong to your ModelDef class. Amendment I suppose that if you have a method outside of ModelDef which creates a ModelDef from XML, you will need to instantiate some dynamic temporary tree data structure, populate it by parsing your XML, and then create your new ModelDef passing that structure as a constructor parameter. So, that could perhaps be thought of as an application of the "Builder" pattern. There is a very close analogy between what you want to do and the String & StringBuilder pair. However, I have found this Q&A which seems to disagree, for reasons which are not clear to me: Stackoverflow - StringBuilder and Builder Pattern . So, to avoid a lengthy debate over here as to whether StringBuilder does or does not implement the "builder" pattern, I would say feel free to be inspired by how StringBuilder works in coming up with a solution that suits your needs, and postpone calling it an application of the "Builder" pattern until that little detail has been settled. See this brand new question: Programmers SE: Is “StringBuilder” an application of the Builder Design Pattern? | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305464",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208250/"
]
} |
305,618 | Assumptions One of the advantages of header-only libraries for C++ is that they do not need to be compiled separately. In C and C++ inline makes sense only if the function is defined in a header file*. Traditionally, in C, .c/.h layout has been used, where the header represents the minimal public interface of the translation unit. Similarly, .cpp/hpp. Question Are header-only libraries generally more efficient code- and execution time wise than the traditional layout? If so, is this because of extensive inlining or other optimizations? * - defining the function in a header allows the compiler to see the implementation during compilation of any translation unit and practically makes inlining code possible | One of the advantages of header-only libraries for C++ is that they do not need to be compiled separately No, that is not an advantage, quite the opposite - the main part of the library has to be compiled as often as it gets included, not just once. That will typically increase compile times. However, if you are referring to the advantages listed here in Wikipedia : that article is talking about decreased administrative overhead concerning the whole build, packaging and deployment process. In C and C++ inline makes sense only if the function is defined in a header file* This depends on the compiler/linker system, but I guess for most existing C and C++ compilers this is true. Traditionally, in C, .c/.h layout has been used, where the header represents the minimal public interface of the translation unit. Similarly, .cpp/hpp. That is mostly correct. C++ class headers often contain more than the minimal public interface - they typically also contain a lot of private stuff. To mitigate this, things like the PIMPL idiom are used; this is something like "the opposite" of a header-only library, as it tries to minimize the necessary header content. But to answer your main question: this is a trade-off. The more library code one puts into the header files, the more chance the compiler has to optimize the code for speed (whether this really happens, or whether the improvement is noticeable, is a completely different question). On the other hand, too much code in headers increases the compile time. Especially in large C++ projects this can become a serious problem; see "Large Scale C++ Software Design" by John Lakos - though the book is a little bit outdated and some of the problems described in there are addressed by modern compilers, the general ideas/solutions are still valid. In particular, when you are not using a stable (third-party) library, but are developing your own libs during your project, the compile times become apparent. Every time you change something in the lib, you must change a header file, which will cause a recompilation and relinking of all dependent units. IMHO the popularity of header-only libs is caused by the popularity of template metaprogramming. For most compilers, templated libs must be header-only because the compiler can only start the main compile process when the type parameters are provided, and for full compilation and optimization the compiler must see "both at once" - the library code plus the template parameter values. That makes it impossible (or at least hard) to produce any "precompiled" compilation units for such a library. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305618",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |
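The answer above mentions the PIMPL idiom as roughly the opposite of a header-only library; here is a minimal C++ sketch with invented names. In a real project the class definition would live in a header and everything after it in a .cpp file, so changing Impl would not force clients to recompile.

```cpp
#include <memory>
#include <string>

// ---- would live in widget.hpp: only the minimal public interface ----
class Widget {
public:
    Widget();
    ~Widget();
    std::string label() const;
private:
    struct Impl;                  // private details are only declared here
    std::unique_ptr<Impl> impl_;
};

// ---- would live in widget.cpp: the hidden implementation ----
struct Widget::Impl {
    std::string label = "hello";
    int cached_width = 0;         // implementation detail invisible to clients
};

Widget::Widget() : impl_(std::make_unique<Impl>()) {}
Widget::~Widget() = default;      // defined where Impl is complete
std::string Widget::label() const { return impl_->label; }

int main() {
    Widget w;
    return w.label() == "hello" ? 0 : 1;
}
```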
305,641 | While designing my first 'serious' C++ library, I'm asking myself: Is it good style to derive one's exceptions from std::exception and its offspring?! Even after reading Designing exception classes What is a 'good number' of exceptions to implement for my library? I'm still not sure. Because, besides common (but maybe not good) practice, I would assume, as a library user, that a library function would throw std::exception s only when standard library functions failed in the library implementation and it can't do anything about it. But still, when writing application code, for me it's very convenient, and also IMHO good looking, to just throw a std::runtime_error . My users can also rely on the defined minimum interface, like what() or codes. And if, for example, my user supplies faulty arguments, what would be more convenient than to throw a std::invalid_argument ?
So, combined with the still-common use of std::exception I see in others' code:
Why not go even further and derive from your custom exception class (e.g. lib_foo_exception) and also from std::exception . Thoughts? | All exceptions should inherit from std::exception . Suppose, for example, I need to call ComplexOperationThatCouldFailABunchOfWays() , and I want to handle any exceptions that it could throw. If everything inherits from std::exception , this is easy. I only need a single catch block, and I have a standard interface ( what() ) for getting details. try {
ComplexOperationThatCouldFailABunchOfWays();
} catch (std::exception& e) {
cerr << e.what() << endl;
} If exceptions do NOT inherit from std::exception , this gets much uglier: try {
ComplexOperationThatCouldFailABunchOfWays();
} catch (std::exception& e) {
cerr << e.what() << endl;
} catch (Exception& e) {
cerr << e.Message << endl;
} catch (framework_exception& e) {
cerr << e.Details() << endl;
} Regarding whether to throw runtime_error or invalid_argument versus creating your own std::exception subclasses to throw: My rule of thumb is to introduce a new subclass whenever I need to handle a particular type of error differently than other errors (i.e., whenever I need a separate catch block). If I introduce a new exception subclass for every conceivable type of error, even if I don't need to handle them separately, then that adds a lot of class proliferation. If I reuse existing subclasses to mean something specific (i.e., if a runtime_error thrown here means something different than a generic runtime error), then I run the risk of conflicting with other uses of the existing subclass. If I don't need to handle an error specifically, and if the error that I'm throwing exactly matches one of the existing standard library's errors (such as invalid_argument ), then I reuse the existing class. I just don't see much benefit to adding a new class in this case. (The C++ Core Guidelines disagree with me here - they recommend always using your own classes.) The C++ Core Guidelines have further discussion and examples. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305641",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/192462/"
]
} |
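A hedged C++ sketch of the rule of thumb from the answer above, with invented names: one custom subclass for the error that genuinely needs its own catch block, and a reused standard type for the rest.

```cpp
#include <iostream>
#include <stdexcept>
#include <string>

// Needs separate handling (we retry), so it gets its own subclass.
class connection_lost : public std::runtime_error {
public:
    explicit connection_lost(const std::string& host)
        : std::runtime_error("connection lost to " + host) {}
};

void fetch(const std::string& host, int retries) {
    if (retries < 0)
        throw std::invalid_argument("retries must be non-negative"); // reuse the standard type
    throw connection_lost(host);  // pretend the network dropped
}

int main() {
    try {
        fetch("example.org", 3);
    } catch (const connection_lost& e) {
        std::cerr << "will retry: " << e.what() << '\n';  // specific handling
    } catch (const std::exception& e) {
        std::cerr << "giving up: " << e.what() << '\n';   // everything else via what()
    }
}
```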
305,658 | We use exceptions to let the consumer of the code handle unexpected behaviour in a useful way. Usually exceptions are built around "what happened" scenario - like FileNotFound (we were unable to find file you specified) or ZeroDivisionError (we can divide by 0 ). What if there is a need to specify the expected behaviour of the consumer? For example, imagine we have fetch resource, which performs HTTP request and returns retrieved data. And instead of errors like ServiceTemporaryUnavailable or RateLimitExceeded we would just raise a RetryableError suggesting that the consumer should just retry the request and forget about specific failure. So, we are basically suggesting an action to the caller - the "what to do". We do not do this often because we don't know all the consumers' usecases. But imagine that in some specific component we do know the best course of actions for a caller - so should we then make use of "what to do" approach? | But imagine that is some specific component we do know the best course of actions for a caller. This almost always fails for at least one of your callers, for which this behaviour is incredibly irritating. Don't assume you know best. Tell your users what's happening, not what you assume they should do about it. In many cases it's already clear what a sane course of action should be (and, if it's not, make a suggestion in your user manual). For example, even the exceptions given in your question demonstrate your broken assumption: a ServiceTemporaryUnavailable equates to "try again later", and RateLimitExceeded equates to "woah there chill out, maybe adjust your timer parameters, and try again in a few minutes". But the user may as well want to raise some sort of alarm on ServiceTemporaryUnavailable (which indicates a server problem), and not for RateLimitExceeded (which doesn't). Give them the choice . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305658",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208520/"
]
} |
305,712 | Just curious. The most I have ever had was a for loop within a for loop, because after reading this from Linus Torvalds: Tabs are 8 characters, and thus indentations are also 8 characters.
There are heretic movements that try to make indentations 4 (or even
2!) characters deep, and that is akin to trying to define the value of
PI to be 3. Rationale: The whole idea behind indentation is to clearly define
where a block of control starts and ends. Especially when you've been
looking at your screen for 20 straight hours, you'll find it a lot
easier to see how the indentation works if you have large
indentations. Now, some people will claim that having 8-character indentations makes
the code move too far to the right, and makes it hard to read on a
80-character terminal screen. The answer to that is that if you need
more than 3 levels of indentation, you're screwed anyway, and should
fix your program. https://www.kernel.org/doc/Documentation/CodingStyle I figured it was an unacceptable practice for me to go to a third layer of looping, and would restructure my code (Primarily Qt). Was Linus joking? Does it depend on the language or application? Are there some things which absolutely need three or more levels of looping? | To a degree, I stopped taking this quote seriously at "Tabs are 8 characters" . The whole point of tabulators is that they are not a fixed number of characters (if anything, a tab is one character). What a load of tosh. Similarly, I'm not completely convinced that setting a hard-and-fast rule of "three levels of indentation" is sane (as much as setting a hard-and-fast rule for anything is sane). However, limiting your levels of indentation is in general a reasonable suggestion, and not one that should come as a surprise to you. Ultimately, if your program needs three levels of iteration, that's what your program needs . The spirit of the quote is not to magically alleviate that requirement from your project, but to hive off logic into functions and types so that your code is terser and more expressive. This just feeds back into the same guideline given above regarding indentation levels. It's about how you structure your code and keep it readable, maintainable and fun to modify for years to come. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305712",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/136084/"
]
} |
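As an illustration of the "hive off logic into functions" point made in the answer above, here is a small Python sketch (a hypothetical matrix-multiplication example, not taken from the original post) where the innermost level of iteration is moved into its own named helper, keeping every function shallow and readable.

def dot(row, column):
    # The innermost loop gets a name of its own.
    return sum(a * b for a, b in zip(row, column))

def multiply(matrix_a, matrix_b):
    columns = list(zip(*matrix_b))
    # Only two visible levels remain here; the third lives inside dot().
    return [[dot(row, column) for column in columns] for row in matrix_a]

print(multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]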
305,797 | Although in the code below a simple single item purchase in an e-commerce site is used, my general question is about updating all data members to keep an object's data in valid state at all times. I found "consistency" and "state is evil" as relevant phrases, discussed here: https://en.wikibooks.org/wiki/Object_Oriented_Programming#.22State.22_is_Evil.21 <?php
class CartItem {
private $price = 0;
private $shipping = 5; // default
private $tax = 0;
private $taxPC = 5; // fixed
private $totalCost = 0;
/* private function to update all relevant data members */
private function updateAllDataMembers() {
$this->tax = $this->taxPC * 0.01 * $this->price;
$this->totalCost = $this->price + $this->shipping + $this->tax;
}
public function setPrice($price) {
$this->price = $price;
$this->updateAllDataMembers(); /* data is now in valid state */
}
public function setShipping($shipping) {
$this->shipping = $shipping;
$this->updateAllDataMembers(); /* call this in every setter */
}
public function getPrice() {
return $this->price;
}
public function getTaxAmt() {
return $this->tax;
}
public function getShipping() {
return $this->shipping;
}
public function getTotalCost() {
return $this->totalCost;
}
}
$i = new CartItem();
$i->setPrice(100);
$i->setShipping(20);
echo "Price = ".$i->getPrice().
"<br>Shipping = ".$i->getShipping().
"<br>Tax = ".$i->getTaxAmt().
"<br>Total Cost = ".$i->getTotalCost(); Any disadvantages, or maybe better ways to do this? This is a recurring issue in real-world applications backed by a relational database, and if you do not use stored procedures extensively to push all the validation into the database. I think that the data store should just store data, while the code should do all the run-time state maintaining work. EDIT: this is a related question but does not have a best-practice recommendation regarding a single big function to maintain valid state: https://stackoverflow.com/questions/1122346/c-sharp-object-oriented-design-maintaining-valid-object-state EDIT2: Although @eignesheep's answer is the best, this answer - https://softwareengineering.stackexchange.com/a/148109/208591 - is what fills the lines between @eigensheep's answer and what I wanted to know - code should only process, and global state should be substituted by DI-enabled passing of state between objects. | All else being equal, you should express your invariants in code. In this case you have the invariant $this->tax = $this->taxPC * 0.01 * $this->price; To express this in your code, remove the tax member variable and replace getTaxAmt() with public function getTaxAmt() {
return $this->taxPC * 0.01 * $this->price;
} You should do something similar to get rid of the total cost member variable. Expressing your invariants in your code can help avoid bugs. In the original code, the total cost is incorrect if checked before setPrice or setShipping is called. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305797",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208591/"
]
} |
305,801 | I am currently working on a Java EE application (Spring, Hibernate). I have to load a big XML file (more than 1 gigabyte) into a relational database (Postgres). The application does not use batch processing. I've done some searching but I did not find any solution for the design of the DAO layer: if I use only one transaction, the server will not respond to any request until it finishes inserting the rows into a complex database schema (a huge number of rows: on the order of thousands of added rows for every table). So, using one transaction is not a good idea.
I can split the XML file based on its tags: each tag's content will be inserted as a row.
The idea is to use multithreading to manage transactions (every transaction inserts a defined number of rows). Is it a good idea?
I am having difficulty working out how many transactions are needed to maintain a good response time for the application. I am also looking into how to manage the failure of certain transactions. For example, if only 3 transactions out of 1,000,000 fail, should I retry all of the transactions? While searching, I found that batch-processing frameworks like Spring Batch manage database records and transaction failures. But in my application, we do not use batch processing. Unfortunately, I can not change the database to a NoSQL database or add the Spring Batch framework to the project. N.B: I can not bypass Spring and Hibernate in this project, but I am open to any suggestion, even out of curiosity. | All else being equal, you should express your invariants in code. In this case you have the invariant $this->tax = $this->taxPC * 0.01 * $this->price; To express this in your code, remove the tax member variable and replace getTaxAmt() with public function getTaxAmt() {
return $this->taxPC * 0.01 * $this->price;
} You should do something similar to get rid of the total cost member variable. Expressing your invariants in your code can help avoid bugs. In the original code, the total cost is incorrect if checked before setPrice or setShipping is called. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305801",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208623/"
]
} |
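To illustrate the batching idea described in the question above (commit in fixed-size chunks and retry only the chunks that fail), here is a rough Python outline. It is a sketch only: insert_batch is a hypothetical placeholder for whatever DAO call runs one transaction, and nothing here comes from Spring, Hibernate, or the original answer.

def save_in_batches(records, insert_batch, batch_size=1000, max_retries=3):
    failed_batches = []
    for start in range(0, len(records), batch_size):
        batch = records[start:start + batch_size]
        for attempt in range(max_retries):
            try:
                insert_batch(batch)  # one transaction per batch
                break
            except Exception:
                if attempt == max_retries - 1:
                    failed_batches.append(batch)
    # The caller decides whether to retry, log, or abort the batches that never succeeded.
    return failed_batches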
305,838 | I started learning AngularJS and ASP.NET MVC, but am not sure why to use them both together in the same project? Aren't they both MVC frameworks? Should I be using them both in the same application? Isn't it one or the other? | If you're building a single page application (SPA) , then you probably don't need the "MVC" in ASP.NET MVC . Views, especially dynamic views, are likely delivered/manipulated client-side. Angular handles that just fine. But maybe you don't want a 100% SPA. Then what? Imagine instead 10 pages, but 10 pages that are very dynamic. After a user logs on, there's a little user badge up in the right-hand corner. That's not dynamic. It just shows a few nifty things like the user's "score" and their latest selfie. You cache the nifty things so they can be easily retrieved. Now, you can go two ways with this. If you're a client-side MVC purist, you just fetch the badge data after the initial HTML payload is delivered, just like all the other data. But maybe you're not a purist. Maybe you're the opposite of a purist. Maybe you're an impurist. So, instead of delivering the initial HTML, delivering some JavaScript that will post back to your server, post via JavaScript to grab badge data, and then ultimately merge that data into a view via client-side MVC, you simply decide to merge the data already in your cache into a view on the server and then deliver that as your initial HTML. After your initial HTML is delivered, you proceed with your typical client-side MVC antics. So... MVC on the server and on the client is just a convenient way to organize code that used to be a mess in 2001. You don't have to choose one or the other. You can choose both. Granted, the more you do after that initial HTML is delivered, the less you need server-side MVC. Still, it's there for you if you need it. For example, I worked on a ASP.NET MVC/Angular application where external Angular templates might actually be .NET MVC ActionResult. That means your server controller could merge data into its view, deliver it to Angular as a template, and Angular's controller could then merge its data into the view. I'm not saying this is a good idea, but it just shows that one form of MVC doesn't make the other obsolete. Besides, no matter how you deploy Angular, you're going to need a way to deliver that initial HTML, the templates, and most importantly the data. Why not use a platform that makes it easy? There are many, but .NET MVC is no slouch. Like I said, you can make the initial HTML and external Angular templates the result of an MVC action, but better yet, you can use .NET'S Web API to deliver the data. Web API is as delicious as apricot compote. Summarized: MVC is just a pattern. You may want to use that pattern on any number of physical layers. It can't be used up. Use it freely if it makes sense. Besides, Angular may not be MVC anyway (so says people who care about these things), so feel free to use it with a tool that has "MVC" in the name. Hell, even if it is MVC, mix and match as desired. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208578/"
]
} |
305,886 | The C++ Core Guidelines have the rule ES.20: Always initialize an object . Avoid used-before-set errors and their associated undefined behavior. Avoid problems with comprehension of complex initialization. Simplify refactoring. But this rule doesn't help to find bugs, it only hides them. Let's suppose that a program has an execution path where it uses an uninitialized variable. It is a bug. Undefined behavior aside, it also means that something went wrong, and the program probably doesn't meet its product requirements. When it will be deployed to production, there can be a money loss, or even worse. How do we screen bugs? We write tests. But tests don't cover 100% of execution paths, and tests never cover 100% of program inputs. More than that, even a test covers a faulty execution path - it still can pass. It's undefined behavior after all, an uninitialized variable can have a somewhat valid value. But in addition to our tests, we have the compilers which can write something like 0xCDCDCDCD to uninitialized variables. This slightly improves detection rate of the tests. Even better - there are tools like Address Sanitizer, which will catch all the reads of uninitialized memory bytes. And finally there are static analyzers, which can look at the program and tell that there is a read-before-set on that execution path. So we have many powerful tools, but if we initialize the variable - sanitizers find nothing . int bytes_read = 0;
my_read(buffer, &bytes_read); // err_t my_read(buffer_t, int*);
// bytes_read is not changed on read error.
// It's a bug of "my_read", but detection is suppressed by initialization.
buffer.shrink(bytes_read); // Uninitialized bytes_read could be detected here.
// Another bug: use empty buffer after read error.
use(buffer); There is another rule - if program execution encounters a bug, program should die as soon as possible. No need to keep it alive, just crash, write a crashdump, give it to the engineers for investigation. Initializing variables unnecessarily does the opposite - program is being kept alive, when it would already get a segmentation fault otherwise. | Your reasoning goes wrong on several accounts: Segmentation faults are far from certain to occur. Using an uninitialized variable results in undefined behaviour . Segmentation faults are one way that such behaviour can manifest itself, but appearing to run normal is just as likely. Compilers never fill the uninitialized memory with a defined pattern (like 0xCD). This is something that some debuggers do to assist you in finding places where uninitialized variables get used. If you run such a program outside a debugger, then the variable will contain completely random garbage. It is equally likely that a counter like the bytes_read has the value 10 as that it has the value 0xcdcdcdcd . Even if you are running in a debugger that sets the uninitialized memory to a fixed pattern, they only do so at startup. This means that this mechanism only works reliably for static (and possibly heap-allocated) variables. For automatic variables, which get allocated on the stack or live only in a register, the chances are high that the variable is stored in a location that was used before, so the tell-tale memory pattern has already been overwritten. The idea behind the guidance to always initialize variables is to enable these two situations The variable contains a useful value right from the very beginning of its existence. If you combine that with the guidance to declare a variable only once you need it, you can avoid future maintenance programmers falling in the trap of starting to use a variable between its declaration and the first assignment, where the variable would exist but be uninitialized. The variable contains a defined value that you can test for later, to tell if a function like my_read has updated the value. Without initialization, you can't tell if bytes_read actually has a valid value, because you can't know what value it started with. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305886",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7270/"
]
} |
305,919 | Suppose I have a front-end which is mostly a single-page application written using angular, grunt, and bower. And suppose I have a backend, which is mostly just a REST API sitting on top of an ORM, which stores/retrieves objects from a database, using things like grunt, express and sequelize. The angular application does all of the visual stuff the user sees, but it does so by being a GUI over the services provided by the back-end. It would be desirable to separate these into two different codebases, to permit independent development, versioning, continuous integration, push to development, etc. My question is, what methods are out there for doing this cleanly? Are there recommended best practices for full-stack javascript? Option #1 seems to be a monolith, i.e. "don't separate them". The pro is that the build chain is simple, and everything is in one place - but there seem to be many cons; harder to version independently, a broken front means an un-deployable back, and so on. Option #2 seems to be a quasi-monolith, where the front-end build chain results in writing a bunch of files to the back-end. The dist directory on the front-end would refer to some directory on the back-end, so essentially when the front end minifies, uglifies, etc, it ends up publishing to the back-end, which runs everything. Option #3 seems full-separation: front-end and back-end each run their own servers on different ports, and they are fully separate projects. The drawback seems that they need to be configured to know about each other's ports; the back-end must allow CORS from the front-end, and the front-end needs to know where all those endpoints are expected to be. Option #4 might be to use something like docker-compose to rig the whole thing together. I'm sure there are other options. What's the recommended best practice? | It's a front-end, back-end application, with a REST interface in between. You already have full separation. My vote is for option # 3. You seem worried about configuration, but that's kinda the whole point. Configuration allows you to have full separation without requiring tightly-coupled code bindings. If you're worried about CORS, put everything on one domain. If you must have CORS, the best way to manage that is a configuration. But there is no "best practice" here. The best practice is the one that is going to best meet your specific needs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305919",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/126928/"
]
} |
305,930 | In many books and tutorials, I've heard the practice of memory management stressed and felt that some mysterious and terrible things would happen if I didn't free memory after I'm done using it. I can't speak for other systems (although to me it's reasonable to assume that they adopt a similar practice), but at least on Windows, the Kernel is basically guaranteed to cleanup most resources (with the exception of an odd few) used by a program after program termination. Which includes heap memory, among various other things. I understand why you would want to close a file after you're done using it in order to make it available to the user or why you would want to disconnect a socket connected to a server in order to save bandwidth, but it seems silly to have to micromanage ALL your memory used by your program. Now, I agree that this question is broad since how you should handle your memory is based on how much memory you need and when you need it, so I will narrow the scope of this question to this: If I need to use a piece of memory throughout the lifespan of my program, is it really necessary to free it right before program termination? Edit:
The question suggested as a duplicate was specific to the Unix family of operating systems. Its top answer even specified a tool specific to Linux (e.g. Valgrind). This question is meant to cover most "normal" non-embedded operating systems and why it is or isn't a good practice to free memory that is needed throughout the lifespan of a program. | If I need to use a piece of memory throughout the lifespan of my program, is it really necessary to free it right before program termination? It is not mandatory, but it can have benefits (as well as some drawbacks). If the program allocates memory once during its execution time, and would otherwise never release it until the process ends, it may be a sensible approach not to release the memory manually and rely on the OS. On every modern OS I know, this is safe, at the end of the process all allocated memory is reliably returned to the system. In some cases, not cleaning up the allocated memory explicitly may even be notably quicker than doing the clean-up. However, by releasing all the memory at end of execution explicitly, during debugging / testing, mem leak detection tools won't show you "false positives" it might be much easier to move the code which uses the memory together with allocation and deallocation into a separate component and use it later in a different context where the usage time for the memory need to be controlled by the user of the component The lifespan of programs can change. Maybe your program is a small command line utility today, with a typical lifetime of less than 10 minutes, and it allocates memory in portions of some kb every 10 seconds - so no need to free any allocated memory at all before the program ends. Later on the program is changed and gets an extended usage as part of a server process with a lifetime of several weeks - so not freeing unused memory in between is not an option any more, otherwise your program starts eating up all available server memory over time. This means you will have to review the whole program and add deallocating code afterwards. If you are lucky, this is an easy task, if not, it may be so hard that chances are high you miss a place. And when you are in that situation, you will wish you had added the "free" code to your program beforehand, at the time when you added the "malloc" code. More generally, writing allocating and related deallocating code always pairwise counts as a "good habit" among many programmers: by doing this always, you decrease the probability of forgetting the deallocation code in situations where the memory must be freed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/305930",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208914/"
]
} |
306,082 | ES6 added fat-arrow functions ( => ), which have two major differences from normal functions: shorter syntax (including implicit return if you use a single-expression body) inherit this from surrounding scope These are both very useful features, but seem to me completely separate in their value and application – sometimes I want one, or the other, or both, or neither. It seems odd that if I want to use a short-syntax function, I have to also use the this -modifying behaviour. And vice versa. I don't see why these two capabilities are implemented as a single addition to the language. What if I want to use a short syntax function for its implicit return and brevity (in some context where a full function (..) { return ...} would be slightly less readable), but I want to use this in my function to refer to the calling context? There's no way to do this. CoffeeScript has both -> and => style functions, and apparently ES6 borrowed the => style from there. So my question is, why didn't ES6 also borrow the -> style? | See the proposal to add arrow functions: http://wiki.ecmascript.org/doku.php?id=harmony:arrow_function_syntax 1 What it says is: However, we don’t want CoffeeScript’s ->, it’s confusing to have two
arrows and dynamic this binding is an oft-fired footgun. You can also see some discussion of a previous version of the proposal which did have the -> syntax as well: https://esdiscuss.org/topic/arrow-function-syntax-simplified It appears to come down to the following: Having two arrow syntaxes with subtly different semantics would increase complication and confusion. The dynamic this binding of function() and -> was deemed rarely useful, and a foot-gun. If you really need dynamic this binding, you can still use function(), having a shortcut syntax wasn't very helpful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306082",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/30385/"
]
} |
306,092 | There has been a discussion in chat relating to a question (the question itself being irrelevant to this one), that has revealed I may not know Python whatsoever. In my mind, although terminology differs across languages, we may generally categorise functions as: [free] functions static methods / static member functions non-static methods / non-static member functions Apparently in Python there is another kind of function that doesn't fit into the above categories, one that is a method but "doesn't know its class". What are "class methods" and "instance methods", in Python? | The short answer an instance method knows its instance (and from that, its class) a class method knows its class a static method doesn't know its class or instance The long answer Class methods A class method is one that belongs to the class as a whole. It doesn't require an instance. Instead, the class will automatically be sent as the first argument. A class method is declared with the @classmethod decorator. For example: class Foo(object):
@classmethod
def hello(cls):
print("hello from %s" % cls.__name__)
Foo.hello()
-> "Hello from Foo"
Foo().hello()
-> "Hello from Foo" Instance Methods On the other hand, an instance method requires an instance in order to call it, and requires no decorator. This is by far the most common type of method. class Foo(object):
def hello(self):
print("hello from %s" % self.__class__.__name__)
Foo.hello()
-> TypeError: hello() missing 1 required positional argument: 'self' (note: the above is with python3; with python2 you'll get a slightly different error) Static methods A static method is similar to a class method, but won't get the class object as an automatic parameter. It is created by using the @staticmethod decorator. class Foo(object):
@staticmethod
def hello(cls):
print("hello from %s" % cls.__name__)
Foo.hello()
-> TypeError: hello() missing 1 required positional argument: 'cls' Documentation links Here are links to the relevant python3 documentaton: https://docs.python.org/3.5/library/functions.html#classmethod https://docs.python.org/3.5/library/functions.html#staticmethod The data model documentation has this to say about the difference between class methods and static methods: Static method objects Static method objects provide a way of defeating
the transformation of function objects to method objects described
above. A static method object is a wrapper around any other object,
usually a user-defined method object. When a static method object is
retrieved from a class or a class instance, the object actually
returned is the wrapped object, which is not subject to any further
transformation. Static method objects are not themselves callable,
although the objects they wrap usually are. Static method objects are
created by the built-in staticmethod() constructor. Class method objects A class method object, like a static method object, is a
wrapper around another object that alters the way in which that object
is retrieved from classes and class instances. The behaviour of class
method objects upon such retrieval is described above, under
“User-defined methods”. Class method objects are created by the
built-in classmethod() constructor. Related questions What is the difference between @staticmethod and @classmethod in Python? (stackoverflow) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306092",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17853/"
]
} |
306,098 | Since we are becoming more and more reliant on computing, including very critical tasks of day-to-day life, I was just wondering how those vital components are tested. More technically, how are the compilers and assemblers tested? (I suppose this relates to the halting problem !!) | You can't be certain, but you just assume they are, until you discover they are not. There have been plenty of bugs in compilers and hardware over the years. The way these are tested, for example a compiler, is that they are very narrowly and rigidly defined, carefully written, then tested with an enormous test suite to verify correctness. Add to that the wide user base of a compiler, and more bugs will be detected and reported. A dentist appointment scheduling app, comparatively, has many fewer users, and fewer still that are capable of detecting defects. SQLite consists of about 73k lines of code, while its test suite consists of about 91378k lines of code, more than 1250x times that of SQLite itself. I expect compilers and other core tools have similar ratios. Processors today are designed essentially with software, using hardware description languages like Verilog or VHDL, and those have software tests run on them as well, as well as specialized IO pins for running self tests at the point of manufacture. Ultimately it's a probability game, and repeated and broadly covering testing allows you to push the probability of defects down to an acceptably low level, the same as an other software project. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306098",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207545/"
]
} |
306,105 | Pretty straight-forward. I'm implementing an interface, but there's one property that is unnecessary for this class and, in fact, shouldn't be used. My initial idea was to just do something like: int IFoo.Bar
{
get { throw new NotImplementedException(); }
} I suppose there's nothing wrong with this, per se, but it doesn't feel "right". Has anyone else come across a similar situation before? If so, how did you approach it? | This is a classical example of how people decide to violate the Liskov Subtitution Principle. I strongly discourage it but would encourage possibly a different solution: Perhaps the class you're writing doesn't provide the functionality the interface prescribes if it doesn't have use of all the members of the interface. Alternatively, that interface may be doing multiple things and could be separated per the Interface Segregation Principle. If the first is the case for you, just don't implement the interface on that class. Think of it like an electrical socket where the ground hole is unnecessary so it doesn't actually attach to ground. You don't plug anything with ground in and no big deal! But as soon as you use something which needs a ground - you could be in for a spectacular fail. Better off not punching a fake-ground hole in. So if your class doesn't actually do what the interface intends, don't implement the interface. Here are a few quick bits from wikipedia: Liskov Substitution Principle can be simply formulated as, "Don't strengthen pre-conditions, and don't weaken post-conditions". More formally, the Liskov substitution principle (LSP) is a particular definition of a subtyping relation, called (strong) behavioral subtyping, that was initially introduced by Barbara Liskov in a 1987 conference keynote address entitled Data abstraction and hierarchy. It is a semantic rather than merely syntactic relation because it intends to guarantee semantic interoperability of types in a hierarchy , [...] For semantic interoperability and substitutability between different implementations of the same contracts - you need them all to commit to the same behaviours. Interface Segregation Principle speaks to the idea that interfaces should be separated into cohesive sets such that you don't require an interface that does many disparate things when you only want one facility. Think again of the interface of an electrical socket, it could have a thermostat also, but it would make it harder to install an electrical socket and may make it harder to use for non-heating purposes. Like an electrical socket with a thermostat, large interfaces are hard to implement and hard to use. The interface-segregation principle (ISP) states that no client should be forced to depend on methods it does not use.[1] ISP splits interfaces which are very large into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306105",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/65618/"
]
} |
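To restate the interface-segregation point from the answer above in code, here is a small sketch using Python's abc module rather than C#, purely for illustration; the interface and class names are invented. A class that cannot honour part of a contract implements a narrower contract instead of throwing from a stub.

from abc import ABC, abstractmethod

class Readable(ABC):
    @abstractmethod
    def read(self): ...

class Writable(ABC):
    @abstractmethod
    def write(self, data): ...

class ReadOnlyStore(Readable):
    # No fake write() that raises "not implemented" - this class never claimed to be Writable.
    def __init__(self, items):
        self._items = list(items)

    def read(self):
        return list(self._items)

class ReadWriteStore(Readable, Writable):
    def __init__(self):
        self._items = []

    def read(self):
        return list(self._items)

    def write(self, data):
        self._items.append(data)

Consumers that only read depend on Readable, consumers that write depend on Writable, and nobody is forced to implement members they cannot support.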
306,314 | I'm going to save some string payload in the database. I have two global configurations: encryption compression These can be enabled or disabled using the configuration in a way that either only one of them is enabled, both are enabled or both are disabled. My current implementation is this: if (encryptionEnable && !compressEnable) {
encrypt(data);
} else if (!encryptionEnable && compressEnable) {
compress(data);
} else if (encryptionEnable && compressEnable) {
encrypt(compress(data));
} else {
data;
} I'm thinking about the Decorator pattern. Is it the right choice, or is there perhaps a better alternative? | The only problem I see with your current code is the risk of combinatorial explosion as you add more settings, which can be easily be mitigated by structuring the code more like this: if(compressEnable){
data = compress(data);
}
if(encryptionEnable) {
data = encrypt(data);
}
return data; I'm not aware of any "design pattern" or "idiom" that this could be considered an example of. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306314",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/208350/"
]
} |
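One way to read the answer above is that the two flags simply select which transformation steps run. If more optional settings ever appear, the same shape scales naturally to an ordered list of steps; this is an illustrative Python sketch, not something proposed in the original answer.

def build_steps(compress_enabled, encryption_enabled, compress, encrypt):
    steps = []
    if compress_enabled:
        steps.append(compress)   # compression must run before encryption
    if encryption_enabled:
        steps.append(encrypt)
    return steps

def apply_steps(data, steps):
    for step in steps:
        data = step(data)
    return data

# Example with stand-in transformations:
result = apply_steps("payload", build_steps(True, True, str.upper, lambda s: s[::-1]))
print(result)  # DAOLYAP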
306,377 | In response to Aaronaught's response to the question at: Can't I just use all static methods? Isn't less memory used for a static method? I am under the impression that each object instance carries around its own executable version of a non-static member function. Regardless of how much overhead is involved in calling a static method, regardless of poor OO design, and possible headaches down the road, doesn't it use less memory at runtime? Here is an example: I make a vector of zero-initialized objects. Each object contains one piece of data (a triangle consisting of nine double's). Each object is populated in sequence from data read from a .stl file. Only one static method is needed. Proper OO design dictates that a method dealing with the the data directly should be distributed to each object. Here is the standard OO solution: foreach(obj in vec) {
obj.readFromFile(fileName);
} Each obj carries readFromFile 's compiled code alongside the data! Memory is more of a concern than performance in this case, and there is a LOT of data on a constrained system. Solutions: Namespace method (great for C++ but not possible in Java) One static method in obj's class. Executable code is kept in one place at runtime. There is a small overhead to call the method. A parent class from which obj is derived, which contains the private method readFromFile . Call with super.callPrivateMethod() which calls readFromFile . Messy, and still some memory overhead in each object. Implement readFromFile outside obj's scope, so in vec's class or in the calling class. This, in my opinion, breaks data encapsulation. I realize for large amounts of data one explicit object for each triangle is not the best approach. This is only an example. | Methods are not stored on a per-instance basis, even with virtual methods. They're stored in a single memory location, and they only "know" which object they belong to because the this pointer is passed when you call them. The only extra memory required in C++ is if you're using virtual methods, which require a single extra pointer per instance to point at the virtual method table (in Java of course you always have a base class with virtual methods, Object , so it's unavoidable). If it worked the way you described, OO languages would not be very popular! Feel free to add as many methods as you need to your objects. It won't affect their memory usage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306377",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/209593/"
]
} |
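The central claim of the answer above - that a method's code exists once rather than once per instance - is easy to observe directly. Here is a tiny Python check (illustrative only; the class is made up, and the exact mechanics differ from C++ or Java even though the storage principle is the same):

class Triangle:
    def __init__(self, points):
        self.points = points       # per-instance data

    def read_from(self, data):     # shared code, stored once on the class
        self.points = list(data)

a, b = Triangle([]), Triangle([])
# Each instance stores only its data; both bound methods wrap the same function object.
print(a.read_from.__func__ is b.read_from.__func__)  # True
print(a.read_from.__func__ is Triangle.read_from)    # True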
306,380 | I am working on a legacy code base and I need to find a way to write unit tests for this project. The project has a three layer architecture ( UI-Biz-DAL as we call them) and DAL is totally implemented using ADO.Net and Typed-Datasets and it is full of SQL Scripts . Our Biz classes have methods that are responsible for doing business logic stuffs and they are dependent on other helper classes and DAL classes. I know I can use DI to inject these classes to my Biz classes but I think that I should change a lot of code. Here's a solution that I can think of : There's this TestContext class that acts as a container and can contain mock objects for tests but it does not have anything when it comes to run the actual code so that real objects can be used instead , here is an example : var dal=TestContext.Current.Resolve<IMyDAL>(@default:new MyDal()); as you can see Resolve method accepts an argument of type IMyDAL that will be used in case of not running our tests. First I would like to know what do you think about this solution Second I am still thinking about a way to test SQL scripts that are hardcoded in our code base . How can I test them ? | Methods are not stored on a per-instance basis, even with virtual methods. They're stored in a single memory location, and they only "know" which object they belong to because the this pointer is passed when you call them. The only extra memory required in C++ is if you're using virtual methods, which require a single extra pointer per instance to point at the virtual method table (in Java of course you always have a base class with virtual methods, Object , so it's unavoidable). If it worked the way you described, OO languages would not be very popular! Feel free to add as many methods as you need to your objects. It won't affect their memory usage. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306380",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49472/"
]
} |
306,436 | Simply put I'm new to the company, should I rather write advanced techniques with things like template s, std techniques..etc to make a first good impression and have my colleagues trust/be impressed at my work or should I be more concerned in writing more solid/based standard code for each current problem to be solved? | Absolutely not. If you joined my team and spent your whole time using more advanced techniques than were necessary for the task at hand, I would certainly be less than impressed. You'd be making it more difficult to read, understand and maintain the code … not only for your teammates, but also for yourself. And for what reason? Showing off? Not cool. Though at least you'd be making it easy for me to find content for your first performance review! The way to impress me is to use the right tool for the job. If a problem comes up which is best solved with an "advanced technique", and you are capable of delivering such a solution in such a way that code stability and maintainability is not compromised, then I'd be impressed. To be clear, though, the use of templates and standard library facilities is hardly "advanced". I require anyone joining my C++ team to be familiar with these as a matter of course. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306436",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/65797/"
]
} |
306,483 | I have three classes that are circularly dependent on each other: TestExecuter executes requests of TestScenario and saves a report file using the ReportGenerator class.
So: TestExecuter depends on ReportGenerator to generate the report. ReportGenerator depends on TestScenario and on parameters set from TestExecuter. TestScenario depends on TestExecuter. I can't figure out how to remove those dependencies. public class TestExecuter {
ReportGenerator reportGenerator;
public void getReportGenerator() {
reportGenerator = ReportGenerator.getInstance();
reportGenerator.setParams(this.params);
/* this.params several parameters from TestExecuter class example this.owner */
}
public void setTestScenario (TestScenario ts) {
reportGenerator.setTestScenario(ts);
}
public void saveReport() {
reportGenerator.saveReport();
}
public void executeRequest() {
/* do things */
}
} public class ReportGenerator{
public static ReportGenerator getInstance(){}
public void setParams(String params){}
public void setTestScenario (TestScenario ts){}
public void saveReport(){}
} public class TestScenario {
TestExecuter testExecuter;
public TestScenario(TestExecuter te) {
this.testExecuter=te;
}
public void execute() {
testExecuter.executeRequest();
}
} public class Main {
public static void main(String [] args) {
TestExecuter te = new TestExecuter();
TestScenario ts = new TestScenario(te);
ts.execute();
te.getReportGenerator();
te.setTestScenario(ts);
te.saveReport();
}
} EDIT: in response to an answer, more details about my TestScenario class: public class TestScenario {
private LinkedList<Test> testList;
TestExecuter testExecuter;
public TestScenario(TestExecuter te) {
this.testExecuter=te;
}
public void execute() {
for (Test test: testList) {
testExecuter.executeRequest(test);
}
}
}
public class Test {
private String testName;
private String testResult;
}
public class ReportData {
/*shall have all information of the TestScenario including the list of Test */
} An example of the xml file to be generated in case of a scenario containing two tests: <testScenario name="scenario1">
<test name="test1">
<result>false</result>
</test>
<test name="test1">
<result>true</result>
</test>
</testScenario > | Technically, you can resolve any cyclic dependency by using interfaces, as shown in the other answers. However, I recommend to rethink your design. I think it is not unlikely you can avoid the need for additional interfaces completely, while your design becomes even simpler. I guess it is not necessary for a ReportGenerator to depend on a TestScenario directly. TestScenario seems to have two responsibilites: it is used for test execution, and it works also as a container for the results. This is a violation of the SRP. Interestingly, by resolving that violation, you will get rid of the cyclic dependency as well. So instead of letting the report generator grab data from the test scenario, pass the data explicitly by using some value object. That means, replace reportGenerator.setTestScenario(ts); by some code like reportGenerator.insertDataToDisplay(ts.getReportData()); The method getReportData needs to have a return type like ReportData , a value object which works as a container for the data to be displayed in the report. insertDataToDisplay is a method which expects an object of exactly that type. This way, ReportGenerator and TestScenario will both depend on ReportData , which depends on nothing else, and the first two classes do not depend on each other any more. As a second approach: to resolve the SRP violation let TestScenario be responsible for holding the results of a test execution, but not for calling the test executer. Consider to reorganize the code so not the test scenario accesses the test executer, but the test executer is started from outside and writes the results back into the TestScenario object. In the example you showed us, that will be possible by making the access to LinkedList<Test> inside of TestScenario public, and by moving the execute method from TestScenario to somewhere else, maybe directly into a TestExecuter , maybe into a new class TestScenarioExecuter . That way, TestExecuter will depend on TestScenario and ReportGenerator , ReportGenerator will depend on TestScenario , too , but TestScenario will depend on nothing else. And finally, a third approach: TestExecuter has too many responsibilities, too. It is responsible for executing tests as well as for providing a TestScenario to a ReportGenerator . Put these two responsibilities into two separate classes, and your cyclic dependency will vanish again. There may be more variants to approach your problem, but I hope you get the general idea: your core problem are classes with too many responsibilities . Solve that problem, and you will get rid of the cyclic dependency automatically. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/202067/"
]
} |
306,486 | I'm freshly coming to the Python world after years of Java and PHP. While the language itself is pretty much straightforward, I'm struggling with some 'minor' issues that I can't wrap my head around — and to which I couldn't find answers in the numerous documents and tutorials I've read this far. To the experienced Python practitioner, this question might seem silly, but I really want an answer to it so I can go further with the language: In Java and PHP ( although not strictly required ), you are expected to write each class on its own file, with file's name is that of the class as a best practice. But in Python, or at least in the tutorials I've checked, it is ok to have multiple classes in the same file. Does this rule hold in production, deployment-ready code or it's done just for the sake of brevity in educative-only code? | Is it ok to have multiple classes in the same file in Python? Yes. Both from a philosophical perspective as well as a practical one. In Python, modules are a namespace that exist once in memory. Say we had the following hypothetical directory structure, with one class defined per file: Defines
abc/
|-- callable.py Callable
|-- container.py Container
|-- hashable.py Hashable
|-- iterable.py Iterable
|-- iterator.py Iterator
|-- sized.py Sized
... 19 more All of these classes are available in the collections module and (there are, in fact, 25 in total) defined in the standard library module in _collections_abc.py There are a couple of issues here that I believe makes the _collections_abc.py superior to the alternative hypothetical directory structure. These files are sorted alphabetically. You could sort them in other ways, but I am not aware of a feature that sorts files by semantic dependencies. The _collections_abc module source is organized by dependency. In non-pathological cases, both modules and class definitions are singletons, occurring once each in memory. There would be a bijective mapping of modules onto classes - making the modules redundant. The increasing number of files makes it less convenient to casually read through the classes (unless you have an IDE that makes it simple) - making it less accessible to people without tools. Are you prevented from breaking groups of classes into different modules when you find it desirable from a namespacing and organizational perspective? No. From the Zen of Python , which reflects the philosophy and principles under which it grew and evolved: Namespaces are one honking great idea -- let's do more of those! But let us keep in mind that it also says: Flat is better than nested. Python is incredibly clean and easy to read. It encourages you to read it. Putting every separate class in a separate file discourages reading. This goes against the core philosophy of Python. Look at the structure of the Standard Library , the vast majority of modules are single-file modules, not packages. I would submit to you that idiomatic Python code is written in the same style as the CPython standard lib. Here's the actual code from the abstract base class module . I like to use it as a reference for the denotation of various abstract types in the language. Would you say that each of these classes should require a separate file? class Hashable:
__metaclass__ = ABCMeta
@abstractmethod
def __hash__(self):
return 0
@classmethod
def __subclasshook__(cls, C):
if cls is Hashable:
try:
for B in C.__mro__:
if "__hash__" in B.__dict__:
if B.__dict__["__hash__"]:
return True
break
except AttributeError:
# Old-style class
if getattr(C, "__hash__", None):
return True
return NotImplemented
class Iterable:
__metaclass__ = ABCMeta
@abstractmethod
def __iter__(self):
while False:
yield None
@classmethod
def __subclasshook__(cls, C):
if cls is Iterable:
if _hasattr(C, "__iter__"):
return True
return NotImplemented
Iterable.register(str)
class Iterator(Iterable):
@abstractmethod
def next(self):
'Return the next item from the iterator. When exhausted, raise StopIteration'
raise StopIteration
def __iter__(self):
return self
@classmethod
def __subclasshook__(cls, C):
if cls is Iterator:
if _hasattr(C, "next") and _hasattr(C, "__iter__"):
return True
return NotImplemented
class Sized:
__metaclass__ = ABCMeta
@abstractmethod
def __len__(self):
return 0
@classmethod
def __subclasshook__(cls, C):
if cls is Sized:
if _hasattr(C, "__len__"):
return True
return NotImplemented
class Container:
__metaclass__ = ABCMeta
@abstractmethod
def __contains__(self, x):
return False
@classmethod
def __subclasshook__(cls, C):
if cls is Container:
if _hasattr(C, "__contains__"):
return True
return NotImplemented
class Callable:
__metaclass__ = ABCMeta
@abstractmethod
def __call__(self, *args, **kwds):
return False
@classmethod
def __subclasshook__(cls, C):
if cls is Callable:
if _hasattr(C, "__call__"):
return True
return NotImplemented So should they each have their own file? I hope not. These files are not just code - they are documentation on the semantics of Python. They are maybe 10 to 20 lines on average. Why should I have to go to a completely separate file to see another 10 lines of code? That would be highly impractical. Further, there would be nearly identical boilerplate imports on each file, adding more redundant lines of code. I find it quite useful to know that there is a single module where I can find all of these Abstract Base Classes, instead of having to look over a list of modules. Viewing them in context with each other allows me to better understand them. When I see that an Iterator is an Iterable, I can quickly review what an Iterable consists of by glancing up. I sometimes wind up having a couple of very short classes. They stay in the file, even if they need to grow larger over time. Sometimes mature modules have over 1000 lines of code. But ctrl-f is easy, and some IDE's make it easy to view outlines of the file - so no matter how large the file, you can quickly go to whatever object or method that you're looking for. Conclusion My direction, in the context of Python, is to prefer to keep related and semantically similar class definitions in the same file. If the file grows so large as to become unwieldy, then consider a reorganization. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306486",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/209730/"
]
} |
306,518 | This Stack Overflow question is about a child having reference to its parent, through a pointer. Comments were pretty critical initially of the design being a horrible idea. I understand this is probably not the best idea in general. From a general rule of thumb it seems fair to say, "don't do this!" However, I am wondering what sorts of conditions would exist where you would need to do something like this. This question here and associated answers/commentary suggests even for graphs to not do something like this. | The key here is not whether two objects have circular references, but whether those references indicate ownership of each other. Two objects cannot "own" each other: this causes an intractable dilemma for initialization and deletion order. One must be an optional reference, or otherwise indicate that one object will not manage the other's lifetime. Consider a doubly-linked list: two nodes link back and forth to each other, but neither "owns" the other (the list owns them both). This means neither node allocates memory for the other or is otherwise responsible for the identity or lifetime management of the other. Trees have a similar relationship, although nodes in a tree may allocate children and parents do own children. The link from a child to parent helps with traversal, but again does not define ownership. In most OO designs, a reference to another object as an object's data member implies ownership. For example, suppose we have classes Car and Engine. Neither one is very useful on its own. We can say that these objects depend on each other: they require the presence of the other in order to perform useful work. But which "owns" the other? In this case we would say that Car owns Engine because the car is the "container" in which all of the automotive components live. In both an OO and real-world design, the car is the sum of its parts, and all of those parts are connected together within the context of the car. Engine may have a reference back to Car, or it may have a reference to TorqueConverter, but no component inside of Car owns Car even if said component has a reference to the Car. Circular references can be a bad design smell, but not necessarily. When used judiciously and documented correctly, they can make using data structures easier. Try traversing a tree without references going both ways between parents and children. Sure, you could come up with a stack-based approach that is brittle and complex, or you could use the reference-based approach that is trivially simple. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306518",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52929/"
]
} |
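As a small illustration of the ownership point in the answer above, here is a Python sketch of a tree whose nodes own their children but keep only a non-owning back-reference to their parent (a weak reference, so a child never manages its parent's lifetime). The class and method names are invented for the example.

import weakref

class Node:
    def __init__(self, value, parent=None):
        self.value = value
        self.children = []  # owning references: a parent keeps its children alive
        # Non-owning back-reference: it does not keep the parent alive.
        self._parent = weakref.ref(parent) if parent is not None else None

    def add_child(self, value):
        child = Node(value, parent=self)
        self.children.append(child)
        return child

    @property
    def parent(self):
        return self._parent() if self._parent is not None else None

    def path_to_root(self):
        node, path = self, []
        while node is not None:
            path.append(node.value)
            node = node.parent
        return path

root = Node("root")
leaf = root.add_child("a").add_child("b")
print(leaf.path_to_root())  # ['b', 'a', 'root']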
306,801 | Today I had an argument with someone. I was explaining the benefits of having a rich domain model as opposed to an anemic domain model. And I demoed my point with a simple class looking like that: public class Employee
{
public Employee(string firstName, string lastName)
{
FirstName = firstName;
LastName = lastName;
}
public string FirstName { get; private set; }
public string LastName { get; private set;}
public int CountPaidDaysOffGranted { get; private set;}
public void AddPaidDaysOffGranted(int numberOfdays)
{
// Do stuff
}
} As he defended his anemic model approach, one of his arguments was: "I am a believer in SOLID . You are violating the single responsibility principle (SRP) as you are both representing data and performing logic in the same class." I found this claim really surprising, as following this reasoning, any class having one property and one method violates the SRP, and therefore OOP in general is not SOLID, and functional programming is the only way to heaven. I decided not to reply to his many arguments, but I am curious what the community thinks on this question. If I had replied, I would have started by pointing to the paradox mentioned above, and then indicate that the SRP is highly dependent on the level of granularity you want to consider and that if you take it far enough, any class containing more than one property or one method violates it. What would you have said? Update: The example has been generously updated by guntbert to make the method more realistic and help us focus on the underlying discussion. | Single Responsibility should be understood as an abstraction of logical tasks in your system. A class should have the single responsibility to (do everything necessary in order to) perform one single, specific task. This can actually bring a lot into a well-designed class, depending on what the responsibility is. The class that runs your script engine, for example, can have a lot of methods and data involved in processing scripts. Your coworker is focusing on the wrong thing. The question is not "what members does this class have?" but "what useful operation(s) does this class perform within the program?" Once that is understood, your domain model looks just fine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306801",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/143959/"
]
} |
306,881 | In PHP I have this if statement ( $first and $second will evaluate to true or false ): if ($first && $second) {
// everything is OK
} else {
throw new Exception()...
} My real code is much more complicated, I am trying to make simple example here. I want to turn this if/else into one if with negation like this: if (!($first && $second)){
throw new Exception()...
}
// everything is OK As you can see in this example, I've put ! negation sign in front of parentheses. Is this correct ? Do I need to negate every condition itself like this: if (!$first && !$second) Or I should use || operator: if (!$first || !$second) // I am using OR here I am not sure how these conditions are going to evaluate at the end, and I am confused by my dummy testing results. I really hope that someone can explain to me how all these checks are going to evaluate at the end. Thanks to everyone who answered my question. Due to my low rep, I can not up-vote or pick some answer as the right one. You are all good for me :) | Build a truth table: p q p && q p || q !(p && q) !p || !q !(p || q) !p && !q
==========================================================================
0 0 0 0 1 1 1 1
0 1 0 1 1 1 0 0
1 0 0 1 1 1 0 0
1 1 1 1 0 0 0 0 Thus, you see that !(p && q) is equivalent to !p || !q , but not equivalent to !p && !q . You see that !(p && q) and !p || !q are the opposite of p && q . Note that the equivalence of !(p && q) and !p || !q can also be proved using De Morgan's laws . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306881",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/210317/"
]
} |
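The truth table above can also be checked mechanically. A few lines of Python (illustrative, not part of the original answer) enumerate every combination of inputs and assert both of De Morgan's laws:

from itertools import product

for p, q in product([False, True], repeat=2):
    assert (not (p and q)) == ((not p) or (not q))   # !(p && q)  is  !p || !q
    assert (not (p or q)) == ((not p) and (not q))   # !(p || q)  is  !p && !q

print("De Morgan's laws hold for every combination of p and q")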
306,890 | Is it bad practice for a controller to call a repository instead of a service? To explain more: I understand that in a good design controllers call services and services use repositories. But sometimes in a controller I don't have or need any logic and just need to fetch from the database and pass it to the view. I can do that by just calling the repository - no need to call the service. Is this bad practice? | No, think of it this way: a repository is a service (also). If the entities you retrieve through the repository handle most of the business logic, there is no need for other services. Just having the repository is enough. Even if you have some services that you must pass through to manipulate your entities, grab the entity from the repository first and then pass it to said service. Being able to toss up an HTTP 404 before even trying is very convenient. Also, for read scenarios it's common that you just need the entity to project it to a DTO/ViewModel. Having a service layer in between then often results in a lot of pass-through methods, which is rather ugly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306890",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/150418/"
]
} |
306,963 | Generally when writing automated unit tests (e.g. JUnit, Karma) I aim to: cover all the boundary conditions, and get a high level of coverage. I heard someone say: coverage and boundary conditions aren't enough for a unit test, you need to write them so they will break if the code changes. This sounds good to me in theory - but I'm not sure how to apply it. My question is: should I write automated unit tests that fail when the code changes? If so, how? | Your aim should be not to write unit tests that fail when the code changes, but unit tests that fail when the behaviour changes. Here, behaviour means anything that an external caller of the method wants it to do, like returning the right response to a question or saving the right thing to a database. How it achieves that is its own internal implementation, not its behaviour. By testing behaviour rather than implementation, you can refactor code to make improvements, and instantly verify whether you've accidentally changed the way it behaves by running your tests. In reality, it's not possible to perfectly achieve this goal. If you have a method: int add(int x, int y) {
return x + y;
} You can write as many unit tests as you want for it, but it's extremely unlikely any of them will fail if you modify it to: int add(int x, int y) {
if(x==10731 && y == -405571) {
return 0;
}
return x + y;
} However, you can take some sensible steps to get as close to full behavioural coverage as is practical: As you said, think about boundary conditions and corner cases. These are the places where you're most likely to see accidental behaviour change. Think of line-by-line coverage as "necessary, but not sufficient". Imagine trying to be as lazy as you possibly can while still getting full line-by-line coverage, and you'll see how easy it is to write an inadequate set of tests that follow this Think about the behaviour your method is supposed to provide, and the branches it can follow. Ideally you should test that for every route through its implementation, it provides all the behaviour that's expected. When you've written a set of tests, ask "Are there any implementations of my method which are at least as simple as the existing one, which would pass all of my tests, but which have the wrong behaviour?" This is a good rule of thumb to see if your behavioural coverage is good enough. As shown in the example add method above, you can never really fully defend against modifications to your method that add extra stuff to it, making it less simple. But much more likely, bugs are going to sneak in through modifying or removing parts, without adding to the complexity (because, why would you refactor something in a way that adds to its complexity?). So by adding the "at least as simple" condition, you get something practically achievable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/306963",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13382/"
]
} |
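A sketch of what testing behaviour rather than implementation can look like for the add method above (JUnit 4 style; the Calculator wrapper class is invented for the example). Each test states only what callers can rely on, so an internal rewrite that preserves behaviour keeps passing, while a behaviour change makes a test fail.
import org.junit.Test;
import static org.junit.Assert.assertEquals;
public class CalculatorTest {
    static class Calculator {                 // the unit under test
        int add(int x, int y) { return x + y; }
    }
    private final Calculator calculator = new Calculator();
    @Test
    public void addsTwoPositiveNumbers() {
        assertEquals(5, calculator.add(2, 3));
    }
    @Test
    public void addingZeroLeavesValueUnchanged() {   // boundary condition
        assertEquals(7, calculator.add(7, 0));
    }
    @Test
    public void handlesNegativeOperands() {          // another boundary condition
        assertEquals(-1, calculator.add(2, -3));
    }
}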
307,063 | Consider the following situation: You have a clone of a git repository You have some local commits (commits that have not yet been pushed anywhere) The remote repository has new commits that you have not yet reconciled So something like this: If you execute git pull with the default settings, you'll get something like this: This is because git performed a merge. There's an alternative, though. You can tell pull to do a rebase instead: git pull --rebase and you'll get this: In my opinion, the rebased version has numerous advantages that mostly center around keeping both your code and the history clean, so I'm a little struck by the fact that git does the merge by default. Yes, the hashes of your local commits will get changed, but this seems like a small price to pay for the simpler history you get in return. By no means am I suggesting that this is somehow a bad or a wrong default, though. I am just having trouble thinking of reasons why the merge might be preferred for the default. Do we have any insight into why it was chosen? Are there benefits that make it more suitable as a default? The primary motivation for this question is that my company is trying to establish some baseline standards (hopefully, more like guidelines) for how we organize and manage our repositories to make it easier for developers to approach a repository they haven't worked with before. I am interested in making a case that we should usually rebase in this type of situation (and probably for recommending developers set their global config to rebase by default), but if I were opposed to that, I would certainly be asking why rebase isn't the default if it's so great. So I'm wondering if there is something I'm missing. It has been suggested that this question is a duplicate of Why do so many websites prefer “git rebase” over “git merge”? ; however, that question is somewhat the reverse of this one. It discusses the merits of rebase over merge, while this question asks about the benefits of merge over rebase. The answers there reflect this, focusing on problems with merge and benefits of rebase. | It is hard to know for sure why merge is the default without hearing from the person who made that decision. Here is a theory... Git cannot presume it is ok to --rebase every pull. Listen to how that sounds. "Rebase every pull." just sounds wrong if you use pull requests or similar. Would you rebase on a pull request? In a team that is not just using Git for centralized source control... You may pull from upstream and from downstream. Some people do a lot of pulling from downstream, from contributors, etc. You may work on features in close collaboration with other developers, pulling from them or from a shared topic branch and still occasionally updated from upstream. If you always rebase then you end up changing shared history, not to mention fun conflict cycles. Git was designed for a large highly distributed team where everyone does not pull and push to a single central repo. So the default makes sense. Developers who do not know when it's ok to rebase will merge by default. Developers can rebase when they want to. Commiters who do a lot of pulls and have a lot of pull get the default that suits them best. For evidence of intent, here's a link to a well known email from Linus Torvalds with his views on when they should not rebase. Dri-devel git pull email If you follow the whole thread you can see that one developer is pulling from another developer and Linus is pulling from both of them. 
He makes his opinion pretty clear. Since he probably decided Git's defaults, this might explain why. A lot of people now use Git in a centralized way, where everyone in a small team pulls only from an upstream central repo and pushes to that same remote. This scenario avoids some of the situations where a rebase is not good, but usually does not eliminate them. Suggestion: Don't make a policy of changing the default. Any time you put Git together with a big group of developers some of the developers won't understand Git all that deeply (myself included). They will go to Google, SO, get cookbook advice and then wonder why some things don't work, e.g. why does git checkout --ours <path> get the wrong version of the file? You can always revise your local environment, make aliases, etc. to suit your taste. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307063",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92517/"
]
} |
307,076 | In a project we have a rather formal code review, which is currently done manually and documented in Excel sheets and Word docs. We would like to improve the current code review process by integrating it into a Git workflow (utilizing tools like Bitbucket Server, which we already have, Gerrit, etc.). The current idea is that each developer implements features and bugfixes and creates a pull request. This pull request is reviewed by other developers and then merged into our main development branch or not. We would like to export all pull requests (which are now the code reviews) to formally document them in an offline document. This code review document is a delivery item for our customer. Is this a feasible approach at all? | It is hard to know for sure why merge is the default without hearing from the person who made that decision. Here is a theory... Git cannot presume it is ok to --rebase every pull. Listen to how that sounds. "Rebase every pull." just sounds wrong if you use pull requests or similar. Would you rebase on a pull request? In a team that is not just using Git for centralized source control... You may pull from upstream and from downstream. Some people do a lot of pulling from downstream, from contributors, etc. You may work on features in close collaboration with other developers, pulling from them or from a shared topic branch and still occasionally updated from upstream. If you always rebase then you end up changing shared history, not to mention fun conflict cycles. Git was designed for a large highly distributed team where everyone does not pull and push to a single central repo. So the default makes sense. Developers who do not know when it's ok to rebase will merge by default. Developers can rebase when they want to. Commiters who do a lot of pulls and have a lot of pull get the default that suits them best. For evidence of intent, here's a link to a well known email from Linus Torvalds with his views on when they should not rebase. Dri-devel git pull email If you follow the whole thread you can see that one developer is pulling from another developer and Linus is pulling from both of them. He makes his opinion pretty clear. Since he probably decided Git's defaults, this might explain why. A lot of people now use Git in a centralized way, where everyone in a small team pulls only from an upstream central repo and pushes to that same remote. This scenario avoids some of the situations where a rebase is not good, but usually does not eliminate them. Suggestion: Don't make a policy of changing the default. Any time you put Git together with a big group of developers some of the developers won't understand Git all that deeply (myself included). They will go to Google, SO, get cookbook advice and then wonder why some things don't work, e.g. why does git checkout --ours <path> get the wrong version of the file? You can always revise your local environment, make aliases, etc. to suit your taste. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307076",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10158/"
]
} |
307,101 | I've been learning about NoSQL Databases for a week now. I really understand the advantages of NoSQL Databases and the many use cases they are great for. But often people write their articles as if NoSQL could replace Relational Databases. And there is the point I can't get my head around: NoSQL Databases are (often) key-value stores. Of course it is possible to store everything into a key-value store (by encoding the data in JSON, XML, whatever), but the problem I see is that you need to get some amount of data that matches a specific criterion, in many use cases. In a NoSQL database you have only one criterion you can search for effectively - the key. Relational Databases are optimized to search for any value in the data row effectively. So NoSQL Databases are not really a choice for persisting data that need to be searched by their content. Or have I misunderstood something? An example: You need to store user data for a webshop. In a relational database you store every user as an row in the users table, with an ID, the name, his country, etc. In a NoSQL Database you would store each user with his ID as key and all his data (encoded in JSON, etc.) as value. So if you need to get all users from a specific country (for some reason the marketing guys need to know something about them), it's easy to do so in the Relational Database, but not very effective in the NoSQL Database, because you have to get every user, parse all the data and filter. I don't say it's impossible , but it gets a lot more tricky and I guess not that effective if you want to search in the data of NoSQL entries. You could create a key for each country that stores the keys of every user who lives in this country, and get the users of a specific country by getting all the keys which are deposited in the key for this country. But I think this techique makes a complex dataset even more complex - it's harder to implement and not as effective as querying an SQL Database. So I think it's not a way you would use in production. Or is it? I'm not really sure if I misunderstood something or overlooked some concepts or best practices to handle such use cases. Maybe you could correct my statements and answer my questions. | While I agree with your premise that NoSQL is not a panacea for all database woes, I think you misunderstand one key point. In NoSQL database you have only one criterion you can search for effectively - the key. This is clearly not true. For example MongoDB supports indices. (from https://docs.mongodb.org/v3.0/core/indexes-introduction/ ) Indexes support the efficient execution of queries in MongoDB. Without
indexes, MongoDB must perform a collection scan, i.e. scan every
document in a collection, to select those documents that match the
query statement. If an appropriate index exists for a query, MongoDB
can use the index to limit the number of documents it must inspect. Indexes are special data structures [1] that store a small portion of
the collection’s data set in an easy to traverse form. The index
stores the value of a specific field or set of fields, ordered by the
value of the field. The ordering of the index entries supports
efficient equality matches and range-based query operations. In
addition, MongoDB can return sorted results by using the ordering in
the index. As does couchbase (from http://docs.couchbase.com/admin/admin/Views/views-intro.html ) Couchbase views enable indexing and querying of data. A view creates an index on the data according to the defined format
and structure. The view consists of specific fields and information
extracted from the objects in Couchbase. In fact, anything that calls itself a NoSQL database rather than a key-value store should really support some kind of indexing scheme. Indeed, it is often the flexibility of these index schemes that makes NoSQL shine. In my opinion, the language used to define NoSQL indices is often more expressive or natural than SQL, and since the indices usually live outside the table, you don't need to change your table schemas to support them. (Not to say you can't do similar things in SQL, but to me it feels like there is a lot more hoop-jumping involved.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307101",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198145/"
]
} |
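As a toy illustration of what such an index buys you conceptually - this is only an in-memory Java sketch of the idea, not how MongoDB or Couchbase actually implement their indexes - a secondary index is essentially a map from a field value to the keys of the matching documents, so "all users from a given country" no longer requires scanning every document.
import java.util.*;
public class SecondaryIndexSketch {
    // primary store: key -> document (a simple map standing in for a JSON document)
    private final Map<String, Map<String, String>> documents = new HashMap<>();
    // secondary index on "country": country -> set of document keys
    private final Map<String, Set<String>> countryIndex = new HashMap<>();
    void putUser(String key, String name, String country) {
        documents.put(key, Map.of("name", name, "country", country));
        countryIndex.computeIfAbsent(country, c -> new HashSet<>()).add(key);
    }
    // proportional to the number of matching users, not the number of all users
    List<Map<String, String>> findByCountry(String country) {
        List<Map<String, String>> result = new ArrayList<>();
        for (String key : countryIndex.getOrDefault(country, Set.of())) {
            result.add(documents.get(key));
        }
        return result;
    }
    public static void main(String[] args) {
        SecondaryIndexSketch store = new SecondaryIndexSketch();
        store.putUser("u1", "Alice", "DE");
        store.putUser("u2", "Bob", "US");
        store.putUser("u3", "Carla", "DE");
        System.out.println(store.findByCountry("DE")); // two documents, no full scan
    }
}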
307,168 | At our team, in addition to individual units of work (Stories), we have longer-running themes of work (Epics). Multiple stories make an epic. Traditionally we've had feature branches for each Story, and merged those straight to master when they pass QA. However, we'd like to start holding back on release of completed stories in an Epic until the Epic is deemed "feature complete". We'd only release these features to production when the entire Epic is closed. Furthermore, we have a nightly build server - we'd like all closed Stories (including those that are part of incomplete Epics) to be deployed to this nightly server automatically. Are there any suggestions on how to manage our repo to achieve this? I've considered introducing "epic branches", where we'd merging closed stories to the related epic branch instead of direct to master, but my concerns are: I worry about the merge conflicts that may arise if epic branches are kept open for long Nightly builds would require merging all epic branches into a "nightly build" branch. Again, merge conflicts could arise, and this is to be done automatically | Simple suggestion: don't do that. git branches are not for long-running forks of the code, as discussed here and here . Branches are best treated as transient things used to organize commits by an individual developer on a day-to-day level. So if they have a name that corresponds to something a project manager (let alone end user) might care about you are doing something wrong. Recommended practice is to use continuous integration with feature toggles or branch-by-abstraction to ensure that: all code is integrated at all times (at least every day, preferably more often) what gets deployed is under explicit control. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307168",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/210752/"
]
} |
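A minimal sketch of the feature-toggle approach recommended above (plain Java; the flag name and the hard-coded flag map are assumptions made for the example - a real system would read them from configuration or a toggle service). Code for an unfinished Epic can be merged and deployed continuously, but stays dark until the flag is flipped.
import java.util.Map;
public class FeatureToggles {
    // In a real system this would come from configuration, a database, or a toggle
    // service; a hard-coded map keeps the sketch self-contained.
    private final Map<String, Boolean> flags;
    public FeatureToggles(Map<String, Boolean> flags) { this.flags = flags; }
    public boolean isEnabled(String name) {
        return flags.getOrDefault(name, false);   // unknown flags default to "off"
    }
    public static void main(String[] args) {
        FeatureToggles toggles = new FeatureToggles(Map.of("new-checkout-epic", false));
        if (toggles.isEnabled("new-checkout-epic")) {
            System.out.println("Rendering the new checkout flow");   // incomplete Epic, dark for now
        } else {
            System.out.println("Rendering the existing checkout flow"); // production behaviour today
        }
    }
}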
307,292 | I'm an experienced developer, but have not done many code reviews. I'm being asked to review code written in Python, but I do not know Python. Does it make any sense at all to review code in a language I don't know? | Any sense? Yes. Even if you know nothing about the semantics of a programming language, you can still read characters and notice inconsistent formatting, missing comments, badly chosen identifiers, obvious duplication, etc. Much sense, or enough sense to repay the cost of your time? I'm not sure. That depends on your position, the importance of code reviews in the workflow of your team, and several other factors that we can't quantify well enough. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307292",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/210942/"
]
} |
307,346 | I'm relatively new to programming (July 2015), and I've always wondered why it's good programming practice to hide variables as much as possible. I've run into this question mainly recently when I looked into events and delegates in C#. I searched around as to why I should use events rather than just a delegate, since they do the same thing it seems.
I read that it's better programming practice to hide the delegate fields and use an event. I decided it was time to learn why it was good programming practice, but I couldn't really find anything other than "Because it's good programming practice". If you could provide some basic examples and maybe some pseudo-code that would be helpful. | Because the more things you have to deal with in any task the harder it becomes. For example, try patting your head. Then try patting your head and counting backwards from 1000. Then try patting your head counting backwards from 1000 and hopping on one leg. Then try patting your head counting backwards from 1000 and hopping on one leg and singing the national anthem. Gets a lot harder doesn't it? Each of those tasks were simple and would be easy on their own. If you keep your code small and granular and limit the amount of variables in scope you're dealing with less things at a time. This means you're less likely to fall over while standing on one leg because you were distracted by singing the national anthem and counting backwards from 1000. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307346",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/211016/"
]
} |
307,467 | While developing the application I started to wonder - How should I design command line arguments? A lot of programs are using formula like this -argument value or /argument value . Solution which came to my mind was argument:value . I thought it is good because with no white spaces there is no way that values and arguments can be messed up. Also it is easy to split a string into two on the first from the left : character. My questions are: Is popular -argument value formula better than argument:value (more readable, easier to write, bugfree, easier to understand by expert developers)? Are there some commonly known rules which I should follow while designing command line arguments (other than if it works it is OK)? Asked for some more details I will provide it. However I think they should not affect the answers. The question is about a good habits in general. I think they are all the same for all kinds of applications. We are working on an application which will be used in public places (touch totems, tables). Applications are written using Qt Quick 5 (C++, QML, JS). Devices will have Windows 8.1/10 installed. We will provide front-end interface to manage the devices. However some advanced administrators may want to configure the application on their own. It is not very important from the side of the business but as I agree with what Kilian Foth said I do not want my application to be a pain for a user. Not finding in the Internet what I want I asked here. To more advanced Stack Exchange users: I wanted this question to be general. Maybe it qualifies to the community wiki (I do not know if existing question can be converted with answers). As I want this question to be operating system and programming language independent the answers appearing here can be a valuable lesson for other developers. | On POSIX systems (e.g. Linux, MacOSX), at least for programs possibly started in a shell terminal (e.g. most of them), I would recommend using the GNU coding conventions (which also lists common argument names) and look into the POSIX utilities guidelines , even for proprietary software: always handle --version and --help (even /bin/true accepts them!!). I curse the authors of software not understanding --help , I hate them (because prog --help is the first command I am trying on a new program)! Often --help can be abbreviated as -h Have the --help message list all the options (unless you have too many of them... in that case list the most common ones and explicitly refer to some man page or some URL) and default values of options, and perhaps important (and program-specific) environment variables. Show these option lists on option argument error. accept -a short argument (single letter) and have some equivalent --long-argument , so -a2 --long-argument=2 , --long-argument 2 ; of course you could have (for rarely used options) some --only-long-argument name; for modal arguments without extra options -cf is generally handled as -c -f , etc. so your -argument:value proposal is weird, and I don't recommend doing that. use GLIBC getopt_long or better (e.g. argp_parse , in OCaml it's Arg module , ...) often use - for standard input or output (if you can't do that, handle /dev/stdin & /dev/stdout even on the few operating systems not having them) mimic the behavior of similar programs by reusing most of their options conventions; in particular -n for dry run (à la make ), -h for help, -v for verbosity, etc... 
use -- as separator between options & file or other arguments if your program uses isatty to test than stdin is a terminal (and behave "interactively" in that case), provide an option to force non-interactive mode, likewise if your program has a GUI interface (and tests getenv("DISPLAY") on X11 desktop) but could also be used in batch or command line. Some programs (e.g. gcc ) accept indirect argument lists, so @somefile.txt is meaning read program arguments from somefile.txt ; this could be useful when your program might accept a very big lot of arguments (more than your kernel's ARG_MAX ) BTW, you might even add some auto-complete facilities for your program and usual shells (like bash or zsh ) Some old Unix commands (e.g. dd , or even sed ) have weird command arguments for historical compatibility. I would recommend not following their bad habits (unless you are making some better variant of them). If your software is a series of related command-line programs, take inspiration from git (which you surely use as a development tool), which accepts git help and git --help and have many git subcommand and git subcommand --help In rare cases you might also use argv[0] (by using symlinks on your program), e.g. bash invoked as rbash has a different behavior ( restricted shell). But I usually don't recommend doing that; it might make sense if your program could be used as a script interpreter using shebang i.e. #! on first line interpreted by execve(2) . If you do such tricks, be sure to document them, including in --help messages. Remember that on POSIX the shell is globbing arguments ( before running your program!), so avoid requiring characters (like * or $ or ~ ) in options which would need to be shell-escaped. In some cases, you could embed an interpreter like GNU guile or Lua in your software (avoid inventing your own Turing-complete scripting language if you are not expert in programming languages). This has deep consequences on the design of your software (so should be thought of early!). You should then easily be able to pass some script or some expression to that interpreter. If you take that interesting approach, design your software and its interpreted primitives with care; you could have some weird user coding big scripts for your thing. In other cases, you might want to let your advanced users load their plugin into your software (using dynamic loading techniques à la dlopen & dlsym ). Again, this is a very important design decision (so define and document the plugin interface with care), and you'll need to define a convention to pass program options to these plugins. If your software is a complex thing, make it accept some configuration files (in addition or replacement of program arguments) and probably have some way to test (or just parse) these configuration files without running all the code. For example, a mail transfer agent (like Exim or Postfix) is quite complex, and it is useful to be able to "half-dry" run it (e.g. observing how it is handling some given email address without actually sending an email). Notice that the /option is a Windows or VMS thing. It would be insane on POSIX systems (because the file hierarchy uses / as a directory seperator, and because the shell does the globbing). All my answer is mostly for Linux (and POSIX). P.S. If possible, make your program a free software , you would get improvements from some users & developers (and adding a new program option is often one of the easiest things to add to an existing free software).
Also, your question depends a lot on the intended audience: a game for teenagers or a browser for grandma probably does not need the same kind and number of options as a compiler, a network inspector for datacenter sysadmins, or CAD software for microprocessor architects or bridge designers. An engineer familiar with programming & scripting probably appreciates lots of tunable options far more than your grandma does, and might want to be able to run your application without X11 (perhaps in a crontab job). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307467",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/210140/"
]
} |
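As a hedged sketch of a few of the conventions listed above, handled by hand in Java (the tool name, version string and option set are invented; a C program would more naturally reach for getopt_long, as the answer suggests):
import java.util.ArrayList;
import java.util.List;
public class Cli {
    private static final String VERSION = "mytool 0.1 (example)";
    public static void main(String[] args) {
        boolean verbose = false;
        List<String> operands = new ArrayList<>();
        boolean noMoreOptions = false;
        for (String arg : args) {
            if (noMoreOptions) { operands.add(arg); continue; }
            switch (arg) {
                case "--":                       // conventional end-of-options marker
                    noMoreOptions = true; break;
                case "-h": case "--help":        // always handle --help
                    System.out.println("usage: mytool [-h] [-v] [--version] [--] [FILE...]");
                    return;
                case "--version":                // always handle --version
                    System.out.println(VERSION);
                    return;
                case "-v": case "--verbose":     // short and long form for the same option
                    verbose = true; break;
                default:
                    if (arg.startsWith("-") && arg.length() > 1) {
                        System.err.println("unknown option: " + arg + " (try --help)");
                        System.exit(2);
                    }
                    operands.add(arg);           // plain operand; a lone "-" conventionally means stdin
            }
        }
        if (verbose) System.out.println("operands: " + operands);
    }
}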
307,861 | I've been reviewing C programming and there are just a couple things bothering me. Let's take this code for example: int myArray[5] = {1, 2, 2147483648, 4, 5};
int* ptr = myArray;
int i;
for(i=0; i<5; i++, ptr++)
printf("\n Element %d holds %d at address %p", i, myArray[i], ptr); I know that an int can hold a maximum value of positive 2,147,483,647. So by going one over that, does it "spill over" to the next memory address which causes element 2 to appear as "-2147483648" at that address? But then that doesn't really make sense because in the output it still says that the next address holds the value 4, then 5. If the number had spilled over to the next address then wouldn't that change the value stored at that address? I vaguely remember from programming in MIPS Assembly and watching the addresses change values during the program step by step that values assigned to those addresses would change. Unless I am remembering incorrectly then here is another question: If the number assigned to a specific address is bigger than the type (like in myArray[2]) then does it not affect the values stored at the subsequent address? Example: We have int myNum = 4 billion at address 0x10010000. Of course myNum can't store 4 billion so it appears as some negative number at that address. Despite not being able to store this large number, it has no effect on the value stored at the subsequent address of 0x10010004. Correct? The memory addresses just have enough space to hold certain sizes of numbers/characters, and if the size goes over the limit then it will be represented differently (like trying to store 4 billion into the int but it will appear as a negative number) and so it has no effect on the numbers/characters stored at the next address. Sorry if I went overboard. I've been having a major brain fart all day from this. | No, it does not. In C, variables have a fixed set of memory addresses to work with. If you are working on a system with 4-byte ints , and you set an int variable to 2,147,483,647 and then add 1 , the variable will usually contain -2147483648 . (On most systems. The behavior is actually undefined.) No other memory locations will be modified. In essence, the compiler will not let you assign a value that is too big for the type. This will generate a compiler error. If you force it to with a case, the value will be truncated. Looked at in a bitwise way, if the type can only store 8 bits, and you try to force the value 1010101010101 into it with a case, you will end up with the bottom 8 bits, or 01010101 . In your example, regardless of what you do to myArray[2] , myArray[3] will contain '4'. There is no "spill over". You are trying to put something that is more than 4-bytes it will just lop off everything on the high end, leaving the bottom 4 bytes. On most systems, this will result in -2147483648 . From a practical standpoint, you want to just make sure this never, ever happens. These sorts of overflows often result in hard-to-solve defects. In other words, if you think there is any chance at all your values will be in the billions, don't use int . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307861",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/211871/"
]
} |
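A small demonstration of the "no spill over" point. It is written in Java to keep the illustrative snippets in one language; unlike C, where signed overflow is undefined behaviour, Java defines the wrap-around, but the key observation - neighbouring array elements are untouched - is the same.
public class OverflowDemo {
    public static void main(String[] args) {
        int[] myArray = {1, 2, Integer.MAX_VALUE, 4, 5};
        myArray[2] = myArray[2] + 1;   // wraps to -2147483648; nothing "spills" into element 3
        for (int i = 0; i < myArray.length; i++) {
            System.out.println("Element " + i + " holds " + myArray[i]);
        }
        // Prints 1, 2, -2147483648, 4, 5 -- elements 3 and 4 are unchanged.
        long big = 4_000_000_000L;     // does not fit in an int
        int truncated = (int) big;     // explicit narrowing keeps only the low 32 bits
        System.out.println("4 billion narrowed to int: " + truncated); // -294967296
    }
}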
307,949 | I am currently working on a system where there are Users, and each user has one or multiple roles. Is it good practice to use a List of Enum values on User? I can't think of anything better, but this doesn't feel right. enum Role{
Admin = 1,
User = 2,
}
class User{
...
List<Role> Roles {get;set;}
} | Why not use a Set? If using a List: It is easy to add the same role twice A naive comparison of lists won't work properly here: [User, Admin] is not the same as [Admin, User] Complex operations such as intersect and merge are not straightforward to implement If you are concerned about performance, then, for example in Java, there is EnumSet , which is implemented as a fixed-length array of booleans (the i'th boolean element answers whether the i'th Enum value is present in the set or not). For example, EnumSet<Role> . Also, see EnumMap . I suspect that C# has something similar. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307949",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/212028/"
]
} |
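Since the answer points to Java's EnumSet, here is a short sketch of what that looks like for the roles in the question (the translation of the asker's C# snippet into Java is mine):
import java.util.EnumSet;
import java.util.Set;
public class EnumSetRolesDemo {
    enum Role { ADMIN, USER }
    static class User {
        // EnumSet: no duplicates, order-independent equality, backed by a compact bit vector
        private final Set<Role> roles = EnumSet.noneOf(Role.class);
        void grant(Role role)      { roles.add(role); }
        boolean hasRole(Role role) { return roles.contains(role); }
    }
    public static void main(String[] args) {
        User alice = new User();
        alice.grant(Role.USER);
        alice.grant(Role.USER);                 // adding the same role twice has no effect
        alice.grant(Role.ADMIN);
        System.out.println(alice.hasRole(Role.ADMIN));              // true
        System.out.println(EnumSet.of(Role.USER, Role.ADMIN)
                .equals(EnumSet.of(Role.ADMIN, Role.USER)));        // true: order does not matter
    }
}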
307,966 | I've generally been working with PHP warnings and notices off, since I work on a lot of projects where the code is already in live production. Now, if I turn on warnings and notices on these live production websites, they'll be overloaded with them. In the projects that I work on at home, on local, I usually try to work away ALL warnings and notices. Sometimes there's no way to avoid a notice, so I just have to deal with looking at it until I decide to turn them off altogether. In the end, I don't know whether I'm wasting my time trying to get rid of all warnings and notices, or whether I'm actually doing this for the greater good. Hence my question: is it good practice to avoid warnings and notices altogether, or does it really not matter? | If warnings and notices are coming from your code, most definitely fix them. In my experience, 95% of them may be benign, but the other 5% highlight a real problem that can lead to countless hours spent chasing it down. If they are coming from third-party code that you must use for one reason or another, you generally don't have much choice. It's a different question if your legacy codebase is really large: then you can treat the legacy code as third-party, but require that new code be warning-free. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307966",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/210884/"
]
} |
307,981 | The primary use for Node.JS is of course as a full server stack, and I've used it in that manner to great success. However, a number of useful, interesting NPM packages deal with things like transpiling a styling language, adding typing information to typeless JavaScript, running JavaScript unit tests, even having a "piped" build system like Gulp. Currently, I work on a project that uses Tomcat, and is primarily written in Java. Java and Ant utilities have felt somewhat limiting in terms of interacting with our JavaScript files when building/testing, so I'm looking into the possibility of adding a dependency on NodeJS, and setting up build dependencies. Why would I do this? I distinctly want to avoid adding dependencies "because they're cool". I only want to add Node packages for scenarios where it is infeasible, or even deprecated, to solve particular problems using Java-based programs (eg, Ant). One example: Our JavaScript widget library, Dojo, has stated they will not support doing Dojo builds via Java for much longer, having largely moved to Node builds. Additionally, some CSS compression toolkits run using Java have stopped being maintained in favor of those like LESS. There are also developer tools like JavaScript unit tests, or TypeScript types we'd like to consider to make development more reliable. Using a dependency manager might also help us engineer a solution where we don't have Dojo's entire source code committed to our repository. The question: Proper project layout and potential pitfalls What would be some reliable practices to follow if I want to use Node as a build/test/development dependency, but not require it for the final, packaged app (And, pros/cons of different approaches)? In this scenario, does it actually make sense to include a package.json in the root project hierarchy? Should Node-related build tasks be invoked via Ant, or does it make sense to require separate commandline actions to invoke them? Under what scenarios should developers/build machines run "npm install", and should it always be automated? If based on experience people tend to find that the answer to those questions don't matter, that is also a helpful answer - but I'm asking for experience to hopefully avoid some design pitfalls. | If warnings and notices are coming from your code, most definitely fix it. From my experience, in 95% it may be benign, but the 5% highlight a real problem that may lead to countless hours spent chasing. If they are coming from the third-party code that you must use for one reason or the other, you generally don't have much choice. It's a different question if your legacy codebase is really large, then you can treat legacy code as third-party, but require the new code be warning-free. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/307981",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90889/"
]
} |
308,036 | Is it a good practice to replace constants used outside of classes by getters? As an example, is it better to use if User.getRole().getCode() == Role.CODE_ADMIN or if User.getRole().isCodeAdmin() ? That would lead to this class: class Role {
constant CODE_ADMIN = "admin"
constant CODE_USER = "user"
private code
getRoleCode() {
return Role.code
}
isCodeAdmin () {
return Role.code == Role.CODE_ADMIN
}
isCodeUser () {
return Role.code == Role.CODE_USER
}
} | First off, please note that doing something like entity.underlyingEntity.underlyingEntity.method() is considered a code smell according to the Law of Demeter . This way, you're exposing a lot of implementation details to the consumer. And each need of extension or modification of such a system will hurt a lot. So given that, I'd recommend you to have a HasRole or IsAdmin method on the User as per CodesInChaos' comment. This way, the way how the roles are implemented on the user remains implementation detail for the consumer. And also it feels more natural to ask the user what his role is instead of asking him about details of his role and then deciding based on that. Please also avoid using string s unless where necessary. name is a good example of string variable because the contents are unknown beforehand. On the other hand, something like role where you have two distinct values that are well known at compilation time, you'd better use strong typing. That's where enumeration type comes into play... Compare public bool HasRole(string role) with public enum Role { Admin, User }
public bool HasRole(Role role) The second case gives me a lot more idea of what I should be passing around. It also prevents me from erroneously passing in an invalid string in case I had no idea about your role constants. Next is the decision on how will the role look. You can either use enum directly stored on the user: public enum Role
{
Admin,
User
}
public class User
{
private Role _role;
public bool HasRole(Role role)
{
return _role == role;
}
// or
public bool IsAdmin()
{
return _role == Role.Admin;
}
} On the other hand, if you want your role to have a behaviour itself, it should definitely again hide the details of how its type is being decided: public enum RoleType
{
User,
Admin
}
public class Role
{
private RoleType _roleType;
public bool IsAdmin()
{
return _roleType == RoleType.Admin;
}
public bool IsUser()
{
return _roleType == RoleType.User;
}
// more role-specific logic...
}
public class User
{
private Role _role;
public bool IsAdmin()
{
return _role.IsAdmin();
}
public bool IsUser()
{
return _role.IsUser();
}
} This is however quite verbose and the complexity would rise with each role addition - that's usually how the code ends up when you try to fully adhere to the Law of Demeter. You should improve the design, based on the concrete requirements of the system being modeled. According to your question, I guess you'd better go with the first option with enum directly on User . Should you need more logic on the Role , the second option should be considered as a starting point. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308036",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/174167/"
]
} |
308,108 | We are designing coding standards, and are having disagreements as to if it is ever appropriate to break code out into separate functions within a class, when those functions will only ever be called once. For instance: f1()
{
f2();
f4();
}
f2()
{
f3()
// Logic here
}
f3()
{
// Logic here
}
f4()
{
// Logic here
} versus: f1()
{
// Logic here
// Logic here
// Logic here
} Some argue that it is simpler to read when you break up a large function using separate single-use sub-functions. However, when reading code for the first time, I find it tedious to follow the logic chains and optimize the system as a whole. Are there any rules typically applied to this sort of function layout? Please note that unlike other questions, I am asking for the best set of conditions to differentiate allowable and non-allowable uses of single-call functions, not just whether they are allowed. | The rationale behind splitting functions is not how many times (or how often) they will be called; it's keeping them small and preventing them from doing several different things. Bob Martin's book Clean Code gives good guidelines on when to split a function: Functions should be small; how small? See the bullet below. Functions should do only one thing. So if the function is several screens long, split it. If the function does several things, split it. If the function is made of sequential steps aimed at a final result, then there is no need to split it, even if it's relatively long. But if the function does one thing, then another, then another, and then another, with conditions, logically separated blocks, etc., it should be split. As a result of that logic, functions should usually be small. If f1() does authentication, f2() parses input into smaller parts, f3() makes calculations, and f4() logs or persists results, then they obviously should be separated, even when every one of them will be called just once. That way you can refactor and test them separately, besides the added advantage of being easier to read. On the other hand, if all the function does is: a=a+1;
a=a/2;
a=a^2
b=0.0001;
c=a*b/c;
return c; then there is no need to split it, even when the sequence of steps is long. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308108",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/172875/"
]
} |
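A compact Java sketch of the f1()/f2()/f3()/f4() shape described in the answer above (the job, the method names and the steps are all invented): each piece does one thing and can be tested on its own, even though each is called exactly once.
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;
public class ReportJob {
    public void run(String rawInput) {        // the former "one big function", now a readable outline
        authenticate();
        List<Integer> values = parse(rawInput);
        int total = calculate(values);
        persist(total);
    }
    private void authenticate() {             // one thing: check that we may proceed
        System.out.println("authenticated");
    }
    private List<Integer> parse(String rawInput) {   // one thing: raw text -> numbers
        return Arrays.stream(rawInput.split(","))
                     .map(String::trim)
                     .map(Integer::parseInt)
                     .collect(Collectors.toList());
    }
    private int calculate(List<Integer> values) {    // one thing: the arithmetic
        return values.stream().mapToInt(Integer::intValue).sum();
    }
    private void persist(int total) {                // one thing: record the result
        System.out.println("stored total = " + total);
    }
    public static void main(String[] args) {
        new ReportJob().run("1, 2, 3, 4");
    }
}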
308,178 | I am trying to understand these classifications and why they exist. Is my understanding right? If not, what? P is polynomial complexity, or O(n^k) for some non-negative real number k , such as O(1), O(n^(1/2)), O(n^2), O(n^3), etc. If a problem belongs to P, then there exists at least one algorithm that can solve it from scratch in polynomial time. For example, I can always figure out if some integer n is prime by looping over 2 <= k <= sqrt(n) and checking at each step if k divides n . NP is non-deterministic polynomial complexity. I don't really know what it means for it to be non-deterministic. I think it means it is easy to verify in polynomial time, but may or may not be polynomial time to solve from scratch if we didn't already know the answer. Since it may be solvable in polynomial time, all P problems are also NP problems. Integer factorization gets quoted as an example of NP, but I don't understand why it's not P, personally, since trial factorization takes O(sqrt(n)) time. NP-Complete I don't understand at all, but the Traveling Salesman Problem is quoted as an example of this. But in my opinion the TSP problem might just be NP, because it takes something like O(2^n n^2) time to solve, but O(n) to verify if you are given the path up front. NP-Hard I assume is just full of unknowns. Hard to verify, hard to solve. | You're basically correct about P and NP, but not about NP-hard and NP-complete. For starters, here are the super-concise definitions of the four complexity classes in question: P is the class of decision problems which can be solved in polynomial time by a deterministic Turing machine. NP is the class of decision problems which can be solved in polynomial time by a non-deterministic Turing machine. Equivalently, it is the class of problems which can be verified in polynomial time by a deterministic Turing machine. NP-hard is the class of decision problems to which all problems in NP can be reduced in polynomial time by a deterministic Turing machine. NP-complete is the intersection of NP-hard and NP. Equivalently, NP-complete is the class of decision problems in NP to which all other problems in NP can be reduced in polynomial time by a deterministic Turing machine. And here's an Euler diagram from Wikipedia showing the relationships between these four classes (assuming that P is not equal to NP). The part that I assume you're most unfamiliar with or confused by is the notion of a "polynomial time reduction" from problem X to problem Y. A reduction from X to Y is simply an algorithm A which solves X by making use of some other algorithm B which solves problem Y. This reduction is called a "polynomial time reduction" if all parts of A other than B have a polynomial time complexity. As a trivial example, the problem of finding the smallest element in an array is constant-time reducible to the sorting problem, since you can sort the array and then return the first element of the sorted array. One thing that's easy to miss about the NP-hard definition is that the reduction goes from NP problems to the NP-hard problem, but not necessarily vice versa . This means that NP-hard problems might be in NP, or in a much higher complexity class (as you can see from the Euler diagram), or they might not even be decidable problems. That's why people often say something like "NP-hard means at least as hard as NP" when trying to explain this stuff informally.
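To make the "verifiable in polynomial time" characterization of NP concrete, here is a small Java sketch, using subset sum as the example problem (the choice of problem and all names are mine): checking a proposed certificate is one cheap pass over the input, even though finding a certificate may require searching exponentially many subsets.
import java.util.List;
public class SubsetSumVerifier {
    // Polynomial-time (in fact linear) verifier: given a certificate -- the chosen subset --
    // checking it is easy, which is exactly the NP-style guarantee.
    // (For simplicity this assumes the input values are distinct.)
    static boolean verify(List<Integer> certificate, List<Integer> numbers, int target) {
        int sum = 0;
        for (int value : certificate) {
            if (!numbers.contains(value)) return false;  // certificate must come from the input
            sum += value;
        }
        return sum == target;
    }
    public static void main(String[] args) {
        List<Integer> numbers = List.of(3, 34, 4, 12, 5, 2);
        int target = 9;
        // Finding this certificate may require trying many subsets;
        // verifying it takes one quick pass.
        List<Integer> certificate = List.of(4, 5);
        System.out.println(verify(certificate, numbers, target));   // true
    }
}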
The halting problem is a good example of an NP-hard problem that's clearly not in NP, as Wikipedia explains : It is easy to prove that the halting problem is NP-hard but not NP-complete. For example, the Boolean satisfiability problem can be reduced to the halting problem by transforming it to the description of a Turing machine that tries all truth value assignments and when it finds one that satisfies the formula it halts and otherwise it goes into an infinite loop. It is also easy to see that the halting problem is not in NP since all problems in NP are decidable in a finite number of operations, while the halting problem, in general, is undecidable. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308178",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/212406/"
]
} |
308,245 | Suppose I am limited to use C++ by the environment in the project. Is it good to prevent the use of some language features that C++ has but Java doesn't have (e.g.: multiple inheritance, operator overloading)? I think the reasons are: As Java is newer than C++, if Java doesn't provide a feature that C++ has, it means that the feature is not good, so we should avoid using it. C++ code with C++ specific features (e.g.: friend functions, multiple inheritance) can only be maintained or reviewed by C++ programmers, but if we just write C++ like Java (without C++ language specific feature), the code can be maintained or reviewed by both C++ and Java programmers. You may be asked to convert the code to Java some day Code without C++ specific features is usually more maintainable Every C++ language specific feature (e.g.: multiple inheritance) should have alternatives to be implemented in Java. If it doesn't, that means the design pattern or code architecture is problematic. Is that true? | No. This is woefully and terribly misguided. Java features are not somehow better than C++ features, especially in a vacuum. If your programmers don't know how to use a feature, train or hire better developers; limiting your developers to the worst of your team is a quick and easy way to lose your good developers. YAGNI . Solve your actual problem today, not the ghost of some future problem. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308245",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196142/"
]
} |
308,250 | Suppose I have a segment of code to connect to the internet and show the connection results, like this: HttpRequest* httpRequest=new HttpRequest();
httpRequest->setUrl("(some domain .com)");
httpRequest->setRequestType(HttpRequest::Type::POST);
httpRequest->setRequestData("(something like name=?&age=30&...)");
httpRequest->setResponseCallback([=](HttpClient* client, HttpResponse* response){
string responseString=response->getResponseDataString();
if(response->getErrorCode()!=200){
if(response->getErrorCode()==404){
Alert* alert=new Alert();
alert->setFontSize(30);
alert->setFontColor(255,255,255);
alert->setPosition(Screen.MIDDLE);
alert->show("Connection Error","Not Found");
}else if((some other different cases)){
(some other alert)
}else{
Alert* alert=new Alert();
alert->setFontSize(30);
alert->setPosition(Screen.MIDDLE);
alert->setFontColor(255,255,255);
alert->show("Connection Error","unknown error");
}
}else{
(other handle methods depend on different URL)
}
} the code is long and commonly used, but the code above does not require any extra things such as custom functions and classes (HttpRequest and Alert are both provided by the framework by default), and although the code segment is long, it is straightforward and not complex (it is long just because there are bundles of settings such as url, font size...), and the code segment varies only a little between uses (e.g.: url, request data, error code handling cases, normal handling cases...). My question is: is it acceptable to copy and paste long but straightforward code instead of wrapping it in a function, in order to reduce the dependencies in the code? | You need to consider the cost of change. What if you wanted to change how connections are made? How easy would it be? If you have a lot of duplicated code, then finding all the places that need changing could be quite time-consuming and error-prone. You also need to consider clarity. Most likely, having to look at 30 lines of code isn't going to be as easy to understand as a single call to a "connectToInternet" function. How much time is going to be lost trying to understand the code when new functionality needs to be added? There are certain rare cases where duplication isn't a problem, for example if you are doing an experiment and the code is going to be thrown away at the end of the day. But in general, the cost of duplication outweighs the small time savings of not having to pull the code out into a separate function. See also https://softwareengineering.stackexchange.com/a/103235/63172 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308250",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/196142/"
]
} |
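To show the direction this answer points in, here is a rough sketch of pulling the repeated alert and request handling behind small helpers. It is written in Java rather than the C++ game framework of the question, and every class, method and URL below is invented for illustration; each call site then states only what actually differs.
import java.util.function.Consumer;
public class ConnectionHelper {
    // One place that knows how an error alert is configured and shown.
    static void showErrorAlert(String title, String message) {
        System.out.println("[ALERT 30pt, centered] " + title + ": " + message);
    }
    // One place that knows how a POST is made and how status codes map to alerts.
    static void post(String url, String requestData, Consumer<String> onSuccess) {
        int statusCode = fakeSend(url, requestData);      // stand-in for the real HTTP call
        if (statusCode == 404) {
            showErrorAlert("Connection Error", "Not Found");
        } else if (statusCode != 200) {
            showErrorAlert("Connection Error", "unknown error");
        } else {
            onSuccess.accept("response body for " + url);  // only the per-call logic varies
        }
    }
    private static int fakeSend(String url, String requestData) { return 200; }
    public static void main(String[] args) {
        // Call sites now carry just the url, the data and the success handling.
        post("https://example.com/login", "name=ann&age=30",
             body -> System.out.println("logged in: " + body));
    }
}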
308,252 | Since I started programming, I've always been taught to leave a trailing blank line at the end of my files, the reason usually being something relating to how it makes concatenated files easier to read when using cat . While I can't find an example right now, GitHub indicates missing blank lines at the end of a file using a red symbol, or at least, used to - so clearly it's frowned upon by a considerable chunk of the community. Working with Go lately, I noticed that gofmt doesn't like blank lines at the end of a file, and my Vim plugin removes them automatically. Why are blank lines at the end of a file discouraged rather than enforced in Go? | You need to consider the cost of change. What if you wanted to change how connections are made? How easy would it be? If you have a lot of duplicated code, then finding all the places that need changing could be quite time consuming and error prone. You also need to consider clarity. Most likely, having to look at 30 lines of code isn't going to be as easy to understand as a single call to a "connectToInternet" function. How much time is going to be lost trying to understand the code when new functionality needs to be added? There are certain rare cases where duplication isn't a problem. For example, if you are doing an experiment and the code is going to be thrown away at the end of the day. But in general, the cost of duplication outweighs the small time savings of not having to pull the code out into a separate function . See also https://softwareengineering.stackexchange.com/a/103235/63172 | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308252",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/71709/"
]
} |
308,279 | I am trying to explain segmentation faults to someone, and I was thinking about the level 256 kill-screen in Pacman, and how it's triggered by integer overflow, and how similar the behavior is to the "unknown state" oft-described in a segmentation fault. I want to say this is a good example of what I call an "unhandled segfault", but I would rather get a second opinion before I potentially spread misinformation. I tried looking it up, but all I'm getting are documents on the bug itself, as well as that collab between Hipster Whale and Namco. So, would you consider the behavior in level 256 of Pacman to be an example of unhandled segmentation violation? | Definitely not. Accessing a memory address you didn't allocate is always a programming error. And acting on the information you get out of it produces undefined behavior, that much is accurate. I have no idea what platform the original Pac-man was written for, but I'm pretty sure it exhibited this behavior just like any other von Neumann machine. However, "segmentation fault" is a technical term for a much more specific condition. It happens when the computer automatically detects that this happened and terminates the process rather than allow undefined behavior to occur. This requires a specific (segmented) memory model with sophisticated ownership tagging. I don't think 1980 arcade games had that, and in fact the behavior of the game suggests that the error was not detected, and the undefined behavior did occur. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308279",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/198539/"
]
} |
308,515 | Why would you run unit tests on a CI server? Surely, by the time something gets committed to master, a developer has already run all the unit tests before and fixed any errors that might've occurred with their new code. Isn't that the point of unit tests? Otherwise they've just committed broken code. | Surely, by the time something gets committed to master, a developer has already run all the unit tests before and fixed any errors that might've occurred with their new code. Or not. There can be many reasons why this can happen: The developer doesn't have the discipline to do that They have forgotten They didn't commit everything and pushed an incomplete commit set (thanks Matthieu M. ) They only ran some tests, but not the whole suite (thanks [nhgrif][2]) They tested on their branch prior to merging (thanks [nhgrif][2] * 2) But the real point is to run the tests on a machine that is not the developer machine. One that is configured differently. This helps catch out issues where tests and/or code depend on something specific to a developer box (configuration, data, timezone, locale, whatever). Other good reasons for CI builds to run tests: Testing on different platforms other than the main development platforms, which may be difficult for a developer to do. (thanks [TZHX][3]) Acceptance/Integration/End to End/Really long running tests may be run on the CI server that would not be run on a developer box usually. (thanks [Ixrec][4]) A developer may make a tiny change before pushing/committing (thinking this is a safe change and therefore not running the tests). (thanks [Ixrec][4] * 2) The CI server configuration doesn't usually include all the developer tools and configuration and thus is closer to the production system CI systems build the project from scratch every time, meaning builds are repeatable A library change could cause problems downstream - a CI server can be configured to build all dependent codebases, not just the library one [2]: What's the point of running unit tests on a CI server? )
[3]: What's the point of running unit tests on a CI server? [4]: https://softwareengineering.stackexchange.com/users/161917/ixrec | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175261/"
]
} |
308,640 | I always seem to write code in C that is mostly object oriented, so say I had a source file or something I would create a struct then pass the pointer to this struct to functions (methods) owned by this structure: struct foo {
int x;
};
struct foo* createFoo(); // mallocs foo
void destroyFoo(struct foo* foo); // frees foo and its things Is this bad practice? How do I learn to write C the "proper way". | No, this is not bad practice, it is even encouraged to do so, although one could even use conventions like struct foo *foo_new(); and void foo_free(struct foo *foo); Of course, as a comment says, only do this where appropriate. There is no sense in using a constructor for an int . The prefix foo_ is a convention followed by a lot of libraries, because it guards against clashing with naming of other libraries. Other functions often have the convention to use foo_<function>(struct foo *foo, <parameters>); . This allows your struct foo to be an opaque type. Have a look at the libcurl documentation for the convention, especially with "subnamespaces", so that calling a function curl_multi_* looks wrong at first sight when the first parameter was returned by curl_easy_init() . There are even more generic approaches, see Object-Oriented Programming With ANSI-C | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308640",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/205999/"
]
} |
308,766 | I'm asking in terms of a loop, obviously break is important in switch statements. Whether or not switch statements themselves are code smells is a separate issue. So consider the following use cases for iterating a data structure: You want to do something to the entire structure (no break needed) You want to do something to part of a data structure. You want to find something(s) in the data structure (which may or may not involve iterating the entire structure) The above list seems more-or-less exhaustive to me, maybe I'm missing something there. Case 1 can be thrown right out, we can use map / forEach . Case 2 sounds like filter or reduce would work. For case 3, needing to iterate the data structure to find something seems plain wrong, either the data structure itself should provide a relevant method or you are likely using the wrong data structure. While not every javascript data structure (or implementation) has those methods it's trivially simple to write the relevant functions for pretty much any javascript data structure. I saw this when researching but it is explicitly in the context of C/C++. I could understand how it would be more-or-less a necessity in C, but my understanding is that C++ has lambdas and many data structures are generally objects (e.g. std::vector , std::map ), so I'm not sure I understand why the answers are all so universally in favor of break there, but I don't feel I know C++ well enough to even begin to comment. I also realize that for some corner-case exceedingly large data structure the cost of iterating the entire structure may be unacceptably high, but I doubt those are very common when working even in node.js. Certainly it's the kind of thing you'd want to profile. So I just don't see a use-case for break in today's javascript. Am I missing something here? | Having a break out of a loop is no different than having that loop get refactored out to a function of its own and a return statement in a guard clause. while(condition) {
if(test) { break; }
doStuff;
} vs doMuchStuff();
function doMuchStuff() {
while(condition) {
if(test) { return; }
doStuff;
}
} Those are effectively the same. Single keywords are not code smells. They are tools for flow control. Many of them are variations on goto wrapped with some other level of safety to avoid the nightmare of spaghetti code. In judging whether a particular bit of code is problematic, it is necessary to look at how it works and whether it is appropriate for the language's constructs. The keyword alone is not sufficient to establish a code smell. It might be a whiff in that it can be misused, but going on witch hunts because one sees a switch or break or some other flow control construct is counterproductive and leads to style guides that prevent people from writing understandable, straightforward code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308766",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/176474/"
]
} |
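For the "find something" case discussed in the question above, here is a small sketch comparing an explicit break with Array.prototype.find; the data and predicate are invented for illustration:

const users = [
  { id: 1, active: false },
  { id: 2, active: true },
  { id: 3, active: true },
];

// Explicit loop with break: stop as soon as a match is found.
let firstActive = null;
for (const user of users) {
  if (user.active) {
    firstActive = user;
    break;
  }
}

// Equivalent using the built-in method: find() also stops at the first match.
const firstActiveViaFind = users.find(user => user.active);

console.log(firstActive, firstActiveViaFind); // both log { id: 2, active: true }

Both versions short-circuit; the choice is about readability in context rather than about one keyword being inherently smelly.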
308,797 | I'm developing a simple application that crawls web pages to obtain some information. For this I have used and tested several libraries, such as crawler4j, jsoup, jaunt and htmlunit.
I have switched from one API to another several times because I sometimes found that another library served a certain feature better than the one I was using. I may have to do this again, and every time I do, I have to go through the code making various refactorings. So I decided to separate the calls to these APIs into a kind of encapsulating class that holds all the operations I need from the API. Is there a design pattern to mitigate this problem? Below is a simple example in which I use a "Handler" suffix. EDIT - the final version: public interface CrawlerApiHandler {
String visit (String url);
}
public class JsoupCrawlerApiHandler implements CrawlerApiHandler {
public static final String USER_AGENT = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.152 Safari/537.36";
@Override
public String visit(String url) {
try {
return Jsoup.connect(url).timeout(20000)
.userAgent(USER_AGENT).get().toString();
} catch (IOException e) {
//LOG
return "";
}
}
}
public class JauntApiHandlerImpl implements CrawlerApiHandler {
UserAgent userAgent;
public JauntApiHandlerImpl(UserAgent userAgent) {
this.userAgent = userAgent;
}
@Override
public String visit(String url) {
try {
return userAgent.visit(url).toString();
} catch (ResponseException e) {
return "";
}
}
} | Having a break out of a loop is no different than having that loop get refactored out to a function of its own and a return statement in a guard clause. while(condition) {
if(test) { break; }
doStuff;
} vs doMuchStuff();
function doMuchStuff() {
while(condition) {
if(test) { return; }
doStuff;
}
} Those are effectively the same. Single keywords are not code smells. They are tools for flow control. Many of them are variations on goto wrapped with some other level of safety to avoid the nightmare of spaghetti code. In judging if a particular bit of code is problematic, it is necessary to look at how it works and if it is appropriate for the language constructs. The key word alone is not sufficient to establish a code smell. It might be a whiff in that it can be misused, but going on witch hunts because one sees a switch or break or some other flow control construct is counterproductive and leads to style guides that prevent people from writing understandable, straight forward code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308797",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175424/"
]
} |
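The interface-plus-implementations layout in the question above is essentially the Strategy/Adapter approach: callers depend only on CrawlerApiHandler, so swapping crawler libraries touches a single place. A hypothetical usage sketch (PageDownloader is an invented class name, assuming the question's interface):

// Depends only on the CrawlerApiHandler abstraction from the question,
// never on Jsoup or Jaunt directly.
public class PageDownloader {
    private final CrawlerApiHandler handler;

    public PageDownloader(CrawlerApiHandler handler) {
        this.handler = handler;
    }

    public String download(String url) {
        return handler.visit(url);
    }
}

// Choosing (or swapping) the underlying library happens in exactly one place:
//     PageDownloader downloader = new PageDownloader(new JsoupCrawlerApiHandler());
//     String html = downloader.download("http://example.com");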
308,817 | I was once advised that a C++ program should ultimately catch all exceptions. The reasoning given at the time was essentially that programs which allow exceptions to bubble up outside of main() enter a weird zombie state. I was told this several years ago and in retrospect I believe the observed phenomenon was due to lengthy generation of exceptionally large core dumps from the project in question. At the time this seemed bizarre but convincing. It was totally nonsensical that C++ should "punish" programmers for not catching all exceptions but the evidence before me did seem to back this up. For the project in question, programs that threw uncaught exceptions did seem to enter a weird zombie state -- or as I suspect the cause was now, a process in the midst of an unwanted core dump is unusually hard to stop. (For anyone wondering why this wasn't more obvious at the time: The project generated a large amount of output in multiple files from multiple processes which effectively obscured any sort of aborted (core dumped) message and in this particular case, post-mortem examination of core dumps wasn't an important debugging technique so core dumps weren't given much thought. Issues with a program usually didn't depend on state accumulated from many events over time by a long lived program but rather the initial inputs to a short lived program (<1 hour) so it was more practical to just rerun a program with the same inputs from a debug build or in a debugger to get more info.) Currently, I'm unsure of whether there is any major advantage or disadvantage of catching exceptions solely for the purpose of preventing exceptions from leaving main() . The small advantage I can think of for allowing exceptions to bubble up past main() is that it causes the result of std::exception::what() to be printed to the terminal (at least with gcc compiled programs on Linux). On the other hand, this is trivial to achieve by instead catching all exceptions derived from std::exception and printing the result of std::exception::what() and if it's desirable to print a message from an exception that doesn't derive from std::exception then it must be caught before leaving main() in order to print the message. The modest disadvantage I can think of for allowing exceptions to bubble up past main() is that unwanted core dumps may be generated. For a process using a large amount of memory this can be quite a nuisance and controlling core dumping behavior from a program requires OS-specific function calls. On the other hand, if a core dump and exit is desired then this could instead be achieved at any time by calling std::abort() and an exit without core dump can be achieved at any time by calling std::exit() . Anecdotally, I don't think I've ever seen the default what(): ... message printed by a widely distributed program upon crashing. What, if any, are the strong arguments for or against allowing C++ exceptions to bubble up past main() ? Edit: There are a lot of general exception handling questions on this site. My question is specifically about C++ exceptions that cannot be handled and have made it all the way to main() -- maybe an error message can be printed but it's an immediately show stopping error. | One problem with letting exceptions go past main is that the program will end with a call to std::terminate which default behavior is to call std::abort . It is only implementation defined if stack unwinding is done before calling terminate so your program can end without calling a single destructor! 
If you have some resource that really needed to be restored by a destructor call, you're in a pickle... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308817",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/128967/"
]
} |
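A minimal sketch of the kind of "backstop" handler weighed in the question above: catch at the top of main, report what(), and exit with a failure status instead of letting the exception escape. runApplication and its runtime_error are placeholders for the real program:

#include <cstdlib>
#include <exception>
#include <iostream>
#include <stdexcept>

void runApplication();   // placeholder for the real program logic

int main() {
    try {
        runApplication();
        return EXIT_SUCCESS;
    } catch (const std::exception& e) {
        // Catching here guarantees stack unwinding for everything below main,
        // unlike letting the exception reach std::terminate.
        std::cerr << "Fatal error: " << e.what() << '\n';
    } catch (...) {
        std::cerr << "Fatal error of unknown type\n";
    }
    return EXIT_FAILURE;
}

void runApplication() {
    throw std::runtime_error("example failure");   // stand-in for real work
}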
308,829 | I have often heard developers mention that Java can't " do Real Time ", meaning a Java app running on Linux cannot meet the requirements of a deterministic real-time system, such as something running on RIOT-OS, etc. I am trying to understand why . My SWAG tells me that this is probably largely due to Java's Garbage Collector, which can run at any time and totally pause the system. And although there are so-called "pauseless GCs" out there, I don't necessarily believe their advertising, and also don't have $80K-per-JVM-instance to fork over for a hobby project! I was also reading this article about running drone software on Linux . In that article, the author describes a scenario where Linux almost caused his drone to crash into his car: I learnt a hard lesson after choosing to do the low level control loop (PIDs) on the Pi - trying to be clever I decided to put a log write in the middle of the loop for debugging - the quad initially flied fine but then Linux decided to take 2seconds to write one log entry and the quad almost crashed into my car! Now although that author wrote his drone software in C++, I would imagine a Java app running on Linux could very well suffer the same fate. According to Wikipedia: A system is said to be real-time if the total correctness of an operation depends not only upon its logical correctness, but also upon the time in which it is performed. So to me, this means " You don't have real-time if total correctness requires logical correctness and timeliness. " Let's pretend I've written a Java app to be super performant, and that I've "squeezed the lemon" so to speak, and it couldn't reasonably be written (in Java) to be any faster. All in all, my question is: I'm looking for someone to explain to me all/most of the reasons for why a Java app running n Linux would fail to be a "real time app". Meaning, what are all the categories of things on a Java/Linux stack that prevent it from "being timely", and therefore, from being " totally correct "? As mentioned, it looks like GC and Linux log-flushing can pause execution, but I'm sure there are more things outside the Java app itself that would cause bad timing/performance, and cause it to meet hard deadline constraints. What are they? | A software is real time not when it is as fast as possible, but when it is guaranteed that a process completes within some determined time slot. In a soft real time system, it is good but not absolutely necessary that this is guaranteed. E.g. in a game, the calculations necessary for a frame should complete within the period of a frame, or the framerate will drop. This degrades the quality of the gameplay, but does not make it incorrect. E.g. Minecraft is enjoyable even though the game occasionally stutters. In a hard real time system, we don't have such liberties. A flight control software must react within some deadline, or the vehicle could crash. And the hardware, OS, and software must work together to support real time. For example, the OS has a scheduler to decide when which thread is run. For a real-time program, the scheduler has to guarantee big enough, frequent enough time slots. Any other process that wants to execute in such a slot must be interrupted in favour of the real-time process. This requires a scheduler with explicit real-time support. Also, a user-space program will do system calls into the kernel. In a real-time OS, these too must be real-time. E.g. 
writing to a file handle would have to be guaranteed to take no more than x time units, which would solve the log problem. This impacts how such a system call can be implemented, e.g. how buffers can be used. It also means that a call must fail if it can't complete within the required time, and that the user-space program must be prepared to deal with these cases. In the case of Java, the JVM and the standard library are also kernel-like and would need explicit real-time support. For anything that is real-time, your programming style will change. Since you don't have endless time, you have to restrict yourself to small problems. All your loops must be bounded by some constant. All memory can be allocated statically, since you have an upper bound on size. Unrestricted recursion is forbidden. This goes against a lot of best practices, but they don't apply to real-time systems. E.g. a logging system might use a statically allocated ring buffer to store log messages as they are written. Once the buffer is full and wraps around to the start, old logs are discarded, or this condition might be treated as an error (see the sketch after this record). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308829",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/154753/"
]
} |
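A rough sketch of the preallocated ring-buffer logger mentioned at the end of the answer above: fixed capacity, no allocation while logging, oldest entries overwritten once the buffer wraps. This only illustrates the idea; it is not thread-safe and is not by itself a real-time guarantee:

public final class RingBufferLog {
    private final String[] entries;
    private int next = 0;       // index of the slot to write next
    private int count = 0;      // how many slots currently hold a message

    public RingBufferLog(int capacity) {
        entries = new String[capacity];   // all storage reserved up front
    }

    public void log(String message) {
        entries[next] = message;          // overwrite the oldest entry when full
        next = (next + 1) % entries.length;
        if (count < entries.length) {
            count++;
        }
    }

    public int size() {
        return count;
    }
}

The design choice is the important part: the cost of log() is bounded and does not depend on how long the program has been running.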
308,842 | Until now, I always believed that you should learn programming languages that make you do low-level stuff (e.g. C) to understand what's really happening under the hood and how the computer really works. this question , this question and an answer from this question reinforced that belief: The more I program in the abstracted languages, the more I miss what got me into computers in the first place: poking around the computer and seeing what twitches. Assembler and C are very much suited for poking :) Eventually, I thought you will become a better programmer knowing this, because you'll know what's happening rather than assuming that everything is magic. And knowing/writing low-level stuff is much more interesting than writing business programs, I think. But a month ago, I came across this book called Structure and Interpretation of Computer Programs . Everything in the web suggests that this is one of the best computer science books, and you will get better as a programmer when reading it. I'm really enjoying the concepts a lot. But I find that the book makes it seem that abstraction is the best concept in computer science while only spending one chapter on the low-level part. My goal is to become a better programmer, to understand computer science more and this got me really confused. Mainly shouldn't we avoid all abstractions and observe what really is happening at the very low-level? I know why abstraction is great, but doesn't that prevent you from learning how computers work? | No, abstractions don't prevent you from understanding how things work. Abstractions allow you to understand why (to what end) things work the way they do. First off, let's make one thing clear: pretty much everything you've ever known is at a level of abstraction. Java is an abstraction, C++ is an abstraction, C is an abstraction, x86 is an abstraction, ones and zeroes are an abstraction, digital circuits are an abstraction, integrated circuits are an abstraction, amplifiers are an abstraction, transistors are an abstraction, circuits are an abstraction, semiconductors are an abstraction, atoms are an abstraction, electron bands are an abstraction, electrons are an abstraction (and for all we know it could be abstractions all the way down). By the logic that low level knowledge is required to understand how something really works, if you want to understand how computers really work, you need to study physics, then electrical engineering, then computer engineering, then computer science, and essentially work your way up in terms of abstraction. (I've taken the liberty of not mentioning that you also need to study math first , to really understand physics.) Now realistically, the days when you could make sense of computers and programming by building your way up from the lowest level details were the earliest days of computers. By now, this field has advanced too much, to the point where it can't possibly be rediscovered from scratch by a single person. There are hundreds of thousands of very qualified people specializing at every level of abstraction, working hard daily to make advances that you can't hope to understand without spending years of studying a specific portion thoroughly and committing to keeping up with the latest advancements there. Consider this Java snippet: public void Example() {
Object obj = new String("...");
// ...
} You can easily understand what this snippet promises (and what it doesn't promise), at the level of the Java language. But unless you are well-versed in topics like stack frames, heap data structures, concurrent generational tracing garbage collection, memory compacting, static analysis, escape analysis, virtual machines, dynamic analysis, assembly language and executable space protection, you are wrong if you think that you really know all the low level details that are involved in actually running it on a real computer. Alternatively, consider this C snippet: void example(int i) {
int j;
if(i == 0) {
j = i * 2;
printf("Received zero, printing %d", j);
} else {
printf("Received non-zero, printing %d", j);
}
} If you show it to a beginner, they'll tell you that when the argument is non-zero, the residual contents of a memory location will be printed, because "variables are actually just memory addresses behind the scenes" and "when you don't initialize them it simply means that you don't move anything to their address" and "there's nothing magic about them". If you show it to a non-beginner, they'll tell you that this program's behavior is undefined for non-zero values of this function's parameter and the compiler could potentially remove the conditional, treat all arguments to this function as zero, replace all of the function's call spots with calls that pass zero as the argument, set all variables that are ever passed as arguments to this function to zero, or do other seemingly paradoxical things . It's important to realize that you're always working at a level of abstraction. The beginner in this example took everything he/she knows into account and arrived elegantly to a completely wrong answer because (a) he/she didn't read the spec of the language (which is an abstraction on purpose , not because C programmers aren't clever enough to understand computer architecture) and (b) tried to reason about implementation details which he/she didn't fully grasp and which have evolved way beyond his/her mental model by now. This example is fictional, but it draws from everyday real-world misconceptions - the kind that sometimes lead to perilous bugs and occasionally famous security holes. It's also important to see the bigger picture. For example, if you don't understand higher abstractions well enough, you may find out that C has structs, struct pointers of equal sizes, incomplete type declarations and function pointers, and you'll likely see them as a bunch of unrelated features that could occasionally be useful. But if you understand a higher abstraction like OOP well enough, you'll recognize the aforementioned features as the building blocks for OOP concepts: structs can contain other structs (code reuse), data pointers can pass something as a reference (like classes), the fact that these pointers have the same size allows them to be substituted (subtyping), incomplete type declarations allow you to have opaque pointers to structs (private members) and function pointers allow you to build dispatch tables (polymorphism). In this fictional example, knowledge of an OOP language not only didn't prevent you from understanding C, but it actually taught you concepts that you can carry over to C. These concepts can be applied selectively when you need them to make your code easier to manage even when the language doesn't actively push you towards it. (I would argue that there is a similar relationship between OOP and FP, but let's not get carried away.) Some of the best programmers I've met are the way they are because they understand abstractions and they can carry their knowledge over to any language, and adapt it to any problem they need to solve, on any level they happen to be working at. Some of the worst programmers I've met are the way they are because they insist on focusing on details and trivia which they don't really understand and which a lot of the time aren't exactly up-to-date, or relevant to the problem, or applicable in the context they attempt to use them in, or have never been true in the first place. All you need to realize is that there is no single level of abstraction at any given point and a person isn't limited to a single level of abstraction at any given moment. 
You can understand one, then move on to another. You can employ one, then switch to another - any time you want. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308842",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/144461/"
]
} |
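A small sketch of the last point in the answer above: C function pointers used as a dispatch table, i.e. polymorphism assembled from lower-level parts. The shape types are invented for illustration:

#include <stdio.h>

/* "Virtual method table": one function pointer per operation. */
struct shape_ops {
    double (*area)(const void *shape);
};

struct circle { double radius; };
struct square { double side; };

static double circle_area(const void *shape) {
    const struct circle *c = shape;
    return 3.14159265358979 * c->radius * c->radius;
}

static double square_area(const void *shape) {
    const struct square *s = shape;
    return s->side * s->side;
}

static const struct shape_ops circle_ops = { circle_area };
static const struct shape_ops square_ops = { square_area };

/* Callers dispatch through the table instead of switching on a type tag. */
static double area_of(const struct shape_ops *ops, const void *shape) {
    return ops->area(shape);
}

int main(void) {
    struct circle c = { 2.0 };
    struct square s = { 3.0 };
    printf("%f %f\n", area_of(&circle_ops, &c), area_of(&square_ops, &s));
    return 0;
}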
308,935 | I have seen programs using this strategy and I have also seen posts considering this bad practice. However, the posts considering this bad practice have been written about C# or some other programming language where there is some error handling built in. This question is about C++. Further, the errors I am addressing are errors which are fatal enough to force a program to shut down, not some general "maybe-I-have-missed-something" error. By "using exceptions at the highest level" I mean this in a somewhat sloppy sense. It can be either something like int main(){
try {
//run
} catch (fatalException){
//handle error and shutdown
}
} or it can be something like this in for example a graphical application. void runApplication(){
try {
//run
} catch (fatalException){
//handle error and shutdown
}
} The alternative to this would be to handle this error where it happens and hierarchically return from functions, one-by-one until the program is terminated. The reason I can see for terminating fatal errors by a try-catch on the top level is that the reasons for this kind of error can be different, and is probably fairly unusual (corrupted databases, some out-of-memory errors, corrupted configuration files, etc...). Handling exceptions locally and return from functions one-by-one would make the code less clear and require much effort to handle problems which are unlikely to occur. However, I am not sure whether it is good practice to start the program with a try . Personally, I think it is ugly, but somehow this ugliness reflects the ugliness of the problem itself, so this may not be a reason to not use it for this case. EDIT I may have misunderstood the dupe post, but I do not think this solves the problem. The question is not about not-catching some exceptions. The point is that there is in most cases are other parts of the code which can handle the exception closer the the throw in a fairly satisfactory way, but to the cost of having to step through the hierarchy to the top and terminate. However, since the program would still need termination, are there in general any reason to catch earlier that on top level? One can argue that this implies that the lower levels are unable to decide how to handle this. But in this case I believe it is unclear. I mean "what error would have the authority to determine if a program needs to terminate?" | I have a question regarding the use of exceptions at the highest level of a program. I have seen programs using this strategy and I have also seen posts considering this bad practice. However, the posts considering this bad practice have been written in c# or some other programming language, where there are some error handling built in. This question is about the c++. The principles of [Structured] Exception Handling are largely language independent. Any Unhandled Exception will kill your program.
What you're seeing here are "backstop" Exception Handlers that do little more than record the exception somewhere (so that you can examine it later) and show the user a big, friendly "Oops" message instead of the nasty, system-generated one. The alternative to this would be to handle this error where it happens and hierarchically return from functions, one by one, until the program is terminated. That's a terrible idea. Use the Right Tool for the Right Job. Flow Control - regular call and return. Exceptional stuff that causes havoc - Exceptions. You've got the idea of why to throw Exceptions, but I think you're missing the [far] more important part, which is where and why to catch (or "handle") them. OK, so your database is corrupted. Do you just crash the program? Or do you let the user, say, open a backup copy of the database? Or open the program in some "diagnostic" mode to try and sort the problem out? These are actions that your program can perform - ways to "handle" your CorruptDatabaseException, say, if and when it happens. Yes, Exceptions are .. exceptional. You don't expect them to happen in normal operation but, by creating an Exception for such a case, at least you've thought about the possibility of it happening and put some "response" in place to deal with it.
It might well be that that action is just "shut the program down (and tell the user why)". In some cases that's all you can do and, for that, the backstop exception catcher will probably do. The point is that there are in most cases other parts of the code which can handle the exception closer to the throw in a fairly satisfactory way, but at the cost of having to step through the hierarchy to the top and terminate. Again, a crucial part of Exception Handling is that, having successfully handled an Exception, the rest of the program must be able to continue as though the Exception had never happened . If your program can't do that, then the Exception is still happening , in which case you need to re-throw it up the call chain until something can handle it (even if, eventually, that's just to kill the program). This is the principle behind the "finally" block seen in C#: "clean-up" code in any try-catch block that gets automatically executed as an Exception "passes through" it, on its way up through the call stack to being handled. However, since the program would still need termination, is there in general any reason to catch earlier than at the top level? If you were to code something like ... throw new KillProgramException(); ... then no. The intention is pretty clear and you wouldn't expect anything in your program to try and handle this (except, perhaps, the "backstop"). I mean, what error would have the authority to determine if a program needs to terminate? Depending on circumstance, it might be absolutely any of them! In some uber-critical section of code, an IndexOutOfRangeException might be just as fatal as DatabaseCorruptException. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308935",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/144639/"
]
} |
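A brief sketch of the answer's point about handling an exception only where something useful can be done, and otherwise letting it travel up to the top-level backstop. CorruptDatabaseException, openDatabase and openBackupCopy are hypothetical names used only for illustration:

#include <iostream>
#include <stdexcept>
#include <string>

// Hypothetical types and helpers used only for illustration.
struct CorruptDatabaseException : std::runtime_error {
    using std::runtime_error::runtime_error;
};

void openDatabase(const std::string& path) {
    throw CorruptDatabaseException("corrupt: " + path);  // simulate the failure
}

void openBackupCopy() {
    std::cout << "opened backup copy instead\n";
}

void loadData(const std::string& path) {
    try {
        openDatabase(path);
    } catch (const CorruptDatabaseException&) {
        // This level knows a sensible recovery, so it handles the exception and
        // the rest of the program can continue as if nothing had happened.
        openBackupCopy();
    }
    // Other exception types propagate upward untouched, eventually reaching
    // the top-level "backstop" handler.
}

int main() {
    loadData("accounts.db");
    return 0;
}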
308,972 | I've seen this part of PEP-8 https://www.python.org/dev/peps/pep-0008/#package-and-module-names I'm not clear on whether this refers to the file name of a module/class/package. If I had one example of each, should the filenames be all lower case with underscores if appropriate? Or something else? | Quoting https://www.python.org/dev/peps/pep-0008/#package-and-module-names : Modules should have short, all-lowercase names. Underscores can be used in the module name if it improves readability. Python packages should also have short, all-lowercase names, although the use of underscores is discouraged. For classes : Class names should normally use the CapWords convention. And function and (local) variable names should be: lowercase, with words separated by underscores as necessary to improve readability See this answer for the difference between a module, class and package: A Python module is simply a Python source file, which can expose classes, functions and global variables. A Python package is simply a directory of Python module(s). So PEP 8 tells you that : modules (filenames) should have short, all-lowercase names , and they can contain underscores; packages (directories) should have short, all-lowercase names , preferably without underscores; classes should use the CapWords convention. PEP 8 tells that names should be short ; this answer gives a good overview of what to take into account when creating variable names, which also apply to other names (for classes, packages, etc.): variable names are not full descriptors; put details in comments; too specific name might mean too specific code; keep short scopes for quick lookup; spend time thinking about readability. To finish, a good overview of the naming conventions is given in the Google Python Style Guide . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308972",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/206740/"
]
} |
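A small illustrative layout showing those conventions side by side; the package, module, class and function names are invented for the example:

# Package directory:   textutils/               (short, all lowercase, no underscore)
# Module file:         textutils/word_count.py  (lowercase, underscore for readability)

# Contents of textutils/word_count.py:

MAX_WORDS = 10_000              # module-level constant: UPPER_CASE_WITH_UNDERSCORES


class WordCounter:              # class: CapWords
    def __init__(self, text):
        self.text = text        # instance attribute: lowercase_with_underscores

    def count_words(self):      # method/function: lowercase_with_underscores
        return len(self.text.split())


def count_words_in_file(path):  # module-level function: lowercase_with_underscores
    with open(path) as handle:  # local variable: lowercase
        return WordCounter(handle.read()).count_words()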
308,977 | Suppose I have a stream of Things and I want to "enrich" them mid stream, I can use peek() to do this, eg: streamOfThings.peek(this::thingMutator).forEach(this::someConsumer); Assume that mutating the Things at this point in the code is correct behaviour - for example, the thingMutator method may set the "lastProcessed" field to the current time. However, peek() in most contexts means "look, but don't touch". Is using peek() to mutate stream elements an antipattern or ill-advised? Edit: The alternative, more conventional, approach would be to convert the consumer: private void thingMutator(Thing thing) {
thing.setLastProcessed(System.currentTimeMillis());
} to a function that returns the parameter: private Thing thingMutator(Thing thing) {
thing.setLastProcessed(currentTimeMillis());
return thing;
} and use map() instead: stream.map(this::thingMutator)... But that introduces perfunctory code (the return ) and I'm not convinced it's clearer, because you know peek() returns the same object, but with map() it's not even clear at a glance that it's the same class of object. Further, with peek() you can have a lambda that mutates, but with map() you have to build a train wreck. Compare: stream.peek(t -> t.setLastProcessed(currentTimeMillis())).forEach(...)
stream.map(t -> {t.setLastProcessed(currentTimeMillis()); return t;}).forEach(...) I think the peek() version is clearer, and the lambda is clearly mutating, so there's no "mysterious" side effect. Similarly, if a method reference is used and the name of the method clearly implied mutation, that too is clear and obvious. On a personal note, I don't shy away from using peek() to mutate - I find it very convenient. | You are correct, "peek" in the English sense of the word means "look, but do not touch." However the JavaDoc states: peek Stream peek(Consumer action) Returns a stream consisting of the elements of this stream,
additionally performing the provided action on each element as
elements are consumed from the resulting stream. Key words: "performing ... action" and "consumed". The JavaDoc is very clear that we should expect peek to have the ability to modify the stream. However the JavaDoc also states: API Note: This method exists mainly to support debugging, where you want to see
the elements as they flow past a certain point in a pipeline This indicates that it is intended more for observing, e.g. logging elements in the stream. What I take from all of this is that we can perform actions using the elements in the stream, but should avoid mutating elements in the stream. For example, go ahead and call methods on the objects, but try to avoid mutating operations on them. At the very least, I would add a brief comment to your code along these lines: // Note: this peek() call will modify the Things in the stream.
streamOfThings.peek(this::thingMutator).forEach(this::someConsumer); Opinions differ on the usefulness of such comments, but I would use such a comment in this case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/308977",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31101/"
]
} |
309,068 | Having chaining implemented on beans is very handy: no need for overloading constructors, mega constructors, factories, and gives you increased readability. I can't think of any downsides, unless you want your object to be immutable , in which case it would not have any setters anyway. So is there a reason why this isn't an OOP convention? public class DTO {
private String foo;
private String bar;
public String getFoo() {
return foo;
}
public String getBar() {
return bar;
}
public DTO setFoo(String foo) {
this.foo = foo;
return this;
}
public DTO setBar(String bar) {
this.bar = bar;
return this;
}
}
//...//
DTO dto = new DTO().setFoo("foo").setBar("bar"); | So is there a reason why isn't this a OOP convention? My best guess: because it violates CQS You've got a command (changing the state of the object) and a query (returning a copy of state -- in this case, the object itself) mixed into the same method. That's not necessarily a problem, but it does violate some of the basic guidelines. For instance, in C++, std::stack::pop() is a command that returns void, and std::stack::top() is a query that returns a reference to the top element in the stack. Classically, you would like to combine the two, but you can't do that and be exception safe. (Not a problem in Java, because the assignment operator in Java doesn't throw). If DTO were a value type, you might achieve a similar end with public DTO setFoo(String foo) {
return new DTO(foo, this.bar);
}
public DTO setBar(String bar) {
return new DTO(this.foo, bar);
} Also, chaining return values is a colossal pain when you are dealing with inheritance. See the "Curiously recurring template pattern" (a short sketch of the problem follows this record). Finally, there's the issue that the default constructor should leave you with an object that is in a valid state. If you must run a bunch of commands to restore the object to a valid state, something has gone Very Wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309068",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/139342/"
]
} |
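A short sketch of the inheritance problem the answer above mentions: once a subclass adds its own chained setters, the supertype's return type breaks the chain. ExtendedDTO is a hypothetical subclass of the question's DTO:

// Hypothetical subclass of the DTO from the question.
public class ExtendedDTO extends DTO {
    private String baz;

    public String getBaz() {
        return baz;
    }

    public ExtendedDTO setBaz(String baz) {
        this.baz = baz;
        return this;
    }
}

// Usage: the chain breaks because setFoo() is declared to return DTO,
// and DTO has no setBaz() method.
//
//     ExtendedDTO dto = new ExtendedDTO().setFoo("foo").setBaz("baz"); // does not compile
//
// Working around this needs casts, overridden setters, or the
// "curiously recurring" generic trick mentioned in the answer.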
309,134 | Take the two code examples: if(optional.isPresent()) {
//do your thing
}
if(variable != null) {
//do your thing
} As far as I can tell the most obvious difference is that the Optional requires creating an additional object. However, many people have started rapidly adopting Optionals. What is the advantage of using optionals versus a null check? | Optional harnesses the type system for doing work that you'd otherwise have to do all in your head: remembering whether or not a given reference may be null . This is good. It's always smart to let the compiler handle boring drudgework, and reserve human thought for creative, interesting work. Without Optional , every reference in your code is like an unexploded bomb. Accessing it may do something useful, or else it may terminate your program with an exception. With Optional and without null , every access to a normal reference succeeds, and every reference to an Optional succeeds unless it's unset and you failed to check for that. That is a huge win in maintainability. Unfortunately, most languages that now offer Optional haven't abolished null , so you can only profit from the concept by instituting a strict policy of "absolutely no null , ever". Therefore, Optional in e.g. Java is not as compelling as it should ideally be. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309134",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/213932/"
]
} |
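A small sketch contrasting the two styles from the question above; with Optional the "might be absent" case is part of the method's type, so the caller is pushed to deal with it. findNickname and the map contents are invented for illustration:

import java.util.Map;
import java.util.Optional;

public class Nicknames {
    private static final Map<String, String> NICKNAMES = Map.of("alice", "Al");

    // The return type advertises that there may be no result.
    static Optional<String> findNickname(String user) {
        return Optional.ofNullable(NICKNAMES.get(user));
    }

    public static void main(String[] args) {
        // The caller cannot simply forget the empty case: the API forces a choice.
        String greeting = findNickname("bob")
                .map(nick -> "Hi " + nick)
                .orElse("Hi stranger");
        System.out.println(greeting);   // prints "Hi stranger"

        // The null-check style relies on the caller remembering to test.
        String nick = NICKNAMES.get("bob");
        if (nick != null) {
            System.out.println("Hi " + nick);
        }
    }
}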
309,377 | To simplify the interface, is it better to just not have the getBalance() method? Passing 0 to the charge(float c); will give the same result: public class Client {
private float bal;
float getBalance() { return bal; }
float charge(float c) {
bal -= c;
return bal;
}
} Maybe make a note in javadoc ? Or, just leave it to the class user to figure out how to get the balance? | You seem to suggest that the complexity of an interface is measured by the number of elements it has (methods, in this case).
Many would argue that having to remember that the charge method can be used to return the balance of a Client adds much more complexity than having the extra element of the getBalance method. Making things more explicit is much simpler, especially to the point where it leaves no ambiguity, regardless of the higher number of elements in the interface. Besides, calling charge(0) violates the principle of least astonishment , also known as the "WTFs per minute" metric from Clean Code, making it hard for new members of the team (or current ones, after a while away from the code) until they understand that the call is actually used to get the balance. Also, the signature of the charge method goes against the guidelines of doing one and only one thing and command-query separation , because it causes the object to change its state while also returning a new value. All in all, I believe that the simplest interface in this case would be: public class Client {
private float bal;
float getBalance() { return bal; }
void charge(float c) { bal -= c; }
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309377",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207788/"
]
} |
309,383 | We use TFS and Visual Studio 2015 at work but don't get full benefit from the ALM features as we have code in non-TFS Git repos. We would like to integrate these products in with all the TFS goodies like checkin-to-work item linking, in-IDE pull requests, etc. Is there any way to access these features in a non-TFS-hosted Git repo? | You seem to suggest that the complexity of an interface is measured by the number of elements it has (methods, in this case).
Many would argue that having to remember that the charge method can be used to return the balance of a Client adds much more complexity than having the extra element of the getBalance method. Making things more explicit is much simpler, especially to the point where it leaves no ambiguity, regardless of the higher number of elements in the interface. Besides, calling charge(0) violates the principle of least astonishment , also known as the WTFs per minute metric (from Clean Code, image below), making it hard for new members of the team (or current ones, after a while away from the code) until they understand that the call is actually used to get the balance. Think of how other readers would react: Also, the signature of the charge method goes against the guidelines of doing one and only one thing and command-query separation , because it causes the object to change its state while also returning a new value. All in all, I believe that the simplest interface in this case would be: public class Client {
private float bal;
float getBalance() { return bal; }
void charge(float c) { bal -= c; }
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309383",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24628/"
]
} |
309,438 | I have been having this idea of using encryption to prevent users from figuring out content in my program outside of the program itself. Like users may find textures never used in the game meant to be part of some kind of Easter egg while going though the games data. This may e.g. ruin it for everybody if posted online. Imagine a secret room where the player have to press the correct numbers on a security door in the game, which if correct should generate the correct decryption key and then decrypting that part of the level and opening the door. Thus making the Easter egg otherwise inaccessible even when looking through the game-data since the key isn't actually stored, it's generated based on user-input. Here is another example of what I was imagining. I have a puzzle game with let's say 20 levels, each one encrypted using a different key. Instead of storing the decryption key with the program directly allowing someone to decompile the program and find it, I instead generate the encryption/decryption key based on the solution of the previous puzzle. This way the player would have to actually figure out the puzzle before getting any information about the next level, even when looking through the game-data. The player, if knowledgeable, could possible brute-force it "easily" given that the number of puzzle solutions is probably less that the number of decryption keys. It is really a matter complexity of the puzzle and is not very important here. Though I did post an answer regarding it over here Is there programs/games today that has done something like this? Storing encrypted content in their games? And if not why? Is there a lot of rules and regulation about it, either at the store or country level? Does anyone see any obvious pitfalls that I'm missing? Ignoring things like user-experience, the idea seems sound to me and makes me curious why I haven't seen this before. Edit : It may not be clear exactly what I'm saying, so here is a more concrete example. Let's say I have a function that takes in a string of 20 characters and generates a symmetric key which I can use to encrypt/decrypt some content in the game. The only way the user could get to that content is to know those 20 characters and generate the same key. This key is never directly stored and is generated on the fly based on user input. These characters would be hidden in the game in what could be books, dialog with NPCs, maybe even outside of the game on the back of the box even. So with 2*10^28 possible combinations to try it would probably be more likely people will find the content in the way intended rather that by looking through the game-data. Edit 2 :
The content in question would be encrypted with an arbitrary and secret key before being shipped to the consumer. This key will obviously not be shipped with the game. He or she would have to somehow puzzle the key back together given a series of clues made based on the key, and that are hidden throughout the game or somewhere else. This system would however be transparent for the user as you wouldn't know the content was encrypted unless you actually looked through the game-data. As a lot has mention this has one obvious downside in that its use case is limited. Once a single person has figured it out he/she may share it with everybody else, if not the key/solution then the content itself. However if your intention is to keep something so secret that a single person shouldn't be able to solve it and people have to work together to solve it, or you are afraid that your easter egg is so well hidden (by design) that it is more likely someone will find it in the code rather through game play. Then I think this could work great. I would personally recommend to only use it maybe once per game and only for things that does not affect core game-play, e.g. easter eggs, a secret ending. Any puzzle would have to be so complicated or well hidden for it to slow people down enough to make encrypting the content worth it and if this puzzle stood in the way of people progressing then nobody is probably having any fun. | This question is asking whether it is possible (not just commercially viable or some other interpretation) to force at least 1 user to solve a puzzle instead of hacking to unlock certain game content. To my knowledge this is definitely possible. In fact, it is bizarre to me that other answers are saying it is outright not possible, even when it has been very explicitly stated that the key is neither generated nor stored, merely read in from the game state (which could have googleplexes of possible values). The argument that "the game knows how to decrypt it" so it won't work would then imply that any open source encryption/decryption software (for example, TrueCrypt) is inherently insecure because you know "how" to decrypt containers (just enter a string based on keyboard input, it's simple right?). In the absolute simplest case, imagine that the game itself is a keyboard simulator and it asks the question "what is the password to my encrypted files?", clearly we all agree that having the source code to the game wouldn't help. Now imagine the game asks the questions "what is the name of the largest animal on Earth concatenated with the name of the smallest animal?". Is it not clear that even having the game's source code would not help, and you still have to solve the puzzle? With respect to whether this is possible the answer is absolute YES, this is possible. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309438",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/205514/"
]
} |
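A rough sketch of the idea in the answer above: the key is derived from the player's puzzle answer at runtime and never shipped with the game. The key-derivation step uses Python's standard hashlib; the Fernet cipher from the third-party cryptography package merely stands in for whatever cipher a real game would use, and the salt, puzzle answer and hidden content are all invented for illustration:

import base64
import hashlib

from cryptography.fernet import Fernet  # pip install cryptography


def key_from_puzzle_answer(answer: str, salt: bytes) -> bytes:
    """Derive a 32-byte key from the player's input; nothing secret is stored."""
    raw = hashlib.pbkdf2_hmac("sha256", answer.encode("utf-8"), salt, 200_000)
    return base64.urlsafe_b64encode(raw)  # Fernet expects a base64-encoded key


SALT = b"shipped-with-the-game"  # the salt is not secret; it only slows brute force

# At build time the developer encrypts the hidden content with the key derived
# from the intended answer, and ships only the ciphertext:
build_key = key_from_puzzle_answer("bluewhale+etruscanshrew", SALT)
secret_room = Fernet(build_key).encrypt(b"coordinates of the secret room")

# At runtime, only the correct puzzle answer reproduces the key:
player_answer = "bluewhale+etruscanshrew"
try:
    content = Fernet(key_from_puzzle_answer(player_answer, SALT)).decrypt(secret_room)
    print(content)
except Exception:
    print("wrong answer - the content stays locked")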
309,767 | Why did old BASICs (and maybe other languages) use line numbers as part of the source code? I mean, what problems did it (try to) solve? | BASIC needs to be taken into context with its contemporary languages: early fortran, cobol and assembly. Back when I was dabbling on 6502 assembly without labels, this meant that when you found that you needed to add an instruction somewhere in the middle of tightly packed code (I later added NOP s) you needed to go through and redo all of the jump addresses. This was time consuming. Fortran was a line numbered based system that predated BASIC. In Fortran, columns 1-5 were a line number to be used for targets for branching. The key thing with Fortran was that the compilers tended to be a bit more intelligent than the BASIC interpreter and adding a few instructions was just a matter of punching some cards and putting them in the deck at the right place. BASIC, on the other hand had to keep all of its instructions ordered. There wasn't much of a concept of a 'continuation of the previous line'. Instead, in Applesoft BASIC (one of the widely used dialects that I am familiar with and can find information on) each line in memory was represented as: NN NN TT TT AA BB CC DD .. .. 00 It had two bytes for the address of the next line ( NN NN ). Two bytes for the line number of this line ( TT TT ), and then a list of tokens ( AA BB CC DD .. .. ) followed by the end of line marker ( 00 ). (This is from page 84-88 of Inside the Apple //e ) An important point to realize when looking at that memory representation is that the lines can be stored in memory out of order. The structure of memory was that of a linked list with a 'next line' pointer in the structure. This made it easy to add new lines between two lines - but you had to number each line for it to work properly. Many times when working with BASIC, you were actually working in BASIC itself. In particular, a given string was either a line number and BASIC instructions, or a command to the basic interpreter to RUN or LIST . This made it easy to distinguish the code from the commands - all code starts with numbers. These two pieces of information identifies why numbers were used - you can get a lot of information in 16 bits. String based labels would take much more space and are harder to order. Numbers are easy to work with, understandable, and easier to represent. Later BASIC dialects where you weren't in the interpreter all the time were able to do away with the every line numbered and instead only needed to number the lines that were branch targets. In effect, labels. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309767",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12450/"
]
} |
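A tiny sketch of the linked-list line layout described in the answer above (next-line address, line number, tokens, terminator), written as a C struct only to make the shape explicit; the real interpreter worked on raw bytes, not on a C type:

#include <stdint.h>

/* One stored BASIC line, as described for Applesoft BASIC:
 *   NN NN   - address of the next line (the linked-list pointer)
 *   TT TT   - this line's number, used as a branch target and as the sort key
 *   AA BB.. - tokenized statement bytes
 *   00      - end-of-line marker
 */
struct basic_line {
    uint16_t next_line_addr;   /* NN NN */
    uint16_t line_number;      /* TT TT */
    uint8_t  tokens[];         /* AA BB CC DD ... terminated by 0x00 */
};

/* Inserting "15 PRINT X" between lines 10 and 20 only means storing a new
 * node and rewriting one next_line_addr; no other line moves, which is why
 * every line needed a number to keep the list ordered. */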
309,815 | Static typing in a programming language can be helpful for enforcing certain guarantees at compile time- but are types the only tool for this job? Are there other ways of specifying invariants? For example, a language or environment could help enforce a guarantee regarding array length, or regarding the relationships between the inputs to a function. I just haven't heard of anything like this outside of a type system. A related thing I was wondering about is if there are any non-declarative ways to do static analysis (types are declarative, for the most part ). | Static type systems are a kind of static analysis, but there are many static analyses that aren’t generally encoded in type systems. For example: Model checking is an analysis and verification technique for concurrent systems that allows you to prove that your program is well-behaved under all possible thread interleavings. Data flow analysis gathers information about the possible values of variables, which can determine whether some computation is redundant, or some error is not accounted for. Abstract interpretation conservatively models the effects of a program, usually in such a way that the analysis is guaranteed to terminate—type checkers may be implemented in a similar manner to abstract interpreters. Separation logic is a program logic (used for example in the Infer analyzer) which can be used to reason about program states, and identify issues such as null pointer dereferences, invalid states, and resource leaks. Contract-based programming is a means of specifying preconditions, postconditions, side effects, and invariants. Ada has native support for contracts and can verify some of them statically. Optimising compilers do many small analyses in order to build intermediate data structures for use during optimisation—such as SSA, estimates of inlining costs, instruction pairing information, and so on. Another example of non-declarative static analysis is found in the Hack typechecker, where normal control-flow constructs can refine the type of a variable: $x = get_value();
if ($x !== null) {
$x->method(); // Typechecks because $x is known to be non-null.
} else {
$x->method(); // Does not typecheck.
} And speaking of “refining”, back in the land of type systems, refinement types (as used in LiquidHaskell ) pair types with predicates that are guaranteed to hold for instances of the “refined” type. And dependent types take this further, allowing types to depend on values. The “hello world” of dependent typing is usually the array concatenation function: (++) : (a : Type) -> (m n : Nat) -> Vec a m -> Vec a n -> Vec a (m + n) Here, ++ takes two operands of type Vec a m and Vec a n , being vectors with element type a and lengths m and n respectively, which are natural numbers ( Nat ). It returns a vector with the same element type whose length is m + n . And this function proves this constraint abstractly, without knowing the specific values of m and n , so the lengths of the vectors may be dynamic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/309815",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/201215/"
]
} |
310,071 | I'm curious to know how programmer teams typically managed their software development back in the 80s and early 90s. Was all the source code simply stored on one machine which everyone worked on, or was the source passed around and copied manually via floppy and merged manually, or did they actually use revision control systems over a network (CVS for example) like how we do now? Or perhaps something like an offline CVS was being used? Nowadays everyone is dependent on source control... it's a no-brainer. But in the 80s, computer networks weren't that easy to set up, and things like best practices were still being figured out... I do know that in the 70s and 60s programming was pretty different so revision control was not necessary. But it's in the 80s and 90s that people started using computers for writing code, and applications started increasing in size and scope, so I am wondering how people managed all that back then. Also, how does this differ between platforms? Say Apple vs Commodore 64 vs Amiga vs MS-DOS vs Windows vs Atari Note: I'm mostly talking about programming on microcomputers of the day, not big UNIX machines. | Firstly, when microcomputers first came out, the software was mostly written on Unix or VMS systems and "cross compiled/assembled" onto the target system. These computer systems were multi-user, often with many terminals, and had source code control systems like SCCS . Networking was an option on microcomputers from the mid 1980s, often connected to a Unix system as the "file server" (maybe just using RS232 and Kermit to transfer the files, with SCCS on the Unix system). See A History of Version Control by Eric Sink to get an overview of how version control systems have changed over the years. I recall reading about source code control in "BYTE" in the late 1980s, so it must have been in use on "small systems" by then. SourceSafe was well established by the mid 90s running on DOS, Windows, etc. This link shows an article about PVCS running on a PC from 1994; it is at version 6.2, so it had clearly been around for some time, and Wikipedia says it dates from 1985 . However, numbered floppy disks were used by most programmers working on small-scale software until the late 1990s, to be replaced with folders on their hard disk, making a copy of the source code every day. I remember working on a project porting software from Unix to Windows NT 3.5. Programmers that knew how to write programs for Windows often had not even heard of source code control. There is also a timeline in a blog post by codicesoftware , who sell Plastic SCM; the overview it gives of the history of other systems seems reasonable, although a few older systems that predate RCS are left off the image. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310071",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/124792/"
]
} |
310,159 | I'm diving into the world of functional programming and I keep reading everywhere that functional languages are better for multithreading/multicore programs. I understand how functional languages do a lot of things differently, such as recursion , random numbers etc but I can't seem to figure out if multithreading is faster in a functional language because it's compiled differently or because I write it differently. For example, I have written a program in Java which implements a certain protocol. In this protocol the two parties send and receive to each other thousands of messages, they encrypt those messages and resend them (and receive them) again and again. As expected, multithreading is key when you deal in the scale of thousands. In this program there's no locking involved . If I write the same program in Scala (which uses the JVM), will this implementation be faster? If yes, why? Is it because of the writing style? If it is because of the writing style, now that Java includes lambda expressions, couldn't I achieve the same results using Java with lambda? Or is it faster because Scala will compile things differently? | The reason people say functional languages are better for parallel processing is due to the fact that they usually avoid mutable state. Mutable state is the "root of all evil" in the context of parallel processing; they make it really easy to run into race conditions when they are shared between concurrent processes. The solution to the race conditions then involve locking and synching mechanisms, as you mentioned, which cause runtime overhead, as the processes wait for one another to make use of the shared resource, and greater design complexity, as all of these concepts tend to be deeply nested within such applications. When you avoid mutable state, the need for synchronization and locking mechanisms disappears along with it. Because functional languages usually avoid mutable state, they are naturally more efficient and effective for parallel processing - you won't have the runtime overhead of shared resources, and you won't have the added design complexity that usually follows. However, this is all incidental. If your solution in Java also avoids mutable state (specifically shared between threads), converting it to a functional language like Scala or Clojure would not yield any benefits in terms of the concurrent efficiency, because the original solution is already free of the overhead caused by the locking and synching mechanisms. TL;DR: If a solution in Scala is more efficient in parallel processing than one in Java, it is not because of the way the code is compiled or run through the JVM, but instead because the Java solution is sharing mutable state between threads, either causing race conditions or adding the overhead of synchronization in order to avoid them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310159",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46143/"
]
} |
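A compact sketch of the point about shared mutable state in the answer above: two threads bumping an unsynchronized counter usually lose updates, while confining each thread to its own state and combining results afterwards needs no locks. The counts are arbitrary:

public class SharedStateDemo {
    static int sharedCounter = 0;   // mutable state shared between threads

    public static void main(String[] args) throws InterruptedException {
        Runnable racy = () -> {
            for (int i = 0; i < 100_000; i++) {
                sharedCounter++;    // read-modify-write race: updates get lost
            }
        };
        Thread a = new Thread(racy);
        Thread b = new Thread(racy);
        a.start(); b.start();
        a.join(); b.join();
        System.out.println("shared, unsynchronized: " + sharedCounter); // usually < 200000

        // No shared mutable state: each task works on its own local data and
        // only publishes a result once, combined after join().
        long[] partial = new long[2];
        Thread c = new Thread(() -> partial[0] = count(100_000));
        Thread d = new Thread(() -> partial[1] = count(100_000));
        c.start(); d.start();
        c.join(); d.join();
        System.out.println("thread-confined: " + (partial[0] + partial[1])); // 200000
    }

    static long count(int n) {
        long local = 0;             // thread-confined, so no locks are needed
        for (int i = 0; i < n; i++) local++;
        return local;
    }
}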
310,422 | I have written a struct that represents latitude/longitude coordinates. Their values range from -180 to 180 for longtitudes and 90 to -90 for lattitudes. If a user of that struct gives me a value outside of that range, I have 2 options: Throw an exception (arg out of range) Convert the value to the constraint Because a coordinate of -185 has meaning (it can very easily be converted to +175 as those are polar coordinates), I could accept it and convert it. Is it better to throw an exception to tell the user that his code has given me a value that it shouldn't have? Edit: Also I know the difference between lat/lng and coordinates, but I wanted to simplify that for easier discussion - it wasn't the brightest of ideas | If the core of your question is this... If some client code passes an argument whose value is invalid for the thing that my data structure is modeling, should I reject the value or convert it to something sensible? ...then my general answer would be "reject", because this will help draw attention to potential bugs in the client code that are actually causing the invalid value to appear in the program and reach your constructor. Drawing attention to bugs is generally a desired property in most systems, at least during development (unless it's a desired property of your system to muddle through in case of errors). The question is whether you're actually facing that case . If your data structure is intended to model polar coordinates in general, then accept the value because angles out of the -180 and +180 range aren't really invalid. They are perfectly valid and they just happen to always have an equivalent that's within the range of -180 and +180 (and if you want to convert them to target that range, feel free - the client code doesn't usually need to care). If your data structure is explicitly modeling Web Mercator coordinates (according to the question in its initial form), then it's best to follow any provisions mentioned in the specification (which I don't know, so I won't say anything about it). If the specification of the thing you're modeling says that some values are invalid, reject them. If it says that they can be interpreted as something sensible (and thus they're actually valid), accept them. The mechanism you use to signal whether the values were accepted or not depends on the features of your language, its general philosophy and your performance requirements. So, you could be throwing an exception (in the constructor) or returning a nullable version of your struct (through a static method that invokes a private constructor) or returning a boolean and passing your struct to the caller as an out parameter (again through a static method that invokes a private constructor), and so on. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310422",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/207585/"
]
} |
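A small Python sketch of the two options discussed above; the class and method names are illustrative only. The constructor rejects values that have no sensible meaning (latitude outside -90..90), while a separate factory accepts and normalizes longitudes, since any longitude has an equivalent inside -180..180.
class Coordinate:
    def __init__(self, latitude, longitude):
        # Reject outright: an out-of-range latitude has no sensible
        # equivalent, so surfacing the bug in the caller is preferable.
        if not -90.0 <= latitude <= 90.0:
            raise ValueError("latitude out of range: %r" % latitude)
        if not -180.0 <= longitude <= 180.0:
            raise ValueError("longitude out of range: %r" % longitude)
        self.latitude = latitude
        self.longitude = longitude

    @classmethod
    def from_unnormalized(cls, latitude, longitude):
        # Accept and convert: treat longitude as an angle and wrap it
        # into (-180, 180], e.g. -185 becomes 175.
        wrapped = (longitude + 180.0) % 360.0 - 180.0
        if wrapped == -180.0:
            wrapped = 180.0
        return cls(latitude, wrapped)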
310,559 | I have never used a Continuous Integration system (CI) before. I primarily code in MATLAB, Python or PHP. None of these has a build step and I do not see how a CI could be used for my work. A friend on a large project in a large firm told me that language does not matter. I do not see how CI would be of use to me if I do not have a build step. I can think of CI as a testing environment that would run unit tests. Am I missing something? | Continuous integration as a term refers to two distinct ideas. The first is a workflow: instead of everyone in a team working on their own branch and then, after a couple of weeks of programming, trying to merge their changes into the mainline, changes are integrated (nearly) continuously. This allows problems to surface early, and avoids incompatible changes. However, that requires that we can easily check whether a change “works”. This is where the second idea comes in, which turned out much more popular. A CI server is a clean environment where the changes are tested as quickly as possible. The clean environment is necessary so that the build is reproducible. If it works once, it should always work. This avoids “but it worked on my machine” problems. In particular, a CI server is valuable when your software runs on different systems or in different configurations and you need to be sure everything works. The lack of a build step is irrelevant. However, CI only makes sense if you have a test suite. This test suite must be automatic, and must have no failures. If the tests fail, the appropriate developer should get a notification so that they can fix the problem they introduced (“breaking the build”, even when there is no build in the sense of compilation). It turns out that such a server is valuable for more than just testing. In fact, most CI software is really crappy at running tests in various configurations, but good at managing all kinds of jobs. E.g. in addition to “continuous” unit tests, there could be a full test as a nightly build. The software can be tested with multiple Python versions, different library versions. A web site could be tested for dead links. We can run static analysis, style checkers, test coverage tools, etc. over the code. Documentation can be generated. When all test suites pass, the packaging process could be initiated so that you would be ready to release your software. This is useful in an agile setting where you want a deployable (and demoable) product at all times. With the rise of web apps, there's also the idea of continuous deployment: If all tests pass, we can automatically push the changes to production. Of course, this requires that you are really confident in your test suite (if not, you have bigger problems). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310559",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27217/"
]
} |
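A hedged illustration of the "CI only makes sense if you have a test suite" point for an interpreted language: the test run itself is the "build" that can break. The quota function is hypothetical; in a real project it would live in its own module, and the CI server would simply run pytest on every push.
import pytest

def remaining_quota(limit_mb, used_mb):
    # Hypothetical application code with no compile step.
    if used_mb < 0:
        raise ValueError("used_mb cannot be negative")
    return max(limit_mb - used_mb, 0)

def test_remaining_quota_basic():
    assert remaining_quota(100, 30) == 70

def test_quota_never_negative():
    assert remaining_quota(100, 150) == 0

def test_negative_usage_rejected():
    with pytest.raises(ValueError):
        remaining_quota(100, -1)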
310,658 | Lately when I've been writing C or C++, I'll declare all my variables on the stack just because it's an option, unlike with Java. However, I've heard that it's a bad idea to declare large things on the stack. Why exactly is this the case? I figure stack overflow is involved, but I'm not very clear on why that happens. How much stuff on the stack is too much? I'm not trying to put 100MB files on the stack, just a dozen kilobyte arrays to use as string buffers or whatever. Is this too much stack usage? (Sorry if duplicate, searching for stack kept giving references to Stack Overflow. There isn't even a call stack tag, I just used the abstract one.) | It depends on your operating system. On Windows, the typical maximum size for a stack is 1MB, whereas it is 8MB on a typical modern Linux, although those values are adjustable in various ways. If the sum of your stack variables (including low-level overhead such as return addresses, stack-based arguments, return value placeholders and alignment bytes) in the entire call stack exceeds that limit, you get a stack overflow, which typically takes down your program without any chance at recovery. A few kilobytes are usually fine. Tens of kilobytes is dangerous because it starts to sum up. Hundreds of kilobytes is a very bad idea. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310658",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/217269/"
]
} |
310,833 | I'm getting ready to take the bend out of asp and into an mvc framework, asp.net mvc or nancy. Wherever I go, I see folders for controllers/modules and folders for views. Is this just a pavlovian reflex of tidying things away by type, or is there some deeper wisdom operating? I have a little proof-of-concept project where I store together the files I'm likely to open together, a considerable comfort. Since these files are also likely to call each other, they can do so with shorter, less brittle, relative links. This pattern is challenged by mvc, because the folder path no longer automatically corresponds to the url path, and, in asp.net mvc, the project templates and routing enforce the views\ controllers\ schism. This microsoft page introduces the concept of areas. It can be read as an admission of how unwieldy large apps become because of this artificial separation. People will object "separation of concerns", but separation of concerns is already achieved by having separate source files. There's no concrete gain, it seems to me, from taking these source files that are tightly coupled, and sending them to opposite ends of the folder structure? Is anyone else fighting this? Any tips? | I'd like to say it's cargo cult programming , but there are technical reasons for this structure. Asp.Net MVC took a convention over configuration approach to nearly everything. By default, the Razor view engine searches the Views directory in order to resolve which view to return from the controller. There are however a few hacks to get a different project structure and Microsoft even provides an MVC feature called Areas to let us create a more sane project structure. You could also implement your own view engine in order to specify where to look for views. Why do I say that it's cargo cult programming and that you're correct about this? Uncle Bob convinced me that the project's directory structure shouldn't tell me that it's an MVC application. It should tell me that it's a store front, or a time off request system, or whatever. The high level structure and architecture should tell us about what this thing is, not how it was implemented. In short, I believe you're right about this, but any other directory structure would simply be fighting against the framework and trust me when I say that you don't want to try to make the Asp.Net MVC framework do something it wasn't designed to do. It's a pity that it's not more configurable really. To quickly address architectural concerns, I do still believe that the business models (business, not view) and the DAL should live in a separate project/library that gets called from your MVC app. It's just that the controller really is very tightly coupled with the view and likely to be modified together. We're all wise to remember the difference between coupling via dependency and logical coupling. Just because the code has had its dependencies decoupled doesn't make it less logically coupled. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310833",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/132665/"
]
} |
310,838 | I've found a piece of code like this in one of our projects: SomeClass QueryServer(string args)
{
try
{
return SomeClass.Parse(_server.Query(args));
}
catch (Exception)
{
return null;
}
} As far as I understand, suppressing errors like this is a bad practice, as it destroys useful information from the original server's exception and makes the code continue when it actually should terminate. When is it appropriate to completely suppress all errors like this? | Imagine code with thousands of files using a bunch of libraries.
Imagine all of them are coded like this. Imagine, for example, that an update of your server causes one configuration file to disappear; now all you have is a null pointer exception stack trace when you try using that class: how would you resolve that? It could take hours, whereas at least logging the raw stack trace of the "file not found: [file path]" error might let you resolve it in minutes. Or even worse: a failure in one of the libraries you use after an update makes your code crash later on. How can you track this back to the library? Even without robust error handling, just doing throw new IllegalStateException("THIS SHOULD NOT BE HAPPENING") or LOGGER.error("[method name]/[arguments] should not be there") may save you hours of time. But there are some cases where you might really want to ignore the exception and return null like this (or do nothing), especially if you integrate with some badly designed legacy code and this exception is expected as a normal case. In fact, when doing this you should ask whether you really are ignoring the exception or "properly handling" it for your needs. If returning null is "properly handling" your exception in that given case, then do it. And add a comment explaining why this is the proper thing to do. Best practices are things to follow in most cases, maybe 80%, maybe 99%, but you'll always find an edge case where they don't apply. In that case, leave a comment explaining why you're not following the practice, for the others (or even yourself) who will read your code months later. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/310838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31310/"
]
} |
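A Python sketch of the alternatives discussed in the answer above; the server object, parser and exception type are made up. Either catch only the specific, expected exception and document why returning None is the proper handling, or log with the full stack trace and let the unexpected error propagate.
import logging

logger = logging.getLogger(__name__)

def query_server(server, args):
    try:
        return parse_response(server.query(args))
    except KeyError:
        # Expected case: this (hypothetical) legacy server omits the field
        # for unknown ids, and callers treat None as "not found".
        return None
    except Exception:
        # Unexpected case: keep the stack trace and re-raise so the real
        # cause (missing config file, broken library, ...) stays visible.
        logger.exception("query_server failed for args=%r", args)
        raise

def parse_response(raw):
    # Hypothetical parser used by the example above.
    return {"status": raw["status"]}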
311,007 | In other programming languages, I have seen Map and Reduce, and those are cornerstones of functional programming. I could not find any reasoning or history why LINQ has Aggregate (same as Reduce ) and Select (same as Map )? Why I am asking is that it took me a while to understand it is the same thing and I am curious what is the reasoning for this. | This mostly comes down to the history of LINQ. LINQ was originally intended to be SQL-like, and used (largely, though not exclusively) to connect to SQL databases. This leads to much of its terminology being based on SQL. So, "select" came from the SQL select statement, and "aggregate" came from SQL aggregate functions (e.g., count , sum , avg , min , max ). For those who question the degree to which LINQ originally related to SQL, I'd refer to (for example) Microsoft's articles on Cω, which was a language devised by Microsoft Research, and appears to be where most of the basics of LINQ were worked out before they were added to C# and .NET. For example, consider an MSDN article on Cω , which says: Query Operators in Cω Cω adds two broad classes of query operators to the C# language: - XPath-based operators for querying the member variables of an object by name or by type. - SQL-based operators for performing sophisticated queries involving projection, grouping, and joining of data from one or more objects. At least as far as I know, the XPath-based operators were never added to C#, leaving only the operators that were documented (before LINQ existed) as being based directly on SQL. Now, it's certainly true that LINQ isn't identical to the SQL-based query operators in Cω. In particular, LINQ follows C#'s basic objects and function calls syntax much more closely than Cω did. Cω queries followed SQL syntax even more closely, so you could write something like this (again, drawn directly from the article linked above): rows = select c.ContactName, o.ShippedDate
from c in DB.Customers
inner join o in DB.Orders
on c.CustomerID == o.CustomerID; And yes, the same article does talk specifically about using the SQL-based queries to query data coming from actual SQL databases: To connect to a SQL database in Cω, it must be exposed as a managed assembly (that is, a .NET library file), which is then referenced by the application. A relational database can be exposed to a Cω as a managed assembly either by using the sql2comega.exe command line tool or the Add Database Schema... dialog from within Visual Studio. Database objects are used by Cω to represent the relational database hosted by the server. A Database object has a public property for each table or view, and a method for each table-valued function found in the database. To query a relational database, a table, view, or table-valued function must be specified as input to the one or more of the SQL-based operators. The following sample program and output shows some of the capabilities of using the SQL-based operators to query a relational database in Cω. The database used in this example is the sample Northwind database that comes with Microsoft SQL Server. The name DB used in the example refers to a global instance of a Database object in the Northwind namespace of the Northwind.dll assembly generated using sql2comega.exe . So, yes, from the very beginning (or even before the beginning, depending on your viewpoint) LINQ was explicitly based on SQL, and intended specifically to allow access to data in SQL databases. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311007",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14792/"
]
} |
311,047 | Many Scrum books and articles say that a failed sprint (when the team fails to complete some features from the Sprint Backlog) is not something that bad, it happens from time to time, and it can actually be useful if the team learns from their mistakes and improves something in the following sprints. And the team should not be punished for not completing the work they committed to. This looks great from the developer's point of view, however, let's say we have a software company " Scrum-Addicts LLC " developing something for serious clients (" Money-Bags Corporation "): Scrum-Addicts managers suggest making a piece of software for Money-Bags They agree on a list of features, and Money-Bags asks to provide a shipping date Scrum-Addicts managers consult their scrum team, and the team says it will take 3 week-long sprints to complete all of the features Scrum-Addicts manager adds 1 week to be safe, promises to ship the software in 1 month and signs a contract with Money-Bags After 4 sprints (shipping deadline) Scrum team can only deliver 80% of features (because of inexperience with the new system, the need to fix critical bugs in previous features in production environment, etc...) As Scrum suggests, at this point, the product is potentially shippable, but Money-Bags needs 100% of features, as mentioned in the contract. So they break the contract and pay nothing. Scrum-Addicts is on the brink of bankruptcy because they got no money from Money-Bags, and the investors were disappointed with the results and are unwilling to help the company any more. Obviously, no software company wants to be in Scrum-Addicts' shoes. What I fail to understand about Agile and Scrum is how they suggest teams should deal with planning and deadlines to avoid the situation described above. So, to summarize, I have 2 questions: Who is to Blame? Managers, because it's their job to do the proper planning The team, because they committed to doing more work than they could Someone else What Is to Be Done? The managers should move the deadline 2x (or 3x) times later than the original team's estimate. Team members should be encouraged to do all the work they committed to no matter what (by issuing penalties for failed sprints) The team should drop Scrum because it doesn't fit the company deadline policy We should all drop software development and join a monastery ??? | I see several fundamental management issues in your example: if a Scrum-Addicts manager signs a "hard-deadline" contract, but adds only a safety margin of 33% in a situation where "a new system is involved", that is pretty reckless. the availability of delivering at least x% of the features after one month could have been used to negotiate a contract where the customers pays the money at least partially when he gets only 80% of the features at the deadline. An all-or-nothing contract is something neither the software vendor nor the customer will benefit from - this means not just 0 money for the vendor, but also 0 features for the customer. And an all-or-nothing development methodology like "Waterfall" will only let you write such contracts, an agile approach offers additional possibilities. looking at the results of the first one or two sprints should have made obvious to the manager that the team cannot meet the deadline. So he should have taken earlier actions, and re-prioritize the remaining tasks and features, or try to re-negotiate with the customer earlier. 
For example, the manager could have tried to downsize the scope of some of the remaining features, so the team could have delivered all features mentioned in the contract, but each of them in a reduced scope. If a task turns out to take longer than you thought, no development methodology will save you from that. But an agile approach like Scrum gives management more opportunities to control what happens in that situation. If they don't make use of those opportunities, it is clearly their fault, not the team's, not the fault of "Scrum", and not the customer's fault because "he does not accept agility". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311047",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/211802/"
]
} |
311,081 | Javascript in the DOM has a peculiar characteristic. There's a different Object object that an object (by default) inherits per window. In order to find what kind of object is being sent to a function when an object from a different window may be given, the methods I've read to know what was given are the use of duck-typing and analysis of the output of .toString() . Given that instanceof is useless. Duck-typing is done by checking if certain properties from an object in a variable exist or exist and have a value of certain type to confirm that an object is actually what is expected. The idea I want to explore is an alternative to Duck-typing (even if it is slower) to get the actual object's constructor for when the object is one of the javascript's default objects in the DOM (Object, Array, Element, Function, etc.)
to try getting the window that has a certain object and then to try working from there. In other words, is there a way to get the window from where an object was created at all times (assuming the object is an instance of Object in the window it was created)? If so, then how would you do such check? My current idea is to get all the windows in the parent document something like: var windows = [window.top];
var iframes = document.getElementsByTagName('iframe');
for(var i = 0; i < iframes.length; i++){
windows.push(iframes[i].contentWindow);
} And then function isATypeOfB(A, B){
for(var i = 0; i < windows.length; i++){
if(windows[i][A] && windows[i][A].prototype.isPrototypeOf(B)){
return true;
}
}
return false;
} I already have the impression it doesn't work due to the "contentWindow" limited access to same-domain policy. I also don't show here checks to update the windows array when the number of iframes change or the url of an iframe changes. The main goal here is to be able to tell the constructor of a certain object instance that exists in a variable has a certain object in its prototype chain. For example, to know if a variable contains an instance of an Array . What are your views here? | I see several fundamental management issues in your example: if a Scrum-Addicts manager signs a "hard-deadline" contract, but adds only a safety margin of 33% in a situation where "a new system is involved", that is pretty reckless. the availability of delivering at least x% of the features after one month could have been used to negotiate a contract where the customers pays the money at least partially when he gets only 80% of the features at the deadline. An all-or-nothing contract is something neither the software vendor nor the customer will benefit from - this means not just 0 money for the vendor, but also 0 features for the customer. And an all-or-nothing development methodology like "Waterfall" will only let you write such contracts, an agile approach offers additional possibilities. looking at the results of the first one or two sprints should have made obvious to the manager that the team cannot meet the deadline. So he should have taken earlier actions, and re-prioritize the remaining tasks and features, or try to re-negotiate with the customer earlier. For example, the manager could have tried to downsize the scope of some of the remaining features, so the team could have delivered all features mentioned in the contract, but each of them in a reduced scope. If a task turns out to take longer than you thought, no development methodology will save you from that. But an agile approach like Scrum gives management more opportunities to control what happens in that situation. If they don't make use of those opportunities, it is clearly their fault, not the team's, not the fault of "Scrum", and not the customer's fault because "he does not accept agility". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311081",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/75111/"
]
} |
311,155 | We all know 0/0 is Undefined and returns an error if I were to put it into a calculator, and if I were to create a program (in C at least) the OS would terminate it when I try to divide by zero. But what I've been wondering is if the computer even attempts to divide by zero , or does it just have "built in protection", so that when it "sees" 0/0 it returns an error even before attempting to compute it? | The CPU has built in detection. Most instruction set architectures specify that the CPU will trap to an exception handler for integer divide by zero (I don't think it cares if the dividend is zero). It is possible that the check for a zero divisor happens in parallel in hardware along with the attempt to do the division, however, the detection of the offending condition effectively cancels the division and traps instead, so we can't really tell if some part of it attempted the division or not. (Hardware often works like that, doing multiple things in parallel and then choosing the appropriate result afterwards because then each of the operations can all get started right away instead of serializing on the choice of appropriate operation.) The same trap to exception mechanism will also be used when overflow detection is turned on, which you ask for usually by using different add/sub/mul instructions (or a flag on those instructions). Floating point division also has built in detection for divide by zero, but returns a different value ( IEEE 754 specifies NaN ) instead of trapping to an exception handler. Hypothetically speaking, if the CPU omitted any detection for attempt to divide by zero, the problems could include: hanging the CPU (e.g. in an inf. loop) — this might happen if the CPU uses an algorithm to divide that stops when the numerator is less than the divisor (in absolute value). A hang like this would pretty much count as crashing the CPU. a (possibly predictable) garbage answer, if the CPU uses a counter to terminate division at the maximum possible number of divide steps (e.g. 31 or 32 on a 32-bit machine). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311155",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/168415/"
]
} |
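A small Python illustration of the two behaviours described above, at the language level rather than the CPU level: plain Python checks the divisor itself and raises (much like the integer-division trap), while NumPy float division follows IEEE 754 semantics and returns inf/nan instead of raising. The printed values assume NumPy's default behaviour.
import numpy as np

# Integer (and plain float) division: Python detects the zero divisor
# before the hardware ever sees it and raises an exception.
try:
    1 // 0
except ZeroDivisionError as exc:
    print("trapped:", exc)

# IEEE 754 float division: no exception, special values come back instead.
with np.errstate(divide="ignore", invalid="ignore"):
    result = np.array([1.0, 0.0]) / np.array([0.0, 0.0])
print(result)                    # [inf nan]
assert np.isinf(result[0]) and np.isnan(result[1])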
311,212 | I have an opinion (which I am sure will be shared by some) that passing anonymous functions which contain more than a few lines of code, as arguments to other functions affects readability and self-documentation drastically, to the point where I feel it would be far better for anyone likely the use the code to just declare a named function. Or at least assign that anonymous function to a variable before declaring the main function However, many JavaScript libraries (jQuery, d3.js/NVD3.js) just to give a couple of examples, use large functions in this way. Why is this so widely accepted in JavaScript? Is it a cultural thing, or are there advantages which I'm missing, which would make the use more preferred than declaring a named function? | Three main reasons I can think of: Parent Scope Access Privacy Reduction of names defined in higher scopes Parent Scope Access: Inline function definitions allow the inline code to have access to variables defined in parent scopes. This can be very useful for many things and can reduce the amount or complexity of code if done properly. If you put the code in a function defined outside of this scope and then call the code, you would then have to pass any parent state that it wanted to access to the function. Privacy: Code inside an inline anonymous definition is more private and cannot be called by other code. Reduction of names defined in higher scopes: This is most important when operating in the global scope, but an inline anonymous declaration keeps from having to define a new symbol in the current scope. Since Javascript doesn't natively require the use of namespaces, it is wise to avoid defining any more global symbols than minimally required. Editorial: It does seem to have become a cultural thing in Javascript where declaring something anonymously inline is somehow considered "better" than defining a function and calling it even when parent scope access is not used. I suspect this was initially because of the global namespace pollution problem in Javascript, then perhaps because of privacy issues. But it has now turned into somewhat of a cultural thing and you can see it expressed in lots of public bodies of code (like the ones you mention). In languages like C++, most would probably consider it a less-than-ideal practice to have one giant function that extends across many pages/screens. Of course, C++ has namespacing built in, doesn't provide parent scope access and has privacy features so it can be motivated entirely by readability/maintainability whereas Javascript has to use the code expression to achieve privacy and parent scope access. So, JS just appears to have been motivated in a different direction and it's become somewhat a cultural thing within the language, even when the things that motivated that direction aren't needed in a specific case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311212",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/216835/"
]
} |
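The first two points in the answer above (parent scope access and privacy) are not specific to JavaScript; a short Python closure shows the same mechanics, with names that are illustrative only.
def make_counter(start=0):
    count = start                  # lives in the parent scope

    def increment():
        nonlocal count             # the inline function reads/writes parent state
        count += 1
        return count

    return increment               # `count` stays private: only reachable via increment

counter = make_counter()
print(counter(), counter())        # 1 2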
311,465 | So, a scrum sprint is a fixed time period during which a specific set of features should be implemented. And a scrum team consists of all the people committed to delivering those features, the majority of them typically being developers and testers. Having established these rules, one might wonder how to keep all of these people busy during the whole sprint. At the beginning of the sprint there is nothing to test yet, and at the end of the sprint there is typically nothing or very little left to develop/fix. I've seen 2 approaches to handle this, but neither of them seems to properly solve the problem. 1) Let the team members decide what to do whenever they're out of tasks. Cons: If what they do is not thoroughly planned (i.e. major refactoring, switching to new testing framework), their work may turn out to be useless or be stuck halfway through On the other hand, planning such work can take plenty of time, and the client may be disappointed to see the team waste time on something that does't bring immediate value Such tasks usually can't be thoroughly estimated, so it's quite easy for unprincipled workers to spend their time watching YouTube cats without it being reflected on the scrum board or anywhere else 2) Make room in the sprint only for development, and start testing after the sprint is finished (when developers start working on the features from the next sprint) Cons: While developing features for the current sprint, developers get distracted by fixing bugs from the previous one, and they can fail to perform the amount of work that was estimated to be done during the current sprint Two scrum boards are needed: one for the current sprint features, and one for the previous sprint bugs So my question is: how to properly distribute work during the sprint between developers and testers so that no-one gets overloaded with work or ends up without tasks at any point? Are there ways to improve the approaches described above? Or are there any better approaches? | At the beginning of the sprint there is nothing to test yet Really? You have no requirements to validate? No discussions to have with your customer? No wire-frames to evaluate? No test plans to think about? at the end of the sprint there is typically nothing or very little
left to develop/fix I have never been in that place in a project. No more work to do? There is always something. Are all your tests fully automated? How is your CI looking? Could the database access layer be refactored to be simpler? And I've never worked on anything with an empty bug list and backlog. What did your developers used to do in a waterfall testing phase? I know some people get very religious about what is and what is not 'SCRUM'. I couldn't care less about that. But I think you have two issues here: A 'traditional' QA department that test code once it is 'finished' by developers rather than working with customers and developers to make sure you are building the right thing as well as building it right. Take a look at the agile testing quadrants by Lisa Crispin. The best testers are involved in every stage of the software development lifecycle and the best developers write their own tests. Trying to stick too closely to a SCRUM timetable of 1 week / 2 week sprints without having a prioritised and sized backlog that is split down into tasks that are easy enough to complete within a short amount of time within a single sprint. If you had this then there would always be more work to get on with. Maybe the last feature you work on in this sprint doesn't get in to this sprint's release, but it can always go in to the next one. Aside. If you have a small cohesive team then the roles matter less. Instead of having someone with the label tester who isn't allowed to write production code, or someone labelled a developer who thinks they are above testing, everyone should be doing whatever is necessary for the team to succeed, including the dreaded project management tasks when they are necessary, this is called a cross functional team. One extra point brought up by @Cort Ammon in the comments. The agile manifesto talks about customer collaboration over contract negotiation. You say that: the client may be disappointed to see the team waste time on something that does't bring immediate value It can be difficult to explain and I understand customers can be very difficult at times but this would be a big red flag for me. They are trusting you with their source code / client relationship / business / whatever you are developing for them. If they can't trust you to act professionally in their best interest then either you have a problem or they do. I have written a post that talks about software developers not being considered professionals. A professional doctor, lawyer, civil engineer faced with a client who changed the requirements on them part way through would not just reduce the quality and moan about it. They would tell their clients that it would be a problem. If the client pushed then a professional would not just blindly do it to a dangerously inferior standard because they would be liable. We don't take professional entrance exams and so are not liable. That doesn't mean we shouldn't try to be better. In summary, I wouldn't worry too much about trying to get people to be more efficient at the beginning and end of a sprint but rather see it as a symptom of a wider issue within the team. Have you heard of eXtreme Programming (XP) . I'd say the principles from XP to apply here are communication and respect: Respect your team to do what they think is best. I would argue that if there is a lot of watching cat videos then either you have poor developers or you are treating them poorly. Communication. 
If your developers are talking to each other, to the testers, to management, to the customer, then everyone should probably have a good feeling of what is up next and if they don't then they can just ask. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311465",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/218368/"
]
} |
311,507 | Most modern languages (which are somehow interpreted) have some kind of eval function. Such a function executes arbitrary language code, most of the time passed as the main argument as a string (different languages may add more features to the eval function). I understand users should not be allowed to execute this function ( edit i.e. take directly or indirectly arbitrary input from an arbitrary user to be passed to eval ), especially with server-side software, since they could force the process to execute malicious code. In that way, tutorials and communities tell us to not use eval. However, there are many times where eval is useful and used: Custom access rules to software elements (IIRC OpenERP has an object ir.rule which can use dynamic python code). Custom calculations and/or criteria (OpenERP has fields like that to allow custom code calculations). OpenERP report parsers (yes I know I'm freaking you out with OpenERP stuff... but it is the main example I have). Coding spell effects in some RPG games. So they have a good use, as long as they are used properly. The main advantage is that the feature allows admins to write custom code without having to create more files and include them (although most frameworks using eval features have also a way to specify a file, module, package, ... to read from). However, eval is evil in the popular culture. Stuff like breaking into your system comes to mind. However, there are other functions which could be harmful if somehow accessed by users: unlink, read, write (file semantics), memory allocation and pointer arithmetic, database model access (even if not considering SQL-injectable cases). So, basically, most of the time when any code is not written properly or not watched properly (resources, users, environments, ...), the code is evil and can lead even to economic impact. But there's something special with eval functions (regardless of the language). Question : Is there any historical fact for this fear becoming part of the popular culture, instead of giving the same attention to the other possibly dangerous features? | An eval function by itself is not evil, and there is a subtle point that I do not believe you are making: Allowing a program to execute arbitrary user input is bad I have written code that used an eval type of function and it was secure: the program and parameters were hard-coded. Sometimes, there is no language or library feature to do what the program needs and running a shell command is the short path. "I have to finish coding this in a few hours, but writing Java/.NET/PHP/whatever code will take two days. Or I can eval it in five minutes." Once you allow users to execute anything they want, even if locked down by user privilege or behind a "secure" screen, you create attack vectors. Every week, some random CMS, blogging software, etc. has a security hole patched where an attacker can exploit a hole like this. You are relying on the entire software stack to protect access to a function that can be used to rm -rf / or something else catastrophic (note: that command is unlikely to succeed, but will fail after causing a bit of damage). Is there any historical fact for this fear becoming part of the
popular culture, instead of putting the same attention to the other
possibly dangerous features? Yes, there is a historical precedent. Due to the numerous bugs that have been fixed over the years in various software that allow remote attackers to execute arbitrary code, the idea of eval has mostly fallen out of favor. Modern languages and libraries have rich sets of functionality that make eval less important, and this is no accident. It both makes functions easier to use and reduces the risk of an exploit. There has been much attention paid to many potentially insecure features in popular languages. Whether one receives more attention is primarily a matter of opinion, but the eval features certainly have a provable security problem that is easy to understand. For one, they allow executing operating system commands including shell built-ins and external programs that are standard (e.g. rm or del ). Two, combined with other exploits, an attacker may be able to upload their own executable or shell script then execute it via your software, opening the door for almost anything to happen (none of it good). This is a difficult problem. Software is complex, and a software stack (e.g. LAMP ) is multiple pieces of software that interact with each other in complex ways. Be careful how you use language features such as this, and never allow users to execute arbitrary commands. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311507",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/130510/"
]
} |
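A Python sketch of the contrast drawn above: eval on attacker-controlled text executes arbitrary code, while ast.literal_eval only accepts literal data and rejects everything else. The admin rule is a made-up example of a trusted, admin-authored expression; note that stripping __builtins__ is a courtesy, not a real sandbox.
import ast

user_input = "__import__('os').listdir('.')"   # attacker-controlled string

# eval(user_input) would happily run it -- this is the attack vector
# described above, so it is left commented out on purpose.

# ast.literal_eval only parses Python literals (numbers, strings, lists...),
# so code gets rejected rather than executed.
try:
    ast.literal_eval(user_input)
except (ValueError, SyntaxError):
    print("rejected non-literal input")

# Trusted, admin-authored expressions are a different situation: the risk
# is who controls the string, not eval() itself.
admin_rule = "quota_mb > 100 and user_group == 'staff'"
context = {"quota_mb": 150, "user_group": "staff"}
print(eval(admin_rule, {"__builtins__": {}}, context))   # True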
311,710 | A function f() uses eval() (or something as dangerous) with data which I created and stored in local_file on the machine running my program: import local_file
def f(str_to_eval):
# code....
# ....
eval(str_to_eval)
# ....
# ....
return None
a = f(local_file.some_str) f() is safe to run since the strings I provide to it are my own. However, if I ever decide to use it for something unsafe (e.g. user input) things could go terribly wrong . Also, if the local_file stops being local then it would create a vulnerability since I would need to trust the machine that provides that file as well. How should I ensure that I never "forget" that this function is unsafe to use (unless specific criteria are met)? Note: eval() is dangerous and can usually be replaced by something safe. | Define a type to decorate “safe” inputs. In your function f() , assert that the argument has this type or throw an error. Be smart enough to never define a shortcut def g(x): return f(SafeInput(x)) . Example: class SafeInput(object):
def __init__(self, value):
self._value = value
def value(self):
return self._value
def f(safe_string_to_eval):
assert type(safe_string_to_eval) is SafeInput, "safe_string_to_eval must have type SafeInput, but was {}".format(type(safe_string_to_eval))
...
eval(safe_string_to_eval.value())
...
a = f(SafeInput(local_file.some_str)) While this can be easily circumvented, it makes it harder to do so by accident. People might criticize such a draconian type check, but they would miss the point. Since the SafeInput type is just a datatype and not an object-oriented class that could be subclassed, checking for type identity with type(x) is SafeInput is safer than checking for type compatibility with isinstance(x, SafeInput) . Since we do want to enforce a specific type, ignoring the explicit type and just doing implicit duck typing is also not satisfactory here. Python has a type system, so let's use it to catch possible mistakes! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311710",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/166557/"
]
} |
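A short usage sketch of the SafeInput guard from the answer above, plus one caveat: assert statements are stripped when Python runs with -O, so an explicit raise is the safer variant if the check must always run. This assumes the SafeInput class defined above is in scope.
# With the SafeInput wrapper from the answer above:
#   f(SafeInput(trusted_string))   passes the type check
#   f("raw user input")            fails fast with an AssertionError

# Variant that survives `python -O` (asserts are removed under -O):
def f_strict(safe_string_to_eval):
    if type(safe_string_to_eval) is not SafeInput:
        raise TypeError("expected SafeInput, got {}".format(
            type(safe_string_to_eval).__name__))
    return eval(safe_string_to_eval.value())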
311,972 | Memory (and resource locks) are returned to the OS at deterministic points during a program's execution. The control flow of a program by itself is enough to know where, for sure, a given resource can be deallocated. Just like how a human programmer knows where to write fclose(file) when the program is done with it. GCs solve this by figuring it out directly during runtime when the control flow is executed. But the real source of truth about the control flow is the source. So theoretically, it should be possible to determine where to insert the free() calls before compilation by analyzing the source (or AST). Reference counting is an obvious way to implement this, but it's easy to encounter situations where pointers are still referenced (still in scope) yet no longer needed. This just converts the responsibility of manually deallocating pointers to a responsibility to manually manage the scope/references to those pointers. It seems like it's possible to write a program that can read a program's source and: predict all the permutations of the program's control flow---to similar accuracy as watching the live execution of the program track all the references to allocated resources for each reference, traverse the whole subsequent control flow in order to find the earliest point that the reference is guaranteed to never be dereferenced at that point, insert a deallocation statement at that line of source code Is there anything out there that does this already? I don't think Rust or C++ smart pointers/RAII is the same thing. | Take this (contrived) example: void* resource1;
void* resource2;
while(true){
int input = getInputFromUser();
switch(input){
case 1: resource1 = malloc(500); break;
case 2: resource2 = resource1; break;
case 3: useResource(resource1); useResource(resource2); break;
}
} When should free be called? Before the malloc and assignment to resource1 we can't free the old resource1, because it might have been copied to resource2; before assigning to resource2 we can't free the old resource2 either, because we may have gotten 2 from the user twice without an intervening 1. The only way to be sure is to test whether resource1 and resource2 are equal in cases 1 and 2, and free the old value only if they were not. This is essentially reference counting where you know there are only 2 possible references. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/311972",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/212019/"
]
} |
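A Python sketch of the "this is essentially reference counting" conclusion above: the release point can only be decided at run time, when the count of live references reaches zero. The RefCounted wrapper is illustrative, not a real library.
class RefCounted:
    def __init__(self, payload):
        self.payload = payload
        self.count = 1

    def acquire(self):
        self.count += 1
        return self

    def release(self):
        self.count -= 1
        if self.count == 0:
            print("freeing", self.payload)   # the only safe "free" point
            self.payload = None

resource1 = RefCounted("buffer A")
resource2 = resource1.acquire()    # like case 2 in the C example: an alias
resource1.release()                # not freed yet -- resource2 is still live
resource2.release()                # count hits zero, freed here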
312,009 | The project I work on is private, for commercial purposes, and its source code is not distributed to anyone. Only the functional application is shown to consumers through a website. It has the following structure: The source code is written in PHP; the only third-party code comes in through Composer and PHPUnit. HTML, CSS and JavaScript are used (with free third-party libraries). Server-side, MySQL, PHP and Memcached are used, and the server is a VPS rather than shared hosting. We do not want anyone to see our source code, but if for any reason our code is stolen or otherwise obtained, we want to have a license that does not allow disclosure of any kind. My questions are: Are third-party code and private code compatible in this type of project? Is it possible to license such a work? | The simple answer is "don't license your code." Instead, place a copyright statement on your code (which you should have done anyway) and add a statement to the effect that no one is allowed to use your code. Here's the longer answer: We do not want anyone to see our source code, but if for any reason our code is stolen or otherwise obtained, we want to have a license that does not allow disclosure of any kind. If someone steals your code, they're not terribly interested in how you may have licensed that code. They wanted it and stole it; there's nothing a license will do to prevent them from using it as they please, based upon the fact that they stole your code. More broadly, a license is there to give permission to others in order to use code that you created. The license dictates the terms that others have to follow, and specifies how they may use your code. As you don't want anyone using your code, you shouldn't put a license on it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312009",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/53411/"
]
} |
312,022 | I've noticed something lately looking at some popular projects on GitHub, that there's no develop branch. And in fact, the GitHub Flow guide doesn't mention it either. From my understanding, master should always be totally stable and reflect production. If developers are working on feature branches, and then merging those into master when they're done, that means there's a period of time where features/fixes are being merged into master and the master branch is actually newer than production. Wouldn't it make more sense to have the team create feature/fix branches off of develop , merge back into that, and then when the next version is totally ready for release, develop is merged into master and a tag is created? Imagine if people are merging straight into master , and a bug is reported in production that becomes difficult to fix because the master branch codebase has changed significantly. Then the devs just have to tell the user to wait until the next release to see the issue resolved. EDIT: This question is different than "to branch or not to branch." It specifically addresses people moving away from using the develop branch and the reasons surrounding that, since that was touted as a best practice for a long time. | It comes from the CI mindset where there is integration several times a day. There are pros and cons of both. On our team we have abandoned the develop branch as well since we felt it provided no additional benefit but a few drawbacks. We have configured our CI software(Teamcity) to compensate for the drawbacks: Enable deployment of a specific commit. In other words: We don't deploy a branch. We deploy a commit. We can deploy either master or branches starting with a hotfix/ prefix. The reason this works is because all pull request contain potentially releasable code but this doesn't mean we deploy all commits in master. The main reason we abandoned the develop branch is because it tended to get too large and too time consuming to see what it actually contained. If we have deployed something a little prematurely we just branch off a hotfix branch and deploy that directly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312022",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/210564/"
]
} |
312,135 | Sometimes in a programming exercise, boilerplate generation, putting guide rails around the tasks for a junior programmer to implement, etc., it happens that the programmer is presented unimplemented code and told to "fill in the blank." For example, a unit test that may compile, but fails, or a class declaration with empty methods. Is there a common term for this practice? | You are referring to a stub or skeleton : Stub This is typically a method or function with a mostly-empty body that simply returns a dummy value so code will compile. Skeleton This is a method that has a high-level algorithm implemented, but individual parts are left unimplemented. They may be empty code blocks, or reference stub methods (see above) that will eventually perform subtasks. This is a good way to express a software design for a junior programmer who may struggle with the larger design effort, or for making sure you have the algorithm correct before investing too much time in the low-level details. The practice of using these code elements would be called stubbing or creating a code skeleton . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312135",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/193199/"
]
} |
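A Python illustration of the two terms defined above (the report-generation task is invented): the skeleton fixes the high-level algorithm, while the helpers are stubs that run but return dummy values or are left explicitly unimplemented for someone else to fill in.
def generate_report(orders):
    """Skeleton: the overall algorithm is decided, the steps are not."""
    validated = validate_orders(orders)
    totals = compute_totals(validated)
    return format_report(totals)

def validate_orders(orders):
    # Stub: dummy pass-through so the skeleton runs end to end.
    return orders

def compute_totals(orders):
    # Stub: hard-coded dummy value, to be replaced with real aggregation.
    return {"total": 0}

def format_report(totals):
    # Stub: explicitly unimplemented, fails loudly if reached in tests.
    raise NotImplementedError("format_report")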
312,183 | At some point a program is in development. Features are being added or removed or changed all the time. Every version is nothing but a prototype.
So I don't waste much time on writing super clean code at that point because I never know how long something lasts. Of course I try to keep the code quality to certain standards, but time is always an issue. Then comes the point where the program is finished and the decision maker(s) say "that's it". I do have a working prototype at this point, but the code inside is a bit messy from all the back and forth during the development phase. I am expected to start testing/final debugging but my gut says I should now somehow clean up and or rewrite stuff to give it proper architecture that makes maintenance etc easier. Once stuff has been tested and approved, it makes no sense to rewrite then.
On a regular basis I am standing there with a working 'finished' prototype and I get a bug during testing and I see that it is a result of not-smart coding which is a result of the whole development process.
I am in the middle of testing and the bugfix would be a rewrite... it's a mess! There are better/textbook ways, I am sure. But i have to work in a real work environment where not everything is textbook. So how do I transition my working prototype to a release version with a stable code base ?
Maybe I should not consider the development finished once I do and actually see it as the clean-up phase... I don't know, I need help here. EDIT I want to clarify a few things. I am 100% on the side of doing it right before and not after, code clean and readable.
But i also have to get things done and can't dream about the beauty of code all clean and shiny. I have to find a compromise. often a new feature is really just something that we want to try out and see if it makes sense to implement something like this. (esp. in mobile apps, to get a real look-and-feel on an actual device)
So it is something small that (imho) does not justify too much work in a first "let's see" iteration. However sometimes the question arises WHEN do i pay this tech.debt ? That's what this question is all about. If I know that half of the features will be dropped one day later (enough experience in our company by now) I really find it hard to believe that the best way to approach my problem is to nonetheless invest extra time to write everything clean even if most of it will be dropped shortly after. It feels to me that I will save time if I do one big cleanup once the thing is solid, hence my question. | So I don't waste much time on writing super clean code at that point because I never know how long something lasts. Not knowing how long something lasts should never be an excuse for sloppiness - quite the opposite. The cleanest code is IMHO the one which does not come into your way when you have to change something. So my recommendation is: always try to write the cleanest code you can - especially when coding a prototype. Because it will be much easier to adapt it when something has to be changed (which surely will happen). Don't get me wrong - my understanding of "the cleanest code" has nothing to do with making code beautiful for the sake of beauty. That is indeed something which can slow you down. In my point of view, clean code is code which is mostly self-explaining (no need to write so much docs - causes speedup), easy to understand (less errors, so less debugging needed - speedup, less time needed to find the correct place to alter - speedup), solves the given problem with the least amount of necessary code (less code to debug - obvious speedup), is DRY (only one place to change when something has to be changed - speedup - and less risk to introduce new bugs by forgetting to change a second place), follows coding standards (less cumbersome things to think about - speedup), uses small, reusable building blocks (which can be reused for many features or even prototypes - speedup), and so on. I am expected to start testing/final debugging but my gut says I should now somehow clean up and or rewrite stuff to give it proper architecture that makes maintenance etc easier Doing "cleanup" afterwards never works. Consider you cleanup before you implement a new feature, or when starting to implement it, but not afterwards. For example, whenever you start to touch a method for a feature, and you notice it gets longer than 10 lines, consider to refactor it into smaller methods - immediately , before getting the feature complete. Whenever you detect an existing variable or function name you do not know exactly what it means, find out what it is good for and rename the thing before doing anything else. If you do this regularly, you keep your code at least in a "clean enough" state. And you start saving time - because you need much less time for debugging. I am in the middle of testing and the bug fix would be a rewrite ... which is the actual proof for what I wrote above: being "dirty" haunts immediately back on you when you start debugging your code and will make you slower. You can avoid this almost completely if you do the cleanup immediately. Then bug fixes will mostly mean small changes to the code, but never a major architectural change. 
If you really detect evidence for an architectural improvement during testing, delay it, put it into your issue tracking system, and implement it the next time you have to implement a feature which benefits from that change ( before you start with that feature). This takes some discipline, and some coding experience, of course. It is a similar idea like the idea behind "test driven development", doing these things beforehand instead of doing them afterwards (TDD can help, too, but what I wrote works even when you do not use TDD). When you do this consequently, you will not need any special "clean-up phase" before releasing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312183",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/147886/"
]
} |
312,197 | We're building a new app and I'd like to include structured logging. My ideal setup would be something like Serilog for our C# code, and Bunyan for our JS. These would feed into fluentd and then could go out to any number of things, I was thinking initially elasticsearch + kibana . We have a MySQL database already, so in the short term I'm more interested in getting Serilog + Bunyan setup and the devs to use it and we can log to MySQL while we take a bit more time bringing in fluentd and the rest. However, one of our more experienced coders would prefer to just do something like: log.debug("Disk quota {0} exceeded by user {1}", quota, user); using log4net and then just run select statements against MySQL like: SELECT text FROM logs WHERE text LIKE "Disk quota"; That being said, which approach is better and/or what things do we need to consider when choosing the type of logging system? | There are two fundamental advances with the structured approach that can't be emulated using text logs without (sometimes extreme levels of) additional effort. Event Types When you write two events with log4net like: log.Debug("Disk quota {0} exceeded by user {1}", 100, "DTI-Matt");
log.Debug("Disk quota {0} exceeded by user {1}", 150, "nblumhardt"); These will produce similar text: Disk quota 100 exceeded by user DTI-Matt
Disk quota 150 exceeded by user nblumhardt But, as far as machine processing is concerned, they're just two lines of different text. You may wish to find all "disk quota exceeded" events, but the simplistic case of looking for events like 'Disk quota%' will fall down as soon as another event occurs looking like: Disk quota 100 set for user DTI-Matt Text logging throws away the information we initially have about the source of the event, and this has to be reconstructed when reading the logs usually with more and more elaborate match expressions. By contrast, when you write the following two Serilog events: log.Debug("Disk quota {Quota} exceeded by user {Username}", 100, "DTI-Matt");
log.Debug("Disk quota {Quota} exceeded by user {Username}", 150, "nblumhardt"); These produce similar text output to the log4net version, but behind the scenes, the "Disk quota {Quota} exceeded by user {Username}" message template is carried by both events. With an appropriate sink, you can later write queries where MessageTemplate = 'Disk quota {Quota} exceeded by user {Username}' and get exactly the events where the disk quota was exceeded. It's not always convenient to store the entire message template with every log event, so some sinks hash the message template into a numeric EventType value (e.g. 0x1234abcd ), or, you can add an enricher to the logging pipeline to do this yourself . It's more subtle than the next difference below, but a massively powerful one when dealing with large log volumes. Structured Data Again considering the two events about disk space usage, it may be easy enough using text logs to query for a particular user with like 'Disk quota' and like 'DTI-Matt' . But, production diagnostics aren't always so straightforward. Imagine it's necessary to find events where the disk quota exceeded was below 125 MB? With Serilog, this is possible in most sinks using a variant of: Quota < 125 Constructing this kind of query from a regular expression is possible, but it gets tiring fast and usually ends up being a measure of last resort. Now add to this an event type: Quota < 125 and EventType = 0x1234abcd You start to see here how these capabilities combine in a straightforward way to make production debugging with logs feel like a first-class development activity. One further benefit, perhaps not as easy to prevent up front, but once production debugging has been lifted out of the land of regex hackery, developers start to value logs a lot more and exercise more care and consideration when writing them. Better logs -> better quality applications -> more happiness all around. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312197",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/219726/"
]
} |
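The answer above leans on Serilog specifics; as a rough illustration of the same idea, here is a minimal sketch using Python's standard logging module (the EventTypeFilter and JsonFormatter classes, the MD5-based hash, and the JSON field names are illustrative assumptions of mine, not Serilog's API or the answer's code). It shows the core point: the unformatted message template survives as data on each record, so an event type can be derived from it and queried later instead of regex-matching rendered text.

```python
import hashlib
import json
import logging

class EventTypeFilter(logging.Filter):
    """Attach a stable event type derived from the unformatted message template."""
    def filter(self, record: logging.LogRecord) -> bool:
        # record.msg is the template ("Disk quota %s exceeded by user %s");
        # record.args holds the values -- both survive independently of formatting.
        record.event_type = hashlib.md5(str(record.msg).encode()).hexdigest()[:8]
        return True

class JsonFormatter(logging.Formatter):
    """Emit template, args, and event type as one JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "event_type": getattr(record, "event_type", None),
            "template": str(record.msg),
            "args": record.args,
            "rendered": record.getMessage(),
        })

logger = logging.getLogger("quota")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.addFilter(EventTypeFilter())
logger.setLevel(logging.DEBUG)

# Both events share a template, so they share an event_type,
# even though the rendered text differs.
logger.debug("Disk quota %s exceeded by user %s", 100, "DTI-Matt")
logger.debug("Disk quota %s exceeded by user %s", 150, "nblumhardt")
```

Shipped into a store such as Elasticsearch, those JSON lines can be filtered on event_type directly; naming the argument fields (as Serilog's {Quota} placeholders do) is the remaining step that makes range queries like Quota < 125 possible.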
312,219 | What is easier to understand, a big boolean statement (quite complex), or the same statement broken down into predicate methods (lots of extra code to read)? Option 1, the big boolean expression: private static bool ContextMatchesProp(CurrentSearchContext context, TValToMatch propVal)
{
return propVal.PropertyId == context.Definition.Id
&& !repo.ParentId.HasValue || repo.ParentId == propVal.ParentId
&& ((propVal.SecondaryFilter.HasValue && context.SecondaryFilter.HasValue && propVal.SecondaryFilter.Value == context.SecondaryFilter) || (!context.SecondaryFilter.HasValue && !propVal.SecondaryFilter.HasValue));
} Option 2, The conditions broken down into predicate methods: private static bool ContextMatchesProp(CurrentSearchContext context, TValToMatch propVal)
{
return MatchesDefinitionId(context, propVal)
&& MatchesParentId(propVal)
&& (MatchedSecondaryFilter(context, propVal) || HasNoSecondaryFilter(context, propVal));
}
private static bool HasNoSecondaryFilter(CurrentSearchContext context, TValToMatch propVal)
{
return (!context.No.HasValue && !propVal.SecondaryFilter.HasValue);
}
private static bool MatchedSecondaryFilter(CurrentSearchContext context, TValToMatch propVal)
{
return (propVal.SecondaryFilter.HasValue && context.No.HasValue && propVal.SecondaryFilter.Value == context.No);
}
private bool MatchesParentId(TValToMatch propVal)
{
return (!repo.ParentId.HasValue || repo.ParentId == propVal.ParentId);
}
private static bool MatchesDefinitionId(CurrentSearchContext context, TValToMatch propVal)
{
return propVal.PropertyId == context.Definition.Id;
} I prefer the second approach, because I see the method names as comments, but I understand that it's problematic because you have to read all the methods to understand what the code does, so it abstracts the code's intent. | "What is easier to understand?" The latter approach. It's not only easier to understand, but also easier to write, test, refactor, and extend.
Each required condition can be safely decoupled and handled in its own way. "it's problematic because you have to read all the methods to understand the code" It's not problematic if the methods are named properly. In fact, it is easier to understand, because each method name describes the intent of its condition. For an onlooker, if MatchesDefinitionId() is more explanatory than if (propVal.PropertyId == context.Definition.Id) . [Personally, the first approach hurts my eyes.] | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312219",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/24822/"
]
} |
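As a rough, hedged sketch of the answer's recommendation (transposed to Python; the dataclass names and fields are illustrative, and the null-handling is simplified relative to the original C#), the decomposition also makes each rule independently unit-testable, which is part of why the second option tends to win:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SearchContext:
    definition_id: int
    parent_id: Optional[int]
    secondary_filter: Optional[int]

@dataclass
class ValToMatch:
    property_id: int
    parent_id: Optional[int]
    secondary_filter: Optional[int]

def matches_definition_id(ctx: SearchContext, val: ValToMatch) -> bool:
    return val.property_id == ctx.definition_id

def matches_parent_id(ctx: SearchContext, val: ValToMatch) -> bool:
    return ctx.parent_id is None or ctx.parent_id == val.parent_id

def secondary_filters_match(ctx: SearchContext, val: ValToMatch) -> bool:
    return ctx.secondary_filter is not None and ctx.secondary_filter == val.secondary_filter

def has_no_secondary_filter(ctx: SearchContext, val: ValToMatch) -> bool:
    return ctx.secondary_filter is None and val.secondary_filter is None

def context_matches_prop(ctx: SearchContext, val: ValToMatch) -> bool:
    # The top-level rule now reads like the sentence it encodes.
    return (matches_definition_id(ctx, val)
            and matches_parent_id(ctx, val)
            and (secondary_filters_match(ctx, val) or has_no_secondary_filter(ctx, val)))

# Each predicate is trivially testable on its own:
assert matches_parent_id(SearchContext(1, None, None), ValToMatch(1, 99, None))
assert not secondary_filters_match(SearchContext(1, None, 5), ValToMatch(1, None, 6))
```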
312,296 | I am very new to programming and a bit confused from reading/hearing different conventions from different sources: Does Object-oriented programming have 4 or 5 concepts? As a newcomer, I understand these are the 5 concepts: Abstraction Inheritance Encapsulation Polymorphism Modularity So why can't I find a more "strict" definition, and why do there seem to be several arrangements of these concepts out there? | The reason you find different explanations of what object-oriented programming means is that there is no single person or organization with the authority to formulate a strict, universally applicable definition. Object-oriented programming is not an ISO standard or a scientific law. It is a philosophy. And as with all philosophies, there are all kinds of different interpretations, and no interpretation is universally applicable. When you read a text which tells you what concepts you should follow when designing a software architecture, you should see this as a guideline based on the opinions the author formed during their professional experience, and not as a universal truth. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312296",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
312,339 | I'm fully aware that pylint and other static analysis tools are not all-knowing, and sometimes their advice must be disobeyed. (This applies for various classes of messages, not just convention s.) If I have classes like class related_methods():
def a_method(self):
self.stack.function(self.my_var)
class more_methods():
def b_method(self):
self.otherfunc()
class implement_methods(related_methods, more_methods):
def __init__(self):
self.stack = some()
self.my_var = other()
def otherfunc(self):
self.a_method() Obviously, that's contrived. Here's a better example, if you like. I believe this style is called using "mixins". Like other tools, pylint rates this code at -21.67 / 10 , primarily because it thinks more_methods and related_methods don't have self or the attributes otherfunc , stack , and my_var , because without running the code it apparently can't see that related_methods and more_methods are mixed in to implement_methods . Compilers and static analysis tools can't always solve the Halting Problem , but I feel this is certainly a case in which looking at what's inherited by implement_methods would show this is perfectly valid, and that would be a very easy thing to do. Why do static analysis tools reject this valid (I think) OOP pattern? Either: They don't even try to check inheritance, or mixins are discouraged in idiomatic, readable Python. #1 is obviously incorrect because if I ask pylint to tell me about a class of mine that inherits unittest.TestCase and uses self.assertEqual (something defined only in unittest.TestCase ), it does not complain. Are mixins unpythonic or discouraged? | Mixins just aren't a use case that was considered by the tool. That doesn't mean it's necessarily a bad use case, just an uncommon one for Python. Whether mixins are used appropriately in a particular instance is another matter. The mixin anti-pattern I see most frequently is using mixins when there is only ever intended to be one combination mixed. That's just a roundabout way to hide a god class. If you can't think of a reason right now to swap out or leave out one of the mixins, it shouldn't be a mixin. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312339",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/201219/"
]
} |
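A practical follow-up to the answer above, sketched in Python (the Stack helper and class names are illustrative, and behaviour varies by pylint version, so treat this as a hedged suggestion rather than a guarantee): declaring the attributes and hook methods a mixin expects, via class-level annotations and stub methods, usually gives pylint and mypy enough to resolve self.stack and friends without resorting to blanket no-member suppressions.

```python
class Stack:
    def function(self, value: int) -> None:
        print(f"processing {value}")

class RelatedMethodsMixin:
    # Class-level annotations declare what the mixin expects the final class
    # to provide; static analysis can then resolve self.stack and self.my_var.
    stack: Stack
    my_var: int

    def a_method(self) -> None:
        self.stack.function(self.my_var)

class MoreMethodsMixin:
    def otherfunc(self) -> None:
        # Explicit hook: the concrete class overrides this.
        raise NotImplementedError

    def b_method(self) -> None:
        self.otherfunc()

class ImplementMethods(RelatedMethodsMixin, MoreMethodsMixin):
    def __init__(self) -> None:
        self.stack = Stack()
        self.my_var = 42

    def otherfunc(self) -> None:
        self.a_method()

if __name__ == "__main__":
    ImplementMethods().b_method()  # prints "processing 42"
```

Writing the contract down like this also speaks to the god-class concern in the answer: if a mixin's expected contract can't be stated, it probably shouldn't be a mixin.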
312,401 | For a system that consists of multiple services calling each other (e.g. Front End -> Backend -> Storage), I often hear people using terminology such as "downstream" or "upstream" services. I'm not clear which direction these mean. Data flows in both directions: requests flow from the more user-facing services toward the back end, but responses flow in the opposite direction, so it seems to me it could be argued either way. | The downstream services are the ones that consume the upstream service. In particular, they depend on the upstream service. So the front-end is downstream of the back-end because it depends on the back-end. The back-end can exist meaningfully without the front-end, but the front-end doesn't make sense without the back-end. The dependency doesn't have to be as strong as I made it out to be in the previous paragraph. More generally, upstream services don't need to know or care about the existence of downstream services. Downstream services care about the existence of upstream services, even if they only optionally consume them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/312401",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/69715/"
]
} |
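A tiny dependency sketch (hypothetical service names, Python for illustration) that restates the answer's rule of thumb: runtime data flows both ways, but the dependency arrow, which is what "downstream" describes, points only from the consumer to the service it consumes.

```python
# Upstream: knows nothing about its consumers.
class Backend:
    def fetch_quota(self, user: str) -> int:
        return 100  # stand-in for real storage access

# Downstream: depends on (consumes) the upstream service.
class FrontEnd:
    def __init__(self, backend: Backend) -> None:
        self.backend = backend  # the dependency points upstream

    def render_quota_page(self, user: str) -> str:
        return f"{user}: {self.backend.fetch_quota(user)} MB"

print(FrontEnd(Backend()).render_quota_page("DTI-Matt"))  # request goes up, response comes back down
```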