source_id | question | response | metadata
---|---|---|---|
199,345 | A little while ago I joined a new development team and recently we had our first major release. We've used Git since the beginning of the project and by now are somewhat comfortable with it. However, now that the product is in the field, we are discovering new issues/processes that need to be established. Up to this point, everything we've done was by referencing the very popular successful branching model post, which indeed has been very helpful. Currently we have the following branches, all live and all being updated: master - only released, stable code (tagged every time a release is cut); develop - wide open, used for long-term development; hotfix-1.0.1 - branched off master's 1.0.0 tag for small, very targeted fixes which are already lining up; release-1.1 - a small incremental release that we want to push relatively soon, so we wanted to manage it separately from develop and limit the scope of changes. These are the merging rules we are establishing: If a code change is made in release-1.1, it must be merged up to develop. If a code change is made in hotfix-1.0.1, it must be merged up to release-1.1. Nobody except for one team member should merge anything into master, and that merge only happens when a product version is about to be shipped. My questions are: When should the merges take place? As soon as the fix is applied in the lower-level branch? Or periodically, in chunks of changes? If periodically, how do you typically determine the merge period? Who should do the merges? The person making the original code change? Or one individual who would be designated as "Director of Git Services"? The reason I'm asking all this is because it seems that while Git is very flexible (and I do love that part about it), it also allows you to easily shoot yourself in the foot. With just a few commands, someone could easily, and hopefully not on purpose, merge new development right into a hotfix that should ship out in 2 days. Many of us are new to Git and we are still feeling our way around the tool. I was thinking about how my other companies/teams handled such concepts in the past, and I think the biggest difference is that most other source control products I've used work with individual commits, so each developer could be responsible for making sure his fixes are applied to the correct places. But with Git, when one developer runs: git checkout release-1.1
git merge hotfix-1.0.1 ... after his commit, those commands will end up merging an entire branch, including code that he has never seen before and may not be the best person to resolve conflicts in, if there are any. | Wikipedia merely refers to it as the Circle-ellipse problem: The circle-ellipse problem in software development (sometimes known as the square-rectangle problem) illustrates a number of pitfalls which can arise when using subtype polymorphism in object modelling. The issues are most commonly encountered when using object-oriented programming. This is the L in the acronym S.O.L.I.D., which is known as the Liskov substitution principle. This problem arises as a violation of that principle. The problem concerns which subtyping or inheritance relationship should exist between classes which represent circles and ellipses (or, similarly, squares and rectangles). More generally, the problem illustrates the difficulties which can occur when a base class contains methods which mutate an object in a manner which might invalidate a (stronger) invariant found in a derived class, causing the Liskov substitution principle to be violated... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199345",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20673/"
]
} |
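The circle-ellipse (square-rectangle) problem described in that answer is easiest to see in code. Below is a minimal sketch of my own (not part of the original answer), in Python: a Square that inherits from a mutable Rectangle silently breaks an invariant callers rely on, which is exactly the Liskov substitution violation being described.

```python
class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def set_width(self, width):
        self.width = width

    def set_height(self, height):
        self.height = height

    def area(self):
        return self.width * self.height


class Square(Rectangle):
    """A square 'is-a' rectangle, so inheriting seems natural..."""

    def set_width(self, width):
        # Keeping the square invariant silently changes the height too.
        self.width = self.height = width

    def set_height(self, height):
        self.width = self.height = height


def stretch(rect):
    # Reasonable assumption for any Rectangle: setting the height
    # leaves the width untouched.
    rect.set_width(4)
    rect.set_height(5)
    assert rect.area() == 20, "base-class invariant broken"


stretch(Rectangle(2, 3))          # passes
try:
    stretch(Square(2, 2))         # Square is not substitutable here
except AssertionError as exc:
    print("LSP violation:", exc)
```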
199,479 | I want to look at how my code base has grown over time. GitHub has a nice +/- display along the list of checkins which gives a sense of this. Is there something similar I can use with my Google Code hosted repo or offline? | There are a few options natively in Git to get data about the changes. git log --stat will show the amount each file was changed. git whatchanged gives some detail into the files that were modified. git diff --stat <sha1> <sha2> gives the files and the amount of changes between two commits. There are many other blogs that give various formatted logs. A google search can point you at these. Also doing git log --help will give the various options for formatting the history of your repo. Git has the ability to give you quite a bit of data through the various command line log options (filtering by author, file, etc). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199479",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/69090/"
]
} |
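If you want to graph that growth yourself, here is a rough sketch of my own (not part of the answer above) that feeds `git log --numstat` into a running line total; it assumes a local clone and that `git` is on the PATH.

```python
import subprocess

# "@<hash> <date>" marks each commit; numstat lines are "<added>\t<deleted>\t<path>"
# ("-" for binary files).
out = subprocess.run(
    ["git", "log", "--reverse", "--numstat", "--date=short",
     "--pretty=format:@%h %ad"],
    capture_output=True, text=True, check=True,
).stdout

total, commit = 0, None
for line in out.splitlines():
    if line.startswith("@"):
        if commit is not None:
            print(f"{commit}: {total} lines")
        commit = line[1:]
    elif line.strip():
        added, deleted, _path = line.split("\t", 2)
        if added != "-":          # "-" means a binary file
            total += int(added) - int(deleted)
if commit is not None:
    print(f"{commit}: {total} lines")
```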
199,627 | The famous Strassen's matrix multiplication algorithm is a real treat for us, as it reduces the time complexity from the traditional O(n³) to O(n^2.8). But of all the resources I have gone through, even Cormen and Steven Skiena's book, they do not clearly state how Strassen thought of it. What is the rationale of Strassen's matrix multiplication algorithm? Is this a lucky accident or is there something deeper in it? | Apart from Strassen, nobody is able to tell you how Strassen got
his idea. Howeber¹, I can tell you how you could have found that
formula yourself—provided that you are interested in algebraic
geometry and representation theory. This also gives you the tools to show that Strassen's formula is as good as it can be, or more precisely, that there is no formula computing the product of two 2×2 matrices that uses fewer than 7 multiplications. Since you are interested in matrices, I assume you know basic linear
algebra, so I will be a bit blurry about the more advanced details. First, let E be the set of all linear maps from a plane to a
plane. This is basically the set of all 2×2 matrices, but we forget
about a particular coordinate system—because, if there were a better
coordinate system than the “default one” we could have interest in
using it for matrix multiplication. We also denote by E† the dual
space of E and by X = P(E⊗E†⊗E†) the projective space associated
to the tensor product E⊗E†⊗E† . An element of X = P(E⊗E†⊗E†) of the special form [c⊗α⊗β] can be
interpreted as an elementary operation on matrices, which, in some
appropriate coordinate systems, reads a coefficient of a matrix A and
a coefficient of a matrix B and writes the product of these
coefficients in some matrix C . A general element of X is a combination of
these elementary operations, so the product π of two matrices,
understood as a map from P(E)×P(E) to P(E), is a point in X . The usual matrix product formula and Strassen's formula can be
expressed as combinations of these linear operations, so let me denote
by W₁ the set of these elementary operations [c⊗α⊗β] and let me describe
geometrically their combinations. Let W₂ be the variety of secants of W₁ in X. It is obtained by
taking the (closure of the) union of all lines going through two
(generic) points of W₁. We can think of it as the set of all
combinations of two elementary operations. Let W₃ be the variety of secant planes of W₁ in X. It is obtained by
taking the (closure of the) union of all planes going through three
(generic) points of W₁. We can think of it as the set of all
combinations of three elementary operations. Similarly, we define secant varieties for greater indices. Note that
these varieties grow larger and larger, that is W₁⊂W₂⊂W₃⊂⋯ Hence
the classical matrix product formula shows that the product of
matrices is a point of W₈ . Actually PROPOSITION(Strassen) — The product of matrices π lies in W₇. As far as I know, Strassen did not put things that way, however this
is a geometric point of view on this question. This point of view is
very useful, because it also lets you prove that Strassen's formula is
the best, that is, that π does not lie in W₆ . Geometric methods
developed here can also be used for a broader range of problems. I hope I caught your curiosity. You can go further by reading this
article by Landsberg and Manivel: http://arxiv.org/abs/math/0601097 ¹ I will not fix this typo, because I caught a cold. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199627",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90320/"
]
} |
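To make the "seven multiplications" concrete, here is a small Python sketch of my own (not from the answer above) of Strassen's formula for 2×2 matrices; only the products m1 through m7 are genuine multiplications, everything else is addition and subtraction. Applied recursively to n/2-sized blocks, those seven multiplications give the O(n^log2 7) ≈ O(n^2.807) bound mentioned in the question.

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's 7 multiplications."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B

    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)

    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]


# Sanity check against the classical 8-multiplication result.
A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == [[19, 22], [43, 50]]
```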
199,644 | I'm writing a program for some quiz software. I have a question class containing the ArrayLists for the question, answer, options, marks and negative marks. Something like this: class question
{
private ArrayList<Integer> index_list;
private ArrayList<String> question_list;
private ArrayList<String> answer_list;
private ArrayList<String> opt1_list;
private ArrayList<String> opt2_list;
} I want to shuffle all the questions, but for the questions to be shuffled, all the objects need to be shuffled. I would have approached this problem differently: first of all, I would not have used this design; I would have used String rather than ArrayList<String> as the type of the instance variables, and would then have used the Collections.shuffle method to shuffle the objects. But my team insists on this design. Now the question class contains ever-growing ArrayLists as question entries are made. How do I shuffle the questions now? | Your team suffers from a common problem: object denial. Instead of a class that holds a single question with all the information associated with it, you try to create a class called question that holds all the questions in a single instance. That's the wrong way to go about it, and it complicates what you try to do a lot! Sorting (and shuffling) parallel arrays (or Lists) is nasty business and there's no common API for it, simply because you usually want to avoid it altogether. I suggest you restructure your code like this: class Question
{
private Integer index;
private String question;
private String answer;
private String opt1;
private String opt2;
}
// somewhere else
List<Question> questionList = new ArrayList<Question>(); This way, shuffling your question becomes trivial (using Collections.shuffle() ): Collections.shuffle(questionList); | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199644",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/90320/"
]
} |
199,742 | I'm currently coding an API for a social network with the Slim Framework. My question is: what are the best practices when there are no rows to return in the JSON structure? Let's say that this call /v1/get/movies returns 2 rows (movie names) from the movie table: [
{"name": "Ghostbusters"},
{"name": "Indiana Jones"}
] But then I call /v1/get/books and there are no rows in that table. Should I just return an empty structure? [
] ...or would a message and an error code be better? {
"errors": {
"message": "no matches found",
"code": 134
}
} Which is the better practice? (The API will be used in iOS and Android apps.) Thanks! | Usually I would return the number of records in the result as metadata. I am not sure if that is normal REST practice, but it is not much extra data, and it is very precise.
Many services use pagination, since it is impractical to return a huge result set at once. Personally, I am annoyed when there is pagination for small result sets.
If it is empty, return number_of_records: 0 and books as an empty list/array, books: []. {
  "meta": {
    "number_of_records": 2,
    "records_per_page": 10,
    "page": 0
  },
  "books": [
    {"id": 1},
    {"id": 27}
  ]
} EDIT (a few years later):
The answer from Martin Wickman is much better; here is a short explanation of why. When dealing with pagination, always keep in mind the possibility of the contents or ordering changing. Say the first request comes in with 24 results and you return the first 10. After that, a "new book" is inserted and now you have 25 results, but in the original request it would have been ordered in 10th place. When the first user requests the 2nd page, he would not get the "new book". There are ways to handle such problems, like providing a "request id" which should be sent with following API calls, then returning the next page from the "old" result set, which should be stored somehow and tied to the "request id". An alternative is to add a field like "result list changed since first request". Generally, if you can, try to put in the extra effort and avoid pagination. Pagination is additional state which can be mutated, and tracking such changes is error prone, even more so because both server and client need to handle it. If you have too much data to process at once, consider returning an "id list" with all results and details for some chunk of that list, and provide multi_get/get_by_id_list API calls for the resource. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199742",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92380/"
]
} |
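As a hedged illustration of the "meta plus empty list" suggestion above (my own sketch; the build_page name and field names are just for illustration), an empty result set can be built exactly like a non-empty one:

```python
def build_page(items, page=0, per_page=10):
    """Return the same envelope whether or not there are results."""
    start = page * per_page
    return {
        "meta": {
            "number_of_records": len(items),
            "records_per_page": per_page,
            "page": page,
        },
        "books": items[start:start + per_page],
    }


print(build_page([{"id": 1}, {"id": 27}]))  # normal case
print(build_page([]))                       # empty case: meta says 0, books == []
```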
199,803 | I am a fledgling DBA with a lot of experience in programming. I have developed several CLI, non-interactive apps that solve some daily repetitive tasks or eliminate the human error from more complex albeit not-so-daily tasks. These tools are now part of our toolbox. I find CLI apps are great because you can include them in an automated workflow. Also, the Unix philosophy of doing a single thing but doing it well, and letting the output of a process be the input of another, is a great way of building a set of tools that would consolidate into a strategic advantage. My boss recently commented that developing CLI tools is "backward", or constitutes a "regression". I told him I disagreed, because most CLI tools that exist now are not legacy but live projects with improved versions being released all the time. Is this kind of development considered "backwards" in the market? Does it look bad on a résumé? I also consider that all solutions, whether they are web or desktop, should have command-line, non-interactive options. Some people consider this a waste of programming resources. Is this goal a worthy one in a software project? I also think that for a web or desktop app, having an alternate CLI interface is a great way of demonstrating that the business logic is completely decoupled from the GUI. | It basically comes down to "use the right tool for the job." If you have to interact with a user, you'll want some sort of GUI. We've got decades of research and experience showing that they make computing far more intuitive and productive. That's why GUIs have inexorably taken over the world ever since 1984: they just work better for interacting with people. But if you're automating a program with scripts, your program isn't interacting with people; it's interacting with a script. And the best interface for that is a text-based one, for reasons that should be intuitively obvious. The development of CLI programs for users to work with directly is considered backward, and with good reason. But if that's not what you're doing, if you're writing automation productivity tools, then you aren't doing anything wrong by giving them a CLI. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199803",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61852/"
]
} |
199,870 | I am an intern at a small software company, doing typical grunt work. Many tasks that the actual developers are doing are above my head; however, I finally got to see some real "action" and write some code myself. I will admit the task was perhaps a bit too difficult for me, but I managed and my code worked in the end. When my mentor was doing code review he said to me, "Man, this part is ugly." When I said, "Why? Is something wrong with it?" he replied, "No, the code works, but it's just not clean code." The concept of clean code is one I have run across a few times before, especially now that I get to hang around professional developers. But what exactly is it that makes code "clean"? In other words, what are the criteria? I have heard that in formal mathematics a "beautiful" proof is one that is ideally as clear and as short as possible and uses creative techniques. Many people know that in literature, "good" literature is that which can express feelings effortlessly and elegantly to the extent that one can "feel" it. But one can't "feel" code (at least I can't), and I think most would agree that short code is not necessarily the best (as it could be very inefficient) or even that the most efficient way is always preferred (as it could be very long and complex). So, what is it? Please try to enlighten me as to just what exactly makes code "clean". | In my estimation, clean code has the following characteristics: it is clear and easy to understand, it is loosely coupled, it has relatively low cyclomatic complexity, it has a sensible, flexible architecture, it is easily discoverable, it is well-designed, and it is testable. Note that these are all subjective, although principles like SOLID and practices like TDD can go a long way towards achieving these goals. Also note that, as with most software dynamics, they're not necessarily mutually compatible; there are always tradeoffs. Also note that I didn't mention performance at all, because code that is highly performant is often the opposite of clean. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199870",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88783/"
]
} |
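To make "clear, loosely coupled, and low cyclomatic complexity" a bit more tangible, here is a small before/after sketch of my own (not from the answer above); both functions compute the same discount, but the second reads almost like the requirement it implements.

```python
# Before: nested conditionals, hard to follow and to test.
def discount_v1(customer, order_total):
    if customer is not None:
        if customer.get("vip"):
            if order_total > 100:
                return order_total * 0.8
            else:
                return order_total * 0.9
        else:
            if order_total > 100:
                return order_total * 0.95
            else:
                return order_total
    else:
        return order_total


# After: a guard clause plus a small lookup table keep the branching flat.
RATES = {(True, True): 0.8, (True, False): 0.9,
         (False, True): 0.95, (False, False): 1.0}

def discount_v2(customer, order_total):
    if customer is None:
        return order_total
    key = (bool(customer.get("vip")), order_total > 100)
    return order_total * RATES[key]


# The two versions agree on every case.
for cust in (None, {"vip": True}, {"vip": False}):
    for total in (50, 150):
        assert discount_v1(cust, total) == discount_v2(cust, total)
```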
199,884 | We are developing an application; it includes a library developed by another coder. This library communicates with the server via multiple network connections, and this involves multiple threads working together. The server-side code is quite complicated, and we don't have access to the source code. Recently I've discovered a mandelbug that makes the application crash sometimes. I could reproduce it once and got a stack trace, so I opened a bug report. The bug itself is easy to fix (an uncaught web exception in one of the background threads, which makes the CLR terminate the program). The problem is that the developer is refusing to fix the bug, because "he is not convinced it exists". Unfortunately for me, the boss is siding with him and says this bug cannot be fixed unless I make a "solid test case" to prove the existence of the bug, and make a unit test verifying that it's gone, which is basically impossible due to the nature of the bug. Any advice? | If possible, maybe spend some time to check if this defect can be reproduced by putting some sleep or a block in your application code. But do not spend too much time. As this issue is due to multi-threading (and also as you observed), its occurrence will be rare. My advice is not to sweat over this too much. Continue your work. Whenever you come across this crash, update your bug report with the stack trace, saying that this is a repeat occurrence, and changing the owner to the library developer. Let the management/lead decide whether to fix it or not depending on its frequency. Also try to understand the developer's mentality. You said "uncaught web exception". The developer at this stage may not be entirely sure what the other effects of catching this will be. So he/she may be reluctant to touch the code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199884",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/79533/"
]
} |
199,939 | If we assume we have this little snippet of code: string str = "checked";
bool test1;
if (str == "checked")
{
test1 = true;
}
else
{
test1 = false;
} Is it bad practice to change a simple statement like this to the following?: bool test2 = (str == "checked"); They work exactly the same and work as required, so I can't imagine how it would be. However, as a young, inexperienced programmer, I am not aware of whether such a thing is frowned upon or not. Can anyone tell me, if this is NOT OK, why not? The following test program: using System;
public class Test
{
public static void Main()
{
string str = "checked";
bool test1;
if (str == "checked")
{
test1 = true;
}
else
{
test1 = false;
}
bool test2 = (str == "checked");
bool test3 = (str != "checked");
Console.WriteLine(test1.ToString());
Console.WriteLine(test2.ToString());
Console.WriteLine(test3.ToString());
}
} Outputs: True
True
False Any insight etc is appreciated. | Is it bad practice to change a simple statement like this to the following?: bool test2 = (str == "checked"); No, it's good practice. To me, the longer code: if (str == "checked")
{
test1 = true;
}
else
{
test1 = false;
} indicates that the programmer doesn't understand Boolean expressions. The shorter form is much clearer. Similarly, don't write: if (boolean-expression) {
return true;
} else {
return false;
} Just write return boolean-expression; | {
"source": [
"https://softwareengineering.stackexchange.com/questions/199939",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81895/"
]
} |
200,041 | When forming opinions, it is good practice to follow scholastic tradition - think as hard as you can against the opinion you hold and try to find counter-arguments. However, no matter how hard I try, I just cannot find reasonable arguments in favor of antivirus (and related security measures) on development machines. Arguments against antivirus (AV) in development are plentiful: it is not uncommon for a 1-minute build to take 10 times longer with AV on; in a conference talk, IntelliJ developers claim AV software is the #1 suspect when their IDE is sluggish; unzipping runs at roughly 100 kb/s with AV on; AV renders Cygwin completely unusable (vim takes 1 minute to open a simple file); AV blocks me from downloading useful files (JARs, DLLs) from colleagues' e-mails; I can't use multiple computers for development, since AV / security measures prevent me from unblocking ports; AV kills the performance of programs with high file turnover, such as Maven or Ant. Last, but not least - what does AV actually protect me from? I am not aware of my AV program ever stopping any security threat. If the reason is fear of disclosing NDA stuff - no AV can possibly prevent me from doing it if I set my mind to it. If the reason is fear of losing source code and/or documentation - there are distributed revision systems for this (there are at least 20 copies of our repo and we sync on a daily basis). If the reason is fear of disclosing customer data - developers rarely work connected to real production databases; instead they are playing around in toy environments. Even if there are meaningful arguments in favor of having AV on development machines, they fall apart when faced with the ability to run a Virtual Machine in your paranoidly protected environment. Since I want to keep an open mind on the issue, could anyone present a meaningful, strong argument in favor of anti-virus software for developers? | The one reason to use anti-virus software on development machines that trumps all your arguments is: to comply with security audits. Banks, government agencies, and large regulated firms with sensitive data don't have a choice on this matter. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200041",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92669/"
]
} |
200,115 | I keep hearing about early and late binding, but I do not understand what they are. I found the following explanation which I do not understand: Early binding refers to assignment of values to variables during design time whereas late binding refers to assignment of values to variables during run time. Could someone please define the two types of binding and compare them? | There are two major concepts in confusion: binding and loading. It is conflated by the concept of DataBinding, which is somewhere in the middle often doing both. After considering it, I am going to add one more concept, to complete the trifecta, dispatch. Types Late Binding : type is unknown until the variable is exercised during run-time; usually through assignment but there are other means to coerce a type; dynamically typed languages call this an underlying feature, but many statically typed languages have some method of achieving late binding Implemented often using [special] dynamic types, introspection/reflection, flags and compiler options, or through virtual methods by borrowing and extending dynamic dispatch Early Binding : type is known before the variable is exercised during run-time, usually through a static, declarative means Implemented often using standard primitive types Functions Static Dispatch : known, specific function or subroutine at compile time; it is unambiguous and matched by the signature Implemented as static functions; no method can have the same signature Dynamic Dispatch : not a specific function or subroutine at compile time; determined by the context during execution. There are two different approaches to "dynamic dispatch," distinguished by what contextual information is used to select the appropriate function implementation. In single [ dynamic ] dispatch , only the type of the instance is used to determine the appropriate function implementation. In statically-typed languages, what this means in practice is that the instance type decides which method implementation is used irrespective of the reference type indicated when the variable is declared/assigned. Because only a single type -- the type of the object instance -- is used to infer the appropriate implementation, this approach is called "single dispatch". There is also multiple [ dynamic ] dispatch , where input parameter types also help determine which function implementation to call. Because multiple types -- both the type of the instance and the type(s) of the parameter(s) -- influence which method implementation is selected, this approach is dubbed "multiple dispatch". Implemented as virtual or abstract functions; other clues include overridden, hidden, or shadowed methods. NB: Whether or not method overloading involves dynamic dispatch is language-specific. For example, in Java, overloaded methods are statically dispatched. Values Lazy Loading : object initialization strategy that defers value assignment until needed ; allows an object to be in an essentially valid but knowingly incomplete state and waiting until the data is needed before loading it; often found particularly useful for loading large datasets or waiting on external resources Implemented often by purposefully not loading a collection or list into a composite object during the constructor or initialization calls until some downstream caller asks to see the contents of that collection (eg. get_value_at, get_all_as, etc). 
Variations include loading meta information about the collection (like size or keys), but omitting the actual data; also provides a mechanism to some runtimes to provide developers with a fairly safe and efficient singleton implementation scheme Eager Loading : object initialization strategy that immediately performs all value assignments in order to have all the data needed to be complete before considering itself to be in a valid state. Implemented often by providing a composite objects with all their known data as soon as possible, like during a constructor call or initialization Data Binding : often involves creating an active link or map between two compatible information streams so that changes to one are reflected back into the other and vice versa; in order to be compatible they often have to have a common base type, or interface Implemented often as an attempt to provide cleaner, consistent synchronization between different application aspects (eg view-model to view, model to controller, etc.) and talks about concepts like source and target, endpoints, bind/unbind, update, and events like on_bind, on_property_change, on_explicit, on_out_of_scope EDIT NOTE: Last major edit to provide description of examples of how these often occur. Particular code examples depend entirely on the implementation/runtime/platform | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200115",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/85240/"
]
} |
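Here is a compact Python sketch of my own (not from the answer above) illustrating three of the ideas it describes: late binding of a variable's type, single dynamic dispatch through a virtual method, and lazy loading via a cached property.

```python
from functools import cached_property

# Late binding: the type of `x` is only fixed when it is assigned at run time.
x = 42            # an int for now
x = "forty-two"   # now a str; nothing was declared up front

# Single dynamic dispatch: the *instance* type picks the implementation.
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square(Shape):
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

shapes = [Circle(1), Square(2)]
print([round(s.area(), 2) for s in shapes])  # dispatched per instance type

# Lazy loading: the expensive value is only computed when first asked for.
class Report:
    @cached_property
    def rows(self):
        print("loading rows...")   # runs once, on first access
        return list(range(1_000_000))

r = Report()        # nothing loaded yet
_ = len(r.rows)     # "loading rows..." printed here
_ = len(r.rows)     # cached; no reload
```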
200,319 | I ask because so many of the questions I see about SQL amount to: "This is slow. How do I speed it up?" Or there are tutorials stating "Do this this way and not that way, as it's faster." It seems to me that a large part of SQL is knowing just how an expression would be performed and, from that knowledge, choosing expression styles that perform better. This doesn't square with one aspect of declarative programming - that of leaving the system to decide how best to perform the calculation, with you just specifying what the calculation should produce. Shouldn't an SQL engine not care whether you used in, exists or join? If it is truly declarative, shouldn't it just give you the correct answer in reasonable time, if possible, by any of the three methods? This last example is prompted by this recent post, which is of the type mentioned in my opening paragraph. Indexes: I guess the easiest example I could have used relates to creating an index for a table. The gumph here on w3schools.com even tries to explain it as something unseen by the user that is there for performance reasons. Their description seems to put SQL indices in the non-declarative camp, and they are routinely added by hand for purely performance reasons. Is it the case that there is somewhere an ideal SQL DB that is much more declarative than all the rest, but because it is that good one doesn't hear about it? | SQL is theoretically declarative. But you know what they say about the difference between theory and practice... At its core, the concept of "declarative programming" has never been truly effective, and likely never will be until we have an AI-based compiler that's capable of looking at code and answering the question "what is the intention of this code?" intelligently, in the same way that the person who wrote it would. At the heart of every declarative language is a whole bunch of imperative code trying frantically to solve that problem without the help of an AI. Often it works surprisingly well, because the most common cases are common cases, which the people who wrote the language's implementation knew about and found good ways to handle. But then you run up against an edge case that the implementor didn't consider, and you see performance degrade quickly as the interpreter is forced to take the code much more literally and handle it in a less efficient manner. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200319",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/17036/"
]
} |
200,320 | I work as a solo developer in a small company. There's more than enough work, but the same does not apply to money. Thus, I won't be seeing any new colleagues in the near future. I am responsible for absolutely everything that has to do with IT operations. This involves development and maintenance of software used in-house, development and maintenance of various websites which our clients use, website infrastructure, local network infrastructure including maintenance of several servers, and in-house support, to mention the most immediate things. I really enjoy 95% of what I do, and I've got a high degree of flexibility in my work. I get to decide what to do when, and no one really tells me what to do, except that I now and then sit down with my colleagues to create a roadmap for what I need to get done. I consider myself to have a high work ethic and to be above-average focused on what I do, so things get done. However, I've come to the point where I really miss having other people around me who work on the same things. Even though I need to get familiar with a wide range of technologies as a solo developer, I have the feeling that I am missing out on the "knowledge sharing" which other "like-minded" people who work in bigger companies are taking part in. I don't really have anyone to discuss programming obstacles and design decisions with - and I am starting to miss that. Also, I'm worried about what future employers might think of this "hermit" who has been working on his own for too long to ever be able to take part in a team. On the other side, however, I'm thinking that I won't get my current degree of flexibility in a larger company. I'll be seeing a lot more strict deadlines, late hours and specialized areas of work. Also, I'm not sure if this idea of "knowledge sharing" will ever take place. Has anyone else been in this situation? Is it a good idea seen from a career perspective and a personal development perspective? Should I consider moving on to a bigger place to (maybe) become a part of a larger group of developers and "like-minded" people? In other words, will the grass be greener on the other side? | If you are enjoying your work and only missing knowledge sharing, consider joining an open source project instead of changing jobs. Unless you already know the people you will be working with, you have no idea whether the grass will be greener on the other side. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200320",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26283/"
]
} |
200,362 | The majority of my work over the last three years has largely been around maintaining legacy systems that needed patching up or the occasional revamp before being sold again. I understand the critical role dedicated maintenance programmers have to play in companies with a large number of projects and limited developers on hand. But as I judge my current career progress and look at my peers - contractors and corporate developers alike - I do feel as if I am lagging far behind, since I've gained a great deal of breadth in terms of the areas I've touched but not much depth.
I've begun to address this by starting a blog, working on my own little GitHub projects and rescheduling my life to have time to do personal coding after work on a regular basis. I feel that were I to interview at other companies to escape maintenance work, I would have to represent myself as being quite junior in skill level, since I would not have the depth of knowledge that a person with three years' experience focused on a particular path in feature development would have. So half my current work experience would count for naught in the long run. But this leads me to my main questions (apologies if this feels too centered around my personal dilemma): Do dedicated maintenance programming roles end up being detrimental to an early career?
Are other programmers right to avoid roles like these?
Does doing this line of work lock you into doing similar tasks unless you're prepared to start over as a junior? | Do dedicated maintenance programming roles end up being detrimental to an early career? Are other programmers right to avoid roles like these? Does doing this line of work lock you into doing similar tasks unless you're prepared to start over as a junior? First up, you should know that you're considered a junior for quite a while. You may get arbitrary promotions because you're good and this is the only way to give you a decent payrise, but you'll still be considered a junior as you head on to your next job. Second, if I'm hiring someone with 2-4 years' experience, I don't really care whether their work was purely maintenance. If you've spent 10 years in maintenance and I'm hiring for a greenfields project, I may have questions but, for the first few years, I honestly kind of expect it. On the other hand, if I'm hiring someone who has NEVER worked in maintenance, I'm going to be more suspicious. I've had many candidates for jobs who have spent their first 4 years skipping from one "good" job to another and every single one has learned nothing about what makes for maintainable code. And, make no mistake, if I'm hiring for a greenfields project I intend to stick with, I don't care whether YOU are going to maintain the code, I care that you know how to leave it maintainable for future developers. These other programmers you mention, who avoid jobs like these, generally avoid them because they're less fun, not because it hinders their career. Finally, you should know that a very large percentage (I would conservatively guess at about 80%) of software development jobs are more than 50% maintenance. So, to cut through all that and answer your question: No, I don't think it's going to hinder your career. Unless you stay there for far too long. The common rule of thumb is "once you start feeling like you're getting the same year of experience every year, it's time to go." If you feel, each year, like you're a better developer than you were last year, you're fine (and that goes for me, 20 years into my career, as much as you). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200362",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/88515/"
]
} |
200,522 | I have created a library in Python that contains functions for accessing a database. This is a wrapper library around a third party application database, written due to the fact that the third party application does not offer a decent API. Now I originally let each function open a database connection for the duration of the function call which was OK, until my program logic used nested calls to the functions where I would then be calling a particular function a few thousand times. This wasn't very performant. Profiling this showed that the overhead was in the database connection setup - once per function call. So I moved the open connection from within the function(s) to the module itself, so that the database connection would be opened when the library module was imported. This gave me an acceptable performance. Now I have two questions regarding this. Firstly, do I need to be concerned that I am no longer explicitly closing the database connection and how could I do it explicitly with this set-up? Secondly, does what I have done fall anywhere close to the realm of good practice and how might I otherwise approach this? | It really depends on the library you're using. Some of them could be closing the connection on their own (Note: I checked the builtin sqlite3 library, and it does not). Python will call a destructor when an object goes out of scope, and these libraries might implement a destructor that closes the connections gracefully. However, that might not be the case! I would recommend, as others have in the comments, to wrap it in an object. class MyDB(object):
def __init__(self):
self._db_connection = db_module.connect('host', 'user', 'password', 'db')
self._db_cur = self._db_connection.cursor()
def query(self, query, params):
return self._db_cur.execute(query, params)
def __del__(self):
self._db_connection.close() This will instantiate your database connection at the start, and close it when the place your object was instantiated falls out of scope. Note: If you instantiate this object at the module level, it will persist for your entire application. Unless this is intended, I would suggest separating your database functions from the non-database functions. Luckily, python has standardized the Database API , so this will work with all of the compliant DBs for you :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200522",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/74014/"
]
} |
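Building on the answer above: relying on `__del__` alone can be fragile, since it only runs when the object is garbage-collected, so a common refinement (my own sketch, using the standard-library sqlite3 module as a stand-in for the question's third-party database) is to make the wrapper a context manager, which makes the close explicit and deterministic.

```python
import sqlite3

class MyDB:
    def __init__(self, path=":memory:"):
        self._conn = sqlite3.connect(path)
        self._cur = self._conn.cursor()

    def query(self, sql, params=()):
        return self._cur.execute(sql, params)

    def close(self):
        self._conn.close()

    # Context-manager protocol: close() runs even if an exception is raised.
    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        self.close()


# Usage: the connection lives exactly as long as the `with` block.
with MyDB() as db:
    db.query("CREATE TABLE movies (name TEXT)")
    db.query("INSERT INTO movies VALUES (?)", ("Ghostbusters",))
    print(db.query("SELECT name FROM movies").fetchall())
```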
200,545 | I've been working as an app developer for a year and a half now (not long I know), and I've just been given my first big project. Needless to say it didn't go very smoothly, so I sought advice from a senior programmer involved in the project about how to approach it. He said that I had drastically been over thinking the task at hand, and that because I had never tackled a project of this scale before I'd been spending too much time over thinking design patterns. In his wise words, he told me "F*ck the future, build for now". Is this a trend programmers typically follow when going about a project like this? For example, if you were asked to do a proof of concept model, is it a typical trend just to mash a workable example out as soon as possible? Edit: In light of the debate this has sparked, I'd like to mention that this situation is fairly extreme: we have very tight deadlines due to factors beyond our control (i.e. the market we're aiming for will lose interest if we don't show them something) and his advice proved to be very effective for this particular task. | Captain Obvious to the Rescue! I'll be Captain Obvious here and say that there's some middle ground to be found. You do want to build for the future and avoid locking yourself into a technological choice or a bad design. But you don't want to spend 3 months designing something that should be simple, or adding extension points for a quick and dirty app that will have a 2 year lifespan and is unlikely to have follow-up projects. It's difficult to find the distinction, because you can't always predict the success of your product and if you'll need to extend it later. Build for Now if... the project is going to get scrapped the project has a short life-span the project should not have extensions the project doesn't have a risk impact value (mostly in terms of image) In general, in-house projects or something built for a customer should be developed for now. Be sure to have straight requirements, and relate to them as needed to know what's needed and what's not. Don't want to spend too much time on anything that's "nice to have." But don't code like a pig either. Leave the General Problem for later, if it may ever be necessary and worth the effort: Build for the Future if... the project will be public the project is a component to be reused the project is a stepping stone for other projects the project will have follow-up projects or service releases with enhancements If you're building for something public, or that's going to be reused in other projects, then you've got a much higher probability that a bad design will come back to haunt you, so you should pay more attention to that. But it's not always guaranteed. Guidelines I'd say adhere to the following principles as best as you can, and you should put yourself in the position of designing efficient, adaptable products: know that YAGNI , KISS , whenever you feel like scratching an itch and think of an addition, write it down. Look back at your project requirements and ask yourself if additions are priorities or not. Ask if they add primary business value or not. I know that I personally tend to overthink and overengineer. It really helps to write ideas down and very often re-think if I need additional features. Often, the answer is no, or, "it would be cool later." Those last ideas are dangerous, because they stay in the back of my head, and I need to force myself not to plan for them. 
The best way to code without overengineering and without blocking yourself for later is to focus on a good minimal design. Break things down nicely as components that you can later extend, but without thinking already about how they may be extended later. You can't predict the future. Just build simple things. Dilemmata Overengineering Is this a trend programmers typically follow when going about a project like this? Hell yeah. It's a known dilemma, and it only shows that you care about the product. If you don't, that's more worrying. There is disagreement about whether or not less is always truly more and if worse is always truly better . You may be a MIT or New Jersey kind of guy . There is no easy answer here. Prototyping / Quick-n-Dirty / Less is More Is it a typical trend just to mash a workable example out as soon as possible? It's a prevalent practice, but it's not how the vast majority of projects are approached. Still, prototyping is a good trend in my opinion, but one with a mean downside. It can be tempting to promote quick and dirty prototypes to actual products, or to use them as the base for actual products under management pressure or time constraints. That's when prototyping can come back to haunt you. There are obvious advantages to prototyping , but also a lot of potential for misuse and abuse (many the exact inverse of the previously listed advantages as an outcome). When to Use Prototyping? There are hints as to the best types of projects to use prototyping : [...] prototyping is most beneficial in systems that will have many interactions with the users. [...] prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit [...] "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human-computer interface design." On the other hand: Systems with little user interaction, such as batch processing or systems that mostly do calculations, benefit little from prototyping. Sometimes, the coding needed to perform the system functions may be too intensive and the potential gains that prototyping could provide are too small. And if you have a green monster around, just make sure to keep a prototype within budget... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200545",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93104/"
]
} |
200,647 | From the "Gang of Four" design patterns, there's the Factory method: class Factory(product)
case product
when a
new A
when b
new B
when c
new C
end
new Factory(a) Why is this more useful than having three classes, a , b , and c and calling them individually? | Because your example is not complicated enough. For such a simple scenario it doesn't even make sense to use an advanced pattern. But if you have to know more than the product to construct A, B or C, and you can't have direct access to that knowledge, then it is useful. Then you are using the factory to act as a knowledge center for producing needed objects. Maybe those objects need a reference to some object X, which the factory can provide, but your code in the place where you want to construct A, B or C can't or shouldn't have access to X. Maybe when you have X you create A and B but if you have Y type then you create C. Also consider that some objects might need 20 dependencies to create; what then?
Going to hunt for those dependencies in a place where they should not be accessible might be problematic. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200647",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28697/"
]
} |
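To put the "knowledge center" point of the answer above into code, here is a small sketch of my own (all names invented for illustration): the factory holds the dependencies X and Y that the calling code can't or shouldn't access, and uses them to build the right product.

```python
class A:
    def __init__(self, x): self.x = x

class B:
    def __init__(self, x): self.x = x

class C:
    def __init__(self, y): self.y = y


class ProductFactory:
    """Knows about X and Y so the calling code doesn't have to."""

    def __init__(self, x, y):
        self._x = x   # e.g. a connection, config, or service handle
        self._y = y

    def create(self, kind):
        if kind in ("a", "b"):
            # X decides the construction details; callers never see X.
            return A(self._x) if kind == "a" else B(self._x)
        if kind == "c":
            return C(self._y)
        raise ValueError(f"unknown product: {kind}")


factory = ProductFactory(x="shared resource", y="other resource")
obj = factory.create("a")   # the call site needs no constructor knowledge
print(type(obj).__name__, obj.x)
```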
200,663 | I noticed that a lot of GitHub accounts only have repositories which are forked from other accounts. In addition, the people who do this usually don't make any contributions to the forked repositories. I've heard of people collecting stamps and seashells, but why would anybody want to collect repositories? Personally, I would only fork a repository if I wanted to make some changes to it. | As you mentioned in your question, people fork repositories when they want to make a change to the code, because you
don't have write access to the original repository (unless you've been added as a collaborator by the owner of the repository). In the forked repository they have write access and can push changes. They may even contribute
back to the original repository using pull requests . I think there are multiple reasons why people fork repositories but don't change them: they might fork a repository which looks cool, simply fork it (because it's easy (only one click)) and want
to make a change later (and then probably forget it/didn't have time to do so) they fork a repository to make a change and then discover that the change is unnecessary and forget to delete
their own repository; they might fork a repository because one of their projects depends on another repository (maybe via submodules) and
they want total control over the repository used as a dependency (the owners of the original repository might decide to move from GitHub to Google Code etc.); they might simply forget to push the commits | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200663",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/75147/"
]
} |
200,664 | I have a question regarding generalization. I know when it needs to be done, but is it worth having an extra class for 1 field? E.g.: StatisticsCustomer has fields Customer customer and int projectCount; StatisticsCountry has fields Country country and int projectCount. What is best in this situation? Creating an extra class just for projectCount, or keeping it twice in the two classes? | As you mentioned in your question, people fork repositories when they want to make a change to the code, because you
don't have write access to the original repository (unless you've been added as a collaborator by the owner of the repository). In the forked repository they have write access and can push changes. They may even contribute
back to the original repository using pull requests . I think there are multiple reasons why people fork repositories but don't change them: they might fork a repository which looks cool, simply fork it (because it's easy (only one click)) and want
to make a change later (and then probably forget it/didn't have time to do so) they fork a repository to make a change and then discover that the change is unnecessary and forget to delete
their own repository; they might fork a repository because one of their projects depends on another repository (maybe via submodules) and
they want total control over the repository used as a dependency (the owners of the original repository might decide to move from GitHub to Google Code etc.); they might simply forget to push the commits | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200664",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91471/"
]
} |
200,681 | I have an open-source project currently under the MIT license. I have received a request from a company to use my code for their commercial project without having to give any attribution or credit. To be honest, when I released the code, my sole intention was only to help a fellow programmer, and I didn't really think about whether I would be credited. Choosing the license was just one of the steps I had to do to set up the project on CodePlex. On one hand, I feel honored and appreciate that they actually bothered to ask; on the other hand, I feel that if I just allowed them to do so without any cost, it may just destroy the spirit of open source. What are the typical things I or other code owners can do or request from the company to make it a fair trade? Should I even allow it? I am thinking of asking the company to write an official letter of intent that I will sign, just to make it more formal, and also to request a donation to a project/charity of my choice or buy something on my wishlist as compensation (not very expensive). Will that be too much? | Many open source applications have closed source licensing options for just this scenario. How much you charge them is dependent on: the size of the company (how much they can afford); what they're going to do with it (if they're stealing it or just using it); what they expect you to do (support/updates/extensions? what contractual level?); and a ton of other things. Do you want to avoid the tax implications of income? Do you hate the company? Etc. In general, I would treat it as a business deal while knowing that you've got all the leverage. The mindset is "I'd like to promote open source, so I'm charging you $5k (or whatever other high quote seems appropriate for that company and your project) - do you really not just want to give me attribution?" | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200681",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9291/"
]
} |
200,709 | There are some (quite rare) cases where there is a risk of: reusing a variable which is not intended to be reused (see example 1), or using a variable instead of another, semantically close (see example 2). Example 1: var data = this.InitializeData();
if (this.IsConsistent(data, this.state))
{
this.ETL.Process(data); // Alters original data in a way it couldn't be used any longer.
}
// ...
foreach (var flow in data.Flows)
{
// This shouldn't happen: given that ETL possibly altered the contents of `data`, it is
// no longer reliable to use `data.Flows`.
} Example 2: var userSettingsFile = SettingsFiles.LoadForUser();
var appSettingsFile = SettingsFiles.LoadForApp();
if (someCondition)
{
userSettingsFile.Destroy();
}
userSettingsFile.ParseAndApply(); // There is a mistake here: `userSettingsFile` was maybe
// destroyed. It's `appSettingsFile` which should have
// been used instead. This risk can be mitigated by introducing a scope: Example 1: // There is no `foreach`, `if` or anything like this before `{`.
{
var data = this.InitializeData();
if (this.IsConsistent(data, this.state))
{
this.ETL.Process(data);
}
}
// ...
// A few lines later, we can't use `data.Flows`, because it doesn't exist in this scope. Example 2: {
var userSettingsFile = SettingsFiles.LoadForUser();
if (someCondition)
{
userSettingsFile.Destroy();
}
}
{
var appSettingsFile = SettingsFiles.LoadForApp();
// `userSettingsFile` is out of scope. There is no risk to use it instead of
// `appSettingsFile`.
} Does it look wrong? Would you avoid such syntax? Is it difficult to understand by beginners? | If your function is so long that you cannot recognize any unwanted side effects or illegal reuse of variables any more, then it is time to split it up in smaller functions - which makes an internal scope pointless. To back this up by some personal experience: some years ago I inherited a C++ legacy project with ~150K lines of code, and it contained a few methods using exactly this technique. And guess what - all of those methods were too long. As we refactored most of that code, the methods became smaller and smaller, and I am pretty sure there are no remaining "internal scope" methods any more; they are simply not needed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200709",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
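Here is a short sketch of the refactoring the answer above recommends, in Python rather than the C# of the question (my own illustration; initialize_data, is_consistent and ETL are stand-ins for the question's helpers): instead of an artificial inner scope, the block becomes a small function, so `data` simply never exists in the caller's scope.

```python
def initialize_data():               # stand-ins for the question's helpers
    return {"flows": [1, 2, 3]}

def is_consistent(data, state):
    return True

class ETL:
    def process(self, data):
        data.clear()                 # "alters data so it can't be used any longer"


def process_initial_data(etl, state):
    """The old '{ ... }' block, now a function: its own scope by construction."""
    data = initialize_data()
    if is_consistent(data, state):
        etl.process(data)            # fine: `data` dies at the end of this function


def run(etl, state):
    process_initial_data(etl, state)
    # Later code cannot accidentally iterate data["flows"] here:
    # the name `data` does not exist in this scope at all.


run(ETL(), state=None)
```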
200,790 | My question has to do with JavaScript security. Imagine an authentication system where you're using a JavaScript framework like Backbone or AngularJS, and you need secure endpoints. That's not a problem, as the server always has the last word and will check if you're authorized to do what you want. But what if you need a little security without involving the server? Is that possible? For example, say you've got a client-side routing system and you want a particular route to be protected for logged-in users. So you ping the server asking if you're allowed to visit protected routes and you go on. The problem is that when you ping the server, you store the response in a variable, so the next time you go to a private route, it will check whether you're already logged in (no ping to the server), and depending on the response it will go or not. How easy is it for a user to modify that variable and get access? My security (and JavaScript) knowledge isn't great. But if a variable is not in global scope and is in the private part of a module pattern which only has getters but not setters, even in that case, can you hack the thing out? | It's simple: any security mechanism that relies on the client to do only what you tell it to do can be compromised when an attacker has control over the client. You can have security checks on the client, but only to effectively act as a "cache" (to avoid making an expensive round-trip to the server if the client already knows that the answer will be "no"). If you want to keep information from a set of users, make sure that those users' client never gets that information. If you send that "secret data" together with instructions "but please don't display it," it'll become trivial to disable the code that checks that request. As you see, this answer doesn't really mention any JavaScript/browser specifics. That's because this concept is the same, no matter what your client is. It doesn't really matter whether it's a fat client (traditional client/server app), an old-school web application, or a single-page app with extensive client-side JavaScript. Once your data leaves the server, you must assume that an attacker has full access to it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200790",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59749/"
]
} |
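A minimal sketch of the "client check is only a cache" idea from the answer above, written framework-free in C# rather than against a specific web stack; the Session and ProtectedRouteHandler types and the "member" role are invented for illustration. The server re-checks authorization on every request, regardless of what the client-side router believed.
using System;
using System.Collections.Generic;
class Session
{
    public HashSet<string> Roles { get; } = new HashSet<string>();
}
class ProtectedRouteHandler
{
    // The authoritative decision happens here; the client-side check only
    // saves a round trip when the answer would be "no" anyway.
    public string Handle(Session session)
    {
        if (!session.Roles.Contains("member"))
        {
            return "403 Forbidden";
        }
        return "200 OK: secret member content"; // never sent to unauthorized clients
    }
}
class Program
{
    static void Main()
    {
        var handler = new ProtectedRouteHandler();
        Console.WriteLine(handler.Handle(new Session())); // 403 Forbidden
        var member = new Session();
        member.Roles.Add("member");
        Console.WriteLine(handler.Handle(member));        // 200 OK
    }
}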
200,812 | Me and a friend of mine were discussing yesterday about differences between writing a large C++ software and understanding it as a new recruit. Is it possible that since a software gets done one line at a time and this process resembles how we (humans) learn things and build a thing on top of another one, writing a large software is actually easier than reading it and understanding what it does (stepping through the code helps but you need to remember multiple classes/source files together you don't even know what they've been written for, multithreaded code adds malus points)? This sounds weird at first but after we thought a bit it seemed reasonable | Based on my experience, I would rank the following activities in order from easiest to hardest. Reading good code Writing bad code Writing good code Reading bad code The above ranking leads to 2 conclusions While it is easier to write code than reading bad code, it is easier to read good code than write your own code Writing bad code is easier than writing good code, but writing bad code sets you up for reading bad code, which is the hardest thing of all. Especially since bad code is read more than it is written. Of course, good code and bad code are broad generalizations. I recommend Code Complete and Clean Code for more details about good code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200812",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92404/"
]
} |
200,821 | As the title says, I would like to write a HTTP server. My question is this, how do I do this? I know this sounds VERY general and too "high level", but there is a method to my madness. An answer to this question should be, I believe, language agnostic; meaning, no matter what language I use (e.g., C, C++, Java, etc.) the answer should be the same. I have a general idea of how this is supposed to work: Open a socket on port 80. Wait for a client to make a request. Read the request (i.e., this person wants page "contact-us.html"). Find and read "contact-us.html". Send an html header, then send the content of "contact-us.html" Done Like I said, I believe this is the process, but I am not 100% sure. This leads me to the heart of my question. How or where does a person find out this information? What if I didn't want to write just an HTTP server, what if I wanted to write an FTP server, a chat server, an image viewer, etc.? How does a person find out the exact steps/process needed to create a working HTTP server? A co-worker told me about the html header, so I would have NEVER know this without him. He also said something about handing off each request to a new thread. Is there some big book of how things work? Is there some manual of what it takes to be an HTTP server? I tried googling "how does a HTTP server work", but the only answers I could find were geared towards your average Joe, and not towards a person wanting to program a HTTP server. | Use the RFC2616 , Luke! You read the RFC 2616 on HTTP/1.1 , and you go for it. That was actually a project in my 3rd year in engineering school, and that's pretty much the project description. Tools Your tools are: basic networking stuff (socket management, binding, understand addresses), good understanding of I/O streams, a lot patience to get some shady parts of the RFC (mime-types are fun). Fun Considerations Things to consider for extra fun: plug-in architecture to add CGI / mod support, configuration files for, well, many things, lots of experimentation on how to optimize transfers, lots of experimentation to see how to manage load in terms of CPU and memory, and to pick a dispatch model (big fat even loop, single accept dispatch, multi-thread, multi-process, etc...). Have fun. It's a very cool thing to look at. Other (Simpler) Suggestions FTP client/server (mostly RFC959 but there are older versions and also some extensions) IRC client/server (mostly RFC1459 , but there are extensions) They're way easier to tackle first, and their RFCs are a lot easier to digest (well, the IRC one has some odd parts, but the FTP one is pretty clear). Language Choice Of course, some implementation details will be highly dependant on the language and stack you use to implement it. I approached all that in C, but I'm sure it can be fun just as well in other languages (ok, maybe not as much fun, but still fun). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200821",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93349/"
]
} |
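To make the accept/read/respond loop from the question and answer above concrete, here is a hedged, minimal sketch using .NET's TcpListener. It ignores almost all of RFC 2616 (no real request parsing, no MIME types, one request per connection, no error handling) and is a toy, not production code; the port and response body are arbitrary choices.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
class TinyHttpServer
{
    static void Main()
    {
        var listener = new TcpListener(IPAddress.Loopback, 8080); // 1. open a socket
        listener.Start();
        Console.WriteLine("Listening on http://127.0.0.1:8080/ ...");
        while (true)
        {
            using (var client = listener.AcceptTcpClient())        // 2. wait for a client
            using (var stream = client.GetStream())
            {
                var buffer = new byte[4096];
                int read = stream.Read(buffer, 0, buffer.Length);  // 3. read the request
                string requestLine = Encoding.ASCII.GetString(buffer, 0, read)
                                             .Split('\r', '\n')[0];
                Console.WriteLine("Got: " + requestLine);
                string body = "<html><body>Hello from a toy server</body></html>";
                string response =                                   // 4. header, then body
                    "HTTP/1.1 200 OK\r\n" +
                    "Content-Type: text/html\r\n" +
                    "Content-Length: " + Encoding.ASCII.GetByteCount(body) + "\r\n" +
                    "Connection: close\r\n\r\n" + body;
                byte[] bytes = Encoding.ASCII.GetBytes(response);
                stream.Write(bytes, 0, bytes.Length);               // 5. send and close
            }
        }
    }
}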
200,838 | I have often wondered why strict parsing was not chosen when creating HTML. For most of the Internet history, browsers have accepted any kind of markup and tried their best to parse it. The process degrades performance, permits people to write gibberish, and makes it difficult discontinue obsolete features. Is there a specific reason why HTML is not strictly parsed? | The reason is simple: At the time of the first graphical browsers, NCSA Mosiac and later Netscape Navigator, almost all HTML was written by hand. The browser authors (Netscape was built by ex-Mosaic folks) recognized quickly that refusing to render incorrect HTML would be held against them by the users, and voila! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200838",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/75581/"
]
} |
200,840 | I have a application which requires me to save data to a database not located on the users computer. Which approach is the best to save and access data for this scenario? Normally I would use Entity Framework and MS Sql Server but this seems not to a options for me with windows store apps. | The reason is simple: At the time of the first graphical browsers, NCSA Mosiac and later Netscape Navigator, almost all HTML was written by hand. The browser authors (Netscape was built by ex-Mosaic folks) recognized quickly that refusing to render incorrect HTML would be held against them by the users, and voila! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200840",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/83321/"
]
} |
200,945 | I've been pondering for a while why Java and C# (and I'm sure other languages) default to reference equality for == . In the programming I do (which certainly is only a small subset of programming problems), I almost always want logical equality when comparing objects instead of reference equality. I was trying to think of why both of these languages went this route instead of inverting it and having == be logical equality and using .ReferenceEquals() for reference equality. Obviously using reference equality is very simple to implement and it gives very consistent behavior, but it doesn't seem like it fits well with most of the programming practices I see today. I don't wish to seem ignorant of the issues with trying to implement a logical comparison, and that it has to be implemented in every class. I also realize that these languages were designed a long time ago, but the general question stands. Is there some major benefit of defaulting to this that I am simply missing, or does it seem reasonable that the default behavior should be logical equality, and defaulting back to reference equality it a logical equality doesn't exist for the class? | C# does it because Java did. Java did because Java does not support operator overloading. Since value equality must be redefined for each class, it could not be an operator, but instead had to be a method. IMO this was a poor decision. It is much easier to both write and read a == b than a.equals(b) , and much more natural for programmers with C or C++ experience, but a == b is almost always wrong. Bugs from the use of == where .equals was required have wasted countless thousands of programmer hours. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/200945",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93454/"
]
} |
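A small C# illustration of the point made in the answer above: for classes, == defaults to reference equality unless the operator is overloaded, while logical equality has to be written by hand in Equals. The Point class here is invented for the example.
using System;
class Point
{
    public int X { get; }
    public int Y { get; }
    public Point(int x, int y) { X = x; Y = y; }
    // Logical ("value") equality must be implemented explicitly...
    public override bool Equals(object obj) =>
        obj is Point p && p.X == X && p.Y == Y;
    public override int GetHashCode() => X * 31 + Y;
}
class Program
{
    static void Main()
    {
        var a = new Point(1, 2);
        var b = new Point(1, 2);
        Console.WriteLine(a == b);                 // False: == compares references here
        Console.WriteLine(a.Equals(b));            // True: overridden logical equality
        Console.WriteLine(ReferenceEquals(a, b));  // False, and says so explicitly
    }
}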
201,020 | I'm developing a program using a library made by another programmer (he works in the same company). Recently I discovered a leak in the library, which happens under certain network conditions after a few hours of running. I filed a bug with description of conditions to make this leak happen. That developer answered that "this is not enough", "it's not his responsibility to reproduce bugs" and I have to create unit test to reproduce this bug, otherwise he doesn't do anything. Is he right? What I can do in this situation? Creating unit test is impossible, because it depends on some random network timings. | Is he right is probably a question that can't really be answered without knowing your company. However, he certainly isn't being very helpful. I would raise the bug with him (which you've done), if it is causing an issue with your project then I would raise it as a blocker with your project manager and make it very clear that you've raised the bug with appropriate person but it's going to impact your project if it isn't fixed promptly. I would also go over and talk to the developer and explain why it's infeasible to create unit tests but you'd be happy to show him it on your machine (assuming that's feasible?). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201020",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31310/"
]
} |
201,104 | I've seen two styles of using sizeof for memory-related operations (such as in memset or malloc ): sizeof(type) , and sizeof variable or sizeof(variable) Which one would you prefer, or would you use a mix of the two styles, and when would you use each style? What are the pros and cons of each style and when you use them? As an example, I can see the following pair of situations where one style helps and the other doesn't: When you get the pointer indirection wrong: type *var;
...
memset(var, 0, sizeof var); /* oops */
When the type changes:
new_type var; /* changed from old_type to new_type */
...
memset(&var, 0, sizeof(old_type)); /* oops */ | I prefer sizeof(variable) over sizeof(type) . Consider:
int a1;
float a2;
memset(&a1,0,sizeof(a1));
memset(&a2,0,sizeof(a2));
vs.
int a1;
float a2;
memset(&a1,0,sizeof(int));
memset(&a2,0,sizeof(float));
In the first case, it's easy to verify that the right sizes are being passed to memset . In the second case, you need to constantly review top and bottom sections to make sure you are consistent. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201104",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/81527/"
]
} |
201,139 | I have just been reading through some of the white papers & examples from Microsoft "Roslyn" and the concept seems very interesting. From what I can tell, it opens up the black box that is the compiler and provides an interface that we can use to get information and metrics about code written in Visual Studio. Roslyn also appears to have the ability to "script" code and compile/execute it on the fly (similar to the CodeDom) but I have only come across limited uses for that type of functionality in my experience. Whilst the code analysis & metrics element is an interesting space...it is something that has been around for a very long time and there are numerous providers that have already invested a lot of money into code analysis & refactoring tools (e.g. ReSharper, CodeRush, nCover, etc) and they do a pretty good job of it! Why would any company go out of their way to implement something that can be provided at a fraction of a cost through buying a license for one of the existing tools? Maybe I have missed some key functionality of the Roslyn project that places it outside the domain of the mentioned tools... | Roslyn also appears to have the ability to "script" code and compile/execute it on the fly (similar to the CodeDom) but I have only come across limited uses for that type of functionality in my experience. On-the-fly compilation and execution is the key benefit of Roslyn. I think you may be undervaluing the benefit of this feature because you have never come across a use case in your experience where it really shines. And this makes sense; the need for dynamic compilation is probably a niche feature, but having it does provide for some powerful applications that would be much more difficult without it. Here are a couple examples off the top of my head where dynamic compilation would be quite useful. There are other ways to accomplish all of these things, but Roslyn makes them easier. Having plugin files that are loaded at runtime, compiled, and included in the execution of the "parent" application. Creating a DSL which is then translated to C# at runtime and compiled using Roslyn. Creating an programmer-oriented application which takes C#, analyzes it, translates it, etc. Comparing two chunks of code for their differences after compilation, as opposed to just "surface" differences such as whitespace. This is known as Semantic Diff . So, to sum up, you may never find a use for Roslyn, depending on what software you spend your time writing. However, there are plenty of use cases where Roslyn brings a lot to the table. None of the tools you mention provide this feature. Nor could they based on their architecture and purpose. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201139",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93631/"
]
} |
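As a hedged example of the "compile and execute on the fly" point in the answer above: the Roslyn scripting API can evaluate C# source text at runtime. This assumes the Microsoft.CodeAnalysis.CSharp.Scripting NuGet package is available; package names and API shapes have shifted across Roslyn versions, so treat it as a sketch rather than a definitive recipe.
using System;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis.CSharp.Scripting; // assumption: scripting package installed
class ScriptingDemo
{
    static async Task Main()
    {
        // Source arrives as a plain string (a plugin file, a translated DSL, ...).
        string code = "1 + 2 * 3";
        // Roslyn compiles and executes it in-process and returns the result.
        int result = await CSharpScript.EvaluateAsync<int>(code);
        Console.WriteLine(result); // 7
    }
}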
201,175 | Let's say I have a boolean condition a AND b OR c AND d and I'm using a language where AND has a higher order of operation precedence than OR . I could write this line of code: If (a AND b) OR (c AND d) Then ... But really, that's equivalent to: If a AND b OR c AND d Then ... Are there compelling arguments in favor or against including the extraneous parentheses? Does practical experience suggest that including them significantly improves readability? Or is wanting them a sign that a developer really needs to sit down and become conversant in the basics of their language? | Good developers strive to write code that is clear and correct . Parentheses in conditionals, even if they are not strictly required, help with both. As for clarity , think of parentheses like comments in code: they aren't strictly necessary, and in theory a competent developer should be able to figure out code without them. And yet, these cues are exceedingly helpful, because: They reduce the work required to understand the code. They provide confirmation of the developer's intent. Furthermore, extra parentheses, just like indentations, whitespace, and other style standards, help visually organize the code in a logical way. As for correctness , conditions without parentheses are a recipe for silly mistakes. When they happen, they can be bugs that are hard to find--because often an incorrect condition will behave correctly most of the time, and only occasionally fail. And even if you get it right, the next person to work on your code may not, either adding errors to the expression or misunderstanding your logic and thus adding errors elsewhere (as LarsH rightly points out). I always use parentheses for expressions that combine and and or (and also for arithmetic operations with similar precedence issues). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201175",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44862/"
]
} |
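A tiny C# demonstration of the correctness point in the answer above: because && binds tighter than ||, an unparenthesized condition has a fixed grouping, and a reader who guesses the wrong grouping reads a different rule with a different result. The variable names and values are invented for the example.
using System;
class PrecedenceDemo
{
    static void Main()
    {
        bool isAdmin = false, isOwner = true, isEditor = true, isPublished = true;
        // Intended rule: (admin AND owner) OR (editor AND published)
        bool intended = (isAdmin && isOwner) || (isEditor && isPublished);
        // Without parentheses this means the same thing, because && binds tighter...
        bool implicitGrouping = isAdmin && isOwner || isEditor && isPublished;
        // ...but a reader who assumes left-to-right grouping reads it as this,
        // which is a different rule:
        bool misread = isAdmin && (isOwner || isEditor) && isPublished;
        Console.WriteLine(intended);         // True
        Console.WriteLine(implicitGrouping); // True
        Console.WriteLine(misread);          // False
    }
}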
201,388 | It appears in not just one language that comments can't be nested. Do you have a good solution for this problem? One workaround in C/C++ and Java is to only use the single-line comment but it becomes impossible then to comment out a larger block. I'm facing something like this: </li><!--
<li><!-- Save --> So I must manually go through and edit the comments. Can you advise how we should handle this, in many languages? I'm not sure, but maybe Python has a solution for this with its ''' syntax, which might be able to include a # comment?
| The best solution is, obviously, to just not nest your comments. Nested comments are usually a sign that you are using comments wrong. The most common example is commented-out code that contains comments itself, and the fix is to remove the code instead of commenting it out. That said, many programming languages have more than one type of comment syntax, and you can use this fact to nest at least one level deep. For example, in Java:
/* This is commented out!
Foo.bar.baz();
// And now for something completely different...
Quux.runWith(theMoney);
*/ Also, in many languages, at least one type of comment is kind-of-nestable; in C-like languages, line comments inside line comments are ignored: // some_commented_out(code);
// // This is a comment inside the comment!
// // Still inside the nested comment.
// some_more_code_in(outer_comment); Most IDEs support commenting entire blocks of code with line comments in one action, and they handle this kind of commenting style correctly. The same example in Python: # some_commented_out(code)
# # This is a comment inside the comment!
# # Still inside the nested comment.
# some_more_code_in(outer_comment) Often, coding standards for a particular project have rules about which comment style to use when; a common convention is to use block comments ( /* */ ) for method and class documentation, and inline comments ( // ) for remarks inside method bodies and such, e.g.: /**
* Helper class to store Foo objects inside a bar.
*/
public class Foobar {
/**
* Stores a Foo in this Foobar's bar, unless the bar already contains
* an equivalent Foo.
* Returns the number of Foos added (always 0 or 1).
*/
public int storeFoo(Foo foo) {
// Don't add a foo we already have!
if (this.bar.contains(foo)) {
return 0;
}
// OK, we don't have this foo yet, so we'll add it.
this.bar.append(foo);
return 1;
}
} With such a style, it is unlikely that you'll ever need to nest /* */ comments (if you have to temporarily disable entire methods or classes, renaming them work just as nicely, if not better); and // comments do nest, at least with a little help from your IDE. Finally, to disable code, you have other options in many programming languages; for example, in C, you can leverage the preprocessor: this_is(activated);
#if 0
this_is(!activated);
/* Comments inside this block don't really nest, they are simply removed
along with the rest of the block! */
#endif In dynamic languages, you can often just use regular if statements instead: <?php
if (0) {
// This should never run...
some_stuff_that_should_never_run();
} However, unlike the CPP example, this strategy requires the source file as a whole to be syntactically valid, so it's by far not as flexible. And finally, there are at least some languages that allow for nested comments. In case you're interested, wikipedia has a nice comparison chart . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201388",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12893/"
]
} |
201,397 | As far as I understand, the adapter pattern is creating a wrapper object for our real object of interest, simply one more level of indirection, which provides flexibility. The flexibility is in that if the real object's interface is changed, then we change the wrapper interface pointing at the real object, leaving the client-side exposed interface unchanged. The proxy pattern is the same, with the difference that every proxy wrapper provides only a coherent subset of the real object's functionality. Why would this be useful when we strive to make "one class for one purpose" is beyond me. Have I gotten this correctly? | Not entirely. The primary purpose of the adapter pattern is to change the interface of class/library A to the expectations of client B. The typical implementation is a wrapper class or set of classes.
The purpose is not to facilitate future interface changes, but current interface incompatibilities. The proxy pattern also uses wrapper classes, but for a different purpose. The purpose of the proxy pattern is to create a stand-in for a real resource. Reasons for using a proxy can be The real resource resides on a remote computer (the proxy facilitates the interaction with the remote resource) The real resource is expensive to create (the proxy ensures the cost is not incurred unless/until really needed) The most important thing is that a proxy provides a drop-in replacement for the real resource it is a stand-in for, so it must provide the same interface. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201397",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54268/"
]
} |
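A compact C# sketch contrasting the two patterns described in the answer above; all interface and class names are invented for the example. The adapter changes the shape of an existing incompatible API to what the client expects, while the proxy keeps the same interface and only controls access (here, deferring creation of an expensive resource).
using System;
// --- Adapter: the client wants ILogger, the legacy class has a different shape. ---
interface ILogger { void Log(string message); }
class LegacyTraceWriter            // existing, incompatible API we cannot change
{
    public void WriteLine(int severity, string text) =>
        Console.WriteLine("[" + severity + "] " + text);
}
class TraceWriterAdapter : ILogger // adapts LegacyTraceWriter to ILogger
{
    private readonly LegacyTraceWriter writer = new LegacyTraceWriter();
    public void Log(string message) => writer.WriteLine(1, message);
}
// --- Proxy: same interface as the real thing, but creation is deferred. ---
interface IReport { string Render(); }
class ExpensiveReport : IReport    // imagine this hits a database or remote service
{
    public ExpensiveReport() => Console.WriteLine("ExpensiveReport built");
    public string Render() => "report body";
}
class LazyReportProxy : IReport    // drop-in stand-in with the same interface
{
    private ExpensiveReport real;
    public string Render()
    {
        if (real == null) real = new ExpensiveReport(); // cost paid only when needed
        return real.Render();
    }
}
class Program
{
    static void Main()
    {
        ILogger logger = new TraceWriterAdapter();
        logger.Log("adapted call");
        IReport report = new LazyReportProxy();
        Console.WriteLine("proxy created, nothing expensive yet");
        Console.WriteLine(report.Render()); // the real report is built only here
    }
}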
201,487 | I want to develop a mobile application. I recently read an article on Telerik Forum , which compares among three types of mobile application and I don't know which one should I select to begin with. Here is an image describing the pros and cons of different mobile design choices To decide between these design choices, I would like to better understand the pros and cons of each architecture choice listed in the diagram. What are the pros and cons of each architecture approach? | I'm a mobile developer who has spent a great deal of time considering this issue. Why do you ask? Most likely, you hope to reduce app development costs by: Using existing HTML5/Javascript development skills Targetting multiple platforms without writing multiple apps from scratch Not having to maintain multiple codebases in the future Reasons may also include: HTML5/Javascript development perceived as "easier" than native platform development Avoiding payment of developer programme registration fees Avoiding appstore content restrictions (gambling etc) Avoiding purchase of development hardware (e.g. Mac for iPhone development) Definitions Let's establish exactly what we mean by each of the three approaches mentioned: Native An app that is installed on a device, usually from its app store (although can sometimes be sideloaded). For the purposes of this discussion, the UI of a native app does not usually consist of a full-screen webview only. Mobile web This can in fact be any web page at all, however for this discussion let's consider a single-page web app which attempts to imitate the look and feel of a native app. It is not a native app, it runs in the device's browser. Hybrid Hybrid app instanceof native app. Most people probably understand a hybrid app to be a single-page mobile web app (again most likely imitating the look and feel of a native app), but packaged as a native app with access to native services (à la Phonegap). However there is in fact a spectrum between the Phonegap model and fully native which I'll come to later. Mobile web Technical restrictions Let's first list some technical restrictions on mobile web apps which could in themselves be deal-breakers depending on what you're doing: HTML/canvas UI only No access to certain device events and services (these are widely documented) Cannot be listed in app stores (affecting discoverability) Can become full screen and have homescreen icon on iOS, however this is an unusual and unfamiliar experience for most users If you can live with all the above, then read on for more about the challenges of single-page native-style web apps. However this section wouldn't be complete without reference to the FT app. Financial Times The FT web app is a famous example of this style of app. Here's an interesting feature from the UK Guardian newspaper about it. It's certainly a remarkable feat of engineering. Note that it is currently still only available on iOS only -- this tells me that they are finding that resolving the challenges of advanced cross-browser development to be very difficult indeed. Single-page native-style web apps This section applies to both mobile web and Phonegap-style apps. Native-style look and feel within a web app is usually achieved with a framework such as Sencha Touch which provides a suite of UI components for you to use. Such frameworks are fine for very simple UIs. However they lack flexibility. 
You won't be able to implement any native app design using Sencha, you'll need to adapt your design to what the framework can accommodate. The main way in which these frameworks suffer is in trying to emulate the platform's own UI intricacies. That nice little bouncing effect you get when you've scrolled to the end of a list on the iPhone? Your framework needs to emulate that in Javascript. It's impossible to recreate it completely, it will be prone to slow down, and your users will be stuck in the the "uncanny valley" of an app which looks sort-of like native, but clearly isn't, and a non-technical user won't be able to put their finger on exactly why. The "HTML5/Javascript is easy" myth Device fragmentation within web browsers is rife, and when you get beyond the most basic HTML and CSS you'll notice things don't quite work as you'd expect. You might find yourself spending more time solving fiddly UI issues than you'd have saved doing it natively twice over. If you're going native, note that native app webviews are not the same as device browsers and have their own fragmentation issues. And as your app gets more functionally complex, you'll find that you need more than basic jquery skills to keep your Javascript clean and maintainable. That said, it's perfectly possible to create simple, functional apps pretty quickly with this approach. But it's pretty obvious when an app is doing it. Further along the spectrum So, we want a better UX than Phonegap-style apps can offer, without writing absolutely everything from scratch multiple times. What can we do? Share non-UI code There are a range of techniques available for sharing business logic across multiple native platforms. Google have launched J2ObjC which translates Java into Objective-C. With careful factoring of code, a Java library could be used on both Android and iOS. Libraries such as Calatrava and Kirin allow codebases written in Javascript (and therefore anything that can be compiled to Javascript) to be manipulated from native code. Disclaimer: I work for Future Platforms who created Kirin; we have had great success using it on iOS with Javascript generated from Java with GWT, with Java code also being run natively on Android. Use webviews... where appropriate Full screen webviews have a lot of work to do to be able to imitate screen transitions and bounce effects. But a webview inside native app chrome can be indistinguishable from native. There are standard and well documented methods for native apps and webviews to communicate. Lists and tables can work particularly well when done in this way, however text entry is an example of something best handled natively (for full control over keyboard). In summary The approach that's right for you depends on how complicated your app is, and what level of UI polish you'll be satisfied with. My motto: use webviews wherever you can, but make sure your users can't tell . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201487",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/26930/"
]
} |
201,507 | Normally I do this: if (Animal is Dog)
{
Dog d = (Dog)Animal;
// d. etc..
}
else if (Animal is Cat)
{
Cat c = (Cat)Animal;
// c. ....
} Is this a good way or are there better ways to implement this code above (performance, ...) ? Should it be like this?: Dog d = Animal as Dog;
if (d != null)
{
// d. etc..
}
else if (Animal is Cat)
{
Cat c = (Cat)Animal;
// c. ....
} Or maybe like this?: Dog d = Animal as Dog;
Cat c;
if (d != null)
{
// d. etc..
}
else if ((c = Animal as Cat) != null)
{
// c. ....
} Or maybe something else? | Ideally, you should aim for just: Animal.DoWhateverYouNeedDone(); That is, implement the polymorphic behavior (if it's a dog, do a dog thing, if it's a cat, do a cat thing, etc.) in the classes themselves. So instead of this: if (Animal is Dog) {
var d = Animal as Dog;
d.Bark();
}
if (Animal is Cat) {
var c = Animal as Cat;
c.Meow();
} Do this: Animal.Speak();
// where:
public class Dog: Animal {
public void Speak() {
this.Bark();
}
// ---- snip -----
}
public class Cat: Animal {
public void Speak() {
this.Meow();
}
// ---- snip -----
} | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201507",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49065/"
]
} |
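A small addition to the answer above: when you genuinely cannot push the behavior into the classes (for example, the types come from a library you don't own), pattern matching at least folds the type test and the cast into one safe step. This sketch assumes C# 7 or later and invents minimal Animal/Dog/Cat types; polymorphism (Animal.Speak()) remains the better design when you own the hierarchy.
using System;
abstract class Animal { }
class Dog : Animal { public void Bark() => Console.WriteLine("Woof"); }
class Cat : Animal { public void Meow() => Console.WriteLine("Meow"); }
class Program
{
    // The type test and cast happen together, so there is no double check
    // and no chance of casting to the wrong type.
    static void MakeNoise(Animal animal)
    {
        switch (animal)
        {
            case Dog d:
                d.Bark();
                break;
            case Cat c:
                c.Meow();
                break;
            default:
                Console.WriteLine("Silence");
                break;
        }
    }
    static void Main()
    {
        MakeNoise(new Dog());
        MakeNoise(new Cat());
    }
}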
201,657 | I am a young programmer (finished computer science university but still under a year of working in the industry) and I recently got a job working on some C code for a decent size web service. Looking at the code the only places I saw comments were when people were stashing their old code. Function and variable names are similarly informative most of the time - futex_up(&ws->g->conv_lock[conv->id%SPIN]); . Confronting a senior programmer about the situation and explaining that adding comments and meaningful names would make the code more maintainable and readable in the future, I got this reply: In general, I hate comments. Most of them time, like the case you
mention with the return value, people use comments to get around
reading the code. The comments don't say anything other than what the
guy thought the code does at the time he put in the comment (which is
often prior to his last edit). If you put in comments, people won't
read the code as much. Then bugs don't get caught, and people don't
understand the quirks, bottlenecks, etc. of the system. That's
provided the comments are actually updated with code changes, which is
of course totally unguaranteed. I want to force people to read the code. I hate debuggers for a
similar reason. They are too convenient and allow you to step through
dirty code with watches and breakpoints and find the so-called
problem, when the real problem was that there are bugs in the code
because the code has not been simplified enough. If we didn't have the
debugger, we would refuse to read ugly code and say, I have to clean
this up just so I can see what it is doing. By the time you are done
cleaning up, half the time the bug just goes away. While what he wrote goes against a lot I have been taught in the university, it does make some sense. However, since experience in the studies sometimes doesn't work in real life, I would like to get an opinion of people more vetted in code. Is the approach of avoiding commenting code to make people actually read the code and understand what is going on make sense in a medium-sized coding environment (one that can be reasonably read in whole by every person working on it within a month or two), or is it a recipe for a long-term disaster? What are the advantages and disadvantages of the approach? | Well written code should be sufficiently self-documenting that you don't need any comments explaining what the code does, because it is obvious from reading the code itself.
This implies also that all functions and variables have descriptive names, although it might be needed to learn the lingo of the problem and solution domains. This does not mean that well-written code should be completely without comments, because the important comments are those explaining why a particular, non-trivial, function/block/etc is implemented as it is and why not a different solution has been chosen. The problem with comments describing the code itself is that they tend to get outdated, because the code gets changed to fix a bug, but the comments remain untouched. This is far less the case with comments describing the reasoning for coming to the current solution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201657",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37473/"
]
} |
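A short C# illustration of the distinction drawn in the answer above between comments that restate what the code does and comments that record why it does it; the discount rule, figures and names are invented for the example.
using System;
class PriceCalculator
{
    public decimal ApplyLoyaltyDiscount(decimal price)
    {
        // Bad comment (restates the code and goes stale silently):
        // multiply the price by 0.9
        //
        // Useful comment (records the "why" the code cannot express):
        // 10% was agreed with marketing in 2013 to match a competitor's loyalty
        // scheme; revisit this rate if that campaign ends.
        return price * 0.9m;
    }
}
class Program
{
    static void Main()
    {
        Console.WriteLine(new PriceCalculator().ApplyLoyaltyDiscount(100m)); // 90.0
    }
}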
201,701 | I wonder if object initialization has some performance gain in an ASP.NET website. I have an office mate who told me that object initialization is much more readable and faster than a constructor. But another office mate disagrees with that and says that it always depends.
For example, I am doing this:
using (var dataAccess = new DatabaseAccess { ConnectionString = ConfigurationManager.ConnectionStrings["test"].ConnectionString, Query = query, IsStoredProc = true })
{
//some code here
} Is it better if I do it this way?
using (var dataAccess = new DatabaseAccess())
{
dataAccess.ConnectionString = ConfigurationManager.ConnectionStrings["test"].ConnectionString;
dataAccess.Query = query;
dataAccess.IsStoredProc = true;
} Thanks in advance, guys!
--- EDIT ---
Is this somehow much better:
using (
var dataAccess = new DatabaseAccess
{
ConnectionString = ConfigurationManager.ConnectionStrings["test"].ConnectionString,
Query = query,
IsStoredProc = true
}
)
{
//some code here
} | Stop! You have a bigger problem than readability. It looks like you don't understand how object initializers work. Let's create a disposable class which traces its execution:
public class Demo : IDisposable
{
private string hello;
public Demo()
{
Debug.WriteLine("The parameterless constructor was called.");
}
public Demo(string hello)
: this()
{
Debug.WriteLine("The constructor with parameters was called.");
this.Hello = hello;
}
public string Hello
{
get
{
Debug.WriteLine("The getter was called.");
return this.hello;
}
set
{
Debug.WriteLine("The setter was called.");
this.hello = value;
throw new NotImplementedException();
}
}
public void Dispose()
{
Debug.WriteLine("The disposer was called.");
}
}
Now, let's write an app which initializes this class in three different ways (sorry, it's too big; I'll explain it in more detail later):
public static class Program
{
// --- Interesting stuff ---
private static void WithPropertiesInitialization()
{
using (var demo = new Demo())
{
demo.Hello = "Hello World!";
// Do nothing.
}
}
private static void WithObjectInitializer()
{
using (var demo = new Demo { Hello = "Hello World!" })
{
// Do nothing.
}
}
private static void WithConstructor()
{
using (var demo = new Demo("Hello World!"))
{
// Do nothing.
}
}
// --- Not so interesting stuff ---
public static void Main()
{
Debug.Listeners.Add(new TextWriterTraceListener(Console.Out));
Console.WriteLine("With constructor:");
try
{
WithConstructor();
}
catch (NotImplementedException)
{
}
Console.WriteLine();
Console.WriteLine("With object initializer:");
try
{
WithObjectInitializer();
}
catch (NotImplementedException)
{
}
Console.WriteLine();
Console.WriteLine("With properties initialization:");
try
{
WithPropertiesInitialization();
}
catch (NotImplementedException)
{
}
Console.WriteLine();
Console.WriteLine("Press any key to continue...");
Console.ReadKey(true);
}
} There are three types of initialization here: through the constructor, with the value to set passed as an argument; through an object initializer; and through a parameterless constructor, with the property assigned later. Before continuing to read the answer, ask yourself two questions: do those initialization techniques result in the same console output? Is there a bug? Those techniques are totally different, and there are effectively two bugs: only one of the three is correct.
1. Constructor
using (var demo = new Demo("Hello World!"))
{
// Do nothing.
} There is nothing wrong with this usage itself; it's the actual implementation of the constructor which is wrong. Constructors shouldn't throw exceptions , and one of the reasons is that it may cause inconsistent state when the object is used inside a using block: it's not fully initialized yet, so it will not be disposed, causing disaster if the unfinished initialization allocated some resources which should have been disposed.
using (var a = new A())
{
DoWork(a);
} is basically the same as: var a = new A();
try
{
DoWork(a);
}
finally
{
a.Dispose();
} As you can see, there is no try around the constructor, so an exception within it would terminate the program before reaching a.Dispose() : in all cases, we can't invoke a method of an object which is not initialized yet. The setter of Hello throws an exception; the constructor should have been using the backing field directly, instead of the property. 2. Object initializer using (var demo = new Demo { Hello = "Hello World!" })
{
// Do nothing.
} is itself incorrect, and if you had used Code Analysis even at the Minimum Recommended Rules level, you would have seen warning CA2000, telling you that: object '<>g__initLocal0' is not disposed along all exception paths. What is that?! It is an indication that somewhere, an object is created and not disposed. To avoid concurrency issues, when you use an object initializer, a temporary instance is created:
this.Pet = new Gremlin
{
Title = "Mogwai",
Age = 14,
} is transformed by the compiler into: Gremlin someCrypticName = new Gremlin();
someCrypticName.Title = "Mogwai";
someCrypticName.Age = 14;
this.Pet = someCrypticName;
Indeed, when you use an object initializer inside a using (...): the temporary object is created, the properties are assigned, the temporary object is assigned to the variable used inside the using, and then the using block runs. Any exception thrown when assigning properties will prevent disposing the object, since the assignment happens before the try/finally .
{
demo.Hello = "Hello World!";
// Do nothing.
} is correct. Since the constructor is not expected to throw an exception (and if it will, it would be the fault of the person who implemented the constructor, not the person who is using it within the using ), all exceptions will occur within the try block, resulting in the demo object being properly disposed. Conclusion: using should contain either: A parameterless constructor, A constructor which takes parameters, given that constructors are not expected to throw exceptions. using should not contain an object initializer: properties should be initialized within the using { ... } , in order to guarantee the proper disposal of objects. Readability Now that you've understood that the two pieces of code in your question are totally difference, since one is correct, and another one is buggy, let's talk readability. Until C# 4.0, object initialization was slightly more readable than a similar constructor: var product = new Product("PowerEdge R620 rack server", 1, "Intel® Xeon® E5-2620", 16, 1929, true); Here, I have no idea what is "1" or "16" or "1929". A person who is not at all familiar with hardware wouldn't know what "Intel® Xeon® E5-2620" is. No one would find what is true . Instead: var product = new Product
{
Title = "PowerEdge R620 rack server",
RackUnits = 1,
ProcessorName = "Intel® Xeon® E5-2620",
MemoryInGB = 16,
PriceInDollars = 1929,
IsAvailable = true,
} is much more readable. C# 4.0 introduced named arguments, and since then, a constructor can be as readable as an object initializer, while being more compact. For example, title is explicit enough, and we would consider that anyone will understand the names of processors, so we can write:
var product = new Product(
"PowerEdge R620 rack server",
"Intel® Xeon® E5-2620",
rackUnits: 1,
memoryInGB: 16,
priceInDollars: 1929,
isAvailable: true);
I have an office mate who told me that object initialization is much more readable and faster than a constructor. We have seen that with named arguments, a constructor can be as readable as an object initializer. What about performance? Above, I've explained that an object initializer this.a = new A { B = b } is translated by the compiler into:
A temp = new A();
temp.B = b;
this.a = temp;
If the only thing the constructor is doing is to assign fields or properties from arguments, then the performance would be approximately the same. Unless your office mate can give precise profiling results and benchmarks proving his point, his statement about the performance should be considered wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201701",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93926/"
]
} |
201,724 | I'm working on an old codebase which is... not perfect , in an environment which isn't either. It's not the worst codebase I've seen in my life, but there are still lots of issues: zero unit tests; methods with thousand+ lines of code; misunderstanding of basic object oriented principles; etc. It hurts to maintain the code. Every time I have to debug a thousand lines of a badly written method with variables reused all over, I'm totally lost. Some modifications or refactoring I've done introduced bugs in other places of the application. Lacking any documentation, tests, or an observable architecture and combined with badly named methods, I feel that I fill up all of my available working memory. There is no room left over for all the other things I have to remember in order to understand the code I should modify. Constant interruptions at the workplace disturb me and slow me down. I can't remember more than two or three tasks at a time without a bug tracking system, and I forget all of them over the weekend. My colleagues don't seem to have similar issues. They manage to debug badly written methods much faster than me. They introduce fewer bugs than I do when changing the codebase. They seem to remember very well all they need to in order to change the code, even when it requires reading thousands of lines of code in twenty different files. They don't appear to be disturbed by emails, ringing phones, people talking all around, and other people asking them questions. They don't want to use the bug tracking system that we already have since we use TFS. They prefer to just remember every task they should do. Why does this happen? Is it a particular skill developers acquire when working with badly written code for a long time? Does my relative lack of experience with bad code contribute to these problems / feelings? Do I have issues with my memory? | Yes, it is normal for structured people to be affected by unstructured code/environments. Your colleagues probably are better filtering out all the background noise. As a migraine sufferer I know my ability to filter out my environment greatly drops when a migraine is coming on. People vary. The same is true for the code, your colleagues have probably learned to filter out the "code noise" that comes from multiple levels of abstraction in a single method and have become adept at "chunking" the code into larger areas of functionality. It simply takes time to adapt to a code base such as the one you describe. Your colleagues probably have had much more time to grow into it and possibly have picked up on conventions, patterns and constructs that don't jump out on "code base novices". There may be more structure to the chaos than you can imagine. Talk to your colleagues, ask them to pair with you some time and pick their brains on how they approach solving one of the bugs assigned to you. When they ask you to open unit X, Y or Z, ask them why that one, what about it is telling them it may be relevant, etc. Being lost in a thousand-lines method is normal. Attack it with a good folding editor and adding comments to chunk the various parts into functions and/or procedures without actually doing so. Printing the stuff and using an old fashioned highlighter can also help. Refactoring without the safety net of unit tests is shooting yourself in the foot. Don't. Just don't. Nobody is requiring you to keep everything in memory. 
If your colleagues don't want or need a bug system, just write the task assigned to you in your own todo list and write notes when/after talking with someone about the details of your tasks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201724",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
201,726 | We've all had this experience. You go to someone who you know has the answer to a question, ask that person the question and they answer with the typical response: "why?" You explain why you need to know, and they attempt to solve your problem. It takes time, arm twisting and patience to steer the conversation back to the original question and just get that darn answer. Why do programmers constantly do this, and why does the behavior get worse the more senior the programmer becomes? How can you ask a programmer a question in a way most efficient in extracting the answer to the original question? | Why do developers ask "why" when someone asks them how to implement a solution? Because it requires more knowledge to evaluate whether a solution is appropriate than it does to actually implement the solution. It's very difficult to believe someone when they say, "I don't know how to do this, but I know for sure it's what I need to do." Programmers constantly insist on probing deeper because people constantly insist on asking the wrong questions. Yes, sometimes it eventually comes back around to your original question, but not always. As an analogy, imagine if someone walked up to a mechanic and asked him how to replace a car battery. Usually if you're qualified to diagnose a defective battery, you're qualified to change one, so the mechanic will ask how you know it needs replacing. He knows if he doesn't do this, and it turns out you don't need a battery, then you'll keep coming back asking more and more questions until eventually you figure out that you have to turn the lights off when the engine's not running. By asking you up front, it feels like he's wasting your time, but really he knows from experience that he's potentially saving both of you a lot more time. So, if you want to avoid the line of questioning, you need to convince him up front that you know what you're talking about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201726",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52871/"
]
} |
201,728 | I was studying Mediator Pattern and I noticed that to use this pattern you should register the Colleagues into Mediator from the Colleague concrete classes. for that we have to make an instance of Mediator inside Colleague concrete classes which violates IoC and you can not inject the Colleagues into Mediator (as far as I know! whether it is right or wrong) Questions: 1- Am I right about the thing I said? 2- Shall we always use IoC at all or there are some times you can forget about it? 3- If we always have to use IoC, can we say Mediator is an anti-Pattern? | Why do developers ask "why" when someone asks them how to implement a solution? Because it requires more knowledge to evaluate whether a solution is appropriate than it does to actually implement the solution. It's very difficult to believe someone when they say, "I don't know how to do this, but I know for sure it's what I need to do." Programmers constantly insist on probing deeper because people constantly insist on asking the wrong questions. Yes, sometimes it eventually comes back around to your original question, but not always. As an analogy, imagine if someone walked up to a mechanic and asked him how to replace a car battery. Usually if you're qualified to diagnose a defective battery, you're qualified to change one, so the mechanic will ask how you know it needs replacing. He knows if he doesn't do this, and it turns out you don't need a battery, then you'll keep coming back asking more and more questions until eventually you figure out that you have to turn the lights off when the engine's not running. By asking you up front, it feels like he's wasting your time, but really he knows from experience that he's potentially saving both of you a lot more time. So, if you want to avoid the line of questioning, you need to convince him up front that you know what you're talking about. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201728",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/93914/"
]
} |
201,777 | I am a bit puzzled on whether or not to include break after the last case, often default .
switch (type) {
case 'product':
// Do behavior
break;
default:
// Do default behavior
break; // Is it considered to be needed?
} break s sole purpose is in my understanding to stop the code from running through the rest of the switch -case. Is it then considered more logical to have a break last due to consistency or skip having it due to the break applying no functional use whatsoever? Both are logical in different ways in my opinion. This could to a certain degree be compared with ending a .php file with ?> . I never end with ?> mostly due to the risk of outputting blank spaces, but one could argue that it would be the logical thing to end the file with. | break isn't technically needed after the last alternative (which, mind you, doesn't have to be default : it is perfectly legal, and sometimes even useful to put the default branch first); whether your code falls through the end of the switch statement or breaks out at the end of its last branch has the same result. However, I'd still end every branch, including the last one, with a return or break statement, for three reasons: Refactorability. If all your branches end with break or return , you can reorder them without changing the meaning. This makes it less likely for such a reordering to introduce a regression. Consistency, and Least Surprise. Consistency says your branches should end consistently, unless they are actually different in meaning. The Principle of Least Surprise dictates that similar things should look similar. Ending the last branch of a switch block exactly like the preceding ones fulfills both, which makes for easier reading and understanding. If you leave out the explicit break , the last branch will be optically different (which is especially important for quick scanning), and in order to see that it's really not different, the reader has to descend to the nitty-gritty level of reading individual statements. Protecting yourself. If you make a habit of ending all your switch branches with a break , it will become automatic after a while, and you'll be less likely to accidentally forget it where it does matter. Training yourself to expect the break at the end of every branch also helps detecting missing break statements, which is great for debugging and troubleshooting. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201777",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49129/"
]
} |
201,796 | I'm currently working as an intern at a very large, non-software development company. The position I applied for wasn't specifically a development position, but the team that hired me wanted a CS major to help try to develop some internal projects for them. I've been here for four weeks and the initial bewilderment is starting to wear off. However, I'm the only CS major in the entire office -- no one on my team, in the building, or even in the neighboring locations has any background in software development. The best I've got is a database manager, and their department is too busy to support me with my projects. My teammates are helping me learn how they do their jobs (which is important for me to do my job), but there's no one to help my do my job i.e. development. The projects they've given me are larger in scope than anything I've done in school. That, combined with the fact that I'm working alone, trying to develop applications from scratch with no form of guidance or even clearly defined goals, has me very worried about my ability to be successful. I barely know where I should be begin, and now I have probably less than two months remaining. I feel like I should be learning the software development process, but right now it's like I'm feeling my way through the dark. This is especially troubling for me since I'm not very confident with my development skills in the first place. I've been researching and teaching myself, but I'm only getting bits and pieces. They have high expectations from me, but I'm unsure of my ability to deliver. Obviously, I need to sit down and talk with my managers about the position I'm in and I intend to do that as soon as possible (they're often travelling and out of the office). How should I deal with this? This internship will be over before I know it, and I don't want to leave with nothing to show for my time here. They don't want that either, and they're always available to help me but without knowledge of programming there's only so much they can do. I'm afraid to tell them that I'm incapable of producing what they want. How should I relate this to them? I see the engineering interns getting help from other engineers, learning how to do their jobs, and I feel like I'm just sitting here biding my time. Any advice on how to rectify my situation would be greatly appreciated. Update I appreciate all the helpful feedback from everyone, it's helped put my mind at ease. The first thing I did was meet with my managers and supervisors. We discussed what was expected from my time here. They understand that I don't have that much time as an intern, and this helped put a scope on the type of framework we want to accomplish, which will allow future interns or employees to hopefully build off of what I leave. I also addressed my concerns regarding my capabilities with the allotted time, which they understood and expected. I received a call from the database administrator at another location - my manager talked with his supervisor and they're going to support my project, which will now give me a resource to use so I'm not sitting around with no idea what I'm doing. That's only one half though. Out of all the possible projects, we narrowed it down to the two most important to work on. As for my other project, as someone mentioned I'm essentially the lead software architect which is a unique situation for an intern. If things go at least semi-successfully, I think I'll have gained a good deal of knowledge and experience that can help me with future employers. 
For now, I think I have some solid footing to start researching and developing my projects. Thanks again for the responses from everyone! | I've got some bad news for you bhamlin: You aren't an intern. Rather, you are an unpaid/cheap employee. An internship is a unpaid or low-paid position where you can practice your newly aquired skills in a safe, (usually) relaxed environment, and get a chance to observe 'real' professionals in your field doing 'real' work, while getting feedback on the pieces that they allow you to modify (usually under supervision and/or approval). What your company wanted, was not in fact, an intern, but rather a free/cheap source of software development. This is fairly common , in my opinion. I live in a college town, and at my last place of work, managers were often heard saying "Hey, the IT dept is too busy to do Project X, lets see if we can get some interns in from the University to write it for free/cheap!" We would grumble and groan and gnash our teeth to the heavens, but this was the reality of the place, and I could understand why the managers would suggest such a thing. Sadly, the results weren't great: the software delivered by the interns was never cohesive/scaleable/clean/etc (but to be honest, neither was the stuff the IT dept put out anyway...) Its up to you what you do. My advice is to just develop whatever you can (sometimes pressure is a great motivator), BUT you should also plan to taking on a 'real' internship elsewhere when this one is over if possible. So don't blame yourself, but what you walked into was NOT a real internship. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201796",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94119/"
]
} |
201,972 | In the book The Pragmatic Programmer , the writers mention the programming by coincidence concept. It explains what it is, why it is caused, what are the dangers you may encounter and it compares with a landmine field in a war. Do you ever watch old black-and-white war movies? The weary soldier advances cautiously out of the brush. There's a clearing ahead: are there any land mines, or is it safe to cross? There aren't any indications that it's a minefield—no signs, barbed wire, or craters. The soldier pokes the ground ahead of him with his bayonet and winces, expecting an explosion. There isn't one. So he proceeds painstakingly through the field for a while, prodding and poking as he goes. Eventually, convinced that the field is safe, he straightens up and marches proudly forward, only to be blown to pieces. The soldier's initial probes for mines revealed nothing, but this was merely lucky. He was led to a false conclusion—with disastrous results. As developers, we also work in minefields. There are hundreds of traps just waiting to catch us each day. Remembering the soldier's tale, we should be wary of drawing false conclusions. We should avoid programming by coincidence—relying on luck and accidental successes—in favor of programming deliberately... But I am not really satisfied on the way they describe the "how to overcome it" issue. Yeah, you have to think ahead before writing the code, but how to practice that? The only thing I can think is by adding features to existing Open source projects, where you must have knowledge on both the "what I am doing now" and the "How the other pieces of code are working", and it is not that applicable when you are writing your own projects. | You don't have to think ahead, merely be very clear on what was done, and be very clear on what you are doing right now. Subroutines should say what they do, do what they say, and not have hidden dependencies. Then someone calling them can more easily reason about what they will do. Avoid global state. (Variables, singletons, etc.) The more state that you have to have in your head to understand what things do, the harder it is to understand what is supposed to happen and find the edge cases. Write unit tests. Unit tests are great for capturing the actual behavior of code that you just wrote, rather than the ideal behavior that you are hoping to find. Shorten your edit/compile/test cycle. When you add a large chunk of code and test poorly, then the odds are that it will behave differently than you think. Then you "fix" it with some random change, and you got the right answer for the moment, but have no idea how it actually happened. You're now programming by coincidence. But when you add 5 lines and then test, the odds that you got the right answer because it works like you think it works are much better. I can say from experience that 5 chunks of 10 lines each, individually tested, is a very different beast than 50 lines of code tested all at once. Refactor ruthlessly. Many times I've spotted a refactor that will make my code somewhat simpler but take a bunch of work that I didn't want to do. After I began deliberately tackling those refactors as a priority, I have found that it usually pays off for itself inside of a month. But note the key, the refactors that I focus on are ones that make day to day life simpler, and not ones that meet some arbitrary aesthetic of better or more general. Those refactors I've learned to be much more cautious with. None of these things require advance planning. 
But they all make it easier to understand your existing code, and therefore make it easy to implement your next little chunk in a deliberate way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/201972",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12443/"
]
} |
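To make the "write unit tests, work in small steps" advice in the answer above a little more concrete, here is a minimal sketch in Java with JUnit 5. The class under test ( PriceCalculator ) and its rounding rule are hypothetical, invented purely for illustration - the point is the rhythm of pinning down a few freshly written lines before moving on, not the arithmetic itself.
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

// Hypothetical class under test: a tiny, freshly written chunk of logic.
class PriceCalculator {
    // Applies a percentage discount and rounds to the nearest cent.
    static long discountedCents(long cents, int discountPercent) {
        return Math.round(cents * (100 - discountPercent) / 100.0);
    }
}

class PriceCalculatorTest {
    // Each test records the actual behaviour of the lines just added, so a later
    // "random fix" that silently changes that behaviour fails loudly here.
    @Test
    void tenPercentOffRoundsToNearestCent() {
        assertEquals(899, PriceCalculator.discountedCents(999, 10));
    }

    @Test
    void zeroDiscountLeavesPriceUnchanged() {
        assertEquals(999, PriceCalculator.discountedCents(999, 0));
    }
}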
202,003 | I recently had a phone interview with a company. After that phone interview, I was told to complete a short programming assignment (a small program; shouldn't take more than three hours). I'm only directly instructed to complete the assignment and turn in the code. I was given complete freedom to use any language I wished and was not told exactly how to turn in the code. Immediately I planned on throwing it on Github, writing a test suite for it, using Travis-CI (free continuous integration for public Github repositories) to run the test suites, and using CMake to build the Linux makefiles for Travis-CI. That way, not only can I demonstrate that I understand how to use Git, CMake, Travis-CI, and how to write tests, but I can also simply link to the Travis-CI page so they can see the output of the tests. I figured that'd make it a tiny bit more convenient for the interviewer. Since I know those technologies well, it would add essentially no time to the assignment. However, I'm a bit worried that doing all this for a relatively simple task would look bad. Although it wouldn't add much more time at all for me, I don't want them thinking I spend too much time on things that should be simple. | As an interviewer I would be happy to see the knowledge of the process of developing software demonstrated by this approach; as opposed to just the writing of the code. In particular, having a test suite for even very simple problems would be a good sign (even FizzBuzz level). I've seen candidates submit solutions that didn't even solve the problem and a simple set of tests would have shown them this. Also, having the commit history allows me to get an idea of the thought process that the candidate has used to get to the solution. On the other hand, I have known people to be rejected by some companies at an early stage of the process for over-engineering. However, in most cases, this has been due to over-engineering of the solution not necessarily the processes used. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202003",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/18948/"
]
} |
202,031 | I take a good deal of criticism from other programmers due to my use of full proper casing for all my variables. For example, your typical programmer will use employeeCount for a variable name, but I use EmployeeCount . I use full proper casing for everything , be it a void method, return method, variable, property, or constant. I even follow this convention in Javascript. That last one really rustles people's jimmies. The typical reason given as to why I shouldn't follow this "non-standard" casing convention is because full proper case should be reserved for properties and void methods. Local variable and methods that return a value should have the first word in lowercase like int employeeCount = getEmployeeCount() . However, I don't understand why. When I question this, it seems that I just get an arbitrary answer of that's the standard . Whatever the answer is, it usually always boils down to That's just the way it is and I don't question it. I just follow it. . Arbitrary answers are never good enough for me. Ever since my early days of programming Excel 97 macros with the Office IDE, I've never needed a casing convention to tell me whether or not something is a local variable or property. This is because I've always used a very intuitive naming convention. For example, GetNuggetCount() clearly suggests a method that goes somewhere an gets a count of all the nuggets. SetNuggetCount(x) suggests that you are assigning a new value to the count of nuggets. NuggetCount all by itself suggests a property or local variable that is simply holding a value. To that last one, one may be tempted to say, "Ah ha! That is the question. Property or variable? WHICH IS IT?" To that, I'd reply with, "Does it really matter?" So here's the tl;dr;: What are the objective, logical, non-arbitrary reasons to use lowercase for the first word in your variable or return method? Edit: For MainMa Replace this code with the first code sample in your answer and see how well your argument holds up: public void ComputeMetrics()
{
const int MaxSnapshots = 20;
var Snapshots = this.LiveMeasurements.IsEnabled ?
this.GrabSnapshots(MaxSnapshots, this.cache) :
this.LoadFromMemoryStorage();
if (!Snapshots.Any())
{
this.Report(LogMessage.SnapshotsAreEmpty);
return;
}
var MeasurementCount = Measurements.Count();
this.Chart.Initialize((count + 1) * 2);
foreach (var s in Snapshots)
{
this.Chart.AppendSnapshot(s);
}
} | That naming convention is often used when people want to be able to give a variable the same name as its type. For example: Employee employee; Some languages even enforce that capitalization. This prevents having to use annoying variable names like MyEmployee , CurrentEmployee , EmployeeVar , etc. You can always tell if something is a type or a variable, just from the capitalization. That prevents confusion in situations like: employee.function(); // instance method
Employee.function(); // static method Also, in English, nouns are not generally capitalized, so you can't really claim your capitalization is "proper." So what does that have to do with your situation? You obviously have no trouble reading your own code, but by keeping things as consistent as possible, you reduce the mental workload of anyone else needing to read your code. In coding as in writing, you adjust your style to match the readers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202031",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27695/"
]
} |
202,038 | Are there any benefits to removing unused using statements in a VS project (such as can be done using Resharper), or will VS automatically take care of that when building/deploying? | There aren't any performance benefits, if that's what you mean. All references in an assembly are fully qualified; the compiler merely uses the references you provide in your code to fully qualify identifiers, so the only impact of unused references in your source code is a slight decrease in readability (why is this reference here?), and a trivial increase in compile time. To put it another way, the generated IL is exactly the same whether you remove the unused references or not. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202038",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/65774/"
]
} |
202,167 | There seems to be an aversion to writing even the most basic documentation. Our project READMEs are relatively bare. There aren't even updated lists of dependencies in the docs. Is there something I'm unaware of in the industry that makes programmers dislike writing documentation? I can type out paragraphs of docs if needed, so why are others so averse to it? More importantly, how do I convince them that writing docs will save us time and frustration in the future? | There are two main factors in my experience: Deadlines Most companies are so date driven that QA, tech debt, and actual design are cut just so the project manager doesn't look bad or to hit some absurd over-promised client deadline. In this environment where even functional quality is cut, then a long-term investment like documentation has little chance. Change A relatively new best practice for developers is to de-emphasize comments. The idea is that keeping information in two places (the code [including tests] and the comments around the code) leads to a lot of overhead in keeping them in sync for little benefit. "If your code is so hard to read that you need comments, wouldn't time be better spent cleaning up the code?" I personally won't even look at comments any more. Code can't lie. Documentation follows the same vein. With the widespread adoption of agile, people acknowledge that requirements change regularly. With the widespread use of refactoring, the organization of code will shift pretty substantially. Why spend the time documenting all of this stuff that's bound to change? Code and tests should do a good enough job doing that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202167",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
202,383 | In a nutshell, should we design death into our programs, processes, and threads at a low level, for the good of the overall system? Failures happen. Processes die. We plan for disaster and occasionally recover from it. But we rarely design and implement unpredictable program death. We hope that our services' uptimes are as long as we care to keep them running. A macro-example of this concept is Netflix's Chaos Monkey , which randomly terminates AWS instances in some scenarios. They claim that this has helped them discover problems and build more redundant systems. What I'm talking about is lower level. The idea is for traditionally long-running processes to randomly exit. This should force redundancy into the design and ultimately produce more resilient systems. Does this concept already have a name? Is it already being used in the industry? EDIT Based on the comments and answers, I'm afraid I wasn't clear in my question. For clarity: yes, I do mean randomly, yes, I do mean in production, and no, not just for testing. To explain, I'd like to draw an analogy to multicellular organisms. In nature, organisms consist of many cells. The cells fork themselves to create redundancy, and they eventually die. But there should always be enough cells of the right kinds for the organism to function. This highly redundant system also facilitates healing when injured. The cells die so the organism lives. Incorporating random death into a program would force the greater system to adopt redundancy strategies to remain viable. Would these same strategies help the system remain stable in the face of other kinds of unpredictable failure? And, if anyone has tried this, what is it called? I'd like to read more about it if it already exists. | No. We should design proper bad-path handling, and design test cases (and other process improvements) to validate that programs handle these exceptional conditions well. Stuff like Chaos Monkey can be part of that, but as soon as you make "must randomly crash" a requirement actual random crashes become things testers cannot file as bugs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202383",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/54852/"
]
} |
202,432 | What does "branching is free" mean in Git? I hear this a lot whenever Git is mentioned in comparison to other version control systems. I haven't had the opportunity (?) to deal with others ( SVN , etc.), so how is branching "expensive" in others? | The claim that "branching is free in git" is a simplification of facts because it isn't "free" per se. Looking under the hood a more correct claim would be to say that branching is redonkulously cheap instead, because branches are basically references to commits . I define "cheapness" here as the less overhead the cheaper. Lets dig in to why Git is so "cheap" by examining what kinds of overhead it has: How are branches implemented in git? The git repository, .git mostly consists of directories with files that contain metadata that git uses. Whenever you create a branch in git, with e.g. git branch {name_of_branch} , a few things happen: A reference is created to the local branch at: .git/refs/heads/{name_of_branch} A history log is created for the local branch at: .git/logs/refs/heads/{name_of_branch} That's basically it, a couple of text files are created. If you open the reference as a textfile the contents will be the id-sha of the commit the branch is pointing at. Note that branching does not require you to make any commits as they're another kind of object. Both branches and commits are "first-class citizens" in git and one way is to think about the branch-to-commit relationship as an aggregation rather than a composition. If you remove a branch, the commits will still exist as "dangling". If you accidentally removed a branch you can always try to find the commit with git-lost-found or git-fsck --lost-found and create a branch on the sha-id you find left hanging (and as long as git hasn't done any garbage collection yet). So how does git keep track of which branch you're working on? The answer is with the .git/HEAD file, that looks sort of like this if you're on the master branch. ref: refs/heads/master Switching branches simply changes the reference in the .git/HEAD file, and then proceeds to change the contents of your workspace with the ones defined in the commit. How does this compare in other version control systems? In Subversion , branches are virtual directories in the repository . So the easiest way to branch is to do it remotely, with a one-liner svn copy {trunk-url} {branch-url} -m "Branched it!" . What SVN will do is the following: Copy the source directory, e.g. trunk , to to a target directory, Commit the changes to finalize the copy action. You will want to do this action remotely on the server, because making that copy locally is a linear-time operation, with files being copied and symlinked. This is a very slow operation, whereas doing it on the server is a constant time operation. Note that even when performing the branch on the sever, subversion requires a commit when branching while git does not, which is a key difference. That is one kind of overhead that makes SVN marginally less cheap than Git. The command for switching branches in SVN , i.e. svn switch , is really the svn update in disguise. Thanks to the virtual directory concept the command is a bit more flexible in svn than in git. Sub directories in your workspace can be switched out to mirror another repository url. The closest thing would be to use git-submodule but using that is semantically quite different from branching. 
Unfortunately this is also a design decision that makes switching a bit slower in SVN than in Git, as it has to check, for every workspace directory, which remote URL it is mirroring. In my experience, Git is quicker to switch branches than SVN. SVN's branching comes with a cost, as it copies files and always needs to be made publicly available. In git, as explained above, branches are "just references" and can be kept in your local repository and published at your discretion. In my experience, however, SVN is still remarkably cheaper and more performant than e.g. ClearCase. It's only a bummer that SVN is not decentralized. You can have multiple repositories mirrored from some source repo, but syncing divergent changes across multiple SVN repositories is not possible, as SVN does not have unique identifiers for commits (git has hashed identifiers that are based on the contents of the commit). The reason I personally started using git over SVN, though, is that initializing a repository is remarkably easier and cheaper in git. Conceptually, in terms of software configuration management, each divergent copy of a project (clone, fork, workspace or whatever) is a "branch", and given this terminology creating a new copy in SVN is not as cheap as in Git, where the latter has branches "built in". As another example, in Mercurial , branching started out a bit differently: as a DVCS, creating/destroying named branches required separate commits. Mercurial developers later implemented bookmarks to mimic git's branching model, though in Mercurial terminology heads are called tips and branches are bookmarks. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202432",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91451/"
]
} |
202,477 | As programmers, I feel that our goal is to provide good abstractions of the given domain model and business logic. But where should this abstraction stop? How do we make the trade-off between abstraction and all its benefits (flexibility, ease of change, etc.) and ease of understanding the code and all its benefits? I believe I tend to write overly abstracted code and I don't know how good that is; I often tend to write it like it is some kind of micro-framework, which consists of two parts: Micro-modules which are hooked up in the micro-framework: these modules are easy to understand, develop and maintain as single units. This code basically represents the code that actually does the functional stuff described in the requirements. Connecting code; now here, I believe, lies the problem. This code tends to be complicated because it is sometimes very abstracted and is hard to understand at the beginning; this arises from the fact that it is only pure abstraction, the real business logic being performed in the code described in 1; for this reason this code is not expected to be changed once tested. Is this a good approach to programming? That is, having changing code fragmented into many modules that are very easy to understand, and non-changing code that is very complex from the abstraction POV? Should all the code be uniformly complex (that is, code 1 more complex and interlinked and code 2 simpler), so that anybody looking through it can understand it in a reasonable amount of time but change is expensive, or is the solution presented above good, where "changing code" is very easy to understand, debug and change, and "linking code" is kind of difficult? Note: this is not about code readability! Both the code at 1 and at 2 is readable, but the code at 2 comes with more complex abstractions while the code at 1 comes with simple abstractions. | The very first words of The C++ Programming Language, 4th edition : All problems in computer science
can be solved by another level of indirection,
except for the problem of too many layers of indirection.
– David J. Wheeler (David Wheeler was my thesis advisor. The quote without the important last line is sometimes called "The first law of Computer Science.") | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202477",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/57792/"
]
} |
202,490 | Aside from the garbage collector, what are some other features in Java that make it unsuitable for real-time programming? On the net, whenever Java vs C++ is discussed with regards to real-time programming, it is always the garbage collector that is mentioned. Is there anything else? | There are two additional items I can remember off-hand: JIT compilation Threading implementation In terms of real time, predictability of performance is probably the most important factor; that's why an unpredictable GC cycle makes Java unsuitable for real-time. JIT offers improved performance, but kicks in at some point after the program is running, taking some resources and changing the execution speeds of the system. It can also happen again at a later stage, if the VM believes it can do a "better" job at that time. As far as threading: I don't quite remember at this point if this is part of the language design, or just a very common implementation, but Java usually provides no tools to precisely control thread execution; for example, while there are 10 "priorities" specified for threads, there's no requirement that the VM actually considers these priorities. Operators for stopping and switching threads are also either not defined, or not rigidly adhered to by the system. There are several implementations of JSR 1: Real-time Specification for Java - a spec that was approved in 1998.
This spec addresses as many as possible of the issues that make standard Java unsuitable for real-time. As of maybe 5 years ago, Sun (now Oracle) had an RTSJ VM (that never had a name, AFAIK); IBM had WebSphere Real Time; and JamaicaVM was a free(?), platform-independent solution. Googling those today doesn't bring up much. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202490",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14157/"
]
} |
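To illustrate the threading point in the answer above: the Java sketch below asks for different priorities on two threads, but the language treats these only as hints, so the actual scheduling (and the order of the output) remains platform-dependent - exactly the kind of unpredictability that rules standard Java out for hard real-time work. The busy-work loop is made up for the example.
public class PriorityHintDemo {
    public static void main(String[] args) throws InterruptedException {
        Runnable busyWork = () -> {
            long n = 0;
            for (int i = 0; i < 50_000_000; i++) { n += i; }
            System.out.println(Thread.currentThread().getName()
                    + " finished (priority " + Thread.currentThread().getPriority() + ")");
        };

        Thread low = new Thread(busyWork, "low");
        Thread high = new Thread(busyWork, "high");

        // Only hints: the JVM and the OS scheduler are free to ignore them.
        low.setPriority(Thread.MIN_PRIORITY);
        high.setPriority(Thread.MAX_PRIORITY);

        low.start();
        high.start();
        low.join();
        high.join();
    }
}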
202,571 | At a recent job interview, I couldn't answer a question about SOLID -- beyond providing the basic meaning of the various principles. It really bugs me. I have done a couple of days' worth of digging around and have yet to come up with a satisfactory summary. The interview question was: If you were to look at a .Net project that I told you strictly followed SOLID principles, what would you expect to see in terms of the project and code structure? I floundered around a bit, didn't really answer the question, and then bombed out. How could I have better handled this question? | S = Single Responsibility Principle So I'd expect to see a well organised folder/file structure & object hierarchy. Each class/piece of functionality should be named so that its functionality is very obvious, and it should only contain logic to perform that task. If you saw huge manager classes with thousands of lines of code, that would be a sign that single responsibility wasn't being followed. O = Open/closed Principle This is basically the idea that new functionality should be added through new classes that have a minimum of impact on, and require minimal modification of, existing functionality. I'd expect to see lots of use of object inheritance, sub-typing, interfaces and abstract classes to separate out the design of a piece of functionality from the actual implementation, allowing others to come along and implement other versions alongside without affecting the original. L = Liskov substitution principle This has to do with the ability to treat sub-types as their parent type. This comes out of the box in C# if you are implementing a proper inherited object hierarchy. I'd expect to see code treating common objects as their base type and calling methods on the base/abstract classes rather than instantiating and working on the sub-types themselves. I = Interface Segregation Principle This is similar to SRP. Basically, you define smaller subsets of functionality as interfaces and work with those to keep your system decoupled (e.g. a FileManager might have the single responsibility of dealing with file I/O, but it could implement an IFileReader and an IFileWriter which contain the specific method definitions for the reading and writing of files). D = Dependency Inversion Principle Again, this relates to keeping a system decoupled. Perhaps you'd be on the lookout for a .NET dependency injection library being used in the solution, such as Unity or Ninject, or a service-locator system such as AutoFacServiceLocator . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202571",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94779/"
]
} |
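To give the I and D of the answer above a concrete shape, here is a minimal sketch (written in Java here, but the structure is the same in C#); the interface and class names mirror the FileManager example from the answer and are otherwise invented for illustration.
// Interface segregation: small, focused contracts instead of one fat file-manager interface.
interface IFileReader {
    String read(String path);
}

interface IFileWriter {
    void write(String path, String contents);
}

// Dependency inversion: the high-level class depends on the abstraction and receives it
// from outside (a DI container such as Unity/Ninject, or a hand-written composition root).
class ReportService {
    private final IFileWriter writer;

    ReportService(IFileWriter writer) {
        this.writer = writer;
    }

    void publish(String report) {
        writer.write("/reports/latest.txt", report);
    }
}

// A low-level detail that implements both small interfaces.
class DiskFileStore implements IFileReader, IFileWriter {
    public String read(String path) { return ""; /* elided for brevity */ }
    public void write(String path, String contents) { /* elided for brevity */ }
}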
202,843 | In building an application that deals with a lot of mathematical calculations, I have encountered the problem that certain numbers cause rounding errors. While I understand that floating point is not exact , the problem is: how do I deal with exact numbers to make sure that, when calculations are performed on them, floating point rounding doesn't cause any issues? | There are three fundamental approaches to creating alternative numeric types that are free of floating point rounding. The common theme with these is that they use integer math instead in various ways. Rationals Represent the number as a whole part and a rational part with a numerator and a denominator. The number 15.589 would be represented as w: 15; n: 589; d:1000 . When added to 0.25 (which is w: 0; n: 1; d: 4 ), this involves calculating the LCM, and then adding the two numbers. This works well for many situations, though it can result in very large numbers when you are working with many rational numbers that are relatively prime to each other. Fixed point You have the whole part, and the decimal part. All numbers are rounded (there's that word - but you know where it is) to that precision. For example, you could have fixed point with 3 decimal points. 15.589 + 0.250 becomes adding (589 + 250) % 1000 for the decimal part (and then any carry to the whole part). This works very nicely with existing databases. As mentioned, there is rounding, but you know where it is and can specify it such that it is more precise than is needed (you are only measuring to 3 decimal points, so make it fixed 4). Floating fixed point Store a value and the precision. 15.589 is stored as 15589 for the value and 3 for the precision, while 0.25 is stored as 25 and 2 . This can handle arbitrary precision. I believe this is what the internals of Java's BigDecimal use (I haven't looked at it recently). At some point, you will want to get it back out of this format and display it - and that may involve rounding (again, you control where it is). Once you determine the choice for the representation, you can either find existing third party libraries that use this, or write your own. When writing your own, be sure to unit test it and make sure you are doing the math correctly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202843",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94998/"
]
} |
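To make the fixed-point and "floating fixed point" descriptions above concrete, here is a small Java sketch; the helper formatting and the 3-decimal example are illustrative only.
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactArithmeticDemo {
    public static void main(String[] args) {
        // Fixed point with 3 decimal places: store everything as integer thousandths.
        long a = 15589;   // 15.589
        long b = 250;     // 0.250
        long sum = a + b; // exact integer addition, no binary rounding anywhere
        System.out.println("fixed point: " + (sum / 1000) + "." + String.format("%03d", sum % 1000)); // 15.839

        // "Floating fixed point" (value plus precision), as provided by java.math.BigDecimal.
        BigDecimal x = new BigDecimal("15.589");
        BigDecimal y = new BigDecimal("0.25");
        System.out.println("BigDecimal: " + x.add(y)); // 15.839, exactly
        // Rounding happens only where you explicitly ask for it, e.g. when reducing scale for display.
        System.out.println("rounded: " + x.add(y).setScale(2, RoundingMode.HALF_UP)); // 15.84
    }
}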
202,903 | Suppose you've got a project that is running late. Is there any proof or argument that teams become much more productive by adding more people? I am looking for answers that can be supported by facts and references if possible. What I'm thinking about is that existing devs have to teach the new ones (thus losing overall development time), and then the new developers have to study the code (and tasks) before they can become fully productive. | Your question has the answer in it. Adding manpower to a project that is running late only makes it worse, because the communication overhead increases in a non-linear way. It's already been studied. Read "The Mythical Man-Month". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202903",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12629/"
]
} |
202,908 | What I mean by that is that in nearly every tutorial I've read about functional languages, one of the great things about functions is that if you call a function with the same parameters twice, you'll always end up with the same result. How on earth do you then make a function that takes a seed as a parameter, and then returns a random number based on that seed? I mean this would seem to go against one of the things that are so good about functions, right? Or am I completely missing something here? | You can't create a pure function called random that will give a different result every time it is called. In fact, you can't even "call" pure functions. You apply them. So you aren't missing anything, but this doesn't mean that random numbers are off-limits in functional programming. Allow me to demonstrate; I'll use Haskell syntax throughout. Coming from an imperative background, you may initially expect random to have a type like this: random :: () -> Integer But this has already been ruled out because random cannot be a pure function. Consider the idea of a value. A value is an immutable thing. It never changes and every observation that you can make about it is consistent for all time. Clearly, random can't produce an Integer value. Instead, it produces an Integer random variable. Its type might look like this: random :: () -> Random Integer Except that passing an argument is completely unnecessary; functions are pure, so one random () is as good as another random () . I'll give random, from here on, this type: random :: Random Integer Which is all well and fine, but not very useful. You may expect to be able to write expressions like random + 42 , but you can't, because it won't typecheck. You can't do anything with random variables, yet. This raises an interesting question. What functions should exist to manipulate random variables? This function can't exist: bad :: Random a -> a in any useful way, because then you could write: badRandom :: Integer
badRandom = bad random Which introduces an inconsistency. badRandom is supposedly a value, but it is also a random number; a contradiction. Maybe we should add this function: randomAdd :: Integer -> Random Integer -> Random Integer But this just a special case of a more general pattern. You should be able to apply any function to random thing in order to get other random things like so: randomMap :: (a -> b) -> Random a -> Random b Instead of writing random + 42 , we can now write randomMap (+42) random . If all you had was randomMap, you wouldn't be able to combine random variables together. You couldn't write this function for instance: randomCombine :: Random a -> Random b -> Random (a, b) You might try to write it like this: randomCombine a b = randomMap (\a' -> randomMap (\b' -> (a', b')) b) a But it has the wrong type. Instead of ending up with a Random (a, b) , we end up with a Random (Random (a, b)) This can be fixed by adding another function: randomJoin :: Random (Random a) -> Random a But, for reasons that may eventually become clear, I'm not going to do that. Instead I'm going to add this: randomBind :: Random a -> (a -> Random b) -> Random b It's not immediately obvious that this actually solves the problem, but it does: randomCombine a b = randomBind a (\a' -> randomMap (\b' -> (a', b')) b) In fact, it's possible to write randomBind in terms of randomJoin and randomMap. It's also possible to write randomJoin in terms of randomBind. But, I'll leave doing this as an exercise. We could simplify this a little. Allow me to define this function: randomUnit :: a -> Random a randomUnit turns a value into a random variable. This means that we can have random variables that aren't actually random. This was always the case though; we could have done randomMap (const 4) random before. The reason defining randomUnit is a good idea is that now we can define randomMap in terms of randomUnit and randomBind: randomMap :: (a -> b) -> Random a -> Random b
randomMap f x = randomBind x (randomUnit . f) Ok, now we are getting somewhere. We have random variables that we can manipulate. However: It's not obvious how we might actually implement these functions, It's quite cumbersome. Implementation I'll tackle pseudo random numbers. It is possible implement these functions for real random numbers, but this answer is already getting quite long. Essentially, the way this is going to work is that we are going to pass a seed value around everywhere. Whenever we generate a new random value, we will produce a new seed. At the end, when we're done constructing a random variable, we will want to sample from it using this function: runRandom :: Seed -> Random a -> a I'm going to define the Random type like this: data Random a = Random (Seed -> (Seed, a)) Then, we just need to provide implementations of randomUnit, randomBind, runRandom and random which is quite straight-forward: randomUnit :: a -> Random a
randomUnit x = Random (\seed -> (seed, x))
randomBind :: Random a -> (a -> Random b) -> Random b
randomBind (Random f) g =
Random (\seed ->
let (seed', x) = f seed
Random g' = g x in
g' seed')
runRandom :: Seed -> Random a -> a
runRandom seed (Random f) = (snd . f) seed For random, I'm going to assume there's already a function of the type: pseudoRandom :: Seed -> (Seed, Integer) In which case random is just Random pseudoRandom . Making things less cumbersome Haskell has syntactic sugar to make things like this nicer on the eyes. It's called do-notation, and to use it all we have to do is create an instance of Monad for Random. instance Monad Random where
return = randomUnit
(>>=) = randomBind Done. randomCombine from before could now be written: randomCombine :: Random a -> Random b -> Random (a, b)
randomCombine a b = do
a' <- a
b' <- b
return (a', b') If I was doing this for myself, I would even go one step further than this and create an instance of Applicative. (Don't worry if this makes no sense). instance Functor Random where
fmap = liftM
instance Applicative Random where
pure = return
(<*>) = ap Then randomCombine could be written: randomCombine :: Random a -> Random b -> Random (a, b)
randomCombine a b = (,) <$> a <*> b Now that we have these instances, we can use >>= instead of randomBind, join instead of randomJoin, fmap instead of randomMap, return instead of randomUnit. We also get a whole load of functions for free. Is it worth it? You could argue, that getting to this stage, where working with random numbers isn't completely horrendous was quite difficult and long-winded. What did we get in exchange for this effort? The most immediate reward is that we can now see exactly which parts of our program are dependent on randomness and which parts are entirely deterministic. In my experience, forcing a strict separation like this simplifies things immensely. We've assumed so far that we just want a single sample from each random variable that we generate, but if it turns out that in the future we'd actually like to see more of the distribution, this is trivial. You can just use runRandom lots of times on the same random variable with different seeds. This is, of course, possible in imperative languages, but in this case, we can be certain that we aren't going to perform unanticipated IO every time we sample a random variable and we don't have to be careful about initializing state. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202908",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91470/"
]
} |
202,922 | Sometimes, although not often, I have to include math logic in my code. The concepts used are mostly very simple, but the resulting code is not - a lot of variables with unclear purpose, and some operations with not so obvious intent. I don't mean that the code is unreadable or unmaintainable, just that it's waaaay harder to understand than the actual math problem. I try to comment the parts which are hardest to understand, but there is the same problem as in just coding them - text does not have the expressive power of math . I am looking for a more efficient and easy to understand way of explaining the logic behind some of the complex code, preferably in the code itself. I have considered TeX - writing the documentation and generating it separately from the code. But then I'd have to learn TeX, and the documentation would not be in the code itself. Another thing I thought of is taking a picture of the mathematical notations, equations and diagrams written on paper/whiteboard, and including it in the javadoc. Is there a simpler and clearer way? P.S. Giving descriptive names ( timeOfFirstEvent instead of t1 ) to the variables actually makes the code more verbose and even harder to read. | The right thing to do in such circumstances is to implement the algorithm, formula or whatever with exactly the same variable names as in the primary real-world source (as far as the programming language allows this), and have a succinct comment above it saying something like "Levenshtein distance computation as described in [Knuth1968]", where the citation links to a readily accessible description of the math. (If you don't have such a reference, but your math is sound and useful, maybe you should consider publishing it yourself. Just sayin'.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/202922",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52728/"
]
} |
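A tiny Java example of the advice above - keep the variable names used in the published statement of the formula and put the citation in a short comment. The method is illustrative, and the [reference] placeholder stands for whatever citation style your team uses.
public final class TriangleMath {

    private TriangleMath() {}

    /**
     * Area of a triangle with side lengths a, b, c via Heron's formula:
     * area = sqrt(s * (s - a) * (s - b) * (s - c)), where s is the semi-perimeter.
     * Variable names deliberately match the usual statement of the formula; see [reference]
     * for the derivation instead of repeating it here.
     */
    public static double area(double a, double b, double c) {
        double s = (a + b + c) / 2.0; // semi-perimeter, named as in the source
        return Math.sqrt(s * (s - a) * (s - b) * (s - c));
    }
}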
203,024 | Without being presumptuous, I would like you to consider the possibility of this.
Most OS today are based on pretty low level languages (mainly C/C++)
Even the new ones, such as Android, use JNI, and the underlying implementation is in C. In fact (this is a personal observation), many programs written in C run a lot faster than their high-level counterparts (e.g. Transmission (a BitTorrent client on Ubuntu) is a whole lot faster than Vuze (Java) or Deluge (Python)). Even Python implementations are written in C, although PyPy is an exception. So is there a particular reason for this? Why is it that all our so-called "High Level Languages" with the great "OOP" concepts can't be used in making a solid OS? So I have 2 questions, basically: Why are applications written in low-level languages more efficient than their HLL counterparts? Do low-level languages perform better for the simple reason that they are low level and are translated to machine code more easily? Why do we not have a full-fledged OS based entirely on a high-level language? | Microsoft has done some very interesting research in this direction, if you look into Singularity: http://research.microsoft.com/en-us/projects/singularity/ Also, Mothy Roscoe et al have been working on Barrelfish, which uses the Eclipse constraint programming language as an OS service to sort out all kinds of OS management and resource allocation problems: http://www.barrelfish.org/ | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203024",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95140/"
]
} |
203,077 | In a blog post on F# for fun and profit, it says: In a functional design, it is very important to separate behavior from
data. The data types are simple and "dumb". And then separately, you
have a number of functions that act on those data types. This is the exact opposite of an object-oriented design, where
behavior and data are meant to be combined. After all, that's exactly
what a class is. In a truly object-oriented design in fact, you should
have nothing but behavior -- the data is private and can only be
accessed via methods. In fact, in OOD, not having enough behavior around a data type is
considered a Bad Thing, and even has a name: the " anemic domain
model ". Given that in C# we seem to keep borrowing from F#, and trying to write more functional-style code; how come we're not borrowing the idea of separating data/behavior, and even consider it bad? Is it simply that the definition doesn't with with OOP, or is there a concrete reason that it's bad in C# that for some reason doesn't apply in F# (and in fact, is reversed)? (Note: I'm specifically interested in the differences in C#/F# that could change the opinion of what is good/bad, rather than individuals that may disagree with either opinion in the blog post). | The main reason FP aims for this and C# OOP does not is that in FP the focus is on referential transparency; that is, data goes into a function and data comes out, but the original data is not changed. In C# OOP there's a concept of delegation of responsibility where you delegate an object's management to it, and therefore you want it to change its own internals. In FP you never want to change the values in an object, therefore having your functions embedded in your object doesn't make sense. Further in FP you have higher kinded polymorphism allowing your functions to be far more generalized than C# OOP allows. In this way you may write a function that works for any a , and therefore having it embedded in a block of data doesn't make sense; that would tightly couple the method so that it only works with that particular kind of a . Behaviour like that is all well and common in C# OOP because you don't have the ability to abstract functions so generally anyway, but in FP it's a tradeoff. The biggest problem I've seen in anemic domain models in C# OOP is that you end up with duplicate code because you have DTO x, and 4 different functions that commits activity f to DTO x because 4 different people didn't see the other implementation. When you put the method directly on DTO x, then those 4 people all see the implementation of f and reuse it. Anemic data models in C# OOP hinder code reuse, but this isn't the case in FP because a single function is generalized across so many different types that you get greater code reuse since that function is usable in so many more scenarios than a function you would write for a single DTO in C#. As pointed out in comments , type inference is one of the benefits FP relies on to allow such significant polymorphism, and specifically you can trace this back to the Hindley Milner type system with Algorithm W type inference; such type inference in the C# OOP type system was avoided because the compilation time when constraint-based inference is added becomes extremely long due to the exhaustive search necessary, details here: https://stackoverflow.com/questions/3968834/generics-why-cant-the-compiler-infer-the-type-arguments-in-this-case | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203077",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8107/"
]
} |
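To illustrate the "data in, data out, the original is never changed" point from the answer above in mainstream OO syntax, here is a small Java sketch (it assumes a recent Java with records; the Employee example is invented for illustration).
import java.math.BigDecimal;

// Functional style: "dumb" immutable data with no behaviour attached.
record Employee(String name, BigDecimal salary) {}

final class Payroll {
    private Payroll() {}

    // A pure function over the data: it returns a new value and never mutates its input.
    static Employee withRaise(Employee e, BigDecimal amount) {
        return new Employee(e.name(), e.salary().add(amount));
    }
}

// Typical OO alternative: the object owns its state and mutates it in place.
class MutableEmployee {
    private BigDecimal salary;

    MutableEmployee(BigDecimal salary) { this.salary = salary; }

    void applyRaise(BigDecimal amount) { this.salary = this.salary.add(amount); }

    BigDecimal salary() { return salary; }
}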
203,104 | In many languages such as C, C++, and Java, the main method/function has a return type of void or int , but not double or String . What might be the reasons behind that? I know a little bit about why we can't do that: main is called by the runtime library and it expects some syntax like int main() or int main(int,char**) , so we have to stick to that. So my question is: why does main have the type signature that it has, and not a different one? | The return value of main is to be passed to the operating system ( any operating system) in a single, consistent way. The information that the operating system needs to know is "did the program terminate successfully, or was there an error?" If this is a string, the response becomes difficult in different languages. The internals of a Pascal string (first byte is length) and a FORTRAN string (fixed, padded to some value) and a C string (null terminated) are all different. This would make returning a consistent value to the operating system challenging. Assuming that this was solved, what would you do to answer the question the OS had of the program? String comparisons are fraught with errors ("success" vs "Success"), and while the error may be more useful to a human, it is more difficult for the operating system or another program (shell) to deal with. There also were significant differences even in the strings themselves -- EBCDIC (with all of its code pages) vs. ASCII. Floats and doubles provide no additional value over the integer for communicating data back to the OS (and shell). For the most part, neither of these parts of the computer deals with floating point numbers. Doubles are also not enumerable, making comparisons difficult. Not being enumerable, they also make it hard to report what the error was (assuming you have picked a particular value for success). Again, floating points are not consistent - a float on an 8 bit machine was different from the float on a 16 bit and a 32 bit machine (and those are just the 'normal' ones - even within IBM, floating point wasn't standardized between machines by the same manufacturer until the 1980s). And then you've got decimal vs. binary computers. Floating point values aren't consistent and don't provide meaningful data back. That really leaves us with the byte and integer as options. The convention that was established was that '0' was success, and anything else was an error. An integer gives more room than a byte for reporting the error. It can be enumerated (a return of 1 means XYZ, a return of 2 means ABC, a return of 3 means DEF, etc.) or used as flags ( 0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203104",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95092/"
]
} |
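A small Java illustration of the convention described in the answer above - 0 for success, small enumerable values or bit flags for failures. The specific flag names and values are made up for the example.
public class ExitCodeDemo {
    // Hypothetical failure flags for this example only.
    static final int CONFIG_MISSING = 0x0001;
    static final int NETWORK_FAILED = 0x0002;

    public static void main(String[] args) {
        int status = 0; // 0 = success, by convention

        if (args.length == 0) {
            status |= CONFIG_MISSING; // failures combine as flags
        }
        // ... further work might OR in NETWORK_FAILED ...

        // The integer is handed back to the operating system; a calling shell can
        // then inspect it (e.g. $? in POSIX shells, %ERRORLEVEL% on Windows).
        System.exit(status);
    }
}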
203,111 | I'm developing some cross-platform software which is intended to run on mobile devices. Both iOS and Android provide low memory warnings. I plan to make a wrapper class that will free cached resources (like textures) when low memory warnings are issued (assuming the resource is not in use). If the resource returns to use, it'll re-cache it, etc... I'm trying to think of what this is called. In .Net, it's similar to a "weak reference" but that only really makes sense when dealing with garbage collection, and since I'm using c++ and shared_ptr, a weak reference already has a meaning which is distinct from the one I'm thinking of. There's also the difference that this class will be able to rebuild the cache when needed. What is this pattern/whatever is called? Edit: Feel free to recommend tags for this question. | The return value of main is to be passed to the operating system ( any operating system) in a single, consistent way. The information that the operating system needs to know is "did the program terminate successfully, or was there an error?" If this is a string, the response becomes difficult in different languages. The internals of a Pascal string (first byte is length) and a FORTRAN string (fixed, padded to some value) and a C string (null terminated) are all different. This would make returning a consistent value to the operating system challenging. Assuming that this was solved, what would you do to answer the question the OS had of the program? String comparisons are fraught with errors ("success" vs "Success"), and while the error may be more useful to a human, it is more difficult for the operating system or another program (shell) to deal with. There also were significant differences even in the strings themselves -- EBCDIC (with all of its code pages) vs. ASCII. Floats and doubles provide no additional value over the integer for communicating back data to the OS (and shell). For the most part, neither of these parts of the computer deal with floating point numbers. Doubles are also not enumerable making comparisons difficult. Not being enumerable, they make reporting what the error was (assuming you have picked a particular value for success). Again, floating points are not consistent - a float on an 8 bit machine was different than the float on a 16 bit and a 32 bit machine (and those are just the 'normal' ones - even within IBM, floating point wasn't standardized between machines by the same manufacturer until the 1980's). And then you've got decimal vs. binary computers. Floating point values aren't consistent and don't provide meaningful data back. That really leaves us with the byte and integer as options. The convention that was established was '0' was success, and anything else was an error. An integer gives more room than a byte for reporting the error. It can be enumerated (return of 1 means XYZ, return of 2 means ABC, return of 3, means DEF, etc..) or used as flags ( 0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203111",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/89102/"
]
} |
203,114 | Here are two options: Option 1: enum QuizCategory {
CATEGORY_1(new MyCollection<Question>()
.add(Question.QUESTION_A)
.add(Question.QUESTION_B)
.add...),
CATEGORY_2(new MyCollection<Question>()
.add(Question.QUESTION_B)
.add(Question.QUESTION_C)
.add...),
...
;
public MyCollection<Question> collection;
private QuizCategory(MyCollection<Question> collection) {
this.collection = collection;
}
public Question getRandom() {
return collection.getRandomQuestion();
}
} Option 2: enum QuizCategory2 {
CATEGORY_1 {
@Override
protected MyCollection<Question> populateWithQuestions() {
return new MyCollection<Question>()
.add(Question.QUESTION_A)
.add(Question.QUESTION_B)
.add...;
}
},
CATEGORY_2 {
@Override
protected MyCollection<Question> populateWithQuestions() {
return new MyCollection<Question>()
.add(Question.QUESTION_B)
.add(Question.QUESTION_C)
.add...;
}
};
public Question getRandom() {
MyCollection<Question> collection = populateWithQuestions();
return collection.getRandomQuestion();
}
protected abstract MyCollection<Question> populateWithQuestions();
} There will be around 1000 categories, each containing 10 - 300 questions (100 on average). At runtime typically only 10 categories and 30 questions will be used. Each question is itself an enum constant (with its fields and methods). I'm trying to decide between those two options in the mobile application context. I haven't done any measurements since I have yet to write the questions and would like to gather more information before committing to one or another option. As far as I understand: (a) Option 1 will perform better since there will be no need to populate the collection and then garbage-collect the questions;
(b) Option 1 will require extra memory: 1000 categories x 100 questions x 4 bytes for each reference = 400 Kb, which is not significant. So I'm leaning to Option 1, but just wondered if I'm correct in my assumptions and not missing something important? Perhaps someone has faced a similar dilemma? Or perhaps it doesn't actually matter that much? | The return value of main is to be passed to the operating system ( any operating system) in a single, consistent way. The information that the operating system needs to know is "did the program terminate successfully, or was there an error?" If this is a string, the response becomes difficult in different languages. The internals of a Pascal string (first byte is length) and a FORTRAN string (fixed, padded to some value) and a C string (null terminated) are all different. This would make returning a consistent value to the operating system challenging. Assuming that this was solved, what would you do to answer the question the OS had of the program? String comparisons are fraught with errors ("success" vs "Success"), and while the error may be more useful to a human, it is more difficult for the operating system or another program (shell) to deal with. There also were significant differences even in the strings themselves -- EBCDIC (with all of its code pages) vs. ASCII. Floats and doubles provide no additional value over the integer for communicating back data to the OS (and shell). For the most part, neither of these parts of the computer deal with floating point numbers. Doubles are also not enumerable making comparisons difficult. Not being enumerable, they make reporting what the error was (assuming you have picked a particular value for success). Again, floating points are not consistent - a float on an 8 bit machine was different than the float on a 16 bit and a 32 bit machine (and those are just the 'normal' ones - even within IBM, floating point wasn't standardized between machines by the same manufacturer until the 1980's). And then you've got decimal vs. binary computers. Floating point values aren't consistent and don't provide meaningful data back. That really leaves us with the byte and integer as options. The convention that was established was '0' was success, and anything else was an error. An integer gives more room than a byte for reporting the error. It can be enumerated (return of 1 means XYZ, return of 2 means ABC, return of 3, means DEF, etc..) or used as flags ( 0x0001 means this failed, 0x0002 means that failed, 0x0003 means both this and that failed). Limiting this to just a byte could easily run out of flags (only 8), so the decision was probably to use an integer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203114",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95217/"
]
} |
203,205 | I'm trying to write a "standard" business website. By "standard", I mean this site runs the usual HTML5, CSS and JavaScript for the front-end, a back-end (to process stuff), and runs MySQL for the database. It's a basic CRUD site: the front-end just makes pretty whatever the database has in store; the back-end writes to the database whatever the user enters and does some processing. Just like most sites out there. In creating my GitHub repositories to begin coding, I've realized I don't understand the distinction between the front-end, the back-end, and the API . Another way of phrasing my question is: where does the API come into this picture? I'm going to list some more details and then the questions I have - hopefully this gives you guys a better idea of what my actual question is, because I'm so confused that I don't know the specific question to ask. Some more details: I'd like to try the Model-View-Controller pattern. I don't know if this changes the question/answer. The API will be RESTful. I'd like my back-end to use my own API instead of allowing the back-end to cheat and call special queries. I think this style is more consistent. My questions: Does the front-end call the back-end, which calls the API? Or does the front-end just call the API instead of calling the back-end? Does the back-end just execute an API and the API returns control to the back-end (where the back-end acts as the ultimate controller, delegating tasks)? Long and detailed answers explaining the role of the API alongside the front-end and back-end are encouraged. | I think you're being confused by the way the term API is being misused and abused by many web developers. API means Application Programming Interface, i.e. any officially specified interface between different systems (or parts of the same system). Some time ago, it became a big thing for web startups to offer public access to some of their internal data through a web service API, typically using REST and JSON, thus allowing third-party developers to integrate with their systems. Web developers started using the term "API" to mean specifically (and only) "publicly accessible web service", and misusing it to include the implementation thereof. In terms of frontend and backend, this web service API (and its implementation) is the backend . Some parts of it may be publicly accessible and others only to your frontend. A different name for this is "service layer", i.e. code that represents the services which the frontend calls; it contains no display logic (that's the job of the frontend, after all), is more abstract and coarse-grained than simple CRUD actions (one service call will often involve multiple CRUD actions and should be executed within a database transaction), and contains the business logic of the application. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203205",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48162/"
]
} |
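As a rough sketch of the "service layer" idea in the answer above (the names are invented, not taken from the original post): the backend exposes coarse-grained operations that wrap several CRUD steps and hold the business logic, while the frontend only renders what comes back.
import java.util.List;

// Coarse-grained service interface that the frontend (or a public web API) calls.
interface OrderService {
    // One call = one business operation; internally it may touch several tables
    // inside a single transaction (reserve stock, create the order, record payment).
    OrderConfirmation placeOrder(long customerId, List<Long> productIds);
}

// Plain data handed back to the frontend, which only formats it for display.
record OrderConfirmation(long orderId, String status) {}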
203,333 | Does it make sense to give signoff authority to testers? Should a test team Just test features, issues, etc, and simply report on a pass/fail basis, leaving it up to others to act on those results, or Have authority to hold up releases themselves based on those results? In other words, should testers be required to actually sign off on releases? The testing team I'm working with feels that they do, and we're having an issue with this because of "testing scope creep" -- the refusal to approve releases is sometimes based on issues explicitly not addressed by the release in question. | Most places I have worked the QA people do have some sort of sign-off step, but do not have final authority on if the release proceeds or not. Their sign-off represents that they completed the testing expected by the release plan, not that the release is flawless. Ultimately QA != the business and the business needs to decide if they are OK with deploying the code in the current state or if the benefit outweighs the downside or whatever. This is often done by clients or stakeholders immediately prior to deploy and is often called User Acceptance. If your QA is also your User Acceptance group then there is the potential that they do have the authority to define your release candidate as unacceptable, but if you are getting this over issues that are out of scope to the bugfix/iteration/sprint/change request/whatever you bucket your time in, then the Project Manager or the business line stakeholders needs to have a come to Jesus meeting with the QA team. It is fine to report on preexisting defects or unintended outcomes of new requirements, but if it is out of scope and non-disastrous it is generally not acceptable to label it as a blocking issue. It goes in the backlog for the product owner to prioritize like everything else. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203333",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20410/"
]
} |
203,469 | For example, there is a common snippet in JS to get a default value: function f(x) {
x = x || 'default_value';
} This kind of snippet is not easily understood by all the members of my team, their JS level being low. Should I not use this trick then? It makes the code less readable by peers, but more readable than the following according to any JS dev: function f(x) {
if (!x) {
x = 'default_value';
}
} Sure, if I use this trick and a colleague sees it, then they can learn something. But the case is often that they see this as "trying to be clever". So, should I lower the level of my code if my teammates have a lower level than me? | Ok, here goes my take on this big and complicated topic. Pros for keeping your coding style: Things like x = x || 10 are idiomatic in JavaScript development and offer a form of consistency between your code and the code of external resources you use. Higher level of code is often more expressive, you know what you get and it's easier to read across highly trained professionals. You'll enjoy your job more. I personally value creating pretty code. I think it brings me a lot of satisfaction in my work. Generally it creates more readable style. Sticking with the idioms of the language can be very valuable - they are often idioms for a reason. Cons for keeping your coding style: It'll be harder for the lower-level programmers to keep up. These are often the people maintaining your code and the ones who'll have to actually read the stuff you write. Maintainers of code, often JavaScript code come from other languages. Your programmers might be competent in Java or C# but not understand how and when JavaScript differs exactly. These points are often idiomatic - an immediately invoked function expression (IIFE) is an example of such a construct. My personal opinion You should not lower the skill of your code. You should aspire to write code that is expressive, clear and concise. If you have any doubts about the level of your team - educate them . People are more than willing to learn than you might think, and are willing to adapt new constructs when convinced they are better. If they think you are 'just being clever,' try to argue your point. Be willing to admit that you're wrong sometimes, and no matter what, try to keep styles consistent throughout your work environment. Doing so will help to avoid hostility. The most important thing is to stay consistent. A team's code should be written as if one person coded it. You absolutely have to agree on coding guidelines. You should abide by those guidelines. If the coding guidelines specify that reading optional parameters should be done in the 'less clever' way, then that is the way. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203469",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42132/"
]
} |
203,471 | Is there a reason why functions in most(?) programming languages are designed to support any number of input parameters but only one return value? In most languages, it is possible to "work around" that limitation, e.g. by using out-parameters, returning pointers or by defining/returning structs/classes. But it seems strange that programming languages were not designed to support multiple return values in a more "natural" way. Is there an explanation for this? | Because functions are mathematical constructs that perform a calculation and return a result. Indeed, much that's "under the hood" of not a few programming languages focuses solely on one input and one output, with multiple inputs being just a thin wrapper around the input - and when a single-value output doesn't work, a single cohesive structure (or tuple, or Maybe ) serves as the output (though that "single" return value is composed of many values). This has not changed because programmers have found out-parameters to be awkward constructs that are useful in only a limited set of scenarios. Like with many other things, the support isn't there because the need/demand isn't there. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203471",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10901/"
]
} |
203,492 | I am working on a project and after arguing with people at work for about more than a hour. I decided to know what people on stack-exchange might say. We're writing an API for a system, there is a query that should return a tree of Organization or a tree of Goals. The tree of Organization is the organization in which the user is present, In other words, this tree should always exists. In the organization, a tree of goal should be always present. (that's where the argument started). In case where the tree doesn't exist, my co-worker decided that it would be right to answer response with status code 200. And then started asking me to fix my code because the application was falling apart when there is no tree. I'll try to spare flames and fury. I suggested to raise a 404 error when there is no tree. It would at least let me know that something is wrong. When using 200, I have to add special check to my response in the success callback to handle errors. I'm expecting to receive an object, but I may actually receive an empty response because nothing is found. It sounds totally fair to mark the response as a 404. And then war started and I got the message that I didn't understand HTTP status code schema. So I'm here and asking what's wrong with 404 in this case? I even got the argument "It found nothing, so it's right to return 200". I believe that it's wrong since the tree should be always present. If we found nothing and we are expecting something, it should be a 404. More info, I forgot to add the urls that are fetched. Organizations /OrgTree/Get Goals /GoalTree/GetByDate?versionDate=...
/GoalTree/GetById?versionId=... My mistake, both parameters are required. If any versionDate that can be parsed to a date is provided, it will return the closes revision. If you enter something in the past, it will return the first revision. If by Id with a id that doesn't exists, I suspect it's going to return an empty response with 200. Extra Also, I believe the best answer to the problem is to create default objects when organizations are created, having no tree shouldn't be a valid case and should be seen as an undefined behavior. There is no way an account can be used without both trees. For that reasons, they should be always present. also I got linked this (one similar but I can't find it) http://viswaug.files.wordpress.com/2008/11/http-headers-status1.png | When in doubt, consult the documentation . Reviewing the W3C definitions for HTTP Status codes, gives us this: 200 OK - The request has succeeded. The information returned with the response is dependent on the method used in the request. 404 Not Found - The server has not found anything matching the Request-URI. In the context of your API, it very much depends on how queries are created and how objects are retrieved. But, my interpretation has always been that: If I ask for a particular object, and it exists return 200 code, if it doesn't exist return the correct 404 code. But, if I ask for a set of objects that match a query, a null set is a valid response and I want that returned with a 200 code. The rationale for this is that the query was valid, it succeeded and the query returned nothing. So in this case you are correct , the service isn't searching for "a specific thing" it is requesting a particular thing, if that thing isn't found say that clearly. I think Wikipedia puts it best: 200 OK - ... The actual response will depend on the request method used. In a GET request, the response will contain an entity corresponding to the requested resource. 404 Not Found - The requested resource could not be found but may be available again in the future. Subsequent requests by the client are permissible. Seems pretty clear to me. Regarding the example requests /GoalTree/GetByDate?versionDate=...
/GoalTree/GetById?versionId=... For the format, you said, you always return the nearest revision to that date. It will never not return an object, so it should always be returning 200 OK . Even if this were able to take a date range, and the logic were to return all objects within that timeframe returning 200 OK - 0 Results is ok, as that is what the request was for - the set of things that met that criteria. However, the latter is different as you are asking for a specific object , presumably unique, with that identity. Returning 200 OK in this case is wrong as the requested resource doesn't exist and is not found . Regarding choosing status codes 2xx codes Tell a User Agent (UA) that it did the right thing , the request worked. It can keep doing this in the future. 3xx codes Tell a UA what you asked probably used to work, but that thing is now elsewhere. In future the UA might consider just going to the redirect . 4xx codes Tell a UA it did something wrong , the request it constructed isn't proper and shouldn't try it again, without at least some modification. 5xx codes Tell a UA the server is broken somehow . But hey that query could work in the future, so there is no reason not to try it again. (except for 501, which is more of a 400 issue). You mentioned in a comment using a 5xx code, but your system is working. It was asked a query that doesn't work and needs to communicate that to the UA. No matter how you slice it, this is 4xx territory. Consider an alien querying our solar system Alien: Computer, please tell me all planets that humans inhabit. Computer: 1 result found. Earth Alien: Computer, please tell me about Earth . Computer: Earth - Mostly Harmless. Alien: Computer, please tell me about all planets humans inhabit, outside the asteroid belt. Computer: 0 results found. Alien: Computer, please destroy Earth. Computer: 200 OK. Alien: Computer, please tell me about Earth . Computer: 404 - Not Found Alien: Computer, please tell me all planets that humans inhabit. Computer: 0 results found. Alien: Victory for the mighty Irken Empire! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203492",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12039/"
]
} |
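A short sketch of the distinction drawn in the answer above: a query for a set of things returns 200 OK even when the set is empty, while a request for one specific thing that does not exist returns 404. This is Python/Flask with made-up resource names, not the original service. from flask import Flask, jsonify, abort

app = Flask(__name__)

GOALS = {1: {"id": 1, "name": "Grow revenue"}}   # stand-in for the real data store

@app.route("/goals")
def list_goals():
    # Query for a set: zero matches is still a valid, successful answer -> 200 OK
    return jsonify({"results": list(GOALS.values())})

@app.route("/goals/<int:goal_id>")
def get_goal(goal_id):
    # Request for one specific resource: if it is not there, say so -> 404 Not Found
    goal = GOALS.get(goal_id)
    if goal is None:
        abort(404)
    return jsonify(goal)

By the answer's own reasoning, the question's GetByDate endpoint belongs to the first category and GetById to the second.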
203,684 | Which of these programming styles is better? var result = methodOne(methodTwo(a, methodThree(b)), c, d); or var result3 = methodThree(b);
var result2 = methodTwo(a, result3);
var result = methodOne(result2, c, d); | In layman's words: The important thing is not the number of lines but the readability of the code. Any fool can write code that a computer can understand. Good
programmers write code that humans can understand. ( M. Fowler ) In the examples you gave, the second one is definitively easier to read. Source code is for people to read. Besides, intermediate values make the code easier to debug. Code one-liners, on the other hand, are useful to show other people that you are "smart" and that you don't care. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203684",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95678/"
]
} |
203,928 | I'm having a funny but also terrible problem. I'm about to launch a new (iPhone) app. It's a turn-based multiplayer game running on my own custom backend. But I'm afraid to launch. For some reason, I think it might become something big and that its popularity will kill my poor lonely single server + MySQL database. On one hand I'm thinking that if it's growing, I'd better be prepared and have a scalable infrastructure already in place. On the other hand I just feel like getting it out into the world and see what happens. I often read stuff like "premature optimization is the root of all evil" or people saying that you should just build your killer game now, with the tools at hand, and worry about other stuff like scalability later. I'd love to hear some opinions on this from experts or people with experience with this. Thanks! | It's actually quite an easy choice. Right now, you have zero users, and scalability is not a problem. Ideally, you want to reach the point where you have millions of users, and scalability becomes a problem. Right now, you don't have a scalability problem; you have a number-of-users problem. If you work on the scalability problem, you will not fix the number-of-users problem, which means you will have solved a problem you don't have yet, and you will not have solved the problem you do have. The most likely result is that your product won't make it, and all your work will be for nothing. If you work on the number-of-users problem, you will solve a problem you have right now, and then you can focus on the next problem, which might be scalability. The nice thing about scalability problems is that, by definition, having them usually means business is pretty damn good, and this in turn means you can afford to spend money on optimizing for scalability. You don't go from zero users to ten million overnight, and if you keep an eye on the system's performance, you will have plenty of time to optimize when the time comes. Of course it helps to keep scalability in mind while writing the code you need right now, but it doesn't make a lot of sense to spend dozens or even hundreds of man-hours on a feature of which you don't know if you'll ever need it, and the most likely scenario is that you won't. Right now, your main concern is to ship. What happens after that; well, you can worry about that later. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203928",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95915/"
]
} |
203,970 | I see that Java has Boolean (class) vs boolean (primitive). Likewise, there's an Integer (class) vs int (primitive). What's the best practice on when to use the primitive version vs the class? Should I basically always be using the class version unless I have a specific (performance?) reason not to? What's the most common, accepted way to use each? | In Item 5, of Effective Java, Joshua Bloch says The lesson is clear: prefer primitives to
boxed primitives, and watch out for unintentional autoboxing . One good use for classes is when using them as generic types (including Collection classes, such as lists and maps) or when you want to transform them to another type without implicit casting (for example, the Integer class has methods doubleValue() or byteValue() ). Edit: Joshua Bloch's reason is: // Hideously slow program! Can you spot the object creation?
public static void main(String[] args) {
Long sum = 0L;
for (long i = 0; i < Integer.MAX_VALUE; i++) {
sum += i;
}
System.out.println(sum);
} This program gets the right answer, but it is much slower than it should be,
due to a one-character typographical error. The variable sum is declared as a Long instead of a long , which means that the program constructs about 2^31 unnecessary Long instances (roughly one for each time the long i is added to the Long sum ). Changing the declaration of sum from Long to long reduces the runtime from 43 seconds to 6.8 seconds on my machine. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203970",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21497/"
]
} |
203,998 | I am not sure if this is the place to ask the following conceptual question (Stackoverflow is definitely not). I saw this question in a multiple choice exam (single answer), similar to the ISTQB exams: Why it is not recommended to report several defects in the same issue
/ ticket ? a. In order to keep the report concise and clear. b. Because the developers might fix only one bug. c. Because the testing group testers are rated by the amount of bugs
they find. d. Bugs management systems does not support this feature of multiple
bugs. My sole opinion is that a is the correct answer. b - can't be it as the fix-feedback-resolved-closed should avoid that case. c - Obviously wrong. d - Redmine / Trac plugins supports multiple fields. The answer according to the answer sheet is b . Can someone explain why ? Comments with opinion about answers are welcome. | Imagine if Stack Overflow had a guideline: instead of asking one question, you come and ask, in the same question, whatever comes into your mind, all your issues you had for the last two weeks. What would upvote and downvote mean? What would be the titles of the questions? How to accept the best answer? How to tag the question? Bug tracking system is done to... track bugs. Tracking a bug means: Creating a record saying that a bug might exist, with information about how to reproduce it, Confirming that indeed, the bug exists and is a bug, not something by design, Asserting that the bug is now solved, Confirming that the bug was solved. In a very simplistic model, 1 and 4 will be done by the customer, and 2 and 3 – by the developer. Imagine the following log: Day 1 [Customer] When pressing on “Remove” button in “Product details” window, the application hangs. Restarting the application shows that the product wasn't removed. The expected behavior is to remove the product. Day 4 [Developer] <Issue reproduced> Day 5 [Developer] <Issue solved in revision 5031> Day 12 [Customer] <Ticket closed: issue solved> The log is simple and clear . You can easily track what was done and when , which revision solved which bug, etc. For example, if the bug tracking system is integrated with the version control, when you view a specific revision, you can check what bugs were solved in it. It's easy to find information . It's easy to see its state (is it reproduced? If the ticket was closed, why?). It's easy to filter tickets (I want to display tickets which concern only the UI of the plugins, given that I want only tickets which are open, older than one week and assigned to me by our interaction designer and are medium or high priority). It's easy to reassign a ticket or to originally determine which is the person who should be in charge of the bug. Now imagine the following log: Day 1 [Customer] The app hangs when I press “Remove” button in “Product details” window. Also, the background color of the left panel is dark blue, while it should be purple. I also noted that the text of the “Product details” window is not translated well to German; is it expected? When the final translation would be available? BTW, have you received the new icon I sent for the “Publish product” action? I don't see it in the “Sync data” window. Day 6 [Developer] I changed the color to purple. Day 7 [Developer] Yes, it's normal that the translation to German is incomplete. Day 8 [Customer] Ok for German. What about Italian? Lucia sent you the XML file two days ago. Day 9 [Developer] It's ok now. Day 10 [Customer] Ok for the “Remove” button? Strange, at my computer, it still hangs. Day 11 [Developer] No, I wanted to say it's ok for Italian translation. Day 12 [Customer] I see. Thank you. But there is a problem with the color. You changed it to dark purple, but it should be light purple, like the top panel on the main window. Day 13 [Developer] I updated the icon. Day 14 [Customer] The icon? What icon? Day 15 [Developer] The icon you asked me to update. Day 16 [Customer] I never asked you to update any icon. Day 17 [Developer] Of course you asked. See this ticket. 
You wrote that the publish product icon should be updated. I've done it. ⁞ Day 100 [Customer] So, what about the entries in the log? Day 101 [Developer] I have no idea what you're talking about. It's not even in this ticket, but in 6199. I'm closing this one as solved. <Ticket closed: issue solved> Day 102 [Customer] Sorry to reopen it, but the problem is not solved. I'm talking about the entries in the log: I told you last week that the text is sometimes invalid when it contains unicode characters. Do you remember? <Ticket reopened> Day 103 [Developer] I vaguely remember something like that, but after searching the last three pages of this ticket, I can't find any trace. Can you write again what was the problem? ⁞ Day 460 [Developer] I spent two hours searching for a trace of what you've said about the files sent encrypted through the network. I'm not sure I can find the precise request. Day 460 [Customer] You guys should really be more organized. I notified you four times about this issue for the last two weeks. Why are you forgetting everything? ⁞ What's this log about? It was solved 43 times and reopened 43 times. Does it mean that the developer is so stupid that he can't solve the same issue for 460 days? Ah, no, wait, this ticket was assigned to 11 developers meanwhile. What's the deal? How to search for a specific issue? It's actually assigned to Vanessa, but her five colleagues are concerned as well by seven of the eleven issues in this ticket. When the ticket should be closed? Is it when half of the issues are solved? Or maybe ten out of eleven? Note: You may believe that such logs don't exist. Believe me, I've seen ones more than one time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/203998",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95958/"
]
} |
204,022 | For example I've got following code: auto z = [](int x) -> int {
if (x > 0) {
switch (x) {
case 2: return 5;
case 3: return 6;
default: return 1;
}
}
return 0;
}; And later I call this several times. In asm code I see external calls with lambda....something... It's becoming harder to read, and I think it can also hurt performance. So maybe I win in meta-programming, but do I lose in asm debugging and performance? Should I avoid modern language features, macros and other meta-programming aspects to be sure of performance and debugging simplicity? | Must I think about compiled machine code when I write my code? No , not when you write your code the first time and don't suffer from any real, measurable performance problems. For most tasks, this is the standard case. Thinking too early about optimization is called "premature optimization", and there are good reasons why D. Knuth called that "the root of all evil" . Yes , when you measure a real, provable performance bottleneck, and you identify that specific lambda construct as the root cause. In this case, it may be a good idea to remember Joel Spolsky's "law of leaky abstractions" and think about what might happen at the asm level. But beware, you may be astonished how small the performance increase will be when you replace a lambda construct with a "not so modern" language construct (at least, when using a decent C++ compiler). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204022",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/12033/"
]
} |
204,075 | If a compiler's work is essentially translating source code into machine level code, can there be any glitch in a compiler, i.e. a faulty "translation?" The same goes for an interpreter: can it fail to output the required content sometimes? I have not heard of any bugs in compilers/interpreters, but do they exist? | In layman's words: All programs can have bugs. Compilers are programs. Ergo, compilers can have bugs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204075",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96033/"
]
} |
204,099 | I am under the impression that an encrypted string cannot be decrypted so the original value is lost forever. However, if the following string always equals "dominic" (my name), then can't there be some logical way to reverse it; being as it's not random nor is it based on the date/time, but there is a logical method to it? 0WrtCkg6IdaV/l4hDaYq3seMIWMbW+X/g36fvt8uYkE= No matter what or how many times I encrypt "dominic" (string), it always equals as above. So, shouldn't there be some way to decrypt a string like that? Example of what I'm talking about: public string EncryptPassword(string password)
{
return Convert.ToBase64String(
System.Security.Cryptography.SHA256.Create()
.ComputeHash(Encoding.UTF8.GetBytes(password)));
} | Encryption can always be reversed. The point of encryption is to take a message and encode it with a secret key so that only another person who has the key can reverse the encryption and read the message. What you're looking at here is hashing , which is not the same as encryption, though cryptographic techniques are often used in implementing hashes. The idea of a hash is that it uses complicated mathematical techniques to build a new value that maps to an old value, which is repeatable. There's no key, and it's not meant to be reversed. A cryptographically strong hash is created with the mathematical property that, if you have value A whose hash is value B , it's very, very difficult to intentionally create another value C that also hashes to B . Hashes don't need to be reversible, because they're used for authentication. If you give me a username and a password, you really don't want me storing that password in my database, because if someone hacks in and gains access to my database, they could get ahold of your password! So instead, I'd store the hash of your password in the database. Then when you log in, I check to see if there's a username that matches yours, with a password entry that matches the hash of the password you sent, and if so you're authenticated, because it's very difficult to create a hash collision (two values that hash to the same value) with a good hash, so I'm almost perfectly certain that the password you used is the right one. The other property of a strong cryptographic hash is that it's very difficult to reverse. You know that the value 0WrtCkg6IdaV/l4hDaYq3seMIWMbW+X/g36fvt8uYkE= is the hash for "dominic" because you just worked it out, but if you didn't know that, and didn't know where to start looking, and all you had was 0WrtCkg6IdaV/l4hDaYq3seMIWMbW+X/g36fvt8uYkE= , it could literally take you billions of years to figure out that the original was "dominic", if the hash is a good one. Again, this is useful to prevent collateral damage in case a password list gets stolen. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204099",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/61753/"
]
} |
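A minimal Python sketch of the verify-by-hash flow described in the answer above. The question's snippet is C#; this is only an illustration, and it uses a salted, iterated hash (PBKDF2) rather than the bare SHA-256 in the question. import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # One-way: store the salt and the derived hash, never the password itself.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    # Re-hash the submitted password and compare digests in constant time;
    # nothing is ever "decrypted", because there is nothing to decrypt.
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("dominic")
assert verify_password("dominic", salt, digest)
assert not verify_password("wrong-guess", salt, digest)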
204,260 | I have always heard that linear search is a naive approach and binary search is better than it in performance due to better asymptotic complexity. But I never understood why is it better than linear search when sorting is required before binary search? Linear search is O(n) and binary search is O(log n) . That seems to be the basis of saying that binary search is better. But binary search requires sorting which is O(n log n) for the best algorithms. So binary search shouldn't be actually faster as it requires sorting. I am reading CLRS in which the author implies that in insertion sort instead of using the naive linear search approach it is better to use binary search for finding the place where item has to be inserted. In this case this seems to be justified as at each loop iteration there is a sorted list over which the binary search can be applied. But in the general case where there is no guarantee about the data set in which we need to search isn't using binary search actually worse than linear search due to sorting requirements? Are there any practical considerations that I am overlooking which make binary search better than linear search? Or is binary search considered better than linear search without considering the computation time required for sorting? | Are there any practical considerations that I am overlooking which making binary search better than linear search? Yes - you have to do the O(n log n) sorting only once, and then you can do the O(log n) binary search as often as you want, whereas linear search is O(n) every time. Of course, this is only an advantage if you actually do multiple searches on the same data. But "write once, read often" scenarios are quite common. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204260",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92918/"
]
} |
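A tiny Python sketch of the "sort once, search many times" point made in the answer above; the data and queries are made up. from bisect import bisect_left

def contains(sorted_data, value):
    # O(log n) lookup; requires sorted_data to already be sorted
    i = bisect_left(sorted_data, value)
    return i < len(sorted_data) and sorted_data[i] == value

data = [42, 7, 19, 3, 88, 61, 5]
queries = [19, 4, 88, 100]

sorted_data = sorted(data)             # pay the O(n log n) sorting cost once

for q in queries:                      # every lookup afterwards is O(log n), not O(n)
    print(q, contains(sorted_data, q))

For a single lookup the plain linear scan may well win; the sort only pays for itself once the same data is searched repeatedly.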
204,379 | I was asked about how to run a suite of 65.000.000.000 tests and I wonder if it is normal to have a project with such a huge amount of tests. Have you worked in projects with this characteristic? | With 65 billion tests, it sounds like you're being asked to test all possible inputs. This is not useful--you'd essentially be testing that your processor functions correctly, not that your code is correct. You should be testing equivalence classes instead. This will drastically reduce your range of test inputs. Also consider whether you can subdivide your system into smaller pieces. Each piece will be easier to test in isolation, and then you can perform some integration tests which bring all the pieces together. If you still want that reassurance that some of those input combinations work, perhaps you could try fuzz testing . You will get some of the benefits of testing lots of different inputs, but without running all 65 billion of them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204379",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/72087/"
]
} |
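To illustrate the equivalence-class and fuzz-testing ideas from the answer above, here is a hedged Python sketch; shipping_cost is an invented stand-in for the real system under test. import random

def shipping_cost(weight_kg):
    # Invented function under test, standing in for the real code.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 if weight_kg <= 1 else 5.0 + 2.0 * (weight_kg - 1)

# Equivalence classes: one representative per class plus the boundaries,
# instead of every representable input.
for w in (0.001, 0.5, 1.0, 1.001, 20.0):
    assert shipping_cost(w) >= 5.0

# The invalid class gets its own representatives.
for bad in (0, -3):
    try:
        shipping_cost(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass

# A small fuzz pass: random samples checked against an invariant, not an exact value.
for _ in range(1000):
    w = random.uniform(0.001, 1000)
    assert shipping_cost(w) >= 5.0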
204,380 | Mention the pros and cons of UIBinder in GWT . I want to develop a big project. Is it flexible to use UIBinder for my project? | With 65 billion tests, it sounds like you're being asked to test all possible inputs. This is not useful--you'd essentially be testing that your processor functions correctly, not that your code is correct. You should be testing equivalence classes instead. This will drastically reduce your range of test inputs. Also consider whether you can subdivide your system into smaller pieces. Each piece will be easier to test in isolation, and then you can perform some integration tests which bring all the pieces together. If you still want that reassurance that some of those input combinations work, perhaps you could try fuzz testing . You will get some of the benefits of testing lots of different inputs, but without running all 65 billion of them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204380",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/95996/"
]
} |
204,431 | Mushroom cultivation requires a fairly precise chemical composition of substrate (a.k.a. growing medium). Let's pretend we're growing shiitakes and that this is the required composition of their substrate: Nitrogen | Benzene | Toluene | Dioxygen Difluoride
5% | 5% | 10% | 80% We want to create an appropriate substrate from materials we have on hand whose chemical composition we know. Material | Nitrogen | Benzene | Toluene | Dioxygen Difluoride
apples | 5% | 0% | 5% | 90%
oranges | 20% | 20% | 50% | 10%
Etc... How does one calculate this? It reminds me of solving matrices in high school. Is this something that can be done with matrices? What is this problem called? What do I need to know to solve it? | This is called Linear Programming . It is NP-Hard for integer constraints, but there are methods of dealing with this; see Jeff Erickson's notes on the subject. The most common method is known as the Simplex Algorithm . Basically you're finding the vertices of shapes formed geometrically by the linear equations representing your constraints. You proceed till you find the optimal one. In this case, the ratio of needed substrate components. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204431",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51196/"
]
} |
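The answer above names the technique; the sketch below shows how the substrate question could be posed as a linear program in Python with NumPy and a recent SciPy. The material data is the question's toy table, and with only two materials an exact blend is usually infeasible, so the point is the formulation rather than the numbers. import numpy as np
from scipy.optimize import linprog

# Per material: nitrogen, benzene, toluene, dioxygen difluoride (as fractions).
materials = {
    "apples":  [0.05, 0.00, 0.05, 0.90],
    "oranges": [0.20, 0.20, 0.50, 0.10],
}
target = [0.05, 0.05, 0.10, 0.80]

names = list(materials)
A_eq = np.array([materials[m] for m in names]).T      # one balance row per component
A_eq = np.vstack([A_eq, np.ones(len(names))])         # the fractions must sum to 1
b_eq = np.array(target + [1.0])

# Zero objective: we only ask whether *some* feasible blend exists;
# x[i] is the fraction of material i in the substrate.
res = linprog(c=np.zeros(len(names)), A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * len(names))

if res.success:
    print({n: round(x, 3) for n, x in zip(names, res.x)})
else:
    print("No exact blend from these materials:", res.message)

With a real material list you would add more columns, and you could swap the zero objective for cost minimization, or add slack variables to minimize deviation from the target instead of demanding an exact match.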
204,500 | While getting my code reviewed here the issue of using the const keyword came up. I understand that it is used for implementing read-only behaviour on variables. I am confused about what are the various situations when it can be useful. Should it be used for the sake of clarity in function prototypes? Should it be used as a security measure during code development? Should it be used in the scope of various functions for declaring run-time constants? Should it be used at all? These question are just examples of the confusion that I am facing. The general confusion is When should be the const keyword used in C programming? What are the various types of benefits that can be gained by using this keyword in C? Are there any cons of using const keyword? It has been pointed that this question may be too broad due to all these questions in the detail of my question. I just wanted to clarify that these questions are just to clarify the confusion regarding the main question. When and for what purposes should the const keyword be used in C for variables? It can also be rephrased as The proper use of const keyword in C` with the pros and cons of the same. | When reviewing code, I apply the following rules: Always use const for function parameters passed by reference where the
function does not modify (or free) the data pointed to. int find(const int *data, size_t size, int value); Always use const for constants that might otherwise be defined using a #define or an enum. The compiler can locate the data in read-only memory (ROM) as a result (although the linker is often a better tool for this purpose in embedded systems). const double PI = 3.14; Never use const in a function prototype for a parameter passed by value . It has no meaning and is hence just 'noise'. // don't add const to 'value' or 'size'
int find(const int *data, size_t size, int value); Where appropriate, use const volatile on locations that cannot be changed by the program but might still change. Hardware registers are the typical use case here, for example a status register that reflects a device state: const volatile int32_t *DEVICE_STATUS = (int32_t*) 0x100; Other uses are optional. For example, the parameters to a function within the function implementation can be marked as const. // 'value' and 'size' can be marked as const here
int find(const int *data, const size_t size, const int value)
{
... etc or function return values or calculations that are obtained and then never change: char *repeat_str(const char *str, size_t n)
{
const size_t len = strlen(str);
const size_t buf_size = 1 + (len * n);
char *buf = malloc(buf_size);
... These uses of const just indicate that you will not change the variable; they don't change how or where the variable is stored. The compiler can of course work out that a variable is not changed, but by adding const you allow it to enforce that. This can help the reader and add some safety (although if your functions are big or
complicated enough that this makes a great difference, you arguably have other
problems). Edit - eg. a 200-line densely coded function with nested loops and many long
or similar variable names, knowing that certain variables never change might
ease understanding significantly. Such functions have been badly designed or
maintained. Problems with const . You will probably hear the term "const poisoning".
This occurs when adding const to a function parameter causes 'constness' to
propagate. Edit - const poisoning: for example in the function: int function_a(char * str, int n)
{
...
function_b(str);
...
} if we change str to const , we must then ensure that function_b also takes
a const . And so on if function_b passes the str on to function_c ,
etc. As you can imagine this could be painful if it propagates into many
separate files/modules. If it propagates into a function that cannot be
changed (eg a system library), then a cast becomes necessary. So sprinkling const around in existing code is perhaps asking for trouble. In new code
though, it is best to const qualify consistently where appropriate. The more insidious problem of const is that it was not in the original
language. As an add-on it doesn't quite fit. For a start it has two meanings
(as in the rules above, meaning "I'm not going to change this" and "this cannot be modified"). But more than that, it can be dangerous. For example, compile and
run this code and (depending upon the compiler/options) it may well crash when
run: const char str[] = "hello world\n";
char *s = strchr(str, '\n');
*s = '\0'; strchr returns a char* , not a const char* . Because its parameter is a const char* , the implementation must cast that parameter to char* to produce its return value, and in this case that
casts away the real read-only storage property. Edit: - this applies generally to vars in read-only memory. By 'ROM', I mean not just physical ROM but any memory that is write-protected, as happens to the code section of programs run on a typical OS. Many standard library functions behave in the same way, so beware: when you
have real constants (ie. stored in ROM) you must be very careful not to
lose their constness. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204500",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92918/"
]
} |
204,559 | I was working with a query I wrote today and had to change the code in the WHERE clause to use an IN(list of stuff) filter instead of using something like item_desc = 'item 1'
OR item_desc = 'item 2'
OR item_desc = 'item 3'
OR item_desc = 'item 4' The above ran for 15 minutes and returned nothing, yet the following gave me my result set in 1.5 minutes item_desc IN (
'item 1'
,'item 2'
,'item 3'
,'item 4'
) I did this in SQL and am wondering why the IN(list of items) performed so much faster than the OR statement. -- EDIT --
SQL Server 2008; I apologize for not including this bit of info in the first place. Here is the query in its entirety using the OR statements: DECLARE @SD DATETIME
DECLARE @ED DATETIME
SET @SD = '2013-06-01';
SET @ED = '2013-06-15';
-- COLUMN SELECTION
SELECT PV.PtNo_Num AS 'VISIT ID'
, PV.Med_Rec_No AS 'MRN'
, PV.vst_start_dtime AS 'ADMIT'
, PV.vst_end_dtime AS 'DISC'
, PV.Days_Stay AS 'LOS'
, PV.pt_type AS 'PT TYPE'
, PV.hosp_svc AS 'HOSP SVC'
, SO.ord_no AS 'ORDER NUMBER'
--, SO.ent_dtime AS 'ORDER ENTRY TIME'
--, DATEDIFF(HOUR,PV.vst_start_dtime,SO.ent_dtime) AS 'ADM TO ENTRY HOURS'
, SO.svc_desc AS 'ORDER DESCRIPTION'
, OSM.ord_sts AS 'ORDER STATUS'
, SOS.prcs_dtime AS 'ORDER STATUS TIME'
, DATEDIFF(DAY,PV.vst_start_dtime,SOS.prcs_dtime) AS 'ADM TO ORD STS IN DAYS'
-- DB(S) USED
FROM smsdss.BMH_PLM_PtAcct_V PV
JOIN smsmir.sr_ord SO
ON PV.PtNo_Num = SO.episode_no
JOIN smsmir.sr_ord_sts_hist SOS
ON SO.ord_no = SOS.ord_no
JOIN smsmir.ord_sts_modf_mstr OSM
ON SOS.hist_sts = OSM.ord_sts_modf_cd
-- FILTER(S)
WHERE PV.Adm_Date BETWEEN @SD AND @ED
AND SO.svc_cd = 'PCO_REMFOLEY'
OR SO.svc_cd = 'PCO_INSRTFOLEY'
OR SO.svc_cd = 'PCO_INSTFOLEY'
OR SO.svc_cd = 'PCO_URIMETER'
AND SO.ord_no NOT IN (
SELECT SO.ord_no
FROM smsdss.BMH_PLM_PtAcct_V PV
JOIN smsmir.sr_ord SO
ON PV.PtNo_Num = SO.episode_no
JOIN smsmir.sr_ord_sts_hist SOS
ON SO.ord_no = SOS.ord_no
JOIN smsmir.ord_sts_modf_mstr OSM
ON SOS.hist_sts = OSM.ord_sts_modf_cd
WHERE OSM.ord_sts = 'DISCONTINUE'
AND SO.svc_cd = 'PCO_REMFOLEY'
OR SO.svc_cd = 'PCO_INSRTFOLEY'
OR SO.svc_cd = 'PCO_INSTFOLEY'
OR SO.svc_cd = 'PCO_URIMETER'
)
ORDER BY PV.PtNo_Num, SO.ord_no, SOS.prcs_dtime Thank you, | Oleski's answer is incorrect. For SQL Server 2008, an IN list gets refactored to a series of OR statements. It may be different in say MySQL. I'm fairly certain that if you generated actual execution plans for both your queries they would be identical. In all likelihood the second query ran faster because you ran it second , and the first query had already pulled all the data pages from the database and paid the IO cost. The second query was able to read all the data from memory and execute a lot faster. Update The actual source of the variance is likely that the queries are not equivalent . You have two different OR lists below: WHERE PV.Adm_Date BETWEEN @SD AND @ED
AND SO.svc_cd = 'PCO_REMFOLEY'
OR SO.svc_cd = 'PCO_INSRTFOLEY'
OR SO.svc_cd = 'PCO_INSTFOLEY'
OR SO.svc_cd = 'PCO_URIMETER' and later WHERE OSM.ord_sts = 'DISCONTINUE'
AND SO.svc_cd = 'PCO_REMFOLEY'
OR SO.svc_cd = 'PCO_INSRTFOLEY'
OR SO.svc_cd = 'PCO_INSTFOLEY'
OR SO.svc_cd = 'PCO_URIMETER' In both those WHERE clauses, operator precedence (where AND is handled before OR) means that the actual logic run by the engine is: WHERE (ConditionA AND ConditionB)
OR ConditionC
OR ConditionD
OR ConditionE If you replace the OR lists with an IN expression, the logic will be: WHERE ConditionA
AND (ConditionB OR ConditionC OR ConditionD OR ConditionE) Which is radically different. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204559",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/59333/"
]
} |
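The precedence trap called out in the answer above is easy to reproduce outside SQL. In Python, as in SQL, and binds tighter than or, so the two groupings below are not equivalent; the flag names merely echo the question's columns and the snippet is only a toy. from itertools import product

for in_date_range, is_remfoley, is_insrtfoley in product([False, True], repeat=3):
    ungrouped = in_date_range and is_remfoley or is_insrtfoley        # (A and B) or C
    grouped = in_date_range and (is_remfoley or is_insrtfoley)        # A and (B or C)
    if ungrouped != grouped:
        print(in_date_range, is_remfoley, is_insrtfoley, "->", ungrouped, "vs", grouped)

# The rows printed are exactly the cases the two filters treat differently: a row
# outside the date range still passes the ungrouped version whenever the OR'd flag is true.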
204,711 | We were migrating from Java 6 to Java 7 . The project is behind schedule and risks being dropped, in which case it will continue to use Java 6. What are the specific improvements in Java 7 that we could go back to our manager with and convince him it is important to use JDK 7? Looking for bug fixes that I could highlight in Oracle Java 7 (with respect to Java 6). Fixes in security, performance, Java 2D /printing, etc. will be more sellable in my case. Compiler fixes for example, will not of much use. [I am going through many sites like Oracle adoption guide , bug database, questions on Stack Overflow]. Update: Thanks for the answers. We rescheduled the update to next release. Closest we got was security. Accepting the highest voted answer. | Java 6 has reached EOL in February this year and will no longer receive public updates (including security) unless you buy very expensive enterprise support. That should be all the reason needed. Besides, overwhelming evidence suggests that backwards compatibility for Java runtimes is excellent. Chances are that you just have to replace the Java 6 installations with Java 7 and all applications will just continue to work without any problems. Of course this is not guaranteed and extensive tests are recommended to confirm that there will indeed be no problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204711",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3005/"
]
} |
204,729 | I have just completed my Bachelors degree in IT field. I have deep interest in coding and really want to be a professional in it. Now, apart from college courses, I have been learning programming(C#) on my own (college level programming is too basic). Now I feel I need a little more time to be close to professional programmer. But some of my seniors say that corporate world programming is too different from bookish programming, hence there is no point in wasting time. (They are not programmers themselves, this may be probably what they heard). Would I benefit by reaching an advance level of C#? or as mediocre level is sufficient to break interviews, higher levels do not matter to firms because they rely on their training purely to teach how things work in corporate world and learning more won't help me much?
Please if there are some professional programmers who can help, I promise this is something which about every student interested in programming want to ask at my stage. "How do you actually change from learner to professional in field of programming?" - keep learning until you are perfect or joining a firm is must once basics are covered? | There are many differences between programming in school and in the real world. I'm not sure there is such a thing as corporate programming . Depending on where you're actually working, there will be huge differences. There will also be huge differences depending on the tasks at hand. However there are still some common issues. the real world code life-cycle is very different from college homework. In real world programming you are usually working on some existing code base. One of the biggest issues is to avoid breaking compatibility with code used by your customers (who may be internal customers or external customers, depending on your actual workplace and case). The code you are writing will also probably be used for years afterward (this depends: the issue is not the same for a web site or code embedded in some devices). If you want to get ready for this, get in the habit of writing unit tests and functional tests for every piece of code. That is not always done with real world code, but it should make your life simpler both in college and in the corporate world. design/requirements are usually much fuzzier in the real world than in a collegiate atmosphere. When writing professional code, someone has to define the code's purpose, and you aren't simply given toy problems or even well known problems. It is very likely that sooner or later it will be you who is doing the designing! Customers usually don't know what they want (and even when they know what they want it may not be what they need), and managers usually just describe the large picture, leaving many details and choices to the programmers. Depending on the methods followed ("agile" -vs- "V cycle", etc.), choices and detailed problem definition may occur sooner or later, but you should at least keep an open mind and wonder if you're actually doing what is required. You may also consider changing requirements if the ongoing task is too hard to code or is inefficient. You may end up writing something useless or overly complicated anyway because the customer or your manager didn't get your point or they disagree with you. Still, you should always question requirements (it's a survival skill). Also keep in mind that requirements may change in the middle of a task, and you should be ready for that. in college when you get an assignment you are usually supposed to write some code. In the real world, you usually start by checking if you can use existing code instead: reuse or change parts of a project, use or buy libraries, etc. If you find existing code that fits the assignment, you may end-up using it or not (there may be maintenance issues, performance issues, copyright issues, or even pricing issues), but the option of reusing existing code should usually be considered. There are obviously many other issues related to teamwork, project scale, etc. But the above points are issues you are very likely to see in corporate environments but do not have to consider in your college assignments. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204729",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96583/"
]
} |
204,739 | I jumped in to a project and I see that the other developers are adding a lot of logic in the setters of synthesized properties.
I understand how this works, but I think that it makes it hard to understand the flow of the program; while reading the code, whenever I see self.something = whatever , I always check if something 's setter is overridden. What are your opinions around this topic? Do you think this is a sign of bad architecture or an elaborate solution? I would be glad to read more on this if you have relevant links/sources, it's just too hard to get good google results so I decided to ask here as well. Thanks for any answer and please note that I'm talking about objective C in case you haven't seen the tag (even though this shouldn't be a language specific issue I guess). | Is it considered a bad practice to add logic in a property setter? No Properties were invented to allow class designers to have logic attached to a convenient interface of field access and assignment. How much is too much? It depends on the responsibilities of the class. Here are some things that are reasonable to put in a property setter: update some derived values notify observers that the class state has changed propagate the change to some contained object propagate the change to a backing store perform validation Programming is easier when classes have interfaces that make it obvious what the class can do, without making callers think about how it is being done. Putting logic behind property setters allows classes to hide their implementation behind a simple interface. For some classes, no methods are required. Just turn the knobs by setting properties and read the output by getting properties. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204739",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96586/"
]
} |
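The question is about Objective-C, but the bullet list in the answer above translates directly to any language with properties. Here is a Python sketch of a setter that validates, keeps a derived value in sync and notifies observers; the class and field names are invented. class Thermostat:
    def __init__(self):
        self._target_celsius = 20.0
        self._target_fahrenheit = 68.0        # derived value kept in sync by the setter
        self._observers = []

    def add_observer(self, callback):
        self._observers.append(callback)

    @property
    def target_celsius(self):
        return self._target_celsius

    @target_celsius.setter
    def target_celsius(self, value):
        if not 5.0 <= value <= 35.0:          # validation
            raise ValueError("target out of safe range")
        self._target_celsius = value
        self._target_fahrenheit = value * 9 / 5 + 32   # update the derived value
        for callback in self._observers:               # notify observers of the change
            callback(value)

t = Thermostat()
t.add_observer(lambda v: print("target changed to", v))
t.target_celsius = 22.5    # callers just assign; the logic stays behind the simple interface

The caller sees a plain assignment, which is exactly the simple-interface-over-hidden-logic point the answer makes.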
204,773 | JVM supports so many languages other than Java like Groovy,Clojure,Scala etc which are functional languages unlike Java(I am referring to Java before Version 8 where Lambda's are not supported) that doesn't support functional capabilities.On a high level what makes the JVM so versatile that it can support both Object Oriented as well as Functional languages? | Compared to other VMs, the JVM actually isn't particularly versatile . It directly supports statically typed OO. For everything else, you have to see what parts you can use, and how you can build everything else your language needs on top of those parts. For example, until Java 7 introduced the invokedynamic bytecode, it was very hard to implement a dynamically typed OO language on the JVM - you had to use complex workarounds that were bad for performance and resulted in horribly bloated stack traces. And yet, a bunch of dynamic languages (Groovy, Jython, JRuby among others) were implemented on the JVM before that. Not because the JVM is so versatile, but because it is so widespread, and because it has very mature, well-supported and high-performing implementations. And, perhaps even more important, because there is a huge amount of Java code out there doing pretty much anything, and if your language runs on the JVM, you can easily offer facilities to integrate with that code. Basically, having your language run on the JVM is the 21st century version of offering interoperability with C. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204773",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/60189/"
]
} |
204,786 | If I already have integration test for my program, and they all passed, then I have a good feel that it will work. Then what are the reasons to write/add unit tests? Since I already have to write integration tests anyway, I will like to only write unit test for parts that not covered by integration tests. What I know the benefit of unit test over integration test are Small and hence fast to run (but adding new unit to test something is already tested by integration test means my total test suit get larger and longer to run) Locate bug easier because it only test one thing (but I can start write unit test to verify each individual part when my integration test failed) Find bug that may not be caught in integration test. e.g. masking/offsetting bugs. (but if my integration tests all passes, which means my program will work even some hidden bug exists. So find/fix these bugs are not really high priority unless they start breaking future integration tests or cause performance problem ) And we always want to write less code, but write unit tests need lots more code (mainly setup mock objects). The difference between some of my unit tests and integration tests is that in unit tests, I use mock object, and in integration tests, I use real object. Which have lots of duplication and I don't like duplicated code, even in tests because this add overhead to change code behavior (refactor tool cannot do all work all the time). | You've laid out good arguments for and against unit testing. So you have to ask yourself, " Do I see value in the positive arguments that outweigh the costs in the negative ones? " I certainly do: Small-and-fast is a nice aspect of unit testing, although by no means the most important. Locating-bug[s]-easier is extremely valuable. Many studies of professional software development have shown that the cost of a bug rises steeply as it ages and moves down the software-delivery pipeline. Finding-masked-bugs is valuable. When you know that a particular component has all of its behaviors verified, you can use it in ways that it was not previously used, with confidence. If the only verification is via integration testing, you only know that its current uses behave correctly. Mocking is costly in real-world cases, and maintaining mocks is doubly so. In fact, when mocking "interesting" objects or interfaces, you might even need tests that verify that your mock objects correctly model your real objects! In my book, the pros outweigh the cons. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204786",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92356/"
]
} |
204,841 | We currently have a complex VC++ software application, which uses a library like ObjectARX to build the dll. I feel there are many features in C# like Collections, Generics, and other libraries which can be used to build the current application in a better and efficient way. I have been thinking about it, but I am not sure on how to present it to my supervisor and colleagues. I would appreciate any help, to help me think in the right direction and highlight the points to bring it to the team. Few points that I thought was; With some current examples, implementing it in C# with the features. Highlight the development time is comparatively lesser in C# than C++. Use a Design Architecture. | Short answer You should consider that it's a very risky and costly idea that may not give you as many benefits as you think it might. Long answer You should consider the following: C++ is a language that can be used at a very high level, that is cross platform (though that depends on how much you used the VC proprietary extensions) and for which many very mature tools exist. C++11 will add even more juicy bit to handle annoying use cases. If you're thinking about a full rewrite, don't forget that rewriting fully debugged code is time you won't be implementing any new features. If you don't have a clear benefit for using C#, this is throwing money and time through the window. If the team knows C++, then for a long time writing code in C# will be slower despite any advantage that C# brings. You have two option for migrating your apps : restart from scratch (and see http://joelonsoftware.com/articles/fog0000000069.html , already posted), or add new features by maintaining a mapping layer between the C++ and C# code. While C# is quite good as far as calling native code go, it's still some pretty complex code to write. If you either use P/Invoke or C++/CLI, both will force you to know much deeper detail about the platform than would be required for a pure C# solution. Also, you'll spend an awful lot of time marshalling data between managed and native code. A better option may be COM, though I hope you like ATL programming. The biggest benefits of C# are its simplicity and garbage collector that free you from thinking about a lot of corner cases. That mean it can be developed by developers that are less hardcore than what you need for C++. In your case, your team already know C++ so those benefit are much less present. If you use unique_ptr, shared_ptr, RAII and such, much of the dangerous part of C++ can be managed. Yes, you have more options to shoot yourself in the foot, but you avoid the dangerous parts. But still... If you're not talking about a full rewrite, yes, it could be possible to develop some part of the application in C#. But always keep it mind the cost of the mapping layer between C++ and C#. I would recommend exporting your C# parts as COM modules and calling that from C++. Be sure it bring a real advantage. If you must constantly convert vector<> to IList<> and must constantly convert your C++ type to C# one, any speed advantage of C# will be lost. You gain most of converting to C# and .NET when everything can stay inside the CLR. Getting everything inside the CLR mean a complete rewrite of a complex application and that is dangerous proposition. All in all, I wouldn't recommend it. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204841",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/94998/"
]
} |
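As a rough illustration of the interop cost mentioned in the answer (everything here is hypothetical: the DLL name, the exported function and its signature are invented, not part of the application being discussed), even a single C++ function exposed to C# through P/Invoke makes you deal with calling conventions, string marshalling and buffer ownership by hand:

    using System;
    using System.Runtime.InteropServices;
    using System.Text;

    internal static class LegacyEngine
    {
        // Assumes the native side exports something like:
        //   extern "C" int ComputePath(const char* input, char* output, int outputSize);
        [DllImport("LegacyEngine.dll", CallingConvention = CallingConvention.Cdecl, CharSet = CharSet.Ansi)]
        private static extern int ComputePath(string input, StringBuilder output, int outputSize);

        public static string ComputePathManaged(string input)
        {
            var buffer = new StringBuilder(1024);                  // caller-allocated buffer
            int rc = ComputePath(input, buffer, buffer.Capacity);  // marshalling happens here
            if (rc != 0)
                throw new InvalidOperationException("Native call failed with code " + rc);
            return buffer.ToString();
        }
    }

Every such boundary has to be written, tested and kept in sync with the C++ headers manually, which is one reason the answer recommends a coarse-grained split (for example COM components) over chatty per-call marshalling.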
204,904 | Not only are our sprint planning meetings not fun, they're downright dreadful. The meetings are tedious, and boring, and take forever (a day, but it feels like a lot longer). The developers complain about it, and dread upcoming plannings. Our routine is pretty standard (user story inserted into sprint backlog by priority >> story is taken apart to tasks >> tasks are estimated in hours >> repeat), and I can't figure out what we're doing wrong. How can we make the meetings more enjoyable? ... Some more details, in response to requests for more information: Why are the backlog items not inserted and prioritized before sprint kickoff? User stories are indeed prioritized; we have no idea how long they'll take until we break them down into tasks! From the (excellent) answers here, I see that maybe we shouldn't estimate tasks at all, only the user stories. The reason we estimate tasks (and not stories) is because we've been getting story-estimates terribly wrong -- but I guess that's the subject for an altogether different question. Why are developers complaining? Meetings are long. Meetings are monotonous. Story after story, task after task, struggling (yes, struggling) to estimate how long it will take and what it involves. Estimating tasks makes user-story-estimation seem pointless. The longer the meeting, the less focus in the room. The less focused colleagues are, the longer the meeting takes. A recursive hate-spiral develops. We've considered splitting the meeting into two days in order to keep people focused, but the developers wouldn't hear of it. One day of planning is bad enough; now we'll have two ?! Part of our problem is that we go into very small detail (in order to get more accurate estimations). But when we estimate roughly, we go way off the mark! To sum up the question: What are we doing wrong? What additional ways are there to make the meeting generally more enjoyable? | Make estimating easier Break your sprint planning down. Do you need to estimate the individual tasks? I've done sprint planning two ways: Stories are estimated in story points and then tasks are estimated in hours Stories are estimated in story points and tasks simply fall under that with no estimate Of the two, I prefer the second option. I find that not estimating tasks gives more freedom to developers to cope with changes. If a task no longer makes sense (how many times have you found out that a task isn't applicable or was already done in a previous sprint) you simply throw it out without any penalty, or you may have to change a current task into something new, possibly breaking it up. You're really being redundant if you estimate both, as the sum of the tasks should represent the story points and vice versa. What value do you really gain by this other than knowing how much time individual tasks will take? If you find yourself with task sizes that really vary enough to make a difference, I would suggest breaking those tasks down into smaller, more homogeneous chunks. By doing this, you can cut down on the time you spend in sprint planning . Stories get estimated during sprint planning, and when you start the sprint you can put down all the tasks you can think of that make up that story. Obviously if there are points that you come across in estimating the story that you know will have to be dealt with in a task, you can add that onto the story information and put it as a task. Estimate in Ideal units Story points are meant to be in ideal units such as ideal man hours or ideal work days. 
These are meant to mean that given the perfect day every day, where you had no interruptions, no meetings, and everything went according to plan, you could accomplish the task in X days. Now everyone knows that this simply isn't true, but the wonderful thing about statistics is that it doesn't have to be. What I mean by this is that after a while of estimating these in ideal days, you realize that maybe it takes an extra 25% of the time you estimate on average to complete a story. Lets say you had estimated 4 ideal work days, and instead it took you 5. Over time, you keep track of this and then you have a rough idea of the conversion from ideal days to real days. Your first instinct would be to try and compensate for this by over estimating, and you would likely be wrong. The main thing here is to stay consistent. That way, your long term average remains the same. Sure sometimes, it'll be under and sometimes it'll be over, but the more you estimate, the better off you are. If you find that you still can't get a decent estimate, maybe that means you don't know enough about the story to estimate it properly. Talk about the stories When you estimate, everyone should have a rough idea of what will need to be done, from start to finish, of what it will take for this story to be complete. You don't need to know every detail, but enough that you think you, yourself, could undertake the story. If you don't have that level of confidence, you probably shouldn't be estimating it. If you say "Well this story is too big for us to know most of the details" then that's an indication that the story is too big, and should be broken down. Stories, at least in my experience, have been small enough that one person, if need be, could work on it alone and accomplish it within a week or two. This also will help to solve your second point in the edit, which is too much estimation . Instead of estimating every single task for every single story, you simply estimate the story as a whole, which should help to remove a lot of the estimating. As for making the meetings less monotonous, I would suggest planning poker, which you can see more information about above. Make planning more engaging Estimate using Planning Poker As far as making estimation more fun, have you tried planning poker ? It's the way that I've always done planning for all my sprints on multiple teams, and it's a good way to keep everyone involved, as every person has to at least pick SOMETHING. There's also a fair amount of fun involved when everyone on the team picks 3, and someone puts down a 20 and has to explain themselves, or when everyone on the team puts down a 5 but the manager puts down an 8 (who's gonna argue with the boss when he wants to give you more time!). To do this, all you need are some planning poker cards , which we often make on the back side of index cards, or using normal playing cards with values attached to face cards. Nothing fancy, and it keeps everyone focused. Just remember that trying to do any task for an entire day (including planning poker) takes a toll on productivity. Many sets of cards come with a coffee card for a reason; if someone is feeling burnt out, give the team a break to recharge and pick it up when everyone is fresh! As an alternative to physical cards , you could also look at electronic cards . 
The real benefits here are automated tracking of results, tracking user stories to be estimated, and allowing everyone to show their cards at once to avoid "cheating" (where one person's estimate is influenced by another's due to being able to see their card). Obviously this requires everyone to have a computer and the ability to focus on the task at hand, though, so use it at your own discretion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204904",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43798/"
]
} |
204,930 | I've run into this challenge a couple of times and I'm hoping someone can provide some references, training or advice on how to explain the difference between a Product Backlog Item and a Task in TFS. I understand and have explained that a Product Backlog Item is the "What" and the Task is the "How". I have also explained that the PBI is the requirement and the Task is how the requirement is met. I'm repeatedly met with blank stares and head scratching when I explain this. It seems that the Software Engineers I explain this to cannot make the distinction. It is all the same to them. I believe my other challenge is that I am not able to effectively illustrate why it is important to make the distinction. | The "Product Backlog Item" is indeed the What, the functionality that needs to be built. The Task describes the steps that need to be taken to get there. Many teams are not used to decomposing into tasks; they just build what the spec says. For these people it's hard to see them as two separate things. Maybe a simple anecdote would help: think of the Product Backlog Items as the items on a shopping list for a vacation. Maybe a "tent", a "fishing rod", a "prepare car for travel". The Tasks for the "tent" item would be "Describe tent requirements", "Compare tents online", "Get advice from friends with outdoor experience", "Go to outdoor shop", "Buy tent", "Set up tent in backyard to verify completeness", "Pack tent for travel". The Tasks for the fishing rod will be very similar, but the tasks for "prepare car for travel" are probably very different: "Check requirements for states/countries on desired route", "Buy safety vest", "Replace expired contents of first aid kit", "Inspect spare tire", "Schedule appointment with garage to have engine checked", "Go to garage to have engine checked", "Go to state agency to buy highway pass", "Check car insurance". This clearly separates the question of what the product owner wants from what the team needs to do. Unless of course the product owner has already decomposed the work into actionable items on the Product Backlog, in which case you also need to talk to them. As I said, many developers think they already have enough information and know what to do; they don't want to decompose the What into How steps, and they'll get there when they get there. When you start talking to them about tracking sprint progress, improving estimations, tracking work that was forgotten during sprint planning and other items that have to do with professional improvement, ask them how they and their team will know where they can improve and how they know they're really done. If they can come up with a system without creating tasks and it works, then that's fine, but chances are very low that they actually can. Before trying to work with TFS and the agile tools, your team will need to understand how this all works. The best way is to have them work with a paper board, which is visible on the work floor to all. Later, when the process is understood better, moving to the tools will help. Without that understanding, the tools won't be of much use and will meet a lot of resistance. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/204930",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/96325/"
]
} |
205,201 | I'm currently an intern at a government contractor and am getting the (obnoxiously unavoidable) feeling that Word is the de-facto standard in the software development process. Its binary format makes it very difficult to collaborate on documents in the way I am used to collaborating on a code base. The use of plain text markup (with such languages as LaTeX, Markdown, ReStructured Text, etc.) allows for a diff-friendly document that works well with the normal workflow of a developer. As for comments where the language doesn't support them (e.g. Markdown), there are many existing solutions that allow collaborative comments on code bases (e.g. GitHub, Bitbucket) that could easily be applied to other plain-text files containing markup. I understand the need to cooperate with technologically illiterate management necessitates some sort of graphical interface to everything, but such interfaces exist for most of these formats. For example, LaTeX has a 'fork' of sorts called LyX that puts a graphical front-end on a plain-text, LaTeX-like syntax. This file, even though primarily graphical in its editing, is still diff-friendly. (It even has Word-style comments.) Many of these solutions could yet be used instead of Word, and the vast majority are free or open-source. However, we use Word even for our own internal documentation that nobody else sees. We work with text for a significant chunk of our career---why is documentation so special? Aside from the trivial "We didn't know any better and now we're stuck here", there must be reasons supporting such a decision. What challenges face the software development process in using plain-text documentation in lieu of other, more colloquial (and debatably less powerful) means of writing documents? Since the reasons will differ, perhaps answering for these two closely related scenarios separately should be in order. Using plain-text documentation from the start Migrating to plain-text documentation over time | Lots of participants in the software development process are not developers, and need the ability to interact with documentation regardless. Should QA/Marketing use Word and developers use something else completely? It would be inconsistent, it would add another tool into the maintenance chain, and the IT portion of a company may have no idea what to do with the files, when they know perfectly well how to manage/maintain a Word document store. Above all else, many non-developers have spent years in university using Microsoft Word to turn in their work, frequently having had actual training classes in just how to use Word. They know it far better than alternatives. I can hardly tell the difference when I'm working in Open Office vs Word, but when I had my sister-in-law use it to save money, a week later she had her husband buy her Word and proclaimed "Since I got Word I love my new laptop!". Think about the mindset that conflates things like that: a slight change in what they're used to is not 'slight' to them. Modern Word supports version comparison, annotations with version tracking, and change merging as well. It may not be as straightforward as merging code, but I've seen many project managers do it easily, so surely devs can manage to do it too. Beyond that, it has become quite common amongst dev teams to do documentation in wikis because it does get back to the textual representation while staying simple enough that non-developers can make edits. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205201",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/82855/"
]
} |
205,411 | I always wondered why git prefers hashes over revision numbers. Revision numbers are much clearer and easier to refer to (in my opinion): There is a difference between telling someone to take a look at revision 1200 or commit 92ba93e! (Just to give one example). So, is there any reason for this design? | A single, monotonically increasing revision number only really makes sense for a centralized version control system, where all revisions flow to a single place that can track and assign numbers. Once you get into the DVCS world, where numerous copies of the repository exist and changes are being pulled from and pushed to them in arbitrary workflows, the concept just doesn't apply. (For example, there's no one place to assign revision numbers - if I fork your repository and you decide a year later to pull my changes, how could a system ensure that our revision numbers don't conflict?) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205411",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/78932/"
]
} |
205,459 | Suppose you have the following: +--------+ +------+
| Animal | | Food |
+-+------+ +----+-+
^ ^
| |
| |
+------+ +-------+
| Deer | | Grass |
+------+ +-------+ Deer inherits from Animal , and Grass inherits from Food . So far so good. Animal objects can eat Food objects. Now let's mix it up a bit. Let's add a Lion which inherits from Animal . +--------+ +------+
| Animal | | Food |
+-+-----++ +----+-+
^ ^ ^
| | |
| | |
+------+ +------+ +-------+
| Deer | | Lion | | Grass |
+------+ +------+ +-------+ Now we have have a problem because Lion can eat both Deer and Grass , but Deer is not Food it is Animal . With out using multiple inheritance, and using object oriented design, how do you solve this problem? FYI: I used http://www.asciiflow.com to create the ASCII diagrams. | IS A relationships = Inheritance Lion is an animal HAS A relationships = Composition Car has a wheel CAN DO relationships = Interfaces ICanEat | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205459",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/97101/"
]
} |
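To spell out the "CAN DO relationships = Interfaces" answer in code, here is a small C# sketch (the IEdible interface, the NutritionValue member and the method bodies are invented for illustration; only ICanEat is named in the answer): anything that can be eaten implements an interface, so Deer never has to inherit from Food.

    using System;

    public interface IEdible
    {
        int NutritionValue { get; }
    }

    public interface ICanEat
    {
        void Eat(IEdible food);
    }

    public abstract class Animal : ICanEat
    {
        public void Eat(IEdible food)
        {
            Console.WriteLine(GetType().Name + " eats something worth " + food.NutritionValue);
        }
    }

    public class Grass : IEdible
    {
        public int NutritionValue { get { return 5; } }
    }

    public class Deer : Animal, IEdible      // an Animal that can also be eaten
    {
        public int NutritionValue { get { return 50; } }
    }

    public class Lion : Animal { }           // eats Deer and Grass alike

A Lion can now Eat(new Deer()) or Eat(new Grass()) through the same method, and whether a particular animal should refuse meat or grass can be handled with further interfaces or a runtime check rather than with multiple inheritance.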
205,514 | There are some very experienced folks on Stack Overflow who always talk about the C standard. People seem to not like non-portable solutions, even if they work for me. Ok, I understand that the standard needs to be followed, but doesn't it put shackles on a programmer's creativity? What are the concrete benefits that come from following a standard? Especially since compilers may implement the standard slightly differently. | There are a few reasons why sticking to the standard is a good thing. Being locked into a compiler sucks hard. You're completely at the mercy of a group of developers with their own agenda. They're obviously not out to get you or anything, but if your compiler starts lagging on optimizations, new features, security fixes etc., too bad; you're stuck. In extreme cases some companies have to start patching whatever tool they have made themselves dependent on. This is a huge waste of money and time when there are other working tools out there. Being locked into a platform sucks harder. If you're pushing software on Linux and want to switch to Windows because you realize your market is really there, you're gonna have a heck of a time changing every non-portable hack you've got in your code to play nice with both GCC and MSVC. If you've got several pieces of your core design based around something like that, good luck! Backward-incompatible changes suck the hardest. The standard will never break your code (ignore Python). Some random compiler writer, though, might decide that this implementation-specific add-on really is not worth the trouble and drop it. If you happen to rely on it, then you're stuck on whatever old outdated version it was last in. So the overriding message here is that sticking to the standard makes you more flexible . You have a more limited language, sure, but you have more: Libraries, Support (people know the standard, not every intricate compiler-specific hack), Available platforms, Mature tools, Security (future-proofness). It's a delicate balance, but completely ignoring the standard is definitely a mistake. I tend to organize my C code to rely on abstractions that might be implemented in a non-portable way, but that I can transparently port without changing everything that depends on the abstraction. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205514",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/92165/"
]
} |
205,606 | I'm working on a website that will allow users to log in using OAuth credentials from the likes of Twitter, Google, etc. To do this, I have to register with these various providers and get a super-secret API key that I have to protect with pledges against various body parts. If my key gets ganked, the part gets yanked. The API key has to travel with my source, as it is used at runtime to perform authentication requests. In my case, the key must exist within the application in a configuration file or within the code itself. That isn't a problem when I build and publish from a single machine. However, when we throw source control into the mix, things get more complicated. As I'm a cheap bastard, I'd much prefer to use free source control services such as TFS in the cloud or GitHub. This leaves me with a slight conundrum: How can I keep my body intact when my API keys are in my code, and my code is available in a public repository? I can think of a number of ways to handle this, but none of them are that satisfying. I could remove all private info from code, and edit it back in after deployment. This would be a severe pain to implement (I won't detail the many ways), and isn't an option. I could encrypt it. But as I have to decrypt it, anyone with the source could figure out how to do so. Pointless. I could pay for private source control. LOL j/k spend money? Please. I could use language features to segregate sensitive info from the rest of my source and therefore keep it from source control. This is what I'm doing now, but it could easily be screwed up by mistakenly checking in the secret file. I'm really looking for a guaranteed way to ensure I don't share my privates with the world (except on snapchat) that will work smoothly through development, debugging and deployment and be foolproof as well. This is completely unrealistic. So what realistically can I do? Technical details: VS2012, C# 4.5, source control is either going to be TF service or GitHub. Currently using a partial class to split the sensitive keys off in a separate .cs file that won't be added to source control. I think GitHub may have the advantage as .gitignore could be used to ensure that partial class file isn't checked in, but I've screwed that up before. Am hoping for a "oh, common issue, this is how you do it" but I may have to settle for "that doesn't suck as much as it could have", :/ | Don't put your secret information in your code. Put it into a configuration file which is read by your code at startup. Configuration files shouldn't be put on version control, unless they are the "factory defaults", and then they shouldn't have any private information. See also the question Version control and personal configuration file for how to do this well. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205606",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/698/"
]
} |
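A minimal sketch of the "read it from a configuration file at startup" approach for the C#/VS2012 setup described in the question (the key names, the secrets.config file name and the property names are assumptions, not a prescribed layout): the checked-in App.config/Web.config holds no secrets, and the real values live in a small file listed in .gitignore.

    // App.config (checked in) references an optional external file:
    //   <appSettings file="secrets.config"> ... </appSettings>
    // secrets.config (NOT checked in; listed in .gitignore) holds the real values:
    //   <appSettings>
    //     <add key="TwitterConsumerKey" value="..." />
    //     <add key="TwitterConsumerSecret" value="..." />
    //   </appSettings>

    using System.Configuration;   // add a reference to System.Configuration.dll

    public static class OAuthKeys
    {
        public static string TwitterConsumerKey
        {
            get { return ConfigurationManager.AppSettings["TwitterConsumerKey"]; }
        }

        public static string TwitterConsumerSecret
        {
            get { return ConfigurationManager.AppSettings["TwitterConsumerSecret"]; }
        }
    }

Because the appSettings file attribute silently ignores a missing external file, a fresh clone still builds and runs; each developer and the build server simply drop in their own secrets.config.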
205,681 | The words invert or control are not used at all to define Inversion of Control in the definitions that I've seen. Definitions Wikipedia inversion of control (IoC) is a programming technique, expressed here
in terms of object-oriented programming, in which object coupling is
bound at run time by an assembler object and is typically not known at
compile time using static analysis. ~ http://en.wikipedia.org/wiki/Inversion_of_control Martin Fowler Inversion of Control is a common pattern in the Java community that helps wire lightweight containers or assemble
components from different projects into a cohesive application. ~ based on http://www.martinfowler.com/articles/injection.html (reworded) So why is Inversion of Control named Inversion of Control? What control is being inverted and by what? Is there a way to define Inversion of Control using the terminology: invert and control ? | Let's say you have some sort of "repository" class, and that repository is responsible for handing data to you from a data source. The repository could establish a connection to the data source by itself. But what if it allowed you to pass in a connection to the data source through the repository's constructor? By allowing the caller to provide the connection, you have decoupled the data source connection dependency from the repository class, allowing any data source to work with the repository, not just the one that the repository specifies. You have inverted control by handing the responsibility of creating the connection from the repository class to the caller. Martin Fowler suggests using the term "Dependency Injection" to describe this type of Inversion of Control, since Inversion of Control as a concept can be applied more broadly than just injecting dependencies in a constructor method. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205681",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/91765/"
]
} |
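A bare-bones C# version of the repository example from the answer (the class names, the connection strings and the use of IDbConnection are illustrative): control over which connection gets created moves from the repository itself to its caller.

    using System.Data;
    using System.Data.SqlClient;

    // Before: the repository controls the creation of its own dependency.
    public class CustomerRepositoryBefore
    {
        private readonly IDbConnection _connection =
            new SqlConnection("Server=prod;Database=Crm;Integrated Security=true"); // hard-wired
    }

    // After: the caller decides which connection to hand in
    // (inversion of control via constructor injection).
    public class CustomerRepository
    {
        private readonly IDbConnection _connection;

        public CustomerRepository(IDbConnection connection)
        {
            _connection = connection;
        }
    }

    // The composition root (or a test) now picks the concrete dependency:
    //   var repo = new CustomerRepository(new SqlConnection(productionConnectionString));
    //   var testRepo = new CustomerRepository(fakeConnection);   // any IDbConnection test double

The caller - the application's composition root or a test - now decides whether the repository talks to production SQL Server, a local file database or a fake, which is exactly the inversion the answer describes.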
205,696 | I'm building a search engine using Lucene.NET / Solr.NET, and I'm wondering should search hits be returned as a dictionary or strongly typed object. public class SearchResult
{
public string SearchTerm { get; set; }
public List<Dictionary<string, object>> SearchHits { get; set; }
...
}
// where search hit looks like this
Dictionary<string, object> searchHit = new Dictionary<string, object>
{
{ "Name", "John Doe" },
{ "Address", "Some street" },
}; vs public class SearchResult
{
public string SearchTerm { get; set; }
public List<Foo> SearchHits { get; set; }
...
}
// where search hit looks like this
var searchHit = new SearchHit
{
Name = "John Doe",
Address = "Some address"
}; I will be indexing different types of objects (People, Web pages, Files, etc.).
Each object has a different set of properties (people have name and address, while web pages have URLs and titles), and search hits don't have the same set of properties as original objects ( WebPage class has the MainContent property, but after indexing, the search hit that represents this object will have the PreviewText property (the first 300 characters of MainContent property with stripped HTML tags). I don't want to duplicate the code all over the place and have: IPeopleSearchService , IWebPagesSearchService , IFilesSearchService , etc. and corresponding search results: PeopleSearchResult , FileSearchResult , etc. interface IFooService
{
FooSearchResult Search(string text);
void Index(Foo foo);
...
}
interface IBarService
{
BarSearchResult Search(string text);
void Index(Bar bar);
...
}
... I would like to have a more generic solution, but I don't like over-complexity. It should be readable and easily understandable by junior developer. interface ISearchService<TEntity, TSearchHit>
{
SearchResult<TSearchHit> Search(string text);
void Index(TEntity entity);
} How would you design such a system? Any help would be greatly appreciated! | Let's say you have some sort of "repository" class, and that repository is responsible for handing data to you from a data source. The repository could establish a connection to the data source by itself. But what if it allowed you to pass in a connection to the data source through the repository's constructor? By allowing the caller to provide the connection, you have decoupled the data source connection dependency from the repository class, allowing any data source to work with the repository, not just the one that the repository specifies. You have inverted control by handing the responsibility of creating the connection from the repository class to the caller. Martin Fowler suggests using the term "Dependency Injection" to describe this type of Inversion of Control, since Inversion of Control as a concept can be applied more broadly than just injecting dependencies in a constructor method. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205696",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25026/"
]
} |
205,762 | The C# community has so ubiquitously used the "I" prefix to denote an interface that even the most inexperienced programmers know to use it. Why is it then that we do not prefix enums, abstract classes or structs (possibly with "E", "A" and "S" respectively)? For example, if we did mark all abstract classes with an "A", it would provide valuable information about that type which, while it could be inferred, is not immediately obvious. Please note that I am not advocating for this change, I am merely trying to understand why we do not do things this way. This thread answers why we do use the "I" prefix but doesn't answer why we don't use other prefixes. | The point of the naming convention for interfaces is to provide a quick, no-brain decision about what to call the interface that your class implements. If you have a Frobnicator , but have to declare an interface for decoupling or whatever reason, then the decision to call it IFrobnicator requires no conscious thought, and this is good. The same problem doesn't apply to the other constructs you name. Enums and structs are useful, but it's not necessary to find a second , short, transparent, obviously related name in addition to the name itself. Therefore, there is no pressure to slap an 'E' on to the name of an enum or struct. (Abstract classes are somewhat similar to interfaces, since you do have to provide a second, concrete class to get anything done, so they might have acquired a convention of starting with 'A', but for whatever reason, they didn't. If I'm allowed to speculate, I think that 'I' being a particularly narrow letter might have had something to do with that.) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205762",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/97122/"
]
} |
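For readers who have not seen the convention in practice, a tiny C# illustration (the Frobnicator name comes from the answer; the members are invented):

    // The interface carries the "I" prefix...
    public interface IFrobnicator
    {
        void Frobnicate();
    }

    // ...so the obvious, short name stays free for the implementation.
    public class Frobnicator : IFrobnicator
    {
        public void Frobnicate() { /* ... */ }
    }

    // Enums and structs need only one name each, so no prefix convention is required.
    public enum Direction { North, East, South, West }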
205,803 | I recently read this question that features, the arrow anti-pattern. I have something similar in code I'm trying to refactor except that it branches. It looks a little something like this: if(fooSuccess==true&&barSuccess!=true){
if(mooSuccess==true){
.....
}else if (mooSuccess!=true){
.....
}
}else if(fooSuccess!=true&&barSuccess==true){
if(mooSuccess==true){
.....
}else if (mooSuccess!=true){
if(cowSuccess==true){
.....
}else if (cowSuccess!=true){
.....
}
}
}
...... In short it looks like this if
if
if
if
do something
endif
else if
if
if
do something
endif
endif
else if
if
if
do something
endif
endif
endif Outline borrowed from Coding Horror article on the subject And the code goes on through different permutations of true and false for various flags. These flags are set somewhere 'up' in the code elsewhere, either by user input or by the result of some method. How can I make this kind of code more readable? My intention is that eventually I will have a Bean-type object that contains all the choices the previous programmer tried to capture with this branching anti-pattern. For instance, if in the outline of the code above we really do only have three, I have an enum set inside that bean: enum ProgramRouteEnum{
OPTION_ONE,OPTION_TWO,OPTION_THREE;
boolean choice;
void setChoice(boolean myChoice){
choice = myChoice;
}
boolean getChoice(){
return choice;
}
} Is this an acceptable cure? Or is there a better one? | The code can be simplified like this: boolean condA = ( fooSuccess && !barSuccess && mooSuccess );
boolean condB = ( fooSuccess && !barSuccess && !mooSuccess );
boolean condC = (!fooSuccess && barSuccess && mooSuccess );
boolean condD = (!fooSuccess && barSuccess && !mooSuccess && cowSuccess);
boolean condE = (!fooSuccess && barSuccess && !mooSuccess && !cowSuccess);
if (condA) {
....
return;
}
if (condB) {
....
return;
}
... and so on
if (condE) {
....
return;
} In essence: Evaluate each complex condition into a single boolean. Since the conditions are mutually exclusive (as is typically the case in nested arrow heads), you don't have to nest the ifs; just add a return so that no other block is run. This reduces the cyclomatic complexity enormously and makes the code trivial. Knowing exactly what gets run in every case, you can even do a Karnaugh table to identify and remove redundant condition combinations, just as they do in logic circuit design. But you didn't provide that in the example. Be sure to extract all this logic into its own method so the return statements don't interfere with blocks that have to be run anyway. Descriptive names should be chosen for condA..condE . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205803",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42685/"
]
} |
205,821 | I'm currently working on a code base that has many classes that implement a Start method.
This seems like two-phase construction to me, which I had always considered a bad practice. I can't tell the difference between this and a constructor. When is it appropriate to use a start method instead of normal object construction? When should I prefer to use the constructor? Edit:
I don't think it is that relevant, but the programming language is C#; it could equally apply to Java or C++. | A Start() method (like Run() , Execute() or anything similar) is appropriate when the cost of constructing the object is low, but the cost of using it is high. For example: a class which encapsulates a best-path-optimization algorithm. It's trivial to set it up with a set of parameters ( X squares by Y squares, with such-and-such evaluation method), but it may take a while to execute. If you want to create 20 of these objects, you may want to delay execution until all of them have been created - this lets you parallelize them more easily, for example. Alternatively, it could be useful when you don't know when the object will need to start - perhaps because it's based on user input, or logic which selects from a list of possibilities. This assumes, of course, that Start() is the useful method on the object, and not an equivalent to an Initialize() method. If it is just an extra way to set more parameters, it shouldn't exist. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205821",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37972/"
]
} |
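A short C# sketch of the "cheap constructor, expensive Run()" shape the answer describes (the path-optimizer domain is taken from the answer's example, but the class name, members and the parallel usage comment are invented):

    public class PathOptimizer
    {
        private readonly int _width;
        private readonly int _height;

        // Construction is trivial: it only captures parameters.
        public PathOptimizer(int width, int height)
        {
            _width = width;
            _height = height;
        }

        // The expensive work is deferred until the caller decides to run it.
        public double Run()
        {
            double cost = 0;
            for (int x = 0; x < _width; x++)
                for (int y = 0; y < _height; y++)
                    cost += Evaluate(x, y);   // stand-in for the real algorithm
            return cost;
        }

        private double Evaluate(int x, int y)
        {
            return (x * 31 + y) % 7;
        }
    }

    // Because construction is cheap, twenty optimizers can be configured up front
    // and executed later, possibly in parallel (requires System.Linq for PLINQ):
    //   var results = optimizers.AsParallel().Select(o => o.Run()).ToList();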
205,857 | I understand that in the earlier days of computing, shorter method names like printf made sense, because storage was limited. But why do modern languages like Python and Go still use the less readable names from the C APIs? Why don't they switch to a more readable form like print_format? The same goes for other names. I wouldn't mind having to write make_directory instead of mkdir in Go. People have suggested that this question has been asked before, but the questions I've found are about whether abbreviated names are bad practice. This question assumes that the answer to that question is usually yes. | But why do modern languages like python and go still use the less readable names from the C apis? Because everyone (in their domain) knows the short names. If you made different names, people would question "what makes this different from mkdir ?" and generally get confused that stuff isn't where they expect it. Because without auto-complete in an IDE, long names with generally difficult-to-type characters like _ annoy skilled programmers. Annoyed programmers limit the adoption of a language, meaning designers don't use such names, or you never hear about the languages that used them since they never took off. These are both kinda flimsy arguments of course. Readable code is known to be better than writable code these days. And having an IDE with auto-complete handles option 2, though there are still people who argue against IDEs as bloat, or as a crutch that hinders programmers' abilities. Having an IDE with intellisense (or the equivalent) helps a bunch with #1 as well, though that has the same arguments and works less well in a non-object-oriented environment. Still, these flimsy arguments (and the vagaries of human taste) are enough momentum to keep the trend going. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205857",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27135/"
]
} |
205,999 | On my web page, if I wish to display dynamic dropdowns (e.g. city names based on the country selected), I can do it using AJAX. But I can also do it using a REST call. So which should I use? My problem is I really don't get the difference between REST and any other HTTP browser request (a.k.a. a form submit). I have looked at the formal definitions of REST and it seems identical to an HTTP request. So how is REST inherently different from AJAX? | I can do it using AJAX. But I can also do it using a REST call. Um, no. Those two are completely orthogonal. If you want to update your page with data you have to get from a server, you will do it using AJAX. There is no other way. And that AJAX call can use REST, or something else. My problem is i really dont get the difference between REST and an HTTP browser request.(a.k.a a form submit). I have looked at formal definitions of REST and it seems like a HTTP request. A REST call is an HTTP request, always. Though it can be used to handle regular browser calls (like form submits) and return full HTML pages, it's usually used to handle API calls that return only data (usually in JSON format). So why does it have a separate name? Because REST is a specific style of using HTTP, arguably using it as it was originally meant to be used, but which most people didn't "get" and was thus rarely used for almost 2 decades. Specifically, REST means encoding which entity you want to retrieve or manipulate in the URL itself (usually via an ID) and encoding what action you want to perform on it in the HTTP method used (GET for retrieving, POST for changing, PUT for creating, DELETE for deleting). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/205999",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14157/"
]
} |
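To illustrate the last point - entity in the URL, action in the HTTP method - here is a hypothetical ASP.NET Web API controller in C# (the routes, types and framework choice are assumptions for illustration, not something the answer prescribes). The dynamic dropdown from the question would simply issue an AJAX GET against the first route and bind the returned JSON.

    using System.Collections.Generic;
    using System.Web.Http;

    // GET    /api/cities?country=DE   -> list cities (retrieval)
    // POST   /api/cities              -> create a city
    // DELETE /api/cities/42           -> delete city 42
    public class CitiesController : ApiController
    {
        public IEnumerable<string> Get(string country)
        {
            // look up cities for the given country (stubbed here)
            return new[] { "Berlin", "Hamburg", "Munich" };
        }

        public void Post([FromBody] string cityName)
        {
            // create the resource
        }

        public void Delete(int id)
        {
            // remove the resource
        }
    }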
206,001 | I'm making a Facebook app and I'm trying to follow MVC properly. But I was wondering: if I had a class that was a bunch of FQL queries (getting data from Facebook), should I keep these in a controller or a model? According to CodeIgniter's user guide, "Models are PHP classes that are designed to work with information in your database". Since FQL interacts with Facebook's databases and not my own, would having a class full of what is basically just SQL Select statements count as a valid model? | I can do it using AJAX. But I can also do it using a REST call. Um, no. Those two are completely orthogonal. If you want to update your page with data you have to get from a server, you will do it using AJAX. There is no other way. And that AJAX call can use REST, or something else. My problem is i really dont get the difference between REST and an HTTP browser request.(a.k.a a form submit). I have looked at formal definitions of REST and it seems like a HTTP request. A REST call is an HTTP request, always. Though it can be used to handle regular browser calls (like form submits) and return full HTML pages, it's usually used to handle API calls that return only data (usually in JSON format). So why does it have a separate name? Because REST is a specific style of using HTTP, arguably using it as it was originally meant to be used, but which most people didn't "get" and was thus rarely used for almost 2 decades. Specifically, REST means encoding which entity you want to retrieve or manipulate in the URL itself (usually via an ID) and encoding what action you want to perform on it in the HTTP method used (GET for retrieving, POST for changing, PUT for creating, DELETE for deleting). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/206001",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/73898/"
]
} |