source_id (int64, 1-4.64M) | question (string, lengths 0-28.4k) | response (string, lengths 0-28.8k) | metadata (dict)
---|---|---|---|
140,073 | First allow me to coin a term: code goal-tending: checking out code in the morning, then silently reviewing all of the changes made by the other developers the previous day, file by file (especially code files you originally developed), fixing formatting, logic, renaming variables, refactoring long methods, etc., and then committing the changes to the VCS. This practice tends to have a few pros and cons that I've identified: Pro: Code quality/readability/consistency is often maintained. Pro: Some bugs are fixed due to the other developer not being too familiar with the original code. Con: Is often a waste of the goal-tending developer's time. Con: Occasionally introduces bugs, which causes hair-pulling rage in developers who thought they wrote bug-free code the prior day. Con: Other developers get aggravated with excessive nitpicking and begin to dislike contributing to the goal-tender's code. Disclaimer: To be fair, I'm not actually a development manager; I'm the developer who is actually doing the "goal tending". In my defense, I think I'm doing this for good reason (to keep our extremely large code base a well-oiled machine), but I'm very concerned that it's also creating a negative atmosphere. I am also definitely concerned that my manager will need to address the issue. So, if you were the manager, how would you address this problem? UPDATE: I don't mean for this to be too localized, but some have asked, so perhaps some background will be illuminating. I was assigned a giant project (200K LoC) three years ago, and only recently (1 yr ago) were additional developers added to the project, some of whom are unfamiliar with the architecture and others who are still learning the language (C#). I generally do have to answer for the overall stability of the product, and I'm particularly nervous when changes are unexpectedly made to the core architectural parts of the code base. This habit came about because at first I was optimistic about other developers' contributions, but they made way too many mistakes that caused serious problems that would not be discovered until weeks later, at which point the finger would be pointed at me for writing unstable code. Often these "surprises" are committed by an eager manager or a co-worker who is still in the learning phase. And this will probably lead to the answer: we have no code review policy in place, at all. | It sounds like what you're doing is basically equivalent to a code review, except that rather than providing feedback to the developer, you're making all the changes that you would suggest in a code review. You'd almost certainly be better off doing an actual code review where you (or someone else) provide feedback to the original developer about code quality issues and obvious bugs and ask them to fix those issues. That keeps code quality up, but it also helps the developer become more familiar with the original code and its pitfalls and helps improve future code changes. Plus, it doesn't have the downside of causing "hair-pulling rage" when a bug gets silently introduced, or of making other developers think that they're being talked about behind their backs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140073",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39690/"
]
} |
140,147 | PHP is not a bad language (or at least not as bad as some may suggest). I had teachers that didn't even know PHP was object oriented until I told them. I've had clients that immediately distrust us when we say we are PHP developers and question us for not using chic languages and frameworks such as Django or RoR, or "enterprise and solid" languages such as Java and ASP.NET. Facebook is built on PHP. There are plenty of solid projects that power the web like Joomla and Drupal that are used in the enterprise and governments. There are frameworks and libraries that have some of the best architectures I've seen across all languages (Symfony 2, Doctrine). PHP has the best documentation I've seen and a big community of professionals. PHP has advanced OO features such as reflection, interfaces, let alone that PHP now supports horizontal reuse natively and cleanly through traits . There are bad programmers and script kiddies that give PHP a bad reputation, but power the PHP community at the same time, and because it is so easy to get stuff done in PHP you can often do things the wrong way, granted, but why blame the language? Now, to boil this down to an actual answerable question: what would be a good and solid and short and sweet argument to avoid being frowned upon and stop prejudice in one fell swoop and defend your honor when you say you are a PHP developer? (free cookie with teh whipped cream to those with empirical evidence of convincing someone —client or other— on the spot) P.S.: We use Symfony , and the code ends up being beautiful and maintainable. P.P.S.: Facebook is written in PHP, compiled to C++ via HipHop and deployed on Hadoop. Here: http://arstechnica.com/business/2012/04/exclusive-a-behind-the-scenes-look-at-facebook-release-engineering/1/ | The only answer I have is this: everything sucks. You can find tons of arguments against OOP being any good (a quick search will reveal them) versus functional or procedural programming. You seem to indicate that PHP's OO support defends it. Google blank sucks and you will find results for anything. Python sucks . Ruby sucks . Rails sucks . PHP sucks . Java sucks . You want to know what really sucks? Programmers. Programmers suck. Any good developer should be able to create an amazing application, front end and back end, regardless of the language -- even if they were not familiar with the language! (That is to say, a good programmer should be able to learn and work with any language effectively). Also note that the success of an application has nothing to do with its code. I've heard that Facebook code is awful, but by God is it effective. Same is probably true of Wikimedia (Wikipedia is built on PHP and is also an extremely popular website). What really matters is results! PHP is the language that I am personally most familiar with. I will defend it to the death. I have worked with PHP frameworks, straight up PHP code, and PHP, python, and ruby (and even Java) all for web development. I can't say that one is particularly better than the other. What matters more is the developers and the algorithms. I have seen amazingly good and completely awful code in many languages (especially PHP since I have so much exposure). Now for your question: it depends on who you're dealing with. Businessmen: They care mostly about results and cost-effectiveness. The fact that the very successful Facebook, Wikipedia, and Wordpress use PHP should be more than enough to convince them that it's an effective language for building successful applications. Programmers: Let your code speak for itself. If a developer says that PHP is inferior in some way, show them some PHP code that you believe to be effective. The proof of the pudding is in the eating. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140147",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/10869/"
]
} |
140,156 | My team at work is moving to Scrum and other teams are starting to do test-driven development using unit tests and user acceptance tests. I like the UATs, but I'm not sold on unit testing for test-driven development or test-driven development in general. It seems like writing tests is extra work, gives people a crutch when they write the real code, and might not be effective very often. I understand how unit tests work and how to write them, but can anyone make the case that it's really a good idea and worth the effort and time? Also, is there anything that makes TDD especially good for Scrum? | Short Answer: Absolutely positively. Long Answer: Unit tests are one of the most important practices I try and influence at my place of work (large bank, fx trading). Yes they are extra work, but it's work that pays back again and again. Automated unit tests not only help you actually execute code you're writing and of course verify your expectations but they also act as a kind of watch dog for future changes that you or someone else might make. Test breakage will result when someone changes the code in undesirable ways. I think the relative value of unit tests declines in correlation with the level of expected change and growth in a code base, but initial verification of what the code does make it worthwhile even where the expected change is low. Unit test value also depends on the cost of defects. If the cost (where cost is loss of time/money/reputation/future effort) of a defect is zero, then the relative value of a test is also zero; however this is almost never the case in a commercial environment. We generally don't hire people anymore who don't routinely create unit tests as part of their work - it's just something we expect, like turning up every day. I've not seen a pure cost benefit analysis of having unit tests (someone feel free to point me to one), however I can say from experience that in a commercial environment, being able to prove code works in a large important system is worthwhile. It also lets me sleep better at night knowing that the code I've written provably works (to a certain level), and if it changes someone will be alerted to any unexpected side effects by a broken build. Test driven development, in my mind is not a testing approach. It's actually a design approach/practice with the output being the working system and a set of unit tests. I'm less religious about this practice as it's a skill that is quite difficult to develop and perfect. Personally if I'm building a system and I don't have a clear idea of how it will work I will employ TDD to help me find my way in the dark. However if I'm applying an existing pattern/solution, I typically won't. In the absence of mathematical proof to you that it makes sense to write unit tests, I encourage you to try it over an extended period and experience the benefits yourself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140156",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49909/"
]
} |
140,161 | Exchange 2010 has a delegation model where groups of winrm cmdlets are essentally grouped into roles, and the roles assigned to a user. ( Image source ) This is a great & flexible model considering how I can leverage all the benefits of PowerShell, while using the right low level technologies (WCF, SOAP etc), and requiring no additional software on the client side. ( Image source ) Question(s) Is there a way for me to leverage Exchange's delegation model in my .NET application? Has anyone attempted to imitate this model? If I must start from scratch, how would I go about imitating this approach? | Short Answer: Absolutely positively. Long Answer: Unit tests are one of the most important practices I try and influence at my place of work (large bank, fx trading). Yes they are extra work, but it's work that pays back again and again. Automated unit tests not only help you actually execute code you're writing and of course verify your expectations but they also act as a kind of watch dog for future changes that you or someone else might make. Test breakage will result when someone changes the code in undesirable ways. I think the relative value of unit tests declines in correlation with the level of expected change and growth in a code base, but initial verification of what the code does make it worthwhile even where the expected change is low. Unit test value also depends on the cost of defects. If the cost (where cost is loss of time/money/reputation/future effort) of a defect is zero, then the relative value of a test is also zero; however this is almost never the case in a commercial environment. We generally don't hire people anymore who don't routinely create unit tests as part of their work - it's just something we expect, like turning up every day. I've not seen a pure cost benefit analysis of having unit tests (someone feel free to point me to one), however I can say from experience that in a commercial environment, being able to prove code works in a large important system is worthwhile. It also lets me sleep better at night knowing that the code I've written provably works (to a certain level), and if it changes someone will be alerted to any unexpected side effects by a broken build. Test driven development, in my mind is not a testing approach. It's actually a design approach/practice with the output being the working system and a set of unit tests. I'm less religious about this practice as it's a skill that is quite difficult to develop and perfect. Personally if I'm building a system and I don't have a clear idea of how it will work I will employ TDD to help me find my way in the dark. However if I'm applying an existing pattern/solution, I typically won't. In the absence of mathematical proof to you that it makes sense to write unit tests, I encourage you to try it over an extended period and experience the benefits yourself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140161",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/175/"
]
} |
140,264 | I recently joined a rapidly growing startup. In the past 3 months the development team has grown from 4 to 12. Until now they were very laissez-faire about what developers used to do their work. In fact one of the things I initially found attractive about the company is that most programmers used Linux, or whatever OS they felt best suited their efforts. Now orders, without discussion, have come down that everyone is to switch to Eclipse. A fine editor. I prefer SublimeText2, but it's just my personal taste. Just to be clear: we are a JS team using Backbone and Eclipse just isn't good at understanding Backbone code. This means that those on the team who use a /good/ IDE (PHP Storm), have to go back to doing a lot of search-find-oh-wait-where-was-I-three-steps-ago kind of things instead of just ctrl+clicking and using back/ forward - probably diminishing productivity by say 15% and enjoyment by 50%... Is this a red flag? It seems capricious and unreasonably controlling to tell developers (non-MS) what IDE or tool-sets to use if they are already settled in and productive. | "Now orders, without discussion , have come down that everyone is to switch to Eclipse." I think that this is the real red flag. Your team is the expert on software development and the one to be affected by the decision, and yet you did not get to say a word in the discussion that resulted in this order? It sounds like over-managing by pointy-haired bosses. Does the decision making person/team have relevant insight for that decision? Given that the decision makers are qualified enough for such a decision, not asking for the dev team's opinion has at least two shortcomings: The team does not feel involved. Involving the team should be a priority for management. I would not like to work as a dev somewhere where my opinion about such central issues as IDE is not valued enough to even be asked for. Granted that asking for someone's opinion and then deciding against it may be worse, but in that case I'd expect a solid rationale for that decision. The management, however experienced, does not work 100% with development of this specific code. Assuming that the people who do don't have interesting insight at all would be naïve. Of course it may be so that the managers had thought of everything the devs come up with, but the only way to know is to ask. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140264",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3722/"
]
} |
140,321 | What's the difference between Building and Compiling? | Compiling is part of a build process. A build process can include testing, packaging and other activities apart from compilation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140321",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/49979/"
]
} |
140,331 | I wrote some sorting algorithms for a class assignment and I also wrote a few tests to make sure the algorithms were implemented correctly. My tests are only like 10 lines long and there are 3 of them but only 1 line changes between the 3 so there is a lot of repeated code. Is it better to refactor this code into another method that is then called from each test? Wouldn't I then need to write another test to test the refactoring? Some of the variables can even be moved up to the class level. Should testing classes and methods follow the same rules as regular classes/methods? Here's an example: [TestMethod]
public void MergeSortAssertArrayIsSorted()
{
int[] a = new int[1000];
Random rand = new Random(DateTime.Now.Millisecond);
for(int i = 0; i < a.Length; i++)
{
a[i] = rand.Next(Int16.MaxValue);
}
int[] b = new int[1000];
a.CopyTo(b, 0);
List<int> temp = b.ToList();
temp.Sort();
b = temp.ToArray();
MergeSort merge = new MergeSort();
merge.mergeSort(a, 0, a.Length - 1);
CollectionAssert.AreEqual(a, b);
}
[TestMethod]
public void InsertionSortAssertArrayIsSorted()
{
int[] a = new int[1000];
Random rand = new Random(DateTime.Now.Millisecond);
for (int i = 0; i < a.Length; i++)
{
a[i] = rand.Next(Int16.MaxValue);
}
int[] b = new int[1000];
a.CopyTo(b, 0);
List<int> temp = b.ToList();
temp.Sort();
b = temp.ToArray();
InsertionSort merge = new InsertionSort();
merge.insertionSort(a);
CollectionAssert.AreEqual(a, b);
} | Test code is still code and also needs to be maintained. If you need to change the copied logic, you need to do that in every place you copied it to, normally. DRY still applies. Wouldn't I then need to write another test to test the refactoring? Would you? And how do you know the tests you currently have are correct? You test the refactoring by running the tests. They should all have the same results. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140331",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7957/"
]
} |
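As one possible illustration of the answer in entry 140,331 above, the duplicated setup can be pulled into a shared helper, with the single differing line passed in as a delegate. The helper name `AssertSortsRandomArray` and the delegate approach are my own choices, not from the original post; the sketch assumes it sits in the same MSTest class as the tests in the question, with `using System;` available.

```csharp
// Hypothetical helper: everything except the call to the sort under test is shared.
private static void AssertSortsRandomArray(Action<int[]> sortUnderTest)
{
    int[] a = new int[1000];
    Random rand = new Random(DateTime.Now.Millisecond);
    for (int i = 0; i < a.Length; i++)
    {
        a[i] = rand.Next(Int16.MaxValue);
    }

    int[] expected = (int[])a.Clone();
    Array.Sort(expected);              // reference result produced by the framework sort

    sortUnderTest(a);                  // the only line that differed between the two tests
    CollectionAssert.AreEqual(expected, a);
}

[TestMethod]
public void MergeSortAssertArrayIsSorted()
{
    AssertSortsRandomArray(a => new MergeSort().mergeSort(a, 0, a.Length - 1));
}

[TestMethod]
public void InsertionSortAssertArrayIsSorted()
{
    AssertSortsRandomArray(a => new InsertionSort().insertionSort(a));
}
```

Changing the shared setup now happens in one place, and the existing tests themselves verify that the refactoring preserved behaviour, which is the point the answer makes.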
140,350 | Ok so I am familiar with developing Form and Console applications on Windows using Visual Studio .Net with C#, but where do I start when it comes to Linux distro's like Ubuntu, is there an equivalent? How would one go about matching what they can do in a Windows environment with .Net and C# in a Linux environment without .Net coding in something like Java or C/C++? I am aware of Eclipse, does eclipse have a form designer or do you have to code the design of any Gnome/Unity forms manually? Can I use eclipse to write the Linux equivalent of a console application, that you just double click on to run? I also know about Mono, but the idea is that I want to learn how to develop software without using anything in the Microsoft stack and am not sure where to start. What is the standard language/framework used to develop these types of applications on Linux? As I become more proficient with Visual Studio, C# and .Net, it has struck me that without these Microsoft tools, I am nothing. I am only capable of developing for the Microsoft OS and this scares me. This isn't some anti Microsoft thing, Microsoft makes some incredible Software/Hardware/Operating Systems/IDE's, but it is generally a bad idea to put all of your eggs in one basket so if I want to learn how to develop Terminal and Gnome/Unity form applications where in the world do I start? I have used Linux on and off for years, but Windows has been my primary OS. However I have watched Linux get better and better and as much as I love Windows 7, I am dubious about Windows 8 (I for one will sorely miss my start menu)! Obviously MS aren't going anywhere anytime soon and I could spend the the next couple of decades developing for .Net without any issues but just because you can get away with something doesn't always mean it's a good idea. Thanks | Test code is still code and also needs to be maintained. If you need to change the copied logic, you need to do that in every place you copied it to, normally. DRY still applies. Wouldn't I then need to write another test to test the refactoring? Would you? And how do you know the tests you currently have are correct? You test the refactoring by running the tests. They should all have the same results. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140350",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45098/"
]
} |
140,423 | How do you remember programming related stuff? Have you had the feeling that you've encountered the error you have before you right now, a few years ago and you could swear you knew the cause then but now you've forgotten it? Did you work with the xsl's string parsing some time ago but now you can't remember exactly which are the string functions altogether from xsl and you have to start from scratch? Or perhaps you forget about some feature from Apache Commons like "filtering a collection by some predicate" that you surely used in the past. So how do you do it? I tried having a blog but when I develop apps, I never find the time to update the blog or write about my experiences. Also, using a wiki is a nice thing but then I found it difficult to keep a clean separation between them since many times I needed to change a blog post to add new information about that topic. This made me think that I actually should have put this topic in the wiki instead of the blog. Do you have any systems that help you remember about your programming experience? What's your setup? | Forgetting things is normal. Not remembering some tricks that helped you in the past is also normal. This is the first step one should acknowledge. Then there are some ways you can "store" knowledge for further revision: Find time and blog about it . The future-you will be very thankful to the present-you; Work with tiny demos and archive them in some way. You will surely step through this archive many times; Make use of your stackexchange profile. Mark interesting questions/problems/issues/tips/tricks as favorites for further investigation; Keep doing , keep programming. The more you use a certain part of a framework, the more you familiarize with it and the more you remember. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140423",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/33861/"
]
} |
140,424 | The result of the following process should be an HTML form. This form's structure varies from one user to another. For example there might be a different number of rows or there may be the need for rowspan and colspan . When the user chooses to see this table, an ajax call is made to the server where the structure of the table is decided from the database. Then I have to create the HTML code for the table structure which will be inserted in the DOM via JavaScript. The following problem comes to my mind: Where should I build the HTML code which will be inserted in the DOM? On the server side, or should I send some parameters in the ajax call method and process the structure there? Therefore the main question involves good practice when it comes to deciding between server-side and client-side processing. Thank you! | Forgetting things is normal. Not remembering some tricks that helped you in the past is also normal. This is the first step one should acknowledge. Then there are some ways you can "store" knowledge for further revision: Find time and blog about it . The future-you will be very thankful to the present-you; Work with tiny demos and archive them in some way. You will surely step through this archive many times; Make use of your stackexchange profile. Mark interesting questions/problems/issues/tips/tricks as favorites for further investigation; Keep doing , keep programming. The more you use a certain part of a framework, the more you familiarize with it and the more you remember. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140424",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47001/"
]
} |
140,483 | Let's consider a fictional program that builds a linked list in the heap, and at the end of the program there is a loop that frees all the nodes, and then exits. For this case let's say the linked list is just 500K of memory, and no special space managing is required. Is that a waste of time, because the OS will do that anyway? Will there be a different behavior later? Is that different according to the OS version? I'm mainly interested in UNIX based systems, but any information will be appreciated. I had today my first lesson in OS course and I'm wondering about that now. Edit: Since a lot of people here are concerned about side effects, and general 'good programming practice' and so. You are right! I agree 100% with your statements. But my question is only hypothetical, I want to know how the OS manages this. So please leave out things like 'finding other bugs when freeing all memory'. | My main problem with your approach is that a leak detection tool (like Valgrind) will report it and you will start ignoring it. Then, some day a real leak may show up, and you'll never notice it because of all the noise. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140483",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50029/"
]
} |
140,602 | I've been studying OO programming, primarily in C++, C# and Java. I thought I had a good grasp on it with my understanding of encapsulation, inheritance and polymorphism. One frequently referenced concept when discussing OO is "message passing". Apparently, this is something that is not used when doing OO programming in today's mainstream languages, but is supported by Smalltalk. My questions are: What is message passing? Is there any support for this "message passing" in C++, C# or Java? | What is message passing? (Can someone give a practical example?) Message passing simply means that (at a very abstract level) the fundamental mechanism of program execution is objects sending each other messages. The important point is that the name and structure of these messages is not necessarily fixed beforehand in the source code and can itself be additional information. This is an important part of what Alan Kay originally envisioned as "object oriented programming". Is there any support for this "message passing" in C++, C# or Java? These languages implement a limited version of message passing through method calls. Limited because the set of messages that can be sent is limited to the methods declared in a class. The advantage of this approach is that it can be implemented very efficiently, and it enables very detailed static code analysis (which results in all kinds of useful benefits, like code completion). Conversely, languages that implement "real" message passing often have method definitions too, as a convenient way to implement message handlers, but allow classes to implement more flexible message handlers that enable the object to receive "method calls" with arbitrary names (not fixed at compile time). An example in Groovy that demonstrates the power of this concept: def xml = new MarkupBuilder(writer)
xml.records() {
car(name:'HSV Maloo', make:'Holden', year:2006) {
country('Australia')
record(type:'speed', 'Production Pickup Truck with speed of 271kph')
}
} will produce this XML: <records>
<car name='HSV Maloo' make='Holden' year='2006'>
<country>Australia</country>
<record type='speed'>Production Pickup Truck with speed of 271kph</record>
</car>
</records> Note that records , car , country and record are syntactically method calls, but there are no methods of that name defined in MarkupBuilder . Instead, it has a catchall message handler that accepts all messages and interprets the message names as the name of an XML element, parameters as attributes and closures as child elements. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140602",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/39003/"
]
} |
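To make the "catch-all message handler" idea in entry 140,602 above concrete for the C# side of the question, here is a rough sketch of my own (not from the original answer) using C#'s `System.Dynamic.DynamicObject`, which lets an object receive calls whose names are not declared anywhere on the class; the class and call names are made up for illustration.

```csharp
using System;
using System.Dynamic;

// A catch-all receiver, loosely analogous to the Groovy MarkupBuilder above:
// every method "message" sent to it is routed through one handler at run time.
class MessageLogger : DynamicObject
{
    public override bool TryInvokeMember(InvokeMemberBinder binder, object[] args, out object result)
    {
        // binder.Name is whatever name the caller used; it is not fixed at compile time.
        Console.WriteLine("received message '{0}' with {1} argument(s)", binder.Name, args.Length);
        result = null;
        return true;   // report the message as handled
    }
}

class Program
{
    static void Main()
    {
        dynamic receiver = new MessageLogger();
        receiver.Car("HSV Maloo", 2006);   // no Car method exists anywhere
        receiver.AnythingAtAll("works");   // neither does this one
    }
}
```

This is the exception rather than the rule in C# -- ordinary method calls remain the "limited" form of message passing the answer describes -- but it shows the same open-ended dispatch the Groovy example relies on.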
140,633 | I'm on a private project that eventually will become open source. We have a few team members, talented enough with the technologies to build apps, but not dedicated developers who can write clean/beautiful and most importantly long-term maintainable code. I've set out to refactor the code base, but it's a bit unwieldy as someone in the team out in another country I'm not in regular contact with could be updating this totally separate thing. I know one solution is to communicate rapidly or adopt better PM practices, but we're just not that big yet. I just want to clean up the code and merge nicely into what he has updated. Would using a branch be a suitable plan? A best-effort-merge? Something else? | One thing people often fail to consider is that a clean architecture doesn't only speed up long term maintenance, it also speeds up development right now . Don't try to insulate your changes from your colleagues until they are "done." Your changes will help them be more productive and less prone to bugs. The most frequent mistake people make when undertaking a large refactor is to not merge often enough, instead trying to do it in one "big bang." The right way to do it is to make the smallest possible refactor you can, test it, then merge it into your colleague's branch, and teach him about the change so he can incorporate it going forward. Ideally you're doing one merge per day, or one per week at the very least. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140633",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1525/"
]
} |
140,705 | I heard different interpretations of sound and complete . I understand that completeness means finding a solution if there is one. What does it mean to say an algorithm is sound . What does it mean to say an algorithm is Sound and Complete? | These are very specific terms as related to logic. Here are some starting points: http://en.wikipedia.org/wiki/Soundness http://en.wikipedia.org/wiki/Completeness_(logic) Basically, soundness (of an algorithm) means that the algorithm doesn't yield any results that are untrue. If, for instance, I have a sorting algorithm that sometimes does not return a sorted list, the algorithm is not sound. Completeness, on the other hand, means that the algorithm addresses all possible inputs and doesn't miss any. So, if my sorting algorithm never returned an unsorted list, but simply refused to work on lists that contained the number 7, it would not be complete. It is complete and sound if it works on all inputs (semantically valid in the world of the program) and always gets the answer right. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140705",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/23250/"
]
} |
140,811 | It is almost instantaneous: whenever I talk to developers about Model View Controller ( MVC ), they say you make a request to a URL, the server builds an entity (MODEL), and provides you with a visual representation of that model. So does this mean MVC is only for the web, or have I been meeting people who are just developers who employ MVC for writing web applications? Are there usages for MVC in desktop-style applications? I for one am new to the paradigm and would like to know of any super-set to MVC. | MVC is a pattern. Patterns apply across all programming. MVC just happens to work very well in a web context. As gnat points out, just have a look at the mvc tag and you will see multiple examples of it being implemented. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140811",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13146/"
]
} |
140,848 | Why was Java chosen for Android, instead of something like C++? I have heard that Java uses quite a lot of memory and I would presume that low memory usage would be quite important on mobile devices. Is there any real advantage to using Java instead of a language like C++ on a mobile device? | This article sheds some light on the situation. The most pertinent link within that article is this . So you've got a massive install base with lots of programmers who know the language and it's widely taught at universities. C++ was dropped from my school's curriculum, Java is still here. Java has Java ME which has a massive install base on other cellphones. The Pantec Ease I have in my pocket right now has a little coffee cup in the corner of the screen. Anyone care to guess what that is? This answer on Stack Overflow covers it pretty well too. Summary of SO answer: java is a known language, developers know it and don't have to learn it its harder to shoot yourself with java than with c, c++ code since it has no pointer arithmetic it runs in a vm, so no need to recompile it for every phone out there and easy to secure large number of developement tools for java (see first) several mobile phones already used java me, so java was known in the industry the speed difference is not an issue for most applications, if it were you should code in assembly | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140848",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42791/"
]
} |
140,856 | I'm looking for a solid but fast-paced entry into the field of JavaScript development. The following topics come to my mind: Javascript advanced concepts, OOP jQuery, jQuery-UI, jQuery-Mobile backbone.js node.js BDD and/or TDD The courses of http://www.codelesson.com seem promising. What certificates for Javascript developers exist/can be recommended? What other vendors can you recommend? | JavaScript certification is called github . It's called write modules, maintain modules, and share modules with the community, build popularity, etc. As a JavaScript employer I couldn't care less what certification you have; I care about either examples of github modules showing quality code or live websites/web applications that show high-quality code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140856",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21735/"
]
} |
140,925 | I was recently assigned reading from the Tanenbaum-Torvalds debates in my OS class. In the debates, Tanenbaum makes some predictions: Microkernels are the future x86 will die out and RISC architectures will dominate the market (5 years from then) everyone will be running a free GNU OS I was a one year old when the debates happened, so I lack historical intuition. Why have these predictions not panned out? It seems to me, that from Tanenbaum's perspective, they're pretty reasonable predictions of the future. What happened so that they didn't come to pass? | Microkernels are the future I think Linus hit the points on monolithic kernels in his debate. Certainly some lessons learned from microkernel research was applied to monolithic kernels. Microsoft sometimes used to claim that the Win32 kernel was a microkernel architecture. It's a bit of a stretch when you look at some textbook microkernels, but the claims had some technical justification. x86 will die out and RISC architectures will dominate the market If you back up from desktops and servers, RISC dominates the processor market by any measure. ARM (R stands for RISC) outsells x86 in number of processors, there are more ARM processors than x86 processors in use, and there is more total ARM computing capacity than x86 computing capacity. This year, a single ARM vendor (yeah, Apple) may outsell all x86 vendors combined. Only in the desktop and server space does x86 dominate. So long as Windows is the dominant platform for desktop computers and Linux for servers, this is likely to continue to be true for a while. There's a part b to this as well. Intel engineers did some amazing work to squeeze life out of their instruction set, even to the point of making a RISC core with an opcode translator that sits on top. Compare to one of the dominant RISC desktop chip makers, IBM, who could not get a power-efficient and high performance G5 for Apple laptops in a reasonable timeframe. (5 years from then) everyone will be running a free GNU OS I think the various OS vendors still offer compelling value propositions on their OSes. GNU isn't even necessarily the most important player in the Open Source community, so even a more widespread adoption of open source software didn't necessarily translate into GNU OSes. Yet, there's a whole lot of GNU stuff out there (all Macs ship with GNU's Bash , for example. There's probably some GNU system tools on Android phones). I think the computer ecosystem is a lot more diverse than Tanenbaum foresaw, even when you restrict your view to desktop computers. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140925",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50064/"
]
} |
140,992 | Is using dependency injection (DI) essential for unit testing? I can't think of another alternative for isolating code so it can be tested. Also, all the examples I have ever seen use this pattern. Is that because it is the only viable option or are there other alternatives? | DI makes unit testing much easier. But you can still write unit tests without DI. Lots of unit tests have been written already before DI became widespread. (Of course, some of these used techniques identical or very similar to DI without knowing it has a fancy name :-) I myself have used e.g. interfaces and factories a lot before learning about DI. The actual factory class name may have been read from a config file, or passed to the SUT as an argument. Another approach is using singletons (or globally accessible data in general). Yes, I know it is not recommended by many (including myself) in general. Still it may be viable in specific situations, especially if the singleton contains static configuration data which is not test case specific, but differs between production and test environment. Of course it has its known problems, so DI is superior if you can use it. But often (e.g. in legacy systems) you can't. Talking of which, Working Effectively With Legacy Code describes a lot of tricks to get legacy code covered by tests. Many of these are not nice, and aren't meant as a long term solution. But they allow you to create the first valuable unit tests to an otherwise untestable system... which enables you to start refactoring, and eventually (among others) introduce DI. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140992",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/29270/"
]
} |
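As a rough sketch of one of the pre-DI techniques the answer in entry 140,992 mentions -- a factory the tests can redirect -- with entirely hypothetical names of my own rather than anything from the original post:

```csharp
using System;

public interface IMailSender
{
    void Send(string to, string body);
}

public class SmtpMailSender : IMailSender
{
    public void Send(string to, string body) { /* talk to a real SMTP server */ }
}

// The factory is the seam: production code never creates the collaborator directly.
public static class MailSenderFactory
{
    // In production this delegate could be chosen from a config file, as the answer describes.
    public static Func<IMailSender> Create = () => new SmtpMailSender();
}

public class InvoiceService
{
    public void SendInvoice(string customerEmail)
    {
        IMailSender sender = MailSenderFactory.Create();   // no constructor injection involved
        sender.Send(customerEmail, "Your invoice is attached.");
    }
}

// In a unit test, the factory is pointed at a fake before exercising the service:
//   MailSenderFactory.Create = () => new FakeMailSender();
//   new InvoiceService().SendInvoice("customer@example.com");
```

This isolates the code under test much as constructor injection would, at the cost of mutable global state -- one reason the answer still calls DI superior where it is available.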
140,999 | I am reading Domain-Driven Design by Evans and I am at the part discussing the layered architecture. I just realized that application and domain layers are different and should be separate. In the project I am working on, they are kind of blended and I couldn't tell the difference until I read the book (and I can't say it's very clear to me now, really). My question is: since both of them concern the logic of the application and are supposed to be clean of technical and presentation aspects, what are the advantages of drawing a boundary between these two? | I recently read DDD myself. When I got to this section I was pleasantly surprised to find out I discovered the same 4-layer architecture that Evans did. As @lonelybug pointed out, the domain layer should be completely isolated from the rest of the system. However, something has to translate UI-specific values (query strings, POST data, session, etc.) into domain objects. This is where the application layer comes into play. Its job is to translate back and forth between the UI, the data layer and the domain, effectively hiding the domain from the rest of the system. I see a lot of ASP.NET MVC applications now where almost all the logic is in the controllers. This is a failed attempt to implement the classic 3-layer architecture. Controllers are difficult to unit test because they have so many UI-specific concerns. In fact, writing a controller so that it isn't directly concerned with "Http Context" values is a serious challenge in and of itself. Ideally, the controller should just perform translation, coordinate work and spit back the response. It can even make sense to do basic validation in the application layer. It's okay for the domain to assume the values going into it make sense (is this a valid ID for this customer and does this string represent a date/time). However, validation involving business logic (can I reserve a plane ticket in the past?) should be reserved for the domain layer. Martin Fowler actually comments on how flat most domain layers are these days . Even though most people don't even know what an application layer is, he finds that a lot of people make rather dumb domain objects and complex application layers that coordinate the work of the different domain objects. I'm guilty of this myself. The important thing isn't to build a layer because some book told you to. The idea is to identify responsibilities and separate out your code based on those responsibilities. In my case, the "application layer" kind of evolved naturally as I increased unit testing. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/140999",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8486/"
]
} |
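A minimal sketch of the split described in entry 140,999 above, with hypothetical names of my own (not from the book or the answer): the application layer translates UI-shaped input, does basic validation and coordinates a repository, while the business rule lives in the domain object.

```csharp
using System;

// Domain layer: business rules only, no knowledge of HTTP or persistence.
public class Flight
{
    public DateTime Departure { get; private set; }
    public Flight(DateTime departure) { Departure = departure; }

    public Reservation Reserve(DateTime now)
    {
        if (Departure < now)
            throw new InvalidOperationException("Cannot reserve a flight in the past."); // business rule
        return new Reservation(this);
    }
}

public class Reservation
{
    public Flight Flight { get; private set; }
    public Reservation(Flight flight) { Flight = flight; }
}

public interface IFlightRepository { Flight Get(int id); }

// Application layer: translation, basic validation, and coordination.
public class ReservationAppService
{
    private readonly IFlightRepository _flights;
    public ReservationAppService(IFlightRepository flights) { _flights = flights; }

    // Accepts a raw, UI-shaped value (e.g. from a query string).
    public Reservation Reserve(string flightId)
    {
        int id;
        if (!int.TryParse(flightId, out id))
            throw new ArgumentException("flightId must be a number.");   // basic validation here

        Flight flight = _flights.Get(id);        // coordinate data access
        return flight.Reserve(DateTime.UtcNow);  // business logic stays in the domain
    }
}
```

The application service is what a controller would call; it stays thin and testable, and the "can I reserve a flight in the past?" rule is enforced in exactly one place.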
141,005 | How would one know if the code one has created is easily readable, understandable, and maintainable? Of course from the author's point of view, the code is readable and maintainable, because the author wrote it and edited it, to begin with. However, there must be an objective and quantifiable standard by which our profession can measure code. These goals are met when one may do the following with the code without the expert advice of the original author: It is possible to read the code and understand at a basic level the flow of logic. It is possible to understand at a deeper level what the code is doing to include inputs, outputs, and algorithms. Other developers can make meaningful changes to the original code such as bug fixes or refactoring. One can write new code such as a class or module that leverages the original code. How do we quantify or measure code quality so that we know it readable, understandable, and maintainable? | Your peer tells you after reviewing the code. You cannot determine this yourself easily because as the author, you know more than the code says by itself. A computer cannot tell you, for the same reasons that it cannot tell if a painting is art or not. Hence, you need another human - capable of maintaining the software - to look at what you have written and give his or her opinion. The formal name of said process is peer review . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141005",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48185/"
]
} |
141,019 | I'm specifically interested in how users perform authorized / authenticated operations on a web API. Are authentication cookies compatible with the REST philosophy, and why? | An ideal ReSTful service allows clients (which may not be in-browser) to perform any needed task in one request ; because the full state needed to do that is held by the client, not the server. Since the client has full control of the state, it can create the state on its own (if that is legitimate), and only talk to the API to "get 'er done". Requiring cookies can make that difficult. For clients besides browsers, managing cookies is a pretty big inconvenience compared to query params, plain request headers or the request body. On the other hand, In browser, using cookies can make lots of things much simpler. So an API might first look in the Authorization header for the authentication data it needs, since that's probably the place where non-browser clients will prefer to put it, but to simplify and streamline browser-based clients, it might also check for a session cookie for server side log in, but only if the regular Authorization header was missing. Another example might be a complex request that normally requires lots of parameters set. A non interactive client would have no trouble jamming all of that data into one request, but a HTML form based interface might prefer to break the request into several pages (something like a set of 'wizard' pages) so that users aren't presented with options that are not applicable based on previous selections. All of the intermediate pages could store the values in client side cookies, so that only the very last page, where the user actually submits the request, has any server side effect at all. The API could look for the needed attributes in the request body, and fall back to looking at cookies if the needed parameters weren't there. Edit: in RE to @Konrad's comment below: Tokens in comparison are harder to implement especially because you can't easily invalidate the token without storing them somewhere. er... you are validating the cookies on the server side, right? Just because you told the browser to discard a cookie after 24 hours doesn't mean it will. That cookie could be saved by a highly technical user and reused long after it has "expired". If you don't want to store session data on the server side, you should store it in the token (cookie or otherwise). A self contained auth token is sometimes called a Macaroon. How this is passed between client and server (whether by cookie, as extra headers, or in the request entity itself) is totally independent of the authentication mechanism itself. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141019",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36147/"
]
} |
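A framework-neutral sketch (my own, with made-up names, not code from the answer) of the fallback order suggested in entry 141,019 above: look at the Authorization header first, and only fall back to a session cookie when the header is absent.

```csharp
using System.Collections.Generic;

public static class RequestAuth
{
    // Returns the raw credential string the rest of the API should validate, or null for a 401.
    public static string GetCredentials(
        IDictionary<string, string> headers,
        IDictionary<string, string> cookies)
    {
        string auth;
        if (headers.TryGetValue("Authorization", out auth) && !string.IsNullOrEmpty(auth))
            return auth;        // non-browser clients normally take this path

        string session;
        if (cookies.TryGetValue("session", out session) && !string.IsNullOrEmpty(session))
            return session;     // browser clients relying on a login cookie take this one

        return null;            // no credentials presented
    }
}
```

Either way, as the answer stresses, whatever token arrives must still be validated server-side; the transport (header or cookie) is independent of the authentication mechanism itself.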
141,093 | I need to store questions and answers in a database. The questions will be one to two sentences, but the answers will be long, at least a paragraph, likely more. The only way I know about to do this right now is an SQL database. However, I don't feel like this is a good solution because as far as I've seen, these databases aren't used for data of this type or size. Is this the correct way to go or is there a better way to store this data? Is there a better way than storing raw strings? | Mongodb is great, but you know SQL. There is nothing wrong with storing long answers in fields. You can store images or even files in SQL. I think the max field size is 2gb. I'm almost positive this answer itself is being stored in a table field somewhere. As for there being thousands of them, no problem. Even millions shouldn't be an issue. You might consider utilizing full text indexing if you're searching the field for keywords or something. But I try not to optimize till I see a problem. Computers are cheap, storage is basically free. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141093",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/38478/"
]
} |
141,175 | I have seen many people around complaining about verbosity in programming languages. I find that, within some bounds, the more verbose a programming language is, the better it is to understand. I think that verbosity also reinforces writing clearer API s for that particular language. The only disadvantage I can think of is that it makes you type more, but I mean, most people use IDEs which do all the work for you. So, What are the possible downsides to a verbose programming language? | The goal is quick comprehension "Verbose" means "uses too many words". The question is what "too many" is. Good code should be easy to comprehend at a glance. This is easier if most of the characters directly serve the purpose of the code . Signal vs Noise If a language is verbose, more of your code is noise. Compare Java's "Hello World" : class HelloWorldApp {
public static void main(String[] args) {
System.out.println("Hello World!");
}
} ... with Ruby's: print "Hello World!" Noise wastes mental energy. Cryptic vs Clear On the other hand, excessive terseness in a language also costs mental energy. Compare these two examples from Common Lisp : (car '(1 2 3)) # 3 characters whose meaning must be memorized
# vs
(first '(1 2 3)) # 5 characters whose meaning is obvious | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141175",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/21152/"
]
} |
141,189 | I study computer science and I have a class called "Programming Techniques". Its purpose is to teach (us) good object oriented design principles. During the semester we have homeworks, programs that we must write to demonstrate what we've learned. The lab assistant demands for each of these homeworks that specific design patterns should be used. For example, the current homework is an application used for processing customer orders. We are demanded to use either "Factory Method" or "Abstract Factory" design patterns for this. It gets even worse: at the end of the semester we must write a program (something more complex) that must use at least one creational pattern, at least one structural pattern and at least one behavioural pattern. Is it normal to demand this ? I mean, forcing us to design our programs in such a way that a specific design pattern makes sense is just beyond what I consider ok. If I'm a car mechanic and have a huge tool box, then I will use a certain tool from that box if and when the situation demands it. Not more, not less. If my design of the application doesn't demand at all the use of "Abstract Factory" (for example), then why should I implement it ? I'm not sure yet if the senior lecturer agrees with what the lab assistant is demanding, but I want to talk to him about it and I need solid arguments to do so. How should I approach this problem with him ? PS: I'm sure there must be a better way to teach us these things. Maybe making us each week read about 3 design patterns and the next week giving us a test with small but specific programming or architectural situations/problems. The goal in that test would be to identify what design patterns would make sense and how they could be implemented. This way, he can see if we understand them. EDIT: These homeworks are not just 100-line programs, they have quite a lot of requirements and are fairly complicated. This is the reason we have about 2 - 3 weeks of deadline for each of them. I agree that practicing this is the best way to learn. But shouldn't smaller programs/applications be used for this ? Something just for demonstrating purposes. Not big programs with lots of requirements/classes/etc. | Sorry, but I think your teachers are right. If you were developing software for a customer, and the customer or your boss requires you to use specific design patterns, I would definitely say that that was a big mistake. But there is a difference between class assignments and software development for a customer: both serve completely different purposes: If you are developing for a customer, the purpose is the software you create. It should work, it should be maintainable, it should satisfy your customer's requirements. The purpose of your assignments is not the software you create, it is the experience you gather during the whole creation process. During the assignment you will learn things about implementing patterns, which will help you understand how the patterns work, when to use a pattern, and (hopefully) when not to use it, as well. To use your car mechanic analogy: You are not a mechanic, you are learning how to become one. And maybe your teacher will tell you which wrench to use, in order to change the tires. That is completely ok. And if the owner of the garage tells you to change the tires with a screwdriver, because he wants you to learn something from the failure that will certainly occur, that is ok, too. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141189",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46212/"
]
} |
141,232 | (I'm not sure if it's an appropriate question here) Shell scripts, like those written in bash , can do many things. They can call Unix programs, pipe their output, redirect I/O from/to files, control flow, check whether a file exists, etc. But a modern programming language, e.g., python and ruby , can also do these things. And, they are (I think) more readable and maintainable. bash enjoys widespread adoption. But many distributions have a python interpreter installed, too. So what's the advantage of a shell script? If I could write python , ruby or perl , is it worth learning bash ? | Shells have specialized features for working with files and getting data from one program into another (assuming that data is text). For those tasks, shell scripts can be less cumbersome than a scripting language like Python. Shell scripting also has the advantage that the commands you use are basically the same commands you'd use from the command line -- so if you can do something in the shell, you're more than halfway to scripting the same operation. Here for example is a bash script that moves all the PNG files from the current directory to a specified directory. #!/usr/bin/sh
mv *.png $1 Here's a Python version. #!/usr/bin/python
import sys, shutil, glob
for filename in glob.iglob("./*.png"):
shutil.move(filename, sys.argv[1]) You'll notice: The bash script is a third as long as the Python if you count lines (excluding the shebang line) -- even less by character count. In fact, you probably wouldn't even write a script for it, you'd just create an alias. The Python script requires three libraries to be imported, while everything you need for this task is natively available in bash. The Python script requires an explicit loop to move the files, whereas that is part of the semantics of the mv command in bash. The bash script can run faster -- you'll probably invoke it from bash, and you can use source to run it in the same instance of the shell. glob.iglob("./*.png") is quite a mouthful just to say *.png If you wanted to write a basic pipe operation in Python, you would be astounded at the verbosity. (Of course, some things, like piping through grep , can be replaced by Python code rather than using an external program, so you often don't need to pipe quite as much.) As a counterexample, I once had to write a routine that checked to see how long each of the filenames were in a particular directory. If they were longer than supported by a particular OS, they had to be shortened. This could result in duplicate filenames, which I needed to rectify, and since they would be linked from a Web page, the shortened names needed to be stable, i.e., they should be generated in such way that the same long filename would always result in the same shortened filename. I did this by generating a hex md5 of the long filename and appending the first four characters of that to the shortened name (names could still collide, but it was very unilkely, so I just checked for that condition and bailed if it should happen). It also had to record the rename operation so a batch search-and-replace could later be done on the files to fix the links between them (I wrote out a sed command file and passed that to sed for each file). I did this in bash because it was part of our build system which was already written in bash. It was exactly as hard to get right as you are probably thinking. It would have taken a lot less time to write in Python and probably would have been clearer, too. In short: different languages are designed for different kinds of tasks; choose the language available to you that is best suited to the task at hand. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141232",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50355/"
]
} |
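To make the pipe-verbosity point from the answer above concrete, here is a rough sketch of a one-line shell pipeline rewritten with Python's standard subprocess module. The specific commands (ps, grep) are only placeholders and assume a Unix-like system where those programs are installed; the interesting part is the amount of ceremony, not the pipeline itself.

import subprocess

# Shell version: ps aux | grep python
ps = subprocess.Popen(["ps", "aux"], stdout=subprocess.PIPE)
grep = subprocess.Popen(["grep", "python"],
                        stdin=ps.stdout, stdout=subprocess.PIPE, text=True)
ps.stdout.close()                 # so ps gets SIGPIPE if grep exits first
output, _ = grep.communicate()    # read grep's output and wait for it to finish
print(output)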
141,317 | Discounting subtly different semantics due to ADL, how should I generally use using, and why? Is it situation-dependent (e.g. a header which will be #included vs. a source file which won't)? Also, should I prefer ::std:: or std::? Namespace-level using namespace : using namespace std;
pair<string::const_iterator, string::const_iterator>
f(const string &s) {
return make_pair(s.begin(), s.end());
} Being fully explicit: std::pair<std::string::const_iterator, std::string::const_iterator>
f(const std::string &s) {
return std::make_pair(s.begin(), s.end());
} Namespace-level using-declarations: using std::pair;
using std::string;
pair<string::const_iterator, string::const_iterator>
f(const string &s) {
return make_pair(s.begin(), s.end());
} Function-local using-declarations: std::pair<std::string::const_iterator, std::string::const_iterator>
f(const std::string &s) {
using std::make_pair;
return make_pair(s.begin(), s.end());
} Function-local using namespace : std::pair<std::string::const_iterator, std::string::const_iterator>
f(const std::string &s) {
using namespace std;
return make_pair(s.begin(), s.end());
} Something else? This is assuming pre-C++14, and thus no return-type-deduction using auto. | Avoid using using in headers, because that breaks the purpose of namespaces. It is OK to use it in source files, but I would still avoid it in some cases (for example, using namespace std). However, if you have nested namespaces, it's OK: namespace A {
namespace B {
namespace C {
class s;
} // C
} // B
namespace D {
using B::C::s;
} // D
} // A | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141317",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11833/"
]
} |
141,329 | I code a lot in both C and C++, but did not expect C to be the second most popular language, slightly behind Java. TIOBE Programming Community Index I'm curious as to why, in this age of OOP, C is still so popular? Note that 4 out of the top 5 popular programming languages are "modern", object-oriented capable languages. Now, I agree that you can use OOP in C to some extent, but it's sort of painful and inelegant (well at least compared to C++ I guess). So, what makes C so popular? Is it efficiency; being low-level; the vast majority of libraries that already exist or something else? | A few factors that contribute: C is ubiquitous. Whatever the platform, C is probably available. C is portable. Write a piece of clean C, and it compiles with minimal modifications on other platforms - sometimes it even works out-of-the-box. C has been around for a while. Back in the days when UNIX conquered the world, C (the UNIX programming language of choice) shared in its world domination, and became the lingua franca of the programming world. Any serious programmer can be expected to at least make some sense of a chunk of C; the same can't be said about most other languages. C is still the default language for UNIX and UNIX-flavored systems. If you want a library to succeed in open-source land, you need fairly good reasons not to use C. This is partially due to tradition, but more because C is the only language you can safely assume to be supported on any UNIX-like system. Writing your library in C means you can minimize dependencies. C is simple. It lacks the expressivity of sophisticated OOP or functional languages, but its simplicity means it can be picked up quickly. C is versatile. It is suitable for embedded systems, device drivers, OS kernels, small command-line utilities, large desktop applications, DBMS's, implementing other programming languages, and pretty much anything else you can think of. C is fast. Most C implementations compile directly to machine code, and the programmer has full power over what happens at the machine level. There is no interpreter, no JIT compiler, no VM or runtime - just the code, a compiler, a linker, and the bare metal. C is 'free' (in both the beer and the speech sense). There is no single company that owns and controls the standard, there are several implementations to choose from, there are no copyright, patenting or trademark issues for using C, and some of the best implementations are open-source. C has a lot of momentum going. The language has been popular for decades, so there is an enormous amount of applications, libraries, tools, and most of all, communities, to support the language. C is mature. The last standard that introduced big changes is C99, and it is mostly backwards-compatible with previous standards. Unlike newer languages (say, Python), you don't have to worry about breaking changes anytime soon. C is compatible. Most languages have bindings to talk to C. This means one can develop a library in C using standard calling conventions, and feel confident that almost any other language can link against that library. To name a few popular languages in widespread use: C#, Java, Perl, Python, PHP can all link against C libraries without much trouble. C is powerful: if the language cannot do something, all popular compilers allow embedding assembler code which can do anything the hardware can do. 
Transitively combined with the above point about compatibility, this means C can act as a liaison between higher level languages and the "bare metal" of assembly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141329",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50429/"
]
} |
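As a small, hedged illustration of the compatibility point in the answer above (most languages can call straight into a C library), this Python sketch uses the standard ctypes module to call the C math library's cos function. It assumes a platform where ctypes.util.find_library can locate the math library; the library name differs between operating systems.

import ctypes
import ctypes.util

# Locate and load the platform's C math library (name varies by OS).
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the C signature of cos(double) so ctypes converts arguments correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))   # calls the C function directly; prints 1.0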
141,410 | I am planning to build a RESTful API, but there are some architectural questions that are creating problems in my head. Adding backend business logic to clients is an option I would like to avoid, since updating multiple client platforms is hard to maintain in real time when business logic can change rapidly. Let's say we have article as a resource ( api/article ); how should we implement actions like publish, unpublish, activate or deactivate, while keeping it as simple as possible? 1) Should we use api/article/{id}/{action} , since a lot of backend logic can happen there, like pushing to remote locations or changing multiple properties? Probably the hardest thing here is that we would need to send all the article data back to the API for updating, and multi-user work could not be implemented. For instance, an editor could send data that is 5 seconds old and overwrite a fix that another journalist made 2 seconds ago, and there is no way I could explain that to clients, since publishing an article is really not connected to updating its content. 2) Creating a new resource can also be an option, api/article-{action}/id , but then the returned resource would not be article-{action} but article , and I am not sure whether that is proper. Also, in the server-side code the article class handles the actual work for both resources, and I'm not sure whether this goes against RESTful thinking. Any suggestions are welcome. | I find the practices described here to be helpful: What about actions that don't fit into the world of CRUD operations? This is where things can get fuzzy. There are a number of approaches: Restructure the action to appear like a field of a resource. This
works if the action doesn't take parameters. For example an activate action could be mapped to a boolean activated field and updated via
a PATCH to the resource. Treat it like a sub-resource with RESTful
principles. For example, GitHub's API lets you star a gist with PUT /gists/:id/star and unstar with DELETE /gists/:id/star . Sometimes you really have no way to map the action to a sensible RESTful
structure. For example, a multi-resource search doesn't really make
sense to be applied to a specific resource's endpoint. In this case, /search would make the most sense even though it isn't a resource.
This is OK - just do what's right from the perspective of the API
consumer and make sure it's documented clearly to avoid confusion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141410",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/27605/"
]
} |
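A minimal sketch of how the approaches quoted in the answer above could look as actual routes for the article example, written in Python with Flask purely for illustration; the framework choice, URL shapes and handler bodies are assumptions, not something given in the question.

from flask import Flask, jsonify

app = Flask(__name__)

# Approach 1: expose "published" as a field and flip it with a PATCH.
@app.route("/articles/<int:article_id>", methods=["PATCH"])
def patch_article(article_id):
    # ...update only the fields present in the request body...
    return jsonify({"id": article_id, "published": True})

# Approach 2: treat the action as a sub-resource, GitHub-star style.
@app.route("/articles/<int:article_id>/publication", methods=["PUT"])
def publish(article_id):
    # ...run whatever backend logic publishing requires...
    return "", 204

@app.route("/articles/<int:article_id>/publication", methods=["DELETE"])
def unpublish(article_id):
    # ...reverse it...
    return "", 204

if __name__ == "__main__":
    app.run()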
141,411 | I don't understand the difference between "normal" software and enterprise software. Even after reading these... "Enterprise Software" on Wikipedia "Enterprise Software Is Sexy Again" on Techcrunch "The Great Enterprise Software Swindle" on Coding Horror I can't really wrap my head around the real differences. Is there any difference at all between the two? Why do people say enterprise software sucks? | In short words, normal software would be the software made with individuals in mind, i.e. retail software or web applications targeting the general populace. Its success depends on how well it is received by users who in most part are offered a ready-made, 'standard issue' product. The development is an investment and the revenue comes from individual product or ad space sales. On the other hand, enterprise software would be the software commissioned or developed internally by companies, either tailor-made from scratch or purchased from a third-party vendor and heavily customized for the company's business process. The reason people say enterprise software sucks? I'd say there are three main reasons, heavily interconnected: People who pay for it aren't the ones to use it - upper management / IT department makes the decisions. Ideally, they would consult the future users and make it imperative to adhere to what they have to say. The bad reputation comes from the cases where this is not so. Systems like this are one-of-a-kind - retail software has to be well tested before putting it out on market, as the bugs may make it or break it in the view of its target users. Furthermore, it is used by dozens of people around the world on different machines, with different usage patterns, and so on, resulting in feedback useful for future releases. Enterprise software suffers from a small user base consisting mainly of people who have no choice but to use it regardless of user experience. As a result, less focus on user experience and less feedback to be gained from users (and often, no channel for such feedback when the work is being outsourced). Companies who use the software are not software companies - they are using the software, it's critical for the business they are conducting, however it is secondary to their business objectives. As such, enterprise software will suffer from unreasonable deadlines, resources being under-allocated and being deemed 'good enough' while still being incomplete or under-tested. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141411",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
141,522 | I don't know whether this question fits better on this site or on Stack Overflow, but I'm asking here because my question relates to general practices rather than to one specific problem. So, consider an object that does something, and this something can (but should not!) go wrong. This situation can be resolved in two ways: first, with exceptions: DoSomethingClass exampleObject = new DoSomethingClass();
try
{
exampleObject.DoSomething();
}
catch (ThisCanGoWrongException ex)
{
[...]
} And second, with if statement: DoSomethingClass exampleObject = new DoSomethingClass();
if(!exampleObject.DoSomething())
{
[...]
} Second case in more sophisticated way: DoSomethingClass exampleObject = new DoSomethingClass();
ErrorHandler error = exampleObject.DoSomething();
if (error.HasError)
{
if(error.ErrorType == ErrorType.DivideByPotato)
{
[...]
}
} Which way is better? On one hand, I heard that exceptions should be used only for real unexpected situations, and if the programmer knows that something may happen, they should use if/else . On the other hand, Robert C. Martin in his book Clean Code wrote: "exceptions are far more object oriented, and more simple to keep clean." | Typically, I tend to use exceptions for exceptional cases - the instance where nothing should go wrong, and if it does go wrong, that's a big problem. An example might be a missing configuration file that ships with the software or a missing hardware driver. If these things happen, that's a huge problem. I will perform checks on things like user input or optional instances where there's a good chance that I'll be given invalid information. However, it should also be noted that some languages appear to favor exceptions. For example, I've noticed that the use of exceptions for even minor problems is much more common in Python. In the Python community, this is known as EAFP - "Easier to ask forgiveness than permission" . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141522",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48334/"
]
} |
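Since the answer above mentions Python's EAFP style, here is a small side-by-side sketch of the two approaches from the question, in Python and with a made-up file name:

import os

path = "settings.ini"

# LBYL ("look before you leap"): check first, then act.
if os.path.exists(path):          # the check and the use can still race
    with open(path) as f:
        data = f.read()
else:
    data = ""

# EAFP ("easier to ask forgiveness than permission"): just act, handle the failure.
try:
    with open(path) as f:
        data = f.read()
except FileNotFoundError:         # the exceptional case is handled in one place
    data = ""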
141,554 | Possible Duplicate: How could the first C++ compiler be written in C++? I know my question goes to the underground galaxy cave where languages are born and involves some lambda math and light-years of google-studying. But what kind of knowledge is necessary to create a language? | Look up "bootstrapping". Basically you start with a very minimal process/set of functions that can be used to compile the code that defines a slightly more functional compiler. This creates your next compiler, which can then be used to build code that can do even more. You repeat this process until you have a full-blown compiler that can compile all the language features. The other alternative is to write the first version of the compiler in a different language and then write the next version in your target language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141554",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37323/"
]
} |
141,698 | Our project uses a user-specific configuration file. This file is currently not in version control, since it is different for each user. The problem is, whenever a developer adds a new module that requires configuration, or changes the name of an existing module, the other developers get errors because their private configuration files are not updated. To solve the problem, we thought of working with two configuration files: a default/global configuration file that will be in version control and will be updated regularly by each developer that adds a new module, and a private configuration file that will be kept out of version control and will contain only the user-specific changes. However, this still seems like an ad-hoc solution. Can you propose a better solution? What do the professionals do? | Though you already got some good answers here, most of them miss the root cause of your problem: your user config files seem to contain more than just user-specific information, they also contain (perhaps redundant) information which is under version control somewhere else, probably in different files, like module names. I can think of two possible solutions here: (1) try to separate that information rigorously. For example, don't use any module names in your user config. Use id numbers (for example, GUIDs) to refer to modules, and let those id numbers never change after they have been assigned to a module. Of course, that probably has the drawback that your user config files lose some of the simplicity they have now. You will perhaps need to create a GUI tool to edit your config files instead of using a plain text editor. (2) give your config file format a version number, and whenever something like a module name is changed, assign the format a new version number. Then you can provide an upgrade script which checks the version numbers, and if the config file is not up-to-date, it changes all module names it finds within the file and increases the version number afterwards. This can be automated, so the process of upgrading won't disturb your teammates in their daily work. EDIT: after reading your posting again, I think your proposed solution is reasonable, as long as new modules are just added, but not renamed. What I wrote above will allow you to change module names or the structure of the configuration of existing modules afterwards. But if you don't need that, I would stick to the simplest solution. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141698",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50606/"
]
} |
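A very small sketch of the "version number plus upgrade script" idea from the answer above, written in Python; the JSON layout, the module names and the renames table are all invented for illustration — it only assumes a config file with a "version" field and a "modules" mapping.

import json

CURRENT_VERSION = 3

# Module renames introduced by each bump of the config format (names invented).
RENAMES = {
    2: {"reporting": "reports"},
    3: {"auth": "authentication"},
}

def upgrade(path):
    with open(path) as f:
        config = json.load(f)

    version = config.get("version", 1)
    while version < CURRENT_VERSION:
        version += 1
        for old, new in RENAMES.get(version, {}).items():
            if old in config["modules"]:
                config["modules"][new] = config["modules"].pop(old)
        config["version"] = version

    with open(path, "w") as f:
        json.dump(config, f, indent=2)

if __name__ == "__main__":
    upgrade("user-config.json")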
141,818 | I am a freelancer and work as a single developer. In all cases, I am the person wearing different hats. I am the Scrum Master and I am the only person in the development. What I am not is the Product Owner? Are there any Agile Methods and free tools that a freelancer can make use of? | The goal of Agile is to... minimize all feedback loops as much as possible minimize project overhead in terms of documents/forms produced and replace it with much higher bandwidth medium, which is real-time (preferably face-to-face) communications among team members. If you do more research about agile, a lot of it will talk about teams and the interaction between members. That's because with every Nth member, number of lines of communications within the goes up by N. As N goes up, you face additional challenges such as training/knowledge dissemination, coordination of effort... A lot of the problems that an agile team would face, you would definitely not need to worry about, but the main area where I would focus is (1) - minimizing feedback loops. Even as a single developer this comes in very handy because the sooner you identify problems (whether it is requirements, design or implementation) the less expensive it will be to fix those problems. This is what I would do: Make it a point to meet with your customers on regular basis and make sure they... know exactly what you've done and provide you feedback as soon as possible know what you are working on next and confirm those are the features they actually want given what they already see as being completed Spend some time on ironing out deployment/packaging. A little of upfront cost will ensure that with one button click you can roll out a fully deployable version of your application which can be instantly tested. I'm not sure how much continuous integration server would benefit a one-person team - curious what other's would say about that. You can manage your product backlog much the same way agile teams do it. However, since you are working by yourself, I would skip using any type of fancy software and even whiteboard with posted notes and go right to a text file (maybe excel sheet) that lists work items. Prioritize them and regularly review them to make sure that you are not missing anything and that the important things are taken care of first. If you want to try something a little fancier, Trello , might be a good option Consider moving over to TDD. A year ago I was a bit skeptical and considered TDD only useful for low-level utilities that have no dependencies on anything else. But in the last year, I've tried the approach several times and each time was extremely surprised at the results. Yes, it is a hefty chunk of upfront cost, but it seems that the benefits from having your code easily testable/verifiable ends up paying you back crazy fast. TDD will instantly tell you if you made a mistake in the code and you can easily verify that your classes/functions behave properly in boundary conditions. Alternative is typically to "do your best", but you end up finding bugs during testing and then you end up spending hours isolating and debugging the problem. With unit tests in place, you won't be nearly as scared about refactoring and significantly altering the design as new features are added. I noticed without TDD, most people tend to be scared of touching code that's already been tested (and rightfully so). 
In a lot of cases, they keep adding on more and more, while afraid to alter what's already there, and the code ends up looking like a legacy spaghetti mess even before v1.0 is shipped. Having tests in place and being able to refactor at will also gives you the advantage that you don't need to stress as much about getting the design right the first time, since it becomes that much easier to alter the design as needed later on. Carry out your own retrospectives. One of the key messages of agile is that change should be welcomed and that we should never settle into a rhythm where the same thing is being done over and over. Periodically evaluate what you are doing and try to identify areas where you feel a different approach could make you more efficient. Don't be afraid to adjust the process (or sometimes drastically change something, if for no other reason than just to see what would happen) and tailor your practices to your specific situation. The advice I provided is based on the type of work I've done, which mainly involves working on an application sold as a product. We work on the same code base and keep extending/improving the same application with each new release. Advice might be different if your primary source of income is coding as a service (i.e. write some code, sell it, never see it again). The impression I'm getting is that developers in the services area tend to skip TDD and many other practices since extendable/maintainable code is not their priority. Instead, they focus on time to completion (i.e. minimizing costs) and making sure that v1.0 (which is the first and last version) passes the customer's acceptance criteria. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141818",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6001/"
]
} |
141,834 | C has pointers and Java has what is called references. They have some things in common in the sense that they all point to something. I know that pointers in C store the addresses they point to. Do reference also store the address? How they are different except that pointer is more flexible and error-prone? | References might be implemented by storing the address. Usually Java references will be implemented as pointers, but that's not required by the specification. They may be using an additional layer of indirection to enable easier garbage collection. But in the end it will (almost always) boil down to (C-style) pointers being involved in the implementation of (Java-style) references. You can't do pointer arithmetic with references. The most important difference between a pointer in C and a reference in Java is that you can't actually get to (and manipulate) the underlying value of a reference in Java. In other words: you can't do pointer arithmetic. In C you can add something to a pointer (i.e. the address) or substract something to point to things that are "nearby" or point to places that are at any place. In Java, a reference points to one thing and that thing only. You can make a variable hold a different reference, but you can't just ask it to point to "the thing after the original thing". References are strongly typed. Another difference is that the type of a reference is much more strictly controlled in Java than the type of a pointer is in C. In C you can have an int* and cast it to a char* and just re-interpret the memory at that location. That re-interpretation doesn't work in Java: you can only interpret the object at the other end of the reference as something that it already is (i.e. you can cast a Object reference to String reference only if the object pointed to is actually a String ). Those differences make C pointers more powerful, but also more dangerous. Both of those possibilities (pointer arithmetic and re-interpreting the values being pointed to) add flexibility to C and are the source of some of the power of the language. But they are also big sources of problems, because if used incorrectly they can easily break assumptions that your code is built around. And it's pretty easy to use them incorrectly. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141834",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48752/"
]
} |
141,854 | Being an IT student, I was recently given some overview about design patterns by one of our teachers. I understood what they are for, but some aspects still keep bugging me. Are they really used by the majority of programmers? Speaking of experience, I've had some troubles while programming, things I could not solve for a while, but Google and some hours of research solved my problem. If somewhere in the web I find a way to solve my problem, is this a design pattern? Am I using it? And also, do you (programmers) find yourself looking for patterns (where am I supposed to look, by the way?) when you start the development? If so, this is certainly a habit that I must start to embrace. UPDATE: I think when I ask if programmers do use them, I'm asking if when you have a problem to solve you think "Oh, I should use that pattern". | When I was a novice programmer, I loved design patterns. I didn't just use design patterns. I inflicted them. Wherever and whenever I could. I was merciless. Aha! Observer pattern! Take that! Use a listener! Proxy! AbstractFactory! Why use one layer of abstraction when five will do? I've spoken to many experienced programmers and found that just about everyone who reads the GoF Book goes through this stage. Novice programmers don't use design patterns. They abuse design patterns. More recently, I find that keeping principles like the Single Responsibility Principle in mind, and writing tests first, help the patterns to emerge in a more pragmatic way. When I recognise the patterns, I can continue to progress them more easily. I recognise them, but I no longer try to force them on code. If a Visitor pattern emerges it's probably because I've refactored out duplication, not because I thought ahead of time about the similarities of rendering a tree versus adding up its values. Experienced programmers don't use design patterns. Design patterns use them. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141854",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31465/"
]
} |
141,860 | Possible Duplicate: Google is good or bad for programmer? I'm currently in college to be a software engineer, and one of the main principles taught to us is how to learn for ourselves, and how to search the web when we have a doubt.
This leads to a proactive attitude - when I need something, I go get it. Recently, I started wondering how much development I would be able to do without internet access, and the answer bugged me quite a bit. I know the concepts of the languages and how to use them, but I was amazed by how "slow" things were without Google to help with the development. Most of the problems I have are related to specific syntax. For example, reading and writing to a file in Java. I have done this about a dozen times in my life, yet every time I need to do it, I end up googling "read file java" and refreshing my memory. I completely understand the code and fully understand what it does, but I am sure that without Google it would take me a few tries to get the code correct. Is this normal? Should I be worried and try to change something in my programming behaviour? | Working without internet is a skill, and you don't have it - nor do most developers. The thing you must ask yourself before worrying is whether you need this skill in your life. Chances are you don't, because at every developer job with a Joel Test score higher than zero, you will have fast internet access. Then, you have a choice: either you spend years learning how to work effectively without internet, without an IDE (just a text editor with no syntax highlighting) and even without a high-level operating system (i.e. only in console mode), or you spend the same time learning things you will really use in your daily work. The first will help you a lot in the case of some post-apocalyptic world where there are no IDEs, no operating systems and no internet, and where most developers would be unable to continue their careers because of the lack of skills. The second helps you today, in the world you live in right now, and very probably in the near future. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141860",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31465/"
]
} |
141,917 | A bug was opened, fixed, verified and closed. A month later, it showed up again in a subsequent version after several iterations without any regression. Provided the bug characteristics are the same, would you reopen the existing bug ID or open a new one with a link to the closed bug? | Characteristics do not equal causes. The new bug could have a different underlying reason, even though it appears to be the same. So, open a new bug and point it to the old one to help the developer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141917",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47399/"
]
} |
141,973 | My organization is considering moving from SVN to Git. One argument against moving is as follows: How do we do versioning? We have an SDK distribution based on the NetBeans Platform. As the SVN revisions are simple numbers we can use them to extend the version numbers of our plugins and SDK builds. How do we handle this when we move to Git? Possible solutions: Using the build number from Hudson (Problem: you have to check Hudson to correlate that to an actual Git version) Manually upping the version for nightly and stable (Problem: Learning curve, human error) If someone else has encountered a similar problem and solved it, we'd love to hear how. | Use tags to mark commits with version numbers: git tag -a v2.5 -m 'Version 2.5' Push tags upstream—this is not done by default: git push --tags Then use the describe command: git describe --tags --long This gives you a string of the format: v2.5-0-gdeadbee
^ ^ ^^
| | ||
| | |'-- SHA of HEAD (first seven chars)
| | '-- "g" is for git
| '---- number of commits since last tag
|
'--------- last tag | {
"source": [
"https://softwareengineering.stackexchange.com/questions/141973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50715/"
]
} |
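One possible way to feed the git describe output from the answer above into a build, sketched in Python; the version-string format and the output file name are assumptions, and the script must run inside a clone that has at least one tag:

import subprocess

def build_version():
    # e.g. "v2.5-14-gdeadbee": last tag, commits since that tag, abbreviated SHA
    described = subprocess.check_output(
        ["git", "describe", "--tags", "--long"], text=True
    ).strip()
    tag, commits, sha = described.rsplit("-", 2)
    return "{}.{}+{}".format(tag.lstrip("v"), commits, sha)

if __name__ == "__main__":
    with open("version.txt", "w") as f:
        f.write(build_version())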
142,048 | Sometimes I find myself in situations when the part of code that I am writing is (or seems to be ) so self-evident that its name would be basically repeated as a comment: class Example
{
/// <summary>
/// The location of the update.
/// </summary>
public Uri UpdateLocation { get; set; }
} (C# example, but please refer to the question as language-agnostic). A comment like that is useless; what am I doing wrong? Is it the choice of the name that is wrong? How could I comment parts like this better? Should I just skip the comment for things like this? | Comments should describe the code, not duplicate it. This header comment just duplicates. Leave it out. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142048",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/3530/"
]
} |
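To make the contrast in the answer above a bit more tangible, here is a hedged sketch — in Python rather than C#, with invented names and constraints: the first comment only repeats the property name and can be dropped, while the second records something the code cannot say on its own.

class Example:
    # Redundant: it just restates the name. Leave it out.
    @property
    def update_location(self):
        """The location of the update."""
        return self._update_location

    # Worth keeping: it records a constraint a reader could not guess from the name.
    @property
    def mirror_location(self):
        """Fallback URL used only when the primary host rejects the request;
        must stay on HTTPS because the installer refuses plain HTTP."""
        return self._mirror_location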
142,065 | At the moment I create a database connection when my web page is first loaded. I then process the page and run any queries against that connection. Is this the best way to do it, or should I be creating a database connection each time I run a query? P.S. It makes more sense to me to create one connection and reuse it, but I don't know if this can cause any other issues. I am using C# (ASP.NET) with MSSQL. | If you create one per query / transaction, it is much easier to manage "closing" the connections. I can see why common sense dictates that you should open one and use it throughout, but you will run into problems with dropped connections and multithreading. So your next step will be to open a pool of, say, 50 connections and keep them all open, doling them out to different processes. And then you'll find out that this is exactly what the .NET framework does for you already. If you open a connection when you need it and dispose of it when you've finished, that will not actually close the connection, it'll just return it to the connection pool to be used again. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142065",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50766/"
]
} |
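The "open late, dispose early, let the pool handle reuse" pattern described in the answer above is not specific to .NET. Purely as an illustration, here is the same idea in Python with SQLAlchemy, which also pools connections by default; the connection URL, table name and driver are made up and assume a reachable database:

from sqlalchemy import create_engine, text

# Created once at application start-up; the engine owns a connection pool.
engine = create_engine("postgresql://app:secret@localhost/appdb")

def handle_page():
    # "Opening" a connection here normally just borrows one from the pool...
    with engine.connect() as conn:
        rows = conn.execute(text("SELECT name FROM users")).fetchall()
    # ...and leaving the block hands it back instead of tearing down the socket.
    return rows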
142,075 | Which is the better option? Having something creative in your code doesn't always make it look ugly, but at times it does get a bit ugly. e.g. if ( (object1(0)==object2(0)) &&
(object1(1)==object2(1)) &&
(object1(2)==object2(2)) &&
(object1(3)==object2(3)) )
retval = true;
else
retval = false; is simple and readable bool retValue = (object1(0)==object2(0)) &&
(object1(1)==object2(1)) &&
(object1(2)==object2(2)) &&
(object1(3)==object2(3)); but having something like this will make some newbies scratch their heads. So which do I go for? Including simple code everywhere might sometimes hamper my performance. What I could think of was commenting wherever necessary, but at times you get too curious to know what is actually happening. | Your second version IS far more simple and readable, and much better in every way. It's not creative or complex, but perfectly normal, straightforward code. The only way in which it might confuse newbies is that it requires them to understand that complex boolean expressions are A) still expressions like any other and B) can be used wherever a boolean value is required, rather than just inside an if clause. But this is something newbies really need to understand, so you should never let the possibility of someone not yet understanding it influence your code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142075",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42599/"
]
} |
142,086 | It seems to be fashionable recently to omit semicolons from Javascript. There was a blog post a few years ago emphasising that in Javascript, semicolons are optional and the gist of the post seemed to be that you shouldn't bother with them because they're unnecessary. The post, widely cited, doesn't give any compelling reasons not to use them, just that leaving them out has few side-effects. Even GitHub has jumped on the no-semicolon bandwagon, requiring their omission in any internally-developed code, and a recent commit to the zepto.js project by its maintainer has removed all semicolons from the codebase. His chief justifications were: it's a matter of preference for his team; less typing Are there other good reasons to leave them out? Frankly I can see no reason to omit them, and certainly no reason to go back over code to erase them. It also goes against ( years of ) recommended practice , which I don't really buy the "cargo cult" argument for. So, why all the recent semicolon-hate? Is there a shortage looming? Or is this just the latest Javascript fad? | I suppose my reason is the lamest: I program in too many different languages at the same time (Java, Javascript, PHP) - that require ';' so rather than train my fingers and eyes that the ';' is not needed for javascript, I just always add the ';' The other reason is documentation: by adding the ';' I am explicitly stating to myself where I expect the statement to end. Then again I use { } all the time too. The whole byte count argument I find irritating and pointless: for common libraries like jquery: use the google CDN and the library will probably be in the browser cache already version your own libraries and set them to be cached forever. gzip and minimize if really, really necessary. But really how many sites have as their biggest speed bottleneck the download speed of their javascript? If you work for a top 100 site like twitter, google, yahoo, etc. maybe. The rest of us should just worry about the code quality not semicolon religious wars. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142086",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/22765/"
]
} |
142,192 | The bridge design pattern separates implementation from the interface of a program. Why is this advantageous? | It allows you to change the implementation independently of the interface. This helps deal with changing requirements. The classic example is replacing the storage implementation under an interface with something bigger, better, faster, smaller, or otherwise different without having to change the rest of the system. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142192",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45379/"
]
} |
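A compact sketch of the storage example from the answer above, in Python and with invented class names, to show how callers keep talking to the same interface while the implementation behind it is swapped:

from abc import ABC, abstractmethod

# Implementation side: how bytes are actually stored.
class Storage(ABC):
    @abstractmethod
    def write(self, key, data): ...
    @abstractmethod
    def read(self, key): ...

class InMemoryStorage(Storage):
    def __init__(self):
        self._blobs = {}
    def write(self, key, data):
        self._blobs[key] = data
    def read(self, key):
        return self._blobs[key]

# Abstraction side: what the rest of the system programs against. Swapping
# InMemoryStorage for a bigger/faster backend needs no changes to callers.
class DocumentArchive:
    def __init__(self, storage):
        self._storage = storage
    def save(self, name, body):
        self._storage.write(name, body.encode("utf-8"))
    def load(self, name):
        return self._storage.read(name).decode("utf-8")

archive = DocumentArchive(InMemoryStorage())
archive.save("readme", "hello")
print(archive.load("readme"))   # prints "hello"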
142,238 | I just got a copyright infringement notice that says my app infringes on the usage policies of the services it uses. However, this mail does not come directly from the company that runs the website I pull information from.
The mail comes from a company that has some paid apps in the same category on the app store (mine is free), so they obviously want me gone. My question is, can I just ignore this mail until I hear directly from the company that runs the website? Judging from the website of the company that contacted me, they have no connection whatsoever to the company that I fetch information from. | Be proactive and contact the company that runs the service you are using, and ask them if they are OK with your app being available on the app store, and with the way it uses their services. This approach has benefits: If you ignore the email from this competing company, they might inform the service owners about your app, which might lead them to ask you to take it down. If you approach them directly they might take a more lenient view, as it shows you are willing to comply with their terms and not sneak something by them (even if that's what you've been doing so far). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142238",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50868/"
]
} |
142,258 | Based on a comment and subsequent upvotes from Bug reopen vs. new : Citing bug IDs in patch notes is just.. very unfriendly. – Krelp It appears at least some people feel that referencing bug IDs in patch notes is not a good idea. I'm a fairly inexperienced developer, so I'm wondering why that's the case. | In my opinion, it's a good practice, assuming your users have read access to your bug database. There are lots of times when people are waiting on a certain bug to be fixed in order to decide when to upgrade. I think what's frowned upon is only citing the bug id and nothing else. You should always also supply a description that is understandable without going to the bug tracker. That also allows you to switch bug trackers in the future without entirely invalidating your previous release notes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142258",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/9875/"
]
} |
142,278 | I have been having a little bit of a debate with a coworker lately. We are specifically using C#, but this could apply to any language with nullable types. Say for example you have a value that represents a maximum. However, this maximum value is optional. I argue that a nullable number would be preferable. My coworker favors the use of zero, citing precedent. Granted, things like network sockets have often used zero to represent an unlimited timeout. If I were to write code dealing with sockets today, I would personally use a nullable value, since I feel it would better represent the fact that there is NO timeout. Which representation is better? Both require a condition checking for the value meaning "none", but I believe that a nullable type conveys the intent a little bit better. | Consider: Language, Framework, Context. 1. Language Using ∞ can be a solution for a maximum. JavaScript, for example, has an infinity. C# doesn't¹. Ada, for example, has ranges. C# doesn't. In C#, there is int.MaxValue , but you cannot use it in your case. int.MaxValue is the maximum integer, 2,147,483,647. If in your code, you have a maximum value of something, like a maximum accepted pressure before something explodes, using 2,147,483,647 has no sense. 2. Framework .NET Framework is rather inconsistent on this point, and its usage of magic values can be criticized. For example, "Hello".IndexOf("Z") returns a magic value -1 . It maybe makes it easier (does it?) to manipulate the result: int position = "Hello".IndexOf("Z");
if (position > 0)
{
DoSomething(position);
} rather than using a custom structure: SearchOccurrence occurrence = "Hello".IndexOf("Z");
if (occurrence.IsFound)
{
DoSomething(occurrence.StartOffset);
} but is not intuitive at all. Why -1 and not -123 ? A beginner may also mistakenly think that 0 means "Not found" too or just mistype (position >= 0) . 3. Context If your code is related to timeouts in network sockets, using something which was used by everyone for decades for the sake of being consistent is not a bad idea . Especially, 0 for a timeout is very clear: it's a value which cannot be zero. Using a custom class in this case may make things more difficult to understand: class Timeout
{
// A value indicating whether there is a timeout.
public bool IsTimeoutEnabled { get; set; }
// The duration of the timeout, in milliseconds.
public int Duration { get; set; }
} Can I set Duration to 0 if IsTimeoutEnabled is true? If IsTimeoutEnabled is false, what happens if I set Duration to 100? This can lead to multiple mistakes. Imagine the following piece of code: this.currentOperation.Timeout = new Timeout
{
// Set the timeout to 200 ms.; we don't want this operation to be longer than that.
Duration = 200,
};
this.currentOperation.Run(); The operation runs for ten seconds. Can you see what's wrong with this code, without reading the documentation of Timeout class? Conclusion null expresses well the idea that the value is not here. It's not provided. Not available. It's neither a number, nor a zero/empty string or whatsoever. Don't use it for maximum or minimum values. int.MaxValue is strongly related to the language itself. Don't use int.MaxValue for a maximum speed limit of Vehicle class or a maximum acceptable speed for an aircraft, etc. Avoid magic values like -1 in your code. They are misleading and lead to mistakes in code. Create your own class which would be more straightforward, with the minimum/maximum values specified. For example VehicleSpeed can have VehicleSpeed.MaxValue . Don't follow any previous guideline and use magic values if it's a general convention for decades in a very specific field, used by most people writing code in this field. Don't forget to mix approaches. For example: class DnsQuery
{
public const int NoTimeout = 0;
public int Timeout { get; set; }
}
this.query.Timeout = 0; // For people who are familiar with timeouts set to zero.
// or
this.query.Timeout = DnsQuery.NoTimeout; // For other people. ¹ You can create your own type which includes infinity. Here, I'm talking about native int type only. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142278",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/4949/"
]
} |
142,334 | There seems to be a lot of discussion of the various speed merits of C or C++ as compared to, say, Java or Python, but I rarely see Objective-C mentioned. Roughly where does it fall in terms of language performance? | Unlike C++, Objective-C is designed as a clean superset of C. The few Objective-C compilers I've used are better known as C compilers, but also handle Objective-C. So, it's safe to assume that at the code generation level, C and Objective-C are equivalent. The first difference appears in the OOP ABI, also called "late method binding". Just like in C++, Objective-C relies on compiler-generated function pointer tables that are traversed at runtime. Unlike C++, however, the binding method is more 'dynamic', and promotes the use of the id superclass everywhere, making it slightly slower than C++ in theory. In practice, this difference is way below measurable. Finally, the most important performance issue is the quality of the libraries used. Since Objective-C is only really popular on Apple systems, it's reasonable to assume you're using it with Cocoa, which is a fine set of high-level libraries. In most cases, you can leave the heavy lifting to them, so your code either doesn't have to be so fast, or, if you do heavy crunching, it's likely to be a mostly-static code base, roughly similar to plain C. TL;DR: it's right there with the C and C++ languages where it matters most. If you're not getting good performance, check your algorithms; just as in any serious language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142334",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37340/"
]
} |
142,390 | I'm planning on leaving my current job because we're locked into using Blub , with an enterprise Blub framework and a Blub-level web server, on mediocre shared hosting. My coworkers are friendly and my boss is an average small business owner - I want to leave entirely because of the technical reasons. I feel like being soaked in Blub is bad for my brain and making me a worse programmer. When I leave, how can I explain this to my boss and coworkers? How can I phrase my complaints about Blub productively? What kind of warning can I and should I leave for my successor in documentation? (trying to make sure I meet the standards ) | I don't know anything about Blub itself, but I've been in a similar situation where there was something about my job that I think should be fixed, but don't want to burn bridges. Here are a few ideas that may help. Try to fix the issue. Explain to your boss that you think Blub is a bad decision for the health and growth of the company. Provide specific cases and instances where it's hurting the company (or where some other platform would help the company better). Suggest an alternative that you feel is superior and be ready to back it up with facts (remember - objective data). This will allow you to voice your concerns and gauge how your boss responds and how open he is to different technologies (or, how married he is to Blub). You may also gain some insight into why the company is using Blub and sticking with it. It will also give you a gauge of whether it's worth sticking out through it, if the company has decided to change technologies. (Note - this may depend on your boss. Obviously, this won't work if he's in love with it and thinks it's the future of technology.) Hold out until you get a job offer. You've dealt with it until now, so find a new job and wait to leave until you get an offer. This gives you an easy out - "I've been offered a position that better suits my career goals" (or some other more neutral line). Granted, this doesn't necessarily help your current company, but it's also not entirely up to you to fix the matter. Say you want to take your career in a different direction. Explain that you would prefer to work on a different platform and that Blub isn't your cup of tea. This allows you to say something along the lines of "I don't like it," without getting into the religious debate of code languages/platforms. As Paul said in his answer, it keeps the reasons for you leaving close to you and reduces the chance of people taking it personally. Make it clear that it's not the office environment. Make sure your boss and coworkers know that you enjoyed working with them. Offer to connect with them on LinkedIn if you haven't already. Try to keep in touch with them as part of your professional network. As for your successor and documentation, simply make sure all the issues/quirks that you know of are documented somewhere, either in the code or in a wiki or some other structured documentation platform. Explain in comments why you did something a certain way and be matter-of-fact about it - "doing it this way because our version of Blub doesn't support Alternative Method X." If your successor is familiar with Blub and doesn't mind it, then they're not going to heed any kind of "stay away!" messages. 
Someone not familiar with it is probably going to think you're just one of those platform elitists and ignore overt messages, and someone who is familiar with Blub and doesn't like it, or is on the fence, will either already sway to your side after more experience, wouldn't have applied to the position, or would ignore your "stay away!" messages, anyway. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142390",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/7402/"
]
} |
142,454 | I'm a novice programmer. I study languages such as C, C++, Python and Java (mainly focusing on C++). I'm what you'd call "young and inexperienced" and I admit that because I can't claim otherwise. As a student, I have many other problems besides programming. I practice programming as often as I can, and especially because my teacher gives me a lot more exercises than the rest of the class (it's a very low level), so oftentimes I spend weeks doing something else such as school projects or sports, or traveling, anything besides programming. Don't get me wrong though, I love programming. I love to build functional code, to watch as a program comes alive at the push of a button, and to learn as much as I can. The thing is, I simply don't have much time for it. Straight to the question, now: does your programming knowledge decrease as time passes and you don't practice? You may ask "how much time do you mean?". I don't mean a specific amount of time, but for reference you could take a month-two or even a year as an example. By knowledge I mean anything from syntax to language functionality. | Obviously, programming is something you learn to do, not a set of facts or information. That said, it's more like riding a bike or speaking a language. There are theories too, but it's more about putting them to practice. Even so, like anything, if you don't use it your brain will start to drop the information. Your brain is like a muscle that way. After a period of time you'll most likely remember broad concepts but not specifics about syntax and lesser-used functions. For example, you may want to do a for loop or iterate over a list and know that it's something like while List.hasNext() or for item in list or for index, item of list but not writing it very confidently. The good thing is, you'll know what you want to do. You just need to look it up. So I wouldn't be too worried about it. It'll come back to you. The important thing is to learn how to solve problems with programs. All programming languages are usually capable of doing the same things, but most of the time in different ways. You might forget that Ruby or Python have comprehensions and write too many for loops instead, but you'll get the job done. As for how much you'll forget or remember, I think that kind of depends on how active your mind is and how old you are. I don't think I developed a fully functional brain until I was 19. At that point memorizing anything for me was a snap. Everyone is different. In sum: details always fade, the rate they fade depends on you, all languages are trying to make it easy to solve the same problems, so maybe it's more important to learn how to solve problems. :) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142454",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
142,532 | The Red - Green - Refactor cycle for TDD is well established and accepted. We write one failing unit test and make it pass as simply as possible. What are the benefits of this approach over writing many failing unit tests for a class and making them all pass in one go? The test suite still protects you against writing incorrect code or making mistakes in the refactoring stage, so what's the harm? Sometimes it's easier to write all the tests for a class (or module) first as a form of 'brain dump' to quickly write down all the expected behavior in one go. | Test-driven design is about getting your API right, not the code. The benefit of writing the simplest failing test first is that you get your API (which is essentially what you are designing on the fly) as simple as possible. Up front. Any future uses (which the next tests you write are) will build on the initial simple design, instead of on a suboptimal design coping with more complex cases. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142532",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11393/"
]
} |
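To make "write one simple failing test and make it pass" concrete, here is a hedged sketch with pytest and an invented Stack class; the point is that the first, deliberately tiny test is what forces the API (a no-argument constructor, push, peek) to be as small as possible:

# The very first test, written before Stack exists; at that point it fails.
def test_push_makes_the_value_visible():
    s = Stack()
    s.push(42)
    assert s.peek() == 42

# The simplest implementation that makes the single test pass.
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def peek(self):
        return self._items[-1]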
142,571 | I work for a company whose domain is really difficult to understand because it is high-technology electronics, but this is applicable to any software development in a complex domain. The application that I work on displays a lot of information, charts, and metrics which are difficult to understand without experience in the domain. The developer uses a specification to describe what the software must do, such as specifying that a particular chart must display this kind of metric and that the metric is computed by the following arithmetic formula. This way, the developer doesn't really understand the business or what/why he is doing in this task. This can be OK if the specification is really detailed, but when it isn't, or when the author has forgotten a use case, it is quite hard for the developer to find a solution. On the other hand, training every developer in all the business aspects can take very long and be difficult. Should we give more importance to detailed specifications (though, as we know, the perfect specification does not exist), or should we train all the developers to understand the business domain? EDIT: keep in mind in your answer that the company could use external developers, and that training on the whole domain can take about 2 weeks. | The specification is virtually never sufficient. Developers who do not have domain knowledge cannot point out when the specification is in error (a frequent occurrence most places) and make poor design choices. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142571",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51049/"
]
} |
142,612 | I'm writing an Agile course for some of the new guys we have been on-boarding recently, and I want to add a cautionary tale so they understand that Agile is not meant for all projects. My problem is that, because of the nature of the projects I work on, Agile has worked pretty well for me so far, so I can't honestly point out what can go wrong, and why, when you use it in the wrong kind of project. What are things to look out for when an Agile project goes wrong? | The biggest failure with "Agile" teams is a result of what is called Cargo Culting. Essentially, teams want the effects of successful agile teams, so they mimic the visible actions: Daily standups (that run for an hour or so) Breaking work into sprints User stories (that are usually little more than a sentence but an estimate is expected) Those are the three that you'll see consistently "applied" in these environments, but with very little commitment to actually being agile. In fact you'll hear management say we're "doing agile." (Run away at those two words; it's a bad sign.) You'll also hear a lot about technical debt, but their definition of technical debt is "do it quick and dirty and maybe we'll get around to making it better later." (Translation: we are going to make it sound like we're concerned with maintainability, but in reality we will keep the same boiler room mentality because that's what's worked for us in the past.) Other key phrases: "I know these stories aren't fully defined but we're doing agile so we can fix them as we go." "We're doing agile development so you should be able to accommodate what I need within the sprint as I identify it." "We're not able to lock down our committed stories at the beginning of the sprint because needs keep changing mid-sprint." The key indicator of whether an Agile project will be successful is whether the project lead (scrum master or whatever role) has had experience or formal training in leading an agile project. Too often I've seen people read about Agile in a book or take a two-day course on being a scrum master and think they've got the chops to successfully implement it. Sorry, it ain't happening, captain. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142612",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/11228/"
]
} |
142,743 | I am working on a CNC (computer numerical control) project which cuts shapes into metal with the help of a laser. My problem is that once in a while (1-2 times in 20-odd days) the cutting goes wrong and does not match what was set. This causes losses, so the client is not very happy about it. I tried to find the cause by including log files, debugging, and repeating the same environment, but it won't repeat. A pause-and-continue operation makes it run smoothly again without the bug reappearing. How do I tackle this issue? Should I state it as a hardware problem? | Work-arounds As ChrisF suggests, the pragmatic short-term solution may be to use the pause-and-resume trick, but you have to talk to your customers to know what your priorities should be. For example: If the fault trashes a £1000 part or causes 4 hours of downtime once a week, while the pause-resume fix reduces production by 1%, they will probably prefer the fix right now. If the fault trashes a £1 part or causes 4 minutes of downtime once a week, but the pause-resume fix reduces production by 1%, they will probably prefer to wait for a fix which doesn't affect production rate. Having worked in the laser micro-machining industry for many years, I know just how much pressure you can be under to optimise the process and make your machine produce as many parts per hour as is possible, so either way you are going to be under pressure to fix the problem properly. Logging In my experience, the only way to effectively track down a Heisenbug is copious logging. Log everything in and around the part of the code which could be responsible for the error. Learn how to read your log files effectively, and make sure you are monitoring following error on your motors (are your stages moving where they should, when they should?). Look at the memory usage on the machine: is a memory leak causing a critical process to be starved? Make sure you are logging user actions too: are you sure that the operator isn't hitting the emergency stop so they can pop out for a shifty cigarette break while it's being fixed? I've seen this happen! Static analysis Also, look for correlations between scribing certain patterns and the bug being triggered more or less often. If you can find patterns that trigger the problem more frequently (or never trigger it), these may point to your problem. Try to make patterns that trigger the problem even more frequently. If you can find a way to trigger the problem reliably, then you are halfway to a solution. Other options Finally, don't be quick to blame the hardware, but never assume that it's perfect. Many times I've been blamed for problems which turned out to be electrical or mechanical in nature, so you always have to have that at the back of your mind. Even though you may not normally have access to the machine, remember that some problems can only be efficiently solved on the machine. Sometimes a few days on-site can be worth weeks via remote desktop and months off-line completely. If you run out of off-line options, don't be afraid to propose a site visit; they can only say no. You might also want to look at the questions and answers to What do you do with a heisenbug? and What to do with bugs that do not repro? but these might not be so useful for your situation. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142743",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42599/"
]
} |
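A minimal sketch of the kind of timestamped, flush-on-write event logging the answer above recommends, written in C++; the EventLog name, the log file path and the example messages are illustrative assumptions, not taken from the actual machine's code:

#include <chrono>
#include <fstream>
#include <mutex>
#include <string>

// Append-only, timestamped event log. Flushing after every entry is
// deliberate: the last few lines should survive a crash or an e-stop.
class EventLog {
public:
    explicit EventLog(const std::string& path) : out_(path, std::ios::app) {}

    void log(const std::string& source, const std::string& message) {
        std::lock_guard<std::mutex> lock(mutex_);  // safe to call from any thread
        const auto since_epoch = std::chrono::system_clock::now().time_since_epoch();
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(since_epoch).count();
        out_ << ms << '\t' << source << '\t' << message << '\n';
        out_.flush();
    }

private:
    std::ofstream out_;
    std::mutex mutex_;
};

// Intended use: log motion commands, reported positions, operator actions
// and memory usage around the suspect code path, for example:
//   EventLog log("cnc_events.log");
//   log.log("motion", "move X=12.50 Y=3.75 feed=1200");
//   log.log("operator", "pause pressed");

With entries like these from each subsystem, the one-failure-in-twenty-days event can be reconstructed and correlated after the fact instead of having to be reproduced on demand.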
142,760 | If I am attempting to simulate a Rubik's Cube , how would you create a data structure to store the cube's state in memory, with X number of tiles per side? Things to consider: the cube can be of any size it is a Rubik's cube, so layers can be rotated | What's wrong with a plain old array of size [6X][X] ? You do not need to know about inner mini-cubes, because you do not see them; they are not part of the cube's state. Hide two ugly methods behind a nice-looking and simple to use interface, unit test it to death, and voila, you're done! | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142760",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/25679/"
]
} |
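A minimal C++ sketch of the flat-array representation suggested in the answer above; the Cube class, its method names and the face-major sticker layout are illustrative choices, and the layer-rotation bookkeeping (the "ugly method" hidden behind the interface) is left out:

#include <vector>

enum class Colour { White, Yellow, Red, Orange, Blue, Green };

// Cube state is nothing more than 6 faces of N x N sticker colours.
class Cube {
public:
    explicit Cube(int n) : n_(n), stickers_(6 * n * n) {
        for (int face = 0; face < 6; ++face)
            for (int i = 0; i < n * n; ++i)
                stickers_[face * n * n + i] = static_cast<Colour>(face); // solved cube
    }

    Colour at(int face, int row, int col) const {
        return stickers_[face * n_ * n_ + row * n_ + col];
    }

    // The "ugly" part lives behind this call: permuting the stickers of one
    // layer about one axis. Body intentionally omitted in this sketch.
    void rotateLayer(int axis, int layerIndex, bool clockwise) {
        (void)axis; (void)layerIndex; (void)clockwise; // not implemented here
    }

private:
    int n_;
    std::vector<Colour> stickers_; // 6 * N * N entries, face-major
};

Everything a solver or renderer needs can be expressed through at() and rotateLayer(); the inner mini-cubes never need to be modelled, exactly as the answer points out.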
142,864 | Are there any Java-specific techniques (things which wouldn't apply to C++) for writing low-latency code in Java? I often see Java low-latency roles and they ask for experience writing low-latency Java, which sometimes seems a little bit of an oxymoron. The only thing I could think of is experience with JNI, outsourcing I/O calls to native code. Also possibly using the disruptor pattern, but that's not an actual technology. Are there any Java-specific tips for writing low-latency code? I am aware there is a Real Time Java Spec, but I have been warned real-time is not the same as low latency... | In addition to Martijn's comments I'd add:
Warm up your JVM. Bytecode starts off being interpreted for Hotspot and then gets compiled on the server after 10K observations. Tiered Compilation can be a good stop gap.
Classloading is a sequential process that involves IO to disk. Make sure all the classes for your main transaction flows are loaded upfront and that they never get evicted from the perm generation.
Follow the "Single Writer Principle" to avoid contention and the queueing-effect implications of Little's Law, plus study Amdahl's Law for what can be parallel and whether it is worth it.
Model your business domain and ensure all your algorithms are O(1) or at least O(log n). This is probably the biggest cause of performance issues in my experience. Make sure you have performance tests to cover the main cases.
Low latency in Java is not just limited to Java. You need to understand the whole stack your code is executing on. This will involve OS tuning, selecting appropriate hardware, and tuning systems software and device drivers for that hardware.
Be realistic. If you need low latency, don't run on a hypervisor. Ensure you have sufficient cores for all threads that need to be in the runnable state.
Cache misses are your biggest cost to performance. Use algorithms that are cache friendly and set affinity to processor cores, either with taskset or numactl for a JVM, or via JNI for individual threads.
Consider an alternative JVM like Zing from Azul with a pause-less garbage collector.
Most importantly, get someone involved with experience. This will save you so much time in the long run. Shameless plug :-)
Real-time and low-latency are distinctly separate subjects, although often related. Real-time is about being more predictable than fast. In my experience the real-time JVMs, even the soft real-time ones, are slower than the normal JVMs. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142864",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/41217/"
]
} |
142,915 | Whenever I'm writing a typical if-else-construct in any language I wonder what would be the best way (in terms of readability and overview) to add comments to it. Especially when commenting the else clause the comments always feel out-of-place for me. Say we have a construct like this (examples are written down in PHP): if ($big == true) {
bigMagic();
} else {
smallMagic();
} I could comment it like this: // check, what kind of magic should happen
if ($big == true) {
// do some big magic stuff
bigMagic();
} else {
// small magic is enough
smallMagic();
} or // check, what kind of magic should happen
// do some big magic stuff
if ($big == true) {
bigMagic();
}
// small magic is enough
else {
smallMagic();
} or // check, what kind of magic should happen
// if: do some big magic stuff
// else: small magic is enough
if ($big == true) {
bigMagic();
} else {
smallMagic();
} What are your best-practice examples for commenting this? | I prefer either: if ($magic == big) {
bigMagic();
}
else {
smallMagic();
} or: if ($magic == big) {
// big magic requires a big surprise, so I'm telling you about it here
surprisingThing();
}
else {
// give a magical feeling even if $magic is noMagicAtAll
smallMagic();
} It seems a little silly to write a comment explaining what your condition checks unless the code doesn't clearly state it. Even then, better to rewrite the code to make it as clear as possible. The same goes for the bodies of the conditional blocks -- if you can make the reason for doing something obvious, do that instead of commenting. I don't subscribe to the "never write comments" philosophy, but I do believe in avoiding comments that say what the code should be saying. If you write a comment like "check what kind of magic should happen" when the code could say if ($magic == big) {... , readers will stop reading your comments very quickly. Using fewer, more meaningful comments gives each of your comments more value, and readers are much more likely to pay attention to those that you do write. Choosing meaningful names for your variables and functions is important. A well-chosen name can eliminate the need for explanatory comments throughout your code. In your example, $magic or maybe $kindOfMagic seems like a better name than $big since according to your example, it's the "kind of magic" that's being tested, not the "bigness" of something. Say as much as you can in code. Save prose for the cases that demand more explanation than you can reasonably write in code. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142915",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/13033/"
]
} |
142,951 | I'm using CMake to generate my projects IDE/makefiles, but I still need to call custom "scripts" to manipulate my compiled files or even generate code. In previous projects I've been using Python and it was OK, but now I'm having serious trouble managing a lot of dependencies in two very big projects I'm working on so I want to minimize the dependencies everywhere. Someone suggested to me to use C++ to write my build scripts instead of adding a language dependency just for that. The projects themeselves already use C++ so there are several advantages that I can see: to build the whole project, only a C++ compiler and CMake would be necessary, nothing else (all the other dependencies are C or C++); C++ type safety (when using modern C++) makes everything easier to get "correct"; it's also the language I know the better so I'm more at ease with it even if I'm able to write some good Python code; potential gain in execution speed (but i don't think it will really be perceptible); However, I think there might be some drawbacks and I'm not sure of the real impact as I didn't try yet: might be longer to write the code (that said I'm not sure because I'm efficient enough in C++ to write something that work quickly, so maybe for this system it wouldn't be so long to write) (compilation time shouldn't be a problem for this case); I must assume that all the text files I'll read as input are in UTF-8, I'm not sure it can be easilly checked at runtime in C++ and the language will not check it for you; libraries in C++ are harder to manage than in scripting languages; I lack experience and forsight so maybe I'm missing advantages and drawbacks.
So the question is: does it make sense to use C++ for this? do you have experiences to report and do you see advantages and disadvantages that might be important? | Just use Python. I develop in C++ and do my build scripts in Python, and I would find it painful to do build scripts in C++: Python makes it trivial to manipulate dictionaries, lists, nested dictionaries of dictionaries of lists, etc. (For example, one of my scripts uses a multi-level hierarchy of all of my tools, tools' versions, and tools' versions' paths.) C++ can do the same with templates and custom classes, but it's much more verbose (which translates to more lines of code, which generally translates to lower productivity). Python provides high-level libraries and routines like its XML and JSON handling, subprocess , and os.walk . Again, C++ can do this, but it's a lot more work to find the libraries, learn their APIs, correctly assemble the calls (which are often lower level), etc. Build scripts are a non-value-added activity (to borrow a term from lean). It's better to use as high-level a language as possible, to get them done as quickly as possible, to get back to work which benefits your users. In my experience, build scripts tend to grow in unforeseen ways. Even if a task seems initially simple for C++, it can get complicated in a hurry. When a new requirement comes up, it's often a lot simpler to tack on handling in a Python script than it is to do it in C++ (which may require finding or reading up on new library APIs, etc.). Regarding the advantages which you list for C++: Adding a single dependency (Python) shouldn't significantly complicate your build. It's already standard on most Linux installations, for example. Thanks to Python's "batteries included" libraries, it may even be easier to manage than the C++ libraries that your build scripts would depend on. The type safety that C++ gives is most useful for large projects, not small scripts. Python complements C++ very well (high-level versus lower-level, dynamically typed versus statically typed, etc.) and can even integrate with C++ very well (thanks to SWIG and Boost.Python) if you later want to do that, so it's worth learning for a C++ programmer. As you said, execution speed should be a nonissue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/142951",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1921/"
]
} |
143,009 | Granted, every profession has it's technicalities. If you are an MD, you better know the anatomy of the human body, and if you are astronomer, you better know your calculus. Yet, you don't have to know these more advance topics to know that smoking might give you lung cancer because of carcinogens or the moon revolves around the earth because of gravity (thank you Discovery Channel). There's sort of a common knowledge (at least in more developed countries) of these more advanced topics. With that said, why are things like recursive descent parsing, BNF, or Turing machines hardly ever mentioned outsided 3000 or 4000 level classes in a university setting or between colleagues? Even back in my days before college in my pursuit of knowledge on how computers work, these very important topics (IMHO) never seem to get the light of day. Many different sources and sites go into "What is a processor?" or "What is RAM?", or "What is an OS?". You might get lucky and discover something about programming languages and how they play a role in how applications are created, but nothing about the tools for creating the language itself. To extend this idea, Dennis Ritchie died shortly after Steve Jobs , yet Dennis Ritchie got very little press compared to Steve Jobs. So, the heart of my question: Does the public in general not care to hear about computer science topics that make the technology in their lives work, or does the computer science community not lend itself to the general public to close the knowledge gap? Am I wrong to think the general public has the same thirst for knowledge on how things work as I do? Please consider the question carefully before answering or vote closing please. | Does the public in general not care to hear about computer science
topics that make the technology in their lives work, or does the
computer science community not lend itself to the general public to
close the knowledge gap? In short, no and no. Knowledge specialization in modern society exists to such a degree that not only do most people not care about the concepts you mention, but it doesn't really make any sense for them to. As people who understand these things, it may seem odd to us that others don't and odder still that they don't want to, but to what level of detail do we understand other professions? Can programmers cite passages from the tax code by rote? Can we explain the specific mechanics of how our cars' engines work? Can we diagram up our houses' electrical systems? Do we know the names of all of the chambers of our heart? For some, the answer to these things may be "yes", but that's going to be the exception rather than the rule. And, what's more, is there some specific reason we should care about the answer to these questions and dozens more? After all, this is why we pay accountants, mechanics, electricians and doctors respectively. The reason I rehash here what you covered in the first paragraph of your question is to emphasize matters of degrees. A given programmer may care about the subject of taxes deeply as a matter of self interest. It certainly affects him or her. But there is a level of abstraction at which "I get a discount for owning a home" is sufficient without understanding all the provisions and history of the mortgage interest deduction. The average consumer needs to know about RAM and processor because those impact them in the pocketbook and in the user experience. Knowing how Turing machines work provides no actual benefit other than the aesthetic one relating to satisfaction in knowledge for most people. If layperson doesn't understand what RAM is, he will potentially get ripped off by a salesman at Best Buy and have a bad user experience. If he doesn't know some abstract concept, he'll still get his computer, his apps will still work, and he'll still be able to navigate with his GPS. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143009",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45101/"
]
} |
143,042 | As a freelancer, I am often asked by my customers which of two similar options they should choose, neither of which is better than the other. Examples: “Does my e-commerce website need to be in PHP or ASP.NET?” “Do I need to host this ordinary web service in the cloud or use an ordinary hosting service?” “Which one is better for my new website: MySQL or Oracle?” etc. In maybe at most 1% of cases the choice is relevant, and there is a real, objective reason to use one over another, based on precise metrics and studies. In all other cases, it doesn't matter at all. It is totally, completely irrelevant, either because there are no implications¹, or because those implications are too small to be taken into account², or, finally, because it's impossible to predict those implications³. If you know one technology and not the other, the answer to those questions is easy: “You can either write the application in C# or Java, both being probably equivalent in your case. Note that I'm a C# developer, so if you choose Java, I would not be able to work on your project and you would need to find another freelancer.” When you know both technologies, you can't answer that. In this case, how do you explain to the customer that the question he asks is the subject of flamewars and has no real consequences for his project? In other words, how do you explain that you've chosen to use one technology rather than an equivalent one for reasons related to human resources, without giving the impression of being unprofessional or of not caring about the project? ¹ Example: Is MySQL better (worse?), performance-wise, compared to Oracle, for a personal website which will be accessed by, oh, let's be optimistic, two people per day? ² Example: for a given project, I was asked to assess whether Windows Azure hosting would be cheaper than hosting the same application on a well-known ASP.NET hosting provider. The cost turned out to be exactly the same. ³ Example: your customer has an idea for a future application (the idea itself being extremely vague). There is no business plan, no requirements, nothing at all. Just an idea. You are asked if Java is better than C# for this app. What do you answer? | How do you explain that you've chosen to use one technology rather than an equivalent one for reasons related to human resources, without giving the impression of being unprofessional or of not caring about the project? Well. You say just that: In terms of the requirements for this project, technology X and technology Y are equally suited to the task, so technical considerations do not come into it. It is easier to find talented people with knowledge of the chosen technology (or problem domain), which is why it was selected. Or: for the purpose of this project, using technology Y is going to be more cost effective. Tell it like it is. You may want to find an analogy that the customer can relate to. Something like: for the purpose of getting from Europe to the US, you can go on a Boeing 747 or an Airbus 380. Does it matter which one? Not to the customer, so long as they get there and the technologies are equally suited to the requirements :) Which one was selected? Whichever one the airline operates... | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143042",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/6605/"
]
} |
143,169 | I have a very difficult time focusing on what I'm doing (programming-wise) when something (compilation, startup time, etc.) takes more than just a few seconds. Anecdotally it seems that threshold is about 10 seconds (and I recall reading about study that said the same thing, though I can't find it now). So what typically happens is I make a change and then run the program to test it. That takes about 30 seconds, so I start reading something else, and before I know it 20 minutes have passed, and then it takes (if I'm lucky!) another 10+ minutes to deal with the context switch to getting back into programming. It's not an exaggeration to say that some things that should take me minutes literally take hours to complete. I'm very curious about what other programmers do to combat this tendency (or if I'm unique and they don't have this tendency?). Suggestions of any type at all are welcome - anything from "sit on your hands after hitting the compile button", to mental tricks, to "if it takes 30 seconds to start up something to test a change, then something's wrong with your development process!" | I wrote a little commandline utility called 'alert' which will cause the computer to beep / play a sound / etc. Then, when I have a lengthy command to run such as a make , I run make; alert . Where I can, I will also have it take an argument so it makes a different sound depending on the argument. Thus I can do make; alert $? and I'll know a) the build is done, and b) it passed or failed. You don't have to be that fancy with it; just an echo -e "\a" can be enough. If you wanted to get really fancy/annoying, use some text-to-speech package and trigger a dialog popup. The main idea here is to interrupt your distraction as soon as the work-related task completes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143169",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36341/"
]
} |
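The "alert" helper described in the answer above is only a few lines of code. Here is a hedged C++ sketch of one possible version; the exit-status argument and the two-beep failure convention are assumptions, since the original utility is not shown:

// alert.cpp -- tiny "interrupt my distraction" helper.
// Shell usage (assumed convention):  make; ./alert $?
#include <cstdlib>
#include <iostream>

int main(int argc, char* argv[]) {
    const int status = (argc > 1) ? std::atoi(argv[1]) : 0;
    if (status == 0) {
        std::cout << "\a" << "Done: long-running step finished OK\n";               // one bell
    } else {
        std::cout << "\a\a" << "Done: FAILED with exit status " << status << "\n";  // two bells
    }
    return 0;
}

Run as make; ./alert $?, it pulls you back to the build the moment it finishes and tells you by sound whether it passed, which is the whole point of the trick.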
143,236 | The term "Lisp" (or "Lisp-like") is an umbrella for lots of different languages, such as Common Lisp, Scheme, and Arc. There is similar fragmentation in other language communities, like in ML. However, Ruby and Python have both managed to avoid this fate, where innovation occurred more on the implementation (like PyPy or YARV) instead of making changes to the language itself. Did the Ruby and Python communities do something special to prevent language fragmentation? | Ruby and Python both have benevolent dictators at their helm. They are languages deeply rooted in pragmatic concerns. Those are probably the most significant factors inhibiting fragmentation. Lisp and ML, on the other hand, are more like "design by committee" languages, conceived in academia, for theoretical purposes. Lisp was originally designed by John McCarthy as a practical mathematical notation for computer programs. He never implemented it as an actual programming language; the first implementation was developed by Steve Russell , but he was not a benevolent dictator. Over time, many different implementations of Lisp appeared; Common Lisp was an attempt to standardize them. Lisp is more of a "family" of languages. So is ML, which followed a similar evolutionary path to Lisp. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143236",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/8575/"
]
} |
143,555 | I keep reading this sentence: Linux is a Unix-like system, but it is not Unix. I don't know what's the real difference between the two. I know Linux got a lot of ideas from Unix and the licenses of the two are different. Apart from that, as I am not an expert in either one of them, I want to know whether there are basic differences between them in design or other significant aspects. | A "Unix like" system may be fully compliant with the Single UNIX Specification , the collective name of standards for what qualifies as a Unix system, but at the same time Unix is a registered trademark of The Open Group and vendors of Unix like systems need to get their systems registered to officially qualify as Unix. Currently the registered UNIX 03 systems are: Apple Inc.: Mac OS X Version 10.5 Leopard on Intel-based Macintosh computers Apple Inc.: Mac OS X Version 10.6 Snow Leopard on Intel-based Macintosh computers Fujitsu Limited: Solaris™10 Operating System on Fujitsu PRIMEPOWER® 64-bit SPARC® Based Platforms Hewlett-Packard Company: HP-UX 11i V3 Release B.11.31 or later on HP Integrity Servers IBM Corporation: AIX 5L for POWER V5.2 dated 8-2004 or later with APARs: IY59610, IY60869, IY61405 with VAC 6.0.0.8 or later on pSeries CHRP systems IBM Corporation: AIX 5L for POWER V5.3 dated 7-2006 or later on Systems using CHRP system architecture with POWER™processors IBM Corporation: AIX 6 Operating System V6.1.2 with SP1 or later on Systems using CHRP system architecture with POWER™ processors and 2, 8 or 128 port async cards Oracle Corporation: Oracle Solaris 11 FCS and later on SPARC-based platforms, 32-bit and 64-bit and on X86-based platforms, 32-bit and 64-bit Oracle Corporation: Solaris 10 Operating System plus patch 118844-06 for X86 and on, on 64-bit X86 based systems Oracle Corporation: Solaris 10 Operating System and on, on 32-bit and 64-bit SPARC based systems Oracle Corporation: Solaris 10 Operating System and on, on 32-bit X86 based systems Vendors of open source Unix like systems (mostly Linux and FreeBSD) typically don't register with The Open Group, either to avoid the costs of certification or, well, because they don't find much value in doing so. In theory, it's entirely possible that a Unix like system is technically Unix, and all it's missing is certification. The Linux Foundation on the other hand, created the Linux Standard Base , an ISO standard , in an effort to standardize Linux. Compliance with POSIX is at the heart of both the SUS and the LSB, maintaining in a way the link between Unix and Linux. Unix and Unix like systems tend to be more similar than different, in theory all popular Unix flavours, registered or not, are POSIX compliant (full or mostly), so they share a core programming interface, shells and utilities (and a lot of other stuff). IEEE and The Open Group maintain a freely available copy of the latest version, POSIX.1-2008 , where you can find more information on what POSIX compliance actually means. Now, apart from the legal and technical reasons, Linux inherited the "not Unix" mantra from it's association with GNU , a Unix like operating system initiated by Richard Stallman. GNU stands for "GNU's not Unix", as Stallman's intentions were to build a Unix compatible system that would be free, and in order to do that it should contain no Unix code, as Unix is proprietary. Early Linux developers started porting GNU tools to Linux, and the resulting system was referred as GNU/Linux as early as 1992 . 
There is a long-lasting controversy over whether Linux should be referred to as Linux or GNU/Linux (as it incorporates several parts of GNU), but that's irrelevant to your question; what's relevant is that "not Unix" may just refer to the association with GNU and have little to do with its design, depending on context. The "History of Linux" article on Wikipedia explains the origins of Linux and its relationship with Unix (via Minix and GNU) in some detail, and you should also take some time to read through the references of the article if you are interested in learning more. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143555",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48752/"
]
} |
143,673 | I've noticed more and more mentions (both in posts here and in actual job descriptions) of programmers' "portfolios" - typically their public profiles on sites such as this, GitHub , etc. How important is this, and would companies (startups in particular) reject (or immediately discard without even interviewing) otherwise outstanding candidates who don't have an online presence? Personally, I prefer to keep a very low profile online. My name here cannot identify me, and I have other handles for other sites. I have a very spartan (and completely private) Facebook page. I do code on my own, but the code lives in local repositories. In general, the less information online about me, the better. I could see a designer needing some sort of online portfolio, but for a programmer, is this really a big negative when job-searching? | It entirely depends on where you want to work. There is no universal answer to this. Many (all?) employers will google your name and look you up. You really should do that as well to see what comes back. The best way to control what they see is to have your own presence - something that will push any results that you don't want them to see way down the list, where they won't click through. However, having an online presence is different from having a presence that shows you are active in your programming community. Blogging, answering questions on forums or Stack Exchange sites, participating in open source (or starting such) projects, writing articles etc show that. All of these are bonuses as far as good employers are concerned. None of the above are requirements for getting a job, but they are all good to have in order to increase your chances. In other words, given two candidates that score similarly on the points the employer cares about, if one shows participation in the community and the other doesn't, the one that does will have a better chance at getting the offer. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143673",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36341/"
]
} |
143,722 | CoffeeScript is a language that transpiles to JavaScript, with a clean syntax, inspired by Ruby. Is there a similar language that transpiles to C, allowing for more readable code without compromising on performance? If nothing like that exists, is there a good reason for not creating it? | CoffeeScript compiles to JavaScript for a very simple reason: JavaScript is the de facto client-side language, and it would be unreasonable to expect browser vendors to natively support CoffeeScript when all it offers is an alternative syntax. In a very similar manner, the main point of high-level-language-to-C translators is immediate portability, as there's a C compiler for almost every platform and an abundance of C libraries. Vala , for example, was designed to: be a compiler for GObject , build native executables (through the machine's C compiler), automate reference counting, and still be accessible to GNOME C programmers. GNOME is a traditionally C-oriented project and GObject specifically is written in C; Vala probably wouldn't find much love amongst GNOME developers if it compiled to machine code, regardless of its friendlier nature (and syntax). Not everyone seemed to like the syntax, to the point that another language, Genie , was built to improve upon it. For a C++ example, Facebook developed HipHop , a PHP to C++ translator. They were trying to solve a very specific issue, CPU usage, without having to replace all their PHP code and re-train their engineers (or worse, replace them). This is a far more specific example, as Facebook's scalability issues are, well, unique, and again having access to the intermediate C++ code can be useful, as PHP extensions are written in C and C++. So a translator from one high-level language to another is a good idea mostly when access to the intermediate code is required. For CoffeeScript, the JavaScript code is necessary because of its wide browser adoption, and for Vala, Genie and HipHop because of the existing codebase. Obviously having access to the intermediate code means that you can further optimize it if need be. But generally speaking, it wouldn't be such a good idea to build a language that translates to C, or any other language, if you didn't have any use for the resulting code. There are so many languages out there; if you can't cope with C, just pick another. Coincidentally, the first C++ compiler written by Bjarne Stroustrup, CFront, was a C with Classes to C translator, but that was mainly because, as a new language, it was impossible to bootstrap C with Classes. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143722",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51582/"
]
} |
143,736 | Why do we need private variables in classes? Every book on programming I've read says this is a private variable, this is how you define it but stops there. The wording of these explanations always seemed to me like we really have a crisis of trust in our profession. The explanations always sounded like other programmers are out to mess up our code. Yet, there are many programming languages that do not have private variables. What do private variables help prevent? How do you decide if a particular property should be private or
not? If by default every field SHOULD be private then why are there
public data members in a class? Under what circumstances should a variable be made public? | It's not so much a matter of trust, but rather one of managing complexity. A public member can be accessed from outside the class, which for practical considerations means "potentially anywhere". If something goes wrong with a public field, the culprit can be anywhere, and so in order to track down the bug, you may have to look at quite a lot of code. A private member, by contrast, can only be accessed from inside the same class, so if something goes wrong with that, there is usually only one source file to look at. If you have a million lines of code in your project, but your classes are kept small, this can reduce your bug tracking effort by a factor of 1000. Another advantage is related to the concept of 'coupling'. A public member m of a class A that is used by another class B introduces a dependency: if you change m in A , you also have to check usages of m in B . Worse yet, nothing in class A tells you where m is being used, so again you have to search through the entire codebase; if it's a library you're writing, you even have to make sure code outside your project doesn't break because of your change. In practice, libraries tend to stick with their original method signatures as long as possible, no matter how painful, and then introduce a block of breaking changes with a major version update. With private members, by contrast, you can exclude dependencies right away - they can't be accessed from outside, so all dependencies are contained inside the class. In this context, "other programmers" include your future and past selves. Chances are you know now that you shouldn't do this thing X with your variable Y, but you're bound to have forgotten three months down the road when a customer urgently needs you to implement some feature, and you wonder why doing X breaks Y in obscure ways. So, as to when you should make things private: I'd say make everything private by default, and then expose only those parts that absolutely have to be public. The more you can make private, the better. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143736",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51597/"
]
} |
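A small illustrative C++ example of the point about containing bugs and dependencies; the BankAccount class is invented for illustration and is not taken from the question:

#include <stdexcept>

// With balance_ private, the invariant "the balance never goes negative"
// can only be broken by code in this one class, so when it does break
// there is exactly one place to look. A public field could be corrupted
// from anywhere in the codebase.
class BankAccount {
public:
    void deposit(long cents) {
        if (cents < 0) throw std::invalid_argument("negative deposit");
        balance_ += cents;
    }

    bool withdraw(long cents) {
        if (cents < 0 || cents > balance_) return false; // invariant preserved
        balance_ -= cents;
        return true;
    }

    long balance() const { return balance_; }

private:
    long balance_ = 0; // only this class can touch it
};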
143,764 | Private variables are a way to hide complexity and implementation details from the user of a class. This is a rather nice feature. But I do not understand why in C++ we need to put them in the header of a class. I see two annoying downsides to this: it clutters the header for the user, and it forces recompilation of all client libraries whenever the internals are modified. Is there a conceptual reason behind this requirement? Is it only to ease the work of the compiler? | It is because the C++ compiler must know the actual size of the class in order to allocate the right amount of memory at instantiation. And the size includes all members, private ones as well. One way to avoid this is using the Pimpl idiom , explained by Herb Sutter in his Guru of the Week series #24 and #28 . Update Indeed, this (or more generally, the header / source file distinction and #include s) is a major hurdle in C++, inherited from C. Back in the days when C was created, there was no experience with large scale software development yet, where this starts to cause real problems. The lessons learned since then were heeded by the designers of newer languages, but C++ is bound by backward compatibility requirements, making it really hard to address such a fundamental issue in the language. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143764",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/34327/"
]
} |
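A minimal C++11 sketch of the Pimpl idiom mentioned in the answer above (the Widget and Impl names are illustrative): the private members move into an implementation struct defined only in the source file, so the header stays uncluttered and clients do not recompile when the internals change:

// widget.h -- all clients ever see; Impl's fields can change freely.
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();                // defined in the .cpp, where Impl is complete
    int value() const;
private:
    struct Impl;              // forward declaration only, size unknown here
    std::unique_ptr<Impl> impl_;
};

// widget.cpp -- the private state lives here, out of the header.
struct Widget::Impl {
    int cached_value = 42;    // add or remove fields without touching widget.h
};

Widget::Widget() : impl_(new Impl) {}
Widget::~Widget() = default; // unique_ptr needs Impl complete to delete it
int Widget::value() const { return impl_->cached_value; }

The compiler can still size Widget from the header alone because the header only contains a pointer, which is exactly the constraint the answer describes.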
143,874 | I find myself constantly running into this expression "don't reinvent the wheel" or "never reinvent the wheel" when I ask some questions on SO. They tell you to use some frameworks or existing packages. I know where this attitude is coming from, since it's unwise to waste time on something others have already solved. Or is that so? As a student, I find that by using code others wrote to solve my problem I can't learn as much as I'd like to, and I gain less insight. And sometimes I think that phrase is mainly for working programmers facing deadlines and not for students like me. Is it that bad to "reinvent the wheel"? Maybe I'm thinking about it wrong? Maybe there is a way I can avoid reinventing the wheel and at the same time learn a lot? | I think you make a good point. Most of the programmers on this site are likely working professionals whose goal is pretty much to create quality software as quickly as possible. Reinventing the wheel fails this goal on two counts. Re-writing code that exists is wasted effort that could be used on the unique parts of your system, and it makes the project take longer than is necessary. The first version of any code is more likely to have bugs/unforeseen issues. Most libraries and re-usable components have been battle-tested and patched multiple times. If you re-invent a hashing algorithm or try to create your own RDBMS (unless that is what the project is), more often than not you are going to end up with inferior results. That said, in an academic environment the goal is to learn , not to deliver software on a budget. Re-inventing a wheel to understand how the spokes or axle work is a great way to accomplish that goal. That's why many programming curricula include a class on building compilers when very few working programmers ever have cause to need to do that. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143874",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48752/"
]
} |
143,973 | Scenario: You push to production. The push broke multiple things. That same build did not break qa or dev. As a developer, you don't have prod access. There is a lot of pressure from above to get things working again. Specifics: PHP/MVC application that is API-driven in Zend. Deployed to a few servers. My question: While investigating, let's say I have a hunch that something is wrong. But I don't know for sure. And, of course, I can't test things in production. If I have a suggested fix based on that hunch, would it be wise to try and apply it and see if it works, before understanding what the problem is? | Grab as much information about the problem as you can (logfiles etc.) and then roll back the production servers to a working state. That's a pain from the developer's point of view of course, but is most likely a given. Next, try and see if you can reproduce the problem in a development environment.
If you can, then fix it and try releasing again. If you can't reproduce it, then see if you can add more diagnostics and release to one server for a short time to get more information about the problem. If that's not possible then look more closely at the differences between production and the dev/qa environments and try to make a dev environment closer to production. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/143973",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42062/"
]
} |
144,008 | In the .NET Framework, at least in the C# language, we have two "versions" of the string type: "string" and "String". It appears that they are interchangeable, but are they really? If they are not interchangeable, is it generally better to use one instead of the other, and under what circumstances? | Well, according to MSDN, string is an alias for String in the .NET Framework, where "String" is in fact System.String . I would say that they are interchangeable and there is no difference in when and where you should use one or the other. It would be better to be consistent about which one you use, though. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144008",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/1351/"
]
} |
144,019 | Both languages have very similar syntax. Why does C have the weird * character that denotes pointers (which is some kind of memory address of the variable's contents?), when PHP doesn't have it and you can do pretty much the same things in PHP that you can do in C, without pointers? I guess the PHP runtime handles this internally; why doesn't C do the same? Doesn't this add unneeded complexity to C? For example, I don't understand them :) | Like so many things, the answer is of the form "Because X and Y are different things with different purposes". In this case, the designers of both languages assumed that the users of their languages had a very different set of goals. For C, the primary use case was "portable assembly language", which really means getting down into the nitty-gritty of how the computer is actually managing its resources. There's no practical way to avoid memory address manipulation at the lowest level of abstraction, and so C has robust support for it. PHP was intended to make dynamic web page content as flexible and painless as possible. This is quite a few steps removed from the super-low level of the C world; managing memory is, for the purposes PHP is intended to address, much too low level to be of much interest. Any kind of automatic memory management would be fine, so long as it is robust and reliable, and stays out of the way. That's exactly the situation you see in PHP; objects are allocated automatically, when needed, and garbage collected when they are no longer usable, and it all happens without the intervention of the PHP programmer. It's perhaps of some interest to observe that PHP is itself written in C! The nitty-gritty of memory management is written in C, which provides the tools needed to do that kind of thing, so that the resulting language doesn't require the programmer to do much of it themselves. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144019",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51755/"
]
} |
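For the "I don't understand them" part of the question, a tiny illustration of what * and & actually do, written as C-style code that compiles as C++; the variable names are arbitrary:

#include <cstdio>

int main() {
    int value = 41;      // an ordinary variable, stored at some address
    int* p = &value;     // p holds that address: it "points to" value

    *p = *p + 1;         // dereference: read and write the int p points at

    std::printf("value = %d, stored at %p\n", value, static_cast<void*>(p));
    // prints something like: value = 42, stored at 0x7ffee3...
    return 0;
}

PHP's engine does this kind of bookkeeping for you under the hood; C exposes it because, as the answer says, its job is to let you manage the machine's memory directly.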
144,042 | When reading the famous SICP, I found the authors seem rather reluctant to introduce the assignment statement to Scheme in Chapter 3. I read the text and kind of understand why they feel that way. As Scheme is the first functional programming language I have ever known anything about, I am kind of surprised that there are some functional programming languages (not Scheme, of course) that can do without assignments. Let's use the example the book offers, the bank account example. If there is no assignment statement, how can this be done? How to change the balance variable? I ask because I know there are some so-called pure functional languages out there and, according to the theory of Turing completeness, this must be doable too. I learned C, Java, and Python, and I use assignments a lot in every program I write. So it's really an eye-opening experience. I really hope someone can briefly explain how assignments are avoided in those functional programming languages and what profound impact (if any) this has on these languages. The example mentioned above is here: (define (make-withdraw balance)
(lambda (amount)
(if (>= balance amount)
(begin (set! balance (- balance amount))
balance)
"Insufficient funds"))) This changed the balance by set! . To me it looks a lot like a class method to change the class member balance . As I said, I am not familiar with functional programming languages, so if I said something wrong about them, feel free to point out. | If there is no assignment statement,how can this be done?How to change the balance variable? You can't change variables without some sort of assignment operator. I ask so because I know there are some so-called pure functional languages out there and according to the Turing complete theory,this must can be done too. Not quite. If a language is Turing complete that means that it can calculate anything that the any other Turing complete language can calculate. It doesn't mean that it has to have every feature other languages have. It's not a contradiction that a Turing complete programming language has no way of changing the value of a variable, as long as for every program that has mutable variables, you can write an equivalent program that does not have mutable variables (where "equivalent" means that it calculates the same thing). And in fact every program can be written that way. Regarding your example: In a purely functional language you simply wouldn't be able to write a function that returns a different account balance each time it's called. But you'd still be able to rewrite every program, that uses such a function, in a different way. Since you asked for an example, let's consider an imperative program that uses your make-withdraw function (in pseudo-code). This program allows the user to withdraw from an account, deposit to it or query the amount of money in the account: account = make-withdraw(0)
ask for input until the user enters "quit"
if the user entered "withdraw $x"
account(x)
if the user entered "deposit $x"
account(-x)
if the user entered "query"
print("The balance of the account is " + account(0)) Here's a way to write the same program without using mutable-variables (I won't bother with referentially transparent IO because the question wasn't about that): function IO_loop(balance):
ask for input
if the user entered "withdraw $x"
IO_loop(balance - x)
if the user entered "deposit $x"
IO_loop(balance + x)
if the user entered "query"
print("The balance of the account is " + balance)
IO_loop(balance)
if the user entered "quit"
do nothing
IO_loop(0) The same function could also be written without using recursion by using a fold over the user input (which would be more idiomatic than explicit recursion), but I don't know whether you're familiar with folds yet, so I wrote it in a way that doesn't use anything you don't know yet. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144042",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48752/"
]
} |
144,092 | The rule of 3 ( the rule of 5 in the new c++ standard) states : If you need to explicitly declare either the destructor, copy constructor or copy assignment operator yourself, you probably need to explicitly declare all three of them. But, on the other hand, the Martin's " Clean Code " advises to remove all empty constructors and destructors (page 293, G12:Clutter ) : Of what use is a default constructor with no implementation? All it serves to do is clutter up the code with meaningless artifacts. So, how to handle these two opposite opinions? Should empty constructors/destructors really be implemented? Next example demonstrates exactly what I mean : #include <iostream>
#include <memory>
struct A
{
A( const int value ) : v( new int( value ) ) {}
~A(){}
A( const A & other ) : v( new int( *other.v ) ) {}
A& operator=( const A & other )
{
v.reset( new int( *other.v ) );
return *this;
}
std::auto_ptr< int > v;
};
int main()
{
const A a( 55 );
std::cout<< "a value = " << *a.v << std::endl;
A b(a);
std::cout<< "b value = " << *b.v << std::endl;
const A c(11);
std::cout<< "c value = " << *c.v << std::endl;
b = c;
std::cout<< "b new value = " << *b.v << std::endl;
} Compiles fine using g++ 4.6.1 with : g++ -std=c++0x -Wall -Wextra -pedantic example.cpp The destructor for struct A is empty, and not really needed. So, should it be there, or should it be removed? | For a start the rule says "probably", so it doesn't always apply. The second point I see here is that if you have to declare one of the three, that's because it's doing something special like allocating memory. In this case, the others wouldn't be empty since they would have to handle the same task (such as copying the content of dynamically allocated memory in the copy constructor or freeing such memory). So as a conclusion, you shouldn't declare empty constructors or destructors, but it's very likely that if one is needed, the others are needed too. As for your example: In such a case, you can leave the destructor out. It does nothing, obviously. Usage of smart pointers is a perfect example of where and why the rule of 3 doesn't hold. It's just a guide for where to take a second look over your code in case you may have forgotten to implement important functionality you might otherwise have missed. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144092",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20065/"
]
} |
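To illustrate the answer's closing point, here is a hedged sketch adapting the struct from the question to std::unique_ptr, the C++11 replacement for the deprecated std::auto_ptr; the empty destructor disappears entirely, and only the copy operations that genuinely do work remain (this is one option among several, not the only correct design):

#include <iostream>
#include <memory>

struct A {
    explicit A(int value) : v(new int(value)) {}
    A(const A& other) : v(new int(*other.v)) {}              // deep copy still needed
    A& operator=(const A& other) { *v = *other.v; return *this; }
    // no ~A(): unique_ptr's own destructor already frees the int
    std::unique_ptr<int> v;
};

int main() {
    A a(55);
    A b(a);          // copy construction
    b = A(11);       // copy assignment from a temporary
    std::cout << *a.v << ' ' << *b.v << '\n';   // prints: 55 11
}

Going one step further and storing a plain int (or any value-semantics wrapper) would remove the hand-written copy operations as well, which is the "rule of zero" reading of the same advice.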
144,188 | Background I was just asked in a tech interview to write an algorithm to traverse an "object" (notice the quotes) where A is equal to B and B is equal to C and A is equal to C. That's it. That is all the information I was given. I asked the interviewer what the goal was but apparently there wasn't one, just "traverse" the "object". I don't know about anyone else, but this seems like a silly question to me. I asked again, "am I searching for a value?". Nope. Just "traverse" it. Why would I ever want to endlessly loop through this "object"?? To melt my processor maybe?? The answer according to the interviewer was that I should have written a recursive function. OK, so why not simply ask me to write a recursive function? And who would write a recursive function that never ends? My question: Is this a valid question to the rest of you and, if so, can you provide a hint as to what I might be missing? Perhaps I am thinking too hard about solving real world problems. I have been successfully coding for a long time but this tech interview process makes me feel like I don't know anything. | It's a baffling, invalid interview question. The interviewer couldn't clearly articulate what it was that he/she was looking for and expected you to read his/her mind instead of responding meaningfully to your appropriate attempts to clarify the statement of the problem. Consider yourself lucky you didn't get the job. The meaning of the verb "traverse" operating on a generic "object" is ambiguous, in my opinion. Start substituting a variety of different nouns for the word object and it quickly becomes obvious that traversal of an object is only meaningful for a small subset of the universe of things that are objects. It makes sense to "traverse" the nodes of a "binary tree". It doesn't make sense to "traverse" a "clown". Yet, an object can just as easily represent a "clown" as it can represent a "binary tree". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144188",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/44497/"
]
} |
144,202 | I am currently learning Javascript, and there is one element of the syntax that I can just not seem to nail down. That element is adding semicolons at the end of each line. I have learned Python and Ruby, and written a ton of both, so the lack of semicolons is stuck in my mind! Is there a way to drill that (or even any syntax, to make this more useful to future readers) into my brain? | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144202",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/50929/"
]
} |
144,274 | What's the difference between these? Recursion Corecursion On Wikipedia, there is little information and no clear code explaining these terms. What are some very simple examples explaining these terms? How is corecursion the dual of recursion? Are there any classic corecusive algorithms? | There are a number of good ways of looking at this. The easiest thing for me is to think about the relation between "Inductive" and "Coinductive definitions" An inductive definition of a set goes like this. The set "Nat" is defined as the smallest set such that "Zero" is in Nat, and if n is in Nat "Succ n" is in Nat. Which corresponds to the following Ocaml type nat = Zero | Succ of nat One thing to note about this definition is that a number omega = Succ(omega) is NOT a member of this set. Why? Assume that it was, now consider the set N that has all the same elements as Nat except it does not have omega. Clearly Zero is in N, and if y is in N, Succ(y) is in N, but N is smaller than Nat which is a contradiction. So, omega is not in Nat. Or, perhaps more useful for a computer scientist: Given some set "a", the set "List of a" is defined as the smallest set such that "Nil" is in List of a, and that if xs is in List of a and x is in a "Cons x xs" is in List of a. Which corresponds to something like type 'a list = Nil | Cons of 'a * 'a list The operative word here is "smallest". If we didn't say "smallest" we would not have any way of telling if the set Nat contained a banana! Again, zeros = Cons(Zero,zeros) is not a valid definition for a list of nats, just like omega was not a valid Nat. Defining data inductively like this allows us to define functions that work on it using recursion let rec plus a b = match a with
| Zero -> b
| Succ(c) -> let r = plus c b in Succ(r) We can then prove facts about this, like "plus a Zero = a", using induction (specifically, structural induction). Our proof proceeds by structural induction on a. For the base case let a be Zero. plus Zero Zero = match Zero with |Zero -> Zero | Succ(c) -> let r = plus c Zero in Succ(r) so we know plus Zero Zero = Zero .
Let a be a nat. Assume the inductive hypothesis that plus a Zero = a . We now show that plus (Succ(a)) Zero = Succ(a). This is obvious since plus (Succ(a)) Zero = match Succ(a) with |Zero -> Zero | Succ(a) -> let r = plus a Zero in Succ(r) = let r = a in Succ(r) = Succ(a) Thus, by induction plus a Zero = a for all a in nat. We can of course prove more interesting things, but this is the general idea. So far we have dealt with inductively defined data which we got by letting it be the "smallest" set. So now we want to work with coinductively defined codata which we get by letting it be the biggest set. So let a be a set. The set "Stream of a" is defined as the largest set such that for each x in the stream of a, x consists of the ordered pair (head,tail) such that head is in a and tail is in Stream of a. In Haskell we would express this as data Stream a = Stream a (Stream a) --"data" not "newtype" Actually, in Haskell we use the built in lists normally, which can be an ordered pair or an empty list. data [a] = [] | a:[a] Banana is not a member of this type either, since it is not an ordered pair or the empty list. But, now we can say ones = 1:ones and this is a perfectly valid definition. What's more, we can perform co-recursion on this co-data. Actually, it is possible for a function to be both co-recursive and recursive. While recursion was defined by the function having a domain consisting of data, co-recursion just means it has a co-domain (also called the range) that is co-data. Primitive recursion meant always "calling oneself" on smaller data until reaching some smallest data. Primitive co-recursion always "calls itself" on data greater than or equal to what you had before. ones = 1:ones is primitively co-recursive. While the function map (kind of like "foreach" in imperative languages) is both primitively recursive (sort of) and primitively co-recursive. map :: (a -> b) -> [a] -> [b]
map f [] = []
map f (x:xs) = (f x):map f xs same goes for the function zipWith which takes a function and a pair of lists and combines them together using that function. zipWith :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith f (a:as) (b:bs) = (f a b):zipWith f as bs
zipWith _ _ _ = [] --base case the classic example of functional languages is the Fibonacci sequence fib 0 = 0
fib 1 = 1
fib n = (fib (n-1)) + (fib (n-2)) which is primitively recursive, but can be expressed more elegantly as an infinite list fibs = 0:1:zipWith (+) fibs (tail fibs)
fib' n = fibs !! n --the !! is haskell syntax for index at an interesting example of induction/coinduction is proving that these two definitions compute the same thing. This is left as an exercise for the reader. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144274",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52042/"
]
} |
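The Haskell answer above leans on lazy lists; for readers more at home in Java, roughly the same corecursive idea can be sketched with java.util.stream.Stream.iterate, which also describes an infinite structure and only forces as much of it as you consume. This is only an illustrative sketch; the class and variable names are invented.
import java.util.stream.Stream;

public class Corecursion {
    public static void main(String[] args) {
        // fibs is described by how to produce the next pair from the current one,
        // not by recursing down to a base case; this mirrors fibs = 0:1:zipWith (+) fibs (tail fibs)
        Stream<long[]> pairs = Stream.iterate(new long[]{0, 1}, p -> new long[]{p[1], p[0] + p[1]});
        pairs.limit(10)
             .map(p -> p[0])
             .forEach(System.out::println); // 0 1 1 2 3 5 8 13 21 34
    }
}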
144,293 | I work with someone who insists that any good software engineer can develop in any software technology, and experience in a particular technology doesn't matter to building good software. His analogy was that you don't have to have knowledge of the product being built to know how to build an assembly line that manufactures said product. In a way it's a compliment to be viewed with an eye such that "if you're good, you're good at everything", but in a way it also trivializes the profession, as in "Codemonkey, go sling code". Without experience in certain software frameworks, you can get in trouble fast, and that's important. I tried explaining this, but he didn't buy it. Any different views or thoughts on this to help explain that my experience in one thing, doesn't translate to all things? | but in a way it also trivializes the profession, as in "Codemonkey, go sling code". I would argue quite the opposite. A good software engineer would have the ability to conceptualize, architect, and design quality software agnostic of technology. The opposite end of this spectrum is the .NET or Java or PHP only "codemonkey" that is good at being given direction or specifications and utilizing the tool to implement the software. A software engineer doesn't need to be a master of all tools, but should have a pretty good high level understanding about what the majority of them are, what they bring to the table, and what will likely be most appropriate for the given project. I would expect a code monkey to only be a master of their proclaimed expertise in a specific tool. I wouldn't trust a Ford engineer that doesn't know how to do the Mechanic's job. Still though, software engineering is one of these fields where in many cases we are expected to be the Engineer, the Builder, and the Mechanic all at the same time. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144293",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42446/"
]
} |
144,326 | There is a provision for try-catch blocks in JavaScript. While in Java and some other languages error handling is often mandatory, I don't see anybody using try-catch in JavaScript to any great extent. Is it not good practice, or do we just not need it in JavaScript? | One should avoid throwing errors as the way to pass error conditions around in applications. The throw statement should only be used for "this should never happen, crash and burn, do not recover elegantly in any way" situations. try-catch, however, is used in situations where host objects or ECMAScript may throw errors. Example: var json
try {
json = JSON.parse(input)
} catch (e) {
// invalid json input, set to null
json = null
} Recommendations in the node.js community are that you pass errors around in callbacks (because they arise from asynchronous operations) as the first argument fs.readFile(uri, function (err, fileData) {
if (err) {
// handle
// A. give the error to someone else
return callback(err)
// B. recover logic
return recoverElegantly(err)
// C. Crash and burn
throw err
}
// success case, handle nicely
}) There are also other issues: try/catch is really expensive, it's ugly, and it simply doesn't work across asynchronous operations. So, since synchronous operations should not throw errors and try/catch doesn't work for asynchronous ones, no one uses try-catch except for errors thrown by host objects or ECMAScript. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144326",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43851/"
]
} |
144,423 | A while ago I started to create a winform application and at that time it was small and I did not give any thought of how to structure the project. Since then I added additional features as I needed and the project folder is getting bigger and bigger and now I think it is time to structure the project in some way, but I am not sure what is the proper way, so I have few questions. How to properly restructure the project folder? At the moment I am thinking of something like this: Create Folder for Forms Create Folder for Utility classes Create Folder for Classes that contain only data What is the naming convention when adding classes? Should I also rename classes so that their functionality can be identified by just looking at their name? For example renaming all forms classes, so that their name ends with Form .
Or is this not necessary if special folders for them are created? What can I do so that not all the code for the main form ends up in Form1.cs? Another problem I encountered is that as the main form gets more massive with each feature I add, the code file (Form1.cs) is getting really big.
I have for example a TabControl and each tab has bunch of controls and all the code ended up in Form1.cs. How to avoid this? Also, Do you know any articles or books that deal with these problems? | It looks like you've fallen into some of the common pitfalls, but don't worry, they can be fixed :) First you need to look at your application a little differently and start breaking it down into chunks. We can split the chunks in two directions. First we can separate controlling logic (The business rules, data access code, user rights code,all that sort of stuff) from the UI code. Second we can break the UI code down into chunks. So we'll do the latter part first, breaking the UI down into chunks. The easiest way to do this is to have a single host form on which you compose your UI with usercontrols. Each user control will be in charge of a region of the form. So imagine your application had a list of users, and when you click on a user a text box below it is filled with their details. You could have one user control managing the display of the user list and a second one managing the display of the user's details. The real trick here is how you manage the communication between the controls. You don't want 30 user controls on the form all randomly holding references to each other and calling methods on them. So you create an interface for each control. The interface contains the operations the control will accept and any events it raises. When you think about this app, you don't care if the list box list selection changes, you are interested in the fact a new user has changed. So using our example app, the first interface for the control hosting the listbox of users would include an event called UserChanged which passes a user object out. This is great because now if you get bored of the listbox and want a 3d zoomy magic eye control, you just code it to the same interface and plug it in :) Ok, so part two, separating the UI logic from the domain logic. Well, this is a well worn path and I'd recommend you look at MVP pattern here. It's really simple. Each control is now called a View (V in MVP) and we've already covered most of what is needed above. In this case, the control and an interface for it. All we're adding is the model and the presenter. The model contains the logic that manages your application state. You know the stuff, it would go to the database to get the users, write to the database when you add a user, and so on. The idea is you can test all of this in complete isolation from everything else. The Presenter is a bit more tricky to explain. It is a class which sits between the model and the View. It is created by the view and the view passes itself into the presenter using the interface we discussed earlier. The presenter doesn't have to have its own interface, but I like to create one anyway. Makes what you want the presenter to do explicit. So the presenter would expose methods like ListOfAllUsers which the View would use to get its list of users, alternatively, you could put an AddUser method the View and call that from the presenter. I prefer the latter. That way the presenter can add a user to the listbox when ever it wants. The Presenter would also have properties like CanEditUser, which will return true if the user selected can be edited. The View will then query that every time it needs to know. You might want editable ones in black and read only ones in Gray. 
Technically that's a decision for the View as it is UI focused; whether the user is editable in the first place is for the Presenter. The presenter knows because it talks to the Model. So in summary, use MVP. Microsoft provides something called SCSF (Smart Client Software Factory) which uses MVP in the way I've described. It does a lot of other things too. It's quite complex and I don't like the way they do everything, but it may help. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144423",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51733/"
]
} |
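The MVP advice in the answer above is language-agnostic even though the question is about C# WinForms. Purely as an illustration, here is a rough sketch in Java with invented names: a view interface for the user-list control, a model boundary, and a presenter sitting between them. The same shape translates directly to a usercontrol plus interface in WinForms.
import java.util.List;

interface UserListView {                       // implemented by the user control / panel
    void showUsers(List<String> userNames);
    void showUserDetails(String details);
}

interface UserRepository {                     // the Model boundary
    List<String> findAllUserNames();
    String findDetails(String userName);
}

class UserListPresenter {
    private final UserListView view;
    private final UserRepository repository;

    UserListPresenter(UserListView view, UserRepository repository) {
        this.view = view;
        this.repository = repository;
    }

    void onViewLoaded() {                      // called by the view when it is first shown
        view.showUsers(repository.findAllUserNames());
    }

    void onUserSelected(String userName) {     // raised by the view's "user changed" event
        view.showUserDetails(repository.findDetails(userName));
    }
}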
144,430 | Caveat: I am a political science student and I have tried my level best to understand the technicalities; if I still sound naive please overlook that. In the Symantec report on Stuxnet , the authors say that once the worm infects the 32-bit Windows computer which has a WINCC setup on it, Stuxnet does many things and that it specifically hooks the function CreateFileA() . This function is the route which the worm uses to actually infect the .s7p project files that are used to program the PLCs. ie when the PLC programmer opens a file with .s7p the control transfers to the hooked function CreateFileA_hook() instead of CreateFileA() . Once Stuxnet gains the control it covertly inserts code blocks into the PLC without the programmers knowledge and hides it from his view. However, it should be noted that there is also one more function called CreateFileW() which does the same task as CreateFileA() but both work on different character sets. CreateFileA works with ASCII character set and CreateFileW works with wide characters or Unicode character set. Farsi (the language of the Iranians) is a language that needs unicode character set and not ASCII Characters. I'm assuming that the developers of any famous commercial software (for ex. WinCC) that will be sold in many countries will take 'Localization' and/or 'Internationalization' into consideration while it is being developed in order to make the product fail-safe ie. the software developers would use UNICODE while compiling their code and not just 'ASCII'. Thus, I think that CreateFileW() would have been invoked on a WINCC system in Iran instead of CreateFileA() . Do you agree? My question is: If Stuxnet has hooked only the function CreateFileA() then based on the above assumption there is a significant chance that it did not work at all? I think my doubt will get clarified if: my assumption is proved wrong, or the Symantec report is proved incorrect. Please help me clarify this doubt. Edit: For more clarity of my question and what I'm looking for. Is it possible that the WinCC STL Editor be programmed in the following way? //Pseudocode Begins
if (locale == ASCII Dependent) //like US, UK, Australia etc.
{
CreateFileA(); //with appropriate parameters
}
else if (locale == UNICODE Dependent) //like Middle East, China, Japan etc
{
CreateFileW(); //with appropriate parameters
} //Pseudocode ends If it is possible then does it follows that Stuxnet would work appropriately in the US but not in China or Japan or Iran? | It looks like you've fallen into some of the common pitfalls, but don't worry, they can be fixed :) First you need to look at your application a little differently and start breaking it down into chunks. We can split the chunks in two directions. First we can separate controlling logic (The business rules, data access code, user rights code,all that sort of stuff) from the UI code. Second we can break the UI code down into chunks. So we'll do the latter part first, breaking the UI down into chunks. The easiest way to do this is to have a single host form on which you compose your UI with usercontrols. Each user control will be in charge of a region of the form. So imagine your application had a list of users, and when you click on a user a text box below it is filled with their details. You could have one user control managing the display of the user list and a second one managing the display of the user's details. The real trick here is how you manage the communication between the controls. You don't want 30 user controls on the form all randomly holding references to each other and calling methods on them. So you create an interface for each control. The interface contains the operations the control will accept and any events it raises. When you think about this app, you don't care if the list box list selection changes, you are interested in the fact a new user has changed. So using our example app, the first interface for the control hosting the listbox of users would include an event called UserChanged which passes a user object out. This is great because now if you get bored of the listbox and want a 3d zoomy magic eye control, you just code it to the same interface and plug it in :) Ok, so part two, separating the UI logic from the domain logic. Well, this is a well worn path and I'd recommend you look at MVP pattern here. It's really simple. Each control is now called a View (V in MVP) and we've already covered most of what is needed above. In this case, the control and an interface for it. All we're adding is the model and the presenter. The model contains the logic that manages your application state. You know the stuff, it would go to the database to get the users, write to the database when you add a user, and so on. The idea is you can test all of this in complete isolation from everything else. The Presenter is a bit more tricky to explain. It is a class which sits between the model and the View. It is created by the view and the view passes itself into the presenter using the interface we discussed earlier. The presenter doesn't have to have its own interface, but I like to create one anyway. Makes what you want the presenter to do explicit. So the presenter would expose methods like ListOfAllUsers which the View would use to get its list of users, alternatively, you could put an AddUser method the View and call that from the presenter. I prefer the latter. That way the presenter can add a user to the listbox when ever it wants. The Presenter would also have properties like CanEditUser, which will return true if the user selected can be edited. The View will then query that every time it needs to know. You might want editable ones in black and read only ones in Gray. Technically that's a decision for the View as it is UI focused, whether the user is editable in the first place is for the Presenter. 
The presenter knows because it talks to the Model. So in summary, use MVP. Microsoft provide something called SCSF (Smart Client Software Factory) which uses MVP in the way I've described. It does a lot of other things too. It's quite complex and I don't like the way they do everything, but it may help. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144430",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51979/"
]
} |
144,477 | I had always thought of the "head" of a queue as the next element to be read, and never really questioned that usage. So a linked-list library I wrote, which is used for maintaining queues, codified that terminology: we have a list1_head macro that retrieves the first element; when using this library in a queue, this will be the first element to be removed. But a new developer on the team was used to having queues implemented the other way around. He described a queue as behaving like a dog: you insert at the head, and remove at the tail. This is a clever enough description that I feel like his usage must be more widespread, and I don't have a similarly evocative description of my preferred usage. So, I guess, there are two related questions: 1, what does the "head" of a queue mean to you? and 2, why do we use the word "head" to describe that concept? | You enter at the back of the queue, and leave from the front. In most societies, that would imply the head is the front, and items are removed from the head. The Javadoc for Queue seems to agree with the classic definition (i.e. your original one): Whatever the ordering used, the head of the queue is that element which would be removed by a call to remove() or poll(). In a FIFO queue, all new elements are inserted at the tail of the queue. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144477",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/14820/"
]
} |
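To see the Javadoc behaviour quoted in the answer above in action, here is a tiny, hypothetical Java snippet: elements join at the tail, and peek()/poll() operate on the head, i.e. the first element that went in.
import java.util.ArrayDeque;
import java.util.Queue;

public class QueueHeadDemo {
    public static void main(String[] args) {
        Queue<String> queue = new ArrayDeque<>();
        queue.offer("first");   // joins at the tail (back of the line)
        queue.offer("second");
        queue.offer("third");

        System.out.println(queue.peek()); // "first"  - the head is the next element to be removed
        System.out.println(queue.poll()); // "first"  - removal happens at the head
        System.out.println(queue.poll()); // "second"
    }
}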
144,515 | Why were frames removed in HTML5, but not iFrames? After all, there is almost no difference between the two. In many instances using either of them would give the same output (pardon me if I am wrong)? | There's a couple of misconceptions in your post. First, the frame and frameset elements are not deprecated in HTML5, they're obsolete (i.e., they've been removed entirely). Second, the frame and frameset elements are not the same thing as the iframe element, nor do they give the same output: The frameset element replaces the body element in pages as a means to include a different document model for web pages: they're bad for usability and accessibility, and what they intended to accomplish have been completely replaced by CSS and ubiquitous server-side development. The iframe element, on the other hand, does not replace the body of a page. It acts as a means to include a new browsing context embedded within a block of content. It does not suffer from the same usability or accessibility problems as the frameset model and is used almost anywhere one needs to include an embedded browsing context (widgets being the most prolific example). 1 The iframe in HTML5 also takes on additional features in that it can be sandboxed , allowing the parent document to decide what gets executed within it. This allows for some measure of security for the parent document (and visitors to the parent document) when embedding untrusted content. Notes Note 1: the object element somewhat overlaps with the iframe element, but it has a different content model (which is intended mainly for plugins), has its own set of caveats, and doesn't have the sandboxing attributes the iframe element has. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144515",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/-1/"
]
} |
144,530 | I'm in the middle of a project and I was asked to write UML diagrams (use cases, class diagrams, etc.). The project isn't very complex. I was wondering if this would be a waste of my time? Should I just be doing other fun stuff like writing code? And when is it not OK to build something without going through all of the conceptual phase? Is it all about complexity? And if so, how do you measure it? | I was a big fan of UML a while ago. I'm even OMG certified. I used it extensively in large enterprise projects I was involved in. Today, I have stopped using UML almost completely. Sometimes I use the sequence diagram, which I find very useful to describe interactions between systems, but no other diagrams. I now prefer working with user stories, supported by both (only) necessary documentation written by the product owner (or analysts) AND his (their) dedication to the development team to give more details as needed. UML is a tool you can use, but it's certainly not a critical factor for the success of your projects. After many years, I now think the critical factor is the development team, no matter which tools it uses. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144530",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52019/"
]
} |
144,602 | I'm finding it very difficult to write complex SQL queries involving joins across many (at least 3-4) tables and involving several nested conditions. The queries I'm being asked to write are easily described by a few sentences, but can require a deceptive amount of code to complete. I'm finding myself often using temporary views to write these queries, which seem like a bit of a crutch. What tips can you provide that I can use to make these complex queries easier? More specifically, how do I break these queries down into the steps I need to use to actually write the SQL code? Note that I'm the SQL I'm being asked to write is part of homework assignments for a database course, so I don't want software that will do the work for me. I want to actually understand the code I'm writing. More technical details: The database is hosted on a PostgreSQL server running on the local machine. The database is very small: there are no more than seven tables and the largest table has less than about 50 rows. The SQL queries are being passed unchanged to the server, via LibreOffice Base. | I'm basing most of this on just trying to get the "right" answer, so you may discover there are some performance issues. No point in speeding up an incorrect query. Understand the table relationships - Most will be one to many. Know the "many" table. Identify the fields required for your joins. Think about LEFT join scenarios - Select all the employees and their paycheck from last month. What if they didn't get a paycheck last month? Know the result set: 1) In a spreadsheet, manually enter at least one correct record for your query. 2) Write the query in a simple enough form to identify how many records should be returned. Use both of these to test your query to make sure joining a new table doesn't alter the result. Break up your query into managable parts - You don't have to write it all at once. Complex queries can sometimes just be a collection of simple queries. Beware of mixed levels of aggregation : If you have to put monthly, quarterly and year-to-date values in the same result set, you'll need to calculate them separately in queries grouped on different values. Know when to UNION Sometimes it's easier to break up subgroups into their own select statements. If you have a table mixed with managers and other employees, and on each column you have to do Case statements based on membership in one of these groups, it may be easier to write a Manager query and union to an Employee query. Each one would contain their own logic. Having to include items from different tables in different rows is an obvious use. Complex/Nested formulas - Try to consistently indent and don't be afraid to use multiple lines. "CASE WHEN CASE WHEN CASE WHEN" will drive you nuts. Take the time to think these through. Save the complex calcs for last. Get the correct records selected first. Then you attack complex formulas knowing you're working with the right values. Seeing the values used in the formulas will help you spot areas where you have to account for NULL values and where to handle the divide by zero error. Test often as you add new tables to make sure you're still getting the desired result set and knowing which join or clause is the culprit. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144602",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/51061/"
]
} |
144,701 | I have worked in software development for over 10 years now, and it's dawning on me that I rarely get to create anything "new". I realize that "new" is a vague term, but I would define it as anything from an obvious new large-scale project to a new large feature in an existing project (say something that would require some thought into its design, and that might take 2 weeks or more to complete). Maybe a rough guideline is something is new if it requires a written spec. I think most programmers know what I'm talking about - you're in the zone, writing a ton of code at a fast pace. Anyway, thinking back to what I've done, I'd estimate that less than 10% of my time is spent on "new" work. There are things like "adapt this existing system to work in this new environment", which certainly requires a lot of planning, but the actual coding and "new stuff" comes down to making tiny changes in many places throughout the code. Likewise for small feature requests - if I know what to do, these can often be finished in under an hour, and if I don't, it's just a lot of reading code and figuring out what to do (which frustrates me because I learn much better by doing, not by reading). In general I feel like I am not really creating anything most of the time. I kind of assumed that this was the case at most places - a new product would come out rather quickly and at that point everyone would be excited and banging out the code at a fast pace, but then once live it moves into maintenance mode, where few of the subsequent changes would be considered "new & creative". Am I wrong? Am I accurately describing most programming jobs, or do most programmers feel like they are often creating new things? | A great deal of software work is maintenance. No hiring manager will actually tell you this, of course, but it's certainly the case. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144701",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/36341/"
]
} |
144,753 | I'm not asking from a business sense exactly, but for example, both reddit and Twitter are completely open source. It is my understanding that at least the large majority of their profit comes from advertising on their website. So what exactly is to prevent someone from copying their code and making their own website with some small but effective changes? I ask this because I have a website, and I want to make money on it through advertising. I also want the code to be open source, just to be nice, but I don't want this to happen to me. My site is not (and will probably never ben close to) an institution like Twitter or reddit. I'm not sure if they're just so big they're not worried about someone copying their site, or if they actually have some protection in spite of being open source. reddit uses CPAL, and Twitter uses the Apache license -- does this offer the protection they need or is there something else I'm not getting? | There's no "protection" when anyone could take their codebase and put it up on another server. None. Zip. Reddit and Twitter are instead relying on the network effect to make their sites valuable. (The Wikipedia article even cites Twitter as an example of the network effect.) The basic idea is that for a service that connects people together, its value is based primarily on the number of users. (If I'm the only person in the world with a phone, it's useless. But once everyone has one, that phone is an incredibly valuable tool.) So yes, they're "just so big they're not worried about someone copying their site," because the site itself isn't what they're about. Having a site that's that big is. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144753",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/28544/"
]
} |
144,787 | I have been reading about MapReduce for a while -- but what I can't understand is how someone would make a decision to use (or not use) MapReduce. I mean, what are the problem patterns that signal that MapReduce could be used. | It's basically problems that are huge, but not hard. Travelling salesman depends crucially on the distance between any given pair of cities, so while it can be broken down into many parts, the partial results cannot be recombined so that the globally optimal solution emerges (well, probably not; if you know a way, please apply for your Fields medal now). On the other hand, counting frequencies of words in a gigantic corpus is trivially partitionable, and trivially recombinable (you just add up the vectors computed for the segments of the corpus), so map-reduce is the obvious solution. In practice, more problems tend to be easily recombinable than not, so the decision whether to parallelize a task or not has more to do with how huge the task is, and less with how hard it is. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144787",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/31560/"
]
} |
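A minimal sketch of why word counting, as described in the answer above, is trivially partitionable and trivially recombinable. This is plain Java with invented names, not the Hadoop API: each chunk is counted independently (the map step) and the partial counts are merged by simple addition (the reduce step), so the order of recombination never matters.
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WordCountSketch {
    // "map" step: count words in one chunk of the corpus, independently of all other chunks
    static Map<String, Long> countChunk(String chunk) {
        Map<String, Long> counts = new HashMap<>();
        for (String word : chunk.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                counts.merge(word, 1L, Long::sum);
            }
        }
        return counts;
    }

    // "reduce" step: partial results recombine by addition
    static Map<String, Long> mergeCounts(List<Map<String, Long>> partials) {
        Map<String, Long> total = new HashMap<>();
        for (Map<String, Long> partial : partials) {
            partial.forEach((word, n) -> total.merge(word, n, Long::sum));
        }
        return total;
    }

    public static void main(String[] args) {
        List<Map<String, Long>> partials = List.of(
                countChunk("to be or not to be"),
                countChunk("to see or not to see"));
        System.out.println(mergeCounts(partials)); // e.g. {to=4, be=2, see=2, or=2, not=2}
    }
}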
144,792 | When I learned the C++ language for the first time I learned that besides int, float etc, smaller or bigger versions of these data types existed within the language. For example I could call a variable x int x;
or
short int x; The main difference being that short int takes 2 bytes of memory while int takes 4 bytes, and short int has a lesser value, but we could also call this to make it even smaller: int x;
short int x;
unsigned short int x; which is even more restrictive. My question here is if it's a good practice to use separate data types according to what values your variable take within the program. Is it a good idea to always declare variables according to these data types? | Most of the time the space cost is negligible and you shouldn't worry about it, however you should worry about the extra information you are giving by declaring a type. For example, if you: unsigned int salary; You are giving a useful piece of information to another developer: salary cannot be negative. The difference between short, int, long is rarely going to cause space problems in your application. You are more likely to accidentally make the false assumption that a number will always fit in some datatype. It's probably safer to always use int unless you are 100% sure your numbers will always be very small. Even then, it is unlikely to save you any noticeable amount of space. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144792",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
144,983 | I found in legacy code a class whose name is BucketService. Because English is not my native language I was trying to translate that, but it doesn't make sense. I found a few terms like bucket sorting and so on, but I still don't get it. Actually this word is used quite frequently in programming. I would be grateful for a simple explanation of that word. | A bucket in English is a device for holding water. When used in software, it normally means a data type that groups objects together. The term is used often when discussing hashing algorithms , where different items that have the same hash code (hash collision) go into the same "bucket". Meaning, the objects are grouped by the hash. In general, a hashing function may map several different keys to the same index. Therefore, each slot of a hash table is associated with (implicitly or explicitly) a set of records, rather than a single record. For this reason, each slot of a hash table is often called a bucket, and hash values are also called bucket indices. Informally, I have seen the term used with dictionaries whose value (not key) is a collection of items. Wikipedia has a page dedicated to the term as used in computing - Bucket (Computing) : In computing, the term bucket can have several meanings. It is used both as a live metaphor, and as a generally accepted technical term in some specialised areas. A bucket is most commonly a type of data buffer or a type of document in which data is divided into regions. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/144983",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/46401/"
]
} |
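To make the hashing sense of "bucket" from the answer above concrete, here is a stripped-down, hypothetical Java sketch of a hash table whose slots are buckets (lists); every key whose hash maps to a slot shares that slot's bucket.
import java.util.ArrayList;
import java.util.List;

public class BucketSketch {
    private static final int BUCKET_COUNT = 4;

    // one list ("bucket") per slot; colliding keys simply share a bucket
    private final List<List<String>> buckets = new ArrayList<>();

    public BucketSketch() {
        for (int i = 0; i < BUCKET_COUNT; i++) {
            buckets.add(new ArrayList<>());
        }
    }

    private int bucketIndex(String key) {
        return Math.floorMod(key.hashCode(), BUCKET_COUNT);
    }

    public void add(String key) {
        buckets.get(bucketIndex(key)).add(key);
    }

    public List<String> bucketFor(String key) {
        return buckets.get(bucketIndex(key));
    }

    public static void main(String[] args) {
        BucketSketch table = new BucketSketch();
        table.add("alpha");
        table.add("beta");
        table.add("gamma");
        // every key that hashes to the same index ends up in the same bucket
        System.out.println(table.bucketFor("alpha"));
    }
}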
145,055 | I've noticed a few functions I work with have 6 or more parameters, whereas in most libraries I use it is rare to find a function that takes more than 3. Often a lot of these extra parameters are binary options to alter the function behaviour. I think that some of these umpteen-parametered functions should probably be refactored. Is there a guideline for what number is too many? | I've never seen a guideline, but in my experience a function that takes more than three or four parameters indicates one of two problems: The function is doing too much. It should be split into several smaller functions, each which have a smaller parameter set. There is another object hiding in there. You may need to create another object or data structure that includes these parameters. See this article on the Parameter Object pattern for more information. It's difficult to tell what you're looking at without more information. Chances are the refactoring you need to do is split the function into smaller functions which are called from the parent depending on those flags that are currently being passed to the function. There are some good gains to be had by doing this: It makes your code easier to read. I personally find it much easier to read a "rules list" made up of an if structure that calls a lot of methods with descriptive names than a structure that does it all in one method. It's more unit testable. You've split your problem into several smaller tasks that are individually very simple. The unit test collection would then be made up of a behavioral test suite that checks the paths through the master method and a collection of smaller tests for each individual procedure. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145055",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/37997/"
]
} |
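As a small illustration of the "another object hiding in there" point in the answer above, a long parameter list can often be collapsed into a parameter object. The names below are invented for the sketch.
// Before: six parameters, several of them boolean flags
// void createUser(String first, String last, String email,
//                 boolean admin, boolean active, boolean sendWelcomeMail) { ... }

public class UserRegistration {               // the parameter object
    private final String firstName;
    private final String lastName;
    private final String email;
    private final boolean admin;

    public UserRegistration(String firstName, String lastName, String email, boolean admin) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.email = email;
        this.admin = admin;
    }

    public String firstName() { return firstName; }
    public String lastName()  { return lastName; }
    public String email()     { return email; }
    public boolean isAdmin()  { return admin; }
}

class UserService {
    // After: one intention-revealing parameter; the remaining flags (active, sendWelcomeMail)
    // are better expressed as separate, smaller methods, as the answer suggests
    void createUser(UserRegistration registration) {
        // ... persist the registration ...
    }
}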
145,078 | If I am developing a Java library, is it good practice to issue log statements from within the library's code? Having logging within the library will make debugging and troubleshooting more transparent. However, on the other hand, I do not like littering my library code with logging statements. Are there any performance implications to consider as well? | Yes you should. Using a logging facade like SLF4J gives you flexibility without burdening your users with a particular logging framework. Authors of widely-distributed components and libraries may code against the SLF4J interface in order to avoid imposing an logging framework on the end-user of the component or library. Thus, the end-user may choose the desired logging framework at deployment time by inserting the corresponding slf4j binding on the classpath, which may be changed later by replacing an existing binding with another on the class path and restarting the application. This approach has proven to be simple and very robust. Also, if your users don't include an SLF4J jar (from the user's guide ): As of SLF4J version 1.6.0, if no binding is found on the class path, then slf4j-api will default to a no-operation implementation discarding all log requests. If you're concerned about performance implications of logging, check out this SLF4J FAQ entry . The idea is that you provide parameters to log statements instead of adding them into a String inline: The following two lines will yield the exact same output. However, the second form will outperform the first form by a factor of at least 30, in case of a disabled logging statement. logger.debug("The new entry is "+entry+".");
logger.debug("The new entry is {}.", entry); Is SLF4J yet another logging facade? SLF4J is conceptually very similar to JCL. As such, it can be thought of as yet another logging facade. However, SLF4J is much simpler in design and arguably more robust. In a nutshell, SLF4J avoid the class loader issues that plague [Jakarta Commons Logging]. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145078",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/20332/"
]
} |
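For completeness, library code written against the SLF4J facade described above typically looks like the hypothetical class below; only the two org.slf4j imports are needed at compile time, and the binding is chosen by whoever deploys the library.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ReportGenerator {
    private static final Logger logger = LoggerFactory.getLogger(ReportGenerator.class);

    public void generate(String reportId) {
        // parameterized message: the final string is only built if DEBUG is enabled
        logger.debug("Generating report {}", reportId);
        try {
            // ... actual work would go here ...
        } catch (RuntimeException e) {
            logger.warn("Report {} failed", reportId, e); // a trailing Throwable is logged as the exception
        }
    }
}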
145,099 | I'm not a graphic designer. I'm pretty bad at drawing anything. I struggle to build things that look even as nice as "sample" applications bundled with development tools; primarily because I don't have squat in the way of art assets. What strategies might I take to mitigate this? | I personally do not think you have to be good artistically to create pleasing user interfaces . What makes a good UI is not up to creativity, but is more related to a couple of well-established guidelines. If you follow these guidelines and practice some you can create great interfaces yourself. I would suggest doing the following... Read about what makes a good user interface. (online mostly) Research and find some user interfaces that are pleasing to you. Compare several good designs and try to pick out things that are similar between them. Now look at your own design and see if you have those elements. Try to recreate your user interface to be similar to the ones you liked. I forecast that if you do this exercise for a week or two (and if you ask me, a week or two to learn how to design good interfaces is not such a long time), you will learn most of what makes a good user interface. Just a couple of things I found that make user interfaces pleasing: Simplicity Consistency (colors, fonts, usage of buttons, links, etc...) Spacing Less is more (hide as much as you can from the user without diminishing usability) Do not use white background and black font. Make sure the contrast is good enough, but usually change your background to a light shade of gray while your font to be dark gray. Also... do not start with the design. Start with functionality and let the design evolve. Also, experiment! Do not get upset if it does not look perfect after 1-2 'iterations'. It gets better over time. But most importantly, you have to try . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145099",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/886/"
]
} |
145,173 | In our mostly large applications, we usually have only a few locations for "constants": One class for GUI and internal constants (Tab Page titles, Group Box titles, calculation factors, enumerations) One class for database tables and columns (this part is generated code) plus readable names for them (manually assigned) One class for application messages (logging, message boxes etc) The constants are usually separated into different structs in those classes.
In our C++ applications, the constants are only defined in the .h file and the values are assigned in the .cpp file. One of the advantages is that all strings etc are in one central place and everybody knows where to find them when something must be changed. This is especially something project managers seem to like as people come and go and this way everybody can change such trivial things without having to dig into the application's structure. Also, you can easily change the title of similar Group Boxes / Tab Pages etc at once.
Another aspect is that you can just print that class and give it to a non-programmer who can check if the captions are intuitive, and if messages to the user are too detailed or too confusing etc. However, I see certain disadvantages: Every single class is tightly coupled to the constants classes Adding/Removing/Renaming/Moving a constant requires recompilation of at least 90% of the application (Note: Changing the value doesn't, at least for C++). In one of our C++ projects with 1500 classes, this means around 7 minutes of compilation time (using precompiled headers; without them it's around 50 minutes) plus around 10 minutes of linking against certain static libraries. Building a speed optimized release through the Visual Studio Compiler takes up to 3 hours. I don't know if the huge amount of class relations is the source but it might as well be. You get driven into temporarily hard-coding strings straight into code because you want to test something very quickly and don't want to wait 15 minutes just for that test (and probably every subsequent one). Everybody knows what happens to the "I will fix that later"-thoughts. Reusing a class in another project isn't always that easy (mainly due to other tight couplings, but the constants handling doesn't make it easier.) Where would you store constants like that? Also what arguments would you bring in order to convince your project manager that there are better concepts which also comply with the advantages listed above? Feel free to give a C++-specific or independent answer. PS: I know this question is kind of subjective but I honestly don't know of any better place than this site for this kind of question. Update on this project I have news on the compile time thing: Following Caleb's and gbjbaanb's posts, I split my constants file into several other files when I had time. I also eventually split my project into several libraries which was now possible much easier. Compiling this in release mode showed that the auto-generated file which contains the database definitions (table, column names and more - more than 8000 symbols) and builds up certain hashes caused the huge compile times in release mode. Deactivating MSVC's optimizer for the library which contains the DB constants now allowed us to reduce the total compile time of your Project (several applications) in release mode from up to 8 hours to less than one hour! We have yet to find out why MSVC has such a hard time optimizing these files, but for now this change relieves a lot of pressure as we no longer have to rely on nightly builds only. That fact - and other benefits, such as less tight coupling, better reuseability etc - also showed that spending time splitting up the "constants" wasn't such a bad idea after all ;-) Update2 Since this question still receives some attention: Here is what I've been doing in the past few years: Put every constant, variable, etc exactly in the scope that is relevant for it: If you use a constant only in a single method, it is OK to define it in that method. If a single class is interested in it, leave it as a private implementation detail of that class. The same applies for the namespace, module, project, company scope. I also use the same pattern for helper functions and the like. (This may not apply 100% if you develop a public framework.) 
Doing this increased reusability, testability and maintainability to a degree where you not only spend less time compiling (at least in C++), but also less time on bugfixing, which leaves you more time for actually developing new features. At the same time, developing these features will go faster since you can reuse more code more easily. This outweighs any advantage the central constants file might have by a magnitude. Take a look at especially the Interface Segregation Principle and the Single Responsibility Principle if you want to know more. If you agree, upvote Caleb's answer since this update is basically a more general take on what he said. | Constants that are specific to a class should go in that class's interface. Constants that are really configuration options should be part of a configuration class. If you provide accessors for the configuration options in that class (and use them in place of the constants elsewhere), you won't have to recompile the whole world when you change a few options. Constants that are shared between classes but which aren't meant to be configurable should be reasonably scoped -- try to break them out into files that have particular uses so that individual classes only include what they actually need. This again will help reduce compile times when you change some of those constants. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145173",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/35812/"
]
} |
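The answer above is about C++, but the split it recommends is easy to picture in any language. Here is a rough Java-flavoured sketch, with invented names, of "class-specific constant" versus "configuration option behind an accessor":
import java.util.Properties;

// A constant that only one class cares about stays with that class.
class InvoicePrinter {
    private static final int MAX_LINES_PER_PAGE = 60;

    int pagesFor(int lineCount) {
        return (lineCount + MAX_LINES_PER_PAGE - 1) / MAX_LINES_PER_PAGE;
    }
}

// A value that is really a configuration option lives behind an accessor,
// so changing it does not require touching every client that uses it.
class AppConfiguration {
    private final Properties properties;

    AppConfiguration(Properties properties) {
        this.properties = properties;
    }

    int connectionTimeoutMillis() {
        return Integer.parseInt(properties.getProperty("connection.timeout.millis", "5000"));
    }
}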
145,231 | If I create this method public void foo() And then I create an overloaded version like this public void foo( string bar ) Do we say that the second functions overloads the first, or are both methods equally "overloaded"? This would imply (I think), that there is a base type function, that is being overloaded, by another function (somewhat like inheritance, but not really). Assuming that one method can "overload another", would also imply terms like "overloader" and "overloadie", if that is a word at all. But that doesn't feel right at all, especially since you can have several overloads. I got to this question when I wanted to write down the process of creating an overloaded method and I wanted the most correct way of writing it down. Examples: I am overloading foo I am overloading foo with foo( string bar ) I am creating an overloaded method I am making foo overloaded So yeah, this kind of got me thinking, I am not sure what to make of it.
There are hundreds, if not thousands, of descriptions of function overloading online, but at a first glance I couldn't find any addressing this. | When talking about overloads, the name of the function is overloaded, not the function itself. The functions overloading the name are "overloads" and overload the name, but not each other. In your example, "public void foo()" and "public void foo( string bar )" both overload the name "foo". Therefore, you cannot speak in terms of overloader and overloadee of one of the functions, because they have no direct relationship. In your examples, you can say that you are overloading "foo" (the name) with "foo( string bar )" (the function), but you cannot say that you create an overloaded method, because methods are never overloaded. You can say that you create an overloading method. To formulate "making foo overloaded" is just a worse way of saying "overload foo". | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145231",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52338/"
]
} |
145,323 | We had an assignment for our class where we had to create a Tic-tac-toe game. People like to complicate themselves, so they wrote complex games which included menus. At the end of the game, you had to have the option to play again or quit the program. I used an int variable for that, but I noticed some classmates using BOOLs. Is it more efficient? What's the difference, between storing an answer that should only store two values in an int rather than storing it in a bool? What is the exact purpose of these variables? | When choosing variable types and variable names you want your intent to be as clear as possible. If you choose a bool (boolean) type, it is clear there are only two acceptable values: true or false . If you use an int (integer) type, it is no longer clear that the intent of that variable can only be 1 or 0 or whatever values you chose to mean true and false . Plus sizeof(int) will typically return as being 4 bytes, while sizeof(bool) will return 1. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145323",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/47546/"
]
} |
145,437 | Possible Duplicate: Why are interfaces useful? Like most faculty, my java faculty introduced interface without explaining or even mentioning its practical use. Now I imagine interfaces have a very specific use, but can't seem to find the answer. My question is: a class can directly implement the functions in an interface. eg: interface IPerson{
void jump(int);
}
class Person{
int name;
void jump(int height){
//Do something here
}
} What specific difference does class Person implements IPerson{
int name;
void jump(int height){
//Do something here
}
}
make? | It starts with a dog. In particular, a pug. The pug has various behaviors: public class Pug
{
private String name;
public Pug(String n)
{
name = n;
}
public String getName()
{
return name;
}
public String bark()
{
return "Arf!";
}
public boolean hasCurlyTail()
{
return true;
}
} And you have a Labrador, who also has a set of behaviors. public class Lab
{
private String name;
public Lab(String n)
{
name = n;
}
public String getName()
{
return name;
}
public String bark()
{
return "Woof!";
}
public boolean hasCurlyTail()
{
return false;
}
} We can make some pugs and labs: Pug pug = new Pug("Spot");
Lab lab = new Lab("Fido"); And we can invoke their behaviors: pug.bark() -> "Arf!"
lab.bark() -> "Woof!"
pug.hasCurlyTail() -> true
lab.hasCurlyTail() -> false
pug.getName() -> "Spot"
lab.getName() -> "Fido" Let's say I run a dog kennel and I need to keep track of all the dogs I'm housing. I need to store my pugs and labradors in separate arrays: public class Kennel
{
Pug[] pugs = new Pug[10];
Lab[] labs = new Lab[10];
public void addPug(Pug p)
{
...
}
public void addLab(Lab l)
{
...
}
public void printDogs()
{
// Display names of all the dogs
}
} But this is clearly not optimal. If I want to house some poodles, too, I have to change my Kennel definition to add an array of Poodles . In fact, I need a separate array for each kind of dog. Insight: both pugs and labradors (and poodles) are types of dogs and they have the same set of behaviors. That is, we can say (for the purposes of this example) that all dogs can bark, have a name, and may or may not have a curly tail. We can use an interface to define what all dogs can do , but leave it up to the specific types of dogs to implement those particular behaviors. The interface says "here are the things that all dogs can do" but doesn't say how each behavior is done. public interface Dog
{
public String bark();
public String getName();
public boolean hasCurlyTail();
} Then I slightly alter the Pug and Lab classes to implement the Dog behaviors. We can say that a Pug is a Dog and a Lab is a Dog . public class Pug implements Dog
{
// the rest is the same as before
}
public class Lab implements Dog
{
// the rest is the same as before
} I can still instantiate Pug s and Lab s as I previously did, but now I also get a new way to do it: Dog d1 = new Pug("Spot");
Dog d2 = new Lab("Fido"); This says that d1 is not only a Dog , it's specifically a Pug . And d2 is also a Dog , specifically a Lab . We can invoke the behaviors and they work as before: d1.bark() -> "Arf!"
d2.bark() -> "Woof!"
d1.hasCurlyTail() -> true
d2.hasCurlyTail() -> false
d1.getName() -> "Spot"
d2.getName() -> "Fido" Here's where all the extra work pays off. The Kennel class becomes much simpler. I need only one array and one addDog method. Both will work with any object that is a dog; that is, objects that implement the Dog interface. public class Kennel
{
Dog[] dogs = new Dog[20];
public void addDog(Dog d)
{
...
}
public void printDogs()
{
// Display names of all the dogs
}
} Here's how to use it: Kennel k = new Kennel();
Dog d1 = new Pug("Spot");
Dog d2 = new Lab("Fido");
k.addDog(d1);
k.addDog(d2);
k.printDogs(); The last statement would display: Spot
Fido An interface gives you the ability to specify a set of behaviors that all classes that implement the interface will share in common. Consequently, we can define variables and collections (such as arrays) that don't have to know in advance what kind of specific object they will hold, only that they'll hold objects that implement the interface. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145437",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/48052/"
]
} |
145,545 | I want to integrate the Spring framework into my project, especially on the server side, and I don't want to put everything within the WEB-INF folder of the war file. Should I put an applicationContext.xml into each layer (meaning each project, since the solution is divided into distinct projects: Services, Domain, and DAO)? What is the good practice? | The Maven file structure may help with this. In essence the Spring configuration files (that can have any name by the way, not just the generic applicationContext.xml ) are treated as classpath resources and filed under src/main/resources . During the build process, these are then copied into the WEB-INF/classes directory which is the normal place for these files to end up. Variations include an additional spring directory (e.g. src/main/resources/spring ) to separate the Spring contexts from other resources dedicated to application frameworks. You may wish to split the application contexts into dedicated layers such as: example-servlet.xml
example-data.xml
example-security.xml and so on. What about different environments like dev/test/production? Typically, your Spring configuration should pick up the environment configuration from its, ahem, environment. Usually this means using JNDI, JDBC, environment variables or external properties files to provide the necessary configuration. I list those in order of preference since JNDI is generally easier to administer than external properties files in a controlled production cluster. In the case of integration testing you may need to use a "test-only" Spring configuration file. This would contain special contexts that use test beans or configuration. These would be present under src/test/resources and may have a test- prefix to make sure that developers are aware of their purpose. A typical use would be to provide a non-JNDI DataSource perhaps targeting a HSQLDB database during the build automated tests and would be referenced within the test case. However, in general the majority of your Spring context files should not need specialised modification as they move between tiers. It should be the case that the same build artifact (e.g. WAR file) is used in dev/test/production just with different credentials. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145545",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/43961/"
]
} |
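Assuming the layout from the answer above, with context files under src/main/resources/spring, the split files are plain classpath resources at runtime. A minimal, hypothetical way to load them together in a standalone launcher or test (file names follow the example-*.xml naming used in the answer):
import org.springframework.context.support.ClassPathXmlApplicationContext;

public class ContextSmokeTest {
    public static void main(String[] args) {
        // load several layer-specific context files from the classpath
        ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(
                "spring/example-data.xml", "spring/example-servlet.xml");
        System.out.println("Loaded " + context.getBeanDefinitionCount() + " bean definitions");
        context.close();
    }
}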
145,574 | I am a big fan of good coding style, producing clean, clear code that runs well and is easy to use and integrate into larger systems. I believe that we programmers are essentially craftspeople who should take pride in our work, every line. I am not fond of code that is inconsistently formatted, cluttered with commented out experimental code, and rife with unhelpful or misleading function and variable names. But I sometimes find it hard to argue with code like this that basically works. If you think that coding style matters, I am looking for recommendations for ways of teaching good, professional coding style to the junior programmers under me. I want them to take pride in their work, but my concern is that they appear to become satisfied when their code just barely works, and seem to have no interest in producing what professionals like me would consider professional code. On the other hand, if you think coding style is not particularly valuable, I welcome your comments and am open to reconsidering my standards and tolerances. I know the usual arguments in favor of good code: comprehensibility, ease of maintenance, etc., but I would also like to hear some rebuttals to arguments like "it works, what more do you want?" In our industry we can use the compiler to hide a multitude of sins. Clean code and messy code can both lead to identically functioning programs, so does cleanliness matter, and if so, why? [While there are several existing questions on coding style, I didn't find any that related to teaching methods and ways to instill good coding style in junior programmers.] | Make a good impression Take some of the well-known books, e.g. Clean Code , Code Complete , Coders at Work , The Clean Coder: A Code of Conduct for Professional Programmers etc. ( see here for full lists ) and give them a couple of days to read one - at work, in a private office. Space those out, say 1 book a month or a quarter. They will see from the effort that what you're personally saying is really important, not just "the company line" to take with a grain of salt. Obviously make sure you've got management in line with this - you don't want them saying in passing "huh? what's person X up to? holed away in there with the door closed?" Along with continual, more formal training classes in the latest technologies. Also, as things are done "the right way" gave big encouragement and even rewards. Otherwise it can be more of a negative, punishing, rather than positive, rewarding environment. Most folks want to do and want to be known for doing "a good job" if they have the tools to do it. Practice what you preach, Preach what you practice Talk about code, talk about the good principles, talk about new tools, become 'known' for it. Provide / support /suggest screencasts, videos, peepcodes and whatever online tutorials and classes you can find. Support and suggest appropriate local user groups, including those on sites like http://www.meetup.com If you are in office (i.e. not virtual) a well-stock bookshelf of the actual books you would like people to use is good. Find a way to make this be "not just dusty bookshelf in the corner" but placing it really prominently, Moving the books around,etc. Use your imagination too. Maybe every programmer gets one book as 'homework' a month and you have a monthly meeting where they get to 'present their findings' ! 
This will make far more of an impression than any 10-minute conversation alone, and it will remove you from the 'criticizer' role and allow them to learn how to fish for themselves (rather than giving them fish, you know the deal). Some junior folks also find it intimidating to have a senior folk always explain stuff, when sometimes all they really want is some time to study, practice and absorb it. Instill a culture of learning and excellence Basically you want to create "a culture of learning and excellence" so that you can Practice what you Teach and inspire others to do the same. This should be in conjunction with code reviews to see if/how the principles apply to the actual work being done. Conversely, code reviews done without the above can feel like whipping sessions to the student, no matter how well-intentioned by the teacher.
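One concrete exercise for those reviews is to put two versions of the same behavior side by side and ask which one the team would rather maintain in a year; a minimal sketch (the function names and the choice of Kotlin are illustrative only):
// Illustrative only: messy but working - it compiles and returns the same result as the clean version below.
fun calc(l: List<Int>): Int { var t = 0; for (i in l) { if (i % 2 == 0) { t = t + i } }; return t }
// Clean equivalent: identical behavior, but the intent is readable at a glance.
fun sumOfEvens(numbers: List<Int>): Int =
    numbers.filter { it % 2 == 0 }.sum()
Both versions produce the same value for every input, which is exactly why "it works" cannot be the only bar a review sets. | {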
"source": [
"https://softwareengineering.stackexchange.com/questions/145574",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/52512/"
]
} |
145,669 | I got interested in the Soviet space program and was intrigued to discover that the software on the Buran spacecraft circa 1988 was written in Prolog. Does anyone know what languages might have been used in earlier missions, especially the Mars PrOP-M rover missions of the early 1970s, which were somewhat autonomous and could navigate obstacles? Edit My source for the Buran Prolog is this declassified document from the CIA site, dated May 1990. I couldn't find an OCR version, so here's the relevant quote from p. 0449: According to open-source literature, the Soviets used the
French-developed programming language known as Prolog to develop
on-board system software for the Buran vehicle... | There's a book in Russian, German Noskin, First computers (literally, on-board digital computing machines) for space applications (Герман Носкин, Первые БЦВМ космического применения), ISBN 978-5-91918-093-7. The author himself participated in many early projects (mostly in hardware), and according to him analog hardware was favored for a long time; he mentions that space rendezvous tasks didn't use digital computers until the late 70's. Due to this policy many digital computers were really proofs of concept, although they were used in other areas of the Soviet economy. According to him, the first computer used on board was the Argon-11S (Аргон-11С), on the unmanned missions to the Moon close in time to Apollo-8. Noskin also briefly says that the on-board computer of Salut-4 was compatible with the ES general-purpose computers used in the Soviet economy, so it was possible to develop software in PL-1 and Fortran. There are several mentions of the Buran program's languages on Russian websites. According to Vladimir Parondjanov, an engineer from the program (Russian post), three languages using Russian as a base were developed: PROL2 (ПРОЛ2) for onboard programs, Dipol (Диполь) for ground tests, and Laks (Лакс) for modelling. All of them were intended for use not only by professional programmers but also by engineers from other areas. When the Buran program was closed they were merged into a new language, Drakon (Дракон, the Russian word for "Dragon"), which is claimed to be a "graphical" language with 2-dimensional descriptions of programs, using arbitrary well-known languages for code generation. This language was also intended for use by non-programmers. The language probably does not have an international community and isn't even well-known within Russia, although it is heavily promoted by its author, Vladimir Parondjanov (the Russian Wikipedia article is very long and was even deleted once for not following Wikipedia rules). Drakon was first used for programming the Sea Launch missions and has been used in other Russian space programs since. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/145669",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/42203/"
]
} |
145,751 | C# style suggests using CamelCase in identifiers to delimit words. Lisp tradition suggests using-dashes-instead. Has there ever existed a programming language where using spaces in identifiers was not only allowed, but a commonly used idiom when employing multi-word identifiers? It's possible to have identifiers with spaces in some Scheme implementations, but it's not a widely seen practice. Here's an example: Petite Chez Scheme Version 8.4
Copyright (c) 1985-2011 Cadence Research Systems
> (define |hey there| 100)
> (define |x y z| 200)
> (list |hey there| |x y z|)
(100 200) | FORTRAN compilers ignored spaces so: result = value * factor
r e s u l t = val ue * fac tor
result=value*factor were identical as far as the compiler was concerned. Some SQL dialects allow embedded spaces in column names, but they need to be surrounded by backquotes or some other delimiter before they can be used.
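A more recent case where spaced identifiers are not only allowed but genuinely idiomatic is Kotlin, which accepts backtick-quoted identifiers containing spaces (they are widely used for test names); here is a minimal sketch mirroring the Scheme session in the question:
// Kotlin on the JVM: backticks delimit an identifier, and the quoted name may contain spaces.
val `hey there` = 100
val `x y z` = 200
// Spaced names are a common idiom for test functions; this particular name is just for illustration.
fun `list of both values`(): List<Int> = listOf(`hey there`, `x y z`)
Calling `list of both values`() yields a list of 100 and 200, just like the Scheme example above. | {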
"source": [
"https://softwareengineering.stackexchange.com/questions/145751",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/45105/"
]
} |