Columns: source_id (int64, 1 to 4.64M), question (string, 0 to 28.4k chars), response (string, 0 to 28.8k chars), metadata (dict).
152,180
I really have never understood it at all. I can do it, but I just don't get why I would want to. For instance, I was programming a game yesterday, and I set up an array of pointers for dynamically allocated enemies in the game, then passed it to a function which updates their positions. When I ran the game, I got one of those nondescript assertion errors, something about a memory block not existing, I don't know. It was a run-time error, so it didn't say where the problem was. So I just said screw it and rewrote it with static instantiation, i.e.: while(n<4) { Enemy tempEnemy = Enemy(3, 4); enemyVector.push_back(tempEnemy); n++; } updatePositions(&enemyVector); And it immediately worked perfectly. Now sure, some of you may be thinking something to the effect of "Maybe if you knew what you were doing," or perhaps "n00b can't use pointers L0L," but frankly, you really can't deny that they make things way overcomplicated, hence most modern languages have done away with them entirely. But please-- someone -- What IS the point of dynamic allocation? What advantage does it afford? Why would I ever not do what I just did in the above example?
First off, you are using dynamic allocation. You just don't handle raw pointers yourself. std::vector, like nearly all other useful data structures, internally allocates memory dynamically. This is because your only alternatives are static allocation (far too limited, you have to know the size at compile time) and stack allocation (far too limited, a few megabytes at most, and it's deallocated as soon as the allocating function returns). Dynamic allocation gives you the largest amount of memory and the most freedom in how you use it. A pointer or reference also abstracts over the size of what's allocated - on the stack, you have to know the size at compile time (or use alloca to allocate manually, but then you need even more care and still get a pointer). Second, many C++ programmers will agree that raw pointers are not very useful most of the time, precisely due to the complexity and error-proneness you cite. You need them under the hood, and you should absolutely be able to use them, but you shouldn't use them in 99% of all code, in the interest of sanity, correctness, programmer performance, exception safety, memory-leak avoidance, etc. Your alternatives are some combination of containers (std::vector is just the tip of the iceberg), smart pointers, other forms of RAII, plain old stack allocation whenever you can get away with it, and references. Also note that there is a difference between pointers (and other kinds of indirection) and dynamic allocation. Pointers (raw and smart, as well as references) afford you, as a commenter pointed out, polymorphism, regardless of how the pointed-to object is allocated. You will have to wrap your head around pointers, references, smart pointers, and issues like ownership that follow suit, regardless of what/where/how you allocate.
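To make the answer's point concrete, here is a minimal C++ sketch - not the asker's actual code - showing the enemy list held through std::vector and std::unique_ptr, so the allocation is dynamic but no raw new/delete appears. The Enemy(3, 4) constructor is taken from the question; everything else (the virtual update(), the Boss subclass, the updatePositions body) is assumed purely for illustration.

```cpp
#include <iostream>
#include <memory>
#include <vector>

// Base class from the question; the virtual update() and the Boss subclass
// are hypothetical, added only to show why indirection (pointers/references)
// matters for polymorphism.
struct Enemy {
    Enemy(int x, int y) : x(x), y(y) {}
    virtual ~Enemy() = default;
    virtual void update() { ++x; }           // move right, say
    int x, y;
};

struct Boss : Enemy {
    using Enemy::Enemy;
    void update() override { x += 2; }       // bosses move faster
};

// Takes the container by reference; no ownership is transferred.
void updatePositions(std::vector<std::unique_ptr<Enemy>>& enemies) {
    for (auto& e : enemies)
        e->update();                         // dispatches to Enemy or Boss
}

int main() {
    std::vector<std::unique_ptr<Enemy>> enemies;   // dynamic allocation, managed for you
    for (int n = 0; n < 4; ++n)
        enemies.push_back(std::make_unique<Enemy>(3, 4));
    enemies.push_back(std::make_unique<Boss>(0, 0));

    updatePositions(enemies);
    for (const auto& e : enemies)
        std::cout << e->x << "," << e->y << "\n";
    // No delete needed: each unique_ptr frees its Enemy when the vector goes away.
}
```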
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152180", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56275/" ] }
152,240
Being a completely self-taught programmer, I would like to better myself by self-learning the computer science curriculum taught to a typical CS grad. Finding different resources on the internet has been easy; there is of course MIT OpenCourseWare, and there are Coursera courses from Stanford and other universities. There are numerous other open resources scattered around the Internet, and some good books that are repeatedly recommended. I have been learning a lot, but my study is heavily fragmented, which really bugs me. I would love it if, somewhere, I could find a path I should follow and a stack I should limit myself to, so that I can be sure about which essential parts of computer science I have studied, and then systematically approach those I haven't. The problem with Wikipedia is that it doesn't tell you what's essential but insists on being a complete reference. MIT OpenCourseWare for Computer Science and Electrical Engineering has a huge list of courses, but it also doesn't tell you which courses are essential and which are optional depending on a person's interests/requirements, and I found no mention of an order in which one should study the different subjects. What I would love is to create a list that I can follow, like this dummy one:

SUBJECTS | DONE
Introduction to Computer Science | *
Introduction to Algorithms | *
Discrete Mathematics |
Adv. Discrete Mathematics |
Data structures | *
Adv. Algorithms |
...

As you can clearly see, I have little idea of what specific subjects computer science consists of. It would be hugely helpful even if someone just pointed out the essential courses from MIT OpenCourseWare (plus essential subjects not present at MIT OCW) in a recommended order of study. These are the posts I already went through (and I didn't get what I was looking for there):
- https://softwareengineering.stackexchange.com/questions/49557/computer-science-curriculum-for-non-cs-major - the top answer says it isn't worth studying CS
- https://softwareengineering.stackexchange.com/questions/110345/how-can-a-self-taught-programmer-learn-more-about-computer-science - points to MIT OCW
- https://softwareengineering.stackexchange.com/questions/49167/studying-computer-science-what-am-i-getting-myself-into
- https://softwareengineering.stackexchange.com/questions/19912/overview-of-computer-science-programming
I've seen some course material from MIT, and it was shockingly bad. They had teaching materials which required VC5, bunches of implicit global variables, passing colours as "Blue" instead of 32-bit ARGB, let alone 4x [0,1] floats, that sort of thing. I wouldn't trust a curriculum or code just because it comes from a big-name university. My CS degree (from a university which is top 10 in the UK for CS) consisted of:

First year:
- OOP - the super basics
- Computer Systems - stuff like binary integer representations
- Basic relational database theory
- Mathematics for CS - simple 2D and 3D geometry
- A little bit of HTML/JS - complete beginner's stuff
- An equally tiny bit of PHP
- A tad of functional programming

Second year:
- Legal issues in computing - stuff like laws revolving around protection of user data
- Programming languages - Chomsky hierarchy and lexing was covered
- Operating Systems, Networks, and the Internet - mostly stuff like virtual memory and paging, IP stack
- 2D computer graphics - mostly just proving theorems of the underlying mathematics
- AI - basic descriptions of neural networks, Bayesian belief systems, etc.
- Requirements analysis - brief overview of UML, functional/nonfunctional requirements
- Team project

Third year:
- Algorithm analysis - complexity theory, mostly
- Implementation of programming languages - LL/LR parsing techniques, CFGs, and such things
- Software Project Management - a look at Waterfall/Agile models
- International Computing - Unicode and other localization fun
- Advanced AI - don't know, honestly, and I've got an exam on it soon
- 3D computer graphics - mostly, again, just proving theorems for rotation matrices and such
- Agent-based Systems - mostly about asynchronous agents communicating, reaching group decisions, etc.
- Microprocessor Applications - digital signal processing
- Robotics - covers stuff like computer vision and robot decision making at a high level

As you'll notice, pretty much everything is "the basics" of something and almost nothing is covered to a useful depth. The stuff that was actually worth doing, essential:
- OOP - and then some more, and then some more
- Functional programming - also some more. Try to pick a language like C++ or C# where you don't have to re-learn the syntax and tools, etc., to cover both styles.
- The OS part - virtual memory is good to know about, as is kernel mode vs user mode. Skip segmentation and the IP stack.
- Requirements analysis - gotta be useful for any project
- Algorithm analysis - knowing what algorithmic complexity is, how to reduce it, and what the complexity of common operations is, is important
- Software project management models - many shops do Agile and many older ones still do Waterfall-style models
- International computing - Unicode is essential

The stuff that was worth doing, optionally:
- Programming languages - Chomsky hierarchy, the tools of lexing and parsing. Skip the theory behind LL or LR parsers - an LR parser can accept virtually any realistic unambiguous CFG, and when it can't, your parser generator's documentation will tell you about it.
- 3D Graphics. I don't mean "Prove this is a rotation matrix formula" wastes of time, I mean actual "This is a vertex shader" stuff, or GPGPU. That's fun, interesting, and different.
- Some of the AI stuff is fun - like potential fields and pathfinding.

Stuff that's essential but I didn't cover it anyway:
- Concurrency - a must-know, at least the basics, for anyone in 2012.

The rest were a complete waste of time. Unfortunately, most of these nine points I either already knew, or picked up the useful parts elsewhere.
If you read about things like the FizzBuzz problem, it rapidly becomes apparent that you don't actually need to know all that much to be on top of the pack - which is fortunate, since my degree and many of the materials I've seen online for other degrees really do not teach much at all.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152240", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40032/" ] }
152,255
When the concepts of object-oriented programming were introduced to programmers years back, it looked interesting and programming was cleaner. OOP was like this:

Stock stock = new Stock();
stock.addItem(item);
stock.removeItem(item);

That was easier to understand with self-descriptive names. But now OOP, with patterns like Data Transfer Objects, Value Objects, Repository, Dependency Injection, etc., has become more complex. To achieve the above, you may have to create several classes (e.g. abstract, factory, DAO, etc.) and implement several interfaces. Note: I am not against best practices that make collaboration, testing and integration easier.
OOP itself hasn't changed much since its inception. A few new angles to it have been explored, but the core principles are still the same. If anything, the collective knowledge gathered over the years makes the programmer's life easier rather than harder. Design patterns are not a hindrance; they provide a toolbox of solutions to standard problems, distilled from years and years of experience.

So why is it you perceive OOP today as more complex than when you started using it? One reason may be that the code you are being exposed to becomes more complex - not because OOP has become more complex, but because you have advanced on the learning ladder, and get to read larger and more complex code bases.

Another reason may be that while the complexity paradigm hasn't changed, the size and complexity of an average software project may very well have. With processing power available on customer-grade cellphones that would have been a developer's wet dream on a server less than two decades ago, the general public basically expecting slick animated GUIs for even the cheapest throwaway app, and entry-level desktop PCs being more powerful than a 1980s "supercomputer", it is only natural that the bar has been raised since the early days of Smalltalk and C++.

And then there's the fact that in modern applications, concurrency and parallelism are the norm rather than the exception, and applications frequently need to communicate between different machines, outputting and parsing a whole zoo of protocols. While OOP is great as an organizational paradigm, it does have its limitations, just like any other paradigm: for example, it does not provide a lot of abstraction for concurrency (most implementations being more or less an afterthought, or outsourced to libraries entirely), and it's not the best possible approach for building parsers and transforming data. Modern programming frequently runs into the limitations of the OOP paradigm, and design patterns can only take you so far. (Personally, I consider the fact that we need design patterns a sign of this - if the paradigm provided these solutions out of the box, it would be more expressive for these problems, and the standard solutions would be obvious. There is no design pattern to describe method inheritance, because it is a core feature of OOP; but there is a Factory pattern, because OOP does not provide an obvious, natural way of constructing objects polymorphically and transparently.)

Because of this, most modern OOP languages incorporate features from other paradigms, which makes them more expressive and more powerful, but also more complex. C# is the prime example of this: it has obvious OOP roots, but features like delegates, events, type inference, variant data types, attributes, anonymous functions, lambda expressions, generics, etc., originate from other paradigms, most notably functional programming.
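To illustrate the Factory remark above with a concrete, purely hypothetical example, here is a minimal C++ sketch of polymorphic construction; the Shape/Circle/Square names and the string-keyed factory are assumptions made for illustration, not anything taken from the answer itself.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle : Shape {
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.14159265358979 * r * r; }
    double r;
};

struct Square : Shape {
    explicit Square(double s) : s(s) {}
    double area() const override { return s * s; }
    double s;
};

// The "factory": callers ask for a Shape by name and never mention a concrete
// class - exactly the hole in core OOP that the pattern papers over.
std::unique_ptr<Shape> makeShape(const std::string& kind, double size) {
    static const std::map<std::string,
                          std::function<std::unique_ptr<Shape>(double)>> table = {
        {"circle", [](double v) { return std::make_unique<Circle>(v); }},
        {"square", [](double v) { return std::make_unique<Square>(v); }},
    };
    auto it = table.find(kind);
    if (it == table.end()) throw std::invalid_argument("unknown shape: " + kind);
    return it->second(size);
}

int main() {
    auto s = makeShape("circle", 2.0);   // caller is decoupled from Circle
    return s->area() > 0 ? 0 : 1;
}
```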
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152255", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55555/" ] }
152,276
During an interview, I was asked whether I knew the difference between C and C++. I was wondering why such a question would be asked.
If the question was phrased like "Do you know the difference between C and C++?" and you were allowed to just respond with "Yes", then I could see your confusion, but if they were actually asking "What -are- the differences between C and C++?" and a more open-ended answer was expected, I can see it as a legitimate "avenue of inquiry", as they say. For example, having only ever coded in C and never in C++, I would barely be able to answer beyond a vague "C++ supports object-oriented coding?".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152276", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53366/" ] }
152,351
I manage a small team of developers on an application which is in the mid-point of its lifecycle, within a big firm. This unfortunately means there is commonly a 30/70 split of programming tasks to "other technical work". This work includes:
- Working with DBA / Unix / Network / Loadbalancer teams on various tasks
- Placing and managing orders for hardware or infrastructure in different regions
- Running tests that have not yet been migrated to CI
- Analysis Support / Investigation

It's fair to say that the developers would all prefer to be coding rather than doing these more mundane tasks, so I try to hand out the fun programming jobs evenly amongst the team. Most of the team was hired because, though they may not have the elite programming skills to write their own compiler / game engine / high-frequency trading system etc., they are good communicators who "can get stuff done", work with other teams, and somewhat navigate the complex bureaucracy here. They are good developers, but they are also good all-round technical staff. However, one member of the team probably has above-average coding skills, but below-average communication skills. Traditionally, the previous development manager tended to give him the programming tasks and not the more mundane tasks listed above. However, I don't feel that this is fair to the rest of the team, who have shown an aptitude for developing the well-rounded skillset that is commonly required in a big-business IT department. What should I do in this situation? If I continue to give him more programming work, I know that it will be done faster (and conversely, I would expect him to complete the other work more slowly). But it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like. I want to clarify that I'm not trying to address this issue due to a grudge, or because I have a "chip on my shoulder", as was mentioned. I'm looking for advice on how to keep a well-rounded team which is happy and motivated. Judging by the variety of answers to this question, it seems there are a lot of different opinions on how to achieve this.
It sounds like you are placing too much effort on having well-rounded individuals and not enough effort on having a well-rounded team. There is nothing wrong with being good at something - in fact, that is probably why he was hired! You should be thankful to have someone who is good at programming to begin with. You stated: "... it goes against my principles, and promotes the idea that you can carve out a "comfortable niche" for yourself simply by being bad at the tasks you don't like." If he was a mediocre programmer, then I'd agree. But you didn't say that. You said he was a good programmer. He's not being bad at the other tasks to get out of them - he's merely focused his efforts on becoming a better programmer. There is nothing wrong with that. As a manager, it is not your job to make sure that everyone is "well rounded". It is your job to make sure that s*** gets done. And you're not doing that. In fact, you're making decisions that are stopping things from getting done. Whatever problem you have, you need to get over it - you are making your team less productive.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152351", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41464/" ] }
152,392
Sometimes you run into a situation where you have to extend/improve some existing code. You see that the old code is very lean, but it's also difficult to extend, and takes time to read. Is it a good idea to replace it with modern code? Some time ago I liked the lean approach, but now it seems to me that it's better to sacrifice a lot of optimizations in favor of higher abstractions, better interfaces and more readable, extendable code. The compilers seem to be getting better as well, so things like struct abc = {} are silently turned into memsets, shared_ptrs produce pretty much the same code as raw pointer twiddling, and templates work really well because they produce super lean code, and so on. But still, sometimes you see stack-based arrays and old C functions with some obscure logic, and usually they are not on the critical path. Is it a good idea to change such code if you have to touch a small piece of it either way?
Where? On the home page of a Google-scale website, it is not acceptable: keep things as quick as possible. In a part of an application which is used by one person once a year, it is perfectly acceptable to sacrifice performance in order to gain code readability. In general, what are the non-functional requirements for the part of the code you're working on? If an action must perform under 900 ms in a given context (machine, load, etc.) 80% of the time, and it actually performs under 200 ms 100% of the time, then sure, make the code more readable even if it might slightly impact the performance. If, on the other hand, the same action has never performed under ten seconds, well, you should rather try to see what's wrong with the performance (or with the requirement in the first place). Also, how much will the readability improvement decrease the performance? Often, developers adopt behavior close to premature optimization: they are afraid to increase the readability, believing that it will drastically destroy the performance, while the more readable code would spend only a few microseconds more doing the same action.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152392", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/8029/" ] }
152,464
I have just recently started my career as a web developer for a medium-sized company. As soon as I started, I got the task of expanding an existing application (badly coded, developed by multiple programmers over the years, handling the same tasks in different ways, zero structure). So after I had successfully extended this application with the requested functionality, they gave me the task of fully maintaining the application. This was of course not a problem, or so I thought. But then I was told that I wasn't allowed to improve the existing code and should only focus on bug fixes when a bug gets reported. Since then I have had three more projects just like the above, which I now also have to maintain. And I got four projects where I was allowed to create the application from scratch, and I have to maintain those as well. At this point I'm slowly starting to go slightly crazy from the daily mails of users (read: managers) for each application I have to maintain. They expect me to handle these mails immediately while also working on two other new projects (and there are already five more projects lined up after those). The sad thing is that I have yet to receive a bug report on anything that I have coded myself. For those I have only received the occasional let's-do-things-180-degrees-differently change request. Anyway, is this normal? In my opinion I'm doing the work equivalent of a whole team of developers. Was I an idiot when I initially expected things to be different? I guess this post has turned into a big rant, but please tell me that this is not the same for every developer. P.S. My salary is almost equal to, if not lower than, that of a cashier at a supermarket.
During one of my internships I found that I spent a lot of time doing bug fixes. You have to realize that as an entry-level employee you aren't going to get the sexiest work; you're going to get the grunt work no one else wants. It's unfortunate, but it's how it is at every job. Additionally, you have to realize that to a company, having code that works is more important than having code that is clean. From your company's perspective, you changing the existing structure is money wasted on redoing something that is already done, and potentially introducing even more errors. Usually these types of companies aren't computer/software companies, so no sufficiently senior manager has the technical background to know that sometimes you need to do these major overhauls. That said, if your company is run by technically competent people and they understand the value of good code, you may get more leeway, although sometimes you need to choose your battles (the main purpose of a business is still to make money, after all). That said, you are not unreasonable in wanting to be able to leave your mark on the software and wanting more meaningful work. It is also unfortunate that you have to deal with so many projects at once while fielding requests from so many different managers. As a programmer, it is a fact of life that you will spend more time maintaining and modifying other people's code than you will writing your own from scratch. If this is a problem for you, then perhaps you should stick to developing as a hobby and pursue a different career. If you are OK with maintaining code, but you feel you are not being used effectively or are being overwhelmed, then that is a matter you need to discuss with your manager. If your problems are more serious than that, or if you feel like your managers don't know how to effectively manage your skill set, then it would be a good idea to consider finding a position at a different company. Given your stated low salary, this is probably your best course of action.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152464", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56486/" ] }
152,477
I was reading this article. It has the following paragraph: "And did Scala turn out to be fast? Well, what's your definition of fast? About as fast as Java. It doesn't have to be as fast as C or Assembly. Python is not significantly faster than Ruby. We wanted to do more with fewer machines, taking better advantage of concurrency; we wanted it to be compiled so it's not burning CPU doing the wrong stuff." I am looking for the meaning of the last sentence. How would an interpreted language make the CPU do the "wrong" stuff?
If the code says

A = A + 1

compiled code does this:

add A, 1

interpreted code does this (or some variation):
- look up the location of A in the symbol table
- find the value of A
- see that 1 is a constant
- get its value
- add the value of A and the value of 1
- look up the location of A in the symbol table
- store the new value of A

Get the idea?
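As a rough illustration of the step list above, here is a small C++ sketch contrasting a native increment with a toy, map-based "interpreter" that performs the same A = A + 1 through name lookups; the variable name and the interpreter design are invented for illustration and are not taken from the cited article.

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    // "Compiled" version: the variable is a known machine location,
    // so A = A + 1 is essentially a single add instruction.
    int A = 41;
    A = A + 1;

    // Toy "interpreted" version: the variable exists only as an entry
    // in a symbol table, so every access is a lookup by name.
    std::map<std::string, int> symbols{{"A", 41}};
    const std::string name = "A";

    int lhs = symbols.at(name);   // look up A, fetch its value
    int rhs = 1;                  // the constant operand
    int result = lhs + rhs;       // do the actual add
    symbols.at(name) = result;    // look up A again, store the result

    std::cout << A << " " << symbols.at(name) << "\n";  // 42 42
}
```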
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152477", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/31895/" ] }
152,533
I was under the impression that by now everyone agrees this maxim was a mistake. But I recently saw this answer which has a "be lenient" comment upvoted 137 times (as of today). In my opinion, the leniency in what browsers accept was the direct cause of the utter mess that HTML and some other web standards were a few years ago, and have only recently begun to properly crystallize out of that mess. The way I see it, being lenient in what you accept will lead to this. The second part of the maxim is "discard faulty input silently, without returning an error message unless this is required by the specification" , and this feels borderline offensive. Any programmer who has banged their head on the wall when something fails silently will know what I mean. So, am I completely wrong about this? Should my program be lenient in what it accepts and swallow errors silently? Or am I mis-interpreting what this is supposed to mean? The original question said "program", and I take everyone's point about that. It can make sense for programs to be lenient. What I really meant, however, is APIs: interfaces exposed to other programs , rather than people. HTTP is an example. The protocol is an interface that only other programs use. People never directly provide the dates that go into headers like "If-Modified-Since". So, the question is: should the server implementing a standard be lenient and allow dates in several other formats, in addition to the one that's actually required by the standard? I believe the "be lenient" is supposed to apply to this situation, rather than human interfaces. If the server is lenient, it might seem like an overall improvement, but I think in practice it only leads to client implementations that end up relying on the leniency and thus failing to work with another server that's lenient in slightly different ways. So, should a server exposing some API be lenient or is that a very bad idea? Now onto lenient handling of user input. Consider YouTrack (a bug tracking software). It uses a language for text entry that is reminiscent of Markdown. Except that it's "lenient". For example, writing - foo - bar - baz is not a documented way of creating a bulleted list, and yet it worked. Consequently, it ended up being used a lot throughout our internal bugtracker. Next version comes out, and this lenient feature starts working slightly differently, breaking a bunch of lists that (mis)used this (non)feature. The documented way to create bulleted lists still works, of course. So, should my software be lenient in what user inputs it accepts?
I think that it all depends on who your target demographic is. If it's programmers, then absolutely not: your program should fail hard and scream bloody murder. However, if your target audience isn't programmers, then your program should be lenient where it can handle exceptions gracefully, and otherwise whisper sweet bloody murder. As a case study, take the NPAPI Flash player. There is a "release" version for those who don't really care about 99% of the errors that can occur, but there's also a "debug" version that screams bloody murder when anything goes wrong. Each supports playing Flash content, obviously, but they are targeted at two completely different demographics. In the end, I think the important thing is: what do your users care about?
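To make the strict-versus-lenient trade-off from the question concrete (the If-Modified-Since example), here is a small hedged C++ sketch of a header-date parser; the two accepted formats and the strict flag are assumptions for illustration only and are not taken from any cited server implementation.

```cpp
#include <ctime>
#include <iomanip>
#include <optional>
#include <sstream>
#include <string>
#include <vector>

// Try to parse an HTTP-style date. In strict mode only the RFC 1123-like
// format is accepted; in lenient mode we also fall back to an ISO-8601-ish
// form that a sloppy client might send - and then start relying on.
std::optional<std::tm> parseHttpDate(const std::string& s, bool strict) {
    std::vector<std::string> formats = {"%a, %d %b %Y %H:%M:%S"};
    if (!strict)
        formats.push_back("%Y-%m-%d %H:%M:%S");   // the leniency clients grow to depend on

    for (const auto& fmt : formats) {
        std::tm tm{};
        std::istringstream in(s);
        in >> std::get_time(&tm, fmt.c_str());
        if (!in.fail())
            return tm;                            // accepted
    }
    return std::nullopt;                          // a strict server rejects and can say why
}

int main() {
    auto ok  = parseHttpDate("Tue, 15 Nov 1994 08:12:31", true);
    auto bad = parseHttpDate("1994-11-15 08:12:31", true);    // rejected when strict
    auto meh = parseHttpDate("1994-11-15 08:12:31", false);   // silently accepted when lenient
    return (ok && !bad && meh) ? 0 : 1;
}
```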
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152533", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3278/" ] }
152,547
I had a co-worker leave our company recently. Before leaving, he coded a component that had a severe memory leak that caused a production outage ( OutOfMemoryError in Java). The problem was essentially a HashMap that grew and never removed entries, and the solution was to replace the HashMap with a cache implementation. From a professional standpoint, I feel that I should let him know about the defect so he can learn from the error. On the other hand, once people leave a company, they often don't want to hear about legacy projects that they have left behind for bigger and better things. What is the general protocol for this sort of situation?
You don't hunt down a former colleague to tell him he made a mistake. You may tell your friend that he made a mistake. Whether he is a friend or just a former colleague is up to you.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152547", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/39511/" ] }
152,654
I am a big fan of open source code. I think I understand most of the advantages of going open source. I'm a science student researcher, and I have to work with quite a surprising amount of software and code that is not open source (either it's proprietary, or it's not public). I can't really see a good reason for this, and I can see that the code, and the people using it, would definitely benefit from it being more public (if nothing else, in science it's vital that your results can be replicated if necessary, and that's much harder if others don't have access to your code). Before I go out and start proselytising, I want to know: are there any good arguments for not releasing not-for-profit code publicly, and with an OSI-compliant license? (I realise there are a few similar questions around, but most focus on situations where the code is primarily used for making money, and I couldn't find much relevant in the answers.) Clarification: by "not-for-profit", I am including downstream profit motives, such as parent-company brand recognition and investor profit expectations. In other words, the question relates only to software for which there is NO profit motive tied to the software whatsoever.
You need to take into account that open-sourcing your code might require additional effort. As an example, in this blog entry a Sun/Oracle engineer describes the effort they had to put in when open-sourcing their code: Open Source or Dirty Laundry?

As we get ready to dive into the open source world, one of the many activities that's occurring is the preparation of the code for being open sourced. There are some obvious things that need to be done. For instance, our source code includes a mixture of code that we've written and code that we've licensed from others. We'll need to separate out the latter and open source only the appropriate pieces of code. Another preparation activity is "scrubbing" the code of proprietary information, mentions of particular customers, developers, technologies etc. This is a little less obvious, but consider the following example:

/*
 * HACK - insert a time delay here because the stupid Intertrode
 * Technologies framebuffer driver will hang the system if we
 * don't. Those guys over there must really be idiots.
 */

While all of the above might be true, we probably have a relationship of some sort with Intertrode Tech, and having comments like this in the code could hurt our business somehow, so it should be removed. Arguably it shouldn't have been there in the first place, but now's the time to take it out. Another part of the "scrubbing" activity is to remove profanity and other "undesirable" words...

Note that all of the above changes had to be made to code that had been considered perfectly OK as closed source - which makes them pure extra effort, so to speak.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152654", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47367/" ] }
152,733
Sometimes when we check the commit history of a software, we may see that there are a few commits that are really BIG - they may change 10 or 20 files with hundreds of changed source code lines (delta). I remember that there is a commonly used term for such BIG commit but I can't recall exactly what that term is. Can anyone help me? What is the term that programmers usually use to refer to such BIG and giant commit? BTW, is committing a lot of changes all together a good practice? UPDATE: thank you guys for the inspiring discussion! But I think "code bomb" is the term that I'm looking for.
(1) Ben Collins-Sussman: "... "code bombs". That is, what do you do when somebody shows up to an open source project with a gigantic new feature that took months to write? Who has the time to review thousands of lines of code? ..."
(2) Dan Fabulich: "The Code Bomb, or: The Newbie with Big Ideas ... A code bomb is a patch that's so large that no one can review it."
(3) Google Summer of Code: Guidelines: "Commit early, commit often ... Please do not work the whole day and then push everything out in a single code bomb. Instead, every commit should be self-contained to one task only, which should be summarized in the log message."
(4) Jeff Atwood: "code-bombs ... Rule #30: Don't go dark. ..."
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152733", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/21021/" ] }
152,745
A little background: I'm 28 today, and I've never had any formal training in software development, but I do have two higher-education degrees, equivalent to a B.A. in Public Relations and an Executive MBA focused on Project Management. I worked in those fields for about 6 years total, and then, 2.5 years ago, I quit/lost my job and decided to shift directions. After a month thinking things through I decided to start freelancing, developing small websites in WordPress. I self-learned my way into it, and today I can say I run a humble but successful career developing themes and plugins from scratch for my clients - mostly agencies outsourcing some of their dev work for medium/large websites. But sometimes I just feel that not having studied enough math, or not having a formal understanding of things, really holds me back when I have to compete or work with more experienced developers. I'm constantly looking for ways to learn more, but I seem to lack the basics. Unfortunately, spending 4 more years in Computer Science is not an option right now, so I'm trying to learn all I can from books and online resources. This method is never going to have NASA employ me, but I really don't care right now. My goal is first to pass the bar and be able to call myself a real programmer. I'm currently spending my spare time studying Java For Programmers (to get a hold on a language everyone says is difficult/demanding), reading excerpts of Code Complete (to get hold of best practices) and also Code: The Hidden Language of Computer Hardware and Software (to grasp the inner workings of computers). TL;DR: My current situation is this: I'm basically capable of writing any complete system in PHP (with the help of Google and a few books), integrating Ajax, SQL and whatnot, though maybe a little slower than an experienced dev would expect, due to all the research involved. But I was stranded yesterday trying to figure out (not Google) a solution for the FizzBuzz test, because I didn't have the modulus operator - if($n1 % $n2 == 0) - memorized. What would you suggest as a good way to solve this dilemma? What subjects/books should I study that would get me solving problems faster and maybe more "in a programmer's way"? EDIT - It seems there was some confusion about what I did not know in order to solve FizzBuzz. Maybe I didn't express myself well: I knew the steps needed to solve the problem. What I didn't have memorized was the modulus operator. The problem was in transposing basic math to the program, not in knowing basic math. I took the test for fun, after reading about it on Coding Horror. I just decided it was a good baseline comparison between me and formally trained devs. I used this as an example of how not having dealt with math in a computer environment before makes me lose time looking up basic things like the modulus operator in order to solve simple problems.
In your case, you're self-taught and already have what seems to be a good, healthy and no-BS approach to learning. Still, some suggestions...

Practice Makes Perfect

I think you should dive into programming exercises, like: Project Euler, the classic 99 Prolog Puzzles (just as good for any language), TopCoder, Google Code Jam, and so forth. Even grab the past exam questions of known universities around you, or of local (or remote) programming competitions. For example, we have a nice one in France for aspiring high-school programmers, called Prologin, and it provides every year a good series of head-scratching puzzles (probably French only though, sorry, but that's the sort of thing I mean). For more: Where Can I Find Programming Puzzles and Challenges?

Classic Books and References

We could also recommend a very long list of amazing books, but I'd say the Zen answer is that there's no single right way to Enlightenment. It would be hard to tell you which ones are top of the list. So keep reading a ton of stuff to learn general pitfalls to avoid and best practices to follow, as you are correctly doing now. For more:
- List of Freely Available Programming Books
- What is the Most Influential Book Every Programmer Should Read?
- What Books Should Everyone Read?
- What Papers Should Everyone Read?
- What Videos Should Everybody Watch?
- or even Where do You Start When You Have an Interest in Computer Science and Programming?
- Become a Programmer, Motherf*cker (Pardon his French ;) His tutorials and lists of resources are worth it)

Pushing the Limits

Also, look for head-scratching material, like Hacker's Delight by Henry S. Warren and Matters Computational by Jörg Arndt. Not necessarily to be taken as an example of things to do nowadays, but worth trying to figure out what the hell is happening in there.

Peer Systems are Motivational

You may also want to lurk around (and get involved in) the following communities to improve your skills incrementally by being confronted with others: P.SE, naturally, StackOverflow, CodeGolf.SE, CodeReview.SE, or even the crazy folks at CS.SE (or specialized ones like Crypto.SE), or many, many other SE or non-SE community sites.

Part-Time Education

If you don't have the time or motivation to engage in another 4-year course or something similar (which may not even be necessary or rewarding anyway, and is expensive), you could consider looking for teaching material online. Of course, these are not limited to computer science. Thanks to the original MIT OpenCourseWare effort, there are now tons of universities that followed suit, and you can find a lot of university-level course material for free. It's not always easy to navigate and read through it on your own, but some of it is pretty well done. To start from the ground up and go pretty high up, consider also looking at the Khan Academy. Some go a bit further and offer real online courses for free, where you similarly have access to the course's material, but where they also provide paced lectures and regular self-assessments. For instance, visit Coursera or Udacity. Most of the above often publish their lectures on YouTube or iTunes U, so you'll find plenty of material if your thirst for knowledge wasn't already quenched by all the previous links.
If you want something that provides a closer experience to a "real" university, you can consider remote universities, which also allow you to work part-time, but require you to follow the pace and to take both self-assessments and end-of-year exams (sometimes on-site), like the Open University and its international variants.

Passion Keeps You Going

Find a pet project: create your own or join one or more existing software projects and contribute. Code, code, code. And then code some more. (And get enough eyeballs looking at your code to criticize you and have different perspectives.) The French say: C'est en forgeant que l'on devient forgeron ("it is by forging that one becomes a blacksmith"). Keep doing what you're doing, and eventually you'll be an expert. It takes time and work. See also I'm Having Trouble Learning for more suggestions. PS: Though it's a very controversial tool for interviews, and doesn't help to identify good candidates, I do often use FizzBuzz to at least weed out the incredibly "bad ones". So get crankin' on this practicing thing! :)
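Since the question's sticking point was the modulus operator, here is a minimal FizzBuzz sketch; it is written in C++ rather than the asker's PHP purely for illustration, and the 1-100 range is the usual convention rather than anything specified above.

```cpp
#include <iostream>
#include <string>

int main() {
    for (int n = 1; n <= 100; ++n) {
        std::string out;
        if (n % 3 == 0) out += "Fizz";   // % gives the remainder of a division,
        if (n % 5 == 0) out += "Buzz";   // so n % 3 == 0 means "divisible by 3"
        if (out.empty()) out = std::to_string(n);
        std::cout << out << "\n";
    }
}
```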
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152745", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56586/" ] }
152,759
Suppose I have an abstract class named Task . Is there a standard or convention that would suggest I should name it AbstractTask instead?
According to Bloch's Effective Java (Item 18) the Abstract prefix is a convention used in a special case. You can combine the virtues of interfaces and abstract classes by providing an abstract skeletal implementation class to go with each nontrivial interface that you export. ... By convention, skeletal implementations are called AbstractInterface, where Interface is the name of the interface they implement. But Bloch also points out that the name SkeletalInterface would have made sense, but concludes that the Abstract convention is now firmly established. As other answers have pointed out, in general there is no reason to apply this naming convention to all abstract classes.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152759", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2403/" ] }
152,780
I feel that often you don't really choose what format your code is in. I mean most of my tools in the past have decided for me. Or I haven't really even thought about it. I was using TextPad on windows the other day and as I was saving a file, it prompted me about ASCII, UTF-8/16, Unicode etc etc... I am assuming that almost all code written is ASCII, but why should it be ASCII? Should we actually be using UTF-8 files now for source code, and why? I'd imagine this might be useful on multi-lingual teams. Are there standards associated with how multilingual teams name variables/functions/etc?
The choice is not between ASCII and UTF-8. ASCII is a 7-bit encoding, and UTF-8 supersedes it - any valid ASCII text is also valid UTF-8. The problems arise when you use non-ASCII characters; for these you have to pick between UTF-8, UTF-16, UTF-32, and various 8-bit encodings (ISO-xxxx, etc.). The best solution is to stick with a strict ASCII charset, that is, just don't use any non-ASCII characters in your code. Most programming languages provide ways to express non-ASCII characters using ASCII characters, e.g. "\u1234" to indicate the Unicode code point at 1234. Especially, avoid using non-ASCII characters for identifiers. Even if they work correctly, people who use a different keyboard layout are going to curse you for making them type these characters. If you can't avoid non-ASCII characters, UTF-8 is your best bet. Unlike UTF-16 and UTF-32, it is a superset of ASCII, which means anyone who opens it with the wrong encoding gets at least most of it right; and unlike 8-bit codepages, it can encode about every character you'll ever need, unambiguously, and it's available on every system, regardless of locale. And then you have the encoding that your code processes; this doesn't have to be the same as the encoding of your source file. For example, I can easily write PHP in UTF-8, but set its internal multibyte-encoding to, say, Latin-1; because the PHP parser does not concern itself with encodings at all, but rather just reads byte sequences, my UTF-8 string literals will be misinterpreted as Latin-1. If I output these strings on a UTF-8 terminal, you won't see any differences, but string lengths and other multibyte operations (e.g. substr ) will produce wrong results. My rule of thumb is to use UTF-8 for everything; only if you absolutely have to deal with other encodings, convert to UTF-8 as early as possible and from UTF-8 as late as possible.
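As a concrete companion to the points that UTF-8 is a superset of ASCII and that byte counts diverge from character counts, here is a small C++ sketch; the literal "héllo" is spelled with explicit UTF-8 bytes, and the code-point counter is a simplified illustration that assumes well-formed UTF-8.

```cpp
#include <cstddef>
#include <cstring>
#include <iostream>

// Count Unicode code points in a well-formed UTF-8 string by skipping
// continuation bytes (those of the form 10xxxxxx).
std::size_t codePoints(const char* s) {
    std::size_t count = 0;
    for (; *s; ++s)
        if ((static_cast<unsigned char>(*s) & 0xC0) != 0x80)
            ++count;
    return count;
}

int main() {
    const char* ascii = "hello";              // valid ASCII, therefore also valid UTF-8
    const char* utf8  = "h\xC3\xA9llo";       // "héllo": é is the two bytes C3 A9

    std::cout << std::strlen(ascii) << " bytes, "
              << codePoints(ascii) << " characters\n";   // 5 bytes, 5 characters
    std::cout << std::strlen(utf8) << " bytes, "
              << codePoints(utf8) << " characters\n";    // 6 bytes, 5 characters
}
```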
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152780", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56734/" ] }
152,893
I run into this often during programming where I want to have a loop count index inside of a foreach and have to create an integer, use it, increment, etc. Wouldn't it be a good idea if there was a keyword introduced that was the loop count inside of a foreach? It could also be used in other loops as well. Does this go against the use of keywords seeing as how the compiler would not allow this keyword used anywhere but in a loop construct?
If you need a loop count inside a foreach loop, why don't you just use a regular for loop? The foreach loop was intended to make specific uses of for loops simpler. It sounds like you have a situation where the simplicity of the foreach is no longer beneficial.
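As a sketch of what the answer means - shown in C++, where the range-based for plays the role of the question's foreach, with illustrative names - an ordinary counted loop already gives you the index when you genuinely need it.

```cpp
#include <iostream>
#include <string>
#include <vector>

int main() {
    std::vector<std::string> items{"alpha", "beta", "gamma"};

    // "foreach"-style loop: simplest when you only need each element.
    for (const auto& item : items)
        std::cout << item << "\n";

    // When you need the position as well, the plain indexed for loop
    // is the natural fit - no extra counter bolted onto a foreach.
    for (std::size_t i = 0; i < items.size(); ++i)
        std::cout << i << ": " << items[i] << "\n";
}
```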
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152893", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56815/" ] }
152,920
I am developing a paid application in Python. I do not want the users to see the source code or decompile it. How can I hide the source code from the user while still running the code perfectly, with the same performance?
To a determined user, you can't. From a practical standpoint, you can do some tricks, such as wrapping it into a binary executable or using obfuscation. See here for full details: https://stackoverflow.com/questions/261638/how-do-i-protect-python-code
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152920", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56714/" ] }
152,993
Is it feasible to expect 100% code coverage in heavy jquery/backbonejs web applications? Is it reasonable to fail a sprint due to 100% coverage not being met when actual code coverage hovers around 92%-95% in javascript/jquery?
It is as realistic as it is unrealistic.

Realistic: If you have automated testing that has been shown to cover the entire code base, then insisting upon 100% coverage is reasonable. It also depends upon how critical the project is: the more critical, the more reasonable it is to expect or demand complete code coverage. It's easier to do this for small to medium-sized projects.

Unrealistic:
- You're starting at 0% coverage ...
- The project is monstrous, with many, many error paths that are difficult to recreate or trigger.
- Management is unwilling to commit or invest to make sure the coverage is there.

I've worked the gamut of projects, ranging from no coverage to decent. Never a project with 100%, but there were certainly times I wished we had closer to 100% coverage. Ultimately the question is whether the existing coverage meets enough of the required cases for the team to be comfortable shipping the product. We don't know the impact of a failure on your project, so we can't say if 92% or 95% is enough, or if 100% is really required - or, for that matter, whether the 100% fully tests everything you expect it to.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/152993", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/7954/" ] }
153,171
I am writing an application that works with satellite images, and my boss asked me to look at some of the commercial applications and see how they behave. I found a strange behavior, and as I kept looking, I found it in other standard applications as well. These programs first write to the temp folder and then copy the result to the intended destination. For example, 7-Zip first extracts to the temp folder, and then copies the extracted data to the location that you asked it to extract to. I see several problems with this approach: the temp folder might not have enough space, while the intended location does; and if it is a large file, the extra copy operation can take a non-negligible amount of time. I have thought about it a lot, but I couldn't see a single positive point to doing this. Am I missing something, or is there a real benefit to doing this?
A few reasons I can think of:
- On most platforms, file moves are atomic, but file writes are not (especially if you can't write all the data in one go). So if you have the typical producer/consumer pattern (one process produces files, the other watches a directory and picks up everything it finds), writing to a temp folder first and only then moving to the real location means the consumer can never see an unfinished file.
- If the process that writes the file dies halfway through, you have a broken file on your disk. If it's in a real location, you have to take care of cleaning it up yourself, but if it's in a temp location, the OS will take care of it.
- If the file happens to be created while a backup job is running, the job may pick up an incomplete file; temp directories are generally excluded from backups, so the file will only be included once moved to the final destination.
- The temp directory may be on a fast-but-volatile filesystem (e.g. a ramdisk), which can be beneficial for things like downloading several chunks of the same file in parallel, or doing in-place processing on the file with lots of seeks. Also, temp directories tend to cause more fragmentation than directories with less frequent reads, writes, and deletes, and keeping the temp directory on a separate partition can help keep fragmentation of the other partitions down.

TL;DR - it mostly boils down to atomicity, that is, you want to make it so that (at the final location) the file is either complete or not there at all at any given time.
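A minimal C++17 sketch of the write-to-temp-then-rename idiom described above; the file names are placeholders, and note (an assumption worth checking on your platform) that the rename is only atomic when the temporary file lives on the same filesystem as the destination.

```cpp
#include <filesystem>
#include <fstream>
#include <stdexcept>
#include <string>

namespace fs = std::filesystem;

// Write `data` so that readers of `target` only ever see a complete file:
// write everything to a sibling temp file first, then rename over the target.
void writeAtomically(const fs::path& target, const std::string& data) {
    // Keep the temp file next to the target (same filesystem), otherwise
    // the rename degrades into a non-atomic copy or simply fails.
    fs::path tmp = target;
    tmp += ".tmp";

    {
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        if (!out) throw std::runtime_error("cannot open temp file");
        out << data;
        out.flush();
        if (!out) throw std::runtime_error("write failed");
    }   // file closed here, before the rename

    fs::rename(tmp, target);   // atomic replace on POSIX; on Windows it may
                               // fail if `target` exists and is open elsewhere
}

int main() {
    writeAtomically("result.txt", "complete payload\n");
}
```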
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153171", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/1426/" ] }
153,266
I was wondering if anyone knows what operating systems are used in commercial airplanes (say, Boeing or Airbus). Also, what is the (preferred) real-time programming language? I heard that Ada is used at Boeing, so my question is: why Ada? What criteria did the Boeing guys use to choose this language? (I guess Java wouldn't be a great choice if the garbage collector woke up right at lift-off.)
Avionics

For aircraft control systems, we don't speak of operating systems but of avionics, integrated avionics, or airborne computer systems in general. And they are actually a combination of a multitude of independent or inter-dependent systems, for different functions (flight control, collision avoidance, weather, communications, black boxes...). Each controller is usually an independent module (hardware and software) for obvious security and safety reasons: they are critical control and monitoring systems, and if one of them were to fail or get damaged, it's a rather big "inconvenience" for the people who are depending on the aircraft. Dependability takes on its full meaning when you are in the flying machine. So usually each system is custom built for the purpose of its mission, built to work independently and interface with other systems, built to be fault-tolerant to its own failures AND to failures of the other systems it's interfaced to (because you don't want the pilot's sound system taking down the engine controls, for instance). It's not one big computer running everything. If you think of it from the perspective of a military aircraft rather than a commercial one (though it's similar in this respect, the image might help): if a part gets shot at, you'd rather at least have some other parts be able to keep going (the part controlling the communications and safety systems might be interesting to keep alive...). Hence also the big bunch of buttons you see in jetliners, to keep track of the status of the different systems. They are usually either built as custom components operating their own system, or they are run and scheduled by a micro-kernel (in most cases with support for real-time capabilities). It depends across vendors and countries, obviously, but they usually at least need to follow rather strict sets of regulations, design requirements and protocol specifics, which allow for: the control of their strict compliance with security and safety standards, and inter-communication with other systems (much better if that airplane you took off with in Reykjavik can "talk" to that ground-control equipment in Tokyo...).

Standardization Efforts

The DO-178B (revised in 1992) and its successor the DO-178C (revised in 2012), plus a bunch of associated documents, are an example of reference certifications for such compliance levels, and are recognized by the FAA (US), the EASA (EU), and Transport Canada, amongst others. Multiple other organizations are involved in the creation of such documents, like the EUROCAE. Such airborne systems are usually bespoke software, but the following systems are known to be used in some airplanes: WindRiver's VxWorks (see Aerospace & Defense uses) and QNX (actually, I'm not sure if QNX is used in airplanes, but it is used in ground-control systems). To give you a vague idea of the elements built into an avionics system, this list of avionics acronyms points to some of them (with some overlap).

Notable Languages Used in Commercial and Military Avionics

Apart from the usual suspects we know in the "mainstream" programming world, you'll come across some often-referenced names like Ada, and some less known languages like the (dated and now "retired" since 2010) JOVIAL.

Related StackExchange questions: Which operating system is used for airplane computers?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153266", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56393/" ] }
153,350
I've really fallen in love with unit testing and TDD - I am test infected. However, unit testing is normally used for public methods. Sometimes though I do have to test some assumptions-assertions in private methods too, because some of them are "dangerous" and refactoring can't help further. (I know, testing frameworks allow testing private methods). So it became a habit of mine that the first and the last line of a private method are both assertions. However, I've noticed that I tend to use assertions in public methods (as well as the private) just "to be sure". Could this be "testing duplication" since the public method assumptions are tested from the outside by the unit testing framework? Could someone think of too many assertions as a code smell?
Those assertions are really useful for testing your assumptions, but they also serve another really important purpose: documentation. Any reader of a public method can read the asserts to quickly determine the pre and post conditions, without having to look at the test suite. For this reason, I recommend you keep those asserts for documentation reasons, rather than testing reasons. Technically you are duplicating the assertions, but they serve two different purposes and are very useful in both. Keeping them as asserts is better than simply using comments, because they actively check assumptions whenever they are run.
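To show what "asserts as documentation" can look like in practice, here is a brief C++ sketch; the function and its pre/post-conditions are invented for illustration and are not taken from the question's codebase.

```cpp
#include <cassert>
#include <vector>

// Public-method style: the asserts both check and document the contract.
double averageOfPositives(const std::vector<double>& values) {
    // Precondition: the caller promises a non-empty list of positive values.
    assert(!values.empty() && "averageOfPositives requires at least one value");

    double sum = 0.0;
    for (double v : values) {
        assert(v > 0.0 && "all inputs must be positive");
        sum += v;
    }

    double result = sum / static_cast<double>(values.size());

    // Postcondition: an average of positive numbers must itself be positive.
    assert(result > 0.0);
    return result;
}

int main() {
    std::vector<double> data{1.0, 2.0, 3.0};
    return averageOfPositives(data) > 0 ? 0 : 1;
}
```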
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153350", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32342/" ] }
153,366
Recently there has been a lot of praise for Node.js. I am not a developer who has had much exposure to network applications. From my bare understanding of Node.js, its strength is that we have only one thread handling multiple connections, providing an event-based architecture. However, in Java, for example, I can create just one thread using NIO/AIO (which are non-blocking APIs, from my bare understanding), handle multiple connections using that thread, and provide an event-based architecture to implement the data-handling logic (it shouldn't be that difficult, by providing some callbacks etc.). Given that the JVM is an even more mature VM than V8 (and I expect it to run faster too), and an event-based handling architecture doesn't seem that difficult to create, I am not sure why Node.js is attracting so much attention. Did I miss some important points?
While that concept can indeed be implemented in many languages (and as dodgy_coder mentioned, it has been implemented in Ruby and Python at least), it's not quite as trivial as you state. True, Java has non-blocking IO APIs. So you can do raw disk/network IO in a non-blocking way. However, every API that somehow wraps or handles IO needs to be implemented in a non-blocking way as well. Every XML parser, every database driver, every file-format converter needs to be written to support non-blocking IO. Because if a single library is blocking in this pattern, then that brings down your server's performance to stone-age values. Node.js has that library infrastructure, because it was always designed that way: every library that strives to become popular has to provide an asynchronous API or it will not be used.
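As a tiny Node.js sketch of that point (config.json is just a placeholder file): if even one call in the request path uses the blocking variant, the single event-loop thread stalls for every connected client.

const fs = require('fs');

// Blocking: the whole event loop waits until the disk read finishes.
const configText = fs.readFileSync('config.json', 'utf8');

// Non-blocking: the read is handed off and the callback runs later,
// so the event loop stays free to serve other connections meanwhile.
fs.readFile('config.json', 'utf8', (err, data) => {
  if (err) {
    console.error('read failed:', err);
    return;
  }
  console.log('config loaded, length:', data.length);
});

That is why the whole ecosystem, from parsers to database drivers, has to offer the asynchronous form.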
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153366", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/30168/" ] }
153,410
Most of the projects that I work on consider development and unit testing in isolation, which makes writing unit tests at a later stage a nightmare. My objective is to keep testing in mind during the high-level and low-level design phases themselves. I want to know if there are any well-defined design principles that promote testable code. One such principle that I have come to understand recently is Dependency Inversion, through Dependency Injection and Inversion of Control. I have read that there is something known as SOLID. I want to understand whether following the SOLID principles indirectly results in code that is easily testable. If not, are there any well-defined design principles that promote testable code? I am aware that there is something known as Test-Driven Development. However, I am more interested in designing code with testing in mind during the design phase itself rather than driving design through tests. I hope this makes sense. One more question related to this topic is whether it's alright to refactor an existing product/project and make changes to code and design for the purpose of being able to write a unit test case for each module.
Yes, SOLID is a very good way to design code that can be easily tested. As a short primer: S - Single Responsibility Principle: An object should do exactly one thing, and should be the only object in the codebase that does that one thing. For instance, take a domain class, say an Invoice. The Invoice class should represent the data structure and business rules of an invoice as used in the system. It should be the only class that represents an invoice in the codebase. This can be further broken down to say that a method should have one purpose and should be the only method in the codebase that meets this need. By following this principle, you increase the testability of your design by decreasing the number of tests you have to write that test the same functionality on different objects, and you also typically end up with smaller pieces of functionality that are easier to test in isolation. O - Open/Closed Principle: A class should be open to extension, but closed to change . Once an object exists and works correctly, ideally there should be no need to go back into that object to make changes that add new functionality. Instead, the object should be extended, either by deriving it or by plugging new or different dependency implementations into it, to provide that new functionality. This avoids regression; you can introduce the new functionality when and where it is needed, without changing the behavior of the object as it is already used elsewhere. By adhering to this principle, you generally increase the code's ability to tolerate "mocks", and you also avoid having to rewrite tests to anticipate new behavior; all existing tests for an object should still work on the un-extended implementation, while new tests for new functionality using the extended implementation should also work. L - Liskov Substitution Principle: A class A, dependent upon class B, should be able to use any X:B without knowing the difference. This basically means that anything you use as a dependency should have similar behavior as seen by the dependent class. As a short example, say you have an IWriter interface that exposes Write(string), which is implemented by ConsoleWriter. Now you have to write to a file instead, so you create FileWriter. In doing so, you must make sure that FileWriter can be used the same way ConsoleWriter did (meaning that the only way the dependent can interact with it is by calling Write(string)), and so additional information that FileWriter may need to do that job (like the path and file to write to) must be provided from somewhere else than the dependent. This is huge for writing testable code, because a design that conforms to the LSP can have a "mocked" object substituted for the real thing at any point without changing expected behavior, allowing for small pieces of code to be tested in isolation with the confidence that the system will then work with the real objects plugged in. I - Interface Segregation Principle: An interface should have as few methods as is feasible to provide the functionality of the role defined by the interface . Simply put, more smaller interfaces are better than fewer larger interfaces. This is because a large interface has more reasons to change, and causes more changes elsewhere in the codebase that may not be necessary. Adherence to ISP improves testability by reducing the complexity of systems under test and of dependencies of those SUTs. 
If the object you are testing depends on an interface IDoThreeThings which exposes DoOne(), DoTwo() and DoThree(), you must mock an object that implements all three methods even if the object only uses the DoTwo method. But, if the object depends only on IDoTwo (which exposes only DoTwo), you can more easily mock an object that has that one method. D - Dependency Inversion Principle: Concretions and abstractions should never depend on other concretions, but on abstractions . This principle directly enforces the tenet of loose coupling. An object should never have to know what an object IS; it should instead care what an object DOES. So, the use of interfaces and/or abstract base classes is always to be preferred over the use of concrete implementations when defining properties and parameters of an object or method. That allows you to swap one implementation for another without having to change the usage (if you also follow LSP, which goes hand in hand with DIP). Again, this is huge for testability, as it allows you, once again, to inject a mock implementation of a dependency instead of a "production" implementation into your object being tested, while still testing the object in the exact form it will have while in production. This is key to unit testing "in isolation".
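As a rough Java illustration of how DIP and ISP pay off in tests (the InvoiceStore and InvoiceService names are invented for the example):

// A small, role-focused abstraction (ISP) that the service depends on (DIP).
// A production implementation would talk to a database; it is omitted here.
interface InvoiceStore {
    void save(String invoiceId, long amountCents);
}

class InvoiceService {
    private final InvoiceStore store;

    // The dependency is injected rather than constructed inside the class.
    InvoiceService(InvoiceStore store) {
        this.store = store;
    }

    void bill(String invoiceId, long amountCents) {
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        store.save(invoiceId, amountCents);
    }
}

// In a unit test, this hand-rolled fake stands in for the real store,
// so InvoiceService can be tested in isolation.
class RecordingStore implements InvoiceStore {
    String lastId;
    long lastAmount;

    public void save(String invoiceId, long amountCents) {
        lastId = invoiceId;
        lastAmount = amountCents;
    }
}

Because InvoiceService only knows the abstraction, the fake can be substituted for the real implementation (LSP) without the service noticing.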
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153410", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57196/" ] }
153,474
Even in languages with explicit pointer manipulation, like C, arguments are always passed by value (you can pass them by reference, but that's not the default behavior). What is the benefit of this? Why do so many languages pass by value, and why do others pass by reference? (I understand Haskell is passed by reference, though I'm not sure.)
Pass by value is often safer than pass by reference, because you cannot accidentally modify the parameters to your method/function. This makes the language simpler to use, since you don't have to worry about the variables you give to a function. You know they won't be changed, and this is often what you expect . However, if you want to modify the parameters, you need to have some explicit operation to make this clear (pass in a pointer). This will force all your callers to make the call slightly differently ( &variable , in C) and this makes it explicit that the variable parameter may be changed. So now you can assume that a function will not change your variable parameter, unless it is explicitly marked to do so (by requiring you to pass in a pointer). This is a safer and cleaner solution than the alternative: Assume everything can change your parameters, unless they specifically say they can't.
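A small C sketch of the convention described above; the caller can see at the call site which argument might be modified:

#include <stdio.h>

/* Pass by value: the function gets a copy, the caller's variable is safe. */
void report(int count) {
    count = count + 1;          /* only the local copy changes */
    printf("seen %d items\n", count);
}

/* Pass a pointer: the & at the call site signals "this may be changed". */
void increment(int *count) {
    *count = *count + 1;
}

int main(void) {
    int items = 3;
    report(items);              /* items is still 3 afterwards */
    increment(&items);          /* the & makes the possible mutation explicit */
    printf("items is now %d\n", items);
    return 0;
}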
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153474", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57129/" ] }
153,483
The Apache License, v2.0 [..] 2. Grant of Copyright License Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense and distribute the Work and such Derivative Works in Source or Object form. [..] 3. Grant of Patent License Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. While the meaning of the Copyright License provision is rather clear, I did not get the meaning of the Patent License provision. What further advantages does the "Grant of Patent License" provision give to Contributors? Why are they useful? Is the "Grant of Patent License" provision useful only in case of patent litigation?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153483", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57137/" ] }
153,547
I am looking at a new position with a new company. I have talked to some people in the past (in general, not at this company) who said they had been given a yearly budget to buy new computer stuff to keep up to date. The reason I feel this question is worth asking here is that Joel comes right out and says an employer should pay for the best equipment money can buy... within reason of course. From The Joel Test: 12 Steps to Better Code 9. Do you use the best tools money can buy? Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer... Top notch development teams don't torture their programmers. Even minor frustrations caused by using underpowered tools add up, making programmers grumpy and unhappy. And a grumpy programmer is an unproductive programmer... Does anyone know whether offering such an allowance or budget is standard in the industry? I have never worked for a company like this, but I am thinking I should toss this in the ring for negotiations. Seems reasonable. How do bigger companies like MS, Google, and Apple handle this? If you say yes, give a range... I have been told numbers from $5k to $10k. Seems high to me, but hey, I would gladly take it.
Personally, I would want the company to just sort out the equipment I need, not give me a budget and make me deal with all the research, negotiation and other hassle that goes into buying and installing corporate hardware. In the end, all I want to have to do about hardware is state my few requirements, and have someone else do all of that work, so that I can get on with mine. More important and appropriate (IMHO) is a personal training budget, with which you can buy books and attend courses and conferences.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153547", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32542/" ] }
153,586
What is the difference between Object Oriented Design Patterns and Principles? Are they different things? As far as I understand, both of them try to achieve some common goal (e.g. flexibility). So can I say a pattern is a principle and vice versa? Design Principle = SOLID (e.g. the Dependency Inversion Principle) Design Pattern = GoF (e.g. the Abstract Factory Pattern)
No, they aren't the same. Patterns are common solutions to object-oriented programming problems . (I'm not aware of any similar books for functional or declarative programming.) The idea was crystallized in the famous "Design Patterns" book by the Gang of Four in 1995. As Andre points out, patterns are common in every paradigm. I'll reiterate my previous statement: I'm not aware of any similar books for functional or declarative programming, but Andre has remedied my ignorance with the link he provided below. (Thank you, Andre.) Principles are less about particular languages or paradigms, more general. "Don't Repeat Yourself" - DRY principle - is true for all programming.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153586", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57211/" ] }
153,656
William Cook in a tweet wrote that: " UML is the worst thing to ever happen to MDD. Fortunately many people now realize this ... " I would like to know the reasoning behind that claim (apparently, I'm not referring to his personal opinion). I've noticed that many people out there don't like UML that much. It is also worth mentioning that he is in academia, where UML is pretty much the holy grail of effective design and modelling.
Well, I'm the academic who posted the original tweet. Tweets are not meant to be scholarly articles. They are advertisements, and I think they can also be controversial. Here are my follow-up tweets: 1) UML was created to model OO designs. It effect you are modeling the code of a system, not the system's behavior. UML is at wrong level. 2) the idea that 7 (or 13) diagram formats in UML can cover everything is crazy. What about GUIs, web wireframes, authorization, etc. ??? 3) UML has encouraged the idea that models must be graphical. Ridiculous! Text and graphic models are both useful and often interchangeable 4) UML is at once too large and complex and at the same time very limiited. Stereotype and profiles are not effective for usable extensions. Note that I'm not necessarily saying UML is bad. I'm simply saying that it is not helping the goal of "model-driven development", which is what I'm interested in. I don't understand the comment about "holy grail".
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153656", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32342/" ] }
153,738
A co-worker and I were looking at the behavior of the new keyword in C# as it applies to the concept of hiding. From the documentation : Use the new modifier to explicitly hide a member inherited from a base class. To hide an inherited member, declare it in the derived class using the same name, and modify it with the new modifier. We've read the documentation, and we understand what it basically does and how it does it. What we couldn't really get a handle on is why you would need to do it in the first place. The modifier has been there since 2003, and we've both been working with .Net for longer than that and it's never come up. When would this behavior be necessary in a practical sense (e.g.: as applied to a business case)? Is this a feature that has outlived its usefulness or is what it does simply uncommon enough in what we do (specifically we do web forms and MVC applications and some small factor WinForms and WPF)? In trying this keyword out and playing with it we found some behaviors that it allows that seem a little hazardous if misused. This sounds a little open-ended, but we're looking for a specific use case that can be applied to a business application that finds this particular tool useful.
You can use it to imitate return type covariance (Note: After this answer was written, C# 9.0 added support for return type covariance). Eric Lippert's Explanation . Eric provides this example code: abstract class Enclosure { protected abstract Animal GetContents(); public Animal Contents() { return this.GetContents(); } } class Aquarium : Enclosure { public new Fish Contents() { ... } protected override Animal GetContents() { return this.Contents(); } } This is a work-around. public override Fish Contents() { ... } is not legal, despite being safe. In general, you should not use method hiding, as it is confusing to consumers of your class (the specific example above does not suffer from this problem). Just name your new method something else if you don't want to override an existing method. A likely real-world situation where you would need method hiding is if the provider of a base class added a generic method which you had already added to a derived class. Such a program will compile (and give warnings) without the new keyword, but adding new says, "I know my version of this method is replacing the base class's version. This is horrible and confusing, but we're stuck with it." That's still better than forcing the derived class to rename their method. Just allowing the derived method to be treated as being an override would cause problems. Ignoring any concerns with implementing the compiler, the new method is semantically different from the base method, but polymorphism would cause the new method to be called when asked to call a method with the same name. This situation is discussed in detail in this post by Eric Lippert.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153738", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16747/" ] }
153,791
I am preparing for a programming contest where we have to code in C++ and it is all about producing working code in a short time. An example would be to use a macro to get the minimum of two ints or using memsets to initialize arrays (but I was told that you shouldn't use either here ). This leads to the question, what kind of coding techniques exist to use at a real job?
The way to produce working code fast is to... slow down. Take very small steps. Make sure you know what each step is for. Make sure that after each step your code compiles and runs. Best of all, use Test-Driven Development. Write a failing test. Write just enough code to make the test pass. Refactor to make the code clean, making sure it still passes all the tests. Repeat. If you do not do this, then it is very easy to write a big pile of code, which does not work. Then it will take you a very long time to figure out why it does not work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153791", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46319/" ] }
153,799
Edit: Thanks for all of the answers, guys! I think I'm just going to include some kind of text on my webpage telling users to only download from the links provided. The thing is that there have been some legitimate and illegitimate websites that have picked up on my software, so it would be safer just to tell users to avoid any website that I don't link to. Thanks a lot for all the help! Original: The software I've created is hosted on what you could call a "bad" website. It's hard to explain, so I'll just provide an example. I've made a free password generator. This, along with most of my other FREE software, is available on this website. This is their description of my software: Platform: 7/7 x64/Windows 2K/XP/2003/Vista Size: 61.6 Mb License: Trial File Type: .7z Last Updated: June 4th, 2011, 15:38 UTC Avarage Download Speed: 6226 Kb/s Last Week Downloads: 476 Toatal Downloads: 24908 Not only is the size completely skewed, it is not trial software, it's free software. The thing is that it's not the description I'm worried about--it's the download links. The website is a scam website. They apparently link to "cracks" and "keygens", but not only is that in itself illegal, they actually link to fake download websites that give you viruses and charge your credit card. Just to list things that are wrong with this website: they claim all software is paid software then offer downloads for keygens and cracks; they fake all details about the program and any program reviews and ratings; they and the downloads site they link to are probably run by the same person, so they make money off of these lies. I'm only a teenager with no means to pursue legal action. This means that, unfortunately, I can't do anything that will actually get results. I'd like my software to only be downloaded off my personal website. I have links to four legitimate locations to download my software and that's it. Essentially, is there anything I can do about this? As I said above, I can't pursue legal action, but is there some way I can discourage traffic to that website by blacklisting it or something? Can I make a claim on MY website to only download my software from the links I provide? Or should I just pay no mind? Because, honestly, it's a bit of a ways back in Google results.
Unfortunately there is little you can do. I think you have the answers in your last paragraph. As far as making claims on your web site about other sources - put an app signature on your site, explain that some "less than desirable" sites are listing your apps and that it should only be downloaded from <here> or <here> . Do not name or provide any information about those sites - it opens you up to legal action (and raises their profile in Google). Most savvy users will track the original source down. There is little you can do about the others. Also be aware that 'your' app on their sites is probably your app plus extras - hence the need for a signature.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153799", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57366/" ] }
153,816
I'm curious if my current experiences as an intern are representative of actual industry. As background, I'm through the better part of two computing majors and a math major at a major university; I've aced every class and adored all of them, so I'd like to think that I'm not terrible at programming. I got an internship with one of the major software companies, and half way through now I've been shocked at the extraordinarily low quality of code. Comments don't exist, it's all spaghetti code, and everything that could be wrong is even worse. I've done a ton of tutoring/TAing, so I'm very used to reading bad code, but the major industry products I've been seeing trump all of that. I work 10-12 hours a day and never feel like I'm getting anywhere, because it's endless hours of trying to figure out an undocumented API or determine the behavior of some other part of the (completely undocumented) product. I've left work hating the job every day so far, and I desperately want to know if this is what is in store for the rest of my life. Did I draw a short straw on internships (the absurdly large paychecks imply that it's not a low quality position), or is this what the real world is like?
They call it the Real World™ for a reason. 99% of what you will encounter in the real corporate world will be considered crap, and for good reason that I will explain. The 1% that isn't considered crap will become crap eventually. #1 Write Code, #2 ????, #3 Profit! First off, businesses exist to turn a profit; they do not exist to generate mountains of perfectly theoretically clean designed and pristine academic code housed in golden repositories of perfectness. Not even close, not even the ones in the business of selling the source code they produce. In the business world code is a means to an end . If some code solves a business problem and makes more money than it costs to create and maintain, then it is desirable for the business. Employing you to write code is just one way for the business to obtain code. Theory 0 - Practice ∞ Ideally maintenance should be more of a concern, but it usually isn't, because in the short term it doesn't win out financially. In the long term, software usually has a relatively short life cycle, especially web-based applications; they get obsoleted quickly and re-written more often. In-house line-of-business applications are the ones that churn on as what are perceived as endless zombie projects because of many momentum-based reasons. These projects are actually successes; they continue because they continue making the business a profit. In theory there is no difference between theory and practice. In practice there is. - Yogi Berra In theory, perfectly architected, absolutely clean, pristine code bases with 100% code coverage should save companies money; in practice it doesn't even come close to delivering anything close to a valid return on investment. Physics of the Software Lifecycle There is also a super powerful entropy force at work in the world of software. It is a black hole of inevitability that condemns all software to degenerate into a Big Ball of Mud . The farther you start from a BBM the better, but every software system will eventually get there given enough time. How quickly you approach 100% entropy is determined by where you start and how rapidly you pile on technical debt and how high the interest on it is. Software systems degenerate and rot because of maintenance, not because of the lack of it. A system that is in place for years with no code changes by definition meets all its requirements and goals and is a success. It is the systems that require constant change, because they started out closer to maximum entropy, that are constantly poked and prodded, and it is the maintenance that accelerates the negative change. Good Enough is Good Enough Short-lifecycle systems like websites that change constantly don't benefit from expensive huge upfront design or 100% code coverage in unit tests, because the amortization time is too short to recoup the costs. Long-lifecycle systems like the above-mentioned internal line-of-business apps don't really benefit from massive investments in 100% code coverage unit tests either, because the rate of change over the life of the project approaches a constant that is near zero in a non-linear fashion. That is why end-of-life plans are more important, and replacement systems should be planned just as something is being released, not when it has passed its prime by a few years and become unsupportable so a new system must be rushed into place. They don't teach about the BBM as far as I know; I have never encountered a recent CS graduate who knew what it was, much less why it happens.
That is why Good Enough is Good Enough , anything more or less isn't. Software Slumlords There are real-estate slumlords for a reason: they make a profit on the run-down shanty buildings they own. They make more profit than they spend on incremental maintenance of the run-down property. If they didn't, they would tear down the building and replace it. But they don't, because the incremental costs are far less than overhauling or replacing the entire building. There are also customers ( tenants ) that are willing to pay for run-down property. No building owner, slumlord or not, is going to spend money on a property just because of some academic notion of perfection that doesn't translate to a substantial profit over the associated cost. No customer is going to pay for upgrades to a software system that is working acceptably to them. No business is going to spend money on just writing and re-writing code for no tangible substantial profit. Microsoft is the most dominant and successful software slumlord there is. Windows did not start getting major foundational re-writes until very recently. And they still haven't dropped all the legacy code from the kernel. It doesn't make business sense to them; people are more than willing to accept the low bar of expectations they have set over the last decade. Prognosis This has been a pattern for the 20+ years I have been in software development. It isn't going to change any time soon. This isn't the way people want it to be out of some belief system; it is a reality of external forces on a business. Business drives decision making; profits aren't evil, they pay your salary; short-term or long-term vision is irrelevant, this is a short-term industry of constant change by definition. Anyone who argues against good enough to make a profit doesn't understand business. I spent 15 years consulting and learned very quickly that good enough was just that; anything else was costing me money. Yeah, I wanted things to be perfect, but unless you are selling a code base (and 99.99999% of the time you are selling a solution, not a code base), all that perfect, clean, organized, elegant code is lost and you just wasted time you will never get reimbursed for. Progress and Hope Agile methodologies are a good step in the right direction, at least philosophically. They treat the chaos and constant change as a first-class citizen and accept it. They reject dogmatic practices, acknowledging that the methodologies and practices should change along with the requirements and technologies. They accept the entropy that is introduced by the lack of time, changing requirements, changing staff and the liveness of a software system with the concept of technical debt. But Agile isn't a panacea; it isn't going to change the fundamental laws of physics, and code bases will rot regardless. It is up to management to plan on dealing with the rot before it gets completely out of hand and unmanageable. Agile, when done correctly, helps manage the entropy, slow it down, track it, measure it and deal with it in a planned manner. It won't stop it! Career Decision If this is a real philosophical problem for you, you should probably consider other career choices, because the way things work has valid business merit behind it. Open Source projects don't have any better track record, and in many cases the code is even worse than most corporate code I have seen.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153816", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57374/" ] }
153,903
We have REST web services that can serve XML or JSON (WCF). I'm toying with the idea of implementing Protobufs. Why? PROS Less load on servers. Smaller message size - less traffic. It is easier to switch now than later. CONS Needs to be implemented. Going to be harder to troubleshoot/sniff messages for debugging. I can enable GZip on the server, and gzipped JSON would consume about as much traffic anyway. What is your suggestion and/or experience on this?
Does the business value of implementing them exceed the cost? If you implement, you need to change not just your server, but all clients (although you can support both formats and only change clients as needed). That will take time and testing, which is a direct cost. And don't underestimate the time taken to really understand protocol buffers (especially the reasons to make a field required or optional), and the time taken to integrate the protobuf compiler into your build process. So does the value exceed that? Are you faced with a choice of "our bandwidth costs are X% of our revenues and we can't support that"? Or even "we need to spend $20,000 to add servers to support JSON"? Unless you have a pressing business need, your "pros" aren't really pros, just premature optimization.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153903", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27781/" ] }
153,926
Is it acceptable to return a non-zero exit code if the program in question ran properly? For example, say I have a simple program that (only) does the following: The program takes N arguments. It returns an exit code of min(N, 255). Note that any N is valid for the program. A more realistic program might return different codes for successful runs that signify different things. Should these programs write this information to a stream such as stdout instead?
It depends on the environment, but I'd say it's poor style. Unix-like systems have a strong convention that an exit status of 0 denotes success, and any non-zero exit status denotes failure. Some, but not all, programs distinguish between different kinds of failures with different non-zero exit codes; for example grep typically returns 0 if the pattern was found, 1 if it wasn't, and 2 (or more) if there was an error such as a missing file. This convention is pretty much hard wired into Unix shells. For example, in sh , bash , and other Bourne-like shells, the if statement treats a 0 exit status as success/true, and a non-zero exit status as failure/false: if your-command then echo ok else echo FAILURE fi I believe the conventions under MS Windows are similar. Now there's certainly nothing stopping you from writing your own program that uses unconventional exit codes, especially if nothing else is going to interact with it, but be aware that you're violating a well established convention, and it could come back and bite you later. The usual way for a program to return this kind of information is to print it to stdout : status="$(your-command)" echo "Result is $status"
{ "source": [ "https://softwareengineering.stackexchange.com/questions/153926", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/16467/" ] }
154,056
I'm basing my Git repo on A successful Git branching model and was wondering what happens if you have this situation: Say I'm developing on two feature branches A and B, and B requires code from A. The X node introduces an error in feature A which affects branch B, but this is not detected at node Y where feature A and B were merged and testing was conducted before branching out again and working on the next iteration. As a result, the bug is found at node Z by the people working on feature B. At this stage it's decided that a bugfix is needed. This fix should be applied to both features, since the people working on feature A also need the bug fixed, since its part of their feature. Should a bugfix branch be created from the latest feature A node (the one branching from node Y) and then merged with feature A? After which both features are merged into develop again and tested before branching out? The problem with this is that it requires both branches to merge to fix the issue. Since feature B doesn't touch code in feature A, is there a way to change the history at node Y by implementing the fix and still allowing the feature B branch to remain unmerged yet have the fixed code from feature A? Mildly related: Git bug branching convention
Use a distinct commit to fix the bug in one branch, then cherry-pick that commit into the other branch.
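In command form, assuming the branch names feature-A and feature-B and that the fix is committed on feature-A first (adjust the names and the placeholder commit hash to your repository):

# On feature-A: commit the bugfix as its own, self-contained commit
git checkout feature-A
git commit -am "Fix regression introduced in feature A"

# Copy that single commit onto feature-B without merging the branches
git checkout feature-B
git cherry-pick <sha-of-the-fix-commit>

Both branches now carry the fix in their own history, and they can still be merged into develop later as usual.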
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154056", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/46013/" ] }
154,126
I like the fact that the C language lets you use binary arithmetic in an explicit way in your code; sometimes the use of binary arithmetic can also give you a little edge in terms of performance. But since I started studying C++, I can't really tell how much I have seen the explicit use of something like that in C++ code, something like a pointer-to-pointer structure or an instruction for jumping to a specific index value through binary arithmetic. Is binary arithmetic still important and relevant in the C++ world? How can I optimize my arithmetic and/or access to a specific index? What about C++ and the way in which the bits are arranged according to the standard? ... or have I been looking at the wrong coding conventions?
In short: No, it is not any good to use "binary arithmetic" (in the sense the question asks about) or "C style" in C++ . What makes you believe bitwise arithmetic would be any faster, really? You can throw your bitwise operations all over the code, but what would make it any faster? The thing is, almost whatever trivial thing you're trying to solve can be solved more easily by the standard high-level features (including the standard library, STL and STL algorithms). In some specific cases you might indeed want to switch the individual bits, for example when working with bitmasks . Or when storing very compact data, for example when writing a compression algorithm, a dense file format, or when working with embedded systems. If you are concerned with performance, always write a simple and trivial algorithm first. Just make it work. Then, measure the time your algorithm takes with typical input. Now, if you at this point feel that it is too slow, only then can you try optimizing it by hand with these "bitwise arithmetic" tricks. And after you are done, measure whether your code is any faster. The chances are that it is not, unless you really know what you are doing in that specific situation/case. Frankly, the best way to understand this kind of low-level, performance-related construct is to study assembly language. It really makes you realize that no, writing some bit-manipulating wizzcode is not any faster than using that sort(begin(v),end(v)) . Just because you operate at a low level doesn't mean you operate fast. In general, algorithms are more important than implementation details! Basically, whatever "C style" means, please stay away from it when writing C++. They are two completely different languages. Don't mix them. Bjarne Stroustrup gave a great talk about C++ style at Microsoft's GoingNative 2012 conference this February, please take a look: http://channel9.msdn.com/Events/GoingNative/GoingNative-2012/Keynote-Bjarne-Stroustrup-Cpp11-Style Especially the parts between around 10 and 15 minutes are great, when he talks about old C-style code compared to modern C++ style.
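Going back to the measuring advice above, here is a small C++ sketch; the sorting workload is only a stand-in for whatever your real hot path is:

#include <algorithm>
#include <chrono>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<int> v(1000000);
    std::iota(v.rbegin(), v.rend(), 0);   // fill with descending values

    // 1. Write the simple, obvious version first.
    auto start = std::chrono::steady_clock::now();
    std::sort(v.begin(), v.end());
    auto stop = std::chrono::steady_clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
    std::cout << "simple version took " << ms.count() << " ms\n";

    // 2. Only if this is proven too slow on real input do you reach for
    //    low-level tricks, and then you measure again to confirm the win.
    return 0;
}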
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154126", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57405/" ] }
154,155
I know the company you work for owns the code and obviously you will get arrested if you try to sell it. But is it uncommon for developers to keep a personal copy of the code they wrote (for future reference)? Apparently this guy was sent to prison for copying source code.
But is it uncommon for developers to keep a personal copy of the code they wrote (for future reference)? I don't know how common it is, but common or not, it's still a bad idea. Programmers often operate in the mindset that solving the same problem twice is a waste of time. We try to design our code to be reusable (sometimes). We build libraries of classes and functions to reuse at some point in the future. We sometimes even give our code away so that nobody else will ever have to write code to solve the same problem that we just did. So it may be understandable to want to take "your" code with you when you move from one job to another. But you still shouldn't do it, for the following reasons: It's not your code to take. The code you wrote for your former employer is part of the business they built. Their code is part of their competitive advantage. Sure, competitors could write their own code to solve the same problem, but they shouldn't get the advantage of building on work that your employer paid for, owns, and didn't authorize you to take. If they have any sense at all, your new employer doesn't want any part of the code that you took from your former employer. The more you "refer" to work you did for some previous employer, the more you put your new employer in legal jeopardy. If you ever accidentally let it slip at New Employer that you've still got a copy of the stuff you did for Old Employer, your boss at New will probably realize that you'll take a copy of their code when you leave for some other job. That might not sit well with him or her. Even if you're not cribbing actual lines or just vague ideas from your old stuff, just having your old stuff in your possession could raise suspicions that you might be using it for something. Imagine that Old Employer sues New Employer, and as one of a small handful of employees that moved from Old to New, you suddenly find yourself giving a deposition. None of you actually copied Old's code into New's product, but the lawyer in front of you asks: "Mr. SuperFoo, do you now or have you at any time since leaving Old Employer had in your possession a copy of any code that you or anyone else wrote while working at Old Employer?" You don't need the code that you wrote last month, last year, or longer ago. You solved the problem once, and now you know how to solve the problem again. Or, you may know how not to solve the problem -- your new implementation will be better because you have experience. There are better ways. It's hard to go back and learn anything useful by reading old code out of context. A diary or journal that describes what you learn, ideas that you have, etc. is far more useful later on. Even if Old Employer knows that you've got their code and is OK with that, you still don't want it! The only thing that can come of having it is a 3am phone call: "Hey there, SuperFoo? How are you doing? Listen, you've got a copy of our stuff, right? Look, we've got a problem with the system, and we've narrowed it down to a couple files that you wrote that our new guy just doesn't understand. I know it's late, but could you walk him through SuperDuper.pl?" Let it go. You don't need it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154155", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57582/" ] }
154,193
Lately I've been getting professional work, hanging out with other programmers, and making friends in the industry. The only thing is I'm 100% self-taught. It's caused my style to extremely deviate from the style of those that are properly trained. It's the techniques and organization of my code that's different. It's a mixture of several things I do. I tend to blend several programming paradigms together. Like Functional and OO. I lean to the Functional side more than OO, but I see the use of OO when something would make more sense as an abstract entity. Like a game object. Next I also go the simple route when doing something. When in contrast, it seems like sometimes the code I see from professional programmers is complicated for the sake of it! I use lots of closures. And lastly, I'm not the best commenter. I find it easier just to read through my code than reading the comment. And most cases I just end up reading the code even if there are comments. Plus I've been told that, because of how simply I write my code, it's very easy to read it. I hear professionally trained programmers go on and on about things like unit tests. Something I've never used before so I haven't even the faintest idea of what they are or how they work. Lots and lots of underscores "_", which aren't really my taste. Most of the techniques I use are straight from me, or a few books I've read. Don't know anything about MVC, I've heard a lot about it though with things like backbone.js. I think it's a way to organize an application. It just confuses me though because by now I've made my own organizational structures. It's a bit of a pain. I can't use template applications at all when learning something new like with Ubuntu's Quickly. I have trouble understanding code that I can tell is from someone trained. Complete OO programming really leaves a bad taste in my mouth, yet that seems to be what EVERYONE else is strictly using. It's left me not that confident in the look of my code, or wondering whether I'll cause sparks when joining a company or maybe contributing to open source projects. In fact I'm rather scared of the fact that people will eventually be checking out my code. Is this just something normal any programmer goes through or should I really look to change up my techniques?
In fact I'm rather scared of the fact that people will eventually be checking out my code. Good. Being conscious that people are going to look at your code will make you try harder. Programming has become an incredibly large field. There are dozens of topics, tools, niches, and specializations, some of which are whole careers unto themselves. There is a vast amount to learn and know, and when you work with other programmers, there will always be stuff you know that they don't and stuff they know that you don't. This is a good thing. If you are worried that your experience is incomplete, there are plenty of steps you can take to amend that through formal education and collaboration with trained experts. But it sounds like you're afraid there's some quantifiable milestone after which people say "Okay, now that I've mastered that, I'm officially a programmer." There is no such milestone. I've definitely had moments where I thought "yeah, now I'm getting somewhere" after I learned something new, but there's no magic list of things you must know and do to call yourself a programmer. I know a lot of things about programming, I've used a dozen languages in lots of projects, and yet the subset of programming knowledge I can call my own is tiny. And I like that. Frankly, a programmer isn't something you are. A programmer is something you are constantly learning to be. Take an honest inventory of your skills, your strengths and weaknesses. Get feedback from people with more experience than you. Look for positions that line up pretty well with where you think you are - but don't be afraid to go for jobs that are a little outside your current mastery. If you only take jobs you already know everything about, you'll never learn at work.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154193", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57594/" ] }
154,228
I'm still a student in high school (entering 10th grade), and I have yet to take an actual computer course in school. Everything I've done so far is through books. Those books have taught me concepts such as inheritance, but how does splitting a program into multiple classes help? The books never told me. I'm asking this mainly because of a recent project. It's an arcade video game, sort of like a Flash game as some people have said (although I have no idea what a Flash game is). The thing is, it's only one class. It works perfectly fine (a little occasional lag however) with just one class. So, I'm just asking how splitting it into multiple classes would help it. This project was in Java and I am the only person working on it, for the record.
The simplest answer is that if you put everything into one class, you have to worry about everything at once when you're writing new code. This may work for small projects, but for huge applications (we're talking hundreds of thousands of lines), this quickly becomes next to impossible. To solve this issue, you break up pieces of functionality into their own classes and encapsulate all the logic. Then when you want to work on the class, you don't need to think about what else is going on in the code. You can just focus on that small piece of code. This is invaluable for working efficiently, however it's hard to appreciate without working on applications that are huge. Of course there are countless other benefits to breaking your code into smaller pieces: the code is more maintainable, more testable, more reusable, etc., but to me the biggest benefit is that it makes massive programs manageable by reducing the amount of code you need to think about at one time.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154228", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57622/" ] }
154,247
I'm one of the developers of Ruby (CRuby). We are working on the Ruby 2.0 release (planned release: 2012/Feb). Python has "PEP302: New Import Hooks" (2003): This PEP proposes to add a new set of import hooks that offer better customization of the Python import mechanism. Contrary to the current import hook, a new-style hook can be injected into the existing scheme, allowing for a finer grained control of how modules are found and how they are loaded. We are considering introducing a feature similar to PEP302 into Ruby 2.0 (CRuby 2.0). I want to make a proposal which can persuade Matz. Currently, CRuby can load scripts only from file systems in a standard way. If you have any experience with or considerations about PEP 302, please share. Examples: It's a great spec. No need to change it. It is almost good, but it has this problem... If I could go back to 2003, then I would change the spec to...
I'm the maintainer of Python's runpy module, and one of the maintainers of the current import system. While our import system is impressively flexible, I'd advise against adopting it wholesale without making a few tweaks - due to backwards compatibility concerns, there are a bunch of things that are more awkward than they would otherwise need to be. One thing that hurt with PEP 302 in Python is how long it took us to convert the core import system over to using it. For the better part of a decade, anyone doing anything complex with import hooks has been stuck implementing two pieces: one handling PEP 302 compliant loaders (such as zip imports), and a second handling the standard filesystem based import mechanism. It's only in the forthcoming 3.3 that handling PEP 302 loaders will also take care of handling modules imported through the standard filesystem import mechanism. Try not to repeat that mistake if you can possibly avoid it. PEP 420 (implemented for Python 3.3) makes some additions to the protocol to allow importers to contribute portions to namespace packages. It also fixes a naming problem in the Finder API definition (effectively replacing the misnamed "find_module" with the more accurate "find_loader"). This should hopefully all be documented more clearly in the language spec by the time 3.3rc1 rolls around in a couple of weeks' time. Another notable problem is that the approach documented specifically in PEP 302 has way too much process-global state. Don't follow us down that path - try to encapsulate the state in a more coherent object model so it's slightly easier to selectively import other modules (C extension modules are the bane of making any such encapsulation completely effective, but even some level of encapsulation can be helpful). PEP 406 (http://www.python.org/dev/peps/pep-0406/) discusses a possible backwards compatible evolution of Python's approach with improved state encapsulation. If you have an encapsulated state model from the beginning though, then you can define your APIs accordingly and avoid having importers and loaders access global state at all (instead being passed a reference to the active engine). Another missing piece in PEP 302 is the ability to ask an importer for an iterator over the modules provided by that importer (this is necessary for things like freeze utilities and automatic documentation utilities that extract docstrings). Since it's incredibly useful, you'd probably be better off standardising it from the get-go: http://docs.python.org/dev/library/pkgutil#pkgutil.iter_modules (we'll probably finally elevate this to a formally specified API in Python 3.4) And my last comment is that you should take a close look at the division of responsibility between the import system and the loader objects. In particular, consider splitting the "load_module" API into separate "init_module" and "exec_module" steps. That should allow you to minimise the degree to which loaders need to interact directly with the import state. PEP 302 and importlib are a great starting point for a more flexible import system, but there are definitely mistakes we made that are worth avoiding.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154247", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57646/" ] }
154,283
I work at a place where we buy a lot of IT projects. We are currently producing a standard for systems requirements for the requisition of future projects. In that process, we are discussing whether or not we can demand automated unit testing from our suppliers. I firmly believe that proper automated unit testing is the only way to document the quality and stability of the code. Everyone else seems to think that unit testing is an optional method that concerns the supplier alone. Thus, we will make no demands for automated unit testing, continuous testing, coverage reports, inspections of unit tests or anything of the kind. I find this policy extremely frustrating. Am I totally out of line here? Please provide me with arguments for either of the opinions.
I firmly believe that proper automated unit testing is the only way to document the quality and stability of the code. The thing is that you won't (or very rarely at least) get proper automated unit testing by forcing it on people. That's a good way to get shitty tests and drive up the cost of the projects. Personally, I would look towards some demand or SLA that involves quality, regardless of how it is accomplished. 10 years ago unit tests were infrequent at best. You don't want to handcuff your suppliers in 10 years when we have better methods to ensure quality but your outdated policy requires them to use the old way.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154283", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/18069/" ] }
154,439
When writing unit tests, is it worth spending the extra time to make the code have good quality and readability? When writing tests I often break the Law of Demeter , for faster writing and to avoid using so many variables. Technically, unit tests are not reused directly - they are strictly bound to the code so I do not see any reason for spending much time on them; they only need to be functional.
It is absolutely worth spending time writing good-quality code for unit tests: They will require maintenance like any other code. Unit tests are one of the best sources of documentation for your system, and arguably the most reliable form. They should really show: Intent: "what is the expected behaviour?". Usage: "how am I supposed to use this API?". They will require debugging like any other code. The one factor in favour of a slightly more ad-hoc approach is that your unit tests are never going to be a public API so you don't need to worry about what interfaces/etc. you are exposing.
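For example, a test written with the same care as production code reads as the documentation described above (the ShoppingCart class is illustrative; JUnit is assumed):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class ShoppingCartTest {

    // Intent: the test name states the expected behaviour.
    @Test
    public void emptyCartHasZeroTotal() {
        ShoppingCart cart = new ShoppingCart();
        assertEquals(0, cart.totalCents());
    }

    // Usage: the test body shows how the API is meant to be called.
    @Test
    public void totalIsSumOfItemPrices() {
        ShoppingCart cart = new ShoppingCart();
        cart.add("book", 1500);
        cart.add("pen", 250);
        assertEquals(1750, cart.totalCents());
    }
}

A future maintainer debugging these tests gets the same benefit from clear names and small, focused methods as with any other code.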
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154439", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57792/" ] }
154,497
I am a new developer, who just got hired at a big company. I don't know how, but I guess they are desperate. However, I am well-versed in HTML5/CSS3, though things change and new things are released and I keep up with as much as I can. But this job required me to hand-code JavaScript and know jQuery and Ajax. I have been exposed to this a bit, but I am not sure if I can hand-code JavaScript. My question is, is it necessary to memorize all there is about JavaScript, or are there a few key things that I should know how to hand-code? Because looking at JavaScript code, it seems there are lots of lines of code! Please point me in the right direction.
Today, in our work as in our life, it is more important to know how to find information than to know the information itself. I mean that a good developer is a person who is able to find documentation, to network, and to share with an open mind. I am an experienced .NET developer and, believe me, for every project I work on I have to learn new things about the language and the development environment. Our work keeps getting more intricate, so do not worry. Focus your attention on problem solving and then look for help. All the solutions already exist; we must be able to find them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154497", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/43652/" ] }
154,500
Possible Duplicate: Should I use switch statements or long if…else chains? I'm working on a small program that will conduct an Insertion Sort. A number will be inputted through the keyboard and stored in a variable I called "num." I've decided to use a switch statement in order to obtain the number inputted. switch( e.getKeyCode() ) { case KeyEvent.VK_0: num = 0; break; case KeyEvent.VK_1: num = 1; break; case KeyEvent.VK_2: num = 2; break; case KeyEvent.VK_3: num = 3; break; case KeyEvent.VK_4: num = 4; break; case KeyEvent.VK_5: num = 5; break; case KeyEvent.VK_6: num = 6; break; case KeyEvent.VK_7: num = 7; break; case KeyEvent.VK_8: num = 8; break; case KeyEvent.VK_9: num = 9; break; } I realized one other course of action could have been to use a set of if statements. if( e.getKeyCode() == KeyEvent.VK_0 ) num = 0; else if( e.getKeyCode() == KeyEvent.VK_1 ) num = 1; etc. for every number up until 9. I then wondered what the essential difference is between a switch statement and a series of if statements. I know it saves space and time to write, but it's not that much. So, my question is, aside from the space, does a switch statement differ from a series of if statements in any way? Is it faster, less error-prone, etc.? This question really doesn't affect my code that much. I was just wondering. Also, this question pertains to the Java language, not any other programming language.
You are correct that functionally they are identical. Practically speaking though, why would you type (or cut and paste) all that when a switch statement is so much more concise? The difference is not one of functionality, but of intent. When you write a switch statement, you are in effect telling the reader that this section of code could branch multiple ways based on the state of one single variable. This statement is enforced by the compiler. When you write an if block, however, you are not deliberately stating that only one variable is involved. When you write a series of ifs, as you suggest, it is much harder to see that the branching is based only on a single variable. This is a maintenance problem. Developers who have to modify your code in the future may not realize that you intended that decision to be so focused and will start adding conditions that do not use that variable. Someone will come along and say, "I want num to equal 10 if the last key sequence was 5-4-8," and tack an additional else if on the end of your block. Eventually that section will become a Gordian knot of if...else if conditions that no one can understand. The future will hate you and bad karma will be added to your account. Use a switch statement when possible. It clearly communicates that this section of code relies on a single variable and is much more future-proof.
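As a side note that is not part of the answer above: KeyEvent.VK_0 through KeyEvent.VK_9 are documented as the contiguous codes 0x30-0x39 (the same as ASCII '0' through '9'), so for this specific case the whole mapping can collapse into a single range check while keeping the "one variable decides everything" property front and centre. KeyDigits and digitFor are invented names for the sketch:

    import java.awt.event.KeyEvent;

    final class KeyDigits {
        // VK_0..VK_9 match ASCII '0'..'9' (0x30-0x39), so subtracting VK_0
        // yields the digit directly; -1 signals "not a digit key".
        static int digitFor(int keyCode) {
            if (keyCode >= KeyEvent.VK_0 && keyCode <= KeyEvent.VK_9) {
                return keyCode - KeyEvent.VK_0;
            }
            return -1;
        }
    }

Called as num = KeyDigits.digitFor(e.getKeyCode()) with a check for -1; whether that beats the switch for readability is, again, a question of which intent you want to make most obvious.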
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154500", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57622/" ] }
154,615
I cannot count the number of times I have read statements in the vein of 'unit tests are a very important source of documentation of the code under test'. I do not deny they are true. But personally I haven't found myself using them as documentation, ever. For the typical frameworks I use, the method declarations document their behaviour and that's all I need. And I assume the unit tests back up everything stated in that documentation, plus likely some more internal stuff, so on the one hand they duplicate the documentation while on the other they might add some more that is irrelevant. So the question is: when are unit tests used as documentation? When the comments do not cover everything? By developers extending the source? And what do they expose that can be useful and relevant that the documentation itself cannot expose?
They're NOT an ABSOLUTE Reference Documentation Note that a lot of the following applies to comments as well, as they can get out of sync with the code, like tests (though it's less enforceable). So in the end, the best way to understand code is to have readable working code, if at all possible; hard-wired low-level code sections or particularly tricky conditions are the exception, where additional documentation will be crucial. Tests can be incomplete: the API changed and wasn't tested, or the person who wrote the code wrote the tests for the easiest methods to test first instead of the most important methods to test, and then didn't have the time to finish. Tests can be obsolete. Tests can be short-circuited in non-obvious ways and not actually executed. BUT They're STILL a HELPFUL Documentation Complement However, when in doubt about what a particular class does, especially if it is rather lengthy, obscure and lacking comments (you know the kind...), I do quickly try to find its test class(es) and check: what they actually try to check (which gives a hint about the most important tidbits, except if the developer made the error mentioned above of only implementing the "easy" tests), and whether there are corner cases. Plus, if written in a BDD style, they give a rather good definition of the class's contract. Open your IDE (or use grep) to see only method names and tada: you have a list of behaviors. Regressions and Bugs Need Tests Too Also, it's a good practice to write tests for regressions and for bug reports: you fix something, you write a test to reproduce the case. When looking back at them, it's a good way to find the relevant bug report and all the details about an old issue, for instance. I'd say they're a good complement to real documentation, and at least a valuable resource in this regard. It's a good tool, if used properly. If you start testing early in your project, and make it a habit, it COULD become a very good reference. On an existing project with bad coding habits already stinking up the code base, handle them with care.
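A sketch of the BDD-style point (an invented JUnit example): scanning nothing but the method names already gives the list of behaviours, i.e. the class's contract, that the answer refers to.

    import org.junit.Test;

    // Invented example: bodies are omitted on purpose -- the point is that
    // the method names alone read like a specification of ShoppingBasket,
    // which is what makes the IDE outline (or a grep) useful as documentation.
    public class ShoppingBasketSpec {

        @Test public void aNewBasketIsEmpty() { /* ... */ }

        @Test public void addingAnItemIncreasesTheTotal() { /* ... */ }

        @Test public void removingTheLastItemLeavesTheBasketEmpty() { /* ... */ }

        @Test public void addingANegativeQuantityIsRejected() { /* ... */ }
    }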
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154615", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/3660/" ] }
154,676
Many larger OSS projects maintain IRC channels to discuss their usage or development. When I get stuck on using a project, having tried and failed to find information on the web, one of the ways I try to figure out what to do is to go into the IRC channel and ask. But my questions are invariably completely ignored by the people in the channel. If there was silence when I entered, there will still be silence. If there is an ongoing conversation, it carries on unperturbed. I leave the channel open for a few hours, hoping that maybe someone will eventually engage me, but nothing happens. So I worry that I'm being rude in some way I don't understand, or breaking some unspoken rule and being ignored for it. I try to make my questions polite, to the point, and grammatical, and try to indicate that I've tried the obvious solutions and why they didn't work. I understand that I'm obviously a complete stranger to the people on the channel, but I'm not sure how to fix this. Should I just lurk in the channel, saying nothing, for a week? That seems absurd too. A typical message I send might be "Hello all - I've been trying to get Foo to work, but I keep on getting a BarException. I tried resetting the Quux, but this doesn't seem to do anything. Does anyone have a suggestion on what I could try?"
Rule #1: Don't ask to ask. Rule #2: Behave as you would in a real-life conversation. Rule #3: Be patient. If there is no activity, it usually means that no one has read what you wrote yet. If no one responds, they don't know or didn't notice. You can retry after a while, or ask if anyone has any clue regarding your question from x minutes ago. Also, sometimes IRC is not the best way to get help. You could ask if there is a more active forum, like a mailing list, that you can try.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154676", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57899/" ] }
154,679
Code formatting matters. Even indentation matters. And consistency is more important than minor improvements. But projects usually don't have a clear, complete, verifiable and enforced style guide from day 1, and major improvements may arrive any day. Maybe you find that SELECT id, name, address FROM persons JOIN addresses ON persons.id = addresses.person_id; could be better written as / is better written than SELECT persons.id, persons.name, addresses.address FROM persons JOIN addresses ON persons.id = addresses.person_id; while working on adding more columns to the query. Maybe this is the most complex of all four queries in your code, or a trivial query among thousands. No matter how difficult the transition, you decide it's worth it. But how do you track code changes across major formatting changes? You could just give up and say "this is the point where we start again", or you could reformat all queries in the entire repository history. If you're using a distributed version control system like Git, you can revert to the first commit ever and reformat your way from there to the current state. But it's a lot of work, and everyone else would have to pause work (or be prepared for the mother of all merges) while it's going on. Is there a better way to change history which gives the best of both results: the same style in all commits, and minimal merge work? To clarify, this is not about best practices when starting the project, but rather about what should be done when a large refactoring has been deemed a Good Thing™ but you still want a traceable history. Never rewriting history is great if it's the only way to ensure that your versions always work the same, but what about the developer benefits of a clean rewrite? Especially if you have ways (tests, syntax definitions or an identical binary after compilation) to ensure that the rewritten version works exactly the same way as the original?
Do the reformatting as separate commits. This will interfere minimally with the history, and you should be able to see at a glance which commits are just reformatting and which actually change code. It could skew git blame and similar tools, but if they point to a reformat-only commit, it's fairly straightforward to look for the change just before that one.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154679", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13162/" ] }
154,723
The Single Responsibility Principle is based on the high cohesion principle. The difference between the two is that a highly cohesive class features a set of responsibilities that are strongly related, while a class adhering to SRP has just one responsibility. But how do we determine whether a particular class features a set of responsibilities and is thus just highly cohesive, or whether it has only one responsibility and thus adheres to SRP? In other words, isn't it more or less subjective, since some may consider a class very granular (and as such will believe the class adheres to SRP), while others may consider it not granular enough?
Why yes it is very subjective, and it is the subject of many heated, red-faced debates programmers get into. There's not really any one answer, and the answer may change as your software becomes more complex. What was once a single well-defined task may eventually become multiple poorly-defined tasks. That's always the rub too. How do you choose the proper way to divide a program up into tasks? About the only advice I can give is this: use your (and your coworkers') best judgement. And remember that mistakes can (usually) be corrected if you catch them soon enough.
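To make the subjectivity concrete, here is a small invented Java sketch (none of it comes from the answer): depending on who you ask, this class has one responsibility ("deliver the monthly report") or three (querying, formatting, delivery), and that disagreement is exactly the judgement call described above.

    import java.util.List;

    // Hypothetical sketch -- all names are invented for illustration.
    interface SalesRepository { List<String> findSales(int year, int month); }
    interface ReportMailer   { void send(String recipient, String body); }

    class MonthlyReportService {
        private final SalesRepository repository;
        private final ReportMailer mailer;

        MonthlyReportService(SalesRepository repository, ReportMailer mailer) {
            this.repository = repository;
            this.mailer = mailer;
        }

        // One task ("deliver the report")?  Or three tasks that can each
        // change for a different reason?
        void deliverReport(int year, int month) {
            List<String> sales = repository.findSales(year, month);          // querying
            StringBuilder html = new StringBuilder("<ul>");                   // formatting
            for (String sale : sales) {
                html.append("<li>").append(sale).append("</li>");
            }
            mailer.send("[email protected]", html.append("</ul>").toString()); // delivery
        }
    }

If the formatting or the delivery mechanism starts changing for its own reasons, that is usually the signal that the "one task" has quietly become several.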
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154723", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58499/" ] }
154,733
In one of the latest "WTF" moves, my boss decided that adding a "Person To Blame" field to our bug tracking template will increase accountability (although we already have a way of tying bugs to features/stories). My arguments that this will decrease morale, increase finger-pointing and would not account for missing/misunderstood features reported as bug have gone unheard. What are some other strong arguments against this practice that I can use? Is there any writing on this topic that I can share with the team and the boss?
Tell them this is only an amateurish name for the Root Cause field used by professionals (when the issue tracker does not have a dedicated field, one can use comments for that). Search the web for something like software bug root cause analysis; there are plenty of resources to justify this reasoning (1, 2, 3, 4, ...). ...a root cause for a defect is not always a single developer (which is the main point of this field)... That's exactly why "root cause" is professional while "person to blame" is amateurish. Personal accountability is great, but there are cases when it simply lies "outside" of the dev team. Tell your boss that when there is a single developer to blame, the root cause field will definitely cover that ("coding mistake made by Bob in commit 1234, missed by Jim in review 567"). The point of using the term root cause is to cover cases like that, along with cases that go outside the scope of the dev team. For example, if the bug has been caused by faulty hardware (with the person to blame being someone outside of the team who purchased and tested it), the root cause field allows for covering that, while "single developer to blame" would simply break the issue tracking flow. The same applies to other bugs caused by someone outside of the dev team - tester errors, requirements changes, and management decisions. Say, if management decides to skip investing in disaster recovery hardware, "blaming a single developer" for an electricity outage in the datacenter would just not make sense.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154733", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57919/" ] }
154,862
In my few years of programming, I've toyed with everything from Ruby to C++. I've done everything from just learning basic syntax (Ruby) to completing several major (for me) projects that stretched my abilities with the language. Given this variety (and the fact that truly learning a language never stops), when can I say I know (or have learned) a language?
At what point can you say that you've "learned" a language like English or French? I think most people would accept that you've learned a language when you can use it to communicate. You don't have to know every word or understand every idiom, but you should know most of the (rather small) subset of the language that people use in daily conversation. Having "learned" a language to that extent, you can continue to learn more as you use the language daily and expose yourself to more and more of it. By the time you die, you still won't know all there is to know about the language, but nobody will say that you never "learned" the language. So it is, I think, with computer languages. You can say that you "know" a language like C++ or Ruby when you can write useful programs in it. If you need to quantify the degree to which you know the language, use the same terms that you'd use for a human language. Say "I know a little C++" or "I know conversational C++" or "I'm fluent in C++" to mean that you know just enough to scrape by, enough to get useful things done, or that you're an expert, respectively.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154862", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54164/" ] }
154,974
I really hesitate to ask this, because I don't want to "solicit debate, arguments, polling, or extended discussion", but I'm new to C and want to gain more insight into common patterns used in the language. I recently heard some distaste for the goto command, but I've also recently found a decent use-case for it. Code like this: error = function_that_could_fail_1(); if (!error) { error = function_that_could_fail_2(); if (!error) { error = function_that_could_fail_3(); ...to the n-th tab level! } else { // deal with error, clean up, and return error code } } else { // deal with error, clean up, and return error code } If the clean-up part is all very similar, it could be written a little prettier (in my opinion) like this: error = function_that_could_fail_1(); if(error) { goto cleanup; } error = function_that_could_fail_2(); if(error) { goto cleanup; } error = function_that_could_fail_3(); if(error) { goto cleanup; } ... cleanup: // deal with error if it exists, clean up // return error code Is this a common or acceptable use-case of goto in C? Is there a different/better way to do this?
The goto statement (and its corresponding labels) are a flow control primitive (along with conditional execution of a statement). By that, I mean that they are there to allow you to construct program flow control networks. You can think of them as modeling the arrows between the nodes of a flowchart. Some of these can be optimized out immediately, where there is a direct linear flow (you just use a sequence of basic statements). Other patterns are best replaced with structured programming constructs where these are available; if it looks like a while loop, use a while loop , OK? The structured programming patterns are definitely at least potentially clearer of intent than a mess of goto statements. Yet C does not include all possible structured programming constructs. (It's not clear to me that all relevant ones have been discovered yet; the rate of discovery is slow now, but I'd hesitate to jump to saying that all have been found.) Of the ones we know about, C definitely lacks the try / catch / finally structure (and exceptions too). It also lacks multi-level break -from-loop. These are the kinds of things which a goto can be used to implement. It's possible to use other schemes to do these too — we do know that C has a sufficient set of non- goto primitives — but these often involve creating flag variables and much more complex loop or guard conditions; increasing the entanglement of the control analysis with the data analysis makes the program harder to understand overall. It also makes it more difficult for the compiler to optimize and for the CPU to execute rapidly (most flow control constructs — and definitely goto — are very cheap). Thus, while you shouldn't use goto unless needed, you should be aware that it exists and that it may be needed, and if you need it, you shouldn't feel too bad. An example of a case where it is needed is resource deallocation when a called function returns an error condition. (That is, try / finally .) It's possible to write that without goto but doing that can have downsides of its own, such as the problems of maintaining it. An example of the case: int frobnicateTheThings() { char *workingBuffer = malloc(...); int i; for (i=0 ; i<numberOfThings ; i++) { if (giveMeThing(i, workingBuffer) != OK) goto error; if (processThing(workingBuffer) != OK) goto error; if (dispatchThing(i, workingBuffer) != OK) goto error; } free(workingBuffer); return OK; error: free(workingBuffer); return OOPS; } The code could be even shorter, but it's enough to demonstrate the point.
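Not part of the answer above, but for readers coming from languages that do have try/finally, this is roughly the shape the goto-to-cleanup idiom corresponds to. A Java sketch with invented stand-in names (Resource, giveMeThing, and so on) mirroring the C example:

    // Sketch only: the cleanup that the C version reaches via `goto error`
    // is expressed with try/finally instead. All names are hypothetical.
    class Frobnicator {
        static final int OK = 0, OOPS = -1;

        int frobnicateTheThings(int numberOfThings) {
            Resource workingBuffer = Resource.acquire();
            try {
                for (int i = 0; i < numberOfThings; i++) {
                    if (giveMeThing(i, workingBuffer) != OK) return OOPS;
                    if (processThing(workingBuffer) != OK) return OOPS;
                    if (dispatchThing(i, workingBuffer) != OK) return OOPS;
                }
                return OK;
            } finally {
                workingBuffer.release(); // runs on every exit path, like the cleanup label
            }
        }

        // Stubs so the sketch compiles; the real work is beside the point here.
        static class Resource {
            static Resource acquire() { return new Resource(); }
            void release() { }
        }
        int giveMeThing(int i, Resource r) { return OK; }
        int processThing(Resource r) { return OK; }
        int dispatchThing(int i, Resource r) { return OK; }
    }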
{ "source": [ "https://softwareengineering.stackexchange.com/questions/154974", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50064/" ] }
155,131
In the Java world it sometimes seems to be a problem, but what about C++? Are there different solutions? I was thinking about the fact that someone could replace the C++ library of a specific OS with a different version of the same library, one full of debug symbols, to understand what my code does. Is it a good thing to use standard or popular libraries? This can also happen with some DLL under Windows being replaced with a "debug version" of that library. Is it better to prefer static compilation? In commercial applications, I see that they compile the core of their app statically, and for the most part DLLs (dynamic libraries in general) are used to offer some third-party technologies like anti-piracy solutions (I see this in many games), GUI libraries (like Qt), OS libraries, etc. Is static compilation the equivalent of obfuscation in the Java world? In other words, is it the best and most affordable solution to protect your code?
Don't Waste Your Time on Losing Battles As noted in many other similar answers for C++ and other languages, this is mostly useless. Further Reading Selected reading on the topic (not all of it is C++ specific, but the general principles apply): StackExchange Answers The Case for Code Obfuscation? What Are the Advantages of Obfuscating Release Code? Decompilers - Myths or Facts? Can JS Code be Encrypted, Making it Hard for Someone to Copy? Obfuscate C/C++ Code? (for tools for C++ obfuscation, if you really must...) Papers Code (De)Obfuscation (Madou, Anckaert, De Bosschere) On the Effectiveness of Source Code Transformations for Binary Obfuscation (Madou & al.) The Effectiveness of Source Code Obfuscation: an Experimental Assessment (Ceccato & al., 2009) [PDF] The Quality of Obfuscation and Obfuscation Techniques (Witkowska, 2006) Famous Quotes on Obfuscation: Then finally, there is that question of code privacy. This is a lost cause. There is no transformation that will keep a determined hacker from understanding your program. This turns out to be true for all programs in all languages, it is just more obviously true with JavaScript because it is delivered in source form. The privacy benefit provided by obfuscation is an illusion. If you don't want people to see your programs, unplug your server. - Douglas Crockford Never? I'm not saying you should never obfuscate or that there aren't any good reasons for it, but I seriously question the need for it in most cases, and its cost-effectiveness. Furthermore, there are situations where obfuscation is a de facto requirement. For instance, if you write viruses, obviously obfuscation (and a dynamic one, preferably) is as good a thing for your program's survival as its ability to replicate. However, this hardly constitutes a "common" case...
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155131", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57405/" ] }
155,158
I'm a young programmer who desires to work in the field someday as a programmer. I know Java, VB.NET and C#. I want to learn a new language (as a programmer, I know that it is valuable to extend what I know - to learn languages that make you think differently). I took a look online to see what languages were common. Everybody knows C and C++ (even those muggles who know so little about computers in general), so I thought, maybe I should push for C. C and C++ are nice but they are old. Things like Haskell and Forth (etc. etc. etc.) are old and have lost their popularity. I'm scared of learning C (or even C++) for this same reason. Java is pretty old as well and is slow because it's run by the JVM and not compiled to native code. I've been a Windows developer for quite a while. I recently started using Java - but only because it was more versatile and spreadable to other places. The problem is that it doesn't look like a very usable language for these reasons: Its main use is for web applications and cellphone apps (specifically Android). As far as actual products made with it, the only things that come to mind are NetBeans, Eclipse (hurrah for making an IDE with the language the IDE is for - it's like making a webpage for writing HTML/CSS/JavaScript), and Minecraft, which happens to be fun but laggy and bipolar as far as computer spec. support. Other than that it's used for servers but heck - I don't only want to make/configure servers. The .NET languages are nice, however: People laugh if I even mention VB.NET or C# in a serious conversation. It isn't cross-platform unless you use MONO (which is still in development and has some improvements to be made). Lacks low level stuff because, like Java with the JVM, it is run/managed by the CLR. My first thought was learning something like C and then using it to springboard into C++ (just to make sure I would have a strong understanding/base), but like I said earlier, it's getting older and older by the minute. What I've Looked Into Fantom looks nice. It's like a nice middleman between my two favorite languages and even lets me publish between the two interchangeably, but, unlike what I want, it compiles to the CLR or JVM (depending on what you publish it to) instead of it being a complete compile. D also looks nice. It seems like a very usable language and from multiple sources it appears to actually be better than C/C++. I would jump right in with it, but I'm still unsure of its success because it obviously isn't very mainstream at this point. There are a couple of others that looked pretty nice that focused on other things, such as Opa with web development and Go by Google. My Question Is it worth learning these "experimental" languages? I've read other questions that say that if you aren't constantly learning languages and open to all languages, you aren't in the right mindset for programming. I understand this and I still might not quite be getting it, but in truth, if a language isn't going to become mainstream, should I spend my time learning something else? I don't want to learn old (or soon-to-be-old) programming languages. I know that many people see this as something important, but would any of you ever actually consider (assuming you didn't already know) FORTRAN? My goal is to stay current to make sure I'm successful in the future. Disclaimer Yes, I am a young programmer, so I probably made a lot of naive statements in my question. Feel free to correct me on ANYTHING!
I have to start learning somewhere, so I'm sure a lot of my knowledge is sketchy enough to have led to incorrect statements or flaws in my thinking. Please leave any feelings you have in the comments. The Results... I'm truly amazed by the responses, most of them so nicely pointing out my misunderstandings and misjudgments. I've learned quite a lot from this and I'm excited to hopefully utilize everything I've learned! I'll probably start learning Haskell next (the not-so-old language, albeit over 20 years old - hahaha) and then start looking at some other options around me. Thanks
The thing with learning drastically different languages isn't about learning the languages, it's about getting exposure to different approaches to problems. Tools for the toolbox as it were. One thing to note is that Haskell isn't particularly old and it is actually a very good candidate for someone only familiar with mainstream languages. Even a very old language like Lisp is very useful to learn due to its influence on things. Also, Java and .Net are not interpreted and I expect you're making some incorrect assumptions based on that mislabeling.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155158", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/32146/" ] }
155,176
I just started a job where I'm writing Python after coming from a Java background, and I'm noticing that other developers tend to quote strings using single quotes ( '' ) instead of double quotes ( "" ). For example: line1 = 'This is how strings typically look.' line2 = "Not like this." Is there a particular reason for this other than personal preference? Is this the proper way to be quoting strings? Specifically, what I want to know is if there is some type of standard or accepted best practice that drives this style of coding.
The other answers are correct in that it makes no technical difference, but I have seen one informal style rule on a couple of open-source projects: double quotes are used for strings that might eventually be visible to the user (whether or not they need translation), and single quotes are for strings that relate to the functionality of the code itself (e.g. dict keys, regular expressions, SQL). This is certainly not a universal rule (or even codified in a PEP), so like any other arbitrary aspect of coding it comes down to local rules. Note that PEP 8 (which I hadn't noticed when I wrote this answer) says: This PEP does not make a recommendation for this. Pick a rule and stick to it. When a string contains single or double quote characters, however, use the other one to avoid backslashes in the string. It improves readability. As a commenter points out, this isn't necessarily contradictory, depending on how you interpret "rule". What I suggest doesn't really work with the second half of the quote though.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155176", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/28556/" ] }
155,239
Recently, I had to understand the design of a small program written in a language I had no idea about (ABAP, if you must know). I could figure it out without too much difficulty. I realize that mastering a new language is a completely different ball game, but purely understanding the intent of code (specifically production-standard code, which is not necessarily complex) in any language is straightforward if you already know a couple of languages (preferably one procedural/OO and one functional). Is this generally true? Are all programming languages made up of similar constructs like loops, conditional statements and message passing between functions? Are there non-esoteric languages that a typical Java/Ruby/Haskell programmer would not be able to make sense of? Do all languages have a common origin?
The basics of most procedural languages are pretty much the same. They offer: scalar data types (usually booleans, integers, floats and characters); compound data types (arrays, with strings as a special case, and structures); basic code constructs (arithmetic over scalars, array/structure access, assignments); simple control structures (if-then, if-then-else, while, for loops); packages of code blocks (functions, procedures with parameters); and scopes (areas in which identifiers have specific meanings). If you understand this, you have a good grasp of 90% of the languages on the planet. What makes these languages slightly more difficult to understand is the incredible variety of odd syntax that people use to say the same basic things. Some use terse notation involving odd punctuation (APL being an extreme). Some use lots of keywords (COBOL being an excellent representative). That doesn't matter much. What does matter is whether the language is complete enough by itself to do complex tasks without causing you to tear your hair out. (Try coding some serious string hacking in Windows DOS shell script: it is Turing capable but really bad at everything.) More interesting procedural languages offer: nested or lexical scopes and namespaces; pointers allowing one entity to refer to another, with dynamic storage allocation; packaging of related code (packages, objects with methods, traits); more sophisticated control (recursion, continuations, closures); and specialized operators (string and array operations, math functions). While not technically a property of the language, but a property of the ecosystem in which such languages live, the libraries that are easily accessible or provided with the language as part of the development tool also matter. Having a wide range of library facilities simplifies and speeds up writing applications, simply because one doesn't have to reinvent what the libraries do. While Java and C# are widely thought to be good languages in and of themselves, what makes them truly useful are the huge libraries that come with them, and the easily obtainable extension libraries. The languages which are harder to understand are the non-procedural ones: purely functional languages, with no assignments or side effects; logic languages, such as Prolog, in which symbolic computation and unification occur; pattern matching languages, in which you specify shapes that are matched to the problem, and actions are often triggered by a match; constraint languages, which let you specify relations and automatically solve equations; hardware description languages, in which everything executes in parallel; and domain-specific languages, such as SQL, Colored Petri Nets, etc. There are two major representational styles for languages: text based, in which identifiers name entities and information flows are encoded implicitly in formulas that use the identifiers to name the entities (Java, APL, ...); and graphical, in which entities are drawn as nodes, and relations between entities are drawn as explicit arcs between those nodes (UML, Simulink, LabVIEW). The graphical languages often allow textual sublanguages as annotations in nodes and on arcs. Odder graphical languages recursively allow graphs (with text) in nodes and on arcs. Really odd graphical languages allow annotation graphs to point to graphs being annotated.
Most of these languages are based on a very small number of models of computation: The lambda calculus (basis for Lisp and all functional languages) Post systems (or string/tree/graph rewriting techniques) Turing machines (state modification and selection of new memory cells) Given the focus by most of industry on procedural languages and complex control structures, you are well served if you learn one of the more interesting languages in this category well, especially if it includes some type of object-orientation. I highly recommend learning Scheme, in particular from a really wonderful book: Structure and Interpretation of Computer Programs . This describes all these basic concepts. If you know this stuff, other languages will seem pretty straightforward except for goofy syntax.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155239", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
155,406
I have come across this programming idiom recently: const float Zero = 0.0; which is then used in comparisons: if (x > Zero) {..} Can anyone explain if this is really any more efficient, readable or maintainable than: if (x > 0.0) {..} NOTE: I can think of other reasons to define this constant; I'm just wondering about its use in this context.
Possible reasons are caching, naming or forcing type. Caching (not applicable) You want to avoid the cost of creating an object during the act of comparison. In Java an example would be BigDecimal zero = new BigDecimal("0.0"); this involves a fairly heavy creation process and is better served by the provided constant: BigDecimal zero = BigDecimal.ZERO; This allows comparisons without incurring a repeated cost of creation, since the constant is created once when the BigDecimal class is initialised. In the case of what you have described, a primitive is performing the same job, so this is largely redundant in terms of caching and performance. Naming (unlikely) The original developer is attempting to provide a uniform naming convention for common values throughout the system. This has some merit, especially with uncommon values, but something as basic as zero is only worth it in the caching case described earlier. Forcing type (most likely) The original developer is attempting to force a particular primitive type to ensure that comparisons are cast to their correct type and possibly to a particular scale (number of decimal places). This is OK, but the simple name "zero" probably provides insufficient detail for this use case, with ZERO_1DP being a more appropriate expression of the intent.
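A short sketch of the caching point, extending the BigDecimal case the answer already mentions (the class and method names are invented for illustration):

    import java.math.BigDecimal;

    public class ZeroComparisons {

        // Reuses the constant: no new object per call, and the value has one name.
        static boolean isCredit(BigDecimal amount) {
            return amount.compareTo(BigDecimal.ZERO) > 0;
        }

        // Works the same, but builds a fresh BigDecimal on every comparison.
        static boolean isCreditWasteful(BigDecimal amount) {
            return amount.compareTo(new BigDecimal("0.0")) > 0;
        }

        public static void main(String[] args) {
            System.out.println(isCredit(new BigDecimal("12.50"))); // true
            System.out.println(isCredit(new BigDecimal("-3")));    // false
        }
    }

For a primitive float, as in the question, there is nothing comparable to cache, which is why the answer calls this reason "not applicable" there.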
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155406", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/25446/" ] }
155,467
I'm starting a new Java project which will require a RESTful API. It will be a SaaS business application serving mobile clients. I have developed one project with Java EE 6, but I'm not very familiar with the ecosystem, since most of my experience is on the Microsoft platform. Which would be a sensible choice for a JAX-RS implementation for a new project such as described? Judging by Wikipedia's list, the main contenders seem to be Jersey, Apache CXF, RESTeasy and Restlet. But the Comparison of JAX-RS Implementations cited on Wikipedia is from 2008. My first impressions from their respective homepages are that: CXF aims to be a very comprehensive solution (reminds me of WCF in the Microsoft space), which makes me think it can be more complex to understand, set up and debug than what I need; Jersey is the reference implementation and might be a good choice, but it's legacy from Sun and I'm not sure how Oracle is treating it (the announcements page doesn't work and the last commit notice is from 4 months ago); RESTeasy is from JBoss and probably a solid option, though I'm not sure about the learning curve; Restlet seems to be popular but has a lot of history, and I'm not sure how up to date it is in the Java EE 6 world or if it carries a heavy J2EE mindset (like lots of XML configuration). What would be the merits of each of these alternatives? What about learning curve? Feature support? Tooling (e.g. NetBeans or Eclipse wizards)? What about ease of debugging and also deployment? Are any of these projects more up to date than the others? How stable are they?
I've grown to love Dropwizard for an overall solution Rather than go with some huge application container approach, Dropwizard advocates a lightweight solution that offers much faster development cycles. Essentially, it provides the glue for the following well-known frameworks: Jetty (HTTP) Jersey (JAX-RS) Jackson (JSON or XML) Guava (excellent additions to JDK libraries) Metrics (real time application monitoring) Hibernate Validator (input verification) OAuth (RESTful authentication) The combination of the above, coupled with a solid approach to functional testing, gives a complete solution to getting your service up and running quickly. Yeah? And the JAX-RS question I asked... You'll notice that their choice was Jersey, the reference JAX-RS implementation. Being a RESTEasy guy I thought this would be a problem, but there was zero learning curve. The two are largely interchangeable. However, I would say that the Jersey client offered a fluent interface for constructing tests. An example would be... @Override protected void setUpResources() { addResource(new HelloWorldResource("Hello, %s!","Stranger")); setUpAuthenticator(); } @Test public void simpleResourceTest() throws Exception { Saying expectedSaying = new Saying(1,"Hello, Stranger!"); Saying actualSaying = client() .resource("/hello-world") .get(Saying.class); assertEquals("GET hello-world returns a default",expectedSaying.getContent(),actualSaying.getContent()); }
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155467", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/34188/" ] }
155,488
I hope this isn't too general of a question; I could really use some seasoned advice. I am newly employed as the sole "SW Engineer" in a fairly small shop of scientists who have spent the last 10-20 years cobbling together a vast code base. (It was written in a virtually obsolete language: G2 -- think Pascal with graphics). The program itself is a physical model of a complex chemical processing plant; the team that wrote it has incredibly deep domain knowledge but little or no formal training in programming fundamentals. They've recently learned some hard lessons about the consequences of non-existent configuration management. Their maintenance efforts are also greatly hampered by the vast accumulation of undocumented "sludge" in the code itself. I will spare you the "politics" of the situation (there's always politics!), but suffice it to say, there is not a consensus of opinion about what is needed for the path ahead. They have asked me to begin presenting to the team some of the principles of modern software development. They want me to introduce some of the industry-standard practices and strategies regarding coding conventions, lifecycle management, high-level design patterns, and source control. Frankly, it's a fairly daunting task and I'm not sure where to begin. Initially, I'm inclined to tutor them in some of the central concepts of The Pragmatic Programmer, or Fowler's Refactoring ("Code Smells", etc). I also hope to introduce a number of Agile methodologies. But ultimately, to be effective, I think I'm going to need to home in on 5-7 core fundamentals; in other words, what are the most important principles or practices that they can realistically start implementing that will give them the most "bang for the buck". So that's my question: What would you include in your list of the most effective strategies to help straighten out the spaghetti (and prevent it in the future)?
Foreword This is a daunting task indeed, and there's a lot of ground to cover. So I'm humbly suggesting this as a somewhat comprehensive guide for your team, with pointers to appropriate tools and educational material. Remember: These are guidelines, and as such they are meant to be adopted, adapted, or dropped based on circumstances. Beware: Dumping all this on a team at once would most likely fail. You should try to cherry-pick elements that would give you the best bang-for-sweat, and introduce them slowly, one at a time. Note: not all of this applies directly to Visual Programming Systems like G2. For more specific details on how to deal with these, see the Addendum section at the end. Executive Summary for the Impatient Define a rigid project structure, with: project templates, coding conventions, familiar build systems, and sets of usage guidelines for your infrastructure and tools. Install a good SCM and make sure they know how to use it. Point them to good IDEs for their technology, and make sure they know how to use them. Implement code quality checkers and automatic reporting in the build system. Couple the build system to continuous integration and continuous inspection systems. With the help of the above, identify code quality "hotspots" and refactor. Now for the long version... Caution, brace yourselves! Rigidity is (Often) Good This is a controversial opinion, as rigidity is often seen as a force working against you. It's true for some phases of some projects. But once you see it as a structural support, a framework that takes away the guesswork, it greatly reduces the amount of wasted time and effort. Make it work for you, not against you. Rigidity = Process / Procedure. Software development needs good process and procedures for exactly the same reasons that chemical plants or factories have manuals, procedures, drills and emergency guidelines: preventing bad outcomes, increasing predictability, maximizing productivity... Rigidity comes in moderation, though! Rigidity of the Project Structure If each project comes with its own structure, you (and newcomers) are lost and need to pick things up from scratch every time you open them. You don't want this in a professional software shop, and you don't want this in a lab either. Rigidity of the Build Systems If each project looks different, there's a good chance they also build differently. A build shouldn't require too much research or too much guesswork. You want to be able to do the canonical thing and not need to worry about specifics: configure; make install, ant, mvn install, etc... Re-using the same build system and making it evolve over time also ensures a consistent level of quality. You do need a quick README to point out the project's specifics, and gracefully guide the user/developer/researcher, if any. This also greatly facilitates other parts of your build infrastructure, namely: continuous integration and continuous inspection. So keep your build (like your projects) up to date, but make it stricter over time, and more efficient at reporting violations and bad practices. Do not reinvent the wheel, and reuse what you have already done.
Recommended Reading: Continuous Integration: Improving Software Quality and Reducing Risk (Duvall, Matyas, Glover, 2007) Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation (Humble, Farley, 2010) Rigidity in the Choice of Programming Languages You can't expect, especially in a research environment, to have all teams (much less all developers) use the same language and technology stack. However, you can identify a set of "officially supported" tools, and encourage their use. The rest, without a good rationale, shouldn't be permitted (beyond prototyping). Keep your tech stack simple, and the maintenance and breadth of required skills to a bare minimum: a strong core. Rigidity of the Coding Conventions and Guidelines Coding conventions and guidelines are what allow you to develop both an identity as a team, and a shared lingo. You don't want to err into terra incognita every time you open a source file. Nonsensical rules that make life harder or forbid actions explicitly, to the extent that commits are refused based on single simple violations, are a burden. However, a well thought-out ground ruleset, which nobody should break under any circumstances, takes away a lot of the whining and thinking; and a set of recommended rules provides additional guidance. Personal Approach: I am aggressive when it comes to coding conventions, some even say nazi, because I do believe in having a lingua franca, a recognizable style for my team. When crap code gets checked in, it stands out like a cold sore on the face of a Hollywood star: it triggers a review and an action automatically. In fact, I've sometimes gone as far as to advocate the use of pre-commit hooks to reject non-conforming commits. As mentioned, it shouldn't be overly crazy and get in the way of productivity: it should drive it. Introduce these slowly, especially at the beginning. But it's far preferable to spending so much time fixing faulty code that you can't work on real issues. Some languages even enforce this by design: Java was meant to reduce the amount of dull crap you can write with it (though no doubt many manage to do it). Python's block structure by indentation is another idea in this sense. Go comes with its gofmt tool, which completely takes away any debate and effort (and ego!!) inherent to style: run gofmt before you commit. Make sure that code rot cannot slip through. Code conventions, continuous integration and continuous inspection, pair programming and code reviews are your arsenal against this demon. Plus, as you'll see below, code is documentation, and that's another area where conventions encourage readability and clarity. Rigidity of the Documentation Documentation goes hand in hand with code. Code itself is documentation. But there must be clear-cut instructions on how to build, use, and maintain things. Using a single point of control for documentation (like a WikiWiki or DMS) is a good thing. Create spaces for projects, and spaces for more random banter and experimentation. Have all spaces reuse common rules and conventions. Try to make it part of the team spirit. Most of the advice applying to code and tooling also applies to documentation. Rigidity in Code Comments Code comments, as mentioned above, are also documentation. Developers like to express their feelings about their code (mostly pride and frustration, if you ask me).
So it's not unusual for them to express these in no uncertain terms in comments (or even code), when a more formal piece of text could have conveyed the same meaning with less expletives or drama. It's OK to let a few slip through for fun and historical reasons: it's also part of developing a team culture . But it's very important that everybody knows what is acceptable and what isn't, and that comment noise is just that: noise . Rigidity in Commit Logs Commit logs are not an annoying and useless "step" of your SCM's lifecycle: you DON'T skip it to get home on time or get on with the next task, or to catch up with the buddies who left for lunch. They matter, and, like (most) good wine, the more time passes the more valuable they become. So DO them right. I'm flabbergasted when I see co-workers writing one-liners for giant commits, or for non-obvious hacks. Commits are done for a reason, and that reason ISN'T always clearly expressed by your code and the one line of commit log you entered. There's more to it than that. Each line of code has a story , and a history . The diffs can tell its history, but you have to write its story. Why did I update this line? -> Because the interface changed. Why did the interface changed? -> Because the library L1 defining it was updated. Why was the library updated? -> Because library L2, that we need for feature F, depended on library L1. And what's feature X? -> See task 3456 in issue tracker. It's not my SCM choice, and may not be the best one for your lab either; but Git gets this right, and tries to force you to write good logs more than most other SCMs systems, by using short logs and long logs . Link the task ID (yes, you need one) and a leave a generic summary for the shortlog , and expand in the long log: write the changeset's story . It is a log: It's here to keep track and record updates. Rule of Thumb: If you were searching for something about this change later, is your log likely to answer your question? Projects, Documentation and Code Are ALIVE Keep them in sync, otherwise they do not form that symbiotic entity anymore. It works wonders when you have: clear commits logs in your SCM, w/ links to task IDs in your issue tracker, where this tracker's tickets themselves link to the changesets in your SCM (and possibly to the builds in your CI system), and a documentation system that links to all of these. Code and documentation need to be cohesive . Rigidity in Testing Rules of Thumb: Any new code shall come with (at least) unit tests. Any refactored legacy code shall come with unit tests. Of course, these need: to actually test something valuable (or they are a waste of time and energy), to be well written and commented (just like any other code you check in). They are documentation as well, and they help to outline the contract of your code. Especially if you use TDD . Even if you don't, you need them for your peace of mind. They are your safety net when you incorporate new code (maintenance or feature) and your watchtower to guard against code rot and environmental failures. Of course, you should go further and have integration tests , and regression tests for each reproducible bug you fix. Rigidity in the Use of the Tools It's OK for the occasional developer/scientist to want to try some new static checker on the source, generate a graph or model using another, or implement a new module using a DSL. But it's best if there's a canonical set of tools that all team members are expected to know and use. 
Beyond that, let members use what they want, as long as they are ALL: productive; NOT regularly requiring assistance; NOT regularly requiring adjustments to your general infrastructure; NOT disrupting your infrastructure (by modifying common areas like code, build system, documentation...); NOT affecting others' work; and ABLE to perform any requested task in a timely manner. If that's not the case, then enforce that they fall back to the defaults. Rigidity vs Versatility, Adaptability, Prototyping and Emergencies Flexibility can be good. Letting someone occasionally use a hack, a quick-n-dirty approach, or a favorite pet tool to get the job done is fine. NEVER let it become a habit, and NEVER let this code become the actual codebase to support. Team Spirit Matters Develop a Sense of Pride in Your Codebase Develop a sense of pride in code: use wallboards (a leader board for a continuous integration game, wallboards for issue management and defect counting) and use an issue tracker / bug tracker. Avoid Blame Games DO use Continuous Integration / Continuous Inspection games: they foster good-mannered and productive competition. DO keep track of defects: it's just good house-keeping. DO identify root causes: it's just future-proofing processes. BUT DO NOT assign blame: it's counterproductive. It's About the Code, Not About the Developers Make developers conscious of the quality of their code, BUT make them see the code as a detached entity and not an extension of themselves, which cannot be criticized. It's a paradox: you need to encourage ego-less programming for a healthy workplace but to rely on ego for motivational purposes. From Scientist to Programmer People who do not value and take pride in code do not produce good code. For this property to emerge, they need to discover how valuable and fun it can be. Sheer professionalism and the desire to do good are not enough: it needs passion. So you need to turn your scientists into programmers (in the large sense). Someone argued in comments that after 10 to 20 years on a project and its code, anyone would feel attachment. Maybe I'm wrong, but I assume they're proud of the code's outcomes and of the work and its legacy, not of the code itself or of the act of writing it. From experience, most researchers regard coding as a necessity, or at best as a fun distraction. They just want it to work. The ones who are already pretty versed in it and who have an interest in programming are a lot easier to persuade to adopt best practices and switch technologies. You need to get them halfway there. Code Maintenance is Part of Research Work Nobody reads crappy research papers. That's why they are peer-reviewed, proof-read, refined, rewritten, and approved time and time again until deemed ready for publication. The same applies to a thesis and a codebase! Make it clear that constant refactoring and refreshing of a codebase prevents code rot and reduces technical debt, and facilitates future re-use and adaptation of the work for other projects. Why All This??! Why do we bother with all of the above? For code quality. Or is it quality code...? These guidelines aim at driving your team towards this goal. Some aspects do it by simply showing them the way and letting them do it (which is much better) and others take them by the hand (but that's how you educate people and develop habits). How do you know when the goal is within reach? Quality is Measurable Not always quantitatively, but it is measurable.
As mentioned, you need to develop a sense of pride in your team, and showing progress and good results is key. Measure code quality regularly and show progress between intervals, and how it matters. Do retrospectives to reflect on what has been done, and how it made things better or worse. There are great tools for continuous inspection, Sonar being a popular one in the Java world, though it can adapt to other technologies; and there are many others. Keep your code under the microscope and look for these pesky annoying bugs and microbes. But What if My Code is Already Crap? All of the above is fun and cute like a trip to Never Land, but it's not that easy to do when you already have (a pile of steamy and smelly) crap code, and a team reluctant to change. Here's the secret: you need to start somewhere. Personal anecdote: In a project, we worked with a codebase originally weighing 650,000+ Java LOC, 200,000+ lines of JSPs, 40,000+ JavaScript LOC, and 400+ MBs of binary dependencies. After about 18 months, it's 500,000 Java LOC (MOSTLY CLEAN), 150,000 lines of JSPs, and 38,000 JavaScript LOC, with dependencies down to barely 100MBs (and these are not in our SCM anymore!). How did we do it? We just did all of the above. Or tried hard. It's a team effort, but we slowly inject into our process regulations and tools to monitor the heart-rate of our product, while hastily slashing away the "fat": crap code, useless dependencies... We didn't stop all development to do this: we have occasional periods of relative peace and quiet where we are free to go crazy on the codebase and tear it apart, but most of the time we do it all by defaulting to a "review and refactor" mode every chance we get: during builds, during lunch, during bug fixing sprints, during Friday afternoons... There were some big "works"... Switching our build system from a giant Ant build of 8500+ XML LOC to a multi-module Maven build was one of them. We then had: clear-cut modules (or at least it was already a lot better, and we still have big plans for the future), automatic dependency management (for easy maintenance and updates, and to remove useless deps), faster, easier and reproducible builds, and daily reports on quality. Another was the injection of "utility tool-belts", even though we were trying to reduce dependencies: Google Guava and Apache Commons slim down your code and reduce the surface for bugs in your code a lot. We also persuaded our IT department that maybe using our new tools (JIRA, Fisheye, Crucible, Confluence, Jenkins) was better than the ones in place. We still needed to deal with some we despised (QC, Sharepoint and SupportWorks...), but it was an overall improved experience, with some more room left. And every day, there's now a trickle of between one and dozens of commits that deal only with fixing and refactoring things. We occasionally break stuff (you need unit tests, and you had better write them before you refactor stuff away), but overall the benefit for our morale AND for the product has been enormous. We get there one fraction of a code quality percentage at a time. And it's fun to see it increase!!! Note: Again, rigidity needs to be shaken to make room for new and better things. In my anecdote, our IT department is partly right in trying to impose some things on us, and wrong about others. Or maybe they used to be right. Things change. Prove that there are better ways to boost your productivity. Trial runs and prototypes are here for this.
The Super-Secret Incremental Spaghetti Code Refactoring Cycle for Awesome Quality +-----------------+ +-----------------+ | A N A L Y Z E +----->| I D E N T I F Y | +-----------------+ +---------+-------+ ^ | | v +--------+--------+ +-----------------+ | C L E A N +<-----| F I X | +-----------------+ +-----------------+ Once you have some quality tools at your toolbelt: Analyze your code with code quality checkers. Linters, static analyzers, or what have you. Identify your critical hotspots AND low hanging fruits . Violations have severity levels, and large classes with a large number of high-severity ones are a big red flag: as such, they appear as "hot spots" on radiator/heatmap types of views. Fix the hotspots first. It maximizes your impact in a short timeframe as they have the highest business value. Ideally, critical violations should dealt with as soon as they appear, as they are potential security vulnerabilities or crash causes, and present a high risk of inducing a liability (and in your case, bad performance for the lab). Clean the low level violations with automated codebase sweeps . It improves the signal-to-noise ratio so you are be able to see significant violations on your radar as they appear. There's often a large army of minor violations at first if they were never taken care of and your codebase was left loose in the wild. They do not present a real "risk", but they impair the code's readability and maintainability. Fix them either as you meet them while working on a task, or by large cleaning quests with automated code sweeps if possible. Do be careful with large auto-sweeps if you don't have a good test suite and integration system. Make sure to agree with co-workers the right time to run them to minimize the annoyance. Repeat until you are satisfied. Which, ideally, you should never be, if this is still an active product: it will keep evolving. Quick Tips for Good House-Keeping When in hotfix-mode , based on a customer support request: It's usually a best practice to NOT go around fixing other issues, as you might introduce new ones unwillingly. Go at it SEAL-style: get in, kill the bug, get out , and ship your patch. It's a surgical and tactical strike. But for all other cases , if you open a file, make it your duty to: definitely: review it (take notes, file issue reports), maybe: clean it (style cleanups and minor violations), ideally: refactor it (reorganize large sections and their neigbors). Just don't get sidetracked into spending a week from file to file and ending up with a massive changeset of thousands of fixes spanning multiple features and modules - it makes future tracking difficult. One issue in code = one ticket in your tracker. Sometimes, a changeset can impact multiple tickets; but if it happens too often, then you're probably doing something wrong. Addendum: Managing Visual Programming Environments The Walled Gardens of Bespoke Programming Systems Multiple programming systems, like the OP's G2, are different beasts... No Source "Code" Often they do not give you access to a textual representation of your source "code": it might be stored in a proprietary binary format, or maybe it does store things in text format but hides them away from you. Bespoke graphical programming systems are actually not uncommon in research labs, as they simplify the automation of repetitive data processing workflows. No Tooling Aside from their own, that is. 
You are often constrained by their programming environment, their own debugger, their own interpreter, their own documentation tools and formats. They are walled gardens , except if they eventually capture the interest of someone motivated enough to reverse engineer their formats and builds external tools - if the license permits it. Lack of Documentation Quite often, these are niche programming systems, which are used in fairly closed environments. People who use them frequently sign NDAs and never speak about what they do. Programming communities for them are rare. So resources are scarce. You're stuck with your official reference, and that's it. The ironic (and often frustrating) bit is that all the things these systems do could obviously be achieved by using mainstream and general purpose programming languages, and quite probably more efficiently. But it requires a deeper knowledge of programming, whereas you can't expect your biologist, chemist or physicist (to name a few) to know enough about programming, and even less to have the time (and desire) to implement (and maintain) complex systems, that may or may not be long-lived. For the same reason we use DSLs, we have these bespoke programming systems. Personal Anecdote 2: Actually, I worked on one of these myself. I didn't do the link with the OP's request, but my the project was a set of inter-connected large pieces of data-processing and data-storage software (primarily for bio-informatics research, healthcare and cosmetics, but also for business intelligence, or any domain implying the tracking of large volumes of research data of any kind and the preparation of data-processing workflows and ETLs). One of these applications was, quite simply, a visual IDE that used the usual bells and whistles: drag and drop interfaces, versioned project workspaces (using text and XML files for metadata storage), lots of pluggable drivers to heterogeneous datasources, and a visual canvas to design pipelines to process data from N datasources and in the end generate M transformed outputs, and possible shiny visualizations and complex (and interactive) online reports. Your typical bespoke visual programming system, suffering from a bit of NIH syndrome under the pretense of designing a system adapted to the users' needs. And, as you would expect, it's a nice system, quite flexible for its needs though sometimes a bit over-the-top so that you wonder "why not use command-line tools instead?", and unfortunately always leading in medium-sized teams working on large projects to a lot of different people using it with different "best" practices. Great, We're Doomed! - What Do We Do About It? Well, in the end, all of the above still holds. If you cannot extract most of the programming from this system to use more mainstream tools and languages, you "just" need to adapt it to the constraints of your system. About Versioning and Storage In the end, you can almost always version things, even with the most constrained and walled environment. Most often than not, these systems still come with their own versioning (which is unfortunately often rather basic, and just offers to revert to previous versions without much visibility, just keeping previous snapshots). It's not exactly using differential changesets like your SCM of choice might, and it's probably not suited for multiple users submitting changes simultaneously. 
But still, if they do provide such a functionality, maybe your solution is to follow our beloved industry-standard guidelines above, and to transpose them to this programming system!! If the storage system is a database, it probably exposes export functionalities, or can be backed-up at the file-system level. If it's using a custom binary format, maybe you can simply try to version it with a VCS that has good support for binary data. You won't have fine-grained control, but at least you'll have your back sort of covered against catastrophes and have a certain degree of disaster recovery compliance. About Testing Implement your tests within the platform itself, and use external tools and background jobs to set up regular backups. Quite probably, you fire up these tests the same that you would fire up the programs developed with this programming system. Sure, it's a hack job and definitely not up to the standard of what is common for "normal" programming, but the idea is to adapt to the system while trying to maintain a semblance of professional software development process. The Road is Long and Steep... As always with niche environments and bespoke programming systems, and as we exposed above, you deal with strange formats, only a limited (or totally inexistant) set of possibly clunky tools, and a void in place of a community. The Recommendation: Try to implement the above guidelines outside of your bespoke programming system, as much as possible. This ensures that you can rely on "common" tools, which have proper support and community drive. The Workaround: When this is not an option, try to retrofit this global framework into your "box". The idea is to overlay this blueprint of industry standard best practices on top of your programming system, and make the best of it. The advice still applies: define structure and best practices, encourage conformance. Unfortunately, this implies that you may need to dive in and do a tremendous amount of leg-work. So... Famous Last Words, and Humble Requests: Document everything you do. Share your experience. Open Source any tool your write. By doing all of this, you will: not only increase your chances of getting support from people in similar situations, but also provide help to other people, and foster discussion around your technology stack. Who knows, you could be at the very beginning of a new vibrant community of Obscure Language X . If there are none, start one! Ask questions on Stack Overflow , Maybe even write a proposal for a new StackExchange Site in the Area 51 . Maybe it's beautiful inside , but nobody has a clue so far, so help take down this ugly wall and let others have a peek!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155488", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50055/" ] }
155,521
I just read that there will be two versions of Windows available: Windows RT and Windows 8 . Could someone please describe what the differences between Windows RT and Windows 8 are? And how these differences may affect developers creating apps for one or the other? Also, what is WinRT ?
In Brief... WinRT (Windows Runtime, unlikely to be what you meant) is a software layer on top of which Metro apps are built, while Windows 8 is the whole operating system; Windows RT (most likely the one you meant), this is a version of Windows 8 for devices using processors based on the ARM achitecture and instruction set. You got the names a bit mixed up apparently - granted they made them confusingly similar - so I just addressed both and edited your question accordingly. WinRT / Windows Runtime WinRT a software layer that sits on top of the OS, and that is at the base of the new Metro design language approach. It's primarily a set of APIs to build Metro apps for all Metro-supported platforms (including for Windows 8 for ARM). See the image below for a general approximation of the Windows 8 Platform: Windows RT / Windows 8 for ARM Windows RT (where RT also means "runtime", to make things as confused as possible), used to be known as Windows 8 for ARM . It is a version targeting hardware manufacturers (mostly aiming at the tablet market). See this post Announcing the Windows 8 Editions (Archived, July 2012) of the Windows Team 's Blogging Windows blog ( emphasis mine ): Windows RT is the newest member of the Windows family – also known as Windows on ARM or WOA , as we’ve referred to it previously. This single edition will only be available pre-installed on PCs and tablets powered by ARM processors and will help enable new thin and lightweight form factors with impressive battery life. Windows RT will include touch-optimized desktop versions of the new Microsoft Word, Excel, PowerPoint, and OneNote . For new apps, the focus for Windows RT is development on the new Windows runtime , or WinRT , which we unveiled in September and forms the foundation of a new generation of cloud-enabled, touch-enabled, web-connected apps of all kinds. See the original post for a table listing the major differences between the versions, or Wikipedia's Windows 8 Editions article for more details and sources. Note that only software written using Win RT (the APIs) can run on Windows RT (the OS version).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155521", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/49694/" ] }
155,523
Background The longer I work on a project, the less clear it becomes. It's like I cannot separate various classes/objects anymore in my head. Everything starts mixing up, and it's extremely hard to take it all apart again. I start putting functions in classes where they really don't belong, and make silly mistakes such as writing code that I later find was 100% obsolete; things are no longer clearly mappable in my head. It isn't until I take a step back for several hours (or days sometimes!) that I can actually see what's going on again, and be productive. I usually try to fight through this; I am so passionate about coding that I wouldn't for the life of me know what else I could be doing. This is when stuff can get really weird: I get so up in my head that I sort of lose touch with reality (to some extent) in that various actions, such as pouring a glass of water, no longer happen on a conscious level. It happens on auto pilot, during which pretty much all of my conscious concentration (is that even a thing?) is devoted to borderline pointless problem solving (trying to separate elements of code). It feels like a losing battle. So I took an IQ test a while ago (Wechsler Adult Intelligence Scale I believe it was) and it turned out my Spatial Aptitude was quite low. I still got a decent total score, just above average, so I won't have to poke things with a stick for a living, but I am a little worried that this is such a handicap when writing/engineering computer programs that I won't ever be able to do it seriously or professionally. Question I am very much interested in what other people think of this... Could a low spatial aptitude be the cause of the above described problems? How is programming affected by spatial aptitude? Maybe I should be looking more along the lines of ADD or something similar, because I did get diagnosed with ADD at the age of 17 (5 years ago), but the medicine I received didn't seem to affect me that much, so I never took it all that seriously. As far as I know people are born with low/med/high spatial aptitude, so I think it's interesting to find out if the more fortunate are better programmers by birthright.
There's actually some hard research data on this, mostly collected over the past 35 years, and I also have experienced a few similar phenomenons, though not on a regular basis. See below for more. Research Data There appears to be some but minor correlation based on research performed and summarized in the following works. As often with research though, the study models differ between studies and they should be closely reviewed to understand why results present differences in conclusions. Exploring the psychological predictors of programming achievement [ PDF ] (Erdogan, Aydin, Kabaca, 2008) Unfortunately this one is vague on details. It points to the "high-impact" of "aptitudes" in general, but then only points to other research without giving the results for each aptitude test, so we don't know how spatial ability fares. It's mostly a litterature review more than actual research. Spatial ability and learning to program [ PDF ] (Jones, 2008) From the results of this analysis, there is evidence that spatial ability is important when learning to program. [...] While spatial ability has been shown to be relevant, we do not feel that mental rotation capacity should be used as a means of predetermining programming aptitude, but should be considered while devising pedagogical interventions. Thought needs to be given to teaching methods and software visualizations that help students with low spatial ability to envisage abstract concepts and build better mental models (Wiedenbeck et al., 2004). Predictors of Success in a First Programming Course [ PDF ] (Simon, Fincher & al., 2006) Only a small positive correlation was found between scores in the spatial visualisation (paper folding) task and programming marks. This suggests that components of IQ other than spatial skills may account for most of the effect of IQ on programming success (Mayer et al 1989). Who is likely to acquire programming skills? (Shute, 1991) Hemispheric Lateralization and Programming Ability , (Gasen, Morecroft, 1990) Correlates of problem-solving in programming [ PDF ] (Choi-man, 1988) Interesting one... Nice study model, and quantified results with several study groups and accounting for the reliability of study factors. It yields that: [...] it could be seen that, for the males, mathematics alone could account for 30.90% of variance on programming ability, and that spatial test could account for 8.00%. [...] [...] it could also be seen that, for females, only the performance of mathematics and spatial tests had significant effect in predicting the programming ability. Results of this study revealed that students who scored high in mathematics test and spatial test would score high in programming ability test. Learning, research, and the graphical representation of programming (Taylor, Cunniff, Uchiyama, 1986) Cognitive Requirements of Learning Computer Programming in Group and Individual Settings (Webb, 1985) Cognitive correlates of programming tasks in novice programmers (Irons, 1982) Research on aptitude for learning: A progress report [ PDF ] (RE Snow, 1976) Take it with a pinch of salt: Some are relatively dated, IQ tests might have changed since. I haven't done an in-depth search to find citations of each article to see if they were confirmed or debunked later on. Some links (especially the [PDF] kind) may not work for you if you don't have an affiliation to a library giving access to these online contents. 
Personal Opinion Warning and disclosure: I am NEITHER a psychologist NOR a neurologist, but I have been studying and teaching programming to both small kids (starting 6) and university students (up to 60!). Having studied with AND taught students as university teacher myself, including some students affected by spatial problems (and others with stronger disabilities), I have to say that while it could have been (I didn't keep track of my students based on disabilities, obviously) that some would have registered in a lower part of the general curve, I still remember clearly some scoring high (and even one in particular being the class' major for at least 2 years). My point is, while it may have an effect, and as shown by some of the research above, it doesn't account for the largest part of your ability to learn to program and think like a programmer. It's inconsequential, in that it won't stop you to learn if you really want to, and won't prevent you from working in the general case, though it could (as might be your case) make it slightly harder for you. There's virtually no limit to what and how fast you can learn . After all, no programmer doesn't like a good challenge, right? (I'm looking at you, RSI) Personal (Possibly Unrelated) Experience It might be that you are too passionate. How many hours do you work per day and per week? Do you take regular breaks? A Similar Case? At a period in my life, I worked days of at least 14 hours every day of the week, the whole year, to a point where it culminated to record weeks of 120 hours of work in front of a computer screen . Yes, that's only 48 hours left per week to eat, sleep, travel to and from work ( tip: avoid driving!! ), shower and other vital functions. At this particular point, I could pretty much go to sleep in a heart-beat (though usually having sleeping problems), but I would almost always keep dreaming of code, and I would also suddenly realize in the shower or even when walking or running or doing menial tasks that my mind went back to it in auto-pilot, as you said it yourself. Unfortunately, I wouldn't magically solve problems in my sleep; it would be closer to what you seem to describe and experience: a giant maelstrom of confused thoughts turning around in my head, which would sort of (seem to) make sense on a grander scale, but not clearly express any solution and without much success in grabbing one of these thoughts to focus on it, dissect it clearly and turn it into something useful. And this was usually rather tiresome and distressing. Relaxation Might Help Maybe you need to calm down just a bit, and relax and work less. Try to find something to take your mind off. Back then, I ended up often renouncing some precious hours of sleeping time to instead do something that would really stop this mad train of thought. It seems counterproductive, but I actually preferred to do a few thing where I would really relax than to sleep more and not be rested. The distraction for the nervous batteries, and the sleep for the physical batteries, in a sense. Identifying Triggers If that's not your case, then maybe there's something else involved in triggering this state for you. Try to isolate elements that are present in these situations, and see if you can reproduce this condition in other environments, to see if you find these elements as well. Does it happen more at work or at home, etc... 
Isolation Also, you may already have heard and tried this, but I have a friend with a minor spatial disability, and usually it helps for him, if working on computers, to be in a darker room, to avoid having too many complex views and windows open (to avoid distraction), and in general to keep things rather minimalistic (both in terms of design and colors, and in terms of content and representation). Try also to take regular breaks, and to let your mind run free for short periods of time every 1 or 2 hours, based on what works best for you. Maybe adopt the Pomodoro Technique or something similar (I don't have research on a correlation with this, but it could be helpful in forcing you to take breaks).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155523", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58433/" ] }
155,639
I'd like to consider myself a fairly experienced programmer. I've been programming for over 5 years now. My weak point though is terminology. I'm self-taught, so while I know how to program, I don't know some of the more formal aspects of computer science. So, what are practical algorithms/data structures that I could recognize and know by name? Note, I'm not asking for a book recommendation about implementing algorithms. I don't care about implementing them, I just want to be able to recognize when an algorithm/data structure would be a good solution to a problem. I'm asking more for a list of algorithms/data structures that I should "recognize". For instance, I know the solution to a problem like this: You manage a set of lockers labeled 0-999. People come to you to rent the locker and then come back to return the locker key. How would you build a piece of software to manage knowing which lockers are free and which are in used? The solution, would be a queue or stack. What I'm looking for are things like "in what situation should a B-Tree be used -- What search algorithm should be used here" etc. And maybe a quick introduction of how the more complex(but commonly used) data structures/algorithms work. I tried looking at Wikipedia's list of data structures and algorithms but I think that's a bit overkill. So I'm looking more for what are the essential things I should recognize?
An objective response: While my initial response to this question was based on my empirical experience as a soon-to-graduate CS student and my projected opinion of the type of people I wanted to work with in the CS field. There is actually an objective (with respect to the subjective opinions of the ACM SIGCSE and IEEE computing societies) answer. Every 10 years the ACM and the IEEE bodies cooperate on a joint publication that details suggestions for undergraduate computer science curriculum based on professional knowledge of the state of the computing industry. More information can be found at cs2013.org . The committee publishes a final report listing their curriculum recommendation . That said, I still think my list is pretty good. Original answer below. What Should I Know? Minimum I think an adept programmer should have at least undergraduate level knowledge in Computer Science. Sure, you can be effective at many jobs with only a small subset of Computer Science because of the rock solid community CS sits upon, and the narrowed focus of most professional positions. Also, many people will further specialize after undergraduate study. However, I do not think either are an excuse to not be privy of foundational CS knowledge. To answer the title question, here is what an undergraduate CS student (the foundation for an adept programmer) should know upon graduation: Data Structures Machine Data Representation Ones, Two's Complement, and Related Arithmetic Words, Pointers, Floating Point Bit Access, Shifting, and Manipulation Linked Lists Hash Tables (maps or dictionaries) Arrays Trees Stacks Queues Graphs Databases Algorithms Sorting: Bubble Sort (to know why it's bad) Insertion Sort Merge Sort Quick Sort Radix style sorts, Counting Sort and Bucket Sort Heap Sort Bogo and Quantum Sort (= Searching: Linear Search Binary Search Depth First Search Breadth First Search String Manipulation Iteration Tree Traversal List Traversal Hashing Functions Concrete implementation of a Hash Table, Tree, List, Stack, Queue, Array, and Set or Collection Scheduling Algorithms File System Traversal and Manipulation (on the inode or equivalent level). Design Patterns Modularization Factory Builder Singleton Adapter Decorator Flyweight Observer Iterator State [Machine] Model View Controller Threading and Parallel Programming Patterns Paradigms Imperative Object Oriented Functional Declarative Static and Dynamic Programming Data Markup Complexity Theory Complexity Spaces Computability Regular, Context Free, and Universal Turing Machine complete Languages Regular Expressions Counting and Basic Combinatorics Beyond To get into what you're asking about later in your question, if you are familiar with the above, you should be easily able to identify the appropriate pattern, algorithm, and data structure for a given scenario. However, you should recognize that there is often no best solution. Sometimes you may be required to pick the lesser of two evils or even simply choose between two equally viable solutions. Because of this, you need the general knowledge to be able to defend your choice against your peers. Here are some tips for algorithms and data structures: Binary Search can only (and should) be used on sorted data. Radix style sorts are awesome, but only when you have finite classes of things being sorted. Trees are good for almost anything as are Hash Tables. The functionality of a Hash Table can be extrapolated and used to solve many problems at the cost of efficiency. 
Arrays can be used to back most higher level data structures. Sometimes a "data structure" is no more than some clever math for accessing locations in an array. The choice of language can be the difference between pulling your hair out over, or sailing through, a problem. The ASCII table and a 128 element array form an implicit hash table (= Regular expressions can solve a lot of problems, but they can't be used to parse HTML . Sometimes the data structure is just as important as the algorithm. Some of the above might seem like no brainers, and some may seem vague. If you want me to go into more detail, I can. But, my hope is when encountered with a more concrete question such as, "Design a function that counts the number of occurrences of every character in a String", you look to the tip about the ASCII table and 128 element arrays forming neat implicit hash tables for the answer. Based off these ideas, I will propose an answer the locker problem outlined in your question. Answer to the problem posed in your question. This may not be the best answer to your question, but I think it's an interesting one that doesn't require anything too complex. And it will certainly beat the time complexity of using a queue, or stack which require linear time to determine whether a locker is free or not. You have 0-999 lockers. Now, because you have a fixed number of lockers, you can easily conceive a hashing function with no collisions on the range 0-999. This function is simply h(x) = x mod 1000. Now, [conceptually] construct a hash table with integer keys and the contents of a 1000 element char array as your values. If a customer wants to reserve locker 78 for use, simply put 78 into the hash function (returning 78), and then add that number to the base pointer of the array -- storing a true value at the location pointed to by the offset value. Similarly, if you need to check whether 78 is in use, simply read the value stored at that location and check against true. This solution operates in constant time for lookups and storage as opposed to a log(n) time storage and lookup in the case of a priority queue backed by a binary tree. The description is intentionally verbose so you can see the higher concepts being boiled down into an efficient algorithm. Now, you might ask, what if I need to know all of the available lockers, wouldn't a priority queue be better? If there are k available lockers in the priority queue, iterating over all of them will take k steps. Further, depending on your priority queue implementation, you might have to rebuild your priority queue as you look at it all.. which would take k*log(k) : (k < 1000) steps. In the array solution, you only have to iterate of a 1000 element array and check which ones are open. You can also add an available or used list to the implementation to check in k time only.
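To make the array-backed idea above concrete, here is a minimal Java sketch of the locker manager; the class and method names are illustrative choices, not part of the original problem statement.

public class LockerManager {
    // Index = locker number; h(x) = x mod 1000 is the identity on 0-999,
    // so a plain boolean array behaves as the collision-free hash table described above.
    private final boolean[] inUse = new boolean[1000];

    public boolean isFree(int lockerId) {
        return !inUse[lockerId];              // O(1) lookup
    }

    public boolean rent(int lockerId) {
        if (inUse[lockerId]) {
            return false;                     // already taken
        }
        inUse[lockerId] = true;               // O(1) storage
        return true;
    }

    public void returnKey(int lockerId) {
        inUse[lockerId] = false;              // O(1) release
    }

    // Listing every free locker is a single linear sweep over the 1000 slots.
    public java.util.List<Integer> freeLockers() {
        java.util.List<Integer> free = new java.util.ArrayList<>();
        for (int i = 0; i < inUse.length; i++) {
            if (!inUse[i]) {
                free.add(i);
            }
        }
        return free;
    }
}

A separate free-list or counter can be kept alongside the array if enumerating free lockers needs to be faster than that sweep, as the last sentence above suggests.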
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155639", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/483/" ] }
155,758
I've been reading Martin Fowler's note on Continuous Integration and he lists as a must "Everyone Commits To the Mainline Every Day". I do not like to commit code unless the section I'm working on is complete, and in practice I commit my code every three days: one day to investigate/reproduce the task and make some preliminary changes, a second day to complete the changes, and a third day to write the tests and clean it up^ for submission. I would not feel comfortable submitting the code sooner. Now, I pull changes from the repository and integrate them locally usually twice a day, but I do not commit that often unless I can carve out a smaller piece of work. Question: is committing every day such a good practice that I should change my workflow to accommodate it, or is it not that advisable? Edit: I guess I should have clarified that I meant "commit" in the CVS meaning of it (aka "push") since that is likely what Fowler would have meant in 2006 when he wrote this. ^ The order is more arbitrary and depends on the task; my point was to illustrate the time span and activities, not the exact sequence.
I commit code several times a day . Whenever I reach a point where the code is complete enough to compile and doesn't break other things, it goes in. You should look at breaking up your work so you can safely check in a few times a day. There are two rationales for this: Any work that is not checked in may be lost - your computer may have a catastrophic failure. In this case, the longer you wait, the more work you lose. The more work you do without checking in, the more code others will need to integrate when you finally decide that it's baked. This introduces more chances of conflicts and merge issues.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155758", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/40518/" ] }
155,852
Over time I could understand two parts of SOLID – the “S” and “O”. “O” – I learned Open Closed Principle with the help of Inheritance and Strategy Pattern. “S” – I learned Single Responsibility principle while learning ORM (the persistence logic is taken away from domain objects). In a similar way, what are the best regions/tasks to learn other parts of SOLID (the “L”, “I” and “D”)? References msdn - Dangers of Violating SOLID Principles in C# channel9 - Applying S.O.L.I.D. Principles in .NET/C# OOPS Principles (SOLID Principles)
I was in your shoes a couple of months ago until I found a very helpful article. Each principle is nicely explained with real-world situations that each software developer may face in their projects. I am cutting it short here and pointing to the reference - S.O.L.I.D. Software Development, One Step at a Time . As pointed out in the comments, there is another very good PDF read - Pablo's SOLID Software Development . In addition, there are some good books that describe SOLID principles in more detail - Good Book on SOLID Software Development . Edit: a short summary of each principle, drawn from the comments: "S" – Single Responsibility Principle is driven by the needs of the business to allow change. "A single reason to change" helps you understand which logically separate concepts should be grouped together by considering the business concept and context, instead of the technical concept alone. In other words, I learned that each class should have a single responsibility. The responsibility is to just accomplish the assigned task. "O" – I learned the Open Closed Principle and started to "prefer composition over inheritance" and, as such, to prefer classes that have no virtual methods and are possibly sealed, but depend on abstractions for their extension. "L" – I learned the Liskov Substitution Principle with the help of the Repository pattern for managing data access. "I" – I learned about the Interface Segregation Principle by learning that clients shouldn't be forced to implement interfaces they don't use (like in the Membership Provider in ASP.NET 2.0). So an interface should not have "a lot of responsibilities". "D" – I learned about the Dependency Inversion Principle and started to write code that is easy to change . Easier to change means a lower total cost of ownership and higher maintainability. As a useful resource from CodePlex was mentioned in the comments, a reference is included to SOLID by example
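As a rough Java illustration of the "O", "L" and "D" summaries above (this example is mine, not taken from the linked articles), consider a report service that depends only on an abstraction:

// ReportService depends on an abstraction (DIP); new delivery channels are
// added by writing new implementations rather than editing existing classes
// (OCP); any well-behaved Notifier can be substituted where the interface is
// expected (LSP); and the interface itself stays small (ISP).
interface Notifier {
    void send(String message);
}

class EmailNotifier implements Notifier {
    public void send(String message) {
        // an SMTP call would go here
    }
}

class SmsNotifier implements Notifier {
    public void send(String message) {
        // an SMS gateway call would go here
    }
}

class ReportService {
    private final Notifier notifier;

    ReportService(Notifier notifier) {   // the dependency is injected
        this.notifier = notifier;
    }

    void publish(String reportName) {
        // report generation itself lives elsewhere (SRP)
        notifier.send("Report ready: " + reportName);
    }
}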
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155852", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58676/" ] }
155,880
I have a chunk of code that looks something like this: function bool PassesBusinessRules() { bool meetsBusinessRules = false; if (PassesBusinessRule1 && PassesBusinessRule2 && PassesBusinessRule3) { meetsBusinessRules= true; } return meetsBusinessRules; } I believe there should be four unit tests for this particular function. Three to test each of the conditions in the if statement and ensure it returns false. And another test that makes sure the function returns true. Question: Should there actually be ten unit tests instead? Nine that checks each of the possible failure paths. IE: False False False False False True False True False And so on for each possible combination. I think that is overkill, but some of the other members on my team do not. The way I look at it is if BusinessRule1 fails then it should always return false, it doesn't matter if it was checked first or last.
Formally, those types of coverage have names. First, there's predicate coverage : you want to have a test case that makes the if statement true, and one that makes it false. Having this coverage met is probably a basic requirement for a good test suite. Then there's Condition Coverage : Here you want to test that each sub-condition in the if takes both the value true and the value false. This obviously creates more tests, but it usually catches more bugs, so it's often a good idea to include in your test suite if you have time. The most advanced coverage criterion is usually called Combinatorial Condition Coverage : Here the goal is to have a test case that goes through all possible combinations of boolean values in your test. Is this better than simple predicate or condition coverage? In terms of coverage, of course. But it's not free. It comes at a very high cost in test maintenance. For this reason, most people don't bother with full combinatorial coverage. Usually, testing all branches (or all conditions) will be good enough for catching bugs. Adding the extra tests for combinatorial testing won't usually catch more bugs, but it requires a lot of effort to create and maintain. The extra effort usually makes this not worth the very small payoff, so I wouldn't recommend it. Part of this decision should be based on how risky you think that code will be. If it has a lot of room to fail, it's worth testing. If it's somewhat stable, and won't change much, you should consider focusing your testing efforts elsewhere.
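To show what the first two levels look like in practice, here is a sketch in Java with JUnit; the question's method is restated with the three rules as parameters so each one can be toggled independently (the helper method and its signature are hypothetical, not the original code).

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class BusinessRulesTest {

    // Hypothetical restatement of the method from the question.
    static boolean passesBusinessRules(boolean rule1, boolean rule2, boolean rule3) {
        return rule1 && rule2 && rule3;
    }

    // Predicate coverage: the whole condition evaluates to true once...
    @Test
    public void allRulesPass() {
        assertTrue(passesBusinessRules(true, true, true));
    }

    // ...and condition coverage: each sub-condition is false in exactly one test.
    @Test
    public void rule1Failing() {
        assertFalse(passesBusinessRules(false, true, true));
    }

    @Test
    public void rule2Failing() {
        assertFalse(passesBusinessRules(true, false, true));
    }

    @Test
    public void rule3Failing() {
        assertFalse(passesBusinessRules(true, true, false));
    }

    // Combinatorial condition coverage would add the remaining four of the
    // eight input combinations; as argued above, that rarely pays off here.
}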
{ "source": [ "https://softwareengineering.stackexchange.com/questions/155880", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56736/" ] }
156,073
I feel like I have burned out, even though I am only out of college for 5 years. For the first 3 years of my career, things were going awesome. I was never anything special in school, but I felt special at my company. Looking back, I could tell that I made all the right moves: I actively tried to improve myself daily. I made a point of helping anyone I could. I made a point (and read books about) being a good team member. I had fun. After 3 years in a row as being rated as a top employee, I converted that political capital into choosing to work on an interesting, glamorous project with only 2 developers: me and a highly respected senior technical leader. I worked HARD on that project, and it came out a huge success. High in quality, low in bugs, no delays, etc. The senior tech lead got a major promotion and a GIGANTIC bonus. I got nothing. I was so disappointed that I just stopped caring. Over the last year, I have just kind of floated. During my first 4 years I felt energized after a 10 hour day. Now I can barely be bothered to work 6 hours a day. Any advice? I don't even know what I'm asking. I am just hoping smart people see this and drop me a few pieces of wisdom.
It Happens Unfortunately we don't always get the credit we deserve, or management will give credit to the people directly under them, who do not necessarily have the power (or honesty) to bestow some of it upon you. It's an organizational thing: by way of the organigram, it should trickle down; except a few people act as dams. I'm afraid that's what happened to you. Most likely, that senior tech did deserve a bonus and promotion, but you were too far below the radar for the people above. Basically, this happened: Market Yourself Just because you already have a job doesn't mean you should stop selling yourself. Ask for a Raise!!! People have a hard time understanding this: they won't come to you naturally. And if they do, it's one of these two cases: they hand out raises to keep control over the increases, not you (you feel happy and recognized, you don't need to request one, and would feel bad about asking for more). you work in the most awesome company ever created (please send me the address). Do ask for a raise . Learn to keep track of what you do. Keep track of bug counts, commit logs, real achievements. "Cover your ass" is a real rule in the world. Requesting a raise means different things: "I want more money and am greedy" Except that's what you think first, but it's not necessarily what they think first. "I want what I deserve, or I'll get it somewhere else" If you ask for a raise, it means you are conscious that there is something better out there. "I am unhappy" And the obvious implication is that you will not perform as well. Asking for a raise implies a status change. You get more money, but also more scrutiny. People will look to make sure that you deserved it, after the fact . Be prepared for this as well. Your position in a company is a fluid thing: it changes and evolves. Take control of that and don't just let things happen. Raise the Issue You mention your boss... how come he didn't notice this issue? Or did he, but didn't do anything about it? It's his responsibility to make sure the work of his subordinates is well-advertised and respected. If he's unaware of your problem, talk it over with him. Make it clear that this is an issue, that it affected you. Maybe you'll come across as a diva (maybe you are, for all I know), but IT DOESN'T MATTER . What matters is that your boss will notice that one of his employees is unhappy, and that it is a problem for him and others. Look for Alternatives You're young, you're employable, and you're good. Why would you think you need to stay where you are? Maybe this was just your first experience, and an opportunity to cut your teeth on something. Now, you can go out and reach for more. It's sometimes easier to get a career break by actually taking a career break, in one sense. So look for other jobs, brush up your resume, sell yourself well, and attempt to reach for the next job slightly up the ladder that you are interested in. And state your salary expectations (well, advertise higher than your salary expectations, actually) to agencies. Be careful about your privacy. It's best, at this stage, that nobody in your company knows that you are looking. But if they call you up on it, then so what? It also shows that there's an issue and they need to address it, or things will change. They can't control that. Basically, it allows you to do this: Keep it Professional, and Civil - Don't Burn Bridges DO NOT whine. DO NOT get too passionate. DO NOT get personal. DO stay professional. DO stay polite. These are professional, business relationships .
You will most likely need references from your current company when you decide to go somewhere else. Do not burn bridges. Or, this will happen: (Unlikely though, as that would mean your boss tries to keep you while knowingly making you unhappy... Surely, he'd know no good can come from that. Or run for the hills, now!) About Burn-Out I haven't actually addressed this... While it doesn't sound to me like you've burned out and are simply demotivated, I would say that there's no age requirement or limit for burning out. It can happen at 23 like it can at 77. The good thing is that you can recover from it at 23 and look past it. It's harder at 45, for instance, to look for alternatives. Don't overwork yourself. Don't do jobs for extended periods of time that suck the life and the joy of programming out of you. It doesn't seem like you were doing these things, so I doubt you actually burned out. You were burned, but in a different way. Consider it just a lesson. Now get back to work, get that raise or recognition, and remember to have fun coding. And if it's tough at the moment to have fun doing it, I'd even advise picking up small projects that make it fun at the workplace (developing a tool to make something easier, something like this). Pictures are courtesy of Dilbert.com and Scott Adams .
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156073", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58828/" ] }
156,096
I'm curious about the birth of the compiler. How did programming begin? Did people first build hardware that recognized a certain set of commands, or did people define a language and then build hardware around it? And on a related note, what was the first programming language?
This has a very clear answer, actually: Source code came first – by a big margin. Before giving the technical details, a bit of perspective: The first programming languages were all translated into machine language or assembler by hand . The idea of using a piece of software to automate this translation (either via a compiler or evaluator) always came later, and was far from intuitive. Consider this quote of the Wikipedia article on FORTRAN which illustrates the reluctance compilers had to face: … the first FORTRAN compiler [was] delivered in April 1957. This was the first optimizing compiler, because customers were reluctant to use a high-level programming language unless its compiler could generate code whose performance was comparable to that of hand-coded assembly language. [emphasis mine] => By the time the FORTRAN compiler hit the market (1957), people were already happily programming both in assembly language and FORTRAN. The case was similar for LISP (from Hackers & Painters ): Steve Russell said, look, why don't I program this eval …, and I said to him, ho, ho, you're confusing theory with practice, this eval is intended for reading, not for computing. But he went ahead and did it. That is, he compiled the eval in my paper into IBM 704 machine code, fixing bug, and then advertised this as a Lisp interpreter, which it certainly was. So at that point Lisp had essentially the form that it has today..." Once again, not only does the source code (in LISP) predate the interpreter, the latter wasn’t even implicit in the former. But these developments are relatively late. Even not considering Charles Babbage’s Analytical Engine and Ada Lovelace’s related first program , there were programming languages in the 20th century which predated compilers: Konrad Zuse’s Plankalkül and the mathematical construct of λ-calculus introduced by Alonzo Church. Both of these are indubitably formally specified programming languages, but neither had a compiler at the time. To put this into perspective, λ-calculus is from the 1930s, and Plankalkül was developed around 1945. By contrast, the first FORTRAN compiler came out in 1957 (but again three years after FORTRAN was specified).
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156096", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50424/" ] }
156,156
I know what the JIT compiler is, but why is it called that? It obviously catches exceptions Just in Time , but how and why should it be called this? Sorry if this sounds a bit vague.
The Just-In-Time compiler doesn't have anything to do with exceptions. It refers to when code is compiled. Some languages, like Java, are translated from the language that you program in with your favorite editor into byte code. At runtime, some byte code might be further compiled into native code just prior to execution (hence the name, just-in-time). This is in contrast to some languages, like C, that are compiled into native machine code at the time when you invoke the compiler.
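As a small illustration of this: with the HotSpot JVM you can watch the just-in-time compiler at work by running the class below with the HotSpot-specific diagnostic flag -XX:+PrintCompilation, which logs methods as they are compiled to native code at run time. The class name and loop counts here are arbitrary, chosen only to make the method "hot".

// Run with:  java -XX:+PrintCompilation JitDemo
public class JitDemo {

    static long sumOfSquares(int n) {
        long total = 0;
        for (int i = 0; i < n; i++) {
            total += (long) i * i;
        }
        return total;
    }

    public static void main(String[] args) {
        long result = 0;
        // Calling the method many times makes it "hot", so the JIT compiler
        // translates its bytecode into native machine code while the program runs.
        for (int i = 0; i < 100_000; i++) {
            result = sumOfSquares(10_000);
        }
        System.out.println(result);
    }
}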
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156156", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/41201/" ] }
156,181
Is varchar just a remnant from before text came around, or are there use cases where you would want to use a varchar ? (Or char for that matter..) (I use Postgres and MySQL (MyISAM) daily, so those are what I'm most interested in, but answers for other databases are of course welcome. ^_-)
In General Text columns are non-standard and implementation specific. In many cases, depending on the database, they may have a combination of one or more of the following restrictions: not indexable , not searchable and not sortable . In Postgres All these types are internally saved using the same C data structure. In MySQL The text column is a specialized version of BLOB and has restrictions on indexing. These two examples alone can be extrapolated to the other SQL RDBMS systems and should be reason enough to understand when to choose one type over the other. Just to make it explicitly clear, you should never use TEXT as it is proprietary and non-standard. Any SQL you write against it will not be portable and is guaranteed to cause you problems in the future. Only use types that are part of the ANSI Standard . Use CHAR when you know you have a fixed number of characters for every entry. Use VARCHAR when you have a variable number of characters for every entry. If you need more storage than VARCHAR can provide, use CLOB with UTF-8 encoding or an equivalent standard type. NEVER use TEXT as it is non-standard.
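A small sketch of how that advice plays out in a table definition; the table and column names are made up, and exact type names and length limits vary by RDBMS (some engines expose the standard CLOB under a different name):

CREATE TABLE customer (
    country_code CHAR(2)      NOT NULL,  -- always exactly 2 characters: CHAR
    email        VARCHAR(254) NOT NULL,  -- variable length with a known upper bound: VARCHAR
    biography    CLOB                    -- large, effectively unbounded text: the standard CLOB,
                                         -- rather than a proprietary TEXT column
);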
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156181", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27881/" ] }
156,220
What are ways to allow the versioning of database entries (data)? Think of the content-managment-systems abilities to revert back changes of articles. What are their pros/cons?
There are basically two approaches: an audit table, with all previous values stored in it, or including a start/end date as part of the table, where every update creates a new record while closing out the old one. Update: SQL Server 2016 supports this as a design pattern/table type: https://docs.microsoft.com/en-us/sql/relational-databases/tables/temporal-tables?view=sql-server-2017
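A sketch of the second approach (start/end dates) using a hypothetical article table; the column names are illustrative, and the exact timestamp functions vary slightly between databases:

CREATE TABLE article_version (
    article_id  INTEGER       NOT NULL,
    title       VARCHAR(200)  NOT NULL,
    body        VARCHAR(8000),
    valid_from  TIMESTAMP     NOT NULL,
    valid_to    TIMESTAMP,                -- NULL marks the current version
    PRIMARY KEY (article_id, valid_from)
);

-- "Updating" article 42 closes the current row and inserts a new one:
UPDATE article_version
   SET valid_to = CURRENT_TIMESTAMP
 WHERE article_id = 42
   AND valid_to IS NULL;

INSERT INTO article_version (article_id, title, body, valid_from, valid_to)
VALUES (42, 'Revised title', 'Revised body', CURRENT_TIMESTAMP, NULL);

-- Current state of every article:
SELECT * FROM article_version WHERE valid_to IS NULL;

-- Reverting is just copying an old row forward as the new current version.

The audit-table approach looks similar, except that the main table keeps only the current row and a trigger (or application code) copies the old values into a separate history table on every update.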
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156220", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56779/" ] }
156,266
If I come across a non-critical typo in code (say, an errant apostrophe in a print(error) statement), is it worth making a commit to resolve that error, or should it simply be left alone? Specifically, I'm curious about weighing the gumming-up of the commit log against the value of resolving these non-critical typos. I'm leaning toward resolving them. Am I being pedantic?
My personal feeling is that improving quality is worth the minor inconvenience of an additional commit log entry, even for small improvements. After all, small improvements count a lot when you factor in the broken window effect . You might want to prefix it with a TRIVIAL: tag, or mark it as trivial if your VCS supports it.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156266", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58421/" ] }
156,290
It has been a few months since I started my position as an entry level software developer. Now that I am past some learning curves (e.g. the language, jargon, syntax of VB and C#) I'm starting to focus on more esoteric topics, as to write better software. A simple question I presented to a fellow coworker was responded with "I'm focusing on the wrong things." While I respect this coworker I do disagree that this is a "wrong thing" to focus upon. Here was the code (in VB) and followed by the question. Note: The Function GenerateAlert() returns an integer. Dim alertID as Integer = GenerateAlert() _errorDictionary.Add(argErrorID, NewErrorInfo(Now(), alertID)) vs... _errorDictionary.Add(argErrorID, New ErrorInfo(Now(), GenerateAlert())) I originally wrote the latter and rewrote it with the "Dim alertID" so that someone else might find it easier to read. But here was my concern and question: Should one write this with the Dim AlertID, it would in fact take up more memory; finite but more, and should this method be called many times could it lead to an issue? How will .NET handle this object AlertID. Outside of .NET should one manually dispose of the object after use (near the end of the sub). I want to ensure I become a knowledgeable programmer that does not just rely upon garbage collection. Am I over thinking this? Am I focusing on the wrong things?
"Premature optimization is the root of all evil (or at least most of it) in programming." - Donald Knuth When it comes to your first pass, just write your code so that it's correct and clean. If it is later determined that your code is performance-critical (there are tools to determine this called profilers), it can be re-written. If your code is not determined to be performance-critical, readability is far more important. Is it worth digging into these topics of performance and optimization? Absolutely, but not on your company's dollar if it's unnecessary.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156290", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/-1/" ] }
156,303
F# comes out of the box with an interactive REPL. C# has nothing of the sort and is in fact rather difficult to play around with without setting up a full project (though LINQPad works and it's also possible to do via PowerShell). Is there something fundamentally different about the languages that allows F# to have the interactive console but makes it difficult to implement it for C#? Since people are still coming to this question many years later, I should note that now there are many options. You can use PowerShell (pre-installed on every modern Windows machine) to play with the .NET framework. Or you can use LINQPad to prototype arbitrary C# code . Or you can use ScriptCs, or you can use an online jsfiddle-type environment like Complify.net or Jsil . Lots of options.
Is there something fundamentally different about the languages that allows F# to have the interactive console but makes it difficult to implement it for C#? Yes. F# is a descendant of the ML programming language, which in turn was heavily influenced by languages like Lisp and Scheme. Those languages were designed from day one to have three nice properties. First, those languages do not really have statements the way you think of them in C#. Rather, almost everything is an expression that has a value , so an evaluate-and-then-print-the-value mechanism makes sense in almost every situation. Second, those languages discourage programming with side effects, so you can make evaluations without worrying that you’re going to be messing up global state. Third, most of the work you do in those languages is “at the top level”; there is typically no enclosing “class” or “namespace” or other context. By contrast, C# emphasizes programming control flow with statements that produce side effects, and those statements are always in multiple nested containers -- a namespace, a class, a method, and so on. So these are all things that make it harder for C# to have a REPL, but certainly not impossible . We’d just need to figure out what the semantics are for statements and expressions that appear outside of the usual context, and what the semantics are of mutations that change name bindings, and so on. Why does F# have an interactive mode but not C#? Because the F# team decided that having a REPL loop was a priority-one scenario for them. The C# team historically has not. Features do not get implemented unless they are the highest priority features that fit into the budget; until now, a C# REPL has not been at the top of our list. The Roslyn project has a C# REPL (and will eventually have a VB REPL as well, but it is not ready yet.) You can download a preview release of it to see how you like it at http://www.microsoft.com/en-us/download/details.aspx?id=27746
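Since F# inherits this from the ML family, a toplevel session in OCaml (another ML descendant, chosen here only because it shows the same property) illustrates what "almost everything is an expression with a value" buys you: each input is evaluated and its type and value printed back, with no enclosing class, namespace or Main required. The session below is a hand-written sketch, not captured output.

# 2 + 3 * 4;;
- : int = 14
# let square x = x * x;;
val square : int -> int = <fun>
# List.map square [1; 2; 3];;
- : int list = [1; 4; 9]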
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156303", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/27600/" ] }
156,336
I’m Looking for a definitive answer from a primary or secondary source for why (notably) Java and C# decided to have a static method as their entry point, rather than representing an application instance by an instance of an Application class (with the entry point being an appropriate constructor). Background and details of my prior research This has been asked before. Unfortunately, the existing answers are merely begging the question . In particular, the following answers don’t satisfy me, as I deem them incorrect: There would be ambiguity if the constructor were overloaded. – In fact, C# (as well as C and C++) allows different signatures for Main so the same potential ambiguity exists, and is dealt with. A static method means no objects can be instantiated before so order of initialisation is clear. – This is just factually wrong, some objects are instantiated before (e.g. in a static constructor). So they can be invoked by the runtime without having to instantiate a parent object. – This is no answer at all. Just to justify further why I think this is a valid and interesting question: Many frameworks do use classes to represent applications, and constructors as entry points. For instance, the VB.NET application framework uses a dedicated main dialog (and its constructor) as the entry point 1 . Neither Java nor C# technically need a main method. Well, C# needs one to compile, but Java not even that. And in neither case is it needed for execution. So this doesn’t appear to be a technical restriction. And, as I mentioned in the first paragraph, for a mere convention it seems oddly unfitting with the general design principle of Java and C#. To be clear, there isn’t a specific disadvantage to having a static main method, it’s just distinctly odd , which made me wonder if there was some technical rationale behind it. I’m interested in a definitive answer from a primary or secondary source, not mere speculations. 1 Although there is a callback ( Startup ) which may intercept this.
TL;DR In Java, the reason of public static void main(String[] args) is that Gosling wanted the code written by someone experienced in C (not in Java) to be executed by someone used to running PostScript on NeWS For C#, the reasoning is transitively similar so to speak. Language designers kept the program entry point syntax familiar for programmers coming from Java. As C# architect Anders Hejlsberg puts it , ...our approach with C# has simply been to offer an alternative... to Java programmers... Long version expanding above and backed up with boring references. java Terminator Hasta la vista Baby! VM Spec, 2.17.1 Virtual Machine Start-up ...The manner in which the initial class is specified to the Java virtual machine is beyond the scope of this specification, but it is typical, in host environments that use command lines, for the fully qualified name of the class to be specified as a command-line argument and for subsequent command-line arguments to be used as strings to be provided as the argument to the method main. For example, using Sun's Java 2 SDK for Solaris, the command line java Terminator Hasta la vista Baby! will start a Java virtual machine by invoking the method main of class Terminator (a class in an unnamed package) and passing it an array containing the four strings "Hasta", "la", "vista", and "Baby!"... ...see also: Appendix: I need your clothes, your boots and your motorcycle My interpretation: execution targeted for use like typical scripts in command line interface. important sidestep ...that helps avoid a couple of false traces in our investigation. VM Spec, 1.2 The Java Virtual Machine The Java virtual machine knows nothing of the Java programming language... I noticed above when studying prior chapter - 1.1 History which I thought could be helpful (but turned out useless). My interpretation: execution is governed by VM spec alone, which explicitly declares that it has nothing to do with Java language => OK to ignore JLS and anything Java language related at all Gosling: a compromise between C and scripting language... Based on above, I began searching the web for JVM history . Didn't help, too much garbage in results. Then, I recalled legends about Gosling and narrowed down my search to Gosling JVM history . Eureka! How The JVM Spec Came To Be In this keynote from the JVM Languages Summit 2008, James Gosling discusses... Java's creation,... a compromise between C and scripting language... My interpretation: explicit declaration that at the moment of creation, C and scripting have been considered most important influences. Already seen nod to scripting in VM Spec 2.17.1, command line arguments sufficiently explain String[] args but static and main aren't there yet, need to dig further... Note while typing this - connecting C, scripting and VM Spec 1.2 with its nothing-of-Java - I feel like something familiar, something... object oriented is slowly passing away. Take my hand and keep movin' Don't slow down we're nearly there now Keynote slides are available online: 20_Gosling_keynote.pdf , quite convenient for copying key points. page 3 The Prehistory of Java * What shaped my thinking page 9 NeWS * Networked Extensible Window System * A window system based on scripting.... PostScript (!!) page 16 A Big (but quiet) Goal: How close could I get to a "scripting" feel... page 19 The original concept * Was all about building networks of things, orchestrated by a scripting language * (Unix shells, AppleScript, ...) 
page 20 A Wolf in Sheeps Clothing * C syntax to make developers comfortable A-ha! Let's look closer at C syntax . The "hello, world" example... main() { printf("hello, world\n"); } ...a function named main is being defined. The main function serves a special purpose in C programs; the run-time environment calls the main function to begin program execution. ...The main function actually has two arguments, int argc and char *argv[] , respectively, which can be used to handle command line arguments... Are we getting closer? you bet. It is also worth following "main" link from above quote: the main function is where a program starts execution. It is responsible for the high-level organization of the program's functionality, and typically has access to the command arguments given to the program when it was executed. My interpretation: To be comfortable for C developer, program entry point has to be main . Also, since Java requires any method to be in class, Class.main is as close as it gets: static invocation, just class name and dot, no constructors please - C knows nothing like that. This also transitively applies to C#, taking into account the idea of easy migration to it from Java. Readers thinking that familiar program entry point doesn't matter are kindly invited to search and check Stack Overflow questions where guys coming from Java SE are trying to write Hello World for Java ME MIDP. Note MIDP entry point has no main nor static . Conclusion Based on above I would say that static , main and String[] args were at the moments of Java and C# creation most reasonable choices to define program entry point . Appendix: I need your clothes, your boots and your motorcycle Have to admit, reading VM Spec 2.17.1 was enormous fun. ...the command line java Terminator Hasta la vista Baby! will start a Java virtual machine by invoking the method main of class Terminator (a class in an unnamed package) and passing it an array containing the four strings "Hasta", "la", "vista", and "Baby!". We now outline the steps the virtual machine may take to execute Terminator , as an example of the loading, linking, and initialization processes that are described further in later sections. The initial attempt... discovers that the class Terminator is not loaded... After Terminator is loaded, it must be initialized before main can be invoked, and a type (class or interface) must always be linked before it is initialized. Linking (§2.17.3) involves verification, preparation, and (optionally) resolution... Verification (§2.17.3) checks that the loaded representation of Terminator is well formed... Resolution (§2.17.3) is the process of checking symbolic references from class Terminator ... Symbolic references from Terminator oh yeah.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156336", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2366/" ] }
156,382
The more SQL fields I add to my tables and the more primary/foreign keys I add, the more I lose the overview of which fields matter for certain scenarios like Get/Add/Delete/Update data. I use SQL Server Management Studio to design my database in the database diagram designer. Is there a better tool, or a good approach, to keep track of the meaning of all those fields?
How about a logical semantically correct naming system that avoids duplication, tautology and abbreviations? That and a Glossary and a logical / physical ER diagram of the database, preferably hosted on a Wiki of some sort, is about all you can do. Plus what Jason Holland says in the comment below ! :-)
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156382", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57734/" ] }
156,519
I'm very new to GIT/GitHub (as new as starting yesterday). I would like to know what is the best way to manage multiple people working on the same project with Github. Currently I'm managing one project with four developers. How do I go about the workflow and making sure everything is in sync? (Note: All developers will have one universal account.) Does each developer need to be on a different branch? Will I be able to handle 2 people working on the same file? Please post a detailed answer, I'm not a shy reader. I need to understand this well.
If all of the developers have commit access to the repo, you should not need to do anything special. They will pull changes from the repo, make their own changes, commit locally, and then push back into the public repo when they have something working. If on the other hand you have one (or a few) developer(s) responsible for committing to the repo, and the others are providing patches to these. Have each of them clone the repo into their own accounts and have them send pull-requests when they have a change they want into the main repo. It's also possible to make specific clones for working on specific features if you wish. Using the same workflow with pull-requests to get changes into the main repo when the feature is done. If by "All developers will have one universal account" you mean that all developers will share one GitHub account and appear as the same committer in the repo, that's a bad idea. Make separate accounts and set them up as collaborators if you want them all to have commit access. As for your specific questions: No, use branches for features, fixes etc that will take more than one commit. More than one developer can be working on the same branch. Yes, git handles conflicts really well, so there's no problems having people work on the same file. No problems except, conflict resolution may not always be trivial if there's fundamental changes to a file that has been edited by more than one member. This is however nothing that can not be overcome by talking together. Version control does not replace communication. Good luck!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156519", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59079/" ] }
156,541
I'm a junior C# developer; I learned at home and now I have my first job :) I want to buy these books, but what is the correct order to read them? Code Complete: A Practical Handbook of Software Construction Clean Code: A Handbook of Agile Software Craftsmanship Pragmatic Programmer
There is no "correct" order to reading these books. They each focus on different aspects of software engineering. Clean Code - focuses on coding in the small. How to write classes and functions. Code Complete - focuses on the processes of software engineering. Pragmatic Programmer - focuses on working within a team producing software.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156541", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/13724/" ] }
156,669
When the company I work at hired new managers, they proposed that we review someone's code at every meeting. We have meetings every two weeks, so each time one of the developers would show his/her code on the projector and the others would discuss it. I thought this would be great: each developer would be more careful when writing code, and we could share our experience better. But somehow we forgot about this and the offer just remained an offer. What are the benefits of this, and are there any drawbacks?
Code reviews are a great practice. It is probably the best way to learn from mistakes and to see how certain problems are solved by others. It is also one of the best way to maintain quality in a code base. Code reviews happen in many companies, though it is difficult to say that there is a specific process that they all follow. In more formal code review, a senior (or several seniors) will sit together with a developer to review their code, offering suggestions and teaching at the same time. Additional benefits to code reviews (as commented on for this question), include: A great way to teach and learn They are one of the best ways to improve and keep the consistency of a code base (style and idioms) They help ensure all team members understand the style and idioms used in the project and how to use them Code reviews will speed up development as they catch bugs and design flaws early (so, though they may slow down initial development, they pay dividends in later development cycles) There is tooling support that helps streamline the code review process
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156669", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56691/" ] }
156,722
I have been programming in higher-level languages (Python, C#, VBA, VB.NET) for around 10 years, and I have zero understanding of what's going on "under the hood." I am wondering what the benefits of learning assembly are, and how it will aid me as a programmer. Can you please point me to a resource that shows the connection between what I write in higher-level code and what happens in assembly?
Because you'll understand how it really works. You'll understand that function calls are not for free and why the call stack can overflow (e.g., in recursive functions). You'll understand how arguments are passed to function parameters and the ways in which it can be done (copying memory, pointing to memory). You'll understand that memory is not for free and how valuable automatic memory management is. Memory is not something that you "just have", in reality it needs to be managed, taken care of and most importantly, not forgotten (because you need to free it yourself). You'll understand how control flow works at a most fundamental level. You'll appreciate the constructs in higher-level programming languages more. What it boils down to is that all the things we write in C# or Python need to be translated into a sequence of basic actions that a computer can execute. It's easy to think of a computer in terms of classes, generics and list comprehensions but these only exist in our high-level programming languages. We can think of language constructs that look really nice but that don't translate very well to a low-level way of doing things. By knowing how it really works, you'll understand better why things work the way they do.
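Even from a high-level language you can glimpse the first point. A minimal Python sketch (the function and output here are illustrative, not from any particular codebase) shows that every call really does consume a stack frame:

    import sys

    def depth(n=0):
        # every call allocates a new stack frame; frames are not free
        return depth(n + 1)

    try:
        depth()
    except RecursionError as err:
        # CPython caps recursion depth precisely because the stack is finite
        print("call stack exhausted:", err)
        print("recursion limit:", sys.getrecursionlimit())

Assembly makes that cost visible instruction by instruction instead of hiding it behind an exception.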
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156722", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/14856/" ] }
156,803
I was reading the Agile Manifesto Principles. Everything seems clear and reasonable except for one point: Simplicity--the art of maximizing the amount of work not done--is essential. I don't understand this. Does this mean that the work that wasn't done should somehow be exaggerated? If so, it doesn't really make sense.
Remove the parenthetical comment. What remains is "Simplicity is essential", which, by the way, is an application of the principle to its own expression. Simplicity is essential because you have distilled what you really need, removing whatever makes the task at hand heavier, less elegant: complex. I have always interpreted it in the sense of Pascal's take on brevity: "I would have written a shorter letter, but I did not have the time." You have to avoid what is unneeded (from the letter, from the code), and this is an active task, and not an easy one. It is not something that happens by itself.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156803", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/56691/" ] }
156,812
I've been working on software to modify a game's resource files, and I'm planning on releasing it in open source. I'm perfectly fine with releasing my code under gpl v3. However, I'm afraid that that would deter others from writing extensions for my software, as I have had others offer to do so - the community which I'm working with is very much afraid of open source leeches, and as a result, the majority of popular released projects are closed source. As such, I'd be more comfortable with something like BSD or MIT. The codebase is extremely modular, and one window might consist of the end products of many different plugins. One plugin which I am working with is the work (licensed under gpl v3) of a person who I have communicated with, who was enthusiastic of me using his software. As one other person has contributed to that software, I don't think he can simply release his source to me under a different license. In the future, I'd like to have more plugins for a variety of things which would be difficult to manually implement (as doing so would require large amounts of reverse engineering). Tl;dr: As I am in good terms with these people, is there any means for me to use their code as plugins? I would have permission, but licensing would be the issue, since others have contributed to the codebases. If so, would it be okay if i distributed their plugins with my built software (in a plugins folder)? How about placing them in the code repository and visual studio solution? Any other advice would be appreciated. Thanks!
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156812", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59276/" ] }
156,824
Ok, I understand the normal conventions of using verbs with functions and nouns with classes. What about interfaces? Is there any methodology when coming up with interface names that may not be that obvious? Just to make it clear, I'm not talking about whether to put an "I" in front of the name or if to use camelCase or PascalCase. I'm wondering about the method of figuring out a clear, semantic name for an interface. EDIT I'm obsessing on how to name an interface in the clearest way. I guess it just needs to be a noun too because when I think of naming classes I think of the closest "real" world object it can relate to. I suppose real world interfaces are things like a keyboard, mouse, remote control, ATM screen. Those are all nouns. Anyhow, any additional insight on a good way to formulate interface names would be appreciated.
I'd say it depends on what the interface defines. In some cases when the interface is rather specific and detailed, I think a noun is best. Examples are IList , ICollection . Sometimes though an interface is more about adding certain general features to a class. In that case I think an adjective is best. Examples are IDisposable , IEnumerable , ... Maybe another way to think about this is how many "abilities" your interface defines. For example, the IList<T> interface defines these abilities: Add, Clear, Contains, Insert, Remove, ... These are all properties of a list so IList is a good name. IDisposable on the other hand only defines one ability: Dispose. So it is suited for anything that is disposable. Hence the name IDisposable .
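The same convention shows up in Python's standard library: collections.abc names single-ability interfaces with adjective-like words (Iterable, Hashable, Callable) and multi-ability ones with nouns (Sequence, Mapping, Set). A minimal sketch with made-up names:

    from abc import ABC, abstractmethod

    class Disposable(ABC):
        """One ability only, so an adjective-style name reads naturally."""
        @abstractmethod
        def dispose(self): ...

    class Inventory(ABC):
        """A bundle of related abilities, so a noun fits better."""
        @abstractmethod
        def add(self, item): ...
        @abstractmethod
        def remove(self, item): ...
        @abstractmethod
        def contains(self, item): ...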
{ "source": [ "https://softwareengineering.stackexchange.com/questions/156824", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/54244/" ] }
157,104
I work with a team of programmers as the business analyst. We just released version 2.0 of our product and are working on the next version to be released in 3 months (it's an internal software product). Unfortunately version 2.0 has some issues that they have had to fix and we're going to deploy those fixes in a couple weeks. The problem is that we also don't want to deploy the changes that are still being worked on and are not slated to be released for another 3 months. The programmers decided that the way to manage this was that only the code for the defects will be checked in, and the code for the new enhancements will be kept on the developer's local machines until they are done. I will have to get local builds from their machines to test because if they check in the code and we have to push out another patch to fix defects we don't want to include those enhancements just yet. There is also the problem where the same code file contains both defect fixes and enhancements, so they have to copy the code file locally, then make a change to fix a bug and check that one in, then resume work on the enhancements by taking the local copy they made. It seems quite convoluted - is there a better way to handle this type of scenario? We're using Team Foundation Server and Visual Studio 2010.
V2.0 should have had what we used to call a 'steady-state branch' (we used Perforce, not TFS) made for it once it was released. Any fixes for v2 would be made on this branch and then propagated back into the v3 development branch while v3 features were being worked on, i.e. a defect on v2 would also be a defect on v3. Having changes reside on developers' machines for a long time will likely result in an integration nightmare.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157104", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/55327/" ] }
157,161
The other day I reviewed code someone on my team wrote. The solution wasn't fully functional and the design was way over complicated-- meaning stored unnecessary information, built unnecessary features, and basically the code had lots of unnecessary complexity like gold plating and it tried to solve problems that do not exist. In this situation I ask "why was it done this way?" The answer is the other person felt like doing it that way. Then I ask if any of these features were part of the project spec, or if they have any use to the end user, or if any of the extra data would be presented to the end user. The answer is no. So then I suggest that he delete all the unnecessary complexity. The answer I usually get is "well it's already done". My view is that it is not done, it's buggy, it doesn't do what the users want, and maintenance cost will be higher than if it were done in the simpler way I suggested. An equivalent scenario is: Colleague spends 8 hours refactoring code by hand which could have been automatically done in Resharper in 10 seconds. Naturally I don't trust the refactoring by hand as it is of dubious quality and not fully tested. Again the response I get is "well it's already done." What is an appropriate response to this attitude?
Mentality/attitude Lead by example Admonish in private (one-to-one, outside the code review) Encourage a Keep-it-simple mentality among team members Team management Spend more time on the specification of a work item (such as architecture, algorithm outline, UI wireframe, etc) Encourage team members to seek clarification about the scope of a work item Encourage team members to discuss ways of implementing a work item Make reasonable estimates for each work item before starting, and make best effort to meet them Monitor the "improvement" of team members. After being admonished or being shown the right way to do things, see if the team member improve. Skill level Allocate some time for pair-programming sessions or one-to-one training sessions for making the best use of developer tools (refactoring, code-review) Project (risk) management Conduct code-review more often, asynchronously (Note) Note about "asynchronously" The code reviewer should get notifications / invitations to review changes as soon as being committed The code reviewer should have a chance to review the code before any meeting with the developer. If clarification from the developer is needed, do it informally on IM/email without casting a negative opinion
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157161", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20576/" ] }
157,181
Recently I was talking with a colleague who mentioned that his company was working on adding the MVC design pattern as a PHP extension. He explained that they wrote C code for adding Controllers, Models and Views to the language constructs to increase performance. Now I know that MVC is an architectural design pattern that is used widely in web applications, but I still have to come across languages that have a language construct for Controllers for example. IMHO integrating design patterns into a language can emphasize the importance of good OO design. So, Why aren't the most used design patterns (MVC, Factory, Strategy,...etc.) added to the language constructs? If the question sounds too broad then you may limit the question to PHP only. Edit: I'm not implying that one must use a design pattern when developing a project. Actually I promote the methodology of keep it simple as long as it works.
Design Patterns are added to language constructs all the time. Ever heard of the Subroutine Call Design Pattern? No? Me neither. That's because Subroutine Calls, which were a Design Pattern in the early 1950s got added to languages pretty much instantly. Nowadays, they are even present in the machine code instruction sets of CPUs. What about the For Loop Design Pattern? The While Loop Design Pattern? The Switch Design Pattern? The Object Design Pattern? The Class Design Pattern? All of these have been added to some languages.
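For a concrete modern example, the Iterator design pattern has been absorbed into Python's for statement: any object implementing the __iter__/__next__ protocol plugs straight in, with no iterator bookkeeping at the call site. A small sketch:

    class Countdown:
        def __init__(self, start):
            self.current = start

        def __iter__(self):
            return self

        def __next__(self):
            if self.current <= 0:
                raise StopIteration
            value = self.current
            self.current -= 1
            return value

    for n in Countdown(3):
        print(n)   # 3, 2, 1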
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157181", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/50440/" ] }
157,287
We're working on a large product which has been in production for about 5 years. The codebase is.. erm.. working. Not really well but it is working. New features are thrown into production and tested with a small QA. Bugs are fixed, etc. But no one, except me, is writing unit-tests. No one uses the power of "tracking" down bugs by writing unit tests to ensure this special bug (test case) would never, ever occur again. I've talked to management. I've talked to developers. I've talked to everyone in the whole company. Everybody says: "Yep, we must write more unit-tests!" That was about a year ago. Since then I have forced introduction of pre-commit code review ( Gerrit ) and continuous integration ( Jenkins ). I held some meetings about unit-tests and I also showed the benefits of writing unit-tests. But no one seems to be interested. Q1: How do I motivate my fellow co-workers to write unit-tests? Q2: How do I stay motivated to follow my personal code quality standards? (Sometimes it' s really frustrating!) PS: Some frustrating facts (reached in 1 year): Total unit-tests: 1693 Total "example unit-tests": around 50 Done by me: 1521 Edit: Am I expecting too much? Its my first working place and I'm trying to do my best. Edit 2: Based upon all answers I've made a small checklist for myself. I've talked to two developer in private and we had a good and honest talk. One of them told me, like Telastyn said, that he is really uncomfortable with unit-tests. He said that he would like to be "more professional" but he needs a kickstart. He also said that our unit-test meeting with all developers (around 9-11) was good, but it was too crowdy. Meh. Some critics for me, but I'll learn from that. (see answers below concering tdd kata meetings!) The other one said that he is not interested in writing unit-tests. He thinks that his work is good enough for his salary. He don't want to put more effort in. I was quite speechless. Typical 9-5 "worker". Next week I'm going to talk to the other developers. Thanks for your great answers (so far!) and your support. I really appreciate it! I'ved learned a lot, thank you very much!
I noticed that talking about TDD hardly works. People like to see raw results . Saying that "writing tests will reduce development time" is most likely true, but it might not be enough to get anybody convinced. I was in similar position (well, not as bad as yours), and it kind of resolved itself when people started working on my code (note: my code was unit tested, others' not so much). When something stopped working, natural follow-up after local investigation was to ask me what could be the reason . Then we sat, we ran unit tests and saw what happened. If tests were passing, most of the time problems were in the new, untested code. If not, tests were usually able to spot the problem (or at least point us in the right direction). We fixed the bug, tests were up again, everybody was happy. Long story short, a few situations like this transpired and 2 more developers became TDD/testing enthusiasts (there are still a few more to go, but it looks promising). As for advice, you can give a go with TDD kata; simple task to implement using a test first approach as opposed to no tests . Depending on how complex the task is, non-test approach should usually be slower, especially with incremental required changes: Roy's string calculator Bank OCR Edit : OP's comment made me realize there's even stronger argument at his disposal - regression aka returning bugs . Those kind of situations are perfect examples demonstrating how benefical unit tests can be. People like numbers - like I said, telling "unit testing is good" might not be convincing, but arranging data like below might surely be: time spent to implement feature (no tests were written; I assume this happened often so it should be relatively easy to find such example) estimated time to implement feature with TDD (or even tests after approach; doesn't matter - what's important is presence of unit tests) time spent resolving the bug on untested vs tested code One thing to warn you about (this migth be obvious but is worth noting): outcome bias - make sure you don't select example where the only way of spotting bug with test was to write test for that bug. Usually, nobody knows bug will occur upfront, but it's tempting to say "man this bug would be trivial if we had test for X" - it's easy to find a winning tactic for a war after it has ended. Outcome of those examples should be simple question - if you could spent x-hours developing feature Y, why would insist on doing it in 2x ?
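To make the kata idea concrete, here is a minimal Python sketch of the opening steps of Roy's string calculator, written test-first (the function and test names are just one possible phrasing):

    import unittest

    def add(numbers: str) -> int:
        # smallest implementation that the tests below demand
        if not numbers:
            return 0
        return sum(int(n) for n in numbers.split(","))

    class StringCalculatorTest(unittest.TestCase):
        def test_empty_string_is_zero(self):
            self.assertEqual(add(""), 0)

        def test_single_number(self):
            self.assertEqual(add("7"), 7)

        def test_two_numbers_are_summed(self):
            self.assertEqual(add("1,2"), 3)

    if __name__ == "__main__":
        unittest.main()

Doing the same exercise first without tests and then with them gives exactly the kind of before/after numbers described above.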
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157287", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/58747/" ] }
157,407
I want to know what is considered the better way of returning when I have an if statement. Example 1: public bool MyFunction() { // Get some string for this example string myString = GetString(); if (myString == null) { return false; } else { myString = "Name " + myString; // Do something more here... return true; } } Example 2: public bool MyFunction() { // Get some string for this example string myString = GetString(); if (myString == null) { return false; } myString = "Name " + myString; // Do something more here... return true; } As you can see, in both examples the function will return true/false, but is it a good idea to put the else statement as in the first example, or is it better not to put it?
Example 2 is known as a guard block. It is better suited to returning or throwing an exception early if something goes wrong (a wrong parameter or invalid state). In the normal logic flow it is better to use Example 1.
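The same shape in Python, as a sketch (get_string is a stand-in for whatever produces the value): guards handle the wrong-parameter and bad-state cases up front, and the normal flow reads straight down without an else.

    def my_function(get_string):
        if not callable(get_string):
            raise TypeError("get_string must be callable")   # wrong parameter

        my_string = get_string()
        if my_string is None:
            return False                                      # invalid state

        my_string = "Name " + my_string
        # do something more here...
        return True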
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157407", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20693/" ] }
157,482
I recently encountered a situation in our codebase where a different team created a 'god class' containing around 800 methods, split across 135 files as a partial class. I asked the other team about this. While my gut reaction was to nuke it from orbit, they insist that it's a good design, a common practice, and that it promotes 'modularity' and 'ease of implementation' because new developers can bolt on functionality with almost no knowledge of the rest of the system. Is this actually a common practice or in any way a good idea? I'm inclined to take immediate steps to bring this beast down (or at least prevent it from growing further) but I'm willing to believe I'm wrong.
This is not in any way a good idea. Partial classes exist to help the form designer. Using them for any other reason (and arguably for their intended purpose, but that's a different matter) is a bad idea, because it leads to bloated classes that are hard to read and understand. Look at it this way: If a god class with 800 methods in one file, where you know where everything is, is a bad thing--and I think everyone would agree that it is--then how can a god class with 800 methods spread across 135 files, where you don't know where everything is possibly be a good thing?
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157482", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/11211/" ] }
157,526
This blogpost was posted on Hacker News with several upvotes. Coming from C++, most of these examples seem to go against what I've been taught. Such as example #2: Bad: def check_for_overheating(system_monitor) if system_monitor.temperature > 100 system_monitor.sound_alarms end end versus good: system_monitor.check_for_overheating class SystemMonitor def check_for_overheating if temperature > 100 sound_alarms end end end The advice in C++ is that you should prefer free functions instead of member functions as they increase encapsulation. Both of these are identical semantically, so why prefer the choice that has access to more state? Example 4: Bad: def street_name(user) if user.address user.address.street_name else 'No street name on file' end end versus good: def street_name(user) user.address.street_name end class User def address @address || NullAddress.new end end class NullAddress def street_name 'No street name on file' end end Why is it the responsibility of User to format an unrelated error string? What if I want to do something besides print 'No street name on file' if it has no street? What if the street is named the same thing? Could someone enlighten me on the "Tell, Don't Ask" advantages and rationale? I am not looking for which is better, but instead trying to understand the author's viewpoint.
Asking the object about its state, and then calling methods on that object based on decisions made outside of the object, means that the object is now a leaky abstraction; some of its behavior is located outside of the object, and internal state is exposed (perhaps unnecessarily) to the outside world. You should endeavor to tell objects what you want them to do; do not ask them questions about their state, make a decision, and then tell them what to do. The problem is that, as the caller, you should not be making decisions based on the state of the called object that result in you then changing the state of the object. The logic you are implementing is probably the called object’s responsibility, not yours. For you to make decisions outside the object violates its encapsulation. Sure, you may say, that’s obvious. I’d never write code like that. Still, it’s very easy to get lulled into examining some referenced object and then calling different methods based on the results. But that may not be the best way to go about doing it. Tell the object what you want. Let it figure out how to do it. Think declaratively instead of procedurally! It is easier to stay out of this trap if you start by designing classes based on their responsibilities; you can then progress naturally to specifying commands that the class may execute, as opposed to queries that inform you as to the state of the object. http://pragprog.com/articles/tell-dont-ask
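Recast as a runnable Python sketch (class and values invented for illustration), the difference looks like this:

    class SystemMonitor:
        def __init__(self, temperature):
            self.temperature = temperature

        def check_for_overheating(self):
            # the decision lives next to the state it depends on
            if self.temperature > 100:
                self.sound_alarms()

        def sound_alarms(self):
            print("ALARM")

    monitor = SystemMonitor(temperature=120)

    # Ask: the caller inspects state and decides, leaking the rule outside the object
    if monitor.temperature > 100:
        monitor.sound_alarms()

    # Tell: the object is told what to do and applies its own rule
    monitor.check_for_overheating()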
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157526", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37552/" ] }
157,536
I'd like to find a way to write an API that can be accessed from any other programming language via language bindings (or some other framework). Is it possible to do this? If so, which programming language would be the most suitable for writing a "cross-language" API? My goal is to create a single set of functions that I can access from any programming language that I'm working with, so that I won't need to manually re-write the entire API in each language.
You have a few options: Create an HTTP interface; almost everything can talk HTTP, so that will get you a lot of languages. Create something that can be linked into a language runtime; this will be rather time-consuming, as you will need to find a way to connect it to many different languages.
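As a sketch of the first option, here is a minimal HTTP endpoint using only Python's standard library (the port and payload are placeholders); anything that can speak HTTP -- curl, Java, Ruby, JavaScript -- can call it:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ApiHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"message": "hello from the API"}).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), ApiHandler).serve_forever()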
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157536", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/57752/" ] }
157,684
Clojure does not perform tail call optimization on its own: when you have a tail recursive function and you want to have it optimized, you have to use the special form recur . Similarly, if you have two mutually recursive functions, you can optimize them only by using trampoline . The Scala compiler is able to perform TCO for a recursive function, but not for two mutually recursive functions. Whenever I have read about these limitations, they were always ascribed to some limitation intrinsic to the JVM model. I know pretty much nothing about compilers, but this puzzles me a bit. Let me take the example from Programming Scala . Here the function def approximate(guess: Double): Double = if (isGoodEnough(guess)) guess else approximate(improve(guess)) is translated into 0: aload_0 1: astore_3 2: aload_0 3: dload_1 4: invokevirtual #24; //Method isGoodEnough:(D)Z 7: ifeq 10: dload_1 11: dreturn 12: aload_0 13: dload_1 14: invokevirtual #27; //Method improve:(D)D 17: dstore_1 18: goto 2 So, at the bytecode level, one just needs goto . In this case, in fact, the hard work is done by the compiler. What facility of the underlying virtual machine would allow the compiler to handle TCO more easily? As a side note, I would not expect actual machines to be much smarter than the JVM. Still, many languages that compile to native code, such as Haskell, do not seem to have issues with optimizing tail calls (well, Haskell can have sometimes due to laziness, but that is another issue).
Now, I don't know much about Clojure and little about Scala, but I'll give it a shot. First off, we need to differentiate between tail-CALLs and tail-RECURSION. Tail recursion is indeed rather easy to transform into a loop. With tail calls, it's much harder to impossible in the general case. You need to know what is being called, but with polymorphism and/or first-class functions, you rarely know that, so the compiler cannot know how to replace the call. Only at runtime you know the target code and can jump there without allocating another stack frame. For instance, the following fragment has a tail call and does not need any stack space when properly optimized (including TCO), yet it cannot be eliminated when compiling for the JVM: function forward(obj: Callable<int, int>, arg: int) = let arg1 <- arg + 1 in obj.call(arg1) While it's just a tad inefficient here, there are whole programming styles (such as Continuation Passing Style or CPS) which have tons of tail calls and rarely ever return. Doing that without full TCO means you can only run tiny bits of code before running out of stack space. What facility of the underlying virtual machine would allow the compiler to handle TCO more easily? A tail call instruction, such as in the Lua 5.1 VM. Your example does not get much simpler. Mine becomes something like this: push arg push 1 add load obj tailcall Callable.call // implicit return; stack frame was recycled As a sidenote, I would not expect actual machines to be much smarter than the JVM. You're right, they aren't. In fact, they are less smart and thus don't even know (much) about things like stack frames. That's precisely why one can pull tricks like re-using stack space and jumping to code without pushing a return address.
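To see what user code has to do when the runtime offers no tail-call instruction, here is the trampoline trick sketched in Python (which, like the JVM, does not eliminate tail calls): the functions return thunks instead of calling each other directly, and a plain loop bounces between them in constant stack space.

    def is_even(n):
        return True if n == 0 else (lambda: is_odd(n - 1))

    def is_odd(n):
        return False if n == 0 else (lambda: is_even(n - 1))

    def trampoline(result):
        while callable(result):
            result = result()   # each "tail call" becomes one loop iteration
        return result

    print(trampoline(is_even(100000)))   # True, without growing the call stack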
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157684", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/15072/" ] }
157,720
I have lots of ideas for products to be built. The problem is that I have less than a year of professional work experience and I am afraid of getting judged negatively in the future based on what I produce now. I have no clue if my code is any good. I am not familiar with any of the coding patterns. All I know is to build products that work. I want to have a public profile in github for my future projects and I will try hard to make sure that it is well commented, is optimized and clean. These are the things that I fear getting exposed publicly: My code may not be highly optimized. Wrong usage of certain libraries or functions which coincidentally get the job done. Not knowing or following any coding pattern. Lots of bugs/ not considering corner, edge cases Fundamental lack of understanding and application of certain concepts such as thread safety, concurrency issues in multi-threaded programming, etc. Should I go ahead and get started or continue to stick to building stuff locally and privately till I get more experience. I don't want the mistakes made here to haunt my career prospects in the long run.
After 30 years of professional software development, I still create bugs. I still find patterns I don't know. I still learn from my colleagues, and encounter stuff I don't know every day. Most experienced developers will judge you on how you respond to issues and criticism, whether you learn from your mistakes and improve your product to meet the users' or community's needs, whether you admit what you don't know and seek to improve. One of the best skills for a developer is willingness to ask the dumb questions and to look a bit foolish at times in order to find good answers as quickly as possible. Everyone who is experienced and very proficient was once where you are now. You will learn much faster if you put your work out there and work with other people. There is no reason to wait. Make your project open. Better yet, contribute to other open projects and learn from them.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157720", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/2158/" ] }
157,771
I've always used git before, but I want to contribute to python so now I have to learn mercurial and I find it very frustrating. So, I've made a couple of small patches and I wanted to track them as commits in my local mercurial repository. Apparently there are 4 ways to handle branching in mercurial . 1 and 4 looked completely ridiculous to me, named branches seem to be heavyweight and I feel that I'm not supposed to use them for quick 1-commit fixes, so I used bookmarks. Now, my patch gets rejected and I want to remove one of my bookmark-branches from my repository. OK, in git I would just force-delete my branch and forget about it, so I delete my bookmark and now I have following problems: TortoiseHG and hg log still show that commit and default branch has 2 heads. And if I understand correctly, you can't delete commits in hg without additional plugins. Mercurial has not only hashes, but also revision numbers. As I've added a couple of my own commits, all commits pulled after that have different revision numbers from the main central repo. I do hg update after pulling to move my master bookmark to the latest commit automatically, but I couldn't find a way to do that in TortoiseHG. What am I doing wrong? Is this normal and expected and should I just ignore these issues? Or how am i supposed to work with my branches?
Personally, for your scenario, I wouldn't bother even creating a branch, unless I was working on multiple changes, each of which would need to be accepted by the core developers. Just clone their repository and work in it, then make a pull request. If I were to use a branch then I'd rather use named branches. They were designed for this exact purpose, where bookmarks weren't. I don't see why you'd consider it heavy-weight. Mercurial have a whole page on their wiki describing different ways of " Pruning Dead Branches ". The "Using clone" option should satisfy your requirements. To answer your more specific issues ... TortoiseHG and hg log still show that commit and default branch has 2 heads. And if I understand correctly, you can't delete commits in hg without additional plugins. This is a mistake I made with Mercurial when I was new to it. Do not be afraid of those additional plugins. Some of them are very powerful tools and often they later get pulled into the core product. That just how Mercurial works. If you need one to perform a specific task, get it and use it. Revising history is considered a bad thing in the Mercurial world, so the vanilla product doesn't always have everything that a Git user thinks it should have, but there are plenty of plugins for those who want to use the application but have different priorities. Mercurial has not only hashes, but also revision numbers. As I've added a couple of my own commits, all commits pulled after that have different revision numbers from the main central repo. Do not worry about revision numbers. They are a matter of convenience, no more. The hash codes are the important identifiers that will pass from repo to repo. Revision numbers are inconsistent across repositiories. Check out Hg Init for a good explanation. Revision numbers are just a handy and more memorable shortcut when working with a single repo. I do hg update after pulling to move my master bookmark to the latest commit automatically, but I couldn't find a way to do that in TortoiseHG. When using TortoiseHG, use the Workbench rather than the other tools. Everything (almost) is in there. Update is in the revision context menus. It's not always intuitive, but there is a good guide at the link above and as you get used to it, you end up clicking away with confident abandon.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157771", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/24215/" ] }
157,922
I have a friend who has a slightly greater amount of programming experience as me. We were talking about all the different programming technologies we use and Interface Builder came up in conversation. Being of no programming background except for what I've taught myself, I personally believe that IB and all it's features ( IBOutlets , IBActions ) help coders of my skill level (and all skill levels, for that matter) complete projects of theirs in less time. His view of IB is a little enthusiastic. He believes that coders that utilize Interface Builder are "cheating" in the fact that they don't have to lay out interfaces by hand. Question: Should using a GUI builder to lay out interface elements be considered "cheating" (since most programming originally required laying interfaces out by hand in code)? Why?
It's not cheating. Programs like IB are tools. Use the right one for the job. There is no need to get dogmatic about it. If you are more effective using such a tool, use it. The only caveat is that you should learn the trade-offs when making your decisions. Doing layouts by hand give you precise control at the expense of drag-and-drop ease. Drag-and-drop tools let you do many things quick and easy, but may make your code harder to maintain over time. Personally, I've never had success or derived much pleasure from using a drag and drop UI design tool, but that's just me. I find laying GUIs out by hand is the most effective for me , and yields a code base that is easier to maintain over time. Others have the opposite experience.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157922", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/35862/" ] }
157,943
I've started reading the design patterns book by the GoF. Some patterns seem very similar, with only minor conceptual differences. Do you think some of the many patterns are unnecessary in a dynamic language like Python (e.g. because they are replaced by a dynamic language feature)?
Peter Norvig demonstrates that 16 out of the 23 design patterns found in the GOF book are invisible or simpler in dynamic languages (he focuses on Lisp and Dylan). Since you mentioned Python, there is a nice presentation by Alex Martelli about the topic. Also related with Python, there is a nice blog post demonstrating six design patterns in idiomatic Python . I also keep a github repository with implementations (by other people) of the most common design patterns in Python .
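For instance, the Strategy pattern largely collapses into first-class functions in Python -- no Strategy interface, no concrete strategy classes, no context wiring (the data below is made up):

    def by_price(item):
        return item["price"]

    def by_name(item):
        return item["name"].lower()

    inventory = [
        {"name": "widget", "price": 9.5},
        {"name": "Gadget", "price": 3.0},
    ]

    print(sorted(inventory, key=by_price))   # the "strategy" is just a callable
    print(sorted(inventory, key=by_name))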
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157943", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/47365/" ] }
157,968
I'm working on, and adding to, a GitHub-hosted project that includes this LICENCE.md (apparently the MIT licence verbatim): Copyright (c) 2012 [Acme Corp] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. I've made changes, improvements etc (say 10% of the code, in ballpark figures) and publish the code to my own GitHub fork. What should I do with this copyright notice? I'd like to update it (eg, just to add the name of my own organisation), but it says not to. How are these things normally managed? Add a separate copyright file?
You've got some options, jump to the end for the summary. So let's break this one down... Copyright (c) 2012 [Acme Corp] This is the Copyright notice and it belongs to Acme Corp. It was claimed in 2012, which is relevant because Copyright does eventually expire. If the claim was actually given to "Acme Corp", ie. it was boilerplate cut & pasted from the MIT example, then you could almost claim that there is NO copyright on this work. Acme Corp is a fictitious organization, and failing to update the boilerplate puts the claim on dubious grounds. But let's be good citizens, and grant the copyright to the actual claimants. Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: This next section, the Permission notice , is stating that you can do just about anything you want with the code, including modifying the licensing agreement! The catch is that you can't change the license on the existing code - you can change only what you modify. The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. This part simply means you have to persist the notices that are above. So, what can you do? You can and should lay copyright claim to the code you wrote and / or modified. To do so with the same MIT license: Just add your copyright notice after the 2012 Acme Corp copyright notice in the files you modified. You can license your modifications under a different license, if you so choose. To use a different license: Add your copyright and license notice after the entire 2012 Acme Corp block (copyright, permission / license, exclusion of warranty) in the files you modified. In the simplified case of your question, here's what you need to do: Emphasis added to highlight the differences. Original work Copyright (c) 2012 [Acme Corp] Modified work Copyright 2012 Steve Bennett Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157968", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/37877/" ] }
157,991
I am new to StackExchange, but I figured you would be able to help me. We're crating a new Java Enterprise application, replacing an legacy JSP solution. Due to many many changes, the UI and parts of the business logic will completely be rethought and reimplemented. Our first thought was JSF, as it is the standard in Java EE. At first I had a good impression. But now I am trying to implement a functional prototype, and have some really serious concerns about using it. First of all, it creates the worst, most cluttered invalid pseudo-HTML/CSS/JS mix I've ever seen. It violates every single rule I learned in web-development. Furthermore it throws together, what never should be so tightly coupled: Layout, Design, Logic and Communication with the server. I don't see how I would be able to extend this output comfortably, whether styling with CSS, adding UI candy (like configurable hot-keys, drag-and-drop widgets) or whatever. Secondly, it is way too complicated. Its complexity is outstanding. If you ask me, it's a poor abstraction of basic web technologies, crippled and useless in the end. What benefits do I have? None, if you think about. Hundreds of components? I see ten-thousands of HTML/CSS snippets, ten-thousands of JavaScript snippets and thousands of jQuery plug-ins in addition. It solves really many problems - we wouldn't have if we wouldn't use JSF. Or the front-controller pattern at all. And Lastly, I think we will have to start over in, say 2 years. I don't see how I can implement all of our first GUI mock-up (Besides; we have no JSF Expert in our team). Maybe we could hack it together somehow. And then there will be more. I'm sure we could hack our hack. But at some point, we'll be stuck. Due to everything above the service tier is in control of JSF. And we will have to start over. My suggestion would be to implement a REST api, using JAX-RS. Then create a HTML5/Javascript client with client side MVC. (or some flavor of MVC..) By the way; we will need the REST api anyway, as we are developing a partial Android front-end, too. I doubt, that JSF is the best solution nowadays. As the Internet is evolving, I really don't see why we should use this 'rake'. Now, what are pros/cons? How can I emphasize my point to not use JSF? What are strong points to use JSF over my suggestion?
There is at least one very good reason for considering JSF. It is a standard part of the Java EE stack, and hence will be available - and working - in ALL Java EE containers for a very, very long time. And maintained too, without you needing to do so if you have adhered strictly to the Java EE specification. If this is a concern for you, then you should consider it. Most software live longer than their designers think, especially if taken in consideration when being written.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/157991", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/59999/" ] }
158,052
I just graduated with a degree in CS and I currently have a job as a Junior .NET Developer (C#, ASP.NET, and web forms). Back when I was still in university, the subject of unit testing did get covered but I never really saw the benefits of it. I understand what it's supposed to do, namely, determine whether or not a block of code is fit for use. However, i've never actually had to write a unit test before, nor did I ever feel the need to. As I already mentioned, I'm usually developing with ASP.NET web forms, and recently I've been thinking of writing some unit tests. But I've got a few questions about this. I've read that unit testing often happens through writing "mocks". While I understand this concept, I can't seem to figure out how I'm supposed to write mocks for websites that are completely dynamic, and where almost everything depends on data that comes from a database. For example: I use lot's of repeaters who have ItemDataBound events etc. (again depending on data that is "unknown"). So question number 1: Is writing unit tests for ASP.NET web forms something that is done often, and if it is: how do I resolve the "dynamic environment" issue? When I'm developing I go through a lot of trial-and-error. That doesn't mean I don't know what I'm doing, but I mean that I usually write some code, hit Ctrl F5 and see what happens. While this approach does the job most of the time, sometimes I get the feeling that I'm being a little clueless (because of my little experience). I sometimes waste a lot of time like this as well. So, question number 2: Would you guys advise me to start writing unit tests? I think it might help me in the actual implementation, but then again I feel like it might slow me down.
No. The concept behind unit tests is based on a premise that has been known to be false since before unit testing was ever invented: the idea that tests can prove that your code is correct. Having lots of tests that all pass proves one thing and one thing only: that you have lots of tests which all pass. It does not prove that what the tests are testing matches the spec. It does not prove that your code is free from errors that you never considered when you wrote the tests. (And the things that you thought to test were the possible issues you were focusing on, so you're likely to have gotten them right anyway!) And last but not least, it does not prove that the tests, which are code themselves, are free from bugs. (Follow that last one to its logical conclusion and you end up with turtles all the way down.) Dijkstra trashed the concept of tests-as-proof-of-correctness way back in 1988, and what he wrote remains just as valid today: It is now two decades since it was pointed out that program testing may convincingly demonstrate the presence of bugs, but can never demonstrate their absence. After quoting this well-publicized remark devoutly, the software engineer returns to the order of the day and continues to refine his testing strategies, just like the alchemist of yore, who continued to refine his chrysocosmic purifications. The other problem with unit testing is that it creates tight coupling between your code and the test suite. When you change code, you'd expect some bugs to show up that would break some tests. But if you're changing code because the requirements themselves have changed, you'll get a lot of failing tests, and you'll have to manually go over each one and decide whether or not the test is still valid. (And it's also possible, though less common, that an existing test that should be invalid will still pass because you forgot to change something that needed to be changed.) Unit testing is just the latest in a long line of development fads that promise to make it easier to write working code without actually being a good programmer. None of them have ever managed to deliver on their promise, and neither does this one. There is simply no shortcut for actually knowing how to write working code. There are some reports of automated testing being genuinely useful in cases where stability and reliability are of paramount importance, for example the SQLite database project. But what it takes to achieve their level of reliability is highly uneconomical for most projects: a test-to-actual-SQLite-code ratio of almost 1200:1. Most projects can't afford that, and don't need it anyway.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/158052", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/52956/" ] }
158,097
I've been working on a project for the past six months at a client site, since they require data confidentiality and didn't want us to work at our own office. When I showed up alone at this client site, I was told that I needed to finish the project in two months. Since the client is not a software company, and because of various policies, it took around 20-25 days just to give me the rights on my machine to install things like Eclipse, Tomcat, etc. Even after the delay in getting the environment set up, they still expected me to complete the project in the same two-month period. They did not give me any requirement documents, but since I'm working at the client site, we held regular meetings to discuss the requirements. After six months the application is still not finished, and everyone is blaming me, but they fail to realize that we have added many more features than those discussed in the first few meetings. I've had to redo many things during this period; e.g. I separated a form into two sections, and a few weeks later they asked me to merge the two forms again because it was confusing, and so on. The scope of the application is increasing every day, but they still think it's a two-month project that got delayed. When I told them that the scope had increased, they asked why I didn't gather all the requirements at the beginning. I already work 11-12 hours every day and travel 3-4 hours, and now they expect me to come in on Saturdays as well. I have to do everything here: gather requirements, design, code and test. Please advise me on what to do in such a case. Additional details: We did have a list of deliverables, but then they added a few more things to it, saying these were also important. They also changed a few deliverables. They don't even have their own UAT server; they test on my development machine itself, via its IP address.
This is a failure of your manager. You, as a contractor, should not have been placed into a situation with such a tight deadline by your company without a firm set of requirements up front, in writing. None of this 'they added features afterwards' nonsense - each time that happened, they should have signed off on an updated schedule that you gave them. Your manager - since the customer is planning to meet with him - needs to get from the customer a specific set of requirements: the project should do A, B, C, D, and E. And after it does, it is complete. The customer's signature needs to be on that document, agreeing to that list. You should have had that from the beginning. If your manager does not back you up and support you in this - and I don't say this very often - start looking for another job. Because you'll probably end up being the scapegoat for the whole mess. And if you are willing to work 11-hour days plus a 3-hour commute, it's apparent you're a very dedicated individual who deserves better.
{ "source": [ "https://softwareengineering.stackexchange.com/questions/158097", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/20163/" ] }
158,109
I've done some research on my own and understand the basic concept, but some insights can only be gained through actual experience. What are the advantages of myBatis that would make it worth learning a new framework? In what cases would you avoid using it?
Consider what you're trying to achieve. Typically, the Command Query Responsibility Segregation (CQRS) model works well for complex domains. The reason is that you're typically trying to do one of two things: (1) create/update/delete some complex domain entities, or (2) run analytic fetch queries (i.e. summation/aggregation queries). Hibernate works well for case 1, allowing you to just make a POJO and persist/update it. It also does this quickly, unless your domain is quite large. myBatis is great for fetch queries (case 2) where you just want an answer. Hibernate would attempt to load the entire object graph, and you'd need to start tuning queries with lazy-loading tricks to keep it working on a large domain. This matters when running complex analytic queries that don't even return entity objects: Hibernate only offers SQLQuery and bean transformers for this case, with heavyweight default types like BigDecimal, while myBatis can easily map onto a simple non-entity POJO. These two cases are the difference between commands, where you want to change the domain data, and queries, where you just want to fetch some data. So, consider these two cases and what your application does. If you have a simple domain and just fetch information, use myBatis. If you have a complex domain and persist entities, use Hibernate. If you do both, consider a hybrid approach. That's what we use on our project, which has thousands of entities, to keep it under control. ;)
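To make the fetch-query side concrete, a myBatis mapper for a small aggregation report can look roughly like this (just a sketch: the table, columns and class names are invented, and it assumes the mapUnderscoreToCamelCase setting is enabled so order_count maps onto orderCount):

import java.math.BigDecimal;
import java.util.Date;
import java.util.List;

import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;

// OrderSummary.java - a plain result holder, not a mapped entity
class OrderSummary {
    public String region;          // public fields keep the sketch short
    public long orderCount;
    public BigDecimal totalAmount;
}

// ReportMapper.java - myBatis runs the hand-written SQL and maps each row onto an OrderSummary
interface ReportMapper {

    @Select("SELECT region, COUNT(*) AS order_count, SUM(amount) AS total_amount "
          + "FROM orders WHERE order_date >= #{since} GROUP BY region")
    List<OrderSummary> summarizeByRegion(@Param("since") Date since);
}

You register the mapper in the myBatis configuration and call sqlSession.getMapper(ReportMapper.class).summarizeByRegion(since); getting the same result out of Hibernate means going through SQLQuery and a result transformer by hand.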
{ "source": [ "https://softwareengineering.stackexchange.com/questions/158109", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/53818/" ] }
158,260
I want to know: are there any design patterns for the web besides MVC? I know there are design patterns such as Registry, Observer, Factory, ActiveRecord, ... and that MVC is a set of other design patterns plus a folder structure. Is there a design pattern that, like MVC, is a set of other design patterns? Edit: my programming language is PHP.
There are different patterns in software development; MVP, MVVM, MVC, etc. are some of the well-known ones. However, you have to define the specific problem you intend to solve or the technology you intend to use. Each of these patterns is good for solving a specific set of problems. For example, the MVP (Model View Presenter) pattern helps to introduce separation of concerns into ASP.NET WebForms development. It consists of splitting up the responsibilities for gathering, displaying, and storing data from a web page into separate objects: a Model object, a View object, and a Presenter object. The most famous general cookbook of design patterns is the Gang of Four (GoF) design patterns book. Edit: I suppose that you are more interested in implementing design patterns on the .NET platform.
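To make the MVP split concrete, here is a deliberately tiny sketch (written in Java only because it is compact; the names are invented, and the same shape carries over to ASP.NET WebForms, where the page/code-behind plays the View, or to a PHP template plus a presenter class):

import java.util.List;

interface CustomerView {                 // the View only knows how to display things
    void showCustomers(List<String> names);
    void showError(String message);
}

interface CustomerModel {                // the Model only knows how to fetch/store data
    List<String> loadCustomerNames();
}

class CustomerPresenter {                // the Presenter wires the two together
    private final CustomerView view;
    private final CustomerModel model;

    CustomerPresenter(CustomerView view, CustomerModel model) {
        this.view = view;
        this.model = model;
    }

    void onPageLoad() {
        try {
            view.showCustomers(model.loadCustomerNames());
        } catch (RuntimeException e) {
            view.showError("Could not load customers: " + e.getMessage());
        }
    }
}

Because the Presenter talks only to the two interfaces, it can be exercised with a fake View and a fake Model, and neither the Model nor the View knows the other exists.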
{ "source": [ "https://softwareengineering.stackexchange.com/questions/158260", "https://softwareengineering.stackexchange.com", "https://softwareengineering.stackexchange.com/users/60149/" ] }