source_id | question | response | metadata
---|---|---|---
440,887 | When reviewing code, I sometimes see blocks like this (I happen to be using JavaScript since it is widely understood): if (!myVar || myVar === null || myVar === undefined) {
...
} Those who know JavaScript decently well will immediately recognize that myVar === null and myVar === undefined will never be true if they are evaluated, because if myVar is either null or undefined, execution will proceed immediately to the first line inside the if block. Some might argue that the code above provides greater clarity than simply using if (!myVar) ... ,
and junior developers may spend less time interpreting it. And clarity in code is obviously paramount. But I find this type of code distasteful, since in my view it promotes a low-confidence approach to software development. And I hardly think I would be alone in that. Conversely, I can think of times when I have passed an options object to an obscure function with what I know are the function's default values. This is another example of unnecessary verbosity for the sake of clarity, and it does potentially discourage developers (including me) from being confident and clear-eyed about how the function works. But I subjectively find this more acceptable, since the functionality I'm leveraging is more obscure. All aspects of code exist somewhere on the continuum of common-to-obscure. So the right approach to what I'm calling "clarity vs. confidence" is probably somewhere on a continuum as well. But developers who write and review code in the real world need to know where to place themselves and their teams on that continuum. My Google search terms didn't turn up much on this topic. Have published experts in coding best practice explored when to write "unnecessary" code vs. not? There can be value in crowd-sourcing opinions, but StackExchange discourages questions that elicit opinion-based answers. Only an answer that includes citations will be selected as the "best answer". | If you can't trust me to figure out what !myVar does then either teach me if (!myVar) { // if null or undefined
...
} Or don't if (myVar === null || myVar === undefined) {
...
} Trying to do both at the same time will just leave me more confused. If I don't know what !myVar does, seeing the rest of your code doesn't tell me what it does. It just tells me there's also more stuff going on to worry about. That just adds to the intimidation. To prove that point I offer the comments of Doc Brown & Jon Wolski, who point out that the equivalent code is actually: if ( myVar === null
|| myVar === undefined
|| myVar === false
|| Number.isNaN(myVar)
|| myVar === 0
|| myVar === ""
) {
...
} Which I didn't even spot. So don't underestimate how confusing this code is. That means the comment I added is not equivalent even if it was your intent. I could try to fix that but generally I find comments that tell me what code does distasteful. Mostly because code will get refactored and the comment won't, and now I really hate you. Better comments tell me why it's doing what it's doing. Show me your intent. Help me understand that and you'll free up the brain cells I need to do a web search and learn this standard stuff for myself. Consider: // Set default if needed Tell me that and I know what you're trying to do even if your code is wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440887",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/419355/"
]
} |
440,905 | I am refactoring a legacy codebase of an Angular SPA.
The central entity of the app is the chat room, and there is a plethora of ways to enter a chat from different views all across the app. Due to our special use case, entering a chat is a multi-step process with a fair amount of possible edge cases. To top it all off, it has to work on mobile browsers, too. Before my refactoring, large chunks of the chat joining procedures were simply implemented within the respective buttons' click handlers. Lots of duplicated code, business logic in UI components - I don't think I need to explain why it was hard to maintain. Now, what I set out to do was to implement a central joinChat method in a service. One that (of course) is separated into smaller subroutines, but which you can read from top to bottom and understand the flow. However, while doing that, I noticed more and more details on how each button triggers a slightly different chat joining procedure, regarding which confirmation dialogs to show, which pre-checks to do, etc. Especially the mobile version works quite differently. Now, what I ended up with is a joinChat method with too many configuration parameters. Sure, it's way less duplicated code now and you don't have to search all over the app anymore to change a detail in the flow, but I feel like I just transformed one anti-pattern into another. Even worse, instead of having business logic inside the UI, I now have UI inside the business logic, as subroutines of joinChat trigger dialogs and alerts. If the designer comes along and demands that a certain confirmation dialog should e.g. be a modal on the desktop version, but a separate page on the mobile version, things will get messy. How do I approach this elegantly? My current idea would be to let the business layer contain only atomic steps of the chat joining process and move the orchestration of these steps somewhere else. Where exactly? I see two options: Back into the UI components. It seems like a sin, but also like a pragmatic thing to do, given that the flow is so interdependent with the UI components anyway. The use case "show a modal on desktop, but a separate page on mobile" would be trivial to implement. On the downside, a flow that encompasses multiple screens would result in hard-to-read code, as someone who wants to understand it would have to reverse-engineer it screen by screen (one of the biggest issues I faced with the old codebase). Into a controller layer between the UI and the business layer. Seems cleaner. Use cases can be read from top to bottom, even if they traverse multiple screens. Dependencies on user interaction would be solved using a delegate pattern. However, most other parts of the app don't require such a layer as they are pretty straightforward CRUD, and it feels inconsistent to now introduce it only for this one use case. Another problem I see is that it would contain many similar entry points, e.g. joinChatFromContactsList , joinChatFromTicket , joinChatFromCall , joinChatFromCallMobile etc... The main aspects that I want to optimize for are readability (because we are a growing company and new employees should understand the code easily) and flexibility (because we have to be able to react to changing requirements quickly). | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/440905",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/397914/"
]
} |
441,114 | We have product X which has the following semver versions: 1.0.0 , 1.1.0 , 1.2.0 , ..., 1.10.0 . In version 1.5.0 , we introduced a feature which changes the way the application is consumed in a big way. Even though the change is backwards compatible (which means a minor bump would be appropriate), we didn't realise at the time that it would change the way our application is used so much. So in hindsight, we should have done a major bump to 2.0.0 . Is there any guidance around how to handle this situation? Should we just release a 2.0.0 now and deprecate all versions before 1.5.0 even though they're perfectly fine and supported? Or should we "republish" 1.5.0 as a 2.0.0 and then "replay" the later 1.x releases as 2.1.0 , 2.2.0 , 2.3.0 , etc., and carry on like that? My overall goal is that I want consumers to know that if they are on a version that is before 1.5.0 , they really should look at upgrading, because " 1.5.0 " of the product introduced a big backwards-compatible change. | You are mixing two different version numbers here: the " Technical Version Number " and the " Marketing Version Number ". Don't do that. Conflating those two version numbers is going to cause confusion. If you want to signal to your users a cool new feature, increase the Marketing Version Number. Leave the Technical Version Number to proper Semantic Versioning, i.e. only to signal changes to the public API. Take Microsoft Windows NT, for example. Microsoft Windows 2000 to Microsoft Windows XP was a major user-visible change with a significantly different user experience, but from an API perspective, it was fully backwards-compatible, so the Technical Version Number only got bumped from 5.0 to 5.1. Windows Vista had much more strict security rules which broke quite a number of applications, so it had Technical Version Number 6.0. Windows 7 was only a minor update to Windows Vista, and so the Windows release with the Marketing Version Number 7 actually had the Technical Version Number 6.1 . Windows 8 had 6.2, Windows 8.1 had 6.3. From Windows 10 on, Microsoft is using a rolling release model, so all versions of both Windows 10 and Windows Server have 10.0. Even Windows 11 still has the Technical Version Number 10.0. So, as you can see, the Technical Version Number used to signal backwards (in)compatibility and the Marketing Version Number used to signal "buy this upgrade, it is shiny!!!" don't have to be the same. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441114",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/420642/"
]
} |
441,123 | I'm working on a C# library where the API provides several public interfaces and a single concrete factory class (itself an interface implementation). This factory provides implementations of the various interfaces. Other than the factory, none of the actual implementations are available to the user. As is the nature of using interfaces, this decouples my users from being concerned with how I variously decide to refactor the implementations. I know that's good practice in itself, but I do feel I'm maybe being a little restrictive not making the implementations public. Is this common/accepted practice? What are the pros and cons of this approach? | While this may be an opinion-based question, and the real answer is the same as with many endeavors in software design and development ("it depends"), I'm going to say yes, do not expose these implementation details, by adhering to the Principle of Least Privilege . If and only if you find your design has good reason to expose the implementation for extension -- adherence to another design guidepost known as the Open/Closed Principle -- should you make that available to other parties. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441123",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/180349/"
]
} |
441,373 | As has been covered to the point of parody , heavily object-oriented languages, such as C# or Java, tend to lack the feature of having functions as a primitive type. You can argue about whether or not their functions are first class , but the pattern is always that you cannot reference a function on its own; it must always have a companion object. C# calls these companions "delegates" and I believe Java calls them "runnables". In a language that doesn't need these companions, higher-order functions can be called with a function directly used as one of their arguments. My question is this: Why, from a language design point of view, do heavily object-oriented languages tend to lack the ability to reference a function on its own? Why must they be carried around by a companion object? For example, what benefits do these companions provide over the functional-programming-like approach of being able to have a function be its own thing? What costs do they avoid? To pre-empt an objection, I do not believe that this question is opinion-based. The Java and C# designers are very well known for documenting their decisions and I am absolutely certain that they were familiar with heavily functional languages such as Scheme. There'll surely be a good and documented reason why they chose their current approach. In fact, with the rise of more functional languages that intermix with Java, such as Clojure, I wouldn't be surprised if there's already documentation of the costs that functional Java-like languages have paid by choosing the "functions don't need to be carried around by an object" path. Finally, please note that I'm not specifically asking about C# or Java. Those two languages just happen to be the best examples of languages that insist that functions must be referenced with an object. | IMO... Because Java and C# are not true OO languages. Functional programming was not in vogue when they were designed. I agree with Jörg W Mittag , neither C# nor Java are true object-oriented languages . They're hybrid procedural/OO languages attempting to improve on C++ (and C# attempting to improve on Java). They have both a traditional primitive type system and a class system. For example, int is a primitive type in Java and requires a wrapper to behave like an object . Such hybrid design means the language designers get to pick and choose what is and is not an object. The designers of Java and C# didn't feel functions needed to be objects, I'm guessing because functional programming wasn't in vogue back then and it was a little faster to not have them be objects. So they weren't objects. In contrast, true object-oriented languages (Smalltalk, Ruby, Scala, Eiffel, Emerald, Self, Raku) treat everything as an object which responds to methods. Everything. That includes methods and procedures . They're objects so they can be referenced. Methods being objects are inherent to OO design because everything is an object; the language designers would have to deliberately do otherwise. For example, Ruby is a pure OO language. Since everything is an object, it has Method objects . Since everything is an object, the code of a Method is a Proc object . Lambdas are just special Procs. func = lambda { |x| x**2 } That's essentially syntax sugar for Proc.new . func is an instance of Proc. I can call methods on it. p func.call(4) # 16
p func.class # Proc I can ask it for its call method. That is a Method object. method = func.method(:call)
p method.class # Method
p method.call(4) # 16 Anonymous functions and method references are a benefit of Ruby sticking to OO design principles. In contrast, let's look at how Java and C# implemented lambdas and function references. Java was designed in the early 90s as a better C++ . C++ is a hybrid procedural/OO language with many, many design problems, but it was extremely popular and influenced what many people thought OO was. Java inherited these problems. Because Java is a hybrid language, they picked and chose what was and was not an object. For whatever reason, probably micro-optimization and a lack of appreciation for functional programming, they decided that methods were not objects and could not be referenced, and there were no bare functions. Java attempted to address the need to pass around snippets of functionality with anonymous classes. They finally realized this is awkward and added lambda expressions explaining ... Lambda expressions let you express instances of single-method classes more compactly. One issue with anonymous classes is that if the implementation of your anonymous class is very simple, such as an interface that contains only one method, then the syntax of anonymous classes may seem unwieldy and unclear. In these cases, you're usually trying to pass functionality as an argument to another method, such as what action should be taken when someone clicks a button. Lambda expressions enable you to do this, to treat functionality as method argument, or code as data. At the same time they added method references ... You use lambda expressions to create anonymous methods. Sometimes, however, a lambda expression does nothing but call an existing method. In those cases, it's often clearer to refer to the existing method by name. Method references enable you to do this; they are compact, easy-to-read lambda expressions for methods that already have a name. What you can do is use them to construct instances of any interface which is a FunctionalInterface . Functional interfaces provide target types for lambda expressions and method references. Runnable r = () -> System.out.println("Hello World!");
// The equivalent Runnable class.
Runnable r = new Runnable() {
@Override
public void run() {
System.out.println("Hello World!");
}
}; There are dozens of FunctionalInterfaces which seem to be working around Java's non-OO primitive types. DoubleToIntFunction : Represents a function that accepts a double-valued argument and produces an int-valued result. The hybrid nature of Java means rather than flowing naturally from the design, a complex series of adapters is necessary. Point is, it was bolted on later and it's complicated. C# was designed as a better Java . It fixed some mistakes, and repeated others. When you go back and look, C# version 1.0, released with Visual Studio .NET 2002, looked a lot like Java. As part of its stated design goals for ECMA, it sought to be a "simple, modern, general-purpose object-oriented language." At the time, looking like Java meant it achieved those early design goals. C# 3.0 added lambda expressions . As with Java, the relationship between lambda expressions and objects is complicated ... Any lambda expression can be converted to a delegate type. The delegate type to which a lambda expression can be converted is defined by the types of its parameters and return value. If a lambda expression doesn't return a value, it can be converted to one of the Action delegate types; otherwise, it can be converted to one of the Func delegate types. For example, a lambda expression that has two parameters and returns no value can be converted to an Action<T1,T2> delegate. A lambda expression that has one parameter and returns a value can be converted to a Func<T,TResult> delegate. This complex system of annotation types and target types and automatic casting and blurring between what is and is not an object is a common problem of hybrid languages, particularly C++-derived languages which mix procedural and object-oriented principles with a C-style type system without choosing one clear paradigm. And that's my point. True object-oriented languages naturally have function references because functions are objects. Hybrid languages get to pick and choose what is and is not an object; if the language designer didn't think function references were needed, you don't get function references. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441373",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/373159/"
]
} |
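A minimal Python illustration of the answer's central point (an editor's sketch, not part of the original post): in a language where functions are ordinary objects, referencing them needs no companion object. Python's functions, like Ruby's Methods and Procs, are first-class objects:
def square(x):
    return x ** 2
f = square               # the function itself is an object we can reference
print(type(f))           # <class 'function'>
print(f(4))              # 16
print(f.__call__(4))     # 16 - even "calling it" is just a method on the object
g = "hello".upper        # bound methods are objects too
print(g())               # HELLO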
441,474 | I have recently been learning about programming languages, and I wonder how compilers work when the language itself does not allow recursion - how the compiler or the runtime checkers make sure that there is no recursion. I learned that compilers don't need to understand recursion when translating the code, but how does one work without understanding it? I tried thinking about allocating a specific stack size to avoid recursion, but then I have no idea how to determine the size. I assume that it is not that the language lacks a recursion feature, but that the compilers or checkers don't allow it. | Recursion can only be programmed either by having a call to function A within the definition of A itself (direct), or by having function A call function B, and function B call function A (indirect). It is easy to forbid both possibilities simply by requiring that every call to a function must occur after the definition of that function is complete . The technical term is forward referencing ; every recursive program must contain at least one syntactical forward reference. By forbidding the forward reference, you implicitly also disallow any recursion. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441474",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/421415/"
]
} |
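A hedged Python sketch of the forward-reference argument in the answer above (Python itself permits recursion; the comments only mark where the forward references occur):
def factorial(n):
    # direct recursion: the body refers to factorial before its own
    # definition is complete - a forward reference
    return 1 if n == 0 else n * factorial(n - 1)

def is_even(n):
    # indirect recursion: refers to is_odd, which is defined later -
    # another forward reference
    return True if n == 0 else is_odd(n - 1)

def is_odd(n):
    return False if n == 0 else is_even(n - 1)

# A compiler that rejects any call to a not-yet-completely-defined
# function would reject both patterns, and with them all recursion.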
441,543 | I developed a big(ger) project, which is in use already and grows, gets altered, fixed, etc. every week. Until now I am the only developer. Since the team has to grow, there will also be more developers on that project. The code is mostly self-explanatory or commented in the darker places ... But I guess an experienced developer will find their way. To explain the project's tech in a few points: the frontend consists of two SPAs (React) giving functionality to customers and administrators the backend is a Node.JS API (NestJS) with the main purpose of being a flexible GraphQL and REST endpoint(s) and being capable of doing heavy computations, aggregation, organization and exports of data a postgres database additional (micro-)services doing cleanup, optimization, transfer, etc. a CI/CD pipeline that lets us comfortably deploy changes to either a staging or a production/live server My question is about documenting every flow (or the more complex ones), every Entity / Table, every use case, so that the user (customer) and developers know what the whole system can do / has to do. Just to find and see first the big picture and secondly the internal details like rules, etc. without digging through the whole codebase (which btw is not an option for the customer). The project has grown over years, so have I and so have the duties of that software.
I still think the project is SOLID enough to be maintained for a longer time. To the question. As there is 0 documentation (except for the code), would it be a good idea to start transferring parts or (in the best case) the whole project into use case diagrams a big class diagram (for the entity classes this should be OK, I think. To transfer even all controllers, services and helpers would be not too useful, I guess) activity diagrams for the more complex processes (or even for the whole system?) Does a Double-Opt-In Process have to be drawn as a diagram? I am asking for experiences of people working in larger projects. Is it good practice in big enterprise projects to do this? Or is it just a tool to "get into a project"? I am afraid that the work of achieving this will grow drastically while being in the middle of the process. I don't want to start and realize after weeks of work that there will be no end and even no gain in having a project that is only "partly" documented with diagrams. I also would love to "feel more freedom in my mind". Until this point I feel like I face the big challenge of never forgetting any detail, just to be more on point when I dive back into the projects after weeks. I would like to put that knowledge (for me and others) on a sheet of paper or some .zargo files ... | Tldr Documentation should not be seen as something that you simply start doing. It carries a colossal business cost to maintain. It will require additional staff and skills in its own right. Long read Beware, if you have written zero documentation so far, and you are in a workplace where zero documentation has ever been written (let alone successfully read), then there is every chance that the documentation you do write will be of a very poor standard and completely ineffective for its purpose. Technical writing is a skill that most do not possess. Educators and academics - experts on documenting and reproducing knowledge of complicated areas of study - are experienced professionals in their own right. Once written material exceeds the volume of a long slideshow, or a short article or memorandum, real authorship is required. There is also very often a risk that the system you want to document has already gone beyond what can be coherently explained to others. Sometimes, the conceptual weight of a system that has withstood alterations in design far exceeds the volume of code, and it would be easier to simplify the code and purge historical remnants than to fully explain that history and all the transitional logic. One person, working without ever having to explain themselves or reproduce their own knowledge, and supervising their creation constantly for years on end, whose every detail they have devised, can build an edifice of far greater subtlety and complexity than can ever be properly handed over to another person. The edifice will simply not contain the modularity, the regularity, and the refactorings that would have been necessary in a team-built effort of similar functionality and complexity. It will also probably take certain aspects to the extremes of the strengths and weaknesses of your own mind and the finest details of your own changing understandings and obsessions over the years, without ever having had to concede in the slightest to the different profile of someone else's mind coming to the problems afresh. You might also get a shock about the sheer amount of overhead introduced by team-working. 
A team of three - if they are of your same calibre - will likely be no more productive in the long-term than you are alone, except that they can keep going once you are gone. In other words, team-working means productivity will sustain better over multiple generations of staff, but three such replaceable cogs will likely deliver less over a single generation than one irreplaceable kingpin will deliver in a single tenure. If they are of lower calibre than you are, then productivity could be worse, because you will be dragged into tasks (like documentation and management) on which you have no experience, and away from the tasks on which you are already proven strongest, whilst they will never grow to your potential on the tasks you are trying to delegate. Also realise, useful documentation will take as much time and effort as writing code. I would carefully consider whether your business is in fact prepared to pay for the cost of sustainability. Most aren't able or willing to afford it. Bespoke software generally becomes moribund with the departure of sole creators, and then the business either invests in a massive modernisation project, is absorbed by a competitor, buys something "off the shelf" that is widely considered substandard (until made bespoke again), or collapses to a simpler mode of operation that requires less software. There is always another alternative for most businesses, and that is to avoid much documentation, but find natural modularity in the collective work to allow different aspects to be delegated to staff who cover different areas, or to hire a well-motivated deputy for the principal who will learn like an apprentice over years of working together on the same tasks. These approaches can however fail if there are no natural modules, or if the difficulty of the edifice exceeds the deputy's capability and motivation. But documentation should not be regarded as the easy option. It's the most effective and flexible option when done well, but it's also the most expensive to do, and the most likely to be useless when done cheap. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441543",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/191548/"
]
} |
441,647 | I am working on a project on my own and I am using git just to keep track of changes and to get used to a team environment. When should I commit on the main branch (if I should at all)? Do I need to create a branch for each feature, bug, etc., and then merge them to main, or could smaller things (adding one field to get stored in a database) be committed on the main branch? I looked into the following questions and didn't really see anyone focusing on whether the main branch is only for merging or can also be used for minor commits: Git branching and tagging best practices Git strategy to use when a file I am going to edit in another branch has been updated in the master branch? | Since you're working on this by yourself, it's up to you to make up your own git branching strategy. Strict rules like "never commit directly to main" are mostly for actual collaborative work, to e.g. make sure that every code change is reviewed. For yourself, it's totally OK to use branches only for features that you'll be working on for a longer time but don't want to have in the main branch right away, and commit directly to main for smaller changes. Do whatever you think is the best balance between ease of use and protecting you from problems. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441647",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/402502/"
]
} |
441,670 | I wrote some script in Python that creates a giant 2D matrix (1000x1000 or bigger) and fills it with random numbers. And after that, it goes through every element of the matrix and changes the number to another one depending on the current element's neighbours (something like you do in Game Of Life). And I noticed that if I write the same algorithm of checking the neighbours like if neighbour on the left has the value of X:
do Al
Bl
and Cl
else:
do Dl
El
and Gl
now if neighbour on the right has the value of X:
do Ar
Br
and Cr
and so on... this code runs much faster than a version using function calls: def actionA(x):
do x
def actionB(x):
do x
def actionC(x):
do x
def check(n):
actionA(n)
actionB(n)
actionC(n)
for every neighbour:
check(neighbour) I'm wondering why that is so. Is it because the script has to switch between the loop and functions when executing the function's code while it goes line-by-line when running the inlined code? | Many language implementations will automatically inline function calls wherever this makes sense and is possible. This is completely normal for “compiled languages” like C, or JIT-compiling runtimes like Java/JVM, .NET/CLR, or JavaScript/V8. CPython, the Python reference implementation, is not one of those. The insane cost of function calls in that language is well known, though recent Python versions like 3.10 have made function calls significantly faster. In particular, named arguments used to require that dict() objects were created for the function call. CPython performs basically no optimizations to your program, and runs it as-is. If your program contains a function call, CPython will execute that function call and won't inline anything. So yes, it is entirely believable that your function call version of the program is a lot slower, and in the past I have achieved some major optimizations by removing function calls from very hot loops in Python programs. However, Python does have a couple of options to get better performance in a scenario like yours where you're operating on a large matrix. You can write the critical code as a C module Possibly by writing the code in a Python-like syntax with Cython You can try a different Python implementation such as PyPy If your Python code conforms to a simple subset of Python, you can JIT-compile it with Numba . While Numba requires you to annotate all functions that should be JIT-compiled, it's often a fairly easy way to get a big performance boost for numerical code. If you can redesign your program to operate on the entire array at once instead of looping through all entries, you can use Numpy . Numpy is very efficient for dealing with large arrays/matrices as long as you avoid Python-level loops. You wouldn't be the first to think about using Numpy's advanced indexing features to efficiently count neighbors for a Game Of Life program . | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441670",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/421796/"
]
} |
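To make the Numba suggestion from the answer above concrete, here is a hedged sketch (assuming Numba and NumPy are installed; the neighbour-summing logic is illustrative, not the asker's actual rules):
import numpy as np
from numba import njit

@njit  # JIT-compiles the function, removing CPython's per-call overhead
def neighbour_sum(grid):
    rows, cols = grid.shape
    out = np.zeros_like(grid)
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            # explicit loops like this are fast under Numba, unlike in plain CPython
            out[i, j] = grid[i - 1, j] + grid[i + 1, j] + grid[i, j - 1] + grid[i, j + 1]
    return out

grid = np.random.randint(0, 2, (1000, 1000))
result = neighbour_sum(grid)  # first call compiles; subsequent calls are fast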
441,891 | By definition, a pure function is deterministic + no side effects. Is there any example of a function which has no side effects, but is non-deterministic? I.e., a function without side effects, but not pure. To me, non-determinism comes from randomness. But a random generator has side effects. AFAIK a random generator's implementation mutates some global state. EDIT: duplicate with https://stackoverflow.com/q/54992302 | A properly working classical computer is, by definition, deterministic. That is, the output of the same series of steps with the same inputs will produce the same result. When we talk about non-determinism in that context, what that typically means (in my experience) is that there are implicit inputs whose state can vary. If we accept the above, then a non-deterministic function with no side effects might be something like a pseudo-random number generator that uses the system clock time to generate numbers, the distinction being that it doesn't change any state as part of its execution. It's not pure because it depends on state. I think it's important to reiterate that the actual function is still deterministic. If you changed such a function to accept a time value instead of using the system clock's state, it would produce the same output for the same time value. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/441891",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/422254/"
]
} |
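A small Python sketch of the clock-reading function described in the answer above (names are illustrative):
import time

def clock_digit():
    # No side effects: nothing is written, only the system clock is read.
    # Not pure: the clock is an implicit input, so repeated calls can
    # return different values.
    return int(time.time() * 1000) % 10

def clock_digit_pure(now_ms):
    # Making the implicit input explicit restores purity:
    # the same now_ms always yields the same result.
    return now_ms % 10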
442,109 | In Python 3, I subclassed int to forbid the creation of negative integers: class PositiveInteger(int):
def __new__(cls, value):
if value <= 0:
raise ValueError("value should be positive")
return int.__new__(cls, value) Does this break the Liskov Substitution Principle ? | This is not an LSP violation , because the object is immutable and doesn't overload any instance methods in an incompatible way. The Liskov Substitution Principle is fulfilled if any properties about instances of the supertype are also fulfilled by objects of the subtype. This is a behavioural definition of subtyping. It does not require that the subtype must behave identically to the supertype, just that any properties promised by the supertype are maintained. For example, overridden methods in the subtype can have stronger postconditions and weaker preconditions, but must maintain the supertype's invariants. Subtypes can also add new methods. PositiveInteger instances provide all the same properties and capabilities of normal int instances. Indeed, your implementation changes nothing about the behaviour of integer objects, and only changes whether such an object can be created. The __new__ constructor is a method of the class object, not a method on instances of the class. There is precedent for this in the standard library: bool is a similar subtype of int . Some caveats though: If int objects were mutable, this would indeed be an LSP violation: you would expect to be able to set an int to a negative number, yet your PositiveInteger would have to prevent this – violating one of the properties on base class instances. This discussion of the LSP applies to instances of the class. We can also consider whether the classes themselves, which are also objects, are compatible. Here, the behaviour has changed, so that you can't substitute your PositiveInteger class. For example, consider the following code: def make_num(value, numtype=int):
return numtype(value) With type annotations, we might have something like: from typing import Type, TypeVar
Numtype = TypeVar('Numtype', bound=int)
def make_num(value: int, numtype: Type[Numtype]) -> Numtype:
return numtype(value) In this snippet, using your PositiveInteger as the Numtype would arguably change the behaviour in an incompatible way, by rejecting negative integers. Another way in which your constructor is incompatible with the constructor of the int class is that int(value) can receive non-int values. In particular, passing a string would parse the string, and passing a float would truncate it. If you are trying to ensure a mostly-compatible constructor, you could fix this detail by running value = int(value) before checking whether the value is positive. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442109",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/422677/"
]
} |
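A hedged sketch of the constructor fix suggested at the end of the answer above (coerce the argument the way int() does, then validate):
class PositiveInteger(int):
    def __new__(cls, value):
        value = int(value)  # parse strings, truncate floats - like int() itself
        if value <= 0:
            raise ValueError("value should be positive")
        return super().__new__(cls, value)

print(PositiveInteger("42"))   # 42 - string parsing now matches int()
print(PositiveInteger(3.7))    # 3  - truncation now matches int()
# PositiveInteger(-1) still raises ValueError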
442,393 | I have a mysql database in which I have drafts, each of which contains exactly 24 players, the order of which matters. I am conflicted between having a drafts table with 24 extra columns for each player or creating a new table for a one-to-many relation between drafts and players.
Drafts:
ID   Arbitrary info   Player1_ID   Player2_ID
1    name1            4            6
2    name2            10           6
OR
Drafts:
ID   Arbitrary info
1    name1
2    name2
and
Relations:
draft_ID   Player_ID   P_Order
1          4           1
1          6           2
2          10          1
2          6           2
If I were to try to summarize my question, it would be one of efficiency vs elegance/robustness. If the number of players was variable (and even more so if their order didn't matter), it would be clear to me that I should make another table. (I have found many posts about variable-length arrays that suggest this). However, in my case, adding 24 columns to the draft table seems more efficient to me. I am also thinking of the calls to the database, and as of right now, most calls will be to return the entire draft (i.e. all players), not many calls relating to specific players. In this small use case, it doesn't matter much which method I choose, but what if things get bigger - what if I had, say, 1000 players in each draft? | "adding 24 columns to the draft table seems more efficient to me" Show me all the drafts which include player 2. select * from draft where p1=2 or p2=2 or p3=2.... vs select * from draft d left join relation r on d.id = r.draft_id where r.player_id = 2 Relational databases are designed for the table approach, not the column approach. If you are using a relational database, follow the normalisation rules and add a table. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442393",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/423335/"
]
} |
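A hedged sqlite3 sketch of the normalized design the answer above recommends (table and column names follow the question's second layout; MySQL syntax differs only slightly):
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE drafts (id INTEGER PRIMARY KEY, info TEXT);
    CREATE TABLE relations (
        draft_id  INTEGER REFERENCES drafts(id),
        player_id INTEGER,
        p_order   INTEGER
    );
""")
con.executemany("INSERT INTO drafts VALUES (?, ?)", [(1, "name1"), (2, "name2")])
con.executemany("INSERT INTO relations VALUES (?, ?, ?)",
                [(1, 4, 1), (1, 6, 2), (2, 10, 1), (2, 6, 2)])

# "All drafts that include player 6" needs no 24-column OR chain:
rows = con.execute("""
    SELECT d.id, d.info, r.p_order
    FROM drafts d JOIN relations r ON r.draft_id = d.id
    WHERE r.player_id = 6
""").fetchall()
print(rows)  # [(1, 'name1', 2), (2, 'name2', 2)]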
442,588 | C# is considered a statically-typed language. However, it contains keywords such as: var , which infers the type at compile time, and dynamic , which determines the type at runtime. Is this a contradiction? | C# has a static type system. So it is reasonable to describe it as a statically typed language. Just like any other OOP language, it also provides for some degree of dynamic typing. C# goes a lot further here than other statically typed languages by providing a dynamic static type, but this does not break its static type system. Background: With dynamic typing, values or objects have types. E.g. 1 is an int, and the result of evaluating new Foo() is an object of type Foo . We can think of every object being “tagged” with a type. With static typing, expressions in our code can be assigned types. For example, x + 1 might be an int, and the variable foo might be declared to only contain Foo instances. These static types are guarantees about the values that the expression might evaluate to. This is useful because it might detect potential bugs before running the code. Or viewed another way: a static type system allows for basic proofs of correctness. Static type systems also enable interesting features such as method overloading. A lot of the time, the static type of an expression is obvious. We don't have to manually annotate every expression with its static type. The compiler has rules to infer static types itself. When declaring a variable, we can instruct the compiler to infer its type with the var keyword. With statically typed OOP languages, we have both static and dynamic typing, due to the use of inheritance and interfaces. Now, the static types do not describe exactly what values are possible at runtime, but only constrain them. For example, we might define a variable IFoo foo = ... . This guarantees us statically that any object referred to by foo will conform with the IFoo interface, providing certain properties or methods. But we do not know which classes those objects might have exactly – we don't yet know their dynamic types. C# has a unified inheritance hierarchy, with all objects being derived from the System.Object class. If we declare a variable with static type Object , then we can assign anything to it. However, we are only guaranteed that it will provide the methods that are part of the Object class, such as ToString() . If we want to call other methods that the object instance supports, we have to cast it to its actual type, which requires checking its dynamic type. We can do that via a cast. Casts let us write statically checked code dependent on the object's dynamic types, because casts can fail – the code can only be executed if the object really has the expected type. A lot of code doesn't benefit that much from static type safety. As a shortcut for casting the object to the correct type before accessing a method or property, C# offers the dynamic type. This pseudo-type behaves differently from other static types, since it just defers any type checks to runtime. However, this is still perfectly safe. This cannot be used to violate the static types in other parts of the code, because the compiler will insert the suitable checks at runtime. 
For example, consider a function that converts a dynamic object to an integer: static int IntFromDynamic(dynamic o) => o; This function is compiled roughly as if it were the following: static int IntFromDynamic(Object o) => (int)o; Neither variant would allow us to break the static type system, for example by trying to launder a string into an int: IntFromDynamic("foo") will fail at runtime. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442588",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/390741/"
]
} |
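A rough Python analogue (an editor's sketch, not the actual compiled output) of the runtime check the answer above says the C# compiler inserts for IntFromDynamic:
def int_from_dynamic(o):
    # approximates what the compiled C# does for `(int)o`:
    # check the dynamic type at runtime, then use the value
    if not isinstance(o, int):
        raise TypeError(f"cannot convert {type(o).__name__} to int")
    return o

print(int_from_dynamic(42))   # 42
# int_from_dynamic("foo")     # fails at runtime, as described above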
442,594 | For my job, I work on multiple different scientific software projects, as well as general administrative tasks that go hand-in-hand with any 'office' job. Thus, any given working week could involve progressing none, or all, of those software projects. Put simply, my problem is that I waste a lot of time (sometimes days) picking up from where I left off when I pick up one of those software projects after a large break from it. Usually this is because I have to re-learn: How the particular function/class I'm looking at fits into the overall project The general flow/algorithm with which the software works for our use-case Which methods/classes etc. I have already written (i.e. to ensure DRY-adherence with further work) What are the most efficient and best ways in software development to avoid this 'dead'/'lag' time when you pick up development of a project again after a prolonged break from working on it? PS - I am a scientist by trade, not a software developer so I apologise for the fundamental/basic nature of this question. Also, many of the projects are just developed by myself alone. | Let me speed you up by slowing you down. Ever heard who the best person is to hire as a tutor? It’s not the teacher. It’s some student who just took the class. Who remembers the struggle. Therefore I want you to refactor just before doing new work. Because now is when you really understand how confusing the code is. And how hard it is to add the new code for your new feature. Right now you see the problems better than ever. Later, when you’ve added that new feature, you’ll be dumb again. You'll know all too well how it works. So you’ll have no idea how readable it is. When working alone this is the best you can do. If you want to do better, find some way to get someone else to look at your code. Do it soon after you write it when you’re still willing to change it. They can see what it really needs when you can't and will keep you from doing silly things just because you can . Oh sure, tests, good names, whitespace, all that good stuff , are all still important. The sooner you do that stuff the better. But writing code changes your brain until you're not the code's intended audience anymore. It's intended for those that don't already know how it works. Test its readability against that. Writers might recognize this pattern. When they proofread their own work sometimes it needs to spend some time in a drawer before they read it again. This is so they can see their typos for what they are, rather than only see what they meant to say. That means those moments when you're kicking yourself for not making it easier the last time you touched it are just not going away. Your brain was code damaged back then. Now it's healed. Be quick and use that healed brain to fix the problem you see now, before it gets all damaged again. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442594",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/423847/"
]
} |
442,777 | I have a simple 2D game which has squares, player (teal) and enemies (red). class Game {
List<List<Square>> map;
Player player;
List<Enemy> enemies; ... My problem is that I don't know what would be the best way to store the coordinates (int x, int y) of the enemies. Currently each enemy contains their coordinates. class Enemy {
int x;
int y; ... If I want to check if a square (map[x][y]) contains an enemy, I have to go through all the enemies in the list. This is not good because checking if a square contains an enemy is done often in the game. I could try to use a Map<Coordinate, Enemy> enemies where the key is the coordinate of the enemy, but then the coordinate would be stored in two places and this could create some annoying bugs. How should I organize the code so that I can easily and quickly access the enemy's coordinates and check if a square contains an enemy? Is it a bad idea to keep the enemy location in the enemy class? Any architectural/design patterns that could help me? | For this kind of domain, it is not uncommon that you need bidirectional lookup: quick access to the coordinate of an enemy (or more generally, of any kind of piece) quickly determine which piece is placed on a given coordinate. Hence, your idea of using an additional map (together with storing the coordinate pair inside the enemy) is fine. What you need to make sure is that there will be only one place in your code which is allowed to change the coordinates of a piece, and that this code always updates both in sync: the coordinate of the piece and the map content. That will help to keep the redundancy under control and avoid the kind of "annoying bugs" you mentioned. Of course, what others wrote about moving the coordinates out of the enemy class is also possible. Though it does not solve the described problem directly, it can help to make the implementation of the coordinate change in one place more rigid. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442777",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/381604/"
]
} |
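A hedged Python sketch of the answer's advice above - one choke point that updates both directions of the lookup (class and method names are illustrative):
class Board:
    def __init__(self):
        self._pos_of = {}    # enemy -> (x, y)
        self._enemy_at = {}  # (x, y) -> enemy

    def move(self, enemy, x, y):
        # the only place allowed to change coordinates:
        # both mappings are updated together, so they cannot drift apart
        old = self._pos_of.get(enemy)
        if old is not None:
            del self._enemy_at[old]
        self._pos_of[enemy] = (x, y)
        self._enemy_at[(x, y)] = enemy

    def enemy_at(self, x, y):
        return self._enemy_at.get((x, y))  # O(1), no scan over all enemies

    def position_of(self, enemy):
        return self._pos_of[enemy]         # O(1)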
442,830 | When modeling classes I have always stumbled on the problem that the class has a field with the same name as the class. Look at this example: class Name {
string meaning;
string language;
string name; //the actual name
} As you can see, the class Name has a field name, which is the actual name. Is this good practice? How to avoid this naming issue? | Bart van Ingen Schenau has some good advice , but I'd like to offer some additional advice. Don't universally avoid naming a class and property the same, but definitely question it. Consider all of the properties together as one cohesive concept: name, language, and meaning. Rather than renaming the property, perhaps rename the class. This is more than a name. Instead, pick a word or short phrase that describes the concept of the combination of a name, its meaning, and the language (or culture) for the name. For example, typically the combination of a word or phrase along with its meaning is called a "definition". The class could be called NameDefinition , which has a name property. The addition of the word "Definition" helps differentiate the type from the data in the type, while also communicating to consumers of this class that it is more than a simple string. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442830",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/424381/"
]
} |
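A small sketch of the suggested rename (in Python, since the question's snippet is language-neutral pseudo-code; the sample values are illustrative):
from dataclasses import dataclass

@dataclass
class NameDefinition:
    name: str       # no longer shadows the concept the type stands for
    language: str
    meaning: str

n = NameDefinition(name="Felix", language="Latin", meaning="lucky")
print(n.name)  # reads naturally: the name within its definition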
442,907 | I know that as a general rule, you shouldn't construct SQL queries dynamically because of the possibility of SQL injection. However, it could come in quite handy to break this rule and define, for example, my SELECT statements like so and use it for all my tables: public static string SelectAll<T>() => $"SELECT * FROM {typeof(T)?.Name}s"; In this case, there is guaranteed to be no user input. Also, we're not talking about a public library or similar, but a closed app. Would it therefore be acceptable to write code like the above or should one always act according to the rule that "you never know who or what will end up using this code and how" and under no circumstance construct SQL queries dynamically? | On one hand, I don't like such braindead rules, since they are clearly an oversimplification. A better wording would be, IMHO: You shouldn't construct SQL queries dynamically using unsanitized input data from some source you don't trust to be or stay reliable. On the other hand, such code bears a certain risk of becoming a bad example for junior developers on your team who don't treat it as carefully as you. Hence if you see a slight chance that this code will become an example for other code which could turn into a security risk, it is probably better to invest the extra effort to create a parametrized query (when possible). | {
"source": [
"https://softwareengineering.stackexchange.com/questions/442907",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/424590/"
]
} |
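A hedged sqlite3 sketch of the answer's parting advice above. Note that identifiers such as table names cannot be bound as parameters, so the question's pattern needs a whitelist, while actual values belong in placeholders (names here are illustrative):
import sqlite3

ALLOWED_TABLES = {"Users", "Orders"}  # identifiers can't be parameters

def select_all(con, table):
    if table not in ALLOWED_TABLES:
        raise ValueError(f"unknown table: {table}")
    return con.execute(f"SELECT * FROM {table}").fetchall()

def select_by_name(con, name):
    # values go through placeholders; the driver keeps them out of the SQL text
    return con.execute("SELECT * FROM Users WHERE name = ?", (name,)).fetchall()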
443,221 | Multithreading could cause a race condition if two threads are accessing the same memory slot, but why is that? From a HW point of view, if the two cores are designed the same, internal pipelines are the same, the logic gates/transistors pass electrons the same way and the speed of those electrons is a constant value, then what causes the race? Theoretically speaking, shouldn't the two threads access the memory slot at the exact same time down to the nanosecond, every time? | Your understanding of computer hardware is flawed. Memory is not accessed by different cores in parallel, access is regulated like traffic at a road junction. Different threads can run simultaneously on different cores but they do not access the same memory cell together. What can happen is one thread ruining the work of another thread, like overwriting a result value before it has been read by a consumer. But that would be done sequentially, threads do not "collide", hitting the same cell at the same time. Reads and writes are all performed in a very controlled manner. The race conditions software engineers speak of are not a thing at the transistor level. They are a thing at the much higher program logic level. Think of using a boolean value to control access to a resource. Before one thread uses the resource it checks the value to find it is false, meaning the resource is available. So it sets the value to true, signaling to other threads the resource is now occupied, and continues to use the resource. Between the check and the set operation, however, another thread could have checked the value and also found it to be false. This is the race and the unpredictability. Yet access to the variable by both threads was all performed sequentially in the most orderly fashion. So we need something better than a boolean variable to regulate traffic at the software level and this can only work with hardware support. This problem cannot be solved in software alone. Modern processors support this feature, we most often call this a lock. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/443221",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/425242/"
]
} |
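A Python sketch of the check-then-set race described in the answer above, and the lock that fixes it (illustrative names):
import threading

occupied = False          # the naive boolean "traffic light"
lock = threading.Lock()   # the hardware-backed primitive the answer describes

def try_acquire_naive():
    global occupied
    if not occupied:     # a second thread can pass this check too...
        occupied = True  # ...before either thread reaches this line
        return True
    return False

def try_acquire_safe():
    # the check and the set happen as one indivisible step
    return lock.acquire(blocking=False)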
443,832 | If all data is essentially just a bit string, then all data can be represented as a number. Because a compression algorithm, c(x), must reduce or keep the same length of the input, the compressed file must be smaller than or equal to the input (and greater than or equal to 0). This can be stated as 0 <= c(x) <= x . For compression to be useful there must also be a decode algorithm, d(x) , which returns the original input. Start with the number/data item 0. 0 <= c(0) <= 0 , hence c(0) = 0 , and d(0) = 0 . Then 0 <= c(1) <= 1 , and c(1) != 0 as d(0) = 0 , so c(1) = 1 . We can continue this: 0<=c(2)<=2 , c(2) != 0 as d(0) = 0 , c(2) != 1 as d(1) = 1 , so c(2) = 2 . This pattern can then repeat forever, showing that, without losing any data, no compression algorithm can compress data into a size smaller than the original input. I do understand how some compression algorithms work, such as run-length encoding (RLE), but I cannot see how they avoid this issue. One reason which I could see as to why RLE or other algorithms can fail would be how to transmit just the plain text. E.g., if x = 8888123, you can compress that to four 8s, 123, which is then transmitted as 48123. But then how can you transmit 48123? And if you then add a special delimiter signal to signify that the "8" is a repeat count, then you are adding even more data. If you use a special signal, such as 1111, then you would transmit the signal 111148123 to represent 8888123, but then how do you transmit 111148123, and so on. This is just an example as to why the demonstration of RLE compressing data doesn’t show that compression is actually useful. | There can be no algorithm that losslessly compresses all inputs. But there are many algorithms that losslessly compress many inputs. And it turns out that most of the strings we like to operate on have enough internal redundancy that they compress rather well. Think of it as a Robin Hood-style exchange: the algorithm maps an infinite number of possible inputs to an equally infinite number of possible outputs, so if it maps some strings to shorter strings, it must map some other strings to longer strings. But we can arrange things so that it takes from the dull strings to give to the interesting strings :-) | {
"source": [
"https://softwareengineering.stackexchange.com/questions/443832",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/426615/"
]
} |
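A quick Python demonstration of the answer's trade-off above (exact output sizes vary slightly by zlib version and input):
import os
import zlib

patterned = b"ab" * 500        # an "interesting" string with internal redundancy
random_ish = os.urandom(1000)  # a "dull" string with no structure to exploit

print(len(zlib.compress(patterned)))   # far smaller than 1000
print(len(zlib.compress(random_ish)))  # slightly larger than 1000 - the
                                       # algorithm "takes" from strings like this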
444,098 | We've been utilizing pair programming (or something like it) for a few years. As a senior engineer on the team - I find that pairing actually negatively impacts the team's throughput. The common pattern is that more senior devs generally end up handholding more junior devs throughout the whole process - basically coding while looking at a screenshare. "Scroll up", "Add a console statement", "Go to file X", "Can you write Y after line Z", etc. Side conversations often come up during pairing sessions, distracting away from the work at hand Solving complex problems often takes much longer, because many engineers need heads down time to actually design a solution - doing so on a call takes longer and often results in analysis paralysis. There are so many stories where I feel like I'm coding through someone, and a story that would have taken me 30 minutes ends up taking 3 hours, meanwhile, I question whether the more junior devs actually learn. | If pair programming makes you feel like you're an air traffic controller trying to talk down an airplane piloted by a fidgety 12-year-old you're doing it wrong. The reason that's wrong is because you aren't miles away talking on a radio. You're right there and can take the keyboard at any time. It should feel like being a co-pilot. You aren't giving over control because you have to. You're doing it because you can. One thing pair programming is not is mentoring. A teacher-student relationship feels very different from two people working together as equals even if one has significantly more experience. It takes time to get used to pair programming so don't worry if it feels awkward at first. extremeprogramming.org - Pair Programming What you need to grasp is the point of pairing. It isn't so you can say "we were pairing". It's so you can communicate in your natural language: code. The awesome thing here is you can type a line of code and ask, "does that make sense?" That's a tight feedback loop. You can sort out when code is too clever fast. "Scroll up", "Add a console statement", "Go to file X", "Can you write Y after line Z", etc. If this is all you're going to say while we pair then just take the keyboard already. Rather than spoon-feed me a walkthrough, tell me what's going on. Why we're doing this. How I could have known to do this myself. Tell me that. Don't just take the keyboard, click some mysterious keyboard shortcuts and make magic happen. Show me how the trick works. Also, don't just dictate the entire agenda. Carve out work I can do. Let me jump in and be part of this. Hell you may get lucky and learn something from me. The keyboard should be sliding back and forth. Side conversations often come up during pairing sessions, distracting away from the work at hand Oh take a moment and be a human. Convince me I'm talking to someone who considers me a human. Solving complex problems often takes much longer, because many engineers need heads down time to actually design a solution True. Some need heads up time spent throwing pencils at ceiling tiles. Some need rubber duck time (a sounding board if you're of the silver hair set). Pair programming isn't for every problem all the time. doing so on a call takes longer and often results in analysis paralysis. Yes pairing can take longer than coding alone. But if you're doing it right you're also getting an instant informal peer review as well as some on the spot collaboration. The easy cure for analysis paralysis is doing something stupid and making people explain to you why it's wrong. 
Iterate on that until you run out of wrong. I question whether the more junior devs actually learn. Keep questioning. Learn what works and what doesn't. There isn't just one perfect way to do this. But doing it only because we're supposed to do it is definitely wrong. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/444098",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/40885/"
]
} |
444,106 | I have one bounded context, offers, that has Offers as the root aggregate and Items as a child entity. I have another bounded context, products, which has Products as a root aggregate and Providers as another root entity... but for this example, Products is the entity that I need. Product has a price property, which is the reference price at which a product is sold. So when I add a new item to the offer, it is this price that is set as the default. My offer is created in an initial state, created, in which I can edit the offer, add new items, update items and so on. If I change the price of a product, I would like to update the price in all the items for this product in all the offers whose state is created. How could I ensure that all the data is coherent after changing the price in the product, given that these are two different bounded contexts and two different aggregate roots, so from products I can't update the items of an offer when I call the update method in the product entity? Also, I am organizing my project in modules according to the folder-by-feature (folder by bounded context) point of view, instead of folder-by-type; I think it is clearer. It is possible to see different opinions here: Folder-by-type or Folder-by-feature So my project is organized in this way: MyProject.Products.AplicationLayer MyProject.Products.Domain MyProject.Products.Repository MyProject.Orders.AplicaionLayer MyProject.Orders.Domain MyProject.Orders.repository I guess I have to have a higher layer or service that has a repository with offers, items and products, and do it all in the same transaction. But in this way, I would have a service only to update the price while the rest of the logic about products is in its own project, so the code is in many places, and I would like to avoid this. Perhaps I have defined my bounded contexts in the wrong way; perhaps I could have a product entity in the offers bounded context with only the price, so that only the price, which is the data needed for the offer, can be modified. But I think this could lead to updating parts of the product from many bounded contexts, and I am not sure if that is correct. Also, with this solution, I don't see clearly how I could update both entities in a coherent way, because I would still have two repositories, one for the aggregate root Offers and another for the aggregate root Products; if I am not wrong, the recommendation is to have one repository for each aggregate root, so in this case, even if I have an aggregate root Products in the bounded context of Offers, I would still have two repositories, and the main problem is how to ensure the transaction. Or perhaps price shouldn't be a property of the Products entity, because the price is data that belongs more to accounting or sales (Offers in this case) than to products. But I would still have the problem of having two repositories for two aggregate roots. In summary, I would like to know how I could update the price of the items in created offers when I update the price in the product. Thanks. | | {
"source": [
"https://softwareengineering.stackexchange.com/questions/444106",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/412921/"
]
} |
444,109 | Let's consider the following test. [Fact]
public void MyTest()
{
// Arrange Code
var sut = new SystemWeTest();
// Act Code
var response = sut.Request();
// Assert
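// Guard assertion: if 'response' is null, failing here produces a readable
// assertion message from the assertion library, rather than the bare
// NullReferenceException that the member assertions below would throw.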
response.Should().NotBeNull();
response.ResponseCode.Should().Be("1");
response.Errors.Should().BeEmpty();
} I have argued with a few colleagues that it's pointless to assert that 'response' is not null if you are going to then assert on some of its internals. Of course, if the only thing you are interested in is checking that the object is not null, nothing more, then it's fine. My thinking is that each following assert statement is in fact an implicit assertion that 'response' is not null. If it is null then it would throw a null reference exception and make the test fail as expected. The only benefit I see from doing this is a somewhat clearer message on why the test failed. You'd get the test-framework-specific exception that's thrown by the 'NotBeNull' assertion instead of the generic 'NullReferenceException'. I don't feel this is useful as you are getting the same information anyway. Am I missing something here? Does checking for null have any other benefits in the example provided? | Does checking for null have any benefits in the example provided? The only benefit I see from doing this is a somewhat clearer message on why the test failed. You have successfully answered your own question. Clear error messages on test failures are very useful. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/444109",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/243853/"
]
} |
444,157 | Whenever possible I have been requiring an understanding of the requirements and architecture for the next scope of work before starting to code. Sometimes due to schedule pressure on larger projects I have to start coding before I know everything I need to know for that scope of work, but in that case I make it a priority to catch up as soon as possible. And if I can't get caught up on everything going into a release, then at least get caught up on what I need to know for the next few weeks. This seems like such an obvious no brainer that I'm embarrassed to even ask it, but I've been getting pushback so I wanted a reality check. Maybe there are some downsides to understanding what you are about to do before you do it, and if so I'm hoping someone can fill me in. Or, if what I'm doing seems like a best practice, then a simple confirmation would be appreciated. If it makes a difference, the pushback I am receiving is on a project with a development timeline of about five months and a value to the company I am working for in the millions. | I've seen people dive straight into the code, make bad assumptions and spend ages writing the wrong thing. On the other hand, I've seen people spend weeks "understanding requirements", drawing pretty architecture diagrams and whatever else - only to discover when they actually got round to coding that there was a fundamental problem they'd missed and would have found much earlier if they started writing code. Or in other words: everything is a balance. One of the critical skills of a senior developer is working out where to set that balance for a particular piece of work. For some pieces of work, the risk is in the requirements and you should spend more time on those. For other pieces of work, the risk is in the implementation and you should spend more time coding. There are no simple answers, sorry. | {
"source": [
"https://softwareengineering.stackexchange.com/questions/444157",
"https://softwareengineering.stackexchange.com",
"https://softwareengineering.stackexchange.com/users/427397/"
]
} |
8 | I am asking if it will be alright on this site to ask questions, such as "what is the best library for image rendering in C#?", or similar questions. I understand that this website is for software that is "completed" and serves a purpose for an end-user, so those questions should be moved to stackoverflow.com. Nevertheless, asking for software tools (i.e. what is the best free tool for reporting development that integrates with SQL Server?) that will be used to complement software development should be allowed on this site. What do you think? | "what is the best library for image rendering in C#?" would be a horrible question because it is completely unclear what you're looking for. A particular image rendering library might be perfect for one person's needs and useless to the proverbial next man. However, if you list the exact features you need and a few more details, this would be a perfectly valid question. For instance, "Image rendering library in C# that is very fast and can output in .png, .gif, and .jpg formats" might be better. As a general rule, I see no reason to prohibit any one category of software, programming libraries included (except for software intended for illegal operations that the community has deemed to be unethical ). As I said in the comments, code libraries are just software that happens to be used to create more software. | {
"source": [
"https://softwarerecs.meta.stackexchange.com/questions/8",
"https://softwarerecs.meta.stackexchange.com",
"https://softwarerecs.meta.stackexchange.com/users/49/"
]
} |
197 | Anyone else finding that their instincts on what is a good question are wrong?
Since questions that are off topic on any other SE site are on topic here.
I.e. any software-rec.
And questions that are their on-topic cousin, "What are criteria for X?", are off topic (probably). I am finding it hard to pose my questions in a good way.
Much harder than answering questions.
(Normally I'm the reverse: I think I've become quite good at asking a good SE question, but I am rarely fast enough/good enough to write a top-quality answer before someone else does.) How can I break out of my mold? | It's a pretty simple checklist to ask a good, narrowly-scoped recommendation question. I'll make a rough outline here; I'm currently revising this (which is based on the original ground rules that I posted pre-launch). 1. Straight to the point, succinct title Don't use words like 'best' or 'good' - just tell us what you want. We're not going to recommend the worst, or bad software. What [editor/utility/program/plugin] does [task] in [manner]? 2. Describe your task Tell us what you're doing, or intend to do with what we recommend. If you're bulk re-sizing a bunch of pictures while converting them to another format, or backing up lots of files and trying to avoid data duplication - let us know with as many specifics as you think might be relevant. 3. Describe what you have, if anything, and what you don't like about it This can sometimes be optional, but let us know what you've got or what you've tried and didn't like, and why. Let us know if you looked at something and decided it wasn't for you. Note - answers may recommend something you noted, but might have overlooked. 4. Give us an enumerated list of constraints, in order of importance Every recommendation question probably needs this list. Tell us the features or operating constraints a good fit would meet, ordered from must-have to nice-to-have. An example: Must run on OS/2, with 128 MB of RAM Must not be pink Ideally takes less than 2MB of disk Big plus if it plays music 5. Wrap up your question, if it needs wrapping up. You can probably skip this most of the time if you want, but this is a place to put anything supplemental. It's hard to define that, beyond you'll know it when you encounter it. Your goal when writing is to narrow the scope enough so that realistically, 10 answers might directly answer it. It's really not too hard, and the simple exercise of just writing it all down can sometimes make you think of something you might have overlooked. That's everything someone needs in order to give you a great recommendation. We may not be able to meet all of your requirements, but since you've listed them in the order of importance, we'll be able to recommend something that gets the most important job done. | {
"source": [
"https://softwarerecs.meta.stackexchange.com/questions/197",
"https://softwarerecs.meta.stackexchange.com",
"https://softwarerecs.meta.stackexchange.com/users/180/"
]
} |
308 | If the Community Managers team (henceforth CM) so decides, we're scheduled to start public beta in two days (Tuesday). I'm not sure we're ready. On most new beta sites, announcements from the CM team that the site will stay in private beta for another week are met with disappointment. I'd like to stipulate that we really want and need another week. I originally brought up the idea of requesting extended private beta (henceforth EPB) in chat, and it seemed to be met with general agreement. Here is my reasoning: We have a great amount of activity now. We're not going to starve of new content in EPB. We really haven't fully hashed out the basics of our policy yet, IMO. For example, I'm still not entirely sure when to flag an answer as Very Low Quality (VLQ). I'm not sure we're effectively keeping up with our existing content moderation-wise. I don't have any objective evidence of this, but it's just a feeling I have. When public beta hits, we're going to get a huge amount of activity. It feels like the eyes of SE in general are upon us. When we open the doors, there's going to be a rush. Are we ready to deal with it? Do we have enough high-rep users? Also, I'm imagining quite a few comments around SE on many closed recommendation questions pointing people here. These people that post recommendation questions on other sites aren't very likely to read the rules here, either. Can we handle it? It seems weird to be requesting another week in private beta, but I believe it's fully necessary for the long-term health of the site. Please, express your opinions in votes/comments/answers below. Am I missing something? | This site is all but ready to move on to public beta, but we tend to agree — we're giving this site another week to shore up the core community before launch. You have set up a strong foundation of what works on this site, but this extra week will help you put some of that dialog into action… and to let it all soak in so the core community is largely on the same page before the site goes public. | {
"source": [
"https://softwarerecs.meta.stackexchange.com/questions/308",
"https://softwarerecs.meta.stackexchange.com",
"https://softwarerecs.meta.stackexchange.com/users/46/"
]
} |
1,019 | Seeing as people want hats, we're signed up for hats. Yay! Visit winterbash2014.stackexchange.com starting December 15 for hat awesomeness. If you absolutely hate hats, there'll be a button in the footer to turn them off. Original post below: Last year, Stack Exchange ran Winter Bash 2013 , in which users earned hats which they proudly displayed upon their avatar. There was a leaderboard of hat earners: We have the option to do it again this year! So, here's the rundown: Hats are enabled on a per-site basis, if we don't want them we can disable them here Hats are awesome, kinda like ephemeral badges Users can turn off hats on a per-user basis, so if you're a hataphobe you don't have to see everyone else having fun. If we choose to accept, the event will run from 15 December 2014 to 4 January 2015 . After the time period, all the hats go away into Last Year's Hat Bin. We need to decide if we want hats by December 1 So, do we like hats or hate hats? | I want hats, they sound awesome and will keep my ears warm. | {
"source": [
"https://softwarerecs.meta.stackexchange.com/questions/1019",
"https://softwarerecs.meta.stackexchange.com",
"https://softwarerecs.meta.stackexchange.com/users/46/"
]
} |
25 | I'm looking for a native OSX desktop application (needs to work offline) in which I can edit plain text documents, and preview how they'll look with markdown formatting applied. At a minimum, I need it to be able to understand and display all the "official" markdown syntax, but save as straight-up .txt files. | Use Mou! Mou has everything you've asked for. If you have the preview pane open, it will update in close to real-time, though it's a little delayed if you type very quickly. It speaks Markdown. It lets you customize the editor theme and the preview pane. It lets you choose the default file extension for saving (and .txt is included on its list). | {
"source": [
"https://softwarerecs.stackexchange.com/questions/25",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/51/"
]
} |
40 | In the past I used Graphviz to create drawings of graphs. It is a nice tool for small graphs. But unfortunately, for large graphs, Graphviz really sucks: It always crosses edges that obviously could be drawn without a cross. It superimposes different texts, making them unreadable. It has no reusable styling (like CSS), and you need to repeat the same personalizations in nodes and edges over, and over, and over again. Suppose the user wants to, say, just swap the positions of two nodes. To do so, it is frequently necessary to heavily hack the source file, probably screwing up unrelated parts of the graph in the process. It happens very easily that, in order to make small changes in one isolated place of the graph, Graphviz forces heavy major changes elsewhere, frequently invalidating hours of work trying to convince it to draw it right. It wastes a lot of space in the graph and at the same time overcrowds some places very tightly. Sometimes, some edges make very tortuous paths to connect the source node with the target node, featuring strange useless curves and a lot of superimposed laterally running edges. It features avalanche effects. Trivial modifications somewhere in the graph might perturb Graphviz heuristics, resulting in a completely different graph. A lot of bugs... I want something where, as a user, I can simply: Define what the nodes are, possibly with style to be applied. Say what the edges are, possibly with style to be applied. And then the program gives: A graph with the minimum possible number of crossings. Pretty aligned nodes are good. I DO NOT want to: Add a lot of hacks on input just because the tool is too stupid to see that it could swap two specific nodes to remove a crossing. Manually need to position edges and nodes. Get avalanche effects. So, what might be a good replacement for Graphviz? I really want it to be a free one. Note: I don't care much about the format in which the graph should be input, as long as I can save and edit a file with the graph description (whatever the language of such a description is). So, there is absolutely no need to still be at the dot language or anything similar (in fact, I would be more than happy to throw away my dot files entirely, as there are far more hacks than actual graph-describing there). | Sorry for the disappointment. Graphviz could be better in many ways, but at this point the prospects for that aren't great because AT&T isn't supporting the work as much as it did in the past and some of the authors (like me) have left to seek other work. We are looking for people that want to take it over, so let us know. We are impressed with yFiles , too. Also try Tom Sawyer Software ; they have a lot of engineering talent and did a lot of work on advanced layout methods and interactive tools. (You may need to spend $$$ as the free trial seems to be discontinued.) The question did not say what specific layout tool or options were tried or how big a "large" network is, so it's not clear what to suggest. If "large" means maybe hundreds of nodes, try neato -Goverlap=false (to avoid node text label overlap) and possibly -Gmodel=subset to try for better clustering. (These options are not the default, because in data analysis e.g. in bioinformatics, a straight MDS embedding gives a more accurate rendering of distances in the underlying network.) If "large" means thousands of nodes, perhaps many thousands, use sfdp instead of neato, again with -Goverlap=false.
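For instance, the two invocations might look something like this (a sketch only; graph.dot and out.png are placeholder file names, not part of the answer above):
# hundreds of nodes: spring/MDS layout, avoiding label overlap
neato -Goverlap=false -Gmodel=subset -Tpng graph.dot -o out.png
# thousands of nodes: multiscale force-directed layout
sfdp -Goverlap=false -Tpng graph.dot -o out.png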
(The subset distance model isn't available in sfdp, because it's not clear how to handle variable edge lengths when merging edges in a hierarchical solver.) You can see a good example of a 1054-node graph here. For "wasted space problems" in the case of disconnected components see also the pack and packmode attributes. The solutions to such problems are not obvious (basically you are trying to optimally pack irregular shapes, with additional constraints, and sometimes at the scale of whatever people consider to be "large", so subquadratic algorithms are needed.) For connected graphs, experiment with -Goverlap options. Those are the suggestions. As for excuses and explanations... What someone is calling the "avalanche effect" is also called layout instability with respect to (minor) changes in the input graph. This is a property of almost all batch graph layout programs and constraint solvers. So you should look for interactive tools like the D3 spring embedder layout, and Tim Dwyer did a lot of great work on this when he was at Microsoft, so maybe someday their Graph Layout toolkit (AGL) will adopt his interactive constraint methods. Just an observation: most researchers and programmers have not attempted to attack scale, interactivity, and aesthetics all at the same time (choose any 2 of the above...) The styling issue is also a good one; we just didn't have time/energy to tackle it since most graphs are generated automatically, so you could apply styles in some pre-processing tool or script. Also it has to be considered that the graph is not just a static parse tree: after a graph is read, its style sheet or the attributes of objects to which the styles have been applied can be changed, and then the graph must be written out correctly in a way that still preserves the original structure as much as possible. Not insurmountable, but these are details that have to be thought through carefully. Bugs can be reported on www.graphviz.org under Bug and Issue Tracking. Global edge routing with smooth curves - hard problem. Note that a lot of cool-looking layouts by some other tools use curved edges but they just draw over everything else that's in the way. I think we added this feature to graphviz too. Also I think there was a CHI or INFOVIS paper showing such curved edges are actually a little harder to read correctly than straight lines. Crossings - some local optimization might be possible. Not sure what tool is being used. It is easy to point out specific examples where layouts could be better, but harder to invent an effective solution where "minimum number of crossings" would not actually make things worse in general. Note that I'm directly affiliated with Graphviz. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/40",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/29/"
]
} |
70 | In short, something that just does what one would expect such a program to do. (Just convert it so it works on my mother's media player!) By "free" I mean free-as-in-beer. I'm okay with closed source as well as ad-supported programs. By "lightweight" I mean a program designed to convert, not a full-fledged editor like Audacity. By "all-in-one" I mean a program that can convert to and from most common formats like MP3, M4A, WAV, FLAC, automatically introspecting containers like AVI and detecting codecs. For Windows 7+. | Handbrake is one of the best free (and open-source) video converters around. It's fast, powerful, and simple. It's also quite good at converting audio . It's a good match for you because it's... Free (Zero Cost and FOSS) Very lightweight - it does nothing but convert stuff Extremely feature-filled - you can tweak every aspect of conversion Simple and straightforward Able to Keep metadata, and allows you to edit metadata before conversion | {
"source": [
"https://softwarerecs.stackexchange.com/questions/70",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/88/"
]
} |
129 | I have three computers (2x OS X and 1x Ubuntu) and an Android tablet. I'm trying to find a tool to automatically sync files between the computers and the tablet. Backups are handled separately, so there's no need for versioning. Over 100GB of files should be synced between computers and a few gigabytes between computers and tablet. Therefore, services like Dropbox, Google Drive or OneDrive are not really practical, as the price is relatively high. Additionally, I don't need or want cloud syncing, as syncing that amount of data over the internet takes quite a bit of time (and money, depending on location). | Bittorrent Sync ( wikipedia ) seems to fit rather well. It automatically syncs files between devices whenever a connection is available, handles collisions if those happen, and is compatible with Linux, Android, OS X and Windows, among others. It does not require a central server (or specific laptop) to be available. On the negative side, it's still beta, and might be discontinued. However, recent news says that its userbase is already over 2M, so discontinuation is probably not the biggest risk. My experiences so far: Seems to work surprisingly well. Syncing to a phone does not always work without manually starting the app. This might also be just my impatience. Actively syncing (adding new files) on the phone consumes quite a bit of battery. However, this is expected, as it is rather resource-intensive (preventing sleep, writing to flash, and keeping wifi active). This was the only problem during the initial sync (adding 10GB of photographs). Performance over LAN/wifi is good. It easily and constantly saturates the network. There are options to limit both upload and download rates. Clocks must be synced (relatively well, +-5min is tolerable), otherwise Sync refuses to work. Setting up shared folders is easy, but there's no way to exclude some of the contents (for example, temp files). In addition to syncing home computers and phone, I am now using Sync to publish files to my server. When sharing something, I save/export it to a synced folder, and it's automatically downloaded to the server. Easier/faster than running sftp manually. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/129",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/130/"
]
} |
155 | I mainly use Google Drive Desktop for backing up pictures and files on Windows, but I have yet to find a good alternative to use on Linux/Ubuntu. I know about Dropbox ; however, they don't have a large amount of free space and their prices for upgrading are, well, pricey. I am mainly looking for a tool that can sync files so I don't lose my work in case my computer has a meltdown. I am not just looking for a free service. I am looking for something that has cheaper upgrade rates than Dropbox, and also has more default, free space. Is there an alternative to Google Drive that isn't Dropbox? I need about 15GB. | Mega Mega is an excellent service I've been using since its initial launch. I'd highly recommend it, if only for its usability and the 50gb free storage you get. One thing I love about Mega is the file manager. It's easy to use and understand, and it works in the browser. Most importantly, Mega functions in most modern web browsers . A Mega client with file syncing is also available for Windows, Mac and Linux. Mobile apps are available on Android, iOS, and Blackberry. Features: 50gb free storage Accessible from any modern browser (mobile and desktop clients are also available) File Sync client available on Windows, Mac, and Linux Plans starting from 9.99 € a month for 500gb storage and 1tb bandwidth End-to-end file encryption Easy file sharing and contact management Cryptography One interesting thing to note about Mega is that, to quote Wikipedia: Dotcom has said that data on the Mega service will be encrypted client-side using an advanced AES algorithm. Since Mega does not know the encryption keys to uploaded files, they cannot decrypt and view the content. Therefore, they cannot be responsible for the contents of uploaded files. However, I would not trust that Mega, if compelled by, say, a subpoena, could not obtain a user's master key and decrypt their files in their entirety. See also MEGApwn . When uploading to any file storage platform you should always expect that your data will be immediately accessible to others without your consent, and that it may, at any point in the future, become unavailable to you without warning. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/155",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/50/"
]
} |
163 | I'm looking for a screenshot utility that would help me when I try to explain over email how to use a program. I'd like it to: Take a screenshot, Add a red arrow or circle like this: Upload the image to imgur or attach it to an email. And then I can send an email, "click this button", with a link or attachment that shows where to click. Of course, I'd like to get the image as quickly as possible. Recently I've found a perfect Linux program for this: Shutter . Are there similar tools for Windows or OSX ? (I know that GIMP and other graphical programs can take screenshots, but starting GIMP takes more time). | Greenshot is a free program for Windows and is really good at this. When you press Print Screen or click on its icon, it presents a nice context menu full of options. It also has a nice editor, which is the default that opens when you click 'Open in image editor'. It has shapes, arrows, a highlighter, an obfuscation tool, text boxes for annotations, undo (everything is a separate object until you save), even stuff like connecting to Dropbox, rotation, drop shadow effects, and freehand drawing ( which seems to be a meme on Stack Exchange ). | {
"source": [
"https://softwarerecs.stackexchange.com/questions/163",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/99/"
]
} |
245 | Often, on my Windows computers, when I go to delete or move directories or files, an error message appears explaining that this action cannot be completed because one of the files is being used by another program. Is there a program that allows me, for a given file, to find out what program is using it and end that program? | I regularly use Process Explorer (free from Microsoft) to do exactly what you are asking for. You can search for which programs/tasks are using a given file or directory and then kill the program or program tree. You can even find a DLL that has a given file locked, and then find and kill the programs that are using that DLL. It also gives you a lot of other useful information. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/245",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/137/"
]
} |
310 | I've been having a lot of problems with both of my Internet providers lately, and I'd like to start monitoring and graphing the quality of my Internet connections. My ultimate goal is to be able to produce graphs and data that I can take to my ISP to assist them in narrowing down what the problem might be. I'd like something that runs nicely in the background on Windows 7 with a convenient taskbar icon that I can use to browse the collected data and change the configuration of the program as needed. I want to collect the following types of data: Pings to various hosts that I define The time it takes to complete an HTTP request to various hosts that I define How long it takes to download a 1 MB file at an interval (to get a rough estimate of speed) Timestamps, of course, would be critical in logged data. I imagine being able to create graphs similar to what the Cacti network monitor produces. I have something that monitors the quality of my WiFi connections, the Xirrus WiFi inspector and companion desktop gadget , but this only measures the quality of signal from my router to my machine, and the data doesn't easily persist. Still, it illustrates the kind of interface I'm hoping for. Is there something that fits, or comes close to fitting, these criteria? | Smokeping ( demo ) does all that. However, this comes with multiple caveats. This is not an out-of-the-box solution for Windows . I have not tested this on Windows, but I'm using Smokeping for exactly the same thing. To avoid installing on Windows, see the bottom of this post. It's for Unix-based systems, so installing it on Windows is not easy There's no GUI. All configuration goes in a configuration file. Output should be accessed through a web server/browser. The web server causes additional overhead. The installation script only supports smokeping 2.2.4, which is already 7 years old. But smokeping is not updated really often; a 7-year-old version is basically feature-complete Modifying the installation script and patches for a newer version should be easy. This old, and probably outdated, blog post offers Windows installation instructions, quickly referenced below. Requires downloading an installer/patch set, which might disappear. The patch set changes Unix paths to Windows paths. Install perl Install a web server, for example wamp Configure cgi-bin support for your web server Download this installer/patch set unzip and run perl install\ n\ patch.pl . Downloads smokeping and patches files for Windows support. The following steps are from this blog post : Test it by running C:\smokeping-2.2.4\bin\smokeping.pl on cmd.exe Wait for 15 minutes while smokeping pings predefined targets. Try opening http://127.0.0.1/cgi-bin/smokeping.pl Configure autostart: add a scheduled task for the same command, and an option to run it on each startup. Modify C:\smokeping-2.2.4\etc\config.dist to suit your configuration. Restart smokeping after changes. Pinging predefined targets is supported by default. For HTTP requests, there's EchoPingHttp . Alternatively you could install Linux in a virtual machine. For example, install Debian with no graphical environment in VirtualBox . Disk usage is really conservative (by default, somewhere around 3MB/destination/probe, with one-year history). For memory, 256MB is easily enough if you're not planning to run anything else. The advantage of this approach is getting the newest version, and avoiding patching smokeping and installation/configuration hassles. Installation in Debian: sudo apt-get install smokeping
sudo vi /etc/smokeping/config.d/Targets
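# A minimal Targets entry might look something like this (a sketch only;
# the section names, menu/title strings, and host below are placeholders):
# + LocalNet
# menu = LocalNet
# title = Local network latency
# ++ Router
# menu = Router
# title = Home router
# host = 192.168.1.1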
sudo /etc/init.d/smokeping reload By default, smokeping is available at http://virtual_machine_ip/cgi-bin/smokeping.cgi (replace virtual_machine_ip with the IP address of your virtual machine). Note that by default, you can only connect to VirtualBox machines from the host OS, not from another computer. Yet another alternative is to buy a Raspberry Pi (30€/$25), and run smokeping on that. Do note that migrating database files ( Round Robin Database, RRD ) to a different processor architecture is far from simple. If you don't mind losing history, you don't have to care about this. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/310",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/177/"
]
} |
340 | I need to edit some images in Windows. I plan to do more than simple things (not just color / contrast correction, but removing spots and filling in missing areas). Don't have memory / GPU / CPU constraints. The only constraint would be the price: I'm not willing to pay more than $400 in USD. The software would need to deal with RAW, JPG and PNG files. Requirements (short version) Must run on Windows Advanced options (remove spots, fill missing areas...) Low memory requirement Can be paid but must be less than $400 US dollars Must deal with RAW JPG PNG | I really like GIMP . It's free and it supports plugins, so, for example, you can use Resynthesizer & Heal Selection to fill missing areas . To open RAW files, you need to use the UFRaw plugin . Here is a screenshot: | {
"source": [
"https://softwarerecs.stackexchange.com/questions/340",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/214/"
]
} |
395 | I'm looking for some Windows software that allows you to do drawing/sketching but more on the amateur level. Some specific features I am looking for: Ability to make shapes (circles, squares and straight lines) Change colour of your pen Different pen types (brush, fine, spray paint etc) Ability to save to a .PNG format Easy "clear" button to clear your drawing surface without having to start a new file Change the size of your drawing surface If possible, auto-incrementing file names (ex Untitled 1, Untitled 2 etc) It shouldn't be too complicated to use and should be more centered around amateur drawing users. | There is Paint.net . It is free image and photo editing software. It is as easy to handle as MS Paint but contains a lot of advanced features. It supports different formats. Actually, it fulfills all your requirements. Easy to handle software Looks like MS Paint Plugin Engine (support for several plugins) Includes effects for images Forum and Community Contra: No possibility to draw a polygon. A plugin exists, but it did not work on my Paint.net | {
"source": [
"https://softwarerecs.stackexchange.com/questions/395",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/50/"
]
} |
435 | I'm looking for a good IDE for Python that should run on Windows 7 and higher. The program should ideally support the following features: Syntax highlighting Code Completion Debugger Support Support to run Shell side by side Support for CPython and IronPython Navigation to Definition (As in Visual Studio) It will be preferred if the IDE has good UI and docking support as in Eclipse At this time, I cannot acquire products that are not free, but I am willing to accept answers that describe a relatively cheap product. | PyCharm is made by JetBrains, the same people that make ReSharper, the C# refactoring tool.
It has a free and a paid version.
I found the free version to be quite good.
I've not tried the paid version. Requirements Checklist Syntax highlighting : Yes , Also has error highlighting, programming style highlight, and spelling error highlighting (I can't work out how to add a word to its dictionary, which is annoying) Code Completion Yes , menu comes up when you hesitate, and also is bound to the tab key Debugger Support : Yes Support to run Shell side by side : Yes? I've not tried but I can't see anything that could stop you. Support for CPython and IronPython : Partial (at least) I've tested it with CPython and PyPy, I've not tried IronPython. Cython is only in the Paid version. Navigation to Definition (As in Visual Studio) Yes , via the "Find Definitions" context menu option. (It is listed under a separate subheading in the results) | {
"source": [
"https://softwarerecs.stackexchange.com/questions/435",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/255/"
]
} |
460 | I have a server running Ubuntu Server 12.04, and I'd like to see disk IO stats. I've tried top and htop, and neither of them outputs anything resembling IO stats. Is there a command line tool that does this? | You are probably looking for iotop . It provides the information you are looking for per process. You will need to run it with super-user privileges if you run a recent kernel, since some changes to the NET_ADMIN permissions were made about a year ago. Simply install it and run sudo iotop Bwm-ng can also output some disk I/O stats when you cycle through the available methods. The advantage of bwm-ng on Linux over iotop is that you don't need the NET_ADMIN capability, so it will work as a normal user by default. It provides the information per device as you can see in the picture. If you want to have the lifetime stats of your disk, try smartctl -a /dev/your/disk | {
"source": [
"https://softwarerecs.stackexchange.com/questions/460",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/46/"
]
} |
506 | There are many ways to take a screenshot, edit it and save it to disk. But sometimes I want to take a series of screenshots, possibly quickly, and process them later. Is there a free Windows program enabling me to take a screenshot of the desktop (or active window) each time I press PrintScreen or some key combination, saving each one and giving me the possibility to view and process them later, when I'm done? | Windows 8+ only Assuming you're using Windows 8 or above, and want a whole-screen screenshot, simply hit + PrtSc . This works anywhere, and is about the only way to get a screenshot of Universal apps. You can find these under your Pictures/Screenshot folder in your user folder. Sadly, this doesn't work in older versions of Windows | {
"source": [
"https://softwarerecs.stackexchange.com/questions/506",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/77/"
]
} |
696 | Seems like there should be something. Basically it should provide a report: With the total time spent in program X With a graph of how much time was spent in program X at what time of day (think the OPSR Github graph of activity ) That works on Windows Bonus: Time spent actively engaged ( Very optional since I'm not even entirely sure what would be a good way to calculate that - perhaps time typing/mouse clicking or within 5 seconds of that?) Page address in browser(s) Filename in things like other programs (obviously that wouldn't apply to all other programs but thinking of e.g. notepad++, MS Word, MS Excel) Open Source Gratis | ManicTime is a pretty awesome piece of software (free/pro with trial).
It lets you track time by theme dimensions: Usage (Active or not) Application used Document (title of the document) Tags The pro version (which also has a trial) even lets you auto-tag time by using filters to select specific keywords. For example, you can auto-tag all the time you spend in facebook.com/ under 'Social networking'. It can also give you reports at the end of the month/week/day! I love it...
"source": [
"https://softwarerecs.stackexchange.com/questions/696",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/107/"
]
} |
710 | I use LICEcap fairly extensively when answering questions on Stack Exchange - it's a VERY intuitive tool for making gif-based screen captures. You open the application, select an area to record, choose a filename to save it, and do your thing so it can record. However, this is Windows and OS X only - I've occasionally toyed with using a Linux VM and capturing the VM window, but I'd like a native alternative that would record a gif the same way. What could I use? | Update 10/22/2014: Seth Johnson has improved the Ubuntu PPA so that only the Silentcast PPA is required. (Previously, 3 PPA's were needed.) Update 10/13/2014: Version 2.0 released. Added options to create webm or mp4 videos instead of just animated gifs. Added a script to do a full install without root privileges. Update 10/4/2014: Runs in Unity, no problem, and there's now a PPA for installation - Thanks to Seth for his Unity Indicator patch and PPA. Please have a look at the installation instructions below, which I've updated today. Also, you can run it without installing it . Follow the Any Linux Distro instructions, but don't run the install script . With Xfce , just open the extracted folder and double-click bash silentcast . With other desktops, run from the terminal from within the extracted folder. That's it! Nothing will be copied into your system files and deleting the extracted folder will completely remove it from your system. I wasn't happy with either of these answers so I wrote my own: Silentcast . If anything doesn't work for you, please file a bug at Silentcast Issues. Notice there's a stop icon in the Notification Area before I even start Silentcast, then a 2nd stop icon appears when recording begins. That's because I already had Silentcast running to make these animated gifs of how to use Silentcast. Silentcast 1 keeps going after I stop Silentcast 2. Fullscreen: How to use Silentcast to record Gimp Transparent: How to use Silentcast to record 2 windows Interior: How to use Silentcast to only record the drawing Entirety: How to use Silentcast to record 1 window Installation ... (skipping over some stuff - in the full README, this includes a list of dependencies and distro-specific instructions for installing them) Any Linux Distro Full Install Without Root Access Install missing dependencies (see the Dependencies table and Installing Dependencies by Distro above) Download a version of Silentcast: Should always work as intended: Download Latest Release of Silentcast from github.com Most likely working right: Download Silentcast master.zip from github.com Probably broken when in active development, otherwise the same as master: Download Silentcast next.zip from github.com Extract. Then, from a terminal, cd into the extracted directory and ./no_root_install Uninstall instructions are provided in the output of the no_root_install script. You can also see them in the comments to the launcher. See options with ./no_root_install -h . If installed to the default location, uninstall with the following commands: rm -r ~/.silentcast and rm ~/.local/share/applications/no_root_silentcast.desktop See what version you've got with silentcast -v .
[Check for a newer version](https://github.com/colinkeenan/silentcast/releases/latest) Any Linux Distro Full Install Install missing dependencies (see the Dependencies table and Installing Dependencies by Distro above) Download a version of Silentcast: Should always work as intended: Download Latest Release of Silentcast from github.com Most likely working right: Download Silentcast master.zip from github.com Probably broken when in active development, otherwise the same as master: Download Silentcast next.zip from github.com Extract. Then, from a terminal, cd into the extracted directory and sudo ./install Uninstall instructions are the same, replacing install with uninstall . The install (or uninstall ) bash script just copies (or deletes) files. You may want to edit them if your distro puts files in unusual places. See what version you've got with silentcast -v . Check for a newer version Ubuntu Linux Full Install For 14.04 and 12.04 run the following commands to install Silentcast (for older versions of Ubuntu follow the "Any Linux Distro" instructions below): sudo add-apt-repository ppa:sethj/silentcast
sudo apt-get update
sudo apt-get install silentcast Or run the following, condensed, command: sudo add-apt-repository ppa:sethj/silentcast && sudo apt-get update && sudo apt-get install silentcast Uninstall Run sudo apt-get remove silentcast . You can then remove the PPAs with sudo add-apt-repository -r like so: sudo add-apt-repository -r ppa:sethj/silentcast && sudo apt-get update Launch Methods Menu Hierarchy Graphics -> Silentcast Multimedia -> Silentcast Search Box Terms silentcast screencast record gif (and other things will work too) ALT + F2 silentcast Terminal silentcast Find Silentcast in the menu under either Graphics or Multimedia , type silentcast into the search box, or ALT + F2 silentcast . It can also be run from a terminal as silentcast . | {
"source": [
"https://softwarerecs.stackexchange.com/questions/710",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/125/"
]
} |
767 | I recently completed Tiny and Big: Grandpa's Leftovers , a puzzle game that revolves around its physics engine. The main game mechanic is being able to slice various parts of the terrain and objects, and then drag and push them around to solve puzzles. For example, in one of the levels you are confronted with an almost vertical cliff of stone and must slice off pieces to build ramps to get to the top. I'm looking for another game like that: Must be a Puzzle Game Must involve Physics , in the puzzles (Eg momentum, object destruction etc) Must be Character orientated , I want to play as a character. Should be First Person , a third-person game might work, but I liked the first-person kinda deal. Should NOT be combat focused , combat can be a thing, but it shouldn't be an FPS with puzzle elements. The puzzles must be the true focus. Should be 3D Ideally would be available through Steam Ideally would be for Windows and Linux , but being only available for one of those would be OK. Ideally would cost less than $100 I don't care how old or new the game is, so long as it will run on modern Windows/Linux.
I don't play many games, so I might have overlooked even games that are super well known. | Valve have released two games incorporating physics-based puzzles: Portal and Portal 2 . Both of these games are story-driven puzzle games from a first-person perspective where you take control of Chell and guide her through a variety of puzzles set up by an artificial intelligence, GLaDOS (Genetic Lifeform and Disk Operating System), at the Aperture Science testing facilities. The game primarily revolves around the use of the "Portal Gun" (the Aperture Science Handheld Portal Device), which can be used to open portals within the facility to navigate to otherwise inaccessible areas. As for your list of requirements: Both games are primarily puzzle games Physics is included, for example, momentum through portals is maintained and this is a key gameplay element You play as Chell, a survivor trapped in the Aperture Science Testing Facilities Both games are first person There is no "real" combat - in some scenes there are automated robots that will fire upon you, but this is just another part of the puzzle for that particular level Both games are 3D (see above screenshot) Both games are available on Steam Both games are available on Windows and Linux Both games can be purchased for less than $100 | {
"source": [
"https://softwarerecs.stackexchange.com/questions/767",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/180/"
]
} |
817 | I'm looking for software that would let me download YouTube videos as video files to the hard drive. Don't care much about the format being saved, as long as Windows VLC can play it out of the box. I'm not an expert on different formats of YouTube videos; if there are different ones, the software must support them all. Must save both video and audio. I strongly prefer Freeware. Automatic acknowledgement if some video triggers an "18+ years only" warning. Ability to download batches (e.g. GUI that accepts a list of URLs, OR command line interface with a URL parameter that I can wrap in a loop in a batch/Perl/Powershell script). Save all URL links (to different videos or other sites) embedded in the YouTube video. Format doesn't matter - could be just a text file on the side. Furthermore, it would be nice if the tool also considers the following options: Download video sets defined by YouTube, e.g. all uploads from a user; or an entire channel. Ability to remember download history, e.g. do "Download all videos from the channel you didn't already download". Skip advertisements. Not a problem if it doesn't. Furthermore Windows platform (XP 32 bit compatible preferred but not required) All things being equal (e.g. 100% same features and quality) standalone software is preferred over FF/Chrome plugins. But if a given plugin is better than anything standalone, I'm fine with a plugin. | I use yt-dlp for downloading videos from YouTube. It's a free console program (public domain licence), written in Python. I've used it on Windows and Linux and it worked well. (According to the official site it should work on Mac OS X too.) By default it downloads the video in the best quality provided by YouTube. If it's not playable with VLC you can get the available formats with yt-dlp -F "http://www.youtube.com/watch?v=..." and set it with -f : yt-dlp -f <formatId> "http://www.youtube.com/watch?v=..." It downloads 18+ videos automatically without manual intervention. You can download multiple videos with one command: yt-dlp "http://www.youtube.com/watch?v=..." "http://www.youtube.com/watch?v=..." ... It supports YouTube channels (and many more) . If it finds the downloaded video in the current directory it does not download it again. (If the download is not finished, it resumes it.)
"source": [
"https://softwarerecs.stackexchange.com/questions/817",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/261/"
]
} |
867 | I need a self-hosted replacement for Github. It is crucial it works on firewalled intranet, with no access to the Internet (for example, styles, license checks, etc.). Relatively good web UI: source code and commit browsing are a must. Support for git and/or mercurial. Support for both is a plus. SSH shell (repositories must be accessible over ssh, instead of just http, even though at least git supports all operations over http relatively well) Permissions: at least private/public repositories read-only and full access Same permission set for web UI and for SSH (when granting/modifying permissions, it should be reflected to both) preferably integration to LDAP (both users and groups for permissions) Pull requests (aka. merge request) Administration tools: creating repositories, granting access Simple issue tracker: creating tickets, commenting, closing, tags/labels Preferably search, including tickets, users, projects, filenames and inside source code Preferably forking from web UI Preferably runs in Linux Must be either open source (which means it is okay if it is missing some minor functionality) or affordable (>2400€/year for 30 users is too expensive). I know there is at least: Cydra ( website says "under construction") Gitlab (seems to be the best alternative) Gitbucket Gitorious Github enterprise (way too expensive; 10000$/year) But I don't have experiences with these - this list is not excluded from answers in any way. However, as I already know there are some alternatives, so I'm not looking for a list of possible solutions, but recommendations based on what you have used and tried. I can use a search engine too, so there is no need to post answers with only copy-pasted content from the first hit. | We've used GitLab for over a year to host projects of my students. TL;DR;EDIT: there used to be a demo , but now it's missing. You can register for free and create some public repositories. I must say I am really satisfied. As an iteration through your requirement is encouraged on this site, I'll do just that. Relatively good web UI : You can browse source and history, statistics (global and per user) and graphs of commits (like "network" on Github). You can comment each line of commit from GUI, it's a great feature! Sorry, but I can't provide any screenshots, I'd have to manually anonymise them. Generally it's similar to Github. Support for git and/or mercurial. Support for both is a plus. Git only. SSH shell (repositories must be accessible over ssh, instead of just
http, even though at least git supports all operations over http
relatively well) : It's like in Github. HTTP for read-only access, SSH for read-write. Permissions: at least private/public repositories : It's there. read-only and full access : You can define roles (I believe the defaults are master, developer, reporter, guest). Same permission set for web UI and for SSH
(when granting/modifying permissions, it should be reflected to
both) : I believe it works just like that, but as I don't have admin access right now, it's hard to test. But, again, it's like github. preferably integration to LDAP (both users and groups for
permissions) : We have that. Everyone logs in via LDAP, staff with more privileges than students. BUT I can't really tell whether that was very easy to set up; it's just possible. Pull requests (aka. merge requests) : Present. Administration tools: creating repositories, granting access : All from the web interface, with a nice search for users and the ability to define groups of users. Simple issue tracker: creating tickets, commenting, closing, tags/labels : Yep, it's there. Not sure what you mean by tags though, couldn't see anything like this. Milestones? Preferably search, including tickets, users, projects, filenames and inside
source code : This is probably the least polished feature of GitLab. You can search for users/projects/groups and you can find the content of files, but not a filename. I find it quite clumsy. Preferably forking from web UI : Present. Preferably runs in Linux : Obviously ;-) Upgrade process : it's pretty straightforward if you know your system. Every release has its own upgrade guide, which always assumes the default, recommended setup (i.e. paths, users, commands etc). If you have a non-standard (in their terms) system, or if you customize your setup, you'll have to spend a little while to pimp everything up, but it's never complicated - mainly a new clone, running a few scripts and you're done. Never had any problems, but I stopped following the process quite a long time ago. UPDATE GitLab now includes (as of 6.4.2) an easy upgrade script . Assuming you have the standard system layout, the actual upgrade process is now a single command. It's under active development with a new release every month, so it's definitely worth trying. It's open-source and free for commercial use. An Internet connection is not required for GitLab to work. You will need Internet access to set up GitLab because it downloads its dependencies from RubyGems. Alternatively, you can build a RubyGems mirror, or do the install on another server and copy the complete install directory (by default /home/git/ ) to this server. Screenshot | {
"source": [
"https://softwarerecs.stackexchange.com/questions/867",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/130/"
]
} |
1,022 | I somewhat like (read: am in love with) Stack Exchange's implementation of markdown. In fact, I often try to use it when writing emails. Much to my disappointment, this never works. Is there an OS X email client that supports composing in Markdown? | I recommend Mozilla Thunderbird + Markdown Here add-on. Mozilla Thunderbird is an open-source, cross platform e-mail client that is extendable with add-ons (plugins). When you install the Markdown Here add-on, you will get a new toolbar button labeled "MD Toggle" on your message composition window. (Note: after you install, you may have to right-click the toolbar, select Customize, and manually add the button to your toolbar before you will see it.) You can then type your message in markdown, and click the "MD Toggle" button (or Ctrl - Alt - M ) to toggle back and forth between markdown code and rendered output. It works very well. The advanced options are nice too, as you can alter the CSS rules used in the markdown rendering, if you like. It also supports GitHub style code syntax highlighting and TeX mathematical formulas. The TeX output is done using a Google API , and the API returns an image. I have only been using it for a few days, and I like it, but there are a few watchouts: If you want to use Markdown, you need to hit the "MD Toggle" button before sending, or it won't render the Markdown. I like this feature, as I don't necessarily want it interpreting every e-mail I send as Markdown. If you don't hit the MD Toggle button, the e-mail goes out just like it always did without the add-on. After you hit the "MD Toggle" button, resist the temptation to make changes to your message while in Markdown rendered mode. If you edit while in this mode, Thunderbird will introduce some extra code in there that will confuse the add-on if you decide to toggle back to source mode. Write your code, toggle to render mode, check that everything looks good, and if you see something that needs to be changed, switch back to source mode and make your changes in Markdown, then render again before sending. By the way, if you want to use this with your favorite webmail provider, Markdown Here is also available as a plugin for Firefox, Chrome, or Safari. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1022",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/46/"
]
} |
1,237 | I'm looking for a password manager which is available as an Android app and a Linux desktop application alike. I know there's e.g. KeePassDroid (see: Password manager for Linux with just working in-browser autotype ), but that does not fit my requirements, as it has a "fixed layout" when it comes to its "password form". What I need: must be available as a native Android app and a native Linux desktop application specific forms for different "password" types, such as credit cards, login information, etc. folder-like organization (or at least categories somehow) secure storage (i.e. good encryption) data file location should be configurable (requirement for the next item) Sync between Desktop and Mobile must be possible by simply synchronizing the database file Android app must not require network access (sync will be done separately) Preferred options (not mandatory) nice GUI icon-sets to select icons from for folders and "leafs" is contained in Ubuntu repositories (PPA is fine, a .deb would do, no problems if it comes as .jar or ready-to-go .tar.gz however, or source if not too many dependencies (I feel fit enough to configure && make && check-install ;) I'm currently running Ubuntu 12.04, but plan on switching to Debian with my next install, which won't be too soon, however) Candidates tried, but failed: KeePassDroid / KeePassDroidX : While offering clients for multiple platforms, the format is fixed for "web logins" (name, url, login, password, comment), which is unsuitable for e.g. credit or debit cards. Pocket : while offering an Android app and a Java desktop app, and seemingly even different "forms" suitable for credit cards, logins, and more, the Android app requires the Internet permission, and it wants to sync via Dropbox. Also I couldn't get the desktop app working (it required an existing database, which was of course not there to start with), using the same database on both ends involves permanent renaming of the file (both ends are fixed to different file names), plus the desktop app has trouble with mixed-case directory names on case-sensitive file systems. | The easy thing about your criteria is that you don't actually need a matched set. As long as the data file is completely interoperable, any combination of unrelated apps will work. TL;DR KeepassX 2 + Keepass2Android use the same data format and are the only pair I know of to meet all your criteria, even though there are a couple "gotchas". Data format As far as interchangeable password-manager data formats go, there is basically one 800-pound gorilla in the room. The data format originally conceived for KeePass is both well established and widely supported. The cryptography used has also been extensively peer reviewed, so it is arguably safer than many smaller players or commercial solutions that use proprietary formats. The trick is going to be that you need to use version 2 of the data format (kdbx). Your criteria include several items, including the need for custom data fields, that were not possible in the version 1 database format (kdb), which restricted entries to a pre-defined set of fields that made it suitable for a rigid "login credentials manager" role but not the "private data manager" role that you are looking for. History The original KeePass software was written for Windows, and the 1.x series only worked on that platform (although it worked under WINE, so some of us got Linux mileage out of it before there were alternatives).
The 2.x series is basically a rewrite with many advancements, and it includes a port that runs under Mono, adding Linux, OS X and BSD support. I would actually recommend not using the original client software and using some of the alternatives instead. The pair I use seems to match all your criteria perfectly, with one caveat. The keepass database format allows arbitrary key/value pairs to be stored with each entry. While theoretically this could be used with a smart interface that intelligently adapted to different entry types (e.g. website login, credit card data, passport, etc), to my knowledge no clients yet do this. What you can do is use the arbitrary fields to organize your own data. That caveat aside, I tried a LOT of alternatives when I was picking my own solution and was unable to find a better pair. If something else is out there that meets your criteria better I would like to hear about it too, as our needs seem similar. This is the best setup I could arrange. Linux Client Recommendation The KeePassX project has been around quite some time. It was originally conceived as a parallel to the Windows project and indeed called KeePass/Linux. After the original project got a port of its own, the name KeePassX was adopted and the code has in fact been ported to also run on Windows and OS X. As you can see from the commit log it is actively developed, but unfortunately the project has always suffered from very long release cycles and a hesitancy to call anything stable that hasn't stood up to years of testing. For your purposes you will need to use the 2.x series . If your distro still has the 0.4.x series, the data format won't be interchangeable with the Android app in this recommendation. Since anything you would be putting in such a system is obviously important and irrecoverable if you were to corrupt it, you should definitely have a fail-safe backup system in place. I like to keep my database in a private git repository so there is a versioned history of it spread across many of my machines, as well as some special backup provisions. You said you are going to be syncing and managing the database file yourself. This is fine, just do your homework and do it right. It is NOT the client software's fault if a corrupted copy of the DB gets synced across all your devices and clobbers your backups! The latest KeepassX 2 release tag as of this writing is 2.0.3, but check the project news and source code tags for new ones. That source can be downloaded, compiled and installed from the announcement page, or you can download an up-to-the-minute zip from the project's Github mirror. Android Client Recommendation You mentioned you had tried KeePassDroid , which was an attempt to port the KeePass 1 software to Android. I found the interface to be clunky and (when I last tried it) it did not support custom fields. There is support for kdbx format files (marked beta) but not all the features are exploited. Instead I use Keepass2Android and find the interface to be better than any of the other KeePass compatible clients available. There is an offline version that is stripped of all connectivity options if you prefer to do the sync yourself, so that criterion is met. Custom fields are also supported and the interface even makes this relatively simple. The download is a little heavy, weighing in at an unwieldy 13 MB, but in practice the front end is clean and fast and has been updated regularly to keep pace with the latest Android UI guidelines.
The backend is heavy because it wraps up other widely used code for the actual encryption, but this does mean you can be pretty sure that the cryptography is being done right, rather than by some one-off design. Bonus Let's say you are on a Linux box (or almost any other desktop platform) and need to get into your password database but can't be bothered to install a program from source or fiddle with custom package repositories. There is an open source JavaScript client with read-only support for KeePass (kdbx only) files called BrowsePass ( Chrome extension here ). | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1237",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/185/"
]
} |
1,300 | Is there a good remote desktop software tool that I can use to control my Windows and Linux PCs from my Mac, and vice-versa? I don't care if it uses up lots of internet bandwidth. Desired features: Easy to set up Secure Gratis Ad free Able to work with different IPs Windows, Mac and Linux compatible | TeamViewer is a great remote desktop tool that works on Mac, Windows and Linux, and it is free for non-commercial use. It is also very easy to set up: just install it on both PCs and enter the ID and password of the PC you want to connect to. It also has a feature to connect to your own PC when you are not near it, and you can connect to a different platform than the one you are running (e.g. Mac to Linux). I have found it to be very easy to set up and use, and quite reliable. You can also set it to open on boot so it is always ready. It meets the requirements: It is very easy to set up (login optional). It is free. It is very secure. It is ad-free. It works with IPs outside of your local network. One thing though: it is not open source. You can download it free here. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1300",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/494/"
]
} |
1,308 | Personally I'd just install Cygwin and proceed with the usual *nix shell tools, but I need to make a software recommendation to some clients that need something a bit more newbie friendly. The situation is that several clients have developed website(s) of the mostly static HTML sort, but the server they need to deploy to doesn't have the usual collection of 1990s protocols available (for example, no FTP ). In fact, there is no access to any graphical interface. Deployment is handled through Git. Changes need to be pushed to a remote repository that is accessible only via SSH key login. Any commits to the master branch pushed by the authorized key trigger a hook script that deploys the site to the production servers. I am looking to suggest a Git client for Windows that: makes it relatively simple to set up and initialize… …one or more local repositories. …a single git+ssh remote. …authentication using an RSA key pair (generation of this would be a bonus). has a simple interface where a basic workflow of committing and pushing is easy to accomplish without understanding the intricacies of distributed version control. There is a GitLab instance available for each client that has one project per domain and makes adding their public key fairly easy. It also gives the clone/remote URLs for each project and makes it fairly easy to check what the status of the remote repository is. Open source would be preferred, but any reputable freeware would be acceptable. What client software should I point them to? Edit: Most suggestions to date seem to focus on full blown front ends to all of Git's functionality. I'm looking for something more pared down that only covers the basics and is better suited for a specific task than running with the big dogs. I'm thinking the KISS principle here for people that do not use version control for anything else and just want to "upload" their websites. | I am using Atlassian SourceTree and like it a lot. Here's the drill: Free (not open-source though, AFAIK) Feature rich - Almost all the features of Git are there (not of GitHub, though, e.g. I didn't find a way to rebase a GitHub fork. It's doable using ordinary Git commands - adding a remote, etc. - but not out of the box) Supports GitFlow Nice UI: NOTE - This screenshot is from a much older version. The UI of the newer version is simpler. Bottom line - I find it almost perfect and use it for all the needs not covered inside my IDE. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1308",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/429/"
]
} |
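The question above lists RSA key-pair generation as a bonus feature; whichever client is chosen, that step can be scripted independently of the GUI. A hedged sketch shelling out to the standard ssh-keygen tool (the key path and comment are illustrative assumptions; ssh-keygen ships with Git for Windows):
import os
import subprocess

key_path = os.path.expanduser('~/.ssh/id_rsa_gitlab')  # assumed location

if not os.path.exists(key_path):
    # -N '' creates the key without a passphrase; prefer a real passphrase
    # unless the deployment workflow must be fully unattended.
    subprocess.check_call(['ssh-keygen', '-t', 'rsa', '-b', '4096',
                           '-f', key_path, '-N', '', '-C', 'gitlab-deploy'])

# Print the public half, ready to paste into GitLab's SSH keys page.
with open(key_path + '.pub') as f:
    print(f.read())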
1,469 | I would like to identify a simple, FOSS version control system for plain text documents. For more complex (academic) writing, I happily use LibreOffice. But increasingly I find it convenient to write more simple documents (reports, presentations, lectures, note-taking, whatever) in Markdown, usually using ReText . My goal now is to manage versions of these documents. A scenario might be: drafting a presentation --> "finished" version delivered at Event A --> now redrafted --> delivered at Event B --> now occasion C comes along, for which the Event A version is the better one --> pulled and drafted for Event C. So, my requirements are: version control (essential, obviously!) Ubuntu/Mint friendly (PPA would be great) easy tagging/commenting/labelling of "commits" simple tag/label browsing simple tag/label searching simple "recovery" of previous versions possibility of doing diffs possibility of syncing to different machines simple directory structure/hierarchy for docs One obvious solution that I'm aware of would be to use Git to manage the version control (and a number of other write-ups on around the web). I'm not wholly averse, and have used Github casually both from Windows and Linux (Ubuntu, Mint) for some years. But the key word there is " casually " -- and it seems a bit of a sledgehammer-to-nut scenario. (I have also seen the question about a " Document manager for paperless office ", but that appears to go well beyond my needs.) There may well be other options out there, and certainly there will be tools I have never heard of. Grateful for any help with this one. | Yes, you should use git *. Now let me explain why. Given the current (rather nebulous) set of criteria in your question the answer seems fairly obvious. If you knew any more, you wouldn't even be asking this question. You have already brought yourself to the edge of the cliff, now you just need coaxing to make the jump. * Or Fossil, Mercurial, Darcs, Bazaar, or other DVCS depending on the front end tooling you prefer, to be explained. The current scene and some history: There are basically three kinds of version control systems: distributed , ham-fisted , and tapped-out . Allow me to expand on that technical terminology and how each came to be and would apply to your situation. ham-fisted Notable entries*: CVS, Subversion. Before DVCS systems took the world by storm, there were VCS systems. These could be characterized by a central repository/server and a star pattern of user-workspaces/clients. These were an indispensably valuable tool for keeping a team of programmers on the same page and even adapted themselves to other uses. A single programmer could work from multiple systems and play around with branches and tags. They saved many a day. But they were inherently clumsy. They make some simple tasks harder . First there was the overhead of setting them up, the need for a server and specific protocols to connect them. Then there was the pain of dealing with those times you did something wrong. Suffice it to say these would get the job done for your use case but would introduce trade offs making life more complicated. tapped-out Notable entry: RCS. For when a "full blown" system involving servers and clients and authentication and all that jazz was too much to swallow, it was possible to use a pared down system that lived in its own little bubble. RCS did this by eschewing the idea of a repository and just versioning one file at a time. 
You had file.txt and sitting right next to it a file.txt,v that had the version history. You could instantiate it easily on a per-file basis and use a handful of simple tools to work with diffs, roll back time, etc. Now before you say, "Ah ha, that's just what I was looking for!", please read on, because this is not the easiest or recommended way to do this any more. Easy entry came at the cost of a low operational ceiling that is pretty much guaranteed to cramp your style sooner or later. distributed At some point a bunch of smart programmers decided they had had enough of the pain and said, "We are going to eat cake and have it too." Amazingly, they succeeded, and distributed version control systems were born. These systems combine the best of both worlds: complete version history sitting on your local system right next to the original files, plus the ability to share your history and interact with that of other remote repositories. It turns out there are no serious technical disadvantages to doing this. The most significant barrier for some shops turned out to be the flexibility. Because the systems don't impose arbitrary restrictions on the way you work, it is sometimes painful to migrate from a system that forced you to have a certain workflow. Suddenly you have to think a little bit about how you want your system to work. Many things that used to be required (a central node that always has everybody's latest stuff) became a matter of convention to be used only when desired, but you have to sit down and say, "This is how we are going to use this tool." So let's sit down and say how you would use this tool. For the purposes of this answer I am going to stick to git because it is one of the most widely adopted systems. It is easy to install on most any system (on the off chance it isn't installed already) and there is a wide range of documentation available covering almost any use case. It is also extensible and recognized by many third party systems, etc. That being said, it is not necessarily inherently better than its nearest competition, Bazaar or Mercurial. If you live in an Ubuntu ecosystem, you might want to give Bazaar a look because Canonical uses it for everything and it will integrate well with your environment. Their Launchpad service is similar to Github, but tailored to Ubuntu software development. If you plan to play in that ecosystem, consider learning bzr instead of git so that one tool works for both your personal world and the ecosystem you participate in. If you don't work on projects coordinated on Launchpad, I would suggest using git. If any of your colleagues is big into Mercurial, you might want to look into using it. It's a very capable DVCS with some advantages over Git. It's frequently alleged to be faster for some operations due to a more streamlined data flow. The tooling is all wrapped into a single binary rather than being cobbled together from a bunch of separate (and sometimes redundant) tools like Git is. It is extensible using Python bindings and it's possible to build external systems that integrate very tightly with it. The paradigm is similar enough to Git that once you learn it you'll also be able to blunder your way around a git repository. In the end, however, git is the most popular player in the field right now and sticking with it will give you readier access to help when you need it. * My apologies to all the VCS systems not named. CVSNT, CA, CC, Perforce, Plastic, PVCS, Star, SVK, Vault, Vesta, VSS and a litany of others.
Your names will never be forgotten... or rather, they are already just a memory. How git fits your use case: You mentioned having used Github a little. That's great, but you need to keep in mind that Github is not git . It is a common misperception that what they are offering is a free way to get into the system by hosting your repositories for you. In reality what they offer is a layer built out on top of git that is part social-networking and part project-management. This is a great thing for the open-source community. Instead of trying to be an alternative system and fighting the market, they have brilliantly leveraged a good tool and carved out a market providing a value added service for corporations and serving the community at the same time. But Github is not git. Git can, in fact, be used in a much simpler fashion. Similar to RCS, git stores the version information locally right next to your content. The notable difference out of the gate is that it does this on a per-directory basis rather than a per-file basis. RCS: file1.txt
file1.txt,v
file2.txt
file2.txt,v The ,v file for each file keeps a running list of the file history, storing the delta between each consecutive version. git: directory
+ file1.txt
+ file2.txt
+ .git
    + glob1
    + glob2 The stuff in the .git folder actually has funky names and is kind of complicated, but you don't need to know about it. Conceptually all it is doing is storing the differences between versions of the stuff in your directory. Basically each glob is an image of what your directory looked like at the time of each commit. A lot of fancy math keeps the overhead down so that only the delta data is saved. Now this may sound complex already, but you really don't need to know any of that. The tool-set keeps track of all the fancy stuff for you. The basic usage is every bit as easy as RCS but gives you room to grow down the road. Getting started would go something like this: # Change to the directory that has files you want to version.
cd ~/pathto/yourtextfiles
# Initialize git to keep track of that folder
git init Done. No servers needed. Just you and your version controlled files. Except you don't have any files under surveillance yet, so git isn't actually watching anything. Git is not greedy in the sense that it does not keep track of everything in a folder, only the specific things you tell it to. So the next step is to tell it what files you want to track. # Add some files to the system, assuming these already exist in your dir
git add file1.txt file2.txt
# Commit the changes you just made
git commit -m "initial add" Note that unlike most systems, this is a two step process. Before you commit things and stuff them in the repository as a new version, you have to 'stage' them. Not every change you make to your working directory is automatically assumed to be something you want to save to with each commit. Maybe you changed two files but only want to mark one. # Edit a bunch of files
vim file1.txt
vim file2.txt
vim file3.txt
# Only mark one of them as going in your next commit
git add file2.txt
# Commit it to history
git commit -m "fixed typo" Note that the changes to file1.txt are not yet saved to the repository and file3.txt is not tracked at all yet. You can see that a previously tracked file has unstaged changes by running git status and what those changes are by running git diff . It this point status would tell you that you have made changes to file1.txt and that there are there is an untracked file3.txt in your directory. Diff would show you the changes you made to file1.txt since the last time you staged and commited it. One 'gotcha' that you should be warned about starting out so it doesn't bite you down the road is that – because git's concept of a repository is a whole directory and the changes to files are seen as an image of that directory at a point in time – you should consider having separate repositories for disparate projects. Rather than making a single repository out of "My Documents", you should make separate repositories out of directories that contain some meaningful subset of your documents, whether per-topic or per-format or per-project or whatever. This will make it easier down the road when you want to work with "all documents related to x" from another machine without having to have every document you've ever created on that machine. Git does not allow you to checkout a subtree of a repository*, it is all or nothing, so err on the side of making many granular repositories for closely related data sets, one per directory. Really that's all there is to basic usage. From there, almost anything you can imagine to do is possible, but at that point you would be asking a usage question, not a software recommendation question. * Subversion for example did allow this and I got used to it. This bit me early on when I assumed git would allow something similar. I had ALL my personal files in one large svn repo and naively assumed git would be a drop in replacement for that. Lesson learned, and my files are the better off for being categorized. But command prompts are scary! There are, of course, a multitude of front end GUIs available to keep you off the command line and visually represent what is going on with your files. Many of these have an IDE-ish flavor to them and might serve as your entry point into document management, using them to launch your favorite editor* or even use a built in one. Since you asked about the back-end about how your files should be stored, I have made a recommendation to use git as the version manger, but if you have in mind a way this should look and work from a front end perspective, you should ask that as a separate question. * Of course gvim would be well suited for this use case ;-) In conclusion: Come on in, the water is fine! One! Two! Three! Jump git init . See that wasn't so hard. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1469",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/435/"
]
} |
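Since the workflow in the answer above boils down to the same add/commit cycle each time, a one-command "snapshot my drafts" wrapper is a natural next step. A minimal sketch, assuming the repository path and the timestamped message convention (both are illustrative, not part of the answer):
import datetime
import subprocess

REPO = '/home/me/documents'  # assumed path to the directory where 'git init' was run

def snapshot(message=None):
    # Stage every change, then commit only if something is actually staged.
    if message is None:
        message = 'snapshot ' + datetime.datetime.now().isoformat()
    subprocess.check_call(['git', 'add', '--all'], cwd=REPO)
    # 'git diff --cached --quiet' exits non-zero when staged changes exist.
    if subprocess.call(['git', 'diff', '--cached', '--quiet'], cwd=REPO) != 0:
        subprocess.check_call(['git', 'commit', '-m', message], cwd=REPO)
    else:
        print('Nothing to commit')

if __name__ == '__main__':
    snapshot()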
1,496 | I'm looking for a plugin that can indent code (HTML, CSS, PHP, ASP, etc.) in Notepad++ . In Visual Studio (and a few other Microsoft editors), you can auto-format a document with a simple Ctrl + K , Ctrl + D . This inserts line breaks and tabbing automatically. Is there a similar feature in Notepad++? Requirements Work with Notepad++ Windows 7+ Must implement a shortcut for it Not too "heavy" (not more than Notepad++ itself) Optional May have other features Able to define how to indent each language I looked and couldn't find any built-in feature that covers it. I also gave a try to Code alignment for Notepad++ . There are good tools in Code Alignment, but not what I'm looking for. Below is what I'm searching for. TextFX didn't do the job as I expected; maybe I didn't use it as I should have? // Some examples using PHP, but I want it to be used in other languages like ASP as well as CSS and HTML
<?php
public function x()
{
$foo = 'test';
$bar = 1;
return $foobar;
}
?>
<?php
public function x()
{
$foo = 'test';
$bar = 1;
return $foobar;
}
?>
// What I'm looking for
<?php
public function x()
{
    $foo = 'test';
    $bar = 1;
    return $foobar;
}
?> | Why don't you try the Indent By Fold plugin? Here's your PHP code indented by fold : (The image above is not assembled from two separate images! Notepad++ has the ability to clone its tabs in a new view.) You can access a screencast demo for the 'Indent By Fold' plugin. There's no separate plugin for the 'auto complete' feature in the video; for most languages there are already defined XML files with keywords: Now, about the 'Code Alignment' plugin: use it only if you are not satisfied with the 'Indent By Fold' results! Here's how you can define a shortcut for the indent operation: The Ctrl + K and Ctrl + D shortcut keys are already "taken" (by 'comment code' and 'duplicate selection'), therefore I've chosen another combination. But everyone can re-map all the commands according to their own needs. If you ever find a language with "weak" code formatting / folding, you can take the lead and proceed to define your own folding and coloring rules for keywords, comments, numbers, operators and delimiters: See how beautifully 'Indent By Fold' works when I press Alt + K ? The vbproc keyword is underlined because the 'DSpellChecker' plugin is active. Here is a large collection of UDLs (User Defined Languages) for Notepad++. To better understand how to use this feature I recommend reading the UDL 2.0 online documentation . There was a time when folding was possible only for single words like "BeginSub" and "EndSub". Now it is easier because folding can be done using expressions, as you can see in my "My better ASP" example. Even now, the UDL cannot address every imaginable situation (there is a work in progress called UDL 3). But can we blame the Notepad++ developers for not achieving perfection with this free and simple, yet wonderful utility? | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1496",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/391/"
]
} |
1,530 | Since Facebook bought WhatsApp, I'm looking for an alternative app and chat network which is able to replace its functionality. What can you recommend? Requirements: End-to-end encryption group chat search contacts in address book of the phone Ad free can cost up to € 2 once share photos, videos nice to have video chat Android and iOS support Are there any alternatives available? | As a protocol, XMPP (formerly known as Jabber) would fulfill a lot of these requirements (possibly because WhatsApp is using XMPP on the backend). It is massively extensible, so I'm picking a specific client that covers a lot of these requirements. It's impossible to cover all of them, since features like picking a contact from your address book only work because WhatsApp simply uses a phone number as a username. Other than video and group chat, any Jabber client with OTR would do - I'd recommend ChatSecure - it does seem to do multi-user chat (though not chatrooms) without OTR, and runs on your two target platforms (and you can use Pidgin or Kopete on the desktop side for OTR conversations). It's free, can send photos and videos the usual way, and does everything but search contacts in the address book of the phone. Video would be covered by Jingle, but I can't seem to find a good, well-recommended Android client outside the old, obsolete Google Talk. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1530",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/193/"
]
} |
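To make the XMPP recommendation above concrete, here is roughly what a one-shot message over the protocol looks like in code. A minimal sketch using the SleekXMPP library; the JIDs and password are placeholders, and note that OTR encryption is layered on top by clients such as ChatSecure rather than handled in this snippet:
import sleekxmpp

class OneShotSender(sleekxmpp.ClientXMPP):
    '''Connect, send a single chat message, then disconnect.'''
    def __init__(self, jid, password, recipient, body):
        sleekxmpp.ClientXMPP.__init__(self, jid, password)
        self.recipient = recipient
        self.body = body
        self.add_event_handler('session_start', self.start)

    def start(self, event):
        self.send_presence()
        self.get_roster()
        self.send_message(mto=self.recipient, mbody=self.body, mtype='chat')
        self.disconnect(wait=True)

xmpp = OneShotSender('me@example.com', 'secret', 'friend@example.com', 'Hello over XMPP')
if xmpp.connect():
    xmpp.process(block=True)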
1,897 | Community managers tend to have issues with tabs, in particular, an over-abundance of them. While applying a recent batch of system updates, I closed about 150 tabs between two browser windows. The reasons for having so many open are broadly incidental, just running down suspected voting irregularities can lead to opening ten tabs, sometimes more. We also work in a highly interrupt-driven environment, a particular trick to not forgetting to follow up on something is to leave a tab open, and check tabs individually as you close them - the logic being that you'll remember why you had something open, and finish whatever it was you meant to do. That works well in theory, but ... I'm looking for something I can add on to Chrome that lets me annotate a tab, somewhere easy to find, with notes on why I'm leaving a particular tab open. These would be terse notes, intended only to jog my memory - something like: Find out if this shipped for sure, order it again if it didn't. Or perhaps Definitely some crazy cross voting on a few sites going on here, run it down and clean it up everywhere prior to messaging Anything that lets me quickly recall why something was left open would work. At the minimum, I need the following features: Easy one-click access to an icon in the tool area (similar to screenshot tools, etc). Clicking gives me a short text box, where I can save an annotation Easy button to see an annotation for a tab while viewing it Easy dismiss button to clear an annotation without having to close the tab itself Nice to have: Tabs with annotations are visually distinct from tabs that don't have annotations, preferably when they're all scrunched together due to too many #@^& tabs being open If a tab with an annotation is closed without dismissing the annotation first, (optionally) bring the annotation back if I visit the same URL again (Chrome loves to crash when you have too many open tabs) Hovering over a tab with an annotation shows the first 80 - 100 characters of the annotation 'tool tip' style Really spiffy if: Syncs with Chrome like most other things, so I can inherit annotations that haven't been dismissed across browsers I'm not particular on implementation details, or the state of the extension (alpha / beta is fine with me). I don't care if it doesn't sync and only works off of local storage / etc - anything beats what I currently have which is basically nothing. Additionally, I'm not concerned about compatibility with the various mobile versions of Chrome. Windows 7 + gets the job done for me, compatibility with Linux would be swell, but not needed. Is there an extension that does this, and can help me make simply leaving tabs open a more productive tool in my workflow? | Update: I published this extension here . So I just created a quick extension that does this: AnnoTabe It's a popup that one can add an annotation to. The update button updates the annotation (and closes the popup). By default, the "persist" checkbox is clicked, so the annotation is associated with tab URL , and is not cleared unless explicitly dismissed. This will sync across devices too. If the persist checkbox is not clicked, the annotation is associated with internal tab id and is cleared when the tab is closed (or the annotation is dismissed). The icon will turn yellow on pages which have annotations, for easy identification. There is also a list of all annotations, accessible from the "show all annotations" button in the popup. 
This lets you dismiss annotations from one place, and also acts like a tab-switcher with the "Go to tab" button. Hat tip to @TildalWave for making the icon! :) Enjoy! | {
"source": [
"https://softwarerecs.stackexchange.com/questions/1897",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/177/"
]
} |
2,004 | I know on Windows 8 you can use this hidden shortcut to take a screenshot of the screen and copy it to your clipboard, but I'm looking for an application where I can either set up a shortcut, or that has its own shortcut, which would then save the current window to my clipboard. I don't care about editing the images, saving them anywhere, anything fancy. All I want is to be able to put it on the clipboard so I can easily open up Imgur or SE chat and paste to upload them. Optimally, there'd be another alternative shortcut which would let me select which area of the screen to actually take a screenshot of, but that's not a requirement. In a dream world I'd basically have a 1-to-1 clone of the OS X utility Grab, but I don't think anything that easy exists. | You don't need a tool; you can use Alt PrtSc to take a screenshot of the currently selected window. Works on all versions of Windows. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/2004",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/84/"
]
} |
2,236 | I am looking for a (preferably free) task manager application that is more robust than the default Task Manager that ships with Windows 7. Functionality I am looking for: Alter Security Levels Suspend processes (and resume) Ability to identify unknown tasks ("what is boblovesdata.exe?!?!?") Identify files being accessed by certain processes (you can't delete that it is in use!!!) Speeds for Read/Write | Process Explorer , part of the Sysinternals suite, does everything you want. Alter priority levels simply by right-clicking on the process and selecting a priority level. Suspend/resume processes by right-clicking, and selecting suspend/resume. See what the process does/is by looking at the columns in the program next to the processes. If that doesn't work, right-click and select Search Online, which will search for the program in Google. Identify what program is using a specific file by hitting Ctrl + F , to bring up this box: Then simply type in what file you want to see what process is using, and after it searches, you'll see something like this, with the process name, PID, name, and other data: See the read/write information, either by right-clicking the process and selecting properties, or adding an "IO" category to the view. To my knowledge you can't see exact read/write speeds. It's a little old, but it still runs well and has a good number of features. Here's what it looks like in normal use: | {
"source": [
"https://softwarerecs.stackexchange.com/questions/2236",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/259/"
]
} |
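Process Explorer's Ctrl+F handle search ("which process has this file open?") can also be reproduced in a few lines of script, which helps when the "file is in use" problem appears on a machine without a GUI. A sketch using the third-party psutil library; the searched file name is an illustrative assumption:
import psutil

target = 'boblovesdata.txt'  # illustrative file name to look for

for proc in psutil.process_iter(['pid', 'name']):
    try:
        for f in proc.open_files():
            if target.lower() in f.path.lower():
                print('PID %d (%s) has %s open'
                      % (proc.info['pid'], proc.info['name'], f.path))
    except (psutil.AccessDenied, psutil.NoSuchProcess):
        # Some system processes cannot be inspected without elevation.
        continue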
2,932 | I used to use Google Reader a lot to check blogs I follow daily. It was really good and user friendly. But Google closed Google Reader, and people who used it face a bit of a problem. Is there an alternative that is as good as Google Reader was? These are my main requirements: Advertisements are OK, as long as they are not too annoying. Manual categorization. Instant updating is a must. Web-based, but we can also view it on mobile when needed. Unicode support (especially Sinhala). | Yes. In fact there is something better than "the original". After Google Reader turned out the lights, I experimented with a long string of RSS readers and aggregation systems. I eventually settled on Feedly and tried to camp out there. While the interface is polished and it does most things well, after a couple weeks I was frustrated with how little customization could be done. Adding feeds from mobile was also a pain and there were other minor annoyances, so I went hunting again. Eventually I dug up InoReader and have not looked back! I actually love the service. Not only does it sport feature parity with what Google Reader was, it allows a number of usage and interface customizations out of the box that I used to turn to elaborate user-scripts for. It has a low-key but highly functional interface that provides an efficient workflow for consuming feeds while staying out of your face both in browser and on mobile, integrates without being invasive and has never left me wishing "if only this did/had X". As for the specifics you asked about: Feeds can be organized using folders (which actually work like tags because a feed can be in multiple categories) and you can also filter for finer-tuned control. Feeds are updated instantly in the interface as the crawler finds new posts, and the notification system, either from inside the site or using a browser extension, can make you aware of the updates in a timely manner. The service is free for most usage (although a paid version exists that allows you to do extras like search through ALL feeds they track, not just your subscribed ones) and you will not be bothered by any advertising. Their release blog is a good place to get a dime tour. It talks about new or revised features as they come out, but with a recent full makeover this will quickly give you an idea of what the whole system looks like. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/2932",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/2150/"
]
} |
2,934 | I have an unused key on my computer keyboard, and would like to turn it into a "Google interphone". How it would work: Press the key (and keep it pressed) Say "Nairobi weather" Release the key Immediately the software would open a web browser tab on Google Voice Search searching for what I said. Requirements: Linux Start listening fast. I should be able to press the key and start speaking immediately, without having to wait for anything to load. No user interface (except maybe for settings) Do not try to do speech recognition, just send the audio to Google, like on mobile. | | {
"source": [
"https://softwarerecs.stackexchange.com/questions/2934",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/140/"
]
} |
3,069 | I quite often need to take screenshots and do the following: Crop them Add texts (where I want to be able to choose the color) Add arrows (where I want to be able to choose the color) Mark stuff with rectangles / ellipses (where I want to be able to choose the color) Free-hand draw Is something like this available for Linux Mint (which is Ubuntu based)? I know I can do this with GIMP, but drawing arrows with GIMP is a pain. Also, GIMP has many more functions than I need. Awesome Screenshot is quite nice for taking screenshots in Chrome, but sometimes I also need to take screenshots outside of the browser. | My first thought was "OK, you need something like Greenshot", so, after some searching, I found Shutter, which offers the features you need (more info in this blog post ). You can download it from Launchpad . More details: Capture options like capturing a specific area or the whole desktop Share options like generating a shared link and Ubuntu One (which has been discontinued, I think) Edit options like these screenshots: | {
"source": [
"https://softwarerecs.stackexchange.com/questions/3069",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/1834/"
]
} |
4,165 | I want to record some tutorials / workings of my programs / desktop activities / old-school gaming in Windows 8.1. I tried a few video capture programs, but the results didn't satisfy me. I tried: Ezvid . Seemed very promising but failed to save the recording in any format, displaying the error 'inconsistent file type'. I could get nowhere by googling the error code, and the FAQ page was no help either. So this option, sadly, failed. CamStudio . The quality of the capture was terrible, even though I used other drivers in the list than the built-in Intel graphics and played with the capture settings. The program is definitely not user-friendly (at least for people like me who are complete newbies to video editing / capturing). So, I'm looking for preferably free and reasonably user-friendly video capture software for Windows, just like the Geany IDE on Linux: lightweight and robust. Videos should be able to be recorded at 720p and at least 24 FPS. Edit : In addition to the answers to this question, more alternative programs can be found under another question: What is a screen recorder with mouse movement and can save the recording as video? | Open Broadcaster Software (free) After months in search of a good, free screen capture program, I found this, and I knew that my search was finally over. OBS is for both streaming and general screen capture. It can stream to a service like Twitch or just write to a local file. It uses x264, a powerful encoder for the H.264 codec (typically stored in .mp4 containers), which allows for flexible compression and low file sizes. It uses a scene system, which lets you place images, text, or whatever you want on the output video. For compatibility, there are different scene objects called "Monitor Capture," "Video Capture Device," "Window Capture," and "Game Capture", all for reliably capturing portions of the screen. It also has a plugin system in case you want a specific feature implemented, like a live chat or playing another video within the stream. Another very useful feature of OBS is that you don't need to use something like Virtual Audio Cable to merge the speaker output and microphone; you can tell OBS to do that as a built-in feature. Screenies: | {
"source": [
"https://softwarerecs.stackexchange.com/questions/4165",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/3081/"
]
} |
7,019 | I'm looking for a free program that allows me to record my screen and save the 'video' as an animated GIF. This will be useful when making instructions / steps to do something. Requirements: free (preferably open-source) make a GIF that lasts for about 10 seconds runs on Snow Leopard - 10.6.8 (if possible, Win7 as well :) saves the file locally - not uploaded automatically to a website Note: I'm not looking for a program that converts images into an animated GIF, but more of an 'on-the-spot' video maker and converter (i.e., I want the program to make the video and save it as a GIF - to upload to websites etc.). | I personally use and recommend LICEcap . Great interface: select a region of the screen and set the frame rate and destination for the GIF. No time limit on the length of the GIF. OS X and Windows (can use Wine for Linux). LICEcap also allows some further GIF customisation, including adding a title frame with a custom duration to the GIF, setting a pause hotkey, and a repeat count (as opposed to the default infinite repeat). Used for many of my answers across SE ( example ). | {
"source": [
"https://softwarerecs.stackexchange.com/questions/7019",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/2675/"
]
} |
7,463 | I am looking for the fastest Python library to read a CSV file (if that matters, 1 or 3 columns, all integers or floats, example ) into a Python array (or some object that I can access in a similar fashion, with a similar access time). It should be free, work on Windows 7 and Ubuntu 12.04, and with Python 2.7 x64. CSV with 1 column: 350
750
252
138
125
125
125
112
95
196
105
101
101
101
102
101
101
102
202
104 CSV with 3 columns: 9,52,1
52,91,0
91,135,0
135,174,0
174,218,0
218,260,0
260,301,0
301,341,0
341,383,0
383,423,0
423,466,0
466,503,0
503,547,0
547,583,0
583,629,0
629,667,0
667,713,0
713,754,0
754,796,0
796,839,1 | So I eventually wrote a small benchmark using the libraries Steve Barnes had pointed at. I had found the same ones when looking around while writing the question, so I guess those are the main ones. Some other ideas that I haven't tried yet: HDF5 for Python , PyTables , IOPro (non-free). In short, pandas.io.parsers.read_csv beats everybody else, NumPy's loadtxt is impressively slow and NumPy's fromfile and load are impressively fast. Data (I should have generated them in the benchmark but I am running out of time right now) Code: import csv
import os
import cProfile
import time
import numpy
import pandas
import warnings

# Make sure those files are in the same folder as benchmark_python.py
# As the names indicate:
# - '1col.csv' is a CSV file with 1 column
# - '3col.csv' is a CSV file with 3 columns
filename1 = '1col.csv'
filename3 = '3col.csv'
csv_delimiter = ' '
debug = False

def open_with_python_csv(filename):
    '''
    https://docs.python.org/2/library/csv.html
    '''
    data = []
    with open(filename, 'rb') as csvfile:
        csvreader = csv.reader(csvfile, delimiter=csv_delimiter, quotechar='|')
        for row in csvreader:
            data.append(row)
    return data

def open_with_python_csv_cast_as_float(filename):
    '''
    https://docs.python.org/2/library/csv.html
    '''
    data = []
    with open(filename, 'rb') as csvfile:
        csvreader = csv.reader(csvfile, delimiter=csv_delimiter, quotechar='|')
        for row in csvreader:
            data.append(map(float, row))
    return data

def open_with_python_csv_list(filename):
    '''
    https://docs.python.org/2/library/csv.html
    '''
    with open(filename, 'rb') as csvfile:
        csvreader = csv.reader(csvfile, delimiter=csv_delimiter, quotechar='|')
        data = list(csvreader)
    return data

def open_with_numpy_loadtxt(filename):
    '''
    http://stackoverflow.com/questions/4315506/load-csv-into-2d-matrix-with-numpy-for-plotting
    '''
    data = numpy.loadtxt(open(filename, 'rb'), delimiter=csv_delimiter, skiprows=0)
    return data

def open_with_pandas_read_csv(filename):
    df = pandas.read_csv(filename, sep=csv_delimiter)
    data = df.values
    return data

def benchmark(function_name):
    start_time = time.clock()
    data = function_name(filename1)
    if debug: print data[0]
    data = function_name(filename3)
    if debug: print data[0]
    elapsed = time.clock() - start_time
    print function_name.__name__ + ': ' + str(elapsed), "seconds"
    # Return the timing so main() can aggregate it into mean/std;
    # otherwise the results matrix would be full of None values.
    return elapsed

def benchmark_numpy_fromfile():
    '''
    http://docs.scipy.org/doc/numpy/reference/generated/numpy.fromfile.html
    Do not rely on the combination of tofile and fromfile for data storage,
    as the binary files generated are not platform independent.
    In particular, no byte-order or data-type information is saved.
    Data can be stored in the platform independent .npy format using
    save and load instead.
    Note that fromfile will create a one-dimensional array containing your data,
    so you might need to reshape it afterward.
    '''
    # Ignore the 'tmpnam is a potential security risk to your program' warning.
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', RuntimeWarning)
        fname1 = os.tmpnam()
        fname3 = os.tmpnam()

    data = open_with_numpy_loadtxt(filename1)
    if debug: print data[0]
    data.tofile(fname1)

    data = open_with_numpy_loadtxt(filename3)
    if debug: print data[0]
    data.tofile(fname3)
    if debug: print data.shape
    fname3shape = data.shape

    start_time = time.clock()
    # You might need to switch to float32. List of types:
    # http://docs.scipy.org/doc/numpy/reference/arrays.dtypes.html
    data = numpy.fromfile(fname1, dtype=numpy.float64)
    if debug: print len(data), data[0], data.shape
    data = numpy.fromfile(fname3, dtype=numpy.float64)
    data = data.reshape(fname3shape)  # fromfile returns a one-dimensional array
    if debug: print len(data), data[0], data.shape
    elapsed = time.clock() - start_time
    print 'Numpy fromfile: ' + str(elapsed), "seconds"
    return elapsed

def benchmark_numpy_save_load():
    '''
    Same caveats as benchmark_numpy_fromfile: save and load use the platform
    independent .npy format, unlike tofile/fromfile, and load restores the
    array shape so no reshape is needed afterward.
    '''
    # Ignore the 'tmpnam is a potential security risk to your program' warning.
    with warnings.catch_warnings():
        warnings.simplefilter('ignore', RuntimeWarning)
        fname1 = os.tmpnam()
        fname3 = os.tmpnam()

    data = open_with_numpy_loadtxt(filename1)
    if debug: print data[0]
    numpy.save(fname1, data)

    data = open_with_numpy_loadtxt(filename3)
    if debug: print data[0]
    numpy.save(fname3, data)
    if debug: print data.shape

    start_time = time.clock()
    data = numpy.load(fname1 + '.npy')
    if debug: print len(data), data[0], data.shape
    data = numpy.load(fname3 + '.npy')
    if debug: print len(data), data[0], data.shape
    elapsed = time.clock() - start_time
    print 'Numpy load: ' + str(elapsed), "seconds"
    return elapsed

def main():
    number_of_runs = 20
    results = []

    benchmark_functions = ['benchmark(open_with_python_csv)',
                           'benchmark(open_with_python_csv_list)',
                           'benchmark(open_with_python_csv_cast_as_float)',
                           'benchmark(open_with_numpy_loadtxt)',
                           'benchmark(open_with_pandas_read_csv)',
                           'benchmark_numpy_fromfile()',
                           'benchmark_numpy_save_load()']

    # Compute the benchmark
    for run_number in range(number_of_runs):
        run_results = []
        for benchmark_function in benchmark_functions:
            run_results.append(eval(benchmark_function))
        results.append(run_results)

    # Display the benchmark's results
    print results
    results = numpy.array(results)
    numpy.set_printoptions(precision=10)  # http://stackoverflow.com/questions/2891790/pretty-printing-of-numpy-array
    numpy.set_printoptions(suppress=True)  # suppress the use of scientific notation for small numbers
    print numpy.mean(results, axis=0)
    print numpy.std(results, axis=0)

# Another library, but not free: https://store.continuum.io/cshop/iopro/

if __name__ == "__main__":
    #cProfile.run('main()')  # if you want to do some profiling
    main()
Windows 7: Output: open_with_python_csv: 1.57318865672 seconds
open_with_python_csv_list: 1.35567931732 seconds
open_with_python_csv_cast_as_float: 3.0801260484 seconds
open_with_numpy_loadtxt: 14.4942111801 seconds
open_with_pandas_read_csv: 0.371965476805 seconds
Numpy fromfile: 0.0130216095713 seconds
Numpy load: 0.0245501650124 seconds

To install all libraries: Unofficial Windows Binaries for Python Extension Packages

Windows configuration:

- Windows 7 SP1 x64 Ultimate
- Python 2.7.6 x64
- NumPy 1.7.1 ( import numpy; numpy.version.version )
- Pandas 0.13.1 ( import pandas as pd; pd.__version__ )
- MSI Computer Corp. Notebook Computer GE70 0ND-033US;9S7-175611-033 (with SSD Crucial M5)

Ubuntu 12.04:

Output:

open_with_python_csv: 1.93 seconds
open_with_python_csv_list: 1.52 seconds
open_with_python_csv_cast_as_float: 3.19 seconds
open_with_numpy_loadtxt: 7.47 seconds
open_with_pandas_read_csv: 0.35 seconds
Numpy fromfile: 0.01 seconds
Numpy load: 0.02 seconds

To install all libraries:

sudo apt-get install python-pip
sudo pip install numpy
sudo pip install pandas

If libraries are already installed but need to be upgraded:

sudo apt-get install python-pip
sudo pip install numpy --upgrade
sudo pip install pandas --upgrade

Ubuntu configuration:

- Ubuntu 12.04 x64
- Python 2.7.3
- NumPy 1.8.1 ( import numpy; numpy.version.version )
- Pandas 0.14.0 ( import pandas as pd; pd.__version__ )

Obviously feel free to improve the benchmark by commenting/editing/etc, I'm sure there are plenty of things to enhance:

- Making sure that the current loading functions are well optimized
- Try new functions / libraries such as HDF5 for Python , PyTables , IOPro (non-free).
- Generate the CSV in the benchmark (so that one doesn't have to download the CSV files), as sketched below.
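For that last point, a minimal sketch of what the generation step could look like — the file names match the benchmark above, but the row count and the use of random floats are only assumptions to illustrate the idea:

import numpy

number_of_rows = 1000000  # hypothetical size; increase it to stress the parsers harder
# one column and three columns of random floats, space-delimited like the benchmark expects
numpy.savetxt('1col.csv', numpy.random.rand(number_of_rows, 1), delimiter=' ', fmt='%.10f')
numpy.savetxt('3col.csv', numpy.random.rand(number_of_rows, 3), delimiter=' ', fmt='%.10f')
| {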
"source": [
"https://softwarerecs.stackexchange.com/questions/7463",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/903/"
]
} |
7,473 | I am developing under Windows. The user will connect a smart-phone by USB which will communicate with an app on the PC. I do not have access to the code of the smartphone app. For regression testing during development, I would like to write some scripts to simulate user action at the phone & test the result. For reasons that I won't go into, I have to simulate the human user; I cannot simply simulate the phone. E.g., script says - find a button with the label "push me", simulate a click to it and expect an alert saying "Please don't push the button again". Unless it gets more platforms covered, I would strongly prefer scripting over macro recording (especially as it lets me check the result). I presume that I am looking for something which has a native app for all platforms, or a script which can run on all platforms, which will interpret and run my test scripts(?). Something like AutoIt , but for smart-phones. Android & iOS == "must have", Windows Phone == "nice to have", Symbian == "a bonus to have, for legacy support". [Update] After a lot of googling, the best free solutions that I have found cover only Android & iOS ... http://appurify.com/ and http://www.cloudmonkeymobile.com/monkeytalk community edition. I will finish searching, then evaluate these & report back. Hmm, I don't like this requirement of MonkeyTalk, because I do not have access to the source code of the smartphone app: "MonkeyTalk Agent - library that must be added to app to enable testing". More explicitly, "Can I test an app without the source code? No. You must install the MonkeyTalk Agent into the app under test during the build process, which requires access to the app source code". The best (and expensive!!) commercial solution is SeeTest Automation Tool Features : Mobile (Android, iOS, Blackberry, Windows, Symbian) BUT $3,500 / year !! This looks good too, but I can't see the pricing http://www.froglogic.com/squish/gui-testing/index.php | {
"source": [
"https://softwarerecs.stackexchange.com/questions/7473",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/5113/"
]
} |
7,481 | I am looking for a quick way to do AR (augmented reality). I think the correct term is camera matching. Given a photograph which contains several squares of a known size, and several 3D models, I would like to compute the position and orientation in 3D of the camera and the objects so that the models render with a correct perspective with respect to the perspective of the image. If this can't be done inside a single piece of software, two different programs are acceptable. So I'm looking for a way to detect the features in the original image (can be done by hand if needed), to calibrate the camera and compute the projection matrix, and to place the objects in 3D, and then render the scene. Here is a sample photo showing some actual data, and the kind of result I'm looking for: | {
"source": [
"https://softwarerecs.stackexchange.com/questions/7481",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/4902/"
]
} |
10,012 | I have a little Windows 98SE virtual machine I run as a curiosity. I occasionally need to/want to download software to test on it, but the version of IE on it has trouble rendering many modern sites or handling redirects. I'd like a browser that will run on Windows 98SE and render reasonably modern webpages. I don't expect Acid compliance, but I do expect it to: work well enough on, say, oldversion.com to be able to download the most recent DX version for the platform; not need a load of additional software installation to work; handle redirects, PNG and other things we take for granted on the modern internet correctly | Opera 9.64 Free. Version 10 and later don't run on Windows 98. Version 9.64 was released in 2009 and was among the best browsers back then. You get tabs, mouse gestures, speed dial, URL blocking and countless other features. Download: http://www.oldversion.com/windows/opera-9-64 | {
"source": [
"https://softwarerecs.stackexchange.com/questions/10012",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/125/"
]
} |
10,471 | I just tried to download FileZilla from SourceForge. It tries to install some random content from an unpopular and useless provider. It is definitely not reliable anymore. More generally, which FTP clients would you recommend? free reliable and clean (malware-free) easy to use available on Windows platform | I use WinSCP . It doesn't install adware, and it has a simpler interface than FileZilla. It is easy to use and supports FTP, SFTP, and SCP. It can also save as many profiles as you want. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/10471",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/2844/"
]
} |
11,033 | I have been using Windows Media Player in my Windows 7 and I found that it can't play some types of video formats like MKV. So I am searching for a new player which: supports any type of video files; supports any type of audio files; should be under 30 MB in size; should provide volume above 100 (because Windows Media Player is not so good in case of volume); should be easy to use | I would recommend VLC :

- it will play just about any video/audio you throw at it (except rmvb)
- it is not under 30 MB in size, it's around 100 MB (after installation), but this is due to it containing all the codecs it needs instead of relying on the system codecs - I don't think you will find much improvement here if you count the size that other players would need for codecs while still being able to play everything
- it can provide volume above 100 (up to 200% via scrolling, up to 400% via keys)
- it is pretty darn easy to use, and there is a lot of support for the more advanced features.

Other positives:

- it's free and open source
- it supports streaming and viewing streams (webcams/etc.)
- it has great subtitle support
- it is highly customizable - you can add/remove buttons to the menu/toolbar based on your needs, set the buffering for network files, etc. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/11033",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/7152/"
]
} |
13,875 | I'm looking for a solution that lets me 'mount' a remote server via ssh or sftp protocols. The remote directory should show up as a drive letter on Windows. (optional) More than one simultaneous connection possible. Free or paid solutions are acceptable. Windows 7 is a must. Things I've tried so far: sshfs - This mostly works, but for some reason when you use Notepad++ to access files over an sshfs connection, Notepad++ can't properly determine line endings. Also has some bugs in the UI (you have to save the password to get it to work), and the developer appears to be absent. SFTP Net Drive - The free version works, but I get frequent 20 second pauses in open dialogs. Given that the 'server' is a VM running locally, this isn't a network issue, and it DOESN'T happen with sshfs. ExpanDrive - Seems to be working in my initial tests. No odd lags or bad behaviors. I wouldn't mind a slimmer solution (it supports N different cloud providers as well as SFTP), but it DOES work. I'd obviously like a free solution, but if none exists, I'll happily pay money for something. Update: I've added answers below with the ones I've actually had some success with. | Out of the ones you listed: I would be cautious with sshfs as the underlying driver (Dokan) has not been updated for quite a while, and even though it supports Windows 7, it has known issues with Windows 8.x (and probably with Windows 10). ExpanDrive works like a charm, and I don't see the need for a "slimmer solution", you can simply not use the protocols you don't need. I'm surprised by the behavior you describe regarding SFTP Net Drive ; I have it installed at many customers' locations and it's probably the most reliable piece of software I've ever tried (when properly configured). Personally, out of 100+ installations, I've never seen the behavior you describe. Another excellent option that you may consider is WebDrive . Like ExpanDrive it supports a plethora of protocols, but don't be intimidated, it's easy to use and fairly lightweight. Also, you may check out NetDrive ; very similar to ExpanDrive and WebDrive, with a wide support for many back-ends, and a clean and easy-to-use configuration interface. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/13875",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/555/"
]
} |
14,413 | I was wondering if there is a program in the common unix toolset such as grep that instead of filtering the lines that contain a string, simply outputs the same input but highlighting or coloring the selected string. I was thinking in doing it by myself (should be simple enough), but maybe it already exists as a unix command. I'm planning in using it to monitor logs, so I would do something like this: tail -f logfile.log | highlight "error" Usually when I'm monitoring logs I need to find a particular string but I also need to know what is written before and after the string, so filtering sometimes is not enough. Does something like that exist? Thanks | This is a funny trick for it with the basic grep command. It consists in using two filters: the one you want to apply and a dummy one that matches all the lines but produces no highlight. This dummy match can be either ^ (beginning of line) or $ (end of line). grep "^\|text" --color='always' file or grep -E "^|text" --color='always' file See an example: $ cat a
hello this is
some text i wanted
to share with you
$ grep "^\|text" --color='always' a
hello this is
some text i wanted # "text" is highlighted
to share with you
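Applied to the log-monitoring example from the question, the same trick would look something like this (assuming GNU grep; add --line-buffered if you pipe grep's output onward instead of viewing it in a terminal):

tail -f logfile.log | grep --color=always -E "^|error"
| {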
"source": [
"https://softwarerecs.stackexchange.com/questions/14413",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/9191/"
]
} |
14,427 | I am looking for a simple tool working with Windows 7 and free/open source which allows me to maintain multiple templates and type them in any text program/text field (Word, email client, browser). Ideally I press some keys, like CTRL (or WIN ) and SPACE and a pop up appears where I can type some letters until my template is selected and I press enter and the text is pasted. Every template should contain a title and the text. Variables like date and time are a plus but not necessary. Clarification: to enter some letters like fdh# and have the last letter replace my text with another text does not help, because I need a lot of templates and I don't like to remember which letters bring me to my needed template. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/14427",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/2015/"
]
} |
14,437 | NOTES: My request covers software or libraries, hence why I'm posting here. I also checked the similar threads here, but they asked for something subtly different. I have the following HTML page: <html>
<head>
<link rel="stylesheet" type="text/css" href="font.css">
<style>
body {
font-family: "Gotham SSm A";
font-size: 22px;
}
</style>
</head>
<body>
SUMMARY
</body>
</html> And the font definition in font.css (truncated for brevity): @font-face {
font-family: "Gotham SSm A";
src: url(data:font/truetype;base64,...) format('truetype');
font-weight:700;
font-style:italic;
} The page shows up fine in the browser, and when printed to PDF from the browser, is rendered fine as well. However, every utility I used to generate a PDF from server-side software (PHP) failed: Wkhtmltopdf messed up the fonts. PhantomJS messed up the fonts. SlimerJS failed to render, opened windows, and had unacceptable
dependencies PrinceXML messed up the fonts and failed to parse all the CSS rules pandoc only converts to LaTeX and requires different utilities (on
Windows/Linux) to go to PDF. What's more, its LaTeX conversion
(according to the online version I tried) messed up the fonts as
well. What are my alternatives? I need this to... 1. Respect modern CSS (including @font-face).
2. Be available on Windows & Linux with similar output on both
3. Be offline (utility or library is fine)
4. Allow commercial use
5. Be cost effective (preferably free) | {
"source": [
"https://softwarerecs.stackexchange.com/questions/14437",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/9203/"
]
} |
17,714 | I am looking for a Markdown viewer. It should: run locally on Ubuntu - be a normal program, not a browser addon, webapp or anything else that requires usage of an internet browser* Preferable: simple and lightweight open source Viewer as in "view formatted content". It is fine if viewer is also an editor. See https://stackoverflow.com/questions/9331281/how-can-i-test-what-my-readme-md-file-will-look-like-before-committing-to-github for wider version of this question. *I give an exception to browsers running in terminal in text mode | Though not strictly being a viewer , I can recommend ReText here – which I'm using myself on Ubuntu, and am pretty satisfied. runs locally on Linux: Yes (also on Windows and Mac) normal program, not a browser addon: Yes. Written in Python, and easy to deal with. simple and lightweight: Yes. On its own, it comes with the basics – and you can add more (like support for specific Markdown dialects as Markdown Extra or MathJax if you need. open source: Yes (using GPLv2) ReText with Live Preview (source: ReText; click image for larger variant) As I said to start with, it's not strictly a viewer – but an editor including a viewer and a "Live Preview". You can call it from the command line, passing the file as parameter. Unfortunately, there seems to be no way to start it directly in the viewer mode – but a work-around to at least have the "Live Preview" triggered: start it once with a file open press Ctrl - L (or use the menu: Edit › Live Preview ) to switch on the "Live Preview" mode using the menu, go to Edit › Preferences , and check "Restore live preview state" under "Behavior" Now, when opened the next time, the "Live Preview" is switched on automatically. Alternatively, you can open the "real preview" (without the editor pane) by either clicking the "Preview" button, or using the keyboard-shortcut Ctrl - E . For more details, also see my answers here and here . | {
"source": [
"https://softwarerecs.stackexchange.com/questions/17714",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/12551/"
]
} |
18,193 | I am looking for an Office suite that can work with MS Office files that does not require an Internet connection for installation or cloud access. I have a number of clients who require compliance with some fairly stringent STIGs (Security Technical Implementation Guides). For at least two clients, putting data over the Internet is an actual breach of contract. A few more stipulate the cloud cannot be used under any circumstances. We do currently heavily use MS Office 2013 - and would soon notice the loss of functionality. The real question seems to be: with 365 being so heavily pushed, will there be a version of Office in the future that will run off-line, not require a subscription (can be purchased outright) and does not present the possibility of being switched to reduced functionality mode when traveling to areas where The Internet may not be available or deemed too much of a risk to use (wireless at a hotel,etc.) Is there an Office alternative that will still work - without any loss of functionality - without an Internet connection? | I use Libre Office . It is one of many possibilities available to you. Libre Office can be used to edit and save documents which are in MS Office format, including Visio. It also uses the OpenDocument file format. You would have to download the installation pack first but it does not need to be online during installation. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/18193",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/12052/"
]
} |
18,839 | I am looking for a JSON viewer for Windows that can: open decently large files (e.g. > 10 MB), unlike JSONViewer Notepad++ plugin (lags for ever), JSON Editor Eclipse Plugin (take over 1 minute to generate the treeview for a 500 KB JSON file) and Json Tools Eclipse Plugin (no outline generated if file is more than a few MBs but other great and fast) has a decently responsive UI, unlike JSON Viewer can collapse/expand a given level (treeview / outline) works off-line Ideally: tabs gratis can edit JSON data displays the filename somewhere, unlike JSON Viewer provide some statistics on the JSON content Example of large JSON file: https://www.dropbox.com/s/2a6ytj5wa1zlm1c/tracker004_track_2015-08-28_22-22-01-238000.json?dl=0 | I have written Huge JSON viewer based on JSON.NET, one of the fastest JSON frameworks. It matches the requirements as follows: open decently large files : it can open the 1.44 GB example file without crashing in ~ 2:45 minutes on my machine (Intel Core i7, 16 GB RAM, SSD). To do that, the OS must be 64 bit. A progress bar is shown has a decently responsive UI : I use a commercial tree view from DevExpress which I hope is optimized very well. can collapse/expand a given level : it is a full tree view and can expand/collapse any nodes. It has a feature to expand to a given level works off-line : it's a Windows desktop application. Needs .NET provide some statistics on the JSON content : some. Can definitely be improved. tabs : yes. gratis : yes. MIT license, but closed source. displays the filename somewhere : yes, in the tab The only thing it can definitely not (but was optional): "can edit JSON data" Additional features: search capability performance warning when memory swapping to disk is expected System requirements: Windows 7 SP1 or higher, x64 bit recommended Physical RAM roughly 7 times the file size to be opened .NET 4.5 Watch out the list of known issues until it's out of beta phase. Screenshots: Download (including portable version): https://github.com/WelliSolutions/HugeJsonViewer#releases | {
"source": [
"https://softwarerecs.stackexchange.com/questions/18839",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/903/"
]
} |
18,842 | Since I did not get an answer to my previous question (looking for some s/w to strip all images from a PDF file), I have decided to code it myself. Can anyone recommend a VCL component to help me? Something to open, manipulate & save PDF files. As a bonus, something that can search & find the next image. [Update] the original question states that I want to strip all images from a PDF file. I forgot to post that here. Sorry. Any other PDF manipulation features are a bonus. @Izzy - VCL (for which I created a new tag, so that vampire cheerleaders know what I am talking about), implies Embarcadero Delphi, or C++ Studio, or RAD studio, which contains both. Language is, perforce, Delphi or Embarcadero C++ and o/s is Windows. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/18842",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/3397/"
]
} |
23,430 | Is there a decent free alternative to Adobe Photoshop for Windows? I know, there is a similar question , but it seems to be focused entirely on Linux. | GIMP is probably one of the most commonly used free alternatives to Adobe Photoshop for Windows: free, open source crossplatform many features in Adobe Photoshop are also present in GIMP, but Adobe Photoshop definitely contains more features, so most professionals use Adobe Photoshop (disclaimer: I work for Adobe). | {
"source": [
"https://softwarerecs.stackexchange.com/questions/23430",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/16518/"
]
} |
25,211 | I'm looking for an application that can act as an integrated Linux-like terminal for my Windows PC. For instance, I could roam around the file system, install applications like vi, etc. I would like this application to meet the following requirements, Gratis Uses Bash Not an emulator (I can actually see my files on the C drive and interact with them) Easy to install Compatible with Windows 10 | I've been using Cygwin for some time now and it seems to do the job. It was very easy to install and I could choose from many different packages to install like vim, wget, etc.

Cygwin - Get that Linux feeling - on Windows

Cygwin is a Unix-like environment and command-line interface for Microsoft Windows. Cygwin provides native integration of Windows-based applications, data, and other system resources with applications, software tools, and data of the Unix-like environment. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/25211",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/133/"
]
} |
25,741 | I need a program to cut videos (extract a sub-video from them), but I absolutely need there to be no quality losses at all from the original. As long as I don't need to do anything more (like editing, resizing, rotating, adding subtitles, colorizing etc.), I prefer this program to be very easy to manage. Ideal case would be just setting start capture, end capture, and a button to split. I have tested Allok Video Splitter , but I am sure there are some (not much, but they exist) degradations in video quality (audio keeps OK, or so it seems), even when I set the program to maximum quality. Open source method preferred. Command-line methods accepted. Windows or Linux platform, please. I have no Mac. No problem to do this on Android , as long as the main feature (no quality loss) remains. | I would suggest using the command line and ffmpeg as in:

ffmpeg -ss start_time -i input_file.ext -t duration -vcodec copy -acodec copy output_file.ext

- .ext must be a supported video type, and the same in both cases.
- -ss should be before the input file, this allows the last keyframe before your start time to be selected, and it must be before -t .
- Both start_time and duration can be either a number of seconds, minutes:seconds, or hours:minutes:seconds, with minutes & seconds limited to 2 digits in any colon format, and any seconds can have .milliseconds attached.
- -vcodec copy -acodec copy informs ffmpeg that you would like exactly the same video and audio encoding, i.e. no loss of quality.

Also note that since ffmpeg works in what is known as a pipeline, the order of most options is very significant, so if your output file name occurs before some of your options, they will be assumed to be options for the next stage in the pipeline and so will not affect your output. This is done so that you can specify on a single command line one input file and several output files with different options, so that you could, for example, generate a sequence of 10 second clips each 5 minutes apart, ready to recombine them.

This solution is:

- Free/Gratis
- Open Source
- No change in quality unless you ask for it
- If you need to you can change format, resolution, order, add still frames, just about anything.
- Windows, Linux & Mac

You do need to find your times for splitting and make a note of them manually, but that shouldn't be too much of a stretch. There are a number of GUI front ends available for ffmpeg, and a lot of programs that don't mention that all they really do is provide one, but why bother for a simple operation like this.

Example command that worked for the OP:

ffmpeg -i OriginalVideo.flv -ss 1:00 -t 3:00 -vcodec copy -acodec copy OutputFile.flv

Thanks to Sopalajo de Arrierez for taking some time to experiment and posting what worked for them.
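As a sketch of that multiple-output point, something like the following should cut two 10-second clips from one input in a single run (file names and times are made up for illustration; each output file gets its own -ss / -t ):

ffmpeg -i input.flv -ss 0:00 -t 10 -vcodec copy -acodec copy clip1.flv -ss 5:00 -t 10 -vcodec copy -acodec copy clip2.flv
| {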
"source": [
"https://softwarerecs.stackexchange.com/questions/25741",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/1796/"
]
} |
28,169 | Is there any software used to draw figures in academic papers describing the structure of neural networks (specifically convolutional networks)? The closest solution to what I want is the TikZ LaTeX library which can produce diagrams like this with a description of the network using code (it can't handle convolutional layers): Source Other software that describes network structure but does not visualise in 3D are: TorchNN TensorBoard Mxnet The diagrams I want to construct follow a similar pattern, so am interested to know if there exists software more specialised than GIMP/GraphViz/Gephi/InkScape or even Powerpoint to achieve this. It would be great if it was programmable like TikZ. Here are some examples of figures I'd like to construct (with their sources below): Source Source Source | I wrote a simple python script to draw convnet, with adjustable parameters. https://github.com/gwding/draw_convnet It might be useful to you, if you just need some simple/non-fancy illustration. It copies the style of Figure 2 in "gradient based learning applied to document recognition" | {
"source": [
"https://softwarerecs.stackexchange.com/questions/28169",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/13123/"
]
} |
34,352 | In other words, is there a Windows 10 equivalent to apt-get / rpm / brew? Preferably cheap or free, and available for use in my business? The more software packages it supports, the better. Preferably on the order of thousands of different programs supported. | UPDATE 2016-10-10: It's possible that Chocolatey version 0.10.0 gave me a malware infection a couple of months ago. However, that does not seem to be the case. I have not had any trouble so far with Chocolatey v0.10.1 or 0.10.3. See further notes in the Comments section. END UPDATE I recently discovered Chocolatey: https://chocolatey.org . This is what I am using at the moment. This is a command-line tool. It requires administrative privileges (naturally). After I installed it, I ran the following commands, among others, to install various applications and register them with Chocolatey: choco install 7zip
choco install firefox
choco install adblockplus-firefox
choco install GoogleChrome
choco install adblockpluschrome
choco install opera
choco install adblockplusopera
choco install git
choco install github
choco install notepadplusplus
choco install SublimeText2
choco install vlc

Now, any time I want to check for updates to any of these, and automatically install any updates found, I just run:

choco upgrade all

or

choco upgrade all -y (to accept all upgrade confirmations)

Works like a charm.
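If you want a dry look first, I believe this Chocolatey version can also list what is installed and what is outdated before you upgrade — treat the exact flags as an assumption to verify against choco -? :

choco list --local-only
choco outdated
| {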
"source": [
"https://softwarerecs.stackexchange.com/questions/34352",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/24846/"
]
} |
41,601 | I want to read PDF files on Windows. The PDFs can be quite large (like 50 MB) but they do not contain interactive forms nor special gadgets. I just do casual human reading, no automatic data extraction. Requirements: Open source ( OSI-approved license ) I can copy text from PDFs that are text-based (no image OCR needed) Bonus if works all the way back to Windows 7 As fast and reliable as possible | One of my personal favourites is SumatraPDF . Features: Sumatra PDF is a free PDF, eBook (ePub, Mobi), XPS, DjVu, CHM, Comic Book (CBZ and CBR) reader for Windows. Sumatra PDF is powerful, small, portable and starts up very fast. Simplicity of the user interface has a high priority. Multiple documents in tabs Keyboard Navigation Open Source, see project , GPL v3. Supports resume (remembers your place in documents) Unlike Acrobat, Sumatra PDF will not lock the PDF files it opens, and will automatically detect modifications and reload modified files on the same page. (This is great when working with PDF-generating tool-chains, as you can just leave the PDF open in Sumatra and re-generate it.) I have never had problems with it opening large PDF files - in the screenshot below I have 4 pdf files with sizes of 79 MB, 45 MB, 36 MB and 20 MB all together containing 1770 pages many with images or graphics - still working fine. I am a regular user of the above but not otherwise involved. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/41601",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/140/"
]
} |
46,362 | I am seeking recommendations of a program that allows creation, editing and saving of files in .svg format allows standard editing including drawing lines and shapes filling areas with color cropping opening files saved (perhaps in only a few colors) as .jpg, .gif, etc. for pasting into the file being edited is not limited to the creation of diagrams can run standalone rather than on top of a web browser or otherwise online is free or at least shareware | How about Inkscape ? It is: free and open source, has a perfectly compliant SVG format file generation and editing, can open a number of other vector formats, with the help of extensions, can natively import most raster formats (JPEG, PNG, GIF, etc.) as bitmap images, but it can only export PNG bitmaps, as it is a downloadable software, it runs standalone. Sounds like it could fit, so maybe worth giving it a try. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/46362",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/-1/"
]
} |
50,895 | As the title says, I seek high quality text editor for POSIX shell scripting. Requirements: Most important to me is syntax highlighting: I admire the fast start-up of Sublime Text , but it does not recognize variables inside strings : [ -f "${backup_file}" ] && echo "File ${backup_file} exists, exiting." && exit 1 Like in this test case, where it simply fails to highlight variables out of the box at least. Cross-platform, because I work primarily on Windows 10 (running scripts in Cygwin), but also on Linux Mint 19. Although preferred, it does not have to be open-source. I am also willing to pay for it, so it does not have to be free. Must be with graphical user interface, so a CLI editor is a no go. Does not have to be fast, just get me the syntax highlighting of variables and other shell script related things out of the box. Reference script has been posted inside my own answer on Code Review . Bottom line The accepted solution is gVim Easy , because after minor adjustments to my HiDPI display it became the fastest and probably the most powerful editor I have ever seen. I intend to use it in the Easy mode , though, in order to experience normal editing, but later on, I might use the real power of it. Follow-up Though, I was astonished by how fast gVim Easy could start up, after two days spent over _vimrc , and setting things up to my expectations, I am a little tired of it, and am not sure it's worth the trouble for me, because I am no heavy editor, I just write shell scripts, and after several hours spent in Visual Studio Code , feeling like at home, I am prepared to say my decision was rather hasty and I am truly contemplating over switching to Visual Studio Code from Sublime Text instead of to gVim for it works out of the box almost perfectly. So far I haven't even made any change to the settings, which I would have to do with gVim Easy whenever re-installing and / or moving to another computer. I am not 100% sure I won't ever use the vim family, but as for this question, for future readers, Visual Studio Code should be recommended, and thus I am accepting that solution. | Visual Studio Code Pros: Cross-platform (Windows, Linux, Mac) Open-source, see its GitHub page , though there is some fog about it Free of charge, MIT license Faster than Atom IntelliSense autocomplete Start-up time on Dell 7577-92774: 3 seconds Shell script syntax highlighting with strong color for variables: ShellCheck plugin available, which makes it really strong competitor for shell scripting Integrated Linux terminal, which makes me say wow! Cons: Far slower than gVim Slower than Sublime Text For someone it may be off-putting that it comes from Microsoft | {
"source": [
"https://softwarerecs.stackexchange.com/questions/50895",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/33363/"
]
} |
60,519 | Common HTTP operations are GET and POST -- e.g. GET is implemented by every web browser, and so is POST when the web page is a web form (e.g. with <input> and a Submit button). What about PUT and DELETE though? I imagine these might be used to edit the static content (i.e. pages) of a web site. What application(s) provide/implement this functionality? With a UI -- i.e. not just an API Maybe little else (i.e. not necessarily a huge and multi-functional application) Maybe free (libre and/or gratis) and able to run on Windows? An application which could be used (without programming) by a non-technical end-user, not just an API used by other software e.g. JavaScript I imagine it would be like FTP client software, except via HTTP(S) instead of FTP -- am I right? Apologies for ask for such a basic (and maybe commonplace) thing, I find it difficult to Google for. And this question -- i.e. " [http] put " -- doesn't seem to have been asked here before | I'd recommend Postman for this. It supports all HTTP verbs, not just GET, POST, PUT and DELETE. Some operations might require HTTP headers to be set (e.g. for authentication) and it supports that too. You can supply a raw body for your request, or key-value pairs which Postman can transform into e.g. URL encoded form content . It has a UI. While it does offer additional functionality like collaboration, I'm using it myself just for basic functionality like grouping and saving requests. It's free and runs on Windows and a couple of other OSes. | {
"source": [
"https://softwarerecs.stackexchange.com/questions/60519",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/50537/"
]
} |
71,530 | I need an FTP client for Windows Requirements: Graphical and user-friendly Feels like a Windows Explorer window, supports drag and drop No nags/ads/spyware Open source Supports FTPS, SFTP, and all modern authentication mechanisms Well-maintained. While FTP is not bleeding-edge, I want to avoid programs that has not received a single commit in more than 6 months. The most recent commits the better. | WinSCP matches all your requirements. Graphical: Yes – There are two alternative interfaces: Commander interface : Explorer interface : Feels like a Windows explorer window: Yes – One of the interfaces is explicitly designed after Windows File Explorer. Supports drag and drop: Yes – Both between file panels and between other applications. Uploading via drag&drop Downloading via drag&drop No nags/ads/spyware: Yes. Open source: Yes – GPL license Supports FTPS, SFTP: Yes All modern authentication mechanisms: Yes – SSH (for SFTP and SCP) is based on the up-to-date version of PuTTY. TLS/SSL (for FTPS, WebDAVS and Amazon S3) is based on up-to-date version of OpenSSL. Well-maintained: Yes – Over 1000 commits and 18 releases in the last year. (I'm the author of WinSCP) | {
"source": [
"https://softwarerecs.stackexchange.com/questions/71530",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/140/"
]
} |
71,562 | Postman is a great tool but it hides vital features (like workspaces with more than 25 requests) behind a paywall. Other so-called alternatives (like Insomnia) either do the same thing, or lack other vital features. Postman also suffers from the same problem as most Electron apps, in that its UI is designed by people who think they are cleverer than they actually are, with the result that not only is the UX terrible, it also goes completely against the UX standards of the platform it's running on. Requirements: Import OpenAPI Specification (OAS, formerly known as Swagger) documents from a URL and automatically create requests for each one Ability to have multiple, different logical groupings of requests (like Postman's workspace) Import/export workspaces from/to a file or directory (for version control and CLI purposes) Execute an entire workspace, or specific request(s) within it, from the command-line; OR ability to export workspace in Postman format (could then use newman to test the requests) | {
"source": [
"https://softwarerecs.stackexchange.com/questions/71562",
"https://softwarerecs.stackexchange.com",
"https://softwarerecs.stackexchange.com/users/64249/"
]
} |
1 | It's inevitable that we will get lots of R questions in this forum. Should we: (1) bounce them all to SO; (2) answer all questions, even when it's clearly programming and not statistics; or (3) answer the question unless it clearly has no statistical content? I would vote for 3. | This is something that we deal with over at BioStar pretty regularly. Based on that experience, I'd argue that number 3 is the right approach, for the following reasons: there's going to be a fair amount of overlap between responders on the two sites, and bouncing people around will just lead an already confused person to become more frustrated. We want to be helping. Of course, blatantly off-topic questions should still be nixed. For borderline questions, I'd suggest that we give them a gentle nudge towards SO if they don't get answers here. Just suggest that they may get better responses there, and most people will be happy to make the jump over to the appropriate forum. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/8/"
]
} |
16 | This is always a disputable thing, especially among "pure statisticians". I personally think they are inevitably connected. From a practical point of view, it would generate a lot of questions, and making a separate SE for it would lead to total chaos. Edit: Nevertheless, a war has started, so I think it is time to revive the discussion. | The original site description in my proposal was "For statistics, data analysis, data mining and data visualization". I specifically wanted to include data mining/machine learning as it makes no sense to separate out these methods from the more statistical methods that are usually based on stochastic models. In fact, I like to call the whole area "Data Science". | {
"source": [
"https://stats.meta.stackexchange.com/questions/16",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/-1/"
]
} |
35 | Of course, in the context of R-versus-Clojure, I felt it needed external discussion. So, I believe that there are two possible sorts of such questions: an algorithmic question like "How to implement X?" or "My code is not working!", and a question about tool selection and usage like "I'm looking for X; what software should I use?" or "How to do X in Y?" IMO the first case should definitely migrate to SO; the second should get its chance to exist here. I can't accept the argument that if something is a programming language, then a question about it is automatically off-topic. So would we accept Minitab versus Statistica because those are GUI-based, and not SAS versus R because one can do programming there? | Once you get out of your first couple of stats classes (and maybe even before then), you're often doing stats with software. I would assume that a lot of people who will come here (especially professionals, not so much students or academics) are probably looking for an answer like, "how do I do such and such in SAS or R"? Especially if this is going to be "statistical analysis" and not "statistical theory". I work in a place with a few "stats guys" and a lot of programmers, and questions like that always come to the statisticians (it's always stuff like "where are the p_values in this output" or "how do I transform this variable so I can use it in a logistic regression procedure"). I would expect stats-software questions to outnumber pure stats questions as this site progresses (might be wrong). And I think this is a better site for that type of thing than Stack Overflow. Hopefully, we'll see good software answers, with enough of the stats thrown in to give the "asker" a little bit of foundation. I agree that Colin's example should go here, and I hope that it's answered with a primary focus on the stats, and a secondary focus on algorithmic efficiency. To me, any SAS, R, S+, SPSS, Stata, etc. question is perfectly valid here. | {
"source": [
"https://stats.meta.stackexchange.com/questions/35",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/-1/"
]
} |
793 | A discussion in fall 2010 considered the extent to which purely software-related questions would be welcome on this site. It didn't really reach a conclusion, but one useful suggestion that arose is to collect a set of links to online support resources (such as user groups and list servers) for the various statistical computing platforms. I guess that would let us close some of these questions in a constructive and relatively guilt-free manner. I, for one, would like to help the people who come by with questions about SAS macro syntax or table access, even though these questions have no direct statistical interest (and would interest only small subcommunities here) and therefore ought to be closed or migrated. Could we organize replies in the present thread by software platform? The ones of most immediate use are those that keep showing up: R, SAS, SPSS, Stata, Excel. | R R-help, and the various R Mailing Lists or SIGs, welcome any questions (provided they conform to the Posting Guide). Answers generally come within one or two days. Quick-R gives a gentle overview of most of the basic R syntax for people coming from SAS, SPSS, or Stata. Stack Overflow also provides strong support for R questions. Additionally, rseek.org provides a custom Google search that facilitates queries related to R code, packages, articles, etc. You can search the R documentation and package documentation using the rdocumentation.org site. Crantastic.org features useful reviews of current packages. The UCLA Academic Technology Services provide many worked examples of statistical analyses in R. If you're looking for visualization ideas, visit the R Graph gallery and the Learn R blog, both of which feature a wide variety of plots and the accompanying code. The R Graphical Manual also provides a visualization of all CRAN R package example plots, and is searchable by topic. The Cookbook for R page provides multiple examples and recipes for plotting data (mostly using ggplot2) plus some additional information on using R. To add to the excellent resources listed above, I (@michelle) have found the R Tutorial web site to be helpful. Also, I have R-bloggers as a feed; it has lots of useful posts from various bloggers and is an excellent way to keep up with new packages and new ways of using existing packages. If you're coming from SAS or SPSS, check out R for SAS and SPSS Users; there is a book with more information in it. An equivalent book for Stata users coming to R is R for Stata Users. A nice introductory tutorial can be found on the R Tutorials page - it covers the basics of R and the use of statistical tools such as t-tests, ANOVA, regression, and other topics. There are also some resources listed on our site here: Free resources for learning R, and on our R tag wiki. For more advanced topics in R programming, the best resource available online is the Advanced R site by Hadley Wickham; it is an online version of the book of the same title. Another resource covering programming issues is The R Inferno by Patrick Burns, available as a pdf file. Those two cover topics that matter little to most people who use R for statistics but can be crucial if you do actual programming in R, and can be helpful in understanding how R works 'under the hood'. If you are trying to better understand how some R function works, you can always check its source code, as R is open source. | {
"source": [
"https://stats.meta.stackexchange.com/questions/793",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/919/"
]
} |
974 | I know why I ask questions -- to learn and to get help with problems I'm struggling with. And I have been incredibly impressed with the knowledge, eloquence, and responsiveness of the Stack Exchange community. When I post a question, it almost always gets some response within 10 minutes. It's like having a bunch of statisticians and mathematicians hanging out at a water cooler that I can walk around the corner to when I get stuck. I am not surprised that some of my questions get answered. But I am (wonderfully) surprised that they almost ALL get answered, and almost immediately. What motivates enough people to spend enough time on this site that there is almost always someone with specialized statistics training willing to write up an answer to the question of a stranger -- often having to parse through the ill-formed/vague questions of beginners who may not know enough to know how to word a question clearly? Even if you're an expert in the subject matter, it still takes time to write out your thoughts and come up with examples. One explanation is real-world reputation, but so many people use aliases that I don't think that can explain much. I imagine some people get paid to answer, but I doubt that can be much of it. There is probably an intrinsic joy to teaching for some, but many of you are teachers by profession anyway. Is it truly altruism? (in the informal sense of the word) Is it about building and being part of a community? I am very appreciative of this site and the people who answer questions. I'm curious to hear your thoughts. | For me, there are three real reasons I end up answering questions: Something that could probably be called professional pride. This is my job, and I'm an academic. I'm supposed to help contribute to learning and knowledge in my chosen field. It keeps my mind sharp. I'm away from my colleagues a great deal of the time, and for a while I found myself in something of a bubble. CV...lets me walk amongst the data folk from time to time. Selfishness. I may one day have to read a paper written by someone who asked a question here - I'd rather read a methods section that doesn't make me cry. | {
"source": [
"https://stats.meta.stackexchange.com/questions/974",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/5471/"
]
} |
1,187 | I previously asked a question as to whether or not StackExchange hurts a consultant's business because of all the free advice one can readily obtain here. Some felt that the advice given was for basic things and not of the nature of a true consultation. I think there is some merit to that view, and now I am wondering if the opposite might be true. Do potential consulting customers actually hire consultants because they met them through StackExchange? I have only been on for slightly over 1 month, but I helped out someone with problems related to the binomial distribution and he subsequently contacted me to see if I would be interested in possibly doing some consulting for him in the future. I don't know if this proposed offer will ever come to fruition, but it does lead me to believe that maybe some of you, particularly if you have been with StackExchange for over one year, have gotten consulting jobs through StackExchange. | Good marketing is long-term. Long ago I learned a new discipline (GIS, if anyone's curious) partly by participating actively first on a listserver (1996), then with my own listservers (1999), and finally on a Web-based Q&A forum (2002). I can trace a small amount of my consulting business (about 5% annually) directly to those contacts--but only beginning after four years of constant activity. However, I can trace a much larger amount (almost 100% in one year) to work obtained due to the reputation established in that fashion. That work has taken me physically to other continents and professionally it has allowed me to enter radically different application areas (ranging from the environment to telecommunications to real estate to medical studies). I do not expect to make any direct business contacts through my activities on SE, but I do expect that if I generate a sufficient stream of high quality, visible, and memorable contributions over 4-5 years or longer, then people I would never otherwise contact may come to me with interesting consulting engagements. Almost 30 years ago, when I left academia to start this journey, I worked for a struggling small company that almost daily asked, "What do we want to do? Why are we here?" This was both a corporate and a personal question. My personal answer always was, "I want to have the door to my office open so anybody can drop in and ask questions. I want to help them solve their problems." The Internet has enabled that to come true. On SE, my door is always open. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1187",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/11032/"
]
} |
1,366 | Of course I don't really think that there are so few good statistical questions left that we have exhausted them all, but I have noticed that I will answer a question and find someone else saying that the question shouldn't be answered because it is the same as a previously answered question. I am relatively new to the site and don't know a lot of the history, but I find that very often a question is a repeat of something from 6 months, 1 year, or more in the past. What should a new member do? When a question comes in, should I be searching the site to see if it has been answered previously before I give my own answer? When asking questions, should we be searching the site to make sure it is a new question before we ask it? I would also like to know what the community members think of answering a question that is a repeat. I have done this on occasion and see no harm in it. If the answer is simple and brief, isn't it better to help the OP by giving the answer rather than to ask them to search the site for the answer? I know that some moderators feel that duplicate questions should be closed immediately and the OP should be directed to the previous questions. The argument is that answering these questions puts too much repetition and clutter on the site. But isn't good service to the OP more important than the tidiness of the site? Some of the clutter and repetition could be cleaned up later. I expect that some or many of you will disagree with me. So let me hear it. | This is a good set of questions; perhaps many people, especially our newer users, are wondering about these things. I need to point out that the policy about closing duplicates isn't a matter of "what some moderators feel:" it's part of the very structure of StackExchange. When you consult an encyclopedia or dictionary or even Amazon.com for information, for instance, you don't want to have to look in multiple places: you expect, and deserve, that what you're looking up will be in one place and it will be cross-referenced to any related (but different) places, so that you don't have to go skipping around to find what you're looking for. Our site provides great tools to help the asker of questions identify similar threads. In principle, this should resolve many such issues before they ever appear, and everybody is happy. But many askers cannot recognize the similarities because they do not know the terminology or they are not knowledgeable or imaginative enough to see them. That's where the community comes in. We hope and expect that when experienced members first read a new question, they will reflect about whether they have encountered something similar and whether something similar should have been encountered by now that more than ten thousand questions have appeared. For instance, a question that asks how to do a t-test should already have been asked and answered, so a quick search for that answer is in order before going any further. Just as we ask of questioners that they do some research before posting a question, we ask of answerers that they do some research too. Think before you post! When you find a clear duplicate question, then immediately closing the new one is the best thing you can do all around: it directs the asker to an existing solution; it prevents unnecessary duplication of threads; and it prevents unwary newcomers from investing time in formulating answers that have already appeared elsewhere.
(Please note, too, that duplicate material complicates all future searches, actually making it harder for new users to ask questions and harder for experienced users to make connections among questions.) If you find a "perhaps" duplicate, then a quick discussion with the OP (using the comment mechanism) will determine whether they have a new question or not. Once again, if you ascertain that their question is a duplicate, then closing it and directing the asker to the existing thread is the fastest way to serve them and usually gives them more information than most people can hope to produce in a single quick reply. This community has more objectives than merely dishing out answers to all and sundry. The "tidiness of the site" is intimately connected to its usability and value far into the future. Moreover, as one who has done a heck of a lot of it, I can attest that cleaning up "clutter and repetition" is time consuming, complicated, sometimes painful, and rarely gets completely done: far better is it to avoid such noise in the first place. For that reason, everyone posting answers here is constantly encouraged to provide clear, well-formulated, objectively reasoned, carefully supported answers: we want them all to be great. If users visit our site and see ragged, disorganized collections of hasty, careless, and mediocre answers, they're unlikely to return or participate. Leave that junk for yahoo (pretty well named, isn't it? -:) or answers.com or the myriad other generic Q&A forums that come and go. In summary, if anyone has arrived here interested chiefly in participating in a question mill with little regard for quality or curating the site, then your services would be better appreciated elsewhere: Cross Validated probably holds little future for you. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1366",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/11032/"
]
} |
1,419 | Is it possible for Stack Exchange to make available $\LaTeX$ abbreviations for \mathrm{E}, \mathrm{Var}, \mathrm{Cov} (possibly with new commands \E, \Var and \Cov)? The problem is that things like $\mathrm{Var}[X\mid Y]$ look much better than $Var[X\mid Y]$, but we end up losing a lot of time typing that. | $\newcommand{\E}{\mathrm{E}}$ $\newcommand{\Var}{\mathrm{Var}}$ $\newcommand{\Cov}{\mathrm{Cov}}$ $\newcommand{\Expect}{{\rm I\kern-.3em E}}$ Here is cardinal's solution. At the beginning of your answer, type: $\newcommand{\E}{\mathrm{E}}$ $\newcommand{\Var}{\mathrm{Var}}$ $\newcommand{\Cov}{\mathrm{Cov}}$ After that, just use it. For example: $$
\Var[X] = \E[\Var[X\mid Y]] + \Var[\E[X\mid Y]]
$$ $$
\Cov[X,Y] = \E[XY] - \E[X]\E[Y]
$$ Stask suggested $\newcommand{\Expect}{{\rm I\kern-.3em E}}$ $$\Expect[X]$$ which looks pretty. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1419",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/9394/"
]
} |
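For anyone who wants the same shorthand in an off-site, compilable LaTeX document (on CV itself the inline $\newcommand{...}$ trick above is the way to go, since MathJax has no preamble), here is a minimal sketch; the use of \operatorname instead of \mathrm is my own stylistic choice, not part of the original answer:

```latex
\documentclass{article}
\usepackage{amsmath}
% Define the shorthand once in the preamble.
\newcommand{\E}{\operatorname{E}}
\newcommand{\Var}{\operatorname{Var}}
\newcommand{\Cov}{\operatorname{Cov}}
\begin{document}
Law of total variance:
\[ \Var[X] = \E[\Var[X \mid Y]] + \Var[\E[X \mid Y]] \]
Covariance identity:
\[ \Cov[X, Y] = \E[XY] - \E[X]\,\E[Y] \]
\end{document}
```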
1,486 | The Stack Exchange team is organizing an original event, in the spirit of the one organized last year on the Gaming site, called Hat Dash, where users earned "hats" for their gravatars by completing certain tasks (analogous to badges). Certain actions would trigger the user receiving a hat, which their gravatar could then "wear". This event will run from 19 December 2012 to 4 January 2013.
Individual users who don’t want to participate, don’t want to see hats, and/or are generally anti-hat will have an "I hate hats" option available.
The only visual change to the site itself will be the presence of the hats and the "I hate hats" button in the footer.
Participation on one site does not affect accounts on other SE sites. The following two answers aim to collect votes for a community poll: Please indicate whether you think Cross Validated should participate in this event or not (1 vote per user). Responses from the community users are due by November 28 (sorry for the short delay), and moderators will inform the SE team of our collective decision. | Yes, Cross Validated should participate in Winter Bash 2012. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1486",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/930/"
]
} |
1,538 | Has anyone else noticed that we are 5th from bottom with an answer rate of only 76%? On the one hand there are some questions that appear "answered" in the comments, like this one: Is the percent of total deviance explained a useful model summary? These should probably be tidied up and "answered", if only to shift them off the unanswered board. On the other hand, if we do have a large number of genuinely unanswered questions, what do people think the reasons are, and what can be done about it? Are they bad questions - in which case should more be closed? Or are they just too hard? The concern might be that it doesn't look great if you only have a 3 in 4 chance of getting an answer... Currently (8 Feb 2013): $17420$ questions, $4175$ have no upvoted answer (24%), $484$ have answers, but no upvotes (2.8%). Not that I am suggesting we go round randomly upvoting old answers - but are all 484 answers rubbish? I think I might start flicking through some of these old questions, to see if any good answers have gone unnoticed. Any chance anyone else might do the same? Of course at the same time there might also be old questions that could be answered... (Now would be a good time to answer old questions, if people can be persuaded to start looking at voting on old questions.) | I think there are numerous factors, but statistics is one of those areas where a really large number of people are using a fairly demanding set of ideas with very little background in it -- in a way that (for example) people generating programming questions largely aren't. This results in a tendency to ask very vague - but often surprisingly specialized - and frequently utterly unanswerable questions. On the other hand, sometimes the questions are so trivial that there's really nothing more to say than 'yes, that's correct', which hardly seems worth an actual answer, an issue I address in another question. With no answer selected, for a surprising number of the questions I see, clearly the original poster is satisfied by an answer but doesn't choose any answer. With repeat offenders you can - if you notice - try to encourage them to do their bit, but with one-offs (accounts that ask exactly one question), by the time you realize they won't choose an answer out of the good answers available, they're long gone. Their problem is solved and our community norms don't matter. More often, someone with a programming problem (for example) tends to anticipate a future need to solve such problems again. When you add in a tendency for the jargon - and even the types of analysis - to be fractured across various disciplines that use statistics, it's especially apt to generate questions that won't attract answers. When I answer questions on StackOverflow (I answer R questions there, for example), the answers often take seconds. When I answer them on stats.stackexchange, I may invest hours in constructing some answers ... and then often not garner a single upvote for my effort. It does tend to lead to only answering questions that will be 'worth my time'. Even so - and in spite of decades working and publishing and teaching in the area - I frequently have to do significant reading (as well as a bunch of requests for clarification) to understand what the person is even asking, let alone how to answer it. Little wonder, then, that many questions I don't answer aren't answered by anyone else either. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1538",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/19879/"
]
} |
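The percentages quoted in the question are easy to verify; a quick sketch with the numbers taken directly from the post:

```python
total = 17420              # questions as of 8 Feb 2013
no_upvoted = 4175          # questions with no upvoted answer
answered_no_votes = 484    # questions with answers, but no upvotes

print(f"{no_upvoted / total:.1%}")         # -> 24.0%
print(f"{answered_no_votes / total:.1%}")  # -> 2.8%
```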
1,604 | Questions formatted in LaTeX are easier to read, but not everyone who has permission to review a question/answer knows how to do it (maybe I am an "outlier" here, but not sure). Are there any instructions on CV about how to do it? Like when you post a comment and it shows _ _ for italic, etc. Reflection:
There are many ways to learn how to do it, for example, clicking the "edit" button on a LaTeX-formatted question to see how it is done, or just looking on the internet. Nevertheless, I think if CV provided information on this topic, more users would feel encouraged to help with this task. | Here are some options that I know of: There is a CV help page for the markdown options that the site supports. That covers things like italics, and some (but very little) actual $\LaTeX$. There is a comprehensive 'tutorial' thread on meta.Mathematics here: MathJax basic tutorial and quick reference. One way to learn is to find something that uses $\LaTeX$ that you would like to also be able to do and right-click on it. Then select Show Math As -> TeX Commands, and it will display the code that was used to create it. If you need to identify the $\LaTeX$ name/code for a symbol and you can draw it, detexify might help. Stack Exchange does have a Q&A site dedicated to TeX; that might be helpful as well. After that, it's the broader internet; I've found this MathJax documentation site to be very comprehensive and informative, but it's really slow to load. This webpage discovered by @Andre is less extensive, but loads much faster. There's a handy 'reference card' here. Lastly, I suppose you could ask a question about how to do something specific here (e.g.: latex-macros-for-expectation-variance-and-covariance). | {
"source": [
"https://stats.meta.stackexchange.com/questions/1604",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/22468/"
]
} |
1,956 | I came across this question and answer today: Recall and AUC of a binary classifier. And I think the accepted answer is wrong. So I wrote what I believe is a better answer, and downvoted the accepted answer, explaining why I think it is wrong. Side note: if my reasoning went astray, please correct me. Side note 2: I actually think that in terms of content it will be easy to fix the accepted answer, and I also left a comment that I'll gladly upvote a revised version. I don't think it will be a matter where the author of the accepted answer and I cannot reach convergence - I think that he just didn't think of a particular situation which makes the "can never happen" answer incorrect. It is not like trench wars about the correct prior or the appropriateness of Bayesian vs. frequentist approaches. The concept of StackOverflow relies on the fact that other users can up- or downvote answers. Ultimately this should lead to good (including correct) answers floating to the top. This concept also assumes that in case some issue is discovered, all sides can react: the author of the answer can edit and the OP can even change the accepted answer. This relies on people staying active on CV/SX, but what if they aren't active any more? Question and accepted answer are 1 1/2 years old. The OP's profile says that the OP never came back to the site after the day that question was asked and "answered". (The author of the accepted answer was online here more recently (2 weeks ago) so he/she may edit the answer in reaction to my comment.) I'll flag the question for moderator attention, but I think it would be good to write down here how to proceed in case the OP doesn't show up again and react to the new situation, and in future if similar issues arise (the OP could have deleted his/her account meanwhile). To be even more clear: I think it is a perfectly valid policy to say that we don't do anything in these situations. I just think that it would be good to have the policy discussed and spelled out here (whatever it is). | There is no special policy either possible or needed here. Every action is fallible, but within limits correctable! Original posters may accept poor or even incorrect answers (and then fade away). It is best not to read too much into question acceptance; always remember that this is just one person's judgment. Sometimes an OP chooses what they wished to hear, or what they understood, or what best fitted their specific needs. Often the pattern of voting indicates other answers that are deeper, fuller, or more interesting to many readers. People may post poor or even incorrect answers (and then fade away). But people who are competent and confident can and should post good answers. If you have a better answer, post it! Explain firmly but politely the limitations or errors of other answers. Vote up (and down) accordingly. In this specific case, you did what you could, and that's fine. Note on editing: The tacit policy seems to be that people with sufficient reputation and a reasonably good eye for language, mathematical and statistical formatting should edit mainly to improve presentation. Supposed technical errors other than trivial typos should be brought to the attention of posters, with the aim that ideally they make corrections themselves. Such diffidence is partly for posters' own education (assuming they accept corrections) and partly because editors may be those who are confused or incorrect. | {
"source": [
"https://stats.meta.stackexchange.com/questions/1956",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/4598/"
]
} |
2,026 | I noticed the ad on the main site today for the data science proposal, which evidently is 77% committed. Some of the people committed to the proposal are users on CV. Most of the questions listed look like they would be on-topic on CV, or even might be duplicates of existing questions. A minority of the questions are more programming oriented, but I would guess they could fit on one or another of the various SE programming sites. Of course, people can form a new site if they want, but I'm wondering what the value-add is. Is there something we're lacking, or should we be more welcoming of something? Moreover, what do people think of the proposal? | Someone raised this issue in Area 51, Overlap with existing sites, with only a little discussion. I would think Cross Validated would be suitable for most Data Science topics, but apparently there is a perception that CV is for theoretical stats questions. The CV description is "a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization" which seems aligned with Data Science. As far as the question of what can be done to avoid the overlap goes, we can: participate in the Area 51 discussion; start some more organized effort of "try CV first"; add "data science" to the CV description; and be accepting of less statistical data science questions (data cleaning, storage, ...). As a point of comparison, there are two math sites: Mathematics, "for people studying math at any level and professionals in related fields", and MathOverflow, "for professional mathematicians". | {
"source": [
"https://stats.meta.stackexchange.com/questions/2026",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/7290/"
]
} |
2,107 | This question has been asked on other SE forums, but has anyone on Cross Validated added their profile information (either putting their handle, rep, or both) on their resume or curriculum vitae? There are, of course, numerous other factors that influence applicability for a job; & acquiring rep/presence on a SE site is particular to just this community, making it rather arbitrary to an outsider. However, the question below & its subsequent answers got me thinking about the nature of the CV forum in particular as it relates to working in the field of statistics: Unanswered questions as percentage of total – why does CV stand out? In the organizations where I have worked or interviewed – largely requiring an applied statistics background – researching, understanding to the best extent possible, & then communicating technical concepts from statistics is critical. Oftentimes the client/asker doesn't know specifically what they want to see beyond a broad conceptual outcome. And of course there may be multiple approaches to any problem given the background of the expert/client, as well as organizational constraints (available time, technology, etc.). The nature of the questions & answers on CV, which frequently require deeper research & are more specific or nuanced than other SE sites, seems reflective of what I & others have encountered in the workforce. Glen_b summarizes this well:
myself doing algebra I've never quite attempted before, running
simulations I've never run before (and writing and debugging code to
do them!), suggesting novel or tweaked test statistics and exploring
their properties, comparing the properties of several approaches to a
problem, coming up with slightly novel way to visualize some data,
reading papers to follow the history of some little technique, reading
more papers to even figure out what a person is asking about ... and
so on. That is, a lot of questions here take actual research effort.
Sometimes hours of it. Glen_b's summary seems to be exactly the kind of mindset required for many stats positions. So having a quality presence on CV in particular may serve as a plausible representation of one's aptitude in solving statistical problems in general. This is different from SO for example, where questions are expected to be very specific w/ reproducible code, solutions don't necessarily need to be as nuanced, potential number of solutions are more limited, & the community/pool of knowledge is generally larger. Similar SE questions: At what point do you put your SO reputation in your resume? I was recently asked for my Stack Overflow reputation score in a job interview. Is that appropriate? NOTE: I'm asking this as a general question. I definitely would not classify my current presence on CV as "quality". | As an independent consultant I am open to opportunities of any sort, including working full time for a (worthy) organization. I therefore maintain different resumes according to the type of position: consulting, academic, and industrial. The academic and industrial ones mention SE, but it is just one short entry out of many under "professional activities." It is there as a strategic element, intended to provide an opening for further discussion in an interview. Given such a chance, I would make a case similar to Glen B's, but I would not emphasize this in any application materials. Others, elsewhere, have pointed to a significant problem with CV (and SE in general): a high reputation is not necessarily associated with knowledge, quality of exposition, or even a tendency to be correct. Moreover, a very high rate of participation could be a negative in some employers' views ("why would she spend so much time at that and would that activity distract from her job?"), so you should be prepared to provide a good explanation if you are in that category. That has begged the question of what I might say during that hypothetical interview. Some points I think would be worth making about CV in particular are: Active participation--even if it is primarily reading, editing, and commenting--exposes us to far more statistical consulting problems than anyone ever has seen in their lifetime before. It can thereby serve as excellent preparation for consulting and to demonstrate one's fitness to serve as a statistician in almost any environment or organization. Actively answering, especially with the care described by Glen B, will teach anyone who is sufficiently thoughtful and humble to improve their communications skills as well as their statistical expertise. The site is useful as a "sandbox" in which to test ideas that might be turned into more formal resources such as a blog, a textbook, or (in a few cases) published papers. Involvement in as many topics as possible exposes one to a wider range of solutions and thereby, over time, can considerably augment one's skills and tools. | {
"source": [
"https://stats.meta.stackexchange.com/questions/2107",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/40501/"
]
} |
2,223 | For the third year running, the Stack Exchange team is organizing a "Winter Bash". Users earn "hats" for their gravatars by completing novel tasks (analogous to badges). Certain specific actions will trigger access to a (graphical) hat, which their gravatar can then "wear" at the user's option. This event will run from 15 December 2014 to 4 January 2015. Individuals who don’t want to participate, don’t want to see hats, or are generally anti-hat will have an "I hate hats" option available (which will cause you not to see hats at all). The only visual change to the site itself will be the presence of the hats and the "I hate hats" button in the footer. Participation on one site does not affect accounts on other SE sites. Two answers aim to collect votes for a community poll: Please indicate whether you think Cross Validated should participate in this event or not (1 vote per user). Responses from the community are due by November 30. Moderators will inform the SE team of our collective decision. | Yes, Cross Validated should participate in Winter Bash 2014. | {
"source": [
"https://stats.meta.stackexchange.com/questions/2223",
"https://stats.meta.stackexchange.com",
"https://stats.meta.stackexchange.com/users/919/"
]
} |