target | source | category |
---|---|---|
what's the better way to charge for a cloud platform: simple but more expensive, or complicated but cheaper for most? | like all good questions, the answer depends. it depends on your application: is it something you are likely going to be using periodically? a lot on some days of every month and very little the rest of the month? once a year? then a pay-as-you-go model would be 'nicer'. it also depends on the sophistication of your users. will they bother to do the mental arithmetic if you explain the payg model to them? will they care? how much will your users be spending? $5? $10? for that amount of money they might simply not care to optimize their experience and would probably rather go for something that is simple and easy. could the two models make a difference as large as $1000? then you might find people are more than willing to have a look at the payg terms... | comparison |
when is it better to offload work to the rdbms rather than to do it in code? | you want to do all set-based operations in the database for performance reasons. so aggregation functions, sorting functions, joins etc. this age calculation, i'd do in code. the only reason i might ever do something like this in a database query is if it required lots of columns that i wouldn't otherwise select that could actually amount to enough data to meaningfully slow down my query. selecting a few integer values will not make a meaningful performance difference. and even if it makes a moderate performance difference i will be biased towards keeping this logic in the application code. | comparison |
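a minimal sketch of the split that answer describes, using python's built-in sqlite3 as a stand-in for whatever rdbms you use (the table and columns are hypothetical): set-based aggregation stays in sql, while a trivial per-row derivation like age stays in application code.

```python
import sqlite3
from datetime import date

# hypothetical users table; sqlite3 stands in for the real database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, birth_year INTEGER, score INTEGER)")
conn.executemany("INSERT INTO users VALUES (?, ?, ?)",
                 [("ann", 1990, 10), ("bob", 1985, 7)])

# set-based work (aggregation, sorting, joins) goes to the database
(total,) = conn.execute("SELECT SUM(score) FROM users").fetchone()

# a trivial per-row derivation like age stays in application code
for name, birth_year in conn.execute("SELECT name, birth_year FROM users ORDER BY name"):
    age = date.today().year - birth_year  # rough age, ignoring birthdays
    print(name, age)
print("total score:", total)
```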
is it better to specialize in a single field i like, or expand into other fields to broaden my horizons? | specialise if you enjoy it. as you are aware, if you specialise you are automatically incurring an opportunity cost in that you won't be immediately eligible for other technologies (e.g. java programmers don't often immediately get accepted for compiler optimisation jobs). however, you have to balance this with your love of the complexity inherent in your chosen discipline. you say you want to be an expert - well, go ahead and take the time to learn your chosen discipline. we as a community always need new experts. however, my advice is to follow the pragmatic programmer recommendation of "learn a new language every year". that way, while you're engaging in deep lexical analysis of algorithmic encoding, you can also be churning out a little iphone app that interests you on the side. you never know, the cross-pollination of different paradigms may give you some insight that will extend your specialisation into new areas. | comparison |
how is intellij better than eclipse? | i work with intellij (9.0.4 ultimate) and eclipse (helios) every day and intellij beats eclipse every time. how? because intellij indexes the world and everything just works intuitively. i can navigate around my code base much, much faster in intellij. f3 (type definition) works on everything - java, javascript, xml, xsd, android, spring contexts. refactoring works everywhere and is totally reliable (i've had issues with eclipse messing up my source in strange ways). ctrl+g (where used) works everywhere. ctrl+t (implementations) keeps track of the most common instances that i use and shows them first. code completion and renaming suggestions are so clever that it's only when you go back to eclipse that you realise how much it was doing for you. for example, consider reading a resource from the classpath: you type getresourceasstream("/ and at that point intellij will be showing you a list of possible files that are currently available on the classpath, and you can quickly drill down to the one you want. eclipse - nope. the (out of the box) spring plugin for intellij is vastly superior to springide, mainly due to its code inspections. if i've missed out classes or spelled something wrong then i'm getting a red block in the corner and red ink just where the problem lies. eclipse - a bit, sort of. overall, intellij builds up a lot of knowledge about your application and then uses that knowledge to help you write better code, faster. don't get me wrong, i love eclipse to bits. for the price, there is no substitute and i recommend it to my clients in the absence of intellij. but once i'd trialled intellij, it paid for itself within a week, so i bought it, and each of the major upgrades since. i've never looked back. | comparison |
i believe my solution is better than my boss's, so should i ignore him? | having been "the boss" and, as it turned out, actually better than my staff in all cases bar one - yes, he will be mad - or annoyed or frustrated - and in any case quite possibly right in the first place. if you're genuinely better than him then you should be able to understand his proposed solution, to see why yours is better, and then to explain why. but you state "because his idea was not clear enough to me", in which case you need to go back and understand what he wants and why, and whether - as has been the case both in me making suggestions to my staff and in my staff proposing solutions to me - you or he has missed something. but don't assume that he's wrong and you're right unless and until you understand what he's asking for and whether he's covering something you haven't thought of (yet). oh, and in the one case: he's a better programmer, but he's not so good a couple of steps back from the problem, where i'm better - and we had great fun working together for that very reason. | comparison |
office arrangement - comfort vs. teamwork? | get some chat software installed. aside from that, you may find some of the other developers find verbal communication breaks them out of the productive 'zone', regardless of how work-related the talk is. also of note is that by having everyone in the same location, while work talk between dev a and dev b may be productive, it is most likely nothing but distracting noise to dev c, dev d, ... | comparison |
when is c a better choice than c++? | my favorite reason for not using c++ is that c has a de facto standard abi on most platforms. in other words, because there's no name mangling, etc., you can usually link code compiled with two different c compilers. with c++, good luck, because you'll need it. | comparison |
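a hedged illustration of that stable-abi point, seen from python (ctypes is in the standard library; the exact location of the c math library is platform-dependent, so treat the lookup as an assumption): because c does no name mangling, the exported symbol is literally cos, and any language can bind to it by name.

```python
import ctypes
import ctypes.util

# find_library may return None on some platforms; this sketch assumes a
# unix-like system where the c math library is discoverable as "m"
libm = ctypes.CDLL(ctypes.util.find_library("m"))
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # 1.0 -- the exported symbol is just "cos"
# a c++ function would be exported under a mangled, compiler-specific name,
# which is why cross-compiler (and cross-language) linking gets hairy
```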
what's the better user experience: waiting once at startup for a long time or waiting frequently for a short time? | start up as fast as possible and use a thread to do the most important calculations. that way the user gets feedback immediately and can start working after 15 secs. in the background, let another thread calculate everything else, so that after two minutes those 2-3 sec response times also go away. | comparison |
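a minimal sketch of that pattern (the names and timings are made up): do a cheap first pass so the user can start right away, and refine in a background thread.

```python
import threading
import time

results = {"quality": "rough", "data": None}

def quick_pass():
    # cheap approximation so the user gets feedback almost immediately
    results["data"] = [1]

def refine():
    time.sleep(2)  # stand-in for the expensive full calculation
    results.update(quality="full", data=[1, 2, 3])

quick_pass()
threading.Thread(target=refine, daemon=True).start()
print("ready to work with:", results)  # user starts here
time.sleep(2.5)                        # in a real app the ui event loop runs here
print("after background refinement:", results)
```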
wpf vs. winforms - a delphi programmer's perspective? | if you have a delphi background, you will be disappointed in winforms. you will try to do things that were easy in the vcl, only to find that they're painfully difficult, or even impossible. wpf will be much less confining. for example, here are just a few of the winforms limitations we've run into: winforms has nothing that compares to taction, so if you're used to coding with actions, sharing the same text and icon between a menu item and a toolbar button and a right-click menu, centralizing your enabling logic, and updating the enabled state in the background with onupdate... you'll hate winforms, where you have to do all that the hard and error-prone way. winforms' old (.net 1.0 vintage) mainmenu doesn't support images next to menu items, and the new (introduced in .net 2.0) menustrip is riddled with bugs that microsoft refuses to fix (because the bugfixes might break backward compatibility). many controls, e.g. the treeview, are woefully underfeatured compared to their vcl counterparts (painfully slow, no owner draw, many customization options missing, etc.) there's nothing resembling the vibrant community of third-party control developers that you're used to in delphi. there are quality control libraries out there, but you pay for them -- free offerings like virtualtreeview just aren't out there for winforms. wpf is a little more bare-bones in some respects than winforms, but it's immensely more extensible. you want something like taction? wpf has icommand, which is just as rich as you're used to (but make sure you read josh smith's mvvm article -- normally you have to enable/disable your commands manually when the state changes, but his version automatically fires your enabling code in the background like you're used to with onupdate). you want images on menus? that's built in (and nowhere near as buggy as in winforms). winforms leaves out owner-draw on some important controls, but if you're using wpf instead, you don't need owner-draw -- if you want your treeview nodes to have black text followed by a blue number in parentheses, you just put it in your datatemplate and it works, no ugly owner-draw code needed. you want third-party controls? in many cases, you don't need them, because you can extend what's there in ways winforms and, yes, vcl developers can only dream about. wpf has a very steep learning curve, but if you pick up a good book (e.g. "wpf 4 unleashed"), it'll help get you over the worst of it -- and you'll be glad to work with a framework that won't hold you back the way winforms will. | comparison |
is it better to load up a class with methods or extend member functionality in a local subclass? | as always, these things are a matter of taste. in this case, i'm not crazy about the second solution for a few reasons: there's always a small intellectual burden when defining classes and their relationships. it's not clear what benefit the nested class provides aside from partitioning code. the one class approach is simple and clear. i'm not crazy about nested public classes. this implies i should be able to instantiate it outside of the searchclass context which doesn't make a lot of sense. inheriting generic collections isn't my favorite. when i run into this class, it's not immediately clear that it's just a plain old readonlycollection<localfile>. i don't like the extra step of remembering what's behind the curtain. additionally, you may want some sort of static method to create your searchclass. in your example, the class runs processing in the constructor which doesn't allow it to gracefully fail. the only option is to throw an exception. if you use a static method, you have the option of returning null on failure (or some other alternative failure scheme). | comparison |
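a sketch of that last point in python (searchclass and the scan step are hypothetical stand-ins): a static/factory creation method can signal failure by returning none, where a constructor's only way out is an exception.

```python
class SearchClass:
    def __init__(self, files):
        self.files = files  # the constructor only stores already-valid state

    @classmethod
    def create(cls, root):
        files = scan(root)  # hypothetical processing that used to live in __init__
        if files is None:
            return None     # graceful failure -- no exception required
        return cls(files)

def scan(root):
    # stand-in for the real search; fails by returning None
    return ["a.txt", "b.txt"] if root else None

result = SearchClass.create("/tmp")
print(result.files if result else "search failed")
```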
if it were up to you to choose between game development and application development, which would you choose? | i have a coworker who came from the world of game development. unless you are working for the few big dogs that have their own publishing department, your employer is working for an external publisher with the constant threat of the contract being cancelled. the stress that management feels is passed on to you. trying to pull off a quality title on a shrinking budget, for people who pay your salary but may not care what you are trying to do with your title, is quite the challenge. at the end of the day, he became burnt out on games, and after a couple of years out of the industry he still can't bring himself to play any video games. application development is pretty stable work, and while your clients can sometimes be crazy, they are not so quick to pull the contract-cancellation card. the difference is that they depend on you to help them get better at whatever it is they need. they know that if nothing changes they can't improve their business. games, on the other hand, are for pure enjoyment, and publishers only care about making money. game development can be enjoyable, as long as you don't have any aspirations to go up against the big dogs. small mobile-device games are a lot easier to compete with, mainly because it's a smaller market. as such it's also harder to generate enough revenue to sustain the habit full time. | comparison |
python threading vs. multiprocessing: should i learn one before the other? | when you are starting out, it doesn't matter which one you choose. what is more important is getting a better understanding of how to parallelize work. if you don't have that base understanding, you will not be able to take advantage of the fine points that differentiate the two. pick one and get used to thinking about what work can be done in parallel. think about ways you can break a large task into smaller pieces. think about what pieces of memory all the tasks need access to and whether that value ever changes. think about whether it is ok for each task to have its own local value to change, and whether combining all of the local values at the end could prevent contention. once you think you have a handle on those types of things, then you can come back and look at the differences again. with a stronger understanding of concurrency, you will have a better understanding of how the two methods differ. | comparison |
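a small illustration of that advice (the work itself is arbitrary): once the task is broken into independent pieces with only local state, the same decomposition runs under either threads or processes, so the choice becomes a detail.

```python
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def piece(chunk):
    # each piece touches only its own local data, so no contention
    return sum(x * x for x in chunk)

chunks = [range(i, i + 1000) for i in range(0, 4000, 1000)]

if __name__ == "__main__":
    # identical decomposition of the work; only the executor differs
    with ThreadPoolExecutor() as ex:
        print("threads:  ", sum(ex.map(piece, chunks)))
    with ProcessPoolExecutor() as ex:
        print("processes:", sum(ex.map(piece, chunks)))
```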
is it better to have c developers learn flash than to hire flash developers? | it would matter a great deal why the existing c developers re-wrote all the code. on the one hand, it could be that an architect needs to determine what functionality needs to be built in the different software tiers. if you are embedding logic in flash that needs to be reused by your c developers, for example, that's probably a poor architecture and could explain why the c developers needed to rewrite the code to pull out various apis. on the other hand, it could be that your existing developers are being excessively territorial and resistant to learning a new language. on the third hand, perhaps flash was a poor technology choice for the requirements you have and the frameworks that have to be leveraged. without addressing why the previous projects failed with the members of the team and with the stakeholders, it's going to be very hard to address the problem. | comparison |
which is better design: determining if a function should execute from outside of it, or inside of it? | the important bit is: who owns bar? if it's unclear, consider whether b can live up to its method name considering the value of bar. if so, then make it part of b. if not, then make it part of a. in a real world scenario, you'll generally find methods that fulfill a design responsibility, and helper methods that fulfill implementation steps. generally, the decision of whether to call an implementation method resides with the api method - which is generally the public method.

```csharp
public class Control {
    public bool IsVisible;

    public void Render() {
        if (this.IsVisible) {
            this.PrepareForRender();
        }
    }

    private void PrepareForRender() {
        // just do it. don't check IsVisible
    }
}
```

it's possible that you have two different api methods calling each other. in that case, i'd suggest falling back on the semantics of why b doesn't run when bar is false. is it because b is unnecessary? then b should decide.

```csharp
public class List {
    public int Count;

    public void SetTo(object[] values) {
        this.Clear(); // need to clear values; don't really care how
    }

    public void Clear() {
        // Count == 0 is just a shortcut to fulfilling Clear's responsibility
        if (this.Count == 0) return;
    }
}
```

is it because b is incorrect? then a should decide.

```csharp
public class File {
    public bool IsBinary;

    public void Write() {
        if (this.IsBinary) {
            this.OpenBinaryStream();
        } else {
            this.OpenTextStream();
        }
    }

    private void OpenBinaryStream() { }
    private void OpenTextStream() { }
}
```

because it's actively harmful? a should decide, and b should throw an exception:

```csharp
public class File {
    public bool UseSecureDelete;

    public void Delete() {
        if (this.UseSecureDelete) {
            this.ZeroBytes();
        }
    }

    private void ZeroBytes() {
        if (!this.UseSecureDelete) throw new InvalidOperationException();
    }
}
```
| comparison |
what is better: making a separate temporary table or inserting directly into the big table? | are the 3000 rows entered evenly throughout the day? if so, that's a row inserted every 29 seconds -- so inserting data as it comes in is hardly anything to worry about. you'll only benefit from creating a temporary table if that table is in memory and you then dump it to the main table at the end of the day. edit: in reference to the comment "the 3000 rows are inserted at once, then what to do?" -- in that case, just insert straight into the main table. adding to another table and then dumping that table is just overhead and completely pointless. look into load data if your host supports it, or just do one big insert with low_priority (not recommended on a heavily loaded server) or delayed (recommended). see: [ref] | comparison |
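a sketch of the "just insert directly" case in python (sqlite3 stands in for mysql here, so the mysql-specific low_priority/delayed modifiers are omitted; treat this as the shape of the approach, not mysql syntax): one batched insert inside one transaction, no staging table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER, payload TEXT)")

rows = [(i, f"row-{i}") for i in range(3000)]
with conn:  # one transaction: the whole batch goes straight to the main table
    conn.executemany("INSERT INTO big_table VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM big_table").fetchone()[0])  # 3000
```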
how much better is apache solr than its non-java competition? | i don't know anything about sphinx, but both solr and zend_search_lucene are based on lucene (to be precise, solr uses lucene directly and zend_search is derived from lucene). so at the bottom of either package you have the same solid and tested ir concepts, and in theory you can develop pretty much the same search engine using either package. what is good about solr is that it is packed with a lot of features which save you a lot of time. for instance, you can query solr and have it send the response in php serialize format, so you can get the results with a simple unserialize() instead of parsing. you would have to develop all those bits yourself with zend_search. | comparison |
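for a feel of the "bits are already done for you" point, here is a hedged sketch of a solr query over its http api (it assumes a local solr instance with a core named mycore; wt=json selects the json response writer, analogous to the php-serialize writer mentioned above):

```python
import json
import urllib.parse
import urllib.request

# assumes solr is running locally with a core named "mycore"
params = urllib.parse.urlencode({"q": "title:lucene", "wt": "json"})
url = f"http://localhost:8983/solr/mycore/select?{params}"

with urllib.request.urlopen(url) as resp:
    results = json.load(resp)

# solr has already parsed, ranked and serialized everything for us
print(results["response"]["numFound"])
```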
when designing a protocol, is it better for a method to accept a single object of a specific type, or an array? | there is no general answer to your question, but let me provide one for your example to point out some principles that generally apply. note: i shall henceforth speak of "interfaces" instead of "protocols", not only because it is arguably the more common name in other languages, but also because i really mean interfaces in a more abstract way, as in "the definition of how something can be interfaced with". when designing an interface, you should design it as if you had no means to implement it yourself, but rather had to rely on somebody else, to whom you can only communicate your needs through your definition. if you assume this scenario, you see that you want an interface to be very clear (otherwise the implementation may not be designed to do what you expect) and very easy to implement (otherwise the implementation may fail to do what you expect). now, in the context where you depend on the service abstracted behind the interface you design, you should already have been able to make several important decisions -- decisions that you shouldn't forward to the implementor, lest you force them to overstep the single responsibility principle or provoke code duplication. instead, you need to make those decisions and convey them in your method invocation. in this example, you might have: figured out whether some items can be processed in batch - assuming you expect any advantages in doing so; been able to filter out duplicates - assuming your system can produce any in the first place. and thus (under both assumptions) your contract with the implementor should be "you must be able to process a batch of unique urls". you can hardly transport this contract unambiguously without documentation, but you can try, for example through method names. imho receivedroppeditems is not very expressive, because item is really ambiguous, and so is receive in a language based on message passing. i would call it processdroppedurls. it's not longer, but it really says what's expected to happen. if i am asked to implement such a method, i think "oh, ok, so i am expected to process a collection of dropped urls" (assuming i know what dropped means in that context, i know all i need). even though the collection's type might be defined to something as vague as nsfastenumeration, i would expect for...in to yield nsurls. as for the uniqueness information, i would probably rather put that into the documentation than into the method name, because it's not that vital. so to summarize: you want clean, concise, almost minimalistic interfaces with expressive method names, that create abstraction barriers between the current client scope and the abstracted service scope, which are clear and simple from both sides. | comparison |
readability vs minimalism/compactness/conciseness in language design: are they antagonistic? | conciseness is not the same thing as readability. nor is verbosity the same thing as unreadability. readability is a function of expressiveness (how much the syntax communicates), not length. there is often a correlation (up to a point) between conciseness and readability, but don't mistake that for causation. to the case at hand, the people complaining that the new version lacks readability are most likely rationalizing their irrational fear of change, even positive change. a very large number of languages use similar literals to denote arrays and associative arrays (dictionaries, hashes) without any apparent penalty to expressiveness or readability. to give you some anecdotal evidence, ruby is moving from {:a => 2} syntax for hashes to {a: 2} syntax (json style). similar pro/con arguments have been made for similar reasons by detractors but, once people become comfortable with the new syntax, they tend to accept and adopt it without difficulty. | comparison |
ubersvn vs visualsvn server - any experiences? | obviously ubersvn is still a very new product so i imagine experiences might be difficult to come by. you're not the first to ask this question though so i might be wrong about that! the way ubersvn is designed, when you add repositories and choose the 'import existing repository' option, the repository will be copied from the current repository location into the storage location you choose for ubersvn at installation time. that way we don't interfere with your existing visualsvn installation and you can easily test the migration without risking breaking anything in your existing setup. when you are happy it works you can gracefully migrate from one to the other with very minimal downtime for users. the one area that might take further thought is how you handle user account creation and authentication in ubersvn. if you can advise how many subversion users you have now and how your authentication is currently handled i can provide a bit more info on the best way to do this. we have discussed providing some specific migration tools from other systems and it would be very interesting to know how you get on with this. of course if you need any help feel free to update your question or to contact me directly. ian wild, ubersvn product manager, wandisco inc. | comparison |
is it better to document functions in the header file or the source file? | my view... document how to use the function in the header file, or more accurately close to the declaration. document how the function works (if it's not obvious from the code) in the source file, or more accurately, close to the definition. for the bird's-eye view in the header, you don't necessarily need the documentation that close - you can document groups of declarations at once. broadly speaking, the caller should be interested in errors and exceptions (if only so they can be translated as they propagate through the layers of abstraction), so these should be documented close to the relevant declarations. | comparison |
what do you believe is a better of method of learning languages: using books or jumping straight into a project? | false dichotomy; i routinely work on a project while reading a book. i will say this: if you just read a book without actually programming anything then you are not gonna learn the language. now whether you're reading from a book or from the web while working on your project matters little; obviously you want a good resource and not a crappy one, but there are both good and bad learning resources in both book form and on the web. | comparison |
building websites, which is the better approach: mvp or mvc? | i guess you're asking in the context of the microsoft world? it was my understanding that mvp was mainly employed for unit testing the controllers of the application. i think another term for the mvp pattern is supervising presenter. the view in this case would be the asp page, which would have an instance of the presenter, and any events in the page (click handlers, for example) would delegate their work to the presenter. the presenter would have a reference to the view (although through an iview interface that the page would implement). the presenter would then respond to actions from the view, do something, update the model and pass it back to the view, or perform actions on the view via the interface. this aids unit testing, because the presenter (where any presentation logic should be) can have the view mocked out. typically a presenter would also have references to other services, which should rely on dependency injection (via constructor injection imo) to also allow for the presenter to be tested in isolation (a proper unit test). the main reason to use mvp in asp.net webforms is that the page class is tied to the asp.net runtime, whereas because asp.net mvc has abstractions on things like the httprequest, and the controller and actionresult classes can be tested in isolation, you don't need to employ mvp - it's unit-testable by default. so to answer (i think): i'd use mvp if i were using webforms, to aid testability; otherwise i'd use mvc. also check out the web client software factory, which offers a framework as a starting point for implementing the mvp pattern. | comparison |
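a language-agnostic sketch of that presenter arrangement (python here for brevity; all names are hypothetical): the page implements a view interface and delegates events to the presenter, which can then be unit-tested against a fake view, no asp.net runtime required.

```python
class View:
    # the iview-style interface the page would implement
    def show_greeting(self, text): ...

class Presenter:
    def __init__(self, view):
        self.view = view  # injected, so a test can pass a fake

    def on_button_click(self, name):
        # presentation logic lives here, not in the page's code-behind
        self.view.show_greeting(f"hello, {name}")

class FakeView(View):
    # stands in for the real page in a unit test
    def show_greeting(self, text):
        self.shown = text

view = FakeView()
Presenter(view).on_button_click("world")
assert view.shown == "hello, world"
print("presenter logic verified without any web runtime")
```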
talks vs. poster presentations: which is better for advertising your research and building research networks? | it depends on what you want to do. if you feel like at this point in your research it would be more beneficial to converse than to present, then i'd say that a poster session is the right venue for you. it's true that talks are considered a bit more prestigious than poster sessions, but you really should go with what you think will be more valuable for you, and for the conference attendees. it's worth noting that you could always do a poster presentation this year, get the feedback that you covet, and then return next year to do a talk, and let everyone know how your research went over the subsequent year. that kind of progression is not a bad thing. also, if you are in the early stages of your research, it might not be ready for a talk. when i attend a conference talk, i'm expecting there to be some significant findings. sure, talks might be more "prestigious," but, if there are some holes in your research, you could end up discrediting yourself. people aren't expecting the same level of maturity in the research during a poster session. so, as i said before, forget the prestige aspect, and choose what is more fitting based on your goals, and on what you have to share at this point in your research. | comparison |
is it better to have no publication than having you as second author? | if the paper is well-written, contributes non-trivial knowledge to a particular field, is peer-reviewed, and you contributed significantly enough to the paper to be listed as an author, you should put your name on it. in the case of fields where pre-prints are common (i.e., it won't be peer-reviewed, at least initially), it is also worthwhile. what you don't want is to be an author on a poorly written paper. in other words, you should be proud to list a publication on your cv even if it isn't exactly in your primary research area. especially at this stage in the game, having a few good papers will help your case, even as a second author (and keep in mind that some fields, such as theoretical computer science, should have the authors listed alphabetically and all authors are considered primary). | comparison |
ph.d. adviser or ph.d. advisor? | according to the new oxford american dictionary (that i have by default on my mac :)): the spellings adviser and advisor are both correct. adviser is more common, but advisor is also widely used, especially in north america. adviser may be seen as less formal, while advisor often suggests an official position. since it's an official position, i'd rather go for ph.d. advisor | comparison |
low-quality paper or no paper - which is better for an undergraduate in phd admission? | if the quality of the work is low, the student should neither publish it in a lower tier conference nor publish it as a technical report. they should either make the time to improve it or toss it in the trash. a bad publication, no matter what venue it's published in, is worse than no publication at all. similarly, a "publication" listed in a cv or described in a statement of purpose that isn't retrievable via google (unlike most technical reports, which are googlable) is also worse than no publication at all, because we can't tell if the applicant is lying. (sadly, some applicants are lying.) | comparison |
what is the difference between ph.d. programs with coursework and those without it? | it seems to me that there are several advantages; none of these are suitable for every student. it's up to you whether enough of them apply to you to make it worth doing a taught phd. a phd with a bit of coursework in the first year will help those who are crossing over into a discipline that they're not already deeply embedded in: it will give you some hand-holding through the things you'll need to know but don't yet; it should (if taught well) also teach you some extra research skills; it will give you some indication as you progress as to how well you're doing, compared to how well you should be doing if you're going to finish; it will allow you to explore different aspects of the field, to help you finalise your thesis topic; it may, depending on the country and institution, give you an intermediate degree at the end of the taught section, such as an mres, which will count for something even if you then don't go on to do the full phd; and it lessens the culture-shock for those going straight from fully-taught study to a research degree. | comparison |
for graduate admissions, is it better to attend a well-known undergrad program with advanced coursework or one where i can write a bachelor's thesis? | if i were you, i would try to move to b, based on the assumption that b is better than a and a is probably less recognizable in the us than b schools. you are right that research potential is an important factor the admission committees of top schools will pay attention to. however, a bachelor's thesis is not the only way to show that you can do research. you can do research papers under professors' supervision at school b if you want. i suggest b since you can learn a lot more in b than in a. building the foundations during the undergrad stage is more important than doing research prematurely. when you apply to graduate schools, you need to provide your gpa, gre/toefl scores, recommendation letters and any evidence that you can do research. one of the first things the admission committee will look at is your transcript, to determine if you are competent in math. if you stay in a, all you can show is basic to intermediate level math courses. given that all other conditions will be equal, i.e. you'll get the same gre/toefl scores and about the same recommendation letters (i am not too sure about that), the only thing you can use to beat other applicants is a very good quality bachelor's thesis. but how can you write a very good thesis without having solid math knowledge? thus, i am suggesting b. i do understand how tough the admission exam is in that part of the world. it's bloody. good luck! | comparison |
is job experience more valuable than taking more courses? | if the "job experience provider" would write a good recommendation for you afterwards (surely you must also deserve it), this may matter more -- unless you would be skipping some very basic course that your future company sees as very useful for your work. for example, if the future company focuses on c++ development, a course with good marks in c++ programming may matter more than work in some company that required html/css only. i would say take the job, but think carefully about which course most deserves dropping. | comparison |
in which type of school can i have a better research experience in biology: a liberal arts college or a large research university? | i went to a small liberal arts college and worked at a large university lab that had undergraduate interns. from both experiences i agree that it is the lab and not the school that matters. however, there are conditions that you will be more or less likely to find depending on the type of school. in general, liberal arts colleges will give you more personal attention and large universities will have more resources. funding at liberal arts colleges is focused on undergraduate education and funding at larger universities is focused on research. personal attention: liberal arts: a lab is led by a principal investigator, or pi. liberal arts colleges often do not have graduate students or have few graduate programs. so, at a small liberal arts college you will work closely with your pi and get to know them well. i left my college with great letters of rec that helped me overcome a poor gpa (due to a medical problem in my first two years) and get into grad school. university: the undergrads were trained by phds and postdocs and had little contact with the faculty. postdocs and phds can often be great teachers, though, since they either are students or were students recently and might be able to anticipate the student's perspective. at a university, they will normally write your letters of recommendation and the faculty will sign them. independence: liberal arts: all of our grants were training grants, so the emphasis was on teaching, as opposed to producing results. this means you will get to do more independent work. i got first-hand experience with the equipment (eeg) and techniques (analysis of fmri and eeg data) that few undergraduates get to use. additionally, i know of at least two of my friends who published, as first authors, in major research journals. this is not uncommon at my school. university: the undergrads had to learn a program that is no longer used by most labs in the field. there was little room for mistakes (all of the lab's grants were research grants), so the undergrads were given the task of modifying previous work and really did not develop any understanding of how the program actually works. however, this lab was an older lab. i also believe students at an older lab at my undergraduate school had a similar experience, where the methodology they were taught was not consistent with current standards in the field. connections: the two are pretty equal in this area. liberal arts college professors often collaborate with people at larger institutions and can connect you with other labs. i have friends who got summer jobs at stanford and caltech this way. at the large university, one of our undergrads got to go to oxford for the summer, because of the professor's connections to a lab there. equipment and resources: liberal arts: we did not have access to some of the most expensive equipment (an mri, for example). however, one of my professors also worked at a local university that had access to an mri and we got to use it there. ideally, you should get some lab experience at a major research university, so you are exposed to techniques that require more expensive equipment. you can do this during summer internships. getting more experience at different labs will look good on your application. on that note, a liberal arts college is more likely to have grants that will help students study at other institutions. my college had several such grants for student research. additionally, all senior theses were funded by the department. the senior thesis funding and one of the summer fellowships both required students to focus on their own original ideas. in most fields the first author is the person who had the idea for the project; this is how undergraduate students were able to become first authors. university: universities will have the best equipment, but they are less likely to have funding for student research. the institution i worked at had grants for students, but they were only for work at that university. they also had no specific grants that would allow students to propose their own projects based on their own ideas. | comparison |
what makes a bachelor's thesis different from master's and phd theses? | the phd thesis should be on a much higher level than the honours/masters thesis, offering a contribution to human knowledge that is of a sufficient level of "significance" to warrant publication in a respected journal. significance is highly subjective, and you also do not necessarily have to publish to be awarded the phd (sometimes the peer-review delay means that the papers come out afterwards, or there may be some intellectual property issues that make it beneficial to refrain from publication). it is awarded based on your supervisor's consent and a review by academics in your field. so the "significance" would probably be judged by them in terms of how much original work they see as a reasonable expectation at that stage of your development (first 3 years of serious/committed research). unfortunately it also means that some people who probably do not deserve phds are awarded them anyway for fulfilling grunt work for their easy-going supervisors. it is possible that some honours/masters theses might even be more significant/higher quality than a phd thesis. unfortunately, this does not mean that the submission of the thesis will award the degree that they deserve. the university may have a policy to upgrade the student's enrolment if the supervisor senses that such progress is being made. however, it is impossible to upgrade to a phd without completing honours, and i believe nearly every single university has a policy of a minimum period of enrolment before submission is allowed. a subsequent question that you may have is how to gain a phd without enrolling in one, which is another level of achievement completely. as for the difference between honours/bachelor and masters, it would depend on your university, but both have no requirement for publication-quality research and are usually small tasks/ideas that are not worth the supervisor's time to think about alone, or involve a lot of labor. in fact, in my school, many honours theses are of a higher level than the masters, because the smart honours students will either graduate into the work force or go straight into a phd. the masters students are usually those who cannot find a job and are not suited to research. however, i believe some other universities may require a mandatory masters degree to start the phd. you may get a better idea by looking at some titles/abstracts of completed theses. the phd level will be something like a new method/observation/application, whereas the masters/honours will be an application-specific set of measurements/simulations or even simply a literature review to gauge the needs of future work. the word limits are also typically different (although note that quality is not proportional to the number of words), with phd at 100k, masters at 50k and honours at 30k at my university. | comparison |
what is the difference between research intensive vs. research extensive universities? | "research intensive" and "research extensive" are obsolete terms from the carnegie classification of research universities. they are terrible, confusing terms that should never have been introduced (see page 5 of rethinking and reframing the carnegie classification). the idea was that "research extensive" means there's an extensive research program covering many areas at a high level, while "research intensive" means there's just a narrowly focused, intensive research program in certain areas. basically, extensive is supposed to be better than intensive. of course the problem is that this really isn't what they sound like. for example, most people would say harvard is a research intensive university, but in this classification it's not. because of massive confusion, the carnegie classification was updated to use other terms, but the old terms still persist. one reason is that some people just got used to them and found it hard to develop new habits; another is that every time the classification changes, some universities end up moving to a less prestigious category than they used to be in, so they have a strong incentive to describe themselves using the old terms. | comparison |
how to decide between sending a paper to a specialized journal or to a journal with a broader audience? | i tend to think that in most cases, the specialized/broad dichotomy is not very relevant. the most important point is to send it to an interested editor; if the most relevant editor for a paper happens to be at a general journal, you will often be better off sending your paper there. i would consider two exceptions to this principle. first, top specialized journals are usually less reputed than top generalist journals, so if you get a truly impressive result, you may want to get the best out of it by sending it to a top generalist journal. second, some generalist journals will turn back papers that seem much more specialized than the average math paper (e.g. when the basic objects you study are unheard of by most mathematicians). | comparison |
what are the trade-offs of working in the office versus elsewhere (as a professor)? | i think it's important to set boundaries with students, with colleagues, and with yourself about when and where you're available for meetings. it's also important to find environments that most effectively support different types of work, and give yourself permission to use them. like you, when i'm in my office, i expect to be interrupted; so when i'm working at my desk, i can only productively work on tasks that survive interruption. put bluntly, the office is where i have meetings; if i need to think, i find a whiteboard in an empty conference room; if i need to write, i go to a coffee shop. (suresh is correct; i am in a coffee shop right now.) as daniel says, all three places allow for productive work, but of very different types. even in the computer science building, for small meetings where i don't want to be interrupted, i prefer to go to the other person's office. and because my undergraduate office hours are occasionally very popular (especially right before exams), i don't hold them in my actual office, but in a larger room down the hall with couches and whiteboards. you express two points of concern, which i'll exaggerate: my students won't like me if i'm not available on their schedule. i agree with dqdlm and nate. spread out your office hours to fit as many students' schedules as possible, be in your office (or "office") for every minute of office hours even if nobody shows up, and be willing to offer occasional off-schedule meetings. it might help to announce in your syllabus times that you're willing to schedule sporadic meetings. ("i'm also available for occasional meetings tuesday or thursday afternoons; send me email to set up an appointment!") consider moving (not adding) your office hours if student demand doesn't match your announced schedule. but then stick to your guns. yes, some students will be unhappy, but that's inevitable; don't take it personally. your availability outside regular office hours will not be the most significant bit in your student evaluations. my colleagues won't like me if they don't see me in my office. i agree with dqdlm and suresh here. yes, it's important to be visible and active citizen of your department; that's not the same thing as being constantly on call. the amount of time you spend at your desk will not be the most significant bit in your tenure evaluation. the danger is not that nobody sees you in your office, but that nobody knows what you're doing. give regular talks to your colleagues and their phd students showing off the results of your out-of-office effort. go to faculty meetings, and occasionally offer an opinion. (careful, that gun is loaded.) attend seminars, especially for faculty candidates, and ask questions. if there is a regular departmental social event ("tea" in many math departments), be there. and so on. finally, i strongly encourage you to raise these concerns with your department chair or your senior faculty mentors. (you do have a senior faculty mentor, don't you? if not, find one!) they can help you navigate your department culture far better than some guy on the intertubes. | comparison |
better chances for math grad school: graduate in three years or four? | what jumps out at me in your question is the assumption that because of your advanced standing you can only stay at berkeley for two years and thus only spend three years in college altogether. though i do not have any direct experience with this (i.e., financial aid at state universities in california), i find that quite surprising. berkeley is an elite institution, and presumably they don't let just anyone transfer in. the fact that you have two years' worth of university credit after one year in college is to your credit and probably factored into their decision to accept you. so they turn around and penalize you by only offering you two years of financial aid? that doesn't make much sense. i would at least make a phone call and, if necessary, schedule an in-person appointment with a financial aid officer. my first guess, honestly, is that you may not be understanding the situation correctly. if you are, you need to explain why the junior standing could stop you from making best use of the amazing resources that uc berkeley has to offer and could make you less competitive in your later academic plans. i would expect them to be sympathetic to that. on the other hand, i find your discussion of what it takes to get into a top mathematics program (here and in the other question you asked) a bit reductive. it is not simply a matter of taking the most graduate courses, doing multiple reus (in my opinion as someone who was involved in graduate admissions in my math department, one reu has the same effect as multiple reus unless you do some truly notable research in one of the reus, which is unusual; also, doing multiple reus makes it natural for you to get more than one recommendation letter from an reu director, and this is a mistake: most reu letters sound the same no matter who is writing them or is being written about), and so forth: the goal that you rather want to pursue is to show mastery of mathematics and show the potential and the interest in doing mathematical research. you can show this by taking 5 graduate courses rather than 10. (ten courses sounds almost ridiculously high, by the way: i took 9 trimester graduate courses -- so the equivalent of 6 semester courses -- over the last two years of my undergraduate program. i got into all the top mathematics departments. if i had taken a few courses fewer i don't think the outcome would have changed.) in fact, the list of math courses that you've already taken compares well with what very strong undergraduates take up through the end of their second year in top mathematics programs in the us. if you did really well with them, then i think you would be ready to take graduate courses (what other undergraduate courses would you take?) in your next year and thus as far as i can see you could graduate in three years and still be competitive for a top program. but do you really have to? if you are serious about studying mathematics, then you have the entire rest of your life to do that. i would recommend a more balanced undergraduate experience that is not 100% calculated to optimize the graduate program you can get into and which lasts for the traditional four years: there are other interesting courses to take as an undergraduate which you will never take again, and there are other things to do with one's undergraduate life aside from coursework. don't get shortchanged on your undergraduate experience. | comparison |
which is more important for phd applicants — quality or quantity of research papers? | i disagree with the assumption that top journals with slow turnaround times are more highly regarded than top conferences with quicker turnaround times. in most (all?) areas of computer science, the most competitive conferences are at least as highly regarded as the top journals. it is also not unusual for longer versions of conference papers to later be submitted to journals. as david patterson (uc berkeley), larry snyder (university of washington), and jeffrey ullman wrote in evaluating computer scientists and engineers for promotion and tenure: the evaluation of computer science and engineering faculty for promotion and tenure has generally followed the dictate "publish or perish," where "publish" has had its standard academic meaning of "publish in archival journals" [academic careers, 94]. relying on journal publications as the sole demonstration of scholarly achievement, especially counting such publications to determine whether they exceed a prescribed threshold, ignores significant evidence of accomplishment in computer science and engineering. for example, conference publication is preferred in the field, and computational artifacts — software, chips, etc. — are a tangible means of conveying ideas and insight. obligating faculty to be evaluated by this traditional standard handicaps their careers, and indirectly harms the field. this document describes appropriate evidence of academic achievement in computer science and engineering. your research advisor should be able to provide you advice more specific to your case. i agree with amirg that having any publications when applying to a phd program (especially based on undergraduate research) makes you exceptional. your advisor's recommendation also counts a lot, especially if he or she is well known. | comparison |
should i use recommendation letter from young professor i worked with or well-known professor i met in class? | your reu evidence that x wrote you a weak letter is not very strong. i run an reu, and we accept less than 3% of our applicants. our target participants are either legendary at small colleges (best in a decade), or excellent at top universities (top 25% this year at harvard). plenty of excellent people get turned away for essentially random reasons. maybe one of your letters was submitted late. maybe you forgot to list a course on your application that was considered essential that year. maybe i had a bad burrito for lunch and hated your essay when i read it. maybe there were just lots of really good applicants that year. and yes, maybe professor x (or professor z, the other letter-writer) said or implied something negative about you. in addition, professor y offering to write on your behalf after a one month course says more about her than it does about you. this does not guarantee that she will write a strong letter, and if she does she may write strong letters for a lot of people, which might be known by the people reading her letters. here is my advice. go to professor y and tell her that you did research with professor x and published a paper together. then, ask her whether you should ask for x's recommendation or hers. if she really feels that you are spectacular, she will insist on writing the letter herself. | comparison |
popular proprietary program or obscure open source substitute for reproducible research? | i think there are two kinds of reproducibility: the ability of someone else to run your code and obtain the same output. the ability of someone else to write their own code that does the same thing as yours based on your description and on examination of your code (reproduction from scratch). the second kind of reproducibility is much more convincing, since the main point of scientific reproducibility is to verify correctness of the result. for science that relies on code, it is usually impossible to include every detail of the code in the paper, so verification requires examination of the code. if you use proprietary software, your code probably makes use of closed source code, and therefore it cannot be verified or reproduced from scratch. if you use open source software, then all of the code that your code calls is probably open source, so it can all be verified or reproduced by someone else from scratch. at present, it is probably true that the first kind of reproducibility is more achievable with proprietary, widely-used software. i am optimistic that the current trend will lead to open-source software catching up in terms of wide use (consider sage, for example). addendum, in light of epigrad's answer below, which i mainly agree with: the problem with relying on closed-source code isn't that someone else won't know what that closed-source code is expected to do. the problem is that if you have two closed-source implementations of the same algorithm and they give different results (trust me, they usually will), then you have no way of determining which (if either) is correct. in other words, closed-source code would be fine for reproducibility if it were bug-free. but it's not. | comparison |
graduate early with average gpa, or later with better gpa? | your answer will depend partly on what you want to do in academia. if you want to teach, but don't really want to emphasize research much, you might do fine to graduate now. however, if your goal is to become a professor at a research intensive school, then you really should go to the strongest grad school you can get into. (based on your description, i strongly suspect that if you bust your butt for another year or two, in particular working to earn one or two strong letters of rec, you could get into a better grad school than you can currently.) yes, i know there are considerations about who you will work with, perhaps geography, potential two-body problems, etc. so, why's it so important to go to the best school you can? again and again i see that in academia (as everywhere) networking is crucial. generalizing and stereotyping a bit: the best schools have the best researchers, who know the other best researchers, who have the biggest grants, which fund the nicest postdocs, etc. if you want to thrive as a researcher, you will do well to get into that network. (to a large extent, it's a rich get richer system.) as an undergrad or early grad student, one way you can get into that network is to work with a professor who is a central part of it, and is willing to weave you in. and your chances of working with said professor typically go up with the reputation of the school. now a personal digression. through high school and undergrad i was in a hurry to get to the next level as soon as possible. i skipped 7th grade, finished undergrad in 3 years, and started grad school at the age of 20. i even turned down a year abroad in the budapest semester in math, because i was worried i'd miss out if i waited too long to get to grad school. the sad truth is that i wasn't ready. maybe you would be; i've never met you, so i can't say. eventually (after 8 years), i finished a phd, and am fairly happy with where my career is headed. however, i don't regret that time in grad school at all. i learned a lot of useful stuff. in fact, i think it's because i took my time in grad school that my career has gone as well as it has. one interesting feature of academia is that you're typically judged by your productivity relative to the time since you earned your phd (rather than your age). as a result, i encourage you to take your time and learn as much as you can. you'll never again have as much free time as you do now. | comparison |
which is more beneficial prior to phd: research experience or a second masters degree? | the question is framed oddly. a master's degree without research experience will probably kill your chances for phd admission, at least into the top programs. here's a better framing: is a second master's degree the best way to get more research experience? unfortunately, the answer depends on your personal situation. if you're already doing active research with a strong possibility of formal publication, with well-known faculty collaborators who will write you strong letters of recommendation, then a second ms is probably not necessary. if that doesn't describe your current research environment, a second ms might be the best way to find such an environment. | comparison |
any risks down the line choosing maths vs. stats phd programme? | provided one can find a supervisor who has the background and research interests that would allow one to keep extending one's pure mathematical toolbox, are there any major risks in going for a phd in stats? there's a small risk, but it can be managed. the first issue is that you need to make the mathematical content of your work very clear, for example by publishing in journals that could be considered both math journals and statistics journals (e.g., ims journals). however, if you're interested in math departments i assume you'd be doing that anyway. the slightly more subtle issue is how mathematicians view statisticians. there's sometimes a mild prejudice that people in mathematical statistics are overly specialized and outside of the mainstream of mathematics. for example, it's possible to get a ph.d. in mathematical statistics while having considerably less breadth of mathematical background than would be considered acceptable for a math grad student. (of course, the flip side is that you are expected to know other things instead.) there's sometimes a fear that a statistician would be unlikely to talk much with other math department members, or might be unwilling or unable to teach anything but statistics. plenty of statisticians have found jobs in math departments, so i don't want to be discouraging. however, i'd recommend focusing on mathematical breadth. for example, if you work with people in combinatorics or algebra, then it will be clear to everyone why a math department is a natural fit. if you talk only to statisticians, it will be less clear. it can still work out even then, but generally only when the department either has a thriving statistics group or has decided they really need a statistician (and either way this cuts down on the flexibility of your job search). | comparison |
when writing a peer review is it better protocol to quote parts of the original work or paraphrase? | it depends why you are quoting/paraphrasing. usually i structure my review in two parts. the first part summarizes the paper, the major contributions, and the high-level strengths and weaknesses of the submission. in this part i paraphrase. this shows the authors, and the editor, that i have read and understood the paper well enough to describe its ideas in my own words. in the second part i get into the details, e.g. you forgot citation x, formula y has an error, these sentences are confusing, etc. in this part i quote. | comparison
should you choose your committee members or should your advisor? | there are many ways to build a phd committee, which depend on the local system (country, etc.) and your field. but, here are some general principles that should apply broadly. you need to bring a mix of highly competent yet diverse evaluators, with not too much proximity to yourself or your advisor lest it be thought that you are cherry-picking a partial (friendly) jury for your work. regarding your questions: is it more important that your advisor already know your committee members or that you do? i don't think it's a very important part of the decision-making. certainly, you don't want the advisor's best friend (or yours!), as that could make people think you're scared of unbiased questioning. is it better to get someone in your discipline or someone doing more related work? here's one of the factors that play a very important part, for me, in picking committee members. first, all members need to be able to have a good understanding of your work. however, it is good that not all of them are expert in your precise field of expertise. it helps to have people from other (related) fields, because they will bring a different perspective, and give you the opportunity to highlight not only the very technical details of your work but also its significance for other fields. is the dissertation process supposed to be a chance for you to win people over to your side? no. it's good to bring people who don't necessarily agree with you on everything, but you should also avoid as committee members anyone overly critical of your approach to things, unless you know them well and they can keep it under control and agree to disagree. otherwise, you risk that person actually coming to your defense intent on winning you over. i have seen defenses being “derailed” (though all ended well) by a committee member who was overly argumentative, and it wasn't a nice experience for anybody involved. | comparison
is it better for a mathematical paper to have a citation or a nice theorem + another coauthor? | i think the important question is not which option makes your paper look better; the best question here is "what is the right thing to do?" if you are such a talented and hardworking undergraduate student that you can write a publishable paper, you will certainly have a very bright future in grad school and later as a researcher. so, don't worry about how your first paper is going to be evaluated or cited. hopefully you will write better papers and will be able to prove yourself in the future. regarding publishing research papers, the right thing to do is to make your paper as good as you possibly can at the moment, which means it is better to add the solution (and one more co-author) to your paper. | comparison
for undergraduates, is publishing "weak" research better than not publishing? | there are very few circumstances under which i think it's a bad idea for undergraduates to write research papers. these primarily have to do with the quality of the journal: so long as it is a reputable, peer-reviewed journal, i wouldn't be too concerned with the "strength" of the work. the reason is that publishing while an undergraduate—particularly as a primary author—demonstrates that you have already started to learn the basics of how to do research in your field. this means that you're less of an "unknown" quantity, and therefore less of a risk for a department reviewing your application. if you don't publish the research, then there's no tangible proof, and then you need to rely on your research supervisor to make that point in a letter of recommendation. (but then the question becomes: "if she could have written a paper, why didn't she?") as for the exceptions above, so long as you don't publish in "vanity" journals (those which will publish basically anything, so long as people pay the appropriate publication "fees"), you should be fine. | comparison |
what factors should i consider when deciding between a general mba vs. a specialized mba? | because you already have real world experience in it but are unsure it is where you want to stay, a general mba should be fine. if you want to stay in it, what most companies care about is actual experience - and it seems you have that. having an mba and experience in it is a great combination, and having experience in it is far more useful than having a specialist mba in it. another issue is that it management is usually not a challenging place to get into if you are interested. i've worked with quite a few people in it and it was the rare one who wanted to be in management. most wanted to keep their hands in the technology. for this reason, it management is different from many other fields in business. | comparison
what is the difference between coffee grinds/beans labeled as "espresso coffee" and ordinary coffee? | it is the roast that is the difference. the only real difference in the beans is that some beans taste better at a higher roast than others, so they are more appropriate for espresso. your italian grocery coffee company may be using the espresso label for marketing purposes, but in general, espresso coffee beans can be the same beans that are used for "regular" coffee, but roasted to a french or italian roast level, which is darker than city or full city. since the advent of starbucks, many roasts are much darker than they used to be. dunkin' donuts coffee, which is a full city roast, used to be the norm, but now a french roast seems to be what you can buy. i roast my own coffee and take it just into the second crack, which is, generally, a full city roast...a point where the character of the coffee predominates rather than the flavor of the roast. there is more information about roasts at sweet marias where i buy my green beans, and reading through the site will give you way more of a coffee education than you probably ever wanted. so, yes, you can use the coffee you have to make brewed coffee. it will probably be roastier than you would normally have, unless it is just a marketing ploy, in which case it will taste normal. consider how long you have had this coffee; if it has been shelved for a while "normal" probably won't be all that great, since freshly roasted coffee is, generally, way better than old coffee. but as long as the oils aren't rancid, it is more likely just going to be bland. | comparison
carrot pie: custard or stew? | i can see this as being similar to a sweet potato pie. best bet in that case would be to cook the carrots, then puree and mix with eggs, milk, etc., just as if it were a sweet potato or pumpkin pie. two alternatives that come to mind would be to treat it like a fruit pie, as you say. because of the texture of the carrots, i would grate them with the large holes on a box grater, toss with sugar, raisins, some tapioca and whatever spices your heart leads you toward. an alternative would be to make a molded, gelatin-based pie, similar to a refrigerator lime pie. lots of gelatin and grated carrots. i, personally, wouldn't like it, but i am not fond of jello with carrots in it either. happy experimenting. | comparison
should i preheat a pan, or start the cooking timer immediately? | if you cook (or bake) using a timer, you should always use preheated equipment. your stove top may differ from my stove top in terms of the length of time it takes to get to high heat (hell, my stove top wouldn't even hit high heat in three minutes), and this would greatly affect the final quality. always preheat; the instructions expect it. otherwise, they would tell you to put a pan on the stove top, place the food in the pan, and turn the heat to high. leave for three minutes and remove. | comparison
differences between cooking a whole duck vs chicken or turkey? | with chicken and turkey, the most important "trick" to cooking it is to make sure the dark meat gets done before the white meat dries out, and to make sure the skin crisps up somewhat. duck is all dark meat, and has a thick layer of fat that must be rendered out. there is not a lot of danger in drying out the breast meat like with a chicken. like martha said, it's best to make a few shallow cuts in the skin over the breast (don't go all the way through to the meat) to help the fat render out. a simple (western) roasted bird would be cooked at 350°f for about 1 hour 45 min, with the oven turned up to 500°f for another 15 minutes to crisp up the skin. there is a lot of fat rendered out, so it's best to roast in a sturdy roasting pan, on a rack (so it doesn't sit in the fat), and drain the fat about an hour into cooking (save the fat, though; it's delicious). personally, i think duck is easier to cook, but it's definitely different from roasting a chicken. | comparison |
what would be the difference between frying vs baking meatballs? | no recipe alteration needed. fried meatballs will develop more of a crust, and thus more flavour through the maillard reaction. in any case, unless you're making tiny meatballs (or finishing their cooking in a sauce of some sort), they're going to need to be finished via baking anyway. i would only do the baking method if i had to make an enormous quantity at once. for home cooking i always fry then bake. | comparison |
is it generally better to cut chicken breast with or against the grain? | alton brown talked about this on an episode of good eats, and here's what i remember: if you cut with the grain, you'll end up with long strands of protein, like this: ------------------------ ------------------------ ------------------------ from what i understand, this means that it'll be pretty chewy. your mouth has to work harder to break the strands up into smaller pieces to be more easily swallowed. if you cut against the grain, you get strands like this: |||||||||||||||||||||||| |||||||||||||||||||||||| |||||||||||||||||||||||| you can see that the strands are much shorter, which means the meat will fall apart more easily (each strand doesn't have as much surface area to connect to adjacent strands) and will therefore be more tender. | comparison |
what is the functional difference between imitation vanilla and true vanilla extract? | yes, you can detect the difference. how much of a difference will depend on the quality of both the imitation and of the real thing. that said, it's difficult if not impossible for me to pick out the differences in baked goods. so i keep both around, and use the (much cheaper) imitation stuff for baking, and the real stuff for sauces, icing, custards, milkshakes, etc. incidentally... in a pinch, bourbon makes a half-decent substitute for vanilla. | comparison |
stock vs broth - what's the difference in usage? | classification and use of stocks vs. broth: broths are the result of cooking meat, not just bones. they're generally the result of preparing another item and usually not prepared specifically on their own. the juices poured off from a roasted turkey (after being degreased) would be considered broth. whole chickens being poached for another preparation would create broth. stocks are made from just the bones. they are prepared specifically for use in other recipes (sauces, soups, stews, rice, etc.) stocks are never salted in their preparation or the finished dish will most likely end up too salty due to reduction that will take place upon further cooking. note that homemade stock will often be a bit more broth-like than restaurant/commercial stocks, since it's really hard to get all the meat off the bones. stocks are usually simmered for a very long time (4-6 hours for chicken & 8-12 for veal/beef) to extract maximum flavor and gelatin from the bones. broths aren't usually cooked nearly as long due to the fact that cooking the meat for extended periods (even chicken surrounded by the liquid) will result in tough, flavorless meat. consomme: a fortified and clarified stock. the stock is fortified in flavor by the addition of a "raft" which is a combination of lean ground meat (appropriate to the type of stock being used) with brunoise (1/16 inch) mirepoix (carrots, onions, celery), and egg whites. the raft mixture is stirred into the cold stock and as it gently heats, the proteins coagulate forming a "raft" on top of the stock. a small hole is poked in the center (if one hasn't already formed) and as the stock bubbles through the hole it leaches back through the ground meat/egg white raft which filters out impurities to clarify the stock and fortify it with flavor. bouillon: french word for broth. court bouillon: sometimes called a "short broth". a poaching liquid, usually used for fish, that typically consists of water, acid (lemon juice, vinegar, wine), parsley stems, bay leaves, peppercorns, and some salt. when to use stock vs. broth: use stock when a sauce is to be reduced significantly or when clarity of the final result is preferred. broths can be substituted for stock when the body of the liquid or clarity isn't important, and when the liquid will be thickened by addition of a starch. | comparison
when baking, is it better to use a gas or electric oven? | for baking cakes and breads it is important to control the humidity in the oven. in early stages of baking one typically needs the humidity to remain in the baking chamber, which is hard to do with a gas oven. gas and electric ovens can be built to bake the same way if cost is not an issue. most home gas ovens will circulate the combustion products (mainly water vapor and carbon dioxide) in the cooking chamber. as the flames burn, combustion products need to be vented out of the baking chamber. electric ovens also need vents in the baking chamber to help maintain the pressure as the air inside expands. steam is essential in the initial stages of baking for good crust formation in breads and crack-free cake surfaces. the oven cavity can hold much more steam than is released from the gas combustion, and it is my inference that the steam content of an electric oven will be higher (i cannot find published steam measurements inside ovens). after the dough expansion, the vapor coming off of the dough or batter needs to be removed quickly for browning and for the inside to cook well. the constant flow in a gas oven makes it better at that. in an electric oven a peep or two during the last baking stages will handle excess moisture. two of the bakeries near my house use electric ovens with brick lined baking chambers; the other, which makes excellent french baguettes, uses a gas oven. the baker there has had both electric and gas ovens and he prefers the caramelization of the gas oven. but note that he can handle the moisture problem with the steam injector of his professional gas oven. he also noted that using gas ovens requires skill as they have temperature and moisture quirks. recipes may be adapted to either gas or electric ovens. in the us the majority of recipes are designed for the electric oven (they're more popular). | comparison
which is better to sauté with, stainless steel or teflon? | the advantage of using stainless steel is the fond (tasty brown bits) that form in the pan. it both flavors whatever you are sauteing and is often used as the base for a pan sauce. | comparison |
what are the benefits of using a dedicated rice cooker, rather than just cooking rice in a pot? | yes there are benefits! this is one of my most used pieces of kitchen equipment. here is a list of benefits for a quality rice cooker: never burns rice; no guess measurements for all kinds of rice; scheduled cooking; keep-warm settings; uniform cooking. when i cook rice on my stove, even at the lowest of heats, i get a thin layer of overcooked rice stuck to the bottom of the pot. for the record i have the zojirushi 5-1/2 cup induction rice cooker. best thing ever. | comparison
what is the difference between "mince" and "dice"? | mincing produces smaller, more irregularly-shaped items than dicing. dicing is generally uniform, usually 1/8 to 1/4 inch on all sides, kind of like a tiny cube. the best way to explain the size difference is visually, check out this link for a great picture near the top. | comparison |
what are some of the benefits of electric stoves versus gas stoves? | can i convince you that electric is better? no, i can't, because i don't think it is. the issue i have is related to how long it takes to warm up (and cool down). electric cook tops just don't respond quickly. little too hot? too bad, nothing you can do about it (in time to save a dish that's starting to burn anyway). not hot enough? check back in 2 or 3 minutes. i find this particularly irritating when a recipe requires varying heats while cooking. sorry i don't have better news for you. | comparison |
what is the difference between doughnut and krapfen? | there doesn't appear to be any difference. wikipedia says "in english-speaking countries, berliners are usually called doughnuts and are usually filled with jam, jelly, custard or whipped cream", and this page says "the english translation of krapfen is cruller or doughnut". there are so many variations of filling, topping, shape and so forth that it is hard to establish a single identity anyhow. | comparison |
what is the difference between devil's food and chocolate cake? | practicallyedible has a nice description of devil's food cake. originally, devil's food cake had a medium dense texture. the colour had a reddish tint that was probably caused by baking soda reacting with cocoa powder. in fact, i have an old cookbook (the day by day cook book, 1939) that contains a recipe for red devil's food cake. this recipe calls for 2 oz. unsweetened chocolate and 1 tsp. of baking soda. | comparison |
when curing sausage, which is more important: temperature or humidity? | you need to get your hands on a computer fan (they are designed to run 24 hrs a day). i simply mounted one of these on the inside wall of my curing chamber (down low, as moist air sinks), and cut a hole in the wall of the fridge with a hole saw, which allows the fan to exhaust the moist air from within the curing chamber. i also cut a similar sized hole at the top of the curing chamber on the opposite side, which allows dry air to enter the chamber as the wet air is exhausted. i have it rigged up to a cheap humidity controller i purchased off ebay, so when the controller detects high humidity (whatever you set it at), it exhausts the humid air. | comparison
which torch to buy for finishing sous vide meat? butane or propane? | i use propane all the time. there are several factors as to why: it is cheap, about 1/4 the price of butane. it’s more readily available. you can buy a propane torch at many different stores for very cheap. the torches typically put out a lot more heat. i’ve used both propane and butane, mostly for crème brulee, but other food as well. the butane torches put out such a focused small area of heat that i would get uneven burning. whereas with propane, they often put out a much larger area of heat, making it easier to caramelize across the surface evenly. as for the concern of it imparting propane flavor, i have had that happen, once, but i’ve also had that happen with butane. it’s all about flow control. if you have the dial turned up too high and it’s spewing out massive amounts of propane, plus hold the flame too close to the food, you might then get a hint of propane. but if you have the torch dialed in to the proper settings, you really can’t beat the ease and convenience of “energy-efficient clean-burning propane gas” | comparison
are european white truffles significantly superior in flavour to those from north america? | i think the truffles that are exported are of better quality on average than what you'll find on the european market. i've never tried the american ones consciously (they're not imported to europe as far as i know). you could also take into account that the fresher the truffles are the better the quality, so in theory it would be better to eat american truffles in the usa and european truffles in europe. given the mind boggling price differences and the supposed high quality of the stuff found in the forests in oregon, you're likely going to be well off picking the local stuff. | comparison |
what's harder: shuffling a sorted deck or sorting a shuffled one? | by landauer's principle, if you want to take a uniform random permutation of $n$ keys to a sorted one, and not keep any bits in the computer which reveal what the uniform random permutation was, you need to erase $\log_2 n! \approx n \log_2 n$ bits. at landauer's limit of $kT \ln 2$ per erased bit, this will take $(n \ln n)\,kT$ energy. on the other hand, the computation taking the sorted array and $n \log_2 n$ random bits to the random array is reversible, and thus the energy expended can be made arbitrarily small. note that these are just theoretical lower bounds. the energy currently consumed by these processes on an actual digital computer bears no relation to the above analysis. | comparison
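a quick numerical sketch of that bound (my own addition, not part of the answer above; it assumes room temperature t = 300 k and computes $\log_2 n!$ via lgamma):

```python
# back-of-the-envelope landauer bound for sorting-by-erasure.
# assumption (mine): room temperature t = 300 k.
import math

K_B = 1.380649e-23  # boltzmann constant, j/k
T = 300.0           # assumed temperature, kelvin

def landauer_sort_energy(n: int) -> float:
    """minimum energy (joules) to erase the log2(n!) bits identifying the permutation."""
    bits = math.lgamma(n + 1) / math.log(2)  # log2(n!), since lgamma(n+1) = ln(n!)
    return bits * K_B * T * math.log(2)      # kT ln 2 joules per erased bit

print(landauer_sort_energy(10**6))  # ~5e-14 j: a purely theoretical floor
```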
realtime hardware/software versus pc software/hardware, how are these distinct and alike? | in general, the difference between "normal" and "real-time" is some sort of guarantee on the time it takes to complete a job. in a normal system, usually you have no guarantees at all. programs can get interrupted by other programs, the os scheduler might not be completely fair, the processor does complicated things that alter the runtime between executions of the same code... for most applications this does not matter. real-time systems are built such that one can guarantee that job x is always completed within at most y seconds. this is important for example for the chip that decides when to deploy air bags--if that takes longer than expected, you're dead. to be able to give these guarantees you have to use special operating systems (or no operating system at all) that guarantee appropriate scheduling. it is also necessary to know a lot of details about the hardware you're using. does your chip have a cache? what kind of replacement policy does it use? how many cycles does an addition take? is that always the same number? what's the branch prediction algorithm? etc. many people confuse real-time systems with very powerful processors that can churn through data "in real time". but that's not (always) the case. the important thing is not how long it takes in absolute time; the important part is that it never takes longer than expected. there are trade-offs both in hard- and software between throughput (expected computation speed) and predictability (worst-case speed). | comparison
what's better for an algorithm complexity, o(log n) or amortized o(log n)? | $o(\log n)$ in the worst case implies $o(\lg n)$ amortized. basically, given two data structures supporting the same operator, one in $o(\lg n)$ time in the worst case and the other one in $o(\lg n)$ amortized time, the first one is considered to be superior asymptotically: being $o(\lg n)$ time in the worst case means that each call of the operator will be supported in this time, while having an amortized complexity of $o(\lg n)$ means that some (very few) operator calls can take $o(n)$ time. usually, the concept of amortized analysis is used for data structures whose amortized complexity is better than their worst case complexity. as an illustration, consider a data structure for integers, storing each such integer $x$ as a string of bits (e.g. $x=8$ represented by $(1,0,0,0)$), and the operator $x.inc()$ which increments $x$ by $1$. in the worst case (e.g. on $x=7$ represented by $(1,1,1)$), the operator $x.inc()$ corresponds to $\log(x)+1$ binary writes (e.g. to write $(1,0,0,0)$ corresponding to $8$). in the best case (e.g. on $x=8$), the operator $x.inc()$ corresponds to exactly one binary write (e.g. to change the last bit of $(1,0,0,0)$ to $1$, giving $(1,0,0,1)$). in a sequence of increments of the same integer object (e.g. enumerating all integers from $0$), the "best case" described above happens half of the time, and the worst case only at powers of two (i.e. after $1,2,4,8,16,...$ increments). a case requiring $i$ bit writes happens $1/2^i$ of the time. summing all those costs gives an amortized cost of $\sum_i i/2^i \in o(1)$. hence the operator $inc()$ has a worst case complexity of $o(\lg n)$ but an amortized complexity of $o(1)$. the notion of amortized analysis is well explained on wikipedia. you might want to see the page on the potential method for more details. hope it helps! | comparison
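the binary-counter illustration above is easy to check empirically. a minimal sketch (mine, using a little-endian bit list to stand in for the integer):

```python
# count bit writes per increment of a binary counter: the worst case is
# o(log n) writes, but the total over n increments stays below 2n,
# i.e. o(1) amortized -- matching the sum_i i/2^i analysis above.
def increment(bits: list) -> int:
    """increment a little-endian bit list in place; return bits written."""
    writes, i = 0, 0
    while i < len(bits) and bits[i] == 1:  # clear the trailing 1s
        bits[i] = 0
        writes += 1
        i += 1
    if i == len(bits):
        bits.append(1)                     # counter grows by one bit
    else:
        bits[i] = 1
    return writes + 1

bits, total = [], 0
n = 1 << 16
for _ in range(n):
    total += increment(bits)
print(total / n)  # ~2.0: amortized o(1) despite the o(log n) worst case
```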
why is it seemingly easier to resume torrent downloads than browser downloads? | the bittorrent protocol was designed to transfer large files out-of-order. it divides files in chunks (pieces in bittorrent terminology), and maintains a map of which participant holds which chunks. one of the elementary commands is for one participant to request a chunk from another participant. if a client crashes or disconnects, it can check which chunks it has already downloaded (the base data includes a cryptographic checksum for each chunk) and request only chunks that it does not already have. i think bittorrent includes a command to request part of a chunk, too, but if worst comes to worst only chunks that have not been fully downloaded need to be re-requested. the http protocol was designed to transfer mainly small files and to be simple to implement. its most basic command is to download one file with a minimum of fuss. a simple server may only understand one command, to download a file in full. hence, if the download is interrupted, there is no choice but to download the whole file again. there is a way for a client to request only part of a file (with the range: header). not all servers implement it (because it is not a fundamental feature of http). web browsers typically don't bother with it (because they are primarily designed to download small files: web pages), but all download managers support it (because they are designed to load large files) and will use it if the server accepts. | comparison |
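to make the http side concrete, here is a minimal sketch of resuming a download with the range: header, assuming the server honors it; the url and filename are hypothetical placeholders:

```python
# resume a partial download using an http range request.
import os
import requests

url = "https://example.com/big-file.iso"   # hypothetical url
path = "big-file.iso"

have = os.path.getsize(path) if os.path.exists(path) else 0
resp = requests.get(url, headers={"Range": f"bytes={have}-"}, stream=True)

# 206 = server honored the range; 200 = it ignored it and sent the whole file
mode = "ab" if resp.status_code == 206 else "wb"
with open(path, mode) as f:
    for chunk in resp.iter_content(chunk_size=1 << 16):
        f.write(chunk)
```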
when are binary trees better than hashtables in real world applications? | hash tables can only tell you if an element is present or not. here are some things you can do with a binary tree that you can't do with a hash table: sorted traversal of the tree; finding the next closest element; finding all elements less than or greater than a certain value. see this wikipedia article on k-d trees for an example of a real world data structure that makes use of the special properties of binary trees. [ref] | comparison
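a small sketch of those operations, using python's bisect module over a sorted list as a stand-in for a balanced binary search tree (the point is the ordering invariant, not the tree shape):

```python
# things an ordered structure supports that a plain hash table doesn't.
import bisect

keys = sorted([17, 3, 42, 8, 25])        # sorted traversal is just the list
print(keys)                               # [3, 8, 17, 25, 42]

def next_closest(xs, q):
    """nearest stored key to q -- awkward to do with a hash table."""
    i = bisect.bisect_left(xs, q)
    candidates = xs[max(0, i - 1): i + 1]
    return min(candidates, key=lambda k: abs(k - q))

print(next_closest(keys, 20))             # 17

def less_than(xs, q):
    """all elements < q in one slice, instead of scanning every bucket."""
    return xs[:bisect.bisect_left(xs, q)]

print(less_than(keys, 20))                # [3, 8, 17]
```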
what is the difference between a scripting language and a normal programming language? | i think the difference has a lot more to do with the intended use of the language. for example, python is interpreted, and doesn't require compiling and linking, as is prolog. i would classify both of these as programming languages. programming languages are meant for writing software. they are designed to manage large projects. they can probably call programs, read files, etc., but might not be quite as good at that as a scripting language. scripting languages aren't meant for large-scale software development. their syntax, features, library, etc. are focused more around accomplishing small tasks quickly. this means they are sometimes more "hackish" than programming languages, and might not have all of the same nice features. they're designed to let commonly performed tasks, like iterating through a bunch of files or performing sysadmin tasks, be automated. for example, bash doesn't do arithmetic nicely, which would probably make writing large-scale software in it a nightmare. as a kind of benchmark: i would never write a music player in perl, even though i probably could. likewise, i would never try to use c++ to rename all the files in a given folder. this line is becoming blurrier and blurrier. javascript, by definition a "scripting" language, is increasingly used to develop "web apps" which are more in the realm of software. likewise, python initially fit many of the traits of a scripting language but is seeing more and more software developed using python as the primary platform. | comparison
what's the difference between adaptive control and hierarchical reinforcement learning? | the difference between the two tasks really comes down to the level of continuity assumed in the models of the problem. in adaptive control, continuity is assumed at all levels; the problem space and the actions to be executed are all continuous. in hierarchical reinforcement learning, although the problem space is continuous, the actions to execute upon the space are discrete. a good analogy for this is a robot arm trying to move to a certain goal. adaptive control would work with the jacobian to find a continuous solution to this problem. reinforcement learning would select actions (up, down, left, right) to learn the problem space and find a solution. hierarchical reinforcement learning and adaptive control are really complements of each other. when the problem gets too complex to compute with adaptive control, hierarchical learning steps in to make it discrete and more scalable. for another perspective, especially since my explanation breaks down when you consider continuous hrl (which does exist), consider this explanation from my personal correspondence with daniel rasumussan: practically, there are many differences between how they [adaptive control and hrl in a continuous space] are implemented; they use different algorithms, different learning rules, different representations, etc. for example, adaptive control is not typically hierarchical. you could arguably call it a hierarchy with a depth of 1 (with e.g. the "high level" being control in the x,y space and "low level" being control in the motor space). but there's no temporal abstraction in that hierarchy; the high level isn't trying to encapsulate low level decisions, the problem is how to find a nice reliable one-to-one mapping between those two spaces. of course you could make adaptive control hierarchical, as you can make pretty much anything hierarchical, but that isn't typically what they're interested in. perhaps a more critical difference is that in motor control typically it is assumed that the transition and reward functions are known to the agent. there may be some unknown variability in the case of adaptive control, but by and large the agent has a pretty good idea what the effects of its actions will be, and what the goal is that it is trying to achieve. it's one of the key principles of reinforcement learning that that information is not known, and must be figured out by the agent by exploring the environment. now again here, you could come up with a motor control problem where the agent doesn't assume that information is known, but that just isn't the kind of problem those techniques are designed for, in general. so, to sum up, there certainly is a gradient between hrl and adaptive control, and by tweaking various assumptions you can bring them closer together or farther apart. but just implementing continuous actions in hrl would still leave you quite a ways from adaptive control. as for an example combining the two, one might imagine using hierarchical reinforcement learning to play a video game like pong and then changing the physics of the game (pong on slippery ice!) so that adaptive control might step in. | comparison
which is a better way of obtaining scales, gaussian blur or down sampling? | down sampling may discard relevant features, while blurring should not. as a toy example, a down sample may remove a pixel which is a local maximum, while a blur operation will preserve the maximum by increasing the values of nearby pixels. if the local maximum corresponds to an interesting feature, it may still be discernible by the human eye after blurring. from a computational standpoint, laplacian pyramids are able to reconstruct an image precisely because a blur-subtract operation preserves the "information" in the scene. | comparison
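a toy reproduction of the local-maximum example (my construction; assumes scipy is available for the blur):

```python
# downsampling can drop an isolated bright pixel entirely, while a blur
# spreads it into neighbours so a trace survives at the coarser scale.
import numpy as np
from scipy.ndimage import gaussian_filter

img = np.zeros((8, 8))
img[3, 3] = 1.0                     # a one-pixel local maximum

down = img[::2, ::2]                # naive 2x decimation keeps even rows/cols
print(down.max())                   # 0.0 -- the feature is gone

blurred = gaussian_filter(img, sigma=1.0)
print(blurred[::2, ::2].max() > 0)  # True -- energy leaked into kept pixels
```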
why is dpll better than brute force? | local (stochastic) search is all about clever navigation of the search space. dpll's advantage is pruning the search space of large swaths of assignments that provably cannot satisfy the formula. dpll does this by incrementally building partial assignments (some variables assigned values, some left unassigned), applying the unit propagation and pure literal rules and then checking if the resulting formula is trivially unsatisfiable. if the simplified formula implied by the partial assignment contains an empty clause, dpll need not try assigning values to the remaining unassigned variables, since the empty clause represents a clause that can never be satisfied under the partial assignment. the time saved is exponential in the number of unassigned variables, and those skipped assignments are where dpll improves on brute force sequential search. | comparison
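a compact sketch of that pruning (my own simplification: unit propagation plus naive branching, without the pure literal rule):

```python
# dpll on cnf formulas: clauses are sets of non-zero ints, negative = negated.
def dpll(clauses, assignment=()):
    clauses = [set(c) for c in clauses]
    while True:                        # unit propagation: forced choices
        unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
        if unit is None:
            break
        clauses, ok = assign(clauses, unit)
        if not ok:
            return None                # empty clause: prune this whole subtree
        assignment += (unit,)
    if not clauses:
        return assignment              # every clause satisfied
    lit = next(iter(clauses[0]))       # branch on some remaining literal
    for choice in (lit, -lit):
        new, ok = assign(clauses, choice)
        if ok:
            result = dpll(new, assignment + (choice,))
            if result is not None:
                return result
    return None

def assign(clauses, lit):
    """simplify under lit=true; ok=False iff an empty clause appears."""
    out = []
    for c in clauses:
        if lit in c:
            continue                   # clause satisfied, drop it
        c = c - {-lit}
        if not c:
            return [], False           # contradiction under this assignment
        out.append(c)
    return out, True

print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))  # a satisfying partial assignment
```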
which bound is better, a logarithmic or a polynomial with arbitrarily small degree? | since $\log n = o(n^{\epsilon})$ for every $\epsilon > 0$, if you can prove an approximation ratio of $o(\log n)$, then approximation ratios of $o(n^{\epsilon})$ (for any $\epsilon > 0$) immediately follow. you should always prove the best approximation ratio that you can, unless: (1) the best approximation ratio holds only in expectation, and some other approximation ratio holds with high probability; or (2) you have several incomparable approximation ratios (this is more common in expressions involving more than one parameter). that is, your worse guarantee needs to have some advantage over your better guarantee for you to report both. regarding practical performance, you are highlighting the fact that asymptotic performance can be misleading when it comes to evaluating algorithms in practice. the most well-known instance is probably "fast" matrix multiplication, which is usually slower than the trivial algorithm. here you have two options: prove non-asymptotic guarantees on the approximation ratio, say $100\log n$ and $(2/\epsilon)n^{\epsilon}$. this allows you to obtain concrete guarantees for every $n$. do experiments. the experiments reveal the actual approximation ratio on the average. if you can fit your results to a nice function, you can say that empirically, the average approximation ratio is (say) $10\log n$, though in the worst case all you know is (say) $100\log n$. experiments, however, are not so welcome in theoretical papers, unfortunately. | comparison
is a lba with stack more powerful than a lba without? | theorem: the following are equivalent. (1) $l$ is accepted by a deterministic lba with stack; (2) $l$ is accepted by a nondeterministic lba with stack; (3) $l$ is in $\operatorname{dtime}(c^n)$ for some constant $c$. so the computational power of an lba with stack for decision problems is well understood. the exponential runtime limits the usefulness of this knowledge in practice. but the notion of an lba with stack can be generalized to an $s(n)$ auxiliary pushdown automaton ($s(n)$-auxpda). it consists of a read-only input tape, surrounded by endmarkers, a finite state control, a read-write storage tape of length $s(n)$, where $n$ is the length of the input string, and a stack. in hopcroft/ullman (1979), "introduction to automata theory, languages, and computation" (1st ed.), we find: theorem 14.1: the following are equivalent for $s(n)\geq\log n$. (1) $l$ is accepted by a deterministic $s(n)$-auxpda; (2) $l$ is accepted by a nondeterministic $s(n)$-auxpda; (3) $l$ is in $\operatorname{dtime}(c^{s(n)})$ for some constant $c$. with the surprising: corollary: $l$ is in $\mathsf p$ if and only if $l$ is accepted by a $\log n$-auxpda. the proof consists of three parts: (1) if $l$ is accepted by a nondeterministic $s(n)$-auxpda with $s(n)\geq \log n$, then $l$ is in $\operatorname{dtime}(c^{s(n)})$ for some constant $c$. (2) if $l$ is in $\operatorname{dtime}(t(n))$, then $l$ is accepted in time $t^4(n)$ by a deterministic one-tape tm with a very simple forward-backward head scan pattern (independent of the input). (3) if $l$ is accepted in time $t(n)$ by a deterministic one-tape tm with a very simple forward-backward head scan pattern (independent of the input), then $l$ is accepted by a deterministic $\log t(n)$-auxpda. part (1) is basically a rigorous proof that the "halting problem is decidable", where the number of operations is counted thoroughly. part (2) is the creative idea that prepares the stage for part (3). part (3) uses the auxiliary storage for tracking the time step, which allows reconstruction of the head position due to the very simple forward-backward head scan pattern, and the stack for recursive backtracking. (so this proof also contains the two observations which i wanted to make more rigorous. this answer is already long enough anyway, so i won't go into more detail here.) this leads to the following answer to the initial question "is a lba with stack more powerful than one without?": the question is equivalent to a well-known open problem, and the expectation is that it is indeed more powerful. | comparison
is a 2 address machine more likely to follow a risc or cisc design? | the problem is that the terms risc and cisc are marketing terms, not science or engineering terms. the terms are supposedly acronyms for "reduced instruction set computing" and "complex instruction set computing." your assumption that 3 addresses is much more complicated than 0 addresses is logical, but there is no logic here, and typically 3-address instructions are associated with risc and 2-address instructions are associated with cisc (and 1-address and 0-address instructions aren't very common any more, so aren't associated with either risc or cisc). the term risc is generally associated with instruction sets that have the following characteristics: fixed width instructions, usually 32 bits or 16 bits. this makes it easier for the instruction decoder to find the boundaries between instructions. in cisc machines, by contrast, the different instructions can range in length from 8 bits to as much as 64 bits. this makes the job of the instruction decoder somewhat harder in cisc machines, but can result in programs consuming less memory. fewer operand addressing modes. in a risc machine typically each operator (add, sub, jmp, load) has only one available addressing mode for its operands (the a, b, c, d, and e in your picture.) typically for arithmetic type instructions (add, sub, xor, ...) the only available addressing mode is register direct. the source operands are found in registers, and the result of the computation can only be placed in a register. load and store type instructions typically have one operand that is register direct and the other operand is register indirect plus offset. jump and branch type instructions will typically have a target operand that is pc relative. there will typically also be a few jump and branch type instructions with a register indirect target, and sometimes a jump instruction with an absolute target operand. the smaller number of operand addressing modes is typically the only way in which "risc" instruction sets are actually reduced (compared to "cisc" instruction sets). the reasoning, again, has to do with trying to keep the instruction decoder as simple as possible in risc machines. the simple operand addressing modes are easier to implement in a simple pipeline, and so the decoder in modern cisc machines often has to do extra work to crack instructions with complex operand modes into sequences of micro-operations that are more like risc instructions. there is a tendency for risc architectures to have more register names available. many risc architectures have 32 registers, while many cisc architectures have only 8 or 16. this again has to do with making it somewhat simpler to exploit instruction-level parallelism with a simple pipeline. having more available register names makes it possible for the compiler to use different register names for unrelated computations, without requiring the hardware to do register renaming. risc architectures tend to have "3-address" instructions, while cisc architectures tend to have mostly "2-address" instructions. the notion of "3-address" vs. "2-address" is also somewhat fuzzy (and somewhat mis-named). every add instruction has 3 operands, 2 source operands and 1 destination operand. the real distinction is whether those operands are explicit or implicit. in a so-called "3-address" instruction you make all 3 operands explicit. in a so-called "2-address" instruction you make the destination operand explicit and one of the source operands explicit. the other source operand is implicit: it always uses the same address as the destination operand. in a so-called "1-address" instruction only one of the source operands is explicit. the other source operand is implicitly either an accumulator register or the top of the stack, as is the destination operand. finally, in a "0-address" instruction all the operands are implicit (usually the two source operands are the top two values on the stack and the destination goes back on the top of the stack.) to sum up: these are all marketing terms, and don't really mean much. the important concepts have to do with the relationship between different instruction-set design choices and how those choices make it easier or harder to implement hardware pipelines. | comparison
why does a color video compress better than a black and white video? | the human eye is very sensitive to changes in luminance and an order of magnitude less sensitive to changes in chrominance. under the hood, mpeg is based on the jpeg transform, so you have 8x8 blocks of dct coefficients; approximating each block blurs it slightly. the colour space is changed to yuv or ycbcr, encoding two channels of colour and one of luminance. the luminance channel (the grayscale image, if you discard colour) is not compressed as much as the two newly created colour channels; those colour channels are fitted more loosely than luminance. the heaviest tricks are applied in the colour channels, while luminance is preserved with lower compression (and fewer processing stages). a black-and-white frame cannot be treated with such high compression, as that degrades quality faster. mpeg was not created to deal with black-and-white data: the origins of the codec date back to when colour tv was created by appending colour information on top of the existing grayscale frame. (i mean the origins, not the identical scheme - bandwidth was smaller at the time; i am referring to the colour space and to compressing the chrominance harder.) a popular mpeg sampling scheme is 4:2:2, which encodes four parts of luminance against two times two colour parts. colour is encoded as differences, because there are two channels for three colours. so, roughly, 50% of the data is well-preserved luminance and 50% is colour information at a higher compression ratio. "compressed better" here means a greater ratio of original to processed channel size; luminance takes up (simplifying, since luminance is part of those colours) roughly the same space as the colour info. different codecs use different strategies; i was referring to mpeg, not some newer mixed schemes. | comparison
chomsky nf or greibach nf benefits for lr parser? | the point of generating a parser (usually) is to parse: "analyse (a string or text) into logical syntactic components" (some online dictionary). that's different from simply recognizing that a text is a member of a language. a given language has a variety of grammars which will recognize its sentences, but typically there is a specific grammar which defines the "logical syntactic components" and their relationship to each other, as in the above definition. in real-world applications -- compilers, for example -- one expects an output which is more interesting than simply "the input is/is not a valid program". in some cases, it is possible to recover the original parse tree from the parse tree generated by parsing a transformed grammar. (this is true for cnf, with the usual transformation strategy.) in other cases (such as gnf, as far as i know), it is annoyingly difficult, making such a transformation of limited use for parsing (as opposed to recognition). also, not all grammar transformations preserve lr(k)-ness. in particular, the usual cnf transformation involves refactoring of every production into a series of rules with precisely two non-terminals. this often has the effect of converting an lr(1) grammar into a grammar which is not lr(k) for any k. so that's not much help, either. | comparison |
do recursive algorithms generally perform better than their for-loop counterpart? | i don't think recursion or for-loops are related to the abstract idea of an algorithm; rather, both are specific strategies to implement an algorithm on a computing system. so your question is basically about which implementation strategy is better for an algorithm - recursive or loop based. the answer (assuming you want to implement the algorithm on a general-purpose, off-the-shelf cpu) is that the for-loop performs better, as the recursive version incurs the overhead of the call stack, which grows with each recursive call. | comparison
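a concrete illustration of that call-stack overhead (assuming python, which has no tail-call optimization):

```python
# same algorithm twice: the loop runs fine, while the recursion pays one
# stack frame per call and hits the interpreter's recursion limit first.
import sys

def sum_rec(n: int) -> int:
    return 0 if n == 0 else n + sum_rec(n - 1)

def sum_loop(n: int) -> int:
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

print(sum_loop(10**6))      # 500000500000
try:
    print(sum_rec(10**6))   # raises long before finishing
except RecursionError:
    print("recursion limit hit, default depth", sys.getrecursionlimit())
```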
item lookaheads versus dot lookaheads for $lr(k)$ with $k \gt 1$? | i think you are mistaken: they are needed, but the dot look-ahead there is so obvious that you have not noticed it being used. first, let's remark that there are three kinds of items: those in which the dot is just before a non-terminal. they never participate in an ambiguous situation: when a non-terminal has been produced, it is shifted. those in which the dot is at the end. they have the item look-ahead and the dot look-ahead which are equal (what may follow the dot is what may follow the produced non-terminal when the production is reduced, as the dot is at the end of the item). those in which the dot is just before a terminal. they have the item look-ahead and the dot look-ahead which are different. the item look-ahead is what may follow the non-terminal when the production is reduced; the dot look-ahead starts with the terminal which follows the dot and continues with what can be generated after that terminal. now, with a look-ahead of 1 or less, the dot look-ahead is trivial: either it is the item look-ahead or the terminal which is just after the dot, and that's what you are using to solve a conflict (or decide that there is no way with the limited look-ahead you have). with a look-ahead of 2 or more, you have to compute the dot look-ahead or you may not know if you have to shift or to reduce, as in the example provided by grune and jacobs: $$\begin{array}{l} s \rightarrow aa \; | \; bb \;| \;cec \;|\; ded \\ a \rightarrow qe \\ b \rightarrow qe \\ c \rightarrow q \\ d \rightarrow q \\ e \rightarrow e \\ \end{array}$$ which has the state: $$\begin{array}{lcc} &\textrm{item look-ahead}&\textrm{dot look-ahead}\\ a \rightarrow q \cdot e & a\# & ea\\ b \rightarrow q \cdot e & b \# & eb\\ c \rightarrow q \cdot & ec & ec\\ d \rightarrow q \cdot & ed & ed\\ e \rightarrow \cdot e & a \# & ea\\ e \rightarrow \cdot e & b \# & eb\\ \end{array}$$ | comparison
which is faster operations on register operands or immediate operands? | it depends. like aprogrammer said, it depends on the processor. we are in an age where there are many physics-based limiting factors in cpu construction. this means that the distance traveled by an instruction and the heat generated by a gate cause latency. in theory, then, for a pipeline where the bottleneck is the decode stage, this matters. with immediate operands, you do not need to go to the register file to grab the values, which would cost additional clock cycles and distance traveled. this would decrease latency, and thus increase speed. however, in real-world applications this is very likely not the bottleneck, and so there will be little to no increase in speed (especially if the pipeline has mandatory register-access stages). | comparison
which is more fundamental: key-value or subject-predicate-object? | following your simplicity argument (binary is more fundamental than octal), i'd say that key-value stores are more fundamental. i think that subject-predicate is essentially a 'compound' or 'aggregate' key, so a subject-predicate-object store is a key-value store with additional requirements on the key (namely that it can be split into two). | comparison |
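the idea in miniature (a sketch, not any particular store's api): a subject-predicate-object store is a key-value store whose key happens to be the (subject, predicate) pair.

```python
# a triple store as a dict with a compound (subject, predicate) key.
triples = {}

def add(subject: str, predicate: str, obj: str) -> None:
    triples[(subject, predicate)] = obj   # compound key -> value

add("alice", "knows", "bob")
add("alice", "age", "42")

print(triples[("alice", "knows")])        # bob
```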
apprenticeship vs. imitation learning - what is the difference? | in general, yes, they are the same thing, which is to learn from demonstration (lfd). but usually apprenticeship learning is mentioned in the context of "apprenticeship learning via inverse reinforcement learning" (irl). both methods learn from demonstration, but they learn different things: imitation learning (a.k.a. behavioral cloning) will try to copy the teacher. this can be achieved by supervised learning alone. the ai will try to copy even irrelevant actions such as blinking or scratching, for instance, or even mistakes. you could use rl here too, but only if you have a reward function. apprenticeship learning via inverse reinforcement learning will try to infer the goal of the teacher. in other words, it will learn a reward function from observation, which can then be used in reinforcement learning. if it discovers that the goal is to hit a nail with a hammer, it will ignore blinks and scratches from the teacher, as they are irrelevant to the goal. | comparison
when would best first search be worse than breadth first search? | best first search differs from bfs and dfs in that it uses problem-specific information to choose which node of the search tree to expand next. best first search is an informed search, while dfs and bfs are uninformed searches. in order to use an informed search algorithm you need to represent the knowledge of the problem as a heuristic function. best first search is sometimes another name for greedy best first search, but it may also mean a class of search algorithms that choose to expand the most promising node based on an evaluation function (not necessarily the same as the heuristic), such as greedy best first search, a* and others. if you meant greedy best first search: it is complete (finds a solution in finite graphs), like bfs; it is not optimal (not guaranteed to find the least-cost solution), like dfs, whereas bfs is optimal when the cost of each arc is the same; in the worst case its time and space complexity is o($b^n$), where b is the branching factor and n is the maximal depth, while for bfs the time and space complexity is o($b^m$), where m is the depth of the shallowest goal. greedy best-first search is in most cases better than bfs; it depends on the heuristic function and the structure of the problem. if the heuristic function is not good enough it can mislead the algorithm into expanding nodes that look promising but are far from the goal. here is one simple example: let all arcs have the same cost, s = start node, g = goal node and h = heuristic function. here greedy best-first search will expand s,b,c,d,g while bfs will only expand s,a,b,g (a runnable sketch of this example follows below). | comparison
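here is a runnable sketch of that example (the graph and heuristic values are my reconstruction of the picture the answer describes):

```python
# bfs vs greedy best-first on a tiny graph with a misleading heuristic h.
from collections import deque
import heapq

graph = {"s": ["a", "b"], "a": ["g"], "b": ["c"], "c": ["d"], "d": ["g"], "g": []}
h = {"s": 3, "a": 2, "b": 1, "c": 1, "d": 1, "g": 0}   # h misleads towards b

def bfs(start, goal):
    frontier, seen, order = deque([start]), {start}, []
    while frontier:
        node = frontier.popleft()
        order.append(node)
        if node == goal:
            return order
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)

def greedy(start, goal):
    frontier, seen, order = [(h[start], start)], {start}, []
    while frontier:
        _, node = heapq.heappop(frontier)
        order.append(node)
        if node == goal:
            return order
        for nxt in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (h[nxt], nxt))

print(bfs("s", "g"))     # ['s', 'a', 'b', 'g'] -- reaches the shallowest goal
print(greedy("s", "g"))  # ['s', 'b', 'c', 'd', 'g'] -- led astray by h
```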
are turing machines more powerful than pushdown automata? | if you only consider that 'turing machines can always be made to behave like a stack', you can only conclude that they are at least as powerful as pushdown automata. but in general, yes, it is true: turing machines are strictly more powerful than pdas. the easiest way to see it is to show that turing machines can decide context-sensitive languages such as $\{a^nb^nc^n \mid n \geq 0\}$, which no pushdown automaton accepts. | comparison
which is more computationally efficient: multiplication or 0 padding? | i'm assuming that your numbers are in binary. in this case, 0 padding is shift left: x << 3 is the same as x * 0x08, where 0x08 (hex) is 1000 (binary). shifting is much simpler to implement in hardware and is generally more efficient. you could check by writing a short c program. note that there is one significant difference between shift left and multiplication. the former does not signal if there is an overflow error, whereas the latter will. | comparison |
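the answer suggests checking with a short c program; the same check as a python sketch (note python ints are arbitrary-precision, so the overflow difference only appears when you simulate a fixed-width register):

```python
# left shift by 3 == multiply by 8 for binary integers.
for x in (1, 5, 0x2b, 1000):
    assert x << 3 == x * 0x08

# simulating an 8-bit register: the shift silently wraps, while a checked
# multiply could signal the overflow instead.
x = 0b0110_0000          # 96
print((x << 3) & 0xff)   # 0 -- high bits shifted out, no error signalled
print(x * 8)             # 768 -- the true product exceeds 255: overflow
```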
can we say dfa is more efficient than nfa? | there are two answers, depending on how you define efficient. compactness of representation (telling more with less): nfas are more efficient. converting a dfa to an nfa is straightforward and does not increase the size of the representation. however, there are regular languages for which the smallest dfa is exponentially bigger than the smallest nfa. a typical example is $(a|b)^*b(a|b)^k$ for $k$ fixed. computation (running it fast): dfas are more efficient. the computers we use today are deterministic in nature. that makes them bad at dealing with non-determinism. there are two common ways of dealing deterministically with nfas: backtracking on one side, which is rather costly, or keeping track of the active states, which means each transition will take up to $n$ times longer (where $n$ is the size of the nfa). a sketch of the active-state approach follows below. | comparison
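a sketch of the active-state strategy on the example family above (my construction; the nfa has $k+2$ states while the smallest equivalent dfa needs about $2^{k+1}$):

```python
# simulate the nfa for (a|b)*b(a|b)^k by tracking the set of active states:
# each input symbol costs o(number of active states), i.e. up to n times
# slower per transition than a dfa, but without the exponential blow-up.
K = 3

def nfa_accepts(word: str) -> bool:
    active = {0}                          # state 0: the (a|b)* loop
    for ch in word:
        nxt = set()
        for s in active:
            if s == 0:
                nxt.add(0)                # stay in the loop
                if ch == "b":
                    nxt.add(1)            # guess: this b starts the suffix
            elif s <= K:
                nxt.add(s + 1)            # count k more symbols
        active = nxt
    return K + 1 in active                # a b was read exactly k symbols ago

print(nfa_accepts("aab" + "aba"))   # True: 'b' sits k+1 symbols from the end
print(nfa_accepts("aaaa"))          # False: no b at that position
```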
is a little watering still better than no watering, when plants are thirsty? | this would probably depend on the nature of your soil, the type of plants involved, the stage in their development, their drought tolerance and whether it is long- or short-term watering. in the case of seedlings, for instance, in very warm weather, short-term superficial watering cannot do any damage by causing surface rooting, since their roots haven't yet grown down very far, so a brief watering must be far better than none which would compromise their survival. on the other hand, if, for example, we take a reasonably drought-resistant and well-established variety of sweet corn, growing in a humus-rich, water-retentive soil before 'cobbing', it might be better to withhold water for a few days, until you can water it thoroughly - unless, of course, we're talking about a 'one-off' or short-term watering. in fact, withholding water for a short time might even encourage the plants to root more deeply and extensively in search of it. | comparison |
is a dethatching rake significantly better for removing thatch than a regular rake? | where in california (according to your se profile) are you? i'm asking as, from my limited research (knowledge) on grass types used in california, i discovered: northern california tends to favour cool season lawns; southern california tends to favour warm season lawns. if you do have a warm season lawn (grass type), for the overall health of your lawn you should only dethatch in spring, once the grass has woken from dormancy & is actively growing. dethatching at any other time of year greatly increases the chance of causing real damage to a warm season lawn. on the other hand, if you have a cool season lawn (grass type), without a shadow of a doubt the best time of year to treat and perform maintenance on your lawn is late summer or early autumn (fall). around "labor day" in the usa is generally considered about ideal. now, adding onto "mancuniensis" answer, i recommend paying a visit to your local tool hire shop (store) and renting yourself a mechanically powered dethatcher (a machine you walk behind, just like a lawnmower) for a ½ or full day. additionally, as you paint a picture of a bad thatch problem, i would also be very tempted to get hold of a mechanically powered aerator and aerate your lawn after dethatching. doing so will open up the soil and get some much needed air in there. then after doing that, i would give your lawn a "natural" feed with an all-over covering of a ½ to 1 inch (12.5 to 25 mm) thick layer of screened compost (preferably sta-certified or similar). | comparison
what is the difference between pure horse manure and bedding manure? | hmm... well, i don't know the particulars about the straw bedding you've got there but i can tell you the experience here on our farm and what i know about composting horse manure, having done it for the past 10 years or so. we also compost rabbit, chicken and goat manures here. horse manure definitely needs to be composted. a horse's digestive system is pretty simple and weed seeds can survive it, though i honestly don't see a lot of weeds germinating in our horse and donkey manure. the manure itself has a good carbon to nitrogen ratio and composts well on its own. left on its own, it breaks down really well without much more than keeping it moist. bedding will generally increase the carbon part of the equation and require an additional nitrogen source. we sometimes add grasses, chicken manure (which is really "hot", meaning it's got a high nitrogen content and definitely needs to be composted) and kitchen scraps to it to boost the nitrogen. anytime i've got manures mixed with bedding - wood shavings, sawdust or straw - i always ensure that i add a lot of nitrogen to help things move along more quickly. you can do the lazy composting method - no turning, relying more on anaerobic composting - and it'll take longer for both the manure by itself and the manure/bedding to break down, or you can engage in what i call my "farm workout" by turning the pile to introduce oxygen; it'll be aerobic and typically break down faster - that's been my experience. keeping it moist and turning it often - weekly or a couple times a month might be fine - will speed things up in both cases. moisture helps to encourage the decomposition process. if it dries out, it takes longer, particularly with the bedding. i don't think my local source of straw uses fungicides and it breaks down reasonably fast. i try to get my compost bins and piles to contain at least a cubic yard of material as that tends to encourage things to heat up more quickly, but it isn't always possible. i'd shoot for something in that quantity (or more!) and see how that bedding breaks down. you might be surprised to see that it does so quickly. | comparison
tomatoes...what is the difference between early blight and late blight? | here's the biggest difference: late blight is caused by the pathogen phytophthora infestans, while early blight is caused by alternaria solani. phytophthora infestans is an oomycete, a fungus-like eukaryotic microorganism. it is similar to alternaria solani (a fungus) in that it is a localized disease and doesn't spread internally. p. infestans is a more aggressive disease, and spreads much faster than a. solani. it also kills all infected material, causing dark, watery spots to form; these eventually become hard and brown. control of p. infestans is very difficult, and there is no cure for badly infected plants. if you notice signs of infection, remove all affected parts and destroy them; if the disease spreads anyway, destroy the entire plant. you can prevent it to some extent by: rotating your crops every year, remembering that both tomatoes and potatoes are susceptible to p. infestans; destroying all affected crops as soon as possible - remove them from the site and burn them, or place them in air-tight bags and send them to the landfill, but do not compost; and growing resistant varieties like 'defiant', 'plum regal', 'mountain magic' and 'mountain merit'. control of a. solani is easier, but it's best to prevent the disease, or catch it early. from: sudden outbreak of yellow leaves on tomato (and other) plants: remove all leaves showing signs of early blight (yellowing, dry margins, large to small round dead spots). do not touch the unaffected leaves with the removed portions, or with your hands until they are thoroughly washed. spray with a copper fungicide (like bonide© liquid copper fungicide); apply once every 5 days, and after any rain. continue for 3-4 weeks, or until the plant stops developing symptoms. to prevent future attacks, you can try to: minimize soil-foliage contact; make sure the plants get good air circulation (including proper spacing); avoid wetting the foliage or splashing soil onto the leaf undersides when you water - the fungus spreads faster in wet conditions, and to go along with that, an organic mulch will help keep soil from splashing, and is also useful for many other reasons; make sure the plants have a support to climb on; and rotate the plantings each year, to stop pathogens from inhabiting the soil. remember that potatoes are also vulnerable to early blight, so plan accordingly. | comparison
are clear or opaque tarps better for solarization? | solarization is a practice that can kill most soil organisms, including the good ones. it basically consists of the following steps: loosen the soil; remove big weeds and big rocks; irrigate; seal the soil with tarps; wait a while (four to twelve weeks); remove the tarp; add compost to the solarized soil in order to restore beneficial organisms. in the documents linked at the bottom, you can read about the entire procedure in detail; they also include some advice on tarp thickness. how does it work? clear tarps allow short-wave solar radiation to enter. once the light has been absorbed by the soil, it is re-radiated at longer (infrared) wavelengths that cannot pass back out through the tarp, so the soil heats up. currently there are also some tarps which let in ultraviolet light (very good for sterilization too). transparent tarps or not? always transparent. a black tarp, for example, will absorb the solar radiation itself and radiate it back to the air rather than letting it into the soil; it will still heat the soil somewhat, but only by contact with the tarp. *thanks to @laughing_jack for the correction. literature: solarization - university of florida; solarization - university of california | comparison
would it be better for me to set up my garden in the fall, or wait until spring? | you could certainly wait until spring and things would probably be fine, but personally, i'd get going right now with bed preparation. till it, and clean out most of the rocks, along with the weeds and clumps; the tiller will help break everything into smaller chunks. to rid yourself of most of those rocks, take a piece of 1/4" or 1/2" hardware cloth, nailed to a frame (or not), and use it to separate the rocks from the soil. i lay the hardware cloth down on a spot in the garden, pick an area of comparable size right next to it, and then shovel maybe a couple of inches of the topmost soil onto it from that area. i shake the frame to sift the soil through it, then take the stones and dump them in my wheelbarrow. then i put the frame where i just dug and repeat the process, always moving the frame to where i just dug and replacing that soil with the newly sifted soil from the next section. it actually goes by quickly. if you are in the mood, plant something now and take advantage of the space and the fall weather. my suggestion? bush beans. they are generally 50-55 days until harvest, so you should be able to get a harvest in before the average first frost date in your area - i believe you are probably zone 7a, like we are here in my neck of the woods. the beans, by the way, will help the soil by adding nitrogen from the air; i think the fancy folks call that "fixing" the nitrogen. regardless, they work well. personally, i'd harvest the beans and mow the plants up right there and let them break down in the soil over the winter. next, i'd plant a winter cover crop... you could do any number of cover crops, like winter rye. this is a "green manure" and it works pretty well; you just till that stuff into the soil early in the spring before planting. it adds complexity and organic material to your soil - like the bean plants would. this is, of course, just one way to go. come springtime, i'd add compost - your own if you have it; vermicompost (worm castings) is great if you have it. then you've got yourself a nice prepared bed and can get things going in the spring without all the bed prep. plus, you've got several months of improvement already in place. aside: i generally only till the area the first time and then never bother after that. i will fork open the soil a bit, but i don't run my tiller through it year after year. i can't see that it really helps things all that much, but that's me. | comparison
is clopyralid or dicamba more effective in the control of clover in lawns? | it depends to some extent on what you mean by 'clover'. white clover or dutch clover (trifolium repens) and red clover (trifolium pratense) are very susceptible to treatments containing dicamba, dichlorprop-p and mecoprop-p, or 2,4-d and mecoprop-p. other, clover-like lawn weeds are more susceptible to formulations containing fluroxypyr, mcpa and clopyralid, although these will also have some impact on trifolium varieties. you'll note that the most effective treatments contain more than one active ingredient. any and all chemical treatments aimed at killing or lessening clovers work better in early to mid summer, and are less effective from august (or late summer/early fall) onwards. | comparison
what is a good ground cover for street frontage instead of lawn? | i found it difficult to mark just one answer as correct because all are. i am providing my own answer for this question, but it's only more correct than earlier answers in that it's the best groundcover for my specific situation and location. what i eventually went with (on professional advice) was creeping boobialla (myoporum parvifolium) for its outstanding suitability for the climate, soil type and hardiness zone in which it will be planted. | comparison |
is a potted grapevine better put under or outside a roof? | i don't think it'll make much difference - you live in an area with good rainfall, which is good for plants in the ground, but plants in containers still need watering regardless. marginally, a plant in a pot outside the roof will remain damper in winter than one under the roof, but during the growing season you will need to water regularly whether the pots are under the roof or outside it. if they are under the roof, you will also need to check them during winter to see whether they need water or not. | comparison
why are cut roses doing so much better in a ceramic vase than a crystal vase? | it's probably not the particular vase, but the amount of moisture the stems can take up when more of their length is submerged in water. i noticed the glass vase held a fair amount of water, but the ceramic one has to hold even more to keep the roses upright, as the picture shows. i can also give you some tips on preserving your fresh cut roses longer: use 1 tsp of sugar (or any type of artificial sweetener) in the water and they will stay fresh up to three times longer. the water also needs to come up the stalks as far as possible, so they can absorb a lot of moisture and the blooms get the irrigation needed to keep them fresh and healthy. and i'm sure this is the reason the ceramic does better than glass - it's only the water level. we have tree nurseries and we propagate roses and shrubs; they do best with a good amount of moisture. | comparison
what are the pros and cons of bagged mulch versus city mulch? | dark color is not necessarily a sign of healthy compost/mulch. if it's dyed, the color comes from chemicals, not humus - not so healthy. bagged mulch must tell you if it's dyed or not, so read the bag; if there's no mention of dye, you're good. now, because the (undyed) color doesn't affect the health of the product, using the city mulch is still a good idea, unless you want color for aesthetic purposes. so: bagged mulch, if dyed, will retain its color much longer, and if undyed, most likely still a good bit longer, because bagged mulch is a much higher quality product than free city mulch. but remember, for soil health, it's not all about the color. yes, undyed bagged mulch will be at least as good for the soil as the city stuff; if dyed, you will be adding that much chemical dye to your soil. yes, here are a few: bagged mulch (undyed): (+) is much more consistent in texture and quality; (+) is not going to contain as many possible contaminants, like chipped black walnut limbs (a common issue with city mulch); (+) the color usually lasts fairly well, even when not dyed, because of proper 'cooking' methods and a more decomposed state; (-) is much more expensive (obviously), which tends to limit the amount applied, which can (in some cases) decrease the rate at which your soil is improved. city mulch: (+) it's available in bulk, so you can supply your garden with as much as necessary without running up a big bill; (+) usually, the stuff is good enough for use in garden beds, though some cities are more careful than others; (-) the texture can be extremely uneven, and it will break down irregularly, because of the many various species that went into it; (-) the color fades fast, especially if the mulch hasn't decomposed for long, though this is only aesthetic. | comparison
organic or traditional lawn fertilizer: which is safer for areas with young children? | this is a difficult one if you have children. first, let's deal with the two fertilizers you've mentioned - the first one, non-organic, has a much higher nitrogen level, as you've noticed - nearly 3 times the level of the organic one - and is much more suitable for use in fall. because it's an autumn formulation, it will take longer to break down than the organic one you have, which does not seem to be specifically for autumn use. it also means you need less of it, so it's usually spread more thinly, and it's the one i would choose in these circumstances. the usual advice where children will be playing on grass after treatment is to use a liquid which, once it's dry, is no trouble to either pets or children, and this is particularly useful in summer. however, because it's fall, you do actually need a product in a granular or dry formulation, because the idea is that it breaks down gradually, over a much longer period of time than a liquid would do, which is more or less instant. regarding the difference between them: in theory, either product could burn skin if the person is in prolonged contact and is particularly sensitive, but i imagine your children won't be lying for hours on the grass with bare skin. although the idea of an organic product seems better, in practice, bonemeal can produce dust, attracts various animals who think it's something interesting to eat, and isn't any safer from a skin point of view when compared with the non-organic product. so, on balance, i'd recommend the non-organic version, carefully spread at exactly the right rate and no more, following the instructions to the letter. you should exclude children and pets whilst the treatment is carried out, and it would be better to keep them off the area until it's been well watered in over a couple of days or so. i know the instructions say there's no need to water it in, but if your sense of smell is good, after such a treatment is spread, the first couple of times it gets wet you can actually smell the treatment in the air, so i'd water it in regardless. i do have one caveat though - i would check with scotts first before using anything, to make sure that any product you choose is suitable for a lawn which has only been down a couple of months. if it's like scotts products here in the uk, there should be a contact phone number on the packaging for queries/customer service. it would also be interesting to find out what product, and what npk ratio, the installer would use himself... | comparison
do rye or oats make a better winter cover for a poor soil vegetable garden? | i think the answer depends on more factors than what you've provided in the question -- especially your specific goals, but i can lay out the considerations i would make in choosing between the two. according to a uvm factsheet, rye can add 5 tons of organic matter per acre. i like rye because the seed is cheap, it germinates quickly and reliably, you can start it late, it catches reliably, it always survives the winter, and it goes like gangbusters when the snow melts in spring. the only real downsides i can think of are (a) it is so vigorous it can be challenging to till under in the spring, and (b) it is allelopathic so you can't sow seeds for a couple of weeks after tilling. i have less experience with oats -- and none as a winter cover crop, only spring seeded. according to a sare document, oats can add 1-2 tons per acre with fall seeding. locally for me, oat seed is less available than rye. also for me, oats will reliably winter kill, which means it would need to be started earlier in the fall -- but it also would be easier to incorporate in spring. oats also have allelopathic properties; i think winterkilled oats won't cause problems in the spring. because of the allelopathy, you should wait 3 weeks (according to sare) before planting after tilling the oats under. either cover crop makes a good nitrogen trap -- "soaking up" excess nitrogen left in the soil following the previous main crop and storing it for the following season. they both do a good job of smothering weeds. the sare document on oats linked above mentions in "comparative notes" at the bottom: [compared to oats,] rye grows more in fall and early spring, absorbs more n and matures faster, but is harder to establish, to kill and to till than oats. so i guess if you've got a good rototiller and are willing to work at tilling it under in the spring, rye would be my recommendation. | comparison |