2014/08/07 | <issue_start>username_0: The bookstore at my American university is an outpost of Barnes and Noble, charges much higher prices than can be found on Amazon, and in my opinion offers very poor service. Among other things that upset me, they prohibit students from browsing the stacks of textbooks -- instead you are supposed to tell the staff what you want, so they can retrieve it for you.
I prefer to mass e-mail my students in advance of the course and urge them to buy their books for my class at Amazon, used if at all possible.
Is there anything unethical, or that could possibly get me into trouble, about this?<issue_comment>username_1: First, facetiously, if you consider yourself beholden to your university, so that you must shill for all their money-making activities, then, yes, you are not doing what they'd want. :)
Second, many universities' bookstores have become financially-independent, in effect for-profit, entities, taking advantage whenever possible of convenience and misunderstandings... Their being for-profit already corrupts their function, and their selection of available (=profitable) books, not to mention their pricing structure.
Third, for-profit textbook-writing is a huge industry, with the pursuant corruptions (wherever there's a dollar to be made...). New editions with pointless changes, ... In my opinion, given that the internet exists, we, collectively, can do better, in many ways. Information is not entirely free, but it's not as expensive as all these scalpers (!) would like us to believe.
Upvotes: 3 <issue_comment>username_2: Writing an email to your students advising them to obtain their textbook from somewhere other than your University's preferred supplier - B&N - might well earn you a telling off.
Helpfully informing your students - in a lecture, not in writing - that your preferred textbook is available at the University bookshop - **as well as from other sources** - is less likely to cause you trouble. It is, after all, a completely true statement, and in the best interests of your students. Everyone knows about Amazon and I would expect any thrifty student to refer to Amazon's website for competitive prices for the textbook.
Upvotes: 3 <issue_comment>username_3: **Politically?**
Sure, but there is always a chance that you will step on someone's toes if you do anything. I fully agree with username_2 that you have subtle ways to do it.
**Ethically?**
In the given situation the bookstore is a for-profit entity that gives below-average service to your students at an above-average price. Whatever approach you use to define the main mission of a university, it should include good and fair service to students for the 10-30 k$/year they pay. So I would say **it is unethical not to tell them** that they are not obligated to use a sub-par, money-sucking service and are free to buy from the internet, e.g. Amazon. If anyone is unethical in this situation, it is the person who supervises the B&N shop's license to operate at your university. But that is again a politically sensitive issue.
Upvotes: 5 <issue_comment>username_4: Not **wrong** per se, but as others have mentioned, you may well be stepping on some toes. If you don't feel like dealing with the owners of said toes (whether in the bookstore, or the relevant person in the university), then there are ways to do so without blatantly stating that the bookstore is ripping off students. (Note that I'm not implying that you are *blatantly* saying any such thing!)
**One option** is to tell the students on the first day of class. The obvious downside to that is that many students will already have purchased the needlessly expensive bookstore texts by then.
**A better option** is your practice of mass-emailing prior to the start of the course. Instead of urging the students to buy from Amazon (which may imply that you are affiliated), why not just provide the bookstore's prices as well as the prices --for new and used copies-- from several vendors (Amazon is just a starting place; Abebooks, eBay, Textbooks.com, etc., come to mind as well). Also, as others have mentioned in the comments, students will appreciate it if you mention whether the latest edition is required, or if the previous (much cheaper!) edition will also work. The savvy student will know what you are implying about the alternate vendors, and the rest... well, perhaps they deserve to pay the bookstore prices!
**Additionally**, if your institution has a formal or informal student exchange, students may be able to buy used textbooks from a student who took the course last semester. You might be able to put your incoming students in touch with this network, as well.
Upvotes: 4 <issue_comment>username_5: The only thing wrong about it is that you should be telling them to look not just on Amazon, but everywhere on the internet. A convenient aggregator is dealoz.com (there are many other similar sites).
Amazon is perhaps more reliable than many other sellers, but it is usually more expensive too. And: If you just say Amazon, it might sound like you're getting a commission from Amazon!
Also, especially for many of the more popular textbooks, it is not difficult to find free PDF copies somewhere. This may or may not be legal, but considering how evil the US textbook industry is (and the university bookstores as well), it is arguably the morally correct thing to do. You can phrase it in an ironic fashion in your email, e.g. "You may or may not know, but there are many free and illegal PDF copies of this book online. I strongly discourage you from downloading these."
ADDED TIP: Use older editions of the textbook and tell them it's OK to get an older edition (indeed, try to design your class so that it's no big deal even if they use an older edition). For the most popular textbooks, the evil textbook companies pump out a new edition every 2 or 3 years (even for things like Calculus or Spanish 101 where probably no radical advances either in research or pedagogy are made even once a decade!) As a student I was always annoyed when the professors would by default just ask you to get the newest edition, because it's just the simplest/easiest thing for the professor to do, but of course it could cost me easily $50 more.
Upvotes: 2 <issue_comment>username_6: If you have the student's best interests at heart, you can mail them that you'd be following the (n-1)th edition of the textbook, where n is the most recent version. That way, they can get the textbook at less than the price of a cup of coffee(or even free!), and there's almost always the exact same content!
Upvotes: 2 <issue_comment>username_7: **Urging** student to buy from a supplier rather than another can be seen as advertisement and it's not something a professor should do.
Suggesting to look for alternatives or simply mentioning the book title and letting them do the math is probably the best way to go. You may imply that the most recent version has very small (or no) changes so clever people can go and buy the previous version from other students or used-book stores.
As a final comment, I noticed that no one has mentioned pushing (in this case urging is allowed) the students to use the University (or the City) Library: books are free to peruse and to borrow, so what better option is there?
Upvotes: 2 <issue_comment>username_8: I wouldn't *specifically* mention Amazon. It's just one vendor. Just let them know that they don't need a new copy and are probably able to order cheaper used versions of the book "online".
I don't think they'll have any sort of trouble understanding what you're trying to say, and it sounds a lot more reasonable and less rebellious to the rest of the university.
Upvotes: 4 <issue_comment>username_9: There is no need to specify a particular source lest it border on advertisement, but recommending alternative sources for materials has been fairly common in my experience. In fact, our campus bookstore's website even lists a price comparison tool for all the major online retailers. Taking that as a baseline I think it is only honest to provide information to the students if you find it particularly informative. It isn't uncommon for professors to email the class in the weeks leading up to the start about alternative versions and how compatible they would be with the class "just in case" they are having trouble acquiring the book. Even so far as "I have heard some sources are even 'selling' an electronic copy" has appeared as a subtle nod to the fact that there is a pdf that can be downloaded out there somewhere. Some universities will be happier than others in this regard, but as long as you avoid dropping specific names of retailers in any mass correspondence then I don't see anything outside of standard practice here.
Upvotes: 1 <issue_comment>username_10: As long as Amazon is really the best place to buy them it couldn't be unethical.
It feels wrong to me because Amazon competes unfairly due to its size. It may also feel wrong because you're telling students not to follow the norm.
Bottom line? You're helping your students. That's what you should be doing! Keep it up!
Upvotes: 1 <issue_comment>username_11: Just from my own personal experience and less about ethics: my teachers tell me all the time to not waste money at the bookstore. And actually, unless you're a freshmen or a *really* lazy college student, no one buys from them anyways. I haven't bought a textbook from the university store in years unless (and boy do I hate this) it's a "university specific" text book that you literally can't get anywehere else.
Also, I never, ever buy books until at least the first week of class, to better gauge whether I actually need them. I'll get them if a teacher makes a point of saying I'll need to (and even then it usually is a 50/50 shot whether they use it or not -\_-)
What I would recommend is to just verbally tell your students in class to buy the book from somewhere else (this allows no direct trail from you saying to *not* buy from the bookstore).
Another suggestion some of my teachers have done is to list the book and then, as others have said, give the bookstore price and an amazon/ ebay price as well and let the students figure it out.
But really, I would say, just tell them in class. Your students should really already know to never buy from the bookstore and it creates less liability (if there were ever to be one) on your part.
Upvotes: 1 <issue_comment>username_12: As soon as you've selected the textbook for an upcoming class, post the information, including the ISBN, on your web page. Books available in electronic form have different ISBNs; list the electronic version, too. If you require a particular edition, say so. If an earlier edition will do, *explicitly* say that. I try to include a link to the publisher's site for the book, which will have the publisher's list price, information about electronic versions, and sometimes even free resources for students. Here is one of my textbook listings:
```
Required Textbook: Stallings, William and <NAME> [_*Computer
Security Principles and Practice, Second Edition.*_][1] Pearson / Prentice Hall,
2012; ISBN-13: 9780132775069. The second edition has been revised
substantially. Only the second edition will do for this course. (Note: This
book is available for rental as an e-book on Google Play. Kindle editions
and rentals are available on Amazon as well as in the university bookstore.
Other options may also be available.)
```
I haven't told the students where to buy the book, but I've given them everything they need to make informed purchase decisions. The "other options" note is *surely* enough of a clue to set people to searching.
Upvotes: 2 <issue_comment>username_13: My informed guess is that students know anyway, and there's no need to tell them.
FWIW, I don't believe that publisher-direct is a much better option, and I also believe some of the electronic "rent-for-a-semester" deals from the publisher are not that hot.
Interestingly, the publishers are going to track purchases from your campus bookstore. My own experience with one publisher is that they gave me tons of problems about providing me with access to electronic teaching resources associated with my text because they didn't feel the bookstore was selling enough copies.
Without going into too much detail, there are some real interesting (let's just call them) "issues" with modern academic publishing. In some ways, there are problems in that area that are somewhat analogous to what record labels have been dealing with during their recent history. There are just better ways to distribute information these days, and if publishers don't tweak their business models, they'll become dinosaurs.
If there is ONE THING you should be sharing with your students, it's that finding and using illegal electronic copies is THEFT. I'm certainly no hero for the publishers, who I don't have much sympathy for, but I'd love to see textbook theft by electronic or other means specifically listed in our academic honesty language.
Upvotes: 2
2014/08/08 | <issue_start>username_0: Every day I receive the latest arXiv abstracts via email, with subject lines such as
`physics daily Subj-class mailing a4 1`
`physics daily Subj-class mailing 124 1`
`physics daily Subj-class mailing 490 1`
With new email clients that create "conversations" from an email chain based on the subject line, the disorderly nature of the subject lines makes it a pain to sift through the last week's abstracts if I haven't kept myself up to date.
What do `a4`, `124`, and `490` mean? These identifiers are not unique to each day's listings, but I cannot spot a correlation between them and the date of the email. Is there some hidden way to either include the date in the subject header, or remove these seemingly random strings?<issue_comment>username_1: They look like hexadecimal. So they may be the beginning of a hash or UUID. This would be similar to what git uses to refer to specific versions in a repository. If this is the case, they are effectively random and you aren't going to find any correlation with the outside world (which is the point).
Upvotes: -1 <issue_comment>username_2: Straight from the horse's mouth: there is no fix, but an update is on the list of things that "will be done when they're done".
>
> You may safely ignore the numbers after the text "class mailing" as
> they are used for internal audit code and essentially meaningless from
> a user perspective.
>
>
> We do have plans to update the mailing code at some stage, but the
> time-line is unclear due to limited developer time.
>
>
>
Upvotes: 4
2014/08/08 | <issue_start>username_0: I have finished my BSc in physics. I want to work in an interdisciplinary field like biophysics or econophysics. Because these are mainly physics, one can pursue advanced study while spending relatively little effort on understanding the biology or economics concepts. However, I think that having solid knowledge of the other discipline would also help in research. Since I have studied physics, mathematics isn't a problem, so I could skip it to shorten the study time, which means the time I spend would be roughly the duration of a master's degree (I hope so). Another point is that I could be more flexible than someone who only knows his/her own specialization.
The biggest disadvantage is that I don't have a master's degree, of course.<issue_comment>username_1: It depends on what you want to do next, and where you want to do it. In Switzerland, you are supposed to have a Master's degree to start doctoral studies; however, with your bachelor's degree you can probably convince them to accept you (depending on the fields). In Australia, most people *do not* have Master's degrees before starting a PhD.
+1 for flexibility and broader knowledge.
Upvotes: 2 <issue_comment>username_2: ***Disclaimer:** the following is my practical experience on research. I have no idea about the opinions of admissions committees and the like, but it probably depends greatly on the country and the relative quality of master's and undergrad education.*
I am a theoretical physicist by formation (5 years degree). Then I did my master's thesis (I had taken enough courses) in Bioinformatics, and I am doing a PhD in Biophysics.
When I start a new project (or tackle the next sub-project), there are a bunch of things about Biology and Chemistry that I don't know, but most of them are actually quite easy. I am sometimes lacking "the bigger picture", being able to put my questions into a broader context, but that is not so important. For example, my master's project was to improve the number of identified peptides in an experiment using computational techniques, and that is a very clear goal. What to do with these improved results? Obviously, we can improve the things we already do. But are there biological questions it can help answer? I am not sure, but there are experts on that.
For the *middle picture* range, I rely on my advisors. They are also physicists, but they have learned along the way pretty much what they need. And after just a few months, I was surprised by how much I was able to help the new members of the lab.
Actually, I believe taking another undergrad degree would not help so much. Of course, I would be faster at the beginning, because I would know what things like "the $C\_\alpha$ residuals" mean, but the actual meat of the project, where we spend months, is probably not covered (or not covered in enough detail) in most undergrad degrees. And the main reason is that these details are known only to those who have actually worked hands on with it.
Let me give you an example: in Physics we talk about spectra all the time, and all the information you can extract from them, with perhaps the only limitation being the noise of your instrument. The truth is, unless you have a VERY expensive camera, you are going to find very funny stuff: for example, two spectra taken right after each other, with exactly the same experimental set up, will not have the same intensity on camera, nor even the same line profile. To compensate for this you need to get clever, and it very much depends on the details, so it is very difficult to teach unless your lecturer is an expert in spectra analysis and can tell you how they do it. And still, most people can just rely on the spectra pre-processed by the experts, so they don't need to know this. Unless, of course, you want to work with raw spectra yourself (been there).
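To make the "get clever" remark slightly more concrete, here is a toy sketch (synthetic numbers, and definitely not my actual pipeline): the simplest possible compensation is to normalise each spectrum to unit integrated area before comparing line shapes; real preprocessing is usually much more involved.

```
import numpy as np

rng = np.random.default_rng(0)
wavelength = np.linspace(500.0, 510.0, 200)               # arbitrary units
line = np.exp(-0.5 * ((wavelength - 505.0) / 0.3) ** 2)   # one synthetic emission line

# "Same" measurement taken twice: identical shape, different overall intensity, plus noise.
spec_a = 1000.0 * line + rng.normal(0.0, 5.0, wavelength.size)
spec_b = 1300.0 * line + rng.normal(0.0, 5.0, wavelength.size)

def area_normalise(spec, x):
    """Scale a spectrum so that its integrated area is 1."""
    return spec / np.trapz(spec, x)

residual = area_normalise(spec_a, wavelength) - area_normalise(spec_b, wavelength)
print("max residual after area normalisation:", float(np.max(np.abs(residual))))
```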
Lastly, a master's or a PhD has some courses. They are usually quite specialised, targeted for your level and background, and can bring you up to speed in the things you need to know about your field quite nicely.
And to add some peace of mind, my former lab hired a postdoc coming from computer vision. His biological knowledge had quite big holes, but nevertheless, in a couple of weeks he was already doing amazing stuff with very good ideas.
Bottomline, go for the advanced studies. You can always take Biology or Economy on the side (for example Open University or unofficially at Coursera).
Upvotes: 4 [selected_answer]<issue_comment>username_3: I strongly disagree with a 2nd BSc degree. First, even if you manage to somehow avoid officially some courses because of your first degree, all BSc degrees have a minimum duration which is typically longer than a MS. So, it is not going to be faster or easier. Second, even if you get the 2nd degree you will know less than the ones who have a MSc in the second area. Also, a MS is a nice way to connect with potential advisors if you want to continue for a PHD and work on a specialized thesis similar to your interests.
To make a long story short: If you can get to MSc program of your area of interest with your (partially irrelevant) BSc degree, go for it. Then cover the knowledge you are missing on your free time. It is not going to be easy, but if you pull it off, it will work better for you in the long term. **Disclaimer:** That is what I did and it worked for me. Hope it works for you too.
Upvotes: 3 <issue_comment>username_4: As someone who was in a situation similar to yours I always would recomment the advanced degree. The key difference (at least in most U.S scholls) is that with a B.S/ B.A you have a lot of "fluff" classes to make you more well rounded. Going for a M.A/M.S cuts away a lot of that and focuses much more on just the relivent classes. This makes getting the advanced degree much more time efficent.
Another point I'd like to make is that *rarely* do two degrees do anything. When getting a job, for example, you get more money with an advanced degree. There isn't anything (to my knowledge) that gives you more for two degrees. So you've just wasted years of your life for not much reward (knowledge? Maybe, but certainly not any more than with a Master's in a specific field).
One final anecdote: I had a teacher who had two Master's degrees in something or other, in a field where PhDs reign supreme. Because of this, he wasn't allowed to be a full-fledged professor but only an instructor (much less stable and much less pay). I say this because, yes, he had two "advanced" degrees, but two doesn't equal the higher degree. In this case, I believe you would just be limiting your options with the second B.S. In any case, good luck :)
Upvotes: 3 <issue_comment>username_5: I am a D.Phil. (PhD) student at a Doctoral Training Centres at Oxford and my experience is that a Master's degree in interdisciplinary bio-science would do you much better than a second BSc. I was one of 11 "mathematical sciences" students in my year with a couple of mathematicians and a bunch of physicists. None of us needed to relearn the skills from a BSc because we all had them, what we needed was direction on what is important to learn for working between disciplines. If I were you I wouldn't go for the second Bsc, I would go for a Master's or alternatively as I was recommended skip the Master's and go directly into a PhD program.
Upvotes: 2 <issue_comment>username_6: A masters degree is far superior. The study done at a graduate level far surpasses the study at an undergraduate level. Employers look at a masters degree with much more acclaim than they would 2 bachelor's degrees.
Upvotes: 3 <issue_comment>username_7: Master's for sure. I would rather be a master of something than a jack of many trades. Specialize. [Comparative advantage](http://en.wikipedia.org/wiki/Comparative_advantage) and all that. Or perhaps you would rather be a jack? Then maybe 2 bachelor's is for you, but interdisciplinary doesn't imply jack/2 bachelor's I guess.
Re: Econophysics, you can try out quantitative finance/mathematical finance. I am currently a grad student of QF, and we are learning stuff like Brownian motion and Feynman-Kac theorem. Basic for you Physics people, right?
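In case you haven't met it yet, here is an informal one-dimensional statement of Feynman-Kac (omitting the technical conditions); the reason it feels like home to physicists is that it ties an expectation over a diffusion to a heat-equation-like PDE. The terminal condition $u(x,T)=\psi(x)$ plays the role of a payoff in the finance setting.

$$
\frac{\partial u}{\partial t} + \mu(x,t)\,\frac{\partial u}{\partial x} + \tfrac{1}{2}\,\sigma^2(x,t)\,\frac{\partial^2 u}{\partial x^2} - r(x,t)\,u = 0,
\qquad u(x,T) = \psi(x),
$$

is solved by

$$
u(x,t) = \mathbb{E}\!\left[\, e^{-\int_t^T r(X_s,\,s)\,ds}\,\psi(X_T) \;\middle|\; X_t = x \right],
\qquad dX_s = \mu(X_s,s)\,ds + \sigma(X_s,s)\,dW_s .
$$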
There's the small difference of applying the stuff to finance (allocation of scarce resources over time) rather than economics (allocation of scarce resources), but hopefully that's not too far from your intention.
If you want to get into QF/MF, you could take a master's in it or just get a master's in Physics and learn the Finance on your own. Finance is relatively easy to learn. Things to check out:
Quant Stackexchange (save it from beta please)
Quantstart.com
<NAME>
For Mathematical Finance (the Finance is introductory while the Math is not):
Hull's Options, Futures and Other Derivatives
Bjork's Arbitrage Theory in Continuous Time
Please be patient with the cute math you may see.
For Economics (Basic-Math Basic-Economics book):
N Gregory Mankiw's Principles of Economics
Upvotes: 1 <issue_comment>username_8: "Or perhaps you would rather be a jack?"
Do people need to be reminded that medical doctors in Canada, and elsewhere, earn 2 bachelor's degrees? The first one is either a BA or B.SC., and the second undergraduate degree is an "MD". Same for teachers, BA or B.Sc. plus a B.Ed. Ditto for Law - BA or B.Sc. plus an LLB (Bachelor of Laws), now called a 'Juris Doctor', but it is still an undergraduate degree. Even though the 2nd bachelor's degrees in these professions are undergraduate, they are 'professional' undergraduate degrees.
So for many professions, 2 undergraduate degrees are both necessary and sufficient. So much for the "Or perhaps you would rather be a jack?"
Upvotes: 1 <issue_comment>username_9: It would depend on how useful your first bachelor's degree is.
If you majored in most liberal arts or humanities areas, then a second bachelor's in business, engineering or information tech could open up some doors for you that your first degree did not.
The first question to ask yourself is "Get a Master's in WHAT?" That's an almost impossible question to answer if your goal is to get a better job.
Upvotes: 1
2014/08/08 | <issue_start>username_0: Surprisingly, I have not found a similar question to mine - all I found was a question about the maximum number of citations per sentence.
However, I am more interested in the total number of citations that is considered normal for a paper (to be more specific, a Master Thesis, which in my case will be around 60 pages of content.)
I heard that about 1 - 1.5 times the page count would be a good number of sources to cite; for roughly 60 pages of content, that would suggest around 60-90 references.
I am asking because I am a little worried that I might have cited too many sources.<issue_comment>username_1: There is no definite answer. It really depends on how much previous literature exists, how much of it you have reviewed and cited appropriately, and (loosely) what the word count of the document is. Page count can misleading, as some theses have many more figures and tables than others.
No one is going to skip to the bibliography, think negative thoughts, and say "you have too many references!" without reading the document. If no individual part of the thesis could be considered as having too many citations, then the thesis as a whole has an appropriate number of citations.
These related questions have answers as to how you can decide if a particular part of the thesis has too many citations.
* [Maximum number of citations per sentence?](https://academia.stackexchange.com/questions/26755/maximum-number-of-citations-per-sentence)
* [Is there such thing as too many references for one paper?](https://academia.stackexchange.com/questions/13570/is-there-such-thing-as-too-many-references-for-one-paper)
Upvotes: 6 [selected_answer]<issue_comment>username_2: In addition to the other answer, this question is based on some slightly questionable premises, as seen in the sentence "the total number of citations that is considered normal for a paper (to be more specific, a Master Thesis, which in my case will be around 60 pages of content.)":
* In the communities of CS that I am familiar with, a *Master Thesis* of some 60 pages is not a *paper*. A *paper* is usually a document that concisely describes something on typically 5 to 15 pages (depending both on the paper type (short, full, journal, poster abstract, ...) and the layout). Hence, a *Master Thesis* is not comparable to a *paper*.
* *Papers* published in conferences (and maybe to a somewhat lesser extent, in journals) are usually bound to a very strict upper page count limit. When you have lots of interesting stuff to tell, there is only so much space left for references and you often have to skip citing some sources that you would have liked to include. Such a restriction usually doesn't exist in graduation theses such as Bachelor or Master theses. There may be a rough guideline for the expected number of pages, but exceeding that by a moderate amount (in the case you presented, I'd frankly say 80 pages instead of 60 is ok) *if the content is worth it* is not necessarily a problem - least of all if the extra length is caused by "additional info" such as the appendix or references rather than the core document.
* Lastly, there is no *normal* number of references because each topic is different. For some Master Thesis tasks, there may be a number of default works that should always be listed in the initial exposition of the general topic, which in itself already fill a page of references, whereas other Master Thesis tasks might not have such a "default list"; the general exposition is done with very few or without any references.
Upvotes: 3 <issue_comment>username_3: The number should be N, where N is the exact number of papers that you have really read, understood and (mostly) relevant to your thesis.
Upvotes: -1 <issue_comment>username_4: I just completed an M.A. thesis in English literature, and I mean just. I tend to be light on the number of sources I use and I like to have favored sources and work it to exhaustion.
My thesis is about 30,000 words, about 50 percent more than the minimum at my institution. I have 27 secondary sources and six primary sources. The institution requires 20 sources, I don't if that's 20 secondary or 20 total, but what I did will give you and idea what you need to do.
I'm not just out college. In fact, I am senior citizen age. My writing ability is equal to that the people who write the journal article and equal to that of a professional historian too. Reading the journal articles I have had to read to do my seminar papers and my thesis, I have seen many that are excessively heavy on sources. Some are light on sources but seem nevertheless to be good articles.
How you primary sources you cite might depend on your topic. It could be only one. Conceivably, it could be none. For a master's thesis in literature, the minimum might be one secondary source for each thousand word. In imagine, in that case, that it might be double than many for a doctoral disseration. In that case, the number secondary sources for doctoral thesis would have to be around 150.
How many source might depend on the individual and how that persons works their sources. But I would still say, expect to be required to have 150 sources or close to it.
My thesis was low on sources in part because I first outlined a theory and then applied that theory to the characters of four novels without much reference to outside sources.
Upvotes: -1
2014/08/08 | <issue_start>username_0: As the title says, do student reviews of teachers actually ever matter?
In the U.S., at least, most universities I know of have their students evaluate their teachers at the end of each term. I do know that things like tenure, whether they do research, or whether they are just an instructor play a role in how heavily these evals are weighted, but even so, does anyone know if anyone actually cares or does anything with the reviews?
I've had many teachers of all types and some were notoriously horrible. Every year hordes of students would write lengthy reasons why the teacher was bad, give them very low marks, etc... And yet the teacher has remained.
I do know some teachers who do actually care (the teachers get them a semester later) and they can look at what the students liked/ disliked, etc... Which seems like the only way these evals are used. At least the way it feels to me, the administration (the ones who require these reviews), simply collect them, put them in a file, and never speak of them again.<issue_comment>username_1: Yes the reviews matter in a number of ways. They are generally considered by tenure and promotion committees and consistently awful reviews can prevent tenure/promotion even for someone with excellent research and service. Committees look both at the average/median and the spread/extremes of any numerical scaled questions as well as the open comments.
The evaluations are also used by some people to actually improve their teaching. This is obviously much harder to enforce, but in my experience no one sets out to be a consistently bad teacher.
Very rarely would a teacher be fired based on student evaluations since they have a number of inherent flaws. First the scores and comments tend to be much better in electives than required classes. Scores also tend to be better in small group teaching versus large lectures. Some topics also tend to score higher than other topics regardless of who is teaching. There is also the issue of bias. Student comments can reveal a shocking degree of sexism, racism, and homophobia. Finally, the timing is really wrong to evaluate how much the students learned and how important it was. Asking for an evaluation before the student can see how the class fits into the entire education, and future job, misses so much about what a teacher is trying to do.
Upvotes: 5 <issue_comment>username_2: In my department, teaching evaluation numbers are available to the executive committee for use in recommending raises (or non-raises) in faculty salaries. At the college level, these numbers are a required part of the file recommending people for promotion, and the file must also include the numbers for other recent instructors of the same course or similar courses (because it's known that some types of courses generally get better evaluations than others). One of the associate deans is expected to call the committee's attention to any serious problems with a candidate's teaching.
In general, extremely high or extremely low teaching evaluations, if consistent over a number of semesters, have a real effect. Near the middle of the range, they don't matter much.
Upvotes: 3 <issue_comment>username_3: No, for the most part these surveys don't matter. They were marginally relevant 40 years ago, when they first appeared, and have become even less relevant now.
They're generally not useful to the teacher in improving his/her teaching, because they're just numerical ratings. If students give me 3.7 out of 5 on "grades were fair," that doesn't tell me anything useful.
If the teacher already has tenure, they don't affect the teacher professionally.
At the community college where I teach, they don't matter for getting tenure, because tenure is a rubber stamp. At fancy schools, they don't matter for getting tenure, because tenure is based on research.
They also don't matter because they've been overtaken by technology. Students check web sites like whototake and ratemyprofessor; they never see the surveys, whose results are usually not made public. For public schools in the US, myedu.com will tell them grade distributions for various professors.
Upvotes: 2 <issue_comment>username_3: It depends on the university/college and the department.
It is sometimes joked that a good teaching award is nicknamed the "Kiss of Death". But by my anecdotal experience, this joke is not so far from the truth at some places. So presumably at such places, the weight placed on student evaluations (when deciding tenure say) is zero. Or perhaps even negative (!) - i.e., literally the worse your students think of you, the better for your tenure decision.
In contrast, at good liberal arts colleges, they are more heavily weighted.
It all depends on how much the particular department cares about teaching. It is difficult to generalize, even restricting attention to the US.
Upvotes: 3 <issue_comment>username_4: At my institution my professor told me that non tenure track profs are suspended if they have an unsatisfactory overall rating, which surprised me. For tenured profs it doesn't seem to matter so much. Tenure track profs might be hindered by consistently low ratings.
Upvotes: 2 <issue_comment>username_5: My experience as an University Lecturer is that academia have an entrenched bias verbalised as: "Those who can,do and those who can't, teach".
Student feedback of a negative nature should be used to alert faculty that the staff who are teaching may need some professional development in ped/andr-agogy.
Upvotes: 2 <issue_comment>username_6: I have seen teaching evaluations being used to deny promotion to full professor.
That said, they are not always that useful. When I get my evaluations every semester I read them and sometimes they help me improve. But often they are so clearly biased that I get nothing from them: as an example I remember one that said "midterms are nothing like the assignments", and this was a week after a midterm where 3 out of the 5 questions were taken verbatim from the assignments.
I have also been in position to observe that beautiful and/or funny people get above average evaluations, as do easy markers. These facts don't help committees to take the evaluations too seriously.
As for ratemyprofessors, in my own case the comments there are not representative of what you see in the full sample of the in-class evaluations.
Upvotes: 3 <issue_comment>username_7: Student reviews may or may not matter depending on the institution, department, and professor.
For a research-active professor at a research institution they will most likely not matter at all in any respect unless they are really outliers...like worse than anyone has ever seen in the department. Even then, I think the result would be the department chair asking them to change.
On the other hand, at a teaching school or for a teaching professor (say, a clinical or adjunct) they can be extremely important and could result in termination.
Those are the extremes, and anything in between is also possible.
Upvotes: 1 <issue_comment>username_8: I think the answer depends on the teacher. If the teacher hopes to cover the subject well and is friendly with everyone, the rating should be average and the objective is achieved. But if the teacher wishes to engage the students either to get feedback on the subject taught or to better understand the various needs of the students to further tweak their understanding, then all feedback especially the negative ones, are useful. Alternatively, you can see that some teachers use "kiss the ass" approach in their teaching otherwise negative feedback would affect the renewable of their contracts. In reality, not everyone can claim to understand their students especially those who does not say much during and after classes. I think the feedback is a good outlet for such students and teachers alike. But not all teachers are willing to take risk in teaching. Hence, it is not surprising that teachers are increasing more difficult to to be taught especially those who spend 100% teaching compared to those who have certain % for research and teaching.
Upvotes: 2 <issue_comment>username_9: Yes and no. If staff member X is the protegee of influential higher-up Y, then no bad feedback can hurt X. Up to a complete walk-out by 30 students in protests of X's complete lack of preparation, arrogance, and ignorance.
(Yes, this is a true story about X. Rest assured that yours truly was not involved or affected in any way by it, so this is not sour grapes or anything, although, yes, this sort of thing deeply disgusts me.)
On the other hand, if staff member Z is up for the chop for some political reason, then negative student feedback can obviously be a stick to beat Z with.
Upvotes: 0
2014/08/09 | <issue_start>username_0: Increasingly, my school has been recruiting students from Central Asia, so I see 1-3 Muslim students in each section.
Near the end of the last term, one student asked for leave for some religious activity. He was surprised when I said he could go, then he told me he had missed many Friday afternoon religious activities, but his advisor (or perhaps some other school administrators) said he couldn't leave. I heard a similar story from another student.
I realized that many of the other Muslim students may have similar problems, but they are too nervous to speak out and let me know. Since they remain quiet, I'm not sure how to accommodate them.
What are some typical things a teacher can do to accommodate Muslim students?<issue_comment>username_1: There are a lot of places you could go to learn about Islam, starting with wikipedia. It could be interesting to do so: I recently read the *Autobiography of Malcolm X* and found his description of his pilgrimage to Mecca fascinating and moving. Last semester when I talked about [Gabriel's Horn](http://en.wikipedia.org/wiki/Gabriel%27s_Horn) in my calculus course I wanted to be more balanced in my allusions, so I mentioned that in Islam the horn is blown not by Gabriel but by Israfil, and I was strangely pleased to figure out for myself that Israfil is the Islamic counterpart to Raphael (whereas Jibrail's role is expanded from that of his Christian counterpart).
The point of that preamble was: I do not doubt that learning more about Islam would be a worthy endeavor. Nevertheless I am skeptical that such knowledge would directly help you to accommodate Muslim students. Like most major world religions, there is considerable variation in the way it is practiced. I recommend rather that you familiarize yourself with the policies of your university on religious accommodations. Just yesterday I received the yearly memo on *Sensitivity to Religious Practices* from my upper university administration. It reads:
>
> Many of our faculty, staff, and students commemorate various events of importance to their particular religions. Our institutional practice is to make every reasonable effort to allow members of the University community to observe their religious holidays without academic penalty. Absence for religious reasons does not relieve students from responsibility for any part of the course work required during the period of absence. Students who miss classes, examinations, or other assignments as a consequence of their religious observance should be provided with a fair alternative opportunity to complete such academic responsibilities. Students must provide instructors with reasonable notice of the dates of religious holidays on which they plan to be absent.
>
>
> As you plan your syllabus and begin communicating with students, please keep in mind that some religious holidays affect a significant number of University of Georgia students and might require a student to abstain from secular activities or attend a house of worship. Different groups within a particular religion may also observe holidays on different dates, making it difficult to provide a comprehensive list of all potential religious observances. You may wish to search online for a religious calendar resource to serve as a guide for the dates of common observances.
>
>
> Thank you for your cooperation.
>
>
>
At least if you are in the US (as I seem to recall is the case?) it seems very likely that your institution has some equivalent version. You could keep copies on hand and give them to a student like the one you described above. You can direct them to appropriate university administration if they feel that they are not being accommodated within the stated rules of the university. If you really feel strongly, you could try to speak to the relevant administrators yourself.
Upvotes: 5 <issue_comment>username_2: Although the students who move abroad to study in countries which are much different in the religious aspect of they own, do not expect the same religious accommodation as it would be at home, my experience so far has shown that it is a pleasant surprise to them that someone shows interest in accommodating them.
One thing that they appreciate a lot is remembering the Eids (the main holidays) as well as the observance of the Holy Month of Ramadan. These days are celebrated together with the family in their respective countries, so being far away is hard. In that case, receiving a "good wishes" message is very welcome, whereas the best would be if the university or someone organizes a "fast-breaking" (*Iftar*) dinner.
In addition to that, the Friday prayer is important; it is very helpful if they are allowed not to attend classes during the Friday prayer time. Last year we received an email from the University administration describing their plan to build prayer rooms for Muslim students. I have also read that the Katholieke Universiteit Leuven has a prayer room. That email was very welcomed by the Muslim students.
All in all, I think just acknowledging that you know something about the religion and the important days is heart-warming as they do not expect more than that.
Upvotes: 3 <issue_comment>username_3: All good answers here. Ramadan expect your students to fast during the day. As a Jewish Man I understood that fasting is a serious tenet of faith. Also, if you are a Jew or Christian the Muslim considers you a person of the book. Extend the same courtesy. Not saying anything bad... They avoid pork, alcohol and games of chance. If you ever speak of Muhammad... After you say his name say may peace be upon him. Five times a day the call to prayer will go out. Facing Mecca prayers will be said on a prayer rug. Most Muslim countries have a system set up to where the faithful can hear said call... I.E. Iraq which I spent time in or in Istanbul which I loved. You will not see pictures of religious figures or icons as you would in Eastern Orthodox churches, Catholicism or Protestant churches. Meat is eaten and prepared in a similar manner to our kosher standards. They just call it Halal. Anyway, it would take forever to list everything.
Upvotes: 2 <issue_comment>username_4: As a graduate student, my qualifying exam actually ended up taking place during Ramadan. As I was observing, I asked my graduate department to take that into account in scheduling the oral exams, which required two days for everybody to complete. They obliged in giving me an early morning slot, which was helpful compared to a late afternoon slot (when hunger would have affected my mental sharpness).
I would also note that Ramadan could conflict with evening labs, depending on how late they run. In such cases, it would be helpful to offer alternatives, where practical (perhaps the students could be allowed to start earlier or later, so that the meal break does not interfere too much with the lab schedule).
Finally, I should also mention that the observance of the Friday prayers, as well as the dates of the main observances, fluctuate: the former because they're tied to the solar schedule and thus shift during daylight savings time, as well as geographically according to both latitude and longitude, and the latter because they're tied to the lunar calendar but, unlike the Jewish calendar, are not intercalated. (Rough rule of thumb: the Islamic calendar "gains" one month every three Gregorian years.) Eid ul-Fitr this year was about July 28 in 2014; by 2017, it will be approximately June 26.
Upvotes: 4 <issue_comment>username_5: I think I can contribute here as a muslim. Although some of us avoid it due to laziness, a muslim is required to pray 5 times a day, at certain hours. It is like a ritual that requires a clean and quiet place. But if the person has to do something at that time, he can pray later. So a student doesn't need to leave class to pray. He can pray all day's prayings when he goes home.
There is also friday praying which requires a community to pray with. Unlike the other prays (5 times a day) you cannot do this later or by yourself so it can be good if student is allowed not to attend class friday afternoon.
During Ramadan (which is now during summers but in few years will be during school time again) muslims do not eat or drink anything till sunset but people can eat or drink in front of them. If there are evening classes during sunset, it is best if these students who choose to be thirsty and hungry are allowed not to come so they can do their iftar as they require.
Upvotes: 3 <issue_comment>username_6: I make zero accommodations for Muslim students. I also make zero accommodations for Jewish, Christian, Buddhist, Pagan, Hindu etc. Your religion is your personal business and it should stay your *personal* business. If my lecture / lab / test runs through your observance time you have two choices: take 10 minutes and keep quiet about it, or leave the room. If it's a test you don't get back in. If it's a lab you may miss something important, like how to handle dangerous chemicals. It would be very ironic if your prayers to your God resulted in meeting Him an hour later because you missed something important.
Tolerance and accommodation work both ways. I simply ask that *all* observant students show equal accommodation for the rest of the class who are quite happy with a 2:30 start time and don't want to rearrange their day because one person prefers 3:30. My reading of the afternoon prayer schedule, for example, says the time is not fixed: "... till the sun is still bright and enough daylight remains for a person to travel 6 miles". The sun is up until at least 5 in the winter, and 6 miles takes 20 minutes (10 on the highway that runs past campus).
Also, remember on the application form, when the school pointedly did NOT ask any questions about race, gender, color, religion, disabilities, heritage etc.? There's a reason: because the answers are irrelevant to the programs (and schedules) we offer.
>
> one student asked for leave for some religious activity
>
>
> one student asked for leave for some sports activity
>
>
> one student asked for leave for some social activity
>
>
> one student asked for leave for some political activity
>
>
>
How are these questions any different?
Upvotes: 3 <issue_comment>username_7: The student must take their own decision, what is more important at the moment: to sit in the class or to make religious activities. The school should not prevent such, but if the one comes then later and means, he wouldn't take part at examine, cause instead to learn he done his religious activities, so he must get his bad note - religious activities aren't an excuse for non-completion of school duties.
Upvotes: 2 <issue_comment>username_8: My general suggestions, for adjusting your course schedule to accommodate the religious *and any other* needs of your students, would be:
1. **Don't make attendance compulsory** if you can reasonably avoid it.
Of course, sometimes — e.g. for exams — requiring physical attendance may be unavoidable, but if there's any chance that a student *could* successfully complete the course without being present on a particular occasion, I'd suggest allowing it.
This may require some extra effort on your part, such as making lecture notes available for self-study, or scheduling supplementary lab sessions to make up for missed labs. It's up to you (and your department policy) how far you want to go with this, but at the very least, I'd suggest that, if a student tells you in advance that it would be inconvenient or impossible for them to attend a particular session, you should try to accommodate them if it's possible with reasonable effort.
In particular, IMO there are [other reasons](https://academia.stackexchange.com/questions/11353/what-is-the-best-way-to-keep-your-students-from-getting-out-of-control/11371#11371) to avoid compulsory lectures, anyway. Just make it clear to your students that being absent does not excuse them from learning the material that the lecture was supposed to teach them (but that you're willing to help them do so, as far as practically possible).
2. **Publish your course schedule well in advance**. This goes especially for exams and other things that are compulsory and/or cannot be easily rescheduled, as it allows students to plan their schedules in advance and to make an informed decision on whether they'll be able to properly attend your course.
You may also wish to ask prospective students to contact you if they'd like to attend your course but find the schedule problematic. At the very least, even if you find yourself unable to accommodate them this time, the feedback will be valuable for planning the next year's / semester's schedule.
3. **Ask your students to tell you if a particular time is inconvenient for them**, and make it clear that you're willing to make allowances where possible, especially if multiple students find the time problematic. If you do find that you have several students who'd prefer not to come to class at a particular time, bring it up in class (or e.g. on the course mailing list, if you have one) and see if there might be a way to reschedule the class without unduly inconveniencing anyone else, or even if it would simply make sense to skip it.
This should go for **any reason**, not just for religious ones, although those obviously do qualify. All the same, if a significant fraction of your students would really like to, say, watch a football match during a particular lecture, rescheduling that lecture could also be a perfectly reasonable request to consider.
The important part here is to make your students aware that you *want* them to tell you if your schedule is inconvenient for them *for any reason*, that no reason is too insignificant to ask, and that, even if you may not necessarily be able to arrange a perfect accommodation, you'll at least consider all requests. Do remind your students that you're not omniscient, and that if, say, an important lecture happens to overlap a religious event (or a football match), that might just be because you weren't aware of the conflict.
4. **Try to anticipate potential sources of scheduling conflicts**, such as major religious events, popular celebrations and, yes, even big concerts or sports events. That seems to be the specific thing you're asking about here, but I'm putting it last on the list because I feel that it's of secondary importance compared to the other suggestions above.
Sure, it's a good idea to be aware that, say, having a class on Friday afternoon could be problematic for Muslims, and that you probably shouldn't schedule anything important on major religious holidays like Eid al-Fitr or Eid al-Adha (or, for that matter, on Christmas or Easter, either). But, ultimately, it's IMO *even more important* to get your students to tell you if your schedule doesn't work for them, and to be willing to adjust the schedule or to find alternative solutions to accommodate the students' needs, whatever they may be.
Upvotes: 4 <issue_comment>username_9: [**Description of Salat**:](http://en.wikipedia.org/wiki/Salat)
>
> Muslims are commanded to perform prayers five times a day. These prayers are obligatory on every Muslim who has reached the age of puberty, with the exception being those who are mentally ill, too physically ill for it to be possible, menstruating, or experiencing postnatal bleeding. Those who are ill or otherwise physically unable to offer their prayers in the traditional form are permitted to offer their prayers while sitting or lying, as they are able. The five prayers are each assigned to certain prescribed times (al waqt) at which they must be performed, unless there is a compelling reason for not being able to perform them on time
>
>
>
From the above quote, I would expect the school to provide time for Muslim students to perform prayer. This could be about 20 to 30 minutes for each prayer time, except for Jumu'ah, which could take from half an hour to two hours.
Upvotes: 1 <issue_comment>username_10: Plus to other recommendations (such as advanced scheduling , ...) some tips on cases, when talking about things some how related to Islam (as a course material).
1. Stereotypes are usually misleading, each individual reflects
his/her specific set of beliefs
2. Consider existence of variance or some times extreme variance in groups which all are identified with Islam label
3. Consider that many of them are not
observant (usually different nationalities show different proportions in this)
4. Consider that many political orientations and religious
beliefs have found blurred boundaries specially in recent decades
and for younger generations. So one talking about some political
topic might actually considers it a religious topic or vice versa.
Upvotes: 1 <issue_comment>username_11: I have now taught for about ten years to classes with a significant fraction of Muslim students. Here are a few remarks:
* If you have a class on Friday afternoon, you might want to ask if it needs to be rescheduled or delayed because of the Friday's prayer. A couple of years ago I had a class on Friday starting a 2:30pm: a representative of the Muslim students asked me to delay the class of about 30 min to allow them to come back from the mosque. I told them that 30 min were a bit too much, but 15 min were ok. They replied saying that by hurrying up a bit they would have been able to be roughly on time and we agreed on a 15 min delay. Had they insisted for a longer delay, I would had asked the faculty a rescheduling of the class.
* I avoid to offer to shake hands to female Muslim students when they come to my office during office hours or after an oral exam.
* In my classes students participate also to electronics labs, where they divide in groups of 3-4 students each. Sometimes I have to sit among them at their bench to better show how a certain measurement works or to fix a circuit they assembled. In those cases, if there are female Muslim students in the group I pay a bit more attention at not touching them while I'm speaking.
I've never been asked to make any other special accommodation.
Upvotes: 0 |
2014/08/09 | 2,118 | 9,268 | <issue_start>username_0: I am a PhD student in social science who got interested in programming via data analysis in R and various utility tools in Python, SQL, Java (for web scraping, data querying, etc.).
I am considering whether I should take several undergrad CS classes at my university, including 1) Data Structure & Algorithm, 2) Software Development (both in Java), and 3) Database.
My concern is that I've seen various posters on this site claiming that what's taught in a CS program has very little to do with the craft of programming itself. My main goal is to improve my data analysis skills (for a data science career) and perhaps learn some machine learning and agent-based modeling. Given that goal, should I take those classes? If not, what's the best way to learn programming in an applied way?
(A common answer to "how to learn programming by yourself" is to do a project. However, engaging in a project without guidance and quick feedback is a very easy way to get sidetracked, especially since I'm in social science where programming skill is nothing but a tool and not valuable in and of itself. So, if your answer is "to do a project", please provide more details on how to get guidance, e.g. data analysis books)<issue_comment>username_1: No, you shouldn't take undergraduate courses just because the subject interests you.
You are doing a PhD in Social Sciences. That means that all of your academic effort goes into your PhD. A PhD is a full-time occupation. Not in some weak, 35-hours-a-week sense of "full-time", but in the sense of consuming as much brain work as you are able to put in.
Programming and statistics are merely some of the means to an end for you: to provide you with the tools you need to complete your PhD. So focus. Focus on that. Every time you feel the urge, when you feel distracted by machine learning or whatever, ask yourself: how does this **directly** contribute to me finishing my PhD? Catch yourself as soon as the siren call of "it will help me better understand the bigger picture of machine learning" or whatever starts weaseling its way into your mind. Those are nasty little tricks the mind plays on itself, that will drag you off your true course, and onto the rocks of endless distraction. And then you'll never finish your PhD. Just keep looking at the post-it note you've attached to the side of your monitor that contains a terse expression of your PhD's research question. That is your beacon: aim at it relentlessly, and don't steer away from your true course.
Upvotes: 2 <issue_comment>username_2: If you are considering such classes, then go for it!
You are right that the basics of computer science have little to do with direct application (it's unlikely that you will need to invent a new sorting algorithm). But:
* algorithmic thinking is invaluable,
* getting it at the level of computer science students (rather than "computer science for liberal arts majors") may be a challenge worth taking,
* getting contacts and interaction with CS students is invaluable (you may learn a lot about applied programming by immersion),
* learning advanced CS will pay off (regardless of whether you stay in academia or not).
Dedicating a year, or one's whole studies, to something is a commitment people sometimes regret. Taking one or two courses one is interested in is rarely regretted.
@EnergyNumbers warns you against lack of focus. But hey, you are not taking a random subject! It is a thing that will boost skills you need for the things you are doing.
Source: I did take one such class (while majoring in phys/math) and it was great (my only regret is I didn't take more). Now I am doing data science.
Disclaimer: I do like to side-track and I have little respect towards entrenching oneself in one, arbitrarily defined field.
Upvotes: 3 <issue_comment>username_3: I see a couple of places where CS classes would help towards data analysis in the social sciences.
* thinking about issues like complexity can help when you're writing scripts to analyze large amounts of data
* the "craft of programming" is also useful to learn, because you realize the risks of your code being wrong or faulty in some way (and techniques to guard against that), which in a social science study can simply lead you to draw completely wrong conclusions. Such skills are likely to be taught rather in computer/software engineering than CS, if that distinction exists wherever you are.
One of my friends, a glaciologist, lost an enormous amount of time during his PhD because of limited programming skills and the inability to verify his code, which led him to doubt his experimental results. The programmer they hired just didn't understand the science and was useless...
Finally about machine learning and agent-based modeling, you won't learn much (if any) of that in undergrad courses. Those are usually graduate courses.
Perhaps an interesting avenue would be to collaborate with a computer scientist on a research project related to your studies, rather than embarking solo on a toy programming project.
Upvotes: 2 <issue_comment>username_4: I recommend learning about those topics. While EnergyNumbers is correct that programming is a means to an end for you, I strongly disagree with the implication that becoming a better programmer is a waste of time. Knowing more about computer science will let you complete your analyses faster and more easily and, more importantly, will give you more confidence in your code and the results you produce. A real life example of social science programming gone wrong: the coding errors in the Reinhart-Rogoff spreadsheet.
I also feel this way because I'm in the opposite of your situation. I'm a software engineer, but I'm interested how developers work with their tools and each other, so there's a social science flavour to my research. I've found it extremely helpful to dip into social science textbooks and courses to learn more about doing this kind of research. If I hadn't taken the time to learn more about grounded theory, conducting interviews, designing surveys, and so on, I'm sure my work would have serious methodological problems.
That said, I don't think you necessarily have to take a course to learn the material, or at least not an on-campus course. I took Coursera courses on [algorithms](https://www.coursera.org/course/algo) and [machine learning](https://www.coursera.org/course/ml) for my own interest and can strongly recommend both of them. EdX and Udacity are two other MOOC providers with good offerings. Doing online work is more flexible than attending undergraduate lectures, and you can pick and choose the material that's most relevant to you. Reading textbooks on your own time is an even more flexible approach. On the other hand, if the structure of an on-campus course helps you learn, then by all means, take the course.
Upvotes: 2 <issue_comment>username_5: Most of the time, in practical programming you can get away with knowing the craft without really needing the science, except for a few details (why is it faster to iterate a matrix by rows than by columns? why can I not run this in parallel?). But a stronger background in the science can help you expand your limits. For example, imagine you have a "weird" dataset where the standard algorithms don't work, but maybe you can think of your own!
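To make the row-versus-column point concrete, here is a minimal timing sketch (assuming NumPy is installed; the array size is arbitrary, and the exact gap depends on your machine):

```python
import time
import numpy as np

a = np.zeros((5000, 5000))  # NumPy arrays are row-major (C order) by default

# Row-wise traversal: each row is contiguous in memory, so access is cache-friendly.
start = time.perf_counter()
for i in range(a.shape[0]):
    a[i, :] += 1
row_time = time.perf_counter() - start

# Column-wise traversal: each element access jumps a whole row ahead (strided access).
start = time.perf_counter()
for j in range(a.shape[1]):
    a[:, j] += 1
col_time = time.perf_counter() - start

print(f"by rows: {row_time:.2f}s, by columns: {col_time:.2f}s")
```

A data structures and algorithms (or computer architecture) course is where the "why" behind this kind of behaviour, memory layout and caches, is usually explained.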
Also, you will be able to understand papers from other disciplines. I personally have "leeched" knowledge from papers in Sociology, Electrical Engineering, Medicine, Mathematics... all of them oriented at the processing of their data, that was by chance, somehow similar to mine.
Yet another reason is that it can boost your academic value. I don't think there are many people with a CS background in the social sciences, and probably even fewer with a strong knowledge of the science behind your research, which means it can help you stand out and be a valuable asset for your lab.
If this was not enough, never underestimate the value of networking. At some point, you may hit a problem that is too hard for your programming skills. Having connections in CS, you can propose a collaboration.
Upvotes: 1 <issue_comment>username_6: The three courses you mentioned deserve discrimination: one yes, one maybe, and one not yet.
Do take Data Structures and Algorithms. It covers the principal means of practically addressing real classes of problems. Even if you never use most of the techniques you see there, getting a sense of the scope of approaches to deterministic problems is key to designing new programs. If you're going to do some programming you want to know that this stuff exists.
You may choose to take Software Development. I'm assuming it's like Software Engineering, which is actually about the craft: team formation and communication strategies, productivity tools used by professional programmers, optimization for quality and maintainability. These practices are niceties essential to working in large groups or on long-lived projects, frequently ignored by solo programmers.
Don't rush to study relational Databases. They're just a famous case of a matched query language and data structure (it's an arbitrary number of mixed-type 2D matrices). You don't need to study them unless you're going to use them.
Upvotes: 0 |
2014/08/09 | 681 | 2,798 | <issue_start>username_0: In [Do student reviews of teachers matter?](https://academia.stackexchange.com/questions/26971/do-student-reviews-of-teachers-matter), there are a couple of comments which suggest that being labeled a "good teacher" is a bad thing at a research intensive university. I have heard this in the past, but have always thought it was based on the fact that you wanted to be known for your research as opposed to your teaching. In other words that you want to be known as a "good researcher" as opposed to a good teacher. The way the comments are used in that question it sounds like you should in fact strive to be known as a "bad teacher".
Is it bad to be known as a good teacher? Is it good to be known as a bad teacher?<issue_comment>username_1: No.
I work at a university that focuses almost exclusively on research (we have only graduate students, most of whom are Ph.D. students, and the number of postdocs, research staff, etc. is approximately equal to the number of students). A few faculty members are well known as excellent teachers, and a few are commonly known to be poor teachers. In general, I don't think the distinction has very much influence on the respect accorded to each within the university (and even less in the larger academic community). But I think the good teachers are better liked, both by their peers and the administration. I certainly appreciate it when I find that students are well prepared thanks to having taken a course with a "good" teacher. And bad teachers are occasionally so bad that they cause administrative problems, which makes everyone unhappy.
Part of each faculty member's annual review is an evaluation of his/her teaching (by the dean). A positive review is definitely a good thing.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I made that comment so here's my answer:
Yes and no.
There are many great senior researchers at my R1 university who are renowned teachers as well. That is, they can easily hold four hundred undergraduates enthralled for hours on end. They are no doubt Great Teachers in the truest sense.
But there are also a great many junior faculty who did not get tenure at my university (our tenure rate was less than 1:4 for the past several decades, although it has gone up recently).
The common reason given for their negative tenure decisions is that they spent too much time on students and not enough time on their research. That is where the faint praise, "at least they are good teachers," comes in.
**Tl;dr**: for senior faculty, good teacher is high praise as it presumes excellent research scholarship. For junior faculty, it is dangerous faint praise as it assumes misplaced energies.
Note: You should post a separate question about the "Curse of the Teaching Award"
Upvotes: 4 |
2014/08/09 | 2,836 | 12,141 | <issue_start>username_0: During the last term, I recorded at least 50 cases of student plagiarism. The most common cases were students copying and pasting paragraphs verbatim from various Web sites, assembling them together, and calling it their essay.
I took what I thought were sufficient steps to inform students of what was not allowed:
* I posted the rules in the syllabus, on the course Web site, and listed relevant rules in the instructions for larger projects.
* I issued spoken warnings in class regularly, occasionally showed some examples of such submissions, and also showed students some of the steps I took to catch the plagiarism.
I also set what I thought were strict enough consequences so that students know it is better to do nothing at all than to cheat:
* 20% grade loss (from their entire grade) per infraction, no matter the value of the assignment (most assignments were only worth ~5%).
Note, these are policies I established from the very first day of the class, and carried through the whole term. Yet, even in the final weeks, I continued to catch copied work and failed a lot of students.
What further steps can I take to reduce this problem?<issue_comment>username_1: It might be helpful if you can tell us what field you are in, or more specifically what kinds of class you are teaching. What is the subject, and is it a big lecture class or a small section, etc? Catching 50 plagiarists makes me think you are teaching big lecture sections.
Here are two things I do: First, I have a very harsh plagiarism policy. I automatically fail anybody I catch plagiarizing. This raises the stakes.
Second, I consciously try to design assignments that are hard to plagiarize. There are a variety of ways to do this. For instance, you can give fairly specific assignments. Don't say: "Write a paper about Shakespeare" but instead "Write me a paper on the role of ghosts in Hamlet and Macbeth." This doesn't make cheating impossible, but it makes it harder to google and get a prefabbed paper.
Upvotes: 4 <issue_comment>username_2: To reduce plagiarism I would increase the penalty to "failure" for any instance of plagiarism. This policy might make students react emotionally and therefore might seem difficult to do. Therefore, I would add to your "let them know in advance" policies a few statements (eg in the syllabus) to show the problem context, as "Last semester 50 students failed the class due to plagiarism," and "the purpose of the policy is to protect the value of the university's degree. If the school gets a reputation as a cheaters' school, the value of our degree may drop to that of a low tier school."
Upvotes: 2 <issue_comment>username_3: While both of the existing answers give the same basic answer as mine, I will add my own simply because I don't have time to make it short enough to fit into a comment.
As Shane wrote, design assignments that are hard to plagiarize and fail all students who plagiarize.
I do both of these but still have a problem with students plagiarizing. I fail all who plagiarize, but they have a chance to resubmit one time (school policy). If it were up to me, I would fail them without a chance to resubmit, but it is not up to me.
In the end, some students do not take the issue of plagiarism seriously. **These are the students you need to awaken, and finally seeing that they will not graduate until they write their own assignments will eventually awaken them.**
I have had students (more than one) who end up taking one of my modules three years in a row (because they keep failing for plagiarizing). Eventually, they all get it and do their own work (or they change schools). The students who get caught and open up to me usually have the same reason: They waited until the last minute and did not have time to complete the work, so they took a shortcut.
I have even had students who clearly spend hours modifying the work of someone else just to avoid detection. I constantly wonder why they would not simply spend those hours doing the actual work. I don't always get a response when I ask the student.
First-semester students are usually worse than more senior ones, but it does seem that even when one teacher is tough, some students still try it (and sometimes succeed) with other teachers. A more coordinated school-wide effort would seem to help with this, although I have been unsuccessful in making my colleagues as concerned as I am about the topic.
Upvotes: 3 <issue_comment>username_4: I sit on my department's academic misconduct committee, see a huge number of cases, and have looked at a number of statistics. We use TurnItIn at my university and allow students to precheck their work to obtain both a similarity score and a detailed report of which parts of their paper are likely copied. About half our students use this precheck feature, but they seem to ignore the output, since the similarity index on the precheck is generally not that different from the similarity score on the final copy. In other words, telling students exactly what is copied does not decrease plagiarism. We have a pretty light penalty for a first offence of plagiarism, but the penalty for a second offence is much more severe. Having previous offences does not reduce the probability of committing plagiarism on future assignments, so we do not think that the severity of the penalty matters. We have concluded that the students who plagiarise just do not care and that there is nothing that can be done to discourage them.
Looking at the sources students copy from, we do not think the specificity of the assignment would reduce the number of incidents. What we notice is that some types of assignments (e.g, take home essays) are much more likely to contain plagiarism than others (e.g., exam essays) and that plagiarism is much more likely to occur during the first year. We therefore limit the number of assignments with high rates of plagiarism during the first year.
Upvotes: 3 <issue_comment>username_5: Plagiarism is a violation of administrative policy, enforceable by a forfeit of benefits. Students are all adults and should know about plagiarism from their former schooling prior to attending the university.
* First case: Warning and educational session explaining intellectual dishonesty
* Second case: Exmatriculation/expulsion.
We as a society need such specialists, and we often pay taxes for their education. Tolerating plagiarism allows students to graduate from the university having practised an art of criminal thinking. Expulsion is, in this case, simply prophylaxis against more substantial crimes that harm society.
Upvotes: 2 <issue_comment>username_6: This is a very hard decision to make if you are alone with the problem.
It is my experience that teaching institutions tend to boast about having a hard-line / zero-tolerance policy towards cheating (plagiarism is cheating, at least once the students have been told so), while being rather soft on cheaters when it comes to actually taking action.
I would therefore recommend that you actually ask your bosses (department heads and the like) what they recommend. Do make sure they are not encouraging you to waste your time (by taking action against cheaters only to see your actions canceled by some committee). If your institution actually enforces a hard line policy, go with it. If it is rather permissive, well, you cannot do much more than go with it too... :(
Upvotes: 1 <issue_comment>username_7: I used to work in a college when desktop computing was in its infancy.
It was inevitable that a class of 30 students would come up with similar papers if they all used the same research sources.
I failed 27 out of thirty papers submitted in the first week because they were all copied from Microsoft Encarta, which was available in the college library. I knew that the article had been copied and pasted, as I had a print of it on my desk which I had used to prepare the module. After a class protest against my actions, I reviewed my decision and failed the other 3 as well, since they had copied the article but at least had the good sense to re-word it so it wasn't quite so obvious. Sadly, none of the students had acquired any knowledge of the material.
Upvotes: 2 <issue_comment>username_8: It depends on the number of your students and a little bit of work from your end. I taught a programming class with 100 students and did the following:
1. Created 20 topics, with a bit of guidance for each.
2. In class, I asked students to write down their names, which I put in a hat. Then I asked a student to come to the board and pick names from the hat, so we randomly created 20 groups of 5.
3. Monitored their work every week to see who was doing what in each group.
4. During the last 2 weeks of the course, when they presented their work, I asked each person in a group individually what they had done and whether the rest of the group agreed.
Almost all students came through. The best thing was that this created an interactive process: I was happy because they were gradually building up, learning, and solving their problems, and they were very happy because they felt they got what they deserved.
Upvotes: 0 <issue_comment>username_9: The thing that has worked best for me is to explain that citation and accompanying references show me, the instructor, that the student has actually done work instead of just jotting down whatever comes immediately to mind, and that I reward work.
That and assigning a grade of zero on the first instance with a warning that a repetition will result in referral to the student conduct office.
Upvotes: 0 <issue_comment>username_10: The university I attended and teach at has the same plagiarism policy and I've only ever heard of 3 cases occurring during my time learning and instructing.
The policy is that if you are caught plagiarizing you fail the course and can be taken to court. Legal charges depend on whom the student is stealing from and whether that person wants to press charges. In none of the 3 cases I have seen has anyone pressed charges.
In terms of what you can do to reduce plagiarism, there are various methods, and they depend on what your university allows you to enforce. First, I would fail a student immediately if it's copy & paste. If it's a student who clearly just doesn't know how to cite things correctly (for example, a famous quote in the paper with no acknowledgement of who said it), I would handle it differently.
You seem like you put a good amount of effort into telling your students the consequences of plagiarism, but I don't think losing 20% will deter a lot of people from at least doing it once. 80% is still a passing grade, though simple logic would say that if the assignment is worth less than 20% of your final grade, it is better to just not do it at all rather than copy it from somewhere else.
You mentioned that you teach a writing course? In my writing courses we had to have a peer proofread our rough drafts before moving on to the final paper. If you do something similar, then you may want to encourage students to review a digital copy of the assignment and have the peer run it through Copyscape, or even tell the students to run their own assignments through Copyscape.
Upvotes: 0 <issue_comment>username_11: Just a couple of things to add, having been the plagiarism czarina at my school for a number of years.
1) The university should have a campus-wide policy so that students in different sections of the same course don't receive widely differing consequences.
2) It's written into the California Education Code that teachers cannot grade punitively, so failing a student for plagiarism was out. We could give the student a zero on that paper only, and figure that zero into their final grade.
3) We sent particularly egregious cases to the Student Discipline Officer, who as the President's designee could be punitive or whatever she felt appropriate.
4) Many students come to college not understanding how to cite sources correctly. They need to be taught.
5) Change your assignments every semester and as one responder said, write questions in such a way that students will not be able to easily cut and paste.
Upvotes: 1 |
2014/08/09 | 1,050 | 3,502 | <issue_start>username_0: What is the best way to search for conferences all over the world? I need to participate in one in the near future.<issue_comment>username_1: For computer science conferences:
1. [WikiCFP](http://www.wikicfp.com/cfp/)
2. [EventSeer](http://eventseer.net/)
3. Springer [LNCS forthcoming proceedings](ftp://ftp.springer.de/pub/tex/latex/llncs/LNCS_Forthcoming_Proceedings.pdf):
4. ACM [calendar of events](http://www.acm.org/calendar-of-events)
5. IEEE [list of events](http://www.ieee.org/conferences_events/index.html)
Subject-specific mailing lists, such as:
1. [DBWorld](https://research.cs.wisc.edu/dbworld/)
2. [AISWorld](http://www.aisnet.org/AIS_Lists/publiclists.aspx)
3. [ACM SIG-IR list](http://www.sigir.org/sigirlist/)
4. [ECOOP info list](http://web.satd.uma.es/mailman/listinfo/ecoop-info)
Upvotes: 2 <issue_comment>username_2: I have had very good experiences with national and subject-specific mailing lists, for instance the French "Groupes de Recherche" such as <http://www.gdr-im.fr/>, intended for Theoretical Computer Science. There are dozens of them in France alone.
Once you are a member of such a group, you receive quite a lot of mail (subject prefixed by `[gdr-im]`), most of it information on interesting seminars, job offers and conference CfPs (for both local and international meetings). I have found being a member of this mailing list very valuable.
Upvotes: 1 <issue_comment>username_3: By doing reasonable googling. As you didn't state your topic, let me give an example of [searching for conferences on magnetism](https://www.google.com/search?num=30&hl=en&prmdo=1&q=registration+intitle%3Aconference+magnetism+|+magnetic+material+|+physics+inurl%3Aedu+|+inurl%3Auni+-filetype%3Apdf+-filetype%3Adoc+-filetype%3Appt+-filetype%3Aps+2014..2014&oq=registration+intitle%3Aconference+magnetism+|+magnetic+material+|+physics+inurl%3Aedu+|+inurl%3Auni+-filetype%3Apdf+-filetype%3Adoc+-filetype%3Appt+-filetype%3Aps+2014..2014). The operators I use should be self-explanatory. Most conferences will be announced or mentioned on university websites; to exclude old ones, search only within `2014..2014`, and so on...
See also my [other answer](https://academia.stackexchange.com/a/455/213), which has some links on how to use Google properly.
Upvotes: 2 <issue_comment>username_4: Nature's website has a [list](http://www.nature.com/natureevents/science/) of scientific conferences and events (324 events are listed in September 2014), but may be biased towards certain fields. I think I saw similar lists in other journals as well.
Upvotes: 1 <issue_comment>username_5: I worked as an administrator for the 21st McGill International Entrepreneurship conference and we listed our conference on a conference announcement directory called PaperCrowd.
It attracted several delegates from around the world. I found out the company was based in the same city I lived in, so I applied for a job there and got it! I am now the proud community manager of PaperCrowd. We are working hard to improve the services for researchers worldwide.
You should try PaperCrowd - a global directory of academic research conferences. You can search by topic, geography, and keywords for research conferences you are interested in, such as law, legal studies, etc.
Organizers add their events in a couple of minutes and it’s free. It’s restricted to academic research conferences.
It feels good working for a company whose effectiveness I have seen for myself.
<https://www.papercrowd.com/>
Upvotes: 1 |
2014/08/10 | 909 | 3,650 | <issue_start>username_0: I was an international student and graduated 3 years ago. Now I want to continue my research career instead of working in industry. I worked for free for a professor for one month last year. It was a good experience and we made a good impression on each other. I approached her again and asked to come back to her lab; she said that she could consider working together on a project for 1 year, but she has no money. On one hand, I was excited about being back in a lab to do more research, expecting to extend it into a PhD; on the other hand, I couldn't be happy and work well without enough money. I am going to meet this professor soon, and I was wondering:
1. Why did she ask me to work for free?
2. How should I negotiate, or should I just give it up?<issue_comment>username_1: The question you have to answer is:
>
> Can you go for one year without eating and living under a bridge?
>
>
>
As I assume the answer is no, you simply tell her you cannot work without getting paid. She shouldn't be offended by this.
Even if she has no money, she may know other labs that have funding and can recommend you. My master's advisor came one day with a friend of his and said "this is Prof. Smith, and he is looking for a PhD student for a project I think will interest you".
Lastly, depending on the country, working for free may be illegal. In Spain it is common to do PhDs unfunded, in Sweden it is considered slavery.
Upvotes: 3 <issue_comment>username_2: I see two questions here, one is what username_1 clearly mentioned. The other is
>
> If it is a common practice to do so?
>
>
>
To this second question, I will say it depends on the student's aims and advisor's funds.
I know one of my friends doing the same at KTH, Sweden. Another friend of mine did research for 2 months at a UK university after her master's, for free. The value you get is recognition for working in a good lab, and maybe some research papers if you can manage it.
I assume your professor suggested this thinking you are enthusiastic about research and might do it even if she could not provide you with any money. If this is not the case, you can tell her politely; it is not considered offensive at all. *I also guess here that you have not told her about your intention to extend this research into a PhD later*. If you do so, she may tell you later if she gets funds, or may direct you to another lab if you discuss it with her.
Upvotes: 0 <issue_comment>username_3: You need to be clear what you wish to achieve. Working "for free" is never great, but it might be tolerable if it helps you to get somewhere you want to be. Once you've decided what you want to get from this, then discuss this with the professor and ask how she can help you get there. For example, it might be possible to write a grant application to fund you for a PhD, if that is what you want. Be prepared to walk away if the prospects don't seem worth the risk.
You might also ask whether there is scope for earning money from other sources - say, from teaching. Be cautious about vague promises that don't ever materialise into anything concrete.
Above all, decide what this is worth to you, and what your exit strategy is going to be if things don't work out according to plan. Once you've worked for free for several months, it's easy to think "well, just another month... perhaps something will come along". This is unlikely to be a good situation to be in...
Upvotes: 2 <issue_comment>username_4: If you demonstrate value, you should be compensated for the value you add to her project. If she really wanted to pay you, then she'd find a way.
Upvotes: 0 |
2014/08/10 | 775 | 3,279 | <issue_start>username_0: When applying for a tenure-track academic position in the US right after my PhD (in the US too), is it a good idea to ask for a letter of recommendation from a professor in my graduate school with whom I took a class where I got good grades while showing a strong interest (genuine interest), then later on served as a TA for that class?
I haven't done any research project with him though, but we get along well: he is one of the professors I appreciate the most both intellectually and personally, I would have chosen him without any doubt as advisor if I was in his area. He is well-respected in his field (say in the top 20, if you want some ranking). His area of research is a bit different from mine but useful for my research: he works on database management systems while I do machine learning, so there are some very interesting connections such as large-scale machine learning / data analysis (~aka. "big data"), query optimization, etc.<issue_comment>username_1: It depends on the school. At a SLAC (small liberal arts college), the hiring committee members will pay more attention to teaching experience and may be more willing to be impressed by a Big Name®.
At a R1, faculty are not only inoculated against Big Names®, but they have enough experience with individual Big Names® to read between the lines of otherwise blandly positive letters with hermeneutic vigor.
I've been repeatedly surprised by my senior colleagues discussing letters from other Big Names® with "he only wrote two pages of song and praise? He must not have liked that person" or of a fairly damning letter with, "she's a grouch. The fact that she wrote at all means that this person is brilliant." Context is important.
A letter that spoke only of your teaching and not of your research would be seen as faint praise in this milieu.
Note that I'm in the humanistic social sciences at an R1. Your mileage may vary.
TL;DR: save the references from this person for SLACs where the praise for your teaching ability will be seen as a strong positive.
Also note that at larger R1s, we are familiar with the difference in writing styles between American (where everything is excessively effusive) and European/Asian letters (where a strong letter of recommendation reads: "<NAME> was a member of my lab from 2005/10/1 to 2012/5/1. His work was perfectively adequate with no complaints. Sincerely, XX"). Smaller institutions may not have that experience and in those circumstances, you may ask one of your American letter writers to include a short paragraph noting that European counterparts thought highly of you but that might not show up in the letters in the hyperbolic form we are used to in the United States.
Upvotes: 4 <issue_comment>username_2: Having a recommendation letter that addresses your teaching ability and experience is useful if the search committee cares about teaching. Many advertisements specifically ask for at least one recommendation letter that addresses teaching, and I frequently see such letters when I look at applicant's files. Thus I don't think it would be unusual at all for you to have a letter of recommendation from your TA supervisor even though you haven't taken classes or worked on research with that person.
Upvotes: 2 |
2014/08/10 | 710 | 2,941 | <issue_start>username_0: Prior to completing my PhD degree, I used to hear, rather very often, that working in finance is the most non-academic viable career option for newly-minted PhDs in theoretical physics. But, now that I am actively searching for such jobs (any job, to be precise), it appears that employers in finance expect qualifications that practically all newly graduates like me lack. I might be wrong on this, but I don't know of anyone using SAS or R in their PhD research in theoretical physics. So, I was wondering what else is out there available to a newly graduated PhD with mostly a pen-and-paper background in theoretical condensed matter physics.<issue_comment>username_1: Two options I am aware of:
* The electronics industry. Sometimes they hire condensed matter theorists who can help them design better products. I know one theorist at Segate who works on better hard drive designs.
* Finance. Most, but not all, jobs will require programming experience. If you have not gotten any programming experience in a theory PhD I would consider your research strategy highly unusual. Theorists probably lack experience with R and statistics, but if you understand basic computer science concepts it should not be hard to learn these things on your own.
Finally, consider academic/education positions.
Upvotes: 2 <issue_comment>username_2: Regarding Finance, perhaps you should instead look for Quantitative/Mathematical Finance (quant) jobs. I think non-mathematical finance jobs do require different skills (though I don't imagine them being more difficult than QF jobs).
Many physicists become quants. You may want to try looking up <NAME> to see his books if you might be interested in reading them and the Wiki pages on mathematical finance to see what kind of math is involved.
I am a grad student of QF now and we are learning SAS, R, Brownian motion and the Feynman-Kac theorem (at least the latter 2 are basic for you (theoretical) physics people, I presume?).
Edit: Regarding the mathemagician's comment, I've read some job openings for quants. There's no specific mention of having a degree in QF. They are just looking for master's/PhD's in quantitative disciplines. I recall one mentioning master's in Math, Physics or Engineering as a prerequisite.
Finance is relatively easy to learn. You don't need a QF degree to be a quant. Try checking out Hull's Options, Futures and Other Derivatives or Bjork's Arbitrage Theory in Continuous Time, but please be patient with the cute math you may see.
Other sites to check out are the quant stackexchange (save it from beta please) and quantstart.com
Edit 1: Regarding guest's answer, why don't you work at a technology company or something?
Edit 2: What about teaching high school / secondary school?
My knowledge of Physics is only up to Vector Calculus (or Partial Differential Equations if you want) and basic Analytical Physics so that's all I got haha
Upvotes: 3 |
2014/08/11 | 1,215 | 4,894 | <issue_start>username_0: My friend, a doctoral student, is being accused of harassment/stalking by the dean, yet law enforcement has not contacted my friend, and the dean refuses to substantiate his accusation, for fear of retaliation. My friend has not been given a trial, yet the dean is preventing him from completing his PhD, suspending him from the university. The dean kept my friend's adviser completely in the dark regarding the accusations. It seems the dean is harassing him, pure and simple.
What should he do?
thanks<issue_comment>username_1: Universities do not in my experience hold "trials" in order to reach their decisions, however weighty. So the answer to the literal question asked is probably "yes".
I guess what you mean to ask is whether the dean has the unilateral power to do this. I'm not entirely clear on what "this" is: what does "preventing him from completing his PhD" or "*effectively* expelling" mean, precisely? But even if I did, I would have to know the rules of your friend's university rather intimately in order to answer. (*Someone* in your university has the power to do this. As @Paul comments, probably more than one person was involved in the decision. Just because the action looks single-handed to your friend does not mean that other university officials were not involved.)
One tip: if your friend's adviser doesn't know, get your friend to tell her!! (i) Could it make things any worse? (ii) Won't she find out sooner or later? Sooner may be soon enough to at least try to do something about it; later, maybe not.
**Added** "It seems the dean is harassing him, pure and simple." Well then he should report it to....oh. Seriously, if by this you mean that you think the dean has some kind of vendetta against your friend which caused him to simply fabricate these charges: though obviously I don't and can't know the situation, I find that very unlikely. Though there may be no "trial" system in the university, there will be some kind of clear guidelines and procedures for expelling students. If the harassment is simply made up then the expelling couldn't possibly have followed these procedures, which would open the university up to a trial, possibly an embarrassing and costly one. I think I understand this clearly, but a dean understands it like I can't even imagine.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Is there an ombudsman at the university? That would be the obvious person to go to after trying your advisor, director of graduate studies, and department chair (in that order).
Also the proliferation of the administrative ranks at universities often means that there are usually multiple Deans and associate provosts that you can talk to.
As with <NAME>, I highly doubt that a Dean would try to expel someone with no cause. While the Peter° Principle operates at the administrative ranks, Deans and Provosts have no job security (they only have tenure if they are also faculty, which many are not) and are thus unlikely to do deliberate grievous harm. [They are more than capable of grievous harm through incompetence, indecision, or an adherence to rigid bureaucracy, but that doesn't appear to be the case here.]
° n.b.: <NAME> != Peter of the [Peter Principle](https://en.wikipedia.org/wiki/Peter_Principle) as far as I can ascertain.
Try to inquire with faculty to ascertain if there is more to the story (if it's your business, which it may or may not be; there are many things which regardless of [FERPA](https://en.wikipedia.org/wiki/FERPA) or [HIPAA](https://en.wikipedia.org/wiki/HIPAA) should not be discussed about fellow students).
Upvotes: 4 <issue_comment>username_3: Deans are human: there are good ones and bad ones. The bad ones are capable of this sort of behavior, although in my experience Deans of this ilk tend to focus more on faculty than on graduate students.
In *any* university, the keys to these things lie in the University's policies. *In the United States of America* Deans are typically granted a fair amount of latitude, but even so they must stay within policy guidelines. Again, at American universities there is usually some sort of appeal mechanism for this sort of suspension. That is the place for your friend to start: what internal University mechanisms exist on his campus? If there are none, there is informal appeal through the campus's Chief Academic Officer (usually called the Provost). He also needs to confer with his dissertation advisor about the next steps.
As has been noted, you have only one side of the story and that from an interested party. It's said that God helps those who help themselves. It was also said that we get by with a little help from our friends. Your friend needs to take some actions to help himself. If the facts of the case really are on his side, he will likely get some help from his friends along the way.
Upvotes: 2 |
2014/08/11 | 1,441 | 5,937 | <issue_start>username_0: I'm doing an (external) bachelor's degree in the field of computing and am in the second of my three years. The final requirement of this degree is a project, complete with development and a dissertation that has to be submitted to the institute and defended at a Viva. I've taken the liberty of researching well into a lot of parts of my potential project, and have given a lot of thought into it.
My long research has prompted my few family/friends in the industry to ridicule me (which I don't mind), and have warned me that creating a "good" project would run the risk of "questionable" practices enacted.
In short, I've been told that the panel might fail my project and transfer all its content to a favoured student of their own if they find mine interesting enough. I do not know if this is a fact or just a rumor, but I don't want this to happen (to me or anyone else).
What measures can I take to make sure that they can't do things like this to both the dissertation I submit and the code I develop? I've already thought of private repositories on online version control systems to keep the code, but what about the dissertation?
Note:
* we are required to include a declaration signed by my advisor and myself that allows the dissertation to be used by the institute for loans and publishing, as well as to outside organizations.
* I'm intending to release the software as open-source after I graduate, so this may be a problem.
* If it is stolen, I doubt that complaining to the institute will help, and it may result in them ganging up on me.
P.S.: I hope I don't sound like a whiner or moron.<issue_comment>username_1: >
> My long research has prompted my few family/friends in the industry to ridicule me (which I don't mind), and have warned me that creating a "good" project would run the risk of "questionable" practices enacted. In short, I've been told that the panel might fail my project and transfer all it's content to a favoured student of their own if they find mine interesting enough. I do not know if this is a fact or just a rumor. But, I don't want this to happen (with me or anyone else).
>
>
>
I **hope** that this is just a paranoid rumor. It would be unfounded at any university I am aware of, but of course, not knowing your institution, one can't say anything for sure.
However, the question is what you can do to minimize the risk, if you feel this is an actual possibility that you need to insure yourself against. Usually, the best bet against having your work stolen is to make sure that as many people as possible know of this work as yours. For instance, you can show it to other faculty members that you trust (if you need an excuse, you can always ask them for feedback), or upload it to a (timestamped) preprint service. For code, the best is probably to just upload everything as open source to GitHub or a similar service.
Upvotes: 3 <issue_comment>username_2: >
> we are required to include a declaration signed by my advisor and myself that allows the dissertation to be used by the institute for loans and publishing, as well as to outside organizations.
>
>
>
This can be either a transfer of copyright (your work belongs to them now), or a broad authorisation to publish it however they want. These are two different situations, and you would have to read it carefully, but it will most probably only cover your report, not your code. So you can just upload your code to a public repository and link the implementation from your report. Given your concerns, a viral licence like the GPL sounds appropriate for you.
In any case, authorship is, under some jurisdictions, a right that cannot be renounced. No one can pay you, convince you, or otherwise force you to give up authorship of your work. They can buy it, but they cannot change who did it. What they can do (and many universities do) is own the outcome of your research, like patents. The rationale is that they have been providing you with resources and advice. Check the legal conditions of your degree. This may include final undergraduate projects, where the student is also paying for the education and not receiving any money from the university.
Anyway, I don't think a reputable institution will lightly steal from its students. Authorship can be easily proven in some situations, taken to court, and the damage to their reputation can be enormous.
Upvotes: 3 [selected_answer]<issue_comment>username_3: Send your full project documentation as a **registered** letter to your own address and, if you know one, to a friendly attorney-at-law. Such a letter **must** stay unopened, neither by you nor by the attorney. The idea behind it is that the whole project documentation is registered as yours at a defined date.
I've done this more than once to protect my start-up ideas from plagiarism when I was going to discuss them with venture capitalists. Under German law at least, such protection fully works if it is discussed in court.
**Edit (13.08)**
I have even found an online notary, notatus.de (sadly only in German), which offers legally certified document deposition / escrow, specifically for the purpose of protecting authorship. They even offer deposition and letters of deposit for computer files, so one isn't forced to send paper documents. By the way, this service is free!
Upvotes: -1 <issue_comment>username_4: Get a free certificate (e.g., from [StartSSL](http://startssl.org/)) and use it to digitally sign your document.
I know PDFs and many other file formats support digital signatures. See [this about signing a PDF with Adobe Reader](https://helpx.adobe.com/acrobat/kb/certificate-signatures.html); it appears Adobe even offers a free, easy-to-use service so you don't have to get a certificate from a third-party.
Digitally signing something proves that it is yours because no one could claim your work is theirs without knowing your private key.
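For illustration, here is a minimal sketch of creating a detached signature over a file, assuming you have an RSA private key exported from such a certificate and the Python `cryptography` package installed (the file names are hypothetical):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the private key belonging to your certificate (hypothetical file name).
with open("my_private_key.pem", "rb") as f:
    private_key = serialization.load_pem_private_key(f.read(), password=None)

# Read the dissertation exactly as submitted and sign its bytes.
with open("dissertation.pdf", "rb") as f:
    document = f.read()

signature = private_key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Store the detached signature next to the document; anyone holding your
# certificate's public key can later verify that you signed this exact file.
with open("dissertation.pdf.sig", "wb") as f:
    f.write(signature)
```

Verification is the mirror image: load the public key from the certificate and call `public_key.verify(signature, document, ...)`, which raises an exception if either the file or the signature has been altered.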
Upvotes: 2 |
2014/08/11 | 763 | 3,292 | <issue_start>username_0: I will be spending 2 weeks in China later this year, to visit a university and talk to the various research groups. The host university will be paying all my expenses.
My contact person at the host university has asked me if I would like for him to make hotel reservations for me or if I would like to do it myself.
Ideally I would like for him to make reservations, as he knows his own city. But am I "allowed" to ask to see the hotel first, before he makes reservations for me? Or is that considered impolite and rude as they are paying for me?<issue_comment>username_1: If you have specific requirements regarding the hotel (accessibility, star level, kitchen availability, etc) you may want to send them to your host. The chain of command in academia is rather long. I would suggest that in your case it is something like
you -> your host professor -> their secretary -> their travel agent -> hotel
Each time you want to change your request, it has to pass through the whole chain, which makes the process particularly time-consuming and reduces efficiency. It is a good idea to keep the number of such iterations as small as possible.
Upvotes: 3 <issue_comment>username_2: If all you want to do is get information about what hotel is being booked, you can always ask them to provide it so that you can share the information with your family and friends as well as the people in your office so that they know how to get in touch with you in case of an emergency.
Upvotes: 3 <issue_comment>username_3: If you want to have the final decision on which hotel to book, then ask your host for a recommendation of local hotels and how much he's prepared to support your housing expenses. With this information you can book the hotel yourself, and if it exceeds your host's housing support you can pay the difference. Remember that your host is not your travel agent.
However, I think you have a much better chance of getting a superior hotel for the same money if your host makes the housing arrangements. As a local, he likely gets better rates than you would as a foreigner. Also, the hotel proprietor will be eager to get the *next* guest your host invites.
Upvotes: 2 <issue_comment>username_4: Given that it's in China (and I'm assuming you're not Chinese), I would let the host do it for three reasons:
1. It's important to be gracious to the host. This is a general rule in most cultures, but particularly in East Asian ones. Letting your host be a good host is part of this. Trust your host's judgement here.
2. My experience with Chinese hotels from Shanghai to Xinjiang is that there is minimal to negative correlation between the quality of the hotel and its website or official star rating. It is highly doubtful that you could ascertain anything from afar better than what a local person would know.
3. Logistics. The host may want to put all of the people in a particular hotel (or spread them between a particular few) because of logistical reasons: geography and they have only one van to pick everyone up, etc. etc.
I would let your Chinese hosts handle everything. If you have particular needs (room must face towards the south; hotel restaurant must have halal food; etc.) then let them know. Otherwise, let your host be your host.
Upvotes: 4 |
2014/08/11 | 2,965 | 12,274 | <issue_start>username_0: Often the publisher requests to get the proof within 24 hours when it's ready. What are the reasons for making this so short? Do they want the authors to not make too many changes?
EDIT:
The email I received said:
>
> Please ensure you check the entire article carefully, and answer all
> queries. Return corrected proofs and any related material by uploading
> to the site within 24 hours.
>
>
>
EDIT: @StrongBad pointed out [a related question](https://academia.stackexchange.com/questions/23387/how-much-time-is-usually-left-for-authors-to-return-page-proofs-what-happens-if).<issue_comment>username_1: The reason is simply that nobody is expected to make any changes to the galley proofs. The content and the basic wording of the paper are fixed after acceptance. No rewriting or reformulating is allowed at this stage. The authors should *only* check that the typesetting and copy editing did not introduce any errors. Often you are also given a list of changes that the copy editor made, and you can work through this list as well. In other words, the author is only expected to read the galley proofs once, and only with the "correctness lens". This can be done in less than 24 hours in almost all circumstances. In exceptional cases you may well ask for a deadline extension.
Upvotes: 0 <issue_comment>username_2: Your paper ought to be in pretty good shape after you get to the point of galley proofs. At that point, you are really just checking to be sure that their typesetters didn't *introduce* errors. All of your own typos and requests from reviewers should have been fixed by the time you get there.
Upvotes: 2 <issue_comment>username_3: In addition to the fact that in most cases you can easily check the proofs within a day, I would assume that it's also more efficient for the typesetters, and in particular the copy editors, if you actually want to correct something, as they are still familiar with your paper and are thus faster at applying your corrections. For example, after one day a copy editor usually remembers the reason and context of a particular change and can thus apply your corrections faster.
Upvotes: 1 <issue_comment>username_4: As I said in my answer to [How much time is usually left for authors to return page proofs? What happens if I am late?](https://academia.stackexchange.com/questions/23387/how-much-time-is-usually-left-for-authors-to-return-page-proofs-what-happens-if), I have never seen a 24 hour turn around time requirement, but 48-72 hours seems quite common. I think there are two reasons for the turn around time to be on the order of days. From my experience, publishers are working on a tight schedule; there might only be a month or two between when the proofs are finished and the issue is delivered to subscribers. If an article needs to be re-typeset or delayed to a later issue, the publisher will need to rework the the entire issue which is going to take some time. It seems that with their time scale the longest they could wait for proofs would be two weeks. This leads to the second issue. Academics do not handle deadlines well and publishers need to handle the articles from the worst procrastinators amongst us. If you give a bunch of academics a deadline in 2 weeks a non-insignificant portion will take over a month. Quick, cheap, paper based publications with flexible deadlines for authors and reviewers just isn't practical.
Upvotes: 3 <issue_comment>username_5: [**Galley proofs**](https://en.wikipedia.org/wiki/Galley_proof) are part of the production process where a book or journal issue is actually printed, as opposed to the 'softer' process of deciding what pieces will go into it and in what form. As such, the deadlines for their revision are associated with the physical production process rather than the editorial process for the piece and can be quite different from deadlines for e.g. minor revisions or revise-and-resubmit requests.
These proofs are only meant to be used to check that the typesetting correctly represents the author's intent, and not that the content is scientifically correct (which should have been done at an earlier stage). Occasionally a one- or two-sentence 'note added in proof' may be appended to a paper but that's about it; for an example see the [AIP style guide, p.11](http://www.aip.org/pubservs/style/4thed/AIP_Style_4thed.pdf#page=14). Checking the typesetting is assumed to be a straightforward matter that does not require more than one day (though assuming that an academic can spare the time at the publisher's decision with no prior notice is another matter), so such deadlines are usually OK.
Note also that such deadlines can be negotiable if properly handled. If such a request lands on you and you will not be able to complete it in time, it is usually acceptable to notify the editor, as soon as possible, that this is the case. A polite note along the lines of
>
> Dear Editor,
>
>
> We have successfully received the proofs of our article. Unfortunately, today is my thesis defence, my coauthor is getting married and my advisor is away due to travel, so we will be unable to complete your request to review the proofs within 24h. We will get them to you as soon as possible, which will likely be the day after tomorrow. Is this acceptable, or will it lead to a delay in publication?
>
>
>
can work wonders in stretching such a deadline. From personal experience, I have seen a 24-hour request be stretched to a full week without a publication delay.
Upvotes: 5 <issue_comment>username_6: The proof is the actual typeset version of your paper, ready for production. In other words, everything is ready: someone pushes a button and the press can print the issue. This is the very last "let's check it one more time" step.
For this reason:
* If you have a correction, it actually costs them money.
* If there are 50 papers in the journal, and 10 out of the 50 start rewriting their paper at the last minute, then the production line waits until everything is fixed and reformatted, which again costs money. A lot.
Before anyone starts to complain about the 24-hour deadline (which is common in my field, too), let us be a little professional.
Your paper should in any case be free of errors and well written at the point of submission. Then several referees check it back and forth, and you are free to check your manuscript again if you are not sure. When you are at the proof stage, your paper has already been read and checked by several people, several times, over months. You don't have a good reason to rewrite anything, except if there is an error due to typesetting. In other words, if you have done your job decently, you have no more than 5 minutes of work with that proof 99% of the time.
Upvotes: -1 <issue_comment>username_7: I'll try to explain the problem from both perspectives: author and a journal typesetter.
The typesetting process goes as follows:
1. We pre-plan the issue contents 2 months in advance, in order to balance the issues in size. This is necessary for small journals with 4 or 6 issues per year, not so much for large journals with a long publishing queue. At this point, we take articles that are accepted. If there are not enough of them, we go through the queue and try to find articles that can be accepted quickly.
2. Now the authors provide the final version. This takes some time, so I receive the articles usually 4-6 weeks before the issue date. That's not a lot of time.
3. Most articles are typeset within 1-2 weeks after I receive them. With these, there's no problem at all. However, then you have articles that take more time, since the quality of the figures is being discussed, as well as semantics (when the formatting from the authors is poor and the semantics are not clear) etc. This takes some time. So it can happen that the article is typeset like 2 weeks before the issue date, or even less.
4. So now the article is typeset and is with the authors for proofs. Any correction they make has to be incorporated. Sometimes it's not easy (requests for replacing a figure with a better one, for moving figures to other pages etc. are not uncommon). Sometimes I strongly disagree with the authors on these. In such cases, we need yet another couple of emails exchanged, or the chief editor involved, and that takes time again. At this point you see that 24 or 48 hours can be the maximum we can give.
5. Once all articles get back, the issue has to be made ready, articles published online, CrossRef+Scopus metadata prepared, DOI registered etc.
That's the perspective of the journal I typeset. I hope that it is clear that the publication comprises a lot of steps. When the authors are cooperative and reasonable, everything goes smoothly and the final version is ready 4 weeks before the deadline. And then you have cases when things don't go quite so well, and you get very close to the deadlines.
Moreover, to make things easier (and reduce the amount of work just before the issue date), you leave authors quite a short time for response. In most cases, there is plenty of time left, but if 80% of people misuse this time, we work 16 hours a day the last 3 days before the issue date to sort everything out, and we simply want to avoid this.
From the perspective of the author, 48 hours is not much for proofreading an article, especially since this has to be done very carefully. However, in most cases, if you ask for extension (a 5-line mail with a very short request is enough), it will be granted without any problem. Just please don't misuse the possibility.
Upvotes: 6 [selected_answer]<issue_comment>username_8: Speaking as a former Design Director, Typographer, and Production Manager of many publications and also of national-market print advertising work:
The reason that there's a tight deadline for authors' galley proofs is because of what galley proofs are for: evaluating whether the formatting has introduced any issues with readability or meaning; whether there's any typos or format errors; whether there's any omissions or duplications.
The turnaround is tight because it's part of the **production** phase, not part of the **editorial** phase. The time to edit and re-write and fuss over the article is done and gone. Galley proofs are a final reality check, not a chance to revisit that awkward sentence in the 4th 'graph.
Traditionally in print, editorial and not production is given the luxury of extra time. Usually there is no luxury of time, in spite of how it may appear to the author. Most journals have a lot more production steps to go through and are very close to press time when the authors' proofs go out. It may seem like "not a big deal," but a printing operation has scheduled its press time very closely, and if your book is late, it gets bumped from the schedule in favor of something that is actually ready for press. If your book is bumped from the press schedule, it might be days or weeks before it can slot back in. The cost to "hold" the press is spectacularly prohibitive.
Production and pre-press times are shrinking these days; it's easier and faster to get a book to press today than it was in, say, 1985. In many ways that exacerbates the problem with proofs turnaround...there's just no "fiddle" time anymore.
Upvotes: 2 <issue_comment>username_9: Short galley-proof deadlines are not universal, in particular for journals having no print edition. I am involved in the copyediting phase of the [LMCS](https://lmcs.episciences.org/) journal (called "layout editing"), which is an arXiv overlay journal. When we modify articles for style or typographical reasons, we give authors 2 weeks to verify our work (sending them the new PDF plus a LaTeX diff). The 2-week deadline is there mostly to have a deadline at all; there is no real urgency. When the authors approve, the paper gets assigned an issue and the final version is published in that issue.
My understanding is that the galley-proof step comes with a short deadline because, with print journals, some editing must occur as part of the preparation of a specific print issue and must fit into the production timeline for that issue. But once you get rid of the print version (and who reads academic journals on paper nowadays?), the problem simply goes away.
Upvotes: 2 |
2014/08/11 | 804 | 3,036 | <issue_start>username_0: I'm writing an application email to a professor for a PhD program. One dilemma that I have is whether I should add hyperlinks and if there are any problems in doing that. Consider the imaginary paragraph below:
>
> ... *and I currently work under supervision of professor
> [Foobar](https://academia.stackexchange.com/questions/27078/are-there-any-drawbacks-in-adding-hyperlinks-to-phd-application-email) in laboratory of [blabla](https://academia.stackexchange.com/questions/27078/are-there-any-drawbacks-in-adding-hyperlinks-to-phd-application-email) at
> University of* ...
>
>
>
Is this a bad practice? Is there any chance that they neglect the convenience of a single click because of the visual incoherence of the text?
P.S. Are there still any email clients that show the actual URL instead of the hyperlink text?<issue_comment>username_1: There are a few holdouts who only read emails once they have been printed. With the advent of smart phones and the decline of lab or research area secretaries this number has gone down, but there are still a few. I know of at least 2 in my area of research at my alma mater who still worked with only printouts, and both were, if a bit stodgy when it came to fancy technologies like email, brilliant researchers.
For that somewhat silly reason I would definitely find a way to put URLs inline in any emails to professors you aren't familiar with. Something as simple as:
```
... and I currently work under supervision of professor
Foobar (www.school.edu/~foobar) in laboratory of
blabla (www.school.edu/~blabla) at University of ...
```
Upvotes: 2 <issue_comment>username_2: I think I'd write it "straight," and give the links in a closing paragraph at the end. That way, links don't distract from the point you're trying to make`*`, but you've still provided the information that may be wanted. I'd surely put the full URL, instead of a blind link. So:
```
You can find <NAME>'s page at www.school.edu/~foobar The Blabla
laboratory maintains a page here: blabla.school.edu
```
`*` For an example of distracting links, see any Wikipedia page. (It *is* getting better, though.)
Upvotes: 2 <issue_comment>username_3: My personal remark:
I am not particularly a fan of clicking all around links in an email I just got from a stranger, however natural and official-looking it is.
From a reader-engagement point of view:
Links directly included in the text actually invite the reader to interrupt their reading, click, and go to the website of another university, reading this and that there, **instead of reading your mail**! That is exactly what you don't want, because you want her/him to stay and read your mail from start to end. If you really want to link, I would make a separate short paragraph at the end or in a P.S., something like "for your convenience, here are links to blablabla". That way the reader can easily reach those websites, IF she/he chooses and AFTER she/he has read your message.
Upvotes: 3 [selected_answer] |
2014/08/11 | 716 | 2,803 | <issue_start>username_0: I was offered an opportunity to prepare a 2-page research proposal for a postgraduate research program in computer science.
I am searching for a systematic and step by step technique to select/find a workable research topic.
Is there any systematic procedure/strategy/approach/method that researchers generally use to select/find and narrow down a research topic from the ocean of topics that pop into one's mind?
**[Is this technique a standard in academia?](https://academia.stackexchange.com/questions/1646/how-to-select-a-masters-thesis-topic-if-your-advisor-wont-suggest-one)**
2014/08/11 | 875 | 3,635 | <issue_start>username_0: I've seen many, many examples of scientific presentations from the National Laboratories in which virtually every slide is oversaturated with information. If this were an isolated event, I wouldn't have given it any thought. But I've noticed this pattern in presentations over many years from among presenters hailing from US national laboratories. Though my field is computational science, I've seen talks from national laboratory scientists in other fields and their presentations also have this same characteristic.
In academia, I've been taught to keep slides as simple as possible, with as little info per slide as necessary. My understanding is that a presentation should be thought of as an "advertisement" for the paper to be published. Thus, presentation slides should be designed to preserve the audience's interest. One method of keeping the audience's interest is to not overwhelm them with too much information all at once (e.g., not too much text and not too many pictures in a single slide). I presume this is universally true, regardless of discipline.
However, the overwhelming majority of national laboratory presentations that I've seen seem to fill up virtually every available space with as much information as possible. Why do presentations from national laboratories tend to contain so much information per slide? How does this meet the needs of their target audience?<issue_comment>username_1: My guess is that it's a cousin of the same problem [in the armed forces](http://www.zdnet.com/news/pentagon-cracks-down-on-powerpoint/96099), which has been a problem [for two decades](http://www.zdnet.com/news/pentagon-cracks-down-on-powerpoint/96099). (Note one link is from 2000, the other from 2010.)
On another level, the culture of the national laboratories has been trending in a more corporate direction, and many of the presentations that they need to give have limits on the number of slides to be presented. Managers want the whole "story" told in a handful of slides, which leads to over-compression of information.
Upvotes: 3 <issue_comment>username_2: Let me try to answer for the tendency in my own discipline (particle physics) where we have this problem across the board (i.e. universities too).
Much like questions and answers on a Stack Exchange site, those slides are expected to form a resource for future investigators. We *know* there is too much there for anyone to absorb in the meeting, but we also know that more people will dig these slides out of the archive over the next year and *study* them than actually attended the meeting in the first place.
Yes, in an ideal world there would be a technical report *and* a deck of slides, but in fact there are only the slides.
---
Personally I try pretty hard not to do this, and the result is a lot of backup slides and a lot of little URLs hanging around the bottom of the slides.
---
As an aside, I think that PowerPoint and similar polished slideware makes stuffing them (over-)full way too easy. I use a LaTeX base for mine (just the old slides class with my own library of macros for a long time, but I've started using Beamer) and these tools encourage a better style.
Alas, I know all the tricks to squeeze on just one more thing.
Upvotes: 4 <issue_comment>username_3: I think it is just presentation style. We have the same in our universities: hey, let's put 3 topics and 5 figures on this slide, so everyone will be impressed! I don't say that everyone should talk like <NAME> or the TED presenters, but I believe that presentations should comply some basic rhetoric and design principles.
Upvotes: 0 |
2014/08/12 | 428 | 1,815 | <issue_start>username_0: I'm thinking about studying in Germany next year at a university. I am a US citizen. I have completed a 2 year Associate's degree program at a trade school and also finished my general education requirements, at about 45 units as a combination of AP tests and classes at a community college.
When applying, the schools ask for an official copy of a "school-leaving certificate" - I'm guessing this means diploma. Do I need an official copy of my high school transcript and/or diploma? Does my Associate degree and other college credit imply that I have already successfully completed high school? As I haven't mentioned any specific schools, I think the answer is probably the same regardless of the country of the institution.<issue_comment>username_1: Yes, German universities often require an official copy of your high school certification, even if you already have a university diploma which implies the completion of high school. The same happened to me when I was starting my PhD in Germany - I was also required to present my high school certificate, although I already had two university diplomas (bachelor and master). In a similar manner, both of my university diplomas were required.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You not only need to show that you completed high school, you also need to provide something that includes your grades\*.
In most (all?) study programs in Germany, applying students are ranked by high school ("Abitur") grades, and a cut-off is used to decide who gets in (*numerus clausus*). The cut-off differs per subject, and may differ by university.
\*include a key for the grades, as they differ in every country: e.g. highest, lowest, pass limit; or even better an official translation of US-grades to German grades.
Upvotes: 0 |
2014/08/12 | 2,223 | 9,606 | <issue_start>username_0: I am pursuing a specific research question. I have thoroughly surveyed existing research on the topic, and found dozens of researchers working on the problem, but was disappointed by their work. Their research:
* Does not consider the full magnitude of the problem.
* Constantly tests old, insufficient methods.
* Overlooks significant details, so the test results are meaningless.
* Lacks innovation.
I found lots of interesting ideas posted around the Internet. In blogs, forums, and USENET, I found people with some clever new ideas to approach the problem. These people had a genuine stake in the problem, so I found their ideas actually brought the problem somewhere meaningful. These informally-posted ideas need testing and considerable refinement. They are far from perfect, but many times better than what the academics are dealing with.
I would like to prepare some trials and publish some papers, centered around a number of these ideas. It is only fair that I give credit to the authors of those ideas. Essentially, I need to give credit to lots of anonymous people who posted their ideas informally. I have never read an academic paper containing highly informal references. Can I include references like this in my paper?
```
MutantTurtle17. “My Amazing DIY Tin-can Refugee Shelter.” MyBlog. 2014.
Retrieved from http://...
SimCityFan2012. “RE: RE: Look at this!” Shelter Designs Forum. 2013.
Retrieved from http://...
```<issue_comment>username_1: This is an actual problem that I also struggle with in some aspects of my research. There are problems in which the blogging and industrial world is, sadly, miles ahead of the scientific state of the art. **However, citing an abundance of blogs and other non-reviewed resources is rarely a good idea.** A few citations of web resources are usually ok, though. Hence, my (imperfect) solution to the problem currently is to cite the 2-3 web resources that are best suited for my paper, and try to find academic resources that cover the rest of the ground as well as possible.
That being said, this situation is certainly a possibility for you. If you can take the ideas from these forums and blogs, put them on a sound scientific basis (e.g., through user studies or formal analysis, whatever is appropriate for your research), and publish it *both scientifically and informally* (e.g., in your own blog), there is a good chance that you make a strong impact on both the scientific side and the blogging community. At the end of the day, people tend to remember not only who originally threw a revolutionary idea or concept out there, but also (sometimes even more so) the person who made the revolutionary idea *work* (or, at least, clearly showed that it works).
Upvotes: 3 <issue_comment>username_2: Does your paper really test & verify *lots* of ideas?
-----------------------------------------------------
You state that "I would like to prepare some trials and publish some papers, centered around a number of these ideas." How many of those ideas do you expect to actually implement per a single paper?
If you implement, evaluate and contrast three novel ideas from blogs and forums, preferably including a solid comparison against a baseline published method, then that's just three informal items that you need to cite, in addition to the current academic publications.
There are references and references
-----------------------------------
If your paper assumes something, or claims non-obvious things, then you need 'proof' of it outside of your paper that should come from references. Those references need to be trustworthy - preferably respectable peer reviewed publications.
However, if your paper uses references for giving credit to ideas or pointing to original sources, then that's an entirely different class of reference, where blogs and forums are just as acceptable as, say, referencing archives of private informal letters that are used in studies of literature or history.
If you have never read an academic paper with a lot of informal references, then it is because it is very dependent on the field you're studying - for example, a thesis about racial stereotypes in online media would reference many informal sources as examples; while a thesis about particle physics wouldn't have any.
Upvotes: 4 <issue_comment>username_3: Maybe one way of dealing with having lots of informal references would be to divide the bibliography into sections, so that the reader can easily see the different types of reference (acknowledgement versus justification).
Or alternatively maybe put them all in an extended acknowledgements section (since databases won't be able to do much with blog citations anyway, perhaps it wouldn't matter so much if they don't appear in the official bibliography, provided the reader is sufficiently informed of who came up with what).
Upvotes: 2 <issue_comment>username_4: If the post is known to belong to some well-known researcher or an otherwise known, notable person, such a post can be cited, because even "personal communication" can in the end be a reference. However, it is not good as proof that something questionable is true, as it is not a peer-reviewed article.
If the author of the post is anonymous or not a scientist, such a source is not trustworthy and is only suitable as raw input data for analysis in social research.
Upvotes: 1 <issue_comment>username_5: This is just a personal opinion, but let's run it up the flagpole and see who salutes.
When deciding whether to use (and therefore cite) information from a particular source, ask yourself: *who has checked that the evidence and arguments presented in this source really support the conclusions this source reaches?*
For a peer-reviewed paper in a journal or conference proceedings, the answer is "a couple of independent experts in the relevant field, in a formal procedure in which they're fully focused on the task". That's great, you can go ahead and use (and therefore cite) the information with only a pretty cursory check of plausibility on your own part.
For a Wikipedia article with many independent authors and a lively talk page, the answer is "a large community of Wikipedia users, some of whom are experts in the relevant field and some are not, who may not be fully focused on the task in a formal process, but who at least have a clear pathway to correcting any errors they discover". Again, this is pretty good - maybe you need to be slightly more careful in your plausibility check than you would with a peer-reviewed paper, but you're still good to use and cite.
For a book that's been through many editions, the answer is "a large community of readers, some of whom are experts in the relevant field and some are not, but who don't have any particularly clear or reliable pathway to correcting any errors they discover". Before you use (and therefore cite) this, you're going to have to do a bit of work checking the evidence and arguments really support the conclusions yourself.
For a book in its first edition, or for a newspaper or magazine article, the answer may be "a single editor who was more concerned with style and grammar and spelling than with the substantive validity of the arguments". Before you use (and therefore cite) this, you're going to have to do quite a lot of the work of checking that the evidence and arguments really support the conclusions yourself.
For a Wikipedia article with a single author and a moribund talk page, or for a blog or forum post, or for an ordinary web-page, the answer may be "no-one". You can still use (and therefore cite) it, but you're going to have to do all the work of checking the evidence and arguments really support the conclusions yourself, and put enough details in your manuscript/assignment to convince the referees/examiners who are evaluating your manuscript/assignment that you've done that work.
(In all cases, part of that work of checking might be done by investigating whether multiple independent sources reach the same conclusion.)
Upvotes: 0 <issue_comment>username_6: I am a mathematician, former editor of several journals (currently, just one). Suppose I were to receive a paper submission with an accompanying email saying something along the lines of:
>
> I have thoroughly surveyed existing research on the topic, and found dozens of researchers working on the problem, but was disappointed by their work. Their research:
>
>
>
>
> Does not consider the full magnitude of the problem.
> Constantly tests old, insufficient methods.
> Overlooks significant details, so the test results are meaningless.
> Lacks innovation.
>
>
>
>
> I found lots of interesting ideas posted around the Internet. In blogs, forums, and USENET, I found people with some clever new ideas to approach the problem. These people had a genuine stake in the problem, so I found their ideas actually brought the problem somewhere meaningful. These informally-posted ideas need testing and considerable refinement. They are far from perfect, but many times better than what the academics are dealing with.
>
>
>
Or/and, checking the bibliography list, I see many references to Reddit, Wikipedia, Quora, blogs by people I never heard of...
My crank-meter would go to something like 99% and my first reaction would be:
Should I even bother soliciting a quick opinion (let alone a referee report) on this paper? Or should I check if the supposed solution of the 4-dimensional smooth Poincare Conjecture uses "simply-connected" instead of "homotopy-equivalent to the 4-sphere?"
Upvotes: 1 |
2014/08/12 | 800 | 3,413 | <issue_start>username_0: The number of citations seems to be a good unit of measurement for someone's success in a specific field, however, shouldn't the h-index also include the popularity of a given field? For example, I've seen papers in computer science being cited thousand of times while other papers relating astronomy only a couple-hundred times.
When taking into account the actual quality of the paper and the amount of work that was put into producing the evidence, the astronomy paper would probably be rated higher (for example). However, **publications within less popular fields are cited less simply because their fields are not as *popular* as others**.
If I choose a field that is not particularly popular, I risk not achieving the same amount of success that I would have if I had chosen a more popular field.
Are standard measures of academic output skewed by the relative popularity of the field?<issue_comment>username_1: The comments are already spot on, but let me elaborate a bit.
**Comparing h-indices (or any other "hard" metric) is already dangerous in narrow fields and downright foolish if used for comparisons among different fields.** This is not only because some fields are larger than others, but also because:
* Differences in publication standards. In applied CS we write *lots* of papers, in many natural sciences, much fewer papers get written per researcher and time period. Arguably, this is because many empirical fields require the setup and analysis of lengthy experiments, something that is not typically (but not never) done in CS.
* Differences in co-author ethics. Just check around here on this stack exchange, and you will see that standards for co-authorship are *not at all* uniform across fields. Clearly, fields with looser co-authorship standards also expect researchers to be part of more paper projects, hence leading to higher h-indices on average.
* Differences in citation standards. In some fields, papers traditionally have 10 or fewer references. In others, multiple dozen references are considered an informal minimum, again leading to higher average h-indices.
(and this is even without going into how easy h-indices are to manipulate if you are willing to - keyword "citation rings")
Upvotes: 5 [selected_answer]<issue_comment>username_2: Setting aside fields that are very small, with almost no one working in them, the size of the field matters quite a bit less than people tend to think. The reason is straightforward: larger fields have more citation donors, but also more citation targets. Suppose each paper cites 30 other references. In a field with 1,000 papers, you would have 30,000 citations shared among 1,000 targets for an average of 30 citations received by each; in a field with 1,000,000 papers you would have 30,000,000 citations shared among 1,000,000 targets, again for an average of 30 citations received by each.
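In equation form (a hedged aside; the symbols R and N are mine, not part of the original argument), the same back-of-the-envelope reasoning reads:

```latex
% Each of the N_citing papers donates R citations, shared over N_cited targets
% (assumes a closed field where the citing and cited papers are the same set).
\[
  \bar{c} \;=\; \frac{R \cdot N_{\mathrm{citing}}}{N_{\mathrm{cited}}}
  \;=\; R \qquad \mbox{when } N_{\mathrm{citing}} = N_{\mathrm{cited}},
\]
```

so the average number of citations received per paper is independent of the absolute size of the field.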
In principle, the *growth rate* of the field does matter. If a field is growing rapidly, you have a large number of citation donors referencing a small number of target papers, leading to higher citation rates among these early entrants.
In practice, as mentioned in the other answers and comments, the most important factors are probably citation and authorship practices in the field, and the degree to which the field is adequately covered in the citation database you are using.
Upvotes: 2 |
2014/08/13 | 923 | 3,267 | <issue_start>username_0: I'm currently a college senior and I'm working on updating my resume. My GPA is a 3.46. Is it acceptable/ethical to put on my resume that my GPA is a 3.5?<issue_comment>username_1: **No.**
Just report the GPA as it is listed on your report / certificate.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Ethically, it is always best to round down slightly. If your GPA is 3.46251, I would specify it as 3.46. There isn't a big difference between 3.46 and 3.47, but 3.47 would be embellishing slightly, and that is not honorable.
It would be honorable to round it down to 3.4, but it has the air of fact falsification to round it up to 3.5.
The only case where rounding up to 3.5 *might* be acceptable is the hypothetical one where you must choose from several preset options such as 3.0, 3.5, etc.
Upvotes: 0 <issue_comment>username_3: I agree with the other answers, which say that you shouldn't round a GPA of 3.46 to 3.5. Here's the reasoning I see behind this:
One scenario is that someone may be using a sharp cutoff (for example, a graduate fellowship that requires a 3.5 GPA). Then fine distinctions could matter, and everyone will be better off if your ineligibility is discovered early on. I consider sharp GPA cutoffs foolish, but unfortunately they are not as rare as they should be.
For anyone who is not committed to such a cutoff, there's no significant difference between 3.46 and 3.5, and logically it shouldn't really matter. On the other hand, there's a psychological difference, of the same sort as the difference between $9.99 and $10.00. The reason why rounding to 3.5 is appealing is that it crosses a psychological threshold that sounds better, but that's exactly why it's problematic. You don't want your resume to come across as manipulative, and that's what 3.5 looks like to me. I think "If your GPA were 3.52, you would report the extra digit to demonstrate that it was over 3.5, so a reported GPA of 3.5 means it's more likely something like 3.46. This candidate is probably trying to manipulate me by rounding the GPA to make it sound better." I wouldn't reject someone over GPA rounding, or consider it truly dishonest, but I wouldn't read the application as cheerfully or charitably as I might have otherwise.
For a more dramatic example, rounding 3.96 to 4.0 will look even more manipulative, since 4.0 has the special significance of meaning straight A's. I don't think anyone cares as much about 3.5 as a threshold, but it still signifies more A's than B's.
Note that rounding down is probably not in your interests either. If you round 3.44 to 3.4, then people may still assume you are rounding up from something like 3.36. It might not look as bad (since 3.4 is a less noteworthy threshold than 3.5), but you are still better off sticking with 3.44.
So how many digits should you use? If your school reports an official GPA, then I'd recommend using the same number of digits they use. Two digits is pretty standard, and I don't recall having seen more than three.
Upvotes: 3 <issue_comment>username_4: I disagree about it being "honorable" to round down.
3.46 should either stay as is, or would round up to 3.5. Anything over 3.45 would not round down. Basic math.
Upvotes: 0 |
2014/08/13 | 893 | 3,531 | <issue_start>username_0: Many papers use numbered references (e.g.: [1], [2]). Is this considered a good style or even a rule or **would it be acceptable to use abbreviated names of the authors and year of publication (e.g. [Smith09] for <NAME>, 2009)?** I find the abbreviated name reference style a lot more informative as after a while of reading papers on a given topic it’s usually easy to identify the cited publication without the need to look at the full bibliography. I have seen this style used in books and some editorials but it is not common.<issue_comment>username_1: The two systems are equally good but used in different communities/journals etc. You therefore need to check what is normally used in your field and when submitting manuscripts, of course, check what the specific journal uses. The fact that you say "most journals use" indicates you are in a field that uses numbered or *Vancouver style* (author-number) references. The author-date, or *Harvard-system*, is used by *most journals* in my field.
Upvotes: 4 [selected_answer]<issue_comment>username_2: You don't get to choose
-----------------------
Although as a reader I vastly prefer the name-year system, as I don't have to look up most of the references, the advantages are rather irrelevant - in almost all cases, you don't get to choose, as you'll have to comply with the citation standard of the publication.
The journal, conference or thesis standard will generally list the citation style required, and that's it.
In some fields one or the other style is more common, but in any case you may encounter a publication where a different style is required, so check first.
Upvotes: 3 <issue_comment>username_3: The name/year system is much better, and you should use it whenever possible. In particular, if you work in a field with preprints your preprints should use name/year or initial/year even if the journal will later force you to change it. The reason is that name/year communicates relevant information, while number communicates no information at all. Just giving numbered references means many readers won't know anything about who did what work.
Upvotes: 1 <issue_comment>username_4: In math/CS you mostly use `[1,2]` or `[Lot02,Zai04]`. You can choose either of them (contrary to what most other answers suggest); note, for instance, that both `amsplain` and `amsalpha` exist and either of them can be used in AMS publications.
It's a matter of habit which style the authors choose. The `alpha`/`amsalpha` `[Lot02]` style is better in most cases. However, there are communities where a numerical style is strongly preferred, moreover with the bibliography sorted by order of appearance and with compression turned on, because they cite hundreds of articles. And well, you don't want your in-text citations to look like `[ABC00a,ABC00b,ABC00c,ABC00d,ACE00a,ACE00b,ABC01a,BC00a,BC00b,BC01,BC02]` when it can be `[1--11]`.
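For what it's worth, switching between the two label formats is usually a one-line change in a LaTeX document. Here is a minimal sketch (the file name `refs.bib` and the key `lothaire2002` are made up for illustration):

```latex
% Swapping the \bibliographystyle line switches between numeric labels
% like [1] (amsplain) and alpha labels like [Lot02] (amsalpha).
\documentclass{article}
\begin{document}
Combinatorics on words is surveyed in \cite{lothaire2002}.

\bibliographystyle{amsalpha} % or: \bibliographystyle{amsplain}
\bibliography{refs}          % refs.bib is assumed to hold the entry
\end{document}
```

Either style can then be imposed or swapped at a journal's request without touching the in-text `\cite` commands.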
Upvotes: 0 <issue_comment>username_5: It surely depends on the field we are talking about.
In medicine the number method is now the standard, probably because it improves the overall readability of the text. Important references are often cited literally ("Smith et al. demonstrated that... [58]") and general statements by a collection of other researchers ("Many workgroups found... [23-27,57,89]").
Especially in books and reviews, where the citations go into the hundreds, you appreciate it if the text is not cluttered with parentheses containing long names but has only small superscript numbers.
Upvotes: 1 |
2014/08/13 | 2,092 | 9,536 | <issue_start>username_0: Occasionally I have some material to cover that is best presented in the form of take-home group projects.
Some student groups manage to find a way to coordinate their work well and to complete the projects successfully, with every team member benefiting from the collaboration. Other groups do not do so well:
* Some groups evenly divide the work, but still work in isolation, losing the benefits of working with peers.
* Some groups push the work to one or two students, while the remaining students merely contribute their name.
I wonder if there are strategies or tools instructors use that can encourage more groups to operate successfully while they are working outside of class?<issue_comment>username_1: I think first it may be worthwhile to accept that in any group work situation there is the possibility that people will work siloed (or isolated from one another), or that one or two people will push the work forward while others are relegated to, or choose to remain in, a passive state.
There are some good reasons for this. I recall being a student with a pretty solid GPA, and group projects were a horror for me. If the project grade was based on the overall project and did not take into account individual contributions, this meant that students who were less focused on their GPA would be willing to turn in something that was not up to my standards. This led both to situations where other students refused to do work on the project (knowing that the stronger students would carry them in order to avoid dings to their own GPA) and to situations where stronger students would freeze out other students (i.e. the stronger students would choose to take all the work and not let others be involved) in order to maintain control over the project.
Group projects are often used as an analogy for working in the 'real world', where working in groups is the norm. The fundamental difference is that in most cases, if a peer is completely slacking or sending in subpar work, there is a concrete structure to monitor and handle that issue (which doesn't always work, of course, but there's almost always more accountability than in academic group projects). You can mimic this behavior in an academic setting by splitting up the grades for the project. Don't give one 'group grade' to everyone; instead, have students report on who did what (this is particularly effective if you can have them set this early in the project instead of during turn-in) and correlate each student's grade to both their work and their work in the context of the project. Having this set up early can be a great way of preventing aggressive or strong students from freezing out what are perceived as the 'weak links'.
Additionally consider regular checkpoints on the project. This will let you get a feel for the interactions in the group and the content being produced while also minimizing the opportunity for a student to jeopardize the group by waiting until the last minute to work on their part (this will still happen to some extent).
In short - add more structure to the group project. This increases the workload on your end but it mitigates the most common issues you'll see in groups during group projects.
Upvotes: 4 <issue_comment>username_2: Drop the flat hierarchy in group projects. Use a quality-based hierarchy and assign the hard-working students as group leads. Not all of them have the same level of leadership qualities, but ask them not to take on the whole responsibility themselves.
Divide the project into tasks, and tasks into subtasks (if they don't know how to do it internally, first give them time to try, or ask for it explicitly). Otherwise, clearly assign subtasks to each group member and require each group member to spend a certain amount of time per week on those tasks. Let's say each student has to spend 10 hours per week on project-related tasks. Ask students to keep track of the time they spend in a spreadsheet by marking down the start and end times and describing the solution, or, if there is no solution, why it didn't work. Require them to also provide references. This document preparation should not take longer than 15-30 min per week. Allow the document to be informal.
Make sure to protect your hard-working students. As @username_1 has mentioned, group projects are a nightmare for good students, as they take on all the workload and do everything just to ensure that the overall grade remains within their standards. However, such behaviour has long-term effects on the hard-working students, resulting in burnout. Protect them, as they may turn out to be useful in the later stages of the project, or sometime in the future.
Upvotes: 2 <issue_comment>username_3: Splitting the grade is a very good way to encourage the participation of everyone. However, it means you need to know how to split the grade. You can ask for each piece of work to have an author contribution section, stating both who worked on which part and the overall participation of each student. You can also ask the group to tell you how to split the grade. It will encourage them to discuss each person's contribution together. Most of the time, if they work together fairly, they will just split it equally, but it will encourage them to give less to someone who slacked, which is only fair. Also, for longer projects (like semester-long ones) I would have Q&A sessions with a teacher or teaching assistant. Clearly state that fair/unfair division of work is one of the subjects that can be discussed on these occasions. I would definitely not recommend doing the split yourself unless it is an equal one. There will always be someone who talks more than they work and who will try to trick you. If this person tries to trick the other group members, then they need to learn how to deal with it; it's part of their training. Also, sometimes they will decide to split up and work separately; that is sometimes the best way to get things done, and they need to recognise those situations too. Example: they work with people they don't like and interact very badly.
I think letting the students assign their own group roles themselves is critically important for their training. You want them to be able to take decisions as a group, as they might need to do when they are working in a company. There will be natural leaders who take the reins, but that is ok; not everyone is good in this position. They might get into confrontations, but this is something they will also face later in their careers, and they need to be prepared for that.
Upvotes: 2 <issue_comment>username_4: <NAME>, who promotes a group-based approach for physics, uses a neat strategy to discourage slacking.
Exams are divided into an individual part and, subsequently, a group part; but if a member has ever failed (even once) to attend the group sessions, the rest of his or her group votes on whether or not to allow that person to participate in the group portion of the exam.
Upvotes: 1 <issue_comment>username_5: I had an engineering teacher in high school who by far was the best (in my high school) at assigning group projects.
Students have a tendency to want to work alone because that is the environment they are accustomed to. High school teaches kids how to work in a 20th century factory: stay in line, follow the rules, do your work and let other people do their work.
My engineering teacher wanted us to work as adults would: he assigned us brief guidelines, and our group was responsible for collaborating and producing something for him. For example, as the first project in the intro to engineering class, he started by showing us a lamp he made. Then he asked us how one could make 10,000 of them at as low a cost and as easily as possible. We had to deliver an assembly process (to make the lamp), a parts list, and a floor plan of the building we would theoretically have.
I think what mostly made it so good was the lack of formatting. Many kids didn't like it: you had to actually listen when he talked because he didn't hand out sheets reiterating what he had just said. You had to use your best judgement with regard to making the product look as nice as possible, as opposed to following some guidelines. The class made you think; you couldn't just go from one step to the next and get the correct answer, you had to think for yourself and make up your own steps.
Grading was a struggle for him, especially because this was one of his first times teaching this class. You got a grade for the project (everyone in the group got the same grade), and you got grades for small check-ins to make sure you were actually doing stuff in your group.
Engineering is mostly about problem solving, so when asked questions he would often respond "That's *your* problem". He did so if people complained they were doing too much of the work, or if their group wasn't listening to them, etc. You can't learn how to work in a group if some higher power solves your communication problems for you.
Hope that helps, sorry for rambling.
Upvotes: 0 <issue_comment>username_6: I took a class that involved group work. The professor allowed groups to vote to fire a member, provided they gave sufficient reason to the professor. This meant that everyone was held responsible.
My group nearly fired someone who kept missing meetings and then lied about it. However, he was sufficiently scared into working hard, so we let it slide.
I'm not saying that allowing teams to "fire" people is the best way. However, I think that finding a way to make team members accountable to each other is essential.
Upvotes: 2 |
2014/08/13 | 658 | 2,002 | <issue_start>username_0: The journal Science has two sections for submission, Research articles and Commentary.
<http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml>
Under Commentary, there is a section for 'Policy Forum'. Unlike 'Education Forum', there is nothing explicit about research related to policy. If one submits (uninvited) a manuscript to Policy Forum, is this considered a peer-reviewed publication? Is there a difference in citation, both in the format of the citation and in whether it is common to cite a manuscript published in Commentary?<issue_comment>username_1: The general discussion for the Commentary section mentions this:
>
> Commentary material may be peer-reviewed at the Editors' discretion.
>
>
>
If you look up a sample Policy Forum paper online, you can download the [citation information](http://www.sciencemag.org/citmgr?gca=sci%3B296%2F5568%2F659). For example,
```
NUCLEAR WASTE
Yucca Mountain
<NAME> and <NAME>
Science 26 April 2002: 296 (5568), 659-660. [DOI:10.1126/science.1071886]
```
and then in various formats. Here's the BibTeX entry:
```
@article{Ewing26042002,
author = {<NAME>. and <NAME>},
title = {Yucca Mountain},
volume = {296},
number = {5568},
pages = {659-660},
year = {2002},
doi = {10.1126/science.1071886},
URL = {http://www.sciencemag.org/content/296/5568/659.short},
eprint = {http://www.sciencemag.org/content/296/5568/659.full.pdf},
journal = {Science}
}
```
Upvotes: 3 [selected_answer]<issue_comment>username_2: In general, a "Comment" is always a publication. However, your question is whether it should be listed in someone's publication list, presumably alongside peer-reviewed articles. I believe the relevant criterion, as *username_1* suggests, is whether the comment has itself been peer-reviewed. If yes, it can go in the regular list; if not, then it should presumably be relegated to an "additional publications" list or something of that type.
Upvotes: 2 |
2014/08/13 | 1,466 | 6,198 | <issue_start>username_0: I understand that, according to the ethical rules, obtaining the funding does not automatically entitle the principal investigator [(PI)](https://en.wikipedia.org/wiki/Principal_investigator) to authorship. But I don't understand the unwritten rules.
I was on a postdoc. During the postdoc, I was paid from a grant obtained by the PI. At the beginning, he told me to find a good research topic and write a paper ("this will be your child"). I spent some time on literature search and very preliminary computations. I presented my idea to the PI, but he told me he didn't want me to continue with this topic. He even repeated this several times, on different occasions. He said my idea was too loosely connected to what his group was doing. So I gave up that topic and did not pursue it any longer. Finally, I published a paper on quite a different topic, together with the group members, and the PI was also a co-author.
One day I talked to a colleague from that group and I mentioned my old research idea. I said I would like to develop it anyway, when I finish the current postdoc. He told me that the PI should still be a co-author because I spent some time working on this idea in his group and I was paid by his money.
Now, it's been a couple of years since I finished that postdoc. I have independence and I can publish myself as the corresponding author. I would like to publish a paper on the idea I once had. Should I somehow credit the old PI? (And his grant? It's over already.)
The whole idea of the research is mine. The PI did not contribute to it whatsoever. I feel that crediting someone just for his funding is not ethical, all the more so because he rejected my idea. But I understand the words of that colleague as a sort of warning, because he has been working with the PI for a long time and probably knows his attitude. And that guy (the old PI) is quite a well-known person in the community. Do you think I should somehow negotiate with him? Or should I just go ahead and publish the paper as entirely mine?<issue_comment>username_1: I cannot see why the former PI should have co-authorship on a paper they did not support or have any interest in. You are not a slave (or at least should not be) when on a post-doc (or any other position) and should retain the freedom to take your own initiatives. As long as you fulfil any obligations within the position you are holding, no-one can prevent you from developing your own ideas. I can see an issue if you use materials that involve costs that you are not covering, for example, lab equipment or chemicals, or electronic resources that remove capacity or resources otherwise used by the project. I do not count, for example, using a computer and printouts as such resources.
So, for me there is no question you can use the research as your own and you should add only authors that fulfil reasonable contributorship criteria, that is have contributed to the science of your work (see posts under the [authorship](/questions/tagged/authorship "show questions tagged 'authorship'") tag for such criteria).
Upvotes: 5 <issue_comment>username_2: In (pure) mathematics, if you began working on this project while supported by the PI's grant, but without or even against his advice, then your publication about it would ordinarily include a footnote on the first page, saying something like "Partially supported by grant 314159 from the Munificent Funding Agency, <NAME> principal investigator." I actually often see such footnotes without the name of the PI, but I see nothing wrong with including the name if it helps to placate him. CAUTION: Conventions may be different in other fields.
Upvotes: 5 <issue_comment>username_3: Most disciplines have either a "Funding", "Financial Support", or a general "Acknowledgements" section that people use to note the source of funding for the research in question. Since the initial research that led to this paper was supported by your previous PI, you should note that in the paper and thank the funding agency and your former advisor for their assistance and support as you developed the idea.
Since the PI in question repeatedly decided to not support or become involved in the work when they had the chance, I don't see how there could be any reasonable expectation of co-authorship and it's very unlikely that they will be upset.
Upvotes: 4 <issue_comment>username_4: I believe the PI does not qualify for authorship. However, there is nothing to be lost by a little civility. You could send him a note - something like this (fill in the blanks)
>
> Dear Professor
>
> I hope you are well. It has been [some time] since I moved from [old institution] to [new institution], and I am settling in well. [A line of personal news.]
>
> You may recall that we once discussed [the idea], but since it did not align with the research direction of [the group/grant], we dropped it and instead I focused on [the topic we published on]. Now that I am at [new institution] I have dusted off the old idea, and actually was able to turn it into a paper that shows [the main result]. I intend to publish it in [journal].
> Now since the idea had originally been formed while I worked in [your group], I thought it would be appropriate to make a mention of this in the acknowledgements; I hope that you agree that this is the correct way to indicate the link to [the group/grant], given that we did no research on this topic while I was there.
>
> I hope everyone is well. Please send my particular regards to [colleague] - I miss [something about the old place]. Best wishes, [your name]
>
>
>
If he thinks he ought to be included in the authorship list, such a note leaves the door open. It is usually not worth ending up in a fight with someone who is well established in your field - and it's their transgression, not yours, if they insist on being named a co-author.
That said - I stand by my first sentence: what you describe does not qualify the former PI for co-authorship in this instance.
Upvotes: 4 <issue_comment>username_5: Send them the manuscript when it is ready for submission, but before you actually submit it, with a note explaining that you feel s/he warrants a mention in the acknowledgements or even co-authorship.
Their response will tell you a lot about their character.
But if they want to be on it, put them on it, and if you later regret doing this, just do not deal with them again in future.
Upvotes: 1 |
2014/08/14 | 2,206 | 8,373 | <issue_start>username_0: I find that in many cases, either a table or a plot will do an equally good job of presenting numeric information. Does anyone have any advice or even rules about when to prefer using a table over a plot and vice versa? I'm referring to tables and plots in the context of academic journal articles.<issue_comment>username_1: I would say: Use tables if the actual values are of importance and use plots if trends (or similar things) are important.
The rationale is simply that one cannot extract the actual values of a function at specific places from a plot. Vice versa, it's much simpler to see linear growth or periodicity in a plot than in a table.
Upvotes: 5 <issue_comment>username_2: As @username_1 says, it's often quite useful to preserve the numeric values - that's the motivation for tabulating data. However, plots have the advantage of being able to easily visualize trends in data.
If you have a set grid of *x* and *y* co-ordinates, with each pair of co-ordinates having a numeric value, you can sort of do both.
Here is an example. The trends in the data are made much clearer by plotting and colour-coding the data, but the numeric information is preserved. As a result the readability of the numbers isn't perfect, but (depending on your data) you can sometimes have the best of both worlds.
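If you write in LaTeX, here is a minimal sketch of the same idea (the values and colours below are invented placeholders, not data from any source); it assumes the `xcolor` package loaded with the `table` option:

```latex
% Made-up 2x3 illustration of a colour-coded table; values are placeholders.
\documentclass{article}
\usepackage[table]{xcolor} % provides \cellcolor
\begin{document}
\begin{tabular}{r|ccc}
      & $x_1$ & $x_2$ & $x_3$ \\ \hline
$y_1$ & \cellcolor{red!40}0.92 & \cellcolor{red!20}0.71 & \cellcolor{red!5}0.18 \\
$y_2$ & \cellcolor{red!25}0.75 & \cellcolor{red!10}0.43 & \cellcolor{red!5}0.12
\end{tabular}
\end{document}
```

Darker cells stand for larger values, so the trend is visible at a glance while the exact numbers stay readable.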

Upvotes: 4 <issue_comment>username_3: If you are able to do it, use plots, period. What little purpose tables used to have is currently best served by online supplementary material, either on the journal's website or your own.
If you have more than a couple of constants/data points (which would not require a table either), numbers are very difficult to read and (good) plots are much better for human consumption (I don't think this is merely a matter of taste; while I am not an expert there is quite a lot of research on this).
*If* actual values are actually useful to someone, a table is in fact a very poor way to provide them as anybody wishing to use them must go through a time-consuming and error-prone data entry process. What should actually be done in this case is making the data themselves available electronically. A table is not a decent alternative to that, not anymore.
Upvotes: 2 <issue_comment>username_4: >
> I find that in many cases, either a table or a plot will do an equally
> good job of presenting numeric information.
>
>
>
Strictly speaking, a plot does NOT present *numerical* information because it is just a picture. The purpose of a plot is to show the geometrical form of some dependence(s) when this form is important. It is impossible to recover the *original* numerical information from such a picture. Reproducibility of scientific results requires providing all the information needed to reproduce the results described in the paper. Most journals do not allow publishing large tables of numerical data, but they do allow publishing *supplementary information* online, which can contain huge tables in TXT format. It is a good idea to supply such information, and it is free.
---
The topic of effective visual presentation of information is subject of [infographics](http://en.wikipedia.org/wiki/Infographics):
>
> [<NAME>. Visualizing Data (2008)](http://www.amazon.co.uk/Visualizing-Data-Explaining-Processing-Environment/dp/0596514557)
>
>
> [<NAME>. The Elements of Graphing Data (1985, 1994)](http://www.amazon.co.uk/Elements-Graphing-Data-W-S-Cleveland/dp/0963488414)
>
>
> [<NAME>. (1993): A Model for Studying Display Methods of
> Statistical Graphics. // Journal of Computational and Graphical
> Statistics, 2(4): 323-343.](http://www.stat.bell-labs.com/doc/93.4.ps)
>
>
> [<NAME>. The Visual Display of Quantitative Information (2001)](http://www.amazon.co.uk/Visual-Display-Quantitative-Information/dp/0961392142)
>
>
> [<NAME>. The Grammar of Graphics (2005)](http://www.springer.com/statistics/computational+statistics/book/978-0-387-24544-7)
>
>
>
and others.
Upvotes: 2 <issue_comment>username_5: In addition to the other answers, I think it depends on how much data you are trying to show. Personally, I like to use tables when I can. If you have only three data points, a figure wastes a lot of space and ink. Tufte calls this the [Data-ink ratio](http://www.infovis-wiki.net/index.php/Data-Ink_Ratio). In his book, he recommends:
>
> Above all else, show the data.
>
>
>
So, if you have hundreds of data points, you would show a figure, unless the exact data values matter (in which case it is rather a reference table). But if you have only a handful of points, it is more efficient to display the data in a table. Unless, as other answers point out, you want to visualise a particular relation or trend — then a visualisation is again more appropriate.
Upvotes: 3 <issue_comment>username_6: Readers and listeners can typically take in no more than about 16 items on a figure during a presentation, so a good table should be no larger than about 4x4. This is a relatively small size. Use plots if you need to present more data.
Upvotes: 0 <issue_comment>username_7: Are the specific values really, really meaningful and relevant? Do they have meaning outside of the sandbox? Are you, for example, publishing new measurements of fundamental constants? No? Then, most likely, the actual numbers have no business being in your ~~article~~ extended abstract.
My rationale is: you are telling a story. Elements that don't serve to make a (major) plot point or at least support it have to go. Nobody will look at the numbers if their values are not relevant or support a point you are trying to make.
Now, there is data that does not lend itself well to the usual plots you can make (line plots, histograms, bar charts, ...). Sometimes, a table is all you can do, especially if the data has no useful scale in at least one dimension. For example, assume we have investigated four methods in four scenarios and have collected some quality measure; the bigger the number, the better the method worked in that scenario.

What do we see in this table? Nothing, without really *reading* which may be a waste of time, given that the numbers may not mean anything on their own.
What is the *story* we want to tell? Maybe something like this: Methods one and three are complementary and excel in their respective strong scenarios; you should pick one of them if you know which category your application falls into. Method two is somewhat usable in all cases but worse than the specialists; use it if you don't know what you have at hand. Never use the fourth method, it's always bad.
We can improve the table so that it supports that story at one glance by normalising the data (I assume a linear scale from 0 to 250 here) and giving visual indication of "good", "meh" and "bad".

[[source]](https://github.com/akerbos/sesketches/blob/gh-pages/src/academia_27180.tex)
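If you want to reproduce this kind of colour coding without drawing the whole table as a picture, the `\cellcolor` command from `colortbl`/`xcolor` is usually enough. The following is only a minimal sketch with invented numbers and colour thresholds; it is *not* the code behind the image above:

```
% Minimal sketch: colour-coded table cells via colortbl/xcolor.
% The values and the green/yellow/red assignment are invented for illustration.
\documentclass{article}
\usepackage[table]{xcolor}   % provides \cellcolor and loads colortbl
\begin{document}
\begin{tabular}{l|cccc}
           & Method 1                & Method 2                 & Method 3                & Method 4             \\ \hline
Scenario A & \cellcolor{green!30}230 & \cellcolor{yellow!30}150 & \cellcolor{red!30}40    & \cellcolor{red!30}20 \\
Scenario B & \cellcolor{red!30}30    & \cellcolor{yellow!30}140 & \cellcolor{green!30}240 & \cellcolor{red!30}10 \\
\end{tabular}
\end{document}
```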
Now, the layout of the table can be improved and maybe you want to swap columns so that the complementary methods are neighbours. It is hard to show variances with this visualisation. Furthermore, the [choice of colors](http://colorbrewer2.org/) can be debated (red/green may have different meanings in different cultures; also they can not be distinguished by a sizable portion of all readers).
Still, I think the example serves to support *my* point: be creative when representing data, with a focus on supporting the narrative of the article and less on dumping data (there's other places for that).
There is plenty of literature on visualising data but I'm not intimately familiar with any, so I'll just point you towards some blogs:
* [FlowingData](http://flowingdata.com/) by <NAME>
* [Statistical Graphics and more](http://www.theusrus.de/blog/) by <NAME>
* [visualising data](http://visualisingdata.com/) by <NAME>
They have plenty of inspiring examples.
One further TeXnical note: it's possible to [draw small inline-style plots](https://tex.stackexchange.com/q/29293/3213) (called [sparklines](https://en.wikipedia.org/wiki/Sparklines), apparently) with which you could potentially fill a table.
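As a hypothetical sketch, with the `sparklines` package (one of the options discussed in the linked question) such an inline plot could look like this; the coordinates are invented and have to be scaled into the unit square by hand:

```
% Hypothetical sketch: an inline sparkline next to running text.
% Coordinates are invented; both x and y must lie between 0 and 1.
\documentclass{article}
\usepackage{sparklines}
\begin{document}
Quality over time:
\begin{sparkline}{10}% width of the plot in ex
  \spark 0.0 0.2  0.25 0.5  0.5 0.35  0.75 0.8  1.0 0.6  /
\end{sparkline}
\end{document}
```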
Upvotes: 2 |
2014/08/14 | 1,981 | 8,120 | <issue_start>username_0: Is it ethical/lawful to recolor/scale the logos when importing them to presentation slides to make the logos meet the template standards and fit the theme colors?<issue_comment>username_1: Logos are often trademarked, and therefore you are **not** free to recolor them according to whatever color scheme your template happens to use.
However, many companies and universities do have multiple versions of their logo available, for precisely this reason. You should contact your university's (or organization's) press office (or similar office) to see what is available, before toying with it yourself.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Most companies/institutions guard their branding very carefully. Many companies spend thousands or even millions on developing a brand language, which includes fonts, colors, and other design elements.
I don't know the specifics of the legal ramifications of changing logo colors, but the owners of the logo are sure to be against it.
Upvotes: 3 <issue_comment>username_3: The following is based on general copyright concepts and should apply to any reasonable copyright laws:
Any logo is created by somebody, be it a professional graphic-design company or the dean’s nephew, and **without further ado** this person (or company) holds the copyright to that logo. This mostly means that you cannot do certain things with it (or an altered form of it), which usually include dissemination or using it for commercial purposes. Whether using the logo in a presentation shown to a small audience is included in this depends on your country’s copyright and other aspects. Using it in a publication would almost certainly be a breach of copyright, however. Anyway, let’s assume it would not be legal to use the logo for whatever you do.
As it would be pretty pointless, if, e.g., members of a university were not allowed to use its logo (when representing that university), the creator will usually have authorised the university and its members to use the logo – but this authorisation can be bound to conditions. Furthermore the university itself may impose conditions onto its members regarding the usage of the logo. These conditions may include:
* You must not alter the logo (e.g., by recolouring it or changing its aspect ratio). I expect this to be a common condition.
* The logo must make up a certain percentage of your slides, pages or posters. This is a rather silly condition in my opinion, but I would be surprised, if there were no precedent for this.
* The logo must not be resized. If sufficiently stupid people make the rules, this might happen, however it hardly makes any sense: The logo may not have any physical size to begin with (as many image formats do not contain this information) and how the logo is initially sized when imported in your software depends only on whatever the software’s creator chose to be the default. And even if it has physical dimensions, it does not make any sense to use the same size in print and on projected slides. Something similar holds for sizes in pixels.
* The word *penguin* must be on any page or slide on which the logo appears. I am exaggerating here, but the only way to be sure that there are no silly conditions is to check.
Now, if you are lucky, there exists some document which states that members of the university or similar are authorised to use the logo and which contains conditions (if any exist) and requirements of logo usage. [Here](https://brand.osu.edu/logo/) is an example thanks to Mkennedy.
On the other side of the spectrum, you may have some institute’s logo, which was hand-drawn by the director’s niece 30 years ago and has gone through several iterations of scanning and printing, because the original has been lost. You have no official authorisation to use the logo at all and the legal grey zone you are entering does not change much if you additionally alter the logo (in any remotely respectful manner). It is very likely that nobody will care, let alone sue you.
Where on this spectrum you are is something only you can decide.
---
Anyway, I would recommend to use such a logo only on one or two slides, so it should not dramatically destroy your colour concept.
Additionally, you might consider adapting your presentation’s colour scheme to the logo’s colour scheme, but beware that the latter is not necessarily a good choice for projectors.
Upvotes: 3 <issue_comment>username_4: I am converting [my comment](https://academia.stackexchange.com/questions/27191/is-it-ethical-lawful-to-recolor-scale-the-logos#comment57539_27192) to [username_1's answer](https://academia.stackexchange.com/a/27192/15723) as the OP [suggested](https://academia.stackexchange.com/questions/27191/is-it-ethical-lawful-to-recolor-scale-the-logos#comment57651_27192):
Logotypes are typically covered by what's known as the *graphical profile* of organisations like companies, universities and indeed even political parties or NGOs.
A graphical profile usually contains things like (but not limited to) color(s), aspect ratio, font(s) and positioning of eventual text elements regarding a logotype in question. Depending on how "complete" or "strict" a graphical profile is, you can do varying degrees of manipulations.
It's typically not an issue to scale the image, given that the aspect ratio (the width-to-height proportion) is kept the same as in the original. If the organisation in question has put some thought into their graphical profile, they should have the logotype in a [vector-based format](http://en.wikipedia.org/wiki/Vector_graphics), which scales up/down without any quality loss.
Keep in mind that scaling up an image is usually not a good idea, if the image is [bitmap](http://en.wikipedia.org/wiki/Raster_graphics) and not vectorised. It's also good to remember that there might be issues regarding readability, i.e. there might be a limit on how much you can scale down the logotype. Logotypes that have [text within the graphics](http://oic.nccu.edu.tw/data/ss_logo/24689962047b2b27651919.jpg) tend to have such limitations. [Keen observer might notice how badly the text renders if one does not pay attention when converting vector graphics to raster graphics]
Finally, even if you are allowed to crop a logo (e.g. for use as decoration on the edge of a slide or poster), exactly how you can crop the logo might be defined as well. For instance the logo I linked above has 4 predefined cropped versions that you are allowed to use. Beyond those you are not allowed to crop/scale/change the logotype in any way.
Just exactly how that might be enforced is a whole different story however.
Upvotes: 2 <issue_comment>username_5: To clarify the resizing issue discussed in the comments: for any length-based system of preparing a document, the concept of "size" is indeed a valid one.
If the people who provide official University logos have done their job properly, the logo will be available in postscript and/or PDF formats that have a *defined size in centimetres*. These original dimensions are probably intended for reproduction on A4 paper and in that case should not be changed (in the interest of consistency).
For instance, the .eps logo of my University is defined as being 1.77 cm high. On official, printed A4 letters it is the exact same 1.77 cm in height. In most cases it would be inappropriate to rescale this logo when creating an A4 document. Consistency is good.
As a side note, the concept of measuring Powerpoint slides in pixels is wrong: it's a vectorized document. *There are no pixels*. I don't have a copy of Microsoft Powerpoint, but Apple Keynote's default slide size is 1024x768 points (not pixels!). 1 pt = 1/72 inch. I suspect Microsoft's system is the same.
A logical method of scaling to different media would be to scale based on the font size of your main body of text. Most A4 documents have 10pt font. So, if you're producing a poster or presentation with a main font size of 24pt, just make the logo 2.4 times wider and taller. This will keep the logo's size in proportion to the rest of the text.
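In a LaTeX-based poster or handout, for instance, that rule of thumb could look like the following hypothetical sketch (the file name is a placeholder, and 2.4 is simply the 24pt/10pt ratio from above):

```
\documentclass{article}
\usepackage{graphicx}
\begin{document}
% Hypothetical sketch: enlarge the logo by the ratio of the body font sizes
% (24pt poster text / 10pt letter text = 2.4), keeping the aspect ratio intact.
\includegraphics[scale=2.4]{university-logo}% placeholder file name
\end{document}
```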
Upvotes: 2 |
2014/08/14 | 511 | 2,293 | <issue_start>username_0: I am working full time currently as a programmer in a summer internship. In the fall I will be continuing school and working part-time as a programmer.
I believe that I want to be a professional programmer and someday a software engineer.
However, I am very interested in all the areas of computer science that I have been studying. I hear occasionally about how some computer science majors end up being "just programmers" and not "computer scientists." There seems to be a common thought that becoming a programmer means that you give up the field as a whole and the possibility of contributing to the field.
I would like to be a professional programmer, but also a lifelong learner in the field of computer science. Am I naive to think this is possible?<issue_comment>username_1: I am a professional programmer with a PhD.
I had a colleague who was working as a programmer and even continued publishing in his unrelated field (chemical engineering/textiles). So if you are dedicated, it will be even easier for you, since your area might be related, but it is still not a light undertaking. And if you can find a more research-heavy R&D position in industry, of course that would make it easier.
I am personally planning to teach adjunct classes, to keep my theoretical foundations fresh.
Hope my two cents helps.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I was discussing your question with a software developer and here is what came out:
Software development requires some specific skills that you will develop on the job, and you will surely not be able to be an expert in development as well as in algorithmics, cryptography, networking, AI, etc. But if you want to keep working on some other computer science subject, it depends on where and on which projects you decide to work as a software developer.
If you develop an authentication server or an anti-virus, you will still have to keep up with computer security; if you develop a network layer for a game, you will have to know how networks work, and so on. Of course, you should probably avoid going to a company that designs web sites or implements yet another client database...
And as always: whatever you are interested in, you can continue to pursue outside of your work.
Upvotes: 2 |
2014/08/14 | 574 | 2,225 | <issue_start>username_0: My paper was published and later on I found that two references were mistakenly included with the third reference, which is the correct.
My phrase looks like this:
>
> I am worried about this (Smith 2001; David 2006; Magnus 2007)
>
>
>
and this is the correct form that it had to be:
>
> I am worried about this (Smith 2001)
>
>
>
so the references "David 2006" and "Magnus 2007" have nothing to do with the cited phrase (those authors never did that work) and were not meant to be included; it was probably a problem with the reference management software that I used at the time.
I wonder if this will cause a problem to my paper or even plagiarism/retraction.
I am really worried about this and any advice is more than welcome.<issue_comment>username_1: Obviously it would have been better not to commit such a silly blunder in a published paper, but I don't think it matters much. Such errors, especially when accidental, will inevitably occur, much as typographic errors. A slight embarrassment to you, yes, as any typos or errors in formulas would be, but not truly "actionable" by anyone, so far as I know. So, bottom line, "forget about it".
Upvotes: 4 <issue_comment>username_2: Plagiarism is the use of another author's ideas or words without proper credit. You haven't done that, so there's no need to worry about it. Also, people don't retract papers over trivial editing errors like this.
However, since it's something that might be confusing to a reader, it may be worth asking the journal about printing a correction. Simply get in touch with the editor who originally handled your paper, or if they no longer work with the journal, contact the editor-in-chief. They would typically publish a one-sentence note in some future issue of the journal, stating that the references were included by mistake and should be ignored. Alternatively, they may decide the matter is too trivial to be worth the space to correct it.
Either way, you should post the correction on your web page, and any other place where the paper is publicly available (preprint servers, etc).
This is no big deal and happens all the time. Just get it fixed, move on, and be more careful next time.
Upvotes: 5 |
2014/08/14 | 1,681 | 6,899 | <issue_start>username_0: I'm sorry if this is not the right place to ask this sort of question, but I really need an answer.
I'm from the Middle East and want to apply for a master's degree in Russia, but lately people have been discouraging me from the idea, for the following reasons:
* A Russian degree is not perceived as "very good" in other parts of Europe (western countries) & USA
* difficult to get a job while (or even after) studying, since you are a foreigner
* even if you get a job, incomes are LOW
* there have been several racism-related crimes against Black and Hispanic (etc) people, and all-in-all it's a dangerous country.
Don't get me wrong, I love Russia and Russian culture, but I'm paying a big amount of money, so I'd love to get things in return.
My question is: are my concerns reasonable, or are they just the product of American (Hollywood) propaganda? Would you advise me to study abroad in Russia or drop the idea and find somewhere else?
I mean: is it worth the time/money/effort or not?<issue_comment>username_1: Leading universities (Moscow, St Petersburg and the state universities in the capitals of the former Soviet Republics) used to be very good during the times of the Soviet Union and the first decade afterwards, and it was possible to get a PhD position anywhere in Europe after finishing them (not a post doctoral position after doing PhD there, however). I am unsure about the most recent situation.
Upvotes: 3 <issue_comment>username_2: I try not to think of myself as a person influenced by the US propaganda (I am more acquainted with Soviet and late Russian propaganda, more likely). However, I can not really advise you to go for degrees in Russia.
First of all, the STEM subjects were traditionally strong in the Soviet time due to the nuclear project. Unfortunately, the professorial level degraded significantly in the post-Soviet 90s, and did not fully recover yet.
Second, you are probably aware that most of the universities in Russia teach courses in Russian. Unless your Russian is already fluent (or at least basic), be ready to spend a year or so mastering it before you will really address your subjects. Unless you plan to work in Russia afterwards, the knowledge of Russian (as opposed to English) is not something that increase your career progress dramatically.
Lastly, having summa cum laude BS and MS degrees, as well as a PhD (or Cand Sci as we call it) from strong Moscow Universities myself, I probably can confirm that many HR departments prefer to see something easily recognizable on your CV, e.g. the University of Oxford as a trademark impacts your progress much better than the Moscow Institute of Physics and Technology that sounds almost like Inshallah Salam Alaikum University for someone who never heard of it.
On the bright side, studying in Russia you most definitely will experience interesting adventures, befriend the best guys and hang around the most gorgeous young ladies. Also, if you can solve problems you face in Russia, you arguably are ready to solve them in any other place.
Upvotes: 5 <issue_comment>username_3: I will extend Dmitry's great answer, to answer the later part of your question: "Any alternative suggestions" - part.
**Teaching quality:**
It is clearly stated (and pretty obvious from history) that Russia used to do a great job in terms of science, but things have changed since then.
**Living Prospects:**
I don't know what the real motivation behind your decision to pursue studies and possible career opportunities in Russia is, but to me Russia does not seem like calm waters. Overall, the country is facing political difficulties which may result in economic difficulties, which in turn would affect the job market. Additionally, the country is now well known for its racist behaviour (as you have noted). I don't know how that would reflect on your assessment during your studies, and on your quality of life overall.
**The propaganda consideration:**
The same way you assume that you might be biased by the Hollywood propaganda, you should expect the world to be biased by it as well. If we assume that you get a quality degree in Russia, it might not have the same value outside of Russia. What if you decide to switch from Russia to another country afterwards? That would put you immediately in an unfavourable position, especially if you move towards west.
**Alternatives:**
I would strongly recommend you to have a look at European universities. It depends what your priorities are. If your priority is high-quality teaching, you could start with Switzerland, Austria, the Scandinavian countries, etc. If you want a more laid-back working atmosphere, you could have a look at Southern or Eastern Europe: Spain, the Czech Republic, Poland. However, if you want a strong scientific background with considerably lower living expenses than in other Western European countries, have a look at Germany.
**Note:**
As the other members have asked, it would be nice to add details about the amount of money you are going to invest, and the field of study you are interested in.
Upvotes: 3 <issue_comment>username_4: It was a family tradition to study in Moscow, so I had strong stimulus (both financial and personal motivators) to continue this.
However, things have dramatically changed in a very short period of time. Ten years ago, the country was open to foreign academics (also potential ones) and they were considered a benefit to the country. This is not the case anymore. You could clearly see discrimination, not among your professors, but among the population.
Without fluent Russian you will have a hard time at university. Even harder outside it. I have been mocked, because my Russian, which is grammatically above the average level, even in academia, has an accent. I had also encountered aggression, both verbal and physical, when people notice that I am a foreigner. Young people are quite nationalistic, so expressing any opinion that is different than the current status-quo can be dangerous, if not among the right audience. Contrary to one of the answers here that states that all Russians are the same, I can say that I have met my best and most reliable friends there, and all of them are Russians.
I lasted there 2 months. However, I have to admit that in the past education from Russia was quite competitive. My grandfather graduated in Foreign Relations and he was working for numerous Western European ministries, because of his degree and experience. Russia also had brilliant scholars in chemistry, medicine and biology.
Nowadays, when someone hears a Russian degree I can clearly see that the first thing he thinks is corruption and "How much this diploma cost?", which I think is rather unfortunate.
Unfortunately, due to the times we live in, being from the Middle East is quite a negative characteristic in the eyes of most people.
Upvotes: 3 |
2014/08/15 | 736 | 2,864 | <issue_start>username_0: I am going to start my PhD, unfortunately, I did not get any funding. I am thinking of taking a loan so that I could focus more on my PhD studies, but the loan will need to be repaid after PhD. Therefore my question is if my financial situation is likely to be better as a postdoc than as a PhD student. My reasoning:
As a PhD student:
I have to pay academic fees,
I do not get funding or a salary.
As a postdoc:
I do not have to pay academic fees or any other fees to a university (am I right?),
I may get a salary, but I may not.
How many postdocs do get paid? I looked at research groups I am interested in and they say, they do not have funding for postdocs and their postdocs usually are supported by some grant that they themselves have arranged before coming to the group. So is my financial situation going to depend on if I can secure a grant for my research as a postdoc or I should expect to receive a salary? How likely is the success in either ways?<issue_comment>username_1: A postdoc is a time limited academic job, a PhD is a research student position. The latter is usually financed but does not have to be. I have never heard of a Postdoc that did not involve payment. In your question you seem to indicate there would be a choice between the two but a postdoc, as the name implies, requires a PhD so one must go through a research education (and receive a PhD) before applying for a postdoc position.
Upvotes: 2 <issue_comment>username_2: >
> So is my financial situation going to depend on if I can secure a
> grant for my research as a postdoc or I should expect to receive a
> salary?
>
>
>
The question is not getting a grant or getting paid, it is getting paid by the institution you work for or getting your salary from a grant giving institution.
>
> How many postdocs do get paid?
>
>
>
(As far as I know) All of them. Depending on the country you are talking about it may even be illegal to employ someone without pay.
However, (as @virmaior pointed out) there are some "postdocs" in Japan that are not paid and have no work requirements. At least in Europe and the US this is not common and in most cases this is probably a bad idea for someone looking for a regular postdoc position (see my comment below).
>
> How likely is the success in either ways?
>
>
>
That depends on your field, the quality of your work, ... . We can not answer that but you can talk to your peers /supervisor about the job market to get an idea.
>
> [as a postdoc] I do not have to pay academic fees or any other fees to
> a university
>
>
>
Right.
*One general remark: If you are not getting a (paid) postdoc position or a grant, you should try to find a job in industry anyway. Finding a more senior position is usually much harder than finding a postdoc position.*
Upvotes: 4 [selected_answer] |
2014/08/15 | 2,048 | 8,733 | <issue_start>username_0: As a new semester of school approaches I have begun updating my syllabi for the classes I teach. I have lately used a clause in the syllabus about no children in the classroom as I feel it is a distraction to both me and the other students. Having been in classes both as a student and as an instructor where children are present, I find it necessary now to have such a written statement.
However, there are a few parents who dislike such a clause. Some of these young parents feel that they should be able to bring their children to class, as they otherwise would need to drop out of school because they do not have enough money to hire a sitter. I feel that this is just "how it is," and is part of being a responsible adult.
Do other universities have policies about children in the classroom? How can I reach a happy medium of not coming across as a complete jerk, but still maintain a level of education in my classroom?
**Added:** I am of the feeling that we many times need to make a rule because of that "one person" who ruins it for everyone. My stand as it is right now is that we need to come down firmly in writing, then adjust with leniency as people show they can handle having their child in class. I am not ridiculous about my classroom rules, but I prefer to give it straight, then relax the standard if needed.
Also, as a matter of scope, I teach at a conservative Christian university. Many of the students married young and have a child or two.<issue_comment>username_1: Most universities I know have both a cultural understanding and formal regulations that the only people who are allowed in the classroom are those that are registered for the course, except where explicitly permitted by the instructor. (Thus for instance one has the notion of "auditing" a course: this basically means that you are not signed up to take the course for a grade and will not complete the required coursework / take any exams, but you do have the instructor's permission to sit through the class meetings.) This is a defensible regulation: without it, who knows who would show up for a course, taking up possibly limited space and occupying the attention of the instructor and/or the other students?
Children are people, right? I would thus frame the discussion in that way: you're not discriminating against someone because they're a parent. You're just not allowing people in the classroom who are not registered for the course.
I am somewhat surprised that this is a problem for you at all, and I wonder where you are teaching and if the cultural mores and regulations are different there. I don't know of any American university in which people would think they could bring children to class except in some truly exceptional/emergency situation in which they have received the instructor's permission. In any case, I would advise you to look up your university's specific policy on "unregistered attendees". Assuming it is along the lines of what I am suggesting you should, at most, modify your syllabus to quote from and/or link to this general policy. Don't make the issue about child care at all.
**Added**: I just looked at your profile and saw that you say you are in South Korea. As I said, both cultural mores and regulations may well be different there, and if it is very common for students to bring children to class, that makes me much less confident that rules or customs are being violated. So to adjust my answer for this: "Do other universities have policies about children in the classroom?" Not policies specific to children, but more general policies and also different expectations that mostly prevent the issue from coming up. But I don't know what other South Korean universities do and anyway, your university is *your university*: it is (I suppose!) allowed to do things its own way. If you do not find written regulations of the sort I mentioned above, I would talk to your colleagues -- and especially, to tenured faculty; I also see that you are a master's student, which also may be relevant in terms of how much you are permitted to rock the boat -- and find out how they deal with the situation. If several other faculty members have successful "no children in the classroom" policies, then you should be able to implement yours. If you are the only one you know in your university who wants a "no children in the classroom" policy: because you are a graduate student instructor, I would advise against pursuing that.
**Further Added**: Please read the comments below about "drop ins". The policy I describe above is very standard in the United States. It seems that in certain European universities the culture (and perhaps regulations) are quite different.
Upvotes: 4 <issue_comment>username_2: In my years of teaching in Asia I have had one class session where a student brought a child with him. It was an exceptional case but I was surprised he did not ask for permission. The child was well behaved (maybe 7 years old) and sat in the back not disturbing the class in any way. For this reason, I let it slide and I might be willing to accept it happening in the future.
However, I do make it quite clear to my students, I am the captain of this airplane and I will not tolerate ANYTHING which negatively impacts the learning environment. This includes anyone who disturbs the learning process in any way. I agree with username_1 - **it is not a childcare issue. You need to focus the students on it being a learning environment issue.** If a student does not turn off their ringing phone, out they go. If someone dresses in a way which distracts students or me, out they go. If anything exists which negatively impacts the learning process for even one of my students, out they go.
I'm pretty strict on this and I don't generally have problems because of that.
Back to your core question: How do you maintain a level of education while not being a jerk? You focus on the real issue. **The real issue is not kids, the issue is disruptions.** While you can be forgiving and understanding, to do so in a way which negatively impacts your students should never be accepted.
Upvotes: 7 [selected_answer]<issue_comment>username_3: Being a parent and a college student is tough. Not everybody has good access to childcare and even if they do, things happen. Surely the mere presence of a child in class can't be much of a distraction except for a few moments at the start of class. If the child is quiet and well-behaved, why not allow it? (Your policy could say that distractions, including noisy children, are not allowed.) It'll make some of your students' lives just a bit easier.
Upvotes: 4 <issue_comment>username_4: I have been a student in Germany for 3 years now (Karlsruhe Institute of Technology). I tried to find anything "official" about children in lecture halls, but that was not successful.
I have only seen students with their children in a lecture about 3 times. (There might have been more children, but I probably didn't notice)
Twice, the children were silent. Only once I've heard one of them. Then the mother went pretty quickly out of the lecture and came back (with the baby) about 15 minutes later. Nobody said anything.
What I think as a student
-------------------------
As long as children don't make noise and as long as the lecture hall isn't crowded, I can't see any reason for them not to be there. When the child is loud, the parents should go out with him/her right away. Most lectures aren't so silent that hearing a baby cry for a few seconds is a problem.
However, when the child is distracting other students / the professor then the child has to leave the lecture hall.
What I would do in your situation
---------------------------------
I see two ways to deal with the issue.
### Opt-child-in
You could forbid children in your lectures. But if students really have problems, they might come to you and want to speak with you about it. Then you should make clear that you can make an exception, but only if it works. That means if the child is distracting you or other students, the parents have to find a solution.
I would go for this solution if there are many children who don't know how to behave in a lecture.
### Opt-child-out
Don't forbid children directly. When there are problems, you can speak with the parents. You can tell them that their child distracts other students and hence they should not bring him or her to lectures again.
I would go for this solution if there are only occasionally children who distract lectures.
More thoughts
-------------
You could ask parents to take a seat in the back / close to the door. This way they can quickly go out when the child/baby is loud.
Upvotes: 2 |
2014/08/16 | 584 | 2,311 | <issue_start>username_0: I am looking for a phd position in stem cell biology and have been trying very hard for the last 6 months in Germany or any European country. I have tried to contact many professors via mail but have nothing by way of a good response.
I have a good academic record and score.
Can anyone please help me out how to best proceed or where I may be lacking in my method so far?<issue_comment>username_1: The process is going to be very different from one country to the next. But from what I have seen so far, the three most common ways to start a PhD in Europe are:
* Reacting to a job offer. If there is a specific position available and no local student that's been pre-selected for it, it's possible to come to a university to start a PhD.
* Coming in contact with a professor through a course (possibly while doing some preliminary research work with said professor). This would require (re)doing a master's degree in the same university, with an eye toward the PhD.
* Coming with your own funding, very often a grant from a foreign government or possibly from a private company.
I don't know anybody who started a PhD by contacting a professor/research group out of the blue.
Upvotes: 0 <issue_comment>username_2: As said before, the process is very different from country to country. I personally have experience in Germany, Austria, and Switzerland.
* More and more institutions are moving to PhD programs. Sometimes it is the only way to do a PhD, and professors are discouraged from hiring students who have not gone through the initial assessment of the PhD program committee. Usually, these programs are easy to find on the web. See e.g. here <http://www.vbcphdprogramme.at>, or here <http://www.biozentrum.unibas.ch/education/phd/overview/>
* Look for job advertisements at dedicated portals like ResearchGate (<http://www.researchgate.net>) or <http://www.eth-gethired.ch> in Switzerland.
* Contacting professors out of the blue might work in some cases/disciplines. In fields or countries where industry pays much, much more than academia, professors have problems recruiting talented PhD students, e.g. in computer science or generally in Switzerland. They are glad if they can finally put a qualified person to work on the grant they already got months ago.
Upvotes: 1 |
2014/08/16 | 1,629 | 6,876 | <issue_start>username_0: I am a few days from completion of an industrial internship that is required for my Masters degree. I have had a bad experience with the supervisor of this internship, who has insulted me several times during my internship period. He is mean, he gets angry very easily, he is bad-tempered and most importantly he is ignorant: he never knew what I must do as a trainee, it is me who proposes the tasks to him. Finally, now that I have finished the main task of my internship, he says it is useless.
Completing the internship and the degree involves reporting on my internship to an academic jury. Given this bad experience, how should I handle my report?
In particular:
1. I do not want to give an acknowledgement to my field supervisor in my report: is this a bad thing to do? I mean, will the jury ask me why I did not write an acknowledgement?
2. If the jury asks about my evaluation of the company where I did the internship, should I be honest in telling them what happened, or should I lie and tell them everything was fine?<issue_comment>username_1: 1. Be a better person than him. Write the acknowledgement.
2. I would think that telling your jury that he was a very hard man to work for is fine, but I wouldn't call him incompetent to people who are effectively his peers. I definitely wouldn't recommend him to other students.
Upvotes: 3 <issue_comment>username_2: >
> If the jury asks about my appreciation of the company where I did the internship: should I be honest in telling them what happened and that my advisor is too incompetent? Or should I lie and tell them everything was fine?
>
>
>
You should do the same as in any other case when a professional relationship goes wrong. You need to:
1. **Focus on the facts.** *"The project was not going smoothly"* is a fact. *"We had communication issues, so I ended up delivering not what they wanted"* is also ok. *"The project was going badly because the advisor is incompetent"* is you trying to assign blame. Stay clear of that. You do not need to lie to the committee and pretend that all was great, but try to refrain from presenting your own interpretation of the events. It is understandable that you will want to make sure that the committee understands that the issues were due to no fault of yours, but by badmouthing your advisor (no matter how warranted) you are likely to reach the opposite.
2. **Take the high road.** If it is customary that students write an acknowledgement to their advisors, write a short, polite acknowledgement thanking him to allow you to work in his team, if you can thank him for nothing else. Rocking the boat over something so minor seems unwise.
3. Whatever you do, **stay professional.** Acting out of anger and a lust to "get back" on your advisor for his insults *will* backfire on you.
**Important Edit:**
Based on your follow-up information:
(retracted - some racist comments as well as statements about unduly long work hours)
The fourth, maybe even more important point is:
If something really bad went on (like in your case), **find out what the correct official action to take is**. File an official complaint with your university, or even talk to a lawyer and have him look into filing a lawsuit (racist comments in the workplace are certainly grounds for a civil lawsuit where I live). Don't take a placebo action that does not hurt and does not help, such as not including him in your acknowledgements.
The other 3 points stay in place - even if something terrible went down, you need to stay professional and you need to focus on the facts, in your official reports as much as when you talk to your jury about the incident(s).
Upvotes: 5 [selected_answer]<issue_comment>username_3: Even if writing an acknowledgement is the norm in such a kind of report, acknowledgements remain a kind of gesture, which should have no impact on a professional evaluation. If such gestures have to be performed for their own sake, they become pointless and worthless¹: If literally everybody is being acknowledged, being acknowledged isn’t worth anything. Thus for acknowledgement having any point, there must be at least a small chance of, e.g., an advisor not being acknowledged, if this person really does not deserve it.
So, if your advisor did everything to be not worthy of any acknowledgement (as it sounds like), the only reason to acknowledge him would be that he still has impact on your grade or career². Assuming that this isn’t the case, the jury should not ask you about this missing acknowledgement, as your relationship to your supervisor should (idealistically) not play into your evaluation. (I will come back to this in a moment.)
As for your defense (or interview with the jury), I second the already given advice: Avoid appearing to place blame on your supervisor, but focus on the facts instead. Depending on what the mode of the defense is, e.g., if you are mainly asked questions and do not have to freely report on big chunks, you might not even need to address the issues yourself, unless asked. If you are however asked, e.g., why you made some decision on your project and it was due to your advisor commanding this decision, clearly say so. This also applies to the acknowledgement: If you are asked why you did not include one, it is the jury who brings up this topic and not you, and you can thruthfully say that you felt that there was nothing to acknowledge – but never bring on this topic on your own.
Another thing that you should be prepared for: If your internship was sufficiently long, the jury might hold the opinion that it was your responsibility to report severe problems to the university, such that it could assign you a new internship position or similar. Whether this opinion is justified depends on several factors, such as how much time this would have wasted and how high the risk would have been that such a complaint would have backfired at you and so on.
Also, if it’s not too late for this: Talk to your student body. They better know your specific situation than we do and might have experience with similar cases. Also, they have the means to drastically reduce the chances that this company ever gets an intern from your university again (I assume that they keep a list of good and bad companies for such internships).
Finally, talk to the jury (or another appropriate person), after everything is over. They also will have means to drastically reduce the chances that this company ever gets an intern from your university again.
---
¹ And you might enter some euphemism treadmill which ends up in a special acknowledgement language which has nothing to do with actual language anymore, as it is the case for employment reference letters in my country.
² Be aware that there might be not-so-obvious ties between your advisor and members of your university.
Upvotes: 2 |
2014/08/17 | 654 | 2,744 | <issue_start>username_0: I think I have stumbled once again over the meanings of "issue" and "volume".
So I have found [this article](http://link.springer.com/article/10.1007/s12541-010-0035-y) which declares both fields. Now I used JabRef's `DOI to BibTeX` to pull the info and it worked as expected. But it only pulled the volume number.
But sometimes I noticed the DOI database only pulls the issue number for an article, more than only giving the volume number. Why?
I suppose one should prefer using the issue because there are (usually) issues of a journal ("magazine") in a year... right?<issue_comment>username_1: I guess your question is about citing an article. If the article you want to cite is in a journal which have both volume and issue number (this is very often the case). Then you should write both of them.
The issue is the booklet number in which the article was published. Issues are grouped together to make a volume. Often one volume corresponds to all the issues of one given year, but not always. Page numbers usually run sequentially through a volume (issue 2's first page will be numbered one higher than the last page of issue 1, and so on).
Finding the article in a (paper) library is easier if you have both the volume and issue number since you directly know which booklet you need to consult. While helpful, the issue number isn't strictly required in order to find a particular article. Indeed, libraries often bind all the issues of a single volume into a hard-backed book where the page number is sufficient.
Today, with electronic papers, those notions might have lost their meaning, and in the end, the DOI is probably the best way to share a reference. However, it is still customary to provide both the issue and the volume number, and given what they mean, it does not really make sense to have only one of them. The last word will go to the editor of the journal you are publishing the reference in, and the bibliography style may or may not include the issue number. So for your personal bibliography, it seems safer to keep the issue number for the day you publish in a journal which requests it.
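For illustration, a complete BibTeX entry for a journal article would carry both fields; the issue goes into the `number` field. All values below are invented and are not taken from the article linked in the question:

```
@article{smith2010example,
  author  = {Smith, Jane and Doe, John},
  title   = {An Example Article},
  journal = {Journal of Important Results},
  year    = {2010},
  volume  = {12},
  number  = {3},
  pages   = {123--145},
  doi     = {10.1000/xyz123}
}
```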
For this particular paper in JabRef, I don't know why only one gets pulled. Maybe it is a bug, or a database error.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The issue number is useful when pulling a hard copy of the journal before it has been bound. Some libraries wait a year or two before binding. So during that time you have loose issues.
P.s. Some people still use the library, still read hard copies. Not everyone is young and computer oriented--you'll get the wrong perspective if you think the SE demographic represents Academia. Don't assume everyone is using the screen only.
Upvotes: 2 |
2014/08/17 | 2,277 | 9,792 | <issue_start>username_0: What do you do when you've sent a paper on your fancy new algorithm to a conference, and before the conference has replied to you, you spot a newly submitted paper on arXiv on the same algorithm?
Possible reactions I can imagine:
1. You immediately submit your work to arXiv and/or open-source your code to "prove" you were working on it too (or at least as much as that might be worth at this point)
2. You just wait and see if the conference accepts it (but then what?)
3. You withdraw your paper entirely -- you "lost"
4. You totally ignore it -- it's not "official" until it's peer-reviewed, so you might still be "first"
Furthermore, who typically gets credit if:
1. Your paper is accepted, and is first to be published outside of arXiv
2. Your paper is declined, and is *not* first to be published outside of arXiv
---
### Update
I'm reading the other group's paper more carefully (I'd only had a chance to glance at it yesterday, and was alarmed because several of the key words and concepts were exactly the same as ours), and it seems like they might not have discovered the same algorithm after all -- it's difficult for me to tell because their notation and terminology varies considerably from ours, but there's a chance that we've found different algorithms, even though several key concepts are the same. I'll continue looking into it, but just thought I'd mention this to add more context. At least now I'm a little bit more hopeful.<issue_comment>username_1: If your paper gets accepted in this conference, you win. The submission date is before the arxiv uploading and no one can claim you plagiarized the arxiv preprint.
If your paper gets rejected, you probably lost. In subsequent submissions you have to cite the original arXiv preprint, make extra effort and experiments to differentiate your work from theirs (by augmenting your original work), and claim that both works reached those parallel findings independently. Still, this lowers your work's novelty and might lead to another rejection. In that case, the other side might lose too, because your original rejection might also signify that the algorithm is not that seminal or important.
So, you should consider in what ways you can expand your work to actually provide novel content in comparison to the arxiv preprint, in case of rejection. In case of acceptance, you have nothing to worry about.
**UPDATE:** I really liked the other answers. Submitting the OP's paper to arXiv as soon as possible is probably the best thing to do. Also, sharing co-authorship (in case of rejection) is of course the ethical / right thing to do, and that is what the OP should do. But:
* Case 1. There is some foul play on the other side. In that case, they do not want to share co-authorship but patent / steal the idea. In that case, co-operation is not likely to happen
* Case 2. No foul play involved and the OP's paper gets rejected. The other side has already established priority over those results with their arXiv preprint. They may even have already submitted the paper to another conference (many times that is when you upload preprints). Why would they share co-authorship? Would the OP share co-authorship if his paper got accepted? Will he include the other paper in the related work section of his camera-ready version (of course he should) in case of acceptance, when most of the results are identical? According to his comments he is not going to do that (even though he has nothing to lose by it if his paper got accepted). Why does everyone assume that the other side will cooperate? These are serious questions that are easily answered on an ethical basis, but the practical side is always more complicated. And what if the other side is more famous / established than the OP? Sometimes in that case they may even refuse co-authorship on that fact alone. Co-operation and co-authorship usually happen between similar / equal parties, but they are harder to achieve when the other side has more leverage.
I really hope things work for the OP. But if his paper gets accepted he should definitely cite the other work and explain the situation in his camera ready version.
Upvotes: 2 <issue_comment>username_2: It seems to me that the best answer is some combination of 1. and 2. Because you submitted the paper for review before the other work -- call it paper X -- appeared on the arxiv, the community will readily believe that your work does not rely on paper X. (At first I wrote "completely clear that your work does not rely", but that's too strong: it's possible that you had some prior contact with the authors of paper X and learned about their work before it was published. But from your description that didn't actually happen, so no problem there.)
So you are in a fortunate situation: because you submitted the work to the conference before the arxiv posting, you have established your independent priority. The fact that the report hasn't come back yet has nothing to do with that. With respect to the submission, it would be reasonable to just wait for the report -- I am assuming that since it is a conference, it will come back within a month or so? If your paper is accepted, then you should include in the published version and also in your conference talk the information that similar (or the same...) work was independently done in paper X.
However it would be a good idea to write immediately to the authors of paper X and let them know about your work. If you are in a field where the conference paper will be supplemented by a later journal paper, then depending upon the degree of similarity you may want to consider a joint publication. If not, then your journal papers should cocite each other: this establishes that "you both have priority", which is certainly possible, and then both works should be publishable. (But in my opinion a joint paper is the better option if the work is very similar: does the community need two versions of the same work? Can everyone be counted on to know about and value the two works equally? Better to join forces: that seals it.) Depending upon the response you receive and the timing it might be a good idea to post your submission to the arxiv as well, with a note explaining the chronology.
I disagree with both 3. and 4. First, it does not matter who did the work chronologically first but rather that each work was done independently and before the other was *published*. **There does not need to be a "winner" and a "loser" here: you can both "win".** It is good that research communities operate in this way, much better than your option 4.: no one has control over which referee report comes back first or which paper goes to press first or anything like that, so if this were the standard it would be at the very least quite unfair and in fact open to all kinds of ethical issues and abuses.
**Note**: One of the comments asks whether the work was stolen. It seems that the only plausible way for this to happen is for there to be some collusion between the authors of paper X and either the conference organizers or the chosen referees of your paper. This type of behavior is in my experience extremely rare, so I don't want to address it in my answer.
Upvotes: 6 [selected_answer]<issue_comment>username_3: This actually happened with a paper I worked on. We handled it by:
1. Immediately submitting our own version to arXiv, including a short mention of the other paper.
2. Informing the other authors of our result, and offering to write a joint journal paper.
Submitting your own version as soon as possible strongly suggests that it was an independent discovery, especially if the presentation is completely original. It also sends a signal that you're not trying to hide anything.
By writing a joint 'final' version, both parties can share the credit. In our case, the papers had been submitted to different conferences, so we thought a joint journal version would be the most appropriate. In the end, both papers were rejected from these conferences, but the other authors were able to strengthen the original result, while we generalized it. This meant that we were able to write a very strong merged paper, which was accepted to the most important conference in the area.
Upvotes: 4 <issue_comment>username_4: Many theorems, algorithms, fundamental scientific ideas, etc. bear the name of (or are attributed to) more than one person. This does not always happen because these persons worked together. Sometimes it happens because it is established that they worked on the same issue *approximately* during the same period and/or published *approximately* during the same period. An example that I can immediately give from Economics/Econometrics is in the sub-field of Stochastic Frontier Analysis: in 1977 two papers were published independently, laying the fundamentals of the field. Almost 40 years later, they are still mentioned together whenever an author wants to refer to those who initiated the whole thing. These papers are
[<NAME>., <NAME>., & <NAME>. (1977). Formulation and estimation of stochastic frontier production function models. Journal of Econometrics, 6(1), 21-37.](http://www.sciencedirect.com/science/article/pii/0304407677900525)
and
[<NAME>., & <NAME>. (1977). Efficiency estimation from Cobb-Douglas production functions with composed error. International Economic Review, 435-444.](http://www.jstor.org/stable/2525757)
Your algorithm and the other algorithm may be "cousins", and the existence of both may have positive externalities on the research and professional paths of all involved, since it makes for a more vigorous "look here!" shout to the scientific world. I would even consider *promoting* the other paper alongside yours.
Upvotes: 2 |
2014/08/17 | 682 | 2,795 | <issue_start>username_0: I have coauthored two papers, which are strongly related. The younger paper thus cites the first, and the older paper announces the second along the lines of “a study of aspect X will be published elsewhere”. As arXiv enables you to update your papers, it would be possible to include a citation to the new paper in the old paper after the aforementioned sentence. This might save a reader of the old paper some time in finding the new paper.
However, this breaks some paradigms that were inherently fulfilled by any pre-internet citation, i.e., that you could not cite future work¹ and that there are no loops in citation graphs (i.e., there can be no papers A₁, …, such that A₁ cites A₂, which cites A₃, which cites …, which cites A₁). Thus I find it conceivable that such a citation into the future may cause some problems, for example some weird software behavior (ignoring for the example’s sake that this would arguably be the software’s fault).
**Is there any such issue, which would make the aforementioned citation into the future a problem?**
---
¹ Of course, *will be published elsewhere* existed before, but it could not be accompanied by a regular citation.<issue_comment>username_1: Well, an arXiv paper is on the same level as any other preprint. When a paper is on arXiv, it's not really different from putting it on your personal website. Hence I don't see any problem here.
Upvotes: 2 <issue_comment>username_2: Occasionally two related articles are published simultaneously and cite each other, so loops in the citation graph are OK. See for example [this article](http://www.the-scientist.com/?articles.view/articleNo/40047/title/Simultaneous-Release/) in The Scientist which describes two papers which do so. I was able to verify that they both cite each other through my university's library.
Upvotes: 5 [selected_answer]<issue_comment>username_3: I understand that when you are "updating" your paper, you are creating a new version, in the same way as a new edition of a book or a republication of the article in another journal or venue. If memory serves, arXiv keeps track of the version numbers. Does it not?
The problem is: a 2nd edition should be cited by its new publication date and location, the same way we do with books. When a republication happens in a journal there is usually a note in the header giving the original publication date and location of the article. I do not think this is an internet-related phenomenon. Version tracking is common in books. Republications are not common in printed journals, but they do happen.
In short: I do not see any problem. You are actually citing the past in a new edition. I think a note in the header of the document explaining this is always a good idea.
Upvotes: 0 |
2014/08/17 | 678 | 2,735 | <issue_start>username_0: I have completed my MD. I am more interested in research than in clinical practice. I know there are post-doc positions available which accept post-MD candidates. Still, I am thinking of doing a PhD in my specialty of interest. However, I would like to know what the benefits are of pursuing a PhD after receiving an MD. Is it really necessary to obtain a Ph.D. degree for a research career?<issue_comment>username_1: Look at it this way: an MD says you are qualified to practice medicine, and a PhD says you are qualified to do research. In theory you *could* practice medicine without an MD (it'd be illegal in many countries, but not having an MD does not mean you don't have medical knowledge), and likewise you *could* do research, especially part-time, without a PhD.
Leaving semantics aside, having that degree significantly increases your chances of pursuing a career in a particular field. If you intend to leave the clinic and devote your time and energy to research, I'd say that a PhD would be expected when you look for jobs. Whether or not you can bypass that expectation with other qualifications would be speculation.
As it turns out an MD & PhD combination is... uhmmm how to put it delicately.. "[so money](https://www.youtube.com/watch?v=ZlEXOzC6vqE#t=35)". :)
I say you put your hat in front of you, figure what you want to do, and go for it. Many doctors here (in Sweden) go for a double degree and it is definitely not something that's frowned upon. On the contrary, in many cases it is an additional merit for future promotions.
Upvotes: 2 <issue_comment>username_2: It very much depends on your country and research field. Technically speaking, many medical doctors affiliated with hospitals do research and publish the results, so it is definitely not impossible. However, you should ask yourself whether you have enough technical knowledge in the field you want to go into, and whether you have enough experience in setting up research programs, writing proposals and papers, etc.
Upvotes: 1 <issue_comment>username_3: I'm an epidemiologist, working with a large number of research-oriented MDs, as well as a number of MD/MPH, MD/PhD and PhD researchers.
In terms of an MD vs. an MD/PhD...
I haven't met anyone who's been held back by doing research with just an MD, but my field admittedly somewhat exalts clinical expertise. The biggest advantage of a PhD is that you will have spent a great deal of time doing *research* instead of clinical practice. If you want to be an expert in methods for research and analysis, that's where a PhD comes in. The MDs I work with often end up deferring to a PhD (like me) when the research goes from the high level to genuinely in the weeds.
Upvotes: 2 |
2014/08/17 | 1,085 | 4,582 | <issue_start>username_0: I am studying computer science and I have money to pay for my university study, but I do not have money to spend on entertainment such as going to cinema with my friends in the weekend or upgrading my laptop. I have applied for a job at McDonald's. My shift is 4 days per week with 7 hours per day.
I am scared this would affect my grades. How can I balance my studies with the part-time work I must do in order to improve my living standards or afford a social life?<issue_comment>username_1: [This study](http://www.nber.org/papers/w1742.pdf) suggests that employment outside the university (especially in excess of 25 hours per week) has a negligible adverse effect on GPA, but a significant adverse effect on the probability of a student's continued enrollment. A second paper ([here](http://www.nber.org/papers/w14006.pdf)) suggests that hours worked **do** have a significant negative effect on student grades.
>
> ...results show that an additional weekly work hour reduces current year GPA
> by about 0.011 points...
>
>
>
A personal anecdote: When I attended the mandatory freshman orientation seminar during my first semester as an undergrad, they recommended *no more than fifteen hours per week*, suggesting that anything more than that would tend to have a negative impact on our grades. I do not now recall whether they also provided references to back up their claims...
Of course, this is a general guideline, and an individual student may well be able to handle more. I know some students who were somehow able to juggle a full course load, a full-time job, AND major family responsibility, but those paragons are very rare!
Upvotes: 3 <issue_comment>username_2: I worked full time for the first year and a half and part time for the remaining year and a half of my undergraduate career in Computer Science. It certainly meant that I had less time and flexibility but, ultimately, I think it made me more marketable and a bit better prepared for the working world than many of my peers. That being said, there's a fundamental difference between my work and your work - my work was within the realm of Computer Science.
Having a job, any job, is a necessity for many students. I would recommend, though, trying to find a job that's going to benefit, even if in a small way, your studies. Many schools have IT departments that hire undergraduate students as part time help. Many schools have departments that have an independent IT department, separate from the main IT department. Research schools tend to have infrastructure related jobs and internships. University towns often have businesses that are happy to have future CS graduates working in their IT or programming departments. If you have been in school for a bit you should know some of your professors - there are often paid jobs in research labs all over campuses. They aren't advertised but good relationships with professors will open a lot of doors.
I would try to get a job that's going to have long term benefit for you. Having a high level of confidence and comfort around IT business practices makes you a far more marketable computer scientist than someone who can code but is afraid to install an operating system(for example). Knowing the vocab and jargon will make you more marketable. Having a job, of any sort, with references will help when it comes time to start looking at what happens after you graduate.
As to whether a part-time job will affect your grades... it will, at least a little. Instead of spending 40 hours on some projects, sometimes I only had the time and energy, a finite resource to be sure, to spend 30 hours. I ended up graduating with a very good GPA (honestly, I think being involved in research affected my GPA more than my job). But having a job, especially a challenging job, in addition to being in school will mean you will have to be more organized than many of your peers. It means you'll have to be on top of your school work and that last-minute all-nighters will be pretty detrimental.
I don't think there's an all-inclusive answer for whether people should work during school. In my own hiring and looking-to-be-hired experience students who are employed during their schooling tend to be more reliable, more marketable and a more 'sure thing' than their counterparts who did not. But, as mentioned by username_1 in their excellent answer there are sacrifices involved. As a CS student I would be putting a lot of effort in moving beyond a 'mcjob' and into something at least slightly relevant.
Upvotes: 0 |
2014/08/18 | 592 | 2,520 | <issue_start>username_0: When I am reading papers, sometimes I see statements like
>
> Manuscript received October 29, 2012; accepted March 16, 2014.
>
>
>
Does this imply that the paper was directly accepted?<issue_comment>username_1: Yes, it should mean the paper was accepted with no substantive changes. Otherwise, it would say something like "revised June 1, 2013" in between the submission and acceptance dates. If there were several revisions, then only the final revision date is recorded.
In my experience, journals can be a little sloppy about this and allow small changes (such as typo fixes or minor rewording) without such an indication. Perhaps they should be more careful, since following an unambiguous rule has its benefits. However, this doesn't seem to be considered a big deal if the changes aren't substantive.
There might be some journals that never indicate revision dates. I think that would be pretty nonstandard, but there are a lot of journals out there and diverse practices in different fields, so it's hard to say for sure. (There are certainly journals that don't even show submission dates in the first place, but that's a different issue. The weird part would be highlighting the submission date while ignoring substantive revisions.)
I've heard stories about journals asking authors to submit a revision as a new submission (see, for example, [this blog post](http://dynamicecology.wordpress.com/2014/03/12/tell-me-again-what-major-revisions-are/)). In that case, the "manuscript received October 29, 2012" might be disguising the fact that there was an even earlier version (to make it look like the journal handled the paper more quickly). However, I've never seen any evidence of this myself, or heard these stories in mathematics.
Upvotes: 2 <issue_comment>username_2: Not necessarily. Some journals only state dates for submission and acceptance. In such cases dates for revisions are not mentioned. Since it is not uncommon, at least in fields with which I am familiar, for papers to go through two (or sometimes more) revisions, a date for revision usually refers to only one, the first set of revisions.
I can also confirm username_1's notion that some journals only accept papers that receive minor revisions and reject those requiring major ones, albeit with the explicit understanding that the paper should be resubmitted once revised. This officially shortens the time from submission to acceptance, but it is of course a questionable way to manipulate such statistics.
Upvotes: 3 |
2014/08/18 | 1,652 | 6,893 | <issue_start>username_0: I just came across a paper published in a journal (IF < 2), which has used a couple of images without mentioning the source. This itself is not necessarily a problem. But they seem to be from a commercial software product I'm familiar with (they haven't mentioned even the name of the software, though it generated the image), and based on my experience with this software, the image looks to me like it has been tampered with.
Specifically, it is an image where the software predicts properties of a compound, which is used to verify the result. Now, let's say some parameter has a cutoff at 0.5% and they are getting 10.3%. To make it conform to their result they removed the '1', so it became '0.3%'.
Emails to the corresponding author came back empty.
Should I report that to the journal or leave it as it is (may ruin someone's career)?
**They have used the image to prove the result of the experiment.**<issue_comment>username_1: This could be a big deal and something you should report and, alternatively, it might not be a big deal or reportable. The difference comes in the context.
Are the edited images presented as legitimate data or results that are the raw output of the experiments/research? By which I mean, do they say something like "In image blah taken by an electron microscope you can see that the magical unicorn bonds have been created by our process." (Keeping in mind that I know nothing about bio-chem and 'magical unicorn bonds' is a stand-in for some actual process.) Or, in CS, something like "Here you can see the robot we built." Statements like this imply or outright state that the object or information in the image is not just representative data but actual results or output. Data like this should not be manipulated or edited except for clarity (circling a targeted area or adding minor labeling).
The other kind of image is a bit rougher. These images can demonstrate what was expected to be seen, abstract output from the research, conceptual information. These kinds of images often are edited or manipulated. Sometimes as a demonstration of what was expected("We would expect magical unicorn bonds to appear after our procedure but instead....") or as an explanation of something more abstract("the robot should follow the optimal path as shown here when it uses the stairs instead of running repeatedly into a wall"). These are things that are no reportable. They can be in poor taste and they absolutely should be caught by reviewers if they imply results beyond the scope of the actual research. But, in some fields, these are the best way to demonstrate expectations, abstract information or background information.
All that being said - when you say the image is from "commercial software", that makes me wonder whether you mean not that it was created with commercial software but that it is actually an image from some commercial source. In this case the image may be copyrighted and it may not be appropriate, at all, to use in this research. This will depend on the image, the source and the 'tampering'. As a counterexample to this, in the realm of computer vision everyone uses the Stanford bunny model in their publications. It's a thing. This is not inappropriate. Someone using an image from a textbook, however, or from a Google search that they do not own is inappropriate, and it should first be reported to the PI of the paper and, potentially, to the publisher if no action is taken.
Upvotes: 5 [selected_answer]<issue_comment>username_2: It may not be as bad as it seems. I often have to edit images from commercial software packages because the text is too small (or blurred) to read in the print version. Usually it's the axes that I have to fix because many packages make the fonts too small. The authors in this case may have simply overwritten them with the same values, just in larger text or in a different/emphasizing font or color.
Upvotes: 3 <issue_comment>username_3: Manual alteration of figures aimed at deceiving readers (by fabricating, obfuscating, 'cherry picking' results, etc.) is a **serious matter**. It surprises me that people still try because it's very often quite easy to spot (other types of data fabrication are harder to catch). But it's sadly not uncommon, even in highly regarded journals and from researchers from reputable institutions.
For life sciences, according to one of pubpeer's moderators in [a comment](http://blog.pubpeer.com/?p=164#comment-1534840179): 'Most of the life science reports involve image manipulation - a good majority are gels, with a bunch of duplicated specimen images as well. […] we see sometimes on PubPeer […] things like doctored NMR spectra in chemistry.' The latter seems to be related to your observation, although it's outside my area of expertise.
You can get a sense of the type of things that are reported on PubPeer reading [this thread](https://pubpeer.com/publications/058CFA77EAF6D5E019D9902C6B3553) among many others. As an example, a close examination of this figure shows the use of copy paste to fabricate data:

Or this one, initially published, not in your average pay-for-publish shady 'open access' journal, but in *Nature*, that has a rather obvious copy-pasted middle panel:

(Both articles were retracted.)
In your position, your first reaction is totally appropriate. Here is what I think is the best course of action:
1. **Contact the author(s)**: it is a good way of showing that you are concerned but not necessarily interested in public shaming. If the authors do not react, then:
2. **Contact the publisher** (mentioning that you already contacted the authors to no avail) as suggested by @alarge in a comment. If it's a reputable publisher, the issue will be taken very seriously.
3. If all else fails, you are left with publicly reporting the issue, anonymously or not, via social media or the website listed above. Note that you are always at risk of getting yourself in trouble when reporting misconduct, the same as in any other field, so weigh this risk if you intend to associate your name with the complaint.
EDIT:
>
> Should I report that to the journal or leave it as it is (may ruin
> someone's career)?
>
>
>
I think that, as a scientist, you have a responsibility to report that sort of misconduct when you see it. What *may* ruin someone's career is their sloppy ethics, not your concern for integrity.
Upvotes: 5 <issue_comment>username_4: First you should discuss the matter with a trusted colleague or two to check that they agree with you. This is a heavy accusation and before doing anything you should check that you haven't missed anything and that there aren't other plausible explanations. You may very well be right, but it's also easy for one person to make an error without outside feedback.
Upvotes: 2 |
2014/08/18 | 812 | 3,356 | <issue_start>username_0: I am applying to graduate school this winter, and I will finish undergrad this winter as well. Would it be unethical to take a full-time position now, with the intention of quitting once I know whether or not I have been accepted to graduate school?
I am considering this because if I didn't get accepted, I wouldn't have a gap before I start working, and the company is a good brand, which would certainly not harm my PhD application (but this shouldn't be a discussion of what prepares me well for graduate school). My plan is to go back to industry after a PhD, so considering my reputation in the field and possible research jobs later on in the same company, how could I justify leaving to get a PhD? Or would it be better to decline the offer and wait for the admissions result?<issue_comment>username_1: It wouldn't be unethical for your boss to fire you 8 months after you take the job, if he wanted to. Labor is an exchange you and your employer enter into voluntarily. So, you shouldn't feel bad quitting after 8 months if you want to. It's not an ethical problem at all. It is more plausible to think there might be a *personal* problem if this is a small industry that you would plan to re-enter after finishing the degree. Don't tell your boss up front that you might quit in 8 months; that isn't any of his business. If you do accept an offer to do a PhD somewhere, then you should tell him promptly, just as a matter of courtesy. This is to allow the company adequate time to find a replacement.
Upvotes: 6 [selected_answer]<issue_comment>username_2: As a counterpoint to Shane's answer, I would say it strongly depends on what you agree on (explicitly or implicitly) with the company.
* If they are actively aware that you are looking into getting accepted for a PhD position (that is, you told them so), and they are ok with it, it is clearly fine.
* If they are not aware of it, I would say it largely depends on which job you are supposed to be doing. If it is one where the company is investing a lot of money into training you (e.g., a management trainee program, or they are ramping you up to work on their terribly complex main product), quitting after 8 months is of course still *legal*, but you should not be surprised if this company is not keen on working with you ever again.
* If they specifically tell you that they are expecting you to stay for the long haul and you lie to them (or, only very slightly better, don't tell them otherwise), the ethical question is pretty much undisputable in my book. Mind, you are still legally safe to quit, but I would argue it is definitely not ethical to do so.
Upvotes: 4 <issue_comment>username_3: No.
You do work, you get paid. You owe your company what it says you owe in your contract. I very much assure you that your employer won't worry about going beyond their contractual obligations.
Upvotes: 2 <issue_comment>username_4: You could also try to find out if it's possible to do a PhD while working at said company. This can be a much better option than going to graduate school: getting a proper salary while gaining work experience while completing a degree is pretty nice. The one downside is that you'll probably be doing a lot more than just the very specific stuff related to your degree. Well, that's not necessarily a bad thing...
Upvotes: 3 |
2014/08/18 | 323 | 1,467 | <issue_start>username_0: I got two papers accepted in a decent IEEE conference. My current organisation (an industrial R&D lab) says that the conference fees are way too high and they would consider approval of funds only if we get a rebate for publication.
I have two questions:
* Is it normal across the globe to ask for a rebate?
* Even if I do so, would it be considered professional? (The conference already has a discounted price for the second paper since authors are common in both.)<issue_comment>username_1: As the commenters write, this is quite unusual.
However, if the conference fees were not published before the paper submission deadline, you could try it out *if* you get the financial constraints imposed by your lab in writing. While strictly speaking, the organizers could consider this to be unprofessional, it is in that case still somewhat clear that the problem is not really your fault.
Upvotes: 3 <issue_comment>username_2: It is unusual, and it is likely that the organizers would see it as unprofessional, especially if such a request comes from industry.
How valuable are these publications for your employer? You could try to speak with someone at a higher level and persuade them to pay the conference fees. Alternatively, they may advise submitting the papers to another conference whose fees they are willing to pay. Or, if publishing benefits you more than the employer, you can perhaps cover the difference yourself.
Upvotes: 3 |
2014/08/19 | 1,592 | 6,677 | <issue_start>username_0: Let me summarize the complicated state of affairs, as follows:
This summer, I was cooperating with another team on a spin-off project; we decided to publish the results, so I prepared the paper.
In the author list, I added my supervisor's name. I think this is usual, because I am his student and whatever I've learned came from him.
But when the other team's supervisor saw this, he asked me to drop my supervisor's name from the list. I sense that this is not only because he is not involved in the project, but also because of some political issues.
Dropping my supervisor's name makes me feel bad, like betraying him.<issue_comment>username_1: *Since you haven't explicitly mentioned which academic discipline this concerns, and as EnergyNumbers mentioned in the comment, different disciplines may have different conventions, I'll answer in terms of my discipline, (Theoretical) Physics. I would be horrified if it so turns out that things are any different in any other scientific discipline (at least)*.
Whenever an article gets submitted for publication, one has to''accept'' a declaration that all those people who made significant contributions were (at least) offered co-authorship. If they decline to be an author, that's a different story. As PVAL mentioned above in a comment, it is academic dishonesty if someone is sitting in the author list without having contributed anything significant to the investigation. Even if you discount the ''*politics*'', as you mentioned, the other team is frowning because your supervisor is ''*not involved in the project*''. If that means that he only offered occasional advice, perhaps born out of his experience with doing similar things, (''*have you tried ...*'' variety), then the right place for him is in the acknowledgements section. Just mention, ''*we thank [his name] for helpful discussion/ inputs* ...'' etc. But if it means that you want to include his name only because ''*what I learned came from him*'', I'm afraid I have to point out that authorship is not a Christmas card. (By the same token, why not include your parents, or your spouse, or your high school teachers - you owe a lot to them also :P). It is WRONG to include any person who didn't contribute TO THE INVESTIGATION, howsoever highly regarded he may be in your personal life.
But having said that, here's some seasoned advice - Go and talk to your supervisor in private and explain the situation. Ask *him* whether or not you should include him in the list, fighting opposition from the other co-authors. It is possible that he may have been in this situation before (*whichever side*), so he will show you the light. One-to-one dialogues go a long way in resolving these sort of harmless dilemmas.
Besides, that will serve another purpose - you will *show* him that you ''*respect*'' him so much that you want to gift him authorship in an investigation that he hasn't contributed to at all :P. (I expect that any sensible man would decline in this situation).
Upvotes: 4 <issue_comment>username_2: Yes, in many fields, it is very much expected that your supervisors' names will appear on papers written during your study, because in those fields, much of the substance of your work comes from your supervisors.
You're right that politics can be involved too.
Do bear in mind that different fields have very different conventions about what "authorship" means. Quite a few pharma trial papers have almost all their words written by ghost authors who do not appear at all in the list of authors. No doubt this will horrify some readers here who consider their own field's conventions to have some sort of objective purity, despite them being just as much a negotiated compromise as any other field's. In different academic disciplines, different types of contribution (data collection, analysis, writing, thinking, editing) may each earn co-authorship, acknowledgement, or money. Furthermore, the concepts of, meanings of and interpretations of *contributorship* and *attribution* within scientific publishing are in flux at the moment, evolving and trying out new forms - see [discussions](http://scholarlykitchen.sspnet.org/2014/04/22/when-a-scholar-is-one-among-500-what-does-it-mean-to-be-an-author/) at The Scholarly Kitchen and other places, .
So don't get too hung up about some people's ideas of what being a named author means.
Do discuss it with your supervisor(s). Find out what the conventions are for your field, and for your target journal in particular. And in general, don't add someone as author without having discussed it with them first.
Upvotes: 4 <issue_comment>username_3: We get inspiration and ideas from all over the place, but authorship implies something more specific and tangible.
If your supervisor did not make a contribution commensurate with authorship, he/she should not be named as an author (the fact of your being his/her student is irrelevant). If you feel compelled to acknowledge him/her, then use the **Acknowledgements** section or a footnote to do that (e.g.: "I am grateful to Prof. <NAME> for her guidance and for brainstorming a few ideas relevant to this article. Her feedback helped me bring the conclusion into focus.").
Upvotes: 0 <issue_comment>username_4: In a theoretical world, I would favor cutting advisors from a lot of papers. In theory you make a contribution, not just management. But in reality, we know their role is often to raise funds and run a little kingdom. The level of contribution to the actual work would not merit authorship were they a student (or a manager in a company).
However, we have to live with the practical world and in some fields, it is just expected that they get their names on there. Heck, it would probably hurt them with funding, promotions, etc. not to be listed. And I don't mean that in the sense of them getting something they don't deserve. I mean that in the sense that it is expected to see their names on the papers when looking at productivity of the "kingdom".
This other project doesn't seem like they are paying for you (and yes, payment is not authorship...but in the practical world...it sort of is). Also you say the other project is a spinoff. That sounds separate, but still somehow connected. It's not like you finishing up a paper from before you got to your new posting.
P.S. Please stop referring to yourself as a postgrad "student". Even though you are a peon, as a Ph.D. you are considered at least a worker, not a student. (Even the real students work more than they study, but that's a story for another day, with many sad issues like workers' comp for accidents.)
Upvotes: 0 |
2014/08/19 | 1,160 | 4,214 | <issue_start>username_0: I have many contributions related to interviews on radio, TV, web magazines and print magazines, and would like to know the different ways they might be "cited" in a scientific CV. I was thinking about adding them at the end of my CV in some section called "dissemination of research results" or something like that.<issue_comment>username_1: "Media Appearances" sounds better to me. Then I think I'd cite each appearance as if it were a conference talk:
>
> "How to Solve the Problem of Evil," *WFUV Radio,* 29 February 1904.
>
>
>
or
>
> "Why I am So Clever," *The Today Show,* National Broadcasting Corporation, 9 August 1999.
>
>
>
Maybe insert some descriptive language if the citation is unclear.
Upvotes: 4 <issue_comment>username_2: Interview section in your CV
----------------------------
If you have a significant number of interviews, or you simply want to include them in your CV, I did not find any specific guideline requiring interviews to go in a separate section; but you can open a section such as `media appearances/interviews` and list your interviews there.
Citation style
--------------
By [googling your question](https://www.google.com/search?&q=interview%20citation), I found many related links in which the citation of interviews is clearly presented. I summarize some of them here for you. One good resource for the citation of interviews is the [BibMe website](http://www.bibme.org). On this website, you will find the format, examples and notes on correct citation.
[MLA](http://www.bibme.org/citation-guide/MLA/interview)
--------------------------------------------------------
>
> **PUBLISHED/BROADCAST INTERVIEW:**
>
> Last Name, First Name. Interview by First Name Last Name. Publication Information. Medium.
>
>
> **PERSONALLY CONDUCTED INTERVIEW:**
>
> Last Name, First Name. Interview Type interview. Date Interviewed.
>
>
>
[Chicago](http://www.bibme.org/citation-guide/Chicago/interview)
----------------------------------------------------------------
>
> **PUBLISHED INTERVIEW FROM PUBLICATION:**
>
> Last Name, First Name. Interview with First Name Last Name. Publication Title. Publication Information.
>
>
> **PUBLISHED INTERVIEW FROM RADIO/TV PROGRAM:**
>
> Last Name, First Name. Interview with First Name Last Name. Program Title. Network, Call letters, Date Interviewed.
>
>
> **UNPUBLISHED INTERVIEW:**
>
> Last Name, First Name. Interview by First Name Last Name. Interview Type. Location, Date Interviewed.
>
>
>
[Turabian](http://www.bibme.org/citation-guide/Turabian/interview)
------------------------------------------------------------------
>
> **UNPUBLISHED INTERVIEW:** Last Name, First Name. Interview by First Name Last Name. Interview Type. Location, Date Interviewed.
>
>
>
[APA](http://www.bibme.org/citation-guide/APA/interview)
--------------------------------------------------------
The above link provides no guide for citing in APA format, stating only `Interviews are not supported in bibliographies by APA. Please cite it as an in-text citation.`; however, I have found the following guide on the [apastyle](http://www.apastyle.org) website for the [citation of interviews](http://www.apastyle.org/learn/faqs/cite-interview.aspx).
>
> An interview is not considered recoverable data, so no reference to
> this is provided in the reference list. You may, however, cite the
> interview within the text as a personal communication.
>
>
>
How to easily cite your documents?
----------------------------------
By the way, I encourage you to use software like [JabRef](http://jabref.sourceforge.net), [Zotero](http://www.zotero.org) and [Endnote](http://endnote.com) to prepare your citations and references. You may also use online websites like [easybib](http://www.easybib.com/mla-format/interview-citation) and [citethisforme](http://citethisforme.com) for easier preparation of your citations.
Upvotes: 4 [selected_answer]<issue_comment>username_3: I advise you to use the citation tool at <http://www.calvin.edu/library/knightcite/>
If it shows an error, search for KnightCite in their search bar... the tool helps you cite your work properly.
Upvotes: -1 |
2014/08/19 | 913 | 2,794 | <issue_start>username_0: This is borderline trivial, but in my attempt to publish my work in a public repository, I've found badges at the top of my `README.md` to be useful. For example, using [Zenodo](https://zenodo.org/), I can create a badge that points to a proper DOI that looks like this:
[](http://dx.doi.org/10.5281/zenodo.11304)
*Encyclopedia of Finite Graphs*
If I have a critical piece of code, I can publish [`Travis.CL`](http://docs.travis-ci.com/user/status-images/) badges or [`Coveralls`](https://coveralls.io/) for code coverage.
Is there an equivalent badge or icon I can use to visually indicate that the work has been published on the [`arXiv`](http://arxiv.org/)?<issue_comment>username_1: I don't think that there is one. Also, I think you will want to make one that visually matches the ones you already have.
There's an XCF (GIMP) file for the arXiv community ad:
<http://meta.math.stackexchange.com/a/11924/43247>
I think it could be helpful to you.
Upvotes: 2 <issue_comment>username_2: I found a service that can create custom badges: [`shields.io`](http://shields.io/). Using the arXiv background color (Firebrick `#B31B1B`) I was able to create a badge that looked more or less "official". An example of their template and my specific use case:
```
http://img.shields.io/badge/<SUBJECT>-<STATUS>-<COLOR>.svg
http://img.shields.io/badge/math.CO-arXiv%3A1408.3644-B31B1B.svg
```
After converting the svg to png for use on github, I got this:
[](http://arxiv-web3.library.cornell.edu/abs/1408.3644)
Upvotes: 5 [selected_answer]<issue_comment>username_3: To add on to the [OP's own answer](https://academia.stackexchange.com/a/27376/122278), it is also possible to use a logo from [Simple Icons](https://simpleicons.org/) (which has an [arXiv icon](https://simpleicons.org/icons/arxiv.svg)) in a badge generated by [shields.io](https://shields.io/). A number of other parameters can also be specified by adding keyword/value pairs after a `?` in the request URL:
`https://img.shields.io/badge/<subject>-<identifier>-<color>?logo=<logo>&logoColor=<logoColor>`
Omitting any parameter will set it to its default. Some other possibly useful parameters include `logoWidth` (specified in pixels), `style` (default is `flat`), and `labelColor` (to change the color of the lefthand portion of the badge from the default `gray`).
Using `math.CO`, `1408.3644`, and `b31b1b` as the subject, arXiv identifier, and primary color, and setting `logo=arxiv` and `logoColor=red`, we get a badge that looks like this:
[](https://arxiv.org/abs/1408.3644)
Of course, you can compose it however you'd like in practice.
Upvotes: 2 |
2014/08/19 | 6,346 | 27,933 | <issue_start>username_0: I am preparing a lecture course for this semester and I am planning to teach from slides. I will consider these slides my notes for the course. However, the following points may also apply to handwritten notes that a lecturer might use as a reference.
I can see several advantages and disadvantages associated with providing the students with the notes for a given lecture prior to that lecture:
Advantages:
* Students have to write less (because the core of the material is already present), so more material can be covered;
Disadvantages:
* Students may lose focus more easily when they have digital notes because they do not have to copy everything down (this was my experience as an undergrad);
* Students may choose not to attend class because notes are available elsewhere (of course, they may choose to do so anyways...).
One alternative I have considered is to distribute all of the notes relevant to an exam some suitable time period prior to that exam. However, this may reduce the effectiveness of the advantage, although it will mitigate to a certain extent the disadvantages. So, my question is as in the title: Is it common for professors to distribute digital notes to the class? If so, what methods are common?<issue_comment>username_1: Yes, it is common, at least at the freshman and sophomore level. In my experience, few professors provide digital notes (or digital copies of lecture slides) in the upper levels of undergrad teaching.
I always appreciated those who did so, because I was usually able to take better notes due to not having to write as much. However, recall was usually not quite as good, for the same reason. (FWIW, I almost never skipped lecture, whether or not there were digital notes available).
As for what methods are common, I've seen three main methods, mostly differing as to *when* the notes are made available to the students.
1. Post all slides at the beginning of the course. This has obvious shortcomings if you change your lecture slides in any way--and good teachers almost always do!
2. Post shortly before the class session in which the material is presented. Either immediately before (< 4 hours) or several sessions in advance. This seems to work best, because students have time to print them out to take notes on, if that's what they prefer. You can also tweak your slides as needed without confusing your students.
3. Post lecture slides immediately after class. This seems to me the least optimal, in that students cannot read the slides ahead of the lecture or use them to take notes on, but could potentially still use this as an excuse to not come to lecture.
Upvotes: 3 <issue_comment>username_2: I used to distribute slides before class, and found that the disadvantages outweigh the advantages. I now distribute them shortly after each class. I tell students to jot down important points during the class, but not to worry about things like lists because they'll have the slides available. I also suggest that they merge their classroom notes onto the slides. The ones who listen to me tend to do very well.
I base my decision on the premise that learning should be effortful. I think, hope, and expect that providing slides after class is a compromise between no slides and slides before class.
As an aside, I also record my lectures and make the podcasts available. I do that because I have students for whom English is a second language, and non-traditional students who are sometimes called away for work. I'm a little torn about the recordings because some students *do* try to use those as a substitute for coming to class. I console myself with the thought that they were destined to fail the course anyway and would do even without the recordings.
Upvotes: 7 [selected_answer]<issue_comment>username_3: Different students have different needs. If you make notes available, students who benefit from not having notes have the option of not using them. If you do not make notes available, students who need them have no option.
I write fairly slowly, and do not seem to be able to listen and think at the same time as writing. For me, needing to write notes during a lecture drastically reduces the benefit of being there - the only product is what I manage to write down of whatever was written on the board, with no gain in understanding until I study the notes and text book afterwards.
You may have a student in your class who is not hearing what you are saying whenever they are writing.
Upvotes: 5 <issue_comment>username_4: I've seen lecturers print out copies of the slides, which the students would pick up as they entered the room, so they could make notes on them.
It requires the students to attend, but you could also say that you can collect copies in your office if people can't make it.
People sometimes picked up copies for friends, but it generally meant that people attended the lectures. It's really useful to have the slides to make notes on directly.
Kills a lot of trees though.
Upvotes: 3 <issue_comment>username_5: To answer the main question, yes it is common.
I guess it's a matter of your personal idea of your role as a teacher. My approach is to do everything I can to make the material easily accessible to the students. Some students don't want to or can't attend classes; they have their own priorities.
Of course teachers are by no means obliged to take these cases into consideration, and I'm perfectly fine with courses taught with nothing more than the blackboard.
But if *I create slides anyway*, I see *no sane reason* to restrict access to these before, during or after the courses are taught. I will also mention textbooks, articles and other sources I used to create the content.
Upvotes: 4 <issue_comment>username_6: In the university that I attend, almost all of our professors provide digital notes. I believe this is a good practice because many students can't take notes and comprehend what the professor says at the same time. As a countermeasure for the second disadvantage you mention, professors often give some extra techniques and explanations that are not included in the digital notes.
Upvotes: 1 <issue_comment>username_7: When I last taught big lecture classes, I would distribute pre-class and post-class versions of my slides.
The pre-class versions would go online the night before. They would have fewer steps worked out (I teach math), some solutions left blank for students to ponder in advance, and jokes subtracted. They would be laid out three slideframes to a letter-sized page, with ample space for annotation. So they could print them out and take them to class.
The post-class versions would be the complete slideshow from class, with worked-out solutions, jokes, and errata included. They would go up at the end of the day of class (depending on how much I had to fix). They were not laid out to be printed; it was just one slide per page of PDF.
Upvotes: 2 <issue_comment>username_8: I don't know how common it is, but I can tell you that in the classes I teach (general chemistry), I use a combination of lecture slides and writing on the board. Here's a rough breakdown of my approach:
* Approximately one slide per major concept, with very limited text (mostly definitions) and images/graphs to explain the concept. I make these available online (they can print before class if they want to)
* I write quick outline notes of major topics on the board before starting the section or chapter, then back-fill the details during the lecture, and give a review summary at the end.
* For problem-solving, I use one or more slides with an example and outlines of the steps, then demonstrate on the board using the same example. I follow that up with a different example, and finally have the students work through a problem on their own in class.
Using slides in this way works well for me because it allows me to spend less time writing and drawing on the board, and more time engaging students with discussion and question/answers. I think it helps the students as well - they are free to pay attention to what I'm saying, rather than just trying to copy down everything I write. Once they realize that they only need to write down the extra details, the lectures become more fun and I think they get more out of them. Since the slides are always available online, they can print them out whenever they like, or just look at them on their computers/tablets/phones.
Writing the summary notes and outlines on the board helps me organize the lectures, keep things on track, and prepare the students to pay attention. It also gives those with less-well-developed note-taking skills a template to use.
By combining the two, I think this approach addresses both of the disadvantages you brought up:
>
> Students may lose focus more easily when they have digital notes because they do not have to copy everything down (this was my experience as an undergrad)
>
>
>
Since the outline summary notes are on the board, students have something to write down, and I can use their tendency to automatically copy whatever I write as a way of getting their attention if they seem to be "zoning out".
>
> Students may choose not to attend class because notes are available elsewhere (of course, they may choose to do so anyways...)
>
>
>
Since the slides are mostly graphical illustrations with limited text, there is still an advantage to coming to class. I do want to point out that I believe it's part of my job as a teacher to make coming to class valuable for reasons besides just getting a copy of the notes, but, I recognize that for some students who are good at self-study, that might not be the case. I try to follow the textbook pretty closely, so students who can't make it to class for some reason are still able to catch up pretty easily between the book, my slides, and notes copied from classmates.
Upvotes: 4 <issue_comment>username_9: Some background: In Finnish universities the mentality usually is\* that student should have the responsibility for their studies and learning (from what and how many courses to take each year to how to study course material). For example, it is very uncommon that attendance at lectures is mandatory, and even mandatory weekly exercises are not very common. In that sense, providing lecture notes has several advantages:
* Students have to write less (because the core of the material is already present), so more material can be covered
* Students may focus [sic!] more easily when they have digital notes because they do not have to copy everything down (this was my experience as an undergrad)
* Students may choose not to attend class because notes are available elsewhere (of course, they may choose to do so anyways...)
\*or at least it has been in the recent past; some things seem to be changing
---
Some lecturers provide the notes only after the lectures, because students are very good at pointing out typos and other mistakes in the material during the lectures. Some lecturers provide a preliminary version before the lectures, and a corrected version after the lectures. Some lecturers have completed the material on previous years and give everything at the start of the course. Some lecturers have completed the material on previous years and give material related to each lecture shortly before or after that lecture.
Nowadays, at least in Aalto University, the material is almost always provided as PDF files (or other relevant format) using a web service provided by the university. In the past there was a very complicated system where the lecturer provided the material to a certain company from which a student then ordered the printed material which was then distributed to a folder which was maintained by an association of the students in the degree programme of the student; but for some reason that system was abolished a few years ago.
The contents of the material given by the lecturer varies from a complete set of notes that is almost like a book, to bullet points listing what parts of the course book were covered during that lecture. Nevertheless, I think I've never taken a course where you wouldn't be able to study the material without attending the lectures.
>
> One alternative I have considered is to distribute all of the notes relevant to an exam just prior to that exam. However, this may reduce the effectiveness of the advantage, although it will mitigate to a certain extent the disadvantages.
>
>
>
I've never seen this happening. Some students have asked lecturers who don't provide extensive material during the course to do that before the exam, but most responses I have heard have the following points: Providing the lecture notes just before exam would encourage bulimic learning - that is, putting a lot of stuff in your head for a few days before the exam and quickly throwing it up during the exam without actually learning anything. Most students who don't want to attend lectures might still not attend if they know they will get the material anyway; by providing the notes so late you'd only harm their learning.
Upvotes: 3 <issue_comment>username_10: As a student, I was more focused when the professor had the slides behind him, because the slides were just a summary and he filled in the details. I could concentrate better because I didn't have to write down every single word, so the lesson flowed more smoothly.
Upvotes: 2 <issue_comment>username_11: As a student, I am most comfortable and effective learning from lecture notes that are provided in traditional book format (even if they aren't available on paper, but only as a PDF).
I have a very strong aversion to slides that double as lecture notes. I believe the requirements for these two are so different that, when people merge the concepts, the result is equally bad as a slide set and as lecture notes. When the whole material is crammed into slide format, aiming to provide all the information the student has to learn, there are several possible disadvantages (all first-hand experience):
* Teacher reads the slides verbatim, rendering his/her presence meaningless.
* Teacher doesn't feel obligated to prepare for the lecture, because everything is on the slides (in extreme cases the slides are made by someone else and the teacher never prepares for the whole course).
* Teacher doesn't feel obligated to put proper effort into making the slides, because *"I'm just making supplementary material"*, resulting in quickly hacked together slides (examples include copy-pasted stuff alternating between first language and English, three complete courses with equations without defining any notations used, two complete courses with equations and no explanatory text at all).
* The slides are inherently not detailed enough to be a complete resource for the course, so students go to Google for additional resources, and sometimes what they find is completely wrong.
* The teacher's line of thinking may be so different from the students, that his/her idea of what should be on a slide results in slidesets that are incomprehensible and of absolutely no information to the students.
* Students are disengaged from the lecture because the slides are available online. Even committed students find it hard to pay attention.
* PowerPoint and similar software makes it easy to place randomly formatted random stuff at random places on the slide. A teacher with no visual/typographical sensitivity and/or proper knowledge of the software may easily create slides that are painful to even look at because of the messy and amateur layout and formatting. (Examples include a complete course in all caps, physics plots handdrawn in PowerPoint/MSPaint style, equations in plain text instead of the equation editor, source code listings in bullet points, etc.)
My suggestion is, take the time needed to type proper lecture notes (maybe next summer) in proper sentences, with all the details etc., and let the slides contain only those pieces of information that you simply cannot present verbally or on the blackboard. An alternative to writing lecture notes is providing pointers to existing (text)books in which the material is already covered (maybe the very books you yourself use to prepare for the course).
Upvotes: 2 <issue_comment>username_12: I provide my slides. I give them to the bookstore to print as a "course pack" sold for a nominal amount (none of which goes to me) and if I change them during the year, I put them online for download right before (or in some cases, immediately after) class. I typically have a separate small deck with comments and diagrams and such that arise from marking an assignment or test. I don't upload these ever. In the first class I explain all of this to the students along with the following important announcement:
>
> I can and will test you on material that was only covered verbally in class and is not written on these slides. You can and will lose marks for not knowing something that was only talked about during a lecture.
>
>
>
If I find myself putting the screen up to draw something on the blackboard, I take a note to add that diagram to the deck for next year. I find that drawing diagrams out is a great way to learn, but that most students will not do so - having a predrawn diagram helps them less than having one they drew, but more than having nothing, so I do it. Since I am teaching them how to make design decisions and record those decisions in the form of a diagram, the more examples they have the better.
In over a decade, no student has ever objected to what I'm doing, though there have been some who (wrongly) thought it meant they could pass without coming to class. I also had a few in the early days who objected to my file formats, giving me the extra work of exporting each deck to PDF or HTML or whatever they could handle; that seems to have stopped although I still ask each year if anyone needs a different format.
Upvotes: 2 <issue_comment>username_13: >
> Students may lose focus more easily when they have digital notes
> because they do not have to copy everything down (this was my
> experience as an undergrad)
>
>
>
I can either be writing down what you just said or I can be *thinking* about what you just said. Your choice.
If you try to force students to scribble down everything while you're talking they're going to be thinking "damn, where's my spare pen, this one is going dry" not "Hm. I wonder how that principle applies to...." when you say something.
You're dealing with adults. If they fail because they're overconfident, have bad judgement, and think they don't need to attend class, then they made that choice with their eyes open.
By all means, warn them and recommend against doing so, but refusing to make slides, handouts, or PDFs available just devalues your class for the students who do attend.
Upvotes: 2 <issue_comment>username_14: For large lecture classes I usually present the material on an overhead projector. I make the slides available to the students before the lecture, reduced to quarter size for convenience. Most of my colleagues do something similar.
It's true that this may result in some students cutting lectures. I am usually quite up-front about this, and tell students in the first lecture that while it is up to them, trying to learn the subject from the lecture notes alone is the best way I know for the average student to fail the course: for most students it will be necessary to attend lectures, consider extra comments, explanations and examples which I may give, and most importantly of all, ask questions and engage with the subject.
You can "force" students to attend lectures by withholding the notes, or by leaving gaps in them. I don't do this because I find it to be unacceptable behaviour: in effect, it treats students as children who must be manipulated "for their own good". I make it clear that I expect students to take a responsible, adult attitude towards their studies; the result is that most of them do.
IMO this is so important that I'm going to paraphrase it in bold type: **the vital thing to realise about students is that most of them are extremely cooperative**. They want to learn: if you make it clear what you expect of them, you will usually get it without pulling any tricks like issuing incomplete notes.
Finally we should ask (or perhaps we should have asked initially): what is the purpose of lectures and other classes? It is for the students to learn the subject (of course!). It is not an ego trip for the lecturer. If my students learn the subject then the course has been successful, even if the lectures have been half empty. (But if the lecturer demonstrates a concern for students and a dedication to teaching, then the lectures never will be half empty.)
---
BTW, I may soon have to find an alternative for the overhead projector. Apparently my institution believes that such things are too old-fashioned to be tolerated in a modern, forward-thinking institution which never stands still, and is gradually withdrawing support for these ancient devices :-(
Upvotes: 2 <issue_comment>username_15: In my own undergraduate degree, I noticed a distinct positive correlation between professors who didn't make their slides available deliberately ("so that you don't just skip the lectures and read the slides at home") and professors whose teaching I would rate lower-than-average. One of these professors taught so badly that I skipped his lectures anyway and just read standard textbooks on the subject; I ended up with a top grade.
I would speculate that teaching with the attitude that "most students are here to learn with goodwill and I'm here to help them" produces better teaching than the attitude that students are principally lazy and I'm here to force them to learn. I'm a researcher with occasional teaching duties myself now, and I'm very strongly in favour of teaching materials always being available to download.
Here are two arguments in favour of making slides available. First, what if a student misses one of your lectures due to illness, injury or because their train broke down on the way? (This is not a lame excuse - we had a cluster of genuine cases one summer when the trains couldn't cope with a heatwave.) Would you rather they were able to revise the lecture themselves with the slides, or not?
Secondly, my experience during revision for exams is that even with notes of my own, slide printouts are incredibly helpful. Partly this is because a printout of a slide is a visual aid to jog your memory and remember how the thing on the slide worked; partly it's because you have two sets of "notes" (your own and the slides) which is better than one, partly it's because making notes on a slide with arrows pointing to the relevant parts is so much more effective when diagrams or pictures are involved than having to copy down the relevant points as text alone. If you have a complex diagram on a slide that you're explaining, do you want your students to be spending your lecture trying to copy it down, or do you want them to be paying attention to your explanation, safe in the knowledge that they can print out the diagram (or have already done so, and can annotate it without having to draw it first)? A rhetorical question, I know, but I think this point is important. Finally, everyone sometimes makes mistakes in their notes and being able to cross-check against the slides is very reassuring.
I'd summarise by saying that for a motivated and good student, having the slides available allows them to use their revision time more effectively, and will lead to better results.
There's one caveat here though. As discussed in [1] and just about every "how to give an effective presentation" book and talk, the worst kind of slide is simply a list of bullet points with text to copy down; a really effective slide to support your teaching will have very little text and so be of limited use on its own for someone who has not attended your lecture. Some of my best professors had slides like this and provided us with extra "lecture notes" handouts; it's a standard I aim for in my own teaching when time allows. I leave you with the thought that if your slides are this good, and support your own teaching rather than holding a lecture in parallel to yours, you won't need to worry too much about students opting to stay at home and just read your slides in the first place.
[1] [Good slide design for teaching?](https://academia.stackexchange.com/questions/17933/good-slide-design-for-teaching)
Upvotes: 3 <issue_comment>username_16: The first course I taught, about 15 years ago, was an advanced undergraduate introduction to General Relativity. The way I taught that (and currently teach it again) worked for me and apparently for the students. There are a variety of overlaps with points made in the other answers.
The way it goes is:
1. I've assembled detailed formal lecture notes, which have turned into what I realise amounts to a short book! (feel free to [have a look](http://purl.org/nxg/text/general-relativity) if you're at all interested). I distribute these in blocks before the relevant lectures. I request/advise the students to read ahead in these notes.
2. The lecture I give is somewhat more informal than the written notes. It goes over the material in the same order, but skips some details, and says ‘it's a bit like...’ more often than the notes would. That is, the lecture itself is significantly distinct from the notes.
3. I don't use powerpoint (or prepared acetate slides, before you ask), but I do use a document projector to display scribbled diagrams and mathematical derivations.
4. I record the lectures, and make this available to the students as well.
So they've got lots of resources.
The points that are relevant to the question here are:
1. This is well-known as a challenging course, but the students generally find the topic interesting and are motivated to work hard at it. That is, they **are cooperating and trying to learn**, and as others have pointed out, this is more important than anything else for the success of a course.
2. This is a notationally intricate course, and I know from being on the opposite side of the lectern that it's basically impossible to take accurate notes and actually pay attention to what's being said. I still tell them to take notes.
3. I tell them that the topic only makes sense after the second time they do a course in it, but that the printed notes and the oral lecture count as two courses simultaneously! (to an educationalist I'd say something about ‘multi-modal learning ...*blah*... different learning styles ...*wibble*’). That, plus the recorded audio, cues them **to take charge of how they approach the material**, and to take a critical attitude to the available resources.
As far as I'm aware (I don't check) they do all turn up to the lectures. That's nice, as it suggests I do myself have a pedagogical function being in the room. If they don't turn up then either they're going to fail the course, or else they're able enough that they can master it with their own resources – both are fine by me.
The other courses where I've used this approach were a pre-honours course, also regarded as somewhat challenging but again with motivated students, and a masters-level course which was a bit less interesting, but which could presume motivated students. It *might* not work so well with a service course, or a more bread-and-butter course.
The last point being said, I do have general sympathy with the insistence that undergraduates are adults, who can damn well be given responsibility for their own learning. But that fine attitude might run into difficulties in a different institution or course, with a different student body, or with a different (pecuniary) relationship with the students. There are important factors lurking there.
Upvotes: 2 |
2014/08/19 | 1,489 | 6,345 | <issue_start>username_0: I recently signed a graduate student contract. I had outside funding for some time but recently became a normal research assistant (RA) again, so must have missed a contradictory clause when I signed a similar contract a few years ago.
I am a US citizen at a US university.
In the contract it explicitly states that graduate students "shall not work more than 20 hours per week on duties given by the advisor."
I have found similar language in other graduate student contracts at other universities, an example [is here](http://www.gradschool.umd.edu/catalog/assistantship_policies.htm) (scroll down to *Duties and Time Commitments*). This example, however, states that RAs "shall not work on duties unrelated to research for more than 20 hours per week." My contract seems to imply the opposite.
This certainly applies in the first 2-3 years of a Ph.D., since class loads typically require enough effort that RA's will not have enough time to work more than 20 hours per week on research.
But, **after a graduate student has completed all class requirements, why is this clause still included?**
*It is clearly contradictory, as I'm expected to work 40-50 hours per week on my research, especially when I have no more classes to take. I have signed a legal document explicitly promising that I will not work the hours needed to finish my dissertation in a timely fashion.*<issue_comment>username_1: I think this is just an issue of definition of "what you must do to succeed in your academic program" vs "job duties". A research assistant is a job, and as such it does not always pay you to be doing what you'd be doing anyway. Sometimes you have to do work or help your adviser in a way that does not directly relate to your own program or research whatsoever - and that's what you are being hired to do.
When you get paid to be a research assistant and you end up working on your program and own actual research? This is awesome, and it's totally a bonus - but it's not what the agreement is designed to address.
In the USA a research assistant is paid (often by hour or in a flat stipend), tuition remittance/assistance is sometimes provided, insurance might be paid for the student-worker, perks and opportunities can abound or be completely absent (sometimes you get paid to go to conferences or do your own home work, and sometimes it's basically your field's equivalent of cleaning out monkey cages), etc. In short, there is a compensation package that makes it worth your acceptance, and losing it would be unpleasant.
However, on the other extreme if you work too much time on your RA/TA job you will not have enough time in the day to spend on classes or your own personal research. When you have no classes, then that means more time for your research work - not more time to work for the University.
So, in conclusion, your agreement does not limit your collaboration with your adviser or the time you spend on your research - it limits your **legally employed position's hourly requirements**. Talk with your adviser, take his advice, work on research with them until the cows come home (though I suggest you get home before the cows, because life's too short) - but if you are doing more than 20 hours of work a week to fulfill your RA/TA duties, something is wrong and it needs to be addressed right away.
Upvotes: 4 <issue_comment>username_2: The basic answer is that as an RA or TA, you are still supposed to be a **full-time student**. Thus, the work contract for either specifically limits you to part-time work.
What seems to be confusing you about your RAship is that your RAship is actually just a funding method in the case of your department. An RAship can include assisting a professor with her research in which case, such responsibility is capped at 20 hours a week so that you have the remainder of the week to work on your own research.
The same applies with a TAship whether that includes sole teaching responsibility or not. The point is that the TAship is secondary from the university's perspective to you working on your own research and coursework and finishing the program.
They care about completion rates and in most programs, you are no longer a fraction of a percentage point on that front but rather a single student failing to complete because of TAship or RAship will lower the numbers non-marginally.
Upvotes: 3 <issue_comment>username_3: If you are working at a US University, they often do this in order to ensure that you are ineligible for benefits under the family and medical leave act (fmla). If you work over 1250 hours they have to provide you with additional benefits, which cost more than they are willing to pay for your labor.
<http://www.dol.gov/dol/topic/workhours/fmla.htm>
Upvotes: 5 [selected_answer]<issue_comment>username_4: In principle, an RA appointment is a job, in which you could be required to do what the phrase "research assistant" literally indicates --- assist a professor with his or her research. In the experimental sciences, that could include, for instance, maintenance work needed to keep the lab running properly. In pure mathematics (my field), what it means in practice is that you do your own research. So, officially, my Ph.D. students with RA appointments are assisting my research for up to 20 hours per week, even though in practice, what they're doing may have very little to do with my research and will mostly coincide with their thesis research.
Upvotes: 3 <issue_comment>username_5: For the US, another possible contributing factor is that international students (who compose a large share of PhDs in many programs) are legally allowed to work at most 20 hours a week while enrolled as a full-time student (typically on a F-1 visa). Note that one is legally considered a full-time student even when one is a candidate who isn't taking any classes! (An exception is during the summer, when international students are then allowed to work up to 40 hours a week.)
Certainly this problem could be circumvented by having separate contracts for US citizens and non-US citizens. But then there'd be other problems (e.g. making things more complicated for the administrators, perceptions of inequality/discrimination). So perhaps it's just simpler for them to have one uniform contract, university-wide, for all the grad students.
Upvotes: 3 |
2014/08/19 | 1,117 | 4,797 | <issue_start>username_0: Say, in a particular field of science, method A or equipment B are the standard. Now I have invented method X or equipment Y which cost much less than A or B.
What are the necessary conditions, if any, that allow X and Y to be published as a journal paper?
(This question of course is loaded with my own assumption that originality in academic research *does not* include a cheaper price. I have long held this assumption for the simple reason that I have *never* encountered cheapness being cited in the background/introduction section of any journal article I've read.)<issue_comment>username_1: I think that you are missing a point about the word *original*: You have to present *original research*, not necessarily *original results*. If you found a cheaper method than anyone before, the research is obviously original, even if you didn't find out anything new.
Example from other sciences: In maths, a new proof of an old theorem is original. In CS, a new algorithm which is just 10% faster than an old one is original. In medicine, a new treatment for the same disease is original. And so on and so forth.
As well, even if your result is not better than the previous ones (slower algorithm, more expensive method), it can of course be original. It's just more questionable whether it's useful, sometimes it is, but you have to be quite convincing usually.
Upvotes: 5 <issue_comment>username_2: At least in life sciences, this could be publishable, depending on the details. In the end, what matters for publication is whether the paper is of interest to the scientific community.
In some cases, cost can have huge consequences. For example, if you invent a method to easily sequence a human genome for 100 USD, that would greatly advance biomedical research and diagnostics, and you will get a very high-impact publication. You can read more [here](http://en.wikipedia.org/wiki/$1,000_genome).
Upvotes: 3 <issue_comment>username_3: >
> What are the necessary conditions, if any, that allow X and Y to be published as a journal paper?
>
>
>
That the method or equipment is new and interesting. It is really as simple as that.
It is completely ok if the improvement or motivation of your technique mostly lies in cost savings - that is common in many research communities. If the way how you achieved these cost savings is by applying simple technical optimisations, you will have problems publishing your results. On the other hand, if you reduced costs by fundamentally changing the way how your technique approaches things, or if you manage to reduce costs by orders of magnitude, journals will presumably be interested in how you achieved these results.
Upvotes: 5 <issue_comment>username_4: Original, in this context, means significantly different. The motivation of your study can be to develop a cheaper method, no problem with that, but the question of originality nothing to do with this.
Original research only means that you take a non-trivial route that hasn't been explored yet. If you manage to be cheaper because you use a different effect or some non-trivial new element, then it is perfect for publication. If you manage to be cheaper because you ordered your o-rings and screws from a cheaper online shop, then maybe it is not interesting for publication. Anything in between I would try to publish: something that seems trivial to you, or a gradual improvement, may be surprising or insightful for others.
Two notes:
* If it is a commercially viable method, you should seriously consider **patenting it before publication**. Even a provisional patent is much better than nothing.
* At the end of the day, if your method becomes widely used (because it is cheaper and practical), this will be a highly cited publication, even if you think your improvement is not significant from a scientific point of view. Some of the most cited papers of all time are publications on software used in X-ray crystallography. 99.9% of people do not read those papers, do not understand those papers, and it may be that they just announce the implementation. Yet everyone who uses them cites them, and rightly so. So it is always better to publish, and then worry about whether people like it.
Upvotes: 2 <issue_comment>username_5: Aluminum used to be more valuable than gold or platinum - it was used to cap the Washington Monument partly because of the rarity and value of aluminum back then (the aluminum in that cap is worth about $50 today). European monarchs would put out aluminum tableware - silver was for the common rich.
The Hall–Héroult process changed all that and made aluminum one of the cheapest metals on the planet.
Most everyone would agree that Hall and Héroult came up with something original, publishable and patentable.
Upvotes: 1 |
2014/08/20 | 9,128 | 35,558 | <issue_start>username_0: I have been dragged into an argument with someone who can't understand why millions are being raised to fund [ALS](http://en.wikipedia.org/wiki/Amyotrophic_lateral_sclerosis) research (that's the "ice bucket challenge", love it or hate it). He doesn't get why research costs so much money because - and I genuinely quote - "**it only takes time and effort**".
My first instinct was to say "Are you being serious or just trying to wind me up?". But then I realized that maybe from an outsider's perspective, this might actually be difficult to understand.
So I am asking for points to make when explaining the cost of research to lay people, and how to articulate these points in a way they can identify with them.<issue_comment>username_1: >
> it only takes time and effort
>
>
>
I think the answer is already contained in the question: it takes time and effort. Time means you have to pay people for a long time, and effort means you need a lot of people.
More precisely, for medical research, reagents, animal models, and clinical tests are really expensive. Many different drugs need to be developed for just one to work in the end. Finally, when a drug seems promising, you have to run year-long clinical trials just to ensure patient safety.
So time and effort == a lot of money
Upvotes: 7 [selected_answer]<issue_comment>username_2: People's salaries cost money. Given the overheads of running a business or university, plus the cost of fringe benefits like retirement and health insurance, you can basically double someone's salary to get the full cost of employing them (whether the number is directly charged or comes in through a national healthcare/retirement scheme). If their talents are in demand, then companies may bid up salaries to attract them away from universities and try to make a profit on their labors.
STEM talents are in demand.
Therefore, each PhD-level person who works on a project costs somewhere between $150,000 and $400,000 per year (salary is half that, recall) in a broad range of fields relevant to solving problems like ALS or cancer or other diseases.
Research is hard. Most drugs don't pan out, so you need lots of people trying lots of different ones in order to develop some that do work.
Mathematics is cheaper to pursue. That can literally be one person reading and scribbling and talking to colleagues until she has a breakthrough. These folks often get paid primarily to teach, but if they're willing to work on algorithms for companies or the government, then they are worth a lot, so salaries also get bid up.
Upvotes: 5 <issue_comment>username_3: I think it depends totally on the area of study.
As everybody has mentioned already, the human costs alone are relatively high, and equipment for the bio sciences is crazy expensive as well.
Let's just make some estimates.
An average neuroscience lab has at least one EEG machine, which at research grade costs between 30 and 70 thousand dollars. You need good computers, usually in the 1,500 to 2,500 USD range, at least one for each member of a 5-person group. That can already come to on the order of 100,000 USD in basic equipment alone.
At some universities you also need to pay for your space, like rent. Access to live animals for experiments costs about 2,000 dollars per experiment for small animals; larger animals carry all sorts of extra costs, and keeping a chimpanzee could easily cost a lab well over half a million dollars a year to maintain and run experiments on.
Want an MRI scanner to do functional imaging of the brain? First find a university that has one (also crazy expensive), then pay for the scanner time.
Conferences plus travel expenses are another sizeable chunk.
Thanks to the ripoff that academic publishing is, universities pay hefty sums of money to get access to research journals.
And none of these costs even takes into account salaries for professors, postdocs, grad students, etc.
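To see how quickly those numbers stack up, here is a minimal Python sketch that simply tallies the ballpark figures above; the prices are the rough ones quoted in this answer, and the 20-experiments-per-year figure is an extra assumption added purely for illustration.
```
# Rough equipment/consumables tally for a hypothetical 5-person neuroscience lab,
# using only the ballpark figures quoted above.
eeg = (30_000, 70_000)               # research-grade EEG system (low, high)
computers = (5 * 1_500, 5 * 2_500)   # one workstation per group member
animal_work = 20 * 2_000             # assume ~20 small-animal experiments per year

low = eeg[0] + computers[0] + animal_work
high = eeg[1] + computers[1] + animal_work
print(f"{low:,} - {high:,} USD")     # 77,500 - 122,500 USD
```
And that range is before rent, scanner time, journal subscriptions, or a single salary.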
Upvotes: 4 <issue_comment>username_4: I'm an experimenter, so in addition to the cost of some highly trained people's salaries and benefits and the cost of keeping the lights on and the white boards cleaned there is equipment and consumables.
Some of the equipment is precision kit. Lots of it is produced in small runs because only a few hundred sites in the whole world need that kind of stuff, so there are no economies of scale. Some of the consumables are pretty exotic and cost a whole heck of a lot.
All of this applies perfectly well to the kind of research mentioned in the question, plus they have to deal with human subjects concerns and that doesn't come cheap either.
---
And just because I like to talk about my work...
Because I'm in big science, when it comes time to stop tinkering around with prototypes and testbeds and get really serious, we usually build a *one-of-a-kind* multi-ton detector with thousands or tens of thousands of instrumented channels. Gigabytes-per-day data streams, massive computing infrastructure, hundreds of salaries and travel money.
The question isn't *"Why does it cost so much?"* but *"How do you expect us to get it done with so little?"*. Then we go ahead and try because we're happy to have the chance to do it, even on a "shoestring".
Upvotes: 5 <issue_comment>username_5: Even in purely theoretical fields or fields without expensive lab costs, research is expensive.
I think the root of the problem is that your friend seems to be measuring 'cost per result' (e.g. cost to cure ALS). Most lay people probably don't understand what it means to learn something *truly* new about the world. Discovering something new is really, really, *really* hard, even for very intelligent people (remember Edison's 1% inspiration, 99% perspiration). Thus, whilst it may be true that "it only takes time and effort", the amount of those things needed to achieve even minor progress is huge. To achieve a major breakthrough, like curing a disease or proving a major theorem, might take thousands of scholars working for years or even a century (see, e.g., the Poincaré conjecture).
The cost to achieve one of these major 'results' is indeed large, but that is because they stand among the crowning achievements of mankind.
Upvotes: 4 <issue_comment>username_6: The research group that I've been fortunate enough to work with as part of my Masters dissertation (Microelectronics) have recently received funding of £400,000. To me that sounded like an immense amount of money; but then I saw some invoices for how much it costs to repair and maintain the antiquated machinery they're forced to use.
Plus, does your friend really not understand that "time and effort" cost money in and of themselves before you even start thinking about equipment and whatnot? Does he think that researchers don't need a salary because they are immune to starvation and the elements? US$1million would pay the salaries of 5 research assistants for 4 years; and those salaries are not exactly generous.
Upvotes: 2 <issue_comment>username_7: I would emphasize whose time and effort is involved, and how much of it is involved. It isn't like we have a couple of undergrad research assistants working on the problem so we can pay them next to nothing. ALS research occupies the time of teams of doctors.
Second, they seem to be underestimating the type and length of the work involved. It isn't like they can go over to CVS and say, "I have this prescription for this never-before-seen drug, whip some up for me while I browse the magazine rack," or go to some medical supply store and order a never-before-seen medical device.
Third, safety protocols: it isn't like they fabricate some new concoction in the lab and then go jab it into some guy's arm like they do on TV. It takes a long time because they are actively trying not to kill people, and they exercise an abundance of caution.
Upvotes: 2 <issue_comment>username_8: To deal with this answer more generally, I would insist on using their own argument: ***nearly everything on earth is just a matter of time and effort.***
This includes everything from The Great Pyramids to battleships to cars, books to research, farming to mining. They are all products of the application of effort and time.
The reason things cost money is not mostly because of the raw materials. With most things in life, the raw material is the easy part!
I like farming/gardening as an example. Much of a garden or even a farm can be done with little to no raw materials. You need dirt, sure, but that stuff is everywhere! It isn't always good for growing, but you can fix that with time and effort to till, cultivate, amend with more cheap stuff (waste products and so on). Bugs? Squish them - time and effort. Seed? Harvest it wild, make your own selective cultivation to raise yield - time and effort.
Unlike gardening, though, research isn't growing the same crops over and over again every season. With growing food, it often gets easier each year if you do it right - you learn what works and what doesn't, the soil can be made more fertile. But with research, once you have your product you don't get to just repeat the same process again and call it good. That wouldn't bring about a publishable result - no one cares if you "discover electricity for the 40th time."
When research is easy and quick, it gets done easily and quickly, and then there is no reason to do it again. The low hanging fruit gets picked, and it doesn't grow back!
Therefore, we can expect research to get harder as we as a society learn more - more time, more effort per discovery is required. This means research should only get more and more expensive, while making and growing stuff should get cheaper and cheaper. Economics (research!) is more complicated than this of course.
So let's talk about what else costs money. 'Stuff', raw materials, do cost money. Research needs labs and offices, electricity/fuel, and most of it needs all sorts of special equipment and apparatus. Medical labs need stupid amounts of these, and everything costs so much money because research is picky - poisoning someone with an injection because you didn't spring for that stupidly expensive "approved" beaker, and instead used a $1 drinking glass from the second-hand store that wasn't resistant to the chemical reagents you used, is generally frowned upon.
And if a person thinks human time and effort should be free and people should not ask anything in exchange for their life dedication and work - why in blue blazes are you still talking with this person? Suddenly, a relevant XKCD:

And if the Senator seriously thinks medical research - or any research - should be cheap or free...well, good luck to you in your noble fight, friend.
Upvotes: 4 <issue_comment>username_9: Beyond the cost of lab spaces, researchers, guinea pigs, machinery, and raw materials, I would like to add a few costs caused or worsened by the lack of [open science](http://en.wikipedia.org/wiki/Open_science) (a great [TED video about open science](https://www.youtube.com/watch?v=DnWocYKqvhw) if you aren't familiar with it):
* **Paid access to research articles**. In addition to preventing the general public from having easy access to and contributing, it has two main consequences:
+ Universities must pay some insane amount of money to access publications, typically [a few millions USD yearly](https://academia.stackexchange.com/q/29923/452).
+ Even the most highly-ranked universities don't have access to all articles (far from it), which mean researchers sometimes waste time to try to get access to some papers.
* **Data sets unavailability or price**:
	+ Unavailability: researchers sometimes have to create their own data sets or abandon project ideas due to the unwillingness of others to share theirs, because sharing a data set might mean giving up some publication opportunities.
	+ Price: some researchers do share their data sets, but not for free. For example, in the natural language processing community some of the key data sets are only available on the Linguistic Data Consortium website, which charges for downloads.
* **Unreleased source code**. Many publications don't share their source code that was used for the article. A couple of reasons might explain the behavior: just like for data set, access to source code gives an edge over other researchers, who will have a harder time improving, amending, etc, the article. Also, the source code might be badly written and researchers can be embarrassed about it. It might be a way to avoid other people finding bugs in your code that invalidates some of your results. I asked [Are there any journals or conferences that take into account the availability and the quality of the source code when selecting the papers to publish?](https://academia.stackexchange.com/q/24490/452) one day, even in 2014 it is hard to find... Also, see [reference on availability of source code used in computer science research articles](https://academia.stackexchange.com/q/29137/452): "in computer science systems, out of 410 papers that were analyzed, only 85 has a link to source in the paper".
* **Too many articles due to overreliance on bibliometrics** to assess researchers (grants, promotions, etc.), which push researchers to over-publish. I sometimes feel that I am a documentalist, trying to navigate my way through myriads of papers that are written in some unnecessarily complicated way with barely any contribution.
* **Lack of good, widely-used platforms to publicly comment on existing articles**. E.g. if something is unclear in an article and a researcher spent his morning figuring it out, there is no good platform for him to leave a comment to help future readers who will run into the same issue.
* **[Publication bias](http://en.wikipedia.org/wiki/Publication_bias)**, which results from the fact that positive results have a much higher chance to be published than negative findings. This slows down research too: see the article [Why Most Published Research Findings Are False](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1182327/) or the TED video: [<NAME>: What doctors don't know about the drugs they prescribe](http://www.ted.com/talks/ben_goldacre_what_doctors_don_t_know_about_the_drugs_they_prescribe) (really worth watching if you don't know the extent of the problem).
* **Page limits on articles**, inherited from the era when articles were printed and distributed in paper form (and sold at $30 per article, without a dime going to the authors' pockets). This forces authors to chop off some of the information they would have liked to convey. How many times did you wonder how the authors performed some mathematical derivation? How many times did you wonder which parameters the experimentalists used?
All these inefficiencies cost money too. E.g. [How to Make More Published Research True](http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.1001747):
>
> Currently, many published research findings are false or exaggerated,
> and **an estimated 85% of research resources are wasted**. (in biomedical research)
>
>
>
---
Some [key points](http://articles.mercola.com/sites/articles/archive/2013/02/13/publication-bias.aspx) from the TED video [<NAME>: What doctors don't know about the drugs they prescribe](http://www.ted.com/talks/ben_goldacre_what_doctors_don_t_know_about_the_drugs_they_prescribe):
>
> Former drug company researcher <NAME> looked at 53 papers in the
> world's top journals, and found that he and a team of scientists could
> NOT replicate 47 of the 53 published studies — all of which were
> considered important and valuable for the future of cancer treatments.
>
>
> Half of all clinical trials ever completed on the medical treatments
> currently in use have never been published in the medical literature.
> Trials with positive results for the test treatment are about twice as
> likely to be published, and this applies to both academic research and
> industry studies.
>
>
> In 2010, three researchers from Harvard and Toronto identified all the
> published trials for five major classes of drugs, and then measured
> two key features: Were they positive, and were they funded by
> industry? Out of a total of 500 trials, 85 percent of the
> industry-funded studies were positive, compared to 50 percent of the
> government-funded trials.
>
>
> According to a 2011 study in the Journal of Medical Ethics, nearly 32
> percent of retracted papers were not noted as having been retracted by
> the journal in question, leaving the readers completely in the dark
> about the inaccuracies in those studies.
>
>
>
Upvotes: 4 <issue_comment>username_10: One part of the answer that has not been mentioned so far is that **highly skilled people want to get paid well**. Or to phrase it differently: you will not find people with the skills you need to do research without paying them accordingly. Especially for pharmacy/medical studies you need a lot of man-hours from people who probably earned a PhD at university. They spent a lot of money to get to that point and they want to be paid accordingly.
As those studies require - as already mentioned - a lot of time, you need to pay a lot of expensive man-hours.
Some numbers
------------
* The median education debt for indebted medical school graduates in 2012 was $170,000, and 86 percent of graduates report having education debt. Specifically the 2012 the median debt at graduation was $160,000 at public institutions and $190,000 at private institutions. (Source: [How Much Does Medical School Cost?](http://gradschool.about.com/od/medicalschool/f/MedSchoolCost.htm))
Upvotes: 3 <issue_comment>username_11: You could put it in perspective by comparing it to other ventures.
For instance, running Walmart "cost" around [$450bn last year](http://en.wikipedia.org/wiki/Walmart). Running IBM cost [$115bn last year](http://en.wikipedia.org/wiki/IBM). Running Pfizer cost around [$20bn last year](http://en.wikipedia.org/wiki/Pfizer).
Now, compare this to the cost of finding the Higgs boson: around [~$13.25bn over the total time taken](http://www.ibtimes.com/forbes-finding-higgs-boson-cost-1325-billion-721503). Hubble: around [$10bn up to 2010](http://www.nasa.gov/pdf/499224main_JWST-ICRP_Report-FINAL.pdf).
Scientific research is not cheap, but it's way cheaper than running a business and has much wider reaching impacts. What sticks in people's minds is that the money comes from "funding" and "charity" and that strikes them as "waste".
Upvotes: 3 <issue_comment>username_12: Research does not necessarily cost a lot of money. Even today, some areas such as literature, philosophy and theoretical mathematics (non-computational) literally cost only as much as a desk, pen and paper. In theory. In practice, as many have already pointed out, even in these fields it can make sense to do a little cost-time trade-off.
But what about the other fields? Since the question is about ALS, I will focus on more technology-heavy fields. I will also try to explain more the barriers to making the same research cost less, rather than try to guess what that particular researcher's budget was.
* **You need to be associated with a university or institute, and they take a big cut out of your funding.** A vast bulk of research is done by university professors. While on the face, you might think that such institute-affiliated researchers take a monthly salary, and then get money from the government on top to spend on their research, that isn't quite true. When you receive a grant, the university or institute takes a large amount of this, eg. 50% or even 80%. Part of this is probably just taken because the president wants a nice house. But it also pays for the electricity in your office, the electron microscope everyone gets to use, the salary of the janitor, the salary of the health and safety people whose approval is required by law for you to obtain restricted research chemicals, etc. You could save a lot of money by not being a university affiliated researcher, but you can't just up and go and do research from your garage - you must deal with a lot of bureaucracy unless you want to get fined or jailed, and you lose the very important benefit of easily being able to eat lunch with leading scientists and talk to them about science.
* **Materials are expensive.** Others have explained in detail why consumables (chemicals, enzymes, single use sterile tools such as petri dishes) are expensive. A lot of these you could in theory make yourself. I know many biology labs who eschew modern kits and still use DIY methods from decades ago to save money - but it's a lot of work and introduces a lot of risk for error. Even then, some crucial reagents are simply impossible to manufacture if you don't have a large chemical plant. Think by analogy to computers: You can do a lot with DIY electronics, but nobody is going to be building an i7 out of scrap metal in their garage.
* **Equipment is very expensive.** Even the simplest biological research equipment tends to run from thousands of dollars to hundred of thousands or even millions. Even something as basic as a centrifuge can run you [two grand](http://www.sigmaaldrich.com/catalog/product/sigma/cls6758?lang=en®ion=US), and you cannot do any molecular biology without one. If you want to do sequence based research (absolutely necessary for ALS), either you must buy a [very expensive sequencer](http://www.labx.com/v2/adsearch/detail3.cfm?adnumb=516494) or you must pay someone to run samples on theirs.
* **Scientists need to eat.** Research isn't a hobby, it's a full-time job. Perhaps the professor's salary gets paid by the university - but often grad students and postdocs are paid from the grant money. All these people must be paid a salary, otherwise their landlord will kick them out and they will starve. It would be ridiculous to expect someone with a 40-hour job to do a few hours of research every weekend and get somewhere.
With ALS and biology in particular, there are also some "soft" factors contributing to research costs: Firstly "big data" and high-throughput studies are currently in vogue, and both require expensive, specialized systems. Second, it is getting harder and harder to find problems that aren't expensive to solve.
There are many studies which can be done for $10,000 in reagents (which isn't a lot, other costs notwithstanding) over 1-3 years. Many labs already do this. But not every lab is this lucky.
What happens if your disease has hundreds of variants which must be characterized by spending thousands on genotyping (it costs a few hundred per person, to cover things like cost of manufacturing the genotyping chip, cost of chemically preparing the sample, salary of people who do the specialized genotyping, and the profit margin of the genotyping company)? What happens if the protein that causes the disease must be purified using an extremely expensive chemical? What happens if the aberration that causes the disease is so microscopic, that you need a million dollar microscope to see it? What happens when the thing you study turns out to be so complex, that only a supercomputer (which are very expensive to have or use) can hope to make sense of the data? What happens when you are studying a very dangerous pathogen that will kill you unless you have a [$100 million](http://www.popularmechanics.com/science/health/med-tech/4315093) BSL-4 lab that takes [$2 million](http://www.nature.com/news/2009/091111/full/462146a.html) every year to maintain?
What's worse is that biology has been around for quite some time, and a lot of the questions that are "cheap" to answer have already been answered by someone else. Problems like ALS, which have stayed unsolved for decades, are unsolved for a reason: Sometimes it's just because nobody happened to be clever enough to come up with the right idea, but a lot of the time it's because the scale of research needed to attack them was prohibitively expensive. Now technology has advanced, and it is no longer prohibitive, but still expensive (meanwhile the technology affording that discount is itself not cheap).
---
>
> it only takes time and effort
>
>
>
Certainly not. There is some research to which this applies in a trivial way, but a lot of important research is practically impossible without using the technological infrastructure built over the centuries of human history, and using that infrastructure is rarely free.
>
> Why does research cost so much money?
>
>
>
How much does he think it should cost? Based on the quote, I suspect it is $0. That's not gonna happen.
It costs what it costs because that's what it costs. If you can come up with your own budget for a given study that is much smaller, yet still makes it feasible, go ahead. But just writing numbers on paper and asking why it doesn't cost this much is wishful thinking. You don't go to the car store and tell the guy you think his Ferrari "ought" to cost $32.27 because that's how much cash you happen to have in your pocket.
>
> points to make when explaining the cost of research to lay people
>
>
>
The condensed version of the above is: Even smart people sometimes cannot solve a problem without buying expensive things. You cannot study moon rocks without buying an expensive rocket to go to the moon. You cannot study what's inside the atom without buying an expensive atom smasher. You cannot test a cancer drug if you don't have expensive cancer cell cultures to test it on.
Science is not just sitting around in a room and philosophizing. You must also do experiments. The experiment's set up can sometimes get involved and complicated. Observing the outcome of the experiment can require specialized sensors or measurement tools. Processing the data gathered can require state of the art supercomputers.
>
> how to articulate these points in a way that they can identify with them.
>
>
>
"You get what you pay for."
Upvotes: 3 <issue_comment>username_13: The other answers address the cost of actual research extensively enough, but there are, sadly, other aspects to consider.
Academic research is, for the majority, funded by the government. That means that, although I have a high opinion of some government funding agencies, **the decisions on how to spend the money are often political**. As in any other government organization, members of academia work in a loop to protect their interests and privileges.
As a result, in addition to the natural amount of effort and money spent on endeavors and projects that turn out to be dead ends (these are actually useful to research as they rule out possibilities), there is a tremendous amount of money lost in generating heat.
It's frequent to see faculty being hired because they are the spouse of another faculty, or because they have social traits that contribute to the 'diversity', grants being attributed hoping that the awardee will return the favor, funding being attributed to people on the basis of the number of publications instead of the quality of them (even more so now with the open-access movement and the subsequent logarithmic raise in paper count), etc. Not to mention cases (e.g. in medicine, socio-economics, history) where funding goes to promote the political agenda of the governing majority.
At the end I think at least a good third of academic research is either marginally incremental, frankly redundant or even completely bogus.
It should be noted that while these sums ('millions!') seem huge to us, the amount of money spent in academic research is typically a fraction of the government's budget.
Upvotes: 3 <issue_comment>username_14: First think about all of the other challenging and expensive endeavors we do. Running a business, building a bridge (or writing a computer program)... these were all done before by someone else. And if you are looking to build a bridge you are probably reading a book titled something like "how to build a bridge" and written by the last person to attempt what you are wanting to do.
Research involves solving problems that are open, there is no book to read because it has not been done before. Now, you can gather a lot of materials from this person over here that had one idea that helps you, and that person over there. But these scattered resources are harder to find (I.E. in academia) and at the end of the day *you* are the one who needs to put it all together and build the very first bridge in existence.
Now do us all a favor and put a whole bunch more money into safety testing.
Upvotes: 2 <issue_comment>username_15: This is well explained by an American expression: "Time is money."
You have to pay "people" for their time. And these are not random people, but highly trained and paid researchers making salaries, typically in the top 20-30 per cent of the population.
Also, such researchers normally have to be supported on most projects by "advanced" scientific equipment, which also costs a lot of money.
Upvotes: 2 <issue_comment>username_16: One point to make is to realize that what is "so much" is a very relative term. If you see value in what is being produced then the cost may seem small and definitely "worth it". If you do not see the value, then of course the result is the opposite. It would be possible to go through, for example, public expenditures in government budgets and argue for military costs, for social benefit schemes, public health, public schools etc. and find similar disagreements.
It is becoming popular in certain political circles to want research to be directly translated into profits within a short time span. Although there of course is nothing wrong with gaining profitable results many discoveries have only yielded such results long after the timing of the discovery. The spin-offs from discoveries are also often realized once the basic research has been done, for example as a result from unexpected discoveries. As the saying goes, "If I knew what I was doing, it wouldn't be science".
Hence, I think the view of "cost so much" is either a lack of perspective, or insight, or simply not finding the progress of any value. Having this view point is certainly valid but as many responses have already discussed, one must then argue what costs should or could be reduced.
To call research inefficient is not sufficient. We can see that certain problems can be solved if sufficient finances are supplied over a sufficient period, take the US space program to put a person on the moon as an example. But, was that a cheap project? Obviously for some but not to others.
If no time or effort was spent on anything, life would remain simple but hardly bearable, provided our intellectual development would occur spontaneously.
Upvotes: 2 <issue_comment>username_17: Well, in academia most of the funds go into salaries, and I feel like people who have never employed anyone just don't know how expensive this is. Here in Austria, 3 years of a PhD cost the university around 110 000€, and a Postdoc (the more experienced researcher) around 200 000€. So if you have 2 PhD students and a PostDoc working for 3 years - and that's not a very large project - it's around 420 000€ just in salaries, and nothing has happened yet.
Now the other costs depend very much on the research topic and field, but in Chemistry you need to buy chemicals and other supplies. They might be expensive or cheap, but for such a project you might have another 100 000€ of budget. Keep in mind, that's only around 900€ a month for every researcher. I don't know whether that sounds like a lot to someone not familiar with Chemistry, and I have to agree it's not a bad budget, but it is also not extremely large.
Let's say that's it for this project and we don't need any new equipment or anything like that. But now, at least in my country, there's also what they call "overhead": another 20% of what's in the budget so far goes to the university. Why? Because they provide the infrastructure (HR people, non-scientific personnel, electricity, equipment, lab space, ...), which of course also costs money. So this rather small project comes to around 624 000€, with nearly 70% (420 000€) being just salaries.
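As a quick sanity check on those numbers, a few lines of Python reproduce the total; the figures are simply the assumptions stated above.
```
# Small Austrian-style project: salaries + consumables + 20% overhead.
phd, postdoc = 110_000, 200_000            # 3-year personnel cost in EUR
salaries = 2 * phd + postdoc               # 420,000
consumables = 100_000
overhead = 0.20 * (salaries + consumables) # university's cut
total = salaries + consumables + overhead
print(f"{total:,.0f} EUR, {salaries / total:.0%} of it salaries")  # 624,000 EUR, 67%
```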
Upvotes: 0 <issue_comment>username_18: Apparently your friend who says "it only takes time and effort" has never seen the proof that time=money. Since effort=time, it's clear that time=effort=money.
In my field (computer science), grant support for a middling-size research project might last five years and provide support for two PI's, a postdoc, and two to four graduate students. Let's look at an example personnel budget. Note- you can find lots of NSF or NIH sample budgets online that illuminate this further.
Assume both PI's ask for two months of summer support per year. If their base salary is $90,000 for a nine-month contract, then their base monthly salary is $10,000. Thus, if they both ask for two months, then that's $40,000 per year in base salary for the PI's.
Then you add on PI fringe benefits costs. This money provides benefits like health care. Different universities have different fringe rates. But, let's say that our rate is 15%. Then we have an additional $40,000\*.15 = $6,000 per year.
Postdocs are similar, but they're paid full time to do research. They might have a base salary of $55,000 per year. Then their fringe benefits are $8,250 (15% of $55,000).
Now let's talk grad students. Let's say we have three graduate students. Each graduate student has a monthly stipend of $2,500- or $30,000 each / $90,000 combined per year. Then you charge fringe on those students, which is going to be slightly less, say 10%. Thus, the fringe for all three per year is $9,000.
These are all the "direct salary costs". To recap, we have:
```
Salary + Fringe:
PI 1: $20,000 + $3,000
PI 2: $20,000 + $3,000
Postdoc: $55,000 + $8,250
Grad 1: $30,000 + $3,000
Grad 2: $30,000 + $3,000
Grad 3: $30,000 + $3,000
--------------------------
Total: $185,000 + $23,250 = $208,250 per year
```
But, these are only the direct costs. All grants also include what are called "indirect costs", which are paid to the university and go towards things like building maintenance, utilities, and non-research-staff salaries. A pretty normal indirect cost rate is 50%. So then we also have:
```
Indirect Costs = 0.5 * Direct Costs
$104,125 = 0.5 * $208,250
```
Finally, our total yearly cost:
```
Total Cost = Direct Costs + Indirect Costs
$312,375 = $208,250 + $104,125
```
Just like when you go to the grocery store and throw stuff in the basket, it adds up a lot quicker than you think. Notice that all of this is just to support six people per year (two PhD PIs, one PhD postdoc, and three grad students). And this counts only salaries; it doesn't include other costs like equipment, publication fees, or travel.
The salary-only cost for a three-year project under the above would be $937,125. The salary-only cost for a five-year project would be $1,561,875. Note that the actual costs would be slightly higher in real life because of things like pay raises and cost-of-living adjustments.
If you consider published papers to be the basic unit of academic research, then suppose that this project turns out 8 papers per year (which is reasonable but optimistic). The cost per published paper is then a little over $39,000.
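For anyone who wants to play with these numbers, here is a small Python script that reproduces the arithmetic above. The salaries, fringe rates, 50% indirect rate, and the 8-papers-per-year assumption are the hypothetical values used in this answer, not official NSF or NIH figures:

```
# Reproduces the hypothetical budget above; every figure is illustrative.
people = {                 # name: (annual direct salary, fringe rate)
    "PI 1":    (20_000, 0.15),
    "PI 2":    (20_000, 0.15),
    "Postdoc": (55_000, 0.15),
    "Grad 1":  (30_000, 0.10),
    "Grad 2":  (30_000, 0.10),
    "Grad 3":  (30_000, 0.10),
}

salary   = sum(s for s, _ in people.values())      # $185,000
fringe   = sum(s * r for s, r in people.values())  # $23,250
direct   = salary + fringe                         # $208,250
indirect = 0.5 * direct                            # $104,125 (50% indirect rate)
yearly   = direct + indirect                       # $312,375

print(f"Yearly cost:            ${yearly:,.0f}")
print(f"Three-year project:     ${3 * yearly:,.0f}")   # $937,125
print(f"Five-year project:      ${5 * yearly:,.0f}")   # $1,561,875
print(f"Cost per paper (8/yr):  ${yearly / 8:,.0f}")   # about $39,047
```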
Upvotes: 1 |
2014/08/20 | 671 | 2,962 | <issue_start>username_0: To make things simple, I am asking whether the rebuttal letter should start with
>
> Dear Editor and Reviewers
>
>
>
or
>
> Dear Reviewers
>
>
><issue_comment>username_1: Generally you address all letters to the editor (she/he is the contact person!), and if any part of your letter is relevant to a review, then you address the reviewers, too. In practice this means that in the main header I address only the editor, and when I answer the reviewers, I address them there.
Upvotes: 1 <issue_comment>username_2: One thing to do is to write two documents:
* a short letter to the Editor, summarizing your rebuttal in a few sentences;
* a detailed Answer to Reviewers, including line-by-line response to all comments and suggestions of the Referees.
Note that the Editor does not necessarily have enough time to read through the entire detailed response him/her-self, but the presence of this document reassures everyone that the discussion between Authors and Referees is going in the right direction. If you experience problems addressing the Referees' questions, or disagree with them significantly, you should mention this in the letter to the Editor.
Upvotes: 2 <issue_comment>username_3: I usually address the editor, since I don't ever communicate with the reviewers. However, I thank them for the report in the first sentence of the letter. I think that this corresponds well to the way reports are written, because they are impersonal as well (they don't begin with *Dear authors*).
However, in my opinion, while one should keep a good level of politeness, your paper won't be treated any better or worse whether you write *Dear Editor and Reviewers* or just *Dear Editor*.
Upvotes: 0 <issue_comment>username_4: The traditional review process is a communication between the editor, who has solicited reviewers to obtain critical peer review of the manuscript, and the author. As such, you should respond to the editor and provide a response to the comments made by the reviewers. It is not likely the reviewers will see your response unless they agree to re-review your revised version of the manuscript.
Open review processes are gaining ground, where the entire review process is open to the public. The Open Access publisher Copernicus has such a system, in which anyone can leave comments for the authors. The handling editor will also appoint official reviewers whose reviews will be posted publicly. In such a system, the response to the reviewers (the official reviewers as well as others who have left comments) will also be posted publicly. In that case the letter can be directed both to the editor and to the reviewers, since at least part of the review process has the form of a public discussion (Copernicus also calls their manuscripts "Discussions").
So in the traditional sense all communications will be with the editor but in the case of public discussion format review processes, the letter can be directed also to reviewers.
Upvotes: 1 |
2014/08/20 | 1,173 | 5,202 | <issue_start>username_0: Which section of a research paper should be written first? When I finally finish my analysis I begin to write the Methods section and the Results section. That is the first "block" of my writing. After that, I discuss it with co-authors (they are of course involved in the analysis, but at this stage they have the real results, graphs, and tables) and co-workers. Only after that do I begin to write the Discussion section and the Introduction. Is that right or is it better to write it in a different order?<issue_comment>username_1: There's no 'right' order. Starting where you feel able to do so is far better than not getting started. I usually write lots of sections concurrently, or start writing and work out what the sections are later.
On the other hand, it will make your life somewhat easier if you can work roughly in order of dependencies, so you don't have to keep changing what you wrote earlier. For example, it might be helpful to write out some of your notation before you start using it. The introduction will often come late in the process for this reason.
Upvotes: 5 [selected_answer]<issue_comment>username_2: This almost exclusively depends on the personal taste and habits of you and your co-authors.
There are different types of writers and different types of projects. The order of writing has to be suitable for these. If you and your co-authors are absolutely certain that what you did already makes a nice and complete paper story, then writing the methodology and results first makes sense - you can adapt the introduction accordingly then. However, if the final scope of the paper is not 100% fixed already, writing the introduction first makes sense, so you can check whether your project-so-far actually reads complete. The literature review may then have some impact on your methodology section, so that you can make the distinction to previous work very clear.
Edit-as-you-go style writers may also want to write the introduction first so that what is written so far is always in a clean state. For others ("binge writers" and writers that iteratively refine detailed plans) it may not really matter again, so the properties of the project can dictate the order of writing.
There is a plethora of literature on successful academic writing, and different books will advise different approaches, so there is probably no unique answer to the question.
Upvotes: 3 <issue_comment>username_3: I would just like to address this comment from the TC:
>
> Everything is OK, I just heard one conversation that without a written
> hypothesis and proper literature review no one should start to write
> an article. I thing that this part belongs to the Introduction but it
> is possible to have it only in mind when I actually writing a paper
> and put it on it later. I´ve been just curious, how other people
> writing their papers.
>
>
>
The order Method > Results > Discussion > Introduction (let's call this the MRDI method) is extremely common in my field (biomedical). From what I can tell there is nothing wrong with it, though your comments are certainly worth discussing.
**It's important to distinguish between conceptualizing and writing**
Generally, the path from concept to manuscript is not linear or singular. Some phenomenon, question, or challenge might have sparked us to gather some evidence and perform a hypothesis test. At that stage we might not have written an introduction, but we would already know the big three of the introduction section: what is known, what is not, and how our work can help. So, in a way, your quoted comment seems to have mixed up "research" and "writing." Yes, we may not have the introduction written first, but the big picture is already conceptualized and recorded in some form, just not necessarily as a manuscript.
**Hypothesis integrated analysis plan**
As for the comment on written hypothesis. Again, I think the speaker mixed up "research" and "writing" or he/she was speaking to a group of very new science students. In my field, all analyses were based on hypotheses, and all procedures are predetermined. In a sense, if we come up with any results, it's already implied that hypothesis setting and analysis planning have already been completed.
**Don't fuss over the real "order"**
Personal experience: don't sit down and think "Okay, I am going to write the Discussion section and nothing else!" When writing, just let your thoughts flow; if some of the material later seems more appropriate for the Introduction, so be it.
The same goes for the Results section. Some people are very disciplined about not interpreting any findings in the Results section. But I just write whatever I want in the draft, and then parse out the interpretive parts later. For me, thoughts are thoughts; they just come out like a chain of... sausages with different stuffing, for lack of a better analogy. After I am done, I go back to cut them up and categorize them. As time goes on I am getting better at churning out sausages of similar stuffing in one chain, but I have no desire to compartmentalize my thought process the way writers compartmentalize a research article.
Upvotes: 3 |
2014/08/20 | 667 | 2,749 | <issue_start>username_0: I have seen that the Scimago Journal Rank ranks journals covering different topics and classifies them into quartiles. As far as I know, conferences also publish proceedings volumes, but I was unsure whether these can be compared against journals the way Scimago does.
I say this because I believe that journals most of the time have a higher impact than conferences, so comparing them side by side (journals and conference proceedings) does not seem like such a good idea.
The question I have is whether Scimago is a good way to rank a conference's impact, and also whether there is another computer science conference ranking that uses the same quartile calculation as Scimago.
Any help?
Thanks<issue_comment>username_1: This depends a lot on the particular area. In some areas, proceedings of the best conferences are comparable with decent journals, while in others conferences are more social events. Computer Science in general is a more conference-oriented area, so proceedings usually have decent impact.
To check this in a particular subarea, if you have access to Scopus (other services probably have similar features), try the Analyze Journals tool to compare some conferences and journals in your field. In my case (Computer Vision), the top conferences are beaten only by the top journals.
For more info on different habits in different subareas I recommend:
*Wainer, Eckmann, Goldenstein, Rocha: How Productivity and Impact Differ Across Computer Science Subareas. Communications of the ACM, 2013.*
edit: **I just re-read your question and noticed that your actual question was about something else. For that I can point you again to Scopus (which covers both journals and top conferences) and their SJR/SNIP...**
Upvotes: -1 <issue_comment>username_2: >
> Is Scimago a good way to rank conference impact?
>
>
>
No. It isn't.
-------------
At best, Scimago is a good way to obtain a modified "PageRank" of a publication in the graph of citations between Scopus-indexed publications in a three-year window, with each citation weighted by the similarity of the citing and cited publications, as measured by their common citation profiles.
Even if you accept that Scimago's abstruse formula is an accurate indicator of "impact"—which is debatable for numerous reasons—neither the raw citation data nor the precise definition of "cocitation profile" (on which the formula depends) is available to independently verify Scimago's rankings.
In particular, a few spot checks suggest that Scimago's coverage of major computer science conferences is spotty, and that the data it extracts from those conferences (even for relatively straightforward things like "number of citeable documents") is not particularly accurate.
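For intuition only, here is a toy Python sketch of a damped, weighted PageRank-style iteration over a made-up citation graph. It is emphatically not SJR's actual formula (the real indicator adds size normalization, a three-year citation window, and cocitation-based weights that are not public), but it shows the general flavor of the computation described above:

```
import numpy as np

# Toy citation graph: W[i, j] = weighted citations from venue i to venue j.
# In SJR these weights would depend on cocitation similarity; here they are invented.
W = np.array([[0.0, 3.0, 1.0],
              [2.0, 0.0, 4.0],
              [1.0, 2.0, 0.0]])

n = W.shape[0]
P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
d = 0.85                               # damping factor, as in classic PageRank

rank = np.full(n, 1.0 / n)
for _ in range(100):                   # plain power iteration
    rank = (1 - d) / n + d * (P.T @ rank)

print(rank / rank.sum())               # relative "prestige" of the three venues
```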
Upvotes: 2 |
2014/08/20 | 941 | 3,897 | <issue_start>username_0: At my Czech university where I study computer science (but I believe math and physics are organized the same way here), most courses have both lectures (professor presenting the topic to a large class) and "seminars" (TA giving exercises and homework to a smaller class).
I believe courses at US universities have different structure and that the term ["seminar"](https://en.wikipedia.org/wiki/Seminar#Universities) has different meaning. But there has to be something like our "seminars", right? What is it called? How does it look like?<issue_comment>username_1: This is often called a "[recitation section](https://en.wikipedia.org/wiki/Recitation#Academic_recitation)".
Upvotes: 4 <issue_comment>username_2: Generally these are referred to as "labs" or "discussion" sections, as opposed to the "lecture" session.
Giving an example from computer science, it is not uncommon for there to be a lecture session where the instructor (such as the professor) talks about the material, takes questions, etc. Then there is often a lab session, often held in a computer lab, where students can work on hands-on assignments, homework, and course projects. These can be staffed by the instructor/professor or by student teaching assistants, or held as an open lab where the room is reserved but no one conducts the session - students come and go and work as they please.
In fields of communication, philosophy, and history, it is common that this lab session is replaced by a discussion session. The lecture is often of a mass variety, where the professor gives the talk to hundreds of students at a time. During the discussion session the professor or a student teaching assistant holds discussions, readings, gives out assignments, and various other similar tasks.
In physics and biology, again there is often a lecture or mass lecture held by the professor, then sometimes both lab and discussion sessions may be held. Again the staffing and locations vary, but the general theme is the same.
I have personally experienced these in a number of institutions in the US in the fields of biology, chemistry, physics, philosophy, history, communications, art, computer science, and psychology...so it certainly seems to be a very common pattern.
These non-lecture sections are almost always of a less-populated variety as well. If the class only has 20-30 students total, then the lab sections are of the same size. If the class is over 30, I have usually experienced lab and discussion sessions to be smaller, with as little as 12-20 students maximum - but this usually varies by room and lab availability and subject, and thus will vary by University, department, and subject.
Upvotes: 4 <issue_comment>username_3: In the Uk at least there are often tutorials which sound similar to what you describe. They consist of a small group of approx. 4-5 students with one tutor (generally a professor/lecturer/post-doc). The aim is to do exercises and go through problems the students are having with the course. Also you are often set homework for the tutorials.
I don't know how common this system is in other parts of the world.
Upvotes: 2 <issue_comment>username_4: As a variation on username_1's answer, at my last institution, the local jargon was simply to refer to these class meetings as *section*.
>
> I can't solve this homework problem, I'll ask my TA about it in section.
>
>
>
It may have been short for *recitation section* but I don't believe I ever heard the longer form.
Upvotes: 0 <issue_comment>username_5: At the institutions I've attended and taught at, these have been called "section" (I've never heard "recitation section"), "tutorial", "discussion section", "TA session"/"TA section", "lab", "small group", and "studio".
Of these "discussion section" was the most common and, at my BA and PhD institutions, the official title for it.
Upvotes: 0 |
2014/08/20 | 1,963 | 7,950 | <issue_start>username_0: This question is related to [How to improve myself as a lecturer?](https://academia.stackexchange.com/questions/5236/how-to-improve-myself-as-a-lecturer). However, being a PhD student, my main teaching obligation is leading recitations, i.e. sessions for groups of ~20 students that take place each week after a lecture and the content of the lecture should be revised mostly via exercises.
If you do not use the term "recitation session", please see a related question:
[What is the equivalent of European "seminar" in US universities?](https://academia.stackexchange.com/questions/27415/what-is-the-equivalent-of-european-seminar-in-us-universities)
At my university (Charles University, Faculty of Mathematics and Physics, Czech Republic), in the mathematics/theoretical computer science classes, usually you have a 90 minute class where the recitation teacher reminds students of what was said in the lecture, hands out or writes exercises, and then, in shorter blocks, students go through the exercises and one student shows the solutions on a blackboard.
I'm not very fond of this structure and I would like to improve my classes with ideas that I cannot find at my university (where we usually do things the aforementioned way).
Some ideas that I've had in my previous years, which you can judge effectiveness of:
* handing out "cheat sheets" containing the entire course notes (with compact proofs) beforehand: usually useful, as lecturers rarely have such compact notes beforehand, but very time consuming.
* trying to ask each student how he is faring and offering personal advice: when I was a student, I preferred this model, but it takes a lot of time to go around 20 people and talk briefly with each one; my students (in an exit questionnaire) argued that they want more exercises done per class, so time is of the essence.
* allow group work during a class -- this seems natural to me (science is mostly done in groups) but often results in people not being able to perform as well in final exams. Plus I am not yet sure how to allow groupwork so that groups don't delegate the work to the one enthusiastic student in the group.
* use text questionnaires during and after a session to find out what students would like; I am very happy with the information in those and will use them in the future, but students of one university tend to suggest improvements which they have noticed at the same university (especially where it is expected to go to one university for the 3-year Bc. and 2-year Master's).
* filming yourself (as was suggested in the lecturer's question) is definitely a valid option but I feel it won't help me as much with recitations, especially since (I believe) there are not many great recitation sessions publicly available on the internet.<issue_comment>username_1: I've been leading tutorials (much like your recitation sessions) and studio based learning sessions for 7 years as a TA, and have been teaching as a professor for 2 years. An effective way to improve teaching, at least for me, has been a keen interest in the scholarship of teaching and learning. Through delving into research I've radically changed my own viewpoint of what it means to be a teacher, and deepened my understanding as to how learning works.
In my own teaching, I try to structure my classes around student centred, active-learning principles. I act more as a facilitator of learning, and have students actively discuss and work to achieve the objectives for a class. Rather than overtly lecture on a principle, I'll prepare readings for the students ahead of time then in the session have students discuss and reinforce the key elements of a principle through a realistic application, usually in small groups. They will all present their work and then discuss their approach and its merits.
The above example is only one small and, to be honest, quite vague example of more active learning. The key is that the focus and effort should be on and from the student. There are many, many more great examples and other evidence-based effective teaching methods out there. One fantastic resource that I routinely read and recommend is:
<NAME>., <NAME>., <NAME>., & <NAME>. (2010). How learning works: Seven research-based principles for smart teaching. [Link to this book on [Google Books](http://books.google.com/books?id=gu5qpi5aFDkC&lpg=PP1&dq=How%20Learning%20Works%3A%20Seven%20Research-Based%20Principles%20for%20Smart%20Teaching&pg=PP1#v=onepage&q&f=false) and [Amazon]](http://rads.stackoverflow.com/amzn/click/0470484101)
Upvotes: 2 <issue_comment>username_2: I've taught Biology discussion sections, and trained graduate students to teach. I'm a biology education researcher, so I am familiar with studies that measure student learning in different environments. I'd summarize good teaching as follows:
1. Find the student misconceptions.
If you are good at your subject, you likely have not struggled to learn it. Your biggest mystery is what do students not understand. Unfortunately, students don't have the meta-awareness to tell you, so you'll need to figure the misconceptions out together, at least for the first run-through of the course.
2. Show students their misconceptions.
Because students don't know what they don't know, they often feel like they understand material that they actually do not. When you show them they don't understand, they are more eager to fix the problem.
3. Give students an opportunity to learn.
Your class activities will indeed include a lot of group work, because discussing a problem, asking questions and answering them keeps everyone improving right at the sticky points. The big secret to group work so that everyone stays busy? Groups must be only 2 or 3 students. And you'll be walking around and asking the quiet students what they have figured out, so there is no slacking.
Here is an example of how to make this work. It's copied from [an earlier answer of mine](https://academia.stackexchange.com/questions/26219/what-to-do-in-recitation/26243#26243).
* Look over the problem set the night before, and determine the easy ones from the hard ones.
* When students arrive, tell them to start work on a difficult problem. Plan on giving them 5-10 minutes -- whatever they need until many of them slow down. Walk around as they work and just see how they choose to work.
* After several are stuck, have them get into groups of three and compare their techniques and what they found difficult.
(note: students hate group work because it is more painful than just listening. Most classes will try not getting into groups, or not really talking, just to see if they can get out of it. A cheerful forcefulness works well on American students)
* Walk around again and ask questions like, "Tell me how far you are. Can you show me a part that is difficult? Can you get out your lecture notes and find the section relevant to this problem?"
* Usually at this point you will see a sticky point that more than one student is wrestling with. Now is a great time to pull everyone's attention back to the front of the room and you can work through that problem (or one similar) and answer questions for 5-10 minutes.
* Have students return to the problem and complete it. Then they can do a similar problem on their own to reinforce.
* Move on to the next topic and assign a new, difficult problem.
As you can see, this process shows both you and the students what they don't understand, and provides space and motivation to fix the problem. Because you're visiting the groups, they must produce.
Things I wouldn't recommend for a recitation:
1. Making course notes or a study guide. Students should be making their own.
2. More lecturing
3. Questionnaires are fine, but again, students don't know what they don't know
We've actually posted an example of discussion work for biology here: <http://vimeo.com/33801546>
Upvotes: 3 |
2014/08/20 | 937 | 3,863 | <issue_start>username_0: If I am the TA for a class, what should I do if a student asks me a question which I can't answer? While "tell the truth and say you don't know" is one approach, are there other options?<issue_comment>username_1: I agree that honesty is the best policy, and it's too bad if you're in a situation in which you feel worried about admitting you don't know the answer. You shouldn't try to bluff, by pretending you know but don't have time to explain or by giving an intentionally vague answer. However, there are ways of handling it more smoothly than just saying "I don't know" and leaving it at that. Depending on the circumstances, you can say "That's a really interesting question. I haven't thought about it, so I'll have to look into it, but let's talk about it in office hours." (Or you can promise to return to the topic in the next class meeting if it's really relevant to the course and everyone in the class will want to know the answer.) Or "These issues can be complicated. I don't know the details off the top of my head, but the place I'd look them up is Reference Work X. I'd be happy to show you where to find it after class." Or "That's a good question, but it's somewhat beyond the scope of this class. I'd be happy to investigate it with you outside of class."
The key is to respect the student's desire to learn. If you avoid the question or give an answer you know is inadequate, then you're being deliberately unhelpful. If you just give up and admit defeat, then at least you're being honest, but the student still isn't finding out what he/she wanted to know. If you respond by pointing the student on the road to an answer, even if you can't supply it off the top of your head, then you've done everything that can be expected of you.
Upvotes: 7 [selected_answer]<issue_comment>username_2: If it is course related, just tell them "we will get there" and don't hurry. Go home, find out the answer, and give them the answer the next day or so. :)
Upvotes: -1 <issue_comment>username_3: I'm a professor, so I am expected to know the answers, but sometimes I don't. This often involves some minor detail in a programming language.
So I usually say, "That's a good question. I don't want to give a wrong answer, so let me think about that and get back to you." We use a course management system which includes a discussion board, so I will usually then add, "I don't want to forget, so post that on the forum. That way everyone will see the answer." Then in the posted answer I try to explain how I found the information. I find this works well.
Upvotes: 4 <issue_comment>username_4: It happens a lot, so in this situation I prefer not to say "I don't know the answer"; instead I say "good question, let us think about it and we will discuss it later".
Upvotes: 0 <issue_comment>username_5: >
> I don't know the answer to your question at the moment, let us all try to find a
> solution together.
>
>
>
In that way, you are communicating that there are always new ways to look at things, and that a question presented for the first time can be difficult to answer on the spot.
Then you might well be in the same shoes as the questioner and the other students, and one logical way is to sit down and solve it together. You could invite the whole class if you wish; a cooperative effort. The main idea is to find a way to tackle the problem before it fades away.
Upvotes: 0 <issue_comment>username_6: You should say that you don't know the answer or did not prepare for it (especially if the question is off topic). Then you can either search for the answer with the student, if you have time and it is appropriate, or, if you cannot at that moment, say that you will look into it and give the student an explanation by e-mail or the next time. This is something that happens even to professors sometimes.
Upvotes: 0 |
2014/08/20 | 1,496 | 6,452 | <issue_start>username_0: I would have thought the document and overall findings are to be a closely guarded secret until defense or publication, so you can imagine my horror that a hiring professor would ask if he can have a pdf of my dissertation. This is in the context of a job application, whilst he decides whether or not to invite me for an interview. I have already sent them the other standard documentation that was requested in the advert. Is his request as unorthodox as it seems to me?<issue_comment>username_1: It might depend on the field, but it strikes me as pretty normal to ask for the PhD thesis in the context of an academic job application.
Your PhD thesis shows the quality of your research. Your PhD thesis shows the quality of your ability to *communicate* your research. Both are essential skills in (academic) research. Unless you already have many published papers — and it appears that you do not (or else why would it be secret?) — then your PhD thesis is the only document that can serve as evidence that you do possess those skills.
If you are worried about results leaking, you can ask for the manuscript to be treated confidentially. If you are worried about the recruiter stealing and abusing your results, you might want to reconsider if you want to work there in the first place.
Upvotes: 5 <issue_comment>username_2: In a word, yes. It is very common for academic employers to want to know about a candidate's research in progress, and they often ask for research plans, unpublished manuscripts, reports on ongoing projects, etc. From the employer's point of view, they want to know as much as possible about what the candidate is doing, so as to evaluate the promise of their research program and their productivity. This is especially true for junior candidates who do not already have a large body of published work. So a request for a draft of a PhD thesis would not be out of line.
When a candidate shares such material as part of their application, the hiring professor or committee has an ethical obligation to hold it in confidence. They should not circulate it beyond those people within the department who are involved in the hiring decision. Also, it would be ethically inappropriate for anyone with access to this material to exploit it for the gain of their own research program (e.g. by trying to solve the candidate's thesis problem before they do, or giving it to one of their own students). As the candidate, you have the right to expect that this will not happen.
Of course, as a matter of practice, if you want the job, you don't have much choice but to give them what they ask. But I wouldn't see such a request as unusual or unreasonable, and I don't think you need worry about them using it unfairly. If you are still worried, you could send them the thesis along with a note saying "since this is work in progress, I would ask that you keep it in confidence".
Also, I would say it's an exaggeration to say a thesis should be a "closely guarded secret" or to react with "horror" to a request to share it. It's generally prudent not to share unpublished work indiscriminately, but it's not as if it were missile launch codes. If there is something to be gained by sharing it with someone (e.g. useful input from an expert, a potential collaborator, a job) then often that's a good idea. It seems to be pretty common for people starting out in academia to overestimate the risks of people stealing their work: yes, there are horror stories, but in the long run, you usually have more to lose from excessive secrecy than from reasonable openness. Paranoia is generally not a helpful trait for an academic.
Upvotes: 6 [selected_answer]<issue_comment>username_3: I know at least one country where it's a general requirement that you send two copies of your thesis for any application for an academic position (in a particular field). Very few recruiters have the time and inclination to actually read it but that's still a requirement. Theses are also all archived (on microfiches!) and can be ordered from any university library in the country.
It's all a bit silly now because online repositories are much more practical than either copying thousands of pages or reading microfiches on a bulky machine but it underlines the fact that in principle a PhD thesis is a public document and one that (academic) recruiters might want to see.
Also, a (completed) PhD thesis is a form of publication, even if you haven't put it out in the form of a book. Depending on the field, it's not as well regarded as journal papers, but it would certainly establish your priority claim in the extremely unlikely event that someone would try to publish something based on it.
Upvotes: 0 <issue_comment>username_4: As others have said, it's not unusual.
However, to answer those talking about paranoia: at my university, we used to have seminars where we discussed different aspects of our theses with other PhD students. This was done with the explicit agreement that these discussions would remain strictly confidential and that we would not use each other's work. However, there were no written, signed documents to ensure this.
So after one such session, where I was explaining a central point of my thesis, one of the fellow students, who was researching an entirely different subject, was quite interested and stayed on to discuss my thesis at length. I was flattered by the attention - only to find, a few months later, that he had published a book containing, basically, all of my PhD thesis. I had to completely change my thesis, and although my tutor commented on the fact that the book contained what looked like my work, there was nothing I could do. The guy who stole my work is now a lecturer at the same university.
Upvotes: 3 <issue_comment>username_5: After recent experiences of having my ideas and authorship credit stolen from me, I have become one of those people who overestimate risk. It is better to be safe than sorry. To help: seek advice regarding this situation from a trusted adviser, a graduate students' guide, or the university's free legal services. This can also help you prepare a professional approach.
Do you have the option to ask the recruiter to sign a form, letter, or email stating that the unpublished thesis will be treated as confidential, and clarifying the extent of that confidentiality? This should not be an uncommon request if you cite examples and state your concern in an upfront manner.
Upvotes: 0 |
2014/08/21 | 4,726 | 20,003 | <issue_start>username_0: **It seems to be accepted wisdom in the business world that reference letters for former employees should be extremely terse.** They should confirm that the employee worked there, and essentially nothing else:
>
> To whom it may concern: <NAME> was employed by the Weyland-Yutani Corporation from June 2137 to September 2139. Signed, <NAME>, manager."
>
>
>
The standard reasoning is that if the letter contains something unfavorable and the employee is turned down for a future job, they might sue their former company for libel, claiming the unfavorable statement was a lie which damaged their career. In order to avoid the possibility of such a legal battle (which, the reasoning goes, could be very expensive, even if the company wins), the company tells its managers to write letters with no content, so that there's no chance of them containing something actionable.
This question on Workplace Stack Exchange, [Is there any evidence that giving references for former employees is inherently risky?](https://workplace.stackexchange.com/q/226/26826), attests to this practice, and the answers provide some suggestion that the company's fears are justified.
**On the other hand, in academia, detailed and informative recommendation letters are the norm.** They often run to multiple pages, and contain specific information about the candidate's history and activities at the institution, as well as the writer's (supposedly honest) subjective assessment of the candidate's strengths, weaknesses, and potential. This is not only common but effectively mandatory; a minimal recommendation letter of the kind described above would immediately consign the candidate's application to the nearest wastebasket.
This would seem to be just the sort of thing that fills corporate counsel with horror, yet we do it every day. No academic employer of mine has ever told me not to do so; for that matter, I can't say that I've ever received any official guidance, one way or the other, on writing recommendations. Nobody in academia seems to be concerned that writing a letter that could be construed as less than favorable could result in legal consequences. **So how are we getting away with it?**
Are universities treated differently under the law, making them less vulnerable to such threats? Or do they willingly accept the legal risks in order to make the world a better place by providing actual information about their former students/employees? Or is everyone ignorant of the risk of trouble? Or is there something I have not thought of?
To forestall a couple of objections: I know that academics usually try to write only positive letters, and decline to write if they have nothing good to say, but that's evidently not enough in the corporate world. Anyway, candidates often end up damned by faint praise. Also, I know that we send letters in confidence, with the understanding that they won't be shown to the candidate, and sometimes this is backed by having the candidate sign a waiver, but I have to believe that a sufficiently determined and litigious candidate could get access anyway.
(I know this question may sound rhetorical, but I ask it in seriousness, and hope to learn something from the answers. My context is the US, but if things are substantially different in other places, that would be interesting to know as well.)<issue_comment>username_1: In general, **yes**, there is a major difference in how employees' and students' letters are handled. Applicants to US undergraduate and graduate programs are routinely asked to waive the right to see their letters of recommendation, and therefore the letter writers are free to speak in a much more candid manner and offer a frank appraisal of a candidate's positives and negatives.
It's worth noting that this is often **not** the case outside the US, where students are often directly given letters of recommendation that are exactly of the bland and generic format you described above. These letters, as you can imagine, are essentially useless in describing a student's real qualifications, and consequently we give those letters very little weight in our deliberations.
Upvotes: 4 <issue_comment>username_2: In Japan, the recommendation letter is a 推薦状 [*suisenjyou*] and it will in all but the most Westernized schools be given to the student such that they can read it and stamped with the seal of the professor and possibly an official seal from the school.
The purpose of this is slightly different than an American recommendation letter -- which is more of an appraisal of the student's abilities. By and large, the Japanese recommendation letter serves as an introduction / expression that the professor or teacher vouches for the student as someone trustworthy for the new job or school. It contains little information and not much detail on why the student would be recommended.
It was quite the pain getting letters of this format from some of the Americans less aware of the existence of what to us is such a bizarre practice.
---
I believe but cannot vouch that the practices are similar in Korea and China.
Upvotes: 4 <issue_comment>username_3: I'd guess part of the answer is that academics deal with the risk because they can't afford not to, because a reference serves a different purpose.
In business, my understanding is that people are largely selected for jobs on things that can be measured and compared apart from references (such as formal qualifications, performance in aptitude tests, years of experience), and on things that the hiring committee is able to assess (such as answers at interview).
While some of these play a part in academia, a large factor of hiring decisions is the applicant's research potential/demonstrated performance. This is sufficiently specialized and difficult to compare that the hiring committee cannot do that task entirely alone. They need additional experts to help make the judgements. Part of that can be done by sending papers to experts chosen by the panel, but part of it is done by receiving detailed references from people who are already familiar with the applicant's work (and so hopefully won't need to spend as much time to make a judgement).
Upvotes: 3 <issue_comment>username_4: My impression is that the difference is largely cultural. The academic community in the U.S. has developed a culture of relying on written letters of recommendation, and this practice has a lot of inherent stability. Someone considering suing knows that a questionable lawsuit would permanently ruin their career, in a way that wouldn't necessarily happen in the business world. (For one thing, academia is a much smaller, more tightly knit, and more philosophically coherent group.) If a student or job applicant did sue frivolously, then it would probably be cheaper to settle, but the university would not do so: proving a point would be a higher priority than saving money. (By contrast, for-profit businesses are less likely to make unnecessary expenditures to try to defend their principles.) University attorneys know that if they advised faculty not to write or require letters, everyone would ignore them and it would just hurt the university's reputation. The culture takes on a life of its own, regardless of the legal context.
However, I believe U.S. legal precedents are actually pretty favorable for academic letters of recommendation. The key phrase is "academic deference," which as I understand it is the theory that academic judgments are not subject to legal review because the courts aren't even capable of evaluating these judgments. (Of course I'm not a lawyer and so I'm probably at least overlooking subtleties.) You can sue for reasons like discrimination, but you can't successfully sue someone who made a good faith effort to judge your academic performance or potential on the grounds that you think they judged it wrong.
The academic deference doctrine is based on a number of precedents, but it's somewhat controversial. For example, people argue that it makes it too easy to get away with discrimination (since it makes courts reluctant to examine evidence), and that lots of non-academic judgments are equally difficult to evaluate but don't have a special doctrine protecting them from review by the courts. See [this article by Moss](http://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=1365&context=bjell) for a review of the precedents and an argument against academic deference.
Depending on the circumstances, there may also be other legal issues. For example, it's much harder to sue someone for libel based on their evaluation of your published scholarly work. In particular, you have to prove [actual malice](http://en.wikipedia.org/wiki/Actual_malice) since publishing the work makes you a public figure for this purpose. In other words, it's not enough to prove that the evaluation was wrong; instead, you have to prove that they knew it was wrong or at least had reckless disregard for the truth. (See, for example, [Posner's wonderful opinion](http://www.projectposner.org/case/1996/75F3d307) on whether calling someone a crank is defamatory.) I believe this is much more clear cut and better established than academic deference is, but it is not relevant for unpublished work or other recommendation letter content.
Returning to the cultural theme, it's also worth thinking about psychological factors. If I run a business, then my ex-employees could be applying for jobs at competitors, partners, or companies that are irrelevant to my interests. Among these possibilities, partner companies I'm actively working with are only a small fraction. The generic case is that I won't care about the new employer or will actively harbor ill will towards them. If a lawyer advises me that it's best not to say anything, I may seize on that excuse even aside from the lawsuit potential. By contrast, academia is a world in which we're all partners, and that feels very different. Universities do compete against each other in some ways, of course, but there's a much greater feeling of good will and cooperation than you see between most for-profit companies. It's no surprise that very different norms and practices have developed in a far less competitive environment.
Upvotes: 6 <issue_comment>username_5: I believe there is a very reasonable misunderstanding here - there is a difference in a "verification of employment", being used as a "reference", and a "recommendation letter". These all have their common uses, but they are handled very, very differently.
Verification of Employment (present or past)
--------------------------------------------
>
> To whom it may concern: <NAME> was employed by the
> Weyland-Yutani Corporation from June 2137 to September 2139. Signed,
> <NAME>, manager."
>
>
>
This is a perfect example of the proper, commonly accepted response to a request for a verification of employment. This sort of letter is useful in a wide-variety of circumstances, including:
* Proving your work/career history to an employer
* Verifying your job to a government agency or company (think insurance, banks, State benefits, etc)
* Qualifying for corporate discount/membership arrangements (cell phones, computers, credit unions, etc)
This is not at all to be construed as a letter of recommendation - it isn't one!
There are cases where more information is requested, such as:
* Is the employee eligible to be re-hired at the company
* How did the employment end - were they fired, quit, laid off?
* Salary verification (this is not the most common, but it is not unheard of and more commonly requested by banks and government agencies)
Serving as a Reference
----------------------
A employer or manager may be asked to serve as a reference, where a prospective employer might wish to call and speak with them personally and ask them various questions. What questions get asked, and any potential for legal action, depends upon what is said, documentation, and consequences. Most of the legal actions revolve around claims of libel or defamation. Note that in the US you are allowed to cherry pick your own reference to provide to a future/prospective employer.
Sometimes this is requested by email or fax, in which case it is common to just treat it as little more than a verification of employment request. However, sometimes this is much more important than a mere verification - it depends upon the position, industry, recruiter, etc.
Letter of Recommendation
------------------------
In industry a real letter of recommendation functions much more closely to how it is treated in academia. A person in a position to closely evaluate an employee is asked - almost exclusively by the employee - to provide such a letter. This is almost always requested from an immediate supervisor, manager, directory, or executive with a close working relationship with the employee.
All the same rules apply. These are less common in industry, perhaps at least partly because they are rare and unexpected and because unsolicited opinions tend to be discounted and distrusted much more commonly than in academia. However, they can still come in handy.
Fear of torts really isn't that big of a factor as far as I've seen, because a letter of recommendation is expected solely to be positive. I'm not aware of any case of a company suing another company because "you said this guy was great and it turns out he was useless!" It just isn't done.
And an employee couldn't very successfully sue an employer for saying something nice about them; the USA may have a reputation for being litigious, but that's just ridiculous (not that it hasn't ever happened I'm sure, but it's obviously frivolous and very rare).
>
> To forestall a couple of objections: I know that academics usually try
> to write only positive letters, and decline to write if they have
> nothing good to say, but that's evidently not enough in the corporate
> world. Anyway, candidates often end up damned by faint praise.
>
>
>
Actually, in the corporate world this is almost exactly what happens. If an employer doesn't want to provide a real recommendation, they just provide a verification of employment.
I've had recruiters contact previous/current bosses of mine (who I talked with in advance to ensure they would provide a useful reference rather than a mere verification of employment), and been told by the recruiter when such contacts were very positive. "They really gave you a glowing review!"
Most good corporate bosses will be more than happy to say good things if they have a good opinion of you, and will accept any request of yours to share their opinion. If they get asked for a reference and they don't know you well or don't have a great opinion of you, they'll probably just not say very much - unless they hate you, in which case you really shouldn't have offered them as a reference!
How Does Academia Do It?
------------------------
It's really much like the corporate world does it: they recognize that libel and defamation are in fact illegal, and so avoid even a potential appearance by avoiding saying negative things. They also realize that speaking poorly of others tends to reflect badly on you, too - so "if you don't have anything nice to say, don't say anything at all".
And yes, you can be damned by faint praise. If you are looking for more than an entry-level job at a company and talk about what great work you did at your last place and how valuable you were, and then they talk to the reference you provided (probably your boss or a colleague) and all they'll say is you worked there, and maybe you were a nice person who was usually punctual? Yeah, that's not going to go well in any sector.
Upvotes: 5 <issue_comment>username_6: In academia, publicly grading a student's performance is par for the course. After all, you get a GPA which is a very public (if often somewhat subjective) "evaluation" of your academic strength... Writing a few words that describe, in essence, where that grade came from, is a logical extension. From there, a full letter of recommendation is not much further.
By contrast, employee evaluations are considered highly confidential; so from the outset, there are different approaches to the way a person's performance is evaluated and publicized. That difference seems to carry forward into the culture of recommendations.
I have been a manager of people "in industry" for years, and ran into this problem from time to time. On a handful of occasions I have agreed to be a reference for people who used to work for me. Typically I have done this only for people who had already left my organization, with whom I had remained in touch, and who I would not be afraid to give a good recommendation. Typically those things tend to go hand in hand... and I know that for at least one of them, my recommendation made the difference for him getting his "dream job" (I know this, because 10 minutes after hanging up the phone to the HR department of his future employer, he called me to say he had just received a verbal offer, and that they mentioned that my reference had tipped the scale). For me, doing the right thing on occasions like that is more important than following recommendations of corporate lawyers... call me reckless.
Update (quoting from <http://www.aaup.org/issues/academic-freedom/professors-and-institutions>):
>
> The professional standard of academic freedom is defined by the 1940 Statement of Principles on Academic Freedom and Tenure, which was developed by the American Association of University Professors (AAUP) and the Association of American Colleges and Universities. It is the fundamental statement on academic freedom for faculty in higher education. It has been endorsed by over 180 scholarly and professional organizations, and is incorporated into hundreds of college and university faculty handbooks. The 1940 Statement provides:
>
>
>
> >
> > College and university teachers are citizens, members of a learned profession, and officers of an educational institution. When they speak or write as citizens, they should be free from institutional censorship or discipline, but their special position in the community imposes special obligations. As scholars and educational officers, they should remember that the public may judge their profession and their institution by their utterances. Hence they should at all times be accurate, should exercise appropriate restraint, should show respect for the opinions of others, and should make every effort to indicate that they are not speaking for the institution.
> > AAUP, Policy Documents & Reports 3-4 (9th ed. 2001) (hereafter "Redbook").
> >
> >
> >
>
>
>
Further down this same (rather long) article, there are numerous instances of case law. I am quoting just one that seems to indicate that academic freedom and "selection of faculty" are intertwined - this would seem to give academics the freedom to write proper detailed references (protected by "academic freedom") since such references are used "for the selection of faculty":
State v. Schmid, 84 N.J. 535 (1980), appeal dismissed sub. nom., Princeton Univ. v. Schmid, 455 U.S. 100 (1982)
>
> Any direct governmental infringement of the freedom of teaching, learning, and investigation, is an assault upon the autonomy of institutions dedicated to academic freedom. In addition, **universities perform functions, such as the selection of faculty, that are inexorably intertwined with the exercise of academic freedom.**
>
>
>
I think that last sentence is the reason "we are getting away with it".
Upvotes: 2 <issue_comment>username_7: I always show a student (or colleague) the letter I've been asked to write - noting that although s/he has (almost always) waived the right to see the letter, I have not waived the right to show it. S/he can suggest edits, or even ask me not to send it (that's never happened).
Upvotes: 3 |
2014/08/21 | 2,475 | 9,831 | <issue_start>username_0: I have recently submitted a 40 pages paper to a journal, say (A). After about 6 months, the editor let me know that several reviewers have declined to review my paper, and so he decided to reject the paper. He suggested that I submit my paper to a more specialized journal. Journal (A) is already a specialized journal and I only know 1 journal more specialized than (A), let's call it journal (B). So, one of my options is to submit my paper to journal (B) and accept the risk of a similar feedback from the editors of journal (B), of course after several months.
In the meantime, I think the main reason several reviewers declined to review my paper is that (1) my paper is relatively long, and (2) my paper consists of two parts and each part addresses a different subject. Therefore, the set of reviewers who have expertise in both subjects and are willing to read and review my paper is very small. Due to these facts, it is very likely the editors of journal (B) would face the same problem. So, as the second option, I am thinking of splitting my paper into two shorter papers, each covering only one subject of my original paper. Regarding this option, I can think of the following pros and cons:
Pros:
(i) There are a good number of experts in each subject and it is fairly easy to find a reviewer for each one of my shorter papers.
(ii) This facilitates the refereeing process of each paper and hopefully shortens it.
(iii) Two papers (each approximately 20 pages) look better than one paper (approximately 40 pages) in my CV.
Cons:
(a) The second part of my paper depends on the notations and results of the first part. So the reviewer of the second part may prefer to read and review the whole paper at once, or even worse he/she may call the paper containing the second part incomplete.
(b) Part of the motivation of the developments in the first part of my paper comes from my work in the second part. By separating these two parts, the reviewer of the first part can complain about the lack of enough motivations and justifications for my results. In my opinion, it is not a serious problem because I will explain the application of my works which is going to appear in the second paper. But I am not the person who makes the final decision and the reviewer may blame this and reject the paper.
(c) I can imagine that it would be a difficult path to follow the referee processes of two related papers simultaneously, because of the following reasons: It is possible that the opinions of reviewers of the shorter papers differ significantly. Or it is possible that these papers get refereed in two very different time periods. It is also possible that one of the papers gets accepted and the other one doesn't, which is a pretty ugly situation.
Unfortunately, I have faced each of the above difficulties ((a), (b) and (c)) in my previous submissions and I know how they can ruin my papers. In fact, the main reason that I organized my results collectively in one paper was to avoid the above issues. But now that my paper has been rejected without any peer review, I am considering the option of splitting my results into two papers. So, I have the following questions to ask from people who have more experience and have been involved with similar situations (for instance as an editor or a referee):
(1) What do you think about the above pros and cons? Do you know any other pros and/or cons? And, is it a good idea to split my paper into two shorter papers?
(2) If it is advisable to split my paper into two papers, should I submit them to the same journal, (maybe same editor), or should I submit them to different journals according to the best editors who can handle my papers?<issue_comment>username_1: Well, in my opinion, the following fragment of your question outweighs everything else:
>
> (2) My paper consists of two parts and each part addresses a different subject. Therefore, the set of reviewers who have expertise in both subjects and are willing to read and review my paper is very small.
>
>
>
I appreciate that you are able to imagine the problem that will be faced by the journal editors. So, splitting into two is certainly not a bad idea in this case.
I can mention some instances where this has been done in the past. In each of these cases, the problem wasn't as acute as yours (i.e. both could actually be reviewed by the same expert), but perhaps the authors chose to do this because of length considerations (or some other reason that I can't imagine). Also, in these cases, it wasn't a case of two different journals A and B - it was two sequential papers in the same journal. So, that creates an additional option for you, if you find it appropriate. *(The context here is Physics, but I'm sure this can be generalized to Maths, if [I'm right](https://academia.stackexchange.com/users/4511/vahid-shirbisheh)!)*
**Example 1**: [<NAME>](http://en.wikipedia.org/wiki/Richard_Feynman) was a charismatic Nobel Laureate (Nobel, 1965), as you probably know. Here are his two significant contributions to QED, appearing back to back in Physical Review:
[Paper1](http://journals.aps.org/pr/abstract/10.1103/PhysRev.76.749), [Paper2](http://journals.aps.org/pr/abstract/10.1103/PhysRev.76.769) (both are free pdfs officially, given their landmark status.)
In particular, he began Paper 1 by writing:
>
> This is the first of a set of papers dealing with the solution of problems in quantum electrodynamics.
>
>
>
and started Paper 2 with the sentence:
>
> This paper should be considered as a direct continuation of the preceding one ...
>
>
>
He had developed the formalism in the former and applied it to the problem in the latter. That makes a candidate for splitting into two.
**Example 2** Here are two papers by [<NAME>](http://en.wikipedia.org/wiki/Sidney_Coleman) which form the backbone of phenomenological effective Lagrangian method in low-energy Nuclear Physics. (These aren't free and I'm not sure you will be able to get past the paywall here!)
[Paper 1](http://journals.aps.org/pr/abstract/10.1103/PhysRev.177.2239), [paper 2](http://journals.aps.org/pr/abstract/10.1103/PhysRev.177.2247).
Notice again, that they are consecutive papers in the same journal. The second paper also has an extra author, and that could be one reason for splitting into two. But once again, from the point of view of content, the general method was devised in paper 1 and applied to some context in paper 2. But here, the authors spent a section of paper 2 in explaining what they developed in Paper 1.
Thus, long story cut short - it should be possible to go ahead and split into two parts. If they are consecutive, you can carry over everything directly, if not, spend a few sentences explaining your notation etc.
PS - Congratulations for doing this sort of work which could put the editors into this type of a fix. That smells like a significant contribution to Maths, having applications elsewhere (other branches?), which is probably why you insist that it would be rare to find a referee who can ably judge both!
Upvotes: 5 [selected_answer]<issue_comment>username_2: This problem sounds like either:
* a bad editorial choice and a poorly structured paper, or
* two papers that were not split in time.
Generally a research program does not just grow like a blob; it follows some strategy and planning. If you split the two parts, you can briefly mention this strategy (even if it is retrospective) and the results of the other paper. This can go in the introduction and in the discussion as well. Most journals allow citing manuscripts under revision. If not, you can state, without a citation, that you had such-and-such motivation and that the results will be published soon.
If you don't want to split the paper, you can restructure it so that one part is subordinate to the other. E.g., if you have some important technical detail in the first part that you use in the second, but which is boring in itself, you can put the details in an appendix or the SI. This is pretty common practice, especially in older papers, and it helps readers orient themselves.
Two notes:
* Again, I would consult with the editor in the first place. They are busy, but generally nice people, so the editor may comment in more detail on your worries about size and structure. The reasons for the rejection may be completely different from what you assume.
* You can always try sending the paper to another journal (not necessarily a more specialized one) without much re-editing. Different journals have different expectations for length, structure, and scope.
Upvotes: 2 <issue_comment>username_3: Allow me to quote from the code of practice of a journal in my field, which (IMO) should apply everywhere:
>
> Fragmentation of research papers should be avoided. Authors who fragment their work into a series of papers must be able to justify doing so on the grounds that it enhances scientific communication.
>
>
>
The practice of splitting papers up into "publons" (or minimum publishable unit) [is a scourge](http://pubs.acs.org/doi/pdf/10.1021/pr0505291). Get rid of the "two papers looks better than one" adage in your mind. It's [not necessarily true](https://academia.stackexchange.com/questions/16985/should-i-publish-a-given-unit-of-work-in-more-smaller-papers-or-fewer-larger-pap), and can be detrimental to the science.
As it stands, you present no valid reason as to why splitting the papers up will improve how your work is communicated. Therefore, you shouldn't do so.
Upvotes: -1 <issue_comment>username_4: (b) could be a serious problem, because the referee reading the first paper does not know how legitimate the promised application in the second paper is.
If so, one option is to just follow the editors' advice, resubmit to the more specialized journal, and move on.
Upvotes: 0 |
2014/08/21 | 335 | 1,563 | <issue_start>username_0: I have built a piece of software that handles protein clustering data in a specific format. It is able to draw several plots on that basis along with many other analyses. It has more options than its existing competitor. Should I just try to get it published as a software/technical paper, or should I try to publish
a case study (actual research on an appropriate dataset) produced with the software
+
the software itself,
in a better journal?<issue_comment>username_1: In my discipline (political science), one would typically do both. Write an applied article that uses the software for an actual research project and submit that to a substantive journal. Then, separately, describe the software and write it up for a software journal (e.g., Journal of Statistical Software, R Journal, etc.).
Upvotes: 3 <issue_comment>username_2: In my field (Bioinformatics) you would need to test that the software works as intended. Many researchers use simulated data to evaluate performance under controlled conditions, and then apply the method to a case study to demonstrate its validity with real data. Although not explicitly required by (most?) journals, if I were reviewing your paper I would most likely request that you demonstrate its utility with real data. That, however, does not require a lengthy study; any appropriate dataset would suffice. That being said, if you include some real data, some referees may ask you to demonstrate your software under a broader set of (real) conditions. That has happened to me before; referees are insatiable :-)
Upvotes: 2 |
2014/08/21 | 600 | 2,661 | <issue_start>username_0: I am an undergraduate student in Mathematics and I think that I have discovered something significant in Mathematics. My friends and some professors to whom I have sent my ideas also confirmed its significance. They suggested that I write a paper on it.
But the problem is that, being an undergraduate student in Mathematics, I don't know how to write a paper. Besides, the professors to whom I have sent my ideas weren't experts in this field, and they have asked me to send my work to some experts in the field. But unfortunately I don't know any such person, and I see no point in assuming that, even if my work is significant, they would take the time to read it without dismissing it beforehand as the work of some crank.
So what should I do? Can some suggestions be provided?<issue_comment>username_1: First of all, I'm not a mathematician, but I am in a closely related field (CS).
I work in the machine learning domain and sometimes read mathematical papers to find hints for my own problems, so I think I may be able to provide a little help.
In my experience, the main difference between maths papers and those in other domains is that the main results are always a collection of theorems or properties rather than experiments/methods.
In most of the cases I have seen, people organize their paper in the following way (a minimal skeleton is sketched after the list):
* Formulate the problem and provide the minimum background necessary for readers to understand your contribution.
* List your main results in layman's terms.
* Write all the necessary lemmas and theorems that finally lead to your main results.
* QED!
* If available, some application examples.
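To make that concrete, here is a minimal LaTeX skeleton reflecting that structure. This is only an illustrative sketch using the standard `amsart` class; the section names, theorem environments, and placeholder text are examples, not a required template:

```latex
\documentclass{amsart}
\usepackage{amssymb}

% Theorem-like environments (amsart already provides the amsthm machinery)
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\theoremstyle{definition}
\newtheorem{definition}[theorem]{Definition}

\title{Your title here}
\author{Your name}

\begin{document}

\begin{abstract}
One paragraph stating the problem and the main result in plain terms.
\end{abstract}

\maketitle

\section{Introduction}
% Formulate the problem, give the minimum background needed,
% state the main result informally, and cite the work you build on.

\section{Preliminaries}
\begin{definition}
Fix your notation and define the objects you work with.
\end{definition}

\section{Main results}
\begin{lemma}
An auxiliary step used in the proof of the main theorem.
\end{lemma}
\begin{proof}
...
\end{proof}

\begin{theorem}
Your main result, stated precisely.
\end{theorem}
\begin{proof}
...
\end{proof}

\section{Examples and applications}
% Optional: worked examples showing what the result gives you.

\begin{thebibliography}{9}
\bibitem{ref1} Author, \emph{Title of the cited work}, Journal, Year.
\end{thebibliography}

\end{document}
```

Reading how a few published papers in your subfield lay out their introductions and proofs will also help you calibrate the level of detail expected in each section.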
Upvotes: 0 <issue_comment>username_2: **Ask your professor to introduce you to a suitable expert.**
If some of your professors think that your work is of substantial quality and probably novel, then they are surely fine with introducing you to some suitable researchers in the field. Note that, for example, a postdoc in the respective field may suffice. They should be willing to give your work the "badge" of potential by writing to the suitable researcher themselves that, in their opinion, it seems promising. This should get rid of the problem that you describe in your second-last sentence:
"*But unfortunately I don't know anyone such and I see no point in assuming that even if my work is significant, they would give time to read it without dismissing it beforehand as a work of some crank.*"
Note that an experienced academic in the respective field can add a lot of value to your paper, including improving the accessibility and providing a more comprehensive literature survey.
Upvotes: 4 [selected_answer] |
2014/08/21 | 1,591 | 5,931 | <issue_start>username_0: Some — perhaps many — academics seem to be very careful in keeping unpublished work secret. It is not difficult to find anecdotes where academic ideas are stolen, such as in [this post by @Markus](https://academia.stackexchange.com/a/27460/1033). Others, such as [@NateEldredge in this post](https://academia.stackexchange.com/a/27434/1033) write that *It seems to be pretty common for people starting out in academia to overestimate the risks of people stealing their work*. Personally, I'm rather at the other end of the spectrum, and I don't feel afraid that my ideas would be stolen. Perhaps I'm naïve.
Is there any research on the question: how common is academic theft, really? Such as surveys of people having experienced (or committed!) such theft according to an appropriate definition, possibly compared to people's perception of the risks. It would be interesting to see if there are some facts to refer to. Perhaps it is field dependent?
*(By academic theft, I am* not *talking about plagiarism, but rather about stealing research ideas before anything is published)*<issue_comment>username_1: I've been a graduate student (Masters at one school and PhD at another) and I have never seen the sort of thing you are describing. I am afraid `The Social Network` has made everyone paranoid about their ideas being stolen. The reality is that most ideas are difficult to steal, because implementing them might be time-consuming enough that the original person has a huge head start. The only time you should worry is if you think you have a GREAT idea that you think is easy to do once you think of it. You will probably know if this is possible.
In reality though, there are a lot of smart people out there, and if you are thinking of it, at least one other person has probably considered it. It's much more likely that you are recreating existing work (or attempting something that doesn't work), but that might just be in my field (Neuroscience/Imaging).
On the other hand, I have found that it is pretty common for people to take credit for other people's work, or use other people's software without crediting it, especially for grant applications. Again though, this might be unique to my field.
Upvotes: 3 <issue_comment>username_2: Stealing ideas is difficult because you have a victim you have stolen from and presumably they know (or will know) that you stole from them.
Fraud is much more common, much easier to do, and much harder to prove—unless you do something really stupid like re-use the same image multiple times in various unrelated papers, something the people who get caught always seem to do. (Note 1)
* [<NAME>](http://en.wikipedia.org/wiki/Jan_Hendrik_Sch%C3%B6n) - Physics
* [Haruko Obokata](http://en.wikipedia.org/wiki/Haruko_Obokata) - Biomedical sciences
Also common, as Yiuin states, is people (notably PIs and supervisors) taking the credit for their underlings'/minions' work.
Note 1: This means that either all the frauds are stupid and re-use images and get caught, or that the frauds who don't re-use images are rarely caught.
Upvotes: 3 <issue_comment>username_3: Not quite an answer, but too long for a comment. In order to quantify how common academic theft is, one needs to define theft. You attempt to define it as:
>
> By academic theft, I am not talking about plagiarism, but rather about stealing research ideas before anything is published
>
>
>
Now consider the following scenario. Alice has been carrying out research on topic X on and off for years. She has a number of nice research findings that she hasn't gotten around to publishing and hasn't shared with anyone. Alice finally decides to start focusing on topic X and publish her existing results. Unbeknownst to Alice, Bob has just started working on topic X as she begins trying to publish her results. I don't think anyone would argue that Alice has engaged in academic theft. The issue becomes what would have to change for Alice to have engaged in academic theft.
What if Alice found out about Bob's intentions from Carol (or maybe Bob himself) and that changed Alice's research direction?
What if Alice only starts doing the research after she hears about what Bob is doing?
What if Bob presents a novel approach to topic X and Alice runs with the approach faster/further than Bob, but Alice is careful to always credit Bob with the new approach? What if Bob presented the new approach N years ago (choose your N)?
In order to quantify how often academic theft occurs, one needs to define what theft is.
Upvotes: 3 <issue_comment>username_4: >
> Is there any research on the question: how common is academic theft, really? Such as surveys of people having experienced (or committed!) such theft according to an appropriate definition, possibly compared to peoples perception as to the risks.
>
>
>
See the related articles:
>
> <NAME>, Raymond, <NAME>, and <NAME>. "[Normal misbehavior: Scientists talk about the ethics of research.](http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1483899/)" *Journal of empirical research on human research ethics: JERHRE* 1.1 (2006): 43.
>
>
>
and
>
> Martinson, <NAME>., <NAME>, and <NAME>. "[Scientists behaving badly.](http://www.nature.com/nature/journal/v435/n7043/full/435737a.html)" *Nature* 435.7043 (2005): 737-738.
>
>
>
Among a sample of 3,247 NIH-funded scientists in the United States, asked about the behavior "Using another's ideas without permission or giving due credit":
* 1.4% said they themselves have engaged in this behavior within the last three years
* 45.7% agreed with the statement, "I have observed or had other direct evidence of this behavior among my professional colleagues including postdoctoral associates, within the last three years."
Please read the article for methodology, limitations, etc.
Upvotes: 5 [selected_answer] |
2014/08/21 | 719 | 3,261 | <issue_start>username_0: Most top graduate programs require at least 3 recommendation letters. Do students who apply to such programs (and have a reasonable chance to get in) typically have such extensive research experience that they know three professors who can write in detail about their research? Or is it more common for such students to have some recommendation letters from faculty who can confirm that the applicant is competent (for example because they did very well in their class, won an award, ...) but with whom they have not worked much personally?
I understand it would be ideal to get all letters from faculty with whom you have worked closely on a research problem. But I wonder how commonly this actually happens, especially considering the fact that the final undergraduate year typically cannot be taken into account for applications in fall.
If it depends much on the discipline, I'm in physics.<issue_comment>username_1: In general, I don't think all that many students have *three* unique reference letters, all of whom can vouch for research ability. Two is quite common, as most undergraduates who pursue graduate degrees (at least in the sciences) do have some research experience at their home universities. Many of them also pursue a summer research project separate from their main undergraduate research project (or projects), providing another reference.
But I do agree that the third letter usually comes from a non-research source. I wouldn't even necessarily expect a third research letter.
Upvotes: 3 <issue_comment>username_2: Which field are you in? In the humanities and social sciences, we expect letters from faculty you've taken seminars in, not just your thesis advisor.
Upvotes: 1 <issue_comment>username_3: I have reviewed PhD applications in Communication, Computer Science, HCI, and Informatics. I'll answer questions first:
* It is not typical, in my experience, for students who apply to programs to have three faculty write letters that speak directly about extensive first-hand experience working on research with the candidate.
* It is most common, in my experience, to have one or two letters from faculty with extensive first hand experience overseeing research (often a coauthor on a paper) and one or two letters from people who know the candidate somewhat less well.
You don't need every letter to give a first hand description of your research experience. If you don't have *anybody* who talks about it, it will be a problem.
You want letters from people that:
* Describe your ability to do excellent research in the area or field you are applying.
* Explain how you are very smart, skilled, hardworking, generous, easy to work with, etc.
* Demonstrate enough history and experience with you that we can trust the opinions expressed in the letters to be informed and accurate.
* Are from colleagues whose opinions the letter recipient will know and respect.
Not every letter needs to do every one of these things. The important thing is that the entire package convinces the reader that they're not taking a risk by accepting the student. Do what you can. If no one person can say everything, it's totally OK to use the fact that you're turning in multiple letters to give a fuller picture.
Upvotes: 2 |
2014/08/21 | 5,134 | 20,659 | <issue_start>username_0: This actually happened to my wife, but for the sake of simplicity I'll talk about it as if it happened to me.
I wrote a final exam for a university course last week, and a couple days ago I got my marks and the correct answers back. I disagreed with one of the questions I answered wrong, so I pulled out the textbook that was assigned to this course and found that it supports my answer. I sent an email to my prof with the page number and the exact quote from the textbook that supports my answer. His reply was (with slightly changed wording):
>
> In class I said that *correct exam answer*... This is an issue with any text and shows why class is so vital: Texts rapidly go out of date or (such as the broad text used for this course) demonstrate a lack of depth. Lectures are usually much more up to date.
>
>
>
Keep in mind that this is an Archaeology class, which in my unprofessional opinion really doesn't "go out of date" all that quickly. The textbook is the assigned textbook for this course by the university. The online lecture notes posted by the prof make no mention of the disagreement. I was not present at the lecture.
Do professors have an obligation to recognize the assigned textbook as an authority in the context of the course? In my experience, when confronted with such a problem they typically go "Ok, fair enough, I'll give you the mark", but are they just being nice or are they supposed to do this? He's not a senior prof (not even PhD yet), so do you think going to his superior would help?
If I get this one extra mark it will bump me up 0.4 GPA for the course because I'm right at the cut-off.
Edit:
Since several people asked, the question was something like "Which Aztec god is the god of war and is associated with water". The book said one god, Huitzilopochtli, was the god of war, while Tlaloc was the god of fertility and rain. When studying for the exam, Huitzilopochtli stuck in my head as the god of war, so I picked him. The prof said that in class he mentioned that Tlaloc also had militaristic aspects.
Note that I'm not saying the prof is wrong objectively, only that our book makes no mention of Tlaloc being war-like and instead makes emphasis on fertility and life, being a beneficial god, which seemed totally opposite to war. When I sent my email I explained that I picked Huitzilopochtli because the book lists only him as the war god, but that I recognize my answer is only half-right due to the water reference, and that I feel that Tlaloc is also only half-right since he's not a war god. Also, the prof agreed with me that the book was misleading, but said that I should've come to the lecture. Hence my question here focusing on whether the book should have any authority without getting into the details of the question itself.<issue_comment>username_1: >
> Do professors have an obligation to recognize the assigned textbook as an authority in the context of the course?
>
>
>
No, there is no such obligation. It's a bad educational practice to choose a textbook that's seriously unreliable, but even good textbooks slowly go out of date, and they sometimes have a lack of detail or even outright errors as well. It's important for professors to try to be clear about any deficiencies the textbook has, for example by highlighting them in class. I provide a written list of any typos or other issues I am aware of (although I note that of course there may be others as well). However, there is no obligation to accept the textbook's version as a correct answer, and there are no specific rules about how things must be brought to the students' attention. It's entirely up to the lecturer's discretion.
I would expect that many professors would be more flexible or accommodating than what happened in this instance, but not all of them. At least in the sort of universities I'm familiar with (in the U.S.), there's no way an administrator will change the grade under these circumstances if the faculty member who assigned it is unwilling to do so.
On the other hand, it's not clear to me from what you say whether this person is a regular faculty member (due to the lack of a Ph.D.). If you are dealing with a teaching assistant, it could be worth asking the professor in charge of the whole course. This will probably upset the TA, but it might work (since the professor will want to maintain common policies among all the TAs assigning grades). Other than that, I don't see any recourse.
Added in light of [<NAME>'s answer](https://academia.stackexchange.com/a/27483/): I'm assuming your answer is definitely wrong. I.e., either the textbook had an error in it or it's out of date regarding a clear scholarly consensus. On the other hand, if you can make a case that your answer is actually correct or accepted by serious researchers (not just that the textbook says it, but that authoritative and up to date scholarly sources agree), then you've got more of a basis for disputing the grade.
Upvotes: 6 <issue_comment>username_2: My opinion -- as a university teacher for four years pre-PhD and eleven years post -- is that your story is balanced precariously on the border between "unfortunate" and "actionable". What is to be done about this probably depends a lot on your national and local university culture, the culture of your department, and even on the judgment of your own instructor.
Here is some advice about how to best deal with the situation:
>
> I sent an email to my prof with the page number and the exact quote from the textbook that supports my answer.
>
>
>
That is already not the ideal strategy. This is a matter that requires some *discussion*, and email -- especially email exchanged between people who don't know each other well -- is not conducive to discussion but rather to one-sided statements of position, often of a nature which is more definitive, defensive or combative than a person would be in a face-to-face meeting. You should go to physically meet with your instructor. It is not too late to try to do so.
>
> "In class I said that correct exam answer... This is an issue with any text and shows why class is so vital: Texts rapidly go out of date or (such as the broad text used for this course) demonstrate a lack of depth. Lectures are usually much more up to date."
>
>
>
That's a pretty good answer. If the textbook is incorrect, superficial or out-of-date on the point which was covered in the lecture, and if you did not attend the lecture, then you are showing that you did not receive and learn the information you were tested on.
>
> Keep in mind that this is an Archaeology class, which in my unprofessional opinion really doesn't "go out of date" all that quickly.
>
>
>
Definitely don't say that again. This sentiment is indeed unprofessional. It is also ignorant and insulting: academia is about the progression of knowledge, not just keeping it preserved for posterity. Archaeology is no different from any other field in that manner.
>
> The online lecture notes posted by the prof make no mention of the disagreement.
>
>
>
That is not definitive, but it makes me more sympathetic to your situation.
>
> I was not present at the lecture.
>
>
>
That's bad. You have every right to expect that when you miss lectures you miss critical information. That's desirable, really: otherwise what's the point of lectures? By any chance did you contact the instructor and ask to be updated on what you missed? Did you get notes from some classmate that did not include this point? Either of these would mitigate your absence (the first more than the second).
>
> Do professors have an obligation to recognize the assigned textbook as an authority in the context of the course?
>
>
>
No, of course not. On the contrary, they have the obligation to correct the textbook when they feel it is helpful and/or necessary to do so.
>
> In my experience, when confronted with such a problem they typically go "Ok, fair enough, I'll give you the mark", but are they just being nice or are they supposed to do this?
>
>
>
I agree; "I'll give you the mark" is the more typical, nicer reaction. Not to do it is being a little callous, in my opinion. But it is unlikely that "they are supposed to do this", at least not officially. The instructor of a course has a certain amount of authority. This decision, although it may not be a "nice" one, seems to fall within that authority, at least in my experience.
>
> He's not a senior prof (not even PhD yet), so do you think going to his superior would help?
>
>
>
At most universities I'm familiar with, someone who does not have a PhD is not a "professor" at all. But that probably doesn't really matter: what matters is whether he is the "instructor of record" or a "teaching assistant". (Probably: in some places, one does in practice have more or less classroom authority according to one's academic rank and seniority.)
Yes, going to his superior might help. But you should think very carefully about this and have at least one face-to-face meeting with your instructor first. Before you do that:
**Find out whether your answer was actually correct, or arguably correct.**
If it is, you'll have much more of a leg to stand on. If it isn't, if push comes to shove...well, we mark the right answers right and the wrong answers wrong, don't we? Finally:
>
> If I get this one extra mark it will bump me up 0.4 GPA for the course because I'm right at the cut-off.
>
>
>
This is the line that tipped me over a bit into recommending that you pursue the matter at least a little further. It is one thing to mark a question wrong because it *is* wrong. It is another thing to stand on this to the extent that it lowers your final course grade. There's a proportionality issue here: yes, you were apparently wrong to go with your textbook rather than the instructor. But were you *that* wrong?
It seems likely to me that some more senior personnel in the Archaeology department will feel the same way. If you can find such a person, then maybe they can influence your instructor. However, if you are very confrontational with your instructor then he may be inclined to stand on principle, even in the face of senior personnel. You really want to make changing the grade the easier, more palatable option for all involved.
Upvotes: 6 [selected_answer]<issue_comment>username_3: >
> "In class I said that correct exam answer... This is an issue with any text and shows why class is so vital: Texts rapidly go out of date or (such as the broad text used for this course) demonstrate a lack of depth. Lectures are usually much more up to date."
>
>
>
Well, your lecturer is right. Text books are sometimes factually wrong. If he indeed pointed out the error in class and you were not aware because you did not go to class, you can hardly blame the lecturer.
That being said, most lecturers would probably be open to a sensible argument, but you should certainly approach it as a nicety or concession on the lecturer's part, not something that you can force by appealing to some sort of obligation.
>
> He's not a senior prof (not even PhD yet), so do you think going to his superior would help?
>
>
>
Rank doesn't really have a lot to do with it. Going to his superior (if such a person exists, which may depend on how your university works) *may* help, or kill your cause entirely. In my home university, complaining to the department head was generally a horrible idea. Department heads *never* decide against a lecturer in a case that is not a clear-cut violation of university policy. All you would do in this case is make the lecturer much more unsympathetic towards your cause.
Upvotes: 5 <issue_comment>username_4: Textbooks, even very good textbooks, can contain errors, even egregious errors. When that happens, I try very hard to emphasize the error and explain why something else is correct and the textbook is not. I'll probably mention it in two or more class sessions. For one book, I have an online errata sheet.
I will not accept an incorrect answer on an exam, no matter what authority the examinee might bring forward. (But I will also consider that I might be the one who is wrong!)
There are already several good answers to this question. I'm writing because you bring up the effect of this answer on your GPA. The goal of a university course is not a grade; the goal is mastery of the material. Master the material and the grade will take care of itself. You have your eye on the wrong goal.
Unless it's already luminously obvious, go to the professor (in some universities, everyone in charge of a class is called "professor" regardless of academic rank), apologize for the email, and ask for help in understanding why the book's answer is not appropriate. That probably will not get you any slack on the exam in question, but you'll get at least one item right on the final exam!
Upvotes: 5 <issue_comment>username_5: >
> "In class I said that correct exam answer... This is an issue with any text
> and shows why class is so vital: Texts rapidly go out of date or (such
> as the broad text used for this course) demonstrate a lack of depth.
> Lectures are usually much more up to date."
>
>
>
All you need to know is right there in the professor's response.
1. The professor covered the correction in the lecture.
2. You missed the lecture.
3. Therefore, you missed the question.
That makes it officially, "Your problem."
The lesson is twofold:
1. Attend the lectures.
2. Textbooks are not the ultimate authority.
Upvotes: 4 <issue_comment>username_6: I'm actually very surprised by the answers to this question.
First, I think it's very bad form for a lecturer to deliberately include a question on an exam where the assigned textbook gets it wrong. I would certainly never do that.
Second, if the lecturer thinks it's important that the students get the correct facts about this topic, (s)he should inform the students through a forum that all students are required to use (whether they use it or not), such as e-mail or a class website (I would do both, actually, just to make sure). Again, I think it's bad form and sloppy to think it suffices to just "mention it in class", *especially* if the lecturer intends to include it in the exam. All lecturers know that all students cannot make every single class, and I would think it's both unfair and childish to "punish" the students that didn't make that specific class in this way.
Upvotes: -1 <issue_comment>username_7: You've just learned two vital lessons: first, books are not holy texts to be accepted without doubt, and second, even if the lectures aren't obligatory, it's crucial to attend them if you want higher marks. Generally, what was said in the lecture matters more for the exam than what's in the book. University is not a school.
Archaeology is a topic very prone to change. We make speculations about the past based on limited hints. Any new discovery can make a whole book invalid (because some hypothesis that was taken for granted is overthrown by that discovery).
What's more, given the details, it's not that the textbook contradicts the lecture. The question was about the war god associated with water, and according to your input, the book doesn't say that Huitzilopochtli was associated with water. In that case, it was the **lack of depth** of the book (as your professor put it) that made your answer incorrect.
So you were not at the lecture (or were not paying attention), but you feel you deserve good marks anyway, and to prove that, you try to find contradictions between the correct answers and the textbook which are not there. Just learn from that lesson, and don't think of a university course as a school class with a single 'correct' elementary book.
Upvotes: 3 <issue_comment>username_8: >
> If I get this one extra mark it will bump me up 0.4 GPA for the course.
>
>
>
I'd say this is the only real impetus present. Without this, either the instructor or you ought to let the marks slide. Have you discussed with the instructor how this seemingly small matter can substantially affect your grade, chances for future scholarships, grad school apps, etc..?
I say, ethically, the instructor was justified in their response, for the reasons that Pete lists. But you may as well schedule a face-to-face meeting and play your sympathy trump card.
Upvotes: 0 <issue_comment>username_9: Presume the professor has power.
In my first job as an instructor, here is my basic memory of what the department chair said to me:
"Here is the syllabus. You are required to talk about each of the items of the syllabus. Just talk about them for 2-5 minutes. Of course, talk about them more if you want, at your discretion. But our goal is to create students with high quality. Do whatever it takes."
In fact, I did once use that strategy immensely. I was also required to give a final exam. However, the exam didn't actually need to be about subject material. When I found out that the course I had to teach was about Microsoft Internet Secure Accelerator, and I found out that this proprietary software had been discontinued in the marketplace for a couple of years, I spent 40 minutes covering the software, and then just took the course in a totally different direction which would actually be useful for students.
On the other hand, I recall at a different institution where I learned that a math instructor was required to assign very specific problems from the text book, and the topic covered each day was assigned to her. So her department gave her no leeway whatsoever.
You could try contacting a department chair or dean or whomever, but know that you're likely to be "burning bridges"/"making enemies" by doing such things, and it might not help you at all. Chances are that the dean of your college isn't going to care if you get a GPA that is 0.4 higher. (S)he will be more concerned about the ideas that you skipped class and that you are not responsibly accepting the consequences of your action. If this experience causes you to learn that you should do what is expected of you, such as showing up to class and paying attention to lectures, then that will likely be far more favorable for the chair/dean than worrying about your grade.
If I were a dean and a student approached me about this, the best that the student might hope to get is an "I will look into it". And I would be diligent enough to get the instructor's side of things. However, I would not be inclined to take the student's side and order the instructor to make a change. I would be far more inclined to want to show my fellow instructor that they have my full support.
Sometimes, your instructors might not have full autonomy in assigning grades. For example, there might be a rule printed in the syllabus which says that exams will determine 40% of your grade, while quizzes will count as 25% of the grade, and remaining grade points may come from other items (like in-class assignments, participation, homework/research papers, etc.). Your instructor may or may not have full control over the syllabus. I would expect your instructor to be absolutely bound to whatever is written in the syllabus. However, if your instructor decides to make one quiz worth more than another quiz, that could be fully within the instructor's rights. And if an instructor wants to say that one question has multiple accepted answers, and that another answer isn't accepted, I would expect that may be (and probably is) fully within the rights of the instructor to determine.
In a lot of ways, you don't have much power as a student. This is by design, as instructional institutions try to impose compliance to make sure that students can, naturally (from experience), easily adhere to certain professional expectations.
To summarize a lot of the above: Exactly what your professor is allowed to do may vary a bit, but if you do not have something in writing, chances are pretty high that your instructor has full leeway to make lots of decisions, certainly including disagreeing with the textbook and deciding what answers to accept as correct. (I recall as a community college student when an instructor even dis-enrolled me from a class, I think even after a class had started.)
And since the professor took an approach of defending a decision, and hasn't offered you any points yet, your best bet is to seek points from some method other than getting points for your answer to that question. You are not likely to win this one.
Upvotes: 0 |
2014/08/21 | 964 | 4,169 | <issue_start>username_0: Since sept 2013, I have been doing a Masters in mathematical finance. Our course requires us to do an off-cycle internship of 5 months minimum (typically, from April to September).
During this internship, I'm supposed to do something that is at least remotely related to mathematics/finance, and at the end of the internship, I have to give a report on what I have done. That report is supposed to contain actual mathematical research / developments / ideas, and it is necessary for obtaining my diploma.
Unfortunately, since my internship started I have been given absolutely no work fitting the requirements.
What I've actually been tasked to do is repetitive work consisting of using a piece of software to process tons of records from a database, generating new records as output, and making sure the generated data is consistent. Whenever one of these jobs fails I have to debug the software and propose a fix to make the job work. There are roughly 50 million records to be processed, and the end goal is for all of them to be processed correctly. As far as I can tell there is no way this will be done by the end of my internship, which is in 2 months.
What's worse is that the team I'm in is facing increasing pressure to have all these records processed soon, while the higher-ups are heavily implying that it is taking so long because we are slacking off. This is pretty annoying because I want to make a good impression and I'm working as well as I can, but at the same time I don't feel like it is worth the effort to work extra hard on this, given how little recognition I get and given that I will leave before the positive consequences of my work can affect me.
What can I do in my situation? I haven't started my internship report yet (I don't even know what subject I could do it on), and I know I will have to do it on my own time. At the same time, the repetitive nature of my work leaves me pretty burned out, and I never feel like working on maths for several hours afterwards.
**TLDR: my 6-month internship that is supposed to be about mathematical finance is actually about doing extremely tedious work, and I have to complete a report by the end about all the mathematical research I'm supposed to have done.**
What would you do in that situation?<issue_comment>username_1: There are two people who can help you:
1. Your immediate manager
2. Your advisor (or tutor if you don't have one)
Talk to your manager first about the requirements stated by your university, and about how to translate your work into a working paper or report worthy of submission to an academic institution. I am sure he will be of help.
Then try talking to your advisor/tutor about whatever was discussed with your manager. He/she can advise you on planning your report.
I have been in such a situation, and my advisor suggested a way to tweak my work by doing some additional tests. That extra work made it eligible for an academic report.
Upvotes: 4 <issue_comment>username_2: Talk to your manager about it again and then your adviser . Perhaps your adviser can suggest to the company that if this doesn't resolve itself they will do what they can to suggest students no longer intern there thus no longer providing free labor.
Upvotes: 2 <issue_comment>username_3: The work you've done, while repetitive, *may well be relevant* to mathematics/finance. In order to answer this, you need to understand what the purpose of the task was, where the input data came from, and how the output is used. Your manager (or work colleagues) should be able to explain this to you, but you need to take the initiative to find out! An internship is a learning experience for you; if you're not asking questions, you're not learning.
Then, if you still feel that this work is not relevant to maths/finance, talk to your advisor at the university. If they agree with your assessment, they will probably take steps to correct the problem. If it is too late to correct the problem, they can at least ensure that this doesn't happen again, perhaps by speaking with the company, or even by not allowing students to intern at that company again.
Upvotes: 3 |
2014/08/21 | 879 | 3,169 | <issue_start>username_0: As part of their job, professors have to take care of getting funding, prepare classes, advise students, fulfill administrative tasks and attend to various meetings/conferences.
How much time do professors have to carry out research on their own (i.e. excluding the above-mentioned tasks)?
I am especially interested in the field of computer science > machine learning, in the US, in an averaged-size university.
I am mostly looking for studies that try to quantify how much time professors have to carry out research on their own.<issue_comment>username_1: The answer is going to vary greatly depending on whether the professor is in a tenure track, has already obtained tenure or is in a contingent/adjunct position.
Besides these issues of rank, it also matters a great deal what kind of university/college we are talking about. If you're a tenure-track professor in the so-called "R1" institution, you will surely be spending significantly more time doing research than a non-tenure, contingent adjunct at a community college.
The size of the university generally matters much less than individual and university rank.
On top of all that, it often comes down to the individual professors' interests and ways they conceive of their career.
Upvotes: 4 <issue_comment>username_2: I found this small-scale, not randomly-sampled [survey](https://thebluereview.org/faculty-time-allocation/) from [Boise State University](http://en.wikipedia.org/wiki/Boise_State_University):
Warning:
>
> All charts below are from TAWKS Phase 1 Stats, initial survey of 30
> higher ed faculty from Boise State University. While findings are
> highly suggestive, they do not represent a random sample.
>
>
>
Answer to question:
>
> Only 17 percent of the workweek was focused on research and 27 percent
> of weekend time.
>
>
>
(The survey's charts illustrating these time-allocation findings are not reproduced here.)
Upvotes: 5 <issue_comment>username_3: The [National Center for Education Statistics](http://nces.ed.gov/surveys/bps/) in the United States surveys faculty of post-secondary institutions (see the [National Study of Postsecondary Faculty](http://nces.ed.gov/surveys/nsopf/) page for information on methodology). The most recently available data is from their 2004 survey, with 26,110 respondents across the United States.
The NCES allows you to create custom tables from this dataset using the [PowerStats](http://nces.ed.gov/datalab/powerstats/) tool on their website. (You have to create an account to use the tool.) This is a valuable tool if you're interested in exploring these and other statistics.
As per [username_1's answer](https://academia.stackexchange.com/a/27494/11365), time spent on research varies quite a lot by academic rank and by institution type. (The charts generated from this dataset showing both breakdowns are not reproduced here.)
Upvotes: 3 |
2014/08/22 | 868 | 3,790 | <issue_start>username_0: Do students need books to learn in courses? For example, I found that books often contain some mistakes and therefore it is better for me to study those things on the Internet where I can concentrate on one thing, learn the best knows methods of that subject and then move to the next. Okay, that course wasn't really advanced.
But if you have to decide whether one should write a traditional book or editable lecture notes like in <http://stacks.math.columbia.edu/>, which one would a lecturer choose?<issue_comment>username_1: Depends on the class. I've been in classes where the handouts *were* a book -- a pre-publication version of a textbook that the professor was developing. In those cases, the handouts contained as much detail, and expressed it as clearly, as the final book would -- and luckily these were good instructors AND good writers, so the books were decent ones -- and the chapters were provided sufficiently far in advance of our needing them to permit reading ahead, so I didn't feel a need to consult other textbooks as well.
If any of those requirements is not met -- if the notes cover only the material addressed in the lecture and homework, or if they aren't clear and complete and correct, or if they aren't provided in a more-than-timely manner -- then you probably do need a textbook to back them up.
If in any doubt, you need to specify an alternative textbook or textbooks that students may optionally want to consult (which is good practice even when you do specify a primary text).
Upvotes: 0 <issue_comment>username_2: No, Students don't need a book. They DO need good resources.
Maybe it's an eBook or a PDF compiled from your teaching notes or outline/slides (hopefully cited well). Things like short stories needed for a language class can often be found online in their entirety, legally and for free, once copyright has expired.
For a language class, using a Sherlock Holmes story could prevent students from needing to buy a book, as those stories are out of copyright. You can also look for material that is licensed as [Creative Commons](http://creativecommons.org/), which allows for free use, such as material from MIT OpenCourseWare, Khan Academy, and P2P University. You can learn more about [Creative Commons for Education here](https://creativecommons.org/education).
Long story short, make good content, and use it legally, either by using stuff people have given to the public, or under doctrines such as Fair Use.
Upvotes: 3 [selected_answer]<issue_comment>username_3: To answer your title question--**Student do NOT need books, they just need quality resources.**
The best option for this will vary depending on the subject, class size and teaching style (the latter more than the former), and the students' learning ability. For example, in an undergrad intro course filled with students of average ability, a standard textbook may be the best option, perhaps supplemented with a few additional resources (links to videos, papers, etc). However, when you are teaching an advanced course, with students who both know how to learn on their own and have some experience with the subject matter, a textbook may still be a good option (depending on your institution's policy, you may be required to assign one for the course), but students will learn more and better if you provide them with primary sources, seminal papers in the field, and thought-provoking position papers, in addition to the textbook.
As for which a lecturer should do,
>
> write a traditional book or editable lecture notes
>
>
>
that will depend on all the above factors, as well as individual talent and motivation. Not all faculty members will necessarily want to write a textbook, though almost all will write lecture notes.
Upvotes: 2 |
2014/08/23 | 1,939 | 8,317 | <issue_start>username_0: I am a PhD student in a social science field, and am on a teaching assistantship. My contract stipulates that TAs are to work no more than an average of 15 hours per week.
For my particular department, the past protocol was that 10 hours were devoted to teaching duties (class time, office hours, prepping, and grading), while the remaining five hours are devoted to research with an assigned faculty member. Overall, I do not think that is unreasonable.
However, that setup was determined when the PhD students mostly taught undergraduate courses. I have been assigned a graduate course, and have been told to expect an enrollment of "at least 30 students."
I have never taught before, at any level. I am also still taking a full-time course load of my own.
Does this seem like an unreasonably high expectation from the Dean and program director? Or am I just being a wimp?
I definitely want to teach, and I want to make sure that my students receive the best quality education from me that I can provide. And I would like to do this without sacrificing my own academic progress, nor what is remaining of my sanity.
Just curious to know what other people's experiences in this area have been.<issue_comment>username_1: It varies but what you've described does not sound unusual.
In my social science department at a state research university, it's standard for graduate students without other support to have teaching and/or research responsibilities that include 20 hours a week of work and that can often involve teaching. This is in addition to taking classes on their own and/or doing research.
In my own graduate studies at a business school, all students started with two years of full fellowship that meant no TA/RA requirements at all for the first two years.
Upvotes: 2 <issue_comment>username_2: I also believe this is not unusual (though it depends on your discipline). Moreover, graduate classes have certain advantages, beyond the ones already mentioned in the comments (such as more disciplined and capable students). The first one is that you encounter a new pool of possible collaborators. You are a PhD student and they are possibly MSc students, so they may need to write an MSc thesis. By teaching this class (and doing it well), some of the students will be interested in working with you (on their MSc thesis), so you get collaborators / workforce basically for free. So, if you have some spare ideas that you cannot work on at the moment, perhaps proposing a new MSc thesis is the way to go. Also, sometimes graduate courses may be graded strictly based on assignments, so you may set assignments helpful to your work, such as proposing that your students write detailed literature overviews on the subjects that interest you.
The other advantage of teaching in general is that this is something you need to master anyway if you want a job in academia. So, the sooner the better. It is better to do it in a more protected environment (you are still a student, so some mistakes due to inexperience are expected) than to have to do it later. So, although the task assigned to you might seem overwhelming at first, it is best to do it as soon as possible. You will also realize that if you have to teach the same class the second year (many times the same graduate student teaches the same course for more than one year), things will be much easier, since you will already have done the bulk of the work in terms of preparation. Also, doing it while you still have coursework is a valuable learning experience, since you get to compare your teaching methods with the methods of the courses you are enrolled in, and improve accordingly.
Another subject that is not clear from your question is whether this course you are teaching is for one semester or an entire year. If it is for one semester and the next semester you do not get any other TA duties, it is very reasonable that for this semester you must devote more hours than the "10 hours devoted to teaching" that you are required to do.
Upvotes: 2 <issue_comment>username_3: It is incredibly common for the number of hours devoted to teaching be woefully inadequate to deliver good instruction except possibly for highly seasoned instructors teaching the exact same course multiple times.
Unfortunately, I don't know any fair solution to this. You'll probably spend two or three times longer than the expected number of hours in order to do even an adequate job. From what I observe, people cope either by working far more than they're supposed to, or by delivering not-very-good instruction. It's a difficult spot to be in. (Alas, not the only difficult spot in academia.)
The bright side is that if you're really interested in teaching and put in way more time than you're "supposed" to, next time it won't be as bad (you'll only have to put in somewhat more than you're supposed to), and if you do well you will both help your students and be known as someone who is a good teacher (good if you want to do more teaching).
Upvotes: 4 <issue_comment>username_4: I'm actually kind of shocked by the other answers saying this is normal. Maybe math and the social sciences are somehow different in this regard, but this sounds like an insane situation. I'm sure you're a wonderful person, and it sounds like you're trying your best, but it's totally unfair to the graduate students in your class to have them taught by someone with zero teaching experience. To me this seems like an enormous red flag about your department that they would do this. Where the hell are the faculty who should be teaching these courses?
That said, I'm not sure I have a course of action to suggest beyond doing your best in the assigned class (especially since it's presumably starting up quite soon), but I agree with you that it's unreasonable.
Upvotes: 3 <issue_comment>username_5: Doesn't seem unusual to me. Did you sign a contract? More importantly, did someone in your department sign the contract? I find the term "contract" to be kind of meaningless in graduate school. We aren't unionized, so we're in the unfortunate position of needing to take the term contract to mean "loose policy."
Some students in my department have, in the past, successfully lobbied the chair for more pay due to more time spent working. Of course, this solution presumes that you'd be fine working that many hours for more pay, which doesn't seem to be the case here. Therefore, the only reasonable thing to do is to adjust your mental and physical deliverables in accordance with the 15 hours.
I'd be more upset about the 15 hours. That seems like a nice way for the university to get cheaper labor for teaching the courses, and for your advisor to throw money your way for 5 hours, which unfortunately means you have expectations to deliver on that front as well. The 5 hours for research, I would imagine, ends up being more like 10 or 15, mostly unpaid?
Upvotes: 2 <issue_comment>username_6: It is unusual, and probably disserves all parties, for a grad student to be assigned to teach a grad course, apart from considerations of alleged competence or not. For an advanced grad student to fill in for their advisor while the advisor goes to a conference is somewhat common in mathematics, in my experience, but that's not the same.
I don't have a good sense of the situation for other disciplines, but in mathematics at my research university (and other places I've been) grad students might be assigned full responsibility for lower-division courses, but never for upper-division, much less graduate-level.
It's not only the workload, and perhaps not even the issue of competence in the material, but of *mastery* of the material, so that reasonable questions from students could be answered authoritatively, instantly, etc. A grad student's opinion might be harder to consider "authoritative", not because one would presume that they don't know, but because they lack a track record and credentials to allow people to effortlessly trust their expertise. One *should* be able to effortlessly trust the expertise of people giving graduate-level courses, in my opinion.
(And, yes, unfortunately, being "on the faculty", being "old", etc., are no guarantees of competence, much less expertise... I know...)
Upvotes: 3 |
2014/08/23 | 4,365 | 17,981 | <issue_start>username_0: In other words:
>
> How do professors profit if a larger (rather than smaller) proportion of entering PhD students in their department complete their degree?
>
>
>
(At my department at least, it seems like the only benefit is "warm-glow altruism". A professor gains no more from having more students at her department complete their PhDs than I do by having fewer stray dogs in my town getting killed by traffic. A professor's career prospects do not seem in any way affected by whether 80% rather than just 50% of PhD students complete their degree. Not surprisingly, the completion rate, at least at my department, is closer to the latter.)
Note: **I am not asking whether it is a good idea to give professors stronger incentives to increase PhD completion rate.** Neither am I asking for suggestions as to how PhD completion rates can be increased. Rather, I am asking what the incentives at present are (granting of course that this varies from place to place).<issue_comment>username_1: A PhD depends largely on the individual effort and skill of the graduate student, much more so than a student's results at lower educational levels. Therefore it is questionable how much of the credit should go to the supervisor rather than to the individual.
There are numerous indirect incentives for being a good mentor, starting at the tenure-track level, as mentioned in the comments. A strong professor-grad student relationship is also a very strong basis for future research networks, for both the students and the professors. I have seen several examples of how a successful former graduate student can raise the reputation of a professor and help establish new collaborations.
Direct incentives (head money?) for the teacher to improve that rate would be counterproductive. A PhD is not elementary school: it is not simply pass or fail. Just because someone passed, their career prospects don't get much better unless the degree is coupled with actual skills, good publications, etc. So pushing someone to pass when it is not deserved doesn't do the student any good either. Even if the reason for failure is beyond the student's control, or the professor is unreasonably demanding, changing labs and/or institutional-level protection of the student is a much better solution, I believe.
Imagine this scenario: top-notch laboratories can (and should) have really challenging research topics. A big part of the research, however, is carried out by graduate students. Do we offer direct incentives to the professor to lower the level and bet on safe projects, just to achieve higher pass rates for his graduate students? It is an easy trick, yet in the end the student winds up with a poor publication record and the once-prestigious lab lowers its quality, too. It is not good for anyone.
Upvotes: 0 <issue_comment>username_2: If an advisee leaves without completing a Ph.D., then the advisor can end up feeling like the whole process was a waste of time, since the goal of strengthening the research area by training a future contributor was not achieved. That's not fair or reasonable: the education and experience may be useful for the advisee despite not culminating in a Ph.D., and the opportunity to try was itself valuable regardless of how it worked out. However, these feelings are relatively common, and the advisor's investment of time and effort becomes an incentive towards higher completion rates.
This personal investment begins only when the student chooses an advisor, so the corresponding incentive is not relevant before that stage. On the other hand, spaces in graduate school are a limited and therefore valuable resource, so there is still an incentive for faculty not to waste them, even at the coursework stage.
What makes this issue tricky is that nobody can agree on what the ideal completion rate should be. To a first approximation higher is better, but this isn't a universal principle, and a 100% completion rate would arguably mean you're doing something wrong. Over the course of a difficult five-year program aimed at a narrow career, some people will legitimately discover that there's something else they'd rather do instead, and others will be prevented from finishing by factors beyond their or the university's control. The only way to enforce a perfect completion rate would be to refuse to admit anyone who seemed like they might not finish and then bully all the students into finishing regardless of whether or how their circumstances had changed. For example, you would never take a risk by admitting someone who seemed like they deserved a chance, and you would never gracefully accept a student's change of plans. Ultimately, the fundamental difficulty is that we can't necessarily distinguish between reasonable attrition and inappropriate attrition caused by unsupportive practices, and this makes the whole issue contentious.
Upvotes: 3 <issue_comment>username_3: In France, labs are assessed by the [AÉRES](http://fr.wikipedia.org/wiki/Agence_d'%C3%A9valuation_de_la_recherche_et_de_l'enseignement_sup%C3%A9rieur) ([mirror](http://web.archive.org/web/20170426234442/https://fr.wikipedia.org/wiki/Agence_d'%C3%A9valuation_de_la_recherche_et_de_l'enseignement_sup%C3%A9rieur)) (~= Department of Evaluation of Research and Higher Education).
The [evaluation criteria](http://www.aeres-evaluation.fr/Actualites/Communiques-dossiers-de-presse/L-AERES-publie-son-referentiel-des-criteres-d-evaluation-des-entites-de-recherche) ([mirror](https://web.archive.org/web/20160608103201/http://www.aeres-evaluation.fr:80/Actualites/Communiques-dossiers-de-presse/L-AERES-publie-son-referentiel-des-criteres-d-evaluation-des-entites-de-recherche)) show that the AÉRES prefer labs where PhD students don't drop out too often.
Since the AÉRES evaluation has an important impact, e.g., on getting funding, professors have an incentive to increase the PhD completion rate (and they also have an incentive not to turn the PhD into some ever-lasting position: labs typically try to graduate students in 3-4 years).
---
In the US - at least in my university - I'm not aware of any incentive, except that new PhD students will avoid professors who have the reputation of taking a while to graduate their students.
Upvotes: 3 <issue_comment>username_4: I'm going to answer the opposite question. What disincentives do professors have to increase the PhD completion rate in their department? I'm looking at the opposite of what you asked because I feel it will give us insight into your question.
In some fields - I can only speak directly about the one I am familiar with, Computer Science - having students complete their PhD in a timely manner unfortunately isn't a focus. Professors who are already established may already have a little army of PhD grads out there citing their work and growing their PhD empire. This means that current or recent crops of PhD students under that professor sometimes get stuck. Why? Unfortunately, one thing that PhD students are is cheap(ish), reliable labor.
A PhD student is going to work on the tasks or research set before them by their PI. They are going to do their best to complete that research - not just because it benefits their career but because failure in that arena may lead to disfavor or even being removed from their program. A professor can be assured that PhD students working under them will focus on their research of choice and will invariably cite or reference previous work done by the professor. Incidentally the PI is almost always an author on any paper that student publishes.
That's a bit cynical, I'll admit. But even beyond the pure cynicism of authorship farms and indentured servitude is the basic truth that it takes time to train someone to do research work at a high level. If I have been researching creating artificial butterfly wings out of volatile polycarbonates(or whatever) then it's going to take at least a year, if not more, to get a student enough information, enough skill and enough confidence to wind them up and let them go on important research.
Right now some of the other academic posters are rolling their eyes. This sounds both incredibly cynical and a little bit tin-foil-hat-ish. But the thing is... these are real problems that have been identified in at least a few schools I know of. You get some old, cranky, tenured professor who has dozens of students out in the world and who has gotten used to his or her status as king of the research mountain, and all of a sudden students start taking longer and longer to graduate. All of a sudden that professor forgets what it was like to be a PhD student, trying to make a difference with your research and trying to get the heck out and graduate, and only remembers that it's annoying and expensive to train a new PhD student and, by the way, this student is producing pretty well and... well, it would be more convenient if this trained student would stick around for a while.
What is done about this? Some programs tie tenure positions with completion rates and timelines. This is problematic and, to be honest, I've heard people talk about it but I've never seen or heard of it in action.
Some programs have a soft cap on the length of time a student can remain a PhD student. I have seen this in action and I'm conflicted about it. On one hand, PhD students need to either graduate or leave the program eventually. But different fields can take different amounts of time to get results. Additionally, sometimes life just happens and a student who is moving along in their degree runs into a snag (changes PIs, has a health emergency, just needs more time). The hope is that these soft caps are flexible enough to deal with such situations. The ways I have seen these soft caps (for lack of a better word) implemented are by having a fairly strict timeline for coursework completion, having a deadline for committee selection (generally very early in the process), and having regular meetings between advisors (academic-style advisors, not research advisors) and students.
Although your question states that you're not interested in what could or should be done in these situations, I think it's valuable to say this: I think low PhD graduation rates are signs of an unhealthy department and an unhealthy academic culture. They should be taken very seriously, and the fix should never be the 'warm fuzzy glow' of graduating someone.
Upvotes: 3 <issue_comment>username_5: In the United States, the National Research Council (<http://www.nap.edu/rdp/>) regularly ranks university departments. One of the criteria by which departments (and thus universities) are ranked is the "Average of Median Time to Degree" with the presumption that lower is better.
In my university, this results in the provost's office putting pressure on us to reduce our time to degree through a number of measures, some coercive and punitive. Most of the time, the coercion and punishment is directed towards the graduate students (i.e., after the sixth year they lose their rights to certain fellowships, they have to pay more for their gym and library memberships, etc.).
But the department as a whole gets some attention. Our chair has been regularly sending out lists of our orphaned and missing ABD candidates (some now in their 12th year) and asking us to find and terminate them (either through graduating them, or formally asking them to leave the program).
I've also been putting some pressure on my peers through the admissions system by suggesting (since they are my colleagues, I have no power) that people with many students on the books should refrain from taking on new students until they clear their backlog a bit more.
I should note that I'm in anthropology where: 1) students tend to go into the field and disappear for quite some time; 2) doctoral students are mostly useless to faculty as slave labor, since most of us work individually and not in labs (with the exception of some of our archaeological / biological colleagues).
Upvotes: 4 <issue_comment>username_6: There's a bit of an inaccurate presumption in the question itself, or anyway ambiguous wording. That is, some assumption of causality...?!?
As a mathematician, I do not think of PhD students individually or collectively as assisting my research program so much as doing teaching work that I don't have to do. Even my own PhD students, while stimulating to talk to, do not "help" my research program. Perhaps this is contrary to mythology. Although more-experienced TAs (teaching assistants, in the U.S.) are generally better at their job due to that experience, that level of proficiency is attained within a year or two of beginning the role. That is, in my sort of situation, there is absolutely no motivation for faculty to delay completion of PhDs. Indeed, as @roboKaren comments, there is some pressure from higher-ups to improve "stats". (That is, in terms of TAs, we cycle them through at a certain rate, so there're always roughly the same number to do the TA work. Shortening or lengthening the cycle time would accomplish nothing, and we have no motivation to do so.)
Is the supposed issue about people leaving the program? This is a very complicated question, considering that it might be entirely reasonable to leave a program if/when one discovers it's not what one thought. Should we brow-beat people to stay? Bribe them? True, being nasty to try to "drive out weaklings" is incivil, but I don't address that possibility.
If the question is about the "drop out rate", then it is probably impossible to answer usefully. If it is about time-to-completion, some useful things can be said.
This simplifies the question to two very different ones: about my interest in *colleagues'* students finishing faster/slower, and about my interest in *my* students finishing faster/slower. Apart from general humanitarian/professional-ethical concerns, I have no interest in the arc of other peoples' students, as it is really not my business. It is true that my perception (based substantially on two stints as Director of Grad Studies in Math) is that some PhD students and their advisors devolve into an unproductive, uncommunicative relationship, with "blame" not clearly assignable, but which creates enormous difficulties in completion. E.g., if either party is naturally non-verbal (despite the seeming ok-ness of this in math, ... which is not the case, actually), there is a potentially fatal problem. If there is a misunderstanding (from either side) about the context, origins, degree, and sense of the alleged "novelty/progress" of a thesis, this can be a huge obstacle.
For my own students, I try to arrange a thesis project that will optimally
use the talent, energy, background, etc., in the time fully-funded by the department as TA (not counting on RAs (research assistantships), fellowships, etc). My thinking is that one's clock really, really starts once one is "out the door", so taking a sixth year instead of just five may be well worth it. But if/when someone says they want to finish more quickly, a suitable project can be chosen, up to a point: usually, there is a misunderstanding, a misguided prior belief that, indeed, somehow, faculty or departments are conniving to delay degree completion. :) "If only they'd get out of my way so I could do my research...!?!" ... overlooking the point that all the "required" stuff is meant to assist that (nevermind the common stylizations and distortions and misunderstandings).
So: incentive for *quicker* completion than the funded period? None, on scientific and educational grounds. On bureaucratic grounds, there's an ever-hardening soft cap on funding, which is a good reason to get people out the door... even though this soft funding cap is meant in practice primarily to avoid funding "eternal grad students" who make a bit of progress but either can't finish a PhD-worthy project or who are unhire-able for some reason (e.g., inability to speak in English, in the U.S.).
So, while my experience as Dir Grad Studies in Math made me unhappily aware of others' foibles, I don't think there's much motivation in mathematics for faculty to *delay* PhD graduations, and only a little to accelerate it. But faculty aren't the obstacle, in most cases. It's just that the U.S. undergrad system under-prepares people for grad school, so there're two or so years of basic coursework necessary to be even marginally literate, and then becoming aware of the basics of any fragment of modern mathematics can (obviously... though people in CompSci seem to have a radically different epistemological mythology) take a year or two. Unsurprisingly, quite a few people discover that "math" is not what they thought it was, given that U.S. coursework typically is at best very-early 20th century stuff, giving no inkling of mid-to-late 20th century stuff, apart from the not-really-contemporary-research, somewhat delusional (inevitably fairly contrived) "REU" (research experiences for undergrads) episodes. Given the popularity of REUs in the U.S., I'll not comment further on the alleged virtues-or-not... apart from agreeing that, yes, it is good to spend some time hanging out with other same-age kids who really like math, and not being in the usual stodgy classroom situation...
So, if "completion rate" means completions-per-admission... well, that's a can-of-worms, all the more for programs (such as ours) which try to be imaginative in visualizing the success of some students who do not necessarily have the traditional trappings of "success", but seem to have sufficient interest and talent. Gambles! If we'd be taken to task for not gambling on sure things... ?!? And a similar analysis applies to length of time to completion: pressuring people or limiting them to finish more quickly could induce more "failures", rather than "coerce" faster success.
Again, I feel that the question itself is predicated on a misunderstanding of why it is not so easy to get a PhD... but maybe I'm just old-and-tired. :)
Upvotes: 3 |
2014/08/23 | 1,518 | 6,513 | <issue_start>username_0: Today I saw that one of my students in the class was looking very sad. He sat down at the end of class and he looked like he was in a bad mood. After all the students left the classroom I called him over. He told me that his mother has cancer.
I told him be strong and try to not think about it too much. But honestly, this was a big lie. How can I try to get him to not think too much of it?
What should I have said to him?<issue_comment>username_1: As an instructor, the best you can do is to offer your condolences and tell him to just ask you if he needs anything. For example, you could offer an extension on assignments. If he needs some time off from lectures, maybe a classmate who takes good lecture notes will agree to make a photocopy, or you could get the lectures to be recorded for him.
Your university might also have a policy that allows students to withdraw from the course and receive a refund of fees without penalty to grades: it would be worth finding this out and advising your student if this is possible. Depending on the severity of the illness and how much it is affecting your student, he might wish to take fewer (or zero) courses for a while.
If the course is nearly finished, perhaps an aegrotat (compassionate consideration) will be applicable to the final exam. But neither an illness in the family, nor a bereavement or any personal physical or mental health issue can excuse a student from having to learn the material and complete assessments in order to pass a course. He will still have to demonstrate mastery of most of the material.
Just be compassionate, and as flexible as you feasibly can. Nobody could ask for more.
Upvotes: 8 [selected_answer]<issue_comment>username_2: I imagine that most universities will have policy on this, and professionals who are trained to help. Apart from "find out the policy", IMHO unless you know the student well personally *and are confident in your ability to deal with things like this* (which the fact that you are asking this question suggests you are not) then all you should do is express sympathy and,
* ask whether they've told anybody else at the uni. If not, **with their permission**, consider informing whoever has overall responsibility for their academic progress (eg head of dept, director of studies, etc)
* make sure that the student is aware of whatever counseling services your institution offers
* research for your own info what arrangements can be made for extensions to deadlines, or consideration of circumstances when exams are marked, both for the current situation and in the event that the parent dies.
* make it clear to the student that allowances can be made (assuming this is the case), and that they should not be hesitant to speak to you if they feel that the situation is affecting their academic performance. No need to be specific for now - if the student is worried about this then just knowing that there are "options" may reduce the stress that they feel.
NB I have no particular qualification to comment here; once again, your uni probably has people whose job it is. If in doubt, consult them.
Upvotes: 5 <issue_comment>username_3: You might encourage him to reach out to other resources at the university. I imagine your university has counseling or psychological services, a chaplain, and the student may have an assigned advisor who can help the student understand their options, and provide professionally trained support.
I think the comments about offering certain accommodations, as you think appropriate, is generous and reasonable.
Upvotes: 3 <issue_comment>username_4: It's unhelpful to tell you the teacher what you "should have said" because there is no perfect recipe for what to say, if exactly the same situation should arise again which it won't. Your priority should be to acknowledge that your student gave you an answer which you understood. I advise not advising anything before you, or someone with time for closer discussion with the student, has identified an actual need. I would ask whether his mother is receiving treatment or tests. From the answer you will learn whether the student has good or uncertain information, without which you cannot know whether this is just an imaginary case of worry, an actual terminal cancer case or likely something in between. There are other good answers here aimed at helping the student complete or delay completing the class. An empathic response (that I learned from a doctor) to be used if and only if the student talks about his Mother's medical progress is "Prepare for the worst and hope for the best".
Upvotes: 2 <issue_comment>username_5: Its good that the student shared the problem,the best thing to do is to let the student know that the parent is not the only one with cancer and he shouldn't over think about it,its normal,it happens and most of all it can be controlled by a series of medical procedures,the student has to accept the fact because its not the end of everything after all,Just try to encourage him or her that the doctors are under control and the parent will be ok
Upvotes: -1 <issue_comment>username_6: >
> I told him be strong and try to not think about it too much. But honestly, this was a big lie. How can I try to get him to not think too much of it?
>
>
>
This is not your place. I was due to graduate top of my class back in 2005 before a tutor started to hand out unprofessional advice that she was not qualified to give. I stopped working altogether and couldn't graduate. The gap on my CV and the loss of my dad, plus not being able to graduate caused a lot of pain. I hope you keep it professional. You might be opening a can of worms by overstepping the mark and putting someone's life in a much worse place.
Upvotes: 3 <issue_comment>username_7: My two cents: While a tragedy has befallen this student's family, the student must keep up his work.
Part of life is learning how to deal with blows like this, and to keep moving though them.
He will need to prioritize things in his life, in order to make sure that he fulfills his educational requirements, while also meeting familial expectations.
Working on his education can actually be a way to take his mind off of his family situation, if he chooses to look at it that way, or he can see it as a burden, but in the end that is his choice to make.
I think you did right to offer your condolences, but am not sure if there really is anything else you need to do as an instructor.
Upvotes: -1 |
2014/08/23 | 767 | 3,575 | <issue_start>username_0: I am a PhD candidate currently preparing a manuscript for journal submission in the environmental modelling and engineering field. One of my authors is from China, and contributed some data and ideas for the methodology that I found quite useful.
This person wants to be the corresponding author for the study with the argument that being the corresponding author is the only way to secure future funding for the study in China/from Chinese sources. Yet he is no way in a position to explain and defend the study, as he wasn't really involved in modelling, analysis and writing, and has limited understanding regarding the main model functioning.
I have so far refused to let him be the corresponding author, but he is really putting pressure on me to be the corresponding author.
Can someone give me some pointers on why it is so important for a Chinese academic to be the corresponding author? Does his argument about funding generally hold true?<issue_comment>username_1: If he's just one of the many authors, this paper your are talking about is of 'no use' to him. He can't use it to claim rewards of any kinds (funding, awards etc) in China. However if he is the 'corresponding author' (or first author), he is able to claim that he played an important part in the paper and this IS his paper...He's not lying in saying "...is the only way to secure future funding for the study in China/from Chinese sources..." but do you want to let him do that?
Upvotes: 3 <issue_comment>username_2: I am currently a foreigner postdoc in Mainland China, and have been here since almost 2 years. I have discussed this specific matter with colleagues, and also often run in the same moral conflict. Therefore I hereby write from personal experience.
The issue the OP puts forward boils down as, why should somebody who is unable answer any specific technical questions about a published study figure as a corresponding author?
The practice of including supervisors as corresponding authors is widespread in China for a number of reasons. Still nowadays for many funding opportunities (and refunding & rewarding as well, concerning publication charges and prizes) the published status as corresponding author is a requirement. Moreover the corresponding author is regarded as higher in the hierarchy in the local strong leader & face culture. This is why this person is pushing to be listed as corresponding author, and also because pushing for short-term goals is also common practice in Mainland China.
I have until now resisted against making "honorary" corresponding authors, but this frequently leads to a conflict. Not only in China, but here this is taken more seriously and may lead continued issues.
Upvotes: 2 <issue_comment>username_3: I have been doing research in China for 10 years now, and hopefully I can answer your question. When a candidate is being considered for a faculty job here, only those papers where he/she was a first author or corresponding author are counted. This is because there is a rampant abuse of honorary authorships, namely colleagues and students are added even though their contribution was minimal or none at all. As usual, instead of cracking down on the abusers, more rules and regulations are put in place. This is why in most papers published from China, there are around three to four first co-authors (whom contributed equally) along with four to six corresponding authors. The consensus is currently being upped another level where some institutions are not counting corresponding authors.
Upvotes: 3 |
2014/08/24 | 1,329 | 5,668 | <issue_start>username_0: I will defend my PhD thesis soon and I currently have a bunch of papers in a finalization phase. As first author of these papers, and as my work is funded with public money, I am very uncomfortable with the idea of giving my rights to a private editor, for ethical reasons. I would like to release my papers under a Creative Commons-like license or in the public domain (CC0?) but I want them to go through a peer-review process.
How can I do this? (And why do so few researchers seem to be concerned with these ethical problematics?)<issue_comment>username_1: I will address two aspects of Creative-Commons licences seperately:
**Open Access and not giving somewhat exclusive rights to a journal**
The reason why journals do not give everybody access to their articles and want some exclusive rights on the article is that they need to pay their expenses (typesetters, copy editors, printing, maintaining editorial managemetent system, …) and do so by selling articles and issues (mostly to university libraries and similar). You can change or circumvent this by:
* Publishing in an open-access journal. In this case, it’s usually you and not the reader who pays for the journal’s expenses. Since there is no need for the journal to get exclusive rights, you can usually publish the article under a Creative-Commons licence. Note that some funding agencies give you money exclusively for publication costs, which you can spend on this. Be aware though that [there are more black than white sheep](https://academia.stackexchange.com/q/17379/7734) in this field. Finally, the price of publication is debatable (but so is the price of pay-to-read journals).
* Nowadays, many journals offer pay-to-publish open-access options, which are like the above, just that your article is published in a classical journal instead of a pure open-access journal.
* Many journal allow the authors to disseminate preprints, e.g., via [the ArXive](http://arxiv.org/). Though they still get some exclusive rights to the article and there may be some restrictions, there is de facto open access to your article. SHERPA maintains [an extensive database](http://www.sherpa.ac.uk/romeo/) on what kind of preprint publication is allowed by which journal.
As to why researchers do not care more about this: The old publication system is an established structure which takes time to change. Theoretically, we could do with one central, globally funded open-access publication system (in particular by avoiding printing costs). Also note that in technologically inclined fields, publishing preprints is very common, so there already is open access to most publications and thus less incentive to change the system.
**Allowing others to build upon your work**
This aspect of the Creative-Commons licence mainly makes sense for works of art or widely used texts (such as licences), which somebody would actually want to directly build a work upon.
Building your work directly on a **research article** would be a very unusual thing to do, as instead of modifying the original article, you would rather publish a comment or a new article citing this article. Do not forget that building upon the ideas of an article is allowed anyway (unless they are patented), it’s just building on the text or the figures, which would be changed by putting the article under a Creative-Commons licence. Finally, it would arguably be more harmful than useful, if anybody could publish an altered version of your paper, as this would lead to confusion for readers of work which cites your paper.
With a **review article** it makes more sense to allow others to edit your article, which is the idea behind [Scholarpedia](http://www.scholarpedia.org/article/Main_Page).
Upvotes: 3 <issue_comment>username_2: Let me build upon excellent username_1's answer, adding a few things:
* there is a *de facto* open access standard license, and that is **CC-BY**. It is similar to public domain, and I'd strongly suggest not to use CC0, as you would waive also the *attribution* of your work. CC0 it is used for databases and data, where CC-BY licenses are trickier.
* institutional or disciplinary archives as ArXiv are heavily used in some academic communities (eg. math, high energy physics) but they do not provide peer review. It is customary within those communities to archive pre-prints. You should try to understand the customs in your very own community: it's very important if you want to make the best choice (and if you want to foster open access publishing in your discipline).
As per your last question:
**why do so few researchers seem to be concerned with these ethical problematics?**
Because their priority is their **academic career**. Academia is built on personal, scientific reputation, and the current system (with Impact Factor and other scientometric indicators) is completely unbalanced towards old, authoritative closed-access publications that provide the "authority" a young researcher needs to gain reputation between its peers. You need to publish a lot, and in reputable journals. You will advance, get grant, get tenure on your publication (and citation) record.
Open access publishing (as a model) is relatively young, thus it is difficult for open access publications to gain the same reputation as those other journals; thus researchers don't want to publish there, thus the old, closed-access system survives.
Often, the rational, selfish choice of the researcher is enough for the ethical choice to be bypassed. Ignorance of alternative models (and ignorance of the intrinsic flaws of the current closed access system) do the rest.
Upvotes: 3 |
2014/08/24 | 596 | 2,353 | <issue_start>username_0: While somewhat related to this [recent question](https://academia.stackexchange.com/questions/27543/my-student-told-me-his-mother-has-cancer-what-do-i-do) but this situation is different in that my supervisor has over the recent months had unspecified health issues. It has not overly effected our relationship other than a bit of a delay in some of our emails. Without knowing exactly what is wrong it was conveyed at one of our meeting that it was not overly serious. In the last email I received it was noted by my advisor that their health was poorly over the summer.
As someone who has previously had a prolonged serious bout of [Crohn's](https://en.wikipedia.org/wiki/Crohn%27s_disease), I can understand how poor health can impact on someone's life and would like to be able to make our professional relationship work in a mutually beneficial way.
So in a way my question is two-fold.
1. Is it appropriate for me to sympathize/empathize with my supervisor about their health issue?
2. Should I ask them how they best wish to proceed when they may be ill in relation to submitting work etc.?<issue_comment>username_1: Since in a recent email your advisor has told you about the health issue, it is perfectly acceptable for you to sympathize/empathize with them and it would not be out of place to ask if you could do anything regarding your work to accommodate them. That said, if you feel uncomfortable raising the issue, you should be able to depend on your advisor being mature enough to tell you what he/she needs in terms of accommodations.
If your advisor had not mentioned anything to you and you wish to sympathize/empathize with them, then you need to be more delicate.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Depending on how much it slows down your research, how far you are from graduation and how bad the illness is, a common solution is to get a temporary or permanent co-advisor. As [username_1 said](https://academia.stackexchange.com/a/27568/15723), your advisor should be mature enough to tell the students if it is necessary to do so (this actually happened a few weeks ago to a friend of mine), but illness can be treacherous (also happened during this summer...) so more risk-averse people might prefer to start looking for a co-advisor even before they are advised to.
Upvotes: 2 |
2014/08/25 | 460 | 1,878 | <issue_start>username_0: I want to improve my learning skills. Is there any advice or a good method for that? For example, I want to read an article or book and retain as much information possible. But sometimes I read and lose focus, or when I read I can't remember everything as well as I'd like to.<issue_comment>username_1: **Short answer**
Practice!
**Long answer** Practice (smart)! The more time you spend being a student (studying, reading, summarizing, asking questions, taking notes, and pursuing answers) the better you will become at being a student. For more specific tips, check out [this article](http://www.scotthyoung.com/blog/2007/03/25/how-to-ace-your-finals-without-studying/) by <NAME>. He focuses on methods that allow you to study smarter rather than harder. The key to this is to recognize that all knowledge is interrelated; everything connects to everything else, and therefore no learning is wasted. I find his tips and hints to be useful, though I cannot learn quite as efficiently as Young claims one should be able to.
Upvotes: 2 [selected_answer]<issue_comment>username_2: Something to look into is style of learning. For example, I'm a haptic learner, which means I learn best through doing. 'Doing' in this case can be a simple as making flashcards. On the other hand, I can try and read a text book all day, and I won't retain much at all. Therefore, 30 minutes of making and drilling flashcards is a much more efficient use of my time instead of an hour spent reading.
Everyone learns slightly differently. If you learn best when you hear something, try recording the lectures and playing them back. If you are a visual learner, maybe look into plotting out lecture points on a white board. Honestly this is just something that you need play around with, though you'll find many ideas to get you started on the internet.
Upvotes: 0 |
2014/08/25 | 5,757 | 24,822 | <issue_start>username_0: I think this is an unspoken question in academia: how do you factor in the effort put forth by a student before taking your class?
*I think the answer to this question has several consequences:*
1. it presents accurate reflection of your teaching,
2. it presents more accurate reflection of the quality of work of a student
*It also addresses a more profound question:*
Is good educational outcome measured by grades a product of privilege or a measure of intelligence?
---
Personal anecdote:
In my undergrad classes, I find for some courses the class will be divided into two camps - one who has done some work outside of the class (the ones who have a background), another who will try to learn the course material as the course progresses.
The results are vastly different. The ones who have done work outside of class will achieve higher grade across the board, are more motivated, ask extremely technical questions in classes, gets all the attention of the professor as the next "rising star", etc., whereas the other students who haven't had as much experience prior coming to class will try his/her hardest with uncertain outcomes and will tend to struggle a great deal more.
From my own observation, a course taught by a lecturer who uses less conventional course material seems to even out the grade much better than a lecturer who uses standard material - since in these cases, the advanced students are less able to predict what will show up on the exams.
This is most significant in computer science classes. Any computer, CS, programming classes will teach things in two folds: practical and theoretical. I find in many of my classes, a portion of the student will have a great deal of knowledge in the former, therefore significantly reducing his/her course load. How easy is it to ace an introduction to programming course if you already have years of experience in the language? These courses will always produce several students who go on doing interesting research or work at some big-name companies, and the lecturer will be praised as being effective at teaching. But it is so obvious to me, that the student - the one who has taken years of programming at middle to high school level, a summer course at another University... - is more advantaged in this area than a student who is just coming in to be acquainted with a field of study.
Hence, what is the ethics of preparing yourself for a course before it begins given such uneven playing field in today's academic setting? Will this be a mark of a motivated student who is genuinely interested to learn a subject or a student who is just trying to get ahead and earn good grades? Why should or shouldn't a student prepare himself for all the courses he will take one or two years down the road?<issue_comment>username_1: A student takes a course to prove he learned the stuff in the syllabus. He/she passes the course, and then has official certification that he knows that stuff.
Who cares *when* he learned it?
The trickiest part is keeping the course interesting for the high-flyers, without alienating the less able (or less knowledgeable) class members.
Upvotes: 5 <issue_comment>username_2: IMO, there is no ethical problem with students preparing for a course in advance. As username_1 notes in their answer, what matters is that the student learns the material, not *when* or *how* they learn it.
That said, there are (at least) two actual problems that your question touches upon:
* the occasional problem of advanced (or self-taught) students taking beginner-level courses for "easy credits", and
* the practical problem of teaching a course for students with a highly variable baseline experience level.
---
The first one *could* be considered unethical behavior in some circumstances. For example, let's say I'm a math major who decides to minor in biology, and the university happens to offer an "introductory math for biologists" course. If I was unscrupulous enough, and nothing specifically forbade me from doing so, I might be able to take that course and basically just show up for the exam, getting free biology credits for stuff I learned in first-year math.
(In fact, I freely admit to having done something similar myself on occasion. For example, one of the last courses I took as an undergrad, to make up the credits I needed to graduate, was a first-year "computer literacy" class that basically consisted of learning how to use a web browser, e-mail and a word processor. As I had already completed a minor in computer science, as well as worked for several years as a full-time software developer, I hardly need to mention I did not find it much of a challenge. Still, it got me the three extra credit points of "general studies" that I needed.)
It could also be argued that there is no problem: if you consider the credits to be awarded for knowing the material, then I clearly deserve them. On the other hand, if you consider the credits to be earned for *learning* something, then I clearly do not (as I *already got* credited for learning the same material once, back when I first learned it in math class).
More generally, issues like this can occur any time an institute offers multiple introductory-level courses covering essentially the same material (but possibly from a different angle) and does not specifically forbid students from taking more than one of them. When that happens, a student can effectively learn the material once, but get credited for it *n* times, where *n* is the number of such overlapping courses they can find.
The usual solution here is simply to try to find and close any such loopholes. Some possible methods for doing that include:
* explicitly designating certain courses as equivalent to each other, so that a student can only receive credits for one of them, and
* restricting certain very basic introductory courses, like the "math for biologists" example I gave above, only to students actually majoring in the target field (here, biology).
Also, *if* students are going to commit such shenanigans anyway, it can be argued that they should be allowed to demonstrate their competence in a separate exam, so that they don't need to waste time and classroom space, and skew the grade distribution, by actually sitting in a class they don't need.
---
As for the second problem I mentioned above, i.e. teaching a class where some students already know much more of the material than others, there are several distinct approaches that people may argue, depending on their general teaching philosophies. My personal suggestion, which *I'd consider* a "moderate" position, is to take a "triage" approach and mentally divide your students into distinct groups:
1. The **advanced** group consists of the students who already know much of the material, and just want to show their competence and get the official certificate for it. Some may be hoping to learn some interesting new tidbits, but they'll still pass the course even if they learn nothing new at all.
2. The **learning** group consists of those students who actually more or less match the intended level of the course: they know the prerequisites reasonably well, and are reasonably capable learners, but don't yet know the material you'll be covering. In many (but not all) courses, this group will make up the majority of the students, and they're the ones you should adjust your teaching pace for.
3. The **struggling** group consists of the students who lack some of the prerequisite knowledge the course assumes, or who are otherwise falling behind the rest of the class. They'll likely need remedial teaching to have any hope of passing the course. You should not have many students in this group; if you do, it may be sign that you're going too fast, and some of the students that *should* be in the "learning" group are falling behind.
It can be a temptation to focus on the "star" students in the advanced group, since they're the best in class, have the most interesting questions (if any), and generally appear to offer the most reward for the least effort. They're also the ones most likely to end up as future grad students in your field, which makes focusing on them even more tempting.
However, the advanced students are, by definition, *not* the ones the course is actually intended for, and they're not the ones you're really there to teach. Rather, your main goal with the advanced students (insofar as they come to class at all, rather than just taking the exam directly) is simply to **ensure that they're not bored**. There are several ways you can achieve this, such as:
* giving the advanced students additional bonus exercises or projects that go beyond the standard syllabus, or letting them pursue their own side projects;
* encouraging the more advanced students to help the less advanced ones;
* encouraging students to come see you outside class if they'd like to learn more about something that sparked their interest; and
* if you still cannot engage some students in class, letting them skip it if they already know the material.
---
As for the struggling students, there is an argument to be made that you, ideally, shouldn't have *any*, and that even a single student falling behind is a sign that you should slow down until they can catch up. Essentially, this approach amounts to lumping the "learning" and "struggling" groups together, adjusting your pace to the tail end of that combined group, and treating any students who progress faster as "advanced".
While there *is* some merit to this approach, it can also be taken too far. Especially in higher education, it is often simply not possible to set the pace by the slowest students in class, and still have enough time to cover all the material in the allotted time. Instead, as an alternative to simply letting these students fail, it may be more effective to set the in-class pace by the majority of the class, and to focus specific remedial efforts (such as individual tutoring, possibly by some of the more advanced students) on those that need it.
Of course, even this won't always help, and sometimes you may simply have to let a student fail. In particular, it's important to learn to distinguish students who'd *like* to learn, but have a hard time doing so, from those who are simply too lazy or disinterested to learn. The former group can be helped; the latter cannot.
Upvotes: 7 [selected_answer]<issue_comment>username_3: The goal of a university course is education on a particular topic. Students who study before the class are educating themselves on that topic. Students who don't study before the class are, hopefully, being educated by the teacher on that topic. Both of these things achieve the goal.
Frankly, if you have even the slightest suspicion that it is unethical for your students to educate themselves, you are in the wrong place. There is no such thing as "too educated" in academia.
Upvotes: 3 <issue_comment>username_4: I think the question can be greatly simplified:
**Should credits be awarded for effort or for knowledge?**
The person who knows their stuff without having it picked up in class was obviously smart enough to pick up the stuff they needed to know on their own or before hand. Call that preparation, genius or whatever, if he or she knows their stuff, they deserve the credit, no matter how long, short, hard or easy they had to work to learn it.
Should you give credit to someone who does not know what their doing, just because they put a lot of effort and time into it? I hope not! I do not want to be operated on by a doctor that put many years of hard work into NOT KNOWING their stuff, I rather get an operation from a self taught autistic savant that spend 3 Month reading and watching videos, acing all tests, never sitting in a single class!
Sure I admire people that put a lot of effort into learning something, because they can put that much effort into something, but when a job needs to be done, the only thing that counts is, who can do the job!
It's a shame that so many Universities require students to take nonsensical courses, which they have to pay a lot of money for, rather than giving them the option to just take the test. And why do Universities that allow you to just take the test want like 50% of the tuition from students that only come in once for the test. Quite an obscene payment for the time to take the test and correct it...
Don't get me wrong, credit for effort is GREAT, ... in Kindergarten!
Upvotes: 3 <issue_comment>username_5: Some parts of the implicit hypotheses of the question are misguided. First, courses are about the students, not about the teachers. The teachers may *give* the students something, but if the students already "have too much", this is the *opposite* of a "problem".
Yes, it is a minor sociological problem to have highly motivated students in a room with unmotivated students. :) Hmmm. :)
Also, a "problem" to have genuinely gifted students in a room with ... uh, maybe anyone who'll "feel bad" if they can't "compete"? But, wait, why is it a competition. Oh, yes, "grades". Hm. Rewind. Don't punish kids who're talented and who've taken initiative, and don't punish kids who're just "doing the class", either. Classes should not be a test of giftedness, (or else it's a rip-off), nor should they punish it.
The notion that we can arrange to have student populations be homogeneous is ridiculous... btw.
That is, the outcomes of inhomogeneity are inevitable. If the gifted kids, or kids who've read lotta books before, already know all the usual pranks, it's fine. Don't make up extra-perverse pranks to try to bring them down, and don't assign them extra classroom chores as *punishment* for doing well. That is, don't add perversities to the accidents of nature and society.
When I was younger, I did often sign up for courses that I already felt I'd read about sufficiently to not feel at a disadvantage. Seemed reasonable to me. Not to mention that a sincere person might not feel constrained (as other answerers have mentioned) to wait to take a class to read the dang book. Srsly.
Should unambitious students feel intimidated by the few students who've already read all the stuff, know much more? Uh... yes. Not that they should fear that their deficit from that level will be fatal, but, yes, that that level of function is possible and reasonable and *desirable*. Any program that makes precocity penalizable is criminally perverse: knowing more is good; knowing less is less good.
One operational problem is that such stuff is not easily "formalizable" and "institutionalizable", but that should not be allowed to be an obstacle. It's possible to teach "typical" kids, and not torment them, without hassling exceptional kids. Obviously.
Upvotes: 2 <issue_comment>username_6: Whats the difference of the student (say, X) who studied long before taking the course and the student (say, Y) who failed the course the first time and have to retake it in the next year?
Seems to me that X is a better student than Y, because Y once failed the course. But in term of preparation before taking the course, both have them. And if any of them obtained an A grade in ethical way, that is, by doing their homework, and doing the tests correctly, I find there is no problem at all. We shouldn't discriminate either X,Y, or normal student in obtaining their credits/grades.
As for whether credits should be awarded to effort or to knowledge, it is at the discretion of the professor/class, the professor might give grades partially on effort (homework, essay, or project) and partially on knowledge (test results).
Upvotes: 2 <issue_comment>username_7: There is actually two good questions contained here but conflating the two would be a mistake.
1. What to do about students who already have a lot of knowledge that would be covered in the course?
2. Is there a socioeconomic problem with some people having more resources and therefor being more prepared.
The reason I split these questions is that it is entirely possible for a student to use public resources, or take classes in a way that they end up not challenged by a class. And depending on the motive there should be no problem with this. If I was interested in a topic I would probably do some amount of self study before enrolling in a college course (I have finished a B.S. and am working on a M.E.), I have also taken classes at the Master's level that were far less challenging for me simply because my Bachelor's was in Math. However, if I were to take a class that I knew all the material for just to get more credits or improve my grade there is an issue because I would be taking space and resources from people who would learn more from being there. So, there is no ethical problem for a student to self study because they love the material or so that they can maximize their return on the course... in fact we should encourage them. But taking a useless class in order to meet a metric is wasteful and denies resources to others.
(For the next part my view is that of someone living in the U.S. but it is applicable widely)
As for the second question, I agree that there is a socioeconomic issue with some students having the resources to be more prepared; my question is how do we mitigate this? I would suggest a combination of more free computer and book access, better schools, and more activities being available outside of school. Personally, as a father, I am making an investment in my son by using my wealth to enrich my child's mental abilities, and that is perfectly fine. As a people we need to do better at erasing economic gaps and at realizing that social gaps are not gaps at all but opportunities to grow. So, ethical? On a personal level yes, on a societal level no. Who is to blame? All of us... but all of us are also the solution.
Upvotes: 2 <issue_comment>username_8: There are cases in Academia where students are held to different standards based on preparation. At my university there were some classes in mid-level physics that were cross-listed as both undergraduate and graduate classes. Typically the students were graded on two curves, one curve for undergraduates biased by actual undergraduate scores, and a second curve for graduate students biased by actual grad student scores. I thought that this approach was both merciful and a decent solution to the "different levels of preparation" issue raised by Illegal Immigrant.
When the differences between students are not easily categorized as above, it becomes difficult to divide students into groups. This is a practical matter, not an ethical one. Unfortunately, practicalities often trump ethics in American society since Americans place a high value on simplicity and equal treatment. It is a real problem, but usually there is no admissible (constitutional) mitigation for the problem.
Upvotes: 2 <issue_comment>username_9: It doesn't directly apply, because the difference didn't come from prior prep work or experience, but during my doctoral program there was a mandatory class that had two different groups in it - PhD students in that department, and PhD students outside it who, while fairly sophisticated, clearly lacked the same background in terms of coursework.
The professor handled this by dividing the course into "tracks". There was a modest "shooting for a B" track, a much more challenging "shooting for an A" track, and a hard-mode "shooting for an A+" track that actually had little effect besides drawing a certain sort of student, since the + in a grade doesn't even exist at that institution.
The department's own majors were required to be on the A or A+ track; everyone else could choose. You might similarly be able to ask students to self-rate their familiarity with the subject, and strongly encourage those who rate themselves well to consider a more challenging track.
Upvotes: 3 <issue_comment>username_10: For our purposes, there are three types of students causing the concern:
1) Students who have taken similar, or comparable-level courses and are "repeating" this course for a "cheap" grade.
2) Students who have not taken comparable level courses but have "boned up" ahead of time.
3) Students who have not done 1) or 2) but whose proficiency in computers gives them a "natural" advantage.
The ones we should be most concerned about are the students in the first category, the "repeaters." That is solved by establishing course levels, so that students who have passed courses at an equal or higher level cannot take this course.
The students in category 2) don't provide much of a worry. At one level, you want to applaud them for their "eager beaver" tactics, but at another level, it might not do them much good. That's because they are (probably) learning the book, or the course, not the material. As you pointed out, introducing new material dissipates their advantage most of the time.
The students in the third category are the ones you want to encourage. They are those whose general computer efficiency at work or playing computer games gives them an advantage in computer science, even if they haven't seen the material before. Why should that not be the case? Someone who has worked with hammers and nails and other tools will have an advantage in physics over someone who hasn't.
Yale math professor <NAME> would "sort out" his students on the first day of Calculus class by giving an *algebra* quiz. It was one that everyone would "pass," but some would do so faster than others, and those were the likely "A" students.
Upvotes: 2 <issue_comment>username_11: Ethics aside (as many people have noted, the concerns that arise aren't strictly ethics-dependent), there are practical issues to address.
Harvey Mudd College saw this in a big way in its CS courses, particularly the introductory sequence. Historically, they had a required intro programming course that first-term freshmen took unless they placed out via an AP or local exam. This left a lot of students with substantial programming background mixed with students who had none. This broke the class dynamics in many ways. Students with experience could answer questions and complete assignments much more readily than those without. Students without would often get disheartened, feeling like they lacked talent in comparison, rather than recognizing the difference in background from their more advanced peers.
They addressed this issue by separating incoming students into various streams until they either completed the intro course and went on to another major, or were on an equal footing going into the CS-major coursework. The true intro course was split into sections for students with moderate experience and those with little to none, and switching between them was made relatively accessible. Students with enough experience to place out of the intro course, but without all the material the school wanted them to have for the major, took an advanced intro course that covered the first in-major course's material, while those who took the plain intro would take a second-semester course covering just that more advanced material.
The consequences of this were pretty huge. Gender parity in the CS major shot up, even faster than the rapid shift of the school's overall demographics. Enrollment in subsequent CS courses and the CS major increased among all groups of students.
Upvotes: 2 <issue_comment>username_12: In my honest opinion, this is more of a corollary to a fundamental systemic problem than an ethical one. The question to be raised is why such students take those courses. There are a few motives I have personally witnessed:
* To get an easy grade
* The course is mandatory or a prerequisite with no way around it
* Students need to meet some numerical credit requirement, even though they don't really want to take the course
All three of these reasons point to issues with how a university functions. Universities seem to behave as if their priority is to give an "equally" measured certificate; whether or not someone actually learns something seems to come second. This is not only the universities' fault. Institutions of all sorts seem to use GPA as a significant measure of success. I have seen scholarships or university benefits where GPA was literally the main judging factor. This motivates people not to "risk" a lower grade in a more technical course when they can get an easy "A" while learning practically nothing new.
Another issue is that exams are in many senses unfair: it is quite possible to get a worse letter grade than one's understanding deserves. Because of this, and for many other reasons, such students are encouraged to retake the course in order to improve their passing grade rather than taking follow-up courses.
Upvotes: 1 |
2014/08/25 | 1,493 | 6,621 | <issue_start>username_0: **Intro**
For an upcoming project I have to get into a new topic of research which is not similar to anything I have worked on so far. Furthermore, this project requires learning new programming language(s) and simulation environments. In the end, a software implementation and a final report have to be delivered.
**About the topic**
The whole topic is about wireless mobile communication. The basic communication technologies used in this topic are the same as the usual mobile wireless technologies I've learned about before. However, the mobility dynamics are fairly different, and the clustering algorithms used are not typical.
**The essential question**
How do I tackle such a situation, where multiple new things have to be learned before the final results are delivered? I separate the things I have to learn into:
* Prerequisites: programming languages, simulation environment, relation between them and the topic basics
* Main work: learning the essential elements of the topic and becoming fluent in them
**Past experiences**
In the past I have usually worked on research topics which I have fulfilled the prerequisites for. So, I would tackle the topic following these steps:
1. Read broader literature: Surveys
2. Read literature about the specific problem
3. Implement if there is something to be done
4. Write final report
In this case I am a bit **lost**. *I don't know where to start.* That's why I need help from more experienced researchers. Compared to the list I provided above, step 3 is considerably more complicated in this case, as I am not familiar with the programming language.
**What I plan to do**
I want to follow these steps in order to get into the topic:
1. Fulfill the prerequisites: get familiar with the required programming languages and simulation environments. Coupling between them etc. Do some exercises until I feel confident.
2. Start reading broad literature: Surveys
3. Focus on the specific problem: Read specific literature
4. Start implementation
5. Write the final report
I am not quite sure if these steps are OK. Sometimes I get confused and I want to move `4 -> 2`, `2 -> 3`, `3 -> 4` and maybe parallelize something there with step `5`.
Any suggestions would be greatly appreciated. I feel so overwhelmed by this topic and need urgent help :(.<issue_comment>username_1: While your concern is perfectly valid (been there myself more than a few times), you should probably consider and accept that this is the way you are going to learn new things for most professional endeavors. This is how things work out of college; e.g. think about this: are you able to foresee what technologies and concepts are going to be required for your next project? And even if you could name them, could you set aside enough time for mastering them to be prepared for the next assignment?
As I see it, you already have a sound plan; I would stick to it. I would, however, recommend that you not get too absorbed in just familiarizing yourself with the new environment. Sure, take some time to get a grip on it, but once you have got the basics, I suggest proceeding with your plan. One major asset of your assignment is going to be your newly acquired skills, but that mastery will come in small portions along the whole road of the project. I find it unlikely (for students and professionals alike) that one can learn a new paradigm in a reasonable time without some means to guide the effort, e.g. class projects. So, instead of trying to get to the bottom immediately, get the basics and learn the rest along the way.
Other than that, as I said before, your plan is sound, just don't let the fear of the unknown overwhelm you into panic.
Upvotes: 3 [selected_answer]<issue_comment>username_2: In my (albeit limited) experience, depending on where you are in your career, you may want to search for a collaborator. When I was graduating, I had a project on the back burner that I didn't know how to tackle, so when I had a suitable audience at job talks I'd try to send out dog-whistles about certain aspects of the research project to see if they could give me something back or be interested in the remarks.
In the end, I got a brand-new collaborator (an expert on what I wanted to do) who really started to enjoy the project and is finding new things about the technique in which he's an expert. And the results ended up better than I ever could have hoped! Perhaps your PI has friends whom she/he trusts not to scoop you, in whom you can confide the project, and who could consequently become new collaborators.
Learning this new topic "on the street", as it were, would be the most efficient way for you to learn and retain what you need for your research. This way you don't have to learn everything from the ground up, which would take forever, and your collaborator can help you figure out what you *need* to learn.
Upvotes: 0 <issue_comment>username_3: Learning a new programming language is probably the most straightforward part of this. Unless it's a *really* obscure language, there will be tutorials online and books to help you. If you already know one or two programming languages, learning a new one will usually not be difficult, and you can probably estimate for yourself how long it will take. However, if you're coming from a procedural programming background like C/C++, Java, Perl, PHP, VB, or most of the "familiar" languages, it will take more time to learn a functional programming language like Haskell.
Make sure your learning is very focused. Don't try to learn the entire language (unless you want to), just learn enough to accomplish what you need to. I usually read a chapter or two of the book, then try to code up a bit of the functionality I need. When I get stuck, I read a bit more. Lather, rinse, repeat.
Learning about the simulation environment may be easy or hard, depending on how well documented it is. If there isn't much documentation, but you can get a sample bit of code or configuration, then you can try making tiny changes to see what effect they have.
As for learning about mobile/wireless communication, have you considered talking to a company in this field? Large telecomms companies usually have some connection to academia. If what you're doing has business potential for them, they may fund your research, lend you equipment, or at least offer some mentoring. Or if your project isn't relevant to traditional cellular networks, maybe think about what sort of business might benefit from this technology, and then ask them for some support.
Upvotes: 0 |
2014/08/25 | 1,002 | 4,275 | <issue_start>username_0: During my PhD it bothered me how much time my supervisor had to spend on writing proposals to get funding to do science, which in practice pretty much meant that he had no time to do science because he spent all that time in the time-consuming business of getting money.
As I was finishing my PhD and looked at postdoc opportunities, I wrote myself a project proposal for starting investigators in which I spent overall about one month (literature review, securing collaborators, writing itself, etc). This was very competitive and only one in every 100 applicants got funded - I did not get funding, although I made it to the interviews, which I was told meant I made it to the top 10%. I am not completely unsatisfied about the outcome because I gained experience and contacts, which eventually led me to being offered a postdoc position by one of the people I had included as collaborators. However, the whole process of putting the thing together meant I lost about one month that I could have spent doing science. It also made me realize that even writing a high quality proposal in which I had spent a lot of time working on the details would not necessarily lead to guaranteed success. If only 1% of applicants get funded it means that statistical noise alone is enough to push you out of the winners pool!
Now that I am a postdoc I have to spend some time helping my new boss with his proposals and in the near future (maybe in the next year) I will have to start applying to some competitive project money myself. Again, this means that I will not be doing research during that time and will spend a considerable amount of time trying to get that research funded.
How can this loss of productivity be quantified? Are there studies on how much less research is carried out because of the time spent on competitive hard-to-win grant calls?<issue_comment>username_1: I think this question presupposes that you could do more research if you didn't have to write the proposals to get the money to fund your time. The reality of the situation is that there are more people who want to do research than can possibly be funded, so some system must be in place to decide. Competing for grants is the least bad system that we've been able to come up with.
That being said, there are [studies](https://www.insidehighered.com/news/2014/04/09/research-shows-professors-work-long-hours-and-spend-much-day-meetings). I think the bigger concern, however, is how much time is spent on report writing and other post-award administrative activity.
Upvotes: 3 <issue_comment>username_2: I think the question is barking up the wrong tree, to mix metaphors.
As you proceed through the researcher lifecycle (Ph.D. student -> postdoc -> professor), your responsibilities will change. For instance, one popular criterion for paper authorship in psychology is that one should have contributed to two out of the following four aspects of research:
1. Grant acquisition
2. Data acquisition
3. Data analysis
4. Manuscript preparation
As a Ph.D. student, you will mainly work on 2-4. The postdoc will spend less time in the lab (point 2), more on 3-4... and he will also start working on grants (point 1). Finally, the professor will mainly focus on points 1 and 4, to a degree on 3. However, the bottom line is that all four activities are "doing research", since grants are just the way money is allocated to competing research groups nowadays. (Of course, one could argue that it would be better if funding were just distributed equally among researchers, but that seems to be a different question.)
Similarly, in industry you could wonder how much time, money and energy is spent on marketing, pre-sales and sales activities and how all these resources could be much better spent on R&D, production and actually serving customers. But this would miss the point that without salespeople, there would not be any customers, nor money to spend on all the things the non-salespeople like to do. (And frequently, the conversion rate on business proposals is similar to the 1% you quote.)
So: don't see writing grants as a drain on productivity. Writing (and getting) grants is how you get the money to do everything else in science.
Upvotes: 6 [selected_answer] |
2014/08/25 | 1,135 | 4,888 | <issue_start>username_0: Earning a PhD helps you learn tools and techniques in your field, and lets you become an expert in a very specific area of science. However, once you get your degree and try to enter the labor market, you're competing against people who are younger than you (i.e. recently graduated non-PhD engineers) and who may have actual work experience (i.e. non-PhD engineers with a few years in the market).
There are, of course, some positions in which they cannot compete, including research and teaching positions. Those are the minority, though... in most other fields it seems that holding a PhD is not as important as (for example) work experience.
So, I would like to know:
* Is work experience more important than having a PhD in many work environments?
* What skills can I acquire/demonstrate during my PhD to become more "valuable" than a non-PhD candidate for the same job?<issue_comment>username_1: This question is too broad, but I have a feeling it's a commonly asked one, so I'm going to try to answer it anyways.
To address (what used to be) your second question first: holding a PhD does not make a job candidate any more desirable for the vast majority of positions. Indeed, it can be a factor against the candidate, as they will be perceived as more expensive. For positions where a less-qualified candidate could also fill the role, being overqualified is rarely a good thing. Jobs that are specifically looking for a PhD will typically state that in the requirements. For example, "Masters required, PhD preferred" is a common one in certain parts of the banking sector. However, for entry-level positions (data entry, lower-level analyst roles, etc), you may be at a disadvantage.
Regarding your first question, though, you're being overly harsh on yourself. The process of earning a PhD is *significant* work experience; indeed, that's your main selling point when looking for your first job. Depending on what you did, you will have some or all of the following experience:
* Identifying, clearly stating, and figuring out how to address a problem - this alone qualifies you to be a consultant at any large firm; this is all they do, all the time, for different clients
* Project management
* Advanced technical writing - your thesis, academic publications
* Communication skills - working through the peer-review process
* Public speaking - presenting at conferences
* Experimental design - your research project
* The art of researching - the simple knowledge of how to properly find articles, sources, etc
* ...
Even better, you've been doing all that for four years. You should be selling every single one of those points as hard as you can when you move to industry.
---
EDIT: The above answer stands for the edited second question as well; as a graduate student, you will want to learn all of the above if you wish to enter the workforce. More specifically, though, almost all industry positions seeking PhD candidates will value the following three traits above most else:
* **Self-starter** - shown in that you got your work done
* **Collaborative** - demonstrated through *successful* collaborations with other researchers (successful = researched together, published together)
* **Good communication** - demonstrated through publications, public speaking, conference presentations, teaching, etc.
During the PhD, aim to do those things, and during your job search, emphasize all those traits in your resume and during interviews.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I think the answer in large part depends on the kind of job you want to work in when you have completed your degree. You mention that you do not know the kind of job that you want to work in when you get out of the program. This is unfortunate. A Ph.D. = specialization, and a mistake many Ph.D. students make is to assume specialization = job. In fact, specialization can make it harder to find a job. If your goal is to enter industry, then I think you have an interesting road to navigate. The job market will be kinder to you if you have a range of skills that you are really good at, but the Ph.D. program is going to pull you in the opposite direction. You're wise to want to understand better what it is about getting a Ph.D. that will make you stand out in the workforce, but you'd also be wise to look at some skills for your desired positions that you might not develop while in the Ph.D. program, and look to develop those too while you are in the program. For me these skills are computer science skills. For you they might be something different.
Unfortunately, unless you are in the hard sciences (and even then specialization can be a demon), having a Ph.D. might not mean all that much when you are looking for an industry job and are stacked up against someone with a Master's, a wider breadth of skills, and more industry experience.
Upvotes: 2 |
2014/08/25 | 1,934 | 8,313 | <issue_start>username_0: I am doing my master thesis on wireless networking. I am working on the source code, which the writer of one of the articles provided to me.
Initially, I sent him an email and asked him to send me his C++ source code, and he did. But the problem is that there is no documentation for his code, and he is the only one who knows anything about this codebase. As a result, I totally depend on him. Last night I sent him an email asking a question about his code, and he replied very quickly, only three hours later. I will probably need his help again.
Before last night's email, and after spending one month working on his code, I had seen only minor progress in my work. But after yesterday's mail and his guidance, my work accelerated significantly. I will probably need his help again, maybe another two or three emails a week. But I am afraid he will become angry with me for sending him so many emails.
So what should I do? Assume you are the source code's programmer. Do you think he will become angry about my emails? Or will he be willing to help (since I will cite his article)?<issue_comment>username_1: Although your intentions are probably good, you seem to be taking advantage of the creator of the source code. He is not your personal debugger, nor must he be the one writing the code for your thesis (especially for free). He has written a paper and he has given you his source code. That is all the information you need. Read his paper (or his paper's slides if they are publicly available) a hundred times, till you know every little detail of it, and then look at his code (another hundred times) until you can correlate everything between the paper and the code. Of course this will be slower than sending him a couple of emails, but it is YOUR thesis, not his. Only when you have done all this and there are still unanswered questions, collect all the possible questions you might have (including queries about his code and his paper) and send him ONE email with all of them. Anything more than that is exploiting his kindness. This kind of back-and-forth emailing of questions is one of the many reasons why many people are unwilling to share their codebase.
Since he is probably a nice guy, you should consider that in the future you might collaborate with him or need his help. Being pushy or lazy (and delegating your work to him because you do not want to spend 1-3 weeks refactoring or studying his code more carefully) is a sure way to burn bridges with him. And you really do not want to do that.
On the other hand, if after giving you access to his source code he understands (from your email) that you did everything humanly possible to understand his code and paper and just want some extra help, he will be willing to assist because: a) you seem to appreciate his work, b) you seem to understand his time constraints, and c) you are also a smart, hard-working person worth collaborating with. And this is the message you need to convey.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Why not just ask him? He knows how busy he is, how interesting your project is for him, and how *dumb* (if any) your questions are. You can also offer to pay him back by writing a (partial) manual on his code, as you go on understanding it.
If I were to receive this kind of email, I would see that you are acknowledging my effort and trying to be respectful of my time. And maybe this will be good for me, forcing me to rethink aspects of the code, and make a mental note of doing better documentation in the future.
Also, in that situation, I would greatly appreciate being kept in the loop about your project. Even if I can't or won't help, I would like to see the progress.
A quick answer is usually a good sign: it means that he finds your questions interesting and not something you should have been able to figure out by yourself. In any case, if you want to make sure, you can ask someone who is roughly familiar with that code (probably your advisor) and see if the questions are indeed something you should have been able to figure out by yourself; and if you are not able to, they should teach you the techniques you are missing to do it.
All this said, I have been given messy code to work with three times, and all three times I ended up grabbing the paper and reimplementing it myself in a couple of days (and in two of those cases, the result was way better). Maybe your case is too big for that, but you should consider it.
Upvotes: 5 <issue_comment>username_3: To some extent I'm probably repeating what others have said here already. But in any case, here I go with my 2 cents.
First, the situation you describe is actually quite unusual. In most cases where code is used in a publication, the code is not published, and if it is not published, it is often unobtainable from the authors. If they do provide it, they are not very likely to answer questions about it. For example, often the code is written by some junior person like a grad student, and once that person has departed, the senior authors, who are often also the corresponding authors, don't know anything specific about the code, because they have not actually written any of it. They may also no longer have a copy of it, if they ever did.
So you are already in a good situation, in that someone is responding to you.
Another thing to bear in mind is that academics like people to be interested in their work. Since your correspondent wrote the code, he is probably the main author of the work; people who write the code generally are. So, he may not mind answering questions about his work as long as they are not stupid. Avoid basic/general language-related questions which are not specific to the code, for example.
As someone else said, if you want to know how he feels, why not ask him?
So, I'd suggest a few concrete things:

1. Express your appreciation for the time he is spending replying to you. Don't go overboard; a sentence or two is enough. But it is important to do so.
2. If you are interested in or open to having him as a co-author, ask him if he is interested in being a co-author. If he is not interested, or you don't want him as a co-author, ask his permission to add him to the acknowledgements. You should certainly add him, if he agrees.
3. Ask him if it is OK to keep asking him occasional questions. Perhaps outline what and how much you expect to be asking him, if you have an idea, so he knows what to expect.
4. This is going above and beyond, in some sense, but since you say his code is not documented, document it, perhaps checking with him first about how to do so, in case he has preferences. Then send the documentation to him. That's a nice concrete way to show appreciation.
Upvotes: 2 <issue_comment>username_4: As someone who is often on the receiving end of such questions, my advice is:
1. First and foremost, understand the theory behind the code before you go ask questions. Many of the questions that I get about my code reveal that the person with the question simply doesn't have basic background knowledge in optimization (e.g. "what's a Cholesky factorization") without which they couldn't possibly understand the code.
2. Make sure that you have the latest version of the author's code. Don't use an earlier version.
3. Understand how the software is licensed (if at all.) You will have to work within the terms of that license (e.g. the author might have put the code under the GPL, and your derivative work will also have to be GPL.)
4. Don't complain about the quality of the code or features that it lacks. If it doesn't do what you need, ask the author whether this is possible within the current code or by a simple extension, or whether the algorithm fundamentally doesn't handle that case, or whatever. Do not assume that what you want will be easy or even possible at all. Depending on the author's response, you might get a ready-made solution, or you might get some information on how to modify the code, or you might be told that it isn't practical.
5. If you're getting errors, then please provide the input data and output so that I can recreate the problem. Make an example that reproduces the problem as simply as possible rather than giving me all of your code. I will attempt to recreate the problem on my machine.
Upvotes: 3 |
2014/08/25 | 410 | 1,543 | <issue_start>username_0: Of late I noticed that recent books or long articles tend to make a short summary before every set of paragraphs. For example, we might have:
**1. Introduction**
We so and so ...
Therefore, so and so ....
**What is so and so:** Let us consider ...
**Properties of so and so:** We notice that ...
where every boldfaced phrase is followed by a set of paragraphs. For a real example:
<http://modular.math.washington.edu/edu/2011/581g/misc/Darmon-Diamond-Taylor-Fermats_Last_Theorem.pdf>
My questions are:
1) Is this style of writing really a trend?
2) Is it still a good idea if we are writing an article of moderate length such as a statement of purpose or a short report?
In my opinion, such a style seems much more intelligible.<issue_comment>username_1: It's probably not a trend, but if you like the style, you are free to mimic it. There are millions of scientific articles written every year. Tracking or finding trends in style in these articles is probably not worth it. It almost certainly varies among scientific communities and among what journals and conferences prefer.
Upvotes: 2 <issue_comment>username_2: Ah, it's just LaTeX's `\paragraph{}` command.
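For instance, here is a minimal sketch of how the layout from your question can be produced (the heading and filler texts are just placeholders mirroring your example):

```latex
\documentclass{article}
\begin{document}

\section{Introduction}
We so and so \dots{} Therefore, so and so \dots

\paragraph{What is so and so:} Let us consider \dots

\paragraph{Properties of so and so:} We notice that \dots

\end{document}
```

By default, `\paragraph{}` typesets its argument as a bold run-in heading, which gives exactly the boldfaced lead-in followed by ordinary text.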
It's not the most widespread style, but I think it's clearer, too, and I suggest you keep using it.
What problems could there be? I can only foresee issues with journal articles, if you encounter overzealous copy-editors. A good card to play is "if your house style contains `\paragraph{}`, I am using it".
Upvotes: 4 [selected_answer] |
2014/08/26 | 657 | 2,789 | <issue_start>username_0: I want to write a paper on Mobile Screens in HCI. As phones are getting uncomfortably big for users, I want either to propose some solution to big-screen phones or to argue logically that small-screen phones are more suitable for the user.
Coming back to the question: for the above-mentioned reason, I was going to run a survey of big-screen phone users and mention it in my paper. Is it okay to mention brand names of phones and their specific products, with images?<issue_comment>username_1: I don't know about hardware, but in case you feel like mentioning some software (and hardware often comes with software...), beware that the EULA can prohibit researchers and scientists from explicitly using the names of their systems in academic papers.
Oracle is notorious for that, they sued David DeWitt following some benchmarks he had published that mentioned Oracle, see [the DeWitt Clause](http://en.wikipedia.org/wiki/David_DeWitt). Luckily the University of Wisconsin supported him, but Oracle banned all Wisconsin alumni for a while.
Upvotes: 3 <issue_comment>username_2: Although in some fields, this may be a problem, as [username_1 has explained](https://academia.stackexchange.com/a/27631/15723), in other fields, it is not only a good idea, but *expected*. For instance, in the laboratory sciences, you would normally list the vendors and suppliers who provided the "raw materials" being used, as well as, in many cases, the experimental apparatuses used to study them.
Upvotes: 3 <issue_comment>username_3: For cases of the type you mention I would probably recommend not naming the brands. The reason for this is that the manufacturer is unlikely to be happy if you say that their equipment is not very good (and they can't all be good or there is nothing to write about). This is particularly important if they have given you samples/equipment to use for research.
The lab I'm at has done several comparisons of various software and microscopes for surface metrology and in all cases the equipment/software is just referred to as A, B, C etc.
While from my point of view the reasons for doing this are purely maintaining good relations I expect there is a legal aspect too. I know we have received long loans/discounts on some instruments which presumably involves a contract saying we can't publish anything negative about the instrument.
If a company has provided some equipment used in this sort of thing, you may wish to put them in the acknowledgements. Be aware, though, that this may de-anonymise your data: if very few people make an instrument of a certain type, saying thanks to company X makes it obvious which one it is. On the other hand, knowledgeable readers could probably make an educated guess anyway.
Upvotes: 2 [selected_answer] |
2014/08/26 | 775 | 3,234 | <issue_start>username_0: I am going to conduct a workshop on my research work and for that I need to show how it is done on the software. I want to know,
* Is it ok to display the proprietary software (through TeamViewer) from my university?
* or Should I use screenshots?<issue_comment>username_1: Generally speaking it is totally fine to use or demonstrate publicly (at a conference or elsewhere) some proprietary software. Many presenters use Microsoft Windows along with Microsoft PowerPoint, and nobody ever gets into trouble.
That said, if you want to be 100% sure you need to peruse the [end-user license agreement (EULA)](http://en.wikipedia.org/wiki/End-user_license_agreement), which is the contract that stipulates under which conditions the program might be used: this is the ultimate reference. For example, the EULA can prohibit researchers and scientists from explicitly using the names of their systems in a benchmark. (Oracle is notorious for that, they sued David DeWitt following some benchmarks he had published that mentioned Oracle, see [the DeWitt Clause](http://en.wikipedia.org/wiki/David_DeWitt)).
Beyond the EULA, one exception I can think of is if you have signed some confidentiality contract, which can happen when the software hasn't been released yet and the developers give you access to some pre-release version of it.
Another exception is that if by "show how it is done on the software" you mean reverse engineer the software, then you might want to be careful: reverse engineering is [borderline](http://lwn.net/Articles/134642/). [E.g.](http://en.wikipedia.org/wiki/End-user_license_agreement#Reverse_engineering) in the United States, EULA provisions can preempt the reverse engineering rights implied by fair use, c.f. [Bowers v. Baystate Technologies](http://en.wikipedia.org/wiki/Bowers_v._Baystate_Technologies).
The last exception I can think of is video games: they can be seen as art, both for the graphic and the music. Just like you can't show a movie or broadcast music publicly in the general case (e.g. [Twitch](http://www.twitch.tv/) mutes some screencasts), video game publishers might use some similar arguments in court. That's very theoretical.
All in all, if you're having some doubts, you might want to drop by the legal office in your University, because as any law 101 course says, the answer is always "it depends".
Upvotes: 5 [selected_answer]<issue_comment>username_2: Screenshots are much better than a live demonstration.
Too many things can go wrong with a live demonstration: Problems with the Internet connectivity at the conference, at your university, or anywhere between the two points. Servers being down for maintenance. Software crashing unexpectedly. Embarrassing user errors during the presentation. You cannot connect your own laptop to the video projector at the conference, and you need to resort to another computer that does not have the right software installed. Yearly software license expiring the same morning as you are supposed to give the talk. Wrong screen resolution for the software. Etc., etc.
None of these reasons are related to copyrights. Of course *if* you can show screenshots you can also show the software itself.
Upvotes: 2 |
2014/08/26 | 4,198 | 17,639 | <issue_start>username_0: Say for example you are a new PhD student and realize that you hate doing research in your free time but you are more than willing to spend several hours in the morning to do research. Was your decision to become a PhD student right for you or not?
I'm asking because people are suggesting that "you have to be really passionate and curious about everything all the time".
What if you want to be curious during your work and just relax and have fun doing unrelated things when you have some free time on your hands?
I guess this indicates that you aren't really passionate, but what's the threshold for "being passionate enough for a PhD"?
These thoughts have really made me somewhat confused about my decision to go for a PhD. It's been a few weeks since I started this program, and after reading a lot of stuff online addressing the question of whether the PhD is right for you or not, I'm already starting to question my decision.<issue_comment>username_1: Having to be passionate and curious about everything all the time is a great recipe—for exhaustion.
You should be curious and interested in the research that you want to do, and be motivated enough to keep going even (or perhaps *especially*) on days when things just don't work the way you want to. More importantly, it should be interesting enough to you that you're willing to put up with the failure that is a necessary component of successful research. But it's not necessary, or even practical or desirable, to spend every waking hour thinking about or doing research.
Upvotes: 5 <issue_comment>username_2: If you are limited to "several hours of research in the morning" this is going to make completing a PhD in a reasonable amount of time difficult. While part time students can and do finish PhDs, it is a long and difficult road. If your interest in the research wanes over time such that several hours becomes a few hours, it is going to become an even longer and more difficult road which may cause a downwards spiral. While many students fail to carve out personal time, completing a PhD does not require you to not relax and have fun. One can be passionate about research and still take a break and relax.
Upvotes: 2 <issue_comment>username_3: My experience is that I was so exhausted from learning new things at the beginning of my PhD, that I couldn't take more in at some point in the day. I would read light literature, watch silly sitcoms, go out, do things completely unrelated to work. And I also worried because I saw that my supervisor and other 'real' scientists seem immersed in their topic more of the time.
By the end of my PhD my fingers keep itching to write down ideas or look at data when I'm doing other stuff. It simply gets easier because there is less novelty to the work you're up against, and because you become more competent within your field. Once you feel that the knowledge you have can contribute to the field, it's a whole other ballgame. But that takes time, and it's really nothing to worry about when you're just beginning. Taking time off to do other things is still incredibly important of course - not only psychologically but also for work productivity - but the spontaneous, out of the blue research-related insights tend to increase in frequency over time. I don't think the number of hours you like to spend doing research right now can tell you all that much about what's to come.
Upvotes: 5 <issue_comment>username_4: The other answers already explain that no, you shouldn't be doing research all the time, with no time for breaks, relaxation and other interests. All this is quite correct.
However, I am a bit unsure whether the stark dichotomy "all research all the time" versus "a few hours in the morning" is really helpful here. Maybe an answer to the following question is more helpful:
*Is doing a Ph.D. right for you if you plan on doing it as a 9-5 job, 40 hours per week?*
And there, I would be a bit more careful. Yes, there are people who can do a Ph.D. like a "normal job", at their desk at 9am sharp and dropping their pencils at 5pm. It is *possible*. However, my personal impression is that these people will avoid the typical Ph.D. burnout, but they will likely not be top performers. So my answer to the modified question above is:
You will probably do fine with a 40 hour workweek, if you make sure to stay self-motivated. However, you should think deeply about just *why* you want to do a Ph.D... because unless you are uncommonly brilliant, you will likely not be productive enough to stay in academia and compete with people who routinely put in 50-60 hours per week. You may want to think about leaving academe with the Ph.D. and going into industry.
Note that I am not saying "40 hour Ph.D. students" are lazy. However, if you can't work up the level of passion and commitment to your topic that makes you *want* to put in 50-60 hours per week often, then there will likely be someone else, and that someone else will have more publications five or ten years down the road.
Upvotes: 7 [selected_answer]<issue_comment>username_5: Since you're only a few weeks into your PhD programme I suspect you have a somewhat limited understanding of what research is and how varied it can be. I'll give you my perspective, as a PhD student.
I think a lot depends on how excited you are about your topic. When I did my MSc, I was fortunate in being able to propose any topic I liked within a broad range of areas. So I ended up doing research developing some of my own ideas, with a lot of freedom to follow my own path. I got good results, and now I'm working on my PhD. I love it. I'm not sure how passionate I'd be about doing research based on or forking off of someone else's idea. But since it's my own "baby", I'm very passionate about it, and would do it for fun even if there wasn't a PhD at the end.
But let me put this into perspective by pointing out that I'm not equally passionate about all parts of research. I actually enjoy writing, so I don't mind writing papers, but I wouldn't say I'm passionate about it. Sometimes I find paper writing tedious. I love thinking about my research, doing it, and talking about it. I do not love writing proposals; it gets easier over time, but it's not something I would do for fun. As for reading the literature, it depends on how well-written it is and how relevant it is to me. I don't mind it, and I recognise the value of it, but I'm not passionate about it. When I get home at the end of the day, I'm not tempted to read papers or write proposals.
I found the first six months of my MSc to be a struggle, and I suspect most people do. I read papers, but found it very difficult to concentrate on what I was reading because the language was so stiff and formal. I spent too much time reading things that weren't that useful. I made false starts. Research is a skill that takes time to develop, and although others can help you, ultimately it's something you have to teach yourself.
You're in the PhD programme now, so unless you're truly miserable, I suggest not to worry if it's right for you until you're further along, and have a broader picture of what research really is.
Upvotes: 2 <issue_comment>username_6: I will also give my perspective as someone who is almost 3 years into a PhD in the UK.
I am one of the unusual students in my department who treats my PhD time like a full time job. I come in at the same time every day (9:30 approx.) and leave between 5pm and 6pm every day. I very rarely do any work at home, because I need the time to spend doing other things. My partner works full time, so we have a routine during the week and like to spend time together in the evenings and on weekends. The only times I actively work at home are when I have deadlines coming up (i.e. progress report due, presentation to finish). That said, I quite often discuss my work with my partner, which can help me think. My supervisor is very strict, so I certainly wouldn't get away with being away from my office without permission from him.
The first few months of a PhD aren't really indicative of what the rest of the time will be like. You will likely spend most of your time initially reading up on your research topic, which can be incredibly dull! Annoyingly, much of the scientific literature is written in a way that makes it hard to read, so when you are spending most of your day reading, you will find it difficult.
Once you have settled into your PhD, you will spend most of your time doing other things. It does depend on your research area. I am in the climate field, so much of my time is spent doing data manipulation and analysis on large datasets. During my first 6 months, I was doing a lot of data collection and quality control. I was also learning a lot of programming skills that I required to be able to work with these datasets. I also took some time to go to skills courses offered to PhD students at my university. One course that was very useful in my first year was about literature searches - how to do effective searches and also how to record the information learnt while reading. I can go back to a paper I read 2 years ago, and not have to re-read it because of the notes I made every time I read a paper.
Different universities provide different experiences, so you may get the opportunity to do some teaching (unfortunately I don't have that option). Attending a conference in your first year is a great experience, and something to work towards. I presented a poster of my research at my first conference, which was also a new skill learnt.
I guess what I am trying to say is that you don't have to spend hundreds of hours per week working on your research to be a successful PhD researcher. On the other hand, you will need to do more than a couple of hours per day to be able to get your PhD in a reasonable amount of time. I am on course to finish mine in 3.5 years.
Don't let your experience in the first few months put you off, as it will get better. If you still aren't enjoying it after 6 months to a year, then maybe reassess your decision.
Upvotes: 4 <issue_comment>username_7: Full disclosure social science Ph.D. program.
First weeks in a Ph.D. program... here's what's going on.
You're in the hazing period of grad school. People are over-exaggerating.
* Your peers (cohort) are all hypercompetitive with each other right now. There's still very much that need of many to show that they are the smartest person in the room. This usually comes from a place of insecurity (i.e., needing praise, not wanting to feel the impostor syndrome, etc.). Over time, this will fade. Then it will become like most other working environments... there will be lots of honest water-cooler talk, less competitiveness, etc. However, within your lab it will also mirror the corporate world, where your peers will agree with everything your boss says, fake knowing things they really don't know, and generally try to win favor.
* Everyone needs to say how hard graduate school is and how much work it is in the beginning. It's a mantra that no one ever questions. You get the same kind of thing in corporate jobs. The truth is, I haven't found the work itself to be that hard at all! Sure, I've experienced lots of frustration in feeling held up due to delays in feedback, new directions in the lab, and a lack of face-to-face time with my advisor, but the work is quite easy. I have found that the people who do spend way too many hours of their lives working are one of four types:
1) Obsessed with perfection as a means for impressing. This is the typical OC type you surely have encountered through your entire life.
2) Has no defining personal life that gives them sustenance outside of grad school. This is the student who will answer their advisor's email 5 minutes after it arrives in their mailbox, and as such sets the expectation that this is acceptable. Most students are like this in their first year, but there will definitely be people around you who stay this way.
3) Not very competent. When I arrived in graduate school I was really surprised at the amount of incompetence I saw in my peers. I'm certainly of average intelligence, but have always been a self-doer, so I wasn't expecting everyone to be a genius. However, there are just a lot of incompetent people in the programs. Ironically, these are the ones that seem to fare the best in terms of getting through the program. I still scratch my head over this one.
4) Inefficient. This can involve having poor time-management skills, but I find it's more about workflows. My peers spend a lot of time every week doing repetitive tasks that they could streamline and automate in so many different ways. My favorite was a student who needed to run a series of descriptive and regression analyses on newly available data every week. The task literally took over 10 hours every week because this person manually entered everything into SPSS (a point-and-click statistical package). With 2 hours of research on simple coding skills this student could have learned how to automate nearly the entire process and had the task finished in seconds (a minimal sketch of what such automation might look like follows below). The fact that the advisor thought this was a good use of time is a whole other issue! The point is, I find that what my lab mates finish in 20 hours a week literally takes me under 5 hours to finish. **Just work smart!**
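To make the "automate it" point concrete, here is a minimal sketch of the kind of script meant above. The file name, column names, and model formula are purely hypothetical placeholders, not the actual analysis that student had to run:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Load this week's data export (hypothetical file and column names).
df = pd.read_csv("weekly_export.csv")

# Descriptive statistics for every numeric column.
print(df.describe())

# A simple OLS regression, re-run automatically on each new export.
model = smf.ols("outcome ~ predictor1 + predictor2", data=df).fit()
print(model.summary())
```

Wrapped up like this, the whole weekly routine becomes a single command instead of hours of pointing and clicking.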
Your question seems more about work-life balance. This applies in graduate school too. Let the program run your life, give up all the small things you do outside of school, and you'll be saying hello to depression in no time.
Yes, you can have a life while being a Ph.D. student. In fact, I'd say as soon as the Graduate Program becomes how you describe yourself to others it's time to reevaluate your life.
Upvotes: 3 <issue_comment>username_8: I have found that for some reason people really dislike it when other people do things for reasons that aren't "noble".
I posted a question on here a while ago asking about doing a post-doc, and I said that the main reason I would want to do one is because it seems like it would be a fun way to spend my time for a few years. I got a lot of negative replies along the lines of "If that's the reason you want to do a post-doc, you clearly aren't the type of person who should be doing one" or "I hope no one wastes research dollars on you".
Similarly, anyone who creates a start-up with the goal of "getting rich" is harassed. If they had pretended their real goal is to solve some great problem in the world in order to make it a better place, they would have gotten a lot of positive feedback instead.
In the programming community, a lot of people say "you're not a real programmer" if you don't have side projects that you work on in your free time. How can you possibly be passionate about your work if you don't do it in your free time as well?
In my opinion, all of the above is a lot of baloney. Who cares what your motivation is. We're all going to be dead in the next hundred years, and the only thing that matters now is the tangible results you can provide to your employing agency. If I can produce high quality research publications in a post-doc that I'm doing for "all the wrong reasons", who cares? Life isn't fair; there are people who slave away and have little to show for their work, while others hit upon a lucky discovery with little effort and are set for life. If you would rather spend your free time drinking beer and socializing with friends instead of reading articles, but you're still able to make good progress on your PhD, who cares?
I've found that most people who brag about working hard and being at the office all the time are normally wasting their time in other ways anyhow. Sure, they may be "working" 10 hours a day, but they're not actually accomplishing all that much.
So I wouldn't let other students' opinions on whether you're "passionate enough" have any bearing on what you do with your life.
**EDIT**: Having said that, I will point out that *probabilistically*, those who are more curious and interested in their work will tend to do better than those who aren't. If you're forcing yourself to work 60 hours a week, you will *probably* not do as well as someone who has to stop themselves from working that many hours because they're having so much fun. For instance, I spend my research time performing molecular dynamics simulations, writing programs to analyze the data, and writing papers on the results. But then I spend my free time learning about quantum mechanics. Maybe what I do in my free time could somehow help my research; then again, maybe it won't. But I don't get burnt out, because I'm splitting my time up into divisions of what I can handle without anxiety.
Upvotes: 3 <issue_comment>username_9: This is a difficult one, but my impression of having spoken to people who write a lot of papers (some of them very good math papers) is that it's necessary to take breaks and not think about research all the time. This is to prevent mental (and even physical) exhaustion.
I want to comment, though, that you should not necessarily judge what the correct "work schedule" is from professors and other people with permanent positions; these people have much more experience in the game and probably work more efficiently.
I also knew someone who published two articles in good journals and then went to work in finance after the PhD as they didn't want to carry on in academia. I guess if you work 24/7 you are increasing the chance of burn-out early on and not wanting to continue in academia.
Upvotes: 2 |
2014/08/26 | 3,114 | 13,344 | <issue_start>username_0: Organizing a conference is very **difficult** for a scientist: you have to have the right collaborators, to find the right venue, to spread the word, to advertise it, to send the call for papers, to receive the papers, to check them, to organize the lunches and dinners, and so on.
**All these tasks are very time-consuming** and stressful, and they will steal precious time from your scientific research activity, and also from your family/friends time.
So, why do people decide to organize conferences?
**What are the main benefits and advantages of running this complex and time-consuming activity?**<issue_comment>username_1: I'd say the main benefits are prestige and honor. To be able to organize a serious conference, you have to, as you pointed out, have a non-trivial amount of connections and be a respectable and accomplished member of the scientific community. At that point in one's life, some people may decide to use their status to promote their scientific community. In my experience, conferences are connected with the institution the leading scientist is from. So there may also be a significant amount of money that the institution will receive from organizing a conference on a regular basis. So, by contributing so much to the institution, the scientist will certainly hold a more favorable position among his colleagues and within the institution itself. Besides, there might also be financial compensation for the scientist's efforts, which I (perhaps naively) think is a rare occurrence, since I know of only one such practice. Finally, many conferences are inclined to name an award in honor of their founder, usually posthumously. Therefore, when/if the prestige of the conference rises, so does the importance of the award, and that is generally a nice way to be remembered and honored once you pass away.
So in a way, you organize a conference because you can and want to demonstrate your ability to do so, i.e. it proves the influential collaborators and accomplishments you acquired during your career. As a reward, you expect to be remembered and honored and to get satisfaction from the fact that you contributed significantly to your community.
Upvotes: 3 <issue_comment>username_2: **If everyone refused to organize conferences, we would have no conferences.**
Yes, it's that obvious. The same goes for editing journals, refereeing papers, and most other *service* activities.
The main result of a research conference is that it helps attendees do better research, to find and understand new ideas, and to disseminate their own research, etc. By organizing the conference, you are the facilitator of all that. Furthermore, you get to influence the direction of the conference -- what it will focus on, who the invited speakers will be, how it will be run, and so forth.
Additionally, I would argue that academics are paid to organize conferences in the same way that [we are paid to referee papers](https://academia.stackexchange.com/a/7571/81).
Finally, there are fringe benefits to you personally, including the prestige mentioned by @username_1 -- you can list it on your CV under service activities. Frankly, this is not going to make or break your career, and you probably shouldn't be doing it early in your career, since (as you point out) it takes time away from things that may be more essential.
Upvotes: 4 <issue_comment>username_3: It fills an acute need
----------------------
Organizing a conference takes significant effort, but it also usually fills a need within a community. New conferences generally arise when some subfield or a geographic community within a field has a critical mass of research that they want to exchange, but that is poorly served by the existing options.
If ten years ago the FooBar conference occasionally got papers like "FooBaz is a nice new thing" but now there are many FooBaz papers that get rejected with reason "The paper is okay but outside the scope of FooBar" then everyone in the FooBaz community would benefit from a specialized conference on FooBaz - and if it is a success, then it often gets repeated and turns into a [semi]yearly tradition.
If you want to make progress in an emerging field, you are motivated to invest personal effort in making the field a success, to advertise your research and the related research that's growing out of it. If it's an established field, then often you form some organization that unites the relevant scientists and is able to get funding and administrative resources for the explicit purpose of advancing that field - which involves organizing conferences.
It's a shared effort
--------------------
I've actually never seen ***a*** scientist organizing a conference. Usually it would get managed by some university, institute or other organization (though through active initiative of their scientists) - it would piggyback on their existing administrative capacity. Similarly, organizing the papers and organizing the event is usually split among separate people so that the workload is manageable.
Also, I've seen many conferences that get 'rotated' among the community. If a conference makes sense, then multiple research centres are interested; otherwise you can just run a local seminar. If organizing a conference requires your [institution] involvement only, say, every 6 years, then it's not so tedious as to be intractable.
Upvotes: 3 <issue_comment>username_4: Arranging conferences is very hard work. You need to be prepared to organize a meeting place, be a "travel agent" for visitors seeking information on how to get there etc., and then make sure things run smoothly during the meeting. Of course, more individuals will be involved in arranging a meeting; how many depends on the size. So unless you are a despot and can order people to work for you, I would say that arranging a meeting is not for prestige and honour without being well-deserved through sweat and tears (I am assuming you want the guests to enjoy a productive and comfortable meeting atmosphere).
Arranging meetings, from small workshops for tens of people to conferences for hundreds, can be really rewarding. If you arrange for thousands you need to run things through a professional organizer, such as a conference centre. Even that can be rewarding, but it is no less work. To be able to cope with the workload, which will likely start half a year to years before the meeting (depending on the size and format of the meeting), you need to have a good idea of what goals your meeting will have. For smaller meetings where all visitors are assembled for the entire meeting, you probably have a personal interest in the topics to be covered. In a larger meeting with many separate sessions, you and others on the organizing committee probably have vested interests in several different ones. In the end, successful meetings come from a deep interest from the organizers in having the meeting and its themes covered. At least in my field, I have never encountered anyone doing a conference just for glory and filling a CV. The benefits will definitely be there after a successful meeting, but then you have really deserved them.
So, in my experience, organising meetings is really rewarding; it is hard work, but it pays off scientifically and socially if the organisation is done well and visitors return with a sense of value from the experience.
Upvotes: 4 <issue_comment>username_5: Preamble
--------
Having organized 3 different small-scale (~60 participants) events with international speakers on a variety of topics (one of which evolved into a semi-annual workshop series), here are my personal rules of thumb for keeping it enjoyable (**when done right it really is enjoyable!**):
Separate the logistics from the contents
----------------------------------------
This is the hard part, and you are right to dread it. The solution: ask for help! For anything beyond 100 participants, you should consider hiring a professional conference organizer and forming a multi-person program committee, etc. For small-scale stuff, this is what I usually try to do.
1. Start **on time**: 6-12 months in advance is a good range. Remember that academic speakers have teaching loads they need to navigate in their schedules.
2. Secure a **budget** from your manager (professor of your research group, or dean of your department)
	* it should cover speakers' travel reimbursements, catering and accommodation
	* optionally: speakers' gifts (bottle of wine, book etc.), after-conference dinner
	* for my events, $2K turned out to be plenty (and there was no admission fee for participants!), and typically management won't think twice about it
3. Find a good **office manager** / secretary to take care of the practicalities:
	* beforehand: managing the mailing list of participants, travel arrangements of speakers
* during the day: name badges, coffee during breaks, drinks afterwards, printing handouts
4. Find support in the **IT department** for website-related stuff:
* announcements, registration, publication of presentations
* live feed video is a whole different ball game (I have no experience with it, best left to the pros)
5. Find at least **one colleague** willing to share the burden with you:
* this really helps to cover each other's workload peaks (teaching, papers etc.)
In my experience, help with such events is usually enthusiastically supplied when you ask for it. Office managers especially relish the chance to break their daily routine, and they will work wonders to get things done. Make sure to properly and publicly praise them at the closing of the event (e.g. give them flowers / a small gift in front of the audience, and also invite them to the after-conference dinner).
Concentrate on the contents
---------------------------
With the logistics out of the way (okay, you will need to periodically monitor people for their progress and potential hiccups, but that's no big deal), you can concentrate on what you do best: content!
1. Find your **niche**: which unique gap in the world of conferences (both geographically and topic-wise) does your event fill?
2. Define your **audience**: which people should attend and with what knowledge would you like them to return afterwards?
3. Find good **speakers**
	* they should be **familiar with** -but not necessarily known to- the niche/audience that you are targeting (this depends a bit on your goal: getting the state-of-the-art from a world expert is different from getting an interesting and intellectually stimulating view on a topic from a relative outsider)
* you should have **witnessed them present at least once** (either on video or live)
	* you would like to **pick their brains** afterwards (which is why after-conference dinners for speakers/organizers are so great: you get privileged access to experts in the field, and they are usually in a great mood over a good meal / glass of wine)
* this is really the main benefit: **close interaction with experts**. If you are professionally competent and nice to talk to, they will form a good opinion of you and this might give you career opportunities (speaking invitations, job offers based on word-of-mouth etc.)
4. Find a good **conference chair**:
	* should be **knowledgeable** of the niche/audience
* should be **firm but not a dominant** personality so as to get a focussed but also lively discussion between audience and speakers
* you should have **witnessed this person perform in that role** at least once
* if you yourself fit the bill on all three accounts above, just put yourself into that role. It will give you great exposure to an audience and will be a rewarding experience. But don't force yourself into it for the exposure alone (e.g. because you are too shy or too dominant to facilitate discussions).
5. **Mingle with the audience**: apart from the speakers, talking to people in the audience is also very rewarding.
* Obviously you will have to check up some of the logistics but with good support there is usually enough time to talk to a few people during the coffee breaks or at the drinks afterwards.
* Even if you have no other role at the event, the fact that you are the organizer will register in people's minds and they will associate you to the topic of the day and this can lead to interesting future career opportunities (mostly small, like speaking invitations).
The worst two things that ever happened to me were:
* a disappointing speaker (great-looking resume, nice phone interview before, but I had never watched that person speak and it was terrible, so now that's on the checklist)
* a last-minute cancellation by a speaker (you cannot really plan against it because no-one would like to be a reserve-speaker, luckily the speaker enlisted a co-author as a substitute and that turned out to be a good experience)
Upvotes: 2 <issue_comment>username_6: One benefit that I haven't seen mentioned yet, and one of the major reasons I organised a conference:
Once you've organised one, for the next few years after that, you have a good way to defend yourself against substantial pressure from colleagues to organise the next conference, on the grounds of it not being your turn, because you did it recently.
How long your immunity lasts will depend on the size and frequency of the conferences you usually attend.
Upvotes: 2 |
2014/08/26 | 1,056 | 4,532 | <issue_start>username_0: We are currently trying to submit a comprehensive survey article I have been working on for a while as a journal paper. Some of the journals we're considering (I'm in Computer Science) require, for survey articles, submitting a 2-5 page *white paper* to the Editor-in-Chief, in order to evaluate the relevance of the proposed survey (upon which the EiC would either discourage or encourage the submission of a full survey):
>
> Authors interested in submitting overview articles are required to consult first with the Editor-in-Chief (EiC) of their Transactions of choice before submitting a white paper proposal. White papers are limited to 2-pages and should motivate the topic, justify the proposal, and include a list of relevant bibliography including any available tutorial or overview articles related to the subject matter. (...) The EiC solicits input from the Editorial Board on whether to encourage submission of a full paper.
>
>
> (specific requirements example taken from: <http://www.signalprocessingsociety.org/publications/overview-articles/>)
>
>
>
I have found [some](https://academia.stackexchange.com/q/11961/4249) [questions](https://academia.stackexchange.com/q/1597/4249) [on the topic](https://academia.stackexchange.com/q/10259/4249) of white papers, but it looks to me like all of those either concern short standalone papers or white papers for grants, which differs a bit from my situation.
Just to summarize, this white paper (2-5 pages, depending on the journal) is *not* meant to ever be published, but rather to help the Editor-in-Chief and the Editorial Board in deciding on the relevance of the proposed survey. So, for me as the author, the goal of this white paper is to motivate the EiC to invite me to submit the full survey manuscript.
So, my question is *how to structure and what information to include in a white paper, meant as a proposal for publishing a longer survey paper*. Some specific questions I'm thinking about:
* Should I divide it into sections similar to a regular (short) paper (introduction, discussion, etc.), or should it have some other (specific or not) structure?
* The requirements mention including a *"list of relevant bibliography"*: as this is meant to be for a survey article, my complete bibliography is much longer than 2 pages. Do I just mention "related work" and omit the references used as sources for the survey from this white paper, or something similar to that?
* Should I repeat some parts of the survey, or just include the motivation and explanation of the survey topic in this white paper?<issue_comment>username_1: White papers are a great way to get noticed. Many universities also offer them as resources for current students as a way to gain insight into popular concerns within the area of expertise.
Just look at: *<http://www.kenan-flagler.unc.edu/news/publications/white-papers>*
You want to make sure that the white paper gives insight into or focus to the main thoughts of your subject, and that proper credit is given to any of the relevant discussion within your paper. If you do decide to include the focus on your survey, that would be OK. However, discussing what the highlights are without giving the survey the main focus may lead to offers to have the survey published or to have you give a more in-depth discussion of your findings. Hope this helps.
Upvotes: 0 <issue_comment>username_2: If I understand your question correctly, the longer survey paper is mostly or entirely written already, right?
The purpose of a presubmission like this is generally to help the editor decide if the topic is broad enough and well enough aligned to be of interest to a large fraction of the journal's audience.
As such, the introductory material that you've written for your survey paper should be just what is needed: it should already give the scope of the survey, the motivation for it, and an outline of how the rest of the survey is structured.
I would recommend clipping out your abstract and introduction, keeping their references, maybe adding a few other key references if your introduction was reference-light, and sending that in.
Finally, in your cover letter, you should say that this is exactly what you are doing. That will also let the editor know that this isn't a submission in advance of having written the manuscript, but an inquiry of whether to proceed with a fairly mature manuscript. Speaking as a sometime editor myself, that's very useful information.
Upvotes: 2 |
2014/08/26 | 1,935 | 8,409 | <issue_start>username_0: *I'm not sure that this is the right exchange for this question. It asks about the possibilities of research in mathematics and computer science.*
Extended Background
-------------------
I am very interested in mathematics and theoretical computer science. Over the past few years I've gained lots of experience from math competitions, programming competitions, software development internships, and talking to professors, and I've learned that I do not enjoy writing code; I enjoy writing "beautiful" and elegant code in practice, just as I enjoy beautiful and elegant proofs in mathematics, but I do *not* enjoy writing hundreds of trivial imperative lines of code to make some company money. I've learned that I like to think critically about problems, and I like finding the quickest, most efficient, most elegant solutions. I enjoy the mathematical (or theoretical) side of computer science.
So, recently, after coding only with imperative languages my whole life, I discovered Haskell, a functional programming language. It is extremely close to mathematics: Everything is a function. Ideas are defined rather than executed; instead of giving a computer step-by-step instructions, the computer is given a definition. This discovery reaffirmed my passion for mathematics and the mathematical and theoretical side of computer science.
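To make this concrete, here is a tiny illustrative sketch of my own (it is not taken from any book or problem set, just the sort of thing I mean): in Haskell I can simply *define* the Fibonacci numbers as a whole, instead of writing a loop that fills an array step by step.

```haskell
-- A definition rather than a recipe: the sequence starts 0, 1, and every
-- later entry is the sum of the two entries before it.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]
```

Nothing deep, but it captures why the language feels so mathematical to me.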
Minimal Background/Question
---------------------------
I've also been frequenting a website called [Project Euler](http://projecteuler.net/), which is filled with 450+ mathematics-based programming problems. This is essentially the epitome of my passion. I love solving these problems with Haskell (and sometimes, when I'm clever enough, with merely a pen and paper).
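For a concrete (if trivial) example of what solving these in Haskell looks like: the very first Project Euler problem asks, roughly, for the sum of all multiples of 3 or 5 below 1000, and the whole "program" is essentially a restatement of the question:

```haskell
-- Sum of the natural numbers below 1000 that are divisible by 3 or by 5.
main :: IO ()
main = print (sum [n | n <- [1..999], n `mod` 3 == 0 || n `mod` 5 == 0])
```

Of course, the later problems need real mathematical ideas rather than a one-line brute force, which is exactly the part I enjoy.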
I know that many of the questions (if not all of them) have already been solved and researched in both the field of mathematics and the field of computer science, and hundreds of their variants have been as well. However, many of these problems were initially proposed and solved hundreds of years ago by mathematicians (such as Euler and Gauss), and most of the relatively newer problems were all proposed and solved by the 1980s.
So my question is, are there still problems like this that I can think about for a living? If so, which field and sub-field is closest to this type of research? If not, would I be involved in these problems by teaching and then introducing them to students of mine, or hosting competitions?
TL;DR
-----
* Can I solve problems like [this one](http://projecteuler.net/problem=151) as a career?
* If this research isn't viable, would I be more involved with this type of problem by teaching and hosting competitions, etc.?<issue_comment>username_1: This may be a better question for Math.SE. That said...
These sorts of problems fall in the category of **contest math**. Sometimes they are related to active areas of mathematical research (most often number theory, combinatorics, and geometry), but more often, current research deals with problems that are considerably more complex and can't be solved (or even stated) in a page or two.
Note also that, at least in the US, there are very few people who get paid to do mathematics research full-time and exclusively. Most professional mathematics researchers are professors at colleges and universities, and their duties also include teaching and administration (to varying degrees).
There is a recognized subculture of mathematicians interested in contest math. They may get involved in creating problems, organizing contests, and coaching students. At some universities, such activities may be considered a significant part of their research or "scholarship" duties, but they would normally be teaching regular math classes as well.
**Edit:** Actually, there is a category of careers I had forgotten: intelligence. The NSA and its counterparts (GCHQ, etc) employ many thousands of mathematicians. Of course, it's hard for an outsider to know what goes on there, but it could be that their activities have more of a problem-solving flavor. At least they are not bound by the requirement that their work be publishable! The NSA has a well-known (and highly competitive) summer internship program for undergrads, the [Director's Summer Program](https://www.nsa.gov/careers/opportunities_4_u/students/undergraduate/dsp.shtml), so that could be one way of testing those waters relatively early on.
Upvotes: 5 <issue_comment>username_2: As @NateEldredge already explained, such easily-apprisable questions, most often arising in math contests, or as puzzles, are mostly not what could allow a person to make a living as an academic mathematician, except in cases of ultimate-extreme talent, perhaps.
At least as idealized, academic mathematics aims to develop and validate viewpoints that clarify fundamental issues. "Problem solving" is significant, but is at the extreme end of the phenomenological aspects of the business. As <NAME>. noted, most "active" questions in mathematics will not be easily understood by amateurs, or perhaps even by non-specialists. These questions are mostly not of the easily-apprisable sort...
Upvotes: 3 <issue_comment>username_3: I believe @Mangara's comment is very relevant here:
>
> Number theory, combinatorics and geometry (the areas Project Euler mainly draws from) actually have lots of accessible problems that are still unresolved.
>
>
>
Accessible here means that the problem can be easily explained. However, unresolved problems that anyone can understand tend to be **extremely difficult**. If they weren't, then someone would have solved them!
Get a copy of Richard Guy's [Unsolved Problems in Number Theory](https://rads.stackoverflow.com/amzn/click/0387208607). There you can find some problems that constitute "real mathematical research" but are more or less in the vein you describe.
You can usually only (get paid to) do this research in a university, you have to be very talented to get a tenured job in these areas, and it's a long road. Also, due to the highly interconnected nature of mathematics, the techniques used to solve these problems are often vastly different from what you might expect -- for a famous example, see the proof of Fermat's Last Theorem via algebraic curves. So if you want to work in (say) number theory, you'll still need to learn deeply about and use other areas of mathematics.
Upvotes: 4 <issue_comment>username_4: Here's the view from my vantage point:
* The kind of interesting problems you can solve in 30 minutes to 7 days each (as found in mathematics olympiads, competitive programming contests, and which, by the way, have tremendous value by being fun for whoever gets into them and being accessible to large numbers of high-school students, as well as developing their mental capacities and practical skills), are typically not of interest to most professional mathematicians because they are considered too trivial. Research publications require more substance than a couple of observations and a direct application of some 100-year-old theorem. Also, you would drown in the task of citing the relevant literature.
* Therefore, as a professional research mathematician, if you want to keep your job, you will be effectively forced to find a frontier somewhere in some field and publish results which are very likely to be novel and well-received by journals. You can expect to meet lots of interesting sub-problems, but also lots of questions that frustratingly don't have easy answers. Also, the chances of this type of work making an impact on some field external to mathematics are not great.
* Interesting sub-problems come up all the time in all sorts of other research, not just in mathematics. Even in programming jobs that are not considered research — sometimes a requirement says that certain aspects have to behave in a certain way and meet certain constraints, and naïve solutions turn out to not be good enough.
Putting it all together
-----------------------
To alter the question a bit, I think there is a wide selection of jobs where 10-30% of your time will be spent on solving interesting sub-problems. These are not restricted to research in mathematics. There are very few jobs where this figure will be 100% (people do this on a volunteering basis, verifying problems for Codeforces, TopCoder, etc., not to mention actually competing just for the fun of it).
Upvotes: 3 |
2014/08/26 | 987 | 4,124 | <issue_start>username_0: I usually spend an incredible amount of time answering the questions raised by reviewers when submitting research manuscripts to a journal. The length of the response is often longer than the paper itself. Such a process, albeit time-consuming, has significantly improved the quality of the work.
Since there are many thoughts that cannot be delivered in the paper, but which are elaborated in the response to the reviewers, I am wondering whether it is a good idea to upload the response online along with the paper (e.g., on a research profile page). I think this will benefit the readers, but I am not sure what the consequences of that might be. Do note that I don't have any clue as to the reviewers' identities.<issue_comment>username_1: You are free to post your response as you see fit. If you know who the reviewer(s) is/are, then you may need to think twice about mentioning their name(s). I am not sure how you may be thinking of posting such comments, but I assume you will rewrite them into some form of self-contained text. As such it would not be very different from a blog entry, and so one suggestion would be to use a blog-type website to add comments around your publications. You may also provide means for commenting on your papers and associated posts.
But, in short, no problem posting your own thoughts but stay clear of adding the thoughts of others that may be given in a context other than open posts.
Upvotes: 4 [selected_answer]<issue_comment>username_2: *(I request that this answer may be viewed as an addendum to Pete's nice answer here, in light of Nate's comment above.)*
As Nate pointed out, most of the typical referee responses would be concerned with stuff that should enter the manuscript. So, assuming that all his useful suggestions were incorporated in the text itself, there isn't generally more meat from that conversation that could warrant a separate 'response log' to be uploaded anywhere (I'm assuming on arXiv, for example).
But the impression that I get from the question is that the OP is inquiring about suggestions that go deeper than the above paragraph. In some cases, it is possible that the referee queries something along the lines of
>
> How is *[a fact that you established on the basis of your calculation in the manuscript]* consistent with *[a sacred tenet, or a well-established or experimentally verified result]* ? Aren't the two incompatible because of *[some qualitative reasoning, devised by the referee]*?
>
>
>
The reasoning looks valid to you, so you sit down and calculate the implications of your calculation for the established fact, and find that the two are indeed compatible. Then, you identify a weakness in the qualitative reasoning, and let the referee know about this. Now, all this isn't worthy of being included in the text of your manuscript, since it is off-track from the overall theme of the work. Yet, this is a valuable piece of information, and is likely to help future readers because they may also wonder about this apparent contradiction. Responses of this kind are worthy of being put up. Occasionally, one encounters those one-page or two-page ''Comment on [a paper]'' sort of things on arXiv, so these can definitely be put up too. It doesn't always have to be a journal article manuscript.
Lastly, regarding acknowledging the referee, there are two options - either take their permission (ref - Pete's answer), or you simply acknowledge ''the anonymous referee'' for pointing it out, in case option 1 doesn't work out. I know some instances where this has been done in my field, but one example that I can find is [over here](http://link.springer.com/article/10.1140%2Fepja%2Fi2013-13072-1). Sorry, there isn't any corresponding arXiv version for this, so if you can't access it directly, here's the relevant excerpt:
>
> The author would like to thank the anonymous referee for making insightful comments which have been helpful in improving and updating the manuscript.
>
>
>
But seriously, option 1 is the better option (why strip the poor guy of his due credit!).
Hope that helps.
Upvotes: 3 |
2014/08/27 | 1,505 | 6,401 | <issue_start>username_0: I am doing my masters in the UK and am at the final stage of my dissertation. First I have to confess that I regret selecting this dissertation, as it is something I have failed to get a clear idea about. I don't know if that is because I cannot understand the ideas that my supervisor is trying to communicate to me.
At first my target was to implement an algorithm (I was given only a specific field and asked to select an algorithm) using a language that I had never used before. I used to meet my supervisor regularly and was given research papers to read to clarify any doubts I had. There was no support in the implementation other than the research papers, and also no one to guide me on the computer language I needed to use for the implementation. By June, I was stuck a little bit in the implementation, and when I expressed some doubts, I was told that there was not enough time for implementation now, since I was not able to understand it, and that I should instead take an existing piece of software which implemented the said algorithm and create some performance improvements.
I spent a lot of time, more than a month, just setting up the work environment and understanding how that software works. I am not very confident about speaking out, so I thought I would eventually get the hang of it. After thorough research I realised that the said software is widely used all over the world and has already been optimised in all possible ways. But it was already too late, and in the end I am left with almost nothing but some failed trials. Now the deadline to submit my masters dissertation is in a week, and when I spoke to my supervisor last week about the limitations of the software and why I cannot optimise it, I was given a new research paper to read and asked to try its approach.
I learned a lot in this research (many algorithms, some new software, computer languages, etc.), but it was not focussed. I don't know what I will write in my report, as the project is supposed to be a study with evaluation and testing to serve as proof of my conclusions. I have already written a literature review of 35 pages and about 10 pages about the software I am trying to optimise. I cannot submit just that, and I don't think it is a good idea to continue new trials now, as I may not be able to complete writing my report (which needs to be around 60-65 pages) with just one week left. I have a distinction from the marks for my course units, and all of the effort I put in for a year will go in vain if I fail the dissertation, as it has a 50% weighting. What can I do to make sure I will pass my dissertation? Please advise.
NB: I know it is a long question, sorry about that. I felt I need to write the context to explain my situation.
---
Postscript:
===========
I continued and completed my thesis with all the failed trials. Added some sections in the thesis comparing the different approaches I tried, findings based on the trials and some suggestions for future work. Even though I was paranoid till the results came out, I passed with distinction. |
2014/08/27 | 365 | 1,454 | <issue_start>username_0: I am currently a B.Tech. student in India, and am applying for Master's courses in the U.S. I took the GRE exam a week ago.
Does having graduated from high school in the U.S., where English is the native language, count as certification that my English is good enough? Should I still take the TOEFL exam?<issue_comment>username_1: Every school words the requirements, if any, for the TOEFL a little differently. For example, [Harvard GSAS](http://www.gsas.harvard.edu/prospective_students/application_instructions_and_information.php):
>
> Applicants whose native language is other than English and who do not hold the equivalent of a US Bachelor's degree from an institution at which English is the language of instruction must submit scores from the Internet based test (IBT) of the Test of English as a Foreign Language (TOEFL)
>
>
>
A quick look suggests that in general native language and the language of your Bachelor's degree is what matters and not the language of your high school.
Upvotes: 3 <issue_comment>username_2: If you are applying as an international student, you need the TOEFL. It does not depend on your high school education. As an international student, you will be considered a non-native English speaker, so the TOEFL is necessary. If you are applying as a domestic student (meaning you have citizenship or a green card), you don't need the TOEFL.
Anyway, the TOEFL is a very easy exam!
Upvotes: -1 [selected_answer] |
2014/08/27 | 985 | 4,346 | <issue_start>username_0: I am applying for a PhD program and I am in the process of choosing people for recommendation letters. Being someone in the final year of a master's program (and not having been involved with many people in research), one of my potential referees is my internship guide. I worked with him closely for 3 months and a paper came out of the work.
He does not hold a PhD. He is not involved with research in a big way. He holds a Senior Manager post and possesses 20 years of experience. Although he can write a good recommendation letter, will it carry weight considering his background?<issue_comment>username_1: When I am evaluating prospective PhD students I am looking for evidence of research potential. While someone might write a positive letter, unless it provides evidence of research potential (or in rare situations addresses a weak aspect of your application) it is not going to be a helpful letter.
In general, admission committees are going to question the ability of someone who has never been involved in research to assess research potential. So even if the letter talks about your research potential, it may be discounted. The more of an unknown quantity the letter writer is, the more effort he/she is going to have to spend explaining how he/she can evaluate the person. This is almost independent of the actual experience. Admissions committees often have hundreds of applications to look through and don't always spend the time they should evaluating everything.
Upvotes: 3 <issue_comment>username_2: I would think at least twice about getting a letter of recommendation for admission to a PhD program in X from someone who does not themselves have a PhD in X or some closely related field. (No PhD at all is to my mind no worse than a PhD in a totally unrelated field.) The reason is that a grad school admission letter makes an argument that you (i) are a strong candidate for the program compared to the stream of candidates known to both the recommender and the readers in the department and (ii) will succeed in the program. Someone who does not have a PhD in a closely related field to X is simply much less convincing with regard to (ii) than the majority of people who do. One also worries about (i).
In my experience, *all the letters* you get for admission to a PhD program should address (i) and (ii) and not too much else. Thus getting a letter from someone who directed a corporate internship you did or non-academic job you had or volunteer experience you have is a poor choice *unless* these experiences are directly, intimately related to (i) and (ii) above. For instance if you are applying to a PhD program in chemical engineering then if you worked for a chemical engineering firm and did research that academic chemical engineers find significant (so especially if you published any academic papers), then great: get a letter from someone there, whether they have a PhD in that field or not. Also, qualifications are correlated with credentials but not perfectly: everyone has their own example of a famous academic who happens not to have this or or that academic degree. If you are interested in a program on the border of mathematics and philosophy and can get a recommendation letter from [Saul Kripke](http://en.wikipedia.org/wiki/Saul_Kripke), do it! Don't be deterred by the fact that he doesn't have a PhD: that would be ridiculous.
Back to your situation. About your potential recommender, you write:
>
> He does not hold a PhD. He is not involved with research in a big way. He holds a Senior Manager post and possesses 20 years of experience.
>
>
>
This does not sound like a good choice. By "not being involved in research in a big way", he cannot speak to your research potential as well as someone who is. Since he does not have a PhD, the readers will probably wonder about how trustworthy he is (relative to other writers) when he insists that you will be successful in a PhD program. Unless you have a good reason to believe that the faculty doing admissions will know this person personally and esteem him highly, I would avoid getting a letter from him. However, if you feel like he is uniquely well qualified to speak to issues (i) and (ii), then it might be worth trying to have him give information to a more traditionally appropriate letter writer.
Upvotes: 3 |
2014/08/27 | 889 | 3,752 | <issue_start>username_0: Can I go for a PhD in the USA, Canada, Australia or any other European country with my master's degree from a German *Fachhochschule*? I am planning to work for two years as a lecturer and researcher in my home country (Pakistan) and look for PhD funding/admissions from there. I believe the experience gained during these two years will make my application stronger.
Secondly, is there something I should do before leaving Germany (extra courses for more CPs, degree recognition or any such thing)? I'm enrolled in a 120 CP Master's degree with the following division:
* Course work = 60 CP
* Mandatory internship = 30 CP
* Master Thesis = 30 CP
Do universities from these regions require more CPs? I've heard this is the case for German technical universities.
This [question](https://academia.stackexchange.com/questions/23774/are-masters-degrees-from-german-universities-of-applied-sciences-fachhochschule?rq=1) is related but it is about PhD in Germany only.<issue_comment>username_1: In principle, there's no formal obstacle to applying to a US program, since almost all of them require only a bachelor's degree for admission. I similarly don't see a major problem with admissions to Canadian or other European universities, since you would have a master's degree and therefore have an "equivalent" degree, so long as you're staying in the same discipline (electrical engineering to electrical engineering); rules may be different if you're moving between fields (e.g., computer science to materials science).
Where you may run into a problem in admissions is that, as a *Fachhochschule*, your school may not have as strong an international profile as a *Hochschule* and other schools of equivalent rank in Europe. Consequently, the school you're applying to may never have had applicants from your *Fachhochschule*. As a result, regardless of how good your profile and application are, you still represent an "unknown" quantity, and therefore there is a greater risk to the department in admitting you instead of someone from a school that is more familiar to the department to which you're applying.
For non-European applications, the number of credit points isn't that important. For European admissions, there may be an issue regarding this, but that's really decided on a university-by-university basis, so you'll need to consult the individual schools you're interested in for guidance. However, taking additional credits beyond what is required for admission won't help you nearly as much as demonstrating research potential.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Referring to [Wikipedia](http://en.wikipedia.org/wiki/Fachhochschule), the Fachhochschulen in German-speaking countries play the role of institutes of vocational education. It is difficult to give a direct answer. The situation really depends on the PhD program and the university in the USA or other English-speaking countries to which you are going to submit your application. Basically, the admission committee there will investigate your profile, or they will have a professional third party provide an objective investigation.
In my humble opinion and experience, in a quite normal situation in German-speaking countries, a student holding an MSc. or Diplom(FH) from an FH may be admitted directly into a Master's program at a university that is allowed to award doctoral degrees. But that student will definitely be required to take the examinations for the core lectures of the Bachelor's program at that university. The universities in the USA very probably know this.
I suggest a solution: take a Master's course at an internationally accredited university in your home country before you take a step further.
Upvotes: 1 |
2014/08/28 | 1,648 | 6,904 | <issue_start>username_0: I've heard that salary inversion is a problem in academia, and it happens when Universities continue to hire new and highly qualified people at higher and higher salaries, but they don't increase the salaries of the existing faculty at the same rates.
I want to ask - why is this actually a problem? Shouldn't salary be based on merit and qualifications, not how long you've sat at a particular desk?<issue_comment>username_1: I think there are several issues.
You might argue that salary "should" be based on merit and qualifications, but these are highly subjective and often evaluated by people within the university who have other vested interests, so there's a substantial risk that internal politics or other factors may not measure merit in a fair way. Given this, existing faculty may prefer a system that puts more weight on objective measures like seniority, and faculty have a lot of say in the administration of a university.
On the other hand, new hires are not really paid based on their merit and qualifications either: they're paid their market price. The dean may not think that Assistant Professor A's qualifications really "deserve" a salary of $X, but if the job market is strong and that's what other institutions are paying similar candidates, she either pays him $X or she doesn't get to hire anyone this year.
Then, suppose Assistant Professor B has been with the department for 3 years. Her qualifications are comparable to those of A, but she was hired in a year when the job market was weak, so she accepted a low salary offer $Y < $X. Since then she has received a standard raise of a few percent per year, so her salary is still below A's. She could probably go back on the job market and find a job paying $X, but here she is well on her way to tenure, she has settled in the city with her family, her spouse has a job in the area, and they are disinclined to make another (possibly long-distance) move. Moreover, changing tenure-track jobs is always challenging (she would have to get letters from within her department, but if too many colleagues find out she is thinking about leaving, it will hurt her standing within the department). So she has limited mobility and no real leverage to negotiate for a higher salary.
If she finds out A is making more than her, despite being similarly qualified, she will not be happy. Unhappy faculty who nonetheless have no interest in leaving is not a recipe for a well-functioning institution. And a dean who's in favor of correcting salary inversions will probably get Professor B's vote.
Upvotes: 5 <issue_comment>username_2: >
> I want to ask - why is this actually a problem? Shouldn't salary be based on merit and qualifications, not how long you've sat at a particular desk?
>
>
>
Yes, it should. But consider what happens if I've been sitting at a desk for ten years, doing hard work and feeling good about it, and one day you arrive, take the next desk, and have a higher salary than I do. It is hard for me not to feel:
1) That my employer is less than grateful for my ten years of desk sitting, given that I (apparently; as you point out this does not logically follow) am being paid less than if I hadn't done any of it whatsoever.
2) That maybe I have been sitting at the same desk for too long. It gives me the idea to follow your lead and earn more money by starting fresh at a new desk.
Thus: although one can say that the market is the market and its invisible hand is totally amoral, in reality this amoral practice is bad for *morale*. And if you as an employer do happen to value the person who has been sitting there for ten years as much as the new guy you would hire -- or even if replacing the 10 year veteran would be disruptive to the workforce -- then you may have some interest in pushing back against this market force. The desk-sitters certainly will, and depending on their mobility, you may want to take their feelings into account. (In a sufficiently bad economy, maybe you can persuade them that they're looking at it wrong, and that instead they should be very happy about having ten years of steady employment.)
Upvotes: 5 <issue_comment>username_3: I think the main issue here is that you will drive your talent away and keep the leftovers if you do not make the correction.
If the market is higher than a few years ago and you do not correct the salaries, you basically encourage all the ones that have salaries below market to look for a new position. The best ones will succeed while the less good ones will fail. Result: you lose your best elements and keep the less good ones.
Since academia has very few 'fixed' positions the phenomenon is less striking, but in a company (particularly in countries where laying off is frowned upon) you get a concentration of those 'that cannot go somewhere else'. This is definitely not what you want.
Upvotes: 2 <issue_comment>username_4: Besides all the other great answers here, there is another reason that merit does not entirely govern salaries. Many universities have an annual period for "merit" raises which require little justification, saving raises for "equity" reasons for special cases. Since everyone is eligible for these so-called merit raises, almost everyone gets one (management being what it is). As such, institutions impose caps on departments that may limit merit raises to 2-3% per year over the department. Coupling that to the public nature of government employee salaries, you can see how these raises become little more than cost-of-living adjustments. On the other hand, departments have to recruit new hires against a competitive background market that may cause a new salary to be higher than a 2% annually compounded salary from 10 years ago. Thus, an inversion.
Upvotes: 2 <issue_comment>username_5: Salary inversions are mostly a problem at public schools, where the state budget situation and state legislature have some control over whether there will be any raises at all. Due to inflation, this means that often all state employee salaries are shrinking in real terms. So, the longer you have the job the less you're paid no matter how much merit your work has. Hence salary inversions are not typically a result of younger profs having more merit, but rather the budget realities of working for a state government. Of course, there are other effects that can counteract this: many schools have funds specifically intended for reversing inversions, and many schools have automatic raises at tenure and/or promotion so that there are at least some raises that the state can't block.
To summarize: salary inversions are a problem *precisely* because "salary [should] be based on merit and qualifications, not how long you've sat at a particular desk." Inversions happen because given equal qualifications the person who has been at the job *less* time will be paid *more*.
Upvotes: 3 |
2014/08/28 | 2,242 | 8,598 | <issue_start>username_0: I have worked with my adviser for a few months as a graduate student. Everyone, including her other graduate students, seems to call her by her first name. She never expressed a preference to me, so I've been calling her "Dr. Smith". She signs her emails with her first name.
I am worried that I am being awkward. I don't mind calling her "Dr. Smith", but during meetings where other students are present, it would be jarring to call her "Dr. Smith" while other students use her first name in the same conversation. So far, I have managed to avoid this issue by choosing my words very strategically to avoid directly addressing my adviser at all.
I don't wish my adviser to think that I am making some sort of statement by being unnecessarily formal. On the other hand, I don't wish to appear too informal either. Apart from that, once you've addressed someone by their title for several months, unceremoniously switching to their first name out of the blue seems like it would be very strange.
Is there any tactful way of resolving this predicament, besides waiting and hoping for the adviser to express a preference? I am in the US.<issue_comment>username_1: If all her other grad students use her first name, you can too. It would be a very strange person who let some of her grad students use her first name and insisted on others using Dr Surname.
Upvotes: 4 <issue_comment>username_2: >
> Excuse me, do you prefer me to call you Dr Smith or Ellen?
>
>
>
A polite question will solve all your doubts.
Upvotes: 7 <issue_comment>username_3: Although I think that your question is already answered in the other two related questions on this site ([Should your PhD students call you by your first name?](https://academia.stackexchange.com/questions/10671/should-your-phd-students-call-you-by-your-first-name) and [Is it acceptable for me (an undergrad) to call professors and other research professionals by their first names?](https://academia.stackexchange.com/questions/25758/is-it-acceptable-for-me-an-undergrad-to-call-professors-and-other-research-pro/25761)), I am posting my answer as follows.
1. Go back to the culture of the country in which you are studying. In some cultures, it is very normal to call a professor (also a boss, teacher, someone who is older, etc) by his first name. For instance, in my culture, it is very odd to call a professor by his first name because even students are sometimes called by their last name.
2. Look at other students of your advisor who are at your level and see what they call her. If they call her by her first name, you can call her by her first name too. You are a student too. What's the difference? But I advise you to look at the students at your level. Perhaps post-docs or PhDs call their advisor another way.
3. Ask her directly and politely. Try not to complicate things for yourself. Ask her politely which way she prefers to be called. I remember when I wanted to write a professor's surname in an email and was not sure how to spell his name correctly, I asked him and he was happy to tell me the correct spelling.
Upvotes: 2 <issue_comment>username_4: I really don't think this is close to such a big issue as you seem to make it.
What's really bothering me in your question is the following:
>
> So far, I have managed to avoid this issue by choosing my words very strategically to avoid directly addressing my adviser at all.
>
>
>
You are investing **way** too much effort into addressing this non-issue. As I see it, you have 3 options, all of them entirely valid:
1. Ask her, as [username_2 says](https://academia.stackexchange.com/a/27726/15723).
2. Silently switch to calling her by her first name. Everybody does it, so why wouldn't you?
3. Go on calling her by last name, until specifically prompted by her to go for first name instead. You don't mind, she apparently does not mind, so why bother?
All three are probably fine. Just decide on one option, and then start thinking about more important things (such as your research).
Upvotes: 6 <issue_comment>username_5: Don't worry too much about it. If everyone else calls her by her first name, *and* she signs her emails with her first name, then she clearly doesn't mind being called by it.
Switching suddenly to calling her by her first name will be far less awkward than avoiding addressing her at all.
Upvotes: 2 <issue_comment>username_6: All other answers are correct. One note to add: if you and your advisor come from different cultures, it is possible that she does in fact feel awkward about it but is not very comfortable stating so explicitly. So it's likely best to get this straightened out as soon as possible.
Otherwise, this flowchart provides the answer ;-). On a more serious note, you seem to be in the same situation as the student in the comic. In my opinion, **avoiding addressing him/her at all is worse than being either too formal or too informal**.

*Source: [PhD Comics](http://www.phdcomics.com/comics/archive.php?comicid=1153).* ***Do not take seriously.***
Upvotes: 5 <issue_comment>username_7: I can't imagine that someone who signs her e-mails to you with only her first name would object if you addressed her by her first name.
Upvotes: 4 <issue_comment>username_8: A person must never refer to their boss, or to somebody they work with who is in a higher position, by their first name. If one of my employees called me by my first name, I would be deeply upset. It is unprofessional to use your boss's first name while talking to him/her; always use Mr/Dr/Prof. [last name]!
Upvotes: -1 <issue_comment>username_9: **Yes**
Adults call other adults by their first names. You're an adult and she's an adult so you call her by her first name.
(Cultural conventions vary by country but certainly in the anglophone nations, this holds true pretty well)
Upvotes: 2 <issue_comment>username_10: The first criterion is "fit in": do what others do.
On the other hand, from the opposite point of view, it may be worthwhile to think about whether you *want* to use honorific forms of address, however subtle. One might not want to address one's grandmother in an exaggeratedly familiar way, nor one's grandfather, nor father, nor mother, ... nor a significant mentor?
Upvotes: 2 <issue_comment>username_11: There's one other option that hasn't been mentioned: call her either one, depending on the situation.
You've been calling her "Dr. Smith," but, apparently, she has never said, "Please, call me Linda." Therefore, she doesn't seem to mind being called Dr. Smith.
In meetings, everyone else calls her "Linda," but she doesn't seem to bristle, nor has she said, "Will you please show some respect and stop calling me Linda!"
I interpret this to mean she is unfazed by either one.
Nothing says you need to flip a switch, and always use "Dr. Smith," or always use "Linda."
In meetings where everyone is calling her Linda, call her Linda. When you are in a one-on-one meeting in her office, call her Dr. Smith, if that's what you're more comfortable with.
I work alongside several people I have a "part-time first-name" relationship with. I might call them by their first name in some situations, and use their more formal title in others. It depends largely on their rank and position, my rank and position, the formality of the meeting, and who else is in the room.
Your advisor seems to be someone who doesn't mind either name. Be glad you're working with such an adaptable professor.
Upvotes: 3 <issue_comment>username_12: *She signs her emails with her first name*
*Everyone, including her other graduate students, seems to call her by her first name*
These are strong indications this person would be comfortable with you using her first name. But even if she says, either in response to a question or on her own initiative, "Pronce, please call me Mary," *you* may not be comfortable doing so, based on your own cultural upbringing. This happened to me. After finishing a Master's in the Midwest, I moved to the East Coast, where I was the only one calling people Professor So-and-so. It took a few years for me to re-train myself.
(Sample question you could pose: "Do you have a preference about what name you go by with your students?")
But that's okay. I remember some advice given to me in Latin America when I was struggling with choosing between the formal and informal modes of address in Spanish: **"What matters isn't what you call the person, it's what you say, and *how you treat the person*."**
Upvotes: 2 |
2014/08/28 | 846 | 3,702 | <issue_start>username_0: I am about to submit my first article to a peer-reviewed journal. I have basically already decided which one, but I have a shortlist and am still in principle considering my options. One of the questions I have is about the timetable for eventual publication. What should I expect? (In the long run, it probably won't make a large difference, but for me, right now, it would be nice to get something out the door this side of New Year if it is at all possible.)
I am puzzled by the lack of information about timetables and deadlines on the sites of these journals. I have looked at journals from related fields in the past and they all seem to be very secretive (or undecided?) about these things. Is this on purpose?
I guess I can infer something from previous publication dates -- a biannual which was last published in May might be slated for its next issue in November, with a deadline several weeks before that, which I probably won't make (it is now late August and the review process apparently takes several months). But why don't they simply put a date up front so authors won't have to fret?
Would it be out of line to email the editor of the journal and ask about the planned deadlines for the next couple of issues?<issue_comment>username_1: In most fields and for most journals, there is no time-table because **it takes as long as it takes.**
This means that after they receive your paper, they will send it out to a few referees, who have to be found first. How long finding referees takes is not under the control of the journal editors - they have to ask people until three of them say "yes".
Then the referees have to write their reviews. Typically, they have about 3 months for that, and a request for an extension is almost always granted, so think of it as more like 6 months. Exceptions apply here (e.g., the journal "Science", which manages to enforce shorter reviewing times).
Then your paper may be accepted only on condition of changes. You make the changes within a month or so, and then reviewing starts again. So there goes another 3-6 months, in addition to the time that the editor needs for organizing the process.
This means that for most reputable journals, there is a long pipeline of papers in progress, and whenever they assemble a new issue that is not a special issue, they take some papers that have already gone through the process and publish them. There can be a substantial "out-queue" of these papers, which is why some publishers came up with the concept of publishing "done" papers before a journal issue has been assigned (e.g., for Springer, this is called "Online First").
As an example, in computer science, an overall time span of 1-3 years is common.
As a bottom line, the time to publication can only be influenced by the journal editors to a small degree, e.g., by how swiftly the editor acts whenever there is something she can do at a given point in the process. So publishing a timeline would make little sense.
Upvotes: 4 [selected_answer]<issue_comment>username_2: This will depend significantly on what your field is. For me, there would be no way anything other than a very short (and preferably very significant) paper would appear this year. For others a paper will be bordering on out-of-date by then (I'm told).
Upvotes: 2 <issue_comment>username_3: I personally use the information given in published papers (e.g., Elsevier journals provide dates for submission/submission after revision/acceptance or publication). This can be somewhat indicative, yet I have experienced both ends of the spectrum (later/earlier than the average turnaround).
Upvotes: 2 |
2014/08/28 | 1,668 | 7,143 | <issue_start>username_0: I have realised that my performance improves when I make my progress publicly accountable. For example, my jogging times improve and I lose more weight when I enter myself onto a public rankings list. I would imagine that my academic progress would also improve if I make my progress publicly visible.
I'm wondering what online rankings or any other publicly visible progress reports might be available in academia? I've heard of impact factors, but is there anything else? I'm sure there are people against turning academia into a competition, but I find some sort of competitive element really improves my performance.<issue_comment>username_1: You can always compare the length of publication lists, e.g., using [Google Scholar](http://scholar.google.com/), [ResearchGate](http://www.researchgate.net/) or even your colleagues' homepages.
Summaries of publications and their impact are (attempted to be) provided by the [h-index](http://en.wikipedia.org/wiki/H-index) and similar measures. (Impact factors really pertain to journals, not researchers... but indices for researchers may take into account whether the researcher publishes in a low- or high-impact journal.)
There can be, uh, *lively* discussions about the merits or not of such measures, especially in the context of hiring and remuneration decisions.
One problem with using such measures for short- and medium-term motivation as you propose is the [lag between the work and the publication](https://academia.stackexchange.com/questions/27728/journal-publishing-timetables) - feedback is *far* too slow. If you are looking for motivation, you may want to check [Productivity.SE.com](https://productivity.stackexchange.com/questions).
Upvotes: 3 <issue_comment>username_2: Some more ideas to complement username_1's answer.
Another possible idea that crossed my mind multiple times in the past is making a [Trello](https://trello.com/) board showing my papers in different columns according to their progress status, with a scale such as:
* 0: preliminary idea, not clear if it will even work
* 1: some experiments or theoretical work done, seems promising; will likely lead to a publication but lots of work needed
* 2: most of the experiments/proofs ready, still to write down in coherent form
* 3: draft mainly ready
* 4: published as preprint and submitted
* 5: first round of positive reviews, awaits modifications or already resubmitted
* 6: published; congratulations!
Other random ideas: if you are writing a long document such as a thesis or a book, you may want to publish online a graph tracking the number of pages written vs. time. Similarly with lines of code. If you are using git you can gather lots of statistics using [GitStats](http://gitstats.sourceforge.net/) ([example](http://gitstats.sourceforge.net/examples/viewgit/index.html))
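To make the "lines of code vs. time" idea concrete, here is a minimal sketch (my own illustration, not part of GitStats; it assumes Python 3 and that `git` is available on the command line) that tallies the lines added per day in a repository's history:

```python
# Illustration only: sum the lines added per day in a git repository,
# which can then be plotted as a simple progress-over-time graph.
import subprocess
from collections import defaultdict

def lines_added_per_day(repo_path="."):
    # "--numstat" emits "added<TAB>deleted<TAB>file" for each changed file;
    # the custom pretty format marks each commit with its (short) date.
    log = subprocess.run(
        ["git", "log", "--numstat", "--date=short", "--pretty=format:DATE %ad"],
        capture_output=True, text=True, check=True, cwd=repo_path,
    ).stdout
    totals = defaultdict(int)
    day = None
    for line in log.splitlines():
        if line.startswith("DATE "):
            day = line[5:].strip()          # e.g. "2014-08-28"
        elif day is not None and "\t" in line:
            added = line.split("\t", 1)[0]
            if added.isdigit():             # binary files report "-"; skip them
                totals[day] += int(added)
    return dict(sorted(totals.items()))

if __name__ == "__main__":
    for day, count in lines_added_per_day().items():
        print(f"{day}\t{count}")
```

Feeding the resulting per-day totals into any plotting tool gives the kind of progress graph mentioned above; for prose you could count words in your LaTeX sources instead of lines.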
Upvotes: 2 <issue_comment>username_3: [Microsoft Academic Search](http://academic.research.microsoft.com) maintains top-something lists of researchers in different fields (e.g., top-100 researchers in software engineering, top-20 researchers in WWW in the last 5 years, etc.). Clearly, you will only start showing up in these rankings after you already have some number of cited publications, so it is probably not useful for your PhD research. Also, as I have experienced myself, [the ranking is weird](https://academia.stackexchange.com/questions/16303/how-does-microsoft-academic-search-generate-its-rankings), so I wouldn't say it is exactly a reliable way to measure your progress. Finally, it should be noted that the feedback achieved by these rankings is going to be **very slow**. In fact, you will (theoretically) see improvements in these rankings **years** after you wrote high-impact papers. It is not something you can use to track how good your last week was.
You could also use [Google Scholar](http://scholar.google.ch) and compare your profile against profiles of other people of similar academic age or standing. The good thing here is that you should at least show up as soon as you have co-authored something. However, similar restrictions to above apply - progress will be very slow, and any comparison based on e.g., Google Scholar h-index will be a noisy heuristic at best.
Upvotes: 2 <issue_comment>username_4: As many answers have already pointed out, academic progress is slow. Changes in comprehensive rankings will take their time. I assume this is the reason why people say that a person in academia must endure frustration and have tons of patience. On the other hand, humans usually seem to need much more frequent feedback in order to actually stay motivated.
**I find one little trick particularly helpful: find colleagues who work on closely related topics. Share information and work together on the same project.** Seeing someone else having success with his or her share of the work will surely make you want to contribute as well (if not more). I must admit that this will only work if everyone involved aims for the success of the project. There are various ways of measuring, publishing, and rewarding progress in this case: the tool can be as simple as a progress bar on an intranet webpage that advances as milestones are reached, money for visiting a conference on the condition of a publication, or the already mentioned Trello board that visualizes how the contributions of different people in your group help a project advance.
Sharing information in a small study or work group breaks long-term progress down into the small steps that academia actually enforces: idea, thinking, trial, and finally success or failure. Adding *discussion* at any stage of this process will help you uncover misunderstandings and certain failures faster than if you worked on your own. Linearizing your thoughts in order to tell them to someone forces a clear picture in your head, again ruling out possibly overlooked details.
Furthermore, swapping ideas will help everyone in your group gain insight and see the same world with different eyes. For every topic (however old and worn out it may be) there is a perspective or context that has not been considered yet. Finding out about these is something I found particularly motivating during my time of study. Keeping the discussion with colleagues and friends alive will surely help you with this.
**My approach to answering your question is: choose a method of feedback that suits you and try to shorten the intervals. The method of feedback does not matter as much as the time you have to spend without it.**
Lastly, a small excursus: we have some source code for numerical computations that we share among the PhD students in our work group. It is always a bit frustrating to find oneself spending hours debugging someone else's mess. Recently, we started offering bounties (you know those coconut-chocolate bars?) for bugs someone finds in someone else's source code. Now one has one more reason to watch out for bugs in one's own code (to avoid paying a "bounty"). Moreover, the frustration is somewhat mitigated when a bug is found, since one receives (small) compensation.
Upvotes: 1 |
2014/08/28 | 748 | 3,477 | <issue_start>username_0: A few months ago I reviewed my first paper. The authors submitted a major revision and I am asked to review the paper again. I received a long cover letter where my comments and the comments of two other anonymous reviewers have been answered.
The paper is relatively long (40 pages) so I am trying to avoid unnecessary work. Should I:
1. Re-review the **entire paper** as if I saw it the first time,
2. Just check if **all** comments have been addressed, or
3. Just check if **my** comments have been addressed?
I am afraid that if I just check the comments, it is possible that the authors have made some other changes, or that their changes might have broken the integrity of the paper as a whole. On the other hand, if I re-review the entire paper it might be unfair to the authors.<issue_comment>username_1: This is difficult and, to some extent, a matter of opinion. So, my opinion is this: Your job as reviewer is to identify problems and make critical comments that will improve a paper. In the first round, hopefully this is what you did. The editor then looks at these comments to decide whether to offer an opportunity for revision. In many (most?) journals, an offer of revision is an 85%+ chance of the paper eventually being published. So, when the paper comes back to you, it has already received a round of review and has already been deemed nearly ready for publication. You need to check to make sure that your comments and concerns have been adequately addressed. Particularly with major revisions, you also need to make sure that no significant new errors have been introduced. This can be a lot of work, but that is precisely your job; you put in effort now in the hope that others reviewing your paper give the same level of effort. If a paper addresses your concerns and introduces no new errors, your job is to recommend that the editor publish the paper (possibly with minimal additional revisions to address new or lingering concerns).
Upvotes: 3 <issue_comment>username_2: When a major review has been requested, I think you really do need to go through the whole paper, for exactly the reasons you cite—the changes *you've* requested may conflict with the changes another reviewer has requested, and therefore the authors may have had to exercise significant discretion on which one to formally include in their revisions. Similarly, there may have been other changes that arose from your comments or those of your fellow reviewers.
Minor reviews require a lower level of commitment, unless the authors have indicated more substantial changes have been made.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Anytime you are asked to review a paper, that is what you should do. If the editor asks you to provide feedback on how your own comments have been dealt with, then that is what you do. As username_2 states, the comments from several reviewers have been considered by the authors, which means the paper is at least partially new. An example: it is not uncommon for reviewers' opinions to differ, which means authors have to decide what they believe is their preferred direction forward; following all points proposed by all reviewers may be impossible. As a reviewer you may then need to re-argue your viewpoint and possibly find other ways to make these viewpoints count. There are hence good reasons for making a second thorough review of a paper that has been given a major revision by the editors.
Upvotes: 3 |
2014/08/28 | 1,812 | 7,657 | <issue_start>username_0: Consider the following situation: a student has started a bachelor's/master's/PhD at a lesser-known university program but feels that he would benefit from a more prestigious institution with better professors, equipment, and learning environment.
He decides to apply for a bachelor's/master's/PhD (the same degree as his previous level) program at top schools, planning to abandon his current program if he is accepted.
Is this unusual or frowned upon? Is the fact that the applicant has already started a program likely to hurt the applicant's chances? (Do the answers to the above questions depend on the current level of the applicant?)<issue_comment>username_1: In the US this is generally known as "transferring".
At the undergraduate level it is very common, and institutions usually have standard procedures for admission, awarding credit for equivalent courses already taken, and so on. Students may transfer for a variety of reasons: to attend a more prestigious place, or to find a program that's a better fit for their interests, or just because they'd rather live in a different city. Transfer applicants are not necessarily advantaged or disadvantaged compared to new freshmen, though the applicant's record at their current institution will be taken into account.
At the graduate level, it is less common, and tends to happen when a student is actively unhappy with their current program. It tends to be handled on a more *ad hoc* basis and it's hard to say whether such applicants are generally at an advantage or a disadvantage.
Upvotes: 3 <issue_comment>username_2: Speaking strictly from the US, this is defined as a being a "transfer student" and is extremely common. How common? Well, it's a federally reported statistic called "Transfer-Out Rate". I am fond of the [CollegeResults.org](http://www.collegeresults.org/default.aspx) tool, which defines this stat nicely (emphasis mine):
>
> The percentage of students who began in the 2006 cohort of first-time,
> full-time, bachelor's or equivalent degree-seeking freshmen at the
> institution and transferred to another school without earning a degree
> at the initial institution. **Reporting of transfer data is optional for
> colleges and universities that do not consider preparing students for
> transfer as part of their mission.** (IPEDS)
>
>
>
For instance, the [University of South Alabama](http://www.collegeresults.org/collegeprofile.aspx?institutionid=102094) is reported as having about 16% of students transfer out to another institution before receiving a degree. The [University at Buffalo](http://www.collegeresults.org/collegeprofile.aspx?institutionid=196088) (New York) posts about a 20% transfer rate.
So, as the write-up for the statistic suggests, some schools even offer specific degree tracks designed for transfer. In the state of Wisconsin, the entire "technical college" system offers such programs. In Florida, these are called "junior colleges" and offer similar tracks.
Even at Universities designed without this as their intent, there is always a percentage of people who transfer in and out at an undergraduate level. As username_1 points out, this is not nearly so common at higher levels, but it happens often enough and is hardly taboo.
The Upsides
-----------
Sometimes it is desirable to plan to transfer, such as getting started at a nearby location while you make preparations/plans to go elsewhere (I did this myself, starting in a tech school and applying to my target University just the next semester once I had family/work preparations in place). Many people spend 2-3 years at such colleges, taking all the classes they can towards their higher target degree (usually called "general degree requirements") - usually because it is more convenient and often vastly cheaper (a person can easily save tens of thousands of dollars in tuition this way). Class sizes are often smaller. The difficulty level of the material varies, but most "introduction" classes look an awful lot alike after a while.
The Downsides
-------------
If you are looking to go on to higher degrees, especially to a PhD or competitive Masters program, recommendation letters and research experience (when possible) are highly desirable if not absolutely necessary. If you will only be at an institution for 2 years instead of 4-5, then you have a lot less time to find and build these connections! Surely you can do it, but I've personally found it took me about 2 years to find the right connections that seem to have "stuck" and suit me and what I want to do (after figuring out what I actually want to do, naturally). If I had completed most of the degree before coming to my present institution, this might mean I'd have missed out on such opportunities to do some real work with these people.
Most institutions have requirements of "credits done in residence", so you can't just do 4/5ths of a degree at Podunk Clown School Of Higher Learning and transfer into Harvard for a semester and get your diploma. Which is unfortunate, because that sounds like it would make for a great B-movie.
Some administrative complications can come into play too. If you decide to retake a course you did poorly on or failed in a previous semester, the rules about how to do this and whether or not it's possible vary depending on where you took the class. Retaking a class you took originally at School A is not always possible or straight-forward once you are in School B.
How GPA is calculated also differs in this same way, as some institutions only count grades earned in residence, others combine them, some don't transfer 0-credit classes (e.g., if you failed a class), and others transfer all grades even if they don't give you credit for the course towards your degree!
The complexity can be a real headache sometimes, but these are generally annoyances to be overcome rather than deal-breakers.
Big Warning
-----------
Credits earned at one institution aren't automatically accepted anywhere else! Indeed, at some colleges they offer two versions of a class - one that is likely to transfer, and one that isn't; the one that can transfer costs more but is otherwise the same course! This can be a minefield, so if you want to make plans or consider transferring, talk with your target school's Admissions/Registration/Records and whoever would be in charge of a transcript evaluation first!
This process varies hugely by US State and even between institutions within the state (even when they are State schools in the same system!), but suffice it to say that this is an extremely common problem in America and you have to be careful and plan accordingly.
Admissions Considerations
-------------------------
Finally, you asked about how this might affect perceptions of you in terms of admissions or otherwise. The most important thing I've seen is that once you've had a semester or two at any college, most colleges no longer put much weight on your high school grades, SAT scores, etc - if they even require them at all. So it is very common that people will not do great in high school, go to a local or otherwise non-competitive college for a few semesters, get really good grades, and then apply to more competitive institutions. Most Universities understand that proven experience in a college setting is a far better predictor of academic success than any high school transcript or standardized test can possibly be, and thus put little weight on them. This can backfire on you, of course, if you didn't do well at your last program but did better in the past.
Upvotes: 4 |