date | nb_tokens | text_size | content |
---|---|---|---|
2013/01/19 | 925 | 3,852 | <issue_start>username_0: I am currently writing a reference letter for a colleague applying for a US green card. Given that I don't know much about the process and the expectations of the immigration bureaucrats who will handle the application, I asked around for examples of such letters. Obviously, they are all glowing, and I started writing in the same style.
Now, don’t get me wrong: she is truly a great researcher, I wish her the best of success with her application and hope to help as much as I can (at least, not to let her down). But… at the same time, as I finished writing my letter, I wondered: **is it possible that I went over the top with praise?** Is it even possible, in such a case? And if so, how can I tell? I mean, I did not write anything factually wrong, but if read very literally (and outside context), it might sound more like the eulogy of a Nobel prize winner than the recommendation of a mid-career researcher (even a very good one).<issue_comment>username_1: There are a few signs that you might have gone over the top:
* Have you used many *absolute* superlatives ("*the best*" rather than "*one of the best*," "the most dedicated" instead of "extremely dedicated", and so on)?
* Is your letter too long or too detailed, given the length of time you have known the person? (Four pages is probably too long for someone who worked for you on a summer project, unless you've known that person independently in other contexts before then.)
and most importantly
* Would such a letter, if you were the one *receiving* it rather than *writing* it, cause you to have an unfavorable or skeptical reaction about the candidate?
In other words, if it makes you think "nobody's that good," you've probably gone over the top.
Upvotes: 4 <issue_comment>username_2: Letters written for a green card application are structured very differently from letters written for other purposes. As was explained to me when I went through the process, the structure of a green card letter is usually
* I am awesome
* here are all the ways in which I am awesome
* because I am so awesome, you should trust me when I say that this person should get a green card
* and oh yeah, they're pretty awesome, which I can tell because I'm awesome.
I'm only slightly exaggerating here. The point is that GC letters are not read by academics - they are read by lawyers who don't evaluate technical skills so much as achievements and the strength of the recommender. So there's no way to go over the top really.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Having been the recipient of a few recommendation letters when applying for a US Green Card, I can assure you it is impossible to give too much praise. The letters I received from my colleagues were humbling and, to put it mildly, embarrassing!!
Upvotes: 2 <issue_comment>username_4: The purpose of a recommendation letter for green card application is to convince the application reviewer that the applicant is a person the US wants for its national interest.
As long as you don't lie, I think you're fine.
For example, you can say **you think** she is the best scientist you ever met. This would be your own subjective opinion. You think that way. Others may not think the same. No one can say you lie because it's just your opinion.
Basically, you can say anything you want. But be careful: you don't want to step on your own toes. You'd better have evidence to support whatever you say in your recommendation letter. For instance, she'd better be good enough to be called the best scientist you ever met. The evidence would be something like: she received outstanding awards from well-known organizations, etc.
Remember, you'll have to sign at the bottom of the recommendation letter and send it to the US government. Wouldn't you be careful when submitting a document to any government?
Upvotes: 2 |
2013/01/19 | 1,065 | 4,483 | <issue_start>username_0: I'm doing a master's course at a different university than the one where I did my undergrad (both in maths). I did well in my undergrad but now I'm finding it very difficult. The shock of the change is hard to cope with.
It's not just that I'm finding it very difficult; I also get zero feedback: we don't have any tests, and I was not able to make friends, so I don't know how others are finding it. I'm starting to think that I might be an idiot.
* How can I build some confidence in my abilities at the same time as I rush and struggle to keep up with the massive rate of new material I need to learn?
* How might I get myself some reassurance that I can actually pass, or stop worrying about this so much? (I feel terribly guilty using so much money from my family to be here) It's possible that I don't really have a chance at all but knowing that would be fine.<issue_comment>username_1: Feelings of inadequacy are quite common when starting a new graduate program—you are surrounded by talented people (if they weren't, presumably they wouldn't be graduate students!), and there is often a big leap in expectations between the requirements of a bachelor's program and the corresponding graduate program, particularly if you're at a top program in your field.
As one of the commenters mentioned, talking to your professors or teaching assistants may be one good way to get help, or to get at least some reassurance about how you're doing. Depending on what your university or department provides, they may be able to provide you with resources to help you study or prepare for your exams. This might be as much as arranging a tutor for you, or as simple as providing you with sample exams from previous course offerings. If your department has a "graduate student council" or "society" or something like that designed to help out the students, then a lot of these may have already been collected by students from previous years for use by students in later years.
In general, though, don't get too discouraged. Most instructors recognize that graduate school is harder than undergraduate study, and the grading tends to match that view. There are very few graduate school courses that are not graded "on a curve"; otherwise everybody would have bad grades! (When the top score on an exam is 60/100, most schools won't let you fail everybody!)
Upvotes: 3 <issue_comment>username_2: I guess the level of the undergraduate courses at your new university is higher than the ones at your previous university. So it is better if you audit some of the undergraduate courses or study them by yourself. For example, if you have taken algebraic topology this semester you might need to audit general topology too, or maybe you should study some advanced algebra as well. Ask the professors about the background necessary for the master's courses and how you can get this background.
Upvotes: 2 <issue_comment>username_3: My advice is to befriend a second year or older grad student.
I was in the exact same spot as you are now and, unfortunately for me, there was nobody there to help me (I am in a foreign, non-English-speaking country, where people were anything but friendly).
I am in my 3rd year of grad school and I have learned the following golden rules (the hard way).
1. If you are not sure what is expected of you - ask as many people as possible. No one will come and tell you what you need to know. If you are not confident doing this in person, an email is also acceptable.
2. If there is a problem you don't know how to solve, always approach the person you are going to ask with a possible solution. Nobody is going to do your work for you.
3. Grad school can be a lonely place. Find some colleagues to share the pain.
My lack of confidence was usually a direct result of the complexity of a certain task. Just because something seems too complex doesn't mean that there is no solution.
When I find myself in such a situation, I like to step back a bit and clear my head. I usually go out with friends or do some sports. This should jump-start your motivation.
Next step is to break down a task into smaller chunks that are easy to swallow. I use a free web tool called [Trello](https://trello.com/); it also has a smart-phone app and supports collaboration.
Decide on a deadline for the big tasks and try to fit the smaller ones into a schedule.
A series of small successes is relatively easy to achieve and can do miracles for your motivation and overall confidence.
Upvotes: 1 |
2013/01/20 | 1,530 | 6,477 | <issue_start>username_0: Should I put on my CV papers (or "talks") that appeared in (peer-reviewed!) conferences that have no proceedings? Should I avoid duplicates if the same paper also appears in another conference (with proceedings)?
Should I make it clear that the conference has no proceedings, or just list it in the "Conferences" section? If it changes anything, my field is Computer Science.
**EDIT**:
some conferences are peer-reviewed but have no proceedings. Call them workshops if you wish. You submit a paper, and a committee selects 30%-40% of the papers - each accepted submission gets a slot for presenting the results during the conference/workshop; however, no formal proceedings are issued.
(I'm surprised no one has heard of such conferences; maybe I'm using the wrong terms; sorry for that)<issue_comment>username_1: If the conference has no proceedings, your paper is indeed unpublished. Usually those kinds of conferences do not have any problem with you submitting the same paper to a journal or another conference that does have proceedings.
Be sure to check the conference regulations, and if you never did any copyright transfer, there should not be any problem at all.
Upvotes: 1 <issue_comment>username_2: I don't quite understand what you mean by "appeared in conferences" but have "no proceedings", but I'll chalk that up to differences between fields. I'll answer your question of whether one should add this to their CV. In my opinion, you should absolutely add it if you're a grad student or in your early career phase. If it has a DOI, you can list it under the peer reviewed section, otherwise simply list it under conferences (or the equivalent).
At this stage in your career, a CV not only conveys what you've accomplished (peer reviewed publications, awards, degrees, etc.), it also is a measure of your "scientific activity". It answers the question: "Are you someone who is capable of doing research, publishing, attending conferences and presenting your results in front of an audience of your peers simultaneously, or are you someone who simply holes up in their office and publishes in solitude?". It demonstrates that you (possibly) will be someone who networks with their peers, is capable of establishing collaborations, thus broadening their research horizon, etc. It doesn't matter if you've not done these already — it gives a better impression than someone who has done nothing at all.
If you're an established researcher, you probably might not worry too much about it, as by then there are several other metrics that more reliably demonstrate your scientific worth than conference publications/abstracts/talks. Nevertheless, even they have to indulge in this cat-and-mouse game — the difference being that now they have to tout every mundane activity (membership on department committees) as somehow demonstrating their "interest" in the university's affairs.
Upvotes: 3 <issue_comment>username_3: I would not put such a "paper" in the publications section. After all, there is no publication, apart from an abstract.
In my field, conferences usually *do not* have proceedings, and if they do, proceedings papers have very little value. Also, conference abstracts are not peer-reviewed. I've only heard of a single rejected conference abstract, and this was for political reasons.
If you have a dedicated *conferences* section, then I would put the "paper" there, so as not to suggest that there is a publication.
Upvotes: 4 [selected_answer]<issue_comment>username_4: In that case, **the text you wrote as part of your submission is not published** (except maybe in some cases in a book of abstracts distributed to participants, but it doesn't count). So, in your CV or scientific production listing, you can add an item for your talk in the "conference talks" section, but you should not list it the same way as a paper.
In the fields familiar to me (physics and chemistry), it is actually very common for early and mid-career scientists to list “conference talks” in their CV. Later in the career, you may list only “invited talks”, though you should still maintain somewhere a complete list of your scientific production, which includes all conference talks.
Upvotes: 2 <issue_comment>username_5: I assume that you're in computer science, since otherwise you probably shouldn't list *any* conference papers as publications. So I'll answer as a computer scientist.
**No, you should not list such papers as publications in your CV,** because those papers are not actually published. Moreover, there is likely an expectation that the same paper *can* be published at a different conference. **You must not list the "same" paper at more than one conference.\***
I know of several conferences/workshops like the one you describe in computational geometry (my home field), including [EuroCG](http://eurocg.org/), the [Fall Workshops](http://www.umiacs.umd.edu/conferences/fwcg2012/), and the [Young Researchers Forum](http://socg2012.web.unc.edu/computational-geometry-young-researchers-forum/) at SOCG. At all three venues, submissions are *lightly* peer-reviewed, only a subset of submissions are accepted, and a booklet of abstracts is distributed to participants and/or on the web. But no formal proceedings are issued at these events, because it is [expected](http://www.ibr.cs.tu-bs.de/alg/eurocg13/cfp.html) that accepted papers will later appear in more polished form at a *more formally* reviewed conference. (Some early iterations of EuroCG did have [formal proceedings](http://www.informatik.uni-trier.de/~ley/db/conf/ewcg/index.html), despite the expectation of later publication, but other conferences were unwilling to accept papers that appeared in those proceedings.)
As others have said, it's perfectly fine to list those talks under "Unpublished Workshop Talks", especially early in your career. You might even include the acceptance rate if you want to emphasize that the venue carries some prestige. Alternatively, if you did publish the paper elsewhere, you might include the phrase "Also presented at ..." after the publication info.
---
\*...but journals are different. In most subfields of computer science, conference papers *can* be published later in refereed journals, usually in a more expanded/complete form. Even so, I recommend listing each paper only once in your CV, including all publication venues for each paper, rather than listing the same paper once under "conference papers" and again under "journal papers".
Upvotes: 3 |
2013/01/20 | 1,391 | 6,002 | <issue_start>username_0: When engaging in research, I know it's a good idea to read lots of papers and talk to others about what has been done before and what is currently being researched to avoid "reinventing the wheel". That is, to avoid researching/publishing a result that has already been discovered.
In fields where physical experiments are common, replication studies are necessary. But in theoretical/computational research, originality is key and duplication seems to be generally frowned upon. How common is it to inadvertently publish a finding that was already discovered? What do you do when you happen to find yourself in this situation? Should you just scrap your work if your methods are too similar to someone else's?<issue_comment>username_1: >
> How common is it to inadvertently publish a finding that was already discovered?
>
>
>
**Far more common than anyone realizes or wants to admit.**
[Stigler's Law of Eponymy](http://en.wikipedia.org/wiki/Stigler%27s_law_of_eponymy) states that **No scientific discovery is named after its original discoverer.** (Stigler's law was proposed in this precise form in 1980 by <NAME>, who self-referentially attributed it to <NAME>, but of course similar statements were made earlier by many others, *including Stigler's own father.*) I wouldn't go as far as claiming that *every* scientific discovery is misattributed, but there are hundreds of examples. Off the top of my head: Fibonacci numbers, Pascal's triangle, Gaussian elimination, Euler's formula (both of them!), Voronoi diagrams, Markov's inequality, Chebyshev’s inequality, Dijkstra's algorithm for shortest paths, Prim's algorithm for minimum spanning trees, the Cooley-Tukey FFT algorithm, the Gale-Shapley stable matching algorithm (for which Shapley recently won the Nobel Prize in economics), ...
>
> What do you do when you happen to find yourself in this situation?
>
>
>
**Be brutally honest, both with yourself and with the scientific community.**
If your work has already been published, post a reference to the prior art in your web page listing your publications. (You do *have* a web page listing your publications, don't you?) If possible, publish an addendum to your paper. Email anyone who has cited your paper already, giving them the earlier reference. When asked to review papers that cite your paper, include the earlier reference in your report. Become a walking advertisement for the earlier work.
If your work *hasn't* already been published, try to figure out which parts of your work have actually been done before. Some of your results will appear verbatim in the earlier work, so you can't take credit for them. Some of your results will be easy corollaries of the earlier work, so you still can't take credit for them. But perhaps some of your results will take the old work in a new nontrivial direction. Build on that.
Also, if your results were previously known *in a different field*, there may be some value in bringing those results to the attention of your research community.
>
> Should you just scrap your work if your methods are too similar to someone else's?
>
>
>
**Of course not!** Now you have evidence that your methods actually work! Push them further!
Upvotes: 7 [selected_answer]<issue_comment>username_2: Besides username_1's excellent answer, I would like to add one more point about the phenomenon of reinventing the wheel. It touches more on "research towards an already invented wheel" than on "publishing a reinvented wheel".
You have a problem and need to crack it. Your problem is practical and novel, you know that. But in order to solve it you need to invent some machinery and you just do not know whether it already exists or not - simply because you do not have a good feeling for all the subtle aspects and issues of your problem. In such a situation, it is often easier to steam ahead, learn as you go, invent something for your problem and then, when you are already familiar with all the quirks and dark corners of your problem, look around carefully to find out what the thing you invented is actually called. The odds are, it already exists in some form, most probably invented in a different niche for different purposes, and it happens to be very similar to your problem.
Of course the above does not work for everybody, because it can be a frustrating experience to find that somebody else already invented what you did (usually long ago, and in better quality than yours). My angle on this is to always be proud of myself, because those early solutions tend to come from very smart people, so if I managed to independently come up with the same thing as they did, it's a reason to feel better.
At that moment, however, one should realize that his/her approach and angle on the whole issue is slightly different from that of the people who invented it earlier. You simply came to the same junction from a different direction and you are heading elsewhere. At that point it's just great to proceed in your direction, because you can be almost sure that your direction is original and unexplored territory - otherwise the earlier work would be cited, and that's easy to find out.
The process I describe above also partially explains why inventions tend to be named after the people who arrived at the junction later. They simply had a perspective which carried the idea farther, in terms of social impact, than that of those who originally solved the problem. Often solutions get named after the people who popularize them and make their applications bloom, not those who solved them originally.
Upvotes: 4 <issue_comment>username_3: Reinventing the wheel may be beneficial if you explain something better than the previous studies, release your code/software, etc.
In computer science it can be frustrating when people publish a summary of their methods and provide results, but no code that would let others apply them to other data sets. So you end up reinventing the wheel.
Upvotes: 4 |
2013/01/21 | 460 | 1,898 | <issue_start>username_0: Is it good practice to include an abbreviation in the title of a research paper?
For example: PC, IDE, RAM.<issue_comment>username_1: It is acceptable if (Edit: and only if) the abbreviation you are using is common (in your field or in general) and there is no risk of confusion.
I have two papers whose titles contain the abbreviation (RD) which stands for (Rapid Decay). Since it is an acceptable abbreviation in our field these titles are fine.
The best way to find out is to check similar papers or titles in your field.
Upvotes: 5 [selected_answer]<issue_comment>username_2: There is a big difference between "can" and "good practice". I can think of no reason why it should be considered good practice, and a number of reasons why it would be bad practice. Despite this, many journals will allow you to use abbreviations in titles, but you will need to check with the editor to see if you can.
Upvotes: 3 <issue_comment>username_3: Rather than checking similar papers in the field, I would go with checking the submission guidelines.
For example, the IEEE Transactions on Evolutionary Computation will [reject any paper with acronyms in the title](http://cis.ieee.org/component/content/article/7/125-ieee-transactions-on-evolutionary-computation-information-for-authors.html).
Yeah, it just happened to us recently; that is how I know. However, it was our first time submitting to that journal, and we had no trouble with acronyms before.
A friend even told us that the very same journal asked him to put the explanation of S.O.S. since it was an acronym as well.
So, I would recommend checking the submission guidelines.
Upvotes: 3 <issue_comment>username_4: Using acronyms/abbreviations in a title is very bad practice. Doing it also in the abstract and highlights, which are all separate documents used by indexing services, is also bad. Avoid it.
Upvotes: 0 |
2013/01/21 | 670 | 2,758 | <issue_start>username_0: I have been notified by the IEEE organizing committee that my paper has been accepted for their conference, and I have been requested to register. The status of the paper is **AAR**. Please see the quotation below.
>
> [AAR]This paper need thorough revision to be accepted as a full paper
> for the conference.
>
>
>
I have attached an image of their review process.
[](https://i.stack.imgur.com/GvwOa.jpg)
What will happen to my paper after the submission of the camera-ready paper? Is there any possibility for my paper not to be published in the proceedings and IEEE Xplore? Or is it guaranteed to be published after the submission of the camera-ready paper?<issue_comment>username_1: If the Journal/Conference editor/chair has accepted your paper, it is guaranteed to be published, given that you make the changes. That is the reason they emphasize the "review" part.
Some papers have only minor revisions, so if the changes are not made, it won't affect the quality of the conference that much. But if the changes are major, it usually indicates that you have to step up the level of the paper following the suggestions of the reviewers.
In conclusion, as long as you make the changes, your paper should be accepted to the conference, but if you neglect to make them, it probably won't be.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I think the flowchart in your question is pretty clear as to what happens next. But I'll break down the relevant part of the flowchart into words.
You have to make thorough revisions to your paper, and then resubmit. It will then be reviewed again. As a result of the review, it may be accepted, and it may be rejected.
**AAR**: your paper's current status - accepted after revision. It's now up to you to make the thorough revisions, and to then submit the revised paper.
**REV** is the status your paper will have once you have submitted the revised paper.
**RVI** will be its status when the revised paper has been sent out to review. Judging by the flowchart, it will get sent to the same editor and reviewers as before, because a revised paper does not pass through the **WFR** stage of waiting for review, where reviewers and editor are assigned.
It may then be accepted (**ACC**), rejected (**REJ**), or conceivably, according to that flowchart, get returned to you once more as **AAR** for further revisions.
The flowchart also suggests that whether it's accepted or rejected, you still prepare a camera-ready version. That would seem to be very unlikely: I find it very hard to believe there's any use for a camera-ready version of a rejected paper; only an accepted paper would need a camera-ready version.
Upvotes: 4 |
2013/01/21 | 873 | 3,593 | <issue_start>username_0: First of all, I know that your work speaks for you, and if you have really good papers, you have better chances, but just bear with me, and for the sake of argument let's assume that your research is only one of the points to consider.
I did my undergrad at what could be considered the best university in my country (Mexico) and got a Magna Cum Laude.
And then I did my graduate studies at the University of Tokyo, publishing a couple of journal papers (I'm really pushing for that 3rd one!).
This question is directed to people in the US, since I'm looking to find a permanent position.
What is the perception Universities in the US have of foreign Universities? I happen to know that UK Universities like Cambridge and Oxford have no problem (for obvious reasons), but a professor friend of mine told me that other Universities are just not that well known. And having a degree from the Hawaii University was better than having a degree from Tokyo University. (As a side note, he is a professor at Haw Univ, and he wanted me to apply over there)
I just want to know how true or false this is, and, realistically speaking, how hard/easy it is to get a position as a postdoc and eventually a full-time professor if you are not from a US university.
For example, do I have the same chances as someone who graduated from a top university (your Ivy Leagues, Public Ivies, MIT, Stanford, etc.), or do I at least have the same chances as someone from a mid-range university?
As a side note, I have a postdoc at UCLA lined up, so I guess that'll boost my chances a bit. |
2013/01/21 | 1,520 | 5,990 | <issue_start>username_0: Going to conferences imposes some costs on a researcher's personal budget. In all places I know, expenses directly related to travel and accommodation are usually covered (travel, hotels, food), but there are also some indirect expenses that aren't typically covered.
I'll only give one example that is directly applicable to me: when I'm away I have to get a babysitter for the kids (for the days when my wife can't pick them up, say). However, I'm sure there must be other examples.
Are there institutions that cover these “hidden” (or indirect) expenses? What rules do they follow? It must be difficult to know where to put the limit… (*“hey, I'm going on a conference in Sweden in December, which means I have to buy myself a new coat! can I get it reimbursed?”*)<issue_comment>username_1: Usually, if an "indirect cost" will be reimbursed by the university, it must be a cost that would normally be allowed if it *weren't* occurred on travel.
So, for instance, something that would be considered an "equipment" purchase—such as a battery for a laptop to replace one that dies—might be allowed, but babysitting costs might not.
However, most institutions do have a "travel manual" or regulations that cover what costs are permitted for travelers to have reimbursed. If you have any questions about the policy, you should consult your institution's travel office for guidance. (These regulations often change, usually in response to someone else going overboard and exploiting loopholes in the regulation, which are then tightened for everybody.)
My instinct, however, tells me that such policies are probably quite rare for any institution that accepts government financing for its operations. Usually, those funds have significant restrictions on what sorts of expenses can be associated with travel, and thus it's easier to adjust the institution's policies in accordance with that. For institutions that are privately financed, it's a lot easier to institute policies that are more liberal. But you'd probably have to go to an extremely "progressive" institution (maybe a Google?) to find one that will reimburse you for these sorts of costs.
Upvotes: 3 <issue_comment>username_2: The usual solution is very simple: you will get a daily allowance ("per diem"), which is a lump sum of money that covers all small costs related to travelling.
---
A concrete example: a researcher at a Finnish university, travelling to a conference in Germany. You will get a daily allowance of 61 euros per day, tax free. This should cover food and all other small expenses related to travelling.
Direct costs related to travelling (conference fees, hotel, transportation, etc.) are covered based on the receipts. However, lunch & dinner are *not* covered, as they are included in the daily allowance. Corner cases have special rules (e.g., what if lunch & dinner are included in the conference fee).
Upvotes: 5 [selected_answer]<issue_comment>username_3: Sometimes conferences actually organize child care services or provide support for child care that you pay for yourself. See examples: <https://www.hr.cornell.edu/life/support/conference_care.pdf> and <http://www.aps.org/programs/women/workshops/childcare.cfm>. The last one says:
>
> Examples of Allowed Expenses
>
>
> * Daycare expenses at the March Meeting
> * Extra daycare expenses incurred at home because the primary caregiver was attending the March Meeting (e.g., cost of a sitter)
> * Expenses incurred in bringing a babysitter (or grandparent) to the March Meeting
>
>
>
Other than that I don't think many employers explicitly reimburse those costs. **People tend to consider that they are part of the "package" that you accept when you start a career in research** (I am not saying it is a good thing).
Upvotes: 1 <issue_comment>username_4: Disclaimer: I'm from a big Japanese university with lots and lots of money.
Here, as Juka suggests, travelers get paid a daily allowance. For example, I went to Australia last year, and I got paid the exact sum for the airplane plus a daily allowance that is supposed to cover meals and lodging. The thing is, unless you eat like a king and sleep in a 5-star hotel, you'll usually end up with extra money (around 200-300 USD more).
I asked my adviser and he told me that this was normal, and that postdocs and profs get even more money, because it is assumed they have families.
I think it is a good practice, but then again, if your university does not have a huge endowment, it may get tricky.
Upvotes: 1 <issue_comment>username_5: >
> Are there institutions that cover these “hidden” (or indirect) expenses?
>
>
>
Yes, but not necessarily all such expenses, and specifically I don't know about babysitting costs.
To give some concrete examples, I have had the following expenses covered:
* Personal insurance having to do with my travel
* Laundry during travel (although that was from a commercial research outfit so might not apply)
* Membership in a professional society which enables reduced registration fee for a conference
* A tube for carrying posters (as opposed to an actual poster which is a direct expense)
>
> What rules do they follow?
>
>
>
Individual institutions have their own rules, and if these are not in writing, the people in charge of budgets have some set of rules in their heads which you would need to query...
>
> It must be difficult to know where to put the limit… (“hey, I'm going on a conference in Sweden in December, which means I have to buy myself a new coat! can I get it reimbursed?”)
>
>
>
Actually, I just asked a [related question](https://academia.stackexchange.com/questions/89849). I would actually think that if you live, say, around the equator and need to be at a conference in Sweden, you should indeed be reimbursed for the cost of a coat - either partially or fully. After all, you're unlikely to need that kind of coat in your daily life and perhaps not even when you travel.
Upvotes: 1 |
2013/01/22 | 1,922 | 7,763 | <issue_start>username_0: I am about 1.5-2 years into my Ph.D. studies, in the no-man's-land between bioinformatics, systems biology and proteomics. (If you are not sure what those terms are, read: "biomedical research")
Coming from a more mathematical/technical background I was thrilled to work in this field, and my M.Sc thesis was pretty successful. Now, diving deeper and deeper into the field, I feel much less motivated to go on. What frustrates me most is how little we really understand of complex biological systems; all our efforts in the field are essentially just wading in the darkness, trying to find the "holy grail" that may or may not exist. I personally feel that there is an undeniable lack of rigor even amongst the most respectable of scientists out there:
* most biologists really have no clue beyond pipetting liquids left and right; as soon as it comes to data analysis they expect something along the lines of "computer says yes/no" (see: [little Britain's famous sketch](http://www.youtube.com/watch?v=D4A18tUUb2Y))
* computer scientists/mathematicians can't really cope with the uncertainties in the data
* statisticians are essentially the con-artists of the field, rambling on in undecipherable monologues. Sorry if I offend someone, but it feels like one can prove/disprove anything with some creative use/interpretation of statistics.
Putting my rants aside, I went up and talked to one of the younger group leaders in our dept. I feel close enough to the person to give my honest opinion and respect his thoughts on the matter. The first thing he asked me after I was done rambling on, however, was how long it had been since I started. When I told him it's been about a year and a half, he smiled and said: "well, it was about time". According to him, it's common for a Ph.D. candidate to become jaded with his/her work somewhere between 18 months and 2 years in. He claimed that one simply gets deep enough into the field to see all the potential problems/pitfalls in research, and feel negative about it all.
Which brings me to my question(s): is there such a thing as an 18-month syndrome, in your experience? Could it be a discipline-dependent phenomenon, or is it applicable to other disciplines? How can one avoid getting stuck in a tailspin (negative spiral)?
PS: I wanted to tag this question as "research-psychology" but don't have the rep to create a new tag. If someone with more rep agrees with me on the tag, I would appreciate the help :)<issue_comment>username_1: Everything is possible: I'm pretty sure that, in a large enough population of former PhD students, most will tell you that they felt demotivated at some point, but the timing will depend on the individual and the particular circumstances.
It is true, however, that mid-PhD coincides with a particularly large number of negative factors, and it is common to feel bad about your thesis around that time. Heck, it's so common that there is even a [PHD Comics](http://www.phdcomics.com) strip that [highlights it](http://www.phdcomics.com/comics/archive.php?comicid=125):
[PHD Comics: motivation level over the course of a PhD](http://www.phdcomics.com/comics/archive.php?comicid=125)
(Don't mind the exaggerated *x*-axis scale. The area highlighted corresponds to mid-PhD.)
Now, why is that? Well, among your rants, most of the factors are actually listed in your question: Now, you know the field well enough to see not only the good, but also the bad in it. The initial elation has left, and you are left with the doubts. This is sometimes accompanied by deep questioning about your progress: Have I done enough? Have I taken the right course of action? etc.
---
But the most important point is: how to get out of it? Well, part of the problem is a natural “oscillation”, which means this is probably actually just a low point, a bad moment, and it will actually get better. Don't have too much fear of “spiraling down”: you've made it thus far, and you're aware of the issue!
As for more actionable advice, I would say:
* Now that you are more knowledgeable about the field, you can actually start to make better choices: if you don't like a given approach, just steer away from it. You still have some time to do so, and it is part of your PhD to learn to make strategic decisions (if you haven't already).
* You may not see it, but you will be much more efficient during the second half of your PhD than the first, mostly because you have learnt a lot already and can make better decisions.
* Pick a few challenges (one or two) that you would like to meet, and focus on those: you'll feel much better if, instead of chasing some holy grail, you can help solve these specific issues that you care about.
* And remember: completing a PhD means becoming an expert in your field, and that actually means being able to critique its practices, recognize the good and the bad. It sounds like you have actually achieved this goal!
I hope this helps…
Upvotes: 6 [selected_answer]<issue_comment>username_2: For long projects it is common to feel frustrated or even desperate after some time. You cannot continue for years only on the energy that you had originally. Some of the initial magic is fading and you realize there are bad sides. Do not worry. You will also start seeing new good sides to it too. Maybe you and the other researchers in your field are not going to save the world right there right now. But you **are** all part of a collective effort that advances knowledge.
Upvotes: 2 <issue_comment>username_3: Every PhD is a series of ups and downs, and it might just be that this is what you are experiencing. If that's the case, don't worry, it'll pass.
However, reading your rant... as someone with a very similar background, I might be able to offer some perspective. I have seen this exact same thing before, and I've heard that rant, not least from myself, several times. Obviously, I can only guess, so please forgive me if I am making assumptions about you which might not actually apply.
Most likely, there is a mismatch between how you view yourself and what the field requires.
You have chosen an interdisciplinary field, which requires you to be a generalist, yet you come from a specialist background. Given that you've switched gears, I'll assume you are more of a generalist.
Now, you need to understand that many people in the relevant subfields are specialists; however, you put them down for not being well-versed in everything you consider important in a generalist setting, rather than accepting that they will be more skilled in their area of specialization than you might (be able to) recognize. I'd call that Dunning-Kruger if that wasn't thrown around so much these days. Your statement about statisticians is especially concerning in that regard. Calling established researchers con-artists 2 years into your grad program is, shall we say, questionable.
Most likely, you consider yourself a "big picture" person, given the field you have chosen, which often goes along with a more individualistic (perhaps even a bit confrontational) personality: wanting to be the sole author on that one important paper, rather than one name on a page-long list from 3 consortia. This will not work in biomedical research, or any larger research effort for that matter. It is vastly complicated, a big team effort, and you need to learn how to become a player in that community. If you cannot find a role for yourself in that setting, you are very likely to become ever more frustrated. You will also be considered arrogant if you don't adjust your views on the perceived shortcomings of adjacent disciplines, meaning at some point people might just not want to work with you.
In an interdisciplinary field, that's a career ender...
Upvotes: 0 |
2013/01/22 | 420 | 1,726 | <issue_start>username_0: I wish to apply for M.Sc studies in Computer Science to 3-4 universities. Only one of them requires the applicants to take a GRE (general test), and my question is as follows: is it a good idea to send the scores to the rest of the universities even though they do not specifically require it? Will it affect my chances?
FYI: I did one practice test and didn't get a good Quantitative score (only 145), although I took the test without preparation, without scrap paper (soon after taking the practice test I read that scraps of paper are allowed), and 4-5 years after finishing undergraduate studies. I think with 2 months or so of moderate study I can get about 160. What do you think I should do?<issue_comment>username_1: A good GRE score *can* help your chances at admissions; a poor GRE score does nothing to help your chances, but it can hurt you, particularly if you're already a somewhat "borderline" case.
However, if a school does not require GRE scores, then I would only submit them if they are strong (well above average). Otherwise, you're introducing at best a neutral "fact" into the conversation.
Upvotes: 3 <issue_comment>username_2: My philosophy in application processes is that what you present should only serve to strengthen your case: *"that you should be admitted/accepted to .... "*. If you look at it from this perspective, I would say submit only what's required of you and what you think gives a fair but good image of your intellectual/social abilities. Anything else has the potential of raising questions in the admission officer's head.
Overall I agree with @username_1's answer. Submit scores/GPAs/transcripts only if you are confident in them.
Upvotes: 2 |
2013/01/22 | 1,647 | 7,150 | <issue_start>username_0: My field, Computer Science (CS), has many sub-areas and specializations, and I have found myself having a *not-so-good* impression of some areas within CS. For example, I see working on Software Engineering as a waste of time, while working on Artificial Intelligence (AI) seems much more worthy of investigation.
This is not a field-specific question; I would like to hear whether this exists in other fields as well. Is it common in academia for individuals to find some subfields of a broader research area more interesting and relevant than others? Relatedly, how does one avoid thinking that way?<issue_comment>username_1: Just like the grass is always greener on the other side, it is somewhat usual for people to think that their particular area of expertise is somewhat more valuable than others'. I have seen it in all fields I have heard of…
And it's not restricted to academic life: I'm sure the cardiac surgeon feels that his work is so much more important than that of the family physician, while the latter thinks that he's the one tasked with stopping epidemics and diagnosing the important stuff to save lives.
*But I'm sure you already knew that… so, what is your question exactly?*
---
**Edit: *how does one avoid thinking that way?***
The most difficult part, for me, in thinking objectively about other fields is being able to correctly identify the challenges they face. I find that it is altogether too easy to think *"hey, that's a trivial optimization problem"* or *"now that they know the compound formula, I wonder why it takes them so long to synthesize it"*. A better overall understanding of other fields helps a lot to avoid this way of thinking. It takes a lot of effort to acquire such broad general knowledge: reading, listening to conferences outside your area of expertise, chatting with colleagues, …
Upvotes: 5 [selected_answer]<issue_comment>username_2: I think a part of being human is assigning different values to different things (or preferring, or choosing something among other things). So I do not worry about valuing some fields of study more than others.
I think the real challenge is to be realistic. I mean one should know one's abilities and interests, and find a (scientifically rigorous) field which matches those abilities and interests.
Upvotes: 3 <issue_comment>username_3: The same thing happens in many different fields: mathematics, physics, etc. Many in academia criticize computational scientists like myself as too interdisciplinary (i.e. we are jacks of all trades, but masters of none).
I think it naturally emerges from the competitive nature of academia these days. We're all competing for precious resources, such as funding, tenure-track positions, prestige and attention, etc. To be competitive, we must assert that our work is more worthwhile than others'. In the face of criticism and competition, people in the same field are sometimes tempted to assert their worth by belittling the value of other fields.
There's no better way to avoid doing this than to distance yourself from others who engage in the "we're better than them" attitude. Unfortunately, it can be difficult if you're already surrounded by people with this mentality. In that case, it might be worthwhile to just get out of your comfort zone and attend research seminars and presentations in completely unrelated fields. Sometimes, if you surround yourself with others who value a topic that you don't, you can learn how to appreciate it in ways you've never thought of before.
Upvotes: 3 <issue_comment>username_4: It's important to distinguish "what I find interesting" from "what someone finds interesting" from "what I think is important" from "what is important to others".
It is perfectly acceptable to find other fields uninteresting - that's just the nature of subjectivity and personal preference. Or more poetically, "vive la difference".
It is less acceptable to go from "I personally find this area boring" to "this area is boring". At that point, as others have suggested, you need to understand why others in the field find it interesting. Ask them! Put yourself in their shoes. (I will often say, "I'm glad someone is working in area X and I'm glad it's not me" :).
Upvotes: 3 <issue_comment>username_5: ### Is valuing one field over another a common behavior in academia?
The other answers clearly answer **yes**. There can be *subjective* reasons for such an observation (e.g., a cardiologist could feel superior to a gastroenterologist), but there might also be an objective part to the observation (as your example goes, the results produced in software engineering are somewhat *shakier* than those in graph theory).
### How does one avoid thinking that way?
Besides the excellent point in another answer saying "you should try to better understand the challenges of the other field", I also argue that you should **better understand the dynamics of scientific pursuit in general**.
Kuhn, in the [Structure of Scientific Revolutions](http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions), argues that scientific work in any given field has [three phases](http://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions#Three_phases). The first, the *pre-paradigm* phase, and the subsequent transition to *normal science*, are relevant for this answer:
>
> The first phase, which exists only once, is the *pre-paradigm* phase, in which there is no consensus on any particular theory, though the research being carried out can be considered scientific in nature. This phase is characterized by several incompatible and incomplete theories. If the actors in the pre-paradigm community eventually gravitate to one of these conceptual frameworks and ultimately to a widespread consensus on the appropriate choice of methods, terminology and on the kinds of experiment that are likely to contribute to increased insights, then the second phase, *normal science*, begins, in which puzzles are solved within the context of the dominant paradigm. Etc.
>
>
>
Often we observe somewhat substandard results and works in fields which clearly fall into the category of those still being in the pre-paradigm phase. Your specific question is relevant to this due to the fact that the whole of computer science is still a young field and many problems we are solving are new, often vague, or ill-defined. This is especially the case for the fields and communities tackling applications of applied-mathematics-style computer science to real-world problems, i.e., software engineering. Your software engineering example clearly falls into this category, large parts of artificial intelligence fall into it as well, and I am sure other fields and subfields do too.
*Even if you find yourself working in a "soft" field, it does not necessarily mean the niche community is not tackling a sound problem (though sometimes that is the case, and you need to look into it very carefully). Sometimes working on such a problem can be even more demanding/challenging/satisfying than routinely solving puzzles in the normal-science context.*
Upvotes: 3 |
2013/01/22 | 1,018 | 4,230 | <issue_start>username_0: I teach an undergraduate course in thermodynamics. In-class pop (surprise) quizzes account for about 10% of the grade. I use [canvas](http://en.wikipedia.org/wiki/Instructure#Canvas) for my in-class quizzes (and to collect homework assignments and start discussions).
The classroom I teach in doesn't have computers, so when I set up a pop quiz on canvas, I generally let the students out of class during the last 10 minutes to log in to one of the several campus computers to take the quiz.
However, I know that our computers aren't top notch and one can easily spend about 5-7 minutes just logging in and another 2-3 minutes launching a web browser to access canvas.
Given these technical issues (that can't be sorted out because of a lax IT department) I generally keep my quiz open for about 9 hours. This also takes into account the other classes that my students may have to rush in to right after mine which might prevent them from attempting the pop quiz until later that day.
Isn't this unfair to students who take the quiz immediately? By keeping my quiz open for 9 hours, it takes away the **surprise** component of it substantially. Is there a way I can do this without having to have quizzes on paper and in-class?
Should I just be mean and keep my quiz open for only the 20 minutes or so at the end of my class?
---
*Edit:* I was thinking about this and I thought of a couple of things that I'd like to add:
1. One way to nullify this is by announcing that there would be a quiz in the next 3 days. That way, the students will try and learn and not just haphazardly flip through their textbook as I assume they would if it were a true pop quiz.
2. I could tell them that the examinations, which account for 70% of the grade, will be tough, and that it would be sensible to be honest with pop quizzes.
3. Borrowing from Zenon's comment below, why not mix multiple choice questions with single-valued answers, with only 1 attempt?<issue_comment>username_1: Given the limited equipment available to you and your students, why do you run the quiz on the internet? You can be assured that students talk among themselves. I think that you should simply do the quiz in class with pen and paper to be really fair.
If you have a large number of students, you could set most of the quiz as multiple choice questions and use a device (scantron?) to correct them automatically. As a complement you could have a few questions with *one-word* answers.
Upvotes: 5 [selected_answer]<issue_comment>username_2: First, it is only unfair if not all students had the same information. If you say "the quiz will be open for 20 minutes" and leave it open for 9 hours, it is somewhat unfair. If you are clear, then they have equal chances. However, that does not solve your other issues, such as students chatting or cheating.
I think your real solution is: **if you don't have the resources to make them take the online test in a decent way, just do it the old-fashioned way**.
I have never worked with scanning machines, but if they work fine it sounds like a good solution.
Another way of not having to grade everything is to have students take the test, then shuffle the papers so that another student grades them. You cannot have all your evaluation performed in this way, but you can do some. If you want to further decrease the chances of dishonest grading, you can announce that you will yourself perform a second correction of 10% of the tests, and that graders found cheating will have their own mark diminished or invalidated.
Upvotes: 3 <issue_comment>username_3: An alternative not yet mentioned (caveat: this is a method I've done some work on) is to use a combination of QR codes and google forms. Then, the students can take it on their cell phones in class and the remaining students can take it pen-and-paper.
It does have the weakness that particularly savvy students can cheat by texting each other answers, but there are tradeoffs to every possible method of testing.
This could be more helpful than sending them off to the computer lab. Their entries will also be time-stamped so you can say you won't accept anything not finished before a certain time.
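As a rough illustration, a printable handout pointing at the form can be produced with the third-party LaTeX `qrcode` package (a minimal sketch; the form URL is a placeholder, not a real quiz):

```latex
% Minimal sketch using the CTAN "qrcode" package; the URL is a placeholder.
\documentclass{article}
\usepackage{qrcode}
\begin{document}
Scan to take today's quiz (closes at the end of class):

% Renders a scannable QR code pointing at the (hypothetical) form URL.
\qrcode[height=4cm]{https://forms.gle/your-quiz-id}
\end{document}
```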
Upvotes: 1 |
2013/01/23 | 1,041 | 4,538 | <issue_start>username_0: Does using bibliography software actually save you time, aside from when converting citation styles?
I used RefWorks and EndNote for years, but 3 years ago I decided to ditch them both and do the whole referencing business by hand because of the frustrating problems they caused many times (references showing up incorrectly, having to add papers manually, references suddenly going missing, etc.). I only dump PDF files in them to keep a record of the references. I have been doing fine and I think it has been pretty efficient: in three years I have had to change citation style manually twice, which was painful, but that was it.
I decided to use EndNote again today because I am writing a major review article. And already it's painful! After spending 30 minutes manually inputting 15 references whose PDF files could not be identified correctly (beats me why! clear PDFs with OCR), and then searching for them to add them back into the paper, I am doubting my decision!
Can anyone give me some motivation for why to use these tools, really? Yes, style changes and finding duplicates can be good, but is that it? I feel like going back to basics, but I think there must be something wrong with me, as it seems everyone else uses these tools without going insane!<issue_comment>username_1: How easy it is to manage references depends a lot on your working conditions.
If, for instance, you're an academic in a humanities field, where the "standard" bibliographic style is Harvard or MLA and you just quote an author's name and the page number, then bibliographies are relatively simple: citations are straightforward, and the bibliography itself can be created on the fly.
If, on the other hand, you are working in a field such as mathematics or physics, which uses the "numbered" style, putting together the bibliography can be a royal pain in the neck. You need to add a new reference at the beginning of the document, and now *all* of the reference numbers have shifted throughout the rest of the paper. Then having a tool that will do the referencing for you automatically is a major help.
If you need to use a package, and the choice is up to you, you should find one that best suits your needs. But the important thing is finding a method that works both for you and for any colleagues you might be working with in the near future.
Upvotes: 3 <issue_comment>username_2: It depends what you mean by using bibliography software. I think of bibliography software as doing three things:
1. They help you organize, search, and find your references. While PubMed and Google Scholar are quite efficient at finding references in my field, I often prefer to search my own library of papers I am familiar with when looking for a reference. I use JabRef for this purpose, and it saves me loads of time even when I am not converting citation styles.
2. They help you create a reference list at the end of a manuscript/grant/etc. If you have a database, you simply need to tell the software which papers have been referenced and what format you want the reference list in. This saves you time when you convert styles (and the first time you create the list); I don't think it really does anything else. The key point is that, in my opinion, ALL bibliography software does this stage well for ALL styles. I see no reason not to use bibliography software to create the reference list at the end: this is the section where it is easy to make minor mistakes and to waste a lot of time getting the style correct.
3. They help you format in-text citations. This is where most of the software falls down. In-text citation styles have a lot of variability (book, chapter, article, first citation, subsequent citation on the same page, citation in a footnote, etc.) that makes automation hard. Defining an automated system that can implement an in-text citation style is no small task. Even if you can create such a definition, many publishers have small in-house tweaks. Creating software that is fully compliant with a style and also allows tweaks to be made easily is harder still. If you are lucky enough that your software has the style you need, or that your target publishers are easygoing enough, then using bibliography software for your in-text citations is a no-brainer. If you are not so lucky, you may not want to use that feature.
In summary, I would always use bibliography software for 1 and 2, but only for 3 if I am lucky.
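To illustrate point 1: tools like JabRef store your library as a plain BibTeX file, so you can even search it programmatically. Here is a minimal sketch using the third-party `bibtexparser` package (v1 API); the file name and search term are placeholders:

```python
# pip install bibtexparser
import bibtexparser

# Load a local BibTeX library (e.g., the file JabRef maintains).
with open("library.bib") as bibfile:
    db = bibtexparser.load(bibfile)

# Print every entry whose title mentions a (placeholder) search term.
term = "citation"
for entry in db.entries:
    title = entry.get("title", "")
    if term.lower() in title.lower():
        print(entry.get("ID", "?"), "-", title)
```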
Upvotes: 4 [selected_answer] |
2013/01/23 | 1,828 | 7,466 | <issue_start>username_0: The question of [how to choose PhD committee members has already been asked and answered](https://academia.stackexchange.com/q/1192/2700) in general terms, but I have the following more specific questions regarding the choice:
* Is it more important that your advisor already know your committee members or that you do (e.g., your advisor knows them but you've never met them before vs. you know the member but your advisor doesn't have a strong or pre-existing relationship with them).
* Is it better to get someone in your discipline or someone doing more closely related work (e.g., if you're getting a computer science degree, do you ask computer science faculty, or could everyone except your advisor be from the English department)?
* Should you pick people who are already invested/interested in your success, or is the dissertation process supposed to be a chance for you to win people over to your side?<issue_comment>username_1: You first have to check what the university rules are for a Ph.D. committee. Many universities have rules that limit the number of external members, the number of members not from the department, the number of members that are not "regular faculty", and so on.
Assuming you've checked all those constraints, you should definitely discuss this with your advisor, who will have more experience constructing committees in your area.
Which brings up another issue: community norms. In your field of research, are there customary roles that committee members play? That's something you need to discuss with your advisor.
Finally, after all of the above, to answer some of your questions:
>
> Should you already know them?
>
>
>
Not necessarily, although knowing them helps when you first approach them. It helps if they're familiar with your work even if you don't know them personally.
>
> Should they be doing research related to your dissertation topic?
>
>
>
Not necessarily, but they should have *some* connection to your work, otherwise they won't be able to provide any kind of useful feedback for you.
>
> Should they be in your discipline?
>
>
>
Definitely, unless your topic is interdisciplinary and you want the input from someone in the other discipline.
>
> Should they be selected more for how much they'll help you get a job?
>
>
>
That's definitely a factor. It's not a critical factor for all members of your committee, but it can be a factor when looking for an external member. Ideally, if you're able to do some research with members of the committee, they can write a letter for you.
Bottom line:
* talk to your advisor and discuss all of this with him/her
* Different members of the committee can play different roles. The mix is what's important
Upvotes: 2 <issue_comment>username_2: There are many ways to build a PhD committee, which depend on the local system (country, etc.) and your field. But here are some general principles that should apply broadly. You need to bring in a mix of **highly competent yet diverse** evaluators, with **not too much proximity** to yourself or your advisor, lest it be thought that you are cherry-picking a partial (friendly) jury for your work.
Regarding your questions:
* >
> Is it more important that your advisor already know your committee members or that you do
>
>
>
I don't think it's a very important part of the decision-making. Certainly, you don't want the advisor's best friend (or yours!); that could make people think you're scared of unbiased questioning.
* >
> Is it better to get someone in your discipline or someone doing more related work
>
>
>
Here's one of the factors that, for me, plays a very important part in picking committee members. First, all members need to be able to gain a good understanding of your work. However, it is good that not all of them are experts in your precise field. It helps to have people from other (related) fields, because they will bring a different perspective and give you the opportunity to highlight not only the very technical details of your work but also its significance for other fields.
* >
> is the dissertation process supposed to be a chance for you to win people over to your side?
>
>
>
**No.** It's good to bring in people who don't necessarily agree with you on everything, but you should also avoid as committee members anyone overly critical of your approach to things, *unless you know them well and they can keep it under control* and agree to disagree. Otherwise, you risk that person actually coming to your defense expecting to be won over. I have seen defenses “derailed” (though all ended well) by a committee member who was overly argumentative, and it wasn't a nice experience for anybody involved.
Upvotes: 3 [selected_answer]<issue_comment>username_3: >
> Should you choose your committee members or should your advisor?
>
>
>
Yes.
Ideally, choosing your committee is a collaborative process: it should take place with input from your advisor, but also account for your own preferences.
The first thing you should do is check the constraints on your committee imposed by university rules. These can be quite complex, and will often limit your freedom of choice in the selection process. For example, your hypothetical "degree is in CS, everyone but your advisor is from English" scenario is simply impossible at the institution where I got my degree.
>
> Is it more important that your advisor already know your committee
> members or that you do (e.g., your advisor knows them but you've never
> met them before vs. you know the member but your advisor doesn't have
> a strong or pre-existing relationship with them).
>
>
>
*Someone* should know your committee members. That can be you, or that can be your advisor, but someone should know if they're likely to be problematic for this particular dissertation (it's a theory dissertation and they hate theory, etc.)
>
> Is it better to get someone in your discipline or someone doing more
> related work (e.g., if you're getting a computer science degree, you
> ask computer science faculty vs. you're getting a computer science
> degree but everyone except for your advisor is from the English
> department)?
>
>
>
The "perfect" committee member is someone in your discipline doing related work. When those people don't exist, you should probably aim for a mix - you want a committee that can go "Yes, this is clearly a project worthy of a degree in $Discipline", but also people who can provide input on the specifics of your project.
Your advisor can probably talk to you about people who "should" be on your committee for various reasons, including political ones ("It will look strange if Y isn't on your committee...")
>
> Should you pick people who are already invested/interested in you
> succeeding or is the dissertation process supposed to be a chance for
> you to win people over to your side?
>
>
>
Your committee should be people already interested in your success - believe me, even people really excited by your work and interested in you moving forward can cause problems. Someone who's an outright skeptic, and might be inclined to just dismiss the whole project? That is *not* someone you want on your committee. You have the rest of your career to try to win people over to your side *after* you have your degree.
Upvotes: 0 |
2013/01/23 | 634 | 2,360 | <issue_start>username_0: For example, I am interested in research related to "Quantum Hall Effects" and want a list of institutions/universities that have made good contributions to this topic, with the number of publications in recent years. Any idea how? Is there any website providing such a search service? Google Scholar, arXiv?
I tried [APS search](http://publish.aps.org/search) by searching for "Quantum Hall Effects" in Abstract/Title. It shows all related papers, but now I just want statistics on the institutional contributions.
Any idea how?<issue_comment>username_1: This is not ideal, but it is the closest thing I can think of.
Microsoft Academic is very good, and has tons of features like that.
[Here is a list of the most cited institutions on Machine Learning](http://academic.research.microsoft.com/RankList?entitytype=7&topDomainID=2&subDomainID=6&last=0&start=1&end=100)
[Here is a list of the most cited institutions in NeuroScience](http://academic.research.microsoft.com/RankList?entitytype=7&topdomainid=6&subdomainid=14&last=0)
However, I think it only works for preloaded topics.
Another problem is that Microsoft's database of indexed papers is rather small compared with Google's.
Upvotes: 0 <issue_comment>username_2: Statistics on institutions should be taken with 14,000,000 grains of salt: some institutions have changed names over time (including many recently; e.g., in France many institutions were forking twenty years ago and are now merging back), and affiliation rules vary widely between authors (you'll see below huge contributions from country-wide agencies like the “Russian Academy of Sciences” or “CNRS”; they are not the same as universities or labs). But you can do it with many bibliographic search tools.
Here's how to do with *Web of Science*:
1. Make a regular search (here I chose “title”, but you can do something more complicated)

2. On the results page, click on the “Analyze Results” link

3. Choose your field of interest (here, organizations)

4. Enjoy!

Upvotes: 4 [selected_answer] |
2013/01/23 | 1,487 | 6,252 | <issue_start>username_0: My research lab organizes monthly “internal” seminars, where we give PhD students around the middle of their PhD, as well as newly arrived post-docs, the opportunity to talk (the latter can present what they did before and their current project). However, attendance is a big problem, and it's the same people who never show up, unless the speaker is from their group. I suppose good team leaders encourage their whole team to show up, while a few others have told me point blank that they consider it “wasted time”, because it decreases the time students can work at the lab bench. So, while they cannot forbid their students to attend, they just discourage them.
So, I am wondering what we can do to help increase student turnout. What do you use to attract people to seminars? We have tried coffee and sweets, which didn't work very well.
Some specifics, if it helps: the research lab has about 25 permanent staff, and between 2 and 3 times that number of students and post-docs. Seminars are held every month, rotating between teams.<issue_comment>username_1: At my university, we have the student seminar around 12 pm, with pizza at the end. Using a time slot when most people are free usually helps. And although sweets are good, you can't survive on them; a free lunch, on the other hand, is always a plus.
We also have seminars on Fridays around 4 pm, with snacks and beers afterward. It is a time when most people are not as productive as during the rest of the week, and the ability to socialize afterward with the rest of the department is always a plus.
Of course, the best way would be to engage the *leaders*. Maybe invite them to give a talk and try to make it worth their while, so they can see that the goal of those seminars is not only the talk itself but also the discussions that flow out of it.
Upvotes: 4 <issue_comment>username_2: >
> So, I am wondering what we can make to help increase student turnout. What do you use to attract people to seminars?
>
>
>
Short answer: **Make the seminars useful for the group members.**
*First, the diagnosis:* The group members are probably too narrow-minded and do not understand that insights from currently irrelevant topics often turn out to be very useful in the long run. The group members seem to optimize greedily for their short-term interests, shooting themselves in the foot in the long run. They do not seem to understand that **seeing connections between dots at some future point is much, much easier if you saw the dots and their contexts before.** But this realisation does not come by direct explanation; they need to reach it by themselves. It's your task to set the example, at least by showing how you yourself benefit from the seminars. This is a long-haul task and has to do with your general attitude to the world. In the short term, you can perhaps do the following:
1. push for *all* group members (including the professor(s)) to give **conference rehearsal talks** - if you are in an area where going to conferences makes a difference. At the talks, encourage the audience to give the speaker not only content-relevant feedback but, more importantly, methodological feedback on *how to speak*.
2. invite **external speakers** and **actively support networking** between the group members and the speaker. Especially in informal interactions (which are often started by exchanges during, or right after, the talk), people tend to find common interests, receive feedback on their own work, and possibly even start a small collaboration. The idea here is to show the group members, over time, that attending tangentially relevant talks is useful for the *cross-breeding of ideas*.
Upvotes: 5 [selected_answer]<issue_comment>username_3: >
> I suppose good team leaders encourage their whole team to show up,
> while a few others have told me point blank that they consider it
> “wasted time”
>
>
>
First I wouldn't put a quality judgment on the team leaders. Hopefully all the team leaders are "good". Further, hopefully all the team leaders have done a cost-benefit analysis of their staff attending and have simply come to different conclusions.
The problem does not seem to be the junior staff, but rather the example set by the senior staff. I would argue that you do not want to encourage the junior staff to "disobey" their team leaders by offering sweets. The permanent staff needs to come to a consensus as to whether or not these meetings are useful and who should attend. The possible outcomes are as follows:
1. The meetings are a waste of time and should be canceled
2. The meetings are critical for all groups and attendance should be
mandatory
3. The meetings are useful for some groups but not others, and those that want to attend should attend
4. The meetings are critical to some groups and require participation
from all groups and attendance should be mandatory.
Once a decision is reached, it is up to the lab director (the person responsible for the 25 permanent staff) to see that it is carried out.
Upvotes: 2 <issue_comment>username_4: One thing you can also do is use this time slot to present additional information that people will want to hear. For example, you can add a 5-minute "newscast" about the lab: who's new, who is leaving soon (and what they will be doing), general announcements, lab babies, whatever announcements people have. We do this in my group.
Upvotes: 1 <issue_comment>username_5: **Identify the needs/wants of the people you want to attend.**
You are currently phrasing the benefit of these seminars from the perspective of the speakers (i.e., to provide an opportunity for PhD students and postdocs to present their work). This is a noble goal, and I would keep it as a goal, but if you are having attendance problems then perhaps the other members of the lab do not see this as a useful activity (you mention that some have clearly expressed this).
So you need to identify what they would want from the seminars. Perhaps more informal chalk-talk discussions, a journal club where each paper is based around the work of a student or postdoc, etc., would generate more interest.
If those that do attend are consistently talking about how useful the experience is, then attendance will go up.
Upvotes: 2 |
2013/01/23 | 930 | 3,481 | <issue_start>username_0: For example, University of Maryland, Baltimore County [is said to be](http://en.wikipedia.org/wiki/University_of_Maryland,_Baltimore_County) a **research university**. (Same thing for [Rice University](http://en.wikipedia.org/wiki/Rice_University), for example.)
Are there "non-research universities" also? What is the difference?<issue_comment>username_1: In the US context (and many other countries) the difference can be somewhat foggy. However, in the past (1900-1930's), US university landscape adopted that invented in Germany (c.f. [here, Chap. 2](http://books.google.com/books?id=xbztTkGKOHEC&dq=pasteur%27s%20quadrant&source=gbs_navlinks_s)), therefore looking how Germans do it can be indicative.
In Germany, Austria and Switzerland you would see a distinction between a [Universität](https://en.wikipedia.org/wiki/University) and a [Fachhochschule](https://en.wikipedia.org/wiki/Fachhochschule) (or sometimes just called *Hochschule*), also translated as *University of Applied Sciences*. [Citing from](https://en.wikipedia.org/wiki/Fachhochschule):
>
> It (Fachhochschule) differs from the traditional university (Universität) mainly through its more application or practical orientation and less research. ... The Fachhochschule represents a close relationship between higher education and the employment system. ... Nevertheless, in Germany the right to confer doctoral degrees is still reserved to Universitäten.
>
>
>
I guess the distinction applies to other countries as well, though the nomenclature differs.
Upvotes: 4 <issue_comment>username_2: In the US, the [Carnegie classification](http://classifications.carnegiefoundation.org/descriptions/) is used to describe different kinds of academic institutions. The system changed in 2005, but under the previous incarnation, universities that had significant research components were called "R1" universities. Under the new system, universities with research components are called "RU/H" or "RU/VH" (Research University/(V)ery (H)igh research). It's most likely that the term 'research university' is an indirect reference to this.
**Update**: The [Carnegie classification](http://classifications.carnegiefoundation.org/descriptions/basic.php) has many categories of institution: only three of them are predominantly research-focused. So there are many more "non-research" institutions than there are research institutions.
Upvotes: 5 <issue_comment>username_3: There are **research** universities and there are **teaching** universities.
* Research universities have graduate programs and their focus is on doing research. This means most professors teach one or two classes (some have 0!) but have other obligations.
* Teaching universities, on the other hand, don't typically have graduate programs (if they do, it is just a Master's program), and the professors have full teaching loads (I think 3-4 courses is the norm) with little expectation to publish.
* For example, Austin Peay State University, where I did my undergrad is considered a teaching university. Every professor has a full course load and not a single one of the professors I had has published in the past 5 years.
**UPDATE:** chronicle.com defines **teaching university** as one where professors have "a standard teaching load of four courses a semester", from [Interviewing at a Teaching University](http://chronicle.com/article/Interviewing-at-a-Teaching-/45217/).
Upvotes: 5 [selected_answer] |
2013/01/24 | 1,477 | 6,390 | <issue_start>username_0: It is an expectation that the PhD would make an original contribution and/or advance knowledge in a given field. I understand this is a universal assumption for this level of study across all universities.
One of my friend's research experiments has not produced a single positive result. This was a science experiment, so it is easy to quantify whether a result is positive or negative.
[It is a bit different in the social sciences, where the outcome (result) would be that either the null hypothesis is supported or rejected (with some analysis on the effect size to make the analysis meaningful in a given context). In other words, the data analysis either supports or does not support the proposition that is being investigated.]
My question is: What should a student do if none of the research outcomes or results are positive?
Simply writing that nothing new was found does not add or advance knowledge other than to just confirm the status quo (which I guess is a form of contribution, but there has to be more than this at this level of research!).<issue_comment>username_1: Your friend might not want to hear this, but there's nothing you *can* do except start over - with a different experiment. Research fails - and it should! If there isn't a risk of failure, you're not out on the cutting edge doing research.
But in most failures, there's a grain of something to build on ("[from the ashes of disaster come the roses of success](http://www.imdb.com/title/tt0062803/)"). Maybe the student is too demoralized right now to see it, but almost always there's some clue in the failure that leads to a different research question worth asking.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I think there is a case to be made for reporting negative results, since they give at least a blueprint of what does not work.
However, as username_1 mentions, a PhD is usually measured on its contribution to expanding knowledge. If your friend is already 5 years into his PhD, however, I think the adviser is partly at fault, since he should have had some insight that this approach was not working and that a different course might have been wiser.
Upvotes: 4 <issue_comment>username_3: Technically, there is nothing you can do to face failure except doing *another experiment*, as @username_1 said, consulting your supervisor and other knowledgeable people, and carefully looking again at the problem **formulation** and at your solving method. Here the goal is to identify the error.
More important, at least to me, is the non-technical reaction to such failures. I reward myself by relaxing, playing more with the kids, sleeping early, playing some games, or sometimes watching a movie. I try my best to forget the problem for one or two days.
Upvotes: 2 <issue_comment>username_4: A defensible null result (that is, being able to definitively say that something isn't there, as opposed to not being able to say anything either way) *is* a result and *does* advance the frontier of knowledge.
This should be obvious.
If that kind of thing can't be published in your friend's discipline then there is something seriously wrong with the culture of that discipline.
To be sure, null results are generally not sexy and can't be expected to get into a first-rank journal unless there was a widespread expectation that the result was a shoo-in, but they are still real science.
Upvotes: 3 <issue_comment>username_5: Two major thoughts:
1. A negative finding is still a finding. Publishing failures is a harder road to publication, but still a valuable contribution to science as a whole.
2. While this might be a little late for your friend, I always advise looking for "fail-safe" research questions for dissertations or other research projects where a student's success depends on a single finding. What I mean by fail-safe is that research questions should be chosen where "Yes" or "No" are both interesting and publishable answers.
Upvotes: 2 <issue_comment>username_6: I work in a field where publication of negative results is frowned upon, and basically impossible. The result is a huge positive bias, which severely impacts the field's ability to properly judge its own advances and prospects. This field is artificial intelligence, and it has already experienced two "winters" (check Wikipedia for the AI-winter phenomenon). I personally think the third winter is due in two to three years, maybe sooner.
So publishing negative results is crucial to keep out the bias. In AI, every approach just works, if you read the papers (except for the approaches before the one published in the paper, those don't work well, that's why we really need to publish this one). Yes, I am being sarcastic.
On a more serious note, in engineering studies the point of obtaining a PhD is inventing a new and better method for something. Of course, the theory goes, if you invent something useless (i.e., something that does not work), there is no point in publishing it. In practice, however, people often refrain from publishing methods that don't work even when they *should*, given the general ideas in the field. And that is wrong.
So the main difference is not humanities vs. hard sciences; the main difference is usually engineering (basically hell-bent on *inventing* something new and reporting on it) vs. other sciences. For example, the mere fact that some substance does not affect cancerous cells (and is therefore useless for curing cancer) can be very important in medical research, so that other people will not waste time examining it further.
Upvotes: 2 <issue_comment>username_7: [This is similar to another question on the topic](https://academia.stackexchange.com/a/156495/128924).
There is nothing wrong with publishing null results.
The goal of research is not to have positive results; the goal of research in an academic setting is to be **published, read, and cited**. Having null results is only a minor impediment to that. You can still get published and have a successful dissertation if:
1. The methodology was sound.
2. The argument that one could have expected your approach to work was clear and agreeable to your audience.
3. The paper was well written and clearly lays out the conclusions and implications for the field that practitioners care about.
Having an interesting and successful thesis helps, no doubt, but it is not the sole issue here. The demands of 1, 2, and 3 are very high.
Upvotes: 0 |
2013/01/24 | 3,513 | 14,662 | <issue_start>username_0: This situation is not uncommon; in my case, I have to submit an abstract by the end of January for a conference in September. But the problem is that I have no results yet. I'm pretty sure that in those eight months I will get pretty interesting results. But…
How do I write a smooth abstract without reporting results?
I started something along these lines:
1. This and this is an important factor ...
2. However, few studies on this topic have been done ...
3. In our study, we compare this and this
4. ??
Now the problem comes in point 4, where I should report some results.
* How should I get around that?
* What formulations should I use?
* Should I write in the present, future, or past tense? Studies are usually written in the past tense ("we analysed, we compared..."), but in this case I would tend toward the present tense.
Thank you for your help. Examples are welcome!
P.S.: [this is an interesting discussion](http://scientopia.org/blogs/drbecca/2012/05/09/the-art-of-the-ambiguous-conference-poster-abstract/); however, it didn't give me actual guidelines on how to write such an abstract.<issue_comment>username_1: **Don't write results you don't have**. Not in the present, past, or future tense. Just don't do it. Yet I agree with you that there are circumstances where you *do* need to write an abstract on ongoing work. For example, many big conferences in my field now ask for abstracts to be submitted up to 10 months in advance of the conference itself! If you are a post-doc on a 12-month project, you want to present something, but you might not yet know how things will turn out. So, here are two techniques I propose:
1. Just write about the methodology, and present your goals in a general way, without “predicting” particular results but insisting on the importance of the topic. That is, **emphasize strongly your points #1 and #2, and then describe point #3 as you would your “results”**. Things like:
>
> In this particular study, we compare the efficiency of methods A and B on given subsets of a reference database. We use a large number of different criteria for measuring efficiency, including …, … and … We also discuss in detail the implementation of subprocess X in method B, because it has not been specifically optimized in the existing literature.
>
>
>
I know it sounds vague, but that's the best you can achieve honestly, without pretending to know what you expect to find.
2. Bait and switch: if you have existing results from a closely related study, you can incorporate them as part of your results. Mix this approach with the one above, so that you have at least a few specific results to list in your point #4. Then, when you make your presentation, just present your new results alongside the old (some people would remove the old results completely, but that makes it too much of a “bait and switch” for my taste). It is, after all, quite common for people to include in orals/posters newer results that they obtained after the original submission. It is not frowned upon, as long as you keep a decent agreement between the original abstract and the final content.
Upvotes: 7 [selected_answer]<issue_comment>username_2: **You cannot predict the future**. You may obtain the results you hope for.1 But things can also go horribly wrong (your laboratory burns down, your samples mysteriously evaporate, ...) or turn out extremely interesting: you may happen to measure something beyond your dreams. Let your abstract tell only truths - what your (vague) setup is, what you want to measure, and what you *expect* to happen. But don't pre-claim results when you cannot even foretell their existence for sure. Just be honest - say that you will present the results you obtain, whatever they may be. I don't like [cliffhangers](http://tvtropes.org/pmwiki/pmwiki.php/Main/CliffHanger), but they tend to work...
---
1 But make sure you don't "accidentally" measure only what you expect to be measured!
Upvotes: 3 <issue_comment>username_3: **Short Answer:**
Writing up something you hadn't done at the time of submission is a **lie**, even if you are sure you **will** have it eventually (I believe uncertainty exists everywhere).
***It's simply not your turn this year***; target another conference, or wait for next year.
---
**Long Answer:**
I will speak from a Computer Science (CS) perspective.
Submitting an abstract to a CS conference means one of two things:
* Submitting to the abstracts (short papers) track of the conference.
* Submitting an abstract (e.g., 250 words) first, then submitting the full paper. For example, these days AI has its [big guy](http://ijcai13.org/submission_instructions "big guy") submission deadline.
I assume you mean the first case; otherwise you would have no time to prepare your results.
Then the missing results fall into one of two categories:
1. Part of the contribution (method)
2. Evaluation (support) of the contribution (method)
In the first case, I really recommend not submitting at all unless your results are ready. ***You simply do not have anything new in this case***.
About the second case I am more tolerant. In CS, you can *work around* it by:
1. If your missing results are the experiments validating your method, doing initial experiments and, if you believe they represent the general case, writing your abstract based on them.
2. Illustrating with examples and/or real-world scenarios.
Upvotes: 2 <issue_comment>username_4: The ever-earlier deadlines for conferences (sometimes six months *or more* in advance of the actual date!) make planning for a conference a very difficult prospect.
You're left with only a handful of options, none of them particularly appealing:
* Submit an abstract on incomplete research, and hope that the work is completed in time for the conference. In this case, you say something like "we will present our work on X, Y, and Z." You make no claims about the *findings* related to your work in those areas, though. You also try to edit the abstract, as appropriate and if possible, to better reflect the subject material that you will *actually* present at the conference.
* Submit an abstract on already completed work. The advantage is that you know you will have the results and can put together a good presentation. The downside is that you will be presenting last year's results at this year's conference. If you are in a "hot" field, this can mean ceding significant ground to your competitors if they get just a little bit luckier than you and have findings just before a deadline when you don't.
Ultimately, there's no right answer as to which option to take. You have to decide based on what is expected of you in your field, and what impact this will have on you and your career (whether you can opt for the safer track, or have to go for the higher-risk option). The only thing that you should **never** do, as I said above and as other posters have mentioned, is make claims about results you have not obtained.
Upvotes: 4 <issue_comment>username_5: I commented on the original question
>
> Don't do it. No results means no abstract.
>
>
>
While this received many upticks, I have also been told
>
> That sounds incorrect. Any references or related experience?
>
>
>
and that statement also has some upticks. While this is not an answer to "how to write an abstract", it attempts to clarify my comment (but is too long to be another comment). Hopefully it is helpful.
I think there are so many things wrong with writing an abstract without results that it is difficult to explain my thinking. The apparent reason for wanting to write an abstract without any results is
>
> No abstract means missed opportunity; I cannot afford that. I have to present results I'm sure I will have by the time of the conference.
>
>
>
That statement has received essentially the same number of upticks as my comment not to do it. I disagree with "I cannot afford that". I have never seen or heard of someone being denied tenure or a job because they didn't present at a conference one year. Hiring decisions are never so close that a single conference presentation (no matter how prestigious) sways the decision. I would argue that there are very few upsides to submitting an abstract without results, and potentially some downsides.
Submitting an abstract without any results will not get you a place at a highly prestigious conference or a keynote address. It will get you a place at a conference that essentially accepts all abstracts, but not much more than that. In fields that I am familiar with, conferences happen at least every 6 months, and more often every 3 months. This means that by not submitting now you are merely delaying your presentation by 3-6 months. Therefore the cost of not submitting is a 3-6 month delay, in exchange for a slightly different conference that is potentially slightly more prestigious (e.g., with results you might be able to get a talk instead of a poster).
In slow-moving fields, 6 months is essentially meaningless. In fast-moving fields, 6 months is a long time, but in the fast-moving fields I am aware of, you don't present results until they are about to be published. This means you don't want to submit an abstract of results you don't yet have. Therefore I see very little cost in waiting for the next conference.
So what are the benefits of waiting? Again, they are not great. The abstract will actually represent what you are going to talk about. You will likely be put in the correct session. There is a higher chance of getting a talk. If everything goes tits up, you will not have to withdraw. While most people will not remember a withdrawal, some of your close colleagues will, and this could hurt future references. Withdrawing also screws over the conference organizers, and they will not forget.
There is also the issue of how long you need to get results. If the abstract were due the day of the conference, presumably you would want to have results before submitting it. What about a week in advance? A month? 6 months? Where is the line?
Finally, there is the issue of integrity. While one can write an abstract that makes no promises and only states the current truth, this is in fact difficult. If you do it frequently enough, you will likely eventually make a statement that is a lie.
In an attempt to answer the question, what about:
>
> We don't have any results yet as it is still N months before the conference. By the time the conference rolls around we are sure we will have something interesting. If not, we will present some old data or just not show up.
>
>
>
Upvotes: 2 <issue_comment>username_6: The results for my project were not as pretty as expected, but I had months to optimize. Unfortunately, the abstract had to be submitted ASAP. So I added great detail to the background and methodology, made some vague noises about the results, and ended with "preliminary results are discussed." It wasn't perfect - it sucked, actually - but it did the job, and the results are now where they need to be for the conference in the summer.
I realize this thread was originally discussed in January, but I figured anyone desperate on Google might see this and find a glimmer of hope.
Upvotes: 2 <issue_comment>username_7: Here is an example of an abstract with no results that was accepted for a conference [source](http://www.aacrmeetingabstracts.org/cgi/content/abstract/2004/1/254):
>
> **Evaluation of genetic susceptibility for non-Hodgkin lymphoma in the InterLymph consortium**
>
>
> The incidence of non-Hodgkin lymphoma (NHL) has steadily increased worldwide for many years and is still present after taking into account changing diagnostic patterns and HIV infection rates. Although most other important risk factors have yet to be identified, there is substantial evidence suggesting a relationship with conditions that alter the immune system. A consortium that includes essentially all case-control studies currently being carried out in Europe, North America and Australia has recently been formed (InterLymph) to help stimulate and coordinate etiologic studies of lymphoma. Studies are using the new WHO classification of lymphoproliferative disorders and have comparable questionnaire data for most key lifestyle and environmental exposures. InterLymph will have substantial power to study the main effects of less common SNPs, gene-environment interactions and rare sub-entities. Most studies with complete enrollment plan to carry out genotyping of an initial group of SNPs in genes that play a role in regulating the immune system, including IL1A, IL1RN, IL1B, IL2, IL6, IL10, TNF, LTA, and NOD2. The SNP list will be expanded based on interest and resources over the coming years. A set of DNA samples from 102 ethnically diverse individuals that have been sequenced and analyzed on one or more platforms as part of the SNP500Cancer project (<http://snp500cancer.nci.nih.gov>) will serve as gold standards. Further, a round-robin of sample exchange will assure genotyping consistency across participating laboratories. Initial results from the analysis of SNPs in the above genes will be presented and analytic issues will be discussed, including an approach that will help evaluate the probability that statistically significant associations are false positive findings.
>
>
>
Upvotes: 1 <issue_comment>username_8: I agree that you shouldn't report results you don't have; much of the advice in this thread is sound. I would say don't go too deep into the literature review either... it doesn't necessarily belong in the abstract. The methodology, and the reasons for it, are probably key here.
I too am in this boat (though I realize this is an old thread, I'm sure people like me will come here in the future, just as I did).
What some of the people here don't quite seem to understand are mandatory requirements. I don't have any data for my thesis yet, but I'm required by my program to submit an abstract in a few weeks, and I have to submit the abstract for a grade in my class right now. I have no way to get any results at the present time; it's not an option for all of us. Being in academia and research may require you to write an abstract without results. That's okay. Improvise: we're all here to learn, and you won't succeed without that skill.
Upvotes: 2 <issue_comment>username_9: Just keep it vague: talk about the topic, and say that results will be presented. Don't make any predictions or guesses. Yes, this makes the abstract a bit more nebulous, but who cares: it is a conference abstract, not a journal abstract. And it's better than lying or predicting.
This is no big deal. Write up the abstract. Go to the conference. Have fun.
Upvotes: 1 |
2013/01/24 | 1,603 | 4,992 | <issue_start>username_0: Like many professions, academia is a challenging environment for women. In some disciplines (e.g. computer science), the number of women remains low despite efforts to increase it. Have there been any academic studies on the ways of improving the working conditions for women, specifically focussing on women in academia? As an academic working in hard sciences (i.e. not gender studies), what book or review could I read on the topic, to help me get a better understanding of these issues (and possibly improve my own behavior)?
I'm not interested in “advice” (in part because I am not a woman), but in studies of how effective various possible ways of improving the working conditions for women (in academia) actually are. Something like: “we study universities implementing policies X and Y, and show that they do increase gender diversity by xx%”.
---
*The question [“Women in academia”](https://academia.stackexchange.com/q/1363/2700) is related, but I'm asking for material with a totally different perspective.*<issue_comment>username_1: In the UK there is [Athena SWAN Charter](http://www.athenaswan.org.uk/) which
>
> recognises commitment to advancing women's
> careers in science, technology, engineering, maths and medicine
> (STEMM) employment in higher education.
>
>
>
They have a number of reports that could be of interest, including [Measuring Success](http://www.athenaswan.org.uk/sites/default/files/Athena%20SWAN%20Impact%20Report%202011.pdf) and a whole section devoted to [good practice](http://www.athenaswan.org.uk/content/good-practice).
Upvotes: 2 <issue_comment>username_2: You may want to check out the [Association for Women in Science](http://www.awis.org/) (AWIS) website. There's a resources area on the right side of the page which includes publications and factsheets. Elsewhere, there's a link to relevant committee or groups for different STEM fields.
Upvotes: 3 <issue_comment>username_3: I think that you read French, so there is this book: [*Parcours de femmes à l'université : Perspectives internationales*](http://www.amazon.fr/Parcours-femmes-luniversit%C3%A9-Perspectives-internationales/dp/2296010482), and also this study: [*Les femmes à l'université : Rapports de pouvoir et discriminations*](http://www.efigies.org/wp-content/uploads/2007-11-24_je-efigies-anef_formation-doctorale-rapports-pouvoir_actes.pdf).
Upvotes: 2 <issue_comment>username_4: The most recent paper to make a big splash on this subject was "[Science faculty's subtle gender biases favor male students](http://www.pnas.org/content/early/2012/09/14/1211286109.full.pdf)", by Moss-Racusin et al. You can start there, and dig backwards through the references - you'll hit most of the major reports on this topic.
A few notes on the topic of this paper itself:
The same gender biases that academics show towards their students are also demonstrated against their peers, so don't narrow your research too much. And if your question is "why are there so few academic women in the sciences?", you need to look at the problem from top to bottom. Women aren't going to want to become professors if they already notice the bias as undergraduates.
Upvotes: 4 [selected_answer]<issue_comment>username_5: A Google Scholar search of "academia women" seems to reveal a number of potentially-relevant studies.
Below are primarily retrospective/introspective qualitative articles, but some quantitative articles exist.
‘We make the road by walking’: a collaborative inquiry into the experiences of women in academia
<NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>
Reflective Practice
Vol. 13, Iss. 6, 2012
<http://www.tandfonline.com/action/showCitFormats?doi=10.1080%2F14623943.2012.732939>
Inspiration From Role Models and Advice for Moving Forward
<NAME>, <NAME>
Behavior Therapy, Volume 43, Issue 4, December 2012, Pages 721–723
<http://dx.doi.org/10.1016/j.beth.2012.03.001>
Kleihauer, Sarah, <NAME>, and <NAME>. "Insights from Six Women on Their Personal Journeys to Becoming Deans of Agriculture: A Qualitative Study." Volume 11, Number 1–Winter 2012 (2012): 64.
<NAME>., <NAME>. & <NAME>. (2012). The different worlds of academia: a horizontal analysis of gender equality in Swedish higher education. Higher Education (18 december), 1-16.
<NAME>. and <NAME>. (2012), The academic jungle: ecosystem modelling reveals why women are driven out of research. Oikos, 121: 999–1004. doi: 10.1111/j.1600-0706.2012.20601.x
Multi-Institutional Study of Women and Underrepresented Minority Faculty Members in Academic Pharmacy
<NAME>, et al.
Am J Pharm Educ. 2012 February 10; 76(1): 7.
doi: 10.5688/ajpe7617
You may also wish to check out well-known blogs and sites that discuss the academic environment, including <http://theprofessorisin.com> and <http://chronicle.com/section/Home/5> and <http://www.phinished.org>.
Upvotes: 2 |
2013/01/24 | 509 | 2,125 | <issue_start>username_0: I had an interview day for a PhD program at a US institution. I met 4/5 professors and a few PhD students and post-docs. Should I send a thank-you email to my host professor or not? What about the other professors?
I think the interaction we had that day was good enough, so I am not sure an email follow-up is really necessary.<issue_comment>username_1: It depends a bit, I guess... I personally think that it's always nice to send an email thanking the person for their hospitality. I don't think it will increase your chances for the position, but hey, it's good manners :)
Whether or not you get a position is often related to not only how good you are as an individual, but also:
* whether or not you will bring some new expertise/perspective to the lab/group
* whether or not the group leader believes you will fit into the existing group (socially/culturally etc..)
* whether or not it will cause them an extra effort to get you there (in case you'll require work/residence permit, specific equipment etc)
* whether or not they already have a more suitable candidate in mind (it's pretty mean to the individual, but also very understandable, for professors to interview other candidates when they have already made up their mind about a particular one; the motivation behind something like that could be planning for future projects, or bureaucratic reasons)
These are only a couple of factors I can think of. But to come back to your question: I think you have nothing to lose, and if anything it will show that you appreciated the chance to visit the lab and talk to people, which is a positive thing. ;)
Upvotes: 4 <issue_comment>username_2: A polite and courteous thank-you email is never inappropriate. Also, if you've left anything out of your interview day (or promised to follow up on something), it's a good opportunity to do so.
However, you shouldn't turn this into an opportunity to go overboard and plead or beg for a spot, or oversell yourself. That is unlikely to go over well, and can undo the good job you did on your interview day.
Upvotes: 5 [selected_answer] |
2013/01/24 | 589 | 2,509 | <issue_start>username_0: I'm sorry if this is a bit subjective, but I really don't know where to find the truth.
In my hometown, lecturers usually only approve research in computer science & information systems where the result is new software. The goal is to create software that can be used directly by businesses. Usually, research that can't be 'seen' and used directly is rejected and considered useless.
At my university, lecturers say that creating new software is a type of research called a quasi-experiment. Students are expected to perform the following activities: gathering requirements, designing UML models, and implementing the source code. In the seminar (final exam), lecturers ask a lot about business processes and customer satisfaction. No maths. Most questions are subjective and hard to prove.
Is it true that creating fully functional software or a website like this is a kind of quasi-experimental research?<issue_comment>username_1: As an undergraduate researcher, you mostly don't have the time, experience, or support to create a full-fledged project from scratch, by which I mean creating original work in your field. Thus, given the scope of the project, implementing software is a valuable exercise that can also be really useful for research. For example, in my domain (bioinformatics) there is a special issue of NAR (Nucleic Acids Research) solely devoted to web servers. The latest issue can be found [here.](http://nar.oxfordjournals.org/content/40/W1.toc)
I think one of the big issues you are facing (in almost every field) is that the amount of knowledge you have when leaving a bachelor's degree has been pretty constant for the past 50 years. In contrast, the volume of new research has grown exponentially over that time. Thus, the gap to be crossed to create something new is constantly growing.
Upvotes: 3 [selected_answer]<issue_comment>username_2: It depends what your definition of an experiment is:
>
> 1. a scientific procedure undertaken to make a discovery, test a hypothesis, or demonstrate a known fact
>
>
>
This covers scientific programming, but not a lot of other areas of software development.
>
> 2. a course of action tentatively adopted without being sure of the eventual outcome
>
>
>
Well, for sure all software development is done without being sure of the eventual outcome! You have hopes, you try stuff, you analyze its consequences, you find a way of improving the software or mitigating the issues, and you learn something.
Upvotes: 2 |
2013/01/24 | 964 | 4,125 | <issue_start>username_0: I am reading a paper and have questions about the details of the procedure described. I have read other papers by the same team, but they don't explain much about that procedure either. I think it might be common, but my supervisor doesn't know it either.
I am stuck and I want to get out of it. As a student working on my master's thesis, can I cold-email the contact author of the manuscript, or should I ask someone to contact him for me? I would ask my supervisor, but I don't want to imply that I avoid taking initiative when I could do it on my own.<issue_comment>username_1: You can definitely contact a paper author. They might be of the 'it obviously follows' (after 5 pages of calculations) kind, or the empiricist who published the 20 successful regressions or simulations out of 200 (with the other 180 contradicting their result or being inconclusive), and in either case they may ignore your question. From personal experience, though, it can even lead to breakthroughs: in my case, someone sent me his lecture notes, which clarified something I was stuck on and related to the question I had asked. However, if your advisor knows the author, or is simply well known in their field, do mention that you are their student, as it should increase goodwill on the author's part - after confirming with your advisor that they are cool with it. Showing that you are active, interested, and independent should also go down well with the advisor.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Definitely contact the author. Collaboration is what research is all about. Authors expect these sorts of emails when they publish. Also, sending emails like this lets people know your name, one person at a time. This way, when you're at a conference later on, you can go over to the author and say, "hey, I emailed you a while back, nice to meet you in person." It's always good to network.
It would be good form to mention your advisor in the email, whether he's well-known or not.
Upvotes: 4 <issue_comment>username_3: I will give the point of view from East Asian universities.
Here the lab culture is strongly focused on the professor as the head and only public face of the laboratory.
Because of this, many students are not used to being asked directly about their research, and usually they do not know what to do about it, and will end up asking their professor.
The best case scenario is that the professor won't mind and will give the student authorization to mail you back, but the worst case scenario (it happens!) is that the professor will get offended because you contacted a student and not him, and you won't get any answer at all.
This mostly applies to universities in China, Japan, Korea, etc. I would recommend mailing the professor and asking him directly; it will take time, but it is usually your best bet. Even better, have your adviser contact him for you and ask for permission for you to contact the student directly (I'm really not joking about this).
Unless the person who wrote the paper is a foreigner - in that case, go ahead.
Upvotes: 3 <issue_comment>username_4: Yes, you can and should contact the author of the paper.
The more thought-out and coherent your email and questions are, the better chance you stand of getting a useful reply. The risk with a cold email is being ignored, so make sure that you do everything to avoid that.
1. Make sure you state your question carefully, and make sure that you actually can't answer it yourself (for example, by reading the references in the original paper).
2. Make sure the subject line is to the point, send the email from your university address, and don't be afraid to mention your advisor or even cc them on the email - this will give the person you are cold-emailing some immediate context.
Upvotes: 2 <issue_comment>username_5: I would encourage you to email the author.
I have sometimes emailed authors of papers and occasionally I have been mailed by people asking about my research.
If you write a polite and reasonable mail, most people will be glad to hear from you.
Upvotes: 0 |
2013/01/25 | 1,876 | 7,291 | <issue_start>I am wondering, for full-time university teachers (not those who also have research responsibilities), what is *generally* the number of hours per week that they teach? I currently teach 20 hours per week and find the load quite heavy, giving me little time to prepare new modules with quality. Add to that the responsibility of marking, and it is not uncommon that I end up working more like 50-60 hours per week to teach 20.
Are these numbers average? High? Low?<issue_comment>username_1: 20 hours per week is normal for most teaching-only positions. I have seen professors with major research responsibilities who at times have had 20 hours of teaching as well.
My rule of thumb for a course: you will spend 2 to 3 times the amount of time you teach on preparation the first time you teach a course, and this decreases as the years go on for that same course (a new course again requires a considerable amount of preparation time).
The first time I taught advanced thermodynamics and fluid mechanics courses, I was actually spending two full days of preparation per 2 hours of teaching! After three years, it was down to 2 hours of preparation for 2 hours of teaching.
Your numbers seem right to me. In short 20 hours of teaching = virtually no time to do research.
Upvotes: 3 <issue_comment>username_2: It depends on different parameters, but universities generally expect each academic staff member's time to be split roughly as:
* 40% research
* 40% teaching
* 20% involvement in committees and university meetings
Of course, different personalities have different interests and focus more on either research or teaching activities. That's why some take on more courses than others.
In addition to personal interests, the needs of the school are another issue. For instance, imagine one of the lecturers needs to stay in hospital after an injury in an accident: the head of school asks one of the academic staff to cover for the absent colleague.
Upvotes: 2 <issue_comment>username_3: There is no “typical” number in this matter. Let's take a few examples:
* UK, lecturer: it's usually a full-time position, so you have to put in 35-40 hours per week. The ratio of lecture time over all the rest (preparation, departmental committees, etc.) depends on the contract, but I have rarely seen it pushed past 1:1 (which means roughly 20 hours of teaching, maybe 25 at most).
* France, "PRAG": this is the position of a high-school teacher seconded to a university, and the closest one to a lecturer position. Their nominal teaching load is 384 hours per year, with a weight of 1.5 for lectures and 1 for exercise sessions. But the teaching year is very short, from about 23 to 26 weeks, so that means about 15 hours per week, and this is only the nominal amount. In any case, they (voluntarily or not) have to teach additional hours, which are paid on top (at a **lower** rate than nominal hours).
* As a point of comparison: a French high-school teacher would have **18 teaching hours** per week.
Other comparison points, less relevant to the question as they are for teaching+research positions:
* France, associate professor (*maître de conférences*): junior-level position, supposed to be half teaching and half research. This has a fixed load of **192 teaching hours per year**. If you consider that those are spread over 30 weeks, it gives **6.4 hours per week**. Even considering it is not a teaching-only position, that number is lower than the one you quote.
* France, full professor (*professeur des universités*): same number of hours in theory, but as you gain seniority you can do more full-class teaching (with bigger groups), of which every hour counts as 1.5 hours in your yearly total. Through this and other mechanisms, senior professors usually have fewer hours to teach.
Upvotes: 5 [selected_answer]<issue_comment>username_4: I assume you are talking about in-term teaching hours per week and not total yearly teaching divided by 52. I have never heard of a permanent full-time position having a heavier teaching load than a 5-5. Some people then choose to teach summer courses (but this wouldn't affect your teaching hours per term week). Often this type of load includes some repetition, so you might only have to prep 7 courses, of which only 1 might be a new prep. The amount of classroom contact time for each course might be as low as 3 hours but could climb to 6. There might also be some office-hour contact (which you might count as teaching contact). Many full-time teaching jobs have lighter loads, which can be as low as 3-3. Research-intensive departments can have typical teaching loads as low as 1-1. Adjuncts often teach as much as 6-6 in order to make ends meet.
There is so much more than just the number of taught hours that influence teaching load. I think you need to look at number of classroom hours, number of unique preps, office hours, and marking.
Upvotes: 2 <issue_comment>username_5: In the US teaching load is assigned in terms of courses or credit hours per year and varies heavily between "teaching schools" and research schools. For example, I currently teach undergraduates at a teaching school (i.e., it does virtually no research, but has the name university attached like most institutions) that has a load of 30 credit hours per year, and am moving to another teaching school that has a load of 33 credit hours per year.
At a research school nearby I know lecturers (no research) teach 3-3 (3 courses in the fall semester, 3 courses in the spring semester, and mostly 4 credit courses). Professors (researcher and teaching load) at the same school taught 2-2 (2 courses in fall, and 2 in spring).
A credit hour is approximately 15 lecture hours over the course of a semester, so these loads vary from 33\*15=**495 teaching hours per year** (**9.5 hours / week on a 52-week year**), to 6\*4\*15 = **360 hours per year** (**6.9 hours / week on a 52-week year**). For this spring semester I am currently teaching 11 hours of lecture per week and an additional 4 hours of lab sessions per week. My load is usually lighter in the summers, but not by that much.
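Since this conversion comes up for any load, here is the arithmetic above as a tiny Python helper. The 15-lecture-hours-per-credit figure and the 52-week year are the same assumptions stated in the previous paragraph, not universal constants:

```python
# Weekly teaching hours implied by a yearly credit-hour load,
# assuming ~15 lecture hours per credit hour and a 52-week year.
def weekly_hours(credits_per_year, hours_per_credit=15, weeks_per_year=52):
    return credits_per_year * hours_per_credit / weeks_per_year

print(round(weekly_hours(33), 1))     # 33-credit teaching load      -> 9.5
print(round(weekly_hours(6 * 4), 1))  # 2-2 load of 4-credit courses -> 6.9
```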
Upvotes: 2 <issue_comment>username_6: It depends on the type of institution. At my university, a post-1992 UK university in Newcastle, I and some other colleagues teach on average 14 hours a week! Yes, and you also have to do research and engage in administration, including marking (lots of it), meeting students, and supervising both undergraduate and postgraduate students. I finally broke down; it was too much to bear, and I am currently on sick leave. Hopefully, my teaching hours will be reduced after this incident.
In pre-1992 universities, I understand the typical teaching load is 6 hours a week.
Upvotes: 2 <issue_comment>username_7: A university teacher typically teaches 10-12 hours a week, although this job involves many other roles, and preparation time for a lecture may vary depending on the topic.
My university's lecturers had different teaching hours depending on the additional work assigned to them by the university, such as assessing test papers, processing results, and grade management.
Upvotes: 0 <issue_comment>username_8: In my university in Australia, 60% of my time is teaching, which translates into 360 hours per year, so about 10 hours per week. The other 40% is research (20%) and admin (20%; committees, etc.). I feel exhausted.
Upvotes: 2 |
2013/01/25 | 787 | 3,257 | <issue_start>I started my PhD last year on a topic that has several interesting problems to solve in the area; however, I don't find the problem interesting enough to spend 4 to 5 years on. As time went by, I came across a new problem through my coursework and interactions with various professors. I have started liking this different topic, which is not very related to the original.
What would be a wise thing to do in such a situation? Is it a common situation (changing topics midway)?<issue_comment>username_1: First, I've yet to meet a single PhD candidate who actually did what he proposed in his research plan in the first place, unless he entered for some particular project (and even then).
It is not uncommon to look at different topics and think that one may suit you better, and in all fairness, you should be doing something you like, not some topic another person imposed on you.
Now, switching topics, especially if they are unrelated, will have the consequence of delaying your PhD graduation considerably. I switched topics in my second PhD year, but I mostly changed the application, while the most fundamental part (in my case, the math) was pretty much the same, so I got to use most of the foundations I had learned over the first couple of years.
I hope it helps.
Upvotes: 3 <issue_comment>username_2: My advice is to talk about it with your advisor (naturally).
When I was narrowing down subjects I, too, was struggling with how much *interest* I had in various topics. I got some advice from someone who was working at ABB and had recently finished a PhD. It went something like this:
>
> *A PhD is as much a long process as it is becoming an expert and contributing in a field. If you pick a topic you're enthusiastic about at the beginning, chances
> are, you'll become tired of it before you finish. If you pick a
> topic that seems less interesting, after working on it for a long
> time, you probably will come to love some things about it you didn't see at the start.*
>
>
>
In my case, the latter was true. I was more interested in finishing on time than having the topic of my dreams. But I finally enjoyed my topic a lot, even though at first it seemed boring and not up my alley.
Upvotes: 2 <issue_comment>username_3: To me, the only *wrong* answer to this question would be, "throw everything away and start over", and there may even be some (rare) situations where that approach is justified.
Everything else is basically varying shades of "right", depending on your specific situation.
* Talk to your advisor about pursuing your alternate interest as a side project, with the ultimate goal of a few publications on that topic.
* Work with your advisor to identify other labs doing similar work, do a collaborative project with another lab with the goal of publishing.
* In a similar vein, if your program allows it, do an `x`-month (`x` < 1 year) rotation in a different lab that focuses on your new-found interest, with the goal of familiarizing yourself more with the intricacies of that field, readying you for a postdoc or professorship role in that subfield.
* Write down your ideas as possible ideas for grant applications for the future (postdoctoral tenure, professorship positions).
Upvotes: 2 |
2013/01/25 | 866 | 3,667 | <issue_start>I am wondering where I could find global statistics on the flow of students between countries. This would include numbers such as "how many students with a US bachelor's went to the UK for a master's" and "how many Spanish master's students went to the US for a PhD".
They always say that the academic world outside the US suffers from brain drain towards the US; I want to find a way to quantify this.<issue_comment>username_1: I don't have the statistics at hand for the case of students, but there was recently a publication of mobility statistics for scientists covering 16 countries:
[The report is here](http://www.nber.org/papers/w18067)
[The IEEE also made an infochart that was a bit more explicit using some of the data in the report.](http://spectrum.ieee.org/at-work/tech-careers/the-global-brain-trade)
Upvotes: 5 [selected_answer]<issue_comment>username_2: The **UNESCO Institute for Statistics** offers some student mobility statistics by country in their [online database](http://data.uis.unesco.org/) (not by degree level, I'm afraid). It's apparently not possible to give a direct link to a particular item from this database, but you can find the figures you're interested in the section `EDUCATION > Other policy relevant indicators > Number and rates of international mobile students (inbound and outbound)`.
From that, there are several indicators of interest, e.g. `Mobility indicators > Net flow of internationally mobile students` or `Inbound students > Inbound internationally mobile students by country of origin`, depending on what you're after exactly.
Generally, you'll have to "play" a while with the database interface to get an interesting view of the data. See the "customize > selection" and "customize > layout" options available in the menu on the top of the tables. In particular, you'll often find some interesting additional variables or filters hidden under the "customize > selection" menu. It may take some time to format the data the way you want.
They used to have a simple, nice interactive map of this data on the following page <https://uis.unesco.org/en/uis-student-flow>, but it doesn't seem to work anymore (the map seems to load forever without displaying anything). I gave the link anyway in case they eventually fix it.
Another option, more restricted in geographical scope, is the **Organization for Economic Co-operation and Development** (OECD) [online database](https://stats.oecd.org/). They offer some data on mobile students by country of destination (but only for OECD countries), degree level, and field. You'll find several variables of interest in the section `Education and Training > Education at a Glance > Students, access to education and participation` (e.g. "[International graduates by country of origin](https://stats.oecd.org//Index.aspx?QueryId=121221)", "[Share of international students and all students by field](https://stats.oecd.org//Index.aspx?QueryId=121222)", etc.) Look also at other variables without "mobile" or "international" in their names; sometimes they include filters relating to international students, which might interest you. Here is an example for doctoral students: <https://stats.oecd.org//Index.aspx?QueryId=118857> . Like the UNESCO database, it takes some time to understand how the database works, so you'll have to invest some time to find what you're after. Hopefully the links I gave previously will help you.
Otherwise, if you're interested in some particular countries, it may be worth looking at the websites of national statistical institutes or higher education ministries, which often offer data and figures on international students.
Upvotes: 2 |
2013/01/25 | 527 | 1,945 | <issue_start>I would like to ask if there is an algorithm for arranging courses into a timetable. I study at a university where we can choose between a few different lesson times for each subject.
The problem is how to coordinate all subjects with the student's requirements, for example, having classes only 2 days a week and/or selecting certain hours based on capacity.
For one subject, you have to select one time from the first table and one from the second table (lecture and seminar, respectively). For some subjects there is only one table to select from (only a seminar/only a lecture):

---
<issue_comment>username_1: There are algorithms for timetabling, but I doubt that you would want to get into the level of detail required for understanding them and applying them to your - as I take it - one-off situation. Timetabling is a difficult problem for a computer to solve when there are many activities and people to timetable. It is an NP-hard problem, and a hot topic of current computer science research.
Perhaps something like [this](http://www.timetableonline.com/) would help you? I haven't tried it so I can't comment on whether it is a useful/competent solution.
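For a one-off personal timetable, though, you don't need any of the sophisticated machinery: with a handful of subjects and a few candidate slots each, plain brute force runs instantly. Below is a minimal sketch in Python; the subject names, slots, and the "at most 2 days" constraint are made-up placeholders, and it treats each slot as a one-hour block (a real version would check overlapping time ranges too):

```python
from itertools import product

# For each subject, one list of candidate (day, hour) slots per required part.
# A subject with only a lecture (or only a seminar) simply has one part.
subjects = {
    "Calculus": [[("Mon", 8), ("Wed", 10)],   # lecture options
                 [("Mon", 12), ("Thu", 9)]],  # seminar options
    "Physics":  [[("Wed", 8), ("Thu", 10)]],  # lecture only
}

def valid_schedules(subjects, max_days=2):
    parts = [options for subject in subjects.values() for options in subject]
    for choice in product(*parts):            # pick one slot per part
        if len(set(choice)) < len(choice):    # two parts share a slot: clash
            continue
        if len({day for day, _ in choice}) > max_days:
            continue                          # spread over too many days
        yield choice

for schedule in valid_schedules(subjects):
    print(schedule)
```

The search space is the product of the option counts per part (which is exactly why the general problem is NP-hard), but for a single student's schedule it is tiny.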
Upvotes: 2 <issue_comment>username_2: Why did this get moved to <https://academia.stackexchange.com/> ? This is an algorithm question!
Anyway, it so happens some friends of mine have one of the top timetabling algorithms, based on competition wins. Here is a link to the paper (which will include references to other state-of-the-art timetabling algorithms)
>
> An automatically configured modular algorithm for post enrollment
> course timetabling <NAME>, <NAME>, and Marco
> Chiarandini - Technical Report TR-2009-15, University of British
> Columbia, Department of Computer Science, 2009. [[pdf]](http://www.cs.ubc.ca/cgi-bin/tr/2009/TR-2009-15.pdf)
>
>
>
Upvotes: 3 |
2013/01/25 | 1,064 | 4,276 | <issue_start>I received an invitation to nominate students for an award that could be for *an undergraduate, a graduate or a post-graduate student*. I've seen those terms used before, but have never been sure what they mean. I know *Bachelor* student, *Master* student, *PhD* student and *post-doc*.
The timeline:
Being a Bachelor student → Getting the Bachelor degree → Being a Master student → Getting the Master degree → Being a PhD student → Getting the PhD degree → Being a post-doc → ...
Then what do *undergraduate*, *graduate* and *post-graduate* students refer to? Are *undergraduate* students exclusively students studying to get a *Bachelor* degree, or can it also refer to students studying to get a *Master* degree? After all, that's a degree they don't have yet. Literally speaking, they are also "under" the PhD degree, but the term is surely never used that way.
And a *graduate* student, is that then someone studying for the *Master* degree, or is it used only for people studying for the *PhD* degree?
But then what is a *post-graduate student*? Is this a *post-doc*? But *post-doc*s aren't students anymore, so then it could only refer to *PhD* students. Or are post-docs considered students, too?<issue_comment>username_1: In many American universities, a Master's student is one who is enrolled purely in a Master's program and is expected to leave the school after graduation.
A graduate student is usually enrolled with the objective of doing a PhD; many graduate students, provided they have completed the coursework and a thesis, may get a Master's degree in the middle of the program along with the PhD.
I think the term "postgraduate student" may also cover postdocs, but I'm not sure. In Mexico (and maybe France, because we share some characteristics of the language), a postgraduate student is one doing either a PhD or a Master's, and a graduate student is one doing his Bachelor's degree.
Upvotes: 1 <issue_comment>username_2: I am almost certain that post doc is *not* what is meant.
In English speaking systems outside of North America, and especially referring to Europe under the Bologna accords, an *undergraduate* refers to someone who is studying for, but has yet to receive, his *first post-secondary education degree*. Typically this degree is some equivalent of Bachelors, but in some cases students maybe enrolled in accelerated programs with a longer term of study that leads directly to (the equivalent of) a Masters degree.
A *graduate* student can, but not necessarily, refer to someone who is studying for a *graduate diploma*. In many countries having a Bachelors (or equivalent) is not sufficient in itself to qualify one for starting a postgraduate degree. One often requires a "good enough" Bachelors degree (such as one with honors). The graduate diploma is an intermediate step in which a student who has already received his first post-secondary degree studies further in order to qualify to enroll in a masters (or sometimes doctorate? I am not sure about this) degree program.
A *postgraduate* student refers to someone who has already obtained a first degree, and is now pursuing a second, third, or Nth degree beyond it. See, e.g. [this Wikipedia entry](http://en.wikipedia.org/wiki/Postgraduate_education).
A postdoctoral researcher is generally *not* considered as a student.
---
In English speaking North America, an *undergraduate* typically refers to someone studying for a bachelors, since almost all (if not all) degree programs go through that stage in North America. And a *graduate student* refers to any student studying for any degree beyond that of the bachelors (so that would be typically the masters or the doctorate).
Upvotes: 6 [selected_answer]<issue_comment>username_3: In the USA, an undergraduate student is one who is working towards a bachelor's degree. Typically, a graduate student is one who has a bachelor's degree and is working on a master's or higher-level degree. A postgraduate student refers to someone who has earned a master's degree and is en route to a higher-level degree. A postdoctoral student is one who has completed the coursework for the doctoral degree but still has other requirements to finish, like a thesis or dissertation.
Upvotes: -1 |
2013/01/25 | 544 | 2,267 | <issue_start>I wonder: everyone in academia handles a lot of email, on all aspects of their work. I mean, emails about conferences, emails about journals and papers, emails about research, teaching, supervised students, and so on. So if you have a big announcement to send to many people (e.g. a seminar or conference), does it matter WHEN you send it? I mean, is it better to send it on Wednesday afternoon (middle of the week) rather than Saturday evening (middle of the weekend)? I imagine if people receive the email at a time when they are busy, they may well overlook it.<issue_comment>username_1: First, **I don't think it makes much difference.** Researchers deal with information, and a lot of that information is communicated through emails nowadays. So, most researchers I know are very careful about their emails and read them thoroughly. Especially if you write to people who know you, your name should be enough for them to read your email through anyway.
But, I can understand that if you have an important announcement to make to a large list of people (who don't all know you personally), you may want to “micro-optimize” this. I have done it in the past: having a call-for-papers email ready on Saturday, and waiting ’til Monday afternoon (US time) to send it, thinking people would read it with a fresh mind, and not alongside the batch of “week-end email” that they might triage on Monday morning. (Some people stay connected during weekends, of course, and for those it makes no difference.)
Upvotes: 5 [selected_answer]<issue_comment>username_2: My short answer is that I only send important email within working hours, avoiding the early-morning and late-afternoon hours. I usually get a very high reply rate with this technique, as opposed to sending email at midnight.
Upvotes: 3 <issue_comment>username_3: Marketing experts advise sending emails on Tuesday, Wednesday and Thursday. They also advise sending the email early in the morning, so that it will be at the top of the pile for those who check their emails right after waking up/arriving at the office.
Sunday afternoon is also a good choice. If the receiver reads emails during the weekend, you will be one of very few, and if the mail is read on Monday, it will be at the top of the pile.
Upvotes: 3 |
2013/01/25 | 809 | 3,538 | <issue_start>One of the master's students is working on a research problem. I am a PhD student. I have an idea, which I proposed to my advisor, on the same problem. Now he wants me and the master's student to work on that idea and publish a paper. Would I be treated as the first author on that paper, or would I be the second author?<issue_comment>username_1: You give a partial version of the story, using pejoratives instead of trying to stick to the facts, so it is difficult to answer your question with that information.
What we can say is that authorship is something to be discussed with your advisor; it will be his decision in the end. A good way to help yourself is to work hard on that idea and clearly make your point. The best way to help yourself be first author is to write the paper, or at least the more significant portions of it! Start already with the introduction and methods, and as soon as results are gathered, write it up and then submit it to your advisor. If you have done a large part of the work and written the manuscript, you should have no problem being first author.
Upvotes: 3 <issue_comment>username_2: Authorship is always controversial, but the general rule is: **the contribution of the authors determines the order of the authors on scholarly publications**. The first author is the one who has contributed the most and usually writes the paper. The last author is usually a professor or senior researcher who leads the team, and his role is mostly supervisory.
In your case, how significant was your idea in producing the outputs? If you had a significant idea, then you designed the research. Even if the MS student has run many of the experiments, you can make a contribution by writing and preparing the manuscript.
Finally, you can ask a senior researcher to judge between you, if there is still any controversy.
Upvotes: 1 <issue_comment>username_3: A PhD student insisting on being the first author on a master's project is sometimes not welcome, **especially if the master's student understands the problem and can solve it by *herself***. In this case, unless you bring a major new perspective to the solution method, you won't be the first author.
I have worked with master students and my role was very clear from the beginning. I was involved either as
* supporter to the master student
(e.g. checking the literature, suggesting improvements, studying the problem, brainstorming for better ideas, helping with writing)
or
* the master student is supporter for me
(e.g. doing code implementation, graphic design, etc.).
*If the research problem is the student's thesis*, then most likely you will not be the first author (it is the student's thesis, right?)
The authorship question is the supervisor's responsibility. If you are very concerned about it (i.e. you wouldn't work on it unless you are the first author), then you should speak with the supervisor before starting. Tell her why you want to be the first author.
That said, if you take the leading role as an experienced researcher, you might end up first author without asking for it (unless alphabetical ordering is used).
Upvotes: 3 <issue_comment>username_4: Why not see this as an opportunity to supervise an able student? Prod them, needle them, cajole them. Do whatever it takes to get the student to generate a good result. It seems to me your adviser is giving you an opportunity to grow your professional capabilities.
If the master's student does all the research work, then they should be first author.
Upvotes: 2 |
2013/01/25 | 3,160 | 13,103 | <issue_start>username_0: I am currently on the academic job market, and scheduling on-campus interviews with institutions that might want to hire me.
Suppose I am invited to an on-campus interview at the University of X, and must travel there by air. They handle travel on a reimbursement basis: I buy the plane ticket, and then they reimburse me.
However, the interview is a few weeks away. Since the job market sometimes moves fast, there is a chance that by the time of the scheduled interview, I may have already accepted another offer (say from the University of Y). Of course I should then decline the interview at X, but I would have already bought the plane ticket.
**How should I plan for this contingency?**
1. I could buy a refundable ticket to X. However, these are normally several times the price of a non-refundable ticket, and if I do end up traveling to X, they might balk at reimbursing me for such an expensive fare.
2. I could buy a non-refundable ticket to X. If I end up not going there, I could ask X to reimburse me for the cost of the ticket (or at least the "change fee" charged by the airline to let me use the ticket's value for a future flight). However, I suspect they will be reluctant to reimburse me for a trip I'm not making, and might refuse to do so altogether, in which case I am out-of-pocket.
3. I could wait until the last minute to buy a non-refundable ticket for X. But it may still be expensive for them (or may exceed their limits), and the most convenient flights may be sold out.
4. I could contact University of X and ask them for guidance. I'm a bit hesitant to do this, as I am afraid that if I bring up the possibility that I might accept another position, they might think I am not seriously interested in theirs.
**Is there a standard way to handle this situation?**
This is in the United States, if it matters.<issue_comment>username_1: It is always best to ask. You can use another pretext as a reason for your “hesitation”. Say, for example (in short):
>
> I wonder what the restrictions are on travel reimbursement. I could buy a ticket to X right now, but I am not fully sure about the exact timing of the flight (due to family arrangements not yet settled). However, the price might increase if I wait.
>
>
>
If you do not want to ask, you have no option other than 1 or 3. It depends on how you evaluate your chances of finding a job before the interview. Probably 3 is worse than 1: if you have a cool new job, you will not mind so much losing a plane ticket. I do not think universities would accept option 2.
Upvotes: 3 <issue_comment>username_2: While buying the plane ticket is expensive, I would look at it as an "opportunity cost." If you wait until the last minute, and *don't* get a job offer and *don't* get reimbursement, then you have the worst of all worlds: no job offer, and a very expensive plane ticket to pay for!
It is therefore much better to ask the schools for guidance. However, if the issue is a financial one, you *could* mention to the school that you might need a travel advance in order to help pay for your ticket. Normally they can work out some sort of arrangement if it is a problem for the candidate to pay for her own ticket.
You *are* correct, however, in thinking that the school would not appreciate being asked what happens if you need to cancel because of accepting another job. If you do that, you can save yourself buying the plane ticket, because you're probably not getting a job at that school anyway!
Upvotes: 2 <issue_comment>username_3: I have been in similar situations recently, and I have to say first that most of the time, the university was directly taking care of buying the plane ticket for me, which somewhat simplifies things. Nevertheless, I would say that the first point is that whenever you schedule an interview, you should somehow commit to attending it, meaning you should not accept another offer once the interview is scheduled. If you are expecting an answer soon from a place that you would definitely take if accepted, then wait until you have received the answer before scheduling the next interview.
It's also worth mentioning that in many cases, an interview is not just an interview: you might also give a seminar and meet and talk with people there, so it might be interesting to go in any case, even if you have already accepted an offer elsewhere. However, it's quite important to be open about it with the interviewing institution. I have been in this situation, i.e., going to an interview while having already accepted another offer, and mentioning it only at the last minute, and it was clearly a rude thing on my side (in my defense, I didn't fully realise it was an actual interview, and thought it was just a seminar to make contact with the team there. It was however a clear fault on my side).
So, to answer your question, let's assume you've already applied to A, and you're waiting for their answer, and you need to schedule an interview for B. Two cases are possible:
* either A is your preferred place, and in that case, wait until you have received their answer before making any arrangement. Ask A when they plan to give the answer, and if the expected date is close to the interview for B, then explain the situation to B. Academia is a competitive market, and there is nothing wrong in applying for several positions at the same time. Excellent candidates are usually accepted in several places, and as JeffE would say, if you're not excellent, why would they hire you anyway?
* or B is your preferred place, and in that case, schedule the interview regardless of the date when you receive the decision from A. The only tricky part is that you might receive the answer from A before the interview at B and have to make a decision, but it's up to you to weigh the trade-off between accepting A and losing B for sure, or refusing A and taking a risk on B.
So, in both cases, go for 3: wait until the last moment to make any arrangement. You should finally consider the fact that academia is a small world, and that if you apply to both A and B, it's very possible that they are both aware of your double application. Anyway, I wouldn't expect a recruitment committee to believe that you have only applied to their opening.
**EDIT** Apparently, I wasn't as clear as I wanted to be. In no case am I suggesting not to *accept* any interview offer, but simply, if possible, when you're waiting for an answer from A, which would be your first choice, and you receive an interview offer from B, to wait until you have the answer from A before scheduling the interview at B. If you can't wait, then schedule it anyway, but be clear with B that you're expecting an answer from A that might arrive, and that you might have to formally accept it before the interview at B.
Upvotes: 4 <issue_comment>username_4: If you've bought the ticket for an interview that you honestly intended to go for, and you get an offer in the meantime, it doesn't necessarily mean that you should decline the interview.
* You might be pleasantly surprised by the place and realize that it's more in contention than you think.
* You might make useful contacts that will help you later on in your career even if you don't go there.
* You might get an offer from this place, and having two offers improves your negotiating position tremendously.
Upvotes: 3 <issue_comment>username_5: I'm assuming you're applying for tenure-track jobs here. The value of a tenure-track job is so much more than the cost of a plane ticket that I really don't think it's worth doing anything that could compromise your odds at University X. If it's a matter of buying a ticket today or in a couple of days, then you can easily drag your feet on buying the ticket without them even knowing. But otherwise, just buy the plane ticket and deal with the tricky situation if it comes about.
Upvotes: 3 <issue_comment>username_6: I am writing to disagree with <NAME>'s answer. (I signed up just now, and am unable to comment.) At least, I would consider part of his answer to be poor advice for applying to TT jobs in mathematics.
In particular, I believe you should *always* accept invitations to interview if you think they're in your best interest -- i.e., if you would be happy to take a job there, and if you are not absolutely confident you won't get better offers.
Suppose that I had scheduled an interview at A, and I get an interview offer from B. I prefer A to B, but I also really like B. Then I would *definitely* accept the interview offer at B, even if you might get an offer from A in the meantime.
I say this as someone who was on our hiring committee this year. It is a difficult and stressful process for us, but surely it is much more difficult and stressful for candidates. We understand that candidates want to get the best job possible and expect that they will look out for their own best interests. We expect honesty, but hesitating to accept an interview offer at a school you like, under almost any circumstances, seems unwise to me.
What if you accept an offer from A before your interview at B, and the plane ticket is bought? I would contact B, tell them you had accepted another offer, offer thanks and apologies, and offer either to come and give your lecture, or to simply cancel the trip. Most hiring committees, I think, would be gracious and kind, would ask you not to come, and would refund your plane ticket. Perhaps they would like to meet you anyway, and give you the option of giving your talk. In the unlikely event that they are rude, this will give you reason to be grateful that you will be working elsewhere.
Upvotes: 5 <issue_comment>username_7: The university and hiring committee are already committed to paying for your flight out (plus room and board). As such, I think it would be reasonable to offer to pay for half the cost of the ticket IF you accepted an offer at another university.
I would not ask beforehand about anything, but this gives them the opportunity not to be out all the money, and they can put it toward flying out someone else -- what's $200 out of a $400 ticket compared to hiring the right somebody?
Upvotes: 1 <issue_comment>username_8: You should go ahead and book your non-refundable ticket now. If you get a job offer from your preferred school prior to this interview, ask them when they need their decision by. If they say a date prior to this interview, ask if you can extend the date until after this interview. They will almost always agree to this if they truly believe you are the best candidate for the position.
You should do this for two reasons. (1) Who knows, while at this school you may fall in love with it! But (2) if you don't, and you get an offer from them, you can use it to bargain with the other school. Note you should do this tactfully.
In the end, if worst comes to worst, you have a job and can afford to sacrifice several hundred dollars on a plane ticket. I'd worry about this scenario only afterwards.
Also, sometimes if you tell a school, "I've gotten a great offer from another university, but I really like your program. Is there any way for me to move up my interview so I can have your decision before I have to get back to the other university?", they will agree. You should only do this if it is conceivable you could accept their position.
Upvotes: 2 <issue_comment>username_9: I am not sure why you can't just tell them you have already accepted an offer, and leave it to them to cancel the interview. If they cancel the interview on the ticket that they agreed to reimburse you for, then they have to reimburse you. This shouldn't make them mad, because they have the choice of interviewing you or cancelling, so they are no worse off than in any other scenario in which you accept the first job offer. It is assumed you will have job offers, so it's a risk they take by participating in the process. Additionally, you could ask about this scenario from a different phone number or an anonymous email to get what information you can about the university. Disclaimer: I am not a professor.
Upvotes: 1 <issue_comment>username_10: Accept the X interview (if the place is of interest) and buy the ticket now (before the price goes up), in a normal non-refundable class. In all likelihood, you won't be off the market (at least you won't have accepted a Y offer yet). It is in your interest to generate more than one offer at the same time to drive competition for your services.
If you do get a Y offer so early and are not able to delay a decision, you can try to get the X trip moved up (after all, they know you will be off the market, so there is pressure). Or you may be able to get it reimbursed -- for instance by turning it over to the X university ticket office, or by having Y pick it up (as they are trying to make you decide now). But it really is to your advantage to collect competing offers from X and Y, so I would just stall Y after they make the offer. Worst comes to worst, if you have to pay for a ticket of a few hundred bucks, it's no big deal if the Y offer is very attractive.
Upvotes: 2 |
2013/01/25 | 2,575 | 9,594 | <issue_start>username_0: **Observation:** According to [OECD stats](http://stats.oecd.org/Index.aspx?DatasetCode=RFOREIGN), the number of international students at US higher education institutions is the highest in the world and still rising (see also [Wikipedia here](http://en.wikipedia.org/wiki/International_student#United_States.2C_United_Kingdom_and_Australia) and a [report here](http://learningenglish.voanews.com/content/article/1546399.html)).
**Question:** *What are the main factors underpinning the observation above?*
I am interested in (partial) answers pointing to studies, or sources of statistical information on the topic, not solely opinions.
---
*This is a reformulation of [this](https://academia.stackexchange.com/questions/7455/) question*<issue_comment>username_1: Your question interests me, so I did an hour or so of internet searching for academic studies of the reasons behind brain drain in recent years. I will give here some results which I found. It is not a complete answer to your question, and I do not think anybody can completely answer it, because it is a very complex and highly studied issue. The reasons (driving factors) depend on each individual. The second paper below makes a useful distinction between PULL and PUSH factors and gives a list of useful examples of the two.
-- *Brain Circulation Replacing Brain Drain* at [Science CareersBlog](http://blogs.sciencemag.org/sciencecareers/2012/09/brain-circulati.html):
>
> "Brain circulation," meeting attendees noted in a consensus statement issued 6 September, is the "mutli-directional flow of talents, education and research that benefit multiple countries and regions and the advancement of global knowledge." In an era when many scientists and scholars move between several countries to pursue training and research, the statement suggests, "brain circulation" often more accurately describes international mobility than "brain drain," which implies a unidirectional flow that only benefits certain countries.
>
>
>
This is in agreement with Charles's comment. Maybe the situation is not as asymmetric as it used to be.
-- *Analysis and Assessment of the “Brain Drain” Phenomenon and its Effects on Caribbean Countries* at *FLORIDA ATLANTIC COMPARATIVE STUDIES JOURNAL*:
>
> In order to understand how the “Brain Drain” happens, we must spend some time discussing migration and the reasons people leave their home countries in the first place. The reasons many Caribbean natives go abroad and fail to return home fall within two categories often referred to as pull and push factors. Push factors are circumstances or events in the home countries that result in persons leaving. Examples of push factors are the structural adjustment programs enforced by the International Monetary Fund and the World Bank on developing countries that increased unemployment and reduced government funding on social programs in these countries which then led to increased migration. Pull factors are the incentives in the receiving countries that encourage persons to seek employment opportunities there. Examples of pull factors are the immigration incentive policies of the receiving countries that tend to attract higher educated, skilled and trained personnel. For example, the H-1B visa system in the U.S. is often used as a stepping stone by immigrants who want to acquire employment-based permanent residence there. The current immigration policy in the U.S. enables those applying for the H-1B visa to have the dual intent of attaining temporary work status but intending to apply for permanent residency (Kapur and McHale 2005). Other developed countries have similar immigration policies that continue to attract highly skilled workers from developing countries. Currently in Australia, employers of immigrants are not required to prove that domestic workers will be adversely affected by the employment of foreign employees, in fact, all they need to show is that employing the immigrant will be, in some manner, beneficial to Australia (Kapur and McHale 2005).
>
>
>
-- *China's brain drain* is a [report](http://shanghaiist.com/2009/07/28/dang_brain_drain.php) on a Gallup survey:
>
> This article argues that education, employment and family are the main reasons behind China’s brain drain. The article also provides useful statistics concerning the issue.
>
>
>
-- *Thai Diasporas and Livelihood Strategies in Thai Society* [here](http://www.trf.or.th/TRFGallery/Upload/Gallery/Documents/Files/%201000000022.pdf):
>
> This article uses traditional definitions of Diaspora to examine the phenomenon of the brain drain in Thailand. It also considers the reasons for emigrating to another country in terms of personal livelihood
>
>
>
(last few examples are taken from here: <http://www-cs-faculty.stanford.edu/~eroberts/cs181/projects/2010-11/BrainDrain/>)
Upvotes: 2 <issue_comment>username_2: Not a full-fledged answer, but there was a post (by [Marginal Revolution](http://marginalrevolution.com/marginalrevolution/2012/04/why-is-u-s-higher-education-so-dominant-and-why-is-harvard-1.html)) pointing to a [paper](http://www.isb.edu/faculty/upload/Doc23820121635.pdf) [1] that reports *alumni control* of the Board of Trustees as a key factor:
>
> All this is made possible by a model that transfers control to those
> who value it most, that is the alumni, who then drive competition for
> students, faculty, facilities, research, programs, global ties, sports
> coaches and rankings. Conversely, they also provide funds and guidance
> to maintain uniform excellence in all these pursuits. This maximizes
> the value of the degree or the “sheepskin” that the alumni are
> figuratively cloaked in for the rest of their lives.
>
>
>
[1] “Why is Harvard #1? Governance and the Dominance of US Universities” – Working Paper 2012, Indian Institute of Management, Ahmedabad.
Upvotes: 2 <issue_comment>username_3: Abstract
--------
Firstly, the answer will tackle the question's false assumption that the US is the most attractive destination for international students. Secondly, I will cite some of the factors making a country's or region's education system attractive to international students. Finally, to address some of the comments, I will present a chart showing the number of international students per capita in selected countries.
USA is not the most attractive destination for international students
---------------------------------------------------------------------
According to the [OECD Factbook 2011-2012: Economic, Environmental and Social Statistics](http://www.oecd-ilibrary.org/sites/factbook-2011-en/10/01/04/index.html?contentType=/ns/StatisticalPublication,/ns/Chapter&itemId=/content/chapter/factbook-2011-84-en&containerItemId=/content/serial/18147364&accessItemIds=&mimeType=text/html), the number one destination of foreign students among OECD countries is Europe, followed by the North American region:
>
> European countries in the OECD were the destination for 38% of foreign students in 2009 followed by North American countries (23%). Despite the strong increase in absolute numbers, these proportions have remained stable during the last decade.
>
>
>
To put the numbers above into global perspective, observe also that
>
> Foreign students enrolled in G20 countries account for 83% of total foreign students, and students in the OECD area represent 77% of the total foreign students enrolled worldwide.
>
>
>
Factors driving attractiveness of higher education in OECD countries
--------------------------------------------------------------------
Again, according to [the same source](http://www.oecd-ilibrary.org/sites/factbook-2011-en/10/01/04/index.html?contentType=/ns/StatisticalPublication,/ns/Chapter&itemId=/content/chapter/factbook-2011-84-en&containerItemId=/content/serial/18147364&accessItemIds=&mimeType=text/html) (emphasis added):
>
> *Language* as well as *cultural considerations*, *quality of programmes*, *geographic proximity* and *similarity of education systems* are determining factors driving student mobility. The destinations of international students highlight the attractiveness of specific education systems, whether because of their **academic reputation** or because of subsequent **immigration opportunities**.
>
>
>
---
---
Commenters on the question cite the ratio of international students per capita as an indicator of the attractiveness of an education system for foreign students. While I do not see any direct correlation between the attractiveness of an educational system and the ratio of foreign students per capita (countries can be arbitrarily protective, or non-protective, w.r.t. their own citizens), I prepared the following chart from the OECD data (for the year 2009):

The chart was constructed by merging data from [OECD.Stat](http://stats.oecd.org/Index.aspx?DatasetCode=RFOREIGN) with OECD countries' population data from [OECD population 2009](http://dx.doi.org/10.1787/888932502638) as published in the [corresponding section of the OECD Factbook 2011-2012](http://www.oecd-ilibrary.org/sites/factbook-2011-en/02/01/01/index.html?contentType=&itemId=/content/chapter/factbook-2011-9-en&containerItemId=/content/serial/18147364&accessItemIds=&mimeType=text/html). The computation is done on the non-citizen students column for the year 2009, except for the United States, where it is the number of non-resident students (due to the lack of a non-citizen students datapoint).
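For anyone who wants to reproduce the chart, the computation itself is just a merge and a division. Here is a sketch in Python with pandas; the CSV file names and column names are hypothetical, and the actual OECD extracts have to be exported and cleaned by hand first:

```python
import pandas as pd

# Hypothetical per-country CSV exports of the two OECD tables cited above.
students = pd.read_csv("oecd_foreign_students_2009.csv")  # country, non_citizen_students
population = pd.read_csv("oecd_population_2009.csv")      # country, population

merged = students.merge(population, on="country")
merged["students_per_capita"] = merged["non_citizen_students"] / merged["population"]

# Rank countries as in the chart, most international students per capita first.
print(merged.sort_values("students_per_capita", ascending=False).to_string(index=False))
```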
Upvotes: 3 [selected_answer] |
2013/01/26 | 653 | 2,756 | <issue_start>*Q*: How common is it for a faculty/school to request/demand that good PhD applicants come in for an interview?
For someone who lives outside of the U.S. and wants to apply to do a doctorate in the U.S., should that person temporarily relocate to the U.S. (at the correct time and for the correct duration, of course) with the expectation that they'll have to interview in order to get in?
Do top schools vary from mid-ranked schools when it comes to interview frequency? Does it vary by faculty?<issue_comment>username_1: In my experience, it's typical for a US university to invite PhD applicants for a visit only *after* they have been accepted for admission. The purpose of this visit is primarily for the school to make a good impression on the student and try to convince the student to accept their offer of admission. An interview is not normally a required part of the application process.
However, one of the best things you can do to increase your chances of admission to a particular graduate school is having a professor there who wants you to work with them, and who will lobby the admissions committee on your behalf. In order to get this kind of connection with a professor, you will need to meet them and establish a rapport well in advance of applying to the university, and it's possible that the professor will want to meet you in person as part of that process.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It's rare to demand an interview. Having said that, I know that in some disciplines (parts of bio, for one), students are brought in BEFORE the final decisions to meet with faculty.
Upvotes: 2 <issue_comment>username_3: This is highly field-dependent. In some fields (e.g. psychology), almost all programs do at least a Skype interview, and the vast majority of candidates do on-site interviews (sometimes trips are paid for, sometimes they aren't). Other fields (e.g. math, econ) rarely do interviews, and instead invite students to come to a visit day once they are admitted (often paid for). If you don't know whether interviews are common in your field, I would suggest finding someone to talk about the interview process: your adviser(s), a current graduate student, or even other applicants are all valuable resources.
Almost every program that does interviews will offer some kind of remote interview for candidates who can't make it. However, I think that the majority of the time your chances of admission will be higher if you go to an on-site interview to meet people in person, though the size of that difference is debatable. Also remember that on-site interviews are valuable for you as well, they give you a chance to evaluate the program and the people there.
Upvotes: 2 |
2013/01/26 | 1,141 | 4,509 | <issue_start>username_0: One of my advisor's students asked me how frequently I have meetings with him. And since at that time we had very few meetings I replied:
>
> "We have a very few meetings and the advisor does not allocate time to me."
>
>
>
The advisor heard it and has started retaliating by reducing those few meetings to zero and complaining about me to other faculty, etc. Do you have suggestions about what I should do now?<issue_comment>username_1: I'm assuming you can't switch advisors. Have you tried having a heart-to-heart with your advisor about this matter? Failing that, maybe you have members of your committee who can help mediate?
Upvotes: 2 <issue_comment>username_2: First, **do not assume bad faith** and **go talk to your advisor**. There is an issue which you feel threatens the successful continuation of your PhD, the natural person to talk to about it is your advisor. In this case, it so happens that the problem involves him too, which means you have to be diplomatic about it, but he's still the right person to hash it out with. **Behave professionally**, do not accuse him of anything, just state the problem factually (*“I need more involvement from you in order to successfully overcome this problem”*), and see what he and you can come up with.
Not assuming bad faith at first is good advice in most professional situations. In this particular case, the elements you mention are:
* lack of time, which could be completely explained by other factors such as the advisor being swamped (don't get me wrong, it still needs to be fixed somehow, but it doesn't necessarily mean he's being an ass)
* rumor (*“complaining about me to other faculty*”), which might be just that
Now, if after trying in good faith to solve the issue with him (give it a few tries), the situation doesn't improve *and* it is hurting you *and* you think he is of bad faith: I listed several possible recourses in [this related answer](https://academia.stackexchange.com/a/7296/2700) (and another write-up [here](https://academia.stackexchange.com/a/5849/2700)). **But don't jump the gun**, because a lot of the options on this list will mean burning a bridge.
Upvotes: 5 <issue_comment>username_3: On my first day of PhD school, our department chair said
>
> Never say anything bad about your advisor; he or she will hear about it.
>
>
>
We all laughed, but it's invariably true--people talk, and word gets around. I've found that the best ways to repair a damaged relationship with your advisor are
1. Talk to him or her directly about the problem
2. Produce the best research you can with frequent updates (in person or via email)
Upvotes: 4 <issue_comment>username_4: I go with the other commenters on approaching the matter diplomatically and courteously as a means of getting the feedback and support you need—up to a point.
My experience of academics is that though often bright and accomplished, they can sometimes be a precious lot, flinching from the mildest bit of constructive criticism, often taking things personally. If I could turn the clock back, I would have seriously considered changing supervisors. I often think I got my PhD in spite of and not because of him. When you are just starting out in research, you need the feedback, support and encouragement. So try not to be stoic about it; look for potential avenues out of your problem: have a chat and get it in the open, or change supervisors if your differences are irreconcilable. It helps to be on the same wavelength, personality-wise.
Nobody is saying they are not human beings. But if you are being neglected, then the issue needs to be dealt with sooner rather than later. If it is a PhD you are doing, this may well seriously delay getting your research out in good time for your final viva.
Upvotes: 3 <issue_comment>username_5: I agree with the others: **Talk to your advisor. Do not assume bad faith. Be professional. Be an adult.**
But do not expect the situation to be resolved in a single talk. The response you describe is completely out of proportion to your offhand comment; "threatening" a student is *never* appropriate. It's impossible to tell — **and it's none of our business** — whether your advisor is actually being childish and abusive, or you are just perceiving her (justifiable) annoyance as an attack. But in either case, your student-advisor relationship is dysfunctional. Even if you can resolve this particular situation, you may be better off finding a new advisor.
Upvotes: 3 |
2013/01/27 | 787 | 3,196 | <issue_start>I have had a different set of professors every semester, and as batch sizes are pretty big (~80), with every professor dealing with 3-4 separate batches of equivalent size for barely 3-4 months at a time, it has been hard to develop one-to-one relationships with professors. In such a scenario,
* How do I decide whom to approach for a recommendation?
* How should I go about this to get a good recommendation, in this specific scenario?<issue_comment>username_1: Given the OP’s situation explained in his question and the comments that follow, namely that he has trouble getting recommendation letters written by his professors themselves, if I were him I would
>
>
> >
> > Find the professor who would write the recommendation letter himself.
> >
> >
> > Find the professor who taught the class in which I had a very good grade.
> >
> >
> > Find the professor who taught the class in which I was most interested.
> >
> >
> >
>
>
>
Usually, I need three recommendation letters. Now, I have three.
My personal experience is that I tend to have good grades when taking a class I am very interested in. If I had a good grade in a class I was interested in, the chances are the professor would remember me and be more willing to write the recommendation letter himself for me.
Upvotes: 3 <issue_comment>username_2: In addition to [scaahu's fine answer](https://academia.stackexchange.com/a/7509/3), even though I know it is difficult in some places, I would emphasize that students should begin to develop personal relationships with professors *long before* they need one to write a recommendation letter.
My experience is that even with mass lecture classes like you describe (and those aren't even that massive - but 3-4 mass lectures are quite a few per semester), if you take the time to meet with the professor, ask for extracurricular work (e.g. to be involved with projects the professor is conducting), or be very active and engaged within class, you can develop the relationship a professor needs to write a quality recommendation letter.
Upvotes: 3 <issue_comment>username_3: While both scaahu's answer and username_2's answer are on target, I will add my thoughts.
Make yourself known to the prof from whom you want the letter. You can do this in many ways. If you sit in the front or are particularly active in that class, the prof will naturally remember you (and naturally feel more comfortable that he/she knows enough about you to write a recommendation letter).
If you want a letter from someone whose class you took a while ago and you did not do anything to stand out, then you must take up some extra work now (see if you can help that prof with some projects so he/she can get to know you more).
For me, I don't expect students to develop a relationship with me long before they need a letter but I do expect them to make their abilities known to me. I've had students who are silent in my classes then come to me and ask for a reference. When I tell them 'sorry, I don't know enough about you' they get a little sad but they also understand quite quickly when I explain things to them.
So, if you want a letter, help the prof to help you.
Upvotes: 3 |
2013/01/27 | 2,972 | 12,244 | <issue_start>There's perhaps a better title for this question, but I can't immediately think of one - suggestions for amendment welcome.
I'd like to clarify the purpose of reading textbooks. It sounds obvious at first, but the answer is opaque to me.
I'm currently reading a textbook describing ~35 problem-solving and improvement methods, as part of [this course](http://www3.open.ac.uk/study/postgraduate/course/t889.htm). It is densely packed with definitions, ideas, procedures etc. I'm highlighting as I go through and occasionally making notes in the margins.
When I finish reading, I barely remember what I've just read, let alone what I've learned. If I was asked to describe the characteristics of a technique I've just read about, I would struggle to put forward a coherent and strong answer.
Even if I condense my notes and read through them, the problem remains - there are too many 'facts' to learn, remember and regurgitate.
This leads me to ask what the purpose of reading textbooks is. Is it to learn and understand 'facts' *and* be able to remember them? Or is it to learn and understand? That is, it doesn't necessarily matter if you can't remember, so long as you can understand ideas when you revisit them and can argue them in your work.
For clarity, I'm a distance-learning student with The Open University, so the textbook I'm referring to isn't a *traditional* textbook, but one that is perhaps closely identified by F'x as a *coursework book*. These books are used in place of lectures, and so are meant to be read in a linear fashion on a week-by-week basis.<issue_comment>username_1: You should be getting several things out of this on the first pass
* That there are *many* variations on this theme
* That the problem has odd corners where a specialized approach is *much* better than a general approach
* You should recall some of the more important ways of categorizing the problem in order to select an approach.
* You should probably remember a couple of the most general methods *and* their limitations.
On subsequent reads--and you won't master a complicated field without going over it several times--you should have a better idea of what to be looking for and should start concentrating on either the kinds of problems you expect to deal with (if you know and/or are doing independent research) or the kinds of problems that your instructor indicates will be important.
Upvotes: 3 <issue_comment>username_2: It might be helpful to change the question slightly, into
>
> What is the purpose of writing a textbook?
>
>
>
Usually a textbook is written to lay out a fairly well codified body of knowledge about a topic. The "well codified" part is important: it's expected that this body of knowledge has some lasting value. The textbook is also written (hopefully) in a way that *teaches* this knowledge, as opposed to merely dumping it out.
To me it sounds like the book you're reading is of the second kind (a dump of facts). Such a textbook is better treated as a reference book, like a dictionary. No one reads a dictionary (unless they're trying to pass the GRE or win a spelling bee :)), but they will refer to it to get the meanings of words.
Similarly, with a book that describes 35 problem solving methods, maybe reading it cover to cover isn't the best strategy. Rather, you should focus on a few techniques (or even one) and try to understand that well. Then put the book away and revisit it from time to time.
So to answer your question:
>
> A textbook can be a collection of facts, but often it's more than
> that: it's a path through the facts that provides a structure with
> which to process the facts. The goal of reading (and learning) is to
> acquire both the facts AND the structure. The facts will be easier to
> remember if you have the structure in place, and the structure makes
> more sense with the facts as examples.
>
>
>
Upvotes: 5 <issue_comment>username_3: There are many many different types of textbooks, and they have very different goals. Off the top of my head, I can list the following three main kinds:
* **Coursework book**, used as reference material for learning a rather broad topic. You expect it to bring a general introduction of the techniques in one field, broad overview, enough to understand the challenges in the field, identify the most common solutions and be able to work them on your own. This will surely include many problem sets, with or without solutions. It is also typical for this type of book to “highlight” some of the content, which the author deem essential for the reader to learn.
* **“State of the art” book**. They can be very different in scope, content and style of presentation. They exist to give a summary of the extent of knowledge on a given topic. They are written for experts and wannabe-experts, so more attention is usually given to correctness than pedagogy. Such work is useful not only because of the text itself, but also because it usually offers a large number of references to seminal and important papers in the field, which offer you a good way to get started. As such, they're also useful to people who are already experts: they give good references for common knowledge (*“hey, I know it was established in the 1980’s that co-enzyme X accounts for a nontrivial part of this metabolic pathway, but I wonder who did that work… let's check”*).
* **Reference book**. In the most extreme case, it's like a dictionary: examples of such are the [Abramowitz and Stegun](http://en.wikipedia.org/wiki/Abramowitz_and_Stegun) or the [CRC Handbook](http://en.wikipedia.org/wiki/CRC_Handbook_of_Chemistry_and_Physics). Those are not usually called “textbooks”. This is not something you're supposed to read from A to Z, but rather open when you have need.
In the first two cases, **if the textbook includes problem sets (or exercises), you should do them. *For real***, without looking at the answers until you're finished. If you're stuck, give it some time, then come back. Don't give up. This is where you'll learn the most.
Upvotes: 4 <issue_comment>username_4: The answer to your question is that there is no point in "brute forcing" through a text book in this manner, unless you're cramming for an examination.
Do not read textbooks in this way, especially if you do not own them. If you borrow a textbook from the library, read it cover to cover, and remember nothing, that is a waste of time.
Good textbooks are worth owning, which implies that they will be in your possession for years. You can use them for reference, and study them over the years in piecemeal fashion as your wandering interest returns to the topics from time to time.
If you really want to absorb the material in your textbook, **you must do the chapter exercises**. You can give yourself a course by going through the book, or you can spread this over years.
Maybe the book is a real *tour de force* on the subject matter and requires a lot of commitment, such that if you put in the commitment, you become an authority on those problem-solving methods. Is that something you want for yourself, though?
The important thing to memorize from your textbooks is just enough of a summary of the ideas that when you encounter some idea in the world, you can remember which of your textbooks has something to say about that topic.
For instance, this book, let's call it Foobley and Bings, has 35 problem-solving methods. Can you remember enough about the gist of the methods so that when you see a problem, you can think "Aha! This problem has a general pattern which fits one of the problem-solving methods in Foobley and Bings." Even if you don't remember the details of the problem-solving method, this can be a big time saver, and the fact that you recognize the pattern shows that you have knowledge. (Even Foobley and Bings themselves may have to crack open their own book to solve that same problem, if they haven't touched the material in years. Maybe they wrote the book to "unload" it from their brains to "make room" for something else, while having something to refer to.)
Upvotes: 5 [selected_answer]<issue_comment>username_5: I read something about this issue a long time ago:
>
> When you read a book and then forget all the content, what remains is **Intelligence**.
>
>
>
Unfortunately, I do not remember the name of the person who wrote this ;-) Do not worry about forgetting details, something will settle in your mind. Besides, reading a book (whatever it is) is an exercise for your brain and makes you smarter over time. But I agree with the others: most of the time it is not wise to read a textbook from cover to cover.
Upvotes: 2 <issue_comment>username_6: To paraphrase what I was often told at university: a good higher education won't teach you everything you need to know, but it will teach you how best to find it. Even a passing familiarity with the literature on any given topic is a good start down that path.
Upvotes: 2 <issue_comment>username_7: When I read a textbook, I read through an entire chapter quickly, just getting an overview of what will be shown and then go back through the chapter section by section, doing exercises (if the book has them) and trying out each idea to make sure it fits in with what I already know.
I try to link it to something I already understand well, so that the knowledge "sticks". Drawings at this stage nearly always help me, especially if they're strange links; my mind seems to be better at remembering very odd things.
Once I'm done with a chapter, I revisit it about 20 minutes later, then an hour later, then a day, a week, and a month later. Once the month repetition is done, I tend to find that I can remember everything in that chapter.
Perhaps a little long winded, especially the repetition, but it has been said that repetition is the mother of learning.
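If you want to put this schedule into a calendar, here is a minimal Python sketch of the idea (the intervals are just the ones described above, with "a month" approximated as 30 days; adjust to taste):

```python
from datetime import datetime, timedelta

# Review intervals: 20 minutes, 1 hour, 1 day, 1 week, ~1 month.
INTERVALS = [
    timedelta(minutes=20),
    timedelta(hours=1),
    timedelta(days=1),
    timedelta(weeks=1),
    timedelta(days=30),  # "a month" approximated as 30 days
]

def review_times(finished_at):
    """Return the follow-up review times for a chapter finished at finished_at."""
    return [finished_at + delta for delta in INTERVALS]

for t in review_times(datetime.now()):
    print(t.strftime("%Y-%m-%d %H:%M"))
```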
[Scott H Young](http://www.scotthyoung.com/blog/ "Scott H Young") has some excellent resources on how to study textbooks. He seems to be very, very good at learning. His free chapter of "Learn more, study less" has some excellent tips in it.
Upvotes: 2 <issue_comment>username_8: Reading a textbook is reading for academic purposes. In many ways this is very different from reading for leisure.
Unfortunately, they don't teach you that in your first days at university, thus many students still try to read ALL books on a reading list from A to Z - and fail.
What really differentiates academic from leisure reading is that you have four phases:
1. Prepare what you will read - What questions should the textbook answer?
2. Academic Reading itself - Read only the text that may answer your questions. Read it, mark elements, take notes, read again until you get the gist.
3. Post-processing of what you read - Did the text answer your question?
4. Application of what you read - Academic reading is the basis for academic writing, so archive your notes, tag them, classify them.
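For example, before reading a chapter on, say, regression analysis, your phase-1 questions might be: What problem does this method solve? What assumptions does it rest on? When does it break down? (These are made-up illustrations; substitute questions from your own topic.)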
Upvotes: 3 <issue_comment>username_9: I follow this cycle. YMMV. For me, if I am able to solve all the exercises, then I consider the material learned.
1. First I start by reading the Wikipedia page [for ex. Process Synchronization]
2. Then I try to think about it in practical scenarios
3. Then I read the course book suggested by my university, underlining important points
4. I start solving exercises. Usually I will have answer manuals too, so first I just check whether I am solving them correctly or not.
5. If I am not able to solve a question, then I mark down the words specified in/related to the question & read up on them all.
6. Try solving the question again.
7. Repeat until you get the idea; it doesn't actually matter if you come up with a wrong answer, but the approach should be right.
8. Read reference books on the same topics & underline new points found.
9. Prepare short notes of underlined points & important keywords.
10. Try to write everything in your own words.
Upvotes: 2 <issue_comment>username_10: This is the wrong way around. To *understand* you need to have the (relevant, important) *facts* at hand. So "knowing facts" comes before (and is a lesser ability than) "understanding". Also, "understanding" includes weeding out the irrelevant facts.
Upvotes: 1 |
2013/01/28 | 1,056 | 4,332 | <issue_start>username_0: I am about to graduate with a B.S. in mathematics and will (hopefully) be attending a PhD program somewhere starting in the fall.
What is one supposed to do the summer between? I don't have any money, so I can't "travel." I would love to do another REU, but I don't think they generally admit graduates.<issue_comment>username_1: As someone who jumped keenly straight from my undergraduate into graduate school (we could start at any time), my advice is to rest. Even if you cannot travel, spend time with your parents or grandparents or anyone who will give you a room to sleep in. Maybe help out at a local charity. Clear your mind. Read some novels. Go hiking.
This could be the last extended break of your life (before retirement). **Enjoy it.**
Upvotes: 5 <issue_comment>username_2: I agree completely with Dave on his answer. There is one instance in which I think it is worthwhile to do some preparation, and that is if you are doing a particular grad program and you don't have a good foundation in one of the fundamental areas relating to it. Let's say you are going to do a Master's in Applied Mathematics and you are not good at dealing with differential equations. If that's the case then put in some time now and sort it out, rather than later when you will be drowning in coursework, exams, projects and all the rest.
Upvotes: 2 <issue_comment>username_3: Your mention of REU makes me think you are in the US, and you say you don't have any money. My suggestion would be to get a job. You will come out anywhere between 5-10k ahead. US stipends tend to be low. Not having credit card debt at the outset of grad school is really nice. A small amount of savings will provide a large percentage increase in your monthly budget. Dipping into your savings for 1k extra per year will give you something like a 10 percent increase in your monthly budget. Having savings/reduced debt might be the difference between having to or not having to take a non-academic job. It might also allow you to afford a laptop, self-fund a conference trip, or buy some much needed reference books.
Upvotes: 3 <issue_comment>username_4: I totally agree: **Take a break and enjoy it!**
I changed from my master's to doing my PhD over a weekend. The last exam was on Thursday, the first day on Monday.
I regret it. I also moved to a new country during that weekend, so this was a very stressful time, what with finishing the thesis, a breakup and all. So, compared to that, the first couple of days in my new job were relaxing, even though this was and still is a really hard time. But, with slightly less adrenaline powering me, I immediately became ill. Not a good first impression.
The beginning of a PhD is not a walk in the park, especially if you change topics as I did. You want to be fit for this time.
In retrospect I should have taken a little holiday, even if it meant taking up a small loan. Doing my PhD I earn enough to be able to pay something back. A simple relaxing holiday does not require a lot of money. You need a place to sleep, some food and maybe a couple of beers. Try to empty your ToDo-list as much as possible, or it will haunt you in the years to come.
Using this time for preparation is probably a bad idea. You will have time to get to know your topic. If you start relaxed and undistracted (ToDo-list), then you will have a great start.
Good luck!
Upvotes: 1 <issue_comment>username_5: You might want to consider being a counselor at a math camp for high school (or even middle school) students. This is a good job, albeit a fairly relaxed one, as these programs often aspire to achieve the dynamic of a big, happy family that really loves math. You'll get good practice teaching without facing the administrative issues that one might encounter as a TA. There is something wonderful about seeing students at this age understand a concept for the first time; everything is new to them, and the excitement is infectious. Finally, at the risk of sounding overly focused on professional development, experience like this looks great on a CV. Some programs you could consider include MathPath (program for middle school students, rotating location), PROMYS (program at Boston University, focused on number theory), SIMUW (program at the University of Washington), and the Ross Program at OSU.
Upvotes: 2 |
2013/01/28 | 7,298 | 31,207 | <issue_start>When you do a research presentation, what focus do you usually take?
Some professors tell me to make the slides as self-explanatory as possible, and I quote:
>
> Someone should be able to understand your slides without you being there
>
>
>
To me, this approach seems counterintuitive to the principle of a "talk". After all, you have already written a paper that meets that objective.
Other people, for example in things like TED talks or (please bear with me) presentations by Apple, have very bare bones slides, where they only focus on transmitting the main message of the talk.
What is your take: should the presentation be made as didactic as possible, or just a cold transference of information?<issue_comment>username_1: Matt Might, a rather young professor, has an interesting style, encompassing the minimalistic approach. Have a peek at this video: <http://www.youtube.com/watch?v=HaPsYmOmgcI>
He also provides some useful guidelines for preparing a presentation: one of the most important is considering your audience: <http://matt.might.net/articles/academic-presentation-tips/>.
It is important to engage your audience, not necessarily to tell them every piece of information, and, in a way, advertise your work so that they will read your paper.
If you are aiming to get feedback, then you need to focus your story on what you want to get feedback on.
If you are teaching, you will need either more details in your slides or accompanying notes. Perhaps in this case, you might want people to be able to understand your slides without being present. But for regular scientific presentations, I would not aim to make the slides all encompassing. That's what the paper is for.
Upvotes: 5 <issue_comment>username_2: I used to take the point of view you refer to, but I don't believe in it any more. In my current way of thinking, slides for a talk are a dynamic accompaniment to the story that you're telling - they're visual aids for what you're saying.
Unlike TED talks or the Apple talks, a technical presentation necessarily has more content on slides, because even a visual aid needs to lay out notation, formal statements, diagrams and so on. But I don't think it's necessary to make the slides completely stand alone. Make the slides relatively clear and uncluttered, and make sure they flow along with your story, and you should be fine.
Upvotes: 5 <issue_comment>username_3: I would say the two most important points are to make slides you are comfortable with and not to limit your oral presentation to reading your slides.
Furthermore, if you are presenting a research paper, i.e., where more written material is available to the audience, then the objective is usually to make people want to read your paper, instead of explaining the entire paper in 20 minutes.
Some people prefer to have full slides, arguing that when members of the audience do not understand English very well, it can help them to have both the oral presentation and the slides, especially when the speaker does not speak perfect English. It is also helpful for members of the audience who got distracted at some point, and who can quickly read where the speaker is. Other people prefer minimal slides, arguing that having both the full text and the oral presentation might confuse the audience. In particular, whenever a slide is displayed, the audience tends to read it immediately, and during the reading, to be less receptive to any spoken words.
In other words, the only "bad" presentation would be to have full slides and to limit your presentation to reading them, because then you become basically useless. However, you can have long slides, as long as you consider them an aid for the audience who haven't followed what you said (for whatever reason), and not your script to read. You can also have minimal slides, containing only the key points. In the end, you need to be comfortable with your slides, and to give a presentation like one you would like to attend.
Upvotes: 7 [selected_answer]<issue_comment>username_4: A slide being self-explanatory? Why would you be presenting then? What's the purpose of YOU being there?
IMHO the slides should enhance your presentation, not be the presentation. *YOU* are the presenter, and the slides should help you convey your message better. Having self-explanatory slides takes away the attention from you, which is a nonstarter for a good presentation. At any given time during your presentation you should aim to have sufficient material on the slide (sometimes just a picture or a formula, or at times a couple of bullet points, etc.) to help you convey your message.
Upvotes: 5 <issue_comment>username_5: Good question, and actually the source of much debate when preparing or discussing a presentation...
As [Charles](https://academia.stackexchange.com/a/7526/495) mentioned, you should feel comfortable with your own slides. Different people have different presenting styles, and will rely on different types of slides. It also depends, to a certain extent, on what you are presenting, i.e. if it actually makes sense or not to use pictures instead of text.
I usually make more-or-less self-supporting slides. I put statements in full sentences, equations, and the odd figure. I even have statements in there that I will repeat almost verbatim to the audience. This is a huge no-no for many people, but it all depends on the delivery: If you repeat the statement without reading it, if it just flows with the rest of what you're saying, then there is, in my opinion, no shame in that.
I have two personal reasons for making my slides self-contained. The first is that the slides contain the points/statements that I absolutely need to make, i.e. the stuff that I don't want to forget because I got distracted by a question or some detail.
The second reason is that usually the slides are all that's left after you've given the talk. Unlike TED talks, which are available as videos, most conferences will only put your slides online. If your slides are just a collection of images and keywords, they will be of very little use to anybody who goes over them without you in the foreground.
Upvotes: 4 <issue_comment>username_6: From my point of view, a *wall of text* in a slide that you're just going to read is boring.
* There is no added value in reading it unless your audience is under 6.
Most people read faster than you can speak, which means they are ahead of you and don't care about what you're saying.
* It gives an impression that you don't know about your subject because it doesn't give you opportunities to elaborate.
* It keeps your eyes attached to the wall where you should be facing your audience and looking for eye contact.
* It also removes everything that is enjoyable in a natural speech, like rhetorical questions, pauses, small jokes, suspense, etc.
The best (in fact the only, but there may be others) resource I know of for making presentations that avoids this concern is the book by Garr Reynolds: *[Presentation Zen](http://rads.stackoverflow.com/amzn/click/0321811984)*.
Upvotes: 3 <issue_comment>username_7: The answer depends strongly on your audience. I agree that keeping the slides simple is nicer and more engaging. You're not tempted to just read your slides, but they serve as a visual anchor for your audience and help them follow the talk. If you're doing a sales presentation or similar, a Steve Jobs-style presentation can help you "wow" the audience. Also, you can use slides mostly to convey information that you can't with speech, e.g. graphs, screenshots, etc.
However, as I said, you have to consider your audience. In my field (particle physics), typically half of the audience is staring at their laptops during a talk. Some of them are following your slides there, some are doing something completely different. There are dozens of plots and numbers and technical details that you have to show, so the slides are typically pretty dense. In fact, the whole purpose of many talks is to "present plots", so your talking is auxiliary to the slides, not the other way around. You explain certain features of the images, gesticulating, and answering questions. The other thing is that our slides serve as documentation for the talks, so people expect to understand the gist of the talk by looking at the slides afterwards. As an (advanced) student, people would even give me their research talk slides instead of papers or books to learn from.
So, giving a "nice", "best-practice" talk in a work meeting of physicists, you'd probably confuse and disappoint them, if they are listening at all. If you are in the humanities, you'd probably not use PowerPoint at all, or use just one or two slides with important quotes.
My tip is to look at what your colleagues are doing, and to start from there. It can never hurt to clean it up a bit, make it concise and legible, but at the same time try not to alienate your audience.
Upvotes: 3 <issue_comment>username_8: There are valid arguments for both sides. What you need to decide is which arguments make sense to your situation:
* were you asked to make your presentation slides available after your presentation? Is it expected? In that case, you should probably add more information to your slides than bare bones (somebody who looks at your slides 6 months after you gave the presentation should find complete coherent ideas inside).
* are you invited as a guest speaker? that implies that people are more interested in *you* and what *you* have to say than your slides (keep your slides minimalistic)
* are your slides going to be used as documentation later? Are they going to be reference lists? That gives you two options: use minimalist slides with an accompanying document (text / images / movie / whatever) or use an exhaustive presentation.
For example, we are often asked to "prepare some slides" in the absence of a presentation (in my current workplace). That means making exhaustive slides, with explicit exposition of ideas and as much context as possible.
We also have internal presentations (organized as one hour seminars), presenting general aspects of new technologies, summaries of conferences, the ideas behind some of our projects etc., to our colleagues. Those presentations are made to capture and hold attention and contain images, sometimes a joke or two and so on.
I guess the most relevant questions here are:
* what is the scope of your presentation?
* who is your audience?
Upvotes: 2 <issue_comment>username_9: I like to play devil's advocate… while I agree in general with the other answers that it's not a good idea to put too much information on slides, I will list **a few of the reasons why *sometimes*, you might want to have self-explanatory (or at least, rather dense) slides**:
* Meeting of a technical nature, with some participants absent but who will be able to read the slides afterward. In such meetings, the slides serve not only as a support for your oral presentation, but also as a **written reference for what was said in the meeting**. As such, they will be consulted later, both by persons who missed the meeting and some who were there.
In such a case, ideally you would make two different contributions (two versions of your presentation, or a presentation and a “technical note”). However, that's more work, and a good compromise can be to simply have self-explanatory slides.
* Language issues: **if you fear not being understood by everyone**, either because their language may not be strong enough, or yours may not. I advise this sometimes for students who do not feel sure enough.
* Language issues, take two: having self-explanatory slides allows you to make a **dual-language presentation, with oral in language A and slides in language B**. I often do that myself, working in a French-speaking country where there are a number of students and post-docs who don't master French. For national seminars/conferences, it is sometimes considered more polite (especially by senior professors) to present in French, yet having slides content in English helps those who do not speak French follow your presentation.
* If you feel you may get lost, and want to be sure to have a backup under your eyes. I see it as a **last resort**, and prefer in that case to have **a few keywords per slide** displayed on the presenter screen (maybe your laptop).
Upvotes: 4 <issue_comment>username_10: I would say the slides should be **as concise as possible**. Absolutely no formulas or sentences that go more than 2/3 of the line width. If you can put one picture and spend one minute talking about it, it makes a much better slide than one with the points of what you are saying written there.
I have reasons against text on slides. The obvious exception is presentations on things that are textual in nature, for example programming languages.
* People read the text while you are speaking. If you are going to *say* that, why make them *read* it too?
* **Text is distracting**. Text and speech are of the same form in our minds. You may listen to music and read a book, you may see a picture and listen to a talk, but **you can't read and listen to a speech at the same time**. You may have had noticed this, if you often found yourself forgetting what the speaker was saying while you were reading his slides.
* Text usually gets long. That means, unless you have a bullet point like "- Scalability", anything else you write there becomes a whole at least 10 word sentence or phrase. This makes the effect of the previous points stronger.
* Slides do not replace notes/books. Many teachers do this! They make the slide as if it's a summary of the book they are teaching and students study the slides and pass the exam. That's horrible. There are far better formats for summarizing a book than using slides. That's worse than watching a 100 hour documentary over 10-minute youtube pieces.
* Regarding formulas, they are boring and no one is going to pay attention to them anyway. It may be the most important thing in your work, but again, no one cares. The only exception is during a university course, where the professor may want to explain in details how the formula is derived. If not, again simply presenting it is quite useless.
It's not just that full slides are terrible; slides were not created for that either. If you have ever tried to talk without slides, you will understand me better. Traditionally, we would use blackboards (or whiteboards!). If during your talk you felt like something would be better drawn, you would use the blackboard. Slides were created so you wouldn't have to waste time during the talk drawing a possibly elaborate image. Or, like I said before, in the case of presentations on all things text, you wouldn't want to write a 4~5 line piece of code by hand if you could just type and format it with a computer.
In summary, I would like to emphasize again that **slides assist the speech, they do not carry it**. They are there to *help* you (as the speaker) demonstrate what you are trying to say, rather than *replace* you by being completely self-sufficient.
A final note: some amount of text is usually inevitable. My point is to try to keep it to the minimum possible.
Upvotes: 3 <issue_comment>username_11: To paraphrase the Bard: "The talk's the thing".
If you've already written (and published) a paper about the talk contents, then your aim for the talk is to get across the information to the people in the room.
So, think about what slides will facilitate your oral delivery. Clearly, this will depend on the audience. Some areas of study appear to require 1,000 words per slide; others get away with zero.
[Larry Lessig](http://www.ted.com/talks/larry_lessig_says_the_law_is_strangling_creativity.html) has a very minimalistic approach to slides that works very well.
Upvotes: 2 <issue_comment>username_12: >
> When you do a research presentation, what focus do you usually take?
>
>
>
Talks are advertisements for papers and (more generally) for research agendas. They are *not* substitutes for papers. **The primary aim of a research talk is to provide the audience with the *intuition and motivation* to get involved in the described research, at a minimum by reading the paper.** Technical detail should be kept to the minimum necessary to provide that intuition and motivation. (How much detail is actually necessary depends on your audience. If you give too few details and focus entirely on intuition, your technical audience won't be motivated; on the other hand, if you give too many details, you'll obscure the intuition. But since almost everyone errs on the side of giving *way* too many details, it's better to aim for too few.)
**Talks are not papers; they're performances.**
>
> Someone should be able to understand your slides without you being there
>
>
>
I strongly disagree. Slides are an augment to the talk, not a substitute for it. Again, the point of the slides is to help provide motivation and intuition. Text should be kept to a minimum. Technical details should be kept to a minimum. There should be lots of pretty pictures that provide intuition. It's perfectly fine to include complex formulas or charts or graphs *as pretty pictures*, but don't expect the audience to absorb the fine details. It's also fine to have complex pretty pictures that you can (*literally!*) point to during the talk, to keep the audience engaged in your story. But the slides shouldn't *distract* from your presentation, by (for example) giving the audience something to read instead of listening.
**Slides are not talks or papers; they're props.**
Upvotes: 5 <issue_comment>username_13: Slides should be
1. **minimal**
2. **goal oriented**
3. **not full of text**
4. **Graphics and Visualizations** are highly appreciated
The key in preparing slides is to *know your message* and to try to approach the audience without burdening them with slides full of text.
Upvotes: 3 <issue_comment>username_14: Slides in no way have to be "self explanatory". There's nothing worse than punctuating an energetic presentation with a slide that takes more than 10 seconds to understand. Full sentences on a slide force the viewer to read them while you are saying them (have you ever watched an English movie with subtitles?). This is pointless and can be distracting.
Slides are there to enhance the talk by adding info that is best presented visually. Graphs and pictures are the most important examples of this.
Unlike what others have said, I believe you should not incorporate what you are saying into text on a slide. Yes, it might help people with language barriers... you could present the slide in multiple languages, and have someone doing sign language as well. But back in the real world, the objective should be:
Presenting the information, as clearly as possible, to the "main" audience, while captivating them throughout.
Once you accept this objective, you can focus on the presentation itself, not on how your slides will hold up on their own or in other languages, etc.
Learning to give a lecture is best done through experience. It is important to actually pay attention to your audience. You have to give new information time to settle in, which may not be natural because you (the presenter) already know the information. You will begin to get a feel for your audience and find a good tempo.
If you want something didactic, it should be put together separately from the talk and have its own clear objectives in mind.
Upvotes: 2 <issue_comment>username_15: * **Less Wordy & Short** Statements / Points on Slides - **Concise, Brief, Clear**
* **Crisp well-defined visual representations**
+ *Diagrams / Charts - they can eliminate the need for many slides & needed text* - as they say, a picture is worth a thousand words
+ Even better if you have the ability to **create InfoGraphics** or similar visual representation
* Yes, for *when you are not around to talk about the slides*.. Some **presentation notes below the Slides could possibly add a little fencing or meat** to the *brevity of your concise slides*
* For academic paper presentations, you can **tune up or down the level of detail/depth & breadth** based on who the **audience is and what/how much you want to expose them to**, based on your audience psychology. It's a fine judgement call
* Again, I cannot stress enough on great **VISUAL representations** that **consolidate CLEANLY** what *could have taken many slides of text / points* - A great diagram will take effort to build but pay off many times over:
+ It also **forces you** to **clearly organize, sanitize and align your matter**.
+ It's possible for different parts of your STORY to not align or flow well when spread over many slides, but **a diagram will force you to refine it** or **will just look wrong** or **become a disaster**
PS: These are some quick thoughts after a long night flight that can be refined on another fly by.
Upvotes: 2 <issue_comment>username_16: It depends. If you are going to present the slides yourself, you can keep them minimal and still as self-explanatory as they can be, because since you are presenting them, you know them inside out and can explain during the presentation.
If you are going to share the slides with people who could not attend your session, or if they are for distribution only, then they should obviously be self-explanatory.
Upvotes: 1 <issue_comment>username_17: I heartily vote for **(mostly) self-explanatory slides**.
One conference I attend usually has eight parallel tracks. I frequently am interested in different talks held at the same time, attend one of them and afterwards read through the slides of the others. And it is extremely frustrating to then get slides that are unintelligible on their own.
Yes, I understand that I am supposed to just read the paper in this case. To which I reply:
* Often the paper is not yet available (or even written) when the authors present work in progress. After all, that is what a conference is there for.
* Reading a few slides is a lot faster than even skimming a full paper. I will usually decide based on the slides and the abstract whether investing the time to read the full paper is worthwhile.
In addition, suppose that three months after the presentation, you get into a conversation about the topic you presented on. With (mostly) self-explanatory slides, you can just send the other guy the presentation for a first idea and then follow up if he is interested (related to the second bullet point above). With a minimal presentation, the best you can do is to recreate the entire verbal talk... *if* you still remember what exactly you said back then, since a self-explanatory presentation also serves as a reminder for the author himself.
Finally, preparing a (mostly) self-explanatory presentation also forces me to think beforehand about what I am going to say, how I am going to structure my talk and allocate my time, and it helps me not to forget about important details.
And yes, I do understand that slides will never be completely self-explanatory, which is why I called them "(mostly)" self-explanatory. On the other hand, neither will the published article be - a lot of stuff is not documented even in the best published articles. I believe that (mostly) self-explanatory presentations yield a good balance between communicating a rough outline and not going to the whole trouble of writing resp. reading a full article.
Upvotes: 4 <issue_comment>username_18: I remember one of the most important things from presentation skills training: the normal human brain is capable of remembering about 4 things. This means having simple, goal-oriented slides with approximately 3 strategically selected points. Just two is a waste of energy, and four can already be too much for some people.
There is a lot of info about this topic. Check this article: [The Limits of Memory](http://www.dailygalaxy.com/my_weblog/2008/04/the-limits-of-m.html)
And drawings, pictures, schematics... a picture is worth a thousand words. This doesn't apply to super-heavy-duty clip art and similar stuff.
There is no simple guideline for "how to make the perfect presentation". There can't be. Every presentation is different, even on the same topic.
Upvotes: 2 <issue_comment>username_19: Presentation style is something that I have made a conscious decision to change. When I started doing presentations at the start of my PhD I used to put everything on the slide so that I didn't forget anything. Every graph had about 5 bullet points of text explaining what it meant. Although this is an effective survival mechanism for those new to presenting, it hardly makes the most engaging talk.
Soon after I started my postdoc I attended a conference, and on the way home I found I couldn't remember the message from a single talk, out of the many I attended - almost all of which used the same style with lots of text and information on every slide. At the same time, a senior colleague of mine always gives memorable presentations - partly due to his lively personality but also because he concentrates on a single important idea on each slide, and no more.
As a postdoc I now have quite a lot more experience of giving talks, and so I was determined to experiment and learn how to give better presentations which engage with the audience. Before, I usually had very few questions, which is a sign that either the talk was boring or nobody understood what you were trying to say.
What follows are my general guidelines for conference presentations to specialists in the field.
Rule number 1 is avoiding information overload. Stick rigorously to trying to present no more than three ideas in a talk. It is important to highlight those important points/results and often tell the same idea in different ways if possible to give it a chance to sink in. The objective here is to convince a largely captive audience that your work is relevant to them. If you achieve this then they will invariably follow it up by looking at your relevant papers and hopefully citing them. Also important here is to focus on the executive summary. The full details are available in the paper, so focus on the highlights. Researchers will invariably give you the benefit of the doubt, so don't waste time asserting your cleverness by putting up complicated equations or algorithms - you will just lose your audience.
Avoid putting more than a single equation/figure/phrase on a slide. Some ideas are complicated and can take a while to explain, but what you want is for the audience to process the minimal information on the slide and focus their attention back on you and what you are saying.
Presentations which are minimal are much harder to write, as you have to know what you are going to say with minimal prompting, so practice is a necessity. But since adopting this style, I have had a lot more engagement with the audience and questions about the work.
Upvotes: 2 <issue_comment>username_20: I use **no slides** at all when I need to present a piece of work in close detail.
Many answers above have discussed this question deeply, so I don't need to repeat them. In summary, most answers stand on the minimal-slides side, and the [answer of username_9](https://academia.stackexchange.com/a/7536/14341) is a good explanation of when to use self-explanatory slides.
The purpose of a slide presentation is to advertise your work. For example, at a conference, most of your audience don't know what you are doing. (Yes, at a physics conference you talk about physics, but most physicists in the room may not work in your field.) But what if all of your audience work in your field, know all the basic concepts, and need you to give a presentation in greater detail? Or, in particular, what if your work is full of math and mathematical transformations/calculations which need a close look? Or when you present to your colleagues, who need to ask you questions in the middle of the talk, not after the talk as at a conference?
When you use slides, you need to condense sentences and break ideas into bullets and slides. Although this is good for getting the ideas across, I don't think it's good for getting them across fluently. If you have ever seen a long mathematical proof presented on slides, you will know how anxious it makes the audience. Every time an audience member raises a question, the presenter needs to go back (and forth) several slides. That will disrupt your thinking. Instead, I just write all I need to say in a document and present that.
I would rather not use slides than use them for the wrong purpose. Their purpose is on the presenting side, not the explaining side. For explaining, I make a document (.doc, .pdf) to present.
Upvotes: 0 <issue_comment>username_21: Most answers focus too much on the merits of text on a slide. More focus needs to be given to other slide elements.
Slides with overly complex or too many figures are far more problematic than slides with text. At least with text, I can understand it quickly if I want. Figures on the other hand are often exactly what you would find in a journal article, making them inappropriate for a talk.
Most people need to [simplify their slides](https://brushingupscience.wordpress.com/2016/12/16/simplify-your-slides/), or at least build up their figures piece by piece instead of presenting everything at once. For example:
[](https://i.stack.imgur.com/J2YCa.gif)
Upvotes: 1 <issue_comment>username_22: Minimal slides are no slides. That is often fine, but the question is what to put on slides conditional on having decided to use slides.
This is really a pseudo-dilemma. Slides should not be a substitute for your talk. They should not be projected handouts. They should not be summaries of your paper. They should not be your notes to read off. They should not include lots of distracting information and walls of text.
But that does not mean they cannot be self-contained. Let's look closer what that means. That slides are self-contained does not mean that they have "all the information," whatever that is supposed to mean. Your talk will not include all the information from your paper (I hope), so your slides need not do so either. That slides are self-contained means that you can understand what they say without hearing the speaker. That's a good thing, we often do not hear everything a speaker says. Attention is scarce, and it is easy to get distracted for a moment. But for this to work, your slides will need to contain much less information than your presentation. But that little information they do contain should be self-contained. Here are the first four slides from an example presentation:
[](https://i.stack.imgur.com/nJqUv.png)
It's not a particularly good set of slides, but the slides are clearly self-contained. Hopefully, Holmes can provide more information during his talk than one can find on the slides. They still serve a purpose. If someone gets lost while listening, it should be absolutely clear where Holmes is in the talk and easy to find a way back in.
Upvotes: 2 |
2013/01/28 | 1,098 | 4,533 | <issue_start>username_0: If I have works that are almost submitted to journals, or are in journal review, is it appropriate to include these on my CV?<issue_comment>username_1: Don't put anything in your CV you cannot justify if asked. A CV is not just a list of your accomplishments, it's a list of material you can provide to a recruiting committee in order to help them make a decision.
In the same way that if you claim to have a given degree you should be ready to provide the corresponding credentials, if you claim to have a paper under review, you should be ready to provide the submission.
In other words, you can list in your CV your submitted work, but not the pieces of work that are "almost submitted", unless you're ready to provide the draft if asked (the question is: if the draft is not submitted, that probably means it's not ready, therefore can you provide it?). If it's possible with the journal/conference policy, you can even put your submitted version on a pre-print site, such as [arXiv](http://www.arxiv.org).
Ongoing work can go into the "research statement" part of your CV, where you can explain the different ideas you're working on, and even give the key concepts.
Upvotes: 6 <issue_comment>username_2: I will usually list things that are on the arxiv. They can be viewed as tech reports, so I don't see the harm in doing so.
Where it gets tricky is if (for example) you submit to a double blind conference. In such a case adding the paper to your CV might be viewed as a breach of the process.
But in general my view is that if you have the paper posted on your web page (and you should!) or on the arxiv, then it's perfectly fine to list it.
Upvotes: 4 <issue_comment>username_3: *Okay, it seems I have to play devil's advocate again… because my position on this is different from Charles' answer.*
My CV lists my scientific production separated between peer-reviewed articles, non-peer reviewed articles (I have none, but it could happen), invited conferences, oral conferences. As such, I would **definitely not put a non-published paper** among the “publications”, especially not among the peer-reviewed ones. In my field, it is rare to publish (in the sense of “make publicly available”) a manuscript before it is accepted (chemists don't use arXiV much, because most journals prohibit it), so I find it weird to list unpublished material in a CV.
So, because you didn't tell us your field, I would say **beware**:
* if your manuscript is **unpublished**, it's not a publication, **don't list it as such**
* if it's published (arXiv, your website, or elsewhere) while in review, clearly mark it as such (and don't list it as *peer-reviewed*)
I would say that the "in progress" manuscripts do not add much information anyway. The topic they cover is surely already covered by your research statement (or list of research interests), so why would a hirer care about whether you are writing this paper or that paper?
Upvotes: 3 <issue_comment>username_4: Another thing to think about is the rules of funding agencies and other people to whom you might submit these CV's. In German and EU funding applications, only *accepted* papers can be listed as part of an author's "publications" list. Work that is in review, no matter how far along the review process, cannot be listed until an acceptance notification has been given.
However, it's also not clear what stage of your career you're in. If you're applying, for instance, for a *post-doctoral* position, then it would probably make some sense to mention manuscripts under review. Normally, in such cases, the CV isn't going to a committee—usually it's just the advisor himself.
Upvotes: 2 <issue_comment>username_5: I have a Publications section on my CV with Peer Reviewed Publications, Submitted for Publication, Conference Publications and Presentations, Invited Lectures and Seminars, and Reports.
For the manuscripts that have been submitted for publication I just put the authors, title, and I put (*submitted*) as the year.
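For example, a hypothetical entry (authors and title invented for illustration) would look like:

>
> Doe, J. and Roe, R. The frobnication of widgets under load. (*submitted*)
>
>
>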
I see people include the journal they submitted to, but I don't think that's appropriate since it unjustifiably uses the reputation of the journal to bolster your reputation. Anyone can submit anything to Nature or Science.
I also recently had a PhD applicant say they had submitted a manuscript to a relatively good journal. I asked for a copy, and the article was in no way suitable for the journal they submitted to.
I never put in-prep on my CV, since it's practically meaningless.
Upvotes: 3 |
2013/01/29 | 1,631 | 6,502 | <issue_start>Should I mention my Stack Overflow (and other Stack Exchange sites) reputation in my CV while applying for a post-graduate position?<issue_comment>username_1: If I were to hire a postdoc, PhD student, Master's student or a programmer for my project, and that position had a related component (let's say programming, GIS, maths, CS, Mathematica, etc.), and someone had a very good reputation on a site that I understood, I would definitely see it as a strong indicator. Of course, it's easy to check the type of questions that have been asked/answered and the calibre of the person: if someone develops a strategy to just get points by answering easy questions and asking general questions that are bound to get a lot of upvotes, that would not win a lot of brownie points. Nevertheless, I will never penalize anyone for it. It's active participation, which is always positive.
Upvotes: 4 <issue_comment>username_2: **No, you shouldn't. Not yet, anyway**

---
In general, I think it's perfectly fine to list that information on a CV for an academic position. Depending on the profile of the position itself, I would feature it more or less prominently. Say, if you apply for a scientific programming position (or a position with heavy coding), you could list it under a “Skills” section where you would say:
>
> C/C++/Fortran with a focus on shared-memory and distributed memory parallelism (OpenMP/MPI)
>
> Received formal training at XXX National Lab, taught parallelism course at University of YYY, involved in StackOverflow (username: zzz) on this topic.
>
>
>
(if the format allows it, like a PDF, consider including a hyperlink)
If the position is not one heavily involving code-writing, you could tone it down, or even list it in a “Hobbies” or “Personal” section. Many people like to list hiking or book reading or civil war reënactments, so why not list Stack Overflow!
**But…** in all cases, only do it if your account is of the *wow!* type. You don't have to be [<NAME>](https://stackoverflow.com/users/22656/jon-skeet) (it may take years of therapy to accept that, but that's the sad truth), but you don't want someone to look up your profile and think *“meh”*.
Upvotes: 7 [selected_answer]<issue_comment>username_3: I would say not to. As a hiring manager, I really care most about the relevant details, such as work experience and academic focus. Although SE/SO is pretty darn popular, it's also a website and, although we may disagree about this :-), not seen as a professional/peer-reviewed/authoritative/juried/etc. source. I would just see it as fluff and wonder why it was there. If you're going for a research position, keep in mind that these people haven't seen sunshine in months, much less a computer that doesn't have Matlab open on it.
I would, however, be interested if somebody explained to me (in the interview) what this whole thing was when I asked the "so what else are you into" question. As a hobby, this shows that you are a technologist first and that you make being a geek part of your life. I would take that into consideration for an academic or a professional position because it shows where your interests are.
In general, I would say that if you feel strongly about something, don't put it on your resume. Save it for the interview and make a good impression with it.
My 2cents.
EDIT: <NAME> made a good comment below. Although it should be obvious, it is worth pointing out that this post is made from an industry perspective. It is provided to frame the topic within the larger context of an interview; any interview. My experience has been that there is very little difference between my professional and academic interviews, ymmv - of course, and therefore I submit that the commentary is germane.
Re-reading my comments I realize that, yes, I was being a bit flippant for comedic effect. No offense intended.
Upvotes: 4 <issue_comment>username_4: It depends on the field. Unless you're sure that it's well-known and valued in your field, don't write it as a claim of competence — but you can write it as a hobby.
Let's not overestimate the value of Stack Exchange. Stack Overflow, by far the largest of these websites, has an Alexa ranking of 86. It's probably well-known among programmers, but probably not among others. Stackexchange.com in its entirety has a ranking of 469. That includes Stats, GIS, Maths, CS... `superuser.com` has a ranking of 1620, `serverfault.com` a ranking of 2159.
Most likely, an academic reading your application will not know the website you are mentioning. A CV should focus on the important parts. Any space wasted is harmful to your cause of grabbing their attention. Mentioning a high score on a website they haven't heard about is a waste of space. Therefore, I would not put it on your CV, unless you're sure it's going to impress them.
That being said, I've heard of people writing in their CV that they're among the top 200 in World of Warcraft. It's perfectly fine to write hobbies in the CV — but if you're trying to convince someone of your competence by citing your score on a website they're not familiar with, you might do more harm than good.
Upvotes: 3 <issue_comment>username_5: I can't think of a situation where it would be helpful to list SE reputation on your CV. Most people won't know what SE is, and so will either not care or think it's weird. If your reputation is not very high then it's also going to look bad. Finally in the one situation where it might help you (the person reading your CV is active on SE and you are a SE superstar), there's no point in listing it because the person will already know who you are. I don't need a CV to tell me that Qiaochu Yuan is a major contributor to MO.
Upvotes: 4 <issue_comment>username_6: I don't think it is a good idea because the value of a "SO rep point" is unfamiliar to most of your audience (and varies across sites). Still, SO provides a useful public record of thinking, learning, teaching, dialogue, and social skills.
One way to leverage this would be to point to examples of your teaching, learning, and problem solving, with perhaps a well-chosen example in the research and/or teaching statements. If your published results benefited from SO, a reference to an SO question could tell an interesting backstory behind a paper, and be a launching point for anyone who is curious about your other contributions.
Upvotes: 3 |
2013/01/29 | 2,032 | 8,451 | <issue_start>username_0: I do not understand what good it does them. A professor said it gives him the opportunity to read papers he would not read on his own. I am sure there is more to it, but I do not see what they gain by participating in the peer-review process. It takes time, it is not paid, and it is not even publicly acknowledged. Why do they do it?<issue_comment>username_1: I can think of at least five reasons why doing peer review benefits you:
* You get to read recent research results before everybody else.
* It gives you a good opportunity to think really critically about a potentially interesting paper.
* You can put it in your CV and it will show that you are a known expert in the fields of journals you review for.
* You want to stay on good terms with the editor, who may judge your submission to the journal.
* You want to get an editorial position with the journal, which is highly prestigious. For this one typically needs reviewing experience.
Upvotes: 5 <issue_comment>username_2: I think **academics are paid to perform peer review, in the same sense that they are paid to do research**. I don't have a boss telling me what to research and paying me when it's complete; rather, my university expects me to perform research that is judged significant by my peers. In the same way, my university expects me to perform peer review. In my annual reports to the university, I report my research outputs and I report the journals for which I have performed peer review.
You may argue that my continued employment and promotion depends more heavily on my research than on peer reviewing, but the same could be said when comparing any of my service or teaching duties.
The bottom line is, academia is a [gift economy](http://en.wikipedia.org/wiki/Gift_economy), and if you want to be part of the community, you're expected to do peer review.
Upvotes: 6 <issue_comment>username_3: username_1 gave a good list of “short term” answers, i.e. the reasons why one would accept a given review. Maybe I'll summarize the first two of them, because they are the ones that motivate me the most: **curiosity**.
Maybe curiosity killed the cat, and I'm sure it killed a bunch of scientists too, but for sure it is what makes most of us tick. Whenever I receive a request for review, even if I don't have much time for it, my first instinct is to read the abstract and think *“hmm, cool, how did they do it in detail?”*, *“I wonder if they thought about this and that”*, *“hey, I thought that was guaranteed not to work, how did they manage?”*, or even *“oh, I had never thought about that”*. In all cases, it makes me want to accept.
---
Also, there's a long-term component to it. Even though the commercial publication model is *deadly sick*, **peer-review is a very good part of academic publication** (and I mean “good” in the moral, ethical sense). On days when I am fed up with the system, I sometimes think peer-review is the *only* good part of academic publishing. So… **by submitting papers for review, you opt into this whole peer-review system**, and it becomes a moral duty to do your fair share of the reviewing work.
Don't get me wrong, you're not contractually obligated to do so. But if you send papers for review and never agree to review any, your colleagues (and the editor) will see you as a free-rider on the system, and will resent it. And I would too.
Upvotes: 4 <issue_comment>username_4: The other answers do a good job of laying out the practical benefits and the roles of curiosity and obligation. However, I think there's an additional psychological factor: being asked to review a paper shows that the editor values your expertise, and that feels good. This is a shallower reason, but I think it plays a substantial role in encouraging reviewers.
I can remember the first time an editor I didn't know personally asked me to serve as a referee. It was really exciting, and I thought "Wow, this famous person has actually heard of me and is interested in my evaluation." Of course it's not as exciting the hundredth time, but it still feels good to be a valued member of the research community, and I would be unhappy if the requests stopped coming.
Upvotes: 5 <issue_comment>username_5: In addition to the other excellent posts here, I find that reading a manuscript in order to write a review is different from reading a published article just to see what is in there. When you write a review, it forces you to actually think about the manuscript, about its internal logic, about possible weak points. After all, I have heard it said that "the job of reviewers is to kill bad papers and to make good papers even better", and to make a good paper even better, you first need to understand it and think about the subject matter in a way that not even the authors did.
I find that I learn a lot more from papers I review than from other papers I consume.
Upvotes: 3 <issue_comment>username_6: Nobody has mentioned quality yet. One reason I like to review papers is that I can encourage authors to make better papers.
It sucks to read badly written papers. By reviewing them, you can make the world a better place!
Upvotes: 4 <issue_comment>username_7: When I asked the same question, the answer was "control and public relations". While the second one is obvious, the first is a little more subtle and evil. Having someone else's paper in advance allows you to:
1. establish a "give and take" relationship with peers you want on your side. Anonymity is easy to break, if you want to and know what is going on in other researchers' offices.
2. slow down the publication of a peer by dragging out the review process or demanding additional science to be performed, especially if they are scooping you.
3. get a sniff of what's going on on someone else's plate, thus granting you a head start that might be useful if you want to attack the same field.
4. indirectly control the quality of a journal to reduce its score. In some universities, the current score is used to evaluate the paper production of a researcher to grant him funds. If you can have an effect on the overall quality of a journal, this will reduce the total score of a researcher's past effort, and give an edge to someone else to get more funds.
Upvotes: -1 <issue_comment>username_8: Just to state something explicitly that was part of all the previous answers: **Reviewing papers is part of being a good academic citizen.**
Compared to other jobs, academia is not something you *do*, it's a system you *enter*. It's a community, an ecosystem of sorts, that provides *benefits* for those who are part of it, at the cost of some *duties*.
These duties, in academia, usually consist of publishing, teaching, supervising students, organizing and attending talks and conferences, doing some outreach, and, yes, refereeing publications, research plans, and grant proposals.
Of course, as in most communities and ecosystems, there will always be *bad citizens* who enjoy the benefits without the duties, and if their numbers grow too large, they end up destroying it. Fortunately though, most of us see being part of this community as a privilege and actually enjoy the extra work (see the other comments above), so that risk is, in my opinion, relatively small.
Finally, if you need an analogy, think of this site: You can ask questions and post answers. People usually do both, and actually more of the latter. You yourself invest your time in answering questions and, as an implicit trade, can rely on others to answer when the question is yours.
Upvotes: 4 <issue_comment>username_9: This is an old question that has re-emerged, but I'll have a stab at a new answer...
For better or worse, publications are the 'currency' of modern academia. People look at how many papers you've published, and where you've published them, and make inferences about your professional status and calibre.
If peer review didn't happen, we would all effectively have a licence to print money. Everyone could have a new paper in Science or Nature every day - just write something down and submit it. Clearly, this would lead to rapid, catastrophic devaluation of the 'academic currency', to the detriment of everyone who had invested in it.
So, I suggest there is a quasi-economic imperative for academics to undertake peer review. It is in our interests to ensure that others are being held to the same standard that we have been held to.
Upvotes: 1 |
2013/01/29 | 405 | 1,529 | <issue_start>username_0: Are there any ways of getting to know about new PhD positions in CS other than just looking through university/research group websites? Maybe people from different branches can name some specialized mailing lists where PhD position announcements are common. I am particularly looking out for computer systems, mainly dealing with cache problems, scheduling problems, multicore architectures, etc.
Here are a couple of websites that I found:
* [EuroSys — European Job Openings in Systems](http://www.eurosys.org/jobs/#doctoral),
* [PhDportal.com](https://www.phdportal.com/) (formerly [www.phdportal.eu](http://www.phdportal.eu)),
* [Research Grants | International Scholarships](http://scholarships4phd.blogspot.com/),
* [Scholarship Positions](http://scholarship-positions.com/),
* ~~http://youngbrigades.com/~~ (site has new, unrelated owner; verified on 26.12.2020).
I would appreciate any help. Thanks!<issue_comment>username_1: If you know the field you want to study and have read some good papers, contact the authors and ask them what mailing lists are good for their field.
You might also state your motivation: they are generally good at knowing who has funding and may even be looking!
Upvotes: 4 [selected_answer]<issue_comment>username_2: Why is this post active again without any new answer or comment?
Anyway, in the past I found this page very useful; it has PhD/postdoc/faculty positions, CFPs, and so on:
<https://research.cs.wisc.edu/dbworld/browse.html>
Upvotes: 1 |
2013/01/29 | 1,203 | 4,962 | <issue_start>username_0: I have finished my thesis; it's been proofread by my advisor and myself, yet I have 24 hours to make last-minute changes to it. What should I be looking for? I will not make any substantial changes to the content, but what about the form? With such a limited amount of time, where should I focus my effort? Or, said another way: what's in your last-day checklist for a thesis?
*Major modifications to the original question. Thanks to F'x for the advice.*<issue_comment>username_1: Your goal is to present a viable thesis to your examiners, so perhaps there is a need to change your thinking about not making "substantial changes". I know this is a difficult call at this late stage, but if you discover a gap in your thesis, it is better to address it before submitting the thesis to your examiners, rather than having the examiners point it out to you. If the latter happens, you will have to substantially revise your thesis, and this will tax you emotionally, to say the least.
As for the checklist, I have the following suggestions:
1. Check that you have really built up your case for the research. Your examiners will not be convinced if you present a flimsy case. Ensure there is a strong reason why you conducted the research (i.e. define the gap in knowledge that you are addressing).
2. Check that you have actually answered your research questions. I am unsure in which field you are situated, but in sociology the answer is often not that clear-cut. However, you can still make a strong case for or against your research proposition.
3. If you have done statistical analysis, make sure you demonstrate that you have a good understanding of what you did (i.e. you understand the assumptions that underpin the technique, for example Pearson's r is for linear relationships).
4. Check that you have a section (in the concluding chapter) that spells out in black and white what contribution your thesis is making in your field. Often we just assume that the examiners will understand the contribution. We know our research so well (after doing it for 3 or so years) that the contribution is apparent to us, but it is a different story for the examiners.
5. Demonstrate critical thinking with a blend of personality. This is a bit controversial, but your thesis is a reflection of your interests, and a little bit of personality in your thesis will not hurt (only a little bit, though, as this is academic writing). In my case, I incorporated my experience as an immigrant to explain why I chose to study what I studied.
Remember, you will not get a poor result because of typos (though many typos will create a poor impression), so focus on the bigger issues if you can. All the best!
Upvotes: 2 <issue_comment>username_2: First, if you've already proofread it recently, a second pass will most likely not help. You won't see the typos and weird sentences anyway :) I'd advise focusing on specific short parts that can make a difference to the reading. It's also the right time to get someone else on board to give these specific items a second look (with fresh eyes).
Without further ado, I suggest you limit yourself to checking the following items:
### Text
The main check here is not really for typos (although be sure to fix those you will see), but rather for *clarity*.
* General introduction, general conclusion
* Introduction and conclusion of each chapter
* Summary/abstract, if one is included (sometimes it's written in 10 minutes in a haze, in which case it's worth extra checking in the end)
* Acknowledgments, if they're already present (some people only include them after the defense is over). Make sure you're not forgetting someone important, like your wife or your bonsai.
### Figures and figure captions
* Quality of the graphics
* Do the colors and symbols mentioned in the caption match the figure?
* If you intend to have black and white figures in print, are the figures understandable in black and white? Do the captions make sense for both versions (color and B&W)?
### Equations
Check your equations. Again. Typos there are typically hard to find.
### Numbers & tables
All tables, all inline numbers: make sure they include units, make sure the number of significant digits displayed is reasonable and consistent.
### Bibliography
Do not care too much about the formatting: if most of it is okay, no one will really complain about one or two missing page numbers or a lack of italics in one title. However:
* If references are hyperlinked (using DOI number), click on each to check that they match the right online paper
* If a paper is “in press” or “accepted for publication” or something else, check if it has been published since and update its status
*(The starting point for this was [my answer](https://academia.stackexchange.com/a/4760/2700) to [“Examining paper proofs”](https://academia.stackexchange.com/q/2951/2700), but it is now significantly different)*
Upvotes: 4 |
2013/01/29 | 596 | 2,475 | <issue_start>username_0: Over the past two years, I've been collaborating with a PhD student. He did experimental work, and I did modeling and data analysis based on his experiments. Now that my colleague is about to write up his PhD thesis, in which way can he ethically include the modeling and data analysis results in his thesis?
I don't need any of these results for a thesis of my own, and we are currently writing a paper on this together, so there are no worries on my side about misuse of these results.
There is the related question [Are overlapping dissertations ethically acceptable?](https://academia.stackexchange.com/questions/5920/are-overlapping-dissertations-ethically-acceptable), but I am asking more explicitly about how my colleague can present, in a good way, the results that are based more on my work.<issue_comment>username_1: **Don't worry, be happy (and be truthful)**. There's nothing wrong with including in one's thesis work that one didn't do oneself, as long as the delimitation between what the candidate did and what others did is clearly marked. And by that, I mean no lies, but also no half-truths.
Basically, the presentation will thus depend on the interaction between you two and his part in the analysis (which ranges from “nothing” to “he suggested ideas that I tried” to “he ran my code himself”). In the first case, he could say:
>
> As part of project X, I sent these results to Dr. <NAME> at Big U. for him to perform his widely acclaimed topological Bayesian half-filter analysis. This analysis revealed that …
>
>
>
There's nothing wrong with presenting results obtained by others from your work, as long as they shed light on the phenomenon you're studying. I once had a student whose published work was built upon by another group during his PhD, and he presented this at some length (and critiqued their extension) in his thesis. That's part of the whole story.
If the collaboration was closer, just make sure the thesis clearly indicates its nature and the contribution of everyone. Then, no fuss!
Upvotes: 5 [selected_answer]<issue_comment>username_2: F'x's answer is good. I would just add that your collaborator should check his institution's thesis guidelines. Mine had specific directions on how joint work should be included in the thesis, such as an extra paragraph explaining who did what part of the work.
Of course, your collaborator's advisor should also be in the loop.
Upvotes: 2 |
2013/01/29 | 818 | 3,526 | <issue_start>username_0: I have received an admission offer for a PhD at a prestigious US university. However I am also currently working in another lab outside the US with a potential to also receive PhD admission there.
I think the deadline to respond to the US school is April 15, but the professor politely asked me to give an answer within a reasonable amount of time. Also, my former supervisor (who has no stake in the matter) suggests not stringing the US school along, and trying to make a decision well before the deadline.
I feel it might be unethical to wait until the last minute, because I respect the professor and he might lose good potential candidates. On the other hand, I want to wait and see if there is a concrete possibility of pursuing a PhD at this other place.<issue_comment>username_1: **As soon as you are sure, or April 15, whichever is sooner.**
You have *no* ethical obligation to answer before the April 15 deadline, especially if you are waiting for an offer from another department.
On the other hand, it would be *nice* to answer earlier if you can. So it would be *nice* of you to contact your current department's admissions committee (either directly or through your lab director), tell them that you have another admission offer but would prefer them, and ask if they're likely to offer you admission. If they haven't decided, it would be *nice* of you to tell them about your April 15 deadline and ask them when they expect to make up their mind.
(I'm assuming that you prefer to stay in your current lab. If you'd rather accept the pending offer, even if you got an offer from your current lab, then what are you waiting for? If you're not sure, then what more information do you need?)
Upvotes: 5 [selected_answer]<issue_comment>username_2: In Europe there is no 15th of April deadline, and nobody cares about US deadlines (if anything, the top institutions want to put pressure on people to make decisions quickly so that they don't lose them; good PhD students are not that many, which is sad but true). There are many early cycles: e.g. Oxford and Cambridge give people offers before the application deadline for many US departments. And, let's say, if you get a funded offer in November and your supervisor is keen, you can arrange to start by Trinity term in April.
This can also be the case if the offer was made at an irregular time of the year, which is not that uncommon (e.g. in Sweden you can start a PhD at any time during the year, as soon as guaranteed funding is available for the entirety of the PhD).
In many places you will be interviewed and asked whether you will accept the offer or not, just to make sure you are not going to waste their time for long. You might say this is unethical behavior, and I agree, but it has happened to me a couple of times. You will be expected to make a decision within a reasonable amount of time, and what counts as reasonable is specific to the institution and how they run their business. Back in the day, I had offers that gave me less than a month, and ones that were open for a couple of months. If the position is funded and there is an expectation to start ASAP (especially if there is a company behind the project), you might be called in and given an ultimatum; I have seen offers being retracted.
Advice: talk with the institution that has given you the offer, explain the situation, and get a date by which you can reply. If it is close, you might be able to stretch it a bit by asking politely for a week or two more.
Upvotes: 2 |
2013/01/30 | 624 | 2,554 | <issue_start>username_0: I completed my undergraduate study in computer science outside of the US. I hope to obtain a PhD degree in Human-Computer Interaction (HCI) in the US, but I don't feel that I am qualified enough (in terms of research experience). So I am hoping to find a research internship opportunity in US/Canada.
I could probably find relevant projects in my own country, but I feel that trying the US/Canada first is more straightforward. After all, it's in the US/Canada that I am looking for a PhD degree.
I have a few HCI professors in mind, with whom I would like to work very much. Should I contact them for internship opportunities? What are my chances and more importantly, **how can I improve my chances**?
Some facts that I think are relevant to the question:
I am not a US resident, so I will need a visa. I don't expect assistantships (it's up to the professor). I have already learnt the basics of HCI, and I know the fundamental research methods (through reading and auditing classes). My undergraduate university is not *the best*, but it's surely one of the best universities in my country.<issue_comment>username_1: Assistantships are up to the professor (research assistantships) as well as the department (teaching assistantships). It is highly likely that you will be supported by both over your PhD life. That being said, I don't think US universities make distinctions among PhD applicants based on their nationality (in general) in terms of funding or admission. Best of luck with your application.
You might email the professors you have in mind, but do check their websites to see whether they explicitly prohibit that.
Upvotes: 0 <issue_comment>username_2: You should simply **send an email** to the supervisor you are interested in. Your **best** asset for getting accepted to any program is to have the support of a professor. In the US/Canada you should just start with *Dear Mr./Ms. X*.
Now, to convince the professor that you are a good fit, you need to convince him that you have **research potential**. This is what the university will be looking for too.
You could talk to him about your interests, and about how they overlap with his (assuming they do, as your post suggests). If you have done any research in that field, you could mention it.
If you can first find and read a relevant paper he wrote and have questions about it, ask them. If you can't, because the papers are too complicated (which may really happen) or behind a paywall to which you don't have access, you could ask for a few classic references to get started.
Upvotes: 3 [selected_answer] |
2013/01/30 | 3,732 | 15,872 | <issue_start>username_0: In academia, several cases have recently come to light of very well-known scientists who fabricated their data out of thin air.
In some instances these papers have been cited many times by other researchers, and some of them have even been praised. Thus, when the truth came to light, it also made it appear to the public that scientists have bad peer-review processes.
In light of this, how can a reviewer perform at least some sanity test that the data is (most likely) not fabricated? Suggesting fabrication could come as a great injury to the researcher, but I think there should be some kind of mechanism to control this.<issue_comment>username_1: There is only one reliable way to do it, which is to try to replicate their results.
The unreliable, but not completely useless way, is to see if the numbers fit Benford's Law.
Benford's Law describes the distribution of the **first digit** of many very diverse data sets. This is the distribution:

(public domain chart from [Wikipedia](http://en.wikipedia.org/wiki/File:Rozklad_benforda.svg))
<NAME> describes this further, in [Not the First Digit! Using Benford's Law to Detect Fraudulent Scientific Data](http://dx.doi.org/10.1080/02664760601004940), a paper in the Journal of Applied Statistics from 2007.
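If you want to try this on a batch of reported values, the check is short enough to script. Here is a minimal sketch in Python (my own illustration, not code from the paper above; the use of SciPy's chi-square test and the decision to drop zeros are my assumptions):

```python
import math
from collections import Counter

from scipy.stats import chisquare  # SciPy assumed available

def first_digit(x):
    """Leading significant digit of a nonzero number, e.g. 0.0042 -> 4."""
    return int(f"{abs(x):e}"[0])  # scientific notation starts with that digit

def benford_check(values):
    """Chi-square comparison of first-digit counts against Benford's law.

    A small p-value only flags an anomaly; it is *not* proof of fraud,
    and many honest datasets are not Benford-distributed to begin with.
    """
    digits = Counter(first_digit(v) for v in values if v != 0)
    observed = [digits.get(d, 0) for d in range(1, 10)]
    n = sum(observed)
    # Benford's expected share of leading digit d is log10(1 + 1/d)
    expected = [n * math.log10(1 + 1 / d) for d in range(1, 10)]
    return chisquare(observed, f_exp=expected)  # (statistic, p-value)
```

A small p-value is only a reason to look closer; as the answer below argues at length, plenty of honest datasets do not follow Benford's law.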
Upvotes: 7 [selected_answer]<issue_comment>username_2: It depends on the nature of the data. If the data is presented in the form of pictures (such as photos of biological experiments, like Western blots), you can check for traces of [image manipulation](http://www.nature.com/news/2009/091009/full/news.2009.991.html). Guidelines for examining photographic data are available from the [Council of Science Editors](http://www.councilscienceeditors.org/i4a/pages/index.cfm?pageid=3646).
Upvotes: 4 <issue_comment>username_3: Edit
----
After thinking about some of the points raised in the comments, I would like to expand on my answer, but also defend its form against the criticism that it is so vague as to be unhelpful. [In case you are wondering what the original answer was, it is roughly the sections 'Looking for mistakes' and 'Trusting your feelings'.]
Benford's law
-------------
This was [mentioned by @EnergyNumbers](https://academia.stackexchange.com/a/7605/5830). The answer is very popular; however I don't think that it is particularly helpful.
Benford's law is only one of many statistical techniques that can be and have been used to detect fraud or bias. It has become widely known, probably partly because it is simple to apply, but also simple to justify in a 'hand-waving' way.
However, its validity is much more limited than @EnergyNumbers (who calls it *the* unreliable way) implies. As originally formulated, Benford's law said that if you take a large range of numbers which have different sources, contexts, meanings or units, the logarithmic distribution emerges. This is a very interesting statement, but has little utility in detecting fraud. The statement that Benford's law, whether applied to first or second digits, should apply to a particular set of observations of a single variable, is an extremely strong statement. There are many, many natural examples of well-formed non-fraudulent datasets to which Benford's law does not apply. Several other digit distributions could reasonably arise in bona fide data. You may or may not be able to justify the assertion for your own data, however what you should not do is blindly apply Benford's law to various sets of numbers, and start forming opinions about their reliability.
It is a serious statistical technique and requires non-trivial statistical understanding to apply. The same thing applies to checking for normality. Unless you have a good understanding of how normal distributions arise, you will not be able to form a theory as to why some distribution should be normal. If this is the case, then any test for departure from normality will be useless.
A paper that really examines this for Benford's law is [The Irrelevance of Benford's Law for Detecting Fraud in Elections](http://vote.caltech.edu/content/irrelevance-benfords-law-detecting-fraud-elections). [Hat-tip to @Flounderer who linked this in his comment.]
Why this answer doesn't go into any statistical detail
------------------------------------------------------
The original answer I gave, below, tries to err on the side of *not* handing 'formulas' over to people who possibly don't understand their use. I tried, perhaps not very successfully, to suggest starting places for thinking about *how* and maybe *why* people either fake results or unconsciously introduce bias.
This kind of forensics is in some ways very similar to other stats, but has some very important differences. If you are looking for a signal in some noise, you might form two hypotheses, both of which imply that data is random, but with say different means or distributions. If you are looking for cheating, you have to remember that fraudulent data is not in any sense random. Spotting it involves teasing apart (possibly) three elements: the real numbers, the deliberate adjustment, and any pseudo-random perturbation that might have been made to mask the adjustment.
I believe that in order to properly apply some forensic test to a set of data, you need to first develop a proper theory of why the test might be meaningful. This entails hypothesizing about exactly how the data might have been manipulated. For example, Benford's law was successfully used to investigate whether China's GDP growth in % was being rounded up if it had a high second digit: <http://ftalphaville.ft.com/2013/01/14/1333552/chinas-non-conforming-gdp-growth/> (registration required).
Taking a whole battery of tests and applying them to some data might allow you to get to the stage of theorizing, but it can't get you any further. This is why in the first few paragraphs of my original reply, I talked in very general terms about how faked data might differ from genuine data. These are supposed to give you places to go looking for anomalies, which you later investigate rigorously.
Looking for mistakes cheaters might make
----------------------------------------
Starting points can be things like testing to see if the numbers fit the conclusion too closely. If an experiment was done on several groups of test subjects, all of which are supposed to be identical, then you would expect the success rate in each group to be close to the overall average, but not too close. Some researchers who have made up their results had all group success rates equal to the average success rate to the nearest integer.
If you get someone to make up the results of 20 successive coin tosses, they deviate from statistical likelihood because they don't include, eg., enough sequences of 5 heads in a row. People usually think things like this are less likely than they are. Look out for things which are 'too random' or 'too regular'.
Researchers into election fraud have had some success looking at the last two digits of numbers to see if double sequences like '11' or '22' occur less often than they should, because humans who make up 'random' numbers tend to avoid them. This applies in the specific case where you have enough digits that the trailing digits should be uniform, and where no rounding has been applied. This test wouldn't have detected the Chinese GDP rounding, or manipulations where leading digits are adjusted.
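As a concrete, hedged illustration of this check (my own sketch, not code from the election-forensics literature; `vote_tallies` is a stand-in for whatever hypothetical list of raw counts you are inspecting): under uniform trailing digits, about 10% of values should end in a doubled pair ('00', '11', ..., '99'), and a one-sided binomial test asks whether doubled endings are suspiciously rare.

```python
from scipy.stats import binomtest  # SciPy >= 1.7 assumed

def doubled_ending_test(values):
    """One-sided test for a deficit of repeated last-two-digit pairs.

    Only meaningful for values with at least 3 digits, where trailing
    digits should be roughly uniform and unrounded.
    """
    eligible = [n for n in values if n >= 100]
    doubled = sum(1 for n in eligible if n % 10 == (n // 10) % 10)
    # Under uniformity, P(last two digits are equal) = 0.1; fabricators
    # tend to avoid doubles, so we test for "fewer than expected".
    return binomtest(doubled, len(eligible), p=0.1, alternative="less")

# result = doubled_ending_test(vote_tallies)  # vote_tallies: hypothetical data
# print(result.pvalue)
```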
The mathematician Borel weighed the loaf of bread that his baker gave him each day and decided that the average was too far below the standard weight of a loaf to support the hypothesis that the baker wasn't making underweight bread. He confronted the baker, who promised that he would make the loaves heavier. After that Borel continued to weigh his bread. The average weight was now high enough, but he studied the distribution of weights and realized it corresponded to that which you would get if you always took the maximum of several observations of a normal distribution. He concluded that the baker always gave him the biggest loaf from those on the rack, but that the average was still below spec.
This is a classic illustration of how someone might falsify results - by taking the best result from several runs. In order to reason about the distributions, it was first necessary to understand how this method of cheating works.
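Borel's reasoning is easy to reproduce by simulation. The sketch below is my own illustration with invented numbers (a baking mean of 950 g, a standard deviation of 30 g, racks of 5 loaves): the 'best of the rack' sample has a higher mean, but also a smaller spread and a positive skew that honest single draws lack.

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# Invented numbers: the baker bakes at mean 950 g with sd 30 g.
single = rng.normal(950, 30, size=100_000)                      # honest draw
best_of_5 = rng.normal(950, 30, size=(100_000, 5)).max(axis=1)  # biggest on the rack

for name, x in [("single loaf", single), ("best of 5", best_of_5)]:
    print(f"{name}: mean {x.mean():.0f} g, sd {x.std():.0f} g, skew {skew(x):+.2f}")
# Expect roughly (950, 30, zero skew) for the honest draws versus about
# (985, 20, positive skew) for the 'generous' baker: the mean looks
# fine, but the shape of the distribution gives the scheme away.
```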
Or suppose that someone had a bunch of results but threw out those which they didn't like. Has this introduced unlikely biases in the selection of the original test subjects? Eg if patients are supposed to be chosen at random but there are fewer old people than one might expect. In general if any data were rejected you should test for dependence between rejection and other variables.
Sometimes real data has a particular bias or noise which is lost in faked data. In the Simonsohn paper cited below, he looked at a psychological study where subjects were asked to say how much they would pay for a T-shirt. Unlike other, genuine studies, the results didn't cluster around multiples of $5.
Another thing which can be hard to look for but which is very damning is to figure out what the results might be if no effect was present and see if eg a single digit has been changed, or a round number has been added.
Sometimes people genuinely do introduce biases unconsciously because they believe in their theories or want to succeed. This could mean that they make very small adjustments which can have a large cumulative effect, such as rounding up numbers which should be rounded down.
Trusting your 'feelings'
------------------------
The other thing you need to try, is to get a 'feeling' for something dodgy, outside of the actual numbers. Again, all this does is give you a place where you try and build a proper statistical hypothesis and then test it against the data.
A mathematics professor once said to me that you can spot false proofs by two things: either the work becomes very complicated at the point where it is wrong, or the wrong step is skipped over as obvious. Not quite the same situation I know, but very complicated data handling procedures could be designed to be difficult to replicate (or could be the point where the researcher manipulated data until she got what she wanted). Saying something like 'cleaning' or 'normalization' without explaining exactly what was done could also be a red flag.
If there's a very very standard source of data of a particular type and someone didn't use it, or used it but not in its original form, why not? People often give a citation justifying some supposedly straightforward manipulation they perform on the data to clean it or get it in a more convenient form. Usually but not always, this reference should be to a standard textbook on stats or experiment design, or to some paper which everyone in the field knows. If it's to something extremely obscure is this justified by the obscurity of the topic? Does the cited work actually say what they claim it does?
How to proceed
--------------
I have tried to promote the general skill of trying to understand how people fake things, why and how they mix the truth with fabrication (or sometimes are subject to unconscious bias), and what constitutes strong evidence of anomaly. Looking at case studies, of which [Simonsohn's paper](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2114571) is a great example, can help. <NAME>'s famous book 'The Mismeasure of Man', on the face of it a political tract critical of biological determinism, is also a collection of many case studies of deliberately or accidentally biased scientific work.
If you think that something is fishy, but you don't have the analytical tools you need to prove it, then you need to do research into specific statistical tests that apply to those cases. Among academics, most stats isn't done to detect fraud, and even if you have good quantitative skills you might not have this knowledge. The example of Borel is a good one in that many of us don't know offhand what the distribution of the 'biggest loaf to hand' *should* be, given some reasonable assumptions for the distribution of the loaf sizes.
However, as a researcher you should definitely have the skills to go and find this out from a book. Asking a statistician is a very important technique which may or may not be a last resort, depending on how friendly your statistician is.
Upvotes: 5 <issue_comment>username_4: Fabrication of data is not easy to detect as a reviewer. You may try tricks on raw numerical data, if the values come in large numbers and can be expected to have a normal distribution. But even if the tests say there is some likelihood that the data was fabricated, it is still not "proof". You would need at least a strong likelihood to prevent publication. That is not easy to find.
If you look at accounts of retracted papers and the retraction process, you will find out that the culprit is usually identified not by the numbers alone but by other facts: he has a very rapid publication rate compared to the field, he behaves oddly and does not allow coauthors to see raw data, things like that. In most cases there was nothing a very diligent reviewer could have done. That is sad, but that is the truth of it in most cases.
Upvotes: 4 <issue_comment>username_5: There has been related work in survey research on how to detect interviewer falsification of survey responses, sometimes referred to as curb-siding (when an interviewer is allegedly sitting on the curb next to the house where they were supposed to be doing an interview). See [a collection of practices](http://www.amstat.org/sections/srms/falsification.pdf) from the Survey Research Methods Section of the American Statistical Association, and [a system for detecting interviewer falsification](http://www.rti.org/pubs/paper-detectingintrvfalsifi.pdf) from RTI, one of the top 3 US survey research organizations.
The general findings usually run along these lines: interviewers are OK with getting the first moments (means, proportions) about right, but are lousy in the second moments (variances and correlations): they avoid extreme answers, thus reducing variance, and are lousy at correlations (may not know well enough how things go together).
Not much of that may be applicable to natural sciences, though. I would suggest enlisting a local statistician. Many stat departments run consulting courses for their grad students that welcome requests for expertise from other disciplines.
Upvotes: 3 <issue_comment>username_6: It is possible to lie with numbers, so here are some other signs (for when the numbers lie) that can flag (but not confirm) that some of the data may be manufactured:
1. The authors have a track record of publishing multiple times in very low impact paid journals.
2. The authors seem to lack a crisp understanding of the subject matter.
3. It doesn't seem that the authors frankly described how they decided various things and reached their conclusions.
4. Jargon-rich prose, and complex, ambiguous sentences. It all appears very official, royal, and professional-looking.
5. It looks like very routine work, and similar work has been done in many other places.
6. Impossible units or impossible values.
7. Wrong procedure: for example, the authors do not know in which fraction of the material to look for a certain isolate, or they mention a condition under which such a result is not possible for a physical, chemical, or other reason.
8. The authors seem to have copy-pasted or permuted and combined the same digits in various places.
9. Statistical measures for too many datasets are very alike.
10. The authors seem to avoid mentioning their own limitations and any troubles they experienced, as if everything were very swift, neat, and clean.
Upvotes: 1 |
2013/01/30 | 1,774 | 7,298 | <issue_start>username_0: I often end up frustrated over the pace of my work, or rather the lack of "worthy" results. During my feedback talks with my colleagues and superiors I usually get good feedback but I find it distressing that most of it is qualitative arguments.
I sometimes feel like I have not progressed as much as I would have liked to, but I am not sure how to assess whether or not I have developed "enough" over time. This led me to wonder if it is possible at all to measure how well a PhD student is progressing.
The usual measures in the community appear to be:
* number of publications
* which journals the publications appeared in (or rather the impact factor)
* number of hours in the lab (regarding how "hard-working" one is)
* number of credits taken from courses during phd studies
I personally find none of the above to be a good measure. Publications are a fact of research, or rather its goal. But they should not be a measure of how well a PhD candidate is doing in research. I believe the pragmatic demand for "more publications" has essentially led to overall lower quality and novelty in individual publications. But even without that subjective comment, it should not be a revelation to anyone here that the number of publications (and especially the journals in which they are published) depends more on the seniors on the paper than on the grad student who wrote it.
As for the other two measures I point to, they are just too naive to mean much. I mean, you can be in the lab for 18 hours a day, but not learn much that is new, or even worse, not even remember the things you have learned. Besides, one can also argue whether it's actually better for a grad student to be obsessed with the number of hours in the lab, or with the courses taken.
**Summary:** Is there a good way to measure your progress throughout your studies? How can I evaluate my development as a scientist, in quantitative (and unbiased) terms?<issue_comment>username_1: posdef, this is something that I've struggled with as well. Instead of nitpicking over what the definition of 'is' is, I would like to offer the approach which I have used. Your mileage will vary, but I've found this approach to work well for me, and it may work for you as well.
First, the crux of the problem for me was that progress is either analog or digital, qualitative or quantitative, right? This is what we are led to believe and I think that in the case of education, it is not true. There are discrete quantitative measures by which you can, and have, gauged your progress. 40 classes to get your Bachelor's degree. If you complete 20 classes, you're half way there. Quantitative progress. If you're half way through and your GPA is 3.5 then you can make a qualitative assessment of your empirical data. 50% complete, doing well with room for improvement. So throughout our undergraduate work there is a pretty consistent set of standards and metrics by which to measure our progress and the quality of our work.
Graduate school, for me at least, has been somewhat different. With a Masters program you usually have either seven to eight classes and a thesis, or ten to twelve classes and some kind of a project. For the first portion of the program you can track as before, but then you get into the core of your research or development and encounter something which I think you are alluding to: 'the perception of quality'.
* "How many papers did you write?" 10 - High(Quant)
* "How good did people think they were?" 4 - Low(Qual)
* "How many hours did you spend in the lab?" 100 - Low(Quant)
* "How good was your lab output/finding?" 10 - Exceptional(Qual)
I haven't started the doctorate yet, but I can make an educated guess that this only intensifies with candidate work.
What it really boils down to at this point in your academic career, at least in my experience, is 'how well is your work received?' and how you track that to evaluate your own progress. What I've done is to take a two-pronged approach to each aspect.
On the quantitative side I've set up a simple database with my course work, grades, number of publications, number of lectures, number of citations, everything I can think of to track my progress. You could also do this with a spreadsheet pretty easily. The basic tenet of this approach is 'What do I have to accomplish & what have I accomplished?'
On the qualitative side I've asked professors, facilitators, leads, reviewers, even peers in some cases, to evaluate my work against the task objectives. Usually you get something like "*you're doing fine*", but if you can get more detail, do so. A question that I like to ask is "*Would you feel comfortable with me teaching your syllabus?*" This seems to get their attention. It's interesting because it puts the qualitative assessment back onto the evaluators themselves and forces them to think of your mastery of the material in terms of dissemination rather than assimilation. "*Do you feel comfortable with me teaching this material/running this lab/managing this team that has your name on it?*" Good, bad, or ugly, I write it down and give it a 1, 2, or 3. 1 = no faith. 2 = some faith. 3 = complete faith. If, after 6 courses (for the sake of argument), you have a qualitative score of 15+, then you know that more than half of your superiors have faith in your mastery of the content that you are consuming or presenting.
To be fair... I'm a bit of a numbers junkie and this may not be the kind of system that works for you but it has worked for me so far.
Best of luck.
Upvotes: 2 <issue_comment>username_2: To reduce anything to a quantitative score, you need a valid metric. There are very few valid metrics you can use in research, as you state in your question. The only other half-common metric that you didn't list above is **Papers read/annotated**. We all know reading is really useful, and you should definitely aim to read X papers a week (where X is some random number that makes sense in your field). That said, the goal of reading is to gather information, and how much information was gained (and retained) is a lot harder to measure.
That said, I have two half-answers:
* Make up your own metric based on **hours of productive work.** At the end of each day, just write down in a spreadsheet how many useful hours you worked that day. You can use a service like [Rescuetime](https://www.rescuetime.com/) to help you with this, or the [Pomodoro technique](http://www.pomodorotechnique.com/), or just simply buy a stopwatch and keep track yourself. At the end, do a simple `# useful hours/total hours worked` to see how productive you think you're being (see the sketch after this list). That'll probably be more useful than anything else you'll come up with.
* Make **task-specific metrics**. I tracked my progress on my thesis using a [custom shell script I wrote](https://gist.github.com/4683189) that tracked how much text I added in a given time period and plotted it out. (Yes, I probably spent more time making the shell script than I gained in motivation from using it. Whatever.) I tracked progress on one of my projects by how many datasets I had analyzed. I tracked progress on another project by how much coding I completed each day. These are much more useful than broad, overarching metrics.
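A minimal sketch of the bookkeeping for the first metric, assuming a hand-kept CSV log (the file name and column names are invented for illustration; any spreadsheet export would do):

```python
import csv
import datetime
from collections import defaultdict

def weekly_productivity(path="work_log.csv"):
    """Print useful/total hour ratios per ISO week from a hand-kept log.

    Assumed (hypothetical) CSV columns: date,useful_hours,total_hours
    e.g.  2013-01-30,4.5,9
    """
    useful, total = defaultdict(float), defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            week = datetime.date.fromisoformat(row["date"]).isocalendar()[:2]
            useful[week] += float(row["useful_hours"])
            total[week] += float(row["total_hours"])
    for year, weeknum in sorted(total):
        t = total[(year, weeknum)]
        if t > 0:
            print(f"{year}-W{weeknum:02d}: {useful[(year, weeknum)] / t:.0%} useful")
```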
Upvotes: 4 [selected_answer] |
2013/01/30 | 1,405 | 5,594 | <issue_start>username_0: When I started my Masters thesis, one piece of advice I got for finding good material to read was to subscribe to several journals' alert systems, so that I would get mails with eTOCs (electronic Tables of Contents).
This was pretty cool then; for months I could stay on top of what was being published out there and was up to date in my own little narrow area. Now, almost 3 years later, that list of journals has expanded a bit, and the more work I have at the lab, the more mails accumulate in my mailbox. What used to be no more than 10 unread mails is now about 900. The output is more than I can handle; what more restrictive method can I implement?
I guess it is pretty clear that this way of following the literature is not sustainable in the long run. So I wonder if there are other, and perhaps better, ways of staying on top of recently published articles?
---
Please note that I have checked the following two questions prior to asking this one. I do feel, though, that my question differs from these two in its essence.
* [Am I reading enough of the scientific literature? Should I read for breadth or depth?](https://academia.stackexchange.com/questions/50/am-i-reading-enough-of-the-scientific-literature-should-i-read-for-breadth-or-d)
* [How do we know if something relevant is already published?](https://academia.stackexchange.com/questions/5073/how-do-we-know-if-something-relevant-is-already-published)<issue_comment>username_1: During my PhD, I subscribed to the RSS feeds of the journals I would regularly read (started with 6 of them, ended up with a dozen). I would skim through the titles of all new articles, and read the abstracts of those whose title drew my eye. I then found out that some journals (*J. Chem. Phys.* in that particular case) offer specific RSS feeds for each of their sections, in addition to the “whole journal” feed. That helped reduce the number of journals I was skimming through.
Now, after the end of my PhD, my research interests are broader, the number of journals I like to watch is larger, but my time is more limited. This system didn't work anymore, and I set up a new one, which has worked well for a few years. I use bibliographic databases (*SciFinder* and *Web of Science*; but I'm sure Google Scholar and PubMed have the same features) to create **publication and citation alerts**. Here's what I have set:
* **Citation alerts for all my own papers**: if someöne cites my work, there's a good chance I'll be interested in their paper. This one has two additional “strategic” bonuses: you get to keep an eye on your competition, and you can suggest newer work to other authors when relevant (*“hi there, I saw your recent paper citing my 2008 article on X, I thought you might be interested on a new extension of this algorithm that we published this year”*).
* **Publication alerts for major players** in the field of interest: I have 10 to 20 of those, watching all papers these people publish.
* **Citation alerts for some seminal or high-impact papers** by others in the field: a good way to see how a new idea is adopted/improved by the community. Those tend to trigger a massive number of cites, so you may want to get rid of them after some time. I have between 5 and 10 of those alerts at a given time.
The only drawback to this method: database updates tend to lag somewhat behind the RSS feeds of the journals themselves, so you get papers that are 2 to 8 weeks old.
---
In addition, use **conferences** to stay on top/catch up with the literature:
* Look at contributions made, see what's new and go check the relevant recent publications.
* Even if you're not at the conference, look at the online program and see what looks new.
* If you're attending, talk with people… you can also use that opportunity to ask some people (whom you do know well enough):
>
> Have you seen that new technique by the team at MIT? It seems to work really well… I was wondering: what has caught *your* eye in the recent literature?
>
>
>
Upvotes: 6 [selected_answer]<issue_comment>username_2: I subscribed to the math arXiv, i.e. I signed up to be sent an email every day about the articles posted to the math arXiv (the email contains titles and abstracts).
When I first did this, I got very excited and subscribed to lots of areas that were interesting to me - geometric topology, general topology, algebraic topology, group theory, etc etc. and I swiftly got completely swamped and ended up reading nothing. I decided that this wasn't getting me anywhere, and I unsubscribed from everything except one subtopic (geometric topology) which was most relevant to me.
I also made it a part of my morning schedule to go through the arXiv email (usually sent around 5 a.m.): wake up, read my morning webcomics, go through the arXiv emails, make a note of anything that sounds relevant for me to read later in the office.
In summary then:
1. Subscribing to the arxiv instead of a journal gives me a manageably small list of articles per day (as opposed to a long list of articles on a more spaced-out schedule)
2. ArXiv allowed me to focus in on a small field of research - this might not be possible for journals, since they potentially include articles in a range of subject matter.
3. Making it a part of my daily schedule (particularly for a time when I'm potentially procrastinating from making the bikeride to school) made me more likely to actually do it.
4. At least in mathematics, it might take quite a bit of time for a paper to make it to the publishing stage, and reading the arXiv lets me be more up to date.
Upvotes: 3 |
2013/01/30 | 3,620 | 13,407 | <issue_start>username_0: I am teaching a freshman science course for the first time, and I am also doing outreach activities in high schools. To attract the young generation to science, one has to connect the concepts with everyday applications.
So I build my PowerPoint slides using pictures from the textbooks we officially use. Unfortunately, when one searches Google Images for any subject, one gets much more appealing and fascinating pictures. Some of these pictures are even related to simple applications of very recent discoveries, explained on popular science sites. The problem is that I cannot use those pictures from Google results in my slides because they are copyrighted.
What should I do then? Should I stick to the boring-looking textbook pics to avoid copyright problems, or bring life to my course by using images Google shows up (but then I might go to the jail!)?
Is there something that says one can use images shown by Google for educational purposes with no copyright issues?<issue_comment>username_1: 1. Find an interesting image.
2. Check for licensing conditions. If the license has generous terms (like a Creative Commons license) allowing free reuse of the image, or reuse under conditions that you meet (like attribution or absence of modifications), use the image.
3. If you think your use is covered by fair use: use the image.
4. Otherwise, contact the copyright holder for explicit permission to reuse.
5. If it is not clear who the copyright holder is, it is an orphaned work. In some European jurisdictions it can be used (but not in the US).
Educate yourself with this nice website: <http://www.teachingcopyright.org>
This document for teachers is a good resource too: <http://www.umuc.edu/library/libhow/copyright.cfm>
---
It must be noted: many people follow the algorithm below
1. Find image
2. Screw it! Just use image
3. Realize that nobody came to put you in jail
It does not mean it is right, but they are not going to jail. And if their use is not to make money, they probably will not run into issues at all.
Upvotes: 6 <issue_comment>username_2: >
> Is there something that says one can use Google images for educational purposes with no copyright issues?
>
>
>
Nope, there is no such principle in general, although it depends on the particular country. In the U.S. the closest concept is [fair use](http://en.wikipedia.org/wiki/Fair_use), which covers some cases. Unfortunately, there's no simple way to tell when it applies. For example, it's not true that all educational uses are automatically fair use.
I'm not a lawyer, but my understanding is that if you reproduce a figure from a paper so you can criticize it, then that's certainly fair use, but if you decide to illustrate your cryptographic protocol using Bart and <NAME>, then that's likely not. Of course many cases fall in between these extremes.
In practice, though, you are unlikely to get in any trouble for using copyrighted images in slides for an academic presentation. People do it all the time, and I've never heard of any legal action. Posting the slides online is a little riskier, but even that is sometimes done. [Don't interpret this as legal advice, of course: it's still illegal if it's not covered by fair use.]
If you want to be careful, you can choose to use only public domain images or those available under a suitable Creative Commons license allowing re-use. The web page <http://search.creativecommons.org/> can help you find such images.
Note that Creative Commons licenses typically require attribution, and that's a good practice in general. If you use any images you don't create, I'd recommend a little note giving credit off to the side somewhere. After all, it's good to model high ethical standards for our students.
Upvotes: 5 <issue_comment>username_3: You can use images showing up in Google Search if and only if the license allows for it. Therefore, you might be interested in [Advanced Google Image Search](http://www.google.com/advanced_image_search), where you can **search by copyright status**. More information about the *Usage rights search* can be found [here](http://support.google.com/websearch/bin/answer.py?hl=en&answer=29508&rd=1).
For example, here are [freely usable images searching for "Mars"](https://www.google.com/search?as_st=y&tbm=isch&as_q=mars&as_epq=&as_oq=&as_eq=&cr=&as_sitesearch=&safe=images&tbs=sur:f&tbo=d&biw=1920&bih=930&sei=p0oJUe3AMYvJswbe-IHwCg). Notice how many of them are from NASA or the Wikimedia Foundation.
And here is an example [searching for "IBM"](https://www.google.com/search?as_st=y&tbm=isch&as_q=IBM&as_epq=&as_oq=&as_eq=&cr=&as_sitesearch=&safe=images&tbs=sur:f&tbo=d&biw=1920&bih=930&sei=p0oJUe3AMYvJswbe-IHwCg#hl=en&safe=images&tbo=d&as_st=y&tbs=sur:f&tbm=isch&sa=1&q=IBM&oq=IBM&gs_l=img.3..0l10.29944.30237.0.30401.3.3.0.0.0.0.126.334.0j3.3.0...0.0...1c.1.r-4TDHePhjg&bav=on.2,or.r_gc.r_pw.r_qf.&bvm=bv.41642243,d.Yms&fp=6c5f0053a270bafc&biw=1920&bih=930).
**Note:** See the important remark by @jb. [in the comment below](https://academia.stackexchange.com/questions/7620/can-i-use-google-images-for-my-presentations-without-violating-any-copyrights/7628#comment13031_7628)
—
you should verify with the original source (1) that the picture really is free to use, and (2) under what conditions
Good luck and enjoy!
Upvotes: 7 <issue_comment>username_4: In case the search engine does not matter to you, you could also search on Flickr instead of using Google. The advanced search on Flickr has an option to search only for pictures with a Creative Commons licence, which is a good start.
However, check the individual licence terms. E.g., some pictures require that the photographer be credited.
Upvotes: 3 <issue_comment>username_5: We publish an industry magazine (very colourful and attractive piece!) and we source all our images from [ThinkStock](http://www.thinkstockphotos.com.au/?countrycode=AUS).
This is not a free site, but once you have subscribed to it (for a year or a month), you can download the number of images in your package.
There are no copyright restrictions, and you can manipulate the images in any way you like.
Check this too: [MorgueFile](http://www.morguefile.com/) It's free.
And [StockExpert](http://www.stockxpert.com/)
Upvotes: 0 <issue_comment>username_6: A copyright is still limited in some very important ways, such as Fair Use. This is a legal doctrine which allows even copyrighted material to be used by others for purposes of "criticism, comment, news reporting, teaching, scholarship, and research"; see [this link](https://www.copyright.gov/fair-use/) (or the archived version of the [original link](https://web.archive.org/web/20210605143659/https://www.copyright.gov/fls/fl102.html)).
Since it sounds like you would be using the images for nonprofit educational purposes, as long as you cite the source, you are not infringing on anyone's rights.
Upvotes: 2 <issue_comment>username_7: I'm not a lawyer, but I think that since your use of the image is a non-profit, non-commercial use, it might be considered fair use, so you will be fine. From Nolo, [The 'Fair Use' Rule: When Use of Copyrighted Material is Acceptable](http://www.nolo.com/legal-encyclopedia/fair-use-rule-copyright-material-30100.html):
>
> When Is a Use a "Fair Use"?
> ===========================
>
>
> There are five basic rules to keep in mind when deciding whether or not a particular use of an author's work is a fair use:
>
>
> ### Rule 1: Are You Creating Something New or Just Copying?
>
>
> The purpose and character of your intended use of the material involved is the single most important factor in determining whether a use is a fair use. The question to ask here is whether you are merely copying someone else's work verbatim or instead using it to help create something new.
>
>
> ### Rule 2: Are You Competing With the Source You're Copying From?
>
>
> Without consent, you ordinarily cannot use another person's protected expression in a way that impairs (or even potentially impairs) the market for his or her work.
>
>
> For example, say Nick, a golf pro, writes a book on how to play golf. He copies several brilliant paragraphs on putting from a book by <NAME>, one of the greatest putters in golf history. Because Nick intends his book to compete with and hopefully supplant Trevino's, this use is not a fair use.
>
>
> ### Rule 3: Giving the Author Credit Doesn't Let You Off the Hook
>
>
> Some people mistakenly believe that they can use any material as long as they properly give the author credit. Not true. Giving credit and fair use are completely separate concepts. Either you have the right to use another author's material under the fair use rule or you don't. The fact that you attribute the material to the other author doesn't change that.
>
>
> ### Rule 4: The More You Take, the Less Fair Your Use Is Likely to Be
>
>
> The more material you take, the less likely it is that your use will be a fair use. As a general rule, never:
>
> * quote more than a few successive paragraphs from a book or article,
> * take more than one chart or diagram,
> * include an illustration or other artwork in a book or newsletter without the artist's permission, or
> * quote more than one or two lines from a poem.
>
>
> Contrary to what many people believe, there is no absolute word limit on fair use. For example, copying 200 words from a work of 300 words wouldn't be fair use. However, copying 2000 words from a work of 500,000 words might be fair. It all depends on the circumstances.
>
>
> To preserve the free flow of information, authors have more leeway in using material from factual works (scholarly, technical, and scientific works) than from works of fancy such as novels, poems, and plays.
>
>
> ### Rule 5: The Quality of the Material Used Is as Important as the Quantity
>
>
> The more important the material is to the original work, the less likely your use of it will be considered a fair use.
> In one famous case, The Nation magazine obtained a copy of <NAME>'s memoirs before their publication. In the magazine's article about the memoirs, only 300 words from Ford's 200,000-word manuscript were quoted verbatim. The Supreme Court ruled that this was not a fair use because the material quoted (dealing with the Nixon pardon) was the "heart of the book ... the most interesting and moving parts of the entire manuscript," and that pre-publication disclosure of this material would cut into value or sales of the book.
>
>
> In determining whether your intended use of another author's protected work constitutes a fair use, apply the golden rule: Take from someone else only what you wouldn't mind someone taking from you.
>
>
>
As you can see, this answer might violate rules 4 and 5. Wish me luck.
Upvotes: 2 <issue_comment>username_8: There are plenty of innovative services out there that offer you legal content for low or no cost. Most of them even offer their own PowerPoint add-ins so that you can get your pictures without even going to Google and having to worry about copyright. Shutterstock, Pickit and Pexels are such services; while the first one charges a small amount per picture, the other two are free alternatives with great content. Discovering those has greatly reduced my copyright headaches when I present, and I've even made some of my own photos available for use. Recommended!
Upvotes: 0 <issue_comment>username_9: It depends on the image
=======================
Your question is not well posed, and most of the answers you get are actually answering other questions. Google searches images all around the web, so you have to check whether the website where the image is stored has a permissive licence or guidelines regarding the sharing and reuse of the image. Filtering by licence using Google Images could be an option, but the way this search actually works is to check whether the pages link to an explicit license. So sometimes authors use images without consent and put the text of the website under a Creative Commons licence, and you are led to think that the images are under a Creative Commons licence as well. In other cases, the website might have a dedicated page for permission to reuse that Google could not find automatically. So Google's filters are not infallible.
If the image has a restrictive copyright
========================================
If the image has a restrictive copyright, it means you did not find a permissive licence. All content on the web is copyrighted if a permissive licence has not been released. However, even for copyrighted images, there is sometimes the option of ['fair use'](https://en.wikipedia.org/wiki/Fair_use) in the U.S., and in Europe you may have a look at [Copyright Directive Article 5.3](https://en.wikipedia.org/wiki/Copyright_Directive), which lists exceptions that European states **might** integrate into their laws:
>
> illustration for teaching or scientific research, provided the source,
> including the author's name, is acknowledged,
>
>
>
So, ultimately, you have to check the legislation of the country where you present the talk.
A specific question has already been posted for talks:
[What is the legal status of using copyrighted images in academic/conference talks?](https://academia.stackexchange.com/questions/38876/what-is-the-legal-status-of-using-copyrighted-images-in-academic-conference-talk)
Upvotes: 1 |
2013/01/30 | 773 | 3,354 | <issue_start>username_0: It's come up in our lab that we should probably have business cards for when we attend conferences; however, we've been given very little guidance on what is appropriate for a graduate student.
Some questions that have come up:
1. Should we be trying to standardize the cards to look like the official university's cards? (e.g., with the university logo, etc)?
2. Related to the first question, should we be going for standardized or for something that will make us stand out?
3. Should we be putting our current status on the card? It seems like if we haven't hit ABD yet, it might be counterproductive, because we'd have to buy new ones each time we made progress.
4. Should we be adding our research interests directly on the card? What about advisor?
5. Any other information (other than contact/website) that we should be including or tips on this?<issue_comment>username_1: I would standardize them and simply put the most relevant information on them. My (dated) business card looks like this:

Now I think it has a little more information than needed; the fax number is almost certainly unnecessary, but there might be circumstances where it's handy to have the mailing address. The important items are:
* University + department
* Academic homepage
* E-mail address
* Perhaps phone number
* The fact that you're a PhD student.
I wouldn't add too much information on them. Business cards are for core info, nothing more; they might get crowded otherwise.
My own business card is outdated: a university reform means I'm no longer at the *Department of Space Science*, but at the *Department of Computer Science, Electrical and Space Engineering, Division of Space Technology*. But I don't care, because the e-mail address is still correct, and the new department/division wouldn't even fit on a business card ;)
Upvotes: 5 [selected_answer]<issue_comment>username_2: You should check with your university's communications/PR department before printing up anything that reflects the institution's trademarked materials, but I think it sounds like a pretty good idea. You just want to make sure that you don't step into any legal quagmires. Any time you produce collateral that associates you with an organization, you can get into dangerous territory. For example, if you hand out your Awesome U. business cards at a pro/anti *whatever* rally, then you are associating that activity with the institution. Obviously, they have reason to control such materials.
This sounds especially nice for full-time students who are attending conferences and the like.
Alternatively, you could have personal business cards that say whatever you want and just say "Grad Student". I'd check first, but that seems like a reasonable compromise.
Upvotes: 4 <issue_comment>username_3: Everywhere I have worked so far, there have been standard templates from the University for how their business cards should look, so I've simply used those, ordered through the University press.
My titles have been:
* Project Assistant (during my undergrad)
* Wissenschaftlicher Mitarbeiter / Research Assistant (during my PhD studies)
* Postdoctoral Scholar
* Scientific Officer
* Postdoc
and everything else has been dictated by the University graphical manual and policies.
Upvotes: 2 |
2013/01/30 | 2,715 | 11,300 | <issue_start>username_0: I am a first-year PhD student, writing a conference paper with an Italian professor, very senior and renowned in our field. Every commit he makes to the SVN is riddled with spelling and grammar errors. I have been fixing the errors and also trying to improve the expression, but I have the impression he is not very happy about me doing it. (Maybe something to do with him having dozens of publications and me having a total of zero.)
On occasion, he has actually reverted my changes to stick with his wrong or inferior-quality expression. How do I deal with this? I would hate to see this paper go out with inferior-quality language when I could have improved it.<issue_comment>username_1: I have learnt very fast not to be a perfectionist! So perhaps you can be less judgemental (not saying this in a bad way).
However, you still need to be rigorous and if there is something that is dramatically wrong, you can then discuss it with your professor.
Perhaps giving him two or three correction options may help. Let him choose which version he likes. In any case, you would be preparing the options, so whichever option he chooses will be ok for you.
As F'x said, talk to him.
Upvotes: 2 <issue_comment>username_2: You have better things to do, like focusing on getting that PhD and the first paper. The guy is renowned, as you say, so at this stage it doesn't matter that much for him. No editor is going to reject a paper you write with him because it had mistakes. At most, people will say that the language should be improved, etc.
Practical advice:
1. Don't piss the guy off; it's not worth it. Be more politically savvy.
2. I don't know your field, but in some fields you have to write in a very specific manner, and what might seem inferior quality to you might be the standard way to write in that field.
3. Only raise the issue if it's a titanic mistake! Do it gracefully. Next time, don't change it; leave a polite comment instead.
Upvotes: 3 <issue_comment>username_3: It is always better to be diplomatic in academia, especially when you are a first-year graduate student and your advisor is a well-known person. That is: "never piss off your advisor".
He can do whatever he wants if you piss him off. After you work with a professor for 2-3 years, you will only have two reasonable options: (1) quit the program, or (2) suffer and somehow get the PhD. The other option is to switch advisors, but if you are in the middle of the program, that is hardly an option. Note that, even if your advisor doesn't become angry when you point out language mistakes at the moment, he may choose to stay calm, and find a way to react to you in the future. My high-level point is: be diplomatic. That is how academia works. If you piss off your advisor, you are not going to succeed in getting a PhD or a good job after that.
Coming to your specific question, maybe you should leverage the fact that your advisor is well-known. After you submit the article to a journal (they may see his name, and maybe they won't be harsh), the editors will likely ask for improvements to the language, and maybe then you can tell your professor that you will handle improving the language/grammar. Then it will show up as "taking responsibility" and he will appreciate you.
Upvotes: 2 <issue_comment>username_4: Other answers assume that the professor is the OP's advisor, which seems not to be the case. I would advise avoiding working with this guy in the future (if he's stubborn on such small matters, it's unlikely to end well), and if he were your advisor, the standard JeffE response would apply: "Don't walk, run!"
I am somewhat appalled that most answers seem to recommend the "play safe, don't mess with powerful people" approach. The language errors themselves might not seem like a big enough deal to pick a fight over, but by choosing to be "politically savvy" now, you make it easier for yourself to compromise in the future on more serious matters. Yes, fighting over such issues requires a lack of self-preservation instinct, but by choosing to do a PhD instead of an "honest job" you've already shown that's not a problem ;). Sorry if this is a bit off topic/argumentative, but I've seen this kind of answer also in other threads and I think the advice goes in exactly the wrong direction - there are a lot of excellent jobs outside academia, so contrary to other answers in this thread, you are actually full of options (being smart enough to be doing a PhD gives you a very strong hand).
Upvotes: 1 <issue_comment>username_5: As a French native speaker whose English has gradually improved over the years, I've been on both sides of this kind of situation, and there are different aspects to consider:
* If your collaborator does not like you fixing *typos*, then there is a real problem;
* It is usually accepted that papers are written not in British English, nor in American English, but in *Global* English, and being a native English speaker yourself might not be a strict advantage. If your collaborator has successfully managed to publish dozens of papers, then either his style is somehow accepted, or all his papers have been written by others, in which case he wouldn't mind letting you write the paper in your own style;
* The style of an author is personal, and changing it can be seen as touching the ownership of the text; it can also come across as offensive, especially when the gain of the modification is not immediately perceived (which might be the case for a non-native speaker: a "better" expression does not necessarily strike one as an improvement). This is particularly true if you only change one expression: if you were to rewrite an entire paragraph, changing the content to some extent, it would probably be easier to accept, since you would have clearly improved the text.
* At least for me, there is a clear notion of trust in the people I'm working with. There are some people I completely trust, and I don't mind for a second if they modify my text, but I would be reluctant to see my text modified by someone I just started working with, especially if it's only for cosmetic purposes, like picking the best expression or word when the original one is understandable/correct, or changing a notation, or reformatting the paper, etc. I'm not saying I would be against it, but I would need to understand and agree with the gain. In my opinion, this is mostly a matter of trust rather than seniority.
In summary, let it slide, as long as it does not impact the overall clarity of the paper, especially in the first stages of the paper (some expressions you don't like might disappear naturally after a while, replaced by new content).
Upvotes: 3 <issue_comment>username_6: I'm also a first-year, non-native-English-speaking PhD student, but I've done some writing with my Master's thesis adviser (a native speaker of my language, not English, but very well versed in English and ~2 more languages). In any language, I often have long causal (is that the right word?) sentences, and he sometimes wouldn't agree with my style.
Even though I was the one doing most of the writing, I still feel like I can offer some useful tips. And, before the list, I *support everyone arguing strategy and being careful that your actions aren't **misinterpreted** as disrespectful*.
* if it's just *typos (spelling)* or *obvious grammar* ("It's advantage" vs. "Its advantage"), just correct it on your own and *accompany it with an SVN comment* ("Ran text through spellchecker", "Spotted and corrected a few minor spelling mistakes")
If you feel like your professor has an easily-bruised ego, make it sound like it's not a big deal. Just some routine check-ups and tune-ups you did, nothing major changed.
* request **in-person meetings**, or (in case it's not possible to meet in person) **video-conference/phone-call meetings**, or at least ask the guy (nicely!) if it would be okay to **collect and send** your opinions and confusions about the paper **via e-mail** once or twice a month or so
* **keep track of** passages and expressions that **you would change**. Rank them if you want: from the ones that are just plain confusing and which you cannot understand, to the ones that sound strange language-wise, to the ones you simply think you have a better expression for.
If you sit on that information for a few days, you'll come to terms with some of them, realize that some are really a matter of personal style, and see which parts are simply confusingly written and hard to understand.
* **communicate with the professor**, respectfully and diplomatically expressing your concerns. Some suggestions that I would feel comfortable with.
*"I'm not sure if I understood what you meant in this passage here (...). I have interpreted it as (...), is that correct?" (slip your suggestion here)*
*"As a non-native English speaker, I am not too familiar with this expression or weather it can be used in this context. Do you think it would be a good idea if we / I checked for an alternate expression?"*
*"I had a very hard time to understand this part (...). After going through it and understanding it, I have re-written it in a way that sounds clearer for me. Would you have time to go through this and offer your opinion?"*
*"Would you mind interpreting this couple of sentences for me? I do understand the gist of it from my practical work, but I can't seem to put the pieces in place after reading it."*
* this way, you're not imposing your style or writing, and it cannot be misinterpreted as "I think my writing is better than yours." But, as papers are written to be understood by others, **expressing your concerns might prompt him to re-think that part of the text**.
If he tries to explain on the spot, and loses himself in the explanation, that should be a clear hint even to the professor that it's not really clearly written.
There is no chance of you changing the meaning of something you misunderstood. Also, you showed that even though you would write something differently, you respect his style, reasoning and opinion. My ex-supervisor always told me, it's always okay to have an opinion of your own if you can back it up and defend it. If you can both concisely explain to each other how and why you've written a portion of text, it will be easier to reach an understanding.
* always offer him the chance to do it ("we might" -- it means you) but say that you can implement the changes yourself ("or I can write the potential changes" -- it means you again).
Offering to let them do it shows respect for their opinion, and offering to do it yourself shows commitment and respect for their time. Very diplomatic :)
* **never** say you think **there is a problem**. [Saying you have a "problem" is a sign of weakness in academia](http://www.phdcomics.com/comics/archive.php?comicid=848) - so you definitely shouldn't accuse a professor of having one. Look through my post, go ahead: I never used the word "problem" before this paragraph. Not once.
So, in short, I strongly advise diplomacy. But also, **talk to your supervisor**. If you offer your suggestions in a way that tells your professor that you value what he's written, his opinion, and his work, he shouldn't have problems doing the same with you. And if he still does have a problem with it... **Don't walk. Run!** (by @JeffE)
Upvotes: 1 |
2013/01/30 | 407 | 1,850 | <issue_start>username_0: In my actual research work, I need some functionality that is not supported natively by any existing solid tools. So I have two choices: rather I implement this functionality my self, or I use an emerging tool which is in beta version.
So, can this have a negative impact on the acceptance of my results by the research community? (I mean when I publish it in a scientific paper)<issue_comment>username_1: There's no simple rule about what sort of software is acceptable. What you do needs to be reliable, publicly documentable, and justifiable. Some beta software satisfies this, and some does not. Ultimately, you need to be able to make the case that your methodology (including the software you use) is trustworthy. Even assuming it is, you need to be able to convince other researchers. If you aren't sure, then you should consult with experts about the particular software. If you're a grad student, then asking your advisor would make sense.
Upvotes: 3 <issue_comment>username_2: At the end of the day, the main requirement is that you can trust the tools you're using. If you're using open-source software, then you may want to double-check that the algorithms are written correctly. If you're using proprietary software, then you may want to consider verifying results with other software for at least some samples.
Note that this is true with any open-source package. For my thesis work, I used a particular open-source analysis toolkit which was very popular in the community. They were regularly releasing updates as people investigated the software and found small glitches. During my regular use, I even helped uncover and report a moderately serious bug that would have resulted in bad output — and possibly erroneous conclusions — in a particular edge case. Always know the limits of your tools.
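To make the "verify results on some samples" advice concrete, here is a hypothetical sketch in Python; `beta_tool` and `reference_tool` are stand-ins for whatever new and established implementations you are comparing, not real packages:

```python
# Hypothetical sketch: spot-check a beta tool against a trusted
# reference implementation on random samples before relying on it.
import random

def beta_tool(x):        # stand-in for the new, beta implementation
    return x * x

def reference_tool(x):   # stand-in for the established implementation
    return x ** 2

for _ in range(1000):
    x = random.uniform(-1e3, 1e3)
    expected = reference_tool(x)
    # Allow a small relative tolerance for floating-point differences.
    assert abs(beta_tool(x) - expected) <= 1e-9 * max(1.0, abs(expected)), x

print("beta tool agrees with the reference on all sampled inputs")
```

Even a crude harness like this catches gross disagreements early, and it documents exactly what was checked if a reviewer asks.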
Upvotes: 4 |
2013/01/31 | 898 | 3,690 | <issue_start>username_0: I'm teaching a programming course with ~20 students, guiding them through coding assignments and helping them understand what they are doing, and why it works or doesn't. Tasks expected of them range from trivial to medium complexity, within the scope of a 40-hour curriculum.
Now, toward the end of the course, they know enough to solve moderate programming problems. In order to get them to work on a few things more "exciting" than what we offer them, I am considering asking them to join an online coding (or problem-solving-through-coding) competition, such as Project Euler. I wouldn't expect them to be able to solve all problems, of course, but I could select a list of problems for them to pick from. For example:
>
> For this session, you are expected to solve between 5 and 10 problems from the following selection of Project Euler numbers: 1-10, 13, 15, 20-24, 26-29, 33, 35-38.
>
>
>
Sure, I could just copy these problems and make them "assignments" for them, but I think it could bring some fun for them to see it as part of a competition. Also, while I don't understand why, it seems that to their generation, doing anything *online* is vastly more exciting than doing the same thing otherwise. Finally, I have some hope that a few students might actually get into it, and continue doing it for fun after the course.
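To give a sense of the expected level: the first problem on that list (Project Euler #1, summing the multiples of 3 or 5 below 1000) can be solved in a few lines. A possible sketch in Python (the actual course language may differ, but the idea is the same):

```python
# Project Euler #1: sum all natural numbers below 1000 that are
# multiples of 3 or 5. A direct, few-line solution.
def sum_multiples(limit=1000):
    return sum(n for n in range(limit) if n % 3 == 0 or n % 5 == 0)

print(sum_multiples())  # 233168
```

The later problems on the list are noticeably harder, which leaves room for stronger students to stretch themselves.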
Now comes the question: **what downsides do you see** to requiring them to participate in one of these online challenges? (I'm most interested in the specific case I detail above, but generic advice/answers for other types of online participation might be interesting too!)<issue_comment>username_1: I'm doing my entire Bachelor's course by distance (online) at an interstate university in Australia, while I live in another state. To check that we've done all the related online module readings, etc., for most subjects we have 10-20 marks of the total marks for the subject set aside for forum participation.
You could allocate a small percentage of marks for this which would hopefully give your students the incentive to do this online task.
Upvotes: 1 <issue_comment>username_2: This sounds like a great idea to me overall, but I can see a few potential issues:
1. It's possible that one or more of your students might already be a participant, which could raise issues of fairness. (Other students may complain that he/she got a head start.)
2. Similarly, the fact that these problems are widely distributed on the web may make it easier to find solutions online. I haven't looked at the Project Euler solutions online and don't know if they are any good, but it's not hard to find purported solutions. This could also be a pain for you: if you make it easy to cheat, then you're more likely to have to figure out how to deal with cheaters.
Upvotes: 4 [selected_answer]<issue_comment>username_3: One potential problem, if you're within the US, is running afoul of your University's interpretation of [FERPA](http://en.wikipedia.org/wiki/Family_Educational_Rights_and_Privacy_Act). My university forbids me from *requiring* students to participate in a publicly-accessible forum using their real name or university email address, because *the fact that someone is a registered student* is considered a protected educational record.
Upvotes: 3 <issue_comment>username_4: Project Euler is pretty reliable, but (in addition to the points in the other answers) I can see two risks:
1. The website (with the challenges on it) may become unavailable.
2. The owner of the page can edit the content as often and as much as they like, so there is no guarantee that the challenges as you saw them are the same ones your students will see.
Upvotes: 2 |
2013/01/31 | 2,090 | 9,257 | <issue_start>username_0: Related to this question: [How to buy plane tickets for job interviews?](https://academia.stackexchange.com/questions/7484/how-to-buy-plane-tickets-for-job-interviews)
What does it mean to accept a job offer? Or, maybe more accurately, when have you accepted a job offer?
The latest and most conservative might be when you sign a contract. This seems a little late in the game, as contracts, especially in the States, are often slow to be generated. The earliest might be when you go for an interview. In the UK it is pretty standard for universities not to reimburse candidates to whom offers are made and subsequently turned down. Middle ground might be when you enter contract negotiations or verbally agree to the terms of your contract.<issue_comment>username_1: There are many different takes. “Accepting” a job offer is pretty much that: **if you tell the person who made you the offer** (your HR contact, the hiring committee, any person of authority in the hiring process) that you accept it, then you have taken the job.
**Does it mean there's no turning back? Of course not!** The question then becomes: how binding is that agreement? And again, there are answers on many different levels: legal, moral, diplomatic…
* Legal: as always, better to ask a lawyer, union representative or knowledgeable and trusted friend. Everything depends on the local law, the type of offer made, what you said and/or wrote, etc. It may sound logical that nothing's set in stone until you have signed a contract, but that may not always be the case. Some institutions might, for example, require you to write and sign a binding letter of engagement before the contract is drafted (which, as you said, can take time). In some jurisdictions, the simple fact of showing up for work on the first day of the contract *is* a binding, implicit contract following the terms of your offer. (Though I would say it should be obvious to all that actually *coming to work* is pretty binding.)
* Moral: that's the most variable of all. Turning down an offer you had accepted orally, because you now have a better offer from some other place, is not *wrong* in itself. The important thing is: being of good faith, and being diligent in informing them that you have changed your mind. If it turns out that you accepted the offer knowing all the while you would end up turning it down, that would be inexcusable.
* Diplomatic: people understand the position you're in, as they have most probably been through it themselves some years back. So, they will be sympathetic, as long as they feel you are of good faith, diligent, and show acceptable contrition (not sure that's the right term for what I'm trying to describe… let's say you look/sound apologetic enough). Otherwise, well, you risk making enemies, and that may not be the best thing to do early in your career.
The fact is: the game is played by both sides. Hiring committees don't dismiss all other candidates immediately when they offer you a position, and they know that Stuff Happens. In a competitive environment such as academia, they surely have a plan B (and probably C and D).
Finally: if that's a tenured or tenure-track position, you'll probably stay there very long (for life?). It's an important choice, and thus you shouldn't find yourself bound by promises made too fast, or you may come to regret it.
Upvotes: 4 <issue_comment>username_2: It sounds like customs depend on the field and country, but here's my experience based on mathematics in the U.S. I'll answer based on customs rather than laws, since that's generally more relevant: one can get a bad reputation for doing something perfectly legal, and one can get away with things that aren't legally justified if nobody is willing to enforce their rights in court.
**The short version:**
Acceptance is understood to be a final decision that commits you to showing up for a year. You can ask to be released from that obligation, but you shouldn't just announce you aren't coming. If you give a good enough reason for your request, they'll grant it. If your reason isn't compelling to them, they won't try to force you to show up but you'll really damage your reputation (not just at that university). Saying you got a better offer afterwards is not considered compelling, and you are expected to withdraw other applications upon accepting an offer.
**The long version:**
Negotiating does not imply accepting an offer, and in fact you should always try to negotiate over whatever you care about before you accept, since your leverage will never be higher. (Some candidates don't like the idea of having leverage, but you can think of it as a benefit to the department as well: it's much more effective for the department to tell the administration "We need to do X to get our wonderful candidate to accept" than "Our wonderful new hire wishes we would do X".)
Accepting an offer just amounts to saying you accept it. In principle, this could be tricky: people's memories of an oral acceptance could be disputed later, and it's possible to write things that sound like an acceptance but might not be meant that way ("Great, I guess we'll be colleagues next fall then"). Of course I'd strongly recommend avoiding anything that might be ambiguous or confusing, just in case, but in practice I've never seen this actually cause a serious problem. Any sensible department will follow up to get an unambiguous answer in writing, so if the situation remains ambiguous it's because both sides screwed up.
The real question is how binding an acceptance is, assuming both sides agree the offer was accepted. In the communities I'm familiar with, you cannot unilaterally change your decision once you have accepted. You could presumably get away with it, since the department isn't going to sue you if you don't show up, but that would be very bad for your reputation. Instead, the standard approach is to explain how things have changed and ask the department for permission to withdraw your acceptance.
In certain cases, this is perfectly straightforward. Suppose an unexpected problem has arisen in your life: for example, one of your parents was just diagnosed with cancer and you want to live close to them for the next year or two. Surely any reasonable department would understand and approve.
Of course, most cases are less clear cut, and amount to personal preference. This is more likely to arise after a deferral, where you accepted a job but then went on leave for a year first, since the extra time allows for more things to change. In general, departments will be pretty unhappy if you defer and then change your mind. It's important to ask to be released from your obligation rather than simply announcing you won't come; the department will often agree, since they understand you would likely leave after a year anyway. It's not good and you should try hard to avoid this situation, but occasionally it happens. If it does, you should feel a little guilty for making it harder for other candidates to get deferrals, by contributing to the impression that people with deferrals might change their minds in the meantime.
At the other extreme, you might simply change your mind within a single yearly job market cycle and decide you prefer another offer you already had at the time of your decision. This is probably not even worth asking about: when you accept an offer, it is understood to be a final decision, and you can't just re-evaluate your options. You should officially decline all your other offers when you accept an offer; if you aren't ready to do that, then you aren't ready to accept a job yet.
Of course, the trickiest case is when you get an offer with an early deadline and have to make a decision before other universities you might prefer can make an offer. Most departments don't want to pressure people into making this kind of decision, and it's always worth asking for an extension of the deadline. Many departments will agree, but a few will not (I know of one department that strategizes about how to put time pressure on people).
If you are caught negotiating with a department that is trying to pressure you in this way, then you should be as tough as they are. On the other hand, their behavior does not mean your acceptance becomes non-binding, and unilaterally changing your mind will still look bad throughout the community. If the department refuses to grant an extension or show any other flexibility, then they are clearly indicating they want a definitive answer now, and you'll have to give them one. It's worth considering whether you even want to work for a department that would treat you that way.
As soon as you have accepted an offer, you should withdraw all your other job applications. Partly this is so they don't have to waste time evaluating a candidate who is no longer available, and partly it is because if you don't withdraw them, then it looks like you are still hoping for a better offer. That is what will really offend people, because it will look like you tricked the department whose offer you accepted by giving them what was understood to be a final decision while still staying on the market to see what other offers you could get.
Upvotes: 4 [selected_answer] |
2013/01/31 | 732 | 2,817 | <issue_start>username_0: What does it mean when it says under a journal article that it has been communicated by "XYZ" where XYZ is not the author but some other scholar with a very strong reputation? What is the relationship to the actual author and/or the content? Is this some sort of seal of approval to get results out and known quickly? (I am specifically wondering in the context of mathematics and mathematical physics.)<issue_comment>username_1: Generally, XYZ refers to the editor that handled the paper at the journal.
Upvotes: 6 [selected_answer]<issue_comment>username_2: See [this question on Mathematics.SE](https://math.stackexchange.com/questions/41871/what-does-communicated-by-mean-in-math-papers) and its very good answers for full details, which I will summarize below. It should be noted that this information is part of the journal format, and added by the publisher itself (along with the publication timeline).
* Some journals published by learned societies or national academies require that “communications” be presented (or sponsored) by a member of the society. This was the case, for example, of the *PNAS* (*Proceedings of the National Academy of Sciences*) until July 2010; the top of an article looked like this:

* Some journals use this formulation to denote the handling editor: the one who makes the editorial decision (or recommendation to the full editorial board), after having selected referees and received the referees' reports.
This is not common practice: most journals do not indicate who the handling editor was for a given article.
Upvotes: 4 <issue_comment>username_3: It is important to note that "Communicated by" can mean a direct submission of a paper by a scientist who is not (in most cases) directly involved in the paper itself. It was designed as a way for established scientists to give a "leg up" to their younger colleagues by allowing them to circumvent the normal review process. This means you may have to give these types of papers a bit more scrutiny as a reader.
PNAS is the publication where I have most commonly seen "Communicated by" publications, but this feature was phased out in 2010:
>
> Until July 1, 2010, members were allowed to communicate up to 2 papers
> from non-members to PNAS every year. The review process for these
> papers was anonymous in that the identities of the referees were not
> revealed to the authors. Referees were selected by the NAS member.
> PNAS eliminated communicated submissions through NAS members as of
> July 1, 2010, while continuing to make the final decision on all PNAS
> papers. ([wikipedia](http://en.wikipedia.org/wiki/Proceedings_of_the_National_Academy_of_Sciences_of_the_United_States_of_America))
>
>
>
Upvotes: 3 |
2013/01/31 | 649 | 2,708 | <issue_start>username_0: When I was a grad student, I participated in a bunch of conferences, and sent papers to journals and the like.
But now I'm no longer in academia. I've been out for more than three years, and I still get a few emails a week notifying me of new conferences, hotel discounts for those who register, extended deadlines and the other usual stuff.
The thing is that none of these emails offer contact information or ways to unsubscribe from the mailing list. And it's not like they're from organizations I previously interacted with, but they are certainly about my former line of research. They are probably worse than spam, because I don't even think you could report them as spam.
I removed myself from all site registrations I can remember, such as IEEE and ACM, but these keep coming and coming and coming and coming.
How can a former academic get himself removed from all these mailing lists?<issue_comment>username_1: I also get these mails, and I have the impression there is no way to cancel these kinds of emails. Your email address is associated with an academic context, and publicly available. This makes it fair game for all the obscure conferences and journals that want to lure you in.
The solution I see is:
1. Get a new mail address
2. Try to filter out any mails mentioning conferences and such. Gmail has facilities for this kind of keyword-based filtering (see the sketch after this list).
3. Use a spam filter and let it train on filtering this kind of mail. Thunderbird and other mail programs, and probably Gmail, have these kinds of self-learning spam filters. You just keep flagging the mails as spam until they are automatically removed from your inbox. Do check your spam box once in a while to catch errors.
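To illustrate the keyword-based approach from point 2, here is a minimal, self-contained Python sketch; the keyword list is purely an illustrative guess and should be tuned to the spam you actually receive:

```python
# Minimal sketch of a keyword-based filter for conference spam.
# The keyword list below is illustrative only - adjust to taste.
SPAM_KEYWORDS = [
    "call for papers",
    "extended deadline",
    "hotel discount",
    "early registration",
]

def looks_like_conference_spam(subject: str) -> bool:
    """Return True if the subject line contains any spam keyword."""
    lowered = subject.lower()
    return any(keyword in lowered for keyword in SPAM_KEYWORDS)

print(looks_like_conference_spam("Extended DEADLINE: ICxyz 2013"))  # True
print(looks_like_conference_spam("Lunch on Friday?"))               # False
```

In practice you would let your mail client apply rules like these directly, but the same logic can be scripted against an IMAP mailbox if needed.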
Upvotes: 2 <issue_comment>username_2: Sadly, that sounds like spam, and you should treat it however you'd treat spam.
I'm not sure you should assume good faith on the part of the people sending you those e-mails. I'm an active academic with an e-mail address accessible on my university website, and I receive:
1. Legitimate conference announcements over e-mail lists used by people in the area, which I could unsubscribe from if I wished.
2. Announcements from people who actually know me.
3. Unsolicited announcements from scam journals and conferences.
I receive several of the last category every week (and more, unsurprisingly, than any other category). I do *not* get legitimate mass e-mails that are not in the first two categories. It's possible that in other fields there are some legitimate announcements sent the way you describe, but I'd guess very few.
In other words: spam is spam, and you should feel no compunctions about treating it that way.
Upvotes: 4 [selected_answer] |
2013/01/31 | 318 | 1,349 | <issue_start>username_0: Often publications resulting from funded research are published after the grant ends. It seems nowadays more and more publishers are requiring fees (submission fees, page charges, and open access fees). How does one pay for these fees after a grant ends?<issue_comment>username_1: **I refuse to publish in a journal that profits from my years of sweat and labour and asks me for money as well. It's against my principles.**
Once I was asked to pay 500 dollars for color pictures. I asked them to revert them back to black and white and just have them in colour online.
Open access is a different story, and I see it as legitimate on their part to ask for money for it. If you don't have the grant money => no open access. Put a preprint somewhere and publish somewhere that allows preprints, or at least turns a blind eye.
Upvotes: 1 <issue_comment>username_2: The way it works here in Japan is that each professor has an amount of money called "discretionary spending"; from there, they get money for these kinds of things. And it is usual that if a professor has his name on the paper, he should use part of this money to pay for the submission.
That money does not come from a grant, but from the University's endowment, so I guess it depends on that.
Upvotes: 3 [selected_answer] |
2013/01/31 | 2,979 | 12,514 | <issue_start>username_0: One of the issues we have at my English-language institute is the problem of getting our doctoral students to write papers in English. For some, writing isn't a big challenge. For others, however, the process is about as pleasurable as pulling teeth or a lobotomy (without the benefit of anesthesia).
What we've found is that there are a few problems that tend to creep up:
* Students don't know how to commit their ideas to paper
* Students are afraid of writing poorly, so they don't write at all
What I'm wondering is if there are any resources available that can help—particularly international students—with overcoming the "academic" version of writer's block.<issue_comment>username_1: As an international student, I have suffered a lot from writing academic papers. I believe **it is an advantage to be a native English speaker in academia**. Well, I mean mastering the language, not necessarily as a mother tongue. Each time I submit a paper, I expect some comments on its English. I remember the first time I submitted a paper, it was rejected because of its so-called bad English. Now, after five years of writing and submitting, and with the help of my supervisor and continued reading of papers, I only get minor comments (in most cases it's the reviewer's style more than the language itself).
I believe it is up to the students. If they want to have a career in academia, they should push themselves by reading and writing in English. They will notice their skills developing over time. Also, proofreading, *to know your mistakes*, is a good thing, especially in the beginning.
>
> Students don't know how to commit their ideas to paper
>
>
>
This is why you should push them. You want them to have successful careers after the PhD; this will be very difficult without a good sense of how to write academic papers.
Ask them to write a first draft and hand it to you. You can comment on it and send it back to them. This is how it worked for me.
Upvotes: 3 <issue_comment>username_2: Of course there are plenty of resources about [how to overcome writer's block](http://www.google.com/webhp?hl=en&tbo=d&output=search&q=overcomming%20writer%27s%20block#hl=en&q=overcoming%20writer%27s%20block); however, different techniques work for different people. The pieces of advice I always found very useful as a PhD student (although I cannot find the original sources, it's been years) were these:
1. **do not aim high at the beginning. A crappy and hasty first draft is perfectly fine; iterative improvement will come later**: as the author [here](http://grammar.ccc.commnet.edu/grammar/composition/brainstorm_block.htm) points out, inexperienced writers tend to set too high a standard for themselves. Since I am in a formal field, I therefore refrained from starting with the paper's motivation, and rather tried to work out the mathematical flesh first. That part is easier in terms of language, since the form can be copied/learned from good papers by others. But this differs across disciplines.
2. **block time every day for writing and do nothing else at that time, even if you end up staring at a blank wall**: this is my way to kill the procrastinator in me. Simply block three hours for writing every day of the week. Even if during that time one really just stares at a wall and writes nothing, it's better than procrastinating. Eventually the boredom gets so high that writing becomes a welcome activity. It is imperative not to do anything else, especially not to study, read or otherwise consult any literature; also, get disconnected from the Internet, colleagues, etc. The best option for me was to go to the department's library, where there was no wifi connection. I read somewhere that this technique is used by some novel writers, but I can't find the source of this advice.
3. Another powerful technique is to **use public commitment wisely**: that is, publicly commit to delivering an artifact by a precisely specified deadline. E.g., the first draft of the paper by next Friday. Tell your boss, tell your office-mate, whatever. The higher the authority you tell, the better. For many people this has a magic effect, because we tend to value our commitments, however painful it sometimes is to live up to them.
But again, different techniques work for different people.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Thank you for asking this question addressing what appears to be a sadly widespread problem in academia. I'm an international student (from India, in the US) myself, and it's been a great boon for me to be able to communicate fluently in English.
Particularly re: international students, our Office of International Students organizes (spoken) English classes, which are free for all international students and scholars. In addition, at Rice we used to have a group of grad students get together for lunch on Fridays and converse in English (this was a registered student organization called 'English Corner', and they had funding from various sources for the lunches). I realize that you were referring to resources for writing as opposed to speaking, but I believe that, particularly for non-native speakers, confidence in speaking can translate to confidence in writing. In terms of the actual writing process, there are some online resources, such as [Englishforums](http://www.englishforums.com/).
As for resources for a more general audience, in certain departments around here I've heard of a thesis-writing class that grad students are required to take for credit. Our Office of Graduate & Postdoctoral Studies has recently been trying to put together some professional development workshops and courses, which often focus on the 'communication' aspect of academia. Another possibility is that the beginning courses in the doctoral program (1st and 2nd year classes) could be made to have a strong (or at least non-trivial) writing component. For example, one might require students to write an expository term paper or something along those lines.
I realize that roughly everything I've mentioned above has to do with resources available at my university, so perhaps my answer consists of 'Here are some resources that might already exist at your university, or might be put into place there'.
Upvotes: 2 <issue_comment>username_4: <NAME> is very good on this:
<NAME>., & <NAME>. (2006). The handbook of academic writing: a fresh approach. Maidenhead, England: McGraw-Hill.
<NAME>. (2006). How to write a thesis. Maidenhead: Open University Press.
<NAME>. (2005). Writing for academic journals. Maidenhead: Open University Press.
Understanding perceptions and fears about judgement (external and internal) and the difference between performance/learning orientations are sometimes useful conversations to have.
Upvotes: 3 <issue_comment>username_5: There is a strong line in professional writing that suggests that there is no such thing as writer's block.
Writing for a journal should be something mechanical, not some work of art that has to come from the depths of your heart; in the end, it is a skill, and a skill that you have to work on.
Many professors are really bad writers because their own professors were also very bad; there are few writing courses in a grad student's curriculum.
These and other very valid points are presented in "How to write a lot" by <NAME>. I find it to be a very useful book, full of great advice for students and professors alike.
[Link to the book](http://rads.stackoverflow.com/amzn/click/1591477433)
Upvotes: 2 <issue_comment>username_6: I recommend [beeminder](https://www.beeminder.com). It allows you to set goals publicly. If you go "off track", it penalises you in a few ways you can choose (charges you money, posts to your Facebook account, ...).
For me it has been very effective at getting me to write. I use it according to the following rule: I count writing sessions, where a session is at least 5 words. In fact, most of the time, I will end up writing hundreds of words; beeminder just forces me to get over the activation barrier and start typing.
Upvotes: 2 <issue_comment>username_7: I don't really see this as a problem specific to non-native English speakers. I know plenty of first-language English speakers who are absolutely rubbish at writing.
The problems I've personally witnessed go much deeper than correct use of the language: It's mainly about organizing your thoughts and what you want to say, and then saying it in the clearest way possible. This usually does not involve any in-depth knowledge of the English language. In fact, being *too* good may make your writing worse.
I use and preach an incremental writing approach which consists of the following steps:
1. Start identifying the **one statement** that your paper will make, e.g. "*here is a new method to solve problem X*", or "*method X is better than method Y for problem Z*".
2. Once you're clear on what the message will be, write a rough sketch of your paper in terms of the statements you will make. This should be the **main story** of your paper. Each statement should really only make a single point, e.g.
* Solving problem *X* is very important.
* Most people use method *Y* to solve problem *X*.
* Method *Y* has this/that weakness.
* Method *Z* avoids this weakness.
* Show on an actual example that *Z* is better than *Y* for problem *X*.
At this point you should start worrying about *consistency*. What you have to look out for is dependencies between your statements, i.e. have you really stated everything you need to state for the reader to accept the next statement? Will you be using words/concepts/methods before introducing them? It is important that you get these things right at this early phase, where ironing problems out is still relatively easy.
3. Once you've nailed your story line, you can start *fleshing-out* your individual statements. Here too, I would recommend sticking to bullet points and making only one statement of fact per bullet. The first statement above, for example, can be expanded as follows:
* Introduce problem *X*.
* List several instances of problem *X*.
* Give a concrete example of where solving problem *X* is important.
* State benefits of solving problem *X* more efficiently.
Here again, dependencies are crucial! Make sure you don't use any information without having given it in a previous statement. Also try to keep dependent statements as close together as possible. Remember that you're trying to tell a story and need to keep your reader on track.
Also, note that I haven't said a thing about sections. It is usually only at this point that I would start placing section headings and grouping different statements. Doing so too early may cramp your story-telling.
4. You should now have a somewhat complete story-board of *what* you're going to say, and you still haven't had to worry about *how* you're going to write it. What you need to do now is turn the **bullets into text**.
The way I usually go about this is to turn every statement/bullet into a paragraph of text. The first sentence of said paragraph should be the statement, followed by at most one or two sentences either explaining it in more detail or giving an example of what you are saying. If you need an example, almost every paragraph in this answer was written this way.
5. The final phase is **refinement**. Your paper, at this points, will probably consist of a large number of very short paragraphs that don't necessarily flow into each other. This is where you start merging paragraphs and using connectors between them, e.g. "Thus", "However", "Furthermore", etc...
This final step is not something you do once, it's something you repeat until your paper looks, feels, and reads like a regular paper. I usually go through a paper with a red pen and fix things by hand while reading, then implement the corrections, and then wait a day or two before iterating again.
In summary, the process I've just described is **completely mechanical** and does not involve any in-depth knowledge of the underlying language. The only language ability you need is to formulate clear statements. If you do anything fancier than that, you'll risk losing any reader whose level of English is below your own.
I am very much aware that there are many people who can just sit down and write beautiful, precise, consistent, and easy to read papers. Good for them. For the rest of us, I suggest using the approach I describe. At least it works for me.
Upvotes: 4 |
2013/02/01 | 2,633 | 10,694 | <issue_start>username_0: Over the past few days, my advisor and I had been going really hard. We got a really good idea and the preliminary results looked good, so I started spending long days in the lab, going home only to sleep. My advisor saw this and he started spending a lot more time with me and we had long meetings whenever I requested. This has been going on for about 4 weeks and although I loved it while I was in it, I feel burnt out now. There are still really exciting things I need to try but I don't know why I can't get myself to do any of them.
What is a good strategy to escape this burnout phase?
I have already tried:
* Playing an instrument I was good at
* Just taking some time off
* Limiting my work hours
But none of these and other attempts seem to work.<issue_comment>username_1: For me, having a structure is usually something that brings a good balance.
One of the reasons PhD students can get very disorganized and end up wasting a lot of time is the lack of a fixed schedule; a schedule is needed both for a productive life and for a balance between your work and personal life.
Just try to keep to a schedule, and you'll see that you will get more relaxed.
Upvotes: 3 <issue_comment>username_2: For a true burnout you will need to stop working, rest, and seek counseling/medical help. You need to lower your expectations of yourself and virtually eliminate what others expect from you. Ultimately, because work is about expectations (either self-imposed or set by others), I doubt that you can continue working and recover from a burnout.
Given that you state that the burnout occurred over a short period, rather than a sustained year-upon-year effort, my advice is to take a vacation. Three weeks should do the trick.
Just remember, life is about enjoying it, not earning money, because in the end you will take nothing with you.
Upvotes: 7 [selected_answer]<issue_comment>username_3: I find burnout a recurring effect, and to some extent it comes with academic research, as you are continually trying to solve problems and come up with new ideas. In this respect I find doing science like doing art: if I am not in the mood for doing it then the results won't be good and productivity is low, so the only solution is to stop completely. If you have got the research 'bug' (you normally love research and it preoccupies pretty much every waking hour of your day), then when you are ready you will come back to thinking about it and want to get back in the lab.
My advice is to do nothing until you are ready - don't think about the lab at all or worry that you are not doing anything, just rest completely - go for walks, watch movies, kill zombies, whatever.
As a post doc I have learnt to organise better, and to back off if things get too hectic, taking an afternoon off for example. I still suffer a little at the end of the year, when I take a fortnight off, but usually I am itching to get back after a week.
Upvotes: 5 <issue_comment>username_4: As an addition to the current suggestions, I can highly recommend adding some exercise to your daily life. Lab life, especially when intensive, makes for a sedentary lifestyle: you sit in front of the PC, at the wet lab, etc.
What kind of exercise you do is a preference thing; I personally love high-tempo ball sports like football (soccer) or squash. There's nothing like the endorphin high you get after wearing yourself out completely and taking a shower afterwards. It will help you get troubles off your mind as well. I can highly recommend squash for this purpose; when playing against an even opponent, an hour's workout will get you to a point where forming short sentences is as complicated an intellectual task as you can manage, which means no time/place for daily worries.
Another important thing is to get good sleep. Not just the hours in bed but the quality of sleep: if thoughts and worries about work are haunting your subconscious, it really doesn't matter how long you are in bed. In this respect you'll get a positive synergy between physical workout and better sleep.
Hope it helps, and you'll start feeling better soon.
Upvotes: 4 <issue_comment>username_5: I agree with a lot of the other answers, but I have a few additional ideas that haven't been suggested yet.
Do you find yourself thinking about this project at odd moments, even when you're supposedly resting or doing something else? You need to reset your mind by clearing out this project and replacing it with something else for a while. It needs to be sufficiently compelling to get your attention away from the thing that has filled your mind for 4 weeks. Then, after a bit, your enthusiasm for your old project will regenerate and you can be excited about it again.
When you get sick of working on a particular project, one thing that can sometimes be helpful is to spend some time (perhaps a week or two, maybe more) working on a very different project of some sort.
Another possibility is that you are not actually burned out. You may instead have conditioned yourself to associate this project with working very long hours. Now, whenever you think about working on it, you subconsciously feel like if you work on it, it will consume your life again and you don't have the energy for that. This is a bit harder to deal with. To continue to work on this, you have to break the conditioning. If you can force yourself to work on the project, but with more reasonable hours, that may help.
Upvotes: 2 <issue_comment>username_6: Burnout is a word of many meanings. But basically, it is characterized by very strong physical exhaustion, a general anxiety and the feeling that you are a failure at work, that you will never meet the expectations of the people you work with/for. This last feeling is strengthened by the fact that a person in burnout thinks she owes something to the others. A last symptom is depersonalisation: you have the feeling of living outside yourself and the world; you are a spectator of your life, not an actor in it anymore. If you have this last symptom, you should go to the doctor right now - not asap, now!
Most of the time, a burnout becomes a real medical problem (such as a strong anxiety syndrome) and requires that a medical doctor take action.
Besides prescribing medication, an MD will give life advice such as:
* Stop working completely for a while; do a sleep cure
* Avoid any activity that relates to work (if you're in academia, don't read complicated stuff; if you're a plumber, don't do any home improvement)
* Change your environment: go visit your old uncle who is a farmer in Ohio (or a fisherman in France, or ...)
* Modify the way you live, and be more involved in your own life. Sometimes, we (= people in academia) don't take the time to cook, to do sports, to rest without activity. Even if one can live happily with a 100% focus on work, it increases the odds of being burned out.
* And my last piece of advice: at first, try to avoid seeing people from work. This is necessary so that you can realize that they don't really need you and you don't really need them.
Upvotes: 4 <issue_comment>username_7: First of all, no one here can know what is really the matter with you. So we all find it rather alarming, because
"Not getting yourself to do exciting things" *can* be a symptom of serious medical problems.
However, after a "work-sprint" you may just be exhausted in a perfectly normal way. E.g. after I had handed in my Diplom thesis, I needed two weeks of basically doing nothing and sleeping a lot (incidentally and very typically, I got a cold as well). It's just paying back your debts in recreation, in the very literal meaning of the word.
### Things to do:
* Talk to your advisor. From what you wrote, you have a very good relationship. If you think you are in the normal need-for-holidays, tell him, and get the holidays.
* During the holidays,
+ Sleep much
+ Spend much time outdoors. Sun (in case it's winter now where you are) and exercise are good for everyone, and you may need to catch up due to the work sprint. It doesn't need to be real sports; for me personally it would be better to do "exercise" on a non-exhausting level, but for longer. 5 - 8 h of walking, biking or slow cross-country skiing would sound good to me, but your mileage may vary of course. I'd say a good amount of fresh air is when you fall into your bed at 8 pm and sleep till next morning...
+ Make sure you eat lots of vitamins
* If you are afraid that something more serious may be the matter (i.e. you are not 100% sure that it isn't):
+ **Don't wait until you know it is serious!** By then, it will be *very* serious, and you may not be able any longer to seek the help you need.
+ Also talk to your advisor. If you think holidays may help, take them. However, here are two additional "safety lines":
+ Schedule a meeting for after your holidays to discuss whether you are again in working condition. Ask him *now* to get you to medical help if you are not in working condition after the holidays.
+ Ask him to *come and get you to medical help* if you don't show up after holidays.
In addition (before the holidays),
* find out whether your university has some kind of psychological counselling (not sure about the correct English name); examination offices usually know that.
* Alternatively, find out a psychological clinic (university hospital?) with emergency counselling hours (again, someone please correct my English)
* If you don't get yourself to do this *now* (till Monday noon), go to your advisor (or very good friends/relatives), tell them you have a psychological emergency and that they should get you to medical help immediately.
### Normal exhaustion after intense work:
Personally, I know and love these exciting periods of intense work. However, they are exhausting, and you need the recreation afterwards as you'd need recreation after a mountain tour of several weeks. Also, they don't happen every day (I think one couldn't survive that, even though they are incredibly good). But from what I know from fellow researchers, these are a serious driving force for quite a few of us. Welcome!
* Even though you are now exhausted, remember how good it is. I think a healthy balance is when you are exhausted the way you are after a big physical effort. I remember these periods like physically strenuous tours.
* They are not an every-day experience, but odds are that this wasn't the last experience of the sort :-) And, while this one may have been too much of a good thing, you can learn to know when it is enough (and/or to plan for recreation afterwards). For me, this got easier once I had the experience that new such spells of incredibly good work do come.
Upvotes: 3 |
2013/02/01 | 1,307 | 5,055 | <issue_start>username_0: There are many times when I am faced with the task of extracting data from a published graph (usually a bitmap image in a paper). For example, a scatter plot from which I would like to get a list of individual (*x*, *y*) coordinates for the points.
One option is to ask the contact author for raw data. Most will do it, sometimes in nice ASCII format, sometimes in Excel files, sometimes in formats that I cannot open (chemists are fond of software like [Origin](http://www.originlab.com) or [Igor Pro](http://www.wavemetrics.com/products/igorpro/igorpro.htm)). Some authors never reply, or ask questions like "what do you want to do with it?". In all cases, it takes time. Sometimes, it's not even possible (I can hardly email the author of a 1936 paper!).
The other option is to extract the data. I currently use [g3data](http://www.originlab.com) to do that, but for large scatter plots **having to click on every single point is tedious**. Thus, I am looking for data extraction software that can **recognize individual points automagically**, and possibly filter them by point color or symbol used. Is that even something that exists? What other tools can you recommend to work around this issue?
I don't think it'd be appropriate to have extra requirements on the software, so I'm happy with free or commercial solutions, running on any OS. Of course, if given the choice, I'd prefer open source software running on Linux and Mac OS.<issue_comment>username_1: We had a very similar problem at my old job: we had to scour a huge literature database containing literally thousands of papers for any data showing the solubility behavior of different species. A lot of this data was from the 1950s through 1970's, and was data we could not reproduce for a very large number of reasons (time and now safety regulations being chief among these).
The colleague who was responsible for collecting all of this data used a package called [Data Thief](http://www.datathief.org) to remove the data from graphs. It seemed to work well, but is also (from what I recall) commercial software (or rather shareware, but still technically not free). It is cross-platform and written in Java, so perhaps satisfies a decent amount of your criteria.
Upvotes: 3 <issue_comment>username_2: A colleague suggested I use [GraphClick](http://www.arizona-software.ch/graphclick/), a Mac OS software that includes (according to its website):
>
> * Automatic detection of curves (solid, dotted or dashed), symbols, bar charts, or perimeters of areas
> * Frame-by-frame digitization of QuickTime movies
>
>
>
The latter is something I had not thought about, but might actually be useful for some teaching needs (analysis of motion from a video). My first experiences are good: the software is easy to use, includes a nice magnification UI, and automatic curve detection works fine if the graph is "clean".
---
And here's a list of other possible software from [this answer on Cross Validated](https://stats.stackexchange.com/a/14440) (link thanks to @AndyW and @Paresh):
* [Engauge Digitizer](http://markummitchell.github.io/engauge-digitizer/) (free software, GPL license) auto point / line recognition. Available in Ubuntu repository (engauge-digitizer)
* [Get Data](http://www.getdata-graph-digitizer.com/) (shareware, free trial version, $30 for personal license) has zoom window, auto point / line recognition
* [DigitizeIt](http://www.digitizeit.de/) (shareware, free trial version, $49 for personal license) auto point / line recognition
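Whichever tool you pick, it may help to see how little magic is involved: once the point markers are located, turning pixels into data is just a linear map fixed by two calibration ticks per axis. Here is a minimal Python sketch of the whole pipeline, assuming NumPy, Pillow and SciPy are available; the file name, marker colour and calibration numbers are made-up values you would read off your own figure:

```python
"""Toy scatter-plot digitizer. Steps: (1) mask pixels close to the
marker colour, (2) treat each connected blob of pixels as one point,
(3) map pixel coordinates to data coordinates using two known ticks
per axis. All constants below are placeholders for an actual figure."""
import numpy as np
from PIL import Image
from scipy import ndimage

img = np.asarray(Image.open("figure.png").convert("RGB")).astype(int)

# (1) Pixels whose colour is close to the marker colour (reddish here).
marker = np.array([200, 30, 30])
mask = np.abs(img - marker).sum(axis=2) < 90

# (2) Label connected blobs; the centroid of each blob is one point.
labels, n = ndimage.label(mask)
centers = ndimage.center_of_mass(mask, labels, range(1, n + 1))

# (3) Calibration: pixel columns of x=0 and x=10, pixel rows of y=0 and y=1.
cx0, cx1, x0, x1 = 50.0, 450.0, 0.0, 10.0
ry0, ry1, y0, y1 = 380.0, 40.0, 0.0, 1.0  # image rows count downward

for row, col in centers:
    x = x0 + (col - cx0) * (x1 - x0) / (cx1 - cx0)
    y = y0 + (row - ry0) * (y1 - y0) / (ry1 - ry0)
    print(f"{x:.3f}\t{y:.3f}")
```

For a logarithmic axis the same map applies to log10 of the values; the detection step is where the real tools earn their keep (anti-aliasing, overlapping markers, telling symbols apart).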
Upvotes: 5 [selected_answer]<issue_comment>username_3: I used [DataThief](http://www.datathief.org/) years ago. From what I remember, it is not fully automated. You start by loading a digital image and identifying the axes, some tick marks, the axis limits and the scale (i.e., linear/log/polar). This lets it handle bad scans (e.g., rotation and warping). Once it knows the bounding box of the plot, you then tell it what to extract (curves, points, errorbars, etc.).
It is written in Java, so it should run on most OSes. I believe it is free as in beer (and it might be open source).
Upvotes: 3 <issue_comment>username_4: [Here](https://stats.stackexchange.com/a/72974/25283) I describe how it is possible to recover data from a **vector** graph in a PDF file with maximum exactness, and even estimate the introduced recovery error. I show how it can be done in [*Mathematica*](http://www.wolfram.com/mathematica/), but the method shown is very basic and simple enough to be easily implemented in other systems.
Upvotes: 2 <issue_comment>username_5: Here is a very good online tool: <http://arohatgi.info/WebPlotDigitizer/app/>
Upvotes: 4 <issue_comment>username_6: [ScanIt](http://amsterchem.com/scanit.html) does well. It is free of charge, albeit not open source; runs on Windows. It can automatically recognize points, and even distinguish between different symbols used as points:
[](https://i.stack.imgur.com/kr3Ei.png)
Upvotes: 2 |
2013/02/01 | 1,974 | 8,641 | <issue_start>username_0: Occasionally I find myself reviewing a paper where many of the criticisms I have are addressed in papers in which I am an author (and often the lead author).
I don't want to make my identity known to the writers of the papers—but how do I make my points clear without breaking the "anonymity" of the review? Even if I cite a bunch of papers each time, it will probably be obvious what's going on.<issue_comment>username_1: I recently reviewed a paper in a similar situation. The paper was good, but some relevant publications I was aware of (not mine) probably should have been cited. When writing the review I simply suggested the specific topic which should be referred to, rather than specific papers. That way the suggestion is there and all they have to do is look. They got a couple of papers that I had in mind, and some others, so all was well in the end.
I don't personally agree with suggesting citation of your own papers directly in a review since as you point out this leads to suspicion of the identity of the reviewer, but also because reviews should be impartial as far as possible. If a paper that I authored is the most relevant work then any proper search will find it, if not then something else equally suitable will probably be ok for most readers.
Upvotes: 4 [selected_answer]<issue_comment>username_2: If you believe that there are papers that the author should cite, list them in your review. Yes, I mean the specific papers, with complete bibliographic information. If you include some of your own papers in that list, the author of the refereed paper may *suspect* that you're the referee, but so what?
On the other hand, if you believe that the *only* papers that the author should cite were written by you, you're probably wrong. Look harder.
Upvotes: 4 <issue_comment>username_3: If the author is an unknown nobody, then you don't need to worry about hiding your identity. The author probably has a lot to learn from you anyway, your papers will help him, and the author might even be happy to learn that his paper was reviewed by such an experienced expert in the field. (The number of relevant publications which you authored clearly indicates that you are an experienced expert in the field.)
The story is different if you know the author quite well. In this case, the author might have been well aware of your publications, but intentionally didn't cite them. In such a case, I would rather avoid referencing my own publications.
The conclusion is: if you think your publications will help the author (and that he will take at least a cursory look at them), then reference the publications that you think are relevant. If you just want to complain that the author didn't cite you, then let it be.
Upvotes: 2 <issue_comment>username_4: >
> I don't want to make my identity known to the writers of the papers
>
>
>
Why?
(Yes, also in my field reviews are supposed to be blinded on one side. But often very good guesses as to who the reviewer was are possible. Sometimes it is obvious, and oftentimes I believe I could track down the reviewers at least to their groups because of the specific use of certain terms. And, yes, you could probably track me down because I also have typical questions. Personally, I'd prefer receiving reviews with the reviewers' names (it is useless to thank reviewers A and B, as those two review all my papers, and everyone knows it - but I'd like to acknowledge reviewers by name) and to write reviews under my name as well.)
I see several possibilities here.
* As was suggested already, **name the issue, not the paper**. You can also **guide the authors to search terms** that will lead to relevant papers.
* There may be **valid exceptions** to this:
+ Sometimes one wouldn't know from title and abstract that the paper is relevant, e.g. when some methodological point was presented in a paper about an application.
+ Sometimes it is impossible to dig out relevant papers from among other papers that use the terms differently, or because some combination of search terms also occurs in irrelevant contexts\*
* Your question sounds as if you are well-known for the topic which you found missing in the paper.
+ **If you are The Big Guy** for this topic, **pointing to your publication does not compromise your anonymity** - any other reviewer who is aware of the issue would have done it, too.
+ If you are not The Big Guy, but maybe the only one in that field looking into this topic, odds are still you were asked to do the review *because of this expertise*. IMHO, the quality of the review matters much more than semi-existent (see above) anonymity.
In that case, I'd **write the review so the authors can understand and address the issue easily**. If you really think that this compromises your anonymity, you may **write to the *editor*** that you think your review is not anonymous, because ..., and possibly that he may decide to give your email to the authors so that they could contact you in case of further questions (IMHO it is much less work to answer a few questions than to have to review an additional time).
\* e.g. "soft classification" in remote sensing is used in a certain way, which I took over into chemometrics. However, one very important classification method in chemometrics is SIMCA, the "S" for soft. It could be used as a soft classifier in the remote sensing meaning. But it usually isn't. So I got tons of hits with SIMCA. No hits excluding SIMCA, and after looking into a certain number of them and never finding it used in this "soft" way, I gave up and had to say that I didn't find any such application. If anyone knows such a paper, please tell me the proper citation. I don't mind if you're the author.
Upvotes: 2 <issue_comment>username_5: This is a great question. I've run into this situation several times, and I'll tell you what I do:
* **First choice: Post a comment visible to authors.** If I'm lucky, the program chairs have chosen reviewing software that allows me to post comments (which are separate from my review) that will be made visible to the authors. Then, I mention the citation in a comment that's visible to the authors. This way, the citation/comment can't be linked to my review. The authors might guess that I was one of the reviewers, but they won't know whether I was one of the folks who wrote a positive review or a negative review.
This is the best case, but sometimes you get unlucky and the program chairs have chosen reviewing software that doesn't support this feature. In which case, my second choice is:
* **Fallback: Contact the program chairs and ask for their help.** I contact the program chairs, explain the situation (that I have a comment I'd like to share with the authors, but I don't want it to be linked to my review, because it might identify me), and ask for their help. Often they have a way that they can accommodate this. For instance, most online reviewing software these days can accommodate external reviewers. In that case, the program chair can send me an invitation as though I were an external reviewer, and I can supply an external review whose only content is the citation. As another example, the program chairs might be willing to manually send an email to the authors mentioning this additional comment, or they might be willing to submit a review of their own mentioning the relevant citation.
* **Last resort: Don't mention it in my review.** If none of the above options are available, then I do *not* mention the citation in my review. I believe reviewers have the right to remain anonymous, and don't have any obligation to the authors that supersedes that. Instead, I mention the related work in a comment to the program committee, to justify my recommendation on whether to accept or reject the paper. Then, I might include a general comment in my review that the authors have not done sufficient review of the literature and that they should do a further literature search; and I might even include some tips, like the conferences or journals that they should be reading. This is not as helpful to authors as I would like, which is why this is my last resort. And, if I'm forced to this last resort and the program chairs aren't able to help me get my message through to the authors, then I tend to view that as a shortcoming of the arrangements that the program chairs have made.
I think it is great that you are thinking about these issues and doing your best to provide authors with detailed comments and reviews. Kudos! That's the kind of spirit we should all applaud and encourage.
Upvotes: 2 |
2013/02/01 | 1,637 | 6,695 | <issue_start>username_0: In business (e.g. the IT industry) remote work (aka "telecommuting") can be relatively common (see e.g. a recent StackOverflow [blog post](http://blog.stackoverflow.com/2013/02/why-we-still-believe-in-working-remotely/)). It is exceedingly rare (or even nonexistent) in academia, even though a large part of academic work consists basically of thinking and writing, which can be done in any environment. Are you aware of any successful implementations of "remote work" in academia?
Of course, there are many factors making it less feasible - an academic employee usually has other duties (e.g. teaching) which can't be done remotely, and there is also the social aspect of research, meetings etc. (although this is not that much different from similar aspects in a programming job, unless we argue that doing science is "more creative" than mere programming and thus requires more physical presence). Also, currently available tools still make e.g. web seminars or math meetings difficult (no blackboard), although this too is changing (see e.g. G+ Hangout seminars: [TCS+](https://sites.google.com/site/plustcs/)). However, given the scarcity of jobs, "N-body problems" etc., it seems to me that the potential for remote work (even part-time) is, as of now, underutilized in academia.<issue_comment>username_1: In my subject area (computational physics) telecommuting is certainly possible, and in fact I am very lucky to be able to spend a significant fraction of time working off campus. As a post doc I generally find that when it comes to research (in terms of developing and running code and writing papers) I am much more productive working in an isolated environment without distractions. However, it is also very important to maintain links with other researchers and students in the group and department. This can partly be done with tools such as Skype, but the importance of one-to-one interaction should not be underestimated, as well as just 'being around'. Many problems are solved over coffee, and this interaction is a critical part of collaborative research.
Overall, though, I think that telecommuting, at least for researchers some of the time, is a good thing, particularly in conjunction with flexitime for those with families.
Upvotes: 2 <issue_comment>username_2: Doing it part-time is rather common, at least in places I have worked. At the moment most of my colleagues and I work from home two days of the week (usually Mondays and Fridays for most, and Thursdays and Fridays for some), on the days when no teaching is involved. If there is an important meeting we will show up; otherwise we Skype. I think I generally work from home on average 6.5 days in a 22-day work month.
Upvotes: 3 <issue_comment>username_3: >
> Are you aware of any successful implementations of "remote work" in academia?
>
>
>
Yes: quite a few people I know are (or claim to be) more productive working from home than going to campus; they only go there for teaching, when they need a lab or when they need to talk to someone.
But of course, it depends on the area where you are working; engineers will eventually need labs, test equipment etc...
Upvotes: 2 <issue_comment>username_4: While there are advantages to remote working for individuals, my feeling is that it is used too much and leads to a bad work environment. Unlike industry, academics are not evaluated as frequently or in as meaningful a way. When some people work remotely and others don't, this can lead to resentment and a feeling that they are not pulling their weight. Academic departments are often on the verge of dysfunction and generally have cliques, each representing their own interests. Extensive remote working exacerbates these problems. Problems can often be solved much more efficiently over a coffee/beer than over the telephone.
Upvotes: 3 <issue_comment>username_5: Good science starts from good definitions. You have not given your definition of `remote`, and I personally can think of at least three different scenarios:
1. Work in the office some days, from home some other days, but the office is within commuting distance. It is true that academia does not care which computer you write your papers on, especially if your work is only on a computer (pure math, theoretical sociology, maybe some computer science). Many companies allow their staff to work like that, especially if their roles are well defined and can be performed off-site.
2. Collaborate remotely: you write a paper together with somebody in another university, in another country, etc. At the extreme, you meet the person with whom you published that paper only a couple years later at a conference where you are presenting it. I think nearly every paper with more than one author is written that way... although in some disciplines, a team of 10 authors means the personnel of a single lab.
3. Work from home full time, with the nearest office being a few hours away. (My worst commute was: get up at 4am, drive 2hrs to the airport, take a flight with a connection, total 4 hrs, spend 3 hours in meetings, take an 8pm flight, drive back, and get back home at 1 am the next day. Don't want to do this very often, thank you.)
All the responses so far addressed options (1) and (2). To me, "remote" means (3): I am sitting at least a time zone away from the rest of the company (I work in the private sector). There is absolutely no freaking way this could work in academia even if your work only involves a computer. (Obviously, if you are a biologist with a lab to attend every day to look at your mice, your question simply does not make sense.) If you raise a question like that in a job interview, you can bid it farewell, pack and go: there are dozens of Ph.D.s waiting in line, and nearly each of them will take a job on any condition. (Yes, there's overproduction of Ph.D.s, which they probably did not tell you when you applied for that highly coveted degree. If there were no overproduction, there would be no point to have this website, as university managers would be hunting Ph.D.s, not Ph.D.s hunting jobs.)
While what you seem to see from your Ph.D. student perspective of academic work is research (which is of course doable across continents if needed), you will HAVE to teach, and you will HAVE to do some service (department committees, qualifying exams, colloquia, campus involvement, etc.), and then later on take graduate students whom you are supposed to pamper and educate. If you are thinking about post-doc options, then again the expectation is that you learn from your mentor, their lab and their department by being present there and working with them. You can't do this remotely.
Upvotes: 2 |
2013/02/02 | 2,061 | 9,015 | <issue_start>username_0: This question is based on some observations which could be wrong (in that case, let me know).
I am applying to PhD programs at various universities in the US in theoretical computer science (TCS). One of the things that I heard is that getting admitted in TCS at the top 15 theory universities is tough. The toughness is obviously due to the large volumes of applications that these universities receive (I don't know how many?). However, one of the big factors is the limited funding available to professors.
So who funds the students? Professors or universities? In some cases, I heard that it is the professor who funds the student. Then in that case, why is the admission process centralized (the professor who is actually funding may not be on the committee)?<issue_comment>username_1: There are several models for funding graduate students: oftentimes the professor is responsible for funding. However, in many cases, the system has "joint" sponsorship—at first the students are sponsored by the department (while they do teaching service or are taking classes, for instance). After that, they are paid for by their advisors.
The role of centralized admissions is to cut down on the cumulative workload. Especially with the ease of submitting applications electronically, if each professor were responsible for selecting his or her own students, faculty would be swamped by applications. Having a central pool makes the process simpler for everybody.
Upvotes: 3 <issue_comment>username_2: Depending on the university, funding for students can be allocated in different ways. From what I've seen in Computer Science, you can be guaranteed funding, which usually means that your tuition will be covered to some extent and you may receive a living stipend. You can receive no funding, which means that you have to pay your own tuition and your own living expenses, or you can receive partial funding which is some subset of guaranteed funding.
Some universities or departments don't admit graduate students unless they are guaranteed funding by either the department or by a professor. In these cases, sometimes the department/school might have some money set aside to fund graduate students, usually as TAs, but professors will have their own funds through grants. This allows professors more latitude in choosing graduate students that they think are promising and who share the same research interests.
Other universities/departments will admit students without guaranteed funding. Students that are admitted without guaranteed funding will need to find their own funding sources through scholarships, fellowships, or finding their own paid positions (e.g., research assistants, project assistants, or teaching assistants). If the student is unable to find any of these, footing the bill for tuition will fall directly on them. This can be very stressful and can lead to students needing to drop out because they can't find funding or a mad scramble/funded positions being very competitive.
My observation (and your mileage may vary depending on the university or department) is that if the university does not offer all graduate students guaranteed funding, they still try to limit the number of unfunded students admitted, balancing it against the number of funding opportunities that may be or become available. Departments also tend to admit slightly more students than they have positions for, in anticipation of some students choosing to go to a different university after they receive all of their admittance letters.
As to why it's centralized, what people said about uniform standards and saving on administrative costs makes sense, and there's also an argument that many universities like to keep statistics and information on how many students are applying and being admitted to each department, what their demographics are, etc.
Upvotes: 2 <issue_comment>username_3: >
> [...] then in that case, why is the admission process centralized (the professor who is actually funding may not be on the committee)?
>
>
>
This is because the top universities (you said top 15) want to maintain their high standards of admission into the graduate program and would not *generally* want to allow a candidate with rather poor qualifications on paper into the program just because someone is willing to fund them.
Remember that a top university will also most likely have a very rigorous curriculum, which the student will have to successfully complete (at least in the US) before they can advance, and this is independent of the student's research work with the faculty that is willing to fund them. So if a candidate's qualifications do not convince the committee that they are capable of advancing the program after 2 years, they will most likely not admit them because it will then be a drain on the university's resources.
That said, it is possible for such candidates to still get in, but the faculty and their references will have to make a *really strong* case for them and they must have some redeeming quality/ability elsewhere.
Upvotes: 3 <issue_comment>username_4: I'm a theoretical computer scientist currently serving on the admissions committee of a large top-15 computer science department.
>
> One of the things that I heard is that getting admitted in TCS at the top 15 theory universities is tough. The toughness is obviously due to the large volumes of applications that these universities receive (I don't know how many?).
>
>
>
This year my department received about 50 PhD applications from people whose *primary* interest is theoretical computer science and probably another 50 with *secondary* interest in theory, out of 750 PhD applications overall. We've offered admission to about 10 theory PhD students (out of about 200 PhD offers total). We realistically expect three or four theory PhD students to accept our offer (out of about 80-100 total).
>
> However, one of the big factors is the limited funding available to professors. So who funds the students? Professors or universities?
>
>
>
**It's complicated.** A typical PhD offer from a strong department includes guaranteed funding in some form. My department promises five years of funding to every incoming PhD student, assuming they make steady progress toward their degree. (Do *not* accept a PhD admission offer without funding. If they really want you, they'll pay for you.) Most of our students take 6 years to finish, but in practice, (100-ε)% of our students are funded for their entire stay. A typical theory student in my department is a TA for 2-4 semesters and an RA or fellow for the rest.
When a student is admitted, the *department* is making a contractual commitment to funding that student, assuming they make adequate progress toward their degree. In practice, most of that funding comes from individual faculty grants, most of the rest comes from the department's budget for teaching assistantships, and a small fraction comes from fellowships (university, NSF, DOE, NDSEG, Hertz, etc.).
The *number* of students that each group is allowed to admit depends primarily on three factors:
* **Advising capacity:** How many students can each faculty member in the group reasonably advise? The limiting resource here is faculty *attention*, not *money*. Theoretical computer science faculty tend to have relatively small groups, compared to some areas in CS. Steady state in my group seems to be about 3 PhD students per faculty. This is the most significant factor, in my opinion.
* **Funding capacity:** How many students can each faculty member reasonably expect to fund? This isn't just a function of the faculty's *current* grants; a typical grant lasts only 3 years, but each student needs 5-6 years of funding.
* **Teaching demand:** How many TAs does the group need to support its teaching responsibilities? Conversely, for how many semesters are students in the group expected to be TAs, as part of their PhD training? The ratio of these two numbers basically determines how many students the department is willing to pay for on its own dime.
>
> In some cases, I heard that it is the professor who funds the student. Then in that case, why is the admission process centralized (the professor who is actually funding may not be on the committee)?
>
>
>
In US computer science programs, *departments* offer admission, not individual faculty. Students are completely free to change advisors or even research areas, even if their existing advisor is funding them. (Of course, they have to fulfill their funding obligations, but that's an orthogonal issue.) Formally, students in my department do not even choose their thesis advisors until the end of their first year. (One of my recent PhDs entered the department as an RA in distributed systems/sensor networks; he switched to algorithms at the end of his first year.) For that reason, it's crucial that the admissions decision does *not* rest entirely with a single faculty member.
Upvotes: 5 [selected_answer] |
2013/02/02 | 3,953 | 16,152 | <issue_start>username_0: When writing academic papers, I am really bad at improving what I have already written. I have heard that most of the writing time should be allotted to revisions. I know a few academics who are really good at keeping on revising until they are happy, but I simply can't do it. Knowing that a sentence/paragraph/section can be improved but not being able to do so is very frustrating.
My partial self-diagnosis:
1. I refuse to make big changes, probably since it is a lot of work. (This sounds like I'm just procrastinating.)
2. If I write with collaborators (almost always), I do not want to change what they wrote or revised, unless it is obviously wrong. (This sounds like I lack confidence in my writing skills. Or I just don't want to upset my coauthors?)
3. Before rewriting, I can't even re-read properly. I don't want to re-read the paper carefully and create a current copy of it in my head. I tend to skip parts. Even after I have re-read the manuscript, it is not always clear what the current state of the paper is.
I am sure I have many weaknesses that I am failing to verbalize in this question, but I'd like to hear **what others did to train their rewriting skills**. Also, I want to hear **how you rewrite**.
FYI, I am not a native speaker of English but I have seriously written only in English. My field is science and engineering.<issue_comment>username_1: I think a more efficient way would be to start by reading and focusing on how to write in the first place, and then focus on editing your work. This approach is going to save you a lot of time.
Perhaps the fastest way is to get some professional help, which is often free in academic institutions in the form of academic writing courses. If you get the opportunity through these classes to show your writing to a linguist, you can gain a lot.
If that's not an option then some classics on writing are:
* <NAME>., & <NAME>. (1972). The elements of style. MacMillan.
* <NAME>. (2006). On writing well: The classic guide to writing nonfiction. Harper Perennial.
Perhaps then you should start focusing on editing, and I recommend these for a start:
* <NAME>. (1986). Line by line: How to edit your own writing. <NAME>.
* <NAME>. (1995). Edit yourself: A manual for everyone who works with words. WW Norton & Company.
**BUT** personally the most important guide for me was the edits/comments that I got back from my supervisors, mentors and senior collaborators over the years. I checked their edits over and over again to systematically diagnose what was wrong with my writing, and I think those edits/comments were the most helpful resource. I went as far as creating a corpus of literature relevant to my fields of research to know exactly how people write in my domains of interest in engineering and social science, but well, that's going a bit too far in the beginning.
Also, one thing that I have noticed makes a major impact on my editing is switching between edits on screen and on paper. I usually first do a round on screen. Then I print and do it offline, and then switch again! I don't know about others, but in my case I tend to focus on completely different issues when checking the screen or the printed material, and if I only do one I will miss a lot more.
---
**2014 EDIT:** One additional method for editing your work is listening to it. Text-to-speech software is really helpful here, and you will pick up issues that you might normally neglect. There is something magical about listening to your writing which is totally different from reading it. I definitely recommend trying this as well. Obviously, higher-quality text-to-speech software with more and better natural voices will enhance the experience...
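If you want to try it without hunting for dedicated software, a few lines of Python with the pyttsx3 package are enough for a first experiment; `draft.txt` below is just a placeholder for a plain-text export of your manuscript:

```python
"""Read a manuscript back paragraph by paragraph with pyttsx3
(pip install pyttsx3). 'draft.txt' stands in for a plain-text
export of your paper; listen and mark whatever sounds wrong."""
from pathlib import Path
import pyttsx3

engine = pyttsx3.init()
engine.setProperty("rate", 150)  # words per minute; slower catches more

text = Path("draft.txt").read_text(encoding="utf-8")
paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]

for i, paragraph in enumerate(paragraphs, start=1):
    print(f"--- paragraph {i} of {len(paragraphs)} ---")
    engine.say(paragraph)
    engine.runAndWait()  # blocks until this paragraph has been spoken
    input("Enter for next paragraph (Ctrl-C to stop): ")
```

The pause after each paragraph gives you time to mark whatever sounded wrong before moving on.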
Upvotes: 3 <issue_comment>username_2: I recently posted a [lengthy answer](https://academia.stackexchange.com/questions/7656/resources-on-how-to-overcome-writers-block-especially-for-non-native-english-s/7683#7683) for a similar question a few days ago, the essence of which was to separate **what** you're going to say from **how** you're going to say it.
If you've done this for a paper, you can edit it focussing on writing style alone. This is a good way of avoiding the "big changes" you mentioned in your question: You will have made all these before actually formulating the text. As a consequence, you should also know precisely *what* it is you are trying to say in each paragraph.
Iteratively refining a text can get you stuck in dead ends, e.g. if you choose a certain formulation and then can't make it sound right. One thing I often do when I get stuck with a paragraph or chunk of text that I don't know how to fix is to just **delete** it and **rewrite** it.
If you get stuck on the specific formulations themselves, i.e. you don't know *how* to re-write a certain paragraph, you could try explaining it (remember that you know *what* you want to say, but not *how* to say it) **out loud** to an imaginary listener.
Reading a paragraph out loud is a good way of forcing yourself to re-read it. It's also a great way of checking if something sounds silly or is not really comprehensible.
**Update**
If you're having trouble reading to yourself, you may want to try pairing-up with a colleague or co-author, and reading parts of the paper to each other. Granted, this may feel a bit awkward, but just look at it as editing the paper together. Working in pairs is [known](http://en.wikipedia.org/wiki/Pair_programming) to improve motivation and productivity, and will basically *force* you to concentrate on the task at hand.
If you have problems concentrating in general, I can give you a few tips from my own experience:
* Break down your editing into short bursts of at most an hour, and focus only on a part of the paper, e.g. the abstract, a figure, or any specific section.
* The first few hours in the morning are the most productive for me. Try to find out where your own "best time" is.
* Coffee. In certain cases, the caffeine can help you focus.
Upvotes: 6 [selected_answer]<issue_comment>username_3: I think your question is a VERY good one and one that impacts a lot of people. Because of that I upvoted it. However, the answer, which I think you at least have an idea of, is "suck it up and get to work."
Writing well takes a lot of time. Writing well includes planning, writing, reviewing, revising, reviewing, revising (ad nauseam - and I do mean nausea). If you look at great writing, you'll see it's often written by great writers. You should not think that they get it right their first time. When I (I consider myself an OK writer, not a great one) write for publication I usually write the piece and then end up editing it 10 times. In the end I spend much, much more time in the editing processing than in the initial writing process.
Writing takes time. Writing well takes more time. If you want to write well, you need to be willing to push through the discomfort and keep working on it. That said, don't try to do it all at once. Edit several times spread across several days (or weeks, if you have the time) - a fresh mind helps.
Upvotes: 3 <issue_comment>username_4: My simple strategy is as follows:
1. Draft your paper. Make it as complete as you can.
2. Give yourself at least a 12-hour period in which you don't look at the paper, no matter how tempting it is.
3. Use a read-back program that can read the paper back to you out loud. Mac has a free built-in program.
4. Listen and note which sections need reworking.
If you don't like what you hear, it's likely to be poorly written.
5. Revise as if you are writing the paper for someone with little knowledge of your field.
The best advice I have got while writing my PhD dissertation was to focus on the arguments in the drafting stage and on the details in the revision stage.
Upvotes: 3 <issue_comment>username_5: I would recommend using some **version control system** (like [SVN](http://en.wikipedia.org/wiki/Apache_Subversion) or [git](http://en.wikipedia.org/wiki/Git_%28software%29)) for your paper. These tools give you the freedom to change whatever you want in your paper while having all the history recorded. You can practice any kind of change, and still keep the ability to revert to older versions. Even better, you can merge *good* elements from old versions into newer versions. By reviewing your history and seeing what you changed, you can learn what types of mistakes you tend to make, and you can work to avoid them in the future.
Note that some popular note-taking tools, such as Evernote or Simplenote, also keep track of previous versions of your notes, although it's more primitive than Git or SVN.
`DropBox` provides a (terribly) simplified notion of version control. The advantage, however, is that it works "out of the box" - no learning curve or fancy tools.
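To make the git option concrete: the day-to-day loop is tiny. The sketch below just drives the git command line from Python; the `paper/` directory and the commit messages are placeholders, and typing the same two git commands in a terminal works just as well:

```python
"""Snapshot the current state of a manuscript with git.
Assumes git is installed and 'paper/' is already a repository
(run `git init` there once); both are placeholders."""
import subprocess
from datetime import datetime

def git(*args: str, check: bool = True) -> None:
    subprocess.run(["git", *args], cwd="paper", check=check)

def snapshot(message: str = "") -> None:
    git("add", "-A")  # stage every change, new figures included
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    # check=False: committing when nothing changed is harmless, not an error.
    git("commit", "-m", message or f"snapshot {stamp}", check=False)

if __name__ == "__main__":
    snapshot("rewrote the introduction")
    # Later: `git log --oneline` lists the snapshots, and
    # `git diff HEAD~1` shows what the last rewrite changed.
```

Once the history is there, comparing any two versions of the paper is a single diff away.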
Upvotes: 3 <issue_comment>username_6: One great way I have found to improve my writing is to reread it side by side with someone not directly involved in the project (significant other, friend, ...).
This person reads it while you are there and can directly ask you questions about what you meant, or tell you that this or that is not clear. You can also propose better phrasings together, wondering whether such a formulation is clearer.
Supervisor comments are often great, but supervisors most probably won't have time for such a detailed discussion of your text. Having a "live" discussion is so much more insightful about how your text is received. If you do it with someone you have a good relationship with, it can even be fun and motivating!
Of course, it is a great burden to put on someone else (it can take hours), so you need to be ready to return the favour or find another way to make up for it.
Upvotes: 2 <issue_comment>username_7: I have a lot of problems with my writing (although I am a native English speaker), and was made to do a writing course by my PhD panel.
I did this course, [Writing in the Sciences](https://www.coursera.org/course/sciwrite), on Coursera, and I can thoroughly recommend it. It took a few hours of my time over around 8 weeks, but you could easily pick and choose which bits you wanted to do if you wish. There are assignments to complete, which can lead to a certificate if that is something that interests you, but you don't have to do them. I actually did all the assignments, as it helped me practice writing and editing my own work. I also got to practice editing other participants' work. Although the instructor is from the medical sciences field (as are most of her examples), everything she teaches is applicable to other sciences. I am from the atmospheric sciences field myself.
I learnt some really good tips on how to go about the writing process itself (i.e. how much time to spend on each step in the process), but the majority of the course is about how to edit writing (either your own or someone else's) to make it more exciting and interesting to read. Many of the tips given already are included in the course.
Upvotes: 1 <issue_comment>username_8: When you need to rework bits of, or the whole, manuscript it might be a good idea to **alienate** yourself from the manuscript. One thing I noticed is that it's difficult to look at a piece of text I have worked on for weeks, with fresh eyes. Hard to be creative in formulating things when you are stuck in a particular thought pattern.
It does wonders to put that aside and go deal with something else entirely. If you can forget the existence of the manuscript, even better. When you take a look at it again some days after, you might be able to see awkward formulations, hanging sentences or unclear paragraphs. During an academic writing class I took a couple of years ago they had this fantastic quote, **"*the author is dead!*"**
While the [full context of the quote](https://en.wikipedia.org/wiki/Death_of_the_Author) is somewhat deeper and besides the point here, it should suffice to say that seen from the eyes of the reader, the author is long dead and gone... You, as the author, need to be aware of that and at least try to objectify yourself from the text in order to be able to see it as a reader might, and see the potential weaknesses.
---
On a different note, another thing that might be a good exercise in revising or reformulating scientific text is to attempt to rewrite some existing text. For instance, take a published article (it could be yours or someone else's) and rewrite/summarise/revise it. Then submit both texts to a plagiarism-checking tool (there are some free ones online), to see if you can minimise the similarities between the texts while keeping the message as intact as possible.
Upvotes: 2 <issue_comment>username_9: 1. Focus first of all on the *content*: your *message* -- the points you want to get across. Do *not* focus on the language -- the way you communicate the message.
This is the most important guideline, IMO. Clear ideas will lead you to clear organization and clear language. Unclear ideas will not lead you anywhere useful. Do not start by worrying about the language.
2. Especially if English is not your first language, write short, simple sentences. Very short. Very simple. Later, if appropriate, you can always combine them.
3. The secret to *writing* is...**reading**. To start with, reread and rewrite your *notes* about the message, before trying to write the text that conveys the message (see #1).
After you've written your message, read, reread, and rereread what you've written. Each time you read it, you will naturally improve (rewrite) it. When you read it, try to erase any knowledge of it beforehand - read it like your intended audience would read it.
4. Repeat #3. Repeat it again. Reread to write better. (It will also help you read better.)
All of the above apply to **editing**, as well as to writing. If you are editing the work of someone else, then you are interested, first, in understanding that writer's *message*. If the message is not clear then forget about improving the wording and provide the feedback that you do not understand what the message is (and perhaps help by pointing to language that confuses you).
If the message is clear to you, then move on to how it is conveyed. If you read carefully it will be clear to you what is not as clear as it should be, what might be missing, and what might be extra (unnecessary). You will naturally discover problems of order of presentation. Just read and reread, carefully, and you will be fine.
Upvotes: 2 <issue_comment>username_10: I also used to resist making big changes. At some point, I realized that this resistance was coming from a fear of having to delete sentences that my brain had already created and become emotionally attached to as my precious pieces of writing. (Even if objectively they aren't such good sentences anyway!) Subconsciously, I would think: if I delete them now, who knows if I will ever come up with the same idea of how to put something into words?
The strategy that helped me is really a psychological strategy to address that fear:
**Don't delete anything permanently.** Cut a sentence that does not fit in your current text and paste it into your "sentences bank". I have a Notion page for this purpose called "My writing snippets", which collects all those phrases, sentences and even whole paragraphs that haven't made it into any manuscript yet. But it can just as well be a `.txt` file on your desktop. This way you don't dread permanently deleting your prose. And, with your next writing project, or when you're stuck in your writing, you can look over your sentences bank to see if there is anything you can utilize. I've observed that even if I never use a sentence again, saving it reassures my brain that it's *still out there, just in case I need it*.
This helped me move on with applying the most impactful changes, e.g. ones that improve the flow of the story, and require moving or removing many sentences or whole paragraphs.
In addition, I do recommend removing stuff from the sentences bank once it has been applied in a manuscript to avoid self-plagiarism.
Upvotes: 2 |
2013/02/02 | 531 | 2,274 | <issue_start>username_0: I have some papers to review and I am wondering whether I should do an ***in-depth inspection*** of the whole paper's format. I see nothing major, but there might be some tiny formatting errors here and there.
* > Does the organizing committee *expect* reviewers to check submissions against the conference format?
<issue_comment>username_1: I would say the responsibility of the reviewer is to judge the content of the paper, so checking whether the journal's or conference's format has been followed is not part of that job. Very large and obvious deviations can be pointed out, but spotting that, for example, a caption is set in font size 11 instead of 10 is not part of a reviewer's task.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Reviewers check content. Copy editors, prior to publishing, check format and adherence to publication style guides.
Your Mileage May Vary based on the expectations of your committee. If you have a question about what you are supposed to be checking, don't be afraid to ask.
Upvotes: 2 <issue_comment>username_3: In general I agree with the other two answers, but somehow every paper I have ever received reviews for has drawn some sort of formatting-related comment from at least one of the reviewers. These have ranged from suggestions for changing the font of formulas or captions, to spotting that the footnote font should be size 9 instead of 10 for that journal, etc.
I think there is some sort of unwritten code: if you find something, you don't neglect it, and you ask for a correction. But in general that's not your job, and you don't actively seek such errors; you just "catch" them.
Upvotes: 3 <issue_comment>username_4: As username_2 comments, your job is to focus on the content. However, when poor formatting clearly affects your ability to understand the scientific content, it should be commented upon. For instance, when somebody writes "x^2", but really means "x (Ref. 2)", that's a problem that should be commented on (because a copy editor might *not* catch that!). Similarly, if the way a graph is formatted makes it difficult to interpret (labels or legends too small to read, or garishly presented), then it behooves the reviewer to mention it.
Upvotes: 1 |
2013/02/03 | 2,146 | 9,471 | <issue_start>username_0: Many journals sometimes publish [Festschriften](http://en.wikipedia.org/wiki/Festschrift), i.e. special issues in honor of a distinguished (but still alive) researcher in their field. Articles for such issues are typically invited articles by other groups in the same field, and current and former collaborators of the researcher honored.
A colleague told me that such articles are peer-reviewed, much in the same way regular papers are. I somewhat suspect that this might not be entirely true, and that the standards used by the editor (and even reviewers, if they are aware of the context) are lower for these special issue invited papers.
A quote on Wikipedia (from one Endel Tulving) seems to agree with me:
>
> a Festschrift frequently enough also serves as a convenient place in which those who are invited to contribute find a permanent resting place for their otherwise unpublishable or at least difficult-to-publish papers
>
>
>
So, my questions are: **in your experience, are reviewers given a hint by the editor that the paper they review is intended for a special issue? And are the review process and editorial decisions typically as strict as they would be for a regular paper?**<issue_comment>username_1: In two such instances that I have been involved in since November (invited papers for special issues, though not Festschriften), no indications were given to the reviewers and strict double-blind procedures were followed. I would go as far as to say we had even stricter procedures, because of the notion that special issues are not of the same quality, to the point that I found it frustrating.
I suspect this is entirely dependent on the editor, and that practices vary significantly depending on the journal's editor, the special issue editor, and the relation between the two.
Upvotes: 2 <issue_comment>username_2: I think there is a difference in the peer-review process, but not so much on the reviewer end. In general, when I review a manuscript, I do not consider the target journal. I attempt to point out the good and the bad and leave it to the editor to decide what to do with my reviews. The difference then arises with what the editor does. While the topic of a manuscript might not fit in with the journal in general, hence making it difficult to publish, it might be a perfect fit for a special issue. Similarly, the manuscript might not have as much data as typically required for the journal, making its scope narrower, but it still might fit fine in the special issue (especially given the time constraints).
Upvotes: 2 <issue_comment>username_3: In my experience in mathematics, papers submitted to a Festschrift are held to the same standard as any other papers as far as correctness and novelty go, but there is definitely some flexibility regarding importance. The Festschrift is often considered a good place for articles that would be of particular interest to the person being honored, because they build on this person's work or involve topics close to their heart, even if the papers are not particularly important in absolute terms. Referees know the paper is submitted to the Festschrift, and I think this vision of which papers are appropriate is broadly shared among authors, editors, and referees. An embarrassing or inappropriate paper would be rejected, but for example a minor observation related to the honoree's work could be accepted.
It's hard to say how this compares with typical journals, since there's a range from low-end journals that will publish anything arguably new and correct to high-end journals that regularly reject excellent papers because they aren't quite wonderful enough. A Festschrift will never match the very most prestigious journals (there simply aren't enough thematically-appropriate papers at that level to fill it up), but it can be comparable to a middle-of-the-road journal or occasionally better.
As in <NAME>'s comment, a large majority of the Festschrifts I've seen are monographs, rather than journal issues. When they are special issues of a journal, it's generally not a particularly prestigious journal. (However, it can happen: the Duke Mathematical Journal published a Festschrift for Nash.) My interpretation is that prestigious journals generally don't want to publish Festschrifts because they know the papers won't all meet the highest standards of importance.
Upvotes: 4 <issue_comment>username_4: I think the difference is whether the Festschrift is its own book (then the quality solely depends on the editors) or a special issue of a renowned peer-reviewed journal (which would have a reputation to lose with a bad special issue). I'd also look at what is the title and what is the subtitle: the impression given by "Festschrift for Big Guy" is entirely different from "Scientific Subject" with the subtitle "dedicated to Big Guy" or "collected in honor of Big Guy's scientific work".
In my field, special issues of peer-reviewed journals covering conferences or concentrating on a certain subject are common. The idea behind the conference issues is to ensure the normal peer-review process, because conference proceedings have a bad reputation of involving no, or no real, review once the contribution has been accepted (often on the basis of the abstract alone), to the point that many people do not submit conference proceedings because they see them as a complete waste of time.
The indications I have are that the peer review process is up to the usual standard of the journal.
If you look at [this table of contents](http://www.sciencedirect.com/science/journal/09242031/38/1-2) of a special issue dedicated to Prof. Mantsch, you'll see that the special issue consists of original research; there isn't even a review article about the historical development of the field in that issue (there is an editorial, though). Also, it is primarily a conference special issue, and this conference (of a regular series) also carried the dedication (which was most apparent in the conference dinner speech, not in the scientific sessions).
Our paper had a two-line dedication before the abstract, and besides that it was a normal original research paper that underwent peer review. I have not seen in my field a special issue that was primarily dedicated to someone, rather than primarily thematic with a dedication.
Regarding thematic special issues:
* I don't know whether the reviewers know that the paper is intended for the special issue; none of the reviews I have done indicated anything of the sort.
* In my experience not only the normal quality but also the normal subject criteria apply.
* Sometimes, the invitation takes place only after the peer-review process is over: after the acceptance of a paper, we were asked whether we'd like it to be published in an upcoming special issue where it fit thematically.
* Sometimes, special issues are not filled by invitation; instead, the fact that a thematic special issue is planned is circulated.
Upvotes: 3 [selected_answer]<issue_comment>username_5: I have reviewed a few articles for special issues. At some journals, you know that there's a difference, because you're recruited as a reviewer by the "special editor" for that issue.
With respect to the *standards* used, I would say that there *should* be no difference between the two. In practice, however, some allowances might be made for special issues that would not apply under normal circumstances, and awareness of this is fairly widespread in academic circles. These tend not to be the "super groundbreaking" papers, but often "current progress" or "latest but perhaps not greatest" work out of the labs submitting them.
Upvotes: 2 <issue_comment>username_6: I would argue that this may differ quite a lot between individual journals and, I venture to guess, between journals of different status. A basic "rule" for a journal (upheld by its editors) is to make the journal as good as possible, to attract good and high-impact papers. Having an issue that is sub-par is therefore not favourable, so each editorial board will impose restrictions on such "festschrift"s. The journal where I am Editor-in-Chief had a tradition of such "schrift"s, but we decided to stop accepting such themes. We do run thematic issues with guest editors, but in all cases the papers and their reviewing are transparent to the Editors-in-Chief, which means we can intervene to ensure a fair review process and uphold the quality we strive for. In my opinion, the "festschrift" is generally not looked upon favourably, since it signals that there may be dodgy reviewing or just buddy-reviewing involved; most journals probably steer clear of such issues for this reason. In the end, local traditions will determine whether such "schrift"s are produced. So the bottom line is that editors will certainly be cautious about such journal issues. The review process may not be any different, and is in fact sometimes stricter, while in some cases anything could go through; it is this uncertainty which, on the whole, makes the "festschrift" concept unattractive to any journal that tries to uphold a good reputation.
So my (probably unsatisfactory) answer is: yes and no, it will vary quite substantially between journals. Such differences should not exist but they do and it is difficult to know or judge in each individual case.
Upvotes: 2 |
2013/02/03 | 803 | 3,602 | <issue_start>username_0: **Why are a majority of jobs in academia offered on a fixed term basis?**
I have noticed that most teaching or research positions have a contract term (e.g. a 3-year contract). Some contracts may be renewed, subject to additional funding, while others simply end.
I am wondering if there is an **academic reason** for this system (akin to the system being designed to compel the incumbent to continually publish in order to remain in a position).<issue_comment>username_1: This is a product of various reasons that are academic, economic, legal and institutional:
In many jurisdictions, it's easier and cheaper to remove staff at the end of a fixed contract: it can be very very expensive to remove staff on a permanent contract.
Enough able people are willing to work on fixed contracts that universities don't need to offer permanent contracts for every post.
Track record, CV and references are not enough to tell how good someone really is, nor how productive they'll be in your team; that needs an extended probation, which a fixed-term contract effectively functions as.
Funding tends to come in bursts, with no guarantee of follow-up funding; so while it can be possible to ensure a post can be funded for 6 months or 5 years, at the end of the funding, there may not be the money to fund that post. On a permanent contract, the resulting severance can be very expensive for the university. The fixed contract gives clarity to both employer and employee.
Productivity changes over time. Some employees are more productive when they have a lot of job security; others are more productive when their future employment depends on their current performance. I'd love to see some studies on the impact on productivity of needing to repeatedly apply for funding: oddly, it seems to be one area where we academics don't take a scientific approach to analysis!
Upvotes: 3 <issue_comment>username_2: I agree that there are many positions in academia that are fixed term. However, I would question your claim that a "majority" of positions are fixed term. Presumably this varies a lot by country and other factors.
From my casual observations in Australia, some positions tend to be fixed term or casual. E.g.,
* Post docs
* Research assistants
* Positions filling teaching gaps (e.g., covering maternity leave, short-term increases in demand for subjects, or filling in while a permanent appointment is made)
* A selection of lower-level teaching positions
* Research only positions funded by external grants or contracts
while others tend to be continuing positions, most notably
* Standard faculty positions that combine both teaching and research
Standard faculty positions are often funded broadly out of revenue from teaching, even if there is an expectation that you will secure additional sources of research funding. Teaching revenue is generally more stable than research funding, which tends to be linked to particular grants of particular duration.
Continuing academic positions in Australia typically have a probation period lasting several years as one means of encouraging performance. That said, the promotional system means that there are other extrinsic rewards to continue performing well once a continuing position is acquired.
As can be seen from the earlier list, pure research and lower-tier positions tend to be fixed term. This can be because the funding is inherently uncertain, or perhaps because the employer feels that they can recruit an adequately skilled employee without incurring the additional costs associated with continuing positions.
Upvotes: 3 [selected_answer] |
2013/02/03 | 2,852 | 12,345 | <issue_start>username_0: I tried asking this question in cstheory.stackexchange.com but it was closed and it seems like this question is more appropriate here.
I am an undergraduate and an American citizen who recently applied to Computer Science PhD programs in the US. Based on conversations with friends and posts on gradcafe I am very likely to be rejected from all the programs I applied to (I haven't heard anything while others have gotten acceptances and interview requests from all the schools I have applied to). I am now trying to brainstorm ideas for what to do after my graduation in order to improve my chances of getting into a PhD program when I try to apply again next year.
One option that seems to be brought up a lot is to attend a master's program. However, unless I can get funding, or transfer credit will lower my tuition significantly (I'll have 6 or 7 graduate-level CS courses that will not count towards graduation requirements by the time I graduate), I'm not sure I can afford such a program.
Another popular option is to be a lab technician or something similar. I'm not sure if such positions are available in theoretical computer science.
Is there anything else I can do?<issue_comment>username_1: If you intend to do a PhD, then you're tackling the academic career path, so you're at the right place here :)
I am currently applying for a PhD as well, and I went through the process of applying for a master's. Here are the things I learned the hard way.
In general, there are two phases to acceptance: the university acceptance and the departmental acceptance.
1. You should avoid applying to programs without first contacting someone in the department you want to work in. So first, check the faculty members and their research interests, and contact one of them, saying that you would like to work under their supervision for your master's because you find that their research topics match your interests and expertise. That person may then make the departmental admission easier.
2. Most of us google the top-ranked universities and apply there, which lowers our chances. Try also looking at universities ranked around 100-200, or search for newly advertised openings.
3. When you contact faculty members, NEVER COPY-PASTE EMAILS! These are very easy to detect, and will get you treated as a spammer. Make sure you tailor your email to the person's research interests and put your most interesting qualifications in the body of the email (not in an attachment, because they are usually too busy to check that, unless you really impressed them through the email's body).
4. Try to target funding organizations that give out scholarships based on minority or ethnic-group status, etc.; these are easier to get accepted to than the ones open to the general public.
5. Narrow down your focus to the research area that you like most and would love to work in. If you have worked in that area before, it will make your application more distinctive.
So take care of the above points the next time you apply. As for the skills you can work on:
1. Do some research, try to publish scientific papers
2. Work on a research proposal; learning how to write a good one will help you in future applications and when contacting faculty members. Try to contact professors at your undergraduate university and join a research group, or work with one of them on a topic from which you can publish papers.
3. Take the GRE General Test; that is a must at most universities in the US.
4. Take the GRE Computer Science subject test, which is optional for most applications but makes yours stand out.
If you are not from an English-speaking country, make sure you:
1. Take the TOEFL exam; most programs require 80/120, and in the US they usually ask for 90, which should be easy for you from what I can see in your question. (Take care, as the score expires in 2 years; it has to be valid at the time of admission, not just at application time.)
2. Make sure all certificates/transcripts are translated to English (by a trusted entity)
It is usually difficult to get funding for your master's, because it mostly involves studying rather than actual work, and you're only staying for two years, thus not contributing that much to the funding organization. But keep trying: never underestimate yourself, and keep applying. And remember that even if you keep failing, you are still a long way ahead of those who never tried.
All the best
Upvotes: 3 <issue_comment>username_2: How are your grades? Grades are a large factor when it comes to Ph.D. admissions, and it may be worth taking courses that enable you to increase your grades.
Presumably, you are applying to a university abroad, but if there is a local university that performs research where you are, then you may want to work with a professor at your institution for a summer. The chances of getting this type of position may be slim, but if you do get one, it gives you some research experience that you can put on your application materials.
If neither of these are good options, then you will probably want to spend your time reviewing related work in the area and working on your research statement. Write it using principles from the [foolproof grant template](http://theprofessorisin.com/2011/07/05/dr-karens-foolproof-grant-template/). As a potential student, you might not use all of the elements (as you are limited in both experience, as well as getting about a page's worth of writing) but you should follow the structure for the "first two paragraphs".
Upvotes: 1 <issue_comment>username_3: I have applied for Fall '13 in the US and am awaiting results. Therefore, what I am writing here is a mix of my own experience and my accumulated understanding of others' (including forums/blogs).
I applied to all the colleges that I thought were good in theory (my area of interest). This list was basically influenced by:
i) the papers I read during my master's/undergrad, and
ii) the requirement that each college on the list have at least 3 potential advisors in theory.
Now the BIG problem was: am I a good fit for these colleges? Frankly, these colleges never disclose their admitted candidates' profiles, and the home pages of current PhD students mostly do not exist. So I went to forums like gradcafe, where there is a ton of data but very little useful information; still, it is worth a visit. Some people suggest emailing professors before you apply. In my case, I was advised not to contact them unless I had strong reasons. Plus, I do not expect any professor to evaluate my profile and tell me whether I am a good match.
What does the admission committee look for in a candidate? Research potential. If you have published work, it speaks for itself. Otherwise, there are typically three recommendations and the statement of purpose (SOP).
For recommendation letters, two things matter:
i) Is your recommender known in your area?
ii) How well do they know you? (your association)
Now ideally you should have done some research work with your recommender.
The SOP is important and can be seen as you recommending yourself. It also shows your proficiency in writing and in communicating with others.
GRE, TOEFL - some basic cut-offs have to be cleared. I do not have any idea about the exact cut-offs.
Funds - if you can arrange your own funding, then you cost the college nothing and would therefore be preferred.
Upvotes: 0 <issue_comment>username_4: It's difficult to *really* answer your question without actually seeing your application, but here's some general advice.
* **Remember that the admissions process is random.** — There is *nothing* you can do to absolutely guarantee admission *anywhere*. The most you can do is maximize your *expected* return.
* **Calibrate your expectations.** — Are you *really* a good candidate for a top-5 department? (Hint: Do you have a STOC/FOCS/SODA paper?) For a top-10 department? For a top-25 department? *Really?* Be respectful but brutally honest with yourself. Ask your letter-writers or other faculty mentors to be brutally honest with you as well. Listen to them.
* **Identify potential advisors.** — Every department you apply to should have *at least two* faculty, preferably more, whose **specific** research interests closely match yours. Your research statement should not only name those faculty but explain why you think they'd be a good match. Ask your references (or other mentors) for feedback and advice. Listen to them.
* **Spread your applications.** — The rule of thumb I heard when I was applying was apply for four schools where you have a reasonable chance of being admitted, one or two backup schools, and one dream school. *Do **not** let the backup schools know that they are backup schools!*
* **Get good letters.** — Your letters *must* address your **potential for research** in personal, specific, and credible terms. A letter that only describes your performance in class is worthless. Your letters must come from research faculty — not PhD students, not postdocs, not lecturers, not managers. If possible, your references should have direct experience with strong PhD students (either as a reference or as an advisor) to make direct comparisons. If possible, your references should be well-known active researchers, but this is actually less important than experience with students. Since you've taken half a dozen graduate classes, you should be in good shape here.
* **Write a good statement.** — Your research statement (or "statement of purpose" as everyone bizarrely insists on calling it) *must* address your **potential for research** in specific and credible terms. Do not start with an inspiring quotation. Do not write about how computers are changing the world. Do not write about how you've been programming since you were in the womb; nobody cares. Write about your *research*. Describe your experience. Describe your specific interests (not just "theoretical computer science"). Describe a problem that you *might* want to work on, with enough background and technical language to convince the reader you know what you're talking about *and* that you actually care. Bonus points if you correctly cite one of your potential advisor's recent papers, but don't force it.
* **Get feedback.** — Send the *final* version of your research statement to your letter-writers (or other faculty mentors) and ask for their brutally honest feedback. Give them plenty of time. Expect to get your statement back soaked in red ink. Expect different people to give you conflicting advice. Listen to all of them. Lather, revise, repeat.
* But this is all about the *form* of the application. The best way to improve the **content** of your application is **DO RESEARCH**. Get paid to do research if you can, but do research anyway if you can't. Find a mentor (at your undergrad institution?) if you can, but do research anyway if you can't. Post technical questions *and answers* to cstheory.stackexchange. Follow CS theory blogs and read the papers that they write about. Keep a copy of the most recent STOC or SODA proceedings nearby to read while you're compiling, or waiting for the laundry, or riding the bus, or whatever. Talk regularly with your letter-writers about your progress. Write, write, write.
Upvotes: 5 <issue_comment>username_5: The most important thing you can do to strengthen your application is: **get additional research experience**. PhD programs focus on research. This means that the most important criterion for admission is arguably: *likelihood of success at research*.
One of the best ways to demonstrate the likelihood that you will be successful at research is to provide evidence that *you've already been successful at research*. To do that, you need to get involved in an active research group and do some serious research. So, my recommendation is: go do some research. If you've already done some, do some more. The more successful research experience you have, the better your odds of being admitted in the future.
Beyond this, it's hard to give more specific advice without understanding why you were rejected and what were the weakest aspects of your application. Therefore, my recommendation is: contact a mentor you trust (a faculty member who is active in research at a Ph.D. program) and ask them to review your application and give you advice about how to strengthen your application.
Upvotes: 3 |
2013/02/04 | 339 | 1,444 | <issue_start>username_0: **What is a "publishable" thesis?**
I have often heard this term thrown about at conferences and even given as advice to new grad students.
From what I know, it is indeed rare for a thesis to be published entirely as a book, though one can publish papers out of the thesis.<issue_comment>username_1: In some cases it might mean that the thesis could be published as a book. However, I'd generally interpret the phrase to mean that the thesis could readily be adapted and published as one or more journal articles.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I assume that, once again, this probably depends on your field and country.
In the Netherlands, apparently, you are required to leave a large quantity of printed copies (>100) with your university. Also, an ISBN will be assigned, according to my contract. This should mean that anyone could quite easily order a copy. I don't want to know the cost of such an order, though.
Sometimes you can also find them on Google Scholar. Although I'm not sure how I will have to proceed to have mine appear there (in years), I like to read them. They usually are well written and give a very good overview of the field in a concise manner. Reading papers to achieve that kind of overview usually takes a lot longer.
This would be my answer to the title question: a well-written overview of the work done during your PhD, in relation to what is known in the field.
Upvotes: 0 |
2013/02/04 | 1,643 | 7,003 | <issue_start>username_0: I am in the process of designing a major. However, I am worried that using a self-designed major can make an application look bad. As it stands, this potential major is about 70% computer science classes, 30% psychology and neuroscience classes. As such, this leads me to the questions:
* Are these self-designed majors seen as "weird" or simply unacceptable?
* In general, are self-designed majors unattractive for graduate admissions?
* For computer science specifically, do majors matter?<issue_comment>username_1: The disadvantage of having a "made it myself" degree is that in situations in which you are being compared with your peers (i.e. graduate admissions), you are comparing apples to oranges, and the admissions committee only knows apples. When a committee sees two applicants with CS degrees, even from different universities, it can be somewhat certain that both have covered a certain number of bases. In these situations, your degree compared to a CS degree can look like 70% of a degree vs. 100%, even if you have a higher GPA (and this may read as "they have a higher GPA because they took psych classes instead of Operating Systems, Databases, Compilers, Networks, Computer Architecture, and Theory of Computation").
Admissions committees are less concerned with whether you took classes "related to your interests" than with whether or not you passed or exceeded the same thresholds as your peers. If you're worried that people won't give your transcript a good look, most won't (especially if you end up entering the workforce). Don't get a degree in anything that will take more than 30 seconds to explain.
Look at all of the people who are doing the work that you some day want to do. Look at all the professors that you might someday want to work with. What did they get their degrees in? (here's a not-so-big secret: most professors hire students who remind them of themselves)
Get in touch with professors at research universities, admissions committees, and grad students, and get their opinions. Ask "What are you looking for in an incoming student?" People will be pretty forthcoming with you. Ask your professors if they have any contacts at research universities that you could talk with. Also, your professors all got into grad school: ask them how they did it. Find the youngest ones; they'll have the best idea of what admissions are like these days.
Upvotes: 3 <issue_comment>username_2: I believe this would hurt your chances. From the point of view of the admissions committee, there's no guarantee that the 70% of CS (or 30% of psych/neuro classes) that you chose to include in your custom major covers everything you'll need in graduate school, and you may have large holes in your fundamentals that would give you a distinct disadvantage.
A much better approach would be to simply choose a standard major and fill all your electives with a concentration of psychology and neuroscience courses. This would still give you multifaceted knowledge while still providing the admissions group a way of measuring you against your peers.
Upvotes: 0 <issue_comment>username_3: If you intend to go into an interdisciplinary grad program, it may actually help your chances of being admitted. For example, my Ph.D. is in Human-Computer Interaction and Computer Engineering. My research had a heavy psychology component. A hint of neuroscience in my background would have certainly benefited me. In HCI, the combination of CS, Psychology, and Neuroscience could make you a quite attractive candidate. Importantly though, it depends if you see yourself applying to one of the truly interdisciplinary programs vs. just applying to a CS program to research HCI. There may be other interdisciplinary programs out there as well that would be interested in such a combination, though HCI seems to be a fairly perfect fit with that background.
The concern I would have is that you are still early in your academic career and your grad school plans may change by the time you are done earning this degree. In this case, a traditional major would probably be a better choice. Keep in mind there are also options for double majors and minors that are well-known degrees as opposed to a build-your-own degree.
Upvotes: 1 <issue_comment>username_4: If you're at a school where self-designed majors are fairly common, there may be records of what sorts of jobs people with self-designed majors have gotten (and whether/where they went to grad school).
Upvotes: 0 <issue_comment>username_5: >
> * Are these self-designed majors seen as "weird" or simply unacceptable?
> * In general, are self-designed majors unattractive for graduate admissions?
> * For computer science specifically, do majors matter?
>
>
>
Having a self-designed major is definitely *not* a problem for graduate admissions in computer science. We don't care what your major is; that's a stupid administrative hurdle. We only care what you've done.
On the other hand, an undergraduate transcript that does not cover the foundations of a computer science major **might be** a problem. My department commonly admits graduate students with non-CS undergraduate degrees, but if they haven't taken at least the core of a computer science degree and a few advanced CS classes, we're more likely to admit them to one of our master's programs instead of to the PhD program directly.
Your transcript will look different to different departments. The mixed major you describe might actually give you a slight advantage in departments with research programs in HCI and/or some branches of AI, or with interdisciplinary programs in (say) psychophysics or cognitive science. It might also hurt you at departments without researchers in those areas.
But the real issue, at least for PhD admissions, is whether the admissions committee is convinced that you have **strong potential for research in computer science**. At the top CS departments, what classes you've taken really a second-order concern (unless your grades are bad or there are glaring gaps). Your research potential and experience, as described in your statement and recommendation letters, are much more significant. If an applicant has a strong research record, and their research interests match our faculty, we may admit them without even looking at their transcript.
Upvotes: 3 <issue_comment>username_6: I'm just a grad student with no real insight into admissions processes, but I do believe that this wouldn't hurt you if you wanted to go into cognitive neuroscience. The reason is that neuroscience is such a multidisciplinary field that everyone eventually needs to learn something outside their field. Having that hurdle out of the way before beginning graduate studies would be seen as a plus (in my opinion), but it would be wise to describe the combination of courses a bit in your letter of motivation.
Extrapolating from this, perhaps self-designed majors are less of a problem in multidisciplinary fields.
Upvotes: 0 |
2013/02/04 | 3,176 | 12,856 | <issue_start>username_0: I want to cite something that I have learned from a Wikipedia page. However, I'm loath to cite Wikipedia because of the perception of it by my tutors, so I try to cite the original source.
What should be the correct thing to do when I'm unable to have sight of the primary source myself, or find it in a collection (for clarity, I should add that I have the details of the source - I just can't find it in collections available to me)? Should I just cite as much original information as I can, or should I defer to citing Wikipedia? I'm hesitant to do that, because a glance at the citation would suggest that I was 'too lazy' to source original material or just dig deeper.
For additional clarity - I *know* that citing Wikipedia is 'bad' etc. The emphasis is on how to cite something that has been learned via Wikipedia (as an example) but for which the original material cannot be seen or retrieved from available collections or searches.<issue_comment>username_1: First, I would **try harder to get the primary source**. Really. But, if that isn't possible (price, availability, etc.), you may have to do without. In that case, a few solutions:
* **Find another secondary source**, possibly one that is more “academically acceptable” than Wikipedia. For example, try to find a textbook on the topic that makes mention of the fact you want to source, or a review article, a book, etc.
* If not possible, what I have usually seen people do is cite the primary source anyway. That's bad, but people do it. If you write for a journal, where the reviewers might not allow a reference to Wikipedia, you might not have any other choice.
* What I would recommend, if the format and/or editor allow it, is to cite both the primary source and the secondary source, possibly indicating the relationship:
>
> <NAME>, *Journal of Failed Experiments* **10**, 1024-1028 (1971); as cited by *secondary source*
>
>
>
Upvotes: 4 <issue_comment>username_2: I don't think you can cite something to which you have no (original) source. I mean anyone can edit Wikipedia (or any other similar webpage) so that would practically be citing a random person, without any way for a third party to check up.
Luckily Wikipedia articles usually have references you can check (to see whether or not they are actually accurate and relevant) and cite accordingly. If there is no reference then you should probably not be citing (or trusting) that piece of information.
Upvotes: 3 <issue_comment>username_3: What you're referring to is an *indirect source*. In general, you should always work as hard as you can to find the original source. If that is not possible, all of the major style guides include a way to cite indirect sources. Note that you should not cite Wikipedia (see the "do not cite Wikipedia" note at the end of this answer). If an indirect citation is absolutely necessary, it should come from a reputable, peer-reviewed journal or other academically respected source.
1. According to [Purdue University](https://owl.english.purdue.edu/owl/resource/747/02/), the **MLA rule** is to name the author of the indirect source in the text and cite the work you have in-hand:
>
> For such indirect quotations, use "qtd. in" to indicate the source you actually consulted. For example:
>
>
>
> >
> > *Ravitch argues that high schools are pressured to act as "social service centers, and they don't do that well" (qtd. in Weisman 259).*
> >
> >
> >
>
>
> Note that, in most cases, a responsible researcher will attempt to find the original source...
>
>
>
[Williams College](http://library.williams.edu/citing/styles/mla.php) further clarifies that the indirect work should be included in your Works Cited list:
>
> ...include the indirect source in the Works Cited.
>
>
>
2. The **APA rule** (also from [Purdue University](http://owl.english.purdue.edu/owl/resource/560/03/)) is to *exclude* the indirect source (called the "original source", below) from your reference list and only include the work you have in-hand (called the "secondary source"):
>
> ...name the original source in your signal phrase. List the secondary source in your reference list and include the secondary source in the parentheses.
>
>
>
> >
> > Johnson argued that...(as cited in Smith, 2003, p. 102).
> >
> >
> >
>
>
> [...] Also, try to locate the original material and cite the original source.
>
>
>
3. The **Chicago rule** (once again, from [Purdue](http://owl.english.purdue.edu/owl/resource/717/03/)) is to cite the indirect source, followed by the in-hand resource:
>
> ...Chicago discourages the use of [indirect sources]. In the case that an original source is utterly unavailable, however, Chicago recommends the use of "quoted in" for the note:
>
>
>
> 7. <NAME>, *The Social Construction of What?* (Cambridge, MA: Harvard University Press, 1999), 103, quoted in <NAME>, *A New Philosophy of Society* (New York: Continuum, 2006), 2.
>
That said, ***do not cite Wikipedia*** in a formal document (unless, perhaps, you are actually writing about Wikipedia or collaborative editing techniques). I love Wikipedia, and I believe it is reasonably well-maintained and has a lot of good information. However, you have no way to verify if the information in an article is true -- or, if you do have a source to verify it, you would just cite that source. Aside from the tired, "Anyone can edit it!" complaint, two severe issues with Wikipedia as a citation source are:
* You get whatever version of an article stands at the exact moment your web browser fetches the page. No matter how hard Wikipedia's editors work, they can't stop a bad edit from reaching your web browser if it was made seconds before you fetched the page. Wikipedia doesn't undergo any kind of pre-publication review; all review is post-publication, which means you may see totally unreviewed information. (You can mitigate this by citing a specific past revision, but it still stands that a post-publication review process means that any given revision of an article could have claims that have been reviewed by absolutely no one except the author.)
* In order for a reader or reviewer to ascertain the usefulness of a source, it must have an identifiable set of authors (or, for anonymous works, at least a consistent, reasonably small set of authors). Wikipedia makes that requirement incredibly difficult to satisfy. (Again, it's *possible* to satisfy this requirement if you cite a specific revision of a page and find out what contributors wrote each part of a page, but it is still difficult since a potentially huge number of contributors have helped build that revision.) It's hard for a Wikipedia article to be *reputable* where there are no clearly identifiable authors to which a reader could attach a reputation.
Upvotes: 6 [selected_answer]<issue_comment>username_4: *Note: This is written from the perspective of a postgraduate student in applied mathematics.*
**1) Do not cite Wikipedia.**
This is not about perception or laziness but rather about your thought process as a researcher. Suppose I read about a mathematical fact that might be useful to my research. I need to verify that the fact is true and have some idea of why it is true.
By stating that the mathematical fact has been published in a reputable peer-reviewed journal or in a reputable textbook, I demonstrate that I have at least verified its authenticity, and perhaps even read the technical details behind it.
However, if I cite Wikipedia, it suggests that I accept facts off the internet without verifying them or having a technical understanding of them (Wikipedia usually doesn't go into deep technical detail). This does not bode well for my reputation as a researcher.
**2) Try to find an academically acceptable source to cite the same information from.**
Suppose I want to use an equation. Random example: the [Kullback-Leibler Divergence](http://en.wikipedia.org/wiki/Kullback-Leibler_divergence#Definition). But let's pretend there is no source or citation on Wikipedia.
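For concreteness, the definition at stake in this example (the standard textbook one, for discrete probability distributions P and Q) is:

```latex
D_{\mathrm{KL}}(P \parallel Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}
```

Any information theory text stating this formula (often under its synonym, *relative entropy*) is a suitable citation target; the search strategy below is about locating exactly such a source.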
What I will do is to search directly for "Kullback-Leibler Divergence" using search engines like Google, Google Scholar or Google Books. I will also try to search for the term in my university or local library's search tool.
Assuming this fails, I would then look at the topics that the Kullback-Leibler Divergence belongs to or is related to. For this specific example, I would look for textbooks or materials on "Information Theory" and look up "Kullback-Leibler Divergence" in their index or table of contents. If this fails, I will dig deeper: think about what this equation does and search for similar topics. For this example, it compares two probability distributions, so I will then look for ways to compare two distributions in Information Theory.
Once I find a paper or textbook discussing it, it shouldn't be too difficult to locate the source or pick a suitable paper/textbook to cite the equation from. If, after all this searching and perhaps asking my supervisor/professor, I cannot find anything acceptable to cite, I would ask myself these questions: Is this equation valid? Why should I believe in its authenticity?
Upvotes: 3 <issue_comment>username_5: In addition to many good points made in other answers: while I agree that Wiki is not an acceptable *final* "authority/source" for nearly anything, the better articles do give external references that can put one onto the right track for more primary sources, as well as giving internal links via other keywords... As to how-to-cite, I have gotten more and more into the habit of at least footnoting that I *found* a reference (to a primary source) via Wiki.
Yes, of course, in one's primary "specialty", one should have better pointers to the "official" literature than Wiki, but with regard to necessary but peripheral topics for one's work, often Wiki can provide hints, which can then be *verified* afterward, after one has become aware of them.
So I use Wiki to begin to get a grip on keywords and vague ideas in things unfamiliar to me, to get started. Also, sometimes historical pointers are more readily accessible there, and then subsequently verifiable on MathSciNet, *after* one knows what to look for.
I note that "peer-reviewed" stuff should also be viewed skeptically/critically, especially with regard to recognition of prior art, and also simple correctness, since except for significant results, often referees are encouraged to *not* worry about certifying correctness, but more "novelty" and "interest". And history and prior art are often either omitted due to disinterest or ignorance, or pushed out by editing-down considerations, so that papers often do not give an effect look "backward".
Finally, in the spirit of giving credit where credit is due: when I find Wiki useful, I don't pretend that I didn't! Not that I view it as authoritative, either. A new category of information, perhaps.
Upvotes: 2 <issue_comment>username_6: Note that if the Wikipedia article doesn't have the original source for a claim, then that section of the article is a work in progress that is below the Wikipedia standard. It requires a `{{citation-needed}}`, otherwise it could be "original research" which properly doesn't belong there.
It's probably a bad idea, in your academic paper, to quote Wikipedia material which the Wikipedia itself disavows!
There should never be a need to cite the Wikipedia, since anything credible is supposed to have references to the outside. In serious work, you always borrow the citations from the Wikipedia, not the text. Citing from the Wikipedia itself is good for cafeteria arguments.
When you make any kind of citation, you are basically expressing trust in the author. This is because you are not reproducing all of the research, such as experiments. You trust that the data haven't been falsified and so forth. There is some safeguard in that the paper appears in some trustworthy publication, and that it has been peer reviewed.
Suppose that the Wikipedia is actually the only source for some paper. Firstly, that situation is wrong and blatantly violates the Wikipedia's rules about original research, so the page will probably be deleted. Secondly, the Wikipedia isn't a journal that reviews and publishes material, so you would have to take that paper completely at face value. The Wikipedia cannot lend any credibility to anything, according to its defined mission and scope.
Upvotes: 2 <issue_comment>username_7: If you are considering citing Wikipedia, *don't*, as others have explained. But IMHO you should at least acknowledge it as being helpful, either with just such a mention or with a list of "non-cites" that could serve as starting points for others wanting to review Wikipedia's (latest) views on your topics.
Upvotes: -1 |
2013/02/04 | 1,360 | 5,466 | <issue_start>username_0: In many fields of academia, a professor must get grants to fund his research (e.g., medicine, biology).
Early in an academic career, a scientist can have naive expectations about how things work in academia and may later be surprised by the reality. One such surprise is the compete-and-collaborate paradox.
Later in one's career, it may not be so simple to collaborate and share your ideas fully: twice a year (or so) we all submit grants, and we suddenly become less the friendly colleagues who share ideas and more competitors, individually or between "groups". For example, we don't let anyone see our full grant submissions (e.g., for NIH medical grants, the full text must be requested under the Freedom of Information Act; only abstracts are on the web).
* How do you handle, in everyday life, at conferences, and in hallway conversations, this paradox of collaborating and competing at the same time in academia?
* How do you determine what to share?
* Do you avoid colleagues who are known to 'tell only the minimum' at congresses and then surprise later with an accepted grant?
Philosophically, it is impossible to collaborate and compete at the same time, and one has to have some ethical structure, but everyone's boundaries seem to be different!<issue_comment>username_1: >
> How do you handle, in everyday life, at conferences, and in hallway conversations, this paradox of collaborating and competing at the same time in academia?
>
>
>
I ignore it, except around known jerks. I'm lucky enough to work in a research community that generally values collaboration over back-stabbing. There are a few exceptions, of course, but they fall under the category of "known jerks". I'd much rather gain a coauthor and get the result out together than to keep secrets and risk being scooped.
Yes, I have developed coauthors this way. Yes, I have published papers this way that might not have been published otherwise. Yes, I have been scooped, but only by people I had *not* discussed my ideas with.
**Your mileage may vary.**
>
> How do you determine what to share?
>
>
>
I don't share ideas or problems that students (either mine or not) are actively working on, without the students' explicit permission. Otherwise, I'm open about everything, except around known jerks. In particular, if you want my latest grant proposal, just ask.
>
> Do you avoid colleagues who are known to 'tell only the minimum' at congresses and then surprise later with an accepted grant?
>
>
>
No. Why should I?
Upvotes: 5 <issue_comment>username_2: >
> How do you handle, in everyday life, at conferences, and in hallway conversations, this paradox of collaborating and competing at the same time in academia?
>
>
>
It is very difficult indeed. Motivations for pursuing a career in academia vary significantly, and accordingly, so does what is considered ethically acceptable.
From experience, it is often the issue of the *man in the middle* which in practice is *the* major source of frustration. Individual A talks to Individual B about his ongoing (unpublished) research. Individual B then more or less forgets where he got this information, and speaks to Individual C, who implements it, unsuspectingly. Everyone behaves ethically at his own level, but globally, Individual C can effectively be perceived to *compete* aggressively with Individual A.
A solution that some people seem to adopt at conferences in my field is to only present/discuss *published* material, which makes attending conferences less interesting, as it only involves outdated research. Another sub-optimal approach is merchandizing, i.e. presenting one's research at a superficial, advertising level, so that the actual issues/sticking points are effectively not discussed.
On the other hand, research thrives on honestly confronting different perspectives on a given, typically complex, problem, so there is a lot to be gained from collaborative behaviour.
Modern research is also fairly specialized, and conferences are the one place where you are likely to meet experts in your field who have given some thought to the problems you are interested in.
In the end, everyone has to balance these things out.
My advice would be to err on the cautious side, but then again I tend not to follow my own advice. Another approach is to make sure you are so much on top of things that it does not matter :-)
>
> Do you avoid colleagues who are known to 'tell only the minimum' at congresses and then surprise later with an accepted grant?
>
>
>
Well, life is short, so interact preferentially with colleagues whose motivations for doing research seem to overlap most with your own.
**UPDATE**
Striking a balance between collaboration and (unrestrained) competition is not specific to research/academia, in fact. It is the basis of civilization! What is a bit specific to academia is that it is (poorly, IMHO) self-regulated. There is no such thing as academic police/justice. [I found this RSA Animate to be instructive](https://www.youtube.com/watch?v=XBmJay_qdNc) for getting a sense of how a small amount of policing is enough to keep the system working.
Another point worth mentioning is that predatory behaviour is in fact not that common, if only because people are too busy with their own trains of thought, and also because it takes time for new ideas to percolate. Understanding why a given idea is novel typically requires having spent some time thinking along similar lines.
Upvotes: 4 |
2013/02/04 | 671 | 2,565 | <issue_start>username_0: I am working on a project in which I have a direct supervisor in addition to the head professor of the lab. The direct supervisor only agrees to being listed first or last on the article we are writing. Needless to say, my professor won't agree to be anywhere but last. He also feels that I deserve to be listed first. Is it possible to write both of them in the last place as co-last authors?<issue_comment>username_1: >
> Is it possible to write both of them in the last place as co-last authors?
>
>
>
**No**.
As expected, an author list is not a [tree](http://en.wikipedia.org/wiki/Tree_%28data_structure%29) or a [weighted graph](http://en.wikipedia.org/wiki/Graph_%28mathematics%29), but a simple flat (one-dimensional) list.
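A tongue-in-cheek sketch of that point, with placeholder names (this only illustrates the flat structure; it is not how any journal actually processes authorship):

```python
# An author list is a flat, ordered sequence: nothing more.
authors = ["Student", "Supervisor A", "Supervisor B"]  # placeholder names

print("first author:", authors[0])    # exactly one first author
print("last author:", authors[-1])    # and exactly one last author
```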
There is exactly one last author. Possible solutions or mitigation of the issue include:
* **Alphabetical author list**. This happens in some fields, and is totally unheard of in some others (including chemistry and biology, which is your field, so this might not be a possibility).
* Having **two contact authors**, or having the professor who is not last author be the contact author. In the past I have used this as a way to “pacify” a co-author who wasn't happy with his spot on the author list. (Needless to say, it's a perversion of the system, and should only be done if the author can actually act as contact author.)
* Have a **statement indicating the contributions of each author** (“X and Y contributed to this work equally”). Some journals require such statements, some will refuse to include them, so your mileage may vary. **I doubt this will pacify your reluctant supervisor**, though: people who are worried about their rank in the author list are most probably thinking about how it looks on a publication list or CV.
* **Have the head professor take responsibility for the final decision** (as senior professor and project instigator). That's the most sound solution, but it does not mean it's an easy one.
Good luck with the negotiations! And remember that they're not yours to handle (see my last point)!
Upvotes: 5 [selected_answer]<issue_comment>username_2: This happens in biology quite frequently. Take a look at this [example](http://www.jlr.org/content/early/2007/06/25/jlr.M700077-JLR200.full.pdf):
>
> § Both authors contributed equally to this work.
>
>
>
In this case it can be any two authors on the list. If the notes are on the first two or the last two authors, those two are often viewed as the primary and equal contributors.
Upvotes: 2 |
2013/02/05 | 1,110 | 4,534 | <issue_start>username_0: I just started a PhD and I was looking for some guidance on what I could expect in the following years; more specifically: what is the recommended way to progress, how should I allocate my research hours and other responsibilities, and, finally, when and how should I start writing my PhD thesis?
Is there any book written on how to conduct yourself during the course of a PhD? What would be your overall recommendations?<issue_comment>username_1: Have a look at this excellent memoir by a recent CS PhD: [The Grind by <NAME>](http://pgbovine.net/PhD-memoir.htm)
Although written from a CS perspective, many of his experiences transcend disciplines.
Upvotes: 4 [selected_answer]<issue_comment>username_2: There is a lot written on the 'PhD journey', but there are some things I learnt along the way that took me across the void (so to speak!).
I am listing them in no particular order:
1. Be true to yourself and your supervisor. Keep your end of the bargain. Meet deadlines. Keep your supervisor in the loop (even on trivial matters - what seems trivial from your point of view may not be to your supervisor). Respect him or her. Of course you can have friendly arguments. Follow his or her instructions/suggestions/advice closely.
2. If you don't know, ask. You can ask your supervisor or email other scholars. My dissertation benefited from the input of several prominent thinkers in the field. I simply emailed them and asked for assistance. There is no shame in asking. It is a learning process.
3. Celebrate your big and little achievements. When you finish writing a difficult chapter, give yourself a treat. Set small goals - you cannot finish your dissertation in a day but you can draft a section of your chapter in a day.
4. Learn and try to become an expert in your field. After graduation, you would be expected to have advanced knowledge in your field. Be genuinely interested in what you are doing. Think of new ways of addressing the issues. Discuss your approaches with your fellow PhD students. They are often your first audience. Have a network of support.
5. Most importantly, recognise that there will be some good days and some bad days. Make the most of both. On bad days, give yourself a break. I think the literature says that most PhD students start enthusiastically, lose interest in the middle years, and then gain momentum again.
My overall recommendation is to never lose sight of your goal.
Upvotes: 2 <issue_comment>username_3: >
> what is the recommended way to progress
>
>
>
Steadily. Make small progress every day.
>
> how should I allocate my research hours and other responsibilities
>
>
>
Consistently.
>
> when and how should I start writing my PhD thesis.
>
>
>
**Now**, and in LaTeX. Write down everything you read, everything you do, everything you prove, everything you try that doesn't work, every crazy stupid idea you have. Write, write, write. Always in LaTeX.
Most importantly: **It's *your* PhD. *You* have to hunt it down and kill it.**
Upvotes: 3 <issue_comment>username_4: The way I see it (I use Excel to track my work, but other people may use MS Project Gantt charts or just a paper checklist; whatever works for you):
1. Set up an Excel mini-sheet with everything you need to learn to deliver your final thesis (from literature-review methodologies to coding and statistics with NVivo or SPSS), via video courses or books, whatever fits your schedule. You can split each day you work on your PhD in half: one half learning, the other half thesis and research.
2. Set up another Excel mini-sheet with everything you need to learn to advance your post-PhD career in your specific domain: certifications, continuous learning, and so on. Pure research jobs are hard to find (we have high requirements in my country); they require at least two good papers indexed in top journals and at least 3-5 years of experience in a lab or similar research environment.
3. If you don't know, just ask. If you need a paper, you can ask the author; if you're stuck, you can ask senior staff in the lab, or your fellow PhD students.
4. To keep the momentum, help the community: help new PhD students, and join a SIG on current open problems and help them along the way. Also schedule some time off with friends, family, and hobbies.
5. Draft a timeline and either print it and stick it on your wall or keep it in a folder. Each time you reach a milestone in your thesis, cross it off; visualising your objective on a timeline helps you keep focus and discipline.
Upvotes: 0 |
2013/02/05 | 830 | 3,489 | <issue_start>username_0: In my experience, academics are almost always expected to contribute some of their time to activities beyond their principal teaching or research roles. These extra tasks include, for example, attending open days, visit days, serving on one or more committees, acting as head of a student year group, admissions officer - the list is long.
Whether this should be the case is not in question here.
In one of my previous institutions, there was in place a "brownie-point" system which was supposed to keep a track of how much extra administrative/organisational/outreach or otherwise "extra-mile" work an academic had taken on. When a new task required action, the academics could use their accumulated points to argue why they shouldn't (or indeed should, in some cases) be allocated the task. Setting the value of a task relative to all the others, as you might imagine, raised some difficulties.
My question is: has anyone experience of any other kind of formalised system of evaluating and allocating these "extra-mile" tasks?<issue_comment>username_1: At my university, we have this system for PhD students.
I am a PhD student. In my contract, I have "up to 20% department duties". When I get assigned tasks not related to my research — teaching, administrative tasks, presenting our institute to visitors, etc. — I write down the hours. At the end of the semester, I report to my boss how many hours I have worked on such duties. Then, a corresponding fraction of my salary is funded from a different pot of money. At the end of my PhD, this means I will have an equivalent amount of time extra to finish my research before my contract finishes.
I think the system is quite fair, although some tasks — most notably teaching — aren't credited with the actual time spent, but with an amount set by a certain formula. So in practice, I *do* lose research time by teaching, because teaching takes more time than the formula accounts for, certainly the first time around. However, it's still much better than a system with no accounting at all, which I understand to be common elsewhere in the world.
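To make the arithmetic concrete, here is a back-of-the-envelope sketch of how such a scheme plays out; every number below is invented for illustration, not taken from any real contract:

```python
# Hypothetical illustration of hours-based duty accounting.
HOURS_PER_MONTH = 160                       # assumed full-time working month

semester_logs = [(960, 120), (960, 200)]    # (total hours, duty hours), made up

duty_total = 0
for i, (total, duties) in enumerate(semester_logs, start=1):
    fraction = duties / total
    print(f"Semester {i}: {fraction:.0%} of salary from the duties pot")
    duty_total += duties

print(f"Time credited back at the end: ~{duty_total / HOURS_PER_MONTH:.1f} months")
```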
Upvotes: 2 <issue_comment>username_2: This is similar to the answer from @username_1 but after the comment from @F'x I thought I would write it up separately.
I would think the best way would simply be to track how many hours are spent on each of these tasks by each person. If task A takes person A 2 hours and task B takes person B 4 hours, then it seems clear how much effort people put in. One does need to watch out for padding whenever people log hours for any purpose, but that's management's job.
I used to do this in industry and while I would have preferred to avoid it, I didn't find it maddening. It simply added about 10 minutes to each day to log everything I did and who should be charged for it. It would be VERY maddening if task A was allocated 2 hours and task B was allocated 2 hours just because of some formula (like the teaching example from @username_1).
Another issue is that person A might be able to perform task B in 2 hours, in which case, person A should do it. Quantifying other than by number of actual hours invested is only going to lead to resentment and ill-feelings within the department (whether you're talking about academia or industry). As far as knowing who is better at something, that's also management's job. I say this after being in management for more than a decade before moving to academia.
Upvotes: 1 |
2013/02/05 | 1,962 | 7,752 | <issue_start>username_0: Although this stackexchange seems to be a little hostile towards metrics (especially when they are about research productivity), it is still sometimes fun to indulge in a little bit of arbitrary measurement and quantification. Sometimes it can help you set targets, or let you know what is possible. In this case I am curious about blogs.
Having a [web-presence is important](https://academia.stackexchange.com/q/616/66), but **how do you know if your academic blog is doing a good job?**
From [my own experience](http://egtheory.wordpress.com/2013/02/05/first-5/), I have noticed that my blog gets a lot more readership and mention than any of my papers. I usually find this encouraging, and at times it helps me increase productivity by incorporating blogging into my research workflow and feeling like I am able to communicate with people before having complete results. Sometimes I even receive feedback (although my blog has not reached the level of having regular commenters, and is nowhere close to the comment activity I see on popular blogs that I follow).
However, getting more mention than my papers is not a fair standard. In fact, I have no standard by which to decide if I am doing an alright job blogging, and what I should aim for to improve the ability of my blog to engage other researchers or interested readers. Having some hard data is also useful for converting people new to blogging to the online community.
**Are there any statistics on typical readership, posting rates, and commenting frequency for small (non-superstar) academic blogs?** I would be especially interested in statistics that are broken down by area, since I expect a nutrition or cancer blog to inherently get more readership than one dedicated to Stone-duality. Of particular interest to me would be information about blogs in theoretical computer science and/or mathematical modeling.<issue_comment>username_1: I don't know if it is the answer you are looking for, but I would be cautious with looking at blog views (or even likes/tweets):
* First, they may be superficial. You don't even know if someone actually read the post (maybe (s)he arrived just because of a sexy title, a nice picture, or misleading keywords).
* Second, they are a measure of popularity, not necessarily quality, with many mechanisms that make growth exponential (e.g., the snowball effect).
Personally, I often look at stats of page visits of my various sites... but I cannot make much sense of it. But what I find important is:
* How often can I send someone a link to my post, saving me the time of explaining something once again?
* How much do I learn from readers, or how many new contacts do I make through it?
* Do I hear feedback, especially from strangers or people I don't know very well?
Moreover, I can then compare blogs to regular articles on this ground. Still, it's apples to oranges... but now they are quantized fruits. :)
Upvotes: 2 <issue_comment>username_2: I can't say much about statistics, as I have not come across any, but in my experience blog readerships are usually small and consist mostly of specialists in your area. However, I view a blog as free advertising for my research, as blogs are generally ranked more highly than academic papers by Google and the like. I also view a blog as an ideal platform for putting your research into layman's terms. From my blogs I have had an out-of-the-blue invited talk, as well as requests for copies of papers and general queries about my work from numerous researchers whom I hadn't met.
I personally find maintaining a nice website and blog well worth the effort, and I try to spend a couple of hours a week on new content, usually concentrated around when new articles are published. Ultimately it's all about raising the profile of your research, and having more accessible material is always helpful, even for specialists.
Upvotes: 3 <issue_comment>username_3: I used to track my stats compulsively, but no longer do so. This is mainly because I get lots of readers through an RSS feed, which doesn't directly impact traffic. Sometimes I'll monitor the relative hit rate of specific posts, and I have seen dramatic jumps (for example if I do business-meeting blogging, or if I post on something controversial).
As a rule of thumb, the more technical the post, the less traffic it gets. The more buzzwordy, the more traffic. I had some thoughts on deep learning recently and that got huge traffic in comparison to some of my more technical posts.
Now, because of G+, Twitter, and blogs, my "visibility" is diluted across all three media, and while I'm sure there's some way to monitor all of them, I haven't paid that much attention. Ultimately, I blog because it's fun, and the more I get distracted by audience response, the more I find myself distorting the things I post about.
Upvotes: 3 <issue_comment>username_4: I think it's interesting to consider the value to society of blog posts relative to more traditional forms of content distribution, such as book chapters, textbooks, journal articles and so forth.
### Obtaining benchmark statistics
* RSS counts: Many blogs, particularly popular ones, show their RSS subscriber count. You can use the Explore Search feature in Google Reader to search for blogs you know. This returns the number of Google Reader subscribers. This is less than the total reader count, but it can give you a rough ballpark.
* Page views: Some blogs occasionally post their site statistics. [Alexa](http://www.alexa.com/search?q=jeromyanglim.blogspot.com&r=home_home&p=bigtop) can provide a very rough estimate of the popularity of a site.
* Comments: It is straightforward to look at other blogs to get a sense of how many comments they typically get.
### My rough rules of thumb
I've been blogging since 2008 and have kept an eye on RSS feeds and page views over time on my own blog. I've also picked up information from other blogs that I follow.
My main observations are that it takes time to produce content, get indexed by Google, obtain RSS subscribers and so on. These would be my rough benchmarks for academic blogging. In the fields that interest me (e.g., psychology, statistics, R) I can think of specific blogs that fall into one or other of these categories. This helps to inform the benchmarks. Anyway, these are just my casual rules of thumb; of course, they aren't anything definitive.
* RSS subscribers:
+ 0 to 10: Not popular
+ 10 to 100: Just getting started
+ 100 to 500: Moderate levels of popularity
+ 500 to 1000: Relatively popular
+ 1000 to 10,000: Popular Blog
+ 10,000+: Superstar blog
* Annual Page Views
+ 0 to 1,000: Not popular
+ 1,000 to 10,000: Just getting started
+ 10,000 to 50,000: Moderate levels of popularity
+ 50,000 to 300,000: Relatively popular
+ 300,000 to 2,000,000: Popular Blog
+ 2,000,000+: Superstar blog
### What does a page view mean?
It is a little difficult to know what a page view means in terms of achieving broader blogging goals. Only a proportion of page views correspond to a person reading the entirety of the page. And only a proportion of those page views have any meaningful impact on the reader. In order to get a sense of what these proportions might be, I reflect on my own browsing. For example, I might be searching to diagnose a software error, do a tutorial on something, or get a review of a product. It might take a few search results to find what I'm looking for. That said, perhaps something between 1 in 10 and 1 in 2 search results provides something useful.
In summary, even if only 1 in 20 pageviews helped someone in some meaningful way, if you're getting a hundred thousand page views per year, that's still 5,000 instances of people being helped.
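For what it's worth, a quick sketch of that back-of-the-envelope arithmetic; the help rates here are guesses, not measured figures:

```python
# Illustrative only: estimate "meaningful helps" per year from page views.
page_views_per_year = 100_000

for help_rate in (1 / 20, 1 / 10, 1 / 2):   # assumed fractions
    helped = page_views_per_year * help_rate
    print(f"help rate {help_rate:.0%}: ~{helped:,.0f} readers helped per year")
```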
Upvotes: 3 |
2013/02/06 | 1,337 | 5,802 | <issue_start>username_0: What are some guidelines and best practices for PR statements and releasing a project summary to the general public? My first tendency is to always shorten the project summary, reduce the length of sentences and use more "crisp" words. But what else?
Does anyone have any pragmatic/practical advice, and is there any sort of tool out there, aside from the likes of MS Word's readability stats, that gives you suggestions on what to change?
EDIT:
Came across this tool today which is inspired by XKCD (what is not?):
CAN YOU EXPLAIN A HARD IDEA USING ONLY THE TEN HUNDRED MOST USED WORDS? IT'S NOT VERY EASY. TYPE IN THE BOX TO TRY IT OUT:
<http://splasho.com/upgoer5/>
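For flavour, a toy sketch of the idea behind such a checker; the allowed-word list below is a tiny stand-in for the real tool's thousand-word vocabulary:

```python
# Minimal upgoer5-style checker: flag words outside an allowed vocabulary.
ALLOWED = {"the", "a", "is", "of", "to", "we", "use", "small", "things"}

def flag_unusual(text):
    words = (w.strip(".,!?;:").lower() for w in text.split())
    return [w for w in words if w and w not in ALLOWED]

print(flag_unusual("We use the spectrometer to measure small things."))
# -> ['spectrometer', 'measure']
```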
There should be something along these lines available to help with simplifying the language, for instance (perhaps with 5,000 words rather than THE TEN HUNDRED, which is very limiting).<issue_comment>username_1: It may be worth looking to see if your university has a press office. My university has one. They are happy to meet with research groups to talk about the press-release process in general. They are also happy to edit copy to make it more likely to be picked up by the press. I believe they are also willing to work one-on-one and write the actual copy with you. They also have all the contacts and know how to get press releases actually published in useful places.
Upvotes: 2 <issue_comment>username_2: More than making the text crisp and understandable, you should work to make it *relevant*. Typically your "press releases" are in the form of articles for the research community, which understands why you find your work relevant; you're advancing the field. When dealing with the general public, you can make no such assumption. You have to state very explicitly why your work is important.
If you're having a hard time with this, I've had success looking back at the grant proposal which is funding my research. In some proposals, the introduction will have some overarching, practical goal, which will be easily understood by a layperson. Couch any achievement or research breakthrough in this context, and what you did will carry much greater significance for the press (and the general public).
Upvotes: 2 <issue_comment>username_3: Talk to someone at a media relations, public relations, or press office at your university. Most universities will have folks who work in this area; they are the experts and have a ton of experience, and you should take advantage of them. They will likely be glad to help.
Here are some tips that I have learned from public relations folks:
* Identify your message. What's the takeaway lesson? Can you write it in one short sentence, in a form understandable to the average person on the street? Take a lot of time to craft this carefully. Then, your entire press release should be centered around supporting this message.
* Look for three facts or points that support the message. Numbers and statistics are very powerful.
* Stay on-message! I cannot emphasize this enough. Everything you talk about should be focused on your message. Avoid the temptation for digressions or tangents. Yes, you are a witty raconteur and can wax on enthusiastically for hours about your work, but this is not the place for it. Avoid unnecessary details; give a spare answer that provides just enough for folks to understand the message.
* Yes, I know that you and your fellow researchers are fascinated by all of the details of your experimental methodology, the alternative hypotheses you considered and rejected, the details of why your finding is correct, your calculations, and so on. Sorry, but the average person on the street doesn't care. Your top priority is to explain your bottom-line finding, why the average person should care, and *maybe* a teeny bit of something to give some intuition about why your finding is true (enough to make it sound plausible to an average person).
* Edit ruthlessly. You want as many eyes on it as possible, and ideally people who are *not* involved with your project. Lean heavily on your press office.
* In many universities, the press office will help you draft a press release. They'll talk to you informally, ask you a bunch of questions, and then work with you to write a press release. If they're available to do it, grab the opportunity; it can be very helpful.
* Brainstorm a list of about 10-20 questions that you expect reporters might ask you. Next, for each question, draft a candidate answer. Your answers should be concise (at most a few sentences) and simple; and, the chance to throw in an analogy or fact or figure can help, too. When you are talking to a reporter on the phone, have this list in front of you. This way, when they ask you a question, you can refer to the list and give your honed, crisp answer -- or at least, you have it to refer to if you need it. The reporter will never know.
* Remember that the purpose of talking to a newspaper reporter is not just to educate them about your project. It is also to supply them with as many pithy quotes as possible that they can use in their article. The more quotable you are, the more likely it is that you will be quoted. They will be listening for those great quotes. Take the opportunity to brainstorm a few short quotes in advance, and make sure to throw them into every conversation with every journalist. Read a bunch of newspaper articles in advance so you can see what kinds of statements tend to get quoted.
The public relations folks may also be able to offer media training. If you can get the chance to take a media training course, take it! This is especially important if you might be on TV, where you have to make every second count. There are some powerful but non-obvious techniques that they can teach you.
Upvotes: 4 [selected_answer] |
2013/02/06 | 612 | 2,611 | <issue_start>username_0: There are a lot of posts about peer review of papers, but what about graduate/undergraduate theses?
Is there somewhere on the vast internet where one can submit a thesis for peer review?
The reason I am asking is that I am finishing my master's in engineering and so far my advisor has not seen my thesis even once. The only "review" I got was from a friend, who found only grammatical and punctuation errors, and I did the same for her. But the fact that we know nothing about each other's research stops us from performing a quality peer review.<issue_comment>username_1: **There is a full-blown, high-quality peer-review process for a thesis**: it's called **the thesis committee**.
If you mean a service where you can get help improving your document before you submit it to your committee, that is normally one of your advisor's tasks. Maybe you can also get help from friends, colleagues, or in the most severe cases your advisor's friends, but there is not much beyond that.
The thing that may be closely related is that, in some systems (the French one at least), there is a person that is responsible for validating the PhD student's manuscript *before it gets sent to the committee*. Then, it's that person's responsibility (in theory) to do a basic check of your manuscript and your work, and decide if there is enough to gather a committee. I say “in theory”, because this person will probably get dozens of theses per year (at the very least) and can only perform the most basic checks. In practice, they most often do not check the manuscript content, but its form (does it follow the University's standards), as well as some simple indicators of your research (has the candidate published? how many times? did he attend conferences? that sort of stuff).
Upvotes: 4 <issue_comment>username_2: Your situation suggests that your relationship with your adviser has broken down. Fix it. I find that going out for coffee and just talking, not necessarily about your thesis, always helps.
You can always get editorial assistance but you want expert assistance. This is where your adviser comes in. Make it easy for him or her. Submit perhaps a chapter at a time. Then meet and have a good honest discussion.
Other than the above, search for a newly minted PhD candidate in your department, perhaps a former student of your adviser, and ask him or her to review your work. Believe me, this does wonders because of the motivation that the newly minted PhD candidate brings to the review. By newly minted PhD, I mean a person who has already completed his study.
Upvotes: 3 |
2013/02/07 | 753 | 3,155 | <issue_start>username_0: I have been reading that many students have a PhD committee. I assume this is a group of experts supervising the student.
In my case, and I think this is quite common, I had two people. One was the principal supervisor and the other was the associate supervisor.
Almost all my contacts were with the principal supervisor. The associate supervisor was a back-up resource if and when needed.
This was a simple one-on-one contact between me and my principal supervisor.
I am just wondering: how does a PhD committee work?<issue_comment>username_1: The answer to this question differs based on the country, university, faculty, department, and the particular members of the committee.
In general, universities and departments have regulations and procedures that describe in exact detail the roles and responsibilities of the PhD committee. No one here can give you anything but a general idea, which is already described in more detail in those documents. PhD committees have completely different roles in the US and the UK, for instance. In the US, they are the ones that assess you, but in the UK your examiners assess you and then report back to the committee in the department.
Upvotes: 1 <issue_comment>username_2: At least in the US, a Ph.D committee will have 4-5 members, and there are rules about the composition of the committee (there might need to be at least one person from an external institution, and at least one person from a different department, or variations thereof).
The committee's formal job is to assess the Ph.D student's dissertation proposal, determine that the work being proposed is sufficient for a dissertation, and then evaluate the final dissertation defense and decide whether to grant the student a Ph.D or not.
Informally, a Ph.D committee provides a set of resources/expertise for the student to tap into for advice, research directions and even contacts for future work (yes, there's life after a Ph.D :)). There's [prior discussion](https://academia.stackexchange.com/questions/3719/how-to-select-a-dissertation-committee-member-wisely) on how to choose your committee.
Upvotes: 5 [selected_answer]<issue_comment>username_3: I want to clarify a point about your first paragraph: at least in the US, the PhD committee does *not* usually supervise the student very much. A PhD student has an advisor, who is typically on the PhD committee, and may be its chair, who supervises the student. The committee's main role is to determine whether the thesis gives adequate grounds to grant a PhD. At some schools the committee convenes only once or twice---perhaps once to approve a plan for the thesis, and once to approve it. At others, the committee might meet once a year to consider whether the student is making adequate progress.
Regardless, at least in the US, it's unusual for the committee to have a formal role beyond that. Of course it's possible that committee members besides the advisor might be involved in supervising the student, but the causation is more likely to run the other way: because they're involved in supervising the student, they're invited to be on the committee.
Upvotes: 3 |
2013/02/07 | 905 | 3,513 | <issue_start>username_0: No textbook is perfect. So if I want to supplement my course with some material (a few pages from different references every now and then, on topics I find lacking or weak in the textbook):
* Is it legal to make copies of those few pages and give them to the students as handouts?
* Is it legal to scan them and upload them to the course site?
* Should I email the authors for their permission first? What if an author is dead?
* In case the above is a violation of copyright, what should I do instead? Ask the students to go read those parts in the library? (The students would need to keep a copy of the reading/supplementary materials.)<issue_comment>username_1: First, in most cases, the channels through which you distribute the material to your students (hardcopies, restricted-access course website) are not important. Secondly, the copyright holder for each work is (again, in most cases) the publisher, not the author. Thus, getting permission from the author is not necessary.
So, the surest way to avoid trouble is to secure the permission to reproduce the content from the publisher. Publishers should have an online page (e.g., see [here for the American Chemical Society publications](https://pubs.acs.org/page/copyright/permissions.html)) explaining how to obtain this permission. Many academic publishers nowadays rely on a centralized online service called [RightsLink](https://pubs.acs.org/page/copyright/rightslink.html), where you can directly select the material you want to reproduce and the conditions in which you will use it:
*(Screenshot: the RightsLink permission-request form, where you select the content and the intended reuse.)*
The tool then tells you whether you can get permission to reproduce at no charge (you usually can if you want to reproduce only small parts, e.g. a few figures) or whether you would need to pay.
Finally, under US law you may qualify for a [*fair use* right to reproduce parts of a copyrighted content](https://web.archive.org/web/20210605143659/https://www.copyright.gov/fls/fl102.html).
Upvotes: 4 [selected_answer]<issue_comment>username_2: For Germany, [§53 of UrhG](http://www.gesetze-im-internet.de/urhg/__53.html) actually allows quite a lot (compared to other countries' fair-use policies). The deal is that flat fees on copying machines, scanners, printers, etc. (as well as on paper) are collected and redistributed to authors.
I find §53 **slightly ambiguous for the university teacher**:
Paragraph (3) says roughly:
>
> It is allowed to make copies of small parts of works, of small works, or of single articles that are published or made publicly available in newspapers or journals for personal purposes
>
>
> 1. to illustrate in teaching at schools, non-commercial facilities for education and advanced training as well as in facilities for professional training in the numbers required for the course participants.
> 2. for state exams or exams in schools, universities, non-commercial facilities for education and advanced training as well as in facilities for professional training in the numbers required.
>
>
>
So the "universities" are missing in 1. Usually, I'd say they are covered by those other categories, but they are explicitly listed in 2.
However, for sure the **students are allowed to make a copy**: (2) 1. runs:
>
> (2) It is allowed to make or have made single copies of a work
>
>
> 1. for personal scientific use, if and as far as copying is needed for this purpose and does not follow commercial purposes.
>
>
>
Upvotes: 2 |
2013/02/07 | 623 | 2,639 | <issue_start>username_0: I noticed that some conferences have two deadlines for paper submission: an "abstract" submission deadline before the usual "paper" submission deadline.
For example, on the [International Semantic Web Conference 2013 webpage](http://iswc2013.semanticweb.org/content/call-research-papers) you can read:
>
> **Submission dates**
>
> Abstracts: May 1, 2013
>
> Full Paper Submission: May 10, 2013
>
>
>
Why do they need the abstract before the paper?
To estimate how many papers they'll get?<issue_comment>username_1: From what I have observed, having a specific deadline for abstracts serves two main purposes: **getting a rough idea of the number of submissions** and **organising a bidding process for the reviewers**.
Knowing the number of submissions can help in deciding on a possible deadline extension, and possibly in "recruiting" more PC members or reviewers if the number largely exceeds expectations.
Organising a bidding process based on the abstracts allows the PC members to indicate their preferences for each paper (e.g., I want to review this paper, I could review this paper, I couldn't review this paper), so that when the actual papers arrive, the distribution is already organised.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In my field (chemistry/spectroscopy/chemometrics), the abstract decides whether you'll get an oral presentation or a poster (total rejection is extremely rare).
The paper submission deadline is usually after the conference.
Once submitted, the paper undergoes [normal peer review](https://academia.stackexchange.com/a/7749/725) for the journal it is submitted to, which doesn't have anything to do with the presentation at the conference. The only connection is that the conference organisers have arranged with the journal editors to collect papers on topics presented at the conference in a special issue of the journal.
So the paper deadline is needed by the journal editors to make people submit in time, so that the special issue will be ready by the specified date.
Upvotes: 4 <issue_comment>username_3: An unspoken reason is to enable conference presenters to prepare their abstract months before their paper or poster is ready, while they are still doing their research. Some large conferences, such as the American Geophysical Union, request abstracts months in advance of final submission. What's a struggling (or highly distinguished--both find themselves in the same predicament) researcher to do? Write something, and hope that by the time of the conference, the research meets or exceeds the statement of the abstract.
Upvotes: 2 |
2013/02/07 | 1,880 | 7,864 | <issue_start>username_0: I just started a research position, coming from industry. I am supposed to work on an ongoing project and branch it out in a new direction.
There is one member of the research group that did quite a lot of work on what I am supposed to modify. So I asked him if he could share his work and code with me. He told me that it is still unpublished work and there is no way he is going to give me his code. He said this is the way he does research.
I'm sort of stumped and don't know what to do. He told me that I should go and redo on my own what he has done over about a year. My supervisor agreed to my suggestion that I should work with this person, but when I told him that the person wouldn't share the code with me, he just laughed nervously and didn't say anything.
Is there a way that I can persuade this person to collaborate with me, or am I banging my head against a wall?
---
So just to clarify. My supervisor is (one of) the project leads. I first talked to the supervisor to suggest the collaboration, then talked to the person, who rejected it, then to the supervisor again. This project has been going on for a year. There are about 5 people working on it in this lab. I joined the lab to extend the work done here and to contribute to the final stages of the project. To my surprise, there is no shared code repository; rather, each person does their own thing and discusses it in meetings.
I told the person that I will not steal his code. He replied that he doesn't share the code because I will not understand it. I told him that looking at the code helps me understand the work. He said no.
So my plan is to read the draft papers again and try to understand the work that way, then try again in a few days. I don't want to re-implement the same thing he has done...
---
I was given access today to the research group server and I could view everyone's work (around 15 people) and all project material... except his directory and implementation, which are permission-denied. I talked to him again, and clearly he is afraid that I will steal his work, and that he might have to put one more name on his paper if I find something interesting, thereby diluting his achievements. He kept telling me that this is his work, that he is the first author on the paper, and that I should do something else or re-implement the whole code on my own.<issue_comment>username_1: >
> My supervisor agreed to my suggestion that I should work with this person,
>
>
>
Why did you suggest someone who is not willing to collaborate with you?
Since you took the initiative and suggested his name, I think it's clear that **your mate is not motivated for your project**, so not giving you the code is expected behavior.
Your mate is either
* **Part of the project team**. In this case, his role should be very clear. Is he supposed to supply the code? If yes, raise it with the supervisor and ask for help. If supplying the code is part of his project contribution, then the supervisor should play his role here and ask the student to do so.
* **Not participating in the project**. In this case, he'd be doing you a favor if he supplied the code. You should do the implementation yourself, but make sure not to include him in the project later on! Or **try to convince him that *it is beneficial to him*** to supply the code (i.e., co-authorship on the resulting paper).
Either way, it is the supervisor's responsibility to scope and assign work to students in team projects.
Upvotes: 4 <issue_comment>username_2: Talk to the other person. Yes, talk, not email. Find out what his or her concern is. Perhaps the concern is that you may just use his or her work and not give credit. Assure the person that this would not be the case. This is the right and ethical thing to do. Be prepared to put this in writing if it could save a year of your life.
I think it is important to acknowledge that there may be other people who could also assist you. Ask. This is part of the learning process.
If all fails, be pragmatic and modify your project scope if you can, in consultation with your adviser. There is no point wasting your time waiting in the hope that circumstances may change.
Upvotes: 2 <issue_comment>username_3: This attitude is very common in academia, as the academic environment is often highly competitive. That said, I've never seen someone do that *within a team*. I agree with [username_1](https://academia.stackexchange.com/a/7851/56594) that if this person is indeed on your team, you will likely have to raise this issue with your supervisor.
Still, there's likely a reason why he's unwilling to share, and if you can find the reason for that you may be able to convince him to be more of a team player. Is he worried that giving you access will hurt his publication chances? You can work with your supervisor to convince him that he will still get authorship even if the code is shared. Is he afraid you'll ruin the code? Suggest using some sort of versioning to keep track of changes. Is he just being a jerk about it? If so, then it just comes down to [username_1's answer](https://academia.stackexchange.com/a/7851/56594), and you'll have to hope your boss has enough of a backbone to help you out.
Upvotes: 4 <issue_comment>username_4: As you said you sit next to him, I would suggest being patient with him and waiting a few days...
Start your research work as you are supposed to, and in the meanwhile try to be helpful and well-behaved towards him. If you need some small help during your research, consult him, and he will answer you... I am quite sure that after some days of being helpful, behaving well towards him, and asking and sharing some knowledge with him, his attitude towards you will change!
I would avoid consulting your supervisor about this again and again, because you will eventually bother him and spoil your impression. Handle things by yourself. Be cooperative. Be patient for a few days, and there is a chance to save one year's effort.
Good luck.
Upvotes: 1 <issue_comment>username_5: At this point I am a broken record: this is one of the situations I found myself in. None of my colleagues have been forthcoming with techniques, code, documentation--nothing. One of the team members insists that he does not document code because it should be evident how his code works by reading it. This is patent nonsense--he has forgotten what his code does or else does not want to say. He absolutely refuses to provide a conceptual overview of his system--even the postdocs complained that he wastes their time with the minutiae of command-line options and stories about the old country instead of describing the main algorithms and the configuration necessary to get his model to work. I have been forced to reproduce or rewrite code. It turned out, to the PI's surprise, that my code was better, but I must say I intensely disliked being in this situation. The other comments suggest being optimistic in the face of intransigence. I myself decided (details are scattered around this site) to get out, for several reasons:
* my colleagues were not forthcoming and preferred that I duplicate their work.
* in the one case where I managed to persuade my teammates to share some work they did, they were gratuitously patronizing as they grudgingly handed it over, although it was completely obvious they should simply have shared the work
* the work was essentially unpublishable and of low academic value
* I was misled about my role within the research group
* the pay was abysmal
* it was pointless to continue working for little money without being included in any of the group's publications. I might as well work in industry for more money and no publications.
The first applies in your case--be prepared not to receive any cooperation from your team members.
Upvotes: 3 |
2013/02/08 | 1,396 | 6,111 | <issue_start>username_0: I'm the kind of graduate student that finds many research topics interesting and wants to participate in lots of student organization activities related to science and academia. But recently, one of my professors warned me against "doing too much" beyond my research focus, both in terms of publications and in terms of extra-curricular activities. As I see it, your goal as an academic is to develop a "specialty", so it is important to focus on one narrow topic and pass over opportunities to research other interesting, but unrelated, topics. But can research outside of your particular focus in graduate school really negatively affect your ability to get hired into a post-doc or tenure-track position in the future? How can "doing more" reflect negatively on oneself?<issue_comment>username_1: The issue is fundamentally that of "categorization": people want to have a box to put you in. "Dr. X is an expert in field Y." Early on, if you're all over the map, people don't have a clear sense of what your focus really is. That makes it harder for them to feel that you're going to be focused on *their* needs in your next position. Instead, the worry is that you'll continue to be all over the map.
This is also a problem for young faculty: they need to have a broad enough profile that they aren't trapped in a particular "niche," but not so broad a profile that they don't have depth in any one specific field. If someone can't be recognized as "the expert in her field," where 'her field' is somewhat arbitrary in scope, that makes for problems when it comes time for promotions and tenure cases.
Upvotes: 4 <issue_comment>username_2: It's true in an abstract sense that doing more is better than doing less, but there are psychological factors at play here.
Regarding extracurricular activities, hiring committees are unlikely to value them much, and they will come across as a distraction from research. For each activity mentioned on your CV or website, someone may read it and wonder whether you might have written another paper if you hadn't been doing this instead. It's not really fair, but you don't want people to be thinking about this.
>
> But can research outside of your particular focus in graduate school really negatively affect your ability to get hired in an post-doc or tenure track position in the future?
>
>
>
Partly it depends on how good it is. If you add a truly excellent paper to your CV, it should only help. However, research outside of your specialty or done on the side is probably less likely to be excellent, and someone who looks at just that paper may end up with a lower opinion of you than you would like.
Upvotes: 4 <issue_comment>username_3: *Read broadly, publish ~~narrowly~~ deeply.*
This is roughly what I've been telling my students. Now all of this varies greatly from area to area, but here's what I believe to be true. Having a broad background in your area might slow you down initially when trying to publish. But over the long term (your entire career), a broad base will help you more: it will let you be flexible about topics of interest, it will allow you to see connections where others might not, and it will help you place your work in a larger context.
But from your question, you appear to be referring not just to "exposure to outside topics" but "activities related to the larger enterprise of science and academia". With those activities also, you should be careful. Maybe choose one or two outside activities and devote your extracurricular efforts there. The advantage is that by focusing, you're more likely to be able to do something meaningful, and it also prevents you from frittering away time in busy work.
Upvotes: 4 <issue_comment>username_4: I'll echo what the other answers have suggested and add a little more. On the academic job market you want to be able to explain what you do in a way that people can understand in a sentence or two. Your question seems to imply that you already understand that having *a focus* is important and that excelling in it is of utmost importance.
There are two ways that work or research outside of this core/focus can hurt:
1. Peripheral work may leave you with less time to make the core/focus really shine. You may simply have fewer achievements or publications than you would have if you had focused more on your core research. The issue is not only that people reading your CV might think this. It might really be true!
2. The second issue is that this peripheral work might be seen as a signal that you are not serious about your core body of research. Do you really care about devoting your life to the field, topic, or question that you are asking someone to hire you to work on? Are you likely to leave your career for this other thing? The core of your work might be seen as less focused than it actually is if it looks like you've got all these other things going on.
This second issue is a real risk, but it's possible to deal with this. Basically, it's your job to convey to people that although your extracurricular work is there — and although it may even constitute some impressive achievements or skills — *you* don't treat this other work as seriously as you treat your research.
This often means leaving irrelevant stuff off of your CV and website — although there are limits to what you can leave out. It also means organizing your CV so it's clear that the central thrust of your research is your priority. Many people have "selected papers" on their website or other personal materials. You get to make that selection.
For example, I have written several technical books, served on several non-profits, and given hundreds of talks at (non-academic) technical conferences. I mention these things in brief and in passing at the end of my CV and on other pages on my website reserved for my non-academic work. I don't hide these achievements, as I think they speak to my skills and qualities as a researcher. But I make sure that when speaking to academic audiences, I — quite literally — place the core of my academic work first.
Upvotes: 3 |
2013/02/09 | 906 | 4,165 | <issue_start>username_0: I am a physics undergrad who has worked with profs mostly in areas of quantum field theory and string theory. However, my interests have shifted slightly across these areas, and now I want to pursue a PhD in pure mathematics, perhaps in algebraic geometry or topology. Is it OK if I apply to a math grad school with recommendation letters from physics profs, or would this diminish my chances of getting selected? Should a recommendation letter be given by a prof working in the same area as the one you want to apply to?<issue_comment>username_1: **No**, I don't think you need to worry about it, as long as you have a good track record and demonstrable evidence of your interest in the other field. If you get good recommendation letters (not the boilerplate type) from people who know you well, can provide evidence for their recommendation, and can talk about your merits objectively, you will not be at a major disadvantage. Of course it would be great to have people who can provide recommendations in the same area, but it is common for people to change fields when they pursue higher-level degrees, so you will not be the first one facing this issue.
It would definitely not diminish your chances if you get strong recommendations putting you in the top percentiles of your program and supporting you in your decision. It also comes down to having a very good statement of purpose: explain in detail why you are interested in changing fields, paint a clear picture of your reasons, and say why you think you will be capable of doing what you want to do. I have done this personally twice, and I encourage you to pursue your interest, because to be successful in graduate school you really need to be interested in and love what you do.
My advice is to talk with your profs and explain your decision to change fields. They will most likely support you and provide justification for why you can manage (if they think you have the capacity). Good luck!
Edit: I also want to point out that I have had many friends who have jumped from Math to Physics and vice-versa (and also to CS).
Upvotes: 3 <issue_comment>username_2: I agree with username_1 that you should probably be OK, but let me sketch what some of the drawbacks are:
Ideally, an application to math grad school will have recommendations from mathematicians. The further you get away from that, the less meaningful the letters are. (For example, at least once per year I see a letter from an English professor, which is utterly unhelpful.) The basic issue is that you need recommenders who really understand what it takes to succeed in math grad school and as a mathematician. Fortunately, physics is close enough that physicists can do a pretty good job of judging this, so you should be OK. In my experience, the admissions committee will worry about two things:
One is that physicists may not appreciate certain math-specific issues. For example, the expected coursework and background. A physicist may not fully understand the extent to which someone's background is nonstandard or deficient for the math program they are applying to.
A second reason is the belief that most people's standards go down a little when making recommendations for other fields. If someone is applying to the top schools in your field, you know very well what the standards and competition are like, and you have something invested in the system and your own reputation as a recommender. In practice, recommenders from other fields seem to be a little more cavalier about making strong recommendations based on a feeling that the applicant is smart, rather than a comparison with the rest of the applicant pool. This means such recommendations will be taken with a grain of salt.
So if you have equally good prospects for letter writers from math and physics, you should choose the mathematicians for math applications. On the other hand, a physicist who knows you is still a good choice, much better than a mathematician who doesn't know you. (But a mathematician who doesn't know you is a better choice than an English professor who does.)
Upvotes: 5 [selected_answer] |
2013/02/09 | 2,699 | 11,075 | <issue_start>username_0: I'm a fairly new chemical engineering graduate student. I've been doing research for one semester so far (in molecular dynamics) and in the process have learned about journals and "impact factor" (a concept I didn't even know existed in my undergrad).
My advisor has told me that getting into Nature or Science is very difficult. I don't think anyone in the department has a paper in those journals. In fact, looking at the professors' research in my undergrad school (a top 5 engineering school), I don't see Nature papers either.
So after reading the journals themselves, I have to say that I'm kind of confused about what makes the articles published in them different from those published in something like JACS or ACS Nano. They seem more general in scope, and maybe some of them are "groundbreaking" in a sense, but for the other articles, I just can't really tell...
What kind of research would I need to do in order to successfully submit a paper to one of these journals? I've got 4 years left and have some sway in what I would like to research, so I think this would be a good goal for grad school (even if I don't reach it).<issue_comment>username_1: First, **don't obsess about it**. In chemical engineering, *Science* and *Nature* papers are rather rare, and probably even more so if you're doing theory. So, while a paper in those very high profile journals can give your career a great boost, not having one is not a career-breaker.
Now, if you want to know how to orient your research towards things that give you a greater chance of being published in such venues, my first advice would be: **do something you're excited about**, something you find challenging and want to address. If you enjoy solving the problems you work on, you'll do much better work and have a better chance of getting that shiny paper. Also, you might just be happier doing stuff you like, obviously, even if you don't publish it in *Science*.
However, it is true that some fields and subfields are over-represented in journals. This depends on journals, but very high profile journals tend to prefer:
* Hot topics. In your field, it used to be carbon nanotubes. Nowadays, I'd say “nano” is a good keyword, metal-organic frameworks are a widely published system. But… that's not entirely foolproof, because this will change and it's not certain that the choice you make right now will still be a hot topic in 4/5 years.
* Theoretical work that addresses very basic questions that are not yet fully answered: dynamics of water, the nature of the hydrophobic interaction, the Hofmeister series, that sort of stuff.
* Controversies, work that challenges common assumptions.
*Oh, and if you make it, I claim co-authorship based on the above contribution!*
Upvotes: 7 [selected_answer]<issue_comment>username_2: For *Nature*
------------
From <http://www.nature.com/authors/author_resources/how_publish.html>:
>
> The Nature journals comprise the weekly, multidisciplinary *Nature*, which publishes research of the highest influence within a discipline that will be of interest to scientists in other fields, and fifteen monthly titles, publishing papers of the highest quality and of exceptional impact.
>
>
>
Who decides if the research is "of the highest influence" or "of the highest quality and of exceptional impact"? The editors. If you want to know what they consider publishable, then you should ask them. *Nature* allows presubmission enquiries.
>
> Researchers may obtain informal feedback from editors before submitting the whole paper. This service is intended to save you time — if the editors feel it would not be suitable, you can submit the manuscript to another journal without delay. If you wish to use the presubmission enquiry service, please use the online system of the journal of your choice to send a paragraph explaining the importance of your paper, as well as the abstract or summary paragraph with its associated citation list so the editors may judge the paper in relation to other related work. The editors will quickly either invite you to submit the whole manuscript (which does not mean any commitment to publication), or will say that it is not suitable for the journal.
>
>
>
For *Science*
-------------
From <http://www.sciencemag.org/site/feature/contribinfo/prep/gen_info.xhtml>:
>
> *Science* seeks to publish those papers that are most influential in their fields or across fields and that will significantly advance scientific understanding. Selected papers should present novel and broadly important data, syntheses, or concepts. They should merit the recognition by the scientific community and general public provided by publication in Science, beyond that provided by specialty journals.
>
>
>
In addition,
>
> In certain cases, reviewers are satisfied that a paper's conclusions are adequately supported by the data presented, but the general interest of the findings is not sufficient to justify publication in *Science*. [...] Conversely, some papers provide provocative new concepts, but are not thought to be sufficiently persuasive to be appropriate for a general-interest journal like *Science*.
>
>
>
---
That said, I do not think a person should do research with the goal of having a paper published in a certain journal. A person should do research with the goal of advancing knowledge.
Upvotes: 4 <issue_comment>username_3: To be publishable in *Science* or *Nature*, your subject needs to be interesting for a broad audience, i.e. it needs to be sexy. It also helps if you write more speculatively, and thus the rate of papers that turn out not to be correct is quite high. So, although publishing in *Nature* is good for your career, it might very well not be your most scientific work that ends up in *Nature*, but rather the most popular-sounding. So write a paper on how you intend to solve the climate problem using nanotechnology, and you'll be certain to get published ;).
Upvotes: 3 <issue_comment>username_4: As somebody who is working in essentially the same field as you—with many more years of experience—I can assure you that it is indeed very difficult to get a paper on molecular simulations published in a journal like *Nature* or *Science*. Usually it requires some sort of accompanying experimental effort, and generally needs to fit the focus of the journal.
It should also be pointed out that journals like *Science* and *Nature* are both heavily slanted toward biological sciences: of the 30 editors for *Science*, only about five work in physical science areas. *Nature* is slightly more balanced, with about a 3:2 split between biological and physical science. (But then, remember "physical science" means "anything not biology," and extrapolate how thin the coverage really is!)
So, my advice is: don't worry about *trying* to get published in *Science* or *Nature*. Instead, focus on doing the highest-quality research work you can, and then submit it to the most appropriate journals for the particular area you're working in. (Talk with your advisor about how to figure this out.)
Upvotes: 4 <issue_comment>username_5: This is an old thread, but amazingly no one really answered the question.
Yes, Science and Nature are difficult for Physical Sciences, being slanted toward biological sciences as they are. But there's a more general question of "how do I get my paper into a high-profile journal?".
The trivial answer is: "well, do high-profile research!". But the problem is, high-profile research changes with the times, and there's no guarantee that your paper will be in line with the particular trends of research when you go to publish it.
So I'd say that the question is more "how do I write up, arrange, or plan my research to maximize the chances it will end up in a high-profile journal?". This is an easier question to answer.
1. Design your research with the questions/hypotheses in mind. [This excellent article by <NAME>](http://www.ee.ucr.edu/~rlake/Whitesides_writing_res_paper.pdf) covers how to design a good publication outline. The key to this is that the outline is most valuable long before the paper is published. It allows you to avoid experiments that *don't* fit into the paradigm you're trying to explain in the paper, and to think about the implications of data as soon as possible.
2. Theoretical implications are valued far beyond just experimental results in high impact papers. An important result is one thing, but an important result *that changes an existing hypothesis in the field* is valued much more. Hence, when you write your research outline, you should consider how your hypothesis approaches other theories/hypotheses in the field. If there's a convenient place to do work that more closely addresses a broader hypothesis in the field, do it.
3. Along with (2), a title that relates your work to the rest of the field, rather than to the individual topic at hand, is much more interesting to editors. "New material x does y" is a perfectly serviceable title, but "New material x demonstrates theory y is wrong/right/needs to be revised" is much more interesting.
Upvotes: 4 <issue_comment>username_6: I think that to some extent this is a matter of chance. To get published in such journals your work needs to be both scientifically important and interesting to a wider audience. In many scientific fields people rarely have occasion to do both. Chances increase when you work in a large multidisciplinary team, so that there is everything in your collaborative work: experiment, theory, and applications. I think <NAME> provided [excellent advice](http://www.cs.virginia.edu/~robins/YouAndYourResearch.html): think of what is really important and do things that are really important. Then maybe you will be able to seize the chance of publishing in such journals when it arises.
Upvotes: 1 <issue_comment>username_7: One not-yet-mentioned strong predictor of publishability in high-ranking journals is whether you (or your co-authors) have already published there. This is called the chaperone effect, and it was described in [Sekara et al. 2018](https://www.pnas.org/content/115/50/12603). From a [blog post](https://www.sciencemag.org/careers/2018/12/yes-it-getting-harder-publish-prestigious-journals-if-you-haven-t-already) covering that paper:
>
> So-called “chaperoned” researchers who first publish in these journals
> as nonsenior authors have a leg up when it comes to publishing in
> these journals as principal investigators (PIs), the study found—and
> the trend has gotten stronger in recent years. In Nature, for example,
> the share of papers authored by chaperoned senior authors grew from
> 16% to 22% between 1990 and 2012, while new senior authors dropped
> from 39% to 31%.
>
>
>
It is not clear what causes this phenomenon, whether it is due to editorial biases or inherited skills in study design and writing (or a combination). As a practical conclusion, however, if you want to aim for a Nature/Science paper, the best approach is to find a coauthor who has already published there.
Upvotes: 2 |
2013/02/09 | 774 | 3,263 | <issue_start>username_0: When I read research papers I often come across many things that I'm unclear about and would like to talk over with someone. My advisor is not available to do this with me as she does not have time. I'm not sure with whom I should discuss these research papers in order to help me understand them better. I am the only student who is currently being advised by my advisor. How should I go about finding people to talk through these things with, so I can better understand the research papers I'm reading?<issue_comment>username_1: Your advisor is not the only person to go to for answers to the questions that research papers are raising for you. Talk to other researchers, in your department, or online with peers at other universities.
But it does sound like you are getting insufficient advisory support. Do you really just have the one advisor? Time to build up your supervisory team.
Talk to your advisor about what's expected of you, and what's expected of them. It sounds like you've got a mismatch between your need and their resource, and it's important to get that fixed as soon as possible.
You'll also find a lot of good, relevant advice on these questions: [skimming a paper](https://academia.stackexchange.com/q/670/96) and [running a reading group](https://academia.stackexchange.com/q/797/96).
Upvotes: 5 [selected_answer]<issue_comment>username_2: There are many things not clear in your question. For example, is the paper something that your advisor has asked you to read? Or is it just something that you're browsing for your own edification? Did the advisor say that she cannot or will not help, or that she's busy?
I can understand an advisor finding it difficult to spare the time to explain papers that maybe even she hasn't read. But if it's something related to your work with her, then I'd expect her to help a little more. You have to realize though that just because someone is your advisor, it doesn't mean that they know more than you about every single topic :) - in fact, part of your evolution as a student will be to get to the point where your advisor asks you for help!
But I think the general answer is as EnergyNumbers indicates: find other students in your department to discuss these papers with. That's really the best way. Also, realize that working through a difficult paper, on your own or with others, is the best way to learn new material. It's a normal part of the training process.
Upvotes: 3 <issue_comment>username_3: I'd suggest that you form a reading group! (As also suggested in passing by [EnergyNumbers](https://academia.stackexchange.com/users/96/energynumbers).)
What helped me out early during my PhD was to create a series of reading groups around literatures I wanted to learn. A model I often followed was to organize a weekly meeting to read 1 book or 3-5 papers with 2-5 other students. We'd usually meet for 2 hours or so. I found other students in my cohort/program and others in the university who had similar interests. Ask around! If the papers you are reading are the kinds of things that are likely to be on your general or qualifying exams, chances are pretty good that others around you will have to be reading them as well!
Upvotes: 3 |
2013/02/09 | 909 | 3,742 | <issue_start>username_0: This question expands a bit on my previous one [Acknowledging funding](https://academia.stackexchange.com/questions/2685/acknowledging-funding).
I am unsure how to write proper acknowledgments at the end of a research paper. The wording that I read most often is *Author XY is [partially] supported by...*, which does not correctly describe my situation unless one stretches the meaning of these words.
In more detail, here are the two sources with which I have trouble:
* a foundation that used to give me a postdoc grant. I did part of the work of this paper while living off this grant, which expired six months ago, and part of it under my new employer. How should I write this? If I write *is supported by*, I give the impression that I am currently being paid in full by the foundation; if I write *partially supported* I give the impression that I just got a smaller grant.
* a research institute that paid my travel expenses for a conference. While this funding is not directly related to the paper, I met my co-author there and had a chance to discuss its state. And, besides that, it is basically my only occasion to acknowledge this grant. How should I write this? Is *partially supported* the correct wording, or is there a better expression?<issue_comment>username_1: "if I write partially supported I give the impression that I just got a smaller grant."
Not necessarily. It can mean different things: for example, that one of the co-authors was funded by them, or a situation like yours, in which they funded part of the work earlier.
I am not sure if I would mention the research institute that provided the travel expenses. If you feel like you want to do it, it's a different story. Then I would write that the "collaboration between the co-authors would not have been possible without the financial support from research institute blah blah blah" or "The co-authors would like to acknowledge the support of research institute blah blah which resulted in blah blah blah".
Upvotes: 2 <issue_comment>username_2: First: check if the institution who provided the funding requires a specific sentence for acknowledgment. Some do! Otherwise, read on…
Many large research projects nowadays are supported by more than one funding source, especially if you include the institutions you are affiliated with (although, being already listed as affiliations, they need to be acknowledged specifically). Saying *“partially supported by”* does not have any negative implication to me, and saying *“is supported by”* does not imply that this is the only support received.
But if you want to avoid this particular phrasing, it is also common to simply say:
>
> **Acknowledgements**
>
> We thank the John Smith Institute for funding, along with the William and Melinda Bates Foundation for post-doctoral fellowship (to F. P.).
>
>
>
The second part of the sentence makes it clear what the support was, while the first part is more ambiguous. Frankly, noöne cares! That is, except for the funding agencies: all they want is their name mentioned for their statistics, but they probably don't care how it is written.
Upvotes: 3 <issue_comment>username_3: >
> If I write *is supported by*, I give the impression that I am currently being paid
>
>
>
"Work by this author **was** partially supported by the Hitchcock Institute and by the Norman and Norma Bates Foundation."
>
> if I write *partially supported* I give the impression that I just got a smaller grant
>
>
>
Nonsense. Unless the institute was your **sole** source of income and equipment during the **entire** research and writing process, "partially supported" is correct. Also, it's standard idiomatic language; nobody will think twice about it.
Upvotes: 5 [selected_answer] |
2013/02/09 | 3,525 | 14,021 | <issue_start>username_0: I'm at a North American, state-run university (not an elite institution), which is relatively modern in terms of supporting mobile devices. The school is proud of its WiFi coverage in every classroom, and recently rooms were updated to have electrical outlets at every student's seat (even in auditoriums), so they can charge their devices. All courses are three-hour periods, once a week.
This mobile-friendly environment is great when integrating mobile technologies in class. I make use of [Mentimeter](https://www.mentimeter.com/) (real-time quizzes), the students can follow the PDF lecture notes and/or electronic copies of texts, take notes on their laptops, look things up on the web when I ask questions or when they do exercises, etc.
However, it makes for a challenge during courses when students use these devices in distracting ways. Definitions of "misuse" are situational, but for this question, I'll call it any use that detracts from a healthy learning environment. Concrete examples include watching a video or playing a game on a laptop (distracting neighboring students), texting, using Facebook rather than working an exercise in group, etc.
Laptops and smart phones in class are not new; in the past I was able to deal with their "misuse" relatively easily. A student would be easily embarrassed and close his laptop if you called him out when you saw that 5 students around him were all looking at his screen and smiling. I ask a lot of questions during my classes, so I could "pick on" students who caused disruptions with their mobile device (or otherwise). Drawing attention to one student in large groups is an effective way to change behavior, usually.
However, last semester was more difficult than ever, with 40+ students in an newly-electrified auditorium. During one class interruptions occurred 3 times, and I had to talk to offending students during the break about it. During the mid-term, I had one student argue with me at the start because he wanted to keep charging his iPhone at his desk in front of him. He insisted he wasn't going to use it during the exam to cheat, but I cited the policy barring mobile devices during exams and mentioned he'd have to explain that to a discipline committee - he complied. Needless to say, those "correctional" situations don't win points for the professor in the course evaluation. On the other hand, I learned that if I don't intervene (early in my career I would ignore these behaviors), students who feel distracted will complain during evaluations (most are too shy to say something during the semester).
The solution, it seems, is to add yet another item to my already lengthy syllabus and explain the desired behavior during the first lecture. My school has no official policy as far as I know regarding mobile device use, apart from an IT security policy that doesn't address the distraction issues. I did some searching on the web and found that some schools have policies, e.g. [McGill](https://secureweb.mcgill.ca/secretariat/sites/mcgill.ca.secretariat/files/Mobile-Computing-Commun-devices-MC2-guidelines-11June2010.pdf). What is not clear is how effective the policies are in large classrooms, how easy they are to enforce without becoming the "text police", how well they work in non-elite universities, etc.
I contrast all this with the fact that my students almost never misuse the "phone" part of their devices (they don't ring during lectures and they don't talk on them except at the breaks). None of that is in my course plan and I never have to explain it at the start of the semester or take action.
So, how to reduce misuse of mobile devices in large classes while maintaining a friendly atmosphere?
**EDIT** see this [article](http://www.academia.edu/870812/_I_Get_Distracted_By_Their_Being_Distracted_The_Etiquette_of_In-Class_Texting) for the students' perspective of the problem, and why it's not like other classic forms of distraction. In the conclusion, they state it could be useful for institutions to define policies about proper behavior in the classroom regarding mobile devices.
<issue_comment>username_1: There is nothing new in what you describe. As a student who oft got bored during lectures, I have done some distracting things during them, even though I didn't have access to any electronic device at the time (I'm young enough that they existed, but it wasn't accepted practice to bring them to class). We did crossword puzzles (collaboratively), we played [*cadavre exquis*](http://en.wikipedia.org/wiki/Exquisite_corpse), we read newspapers and discussed them, we flirted, we worked on other stuff, … **What makes you think the situation you are experiencing is specific to mobile devices?** The cause of the distractions is different from other situations, but I think the issue is the same and the solutions are, too.
So, what I would suggest is: warn your students that while what they do is their business, you expect that their behavior does not interfere with others' study or the general class atmosphere. Tell them that mobile devices are useful, but that they should be careful not to abuse them. Handle non-compliance as you would a typical incident: talk to them, give fair warning, but clearly set the limits and enforce them if it comes to that.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Your problem is quite a common one and one that I face all the time. It is easy to see the benefits brought by laptops, tablets, smartphones, etc. and for those of us who were focused students, it seems like a benefit we want to give the younger generation. However, there are a lot of non-focused students who do use these devices to distraction, both their own and others.
For me, I try to make the class more interactive, less stand-and-deliver. That is, I tend to call on students all the time, and I ask one student to comment on the opinion of other students. If the students are unable to give any meaningful answer (e.g., if they simply say 'yes, I agree with his point') then I dig until it is clear that the student was not paying attention. If that's the case, then one warning if I'm in a good mood and none if I'm not.
I make it clear that they have the choice to be there or not. University is not for everyone. If they don't want to be there, get out. If they do want to be there, then act like it.
I usually don't have to do it too often but I do have to do it from time to time to remind them that school is not relaxation time. Keep the classroom a little tense...it will keep them on their toes.
Upvotes: 4 <issue_comment>username_3: Here's the policy I will try in my course next semester:
### Policy for the use of electronic devices
To promote a better learning environment in my classes, I have established this policy on the use of electronic devices (EDs). Allowing EDs in my classes is a privilege and not a right. That is, until my university establishes its own policy in this regard.
### Goals
EDs offer several advantages in my classes. The lectures may be richer and more dynamic, you can perform Internet searches, answer questionnaires given by the teacher to validate comprehension immediately, do exercises with UML tools, take notes in an electronic document, etc.
However, there is a less positive side of the use of EDs in a lecture.
* Several studies on multitasking [1] [2] show that students who focus on a single task in a course learn more and have better results than their colleagues who work on multiple tasks simultaneously.
* Another study [3] shows that the time to read a text is greater when responding to text messages during the reading.
* According to surveys at several universities [4] students find laptops distracting during lectures. I have personally received comments in written evaluations for my courses that showed students were disturbed by other students who attended the course with a computer, but who used it to do other tasks during class.
### Code of conduct
* You should minimize the use of EDs for tasks not directly related to the course.
* You must not access (or leave plugged into an electrical outlet) EDs during any kind of evaluation (quiz, exam, etc.), without my permission.
* You must put your ED in a "silent" mode during the course.
* You must position the screen of your ED so as to allow eye contact between you, me, and the other students.
* You must close your ED when I ask, for example when doing team exercises.
### Enforcement of the policy
Here are some indicators of possible non-compliance with the policy that are easy to detect and for which I have given several warnings in the past:
* A student with his hands between his legs who looks down occasionally, smiling...
* Many students staring at the computer screen of a colleague...
* A laptop fan running at full speed...
In case of non-compliance with the policy, I will give you a warning. After a warning, you may lose the privilege of bringing your ED to class.
### References
[1] <NAME>. and <NAME>, "The Laptop and the Lecture: The Effects of Multitasking in Learning Environments," Journal of Computing in Higher Education, vol. 15, no. 1, pp. 46-64, 2003.
[2] <NAME>, "The Fight for Classroom Attention: Professor vs. Laptop," Chronicle of Higher Education, vol. 52, no. 39, p. A27, June 2, 2006.
[3] <NAME>, <NAME>, <NAME>, and M. Gendron, "Can students really multitask? An experimental study of instant messaging while reading," Computers & Education, vol. 54, no. 4, pp. 927-931, May 2010.
[4] <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>, "'I Get Distracted By Their Being Distracted': The Etiquette of Texting In-Class," Eastern Education Journal, vol. 40, no. 1, pp. 48-56, 2011.
Upvotes: 4 <issue_comment>username_4: I'll start by quoting from [username_2's answer](http://academia.stackexchange.com/a/7892/30772):
>
> For me, I try to make the class more interactive, less stand-and-deliver.
>
>
>
To me, that's the key. I'm writing this as a separate answer because I elaborate significantly differently.
Consider the technique of a "flipped" classroom. This teaching technique involves:
* eliminating lectures
* having students learn material at home (e.g., by reading text)
* an in-class experience that involves:
+ Students asking questions
+ Students doing exercises that provide hands-on experience
I've heard quite a bit of positive feedback from a college administration that was pushing this. I also read some feedback (one example: [this question](http://academia.stackexchange.com/a/61012/30772)) which seems to indicate some dissatisfaction, particularly among students.
I'm actually not trying to promote the flipped classroom here, but simply to use it as an example of another approach. If students aren't asked to just sit and listen (and take notes), they may be more inclined to do things other than be distracted.
Eventually, the **situation** (some may call it a "*problem*") of *electronic devices* will **intensify** (some may say it will *worsen*). I really do predict that people are going to get cybernetic implants. A lot of people may disagree because it seems too far-fetched, but here's my counter-argument: the biggest problems with putting a cell phone into the body are that current cell phone batteries use poisonous/dangerous chemicals, and that I haven't yet seen an antenna placed under the skin. However, under-skin electronics are in use today: pets carry Radio Frequency Identification (RFID) tags, and people have placed more advanced Near Field Communication (NFC) devices in their hands. In theory, the body can probably serve as an energy (power) source, which would also be useful for medical applications. Now, once these technologies improve and teenagers figure out that an under-skin antenna will allow them to text message without adults knowing about it, I predict there's no way the teenagers won't get implants.
Therefore, attacking the visible use of devices is not going to be the best approach. (Also, as more and more people use electronics, including adults in the workplace, forbidding them can become more and more challenging and risks making the instructor seem more and more irrelevantly out-of-touch.)
So, the key I see is this: Get them engaged and focused, by requiring interactive involvement. That may be by using a "flipped" classroom, or by competitions (that keep evolving as skills improve and competitors get better), or some other methods.
Upvotes: 2 <issue_comment>username_5: **Ban distracting others, don't ban doing your own thing.**
Students will always want to do their own thing; you're likely not going to be the one to convince them that texting isn't okay in a lecture if 18 years of schooling up until now hasn't taught them that. But playing something bright and flashy in a lecture is obviously distracting to the people behind them. The lecturers at my previous institution solved this by telling the hall at the beginning of the course that if someone's screen was being distracting, anyone could use that same universal free wifi to send the lecturer a message through a little page he'd set up, so he could give a general reminder not to do distracting shit. This only had to happen a couple of times before the laptop users realised they should probably stop in general. They never bothered with phones, because fighting against phones today is just so much more trouble than it's worth, and it's almost impossible to tell whether what the person is doing is work-related or not. I personally used to set reminders for myself to google things later in the evening and to google for other explanations of things that I didn't understand on mine, and I probably would have been worse off for not having it.
I don't know if you'd be able to do things similarly, but that's how it was solved at my previous university anyway.
Upvotes: 1 |
2013/02/10 | 481 | 2,059 | <issue_start>username_0: Is there any significance of having received multiple "best paper awards" (in the field of theoretical computer science) when applying for faculty positions after a postdoc? Do the hiring committees watch out for such awards or do they fall into the category of "nice to have but no one cares"?<issue_comment>username_1: It depends on the prestige of the conference or journal that you got it from. If it was a 20-person workshop, I would say no; but if it was the paper-of-the-year award in a leading journal or the best paper in the leading conference in your field, then it is definitely an indicator of the relevance of your research and of the recognition of your contribution and its quality. It is one of the factors that my faculty looks into when hiring, so the quality of the publications, where they were published, and these awards do matter.
Upvotes: 2 <issue_comment>username_2: Best Paper Awards — especially at top conferences — count for a lot in computer science and can certainly help you stand out on the job market. I've seen people introduced multiple times in computer science venues as having won "many best papers awards" or "multiple best paper awards." People notice and people care.
Lots of other things matter as well and will matter more. A best paper awards at a conference nobody has heard of is unlikely to help much. In that sense, I don't think that hiring committees are "watching out for" best paper awards in any systematic ways. But I think it's absolutely normal to note your award winning papers as such in your CV and I think you *definitely* should. Having your work recognized as among the best at a conference will only help so there's no reason not to mention it.
Upvotes: 4 <issue_comment>username_3: **Yes.** Best paper awards at top conferences like STOC, FOCS, and SODA—especially multiple such awards—are taken *very* seriously by faculty hiring committees.
*I'm a theoretical computer scientist currently serving on my department's hiring committee. So I probably know who you are.*
Upvotes: 3 |
2013/02/10 | 2,364 | 10,177 | <issue_start>username_0: I am trying to reproduce published results in a paper. Those results come from numerical simulations. The original authors and I do not use the same software, and theirs is proprietary (I don't have access to it). I have tried to reproduce their results, and it works qualitatively but not quantitatively: the differences between their results and mine on typical quantities of interest are between 2 and 5 times the expected accuracy of the method.
So far, I have communicated with the original authors, trying to clear out all possible sources of error I could think of (checked that I got the tricky parts of the algorithms right, checked that the “usual” parameters that were missing from their paper had the “usual” values, everything I could think of). They are forthcoming enough, and reply to my questions quickly, but it's clear they don't want to invest time in doing any serious follow-up on their side. And without access to their software, it appears I'm stuck.
Now, my question is on how to proceed. The “ideal case” for unreproducible results is to make a detailed analysis of how and why they cannot be reproduced, and possibly find out a source of failure (or at least plausible issues). This advances the field, and is probably publishable (not in a very high-profile journal). Here, this is not possible.
**I have, however, nice results that I have obtained (extending their work far beyond what was already published), and if I didn't have these differences with their paper, it would make a very attractive paper. What can I do with those?** Is it possible to publish them, merely noting the difference with their paper without more comment? Or are my results simply unpublishable? I welcome any comment, especially from people who have found themselves in such an uncomfortable situation!<issue_comment>username_1: Just publish. Publish your attempts to replicate the findings, documenting the discrepancies, together with the nice results you've obtained by extending their work. Consider sending a draft to the original authors for their comments.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Can you/are you willing to throw a bone?
It might be worthwhile to discuss having them as authors on the paper. Perhaps you can strike a deal to get what you want (access to their software, results, etc.) in return for including them in your publications and having some level of collaboration. You might be surprised: just being honest about this and talking about it openly might work! My best two papers to date came from doing exactly that and then developing a working relationship with people who were not that forthcoming. After that, we published two additional papers together. Who knows, you might actually end up collaborating and doing bigger and better things together if it works. I would give it a try before deciding to just publish the results the way you described.
Upvotes: 3 <issue_comment>username_3: Publishing results that contradict previous publications can be awkward, but if you can show that your method is correct beyond reasonable doubt, then it shouldn't be a problem. No code is guaranteed to be completely free of errors and no result is guaranteed to be correct just because it is published.
You don't say much about the nature of your computations/methods, but do you have any test cases for which analytical solutions are known or can be derived? If you can show that your code reproduces these results, then you can make the case that your code's results for the specific problem in question should be reliable.
Ideally, if you have such a test-case, you could ask the other authors to run it with their own code, and see if they also produce correct results. They may not want to, but that's their problem, not yours.
In summary, if you go to reasonable lengths in your paper to demonstrate the accuracy of your code/method against known analytical solutions, you shouldn't be too worried about not matching other people's results. At least that would be my opinion as a referee.
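To make this concrete, here is a minimal sketch of such a sanity check in Python. Everything in it is invented for illustration, since the question doesn't say what is being simulated: the forward-Euler integrator stands in for the real simulation code, the exponential-decay test problem is chosen because its analytical solution is known, and the tolerance is arbitrary.

```python
import numpy as np

def integrate_euler(f, y0, t):
    """Forward-Euler integrator, standing in for the 'real' simulation code."""
    y = np.empty_like(t)
    y[0] = y0
    for i in range(len(t) - 1):
        y[i + 1] = y[i] + (t[i + 1] - t[i]) * f(t[i], y[i])
    return y

# Test case with a known analytical solution: dy/dt = -k*y  =>  y(t) = y0*exp(-k*t)
k, y0 = 2.0, 1.0
t = np.linspace(0.0, 1.0, 10001)
numerical = integrate_euler(lambda ti, yi: -k * yi, y0, t)
analytical = y0 * np.exp(-k * t)

# The tolerance should reflect the method's expected accuracy (here, O(dt) for Euler).
error = np.max(np.abs(numerical - analytical))
print(f"max abs error: {error:.2e}")
assert error < 1e-3, "code disagrees with the analytical solution beyond tolerance"
```

A battery of such cases, reported in the paper or its supplementary material, goes a long way toward convincing a referee that the remaining discrepancy is not on your side.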
Upvotes: 4 <issue_comment>username_4: I would be very wary of publishing and as a reviewer I would be wary about recommending publication. The unexplainable difference in results hints at a mistake. That mistake is either yours or theirs. I would like to know for sure that it is their mistake before publishing. Even though you cannot compare the two methods directly, you could still publish your method independently showing that it gives the "correct" answer in a battery of test cases and then noting that it gives a different answer in the non-testable case.
You could then refer to this paper when you publish the real work. The advantage is that it removes the need to dilute the message of the real paper with the details of the method. A second advantage is that it may result in the original authors running the test battery with their method. This is especially true if you call them out in an earlier draft and send them a copy prior to submission. You could also request them as reviewers.
A different strategy might be a lab visit (physical or virtual) to use their software on your test battery.
Upvotes: 3 <issue_comment>username_5: *(Disclaimer: I have no personal experience with such a situation, so I'm just going off plain common sense. That said...)*
It sounds like you've already taken every reasonable step to discover the source of the discrepancy, and you're now left with just an "unexplained deviation" between your results and theirs. You also say that the discrepancy doesn't actually affect the qualitative conclusions drawn from the results in any way.
At this point, if I were you, I'd just go ahead and publish your extended results, and just briefly note the discrepancy when you compare your results with prior work.
As long as you're reasonably certain that your results are correct (up to expected limits of numerical accuracy), you can't really be expected to be able to explain any inaccuracies in other people's results. Of course, you definitely should make sure that others can easily reproduce *your* results and verify the correctness of the methods you used to obtain them, e.g. by making your software freely available.
If you really think that merely documenting the discrepancy between the two sets of results would be publishable on its own, doing that and then citing that publication in your main paper could also be an option. Generally, though, I'd expect that to be practical only if the precise quantitative values in dispute are actually of importance to others working in your field.
Upvotes: 4 <issue_comment>username_6: I agree with others who have suggested that publishing the new results is OK. Mention that there is a difference with the old method but that it is not qualitatively different.
Many journals have a policy of asking for a comment/rejoinder from the author of any study whose work is directly contradicted. If the editors think your comment within the paper qualifies, you might finally get the answer you are looking for.
But I would also urge you to think hard about how much you want to put on the line for this discrepancy and how much time you want to spend on it. It sounds like the results are qualitatively the same. If you end up being 100% correct on everything, the contribution from a paper that only talks about the difference will be a slightly better estimate. In some situations, that can be worth a lot. In lots of others, it doesn't count for much. You'll know how important this is for your field.
I once found a small methodological problem in a paper in my second year of grad school. I asked a similar question to yours to a professor who asked me if the methodological error was likely to change the result or invalidate the core findings. When I said it was very unlikely, he told me that it was probably not the best use of my time to work a lot on a rejoinder.
It's tough. I think you *should* say something. A note in the paper is probably enough. For this sort of thing, I think a research note on Arxiv that you can cite might be an alternative.
Upvotes: 3 <issue_comment>username_7: All publications in scientific journals should be reproducible and accurate. Examining others' results is a very important task. Original authors get lots of credit if an independent researcher verifies their theory or model.
Almost all journals have a section named **Comment**, **Letters**, or **Letter to the Editor**. Below are some links to these columns:
* **Comment** in IEEE Trans. Antennas. Prop. [A Comment on “Joint elevation and Azimuth direction finding using L-shaped array”](http://ieeexplore.ieee.org/xpl/articleDetails.jsp?tp=&arnumber=6193149&contentType=Journals%20&%20Magazines&refinements=4291944246,4294956607&sortType=desc_p_Publication_Year&searchField=Search_All&queryText=comment%20on)
* **Letters** in PNAS [Genome composition, caste, and molecular evolution in eusocial insects](http://www.pnas.org/content/110/6/E445.full)
* **Correspondence** in Nature [Reply to 'Measurement of mobility in dual-gated MoS2 transistors'](http://www.nature.com/nnano/journal/v8/n3/full/nnano.2013.31.html?WT.ec_id=NNANO-201303)
You can send your findings to the same journal and explain them in full. The editors then send your comments to the authors, and the authors need to provide an appropriate response to the journal.
Of course, this is tricky, and you need to be careful. If you think everything is precise in your code, a comment on the paper is an option.
Upvotes: 3 <issue_comment>username_8: I'd like to add one point that has not been mentioned yet. It may or may not be applicable to your situation, but it might be in the general case. You mention that the outputs from the numerical simulations don't agree. Therefore, I suggest:
If two models can't be made to agree, it's time to do *measurements*.
Actually, this is a good idea even if they do agree, but if you can do measurements, you will be able to confirm that at least one of the models is incorrect at least for the specific situation of the measurement.
Of course this is not always possible.
Upvotes: 3 |
2013/02/10 | 715 | 3,039 | <issue_start>username_0: I am planning to start running a group seminar, with talks scheduled regularly, at my institute.
The seminars might be of interest for people in neighbouring areas, too, so I would like to have an "archive" website with all the abstracts and a calendar of the upcoming seminars, and of course I'd like to send out e-mail notifications (and optionally also an RSS feed/calendar widget for the more tech-savvy users).
Is there any software or service that can help me automate some of this setup? I thought of opening a blog-type site on some hosting site, probably either Wordpress or github/Jekyll.
Do you have experience working with similar tools? Do you think they would really save me some time? Or is it maybe better if I just add a page to my academic website, send the mails manually, and forget about the other fancy add-ons?<issue_comment>username_1: A site hosted by your university, on your university website, will probably be a better choice than an externally hosted website. This is because you'll be able to immediately identify the seminar series with your university, and that will help to improve its branding. (It also looks a lot more professional!)
As for software, there are a lot of different options. I can't really offer a lot of guidance on this, as we have staff whose job it is to maintain our websites. Which one you pick will depend *a lot* upon the kinds of features you want, and how steep a learning curve you're willing to negotiate.
Upvotes: 2 <issue_comment>username_2: I have always hosted reading groups and seminars using a page within a wiki. There are a bunch of firms that will provide you with a wiki for free or for a small price and many that specialize in doing it for Academia (e.g., [PBWiki](http://pbworks.com/) and [WikiSpaces](http://www.wikispaces.com/) and I'm sure there are many others).
You might have to send out your own email announcements but that burden is pretty minor.
Upvotes: 1 <issue_comment>username_3: You might consider using google groups combined with a google calendar. It can be set up to provide email alerts and a calendar, and you should be able to extract an RSS feed as well.
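If you do want the calendar under your own control, generating a feed yourself is not much work either. Here is a minimal sketch in Python (standard library only); the seminar titles, speakers, rooms, file name, and domain are all invented for illustration:

```python
from datetime import datetime, timedelta

# Invented example data: (start time, title, speaker, room)
seminars = [
    (datetime(2013, 3, 4, 14, 0), "Spectral methods for PDEs", "J. Doe", "Room 101"),
    (datetime(2013, 3, 11, 14, 0), "Coarse-grained force fields", "A. Smith", "Room 101"),
]

def ics_timestamp(dt):
    """Format a datetime in the basic iCalendar form (assumed to be local time)."""
    return dt.strftime("%Y%m%dT%H%M%S")

lines = ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//group-seminar//EN"]
for start, title, speaker, room in seminars:
    lines += [
        "BEGIN:VEVENT",
        f"UID:{ics_timestamp(start)}@group-seminar.example.org",
        f"DTSTART:{ics_timestamp(start)}",
        f"DTEND:{ics_timestamp(start + timedelta(hours=1))}",
        f"SUMMARY:{title} ({speaker})",
        f"LOCATION:{room}",
        "END:VEVENT",
    ]
lines.append("END:VCALENDAR")

# Serve this file from any static web space; most calendar clients can subscribe to it.
with open("seminars.ics", "w") as fh:
    fh.write("\r\n".join(lines))
```

The same loop could just as easily emit an RSS XML file or an HTML archive page, so one small script can keep all three views in sync from a single list of talks.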
Upvotes: 2 <issue_comment>username_4: Having once been the guy making a site for a lab only to have it abandoned because I made it too complicated, I would strongly recommend that, whatever you do, you make it **simple to maintain**; unless you're the lab PI, the site you build will likely outlast you.
Wordpress is very easy to use, with lots of built-in functionality, freely available themes, and tons of tutorials online describing how to use it. Other CMS packages have similar benefits. Unless it's strictly necessary, I would avoid "rolling your own" software; almost all lab websites are the same few pages, and you don't need something complex for that.
I agree with @seismail that you should check whether your department will make the page for you or at least agree to host it. It will definitely improve branding.
Upvotes: 3 [selected_answer] |
2013/02/10 | 921 | 3,883 | <issue_start>username_0: I have been approached by an international student about doing a PhD with me. As an MSc student in his home country he has published 3 articles in pay-to-publish venues that are known to have little peer review, with his supervisor as second/senior author. These articles are not particularly good and likely would not have been publishable in more traditional venues.
I am struggling with how to evaluate these articles and the candidate. Should I simply ignore the place/type of publication and evaluate the work on its own? Can articles in pay-to-publish places really be fairly evaluated? I am worried that changing his behavior will be difficult. I don't want to accept a student whose goal is to publish things in pay-to-publish places.<issue_comment>username_1: That is indeed a tough question.
What would raise the most red flags for me is the fact that he does not have any articles in regular peer-reviewed journals. This raises, again in my opinion, the question of whether the candidate has simply bought himself/herself a publication list.
The student's *academic* merits should definitely be judged based on the content of the articles themselves, irrespective of where they were published, no question about that.
What would worry me, though, is this student's views on research, publishing, and the academic process in general...
Upvotes: 3 <issue_comment>username_2: I second username_1's answer, but I note that it's not actually clear from your description whether the journals they published in had peer review. Note that some well-established peer-reviewed journals charge publication fees to the authors. One example of a relatively high-profile journal following that policy is [*Physical Review Letters*](http://publish.aps.org/authors/publication-charges-physical-review-letters) (flat publication fee of $690 per article).
Now, if the articles in question were not peer-reviewed, then **you should treat them as any non-peer-reviewed publication**: book chapters, arxiv papers, blog posts, etc. Read them, see what they're worth. (Well, you'd do the same thing for peer-reviewed articles.) In addition, it probably depends on your field, but at least in mine, being an MSc student without peer-reviewed publications is not a hanging offense :)
Upvotes: 5 <issue_comment>username_3: From your description, it sounds like the problem is more likely to have been the MSc supervisor than the student. As evidenced by some of the questions we've seen here, it's very hard for people new to academia to figure out which venues are reputable on their own---and the advice we give usually includes talking to someone in academia. If the supervisor's name is on the publication, that presumably means the supervisor encouraged publication in these venues.
Especially if it's a journal which does a small amount of peer review, I wouldn't assume, without further evidence, that the student has any idea that the papers weren't fully peer reviewed. If the supervisor isn't active in the international research community, I'm not sure I'd even assume the supervisor knows that.
Upvotes: 6 [selected_answer]<issue_comment>username_4: I have one addendum to the great answers by [F'x](https://academia.stackexchange.com/users/2700/fx), [username_1](https://academia.stackexchange.com/users/495/pedro), and [username_3](https://academia.stackexchange.com/users/8/henry).
If you believe the work is good and your lingering concern is that the student has some miscalibrated idea of what publishing should entail, *talk to them about it.*
If she/he is a Masters student, she/he probably isn't particularly set in their ways in terms of how they want to publish. A conversation with them — about this and anything else that is worrying you — is a very sensible thing to have before you agree to spend the next *n* years working with them.
Upvotes: 4 |
2013/02/11 | 1,329 | 5,314 | <issue_start>username_0: While peripherally related to [Flying with a poster tube as a hand luggage](https://academia.stackexchange.com/questions/2448/flying-with-a-poster-tube-as-a-hand-luggage/2488#2488), I am trying to avoid this. I would like to print my poster at the conference. I am considering this for two reasons. First, it means I don't have to fly with the poster. Second, it gives me a few extra days to work on the poster.
I can see three potential drawbacks.
1. Being unable to print the poster when you get to the conference. I have lots of experience printing posters at my university, but no experience in the conference city.
2. Not being prepared/able to return with the poster to hang in the lab
3. Getting reimbursed for printing charges
As for point 1, the conference is in a major US city with at least 4 Kinkos (large scale professional print shops) within reasonable walking distance of the venue. My poster is not until day 4 of the conference and I am arriving 1 day early. On point 2, I do not plan on hanging this poster in my lab. I am a little worried about getting reimbursed, but our on-line reimbursement system has a category for printing charges. If I cannot get reimbursed, I am willing to pay out of pocket.
Am I missing anything that can go horrendously wrong with this plan?<issue_comment>username_1: I had your problem 1 when I accidentally forgot the poster tube in the taxi when I arrived at my hotel. The problem was I arrived on Saturday and the poster session was on Sunday. For your case, where you have four days, there probably shouldn't be a problem, especially with so many print shops within walking distance. You could even perhaps arrange for the poster to be sent online *before* your arrival, and pick it up when you get there.
Upvotes: 2 <issue_comment>username_2: If you're not worried about transporting the poster or getting reimbursed, and the destination city has the same facilities as your home location, then there's no functional difference between printing locally and remotely.
In other words, once you define away all the differences, the two scenarios are the same :)
Upvotes: 3 <issue_comment>username_3: Reimbursement depends on a particular institute's policy, and taking a poster by plane on a particular airline's policy, so I won't speak about those.
>
> Being unable to print the poster when you get to the conference.
>
>
>
Possible problems with printing facilities:
* they may be further from the conference venue than expected (or not as easy to get to, or tucked away so that they are not easy to find even if you are nearby),
* delays larger than you expect (at least assume "the next day", in general or due to other prints ordered),
* their web page can be out-of-date, or they may not be operating for some reason,
* they may not print A0 format (permanently or temporarily),
* local holidays (or local customs related to working hours) may be different.
(I printed posters on-site two times, and it went almost without problems; some people I knew had problems, especially with instant printing and poster sizes.)
Upvotes: 4 [selected_answer]<issue_comment>username_4: This is not to scare you, but what if you fall sick and lose the 3 or 4 extra days you have?
Remember it is a new place you are going so things may not be as familiar as at home.
It's good to prepare in advance. Perhaps you can forward your poster in advance and collect it on your arrival.
Upvotes: 1 <issue_comment>username_5: I do not see any reason why you would not be able to print your poster on arrival. In the past, I have used FedEx (or whatever is easy) to ship the poster to my hotel. That way, the poster is waiting for you when you get there. You do not have to carry it on the plane, and there is no need to stress about Kinko's not wanting to print your posters...
Upvotes: 1 <issue_comment>username_6: *Printing at a conference is absolutely no problem*. Many conference centers and associated hotels have print shops and Kinko's sprinkled around specifically for people at conferences and meetings to use for printing posters, handouts, and other things. You can usually call in advance and send in some material if you like, and it will be ready to print out when you arrive.
In fact, the times I've gone to Kinko's near a conference on similar errands, there were other people from the conference waiting in line to do the same thing.
If you're presenting the first morning of the conference and getting in the night before, you might want to roll it up. Otherwise, you'll lose no sleep and shouldn't have an issue. The only real downside is that it will probably be more expensive than doing it at your university.
Upvotes: 2 <issue_comment>username_7: The plan's fine.
Be sure to check if you only need to submit the poster to print and then return to get it, or if they require you to verify a proof first. Some print shops may refuse to print unless you sign off on the proof first, so be sure you ask about that so you can plan if you need to make more than one trip.
If you want to be the local hero, buy thumbtacks and scotch tape for your poster while you're out and be the envy of all of the poster presenters, as well as the conference organizers, who will undoubtedly have forgotten to bring one or both of those things. ;)
Upvotes: 1 |
2013/02/11 | 1,384 | 6,200 | <issue_start>username_0: I am a Physics undergrad who is interested in pursuing a PhD in pure maths in the future (algebraic geometry/topology) but I am a bit unsure. My question is quite general, and I don't wish to provide more background for fear of bias in the answers.
My question: Is it advisable/possible/unfavorable/favorable to apply for a PhD in a field different from that in which you have done most of your undergraduate research?
My research (which so far includes just reading and understanding papers and writing summaries; I haven't published anything) mostly covers quantum field theory and gauge theory. Would the selection committee turn down an application to a pure maths field, if I have no research experience in it whatsoever?
I would also like to ask the question the other way round. What if I concentrate my undergraduate research SOLELY on topics in pure maths such as algebraic geometry/topology and take other physics courses: would I be able to apply to a string theory PhD with a high chance of success?
Should I consider spending time on both of these (which is almost an impossible task) to improve my chances in both areas, or would research in one area and grad-level courses in the other suffice?
**In brief: Should the PhD field you are applying to be the same as your undergraduate research area?**<issue_comment>username_1: You should do what makes you happy and what you find interesting. A PhD is a considerable investment, and it is worthwhile only if you are highly interested in your research. It is quite common for people to change domain from undergrad to grad studies (between math/CS/physics/bio-info).
Upvotes: 2 <issue_comment>username_2: You can definitely apply in a different field than your undergraduate. Some major fields of research don't even exist as undergraduate majors in most colleges (e.g., neuroscience), so it's understood that many students will come from different backgrounds.
The more similar your major is to the new topic, the easier the learning curve. If your major is significantly different you may want to take some post-bachelors undergraduate courses before beginning the PhD program to bring yourself up to speed.
Upvotes: 4 <issue_comment>username_3: When you apply for a PhD, they are not expecting you to already be an excellent researcher in the field you want to go into; most early research is purely for experience and training, and for seeing whether you like research, and they will understand that.
What's more important is that you can explain the choices you've made. They will look at your research history in the context of your PhD choice. You just have to make sure that it shows off the skills they want you to have for the PhD; those skills are not really subject-specific.
I did an undergraduate degree and a masters in physics and received offers to do PhDs in cancer research from Oxford, Cambridge and CRUK, among other less well-known UK institutions, so it's definitely not essential to have prior experience in the field; it will also depend, though, on how competitive your chosen field is.
Upvotes: 0 <issue_comment>username_4: Yes, this is definitely possible.
When I was doing my undergraduate degree, I was pretty sure my interests lay within the area of nonlinear dynamics and systems of ODEs, and I had picked up some research experience in my third year as well as during my MMath dissertation. My third-year summer project was on critical factors and tipping points (with application to a predator-prey population model), and my MMath dissertation was on nonlinear laser dynamics.

However, towards the end of my undergraduate degree and the MMath project, I found that I wanted to do something more rigorous, involving pure mathematical analysis and how it applies to partial differential equations (I had taken some courses in fluid mechanics and PDEs beforehand). My current project is on statistical solutions of the Navier-Stokes equations, which is a very different area from what I researched at undergraduate level.

My ex-supervisor gave me vanilla advice, saying "your PhD topic should be the same as your MMath project". In some sense this is true: one piece of advice I've heard is that one should choose an area to specialise in as soon as possible, even though this may not suit everyone (when applying for PhDs I was very unsure about the area of research I wanted to pursue, so I had offers to study a wide range of topics with varying mathematical backgrounds).

I chose my current project because there would be opportunities to fill in the gaps from my previous university, the area of research is fairly lucrative, and it would enable me to see things I had applied before in a new and rigorous light. I explained all this to my supervisor in my interview with her and admitted that my background in mathematical analysis wasn't as strong as it could be, but I emphasised that I was willing to learn and that the project would help me become a better all-round mathematician.

Subsequently, I was accepted onto the project, and (bar the usual PhD student woes) I am on the whole quite enjoying it, as it's a completely different area from the ones my previous university offered. My background would suggest that my strengths lay in numerical methods and applied nonlinear dynamics, and even though I would probably have had better luck at PhD programmes along those lines, I decided that I didn't want to spend my academic career simply "number-crunching" and doing work I found relatively straightforward. Doing more pure mathematics is harder but, in my opinion, much more rewarding, as you can actually understand why things are the way they are rather than just running a simulation and accepting that it works.
So to answer your question: if you apply to a PhD programme, explain the strengths and limitations of your background (be honest!), and explain why you would benefit from the programme, then that can be just as important as knowing what the research itself is about. Some people may advise against changing topic, but if your heart is set on it, then grab the opportunity while you can.
Upvotes: 2 |
2013/02/11 | 1,018 | 4,198 | <issue_start>username_0: I'm asking about approaches for including interesting but not perfectly fitting results in a dissertation or a paper.
During my PhD project I have made an accidental discovery, which is what I believe you call *serendipity*. The finding is related to the overall topic of my dissertation and certainly interesting, but it interrupts the "leitmotif" of my argumentation, as this discovery is really just the result of a stupid mistake. So my question here is: **How do you eloquently include stupid mistakes (aka accidental findings) in a dissertation or a paper without sounding stupid or breaking the flow of arguments apart?** Is there even a generalizing answer to this question?<issue_comment>username_1: *“The most exciting phrase to hear in science, the one that heralds new discoveries, is not 'Eureka!' but rather, 'hmm... that's funny...'”* (Isaac Asimov, thanks to EnergyNumbers for reminding me of it)
---
If you're worried that it will distract the flow of your thesis, why not **put it in a “special” part of your thesis (e.g., an appendix) and refer to that from the main text, at the point that would be most logical**.
>
> *[Following the description of your experiment.]* In the next few sections, I describe the results obtained from operating the Pocket Helium Flux Positron Annihilator on a variety of samples: metals (section II.B), graphene (section II.C) and heavy water (section II.D). You will also find in Appendix A a description of the observations made following an accidental operation of PHFPA without a helium flux *[you may not want to be specific and say: some moron forgot to replace the bottle]*, which allowed us to check what happens when electroneutrality is violated on the µm scale.
>
>
>
An appendix is a good place, or maybe a small section at the end of the relevant chapter.
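If the thesis is written in LaTeX, the mechanics of this are simple. Here is a minimal sketch (the `report` class, the chapter titles and the `app:serendipity` label are placeholder choices of mine, not part of the advice above):

```latex
\documentclass{report}
\begin{document}

\chapter{Results}
% The main argument stays here; the side result is only pointed to.
An accidental run without helium flux produced an unexpected
observation; see Appendix~\ref{app:serendipity}.

\appendix % every \chapter after this is lettered: Appendix A, B, ...
\chapter{Observations from an accidental run}
\label{app:serendipity}
% The serendipitous finding lives here, outside the main flow.

\end{document}
```

The `\appendix` switch gives the side story its own lettered chapter, while a one-line cross-reference keeps the main text intact.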
Upvotes: 5 [selected_answer]<issue_comment>username_2: A thesis is a good place for things not yet developed enough to make a full paper.
If it is at least tangentially related to your thesis topic, just add a relevant (sub)section (e.g., in the further discussion, or near the place where it is most relevant).
BTW: Many groundbreaking discoveries were accidental. So I don't see a reason to value them less than ones planned in advance.
Again, if "accidental" means than some values were set such as a mistake - again, mentioning such is related to motivation/story, not the value of results.
Upvotes: 3 <issue_comment>username_3: The traditional way to introduce important or interesting peripheral information without breaking the flow or core thrust of a manuscript is with footnotes/endnotes or with an appendix, as [F'X](https://academia.stackexchange.com/users/2700/fx) has suggested.
If it's a short aside, consider making it a long footnote. If it's longer, put it in an appendix and reference it either in a footnote or in the text. Long — even paper-length — appendixes are not unusual in dissertations.
Upvotes: 2 <issue_comment>username_4: In the social sciences, there is traditionally a section in the concluding chapter that discusses limitations of the present study and scope for future research.
You can include your 'discovery' in this section. In this way, you are presenting your 'discovery' and suggesting some ways in which it can be researched - two-in-one, I suppose!
The other section in which you can include your 'discovery' is where you highlight what contribution your research is making to the body of knowledge in your field. This is traditionally another section in the concluding chapter of a social science dissertation. You can 'wrap' your 'discovery' as an accidental but important contribution to knowledge.
In my case, I talked to a number of people in my field as part of my stakeholder consultation. I soon discovered that they were telling me far more than was needed for my topic. I summarised this information in my concluding chapter and said that it represented an important contribution to knowledge because it would be lost if not captured in writing (the stakeholders were mostly from the older generation).
So, there are many ways to include it in your dissertation.
Upvotes: 2 |
2013/02/12 | 476 | 2,143 | <issue_start>username_0: I am unsure about books, but I know for certain that selected journal articles are peer-reviewed. (This just shows I am not in academia!)
**Do books go through a peer-review process? If so, how does this happen?**
If one is approached by a small publisher, does it matter if this publisher does not offer peer review (if such a thing exists for books)?<issue_comment>username_1: Typically, after an author presents a book proposal to a publisher, the publisher will circulate the proposal to some selected reviewers to vet the content. This is not like peer review in the usual sense: the reviewers only get to see the outline and maybe a chapter or two.
Once the publisher decides to go ahead with the book and it goes through the editing process, it might undergo further review, but nothing like a journal review.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The major textbook publishers pay for "subject matter expert" reviews of completed books prior to publication, and even of new editions of previously-reviewed books. Some reviewers apparently just submit the publisher's questionnaire. Others, like myself, submit extensive comments. I've proposed corrections that were accepted for books by well-known authors.
Upvotes: 2 <issue_comment>username_3: I guess this is field-dependent, because my experience (in math) is different from what is presented in the other answers. At the very least, some book series by some publishers have a peer-review process similar to that of journals for books that present new research (not textbooks or similar material). The editors would give the whole book to several referees and ask each of them for a report about the mathematical correctness, the context, the presentation, etc. There can even be an editorial process similar to the one for articles, with a back-and-forth of corrections and new reports. Sometimes each referee would only be asked about some part of the book, but each part would be covered by at least one referee. I would guess that this does not apply to textbooks or "survey" type books, but my experience there is limited.
Upvotes: 2 |
2013/02/12 | 1,789 | 7,649 | <issue_start>username_0: I quite enjoy paying attention to how I design my documents and presentations. I usually spend hours thinking over and designing my slides for a workshop or presentation, so that they are aesthetically pleasing and as intuitive as possible. Likewise, when I recently started revising my CV, I figured I wanted to make it stand out a bit more. (Just to make it clear: I don't mean making a clown of the document, just better use of colors, contrast and design elements.)
I have long wondered whether or not this is something that can backfire, since most documents in academic contexts are extremely plain, at least in my experience. It's very common to see the default PowerPoint slides (white background, black Arial text) or [something as hideous as this](https://www.google.se/search?q=terrible%20powerpoint%20slides&hl=en&safe=off&tbo=u&tbm=isch&source=univ&sa=X&ei=HxAaUe3PBM74sgb70IC4Aw&ved=0CCgQsAQ&biw=1282&bih=1065).
My question is as follows: is putting time and effort into the design of academic documents something that can backfire? Will I risk being prejudged with first impressions such as "well, he put a lot of effort into the presentation of his documents, perhaps because the content is sub-par"?
---
I realize that the question might be somewhat subjective from person to person, but I encourage everyone to consider it in terms of this SE blog entry: [Good subjective vs Bad subjective](http://blog.stackoverflow.com/2010/09/good-subjective-bad-subjective/).<issue_comment>username_1: Some scientists don't know how to properly lay out a thesis, a presentation or a poster. They probably never learned it and too often don't care about it. This doesn't mean that you have to follow this bad example. I always appreciate it when my students care about the readability of text and figures, and think for a long time about how to present something in the best way.
As for your question, I'd say this really depends on what you consider 'unconventional'! If your CV bursts with colours and Comic Sans, it will certainly backfire. If it's a sleek design with understatement, it certainly won't.
Upvotes: 2 <issue_comment>username_2: It all comes down to a cost/benefit analysis, but there is little risk in improving the design, graphics and typography of your documents (theses, figures, presentations). Presenting a higher-quality document is unlikely to backfire. In fact, the only case I can think of is if form seems to have taken over content: if you have a slickly designed presentation with just-meh scientific content, the contrast might draw attention.
One thing that might be a problem is if you put too much theatrics, 3D effects, animations, cartoons… I had a colleague who used every single “animation” possible (it was the early days of Apple's Keynote and its nice 3D effects) in the same presentation, and it was simply too much. It distracted people from his message.
Finally, coming back to the cost/benefit analysis: I believe that as in everything, 20% of the work can get you 80% of the reward if you choose wisely. People will have different pet peeves, but the areas which I think you should polish for presentation slides are:
* **Graphics quality**: no pixelated crap
* **Consistency between graphics and text**, and self-consistency of graphics: same quantities reported and plotted, same units, consistency between graph scales (as much as possible), etc. Sometimes you take pictures from an earlier paper, and they don't quite match what you are showing with them. Avoid things like “graph on the left is concentration, graph on the right is volume fraction” when they can be converted straightforwardly.
* **Be careful about background colors**: to me, this makes the difference between decent slides and good slides (for the presentation, not for the scientific content). If you use a colored background (not saying you should), don't include graphics with a white background. Try to use graphics with a transparent background (easy with vector graphics; use PNG with an alpha channel for bitmap images); see the sketch after this list.
* **If visualization requires it, use movies** to show a complex system: time evolution of spatial distribution, autorotation of a structure you present if it makes it clearer, etc.
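To make the background-color point above concrete, here is a minimal beamer sketch (the color choice and the file name `fig.png` are placeholders of mine; the only assumption is that the PNG was exported with an alpha channel):

```latex
\documentclass{beamer}
% Set a colored background canvas for every slide.
\setbeamercolor{background canvas}{bg=blue!10}
\begin{document}
\begin{frame}{A plot on a colored background}
  \centering
  % With a transparent PNG, the slide background shows through
  % instead of an ugly white rectangle around the plot.
  \includegraphics[width=0.7\textwidth]{fig.png}
\end{frame}
\end{document}
```

If the same figure is exported with a solid white background, the white box immediately stands out against the tinted canvas, which is exactly the mismatch to avoid.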
Upvotes: 5 [selected_answer]<issue_comment>username_3: There are different kinds of academic documents, and the answer might change from one field to another. For instance, journal/conference papers often have a required style, so there is little room for improving the overall design of the paper.
As for the CV, I have seen many places asking for a specific style (e.g., they give you a Word document to fill in), but when they don't, as long as the content respects the traditional Education/Experience/Publications structure, it shouldn't be a problem.
When it comes to presentations, I would say it's a bit trickier. The overall impression I've had when talking with colleagues at conferences is that the quality of the *speaker* matters first. I've attended excellent presentations where the slides were black-and-white PowerPoint, because the speakers were merely using them as a support for keywords and the important formulae. Conversely, I've attended boring presentations where the slides were very nice, simply because the speaker was not comfortable speaking.
In other words, I don't think putting time and effort into the design of slides will backfire, but it will not necessarily bring you bonus points either; what matters is the quality of the presentation itself.
Upvotes: 3 <issue_comment>username_4: Good design is invisible. The goal of design is to increase understanding/clarity. If your design is truly good then it will go unnoticed but your content will be better understood. If your design is noticed and distracts, then it is bad. I feel these principles are universal.
So to answer your question: yes, good design is worth it (it increases how well your content is understood), but just adding "design" elements without a good understanding of the audience and their expectations is unlikely to result in good design.
Upvotes: 5 <issue_comment>username_5: The only other note I'd have to add is that you should also remember *who your audience is* when designing your documents.
You can have a fancy version of a CV or a presentation template, and there's nothing wrong with that. However, as an example of where this could backfire: assume you have a LaTeX'ed CV that is being sent to an HR department of a large company. The text of the CV will probably be tested against some set of keywords for "appropriateness"; if it can't match because of ligature issues, you're out of luck.
Similarly, if the documents will need to be scanned in on the receiving end (perhaps because they require an actual signature), then the design should be one that doesn't make the scanned version illegible.
Upvotes: 2 <issue_comment>username_6: Assuming that we have similar concepts of what good design is (clarity, readability, ...), the only point I can see where it could potentially backfire is if there is a mandatory layout and you choose not to use it.
* thesis formats are often mandatory, and you don't want to risk failing because of not meeting formal requirements.
* The call-for-papers that comes with an ugly Word template that *is to be used*
* Many universities have a corporate design that is mandatory for public/outside presentations.
However, my complementary experience is that judicious changes that leave the overall impression of where you belong intact, but, for example, allow better contrast in diagrams, usually do not backfire.
Upvotes: 1 |
2013/02/12 | 815 | 3,607 | <issue_start>username_0: I am currently applying to faculty positions, primarily teaching positions at 4-year colleges and universities. I am told many of these jobs have more than 100-200 applicants. Some of the job ads themselves say they get hundreds of applicants and go on to say something to the effect of 'you probably aren't going to hear anything from us', which reads to me as 'don't bother us'. I have 3 questions, which overlap:
1. If I don't hear back from them at all, is it appropriate to contact the department?
2. If I hear back that they got my application and materials, is it appropriate to contact the department?
3. People I know from the business world encourage me to be more aggressive: calling departments to check in, and even asking if I can come visit. I am concerned that this sort of approach could backfire. Is this sort of aggressive approach accepted in academia?<issue_comment>username_1: It is certainly appropriate to ask whether they got your application. It is also appropriate to enquire about how the process is progressing. I would avoid being too pushy about it, as this will not influence anyone, at least not in a positive sense. And sometimes these application processes take an extremely long time.
It may be appropriate to visit the department to give a presentation, as this is one thing academics do anyway, even when they are not applying for positions. Give a good presentation and this might help your application – though it could be the case that the people judging the application are completely disjoint from the people in the audience.
Upvotes: 3 <issue_comment>username_2: You should keep in mind that the people running searches are busy academics who are doing this as a service to the department. Checking in aggressively — following your intuition for how this would work in business — *will annoy and will be very unlikely to help*. At the moment (early-to-mid February), it is still early enough in some job markets that interviews have not been scheduled, so asking for updates might be seen as pushy. I think it's unlikely that your chances will *go down* if you ask, but it still might be nicer to wait.
That said, there are many situations where contacting a department is normal, and you might want to do it through one of these channels:
* For example, if you have another offer from somewhere with a deadline, it is normal — and a good idea — to contact other departments to let them know that you will otherwise have to move forward without them. I've had friends who received interview offers *within hours* of telling a department this.
* Also, if you have updated material on your CV (e.g., a paper accepted, positions changed, an award, whatever) go ahead and send your updated CV. You can mention in that email that you're excited to hear about any updates from their search.
Contacting search committees in this context is normal and can signal that you continue to be very interested in a job. I was told by a search committee that ended up making me an offer that they had thought I might be unlikely to accept, and that one of these update emails provided a useful signal. Of course, if they're not interested, emailing will probably just be noise and extra work for them.
I think that emailing or not-emailing is unlikely to tip the scale either way. They're going to make a decision based on the intersection of the quality of your work and what they're interested in having in their department. But out of kindness for the work of the search committee and its chair, try to do it as little as possible.
Upvotes: 3 |
2013/02/12 | 463 | 1,943 | <issue_start>username_0: My department has several non-tenured/non-tenure-track faculty in teaching-only positions. My department is also hiring at the tenure-track level this year and these non-tenured faculty are taking an active role in the search, even voting with the tenured and tenure-track faculty on who should be hired. Is this appropriate? Is there a precedent?<issue_comment>username_1: While on the job market this last year, I talked with top departments that gave all their full time non-visiting members of the faculty a vote in tenure-track faculty hires. So there is definitely precedent.
Personally, I'm not thrilled by the shift at many universities to having a larger proportion of the active faculty be non-TT. But when this means that the *responsibilities* of non-TT jobs are similar, it is only reasonable that the *rights* should be similar too.
Benefits of doing so include all the things that come from a work environment that is perceived as more democratic and where all faculty get a vote in determining the future directions of the department. The drawbacks are much less clear to me.
Upvotes: 4 <issue_comment>username_2: Actually, there's a wide degree of latitude given to hiring processes.
At both of the universities I attended as a student, undergraduate and graduate students were involved in the selection processes for new *university presidents*.
Similarly, at the school where I currently work, undergraduate students regularly sit on the hiring committees for faculty hires, and can actively sink a nomination if they have concerns about a candidate's teaching credentials. (Normally, however, this implies that the other committee members have a bone to pick with the candidate as well.)
So it seems to me that there would be nothing wrong with a policy that lets non-tenured faculty vote on such a hire. After all, they are going to be colleagues, and it makes sense that there's a consensus.
Upvotes: 3 |