date | nb_tokens | text_size | content
---|---|---|---
2014/11/03 | 830 | 3,578 | <issue_start>username_0: My department has gone paperless and uses Moodle to provide all information to students in an electronic format. With a paper-based syllabus the expectations were cast in stone (okay, ink on paper). Now that we are on Moodle, we can change the syllabus whenever we want. Today, we had a staff member change the word count on an essay that is due in two days from a 1500-word maximum with a 10% allowance, whatever that means, to a strict 1500-word maximum. The students are confused and screaming about it. What type of departmental policy should be in place to prevent such changes?<issue_comment>username_1: Fundamentally, I don't think this problem has to do with the shift to a paperless format. Even with a syllabus on paper, my experience has been that a professor may well change it, often for good reason (e.g., shifting to make room for an excellent guest speaker, extending a deadline on a lab that many people are having problems with).
I think that the real problem here is that the faculty member has made a last-minute change that makes life harder for the students. Students who thought they were done with the assignment have just discovered that they have more work to do, it may be interfering with their other plans, class or non-class, and it just plain doesn't feel fair. Perhaps a good policy would be that no assignment can be made more restrictive once it has "started"?
There is also a place where the electronic aspect can inflame or mitigate the issue, and that may also address your original question. Online documents offer the potential for making a "sneaky" change that is not announced directly to the students. That seems to me to be something that should definitely be prohibited, and might be handled automatically by having the system send an announcement to all of the students whenever a course document changes. I don't know Moodle, so I don't know how hard or easy it would be to set up automatic notification; even without automation, however, you could certainly require that a notification be sent to students for all non-trivial changes.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I agree with username_1 that "online resources" is a red herring. The issue is simply that the instructor has made a last-minute change to the expectations for an assignment.
I think it's a pretty basic principle of teaching that expectations for assignments need to be provided to students in plenty of time for them to create the assignment. If expectations are to be changed after they are communicated to students:
* There should be a compelling reason for the change. (I can't imagine a compelling reason for 1650 words to suddenly be unacceptable.)
* Students should have a reasonable amount of time to take the changes into account.
* Changes that may invalidate work that students are likely to have already done should be avoided if at all possible.
* Students should be notified by some "push" method (email, announcement in class, etc).
However, I *don't* think that you need to turn this into a departmental policy. It sounds to me like you have one person who needs to be counseled about their teaching practices; take the matter up with that person. Imposing a department-wide policy for something that should be common sense is passive-aggressive, wastes the time of those developing the policy, and places an unnecessary burden on everyone else who now has to check whether they are complying.
(I made the same point [here](https://academia.stackexchange.com/a/9567/1010).)
Upvotes: 3 |
2014/11/03 | 1,344 | 5,451 | <issue_start>username_0: Firstly, I know that there are plenty of questions here like this one but hopefully this is not a duplicate.
I was born and grew up in Iran and belong to a religious minority (the Bahai faith). As you know, Bahais are not allowed to attend universities in Iran because of their faith. I was no exception, and so could not study at university because of my religious beliefs. Between the age of 18 (when I finished high school in Iran) and 24, I worked as a construction laborer. At the age of 24, my family (my parents and sister) and I travelled to Turkey and registered as refugees with UNHCR; two years later UNHCR sent us to Australia. I studied English for two years, and after that I repeated years 11 and 12 because Australian universities did not accept my Iranian qualifications. At age 30, I started a Bachelor of Electrical Engineering and I will graduate very soon. I am now 35 years of age and would like to apply to be a PhD student at one of the top 5 or 10 engineering schools in the world. I have maintained a GPA of 6.9 and a WAM of 93, and was on the Dean's merit list every year except the first year of my undergraduate studies. There is a possibility that I will also get the university medal, but that is not certain yet.
Should I explain to the graduate admissions committee why I started my undergraduate studies so late, at the age of 30, and what I was doing before that?<issue_comment>username_1: Such information would be relevant. The real trick would be to keep such a paragraph short and to the point; as it stands, your question would be a good draft of such a section. I have two comments:
I would not start the second sentence with "as you know". If a committee member did not know, you make her or him feel ignorant. It is good to avoid invoking negative emotions in such a letter. Especially since leaving that part out does not change the meaning of the sentence.
It was not clear from the text whether it was the Bahai faith that prohibited its members from going to university, or whether it was the Iranian government that prohibited people of the Bahai religion from entering university.
Upvotes: 7 [selected_answer]<issue_comment>username_2: I wish scaaahu had put that comment in an answer so that I could up-vote it.
Perhaps it will help you to know that I started the Ph.D. at age 56. It wasn't in a top ten university, but neither am I ashamed of my *alma mater*. Admissions committees are interested in potential for research and teaching. Show those and, with your background, you will get offers.
Do include a very brief explanation in either your cover letter or statement of purpose as scaaahu has suggested. It need not be as extensive as what you posted here. Just address the committee member who is thinking, "I wonder why...?" Something as simple as, "People of my faith are not allowed to study in university in my native Iran, so I got a late start."
Upvotes: 5 <issue_comment>username_3: I recommend that you put the information you gave us in your statement of purpose for a PhD application. At least for a US-style statement of purpose (which is usually about two pages) I *would not suggest* abridging the story you told us. Rather I agree with @scaaahu that your story is extremely compelling, much more so than what one normally reads in these kinds of statements.
If you can craft this as a narrative of the triumph of your intellectual interest and academic success over the adversities you've faced over a period of many years: look, that's awesome. If I saw that in a PhD application to my program (mathematics, UGA) then I would be passing your statement around for the entire admissions committee to read. If the other parts of your application were reasonably competitive, I would be well on my way to pushing strongly for your admission.
Let me end by saying that I was personally touched by your story. You have a lot to be proud of and will certainly serve as an inspiration to many others. Academia needs people like you.
Upvotes: 5 <issue_comment>username_4: DEFINITELY include information on what you describe. More importantly, though, you are a more mature student, and at your age you should show a very solid understanding of why you need a PhD to pursue your career goals.
"Atypical" students can be great additions for departments, but if I were the one doing the choosing, I would be looking for more than your history, academic or otherwise -- I'd be looking to see whether you understand why you want the degree, and what you intend to do with it. Give your admissions committee your whole picture.
Upvotes: 3 <issue_comment>username_5: There are some people that probably should "dodge" this issue. You are NOT one of them. So an explanation will help you, with very little risk.
You come from what most Western institutions would consider a "disadvantaged" background. You have succeeded in spite of that fact. You got a later start in university life because you got a lot of life experience in what we Americans would call "the school of hard knocks." That's very much to your credit. Most western universities would give a positive weight to "maturity" and sense of purpose in evaluating an application. You have both.
The kind of person who might have something to fear regarding age is someone from a (probably) rich family who had a "wasted" (or misspent) youth. You are not that person.
Upvotes: 2 |
2014/11/03 | 1,294 | 5,711 | <issue_start>username_0: I am applying to doctoral programs in the US.
Since I am not an English native speaker, I am wondering if it is okay to have native speakers help make my English perfect in my statement of purpose?
I ask because I am concerned that, since the committee definitely knows that I am not a native speaker of English, perfect English in my statement of purpose could lead them to be suspicious. By "suspicious" I mean the argument: if this person's writing is this good, then this person's TOEFL scores must be almost perfect.<issue_comment>username_1: Of course this is a good idea. Plenty of non-natives write excellent English, and getting your work proofread by a native English speaker is always a good idea, if possible.
It's not as if you are asking the native English speaker to write the text for you?
Upvotes: 4 <issue_comment>username_2: I think about English grammar and prose like personal grooming. If you are going to a big event where you need to make a good impression, it's fine to ask people to help make sure your clothing is well-chosen and being worn perfectly. Likewise, it's fine to ask people to help make sure your words are well chosen and don't have any distracting errors. A committee should not think any worse of you for asking for help making sure that your words are clean and clear. Rather, they should think better of you for caring enough to make sure you are presenting yourself well.
Upvotes: 5 <issue_comment>username_3: It's definitely a good idea (in fact anyone in your situation should do so), I have seen plenty of people around me in my graduate school who had done so. There even exist professional services specialized for this task (it's a pretty big business in some countries).
Regarding your concern that the jury might think that you have been "cheating", forget about it:
* by providing a flawless statement of purpose despite being non-native, you are showing the jury a lot of motivation;
* if the statement of purpose was used to assess your true level of English, they wouldn't ask for TOEFL/IELTS;
* when writing research articles later on, you will still often have a native speaker around to answer questions.
Upvotes: 6 [selected_answer]<issue_comment>username_4: Whether you're writing a statement of purpose, an application essay, a novel, a grant application, or even a research paper for publication, I think it's a pretty widely accepted "rule" that the *ideas* involved should be yours, but the exact way in which they are expressed does not have to be. After all, the goal of all these forms of communication is to convey your ideas to the reader. So it makes sense to take your ideas and express them in the best way possible. If you are not particularly good at expressing ideas (in a manner suitable for the readers) yourself, that means getting someone else to help you out.
People will not assume the writing style in a statement of purpose is necessarily representative of how you would write. If they want to know about your own writing abilities, they will ask about that specifically, for example by asking for a writing sample or by using something like the [TOEFL](http://www.ets.org/toefl). Now, there is some limit to this; for example, if you could not write or understand English at all, I think it would be misleading to have a friend completely *translate* your statement into English. But just having someone proofread your writing to improve the quality of the English is fine, and in fact is something even native speakers do all the time.
I would also note that the English doesn't have to be literally *perfect*. There are many small errors in English that very few native speakers can identify, and even fewer actually care about.
Upvotes: 2 <issue_comment>username_5: Do have somebody to look at the language/grammar you are using. But be aware of letting them write too much.
I have seen applications (for academic programs, but also for jobs) tossed for looking "too perfect", especially from non-native speakers. It was not a matter of the grammar being very good; it was a matter of the content "fitting too well": using all the expected buzzwords and displaying a cultural sensitivity for the German job market/academic milieu that a person with this biography could not have. Such an application looks like it has been written by a ghostwriter, or copied from a "how to submit applications" book.
I guess that a person sending such an application still has the chance to get invited for an interview, if the running is not close. But when a prof is paring down a list, a person with this kind of application can get thrown out early, because the information he provides is disregarded as "he is telling me what he thinks I want to hear, and it might or might not be really true".
So, there is indeed a case of an application being too perfect to be regarded as real. And if you are a non native speaker, you are more likely to hit such a barrier than a native speaker, because the professor expects a bit less proficiency from you.
But this barrier comes long after the case which you are describing here. We are talking about the kind of application that can only be written by somebody with knowledge not just of the language, but of the whole selection process and the culture of the country and type of organisation. If you express what you want to say in your best English, and somebody corrects your grammar and a few word choices, the chances that you come close to being regarded as "so perfect he must be fake" are astronomically small. So do let somebody edit your writing; it has advantages (described in the other answers) and it won't become good enough to disqualify you.
Upvotes: 2 |
2014/11/03 | 912 | 3,946 | <issue_start>username_0: I am disabled and have to live with my family. I don't want to go into detail about my condition. My experience in the public education system here was less than happy, including being used as a pawn to fill roles in programs that needed funding. My education history is confusing, but it has essentially left me with no chance to qualify for scholarships.
I've applied for financial aid at a local university. In response, they offer only loans, telling me that I do not qualify for anything else due to being claimed as a dependent. However, a loan would be financial suicide for me. Their disability counselors just shrug their shoulders or basically tell me to go die somewhere.
I've been finishing prerequisites here and there, though most of my learning has been done independently. I can easily test out of these classes (and most of their "graduate" curriculum as well, in fact), but they don't offer credit for doing this.
My parents make a comfortable income, but are not very intelligent with money and getting on in age. They will not be able to support me when they retire. They have to keep claiming me, otherwise I would no longer be classified as disabled and would lose my only pathetic source of income (disability pay requires that I am dependent, or so the SSA tells me).
**I want to transfer my credits *somewhere* and get the hell out, but I'll basically need to be funded entirely by financial aid. What can I do?**<issue_comment>username_1: Your problem is that you did not get enough financial aid from your local university. I think the solution is to apply to some other institutions. Many institutions think it is important to help students with disabilities. If you look around, you can find one that 1. Cares. 2. Has money to do something. 3. Has time to consider special cases. I suggest applying to smaller higher educational institutions because they are better able to adjust to special cases.
Upvotes: 1 <issue_comment>username_2: *This response is US-centric*.
It is most unfortunate that the local university was unwilling to provide any kind of financial aid except for loans. However, this need not spell the end of your academic journey. Here are several avenues you should explore before giving up.
* Begin your higher education journey at a community college. They offer lower tuition rates than almost any other institution is able to do.
* Apply for federal financial aid using the free [FAFSA](http://fafsa.gov/) application.
* Investigate other financial assistance and scholarship opportunities. As a disabled person, you should be eligible for various scholarships. There are various free scholarship websites which will permit you to find scholarships suited to your specific situation and interests. [Scholarships.com](https://www.scholarships.com/) and [Scholarship Experts](https://www.scholarshipexperts.com/) are among my favorites, but there are many other free services available, including [this comprehensive list of external scholarships](http://www.phoenix.edu/tuition_and_financial_options/scholarships/external-scholarships.html) provided gratis by the University of Phoenix.
* Make friends of the folks in the financial aid office. They are usually more than happy to walk you through the process of applying for federal, state, and local financial aid and may help you find other sources you hadn't considered or were unaware of.
* Find out where the community college Foundation Office (or similar office) is located and make a special effort to meet these people. Their job is to match students with sponsors and the rewards for your efforts can be substantial. (For example, they once put in my application for a scholarship I didn't know existed, and gave me half-a-semester's worth of tuition without any effort on my part!)
* Finally, contact a counsellor or other mental health professional. Depression is a serious illness. Get help!
Upvotes: 2 |
2014/11/04 | 1,816 | 7,381 | <issue_start>username_0: I was just wondering about this specific scenario.
Say someone was researching nature vs. nurture, but his experiments involved keeping babies in a controlled environment for the first 10 years of their life. His research is secret and he has ways to smuggle children for his research (don't ask how).
After 15 years, he publishes a paper and confirms that there is a set amount of characteristics that can be transferred via genes.
What would become of the researcher and his research? Will the researcher be jailed, but the research results recognized?
---
This question is about unethical research in general, not just ones involving human subjects.
P.S. No babies were harmed in the making of this post<issue_comment>username_1: Unfortunately, history has already forced this question upon us, and the answers are not entirely clear. The Nazis inflicted widespread and breathtakingly [horrifying human medical experiments](http://en.wikipedia.org/wiki/Nazi_human_experimentation) on their victims during the Holocaust. These yielded quite a bit of medical data, that some want to unearth and apply today.
This has ignited quite a bit of debate on the ethics of using this most obviously and supremely unethical research. The science may be dubious as well, given the circumstances under which it was performed. An excellent discussion of the dilemma may be found in the article ["The Ethics Of Using Medical Data From Nazi Experiments"](http://www.jlaw.com/Articles/NaziMedEx.html) by Baruch Cohen. In essence, Cohen argues that in certain extreme cases it may be possible to use the data, but only when accompanied by strong condemnation of the methods and only when it concerns information that is both otherwise impossible to obtain and of life-saving importance.
Nazi medicine is an extreme case, but [unfortunately by no means isolated](http://en.wikipedia.org/wiki/Unethical_human_experimentation_in_the_United_States), and the judgement of history and science on these studies contains less uniform condemnation than we might like. The modern consensus, however, seems to be that except in very unusual circumstances, unethical studies should not be rewarded in any way by recognition.
Upvotes: 7 [selected_answer]<issue_comment>username_2: Supplementing [username_1's great answer](https://academia.stackexchange.com/a/31122/49) on current research (not - digging old).
> ...but the research results recognized?
It seems "unlikely", perhaps unless the result is so evident, one cannot ignore it.
First, most journals have statements disallowing publishing unethical research. And without journal publication it is hard to get academic credit.
Second, if you managed to publish it somewhere, I bet that the reaction to its ethics will influence the evaluation of its scientific value. (Look at the reaction to any research results related to emotionally-charged topics. In this case it would be harder, because the reaction would be almost all-negative.)
Third, many people can think that if you are OK with one breach of ethics, you may be OK with breach of scientific procedures, or any other fraud (to support one's view of world, for fame, etc...).
> After 15 years, he publishes a paper and confirms that there is a set amount of characteristics that can be transferred via genes.
I bet:
* if you show that some traits are genetic, it won't be recognized (claiming that you are a racist),
* if you show that certain genes are responsible for certain traits, this result may be recognized (as it is easy to test it, and in more ethical way).
My personal stance is that all data should be used. (All in all, we use historical data from wars and atrocities, rather than forgetting the history; we can't change the past, but we can change the future.) However, creating lack of incentives to pursue highly unethical research might be worth it.
Upvotes: 2 <issue_comment>username_3: The problem here is that ethics change with time, location, education and religion.
In his answer, username_2 points out that unethical research would never be validated; to that I would rebut: "What about animal dissection?"
It has been banned as unethical by many countries, but many papers still use findings from it.
The same goes for much research by the Nazis; it is highly unethical NOW, but at the time it was ethical for a large number of scientists. The well-known [Bayer](http://en.wikipedia.org/wiki/Bayer) at the time "engaged in human experimentation on Auschwitz prisoners, often with fatal results."
Some researchers at "IG Farben" even got a (still valid) Nobel prize "for the discovery of the antibacterial effects of prontosil", and there are many more examples.
So I would say that ethical studies may easily be based on unethical papers, if those papers are valid.
And new unethical papers will need more time to be recognized (and the author will probably be imprisoned), but this is because it will be harder to fact-check the experiment in an ethical way.
Upvotes: 3 <issue_comment>username_4: There is a situation in which the exact situation described in the title regularly happened and (most likely) still happens: **military research**.
There are multiple examples of knowledge acquired by the militaries of several nations and regimes during secret and ethically problematic (euphemism intended) investigations and experiments.
Some of the results from these are a part of our everyday life: aviation (and transportation in general), nuclear fission, some aspects of medicine and surgery, telecommunication, geolocalization, etc.
It seems like the results of these experiments are not disregarded, although, contrary to the example you mentioned, most can *also* be investigated and validated with ethical approaches.
Upvotes: 3 <issue_comment>username_5: To get some perspective, what if this doesn't happen in academia but in "real life"? If the person "conducting unethical research" is a police officer searching your home without a warrant, and the "positive result" is that he or she finds conclusive evidence that you committed a crime?
In that case, the rule is that this "positive result" cannot be used in any way whatsoever. Not only can the conclusive evidence not be used in court, it cannot even be used as a reason to investigate you further.
That seems to be the correct way to handle the situation in academia as well: The results of unethical research should be completely ignored, so there is no motivation to conduct unethical research at all.
Upvotes: -1 <issue_comment>username_6: I think <NAME>'s comment is worthy of an answer:
> There's also the question of reproducibility: where on earth will you find a second evil scientist who also happens to share research interests with the first?
Setting aside any legal consequences that may befall the researcher, any study that doesn't have reproducible results is not going to alter the opinions of many. If the results are interesting enough, perhaps someone will find a way of testing them legally (perhaps with non-human subjects?), and so forth, just like any other study.
There have been quite a few "studies" performed that would be illegal under today's law, yet the results from them have not been discarded. I see no difference in this case (with regards to the research).
Upvotes: 0 |
2014/11/04 | 965 | 4,064 | <issue_start>username_0: There is a professor at our school whose textbook I was using in class A (not taught by the professor). I've studied the textbook from cover to cover and have spotted about twenty typos (many of which are quite serious, e.g. make an exercise unsolvable). I wanted to TeX them into a list and send it to the professor yet the following two aspects concern me:
* I'm currently enrolled in his class B (not the book's subject). It's a rather small class and he knows me by name. Wouldn't it look as if I'm trying to improve my standing in his class by submitting the list?
I know that the professor is aware of some typos (since he commented on a couple of them while teaching from the book two years ago). Yet for some puzzling reason there is no errata list on his website. This gives me the impression that he might be somewhat unhappy to see an extended list of typos.
The professor is working in the field I'm interested in, so I'd definitely like to make a good impression (and more importantly not to make a bad one). So is there a way that sending the list could harm me?<issue_comment>username_1: Unless the professor is a total jerk, I don't see any way this could hurt. My experience has been that academics in general are quite happy to hear from people who are interested enough in their work to offer corrections. (This has been the case even with authors who, for whatever reason, don't post errata.)
I wouldn't worry about being seen as kissing up. But if you are concerned you could always wait to send it until after the end of the term.
Before sending it, you may want to casually mention: "I've been reading your book which I really like. I did notice a few typos though. Is there an errata list posted somewhere? If not, I could make a list of the ones I found and send them to you."
If there is any chance that a given typo is not really an error, but something you have misunderstood, you can be more delicate by phrasing it as a question. Instead of "On page 34 you forgot to require that X is compact" you could say "Are you sure the argument on page 34 works without assuming that X is compact? Isn't the punctured plane a counterexample?"
Upvotes: 6 [selected_answer]<issue_comment>username_2: In my experience, most textbook authors are happy to receive errata reports. I've sent many off over the years, and as a textbook author I'm happy to receive them. However, many of the reports of errata that I receive as an author are actually cases where the reader has a fundamental misunderstanding of the material.
So, when you submit your corrections to the author, please be polite and friendly about it, and be prepared to find out in some cases that the book is right and that you've misunderstood something.
Upvotes: 4 <issue_comment>username_3: Yes, if your professor is a decent human being and good at his/her job, you should definitely do so – though it's nicest to ask “Would you like me to send you any corrections I find?” first, rather than baldly pointing out the mistake. It's also much easier for the professor if you accumulate them yourself and give a **detailed, consolidated list** rather than mention them at random times during the class. Unless the typo might hinder the class's understanding at a particular point in the class, it's also probably best to mention it **privately** (and let the teacher mention it as he/she feels appropriate).
I had one professor in particular who would give a tiny amount of extra credit to those who spotted typos – enough to add up if one was quite helpful! Another professor gave me a printed copy of the new edition of lecture notes after I had gone through it and pointed out a substantial number of potential improvements.
Academicians, perhaps more so than others, have an interest in making sure that their printed materials are as good as they can be. As long as you are friendly and non-confrontational in pointing out typos, they should appreciate the opportunity of making these materials better while saving time.
Upvotes: 4 |
2014/11/04 | 2,776 | 11,596 | <issue_start>username_0: Is it better to wait a couple of years and publish in order to get into a preferred program, or to go to any program that would accept me?
Sorry in advance for the length of this document. It is in part about writing down and refining my ideas, and inner turmoil, in writing. Thanks for your patience.
So, publish now and go to a top tier school later or go now to a lower tier school and build a reputation afterward?
I have no illusions that I could get into a good program now. I graduated top of my class at the BBA and MBA level, but that was 10 years ago at a regionally ranked college. I have only been self-employed, doing stock trading, so no real references there, nor great job experience. I wrote an MBA thesis but didn't publish it. Moreover, my 2-year MBA program is apparently not perceived to be in any way a precursor to a PhD, and some universities I looked up even require MBA holders to re-do a "proper master's" prior to applying to PhD programs.
I long to attend a top PhD program. I know it should matter less than a good fit, but I have reasons for wanting that. Since my school is virtually unknown, I desire the recognition that a top-ranked diploma gives. Also, if I go the academic route, I believe, maybe mistakenly, that famous names do matter. I also think that their PhD programs are much more rigorous than those at lower-tier schools, which would result in better preparation for publishing in leading journals. An equally important reason is that I would be seeking a full fellowship as a foreign student, and top programs tend to have the most available and accessible financial aid funds.
So I was thinking that I should try to publish several papers in solid journals. My understanding is that this would greatly help to ensure admission at top universities, even if it would delay my application by a year or two.
If I applied now, I do not think that I would be likely to get into a decent program or even to get financial help. I also find the whole application process a bureaucratic nightmare, and an expensive one at that. Plus, I would rather bother my former professors for reference letters only once, and for a good enough school.
In order to do some research and write papers now, I will virtually need to design my own PhD program, which will be I believe similar to a lower rank school formal PhD. I have to read widely in adjacent fields, review research in my field, polish or learn new quantitative data analysis methods, learn and develop a variety of new skills.
My belief is that if I do all that properly, I would not only be accepted at a prestigious university, but my PhD years should be a breeze. This in turn would allow me to focus on producing and publishing higher quality papers during my PhD. The net result should be that I graduate from a top program with almost as many quality publications as someone who went into academia directly, without a 10-year gap. This number and quality of publications, plus the degree from a top university, should fast-track me to tenure at a top school, or at least to a prestigious post-doc, research grant, or position.
Please comment and tell me what you think. I am making a lot of assumptions, and I have beliefs that are grounded in my present understanding of the academic world. I would greatly appreciate knowing whether I am wrong, making partially faulty assumptions, or making totally erroneous suppositions.
---
**Update 2014.11.4:** Thank you all for your honesty. I did say that these were assumptions and that I was not sure whether they were true. I am still rather unfamiliar with the academic world. I realize now that my post was much more naïve than it should have been. From my thesis I know that I do like research. I like the feeling of an organized world after stumbling in the dark. I like the almost physical stretching of my mind as I grapple with complex ideas. I like the sense of achievement of having written a monumental piece and being one of the world’s experts in my field. I believe that I got a similar experience with my MBA thesis, which was 250 pages long, though much lighter in original findings than a PhD’s, I am told. For the multiple papers, I thought of cannibalizing my old thesis and other old projects, but I guess that may not be realistic as they are rather outdated and maybe not that original. I hope that these answers also help others who may have had a similar strategy in mind.<issue_comment>username_1: Although @Niko and @xLeitix are spot-on that your plan is not feasible, I will try to elaborate a bit more to clarify some details. Note, do not take any of this personally, since I know neither you personally nor your abilities, but I need to warn you of some misconceptions you may have.
>
> So I was thinking that I should try to publish several papers at solid
> journals. My understanding is that it would very greatly help to
> ensure an admission at top universities, even if it would delay my
> application by a year or two.
>
>
>
This is simply not feasible, bordering on delusional. A good journal needs 3-12 months to review a paper AFTER submission. How will you be able to submit several papers and get them accepted in a year?
Papers are not just ideas. In practical sciences like finance, you need data, a theory, and experiments to show that your ideas have any worth. Do you have any data to work with, or do you simply think that your ideas are so revolutionary that they deserve publication? If you believe so, you should seriously reconsider.
Another reason that your plan is not feasible is that a typical PhD student needs at least 6-12 months for the literature review **if he knows more-or-less what he is going to research**. That is what advisors can usually help with. They have identified a subject / gap in a specific area and point the student (who has some common interests) to it, and then the student has to exhaustively search for any work on this specific, tiny area of research to see what has already been done and what can be done. This process does not happen overnight, as you presume. Note that during the months of doing this literature review, if this is an active area, several new papers might appear which make your initial idea obsolete. And then you have to start over.
>
> In order to do some research and write papers now, I will virtually
> need to design my own PhD program, which will be I believe similar to
> a lower rank school formal PhD. I have to read widely in adjacent
> fields, review research in my field, polish or learn new quantitative
> data analysis methods, learn and develop a variety of new skills.
>
>
>
So you believe that a lower-tier PhD = doing research on your own. That would make advisors at every school (except the top ones) obsolete. If everybody could do research on their own, there would be no need for PhD programs. And without wanting to sound mean or harsh, why do you believe you can do it on your own? You have not published anything, and you do not have real industry experience or work that remotely touches research. Also, believing that one can do research entirely on one's own is one of the characteristics of [cranks](http://en.wikipedia.org/wiki/Crank_%28person%29). I am not suggesting that you fall into this category, but keep this in mind too.
Another thing to watch is that you presume you really like research before having actually done it. Research is not always **fun**, as many posts on this SE site emphasize. So, make sure you like it before proceeding.
So, perhaps you should tone down your idea about "top-schools or nothing". You should always aim as high as possible, but within your reach. In your shoes, I would apply to as many good (but not necessarily top) places as possible and, depending on their answers, re-evaluate my plan. During this period, I would also do a literature review on the subject that really interests me, find the active people in this area, and monitor their work. This will give you a head start when / if you start a PhD, and it will also test whether you really want to do a PhD during this necessary, yet tedious, part (the literature review) of any PhD.
Upvotes: 4 <issue_comment>username_2: To complement username_1's answer, let me put this to you: if you are smart and talented enough to write a couple of papers in a year with no supervision or training and publish them in strong journals (which as he says, is fantastically unlikely; I have never in 15 years in academia encountered anyone capable of that, I would say), then you're certainly smart enough to enter an undistinguished masters or Ph.D. program, blow away all the professors there, and get letters of recommendation that will allow you to get into a top program. That's not easy, but I have seen it happen. That's a much more plausible road to success than what you've suggested.
Upvotes: 3 <issue_comment>username_3: Although I think you should be more realistic, I am in favor of your ambition!
It frequently happens that a weak environment has bad effects on your path in life which cannot be compensated for later. Weak people try their best to convince you that you are like them, while they are wrong not only about you but also about themselves! Unfortunately, the decision is very dynamic and complex, and you have to base it on "expectations" of what is going to happen. It is also possible that you enter a low-level university where the people are encouraging and you can flourish, and vice versa.
You should consider some facts:
1- Ambition is always good, but keep your eyes open about what the reality is.
2- Your performance in the future will be very similar to your performance in the past unless something special changes (don't expect any revolution, but some occasional improvements).
3- Put away your fears. As far as I understand, you are obsessed with some things, and they prevent you from thinking properly. For example, you are afraid of asking your professors to write reference letters for you! Ask them in an appropriate situation (time and place) and they will help you every time.
4- In reality, you cannot have everything in the way you imagine. To have one thing, you have to forgo others. The one who wants to have a peacock must deal with the hassles of going to India (a Persian proverb which means: the best fish swim near the bottom)!
5- For someone like you, I suggest engaging in a process of trial and error! I feel that you think more than you act! Start a PhD at a university which is not top-ranked and get feedback from your environment to see whether it is possible to reach what you want. If yes, what should your approach be on the path towards your goal (what level of concentration and hard work is needed? should you change your environment? etc.)? Get feedback every now and then to prevent big mistakes.
6- As a last general (but useful) point: accept that you don't have lots of things at time "t" and place "P"; this means that you can live without them! Being happy (and using what you have in the best way) or sad (and wasting what you already have) depends on your choice! Be happy with what God has given you, but ask him for "the best possible".
I wrote this late at night, so forgive me if there are mistakes.
Upvotes: 0 <issue_comment>username_4: I think you should go to the best PhD program you can get into now, to access more resources. Then you could publish much better work than you can at present, and afterwards apply for visiting graduate positions, PhD transfers, postdoc positions, or even an AP position at a better university once you have published very good work.
Upvotes: -1 |
2014/11/04 | 356 | 1,567 | <issue_start>username_0: I am a full-time researcher at a research institute; we are part of a university, but the institute does not have students, just researchers. I mostly have assistant researchers and interns who are undergrads. I am thinking of bringing on a final-year high school student who is really promising and has been winning some awards.
They are interested in me writing a recommendation letter (to US schools, but we are outside the US). Is this advisable?
My main concerns are the appropriateness of a research-based letter for undergrad admissions, as well as my not being a Professor or senior faculty, as mentioned in questions specific to graduate school. However, when I think about myself, I am pretty sure I had high school teachers write my recommendations, none of whom had a PhD nor, obviously, were professors.<issue_comment>username_1: You're giving the student an extraordinary experience compared to her fellow applicants, so it's natural for her to ask you for a letter explaining that experience and your opinions of her work. And I think it's very reasonable to provide one.
Upvotes: 3 <issue_comment>username_2: I would say you're expected to write letters for high school students who work for you. Students usually do research to get into college, and in many cases, your letter is the whole reason the student chooses to join your lab.
When I applied to college, the application asked for letters from a science and a humanities teacher, but also allowed us to submit an optional letter (which would presumably be yours).
Upvotes: 0 |
2014/11/05 | 494 | 2,341 | <issue_start>username_0: I am applying to doctoral programs in the US and I am wondering: is it common for applications to be filtered before they reach the hiring committee?
In the typical selection and admissions procedure for doctoral students, what happens after the deadline for applications has passed?<issue_comment>username_1: For US universities, applications are almost universally prescreened for completeness and for meeting any stated minimum requirements. The prescreening process is almost always extremely rigid when minimum requirements are set. The wording on websites is often confusing, since sometimes schools want to be able to make special acceptances for people who do not meet the minimums. This requires someone from the department to beg and plead with the admin team. One place where there is generally no flexibility at all is TOEFL requirements. It would be perfectly reasonable to call/email the department and ask; large departments get many such requests every year. Just realize that the answer will likely be that the minimum is the minimum.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Although it's hard to say that there is a "typical" process, most PhD admissions are handled both by an administrative staff member (often with a title like *Graduate Program Administrator* or *Officer*) and by an admissions committee made up of faculty.
**The administrative staff will generally filter applications.** For example, if applications are incomplete (e.g., missing grades, test scores, or recommendations), the administrator will often remove these applications from the pool. If the department has firm requirements (e.g., minimum TOEFL scores or GREs), the staff will often remove applicants who do not qualify from the pool.
This smaller pool will be reviewed by the faculty. In many departments, this will be by the graduate admissions committee who may then reach out to individual faculty members who seem like potential advisors to strong applicants. In some other departments, the files of students may be sent directly to potential supervisors by the administrative staff.
Faculty will usually be able to see the unfiltered pool but will not often look at the applicants who have been filtered out by the staff based on the most objective criteria of application completeness or eligibility.
Upvotes: 2 |
2014/11/05 | 1,915 | 8,166 | <issue_start>username_0: I am preparing an exam for a course I'm running. It's an engineering course; the exam problems require students to apply conceptual knowledge and quantitative skills taught in lectures and labs.
It's the first time the class has been offered, so I don't have a frame of reference from previous years. There haven't been any in-class quizzes or other opportunities for me to find out directly how long students would take to solve problems similar to the ones on the exam. I also haven't found any similar exams from other universities to serve as a reference.
Obviously, I am much more experienced in the material than my students, so I can't really generalize from how long it takes *me* to solve these kinds of problems.
This question is for the more experienced educators out there: **in a scenario like this, are there any methods or general rules for determining how long you can make the exam, given the time allotted?**
I don't want time to be a major issue on this exam; I want most students who are reasonably well prepared to be able to complete the exam in the given time.<issue_comment>username_1: >
> in a scenario like this, are there any methods or general rules for determining how long you can make the exam, given the time allotted?
>
>
>
The way I always do it is to give the exam to my student assistants (TAs, master's students, undergraduate researchers) and see how long it takes them. The closer the students are to your average well-prepared course participant, the better. And, obviously, you want to add a bit of leeway to allow for the fact that your course participants, unlike your exam testers, are going to be nervous and in a test situation, and that you don't want time to matter too much.
**Edit:** Bob beat me to it in a comment.
Upvotes: 4 <issue_comment>username_2: In my experience, the ratio between the time needed to solve a problem by an experienced teacher and by the "average student" can vary a lot, depending on the subject, the kind of problems, and even between problems of the same kind. At one end, there are problems whose solutions are pretty straightforward but which require a lot of tedious calculations, for which no shortcut exists: in this case the solution time is almost the same for the professor and the students. At the other end, there are problems which require finding a "smart" solution, where few calculations are involved: in this case, an experienced professor can solve a problem in much less time (ratios of about 4 between the solution times are not uncommon).
So, even if you can give exam problems to TAs for testing, try to judge carefully what kind of problems you have prepared; this might allow you to better trim the exam duration.
I typically consider a ratio of around 3 between the exam time and my solution time (Electronic measurements).
Upvotes: 4 <issue_comment>username_3: This is totally unscientific, but for my exams (mathematics) I use the following rule of thumb:
After writing the exam, I sit down with a stopwatch and work the exam from start to finish. Of course, I know how to solve the problems (since I wrote them) but I go carefully through all the steps and write what I would consider a thorough and exemplary solution. I note the time I spent on each problem.
Then I take the total time and multiply it by 3 (or sometimes 4). If this exceeds the allotted time for the exam, I remove or simplify some questions. (This is where it helps that I wrote down the time I spent on each question, so I can remove a question and recompute the time without actually retaking the whole exam.)
As a side benefit, this also helps ensure that I haven't made any mistakes in creating the exam, and that all the problems have the solutions I intended. It also gives me an answer key.
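A minimal sketch of the bookkeeping this answer describes (the per-question times, the 50-minute slot, and the ×3 factor below are made-up illustrations, not values from any real exam):

```python
def trim_exam(question_minutes, allotted, factor=3):
    """Scale the instructor's per-question solving times by `factor`,
    then drop questions from the end until the estimate fits the slot.
    Returns the kept per-question times and the scaled time estimate."""
    kept = list(question_minutes)
    while kept and factor * sum(kept) > allotted:
        kept.pop()  # remove (or simplify) a question and recompute
    return kept, factor * sum(kept)

# Five questions took 4+6+3+5+7 = 25 minutes to solve while writing the key;
# tripled, that overshoots a 50-minute slot, so the last two are dropped.
print(trim_exam([4, 6, 3, 5, 7], allotted=50))  # -> ([4, 6, 3], 39)
```

Recording the per-question times once, as suggested above, is what lets the loop re-estimate after each removal without retaking the whole exam.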
Upvotes: 5 <issue_comment>username_4: I like Nate's suggestion to multiply by 4 if your exam is all essay questions. If you're concerned about students not finishing, you could use multiple choice and short answer questions.
Multiple choice and short answer questions force you to ask focused questions that can be answered quickly. When I was in undergrad, I always liked tests that were multiple choice and short answer because I knew that each multiple choice question would take 1-2 minutes, each short answer 2-3, and each essay question usually 7-10.
You could use those rules of thumb to build out a test that you are sure students will be able to finish.
You might find [This multiple choice test primer](https://testing.byu.edu/handbooks/betteritems.pdf) helpful.
Upvotes: 0 <issue_comment>username_5: If the person setting the exam has no idea how the questions are going to play in practice, the students are presumably in exactly the same situation. That may add to nervousness issues, and make it hard for them to prepare for the test.
How about giving a practice exam, which may be shorter than the real thing but use similar questions, during a class period a couple of weeks before the actual exam? If you do the questions yourself, or have a TA etc. do them as already suggested, you can use the practice test to calibrate the ratio between the time for the actual students and the TA's time.
The students will also benefit by seeing what sort of questions you pose, with an opportunity to discuss them with you, with the TA, and among themselves. That will help them prepare for the actual test.
Upvotes: 2 <issue_comment>username_6: Here's what I do, even in courses I've taught often. It doesn't answer the question you asked, but it may serve your purpose.
I always try to make up an exam that can be done in the time allotted, but I almost never succeed - I get carried away making the questions interesting, in hopes that students will actually learn from the exam as well as demonstrate what they know. I announce my failing in advance, so students won't be surprised. I make sure to tell them that since I know there are some A students in the class, I am morally bound to curve the exam so that the top grades are A.
I tell the students that after they turn in their (timed) exam they should take the questions home and come to the next class with a paper with the solutions they wish they'd had enough time to write. I tell them that extra effort won't necessarily replace their timed work, but can improve their grade.
Since I almost always give open book open notes exams, the fact that they can look things up at home isn't a real bonus. I'm aware of the fact that they can get extra help at home (i.e. cheat) but I'm always uncomfortable designing limitations to catch cheaters that deny the majority of honest students a chance to learn more.
Upvotes: 2 <issue_comment>username_7: Exam Time Calibration
* 1/8 time for professor to solve a freshman level exam
* 1/7 time for professor to solve a sophomore level exam
* 1/6 time for professor to solve a junior level exam
* any of the above may be limited to a factor of 5 by handwriting speed; treat as "1/5" for all undergraduate exams
* 1/5 time for professor to solve a senior level exam
* 1/4 time for professor to solve an elementary graduate (master) level exam
* 1/3 time for professor to solve an intermediate graduate (specialized master) level exam
* 1/2 time for professor to solve an advanced graduate (doctor) level exam
The rule of thumb is to time yourself writing an answer key with all required steps, then multiply by the appropriate factor. The factor is the inverse of the fraction listed above (e.g., 5 times your own solving time for a senior-level exam).
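The arithmetic of the calibration list above, as a small sketch (the level names and the 27-minute answer key are illustrative; the factors are the inverses of the fractions listed):

```python
# Inverse-fraction multipliers from the list above: e.g. a senior-level
# exam is budgeted at 5x the professor's own answer-key time.
FACTORS = {
    "freshman": 8, "sophomore": 7, "junior": 6, "senior": 5,
    "master": 4, "specialized master": 3, "doctor": 2,
}

def exam_minutes(answer_key_minutes, level):
    """Estimated student exam time from the professor's timed answer key."""
    return FACTORS[level] * answer_key_minutes

# An answer key that took 27 minutes to write, for a senior-level exam:
print(exam_minutes(27, "senior"))  # -> 135
```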
I remember taking a final exam for a Master's level class: I finished in one hour, one classmate finished in two hours, a third person took four hours, and the rest of my classmates took five hours. The lower third of the class had not finished the exam after 5 hours. The exam was scheduled for three hours.
I remember noting down the time, in 30-second increments, that I took for each problem on a homework assignment, so my professor wrote his own time writing the answer key right next to each of my times. I was faster overall and on most problems (youth thinks faster), but slower on some problems (wisdom thinks smarter).
Upvotes: 0 |
2014/11/05 | 1,760 | 7,816 | <issue_start>username_0: This is a question from a newbie doing reviews for journals/conference papers.
What should you do if you are among the panel of reviewers for a specific journal or conference, and you get a paper whose subject material you are not an expert in, or are not fully aware of all the details the paper focuses on?
Is it the norm to state that you are not well enough versed in the particular area and thus cannot give a qualified opinion, or should you just do the best you can to review what you understand of the paper, and hope that the other reviewers will do a better job, compensating for your weakness? Are there any other options?<issue_comment>username_1: First off, "fully" is a very relative word, somewhere between expert and novice, which makes a specific answer difficult at best. Editors try to identify persons they believe can provide a good, constructive review of the manuscript in question. If that is the case, you are viewed as having that expertise. Note that it is not uncommon for an editor to appoint experts with different specialities to cover different aspects of a manuscript. It is the responsibility of an editor to select reviewers with care to make sure a manuscript is scrutinized fairly and by peer "experts".
Now this system is not fool-proof, and it is therefore the responsibility of a reviewer to decline to review if they think they are not in a position to take on such a review. There are of course many other reasons to decline, but that is a different story. So in your case, you need to figure out, based on the information you have received, whether you have the background to provide input on all or significant parts of the manuscript in the request. When taking on the first reviews in a career, you may ask your advisor or peers about the task, but remember that the fact that you have been asked is not necessarily open information (in, for example, double-blind reviews where anonymity is requested).
So, think about how you can contribute. If you do not see that you can provide input, decline. Otherwise, take on the review. Reviewing is an integral part of academia, and getting started is necessary at some point. It can also be quite rewarding, since you can gain insights into new science as well as into ways to (or not to) disseminate science.
Upvotes: 4 [selected_answer]<issue_comment>username_2: An editor making a review assignment will usually make sure that you are an expert in the field before requesting you to do the review. However, many times the editor will not know your work personally and will rely on a database where your set of skills and areas of expertise are listed.
For regular journals, you are given the opportunity to decline the review. You can do this for a number of reasons, most commonly you're too busy or not an expert in the field (if I get a review request from a crappy "predatory" journal I just ignore the email). The editor will usually then ask you to propose alternative reviewers that might be interested in acting as referees.
In special cases, such as conferences, authors might be *expected* to act as referees for the other attendees. This is an ethical commitment, since you should reciprocate the efforts of other authors who spend their valuable time reviewing your paper. The editor will again try to ensure the paper sent to you falls within your field of expertise. However, if you are not an expert and feel that you will do a bad job, it is not fair to the authors for you to accept the review. Contact the editor instead to point out the issue and ask him/her to send you another paper more within the area in which you're knowledgeable.
Upvotes: 0 <issue_comment>username_3: For a multidisciplinary journal or conference, I will sometimes deliberately assign a paper one reviewer who knows significantly less about the subject. The intention is to have a slightly more detached perspective who can say whether this paper is interesting and intelligible to anybody outside of its narrow sub-sub-sub-field. It's also rare to review a paper that you are perfectly knowledgeable about, since science has so many different aspects.
What you should do when you do not perfectly understand a paper:
1. Be extremely clear on which parts you are confident that you understand and which you don't.
2. Do not assume the authors are wrong if you don't understand. It may be one of the gaps in your knowledge.
3. Do not assume the authors are right if you don't understand. They may be blowing smoke in your eyes.
4. Explain what, if anything, you found of value in the paper despite your lack of knowledge.
If your review, in combination with the others, is not sufficient, it is the responsibility of the editor or chair to obtain another. It is not your job to determine how the reviewers are distributed.
That said, if you are completely and totally lost, contact the editor / chair who assigned the paper to you and check with them. Depending on their intent, they may take you off the paper, or they may tell you that this is exactly what they want you to write down.
*If you need to do this, do it soon* - it's terrible form to screw up somebody's reviewing schedule and either create a last-minute crisis or an extra delay for the authors.
Upvotes: 2 <issue_comment>username_4: For most journals and conferences that I've reviewed for, I had to give at least two grades: one overall (something like 1 = clear reject up to 5 = clear accept) and one for how I would rate my own expertise in the subject. Sometimes there are also other grades, such as how well the paper is written, etc. In this case, the question answers itself. Otherwise, there's usually a "notes to reviewers" section where you can comment that you're not an expert in the area.
I would only decline a review due to not being an expert if it is a journal or conference specifically aimed at people who are experts in the area - in which case, whoever asked me to do the review has made a mistake in nominating me.
Otherwise, I think one of the things a journal is deliberately looking for is how well the authors can explain and present their work to a non-expert in the area. Even if you don't get all the details, you should judge how well the paper gives you a general idea of what the authors are doing, how it relates to previous work, what the novelty of this particular paper is and why the subject is important etc. In my opinion, the best review panels contain at least one expert (who will be able to comment on details) and at least one non-expert (who can focus on the bigger picture).
Upvotes: 0 <issue_comment>username_5: Virtually all review forms that I have come across so far featured a field such as *Reviewer confidence* or *Knowledgeability of reviewer*, which was meant for exactly that purpose - to state how confident you feel in the respective topic.
What will be done upon that field will be up to the program committee:
* With too low a confidence, the program committee may decide to give your review a lower (or, in extreme cases, a very low, down to zero) weight.
* On the other hand, the goal might be to intentionally involve a mixture of differently knowledgeable reviewers.
* Based upon the reviewers' self-assessed expertise, the program committee might also decide to involve another, possibly more knowledgeable, reviewer.
Your self-assessed confidence level will often be cross-checked by a question asking you to explain the contents of the submission in your own words. That helps further to determine how and what parts of the submission you understood.
So, in short: Expressing how knowledgeable you are in the field of the submission is expected by program committees, to a point that often, a dedicated form field is provided for that information.
Upvotes: 0 |
2014/11/05 | 1,101 | 4,717 | <issue_start>username_0: I am applying to doctoral programs in the US, and I am wondering:
Is it advisable to name the keynote speaker of the local conference where I presented a paper recently?
The keynote speaker is a well-respected expert in my field, and I thought of mentioning his participation in order to highlight the merit of this conference in my statement of purpose.
It seems to me that only a conference with a certain level of importance would be able to get such a reputable expert as a keynote speaker.<issue_comment>username_1: If the conference is well-known enough, then it isn't necessary to name the keynote speaker, because the conference will be recognized regardless. If the conference isn't well-known in the field, name-dropping is going to come across as crass and not very helpful.
So in either case, I don't see any advantage to name-dropping.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In addition to username_1's answer, and to stress his point a bit further, you should probably *not* do this. There are a number of reasons:
* It provides only *very weak* support for the story you are trying to sell ("This conference was really quite good, because ... at least one important person attended when they paid all costs for her/him."). Really, there are way too many reasons why a famous person might attend a meeting. Maybe the conference organiser is an old friend of the famous person? Maybe the meeting is at a nice location, and the famous person just wanted to have a nice beach vacation, all expenses paid? Maybe the famous person simply was unaware that the meeting was in fact terrible until it was too late?
* It sounds like you are absolutely *desperately* fishing for something good to say about this conference. A reader will wonder why you found this typically irrelevant tidbit of information so important that it had to be mentioned specifically. Is this really a train of thought that you want to invoke?
* It is just not something that is typically done, and doing something against convention in your CV / application material always has at least a small risk in itself that it will not be looked favourably upon, for reasons that you cannot always predict. A good example for this is the [h-index](http://en.wikipedia.org/wiki/H-index). Two or so years ago, I would always report my h-index, basically assuming that people would either ignore the info (if they don't believe in bibliometrics) or value the information (if they do). In the meantime, I figured out that there is a significant group of people that I would really *anger* just by mentioning the h-index, and so implying that it has any relevance whatsoever. I am not saying that mentioning the keynote speaker of a conference is the same, but it is sometimes hard to know in advance who will be annoyed by what.
And, most importantly, I think the chance of this information having any *positive* impact is so small that it is not worth the paper space, even aside from the reasons above.
Upvotes: 5 <issue_comment>username_3: If you had a nice conversation with this speaker at the conference and admire her work, that might be worthwhile to include on an application in a section where you're explaining your interest in the research topic.
However, it's probably better to be extra careful about naming the keynote speaker or other prominent researchers that you haven't been directly involved with. Depending on how you write your application, you run the risk of *implying that you are connected to their work*. This is something people hiring PhD students pay attention to and will check up on. If they contact this person and they haven't even met you, that is not going to reflect well on you as a prospective hire.
The fact that a respected individual attends a conference or agrees to speak at a meeting is not a tacit endorsement of everyone else that attends the conference.
Upvotes: 3 <issue_comment>username_4: Do not add the keynote speaker to a listing. Unfortunately, the quality of a keynote speaker has *nothing* to do with the quality of a conference. The problem is that [even top people are often willing to lend their name to a dubious venture for money](http://liorpachter.wordpress.com/2014/10/31/to-some-a-citation-is-worth-3-per-year/). Low quality or predatory conferences often attempt to puff themselves up by trying to bring in famous keynote speakers, especially ones who are well known but past their prime. In fact, you should be very suspicious of a conference if it has good keynotes but unknowns on its program committee.
If the readers respect the conference, it will stand on its own. If it is not good, nothing will save their opinion.
Upvotes: 3 |
2014/11/05 | 3,794 | 16,005 | <issue_start>username_0: At the class quizzes or at the mid- or final exams, I can see that some students are too anxious and nervous. They are not really weak students but too much anxiousness brings their efficiency down and they even may fail their important exams. In one of our important exams I could see that a student was that much nervous that he could not really sit on his chair, another one was so sick that needed to go to the bathroom or see a doctor. This may happen once per year but seeing such uncomfortable students is so sad. I really feel that I have to help them as I am their teaching-assistant.
As a teaching assistant, when I see one or two nervous students in my class, and they feel they need to talk about their anxiety, I try to calm them down before the exam by speaking with them privately after class. If they are good students, I remind them that they are doing very well and should not let nervousness ruin their exam mark, and I describe their academic potential and excellent progress to the professor. Of course, I have no expertise in anxiety, so I don't address their nervousness itself; I ask them to see a doctor who can help with that. I only try to talk with them and help them with their problems in the course I am teaching. If a student is not making good academic progress and I see that the anxiety stems from academic performance, I spend more time answering their questions during my office hours at the university.
During the exam session, if such students ask for something to relieve their anxiety, I try to bring them a bottle of water and some sugar to help them feel more comfortable and answer the questions in a more relaxed state.
However, I cannot really give them extra time to solve the questions, as that may be unfair to the other students sitting the same exam, nor can I help them with hints about the questions. I also cannot give them extra marks because of their nervousness, because I think that would not be fair at all.
As lecturers, teachers, or even TAs who may have experience with this, could you please share your experience and what you did about such anxious students?
What do you do to help such nervous students calm down and feel better during the exam session?
>
> I try to calm them down before the exam by talking to them privately
> after the class and ***reminding them that they are perfect students*** and
> should not allow nervousness to ruin their exam's mark
>
>
>
I would avoid doing that because it's borderline patronizing. It's unlikely that all of them are perfect students, and if you only talk to the ones who appear nervous, that would not reflect well on the instructor's objectivity and fairness.
>
> At the exam hour, ***when I see such students; I try to bring them some
> water and sugar*** to help them feel more comfortable and answer their
> questions more relaxed.
>
>
>
Also awkward. By doing so you're singling out the seemingly nervous students. I don't oppose distributing some candies/snacks beforehand, but I don't agree with delivering food and drink to students who look nervous.
---
Here are what I usually do to lower their anxiety:
1. Evaluate if examination is the best way of approaching the assessment, and if there are alternatives. Can the exam be distributed as three smaller tests? Weekly quizzes? Final project? I often use the students' likely career settings as a benchmark: Will they be more likely asked to write an analysis report with access to information, or trapped in a room to recite formula? etc.
2. Allow a one-page, self-prepared notes. It's a good compromise between closed-book and open-book. The actual benefit, however, happens when they were preparing for this piece of notes, as they have to actively digest and evaluate the information.
3. Incorporate questions that do not have an absolute answer, but focus on showcasing the thought process. Emphasize that some questions have no absolute right or wrong answer, but merely test the reasoning skills of the candidates.
4. Play some soothing classical music in the background prior to the start of the exam, and then fade it out before the starting time. Bach's pieces work well, and Vivaldi's Four Seasons works nicely too.
5. Make some past questions available, or at least provide some mock example questions. This is to get rid of anxiety caused by unfamiliar format and types of questions.
6. Progress from basic, memory-based questions to more elaborated questions. Try not to strike too hard at the beginning. Build up their confidence through recalling some facts/definitions.
7. Clearly delineate the points allocated for each question. This is to ensure that they know they should stop before writing the 5th sentence for the 1/2 point.
8. Lower the proportion of final grade attributed to exams. Avoid having final exams bearing too much weight.
9. Invite seemingly collapsing students for a short break, and let them make up for the time afterwards. I have only done it once. The student was in serious distress: panting, red and teary eyes, heavy patches of sweat soaking through the shirt (the room had AC). I invited the student to go out for a chat, and the student immediately broke into loud crying once I closed the door. Another TA and I gave the student a pep talk (i.e., try your best, the past homework has shown that you can deal with the questions, etc.) and a 10-minute calm-down. We let the student have 10 more minutes at the end. The student did manage to pass.
Upvotes: 5 <issue_comment>username_2: *I'm writing as someone who is very nervous in exam conditions - though I'm e.g. perfectly fine giving a conference talk with a large audience and answering their questions. Also, you'd hopefully not be able to identify me as being nervous: I try very hard at least not to show it, and I'm told that I'm generally successful at that.*
My recommendations have less to do with dealing with particular students but with general guidelines that I'd summarize as:
**Be (or become) a good examiner, and get known for being professional and fair in your exams**. Lots of nervousness is caused by the examiner having a reputation to be unpredictable, arbitrary or unfair
* To put it drastically, make sure what you examine are the students' professional capabilities and only those: unless it is a practical course on psychological warfare, the exam should not measure the students' resistance to psychological attacks\*.
Counter example: I've been asked in one of my final oral exams "Now I have a question that I got in my PhD defense and also couldn't answer: ..."
* Make sure your questions are well posed.
+ Do not ask questions that not only require the student to have the "professional knowledge" but also to be able to guess what piece of knowledge the examiner could possibly be driving at.
Big alarm bells would be students asking back "Are you driving at XXX?" or the like - though you'd probably never get such feedback from timid students. In written exams, a (possibly late) alarm sign would be if you encounter correct answers about different topics.
However, knowing that such a problem exists already allows you to re-examine your questions.
+ Try very hard to avoid wrong or misleading questions. This will happen once in a while, but really try to catch such questions.
As TA I once had to correct a question "Should *this-and-that* be done *this way* or *that way*?" where in fact each way was correct for different subgroups of *this-and-that*. Such a question has several undesirable consequences. One is: students who have at least an intermediate level of knowledge (e.g. who could correctly give examples for "When should *this way* be used?" and "When should *that way* be used?") would typically expect that the "or" is actually a XOR from the way this form of questions works in my culture (written answers, not multiple-choice). Unless they are so confident in their knowledge that they dare to answer "both" or "it depends", such a question causes unneccessary stress because students start questioning their (correct) knowledge during the exam.
(As for the concrete situation, none of the > 100 students answered "both" or "depends", but a large number did not answer at all. The answer to "What answer does the professor want to hear?" was *this way*, which was also the predominant answer the students gave.)
Having a buch of TAs doing the exam beforehand would probably have caught the problem.
* Refrain from jokes and surprises.
+ I once had an important oral exam where the professor sat facing me across the table and an assistant sat beside me writing minutes. At some point the professor suddenly said that now they're changing roles and the assistant is going on with questioning me. IMHO that was a totally unnecessary cross questioning situation.
Side note: I'd also avoid having the examiner and minute writer sitting at 90°, one to the right and one to the left of the student. IMHO the "sitting at 90° relieves tension" advice doesn't hold for being "surrounded" at 90°.
+ Another counter-example: Oral exam about some legal stuff. Examiner declares at the beginning that he'll accept only answers as correct that literally cite the respective portion of the law. Explanations in own words will be counted as wrong. He'll give an allowance for the number of words that can be used for each answer (order of magnitude was 10).\*\*
(Just to be clear: this turned out *not* to be a joke)
+ I'd even be cautious with @Penguin\_knight's classical music and at least tell them beforehand that you'll do that and for what purpose. People do get nervous also at what is meant to be a *nice* surprise.
+ Here's the one exam surprise I liked so far: a typewriting class leading up to a certificate. The teacher explained that in her experience people are so nervous in the exam that the results are considerably worse than in normal exercises (of the same form). She'd therefore "smuggle" the exam in someday without telling us, so we'd think it was an exercise - and that's what she did: at some point she collected the exercises with "congratulations, you've just done the first part of your exam".
* Be reasonably predictable in what you ask, i.e. keep inside the curriculum with the topics. This doesn't mean that you cannot or should not test the reasoning and transfer abilities of the students, but it should clearly be connected to the topic of the exam.
Counter-example: one of the examiners for our 2nd year oral exams had a reputation that he'd e.g. ask about thermodynamics and chemistry of a supernova if his morning newspaper wrote something about supernovae. While his research focus was on astro-physico-chemistry, I still don't think that this was covered by the physical chemistry 101 curriculum.
* I'd also consider it good to explain to the students what to expect.
One of my final oral exams reliably moved beyond anything that had been covered in the courses within about 5 minutes. Since then I know how a lemon must feel after squeezing, and I had no idea whether I had passed or not (I gave lots of wrong answers, most of which I was able to correct at a second attempt, though). I got a mark a fraction below the best possible mark, and the examiner *afterwards* told me that he had examined me for the best mark due to my record and that he was sure I knew what was covered in the courses, so he needn't examine that. I'm not entirely sure, but I think it likely that giving this explanation at the beginning of the exam would have made the experience somewhat less unpleasant.
* However, I don't see anything wrong with unusual (more realistic) exam settings *if they are trained before*:
I once had a teacher who explained that she wants to train us for real life situations and that she'd therefore include lots of irrelevant information alongside with the relevant information so we could not guesstimate the correct calculation from the given values and that part of our exam was to extract the relevant information (this usually came in the form of a general purpose collection of tables we used for 2 years of courses). We did use the same material in the lecture.
---
\* another type of course where I'd consider psychological stress (particularly as to answering *fast*) as adequate would be practical exams dealing with emergency situations. But then the course should have practically trained corresponding situations.
\*\* This was not in a university setting but a certificate course with a kind of once-off customer situation: if you need the certificate you have to take one of these, but basically noone will ever have an occasion to go back there again.
Upvotes: 3 <issue_comment>username_3: Your actions are well intentioned and I appreciate your nurturing nature. I wish academics were more like this sometimes. I think the way you attempt to ease students' nervousness may not lead to your desired outcome for all students.
>
> "talking to them privately after the class and reminding them that they
> are perfect students and should not allow nervousness to ruin their
> exam's mark"
>
>
>
This can increase stress and anxiety. Anxiety is a complex phenomenon which is not usually solved by telling people to be less anxious. In fact, by pointing out that their nervousness can hurt their grade, you are giving them one more thing to be anxious about (their nervousness). In my personal experience, deemphasizing the grade improves anxiety more than telling them how harmful their anxiety is. Mention that college is about learning and that a little bit of anxiety is healthy and that in the long run grades don't matter too much. Just study hard and do your best.
>
> when I see such students; I try to bring them some water and sugar to
> help them feel more comfortable and answer their questions more
> relaxed
>
>
>
This may be helpful for some but you again might be increasing some student's anxiety here. I get anxious before tests and I know that I would be embarrassed if the TA brought me water and candy and didn't do that for every other student. I'd feel singled out. In addition, it really isn't clear to me that sugar would decrease anxiety for students. Like anxiety, nutrition is complicated and others may react differently to sugar than you do. Also, you have to be concerned about allergies. You mention in your comment that the students ask for this. That definitely changes things, but I would avoid giving out anything to students who don't ask for it.
Talking with students who appear nervous after class is great, but I would shy away from lecturing them about anything related to how their anxiety affects their grade. Instead ask them how they feel before an exam? If they respond with something that concerns you, you can ask them if they have considered inquiring with student services about special exam accommodations (common in the USA). Remember that you are not a healthcare professional and it is potentially harmful for you to act too much like one. It is great that you show that you care about your students, but sometimes it is best to let the student talk to people who are more qualified to address these issues. You should be able to direct them to these people if they need it. Sometimes there are people in charge of "study skills", or "disability services" or a "learning skills center". Contact your department and seek these people out so you have the appropriate contacts available.
Note that some people have a lot of anxiety and don't visibly show it and others who look extremely anxious may actually be fine. I'd announce to the whole class "If you feel you get very anxious before exams and it affects your performance, feel free to schedule an appointment with me so we can talk about it."
Upvotes: 2 |
2014/11/05 | 814 | 3,561 | <issue_start>username_0: I am applying to US doctoral programs.
I have decided to mention especially four papers in my statement of purpose, so I have to present them in my writing samples. As a consequence, my writing samples take almost a hundred pages. Nevertheless, I make them into one pdf file.
Then my concern is if lengthy writing samples would instead hurt my application?<issue_comment>username_1: I would not put more than one in a writing sample, and I would even attempt to avoid that at all costs. If they are published or are working papers available online, then you should include citations and/or links in your statement of purpose and resume/CV. Otherwise, I would not include them in your application at all.
Though this is with the usual disclaimer that the answer probably varies by field. This is mostly from an economics perspective.
Upvotes: 1 <issue_comment>username_2: To point to a specific program at a top ranked school (whose requirement I happened to know off the top of my head), Stanford's Political Science admission requirements specifically state that the writing sample should be 20-35 pages (double spaced). Additionally, you can submit more than one sample if you don't have a longer essay to submit. For example, two 10-15 page papers instead of one 20-35 pager.
<https://politicalscience.stanford.edu/graduate-program/prospective-students/phd-admissions>
I don't know how strict they are about the 35 page upper limit, but I think 100 pages is going to be far more than any department will want.
Of course, this will depend on the program, but Stanford is a good example of a top-tier school with top-tier expectations.
Also, Stanford is the only program I've seen that even suggests submitting more than one writing sample. Most application forms will likely only accept one document, so unless you condense multiple documents into one, you're not going to be able to submit multiple samples anyway. I would suggest just picking your favorite and submitting that.
Upvotes: 3 [selected_answer]<issue_comment>username_3: The writing sample is to show a *sample* of your research. Just because you mention multiple papers or projects in your statement of purpose does not mean you need to include them all in your writing sample. Pick what you think is the single best paper and include that. (As username_2 suggests, if the application guidelines give a page limit, and you have several short samples, you could consider including more than one, but only if their total length is less than the stated page limit.)
Do not (as you suggest in your comment) "exploit" the writing sample to stuff in as much of your research as possible. If you have published, that will show up in your CV. If your samples are not published but were, for instance, written as class papers, that will show on your transcript in the list of classes you've taken and how well you did in them. The writing sample is not supposed to be "proof" that you've done everything you talked about your SOP; it's just a sample of one thing you've done.
Even if the ocmmittee does look at your overlong writing sample, it will probably be perceived negatively. A person who tries to stuff the sample may be perceived as unable to focus on a single topic, or as trying to show off how much they've done. At the least, you will probably be perceived as someone who did not pay attention to the application directions (which, in my experience, usually use the world "sample" in the singular and give a rough page range), which never helps.
Upvotes: 2 |
2014/11/05 | 1,038 | 4,797 | <issue_start>username_0: I am currently a referee for a paper.
One of the authors of the paper had written an earlier paper,
which I will refer to as the "original paper,"
and the paper which I am now refereeing is an extension of the original paper,
which I will refer to as the "extension paper."
I have found that large portions of the extension paper
are copied from his original paper.
In particular,
a whole section of definitions is copied from his original paper;
and some paragraphs in the introduction and literature review
are copied wholesale or with slight modifications from the original paper.
Is it acceptable for an academic paper to copy paragraphs and even a section from an earlier paper by the same author?
My intuition suggests that it is acceptable to copy the definition section,
with an acknowledgement that it came from the original paper,
since definitions are standard.
But it seems strange to me for the introduction and literature review
to be too similar to the original paper.<issue_comment>username_1: The answer depends on the relationship between the papers, and I'm not sure which applies based on the information in your question. In computer science, at least, there are two general cases:
* The extension paper is the "extended journal version" of a previously published short-form work such as a conference paper, workshop paper, or extended abstract. In this case, the rule of thumb is typically at least 30% new material. The extended paper will often contain large chunks verbatim, as it is expected to *supersede* the original paper, rather than existing as a separate work.
* The extension paper is a separate work: in this case, extensive reuse of material is self-plagiarism. Two exceptions: first, related work, methods, and definitional material may often be reused as long as it is appropriate to do so---the material should be appropriately customized to fit the new environment. If the author would just be paraphrasing for the sake of paraphrasing, though, it's not necessary. Second, introductory material may be partially shared, though it should be more heavily customized for the new context.
In all cases except for minor reuse of related work material, the extension paper must declare a clear and explicit relationship with the prior paper.
Note that many other fields do not have the notion of a "journal version" and thus have much stricter standards.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Publishers have concerns about this from a couple of directions.
1. Copyright. A publisher will generally not want to publish something without making sure that the copyright is clear. If you've previously published a paper and then recycle text from that earlier paper into the new paper you need to make sure that you have retained the copyright on the text. Chances are that if you published your previous paper with a main stream commercial publisher then you transferred the copyright to that publisher and don't have the right to publish the same text in a new paper.
2. Originality. Most publishers have policies that say they only publish original research papers. It's an editorial decision whether the new paper has enough original content to qualify as original research. Reusing the text of mathematical definitions and standard theorems is a gray area where some publishers are willing to allow some text recycling. If this is done, it's critical that the original source be properly cited or better yet that the material be treated as a quotation from the original work.
Most publishers now use software to automatically check all submissions before they're sent out for review. If there's a concern about old-fashioned plagiarism or recycling of text ("self-plagiarism"), then this is often dealt with before the paper is even sent out for review.
As a referee, I would note the recycling of text from the previous paper and then review the current paper and consider whether the new work is sufficiently original to merit publication. It will ultimately be up to the publisher to decide whether they're willing to deal with any liability for copyright violation that results from the text recycling.
The Committee on Publication Ethics (COPE) has some useful guidelines on text recycling:
<http://publicationethics.org/text-recycling-guidelines>
Upvotes: 3 <issue_comment>username_3: One thing to consider is to flag it up to the editor/publisher. Most review systems have a way to provide comments to the editor only (not the original author). This is a case where you can flag up and discuss these aspects of the paper. Of course if the paper actually does not contribute anything beyond the original paper (in the same words or new ones) you could reject the paper on those grounds.
Upvotes: 0 |
2014/11/05 | 510 | 2,166 | <issue_start>username_0: Is there a potential problem when an academic supervises a Ph.D. student in a very specific area in which someone close to them is *also* active and working (and supervising Ph.D. students who are publishing in that area) at a *different institution*? What if the two students are producing very similar work, at the same time? Is it right that my alarm bells are ringing?<issue_comment>username_1: It's not considered a conflict of interest for an advisor to personally work in a field similar to his/her students' - it's expected.
It's also not a conflict of interest for an advisor to have friends working in a field similar to his/her students' - it's likely.
Similarly, it's not a conflict of interest for an advisor to have a spouse, child, or immediate family member working in a field similar to his/her students'.
The exception to the above would be if the advisor actually *does* something that harms his/her student in order to favor him/herself, a friend, or a family member.
Upvotes: 3 <issue_comment>username_2: I can think of multiple sane reasons not to publicly state romantic or marital involvement with a fellow researcher. There surely is no obligation to do otherwise.
Now, in the situation you described in your comments, you mention the possibility that one adviser is leaking unpublished results of her/his students to her/his spouse's students without the consent of the person who did the work. This is indeed problematic, regardless of their marital status.
If you are worried about this situation, **discuss confidentiality** with your adviser, make sure you are on the same line and consider sealing **an informal agreement of non-disclosure** with her/him if deemed necessary. This can be independent of the spouse situation.
One potential conflict of interest is if both were acting as reviewers for papers or grants submitted by each other, or if they were members of the hiring committee that evaluated each other's application. Since married couples have a reciprocal interest in their spouse's employment and wealth, there would be a quite probable conflict of interest.
Upvotes: 3 [selected_answer] |
2014/11/05 | 2,081 | 9,022 | <issue_start>username_0: For more than 6 weeks now, I have been attempting to contact a post-doc or their (former?) PI to request access to *either the source code or software* of a tool that was published in BMC Bioinformatics. The authors did not provide the source as a supplemental file, but assure the reader in the article that it will be made available upon request. This is the first time requesting source code of a research group, but I have yet to receive any kind of reply to my polite requests.
**I am unsure how to proceed.** Pursuant to the guidelines for publication in the journal, "[i]f published, software applications/tools must be freely available to any researcher wishing to use them for non-commercial purposes, without restrictions such as the need for a material transfer agreement." This group is located in Europe, and it is highly likely that they should have a working knowledge of English, after all, the publication and their websites are in English, so I don't think there a language barrier exists.
Possible actions that I am considering:
1. Contacting the managing editor for the publication to explain the situation, and see if his/her email attracts more of a response.
2. Contacting any granting agencies who have provided supporting funding for the project to determine whether they have stipulations about providing source code.
3. A phone call to the corresponding author.<issue_comment>username_1: I think contacting the editor of the journal is your best bet. Contacting grant agencies will most likely not warrant a reply, and I don't imagine many of them have stipulations for sharing code (yet).
That said, I have been in a similar position numerous times, and I have had very little luck every obtaining the code. The editor will most likely not be willing to retract a paper because the author's won't share, and they have little incentive to do so, since it will at most garner a single citation, but could lead to more problems down the road (e.g., the code is buggy and you can't reproduce their results, etc.)
Another tip would be that senior people (i.e., PIs) usually have more luck at this kind of thing because they are harder to ignore and/or have contacts, but it can be harder to get the to actually do it, because it can become political.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Stop. Do not do any of the things you are thinking of doing.
>
> The authors did not provide the source as a supplemental file, but
> assure the reader in the article that it will be made available upon
> request.
>
>
>
Have you considered that they may need to polish the code before releasing it? The fact that I am planning to release my source code does not mean that I have to do it now, or whenever it suits you.
>
> Pursuant to the guidelines for publication in the journal, "[i]f
> published, software applications/tools must be freely available to any
> researcher wishing to use them for non-commercial purposes, without
> restrictions such as the need for a material transfer agreement."
>
>
>
I have seen journals like that in my area (CS). Still, the initial rule proposed when those journals came out may have been relaxed over the years. Since in certain areas conferences are the main publishing venue, journals sometimes "relax" their original rules to attract enough submissions to keep going. So I would not count on this rule to pressure the journal or the authors of this work. Check out some other works in this journal. Did they actually release the code? If not, then releasing the code is the exception and not the rule.
Also, "software applications/tools must be freely available to any researcher wishing to use them for non-commercial purposes, without restrictions such as the need for a material transfer agreement." does not necessarily mean releasing the source code but just the binary or a web-application created from the code. Where did you make the assumption that they should give their source code to you? The word "tool" refers to full apps and not original uncompiled source code.
>
> Contacting the managing editor for the publication to explain the
> situation....
>
>
>
And what do you think the editor would do? Punish the authors because a random stranger on the internet tells him something bad about them? You can rest assured, this action will have little effect on the authors and will only reflect badly on you.
>
> Contacting any granting agencies who have provided...
>
>
>
Why do you assume that under the rules of their funding agency they should release everything as open source? I have worked on many research projects in Europe and I have never heard of such a strict rule. Perhaps there are some projects or agencies demanding it, but I do not think it is the norm you suggest it is. In many projects, the participants include commercial companies, which are usually not interested in sharing their work with anyone else (except the project partners, and only for the project's duration). Enforcing such a rule would make commercial companies unwilling to participate, and that is against the policy of funding agencies.
Have you ever stopped to consider that the PI perhaps relocated and did not get those emails? Are you 100% sure that the authors ignored your emails on purpose? And even if they did, are you sure that they broke some rule, as you assume? In your shoes, I would not be too sure. And starting a full-scale war will do more harm to you than to them.
Also, if you want something, be nice. Sometimes it does not work; so be it. Bullying people into doing what you want is not an effective long-term policy.
Upvotes: 3 <issue_comment>username_3: I think you need to forget about how you do things on the internet and remember how you do them in the real world. The fact that you do not think they have treated you with the courtesy and respect you believe you deserve does not justify you treating them any worse than you would hope to be treated.
You have asked for something, you didn't get it. Move on.
Upvotes: -1 <issue_comment>username_4: It's perfectly possible that your polite request is sat in a queue of jobs behind several others. If you mailed them six weeks ago, then that would just about coincide with the start of the teaching term at many institutions (such as my own). That time of year is quite crazy, so it might just be that they haven't got around to it yet.
Failing that, do you know of anyone else who might need to use the software, and might also send a request? If people feel that their "product" is actually in demand, that might act as a spur to further action...
And, to reiterate the points already made, do not contact the journal - at this stage - and *absolutely* do not contact the funders until all other possibilities have been tried (and, even then, think very carefully about taking this action).
Upvotes: 3 <issue_comment>username_5: Do you absolutely need the source code or the tool they created to reproduce their work? If so, then I think them coughing up the code would be paramount for you and any other group wanting to pursue or validate their work. If not, then try coding your own solution.
I know, I know, your logic may be that it would be easier to start with their source code and build from there, i.e., see how they did it. But what programmers since the dawn of time have learned is that it's easier to start and build your own code than it is to take on and learn someone else's. That's why programmers love to go into new things saying "we need to start from scratch". It can be a royal hairball trying to untangle someone else's code.
If their research provides a basic process that their code simply helps expedite, then try coding your own solution and see if it also works. If they said they would provide code upon request, contact them again, but remain friendly. This isn't something to burn a bridge over. Research teams can sometimes get pulled in different directions (especially depending on who they work for), and a past project may get filed away and all the resources they used for it (hard drives, email addresses, etc.) may get mothballed. So, asking for the code may require someone spending time digging through archives and such. (In retrospect, if they say the code is available they should have tossed it into a publicly available repo, so folks like you could grab and go without having to pester them.) Research goals also change over time. The person writing up the thing you read may have thought the code would be available upon request, but the folks that funded the research may have changed their minds; perhaps they want to patent the code as a tool to sell later?
I think you should contact them again, and simply say that you requested the code, haven't heard from them, and would like to know whether the code's availability has changed, and if so, why. Be polite about it. At this point you just want a response saying whether you might get the code or not.
Upvotes: -1 |
2014/11/05 | 799 | 3,275 | <issue_start>username_0: I've seen that some math teachers design tests which punish errors with negative points.
Why do they assign negative points? What are some pedagogical reasons why teachers might do this?<issue_comment>username_1: It's to discourage guessing, and to avoid mark inflation.
There are many discussions of negative marking available; here is one:
<http://teach.southwales.ac.uk/assessment/negativemarking/>
Upvotes: 4 <issue_comment>username_2: There are both benefits and drawbacks to negative grading, even for free-response tests. It encourages academic honesty and self-assessment, which are important for learning, and discourages "BS" answers where the student knows the answer is wrong but is trying to confuse the grader into awarding some points. Some view "BS" answers as cheating. Unfortunately, negative points for wrong answers can punish students who are under-confident and choose not to write an answer when they actually can demonstrate some understanding. It can also reward students for not even showing up to the exam if the test is hard enough.
Upvotes: 3 <issue_comment>username_3: >
> Grade that goes below zero doesn't make sense.
>
>
>
What makes you say that? In some real-world scenarios, thinking you know the answer and being wrong is worse than realizing you don't know. For example, I'd rather have my doctor or lawyer recognize when something goes beyond their expertise, so I can consult a specialist rather than following mistaken advice. I imagine the same is true for most professionals, such as engineers.
If not answering at all yields a grade of zero, then it's reasonable to award negative points for a truly bad answer. (Of course an insightful but flawed answer may still deserve a positive score, just not as high as the correct answer.)
In practice the most common case I've seen negative scores used is multiple choice exams, for the [reason given by username_1](https://academia.stackexchange.com/a/31230/), but one can make a philosophical case for applying them much more broadly.
Upvotes: 6 <issue_comment>username_4: To give an (imperfect) analogy, on Stack Exchange posts can get negative points, thus making the poster *lose* reputation. The reason is to prevent users from posting low-quality posts in the hopes of getting a few upvotes. Downvotes force the user to only post if he's confident it's a good idea.
Similarly, giving points for correct answers on a test, while ignoring incorrect answers, encourages random guessing. Taking away points for wrong answers forces the students to be sure they really know the answer.
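To make the guessing argument concrete, here is a small illustrative calculation (the 1/(k-1) penalty is a common convention for formula scoring, not something prescribed in this thread): with k answer options, that penalty makes the expected score of pure guessing exactly zero.

```python
def expected_guess_score(k, penalty):
    """Expected score per question when guessing uniformly among k
    options: +1 with probability 1/k, -penalty otherwise."""
    p_right = 1.0 / k
    return p_right * 1.0 - (1.0 - p_right) * penalty

k = 4  # e.g., a four-option multiple-choice question
print(expected_guess_score(k, penalty=0.0))          # 0.25: guessing pays off
print(expected_guess_score(k, penalty=1 / (k - 1)))  # 0.0: guessing is neutral
```

Any penalty larger than 1/(k-1) makes blind guessing a losing strategy in expectation, which is exactly the incentive negative marking is after.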
Upvotes: 4 <issue_comment>username_5: I see two parts in this question,
>
> Why negative points? Grade that goes below zero doesn't make sense.
>
>
>
Negative points can make sense as a penalty; grades below zero, not so much, in my opinion.
I have a lecturer who gives us assignments before the exam; we hand them in and get some points for them.
You then start your exam with that score, but if you fail a question you get negative points. The rationale is that the exam only tests topics we dealt with in the assignments.
But you don't drop below zero.
This method prevents people from copying assignments from other students beforehand.
Upvotes: 3 |
2014/11/05 | 1,343 | 6,096 | <issue_start>username_0: So a paper exists, it was published 3 years ago and had a novel algorithm (more or less, it's in the area of finite-difference time-domain simulations).
I've taken the algorithm and accelerated it by a factor of 100x (an example; I don't have the exact number), producing the same results in the end, but providing an opportunity to essentially simulate more (iterations/objects) in the same amount of time.
The methods used to accelerate it aren't particularly novel, though some aspects might very well be a bit different from mainstream ideas. All in all, a person who set out to do the same thing would probably be able to do it, but I would not call it trivial. However, I know that this has not been done before.
Is this something worth publishing? I am going to ask my supervisor, but he's been very busy lately ( >.< ), I haven't been able to catch him for ~4 days. I would like some of your opinions.<issue_comment>username_1: If I understand the question correctly, the dilemma is whether to just distribute the code or whether to also make a scientific publication out of it. The way that I typically think about this type of problem is to see whether it passes any of the following tests:
1. Does the improvement enable a significant scientific or technical work that was not previously possible? For example, if faster simulation allows a control loop to be done in realtime that couldn't before, that advance may be scientifically valuable even if the methods are not interesting in and of themselves, but you have to demonstrate that value.
2. Does the improvement make a qualitative change in the operation of the algorithm which is interesting, e.g., changing a scaling property that was previously a limit?
3. Is the mechanism of the improvement interesting in and of itself, e.g., such that it teaching something about the nature of the algorithm or such that it might be applied to other algorithms or problems as well?
Any of these is a good reason to publish an improvement on an algorithm.
Upvotes: 7 [selected_answer]<issue_comment>username_2: @username_1's answer is spot-on, but I will add my extra two cents. As you said, you parallelized the algorithm, which is good, but how did this happen?
* Was the algorithm already parallel and just had not been implemented that way? If yes, simply parallelizing the implementation might not be good enough.
* Did you use other additional optimizations to make it efficient? For example, did you use SIMD (SSE, AVX) instructions or GPUs in your implementation?
* As username_1 said, did you alter the scope of the algorithm? If the algorithm could previously handle, e.g., only graphs up to a certain size, your implementation might scale to much larger graphs.
* If, for example, you worked on an indexing method, did your method improve only the building of the index, or did it also improve the index's query performance?
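On the first bullet, a quick way to see why parallelization by itself may not carry a paper is Amdahl's law (my own illustration, not part of the original answer): if a fraction *s* of the runtime is inherently serial, no number of cores can push the speedup past 1/*s*.

```python
def amdahl_speedup(serial_fraction, cores):
    """Upper bound on speedup with `cores` processors when a fraction
    `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# With just 5% serial work, 8 cores give at most ~5.9x, and even
# 1000 cores stay below the 1/0.05 = 20x ceiling.
print(round(amdahl_speedup(0.05, 8), 2))     # 5.93
print(round(amdahl_speedup(0.05, 1000), 2))  # 19.63
```

A reported 100x speedup on a modest core count therefore usually implies more than parallelism alone (algorithmic changes, SIMD, memory-layout work), and documenting where the speedup actually comes from is the kind of detail reviewers look for.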
The most important thing to consider is that you need to write a full paper to present your improved version. So, if all you can say is "I parallelized the algorithm with OpenMP and it is now faster" or "I vectorized this loop" and other technical details, this will not be good enough for a scientific paper. On the other hand, if you worked on advanced techniques (SIMD, GPUs), your work might be worth a publication. Still, it might not be good enough for top algorithms conferences (where new algorithms are usually presented) and might be more suitable for conferences focused on parallel algorithms and implementations, which pay more attention to the technical side of things. Also, I would worry about the fact that no one has touched this algorithm for 3 years. Are you sure there is not another algorithm that is now the state of the art for this particular problem? You should look into this too.
**Update**: Since you already have a GPU-CUDA version of this algorithm, it would be interesting to also extend your work to plain multicores with a) OpenMP (that would be trivial), b) OpenMP + SIMD (that would be harder), or c) OpenMP + SIMD + partial CUDA. Having several tuned versions of the same algorithm for different architectures, with performance benchmarks for the different versions, would make a much stronger paper.
Upvotes: 4 <issue_comment>username_3: One option that you have is to submit a paper to a *demo session* of a conference. Such papers usually describe existing systems so they don't have to be new. If you are accepted, your paper is published in the proceedings, and you have the right to present a live demo of your system in the conference. If your improved implementation indeed makes 3D possible, you may have a very impressive demo.
Upvotes: 3 <issue_comment>username_4: How is your code faster than the state of the art? If it's faster only because you're a good programmer, you're unlikely to be able to publish in a worthwhile venue. If it's faster because you did some computer science, you might be onto something.
Upvotes: 3 <issue_comment>username_5: I'd say simply write a paper, submit it to one of the top venues in that area, and let the reviewers decide for you. If it is rejected, the reviews will suggest the changes that could get it accepted at the next venue.
Upvotes: 0 <issue_comment>username_6: I have also published some papers in the same vein. If you can find something novel about your implementation (significant changes to the algorithm, novel optimizations, new insights about the architecture, etc.) then you have a better chance of getting published. If you only achieved the speedup by parallelizing the algorithm in a straightforward fashion, it will stand less of a chance at the higher tier conferences. 5 or 6 years ago when GPGPU programming was still very new, people were often publishing papers about GPU parallelized algorithms. This is becoming less frequent now, because many of the fundamental concepts about this process have already been explored. Much of the low hanging fruit in that area has been picked, so reviewers will tend to view straightforward parallelization of algorithm "X" as not very novel.
Upvotes: 1 |
2014/11/06 | 665 | 2,841 | <issue_start>username_0: I've heard that faculty usually gets paid for 9 months in the US and you need to use your own funding to pay for the missing 3 months.
Is this situation different in the UK and Australia?<issue_comment>username_1: In the UK, you get paid 12 months a year. I'm not actually aware of any other country that uses the US 9-month system.
Upvotes: 3 <issue_comment>username_2: I don't know of anywhere other than the US that the 9 month contract system is used.
Under the US system you can (and many faculty do) typically arrange to have your nine month salary paid out in equal installments over the whole year. If you do this then any summer salary you can arrange (e.g. from research grants, teaching summer school classes, or administrative work) is "extra" money. Most faculty that I know budget to live off of their nine month salaries and then use the summer salary to invest into their retirement funds or to pay down their home mortgage or whatever.
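As a concrete illustration of how this works out (all numbers are hypothetical; the "ninths" convention for summer salary is common in the US, e.g. for grant-funded months, but varies by institution):

```python
# A 9-month base salary spread over 12 paychecks, plus summer salary
# paid in ninths of the base (e.g., two grant-funded summer months).
base_salary = 90_000                  # hypothetical 9-month salary
monthly_paycheck = base_salary / 12   # equal installments over the year
one_ninth = base_salary / 9           # one month of summer salary
summer_pay = 2 * one_ninth            # two months funded from grants

print(monthly_paycheck)  # 7500.0
print(summer_pay)        # 20000.0
```

So a faculty member who secures two months of summer funding effectively raises their annual pay from 90,000 to 110,000 in this example, which is why that money is often treated as "extra".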
There are some advantages to the faculty member in having a nine month contract. For example, you're free to use the summer to go on vacation or work for some other employer (lots of consulting work gets done over the summer.) Working on research contracts and summer school teaching are optional. The down side to this system from the point of view of faculty members is that there is no guarantee that you'll be able to get a full three months of summer salary.
From the point of view of university administrators, the advantage of the 9 month contract is that it helps to keep salaries down in comparison with 12 month salaries in industry. Universities don't need faculty to teach much during the summer, so why pay unneeded employees?
Note that fringe benefits (like health insurance, life insurance, etc.) cover the entire year including the summer when the faculty member is not on contract.
Upvotes: 3 <issue_comment>username_3: In **Australia**, if you have a permanent position as an academic at a University, you would typically earn an annual salary. This would typically be paid fortnightly, every fortnight of the year. This assumes you are working for the entire year (except of course for annual leave, public holidays, etc.).
Of course, there are plenty of casual academic roles such as research assistants, teachers and so on. These are often linked to the completion of specific tasks. For example, if you tutor a unit, then you'll typically be paid for the amount of teaching you do and only for the weeks that teaching is occurring.
While we don't use the term "faculty" as much in Australia, I imagine when we do, it would typically apply to those academic staff on salary.
As a side note, Australian academics are typically allowed to earn additional income doing external work if their supervisor provides approval.
Upvotes: 3 |
2014/11/06 | 5,586 | 25,131 | <issue_start>username_0: To clarify, I am asking if it's professional to ask questions that while relevant to the subject/course, and are related to the topic, but have not been discussed in class, assigned as homework, reading, etc. and are also not related to any prerequisite class that the students should already know. I am also not asking about "gotcha" questions where it's a quick "know it or not" fact, but rather an entire procedure, proof, or concept of some sort.
For example: in class, using a formula from the textbook to solve problems, but on the exam asking for a proof of the formula that has been used.
Another example - in a foreign language class, asking about a never-before-seen word that may be related to some other, studied words or that has a similar sound/meaning in the native language.
Do such questions "make students think out of the box", or are they more likely to discourage students when they struggle with them? While in the real world there may be many problems similar to this where the problem is completely new, is this something that should be taught in an unrelated class?
Does it matter if the test in question will be graded on a curve where even if a student doesn't answer the question correctly at all, they could still get a good grade?
P.S. - this question was prompted by the comments in [this answer](https://academia.stackexchange.com/a/31193/19914)<issue_comment>username_1: I don't see a reasonable answer beyond "it depends." It depends on the question: some surprise questions are not actually so difficult and could reasonably be asked on a test with no special preparation, while others are very difficult indeed. It also depends on the students: you can demand more of experienced and talented students than you can of typical beginners.
There's certainly no rule that says you can only ask test questions that are similar to previous questions the students have seen. Sometimes asking unusual questions can be an excellent way to judge how well students have mastered the material. At the same time, test questions that are too unfamiliar or difficult can be unproductive. This is a balancing act that can be solved in many different ways, depending on the style of the person writing the test.
Upvotes: 6 <issue_comment>username_2: Every question on a test should be about the material in the course. Many times, however, the professor may be trying to teach a deeper concept than some of the students have learned. This is what creates a "surprise" question: the professor asks something that requires mastery of the material or insight into its deeper meaning, and the student has only learned material to a relatively shallow level.
For example, when I was a TA for a large undergraduate artificial intelligence class, the class taught two things simultaneously. The underlying concept threading through the whole class was how to think about data representation and problem decomposition. As part of teaching this, the students were also taught a number of standard AI algorithms. The tests then typically involved variant algorithms the students had never seen before. Weak students, who had learned the standard algorithms but not the underlying concept, would often do badly and complain about the "surprise" questions, since they were being asked about an algorithm that they had never seen before. Strong students, who were learning the underlying concept, had no problems.
In general, then, encountering a "surprise" question means that the student is failing to learn the deeper concepts that the professor is trying to convey. Where the pedagogical problems lies, the professor or the students, is a completely different question...
Upvotes: 5 <issue_comment>username_3: I guess it depends on what you call a "surprise question". Usually, when you design tests, you don't want *all* questions to be the same difficulty. Rather, you would want to have a number of basic questions to find out who actually did not "get" the fundamental messages (and should hence fail), some intermediary questions which the majority of students will be able to do if they studied, and a small number of challenging problems, which are there to separate the excellent from the good students.
In my tests, "surprise" questions often form the "challenging" part of the test. I write them in the full expectation that only 10% to 20% of the class will be able to do them, but that is ok - not the entire class should have the best grade anyway. This way, me and others know after the course who the students that really *understood* the material were, and who just studied a lot.
*Sidenote: I teach in a European country where it is usual to have a Gaussian distribution over the entire grading spectrum - it is not like in the US where having a "B" is often already considered a bad grade. Also, at least in undergrad courses, it is not uncommon that more students fail than have the best grade.*
What makes "surprise" questions difficult for some students and attractive for many teachers is that they actually test *understanding*, *transfer skills*, and the *ability to apply knowledge* as opposed to mechanical memorisation of pre-learned procedures. This is easy to see in your "formula" example. A student that studied can apply the formula (he knows how it works, and how to apply it, and under which conditions), but only a student who really grasps the math behind it can do a proof that they had not covered before.
Upvotes: 5 <issue_comment>username_4: I don't think this has anything to do with being professional, but to what extent it is fair and/or desirable to ask such questions.
To me, the utility of unexpected questions that completely surprise students also depends on what kind of grading system they are used under, and this has not been addressed in the other answers. Under curve-grading/norm-referenced tests, one of the points of exams is to differentiate between students, so difficult surprise questions can be useful to e.g. test if students have gained a deeper understanding of a topic. Under such a system, it is reasonable (and to some extent desirable) that only a small proportion of students can answer some questions.
Under a criterion-referenced/goal-oriented grading system, students are ideally supposed to know exactly what knowledge is needed to achieve a particular grade. Totally unexpected questions might be more problematic under such a system. However, what counts as an unexpected question is also subjective to some extent, and the learning criteria could also specifically mention fundamental understanding and the ability to apply the material to new situations. Even so, if a large proportion of students fail to understand or answer harder surprise questions, this can partially be seen as a failure of the teacher or course (which is not necessarily the case under curve grading), since it could indicate that the teachers have failed to convey either the required knowledge or the learning goals needed to achieve a particular grade (alternative explanations would be e.g. high goal standards or uninterested, lazy or weak students).
Upvotes: 2 <issue_comment>username_5: In my view, a fair examination question draws on any or all of the following:
1. Material discussed or presented during class contact time.
2. Material from any of the items on the course reading list.
3. Core material from any prerequisite courses.
4. Material that might properly be regarded as common knowledge for students at this stage in their education (basic mathematics, basic use of the English Language, etc.).
5. **Knowledge that can reasonably be derived as a logical consequence of numbers 1.–4.** Here, 'reasonable' is calibrated to the level of the course. Much more should be expected of graduate students who are essentially training to do point 5. for the rest of their professional lives.
In my experience, "surprise" questions usually come about because (a) students have not fulfilled their obligation to apprise themselves of the material in 1. and 2., or because (b) students are not sufficiently capable/comfortable with the subject matter to conduct the logical deductions in 5.
In either case, my view is that it *is* professional to ask questions that draw on all of 1.—5. In their capacity as educators, the main professional responsibilities of university teachers are to decide upon and deliver the appropriate material clearly, and to administer assessments capable of identifying students' success in mastering this material. A question that does not 'surprise' the majority of students can only test this mastery to a limited extent because it leaves little way to differentiate students who have truly mastered the subject from those who have merely done a good job of rote memorization. I would therefore view 'surprise questions' as an essential tool in professors' fulfillment of their professional responsibility as teachers.
Upvotes: 4 <issue_comment>username_6: My field is mathematics. I always tried to ask at least one question that looked quite different from anything that the students were certain to have seen before (though of course it relied on the relevant material), or that required them to combine several ideas that they might not previously have had to combine. On an exam in first-year calculus I’d have had at most a couple of questions of this type; on an exam in the more theoretical courses and in liberal arts mathematics courses I generally had quite a few such questions alongside the more routine ones, covering a range of difficulty. All questions, of course, required the students to write something, be it a proof, an explanation, or merely a routine calculation, and partial credit was always available.
I should point out that I was not grading to any pre-set scale. I have always preferred to construct the exam that I wanted and then interpret the results. Indeed, I refused to assign letter grades to individual exams, preferring to reserve that painful chore for the end of the course when I had as much data as I was going to get. Needless to say, I always explained all of this at the beginning of the course and again before the first exam. I also made it clear that I did not have the expectations to which most American students are used to being held: it generally worked out that the A students (apart from the rare curve-breaker) averaged 80-85% over the entire term — and I was not especially generous with A grades. A 50% average was generally a solid C.
Upvotes: 3 <issue_comment>username_7: All my real analysis tests (me being a student) consisted of 50%+ completely new theorems to prove. It was to be expected that one would have to think outside the box to even pass the test. And I would say I learned an order of magnitude more in that class than I have in any class with more standard tests. But that's a senior-level course, and being good at writing proofs from scratch was a skill that had been taught gradually over many lower-level courses.
One could argue that being able to pass general tests with questions that require uncovered or only vaguely touched-on material, or methods that involve combining techniques in ways unseen in class, is the culmination of education. If that's in fact the case, it would make sense to introduce it early to cultivate the skill of being able to synthesize new answers from covered material.
But it is important that it be reasonable for the student to know the prerequisite material to synthesize the answer. Don't ask a measure theory question on the first test in real analysis. Ask a question that requires use of the least upper bound property of the reals in a tricky way, for example.
Upvotes: 3 <issue_comment>username_8: I prefer a system where the homework problems are the most challenging. Also, I'm not in favor of a system where the homework is graded, because this makes it more difficult to choose good homework problems. The idea is that students learn best when they struggle a lot to solve difficult problems. One then has to accept that students may not have been able to do well on a particular problem, even if they are one of the best. Graded or not, homework should still be submitted and records should be kept about the student's performance.
The exam should serve only as a basic test that all the students who have seriously followed the course should easily pass. There is no way you can challenge the students in an exam that only lasts for a few hours as you can challenge them with homework that they would need to work on for several days.
The exam should be judged in combination with the homework. Each student's homework record (graded or not) should be taken into account when judging the exam. If the homework record is found to be inconsistent with the exam performance, then the student should be invited to speak to the professor about the subject. It can be the case that the student was nervous and didn't see the solution to simple problems; such issues can be corrected in an oral examination, especially if the student does not know that the meeting is in fact a secret oral examination.
It can also be the case that the student did not know much about the subject and just copied the homework assignments from other students. That will then become clear after speaking to the student, and the student will then be given a failing grade for the subject.
Upvotes: 3 <issue_comment>username_9: Teachers have been asking questions that are not known to students for as long as teaching has been around. How else can a teacher make the student step out of their comfort zone, where the answers have been clearly laid out, and instead help them broaden their mental abilities? It is important that the questions are related to the subject being taught, but if I ask you a question about real-world cases rather than the hypothetical, it will require some out-of-the-box thinking. Yes, it is professional. It is not if the questions are ones that the class has not prepared the student to answer.
Upvotes: 3 <issue_comment>username_10: I see this problem spanning several levels of information:
On the bottom, we have course information that is memorized. Students should not be asked to reproduce **definitions**, facts, axioms, and other types of foundation information that have not already appeared in course material.
Next, we have "**techniques**." Especially in mathematics, we learn techniques for approaching hard problems. In other fields, techniques manifest as the methods we use to make inference, the types of reasoning that we use to interpret new situations. These are very general (like integration by parts, the epsilon over n trick from early analysis courses, or less rigid logic like the broad historical notion that starvation corresponds with instability, which makes revolution more likely) and can be combined in many interesting ways. Plenty of good, unfamiliar problems can be written based on familiar techniques. It can be very appropriate to ask students to develop a new technique to solve a problem on a test, as long as the intuitive leap is somewhat reasonable. This is a judgement call on the part of the professor that can reflect professionalism.
The ability of students to adapt new techniques generally depends on their grasp of broad overarching **concepts**. Development of entirely new concepts probably does not belong on tests because students are not likely to retain them very well under exam pressure. In general, exams are a method for gauging students' current knowledge/understanding of course material. I would say that it is more equitable and appropriate to introduce entirely new concepts on homework and in lecture, so that students have a greater opportunity to internalize them.
Upvotes: 2 <issue_comment>username_11: My answer is a bit biased in favour of students since I had a bad experience with such questions.
TL;DR: 'Surprise' questions are good and necessary in some cases, but don't make them worth 50% of the exam.
While I was a student, we had a couple of professors who would start giving you surprise questions when you did relatively well in your (oral) examination. Then they could find something you didn't know and seriously decrease your overall result, or even fail you.
There were some professors who made surprise questions part of written exams, grading them as 40-50% of the test itself.
I personally hate this. In my case it led to a situation where you try not to learn the general course material but to anticipate the surprise questions. Students have more than one course at the same time, and sometimes there is not enough time (or interest) to gain a deep (beyond-the-syllabus) understanding of every course; you just want to pass with a 75% grade or so.
It of course depends on the field of study. If it is something like theoretical physics it is necessary to have skills to think outside of the box.
I liked the approach of one of my school teachers. You could earn 110-115% on the exam, and the grade you received was capped at 100%. You could get 90% from regular questions and 20-25% from 'surprise' questions. So if you studied diligently you could get a good grade, and if you had spent more time on the subject, the 'surprise' questions could even cover minor slips.
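The bonus-cap scheme described above is simple enough to state precisely. Here is a minimal sketch (Python; the specific point values are only an illustration of the 90% + 20-25% split mentioned, not an official grading policy):

```python
def final_grade(regular_points, surprise_points, cap=100):
    """Regular questions carry most of the marks; 'surprise' questions
    act as bonus credit, and the recorded grade is capped at full marks."""
    return min(regular_points + surprise_points, cap)

# Diligent study of the regular material alone yields a good grade,
# and bonus points can cover minor slips without exceeding the cap.
print(final_grade(85, 0))
print(final_grade(85, 20))
```

The design point is that surprise questions can only help a student's grade, never hurt it.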
Upvotes: 1 <issue_comment>username_12: I guess it depends on the details, the country (different education culture) and subject (some of them are fact heavy, some of them are more problem solving focused).
I happened to come from a place where a student who can only solve STEM problems that were explicitly shown in lecture or assigned as reading is considered rather mediocre (C level). I assume there are different educational approaches, too.
On the other hand, giving a question completely out of the blue can be off limits and just mean.
Upvotes: 1 <issue_comment>username_13: There's no problem with surprise questions as long as they're a small part of the marks and you tell students that there may be such questions.
Some of my professors would structure tests along the lines of 40% basics, 40% intermediate, 10% hard, 10% surprise.
If you covered and understood all the basics, you could reliably scrape a pass.
If you'd covered and understood all the material well you could get a good mark.
If you'd gone above and beyond and mastered the material on the course and had good general knowledge in the area you could get an excellent or perfect mark.
I liked the system as it meant in later years I could generally drag students I was tutoring through the exam based on the predictable stuff and you were also rewarded for independent study.
Upvotes: 2 <issue_comment>username_14: Among the answers given thus far, the one which is closest to my thinking is that of username_3, and the side note about the context applies too.
Now,
>
> Is it professional for a professor to ask “surprise” questions on a test?
>
>
>
It *can* be professional. As I said in a comment, it's unprofessional to ask such questions with the only purpose of failing as many students as possible.
Moreover, I don't much like the adjective *surprise* in the title: a surprise is something unexpected, but if a professor clearly warns the students that at the exam they will find problems which have not been solved during the course, there is no surprise. So, in the following, I will talk about "new" problems.
Exams and tests have, as their main goal, that of assessing how well students master/understand a (small portion of a) certain subject. As others have well explained, new questions or problems can give a hint on how deep this understanding is.
But apart from the above stated main goal, exams and tests might also have secondary goals:
* An exam can be an occasion for learning new stuff. The well defined separation between learning and verification, which typically happens in a course, is something that drastically comes to an end when one starts working, even in academia. Learning and verification in everyday life are really interleaved, and many times learning has to be done along a stressful verification. So, a new problem during an exam can be an occasion to continue learning in a more "unprotected" way.
* An exam can be a hint, one of the many, that what has been taught during the lessons is not the whole story, and that beyond the lessons there is much more: new problems surely deliver this message.
* For a professor, an exam is an occasion to fish for good students to whom propose a thesis. Giving new problems can be a way to find students who are capable of independent thinking.
So, I tend to give new problems at the exams keeping in mind the above points.
To avoid being too general, let me give an example from my experience. A few years ago I taught a course about sensors, transducers and signal conditioning circuits for graduate electronic engineers. The written part of the exam consisted of one problem about designing or analyzing a signal conditioning circuit, or about evaluating the measurement uncertainty of a certain transducer. Due to the vastness of the topic, the course could describe neither all kinds of sensors and transducers, nor all possible signal conditioning circuits. So, I decided that every exam would include a new problem, where "new" meant:
* A problem about a transducer not described in the course. Indeed the exam text contained a short description of this kind of transducer.
* A problem about the analysis and/or design of signal conditioning circuit not described in the course. The students, being electronic engineers, were expected to know how to analyze electronic circuits, even of moderate complexity. In more difficult cases, hints were provided.
* A problem about a known transducer applied in an unknown way.
Exams were open books and students could bring the solutions of all previous exams and all the class notes. After the written part, if successful, there was an oral examination which was more about class material.
What was the outcome of this kind of exam? The course was in general very well received by the students, even if the exam was considered hard: the percentage of success was around 30% (the pass grade is 60%). The major complaint was about the number of exercises solved during the classes, but this happens in all kinds of courses. My answer to this complaint was that there were, indeed, time constraints that prevented us from solving more problems but, anyway, whatever the number of problems solved during the course, at the exam they would find a new one (sometimes students ask for more solved problems in the hope that these will exhaust all possible cases).
From this and other experiences over 15 years, I think that students can withstand new problems at the exams as long as the motivations are well explained and, especially, as long as the course is worthy of it.
Upvotes: 4 [selected_answer]<issue_comment>username_15: "New" material can be tested for on an exam if the students have access to or have been taught materials that are sufficient to make sense of the "new" material. This can apply to the following scenarios:
* **Open-book tests**. In an open-book test, it is fair to test students on material that they are capable of looking up on the spot and synthesizing with the knowledge they have gained in the course or were required to have in order to meet prerequisites. For example, in a test of French grammar where the students are provided access to a basic dictionary, it is fair to expect the students to look up a word and identify its grammatical and morphological categories despite the fact that the word never appeared in any of the lectures or required reading. Make it clear to your students which books/materials they should bring to the exam. Similarly, if your course is "Open Google", you can include a question that requires the student to figure out "Will a Hyperbarkonian Dehydrogenated Redonkulator output data that meets the following specification...?" as long as your course has taught the students how to read a machine data output specification and compare it to a requirements specification.
* **Material that can be deduced from context**. These are things where students can apply context clues in connection with course material to identify the likely meaning of the new material. An obvious, well-known example of this is reading comprehension through context clues, a technique commonly taught in lower grades. For example, consider the following question:
>
> After Mary found out about the death of her father, she blargragathed for three hours until she fell asleep from exhaustion. Which of the following counseling interventions is most likely to assist her?...
>
>
>
If your reading comprehension skills are sufficient, you can probably figure out that blargragathing is a kind of negative emotional or behavioral reaction despite the fact that I made up the word on the spot for this answer. You can now apply the counseling interventions you learned in my course to answer the question.
Upvotes: 0
2014/11/06
<issue_start>username_0: I am a beginning researcher and am unsure of the academic worth/usefulness for my career of publishing with [SpringerBriefs](http://www.springer.com/gp/authors-editors/book-authors-editors/springerbriefs). Would it be seen as a vanity publication?<issue_comment>username_1: It's not entirely clear from its website, but it appears that SpringerBriefs is not peer reviewed. At least, that's what I get from their author instructions, which start with:
>
> Springer Briefs are designed to get your ideas to market as fast as
> possible. With this aim in mind, we have outlined simple instructions
> for manuscript formatting, preparation, and delivery. After you have
> delivered your manuscript to editorial and it is transmitted to our
> production department, the manuscript will be assigned to one of our
> full-service production vendors (FSVs).
>
>
>
Given that, my assessment is that publication in this venue is likely to be essentially worthless from a scientific career point of view: it's not peer reviewed, so it's not giving you any more credit than depositing the same text in a repository like [arXiv](http://arxiv.org/), and it gets you much less exposure than something like arXiv because people have to buy it to read it.
Upvotes: 2 <issue_comment>username_2: Think "short book", not "long paper". There's been several questions here about the value of publishing a book (e.g., [Is there any value in self-publishing a book as an academic?](https://academia.stackexchange.com/questions/29881/is-there-any-value-in-self-publishing-a-book-as-an-academic) or [Pros and cons of (co-)authoring a reference book in early career?](https://academia.stackexchange.com/questions/18385/pros-and-cons-of-co-authoring-a-reference-book-in-early-career)), and the answers apply here as well. In short, the *fact* of having published a book carries virtually no weight (at least in most fields), but a *good* book can have a long-term impact.
Original research should in general be published in a peer-reviewed journal (or conference), not a book. But if you've written a few papers on a topic that you think would benefit from a consistent notation, cleaned-up presentation, extended introduction and literature review; or if you have a set of lecture notes on a current hot topic you are particularly proud of and would like to see get wider attention (preferring to trust the power of Springers marketing department instead of the vagaries of Google), then publishing them as a SpringerBrief could make sense for you. Whether it'd be worth the effort to get them into shape (and it *will* cost effort) is something you have to decide for yourself.
Upvotes: 3 <issue_comment>username_3: Yes, SpringerBriefs are blind peer reviewed. I'm not sure why one would assume that a major publisher like Springer would not send out work for peer review before agreeing to publish it.
Upvotes: 2 <issue_comment>username_4: As for the peer review, I talked with a Springer representative at the 2016 Joint Math Meeting and she clarified as follows: two rounds of peer review, once when an author indicates a desire to publish a Springer Briefs (signs the paperwork, submits a table of contents and a couple chapters), and once when the work is completed.
Upvotes: 3
2014/11/06
<issue_start>username_0: In listing my master's thesis in my CV, I came across this problem: should I put the name of my advisor in front of my name, or only my name (I have my advisor's name printed in my thesis), according to the US style?<issue_comment>username_1: I don't think you need to be too formal here. Just include whatever information you believe is relevant, for example:
>
> Master's Thesis: "Topic", at University X, supervised by Prof. Y.
>
>
>
Theses are different from regular publications anyway.
Upvotes: 3 [selected_answer]<issue_comment>username_2: At least with the APA style, the advisor's name is not included. The example from the fifth edition of APA (my 6th edition is lent out) is:
<NAME>. (1990) *Fathers' participation in family work: Consequences for fathers' stress and father-child relations.* Unpublished master's thesis, University of Victoria, Victoria, British Columbia, Canada.
With a hanging indent that I can't figure out how to do here.
Upvotes: 2
2014/11/06
<issue_start>username_0: I am a graduate student. My department head is manipulating data in his research papers and skillfully alters plagiarized text to avoid detection. I found this out while working with him on a journal article. Specifically, he modified data points (right in front of me) to dramatically increase our R^2 value, and then he told me to do some formatting and submit it to a journal. I politely confronted him, but he did not concede, and I backed off from that paper. Later, other professors confirmed that most of his papers are bogus and their results are fabricated.
He mentioned once that the reason for choosing nanotechnology (which he is not familiar with) is that there is very little literature available and few experts to review the paper. He gets through the review process by using a plethora of statistical analysis results (with fabricated data) to support his claim and gets through (some) editors by using fancy terms like neural network and fuzzy logic. The irony is that he does not even know the underlying theory of whatever analysis he is doing. How do I know this? I uttered a few doubts and the responses were extremely poor. He uses Minitab and Matlab tools to get things done. He once jokingly told me that he gets a paper ready overnight. Maybe it was not a joke after all.
Reporting to the management is useless, as they won’t listen to me. I cannot challenge his papers, because I barely know anything about nanotechnology (neither does he). Editors won’t take me seriously, since he has a considerable reputation due to articles in high-impact journals. So what should I do? I cannot tolerate his dishonesty and his pollution of the literature.
**More info:**
* I am about to graduate and leave the department for good.
* I managed to avoid submitting the fraudulent paper because I eventually convinced him that the hypothesis was fundamentally wrong. I still have the original manuscript he mailed to me from his unofficial email account.
* Institutional routes are closed, I tried complaining about his poor lecture quality once (anonymously) and it backfired for the entire class. He has 15 years of experience, 20+ journal article and numerous conference papers. I don't stand a chance against him.
**Update:** I will try to report this issue to Retraction Watch or through any other means possible. Still, it is not possible to disprove his claims without repeating the experiments.<issue_comment>username_1: Your backing-off reaction is the appropriate one: **cease any form of association with him right away.**
The description of your first-hand witnessing of him fabricating data is more than enough to ascertain his academic dishonesty. You don't need to know about nanotechnology to know that manually modifying data to make it pass a statistical test is idiotic and fraudulent. Also, experts in that field will assume the data are real when conducting peer review, so it's not something they can easily detect without repeating the (alleged) experiments.
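To see concretely why this kind of manipulation is hard for reviewers to catch but easy to perform, here is a hypothetical sketch (Python with NumPy; the dataset and the 80% "nudge" factor are invented for illustration and are not the data from the question): pulling noisy points toward the fitted line inflates R^2 dramatically while the dataset still looks superficially plausible.

```python
import numpy as np

rng = np.random.default_rng(0)

def r_squared(x, y):
    """Coefficient of determination for a simple least-squares line."""
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# Honest, noisy measurements with a weak linear trend
x = np.linspace(0, 10, 30)
y = 0.5 * x + rng.normal(0, 3, size=x.size)
print(f"honest R^2:     {r_squared(x, y):.3f}")

# 'Fabrication': pull every point 80% of the way toward the fitted line
slope, intercept = np.polyfit(x, y, 1)
y_faked = y + 0.8 * ((slope * x + intercept) - y)
print(f"fabricated R^2: {r_squared(x, y_faked):.3f}")
```

Reviewers only ever see the doctored column of numbers, which is exactly why replication, rather than peer review, is what eventually exposes it.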
His 'results' are unwanted. By publishing bogus science he makes people lose time and money, he robs legitimate researchers of their funding, and he adds noise that masks the signal. If you can afford to report his behavior, please *do it*. Depending on where this story takes place, you might lose some feathers when he strikes back, but at the end of the day he is the one who is wrong.
Taking direct action to publicly expose his fraudulent behavior is risky for you, especially since he is your hierarchical superior. But when he gets caught (because he will), if it is apparent that you were aware of his wrongdoings and still accepted co-authorship or credit for his publications, his bad reputation is going to stain your career. If you fail to prove your claim, it's *your* career that will be at risk. So proceed with caution. Note that it is *ethically perfectly fine* to report scientific wrongdoings anonymously.
* Make sure that *all institutional reporting routes* are inefficient before bringing the issue to another level. It is not clear in your post if you actually tried or if you just assume 'they won't listen to you'.
* Since you provided the content of at least one paper, and know about its fake nature, if he submits the paper despite your protests, notifying the editor is a thing you could do. **Editors will take you very seriously** in reputable journals.
* You can also take part (anonymously if relevant) in a post-publication comment on his papers, on websites like pubpeer.com or retractionwatch.com.
Upvotes: 8 [selected_answer]<issue_comment>username_2: Publish an attempted replication of his work that demonstrates that his analyses and data are not reliable.
If you can show definitively that the work is bogus by providing reproducible analyses and releasing your data publicly for others to verify, your work will trump his, and you can launch a career off of this.
Upvotes: 1 <issue_comment>username_3: Absolutely get out and far away, but perhaps then you could take action. If you were the only one who has or saw the original data, even keeping it anonymous, he may figure out who reported him. Aside from Retraction Watch, [COPE](http://publicationethics.org/) has some interesting cases to read through. This is an organization that [many major journals](http://publicationethics.org/members) are members of, which deals with these sorts of ethical issues. Many of the cases they describe involve anonymous reports and it might be helpful to read through them and see what the process would like were you to contact one of the journals that published his fraudulent work.
If you do anonymously contact the journal, be sure they agree to a course of action (they will not tell the author where the information came from, they will only reveal certain information, etc.) that protects you. I'm not sure legally what they can and can't agree to, but have them tell you this before you reveal who the fraudulent author is.
I would absolutely not go through your University's channels, but maybe this is too cynical. Although this is logical and 'fair', there is a chance they will want to hide the fraud to protect their reputation, which might mean discrediting you. Just go straight to the outside parties (anonymously, and after graduating and getting a job, etc.) and let them work backwards to the university. A publisher (as you will see reading [COPE cases](http://publicationethics.org/cases)) has much more to lose by not confronting fraud, and they take it very seriously!
Upvotes: 3 <issue_comment>username_4: As it is unethical behavior by your direct supervisor, I would recommend that you seek outside legal counsel. These things can get very nasty and can damage your career; having pre-briefed counsel on call when the administration finally goes to town will be a huge benefit to you.
Legal counsel can also help you write your correspondence with the university administration in a way that will force action against the guilty instead of you, the whistle-blower (or at least make it more likely). In any correspondence, do not mention the fact that you have retained legal counsel until forced to do so. At no point should you discuss the matter verbally, and all your responses to any written letter/e-mail should be vetted by your lawyer.
Whistle blowers are very unpopular for management as it shows that they have not been paying attention (or allowed fraud to occur). It is, unfortunately, likely they will come down on you before they come down on a professor.
The option of quietly leaving is something you should strongly consider.
Also discuss with your lawyer whether you can/should report the professor to any granting bodies that funded his/her research, potentially via the lawyer, leaving your own name officially out of it as much as possible.
Upvotes: 4 <issue_comment>username_5: Keep low, you are young and have no power.
Graduate
Then put queries on PubPeer, making sure the query does not allow you to be traced. By this I don't mean anonymity, which PubPeer guarantees even if you log in, but the content of the actual query. For example, if he is hacking his R values, then an analysis of a series of his papers will reveal a statistically impossible distribution of R values.
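To make the "statistically impossible distribution" idea concrete, here is a hypothetical simulation (Python with NumPy; the sample size, effect size, and the 0.97-0.99 band are invented for illustration): honest replications of a noisy experiment produce a wide scatter of R^2 values, so a long run of near-perfect values across someone's papers is itself evidence worth querying.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulated_r2(n=20, true_r2=0.6):
    """R^2 from one simulated noisy experiment with a known effect size."""
    x = rng.normal(size=n)
    noise_sd = np.sqrt((1 - true_r2) / true_r2)  # sets the population R^2
    y = x + rng.normal(scale=noise_sd, size=n)
    slope, intercept = np.polyfit(x, y, 1)
    ss_res = np.sum((y - (slope * x + intercept)) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

# 200 honest replications scatter widely around the true effect size...
honest = [simulated_r2() for _ in range(200)]
print(f"honest spread: min={min(honest):.2f}, max={max(honest):.2f}")

# ...whereas a fabricated record might report near-perfect fits every time.
suspicious = rng.uniform(0.97, 0.99, size=10)
print("suspicious run:", np.round(suspicious, 3))
```

No single high R^2 proves anything; it is the distribution across many papers that becomes implausible.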
Engage allies - people you may not know, but who are on the right side - but do this with care, because one can get bad surprises sometimes.
Putting things right takes time, perhaps what may appear to be a lifetime to you. Bear that in mind, and do not expect a "result" in months or a year.
Upvotes: 3 <issue_comment>username_6: I had a similar experience. As you say, it is an open secret that the professor is a problem - I once had a formal meeting about this with one of the other heads of department, who said *"I think he should be sacked"*, but nothing ever came of it. I've also heard he has some powerful friend high up in the university. A few years ago, some of the other academics allegedly complained about him and suggested the university drop him, but instead of being reprimanded he was instead promoted from "Reader" to "Professor" (he doesn't even have a PhD).
It still irks me that someone can get a very well paid and respected job at a top university whilst being fundamentally clueless about science and the technology that they are supposed to be teaching and researching.
*"I tried complaining about his poor lecture quality once (anonymously) and it backfired for the entire class"* If we are talking about the same person then this is not new. Other students have complained about his poor lecturing, dating back 15+ years now. His "trick" was to try and come up with some idea for a course that sounded cool but left everything on the shoulders of the students. Typically this involves the idea of students coming up with their own project idea based on some vague tech (dev board) and then doing a bunch of programming with either zero lectures, or lectures completely devoid of any meaningful content.
It is very hard to do something about the fundamental issue. If you have honesty and integrity, then you will feel like you should do something, but there is a systemic problem here, and, quite honestly, it should not be down to a lone student taking this on. This isn't your problem, it's an institutional problem, and the best you can do is to find a new professor as soon as possible. Several of his students have walked away.
Later in life, I had the unfortunate coincidence of interviewing for a research position where the lead researcher had been a personal friend of one of the students who did walk away. When he asked who my supervisor was, the interview went silent; then the interviewer shook his head and said simply *"I know of him. He is a terrible researcher."* How could one justify spending several years being the student of a person with such a poor reputation, and yet still claim to be a capable, top-class researcher yourself? It is hard, and it is ultimately self-defeating. There are far better options in life. Walk away.
*"He has 15 years of experience, 20+ journal article and numerous conference papers. I don't stand a chance against him."* - This is the fundamental problem, and there is no solution. As a student, the balance of power is unfairly against you. You can't just accuse a respected person of producing useless research, or of abusing their position. It beggars belief that there is no effective oversight on these matters, and that universities are not more proactive in policing their workforce - but that's the way it is - academic institutions have historically given academics a great deal of freedom, and their position in society was respected. I honestly don't know how the situation can be fixed, other than by waiting for the professor to retire.
*"he does not even know the underlying theory.. I uttered a few doubts and the responses were extremely poor."* - This echoes the exact sentiments of a quote I heard from an irate MSc student - *"Have you ever tried to pin him down on any details? He doesn't know anything!"* I wish I had listened to his advice at the time: Walk away. Life is too short to waste fighting battles that you can't win.
Upvotes: 4
2014/11/06
<issue_start>username_0: Two months ago I submitted a manuscript to my advisor for review and submission to a journal. He didn’t reply, and when I asked him about it, he said we should think about publishing after two months and that we don’t need to rush. He didn’t even bother to look at my paper.
I feel very anxious and have written a second paper, this time with no discussion or involvement from him. I have put his name in the acknowledgements section for allowing me to use his lab computer. **Do I stand a chance in publishing it without his name?** The idea was never discussed between us, and his involvement in this paper is almost nonexistent.
I have only a year until I graduate and am worried about job hunting without publications. (About the quality of the paper: last year, when I came up with an idea, he rejected it, claiming it was not good enough; six months later I saw it published by someone else. This time I am confident of getting accepted in reputed journals.)<issue_comment>username_1: I think this very much depends on the field you're in and your relationship with your advisor.
Usually before a grad student earns his own reputation, he's better off publishing with his advisor's name on the paper because the journal editors might then know where he's coming from. But if the paper's content is good enough, you should be okay publishing on your own. In the past many grad students published as the sole author of their papers, even in top-notch journals.
The drawback in publishing without your advisor's approval is that he might get upset with you, not the more desirable when you need his recommendation letter to get a job. But, as history proves it, advisors can be wrong and your work might be able to gain recognition by readers/editors who find value in your work.
Good luck!
Upvotes: 3 <issue_comment>username_2: The straight forward answer is that you can publish the work as you see fit. A well-written sound manuscript submitted to an appropriate journal is always welcome.
That said, however, your action is not likely to smooth out any existing "conflict", for lack of a better word, between you and your advisor. And just because one can does not mean it is the best solution. I am not about to judge who is right or wrong in your situation; only someone close to both of you would know. But one question that immediately pops up is whether you have really tried to discuss the matter in detail, or whether your position is that your adviser should solve the problem. Lacking your advisor's side, only you can make such an assessment.
If you are in the position that your adviser is unilaterally not communicating with you, the situation is difficult. I understand your eagerness to publish, but I will also mention a few things that can cause this action to backfire. First, you will most likely want letters of recommendation from your advisor, so publishing work done in the advisor's lab without the advisor's sanction can become a negative aspect. You really need to objectively assess this proposed action. Second, if you are close to finishing, the timing is perhaps not optimal to ignite a conflict with your adviser. Again, you need to really assess your situation to know what ramifications can come from your actions. One partial solution is, of course, to try to talk to other faculty in whom you have more confidence.
In the end, I can only see one solution: *communication*; and I can only advise you to tread carefully over possibly mined territory, so make sure you prepare your map carefully before rushing into solutions out of frustration or even anger.
Upvotes: 4 [selected_answer]<issue_comment>username_3: >
> Do I stand a chance in publishing it without his name?
>
>
>
Very possibly. In most fields, the merit of the work matters more than the identity of the authors.
However, that's not the question you should be asking, which is:
>
> Assuming the paper could get published without my advisor's name on it - can I take his/her name off?
>
>
>
Not really. Even if he is not pre-reviewing it now, he has had enough influence on, and contribution to, your development as an academic and the development of your research, even your individual research, that it can probably be argued he should be listed as an author; and it is customary in many fields to include your advisor as an author.
So if you do feel you must act:
* Inform him you intend to submit the paper, with both your names, qualifying that with "unless you tell me otherwise" or some such phrase.
* If he says he should not be listed as an author, remove him; otherwise keep him as an author.
Your advisor can always ask the conference or journal to withdraw his/her name - and that is not automatically considered something fishy, especially if he does the withdrawal and his reason is "I didn't make a significant contribution".
However, note that doing this can adversely affect your relationship with your advisor if he would rather you wait for his input.
Upvotes: 2 <issue_comment>username_4: **Do I stand a chance in publishing it without his name?**
Yes, you very much do. Your supervisor may genuinely wish not to be associated with the main idea in the paper, for their own good reasons, and if they are a decent sort (not a given in academia) they will let you proceed as long as their name is not on it, and also not try to stab you in the back later. I have been in this position as a PhD student, and luckily my supervisor turned out to be the good sort.
Now, if your paper gets accepted into a top journal, the supervisor may turn around and demand to be on it. They may even insist it was their idea all along! They may write a nasty letter to the editor claiming that you are a rogue student. In that case you will probably have to give in to their demand to be included in the author list.
So you could be in for a rough ride. Many academics are not nice people and behave in dishonourable ways. But you can, most certainly, publish without a supervisor. In fact, the publishing itself will be a lot easier for it.
Upvotes: 1 |
2014/11/07 | 2,335 | 9,452 | <issue_start>Prior to transferring to another university, I attended a college in a different state with a renowned reputation for integrity and academic rigor. The year before I left, I enrolled in a course that was taught by this new professor. It was the only 8:30 a.m. course that I eagerly woke up for - the professor would literally jump around the room, making the lessons lively and hilarious. He'd even devote the last 3-4 minutes of class to discussing some of the most insane industries around the world. I didn't enjoy my time at that college, but his classes were downright memorable.
Outside of class, he was much calmer, but always with a warm smile and approach. People spoke only good things about him, and his department decided to hire him full-time, in a tenure-track position.
The autumn after, I left that college and even switched majors. I kept in touch with a few individuals from that college, and one day, they told me to check out my old college's website. This professor was discovered to have plagiarized multiple papers. I won't get into the details, but the extent would make a slam-dunk case for blatancy. He no longer teaches.
If you Google the professor's name, the first page or two will only show a massive plagiarism scandal.
I tried reaching out to my former professor via email or Facebook, because for whatever reason, I feel bad for people going through rough times, even if they're severely at fault (contingent upon remorse, etc). There was (and still is) no contact information available, and after much time scouring the internet, I can't seem to locate any social media profile. It only then occurred to me how screwed up this person's future must be: never mind having a portion of your grad school wasted (although one could argue it was wasted when he plagiarized anyway), any background check on him will yield really unfavorable results.
My question is: what does a person like that do from this point forward? Is academia pretty much a "no-go"? Does he count on connections in the industry? Does one change his or her name? I thought I'd ask here since several users here have had experience with plagiarism in one way or the other.
**TL;DR** A professor that I used to like got busted for plagiarism, and his name's all over the internet. Does someone like that have a second shot at academia? How would he go about getting into the workforce? ...etc<issue_comment>username_1: Past a certain point in their career and education, I think it's hard for somebody to ever clean off the stain of serious, prolonged, and deliberate misbehavior. The more that a person has built up authority and trust on a false foundation, the harder it is to ever trust them with regards to that subject again.
When a young student engages in academic misbehavior, we treat it as a teaching moment. When an undergraduate does it, we try to put fear into them and rehabilitate them. When a graduate student does it, we try to figure out whether it is worth salvaging. When a professor does it, it pretty much destroys their academic career.
And to my mind, I think that's OK. We're talking about serious and deliberate professional malpractice that undermines the basis of the whole endeavor. That can waste millions of dollars or leave people injured or killed. Would you ever again trust a doctor who deliberately injured their patients, or an engineer who deliberately mis-designed a bridge so that it might collapse? I think that it is the same for an academic who engages in massive and systematic fraud.
Upvotes: 6 <issue_comment>username_2: First of all, I find it nice of you to care about a person whom you consider to have had an impact on your education, whatever happened.
However, let's put this situation in perspective:
In academia, all you have is credibility. Without making any attempt to figure out who you are talking about, it is clear from your description that the person deliberately committed serious misconduct, for whatever reason, over years and years, which hurt not only himself but could potentially destroy the careers of many of his colleagues, recommenders and previous supervisors. Look at the (admittedly somewhat extreme) case of Haruko Obokata (the STAP stem cell scandal): a single person's dishonesty was able to destroy a whole institute, trigger the suicide of a well-respected colleague and do plenty of other damage.
Academia is a harsh place with harsh competition for jobs. The majority of people who do a PhD never get a tenure-track job in academia. Many of them are hard-working, even brilliant, but have bad luck, personal reasons, etc. They lose their years, too. If you want to hire a hard-working, talented person, there are long lines of candidates waiting and willing to do the job. Why would you hire someone who has no reliable track record of ANY results, but a solid track record of being dishonest, and who could potentially nuke you down to the ground?
Upvotes: 5 <issue_comment>username_3: I'll answer the question: *Does he count on connections in the industry?*
I am an industry retiree. The company I retired from takes employees' integrity very seriously. It's one of the most important items on the annual performance review. When they interview job applicants, they filter out people with bad past records. I don't believe that professor would have any chance of getting hired by this employer. And I do know that many employers do similar things. So, he will have a lower-than-normal chance of being hired by industry companies.
Of course, he still might get a job offer from some company. But he'd better have his act together after he is hired. Whoever hires him will worry that he might steal company intellectual property or even money because of his past record. They will also worry that he would cheat on his work, such as falsifying time sheets or faking test results. He will be on his manager's watch list, for sure. In other words, he will have a miserable life wherever he goes.
Upvotes: 4 <issue_comment>username_4: Before I start, let me just say that I agree with the other answers that landing a job in academia will (and should) be nigh impossible, but let's get to the more interesting stuff:
Thinking back to a few cases I have heard of (Netherlands, Netherlands/Canada, Germany, Italy, US from the top of my head, though for the life of me I don't know their names, and the one name I do remember I don't feel like sharing, because he seemed to enjoy the attention far too much...) where people were caught fabricating data, faking past education and plagiarizing work, at least four of them *were* able to get a job again soon after (though three of them didn't work in academia). For none of those people was it a good career event, of course, but it wasn't as bad as you would expect given all the media attention they received, and most did land jobs in the long term (all except the Italian guy outside the world of research). In certain cases I think this is 'fine' and makes sense (e.g. if you have proven that you're able to do your job incredibly well... how much of an issue is it *for a company* that you faked your academic credentials in the past?), though in other cases I have been totally dumbfounded by it (e.g. the Italian guy was still able to secure research funds privately for his work).
Narrowing it down to only those who plagiarized (and whom I know of) leaves one person, and *as far as I know* he's currently without a job and is writing a book about his experiences in the academic world (he got **a lot** of publicity when he was caught); however, it must be noted that he didn't only plagiarize: he also fabricated data and didn't follow correct procedures for certain experiments.
Now, the following is just a hypothesis, but from what I have heard and seen I do believe that somebody who only plagiarized, but did **not** fabricate data or participate in similar endeavors, will be able to land a job relatively easily. The pay won't be great and the company will wish to keep quiet about employing him, but then again, even if he plagiarized, such a person *does* have a lot of skill, and in a lot of fields (e.g. Computer Science: yes; English: nope) that's enough to outweigh the disadvantages.
Upvotes: 4 <issue_comment>username_5: Maybe a job in an office of responsible scientific conduct/research ethics?
He sounds like a nice guy, and if he is, then he probably feels bad about what he did. If he is sincerely remorseful and can convince others of that, an office of research ethics might hire him to give talks on what not to do and why.
He also may still have a shot at teaching-focused jobs, especially since it sounds like he was a good instructor.
But I doubt he'd have much of a chance at getting a research position ever again... *maybe* if he went the research ethics route for a few years first, to prove he had rehabilitated himself, but even then it seems unlikely.
Upvotes: 2 <issue_comment>username_6: There are enough of us without a plagiarism record who are desperately trying to get a permanent position in academia. As someone who has had my own work plagiarized, I am more sympathetic to his victims than to someone who has plagiarized multiple times.
"Nice guys" aren't always so nice on the inside. Knowingly stealing someone else's work is not a nice thing to do. If that's what he has to do to succeed then maybe he's not cut out for a career in academia anyway.
Upvotes: 1 |
2014/11/07 | 2,389 | 10,586 | <issue_start>username_0: In primary and secondary school in the US, some teachers will adjust homework so that it is a better match for students. For example, a teacher might give a strong math student more advanced math problems, while a weaker student might be given more remedial work.
I am currently teaching some 100-level undergraduate courses. While my assignments seem appropriate for the majority of students, I discovered a few who are really not ready, as they came from rural high schools that did not offer the necessary background. It is unlikely that these students can develop their skills to the same level expected of other students within the span of one year, unless I make significant adjustments to their work. As my assignments are too difficult for them, they have given up hope.
The students are too few to warrant recommending the creation of more remedial courses. I'd like to apply some of the differentiated instruction methods that are common in primary and secondary education. At the very least, I would like to: (1) offer these students alternative homework better suited to their level and (2) offer them exams more appropriate for the level that they can realistically achieve during the first semester of the two-part course.
* Do universities permit teachers to differentiate their instruction, or must every student be given exactly the same assignments and assessments?
* If this is not permitted, is there some other approach or way of framing the homework and assessments so as to make this seem fair for everyone?<issue_comment>username_1: While I am sure different universities will have different policies, the ones I am most familiar with would require that all students end up being taught the subject at the same level and given the same (likely departmental) exam at the end, especially for 100-level undergrad courses. This is mainly due to the fact that the next level of class would require a certain level of knowledge, and anything less would hinder the students further. Most professors that I know would attempt to solve this by assigning large amounts of reading and allowing students who are already familiar with the material to simply gloss over it, while being very clear about what they will need to know on the test via homework and quizzes, such that the students can study according to their needs.
I would be very careful about assigning different work to different students; however, suggesting optional reading/assignments for the entire class would be safe, and mentioning that you think they would be a good introduction, or something that you can go over more personally via office hours for those who you think are struggling, would be a good compromise. If you still want to assign certain individuals extra homework, I would suggest creating a pre-test and assigning it only to those who do poorly (tailor it such that they only need to do work relevant to the parts they got wrong).
Instead of creating a remedial class for the few students, perhaps the university (or department) could create a self-paced course for those students, or require those who score less than X on whatever entry exam is used (SAT?) in the subject to take an "entrance test" in order to make sure that they are ready to succeed in the class (and of course provide them the means to study in order to do so).
Upvotes: 2 <issue_comment>username_2: It would be very unusual, in my experience in the U.S., for one group of students in a class to be given an "easier" exam than other students in the same class. When the students who took the "harder" exam found out about it, they would have a valid grievance that the faculty member would be unlikely to win. If a colleague asked me about this, I would strongly discourage giving different exams.
At the same time, it is perfectly reasonable to give individualized instruction *before* the exam. This is more feasible with small classes than large ones. At smaller schools, it is common for faculty to meet one-on-one with students, explain background material, and suggest additional problems to help those students self-remediate.
Whether this is possible for you depends on how much time you have available to do it, and also on whether you can convince the struggling students to put in the extra work that will be necessary for them to succeed. Unfortunately, some students are unable to do that, or choose not to.
---
I also want to give some more personal remarks, because I empathize with the spirit of the question: not wanting to leave students behind when they arrive underprepared for a class. I also teach at a school where some students have clear potential but were underserved by their high school and don't have the background that is expected when entering college.
The question mentioned being "fair to everyone". This has many meanings. You want to be fair to each student - which means both recognizing where they are currently at, and not sending them forward without the background to succeed in their next class. You want to be fair to the other students by making the class similar for everyone. And you also want to be fair to future professors, by sending them students who have the appropriate background.
Sometimes, you will be in a position where you can't do all these things. Perhaps a student has real potential but simply can't get to the necessary level by the end of the course. Perhaps the student could get there, but has personal or family obligations that occupy their time. Perhaps a student is just not quite mature enough to put in the work needed.
This can be one of the more difficult situations for a professor. But it is also very common, particularly at institutions that are not extremely selective. If you talk about it with your more experienced colleagues, they will have their own experiences with it, and they will be able to give you advice and support.
Upvotes: 4 <issue_comment>username_3: Differentiated Instruction
--------------------------
[Differentiated instruction](http://en.wikipedia.org/wiki/Differentiated_instruction) is based on the premise that students learn better when they are pushed just beyond the point where they can work without assistance. Teaching methods and presentation of material should be calibrated to the student's level of understanding so that [learning is optimized](http://www.glencoe.com/sec/teachingtoday/subject/di_meeting.phtml) for the entire group, regardless of differences in ability.
---
**Is it fair?** The best way to "seem fair" in a college-level course is to make every effort to be impartial and objective in evaluating students ... under most circumstances this probably means they should all be subject to the same required work and examinations.
However, it's perfectly natural to offer extra homework, practice exams, and additional instruction to students that need extra help. Often, you can help by encouraging/organizing student study groups -- which can be helpful both for engaging students with the course material and connecting those that need help with a wider network of support.
---
**Is it permitted?** Aside from rules set by the legal statutes that govern the region (i.e., city, state, and federal law), what is *permitted* by a university is entirely up to the policy makers of the institution -- commonly the president, provost, and board of directors for the school (often with advisory from the faculty).
So the short answer is, it varies ... you definitely need to check with your particular institution. In addition, this will depend on the level and aim of the course. For instance, 100-level courses are building the foundation for higher level topics, so there is much less leeway for instructors to change the amount of material to be covered or the course content to accommodate different students.
Differentiated instruction doesn't seem to be widely used in higher education. The issue was discussed in a [recent article](http://www.collegequarterly.ca/2013-vol16-num03-summer/lightweis.html) [College Quarterly, 2013]:
>
> While a few higher education faculty members have embraced the notion
> of differentiated instruction, the assumption is the majority of
> college instructors will focus on the traditional teacher-centered
> strategy of disseminating information in lecture form (Burke & Ray,
> 2008; Chamberlin & Powers, 2010; Handy, 2005; Smith, 2006).
>
>
>
Upvotes: 1 <issue_comment>username_4: As others have already said, alternative exams are fraught with danger for you, and especially for students who go into the next course believing they are prepared for it.
It sounds like you are prepared to do extra work. (Good.) Reserve a room and announce a study group, open to all students. For each session, start with easier problems/examples and show how they lead to results at the level expected for the course. Many, perhaps most of the students who are motivated to succeed will show up for every meeting. There will be some motivated students who cannot attend due to either class conflicts or job requirements. The best you can do is pick a time that suits the majority. The non-motivated will self-select out, and you will have learned something about them.
You've said you *cannot* get these students up to level. If you can *get them through* with enough preparation that they can master succeeding courses with extra effort, you will have done them a great favor. That is far better than contriving a passing grade that will cause them to fall on their faces later.
Anecdote: One such study group got me through a particularly hideous master's course: Distributed Database Systems. We met on Sunday mornings, *sans* professor, and figured out just WTF he had told us in the previous week.
Upvotes: 1 <issue_comment>username_5: This does not answer the title question, which, as others have said, depends on the university, but it shows a possible approach to "give them hope".
You can give students the choice between two exam papers: an A-series paper, which allows them to get an A grade, and a simpler B-series paper, which allows them to get at most a B or C grade. The choice between the two types should be made by the students before the beginning of the exam, without seeing the exam papers.
I applied this approach many years ago in a course where the situation could be considered similar to yours: I can't say it was entirely satisfactory, but it at least eased the lives of those students who, for lack of background (and willingness to catch up), couldn't aspire to get an A.
Upvotes: 1 |
2014/11/07 | 721 | 3,093 | <issue_start>Is there a common standard for the title hierarchy of research positions in the English language? I am excluding people such as students or professors who also do research, and asking purely about researchers. I mean something like engineering's Junior Programmer and Senior Programmer.
I have heard of these, and in what I assume is a hierarchy:
1. Research assistant
2. Assistant researcher
3. Researcher
4. Senior researcher
Is there such a thing as "Principal Researcher" which describes the job position, as opposed to the PI on a project? Would this title be given to more than one person in the same lab?
Purpose of the question: In other languages, specifically in Asia, there are many words for hierarchy in both business and academia. Google Translate does not work, as it just recommends "senior" for many of the words, but in the native language they are much different, and are all above the standard "Researcher" position. The research institute would like things like business cards and the website to make sense to other countries and native English speakers. As of now, the native language shows two different words, but in English they are both "Senior Researcher", which makes it difficult to understand/explain what position the person is in.<issue_comment>username_1: In America, at least, outside of university professor ranks there is no standard hierarchy. You will often, however, see "Associate < [no adjective] < Senior < Principal < Fellow".
In industrial research, most companies have only a few distinctions, as people past a certain rank in research are expected to instead switch to a management track and adopt management titles. There is great variety from company to company, however. For example, where I work we have an unusually deep tree of technical ranks, which somewhat parallels the professorial ranks: "Associate Scientist < Staff Scientist < Scientist < Senior Scientist < Lead Scientist < Principal Scientist < Chief Scientist."
Upvotes: 3 <issue_comment>username_2: >
> Is there a common standard for the title hierarchy of research positions in the English language?
>
>
>
**No.**
-------
There is no defined, consistent hierarchy of research job titles anywhere in the world, at least to my best knowledge. Essentially, every university, often even individual departments, handle job titles, responsibilities and hierarchies slightly differently, even if of course a lot of common patterns exist. Don't assume anything about hierarchy purely based on titles.
Upvotes: 4 <issue_comment>username_3: In UK Academia it usually goes something like:
* Research Assistant
* Assistant Researcher / Researcher
* Senior Researcher
* Junior Associate Researcher / Associate Researcher
* Senior Associate Researcher
* Post-doc Research Fellow
* Research Fellow
* Senior Research Fellow
* Principal Research Fellow
Upvotes: 2 <issue_comment>username_4: In Argentina:
* Doctoral Research Fellow
* Post-doc Research Fellow
* Assistant Researcher
* Adjunct Researcher
* Independent Researcher
* Principal Researcher
* Superior Researcher
Upvotes: 1 |
2014/11/07 | 825 | 3,529 | <issue_start>I am trying to publish a paper. This was the result of a summer's work in my spare time while working 40-hour weeks in a non-math-related job. I received an email response from the journal that said neither that the paper was rejected nor that it was accepted: I have been asked to revise and resubmit. I was given feedback by a reviewer to make some minor changes, and the editor used VERY positive language and remarked that if the changes are made then it could be published in the latest issue.
However, at first I took this as great news, since this is my first real work. But, thinking about it more, I started to worry that I had gotten my hopes up too soon.
Do I still stand a good chance of it being published?
Edit: Thanks for the answers.<issue_comment>username_1: Being asked to revise and resubmit is very common and you should not be discouraged by this but rather make the requested revisions and resubmit your paper.
Upvotes: 4 <issue_comment>username_2: In the journals that I deal with, even a request for major revisions usually means that a manuscript is on its way to acceptance: it is just a question of whether you are able to put in the time and additional work necessary to address the issues raised by the reviewers.
"Revise and resubmit," by contrast, generally means that the editor sees potential, but that there are too many problems to expect the manuscript to be able to move forward on the tight time schedule of a request for revision. It's a kind of (faint) compliment, actually, and you should take it as it sounds. I recommend treating a "revise and resubmit" as a request for *really* major revisions. If you get a request to revise and resubmit, take the reviewers seriously, and take your time revising until you and your advisor feel you have well addressed all of the issues that they raised. When you resubmit, your cover letter should explicitly mention the prior version and how you have addressed key issues raised.
Upvotes: 4 <issue_comment>username_3: Yes, if you've only been asked to make minor changes, the paper stands a very good chance of being published.
Now you've got to work methodically, and be meticulous with the detail.
Go through each required change one by one. If it's not an unacceptable change, make it; and, in a new, separate document (a log of the changes), write one or two sentences to describe how you've made the change (sometimes a word or two, e.g. "spelling corrected", might be sufficient). If it would be an unacceptable change, write a few sentences in your log of changes about the basis on which you're sure it doesn't need changing.
Work through all these with any co-authors.
When you send the changed paper, add a covering note. In that, copy and paste each of their requests for a change, and after each one, add your sentences from your log of changes about either how you've done the revision, or why you haven't. Your editor may have sent you a proforma or template to fill in, that would do the equivalent job: if they have, use it.
Upvotes: 6 [selected_answer]<issue_comment>username_4: I am a peer reviewer for a journal and have more than 10 publications myself. There is a very high chance your paper will be published if you address the reviewers' questions and requests. Where you do not want to make a change, explain your points more fully; sometimes the reviewers are not directly in your field of study, and this is done so that a layman can at least understand your paper. Good luck with your publication hustle.
Upvotes: 2 |
2014/11/07 | 1,146 | 4,965 | <issue_start>username_0: In this several times up-voted [answer](https://academia.stackexchange.com/a/27311/10643), it is suggested, among other things, that 'if someone using an image [...] that they do not own (it) is inappropriate and should be first reported to the PI of the paper and, potentially, the publisher if no action is taken.'
In my understanding, using images you do not own is not a good idea: it's *illegal* in many countries, and the *owners of the copyrights* might react and claim their rights, but it's not plagiarism per se.
Let's consider the authors of an image processing paper who use a copyright-protected stock photograph to test their algorithm. They can cite the source of the image, in which case they would still be infringing copyright. But let's say they don't: they are not claiming that the photograph is their own, they just figure the readers won't care.
**Edit:** I recently came across a paper where it was written that images were from a commercially available CD of example images, without saying which one. In this case it's clear that they do not claim that they generated the images themselves but they didn't give any reference.
Is this academic misconduct that should be reported?<issue_comment>username_1: The concepts of plagiarism and copyright are largely orthogonal.
Plagiarism is about taking credit for somebody else's work. You could copy and paste an entire book, and so long as you made it clear whose work it was, it would not be plagiarism. (although it would be a bad idea for other reasons!)
Copyright is about using a copyrighted work without permission. Briefly, any work that somebody creates is automatically covered by copyright, held by its creator. The copyright holder may sign that copyright over to another party (this is common when submitting to journals), or they may place a work in the public domain, but otherwise, any use of that work without a license can constitute copyright infringement (There are various exceptions to this, such as Fair Use, that depend on national laws in specific countries). Some works are licensed under broad licenses such as Creative Commons, which allows anybody to use the work for certain purposes. Others are not, and a specific license for a specific use must be obtained from the copyright holder.
Two examples: Imagine that I am building a presentation for an upcoming conference. For slide one I find a great image that is in the public domain, and I put it into my presentation and claim that it is my own work. In this case I have plagiarised, but not violated copyright.
For slide two, I find another suitable image, but one that is not in the public domain and does not have any permissive license attached. I use it, crediting the photographer. In this case I have not plagiarised, but I have infringed the creator's copyright.
EDIT: Just realised that I answered the question in the title, rather than the (different) question in the question body. Strictly speaking I think that the authors of your hypothetical paper have both plagiarised the image and violated copyright with it. Whether this constitutes academic misconduct is a question that I shall leave for those with more experience in such matters.
Upvotes: 2 <issue_comment>username_2: To the best of my understanding the two differ in the following way:
* **Plagiarism is primarily an ethical issue:** it refers to a false claim of creative work.
* **Copyright is primarily a legal issue:** it refers to use of a work without a legal right to do so.
They can be confusing to differentiate because a person committing one is also often committing the other as well. However, it is possible to violate copyright without plagiarizing and to plagiarize without violating copyright. For example:
* Darwin's "Origin of Species" text is old enough that it has entered the public domain, and thus is no longer protected by copyright. A person who claimed chunks of it as their own would be plagiarizing, but not violating copyright.
* If a person reproduces an image in a new paper with appropriate citation to its original but fails to pay the publisher of the original paper a $35 fee that publisher demands, then they have not plagiarized, but are in violation of copyright.
From a scientific perspective, plagiarism is a major problem, since it is a deliberate ethical violation that significantly undermines the credibility of the author. Copyright violations, on their own, are much less of a big deal, since they may well be caused by legitimate misunderstanding or disagreement about the interpretation of a minor unclear point in a gigantic wall of legalese.
Thus, in the example given of image processing being applied to an unattributed image: if the contents of the image are not of scientific significance, I would interpret it as primarily an issue of copyright and thus not a significant violation worth reporting (as a scientist).
Upvotes: 5 [selected_answer]
2014/11/07 | <issue_start>username_0: So as people have probably seen by now, the UK is stuck in a [marking boycott](http://www.theguardian.com/higher-education-network/blog/2014/oct/29/marking-boycott-why-are-academics-protesting-about-pensions), which has started to directly affect me as a final year student.
I have not personally been affected too much yet, but have had one module's continuous assessment removed with no suggestion of replacement. There are no current plans regarding the future of the boycott by either the teachers or the administration, and it is unclear what the ultimate result and outcome will be at this time, which is very concerning for me.
I want to express my disapproval regarding the implementation of the strike as it puts students in a difficult and non-productive situation. For example, students have been told "Examination of dissertations and theses at postgraduate level, as well as vivas, are included in the action." I don't want to damage the relationships I have with my lecturers as I plan on staying in academia, but seeing these actions is making me question my desire to stay in academia.
So what is the most effective way to complain about this?<issue_comment>username_1: I think it's a given that everyone at the university is well aware that the students aren't happy about the implications of the boycott. Be aware that your student union may actually officially support the academic staff ([example](http://soasunion.org/news/article/6013/Students-Union-Statement-on-UCU-Marking-Boycott/)) and be calling for solidarity.
This isn't the place to discuss the politics, of course, but your best method of protest will be more wide-reaching. Personally complaining to your lecturers will be ineffectual at best. For example, I'd suggest writing an article in your student magazine, and organizing or joining a peaceful, public protest *against* the boycott.
Upvotes: 3 <issue_comment>username_2: Before you can understand how to express your disapproval, you need to understand a little bit about how the UCU (the Union organizing the strike) and universities work.
>
> I have not personally been affected too much yet, but have had one module's continuous assessment removed with no suggestion of replacement.
>
>
>
It is not the responsibility of those on strike to come up with the alternative. It is the responsibility of the University who is collecting your fees to deliver what it has promised.
>
> The union in question have poorly planned this boycott and there are currently no plans to remove it. We have had no communication about who is striking and what their alternative plans are if this continues and I am very concerned about this.
>
>
>
The marking boycott has been in the works for months now. The universities and union initially had extended discussions, but were not able to reach a resolution. The union then suggested that, if an agreement could not be reached, they would recommend a marking boycott. After additional talks failed to reach a resolution, the union asked its members to vote on a marking boycott. After the marking boycott was approved by the union members, a final round of discussions was held. After those discussions failed to reach a resolution, the marking boycott was initiated. The union is not allowed to contact students. It is the university's responsibility to contact students and tell them what is happening and what the alternative plans are. Issues about lack of communication and alternative plans should be addressed to the University.
>
> I want to express my disapproval with their methods as I disagree entirely with the boycott as I believe using students as pawns is never acceptable. For example they have been told "Examination of dissertations and theses at postgraduate level, as well as vivas, are included in the action." which is much too far. I don't want to damage the relationships I have with my lecturers as I plan on staying in academia, but seeing these actions is making me question my desire to stay in academia.
>
>
>
This is something that should be directed at the union. The union chose what they are boycotting. The union could have used a research boycott, a recruiting boycott, or a teaching boycott, but the union chose to focus on current students.
Your student union may be able to help you voice your concerns to both the university and the union. It is possible that your student union is backing either the union or university and will not pass on criticism to either. If you want to contact someone directly, the UCU is suggesting students contact the vice chancellor's office at their university. To contact the UCU you could use anyone on the [UCU contacts page](http://www.ucu.org.uk/contacts).
Upvotes: 5 <issue_comment>username_3: >
> *What is the most effective way to complain about this?*
>
>
>
Individually (if you're the only one), there is no effective way to complain about this. Collective action has power by many individuals acting together.
Understand that **nobody wants job action**. Job action, such as a marking boycott, occurs because union members consider the final offer from the employer unacceptable. *Maybe they're right!* (But maybe not).
There are two ways for this situation to end:
* The employer increases their offer sufficiently for the union members to accept.
* The union, either voluntarily or involuntarily, retreats, accepts the offer previously considered unacceptable, concedes defeat, and members get back to ordinary work.
Before you blame the union or their members, study the background in detail and learn *why* they are choosing job action. Maybe you will end up *supporting* their job action, and instead express your disapproval to the employer unwilling to meet reasonable demands. Maybe not. The student union may either support the employer or support the teachers' union. Or they may be so divided that they decide not to explicitly support either.
Note that this goes beyond academia, and applies equally well to, say, the ongoing German railway strikes, or any other strike that affects a third party.
---
**Edit 22 February 2018**: Today a national university strike started in the United Kingdom. Lecturers and other university staff are on strike. If this industrial action continues for long, student exams *will* be under threat. According to a YouGov poll, 66% of students at striking universities support the strike. 50% blame the conflict on the universities, and just 2% blame it on the union. So this is a clear example where students overwhelmingly side with academics against the universities. See [UCU news item](https://www.ucu.org.uk/article/9345/Poll-shows-students-support-pension-strikes-and-blame-universities-for-the-disruption) for details.
Upvotes: 5 [selected_answer]<issue_comment>username_4: Keep your head low. You were dealt a bad hand. You can't win, only minimize your losses.
If it's otherwise wasted time, consider traveling through Europe for a semester. Since a troll seems to have outed you as a comp sci major, go to Berlin and check out the start up scene there.
Upvotes: 0 <issue_comment>username_5: #### Consider writing a public letter/article on the topic
I agree with the suggestion in the other answers that you should look more deeply into the matter. In particular, your view should consider the underlying dispute over employment conditions and you should form a judgment on that matter as part of your overall take on the boycott. This might change your mind on the issue, or it might not, but at the very least it will give you proper context and a holistic understanding of the matter.
Now, assuming that you maintain your view that the boycott is a bad idea, I see no reason why you couldn't express that view and still maintain good relations with academics in your university. Academia is a place where it ought to be possible to disagree over a subject and put forward arguments and positions in good faith.
Academics sometimes disagree over political matters ---many of which are far more consequential than a workplace boycott--- and they are used to the fact that university is a place where they will encounter views they disagree with.
In terms of "how to complain", if you have thought out your position well, and if you are sufficiently logical and articulate in your reasoning, have you thought about writing a letter/article for a newspaper, blog, etc.? You might have the ability to put forward a useful perspective on the boycott from a class of people who are negatively affected by it, and you might be able to share some useful information that would add to the conversation on the topic. This will require you to have a fully developed understanding of the context, but if you frame your views clearly and sensibly then they might be convincing. You are pursuing a postgraduate education, so *use that inchoate education to add value to the public conversation on the matter*.
Upvotes: 0
2014/11/07 | <issue_start>username_0: I am applying to doctoral programs in the US, and, because I have two breaks in my studies, as shown in my transcripts, I want to find somewhere to explain them.
But I am concerned that if I simply put this short paragraph explaining my breaks in studies somewhere in my application package as supplemental material, it may be ignored (would it?). Thus I decided to include it in my statement of purpose as an appendix.
The word count of my statement of purpose is not a problem: even with the paragraph included, it still comes to fewer than 800 words.
So, would doing this risk annoying any member of the admissions committee?<issue_comment>username_1: This would be fine. One fundamental point of the personal statement is to allow you to comment on anything unusual in your records (gaps in education, a semester when you got sick and had low grades, etc.)
In general, you can format the statement of purpose however you like. Using clear signposts (such as section headings) can make it much easier to follow.
Remember that the person reading your personal statement probably has a large stack of them to read, so making their job easier can only help you.
Upvotes: 2 <issue_comment>username_2: Is there any way to turn the gaps into strengths as part of your narrative in the statement of purpose? If so, that is likely better than separating them in an appendix. For example, if you spent time in industry, did it give you a better appreciation of why you wanted to return to studies? If you took time off for personal or family reasons, did it give you time to reflect or experiences that have shaped why you want to return to graduate school? More mature students who are in a graduate program because they really understand what they want out of it are often highly appreciated by professors. If that is you, then embrace the gaps in your transcript and let them be (a small) part of your main narrative.
Upvotes: 2 [selected_answer]<issue_comment>username_3: I would put such gaps in the diversity statement. That's what that optional portion is for -- showing how you are different from the norm. Many faculty prefer students who have had experience outside of school -- or who have left and then recommitted to coming back. So don't view this as just a negative.
When I read applications, I want the statement of purpose to be about the research you want to do. I do not like biographies -- the ones that start out with "When I was a little boy, I always wanted to be an \_\_\_\_\_\_" (or some similar iteration) get tossed out.
Upvotes: 2
2014/11/07 | <issue_start>username_0: Four of my Japanese teachers are, as you would expect, Japanese, and they all speak English fluently. But now and then, there will of course be slip-ups: I often see small grammatical errors in one sensei's handouts. They don't affect the quality of the communication, but is it appropriate to point out these errors to the sensei?
I know that if I were in Japan, I would always want native speakers to correct me if I made a mistake; more so in the written form. However, this particular sensei has been at my uni for something like 20 years, and her English is perfect in all other regards; I wonder if it wouldn't be a little insulting to point out otherwise negligible written/spoken errors. Nonetheless, I'd like to know what some teachers think.
EDIT: When I say my sensei's English is perfect, I mean that she is able to communicate effectively, not that her speech or writing are free from grammatical errors. Like any non-native speaker who hasn't learned from a very young age, she will sometimes say or write things that sound jarring in English grammar.<issue_comment>username_1: It depends on how much interest she has expressed in being corrected or trying to improve and how close your relationship with her is. Any corrections should be generally mentioned to her respectfully and in some form of one-on-one conversation.
If she has asked in class for mistakes to be pointed out, then I would not hesitate to offer the occasional correction. Frequent corrections are probably not welcome.
If she has not expressed interest in being corrected, I would only mention corrections if the mistake could potentially lead to misunderstandings either in message tone or content.
A gray area might be if the mistake is in formal communication (e.g. grant applications) and you work closely with her. In that case, use your own judgement.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I would say it is more polite to ask first whether your teacher is fine with being corrected. After all, we do not know who is going to be offended by something that seems unoffendable.
I once met a native English speaker who asked me if I was okay with her correcting my English to make it more natural. Frankly, though I did not mind either way, I appreciated her asking before acting.
To me, it is general etiquette to ask before acting. Indeed, you can only win the recipient's respect by asking first.
Upvotes: 2 <issue_comment>username_3: Different cultures look at such corrections in different ways. I won't pretend to understand all of them, and I know I'm misunderstanding some, but tread carefully.
In some cultures (though not Japanese culture, I think) corrections in public can be considered humiliating. Say what you want in the privacy of an office, one on one, but not in a group of people. In some cultures, what an elder has to say carries much weight, and correction from a younger person must be handled delicately.
In other cultures, politeness is key, and some things can just be considered rude that an American would never imagine to be so. It took me some time interacting with students before I realized that the answer to yes/no questions is yes, because no is rude, and I think that putting someone in a situation where they have to say no might be rude too. I find that avoiding yes/no questions in situations like this helps. For example, the answer to "can you do this?" is "yes", but the answer to "How well will this come out if you do this?" might be "it might not work at all"!!
I imagine that a sensei who has been teaching language for decades is used to just about every interaction there is. If you're interested in Japanese, though, I'd approach this as an opportunity to learn about cultural sensitivities. Approach your teacher with this, explain that you understand that there are different sensitivities with respect to professional interaction, lay out the issue, and ask how this would be handled in Japanese culture.
Japanese experience is TREMENDOUSLY VALUABLE in the worlds of business and technology. The more you learn, the more valuable you become.
Upvotes: 2 <issue_comment>username_4: You say that her English is perfect in all other regards. There may be grammatical errors in the handouts because they are written in a tight schedule and she has different priorities (e.g. writing papers, grant proposals, etc.)
The point for me is: do you think those errors are due to some lack of knowledge about English or just lack of time? Do you think she could spot those errors by herself if she cared and could find the time for that? For how many years are the handouts going to be re-used?
That makes the difference, IMHO.
On the one hand, if this is all due to lack of knowledge about English, then providing this knowledge would be welcome.
On the other hand, if this is because she doesn't care and/or doesn't have the time for that then pointing at it and forcing (or suggesting) her to spend time and take care of that is going to be perceived negatively.
Upvotes: 3 <issue_comment>username_5: In Japan, correcting a teacher's mistake is pretty unwelcome and humiliating. Several visiting professors from Japan complain that American students are very impolite: they ask questions. Seriously. Even asking a teacher a question can be an insult (they feel like they are being tested or mocked). Since your teachers are not visiting ones, but people who have lived for years in the country, I assume they are far less sensitive to American ways of teaching and communication and more open to being corrected. However, it is hard to guess, and potentially it can hurt an otherwise good relationship.
So if your teacher has indicated that he/she wants to improve his/her English, feel free to help. Otherwise I would let it go. If you really think it is necessary, try to do it as indirectly as possible, for example during a short chat when you ask him/her how he/she learned such good English. Some praise always helps. If the answer is that people helped by correcting and teaching, then you can steer the conversation toward whether he/she would like any help with proofreading. Good luck.
Upvotes: 4 <issue_comment>username_6: In your class, will you lose points on an assignment for grammatical and spelling errors? If so, then you are not being rude by pointing out these mistakes. My advice is to mention these errors one-on-one – absolutely not in front of another student, and show as much respect and humility as possible. You don't want to embarrass your teacher.
Upvotes: -1 <issue_comment>username_7: Directly confronting a teacher in front of others can be seen as an attack.
I would advise against this, especially towards people with a Japanese background, because in Japanese culture openly showing conflicting opinions is generally regarded as rude; disagreement is instead implied in a very polite way. Also, respect is important, so correcting a teacher might be seen as attacking the teacher's authority.
Confronting the teacher in private might be a better alternative, as long as the teacher is open to it, it is done politely, and it is not done so often that it annoys the teacher.
Indirectly correcting the teacher by bringing up the correct way to say the thing the teacher made a mistake with is also possible.
If doing so, I would advise bringing up only the corrections, not the mistakes themselves, and not too often, as it could be seen as condescending.
However, this can be a better way when cultural differences mean that being conflicting is taboo.
In any case, it is best to be polite and careful about what you say.
As a side note, the Japanese and English languages are very different (much more than just vocabulary), and I can say from experience that going from one to the other is a big hurdle. Even if you do succeed in getting the point across, they might still make the same mistake, because it is awkward to them and hard to get used to. Pointing out a mistake that can be easily corrected generally won't be taken negatively, because it is of benefit, but pointing out a mistake that can't be easily corrected might just cause annoyance, and so you should proceed carefully.
Upvotes: -1 <issue_comment>username_8: It sounds like no, and this has nothing to do with Japanese culture.
**Why would you correct someone's grammar?**
1. You both want the communication to be formally correct. This would apply if your teacher wrote an honor code you needed to sign, and it frequently applies between grammar nerds who get some enjoyment out of using "less" and "fewer" correctly.
2. You fear the communication may be misunderstood. From your question, it sounds like you're not talking about this case.
3. You think they want you to correct them so they can learn. For a teacher with 20 years' experience, this fails the "common sense" test. He or she is communicating fine (and very successfully) with small "mistakes"; everyone knows what they mean, no one cares. username_2, I do think it is rude to decide for your teacher that they are "working on" English. That's not how learning a language really works; at some point you have your accent and your mistakes, and that's how you talk; it's essentially your dialect. You don't correct a French speaker's pronunciation, do you? No, it's their accent.
So... no, don't do it, generally.
Upvotes: 2 <issue_comment>username_9: The teacher has been there twenty years now, which is plenty of time to learn a language. She is a teacher at a university, communicating in English. Her English "is perfect in all other regards", and these are "otherwise negligible written/spoken errors".
At this point, I think you should *forget that she is Japanese*, forget that she had to learn English as a foreign language. She's way past that point. It's insulting to consider her as someone who is learning English. Native speakers aren't perfect either.
So the question becomes -- would you correct a teacher if "now and then, there will of course be slip ups", if they were a native speaker?
I feel you wouldn't, based on how you describe the mistakes.
Upvotes: 3 <issue_comment>username_10: No, it is not appropriate and additionally not a good use of class time.
* If you could not understand what your instructor said, ask for clarification.
* If you can offer the instructor a correction, then you probably understood what they said, and therefore, you do not need to waste your class time and the time of your classmates on corrections.
* Be aware that error correction is not as simple as you may think. It is unlikely that the teacher would benefit from having a student correcting them in such a setting. If your goal is to improve the class sessions, you are not going to notice a significant improvement in your instructor's overall English ability, as a result of your corrections, unless they are only using a very tiny range of language during your lessons.
Upvotes: 0
2014/11/07 | <issue_start>username_0: Usually, when I send an email to a teacher/lecturer, I always start "Dear `Mr. Atwood`" and end with "Regards/Best wishes (etc.) `Leo`". If I have to send a follow-up email after they have replied, I omit the opening and closing sections entirely, and just write my message. Is this considered rude? Would you prefer that a student always lead an email with "Dear" and sign their name?<issue_comment>username_1: I personally prefer to always use opening and closing conventions in my emails, just as I would a letter. It costs nothing and it adds a little courtesy, which is never inappropriate. It's also a signal of how intimate you consider the relationship: are you on a first-name basis in person, or is there a bit more professional distance?
Email is generally a less formal mode of communication, however, so I would not be offended by a correspondent who did not, and likewise, my opening and closing is not quite as formal (e.g. "Dear [person I don't know]" vs. "Hi, [acquaintance]").
Upvotes: 4 <issue_comment>username_2: I do the same. In an ongoing correspondence, where the next email is a reply to the previous one, I usually omit the complete introduction and ending sections. However, there are two cases in which I stick to the full option:
1. The person I'm writing to keeps their emails formal, so I do the same
2. A significant amount of time has passed since the last response (e.g. a recent update to a past correspondence)
I always place some kind of sign-off at the end, e.g. Regards, . The complete end section would include a footer with my contact details and affiliation, as well as my full name.
Upvotes: 4 <issue_comment>username_3: For the back-and-forths, I tend to avoid formality.
But, in order to avoid annoying the teacher, I use, say, "Thank you so much, <NAME>." Please note that I mention his or her name, instead of simply writing "Thanks so much."
I think doing so can prevent us from being considered "rude" in whatever sense.
Upvotes: 2 <issue_comment>username_4: For first contact, absolutely. It not only shows respect for the person but also knowledge of the etiquette of writing.
If the other person answers more casually, follow suit.
Upvotes: 2 <issue_comment>username_5: I'm often underwhelmed by the level of familiarity young students take with email messages-- as well as the content of those messages.
Unless you have a reason to know otherwise, address recipients formally, and with the correct title. In academic settings, the correct title is usually "Dr.".
In the message, concisely state why you are sending the email, and provide ALL the information the person you're communicating with needs to know in order to take action. For example "Can we schedule a meeting?" is NOT acceptable on its own. You need to state WHY you want a meeting, possibly with a reason why email isn't good enough, and provide the info the faculty member will need to prepare for the meeting. This will avoid having three email back and forths when one should do the job (which, IMO, is MUCH more annoying than not having "Sincerely" at the end of the message).
To summarize, there's much more to effective communication than the first line and last line of the email.
Upvotes: 4 <issue_comment>username_6: Email to instructors/teachers/professors in college can be tricky. It's important that you are respectful and use complete sentences with correct grammar and spelling. If you follow these steps, you (usually) can't lose.
1. Does your instructor have a PhD? You will find the answer to that question in the course syllabus. If so, always, always, always address him or her as Doctor when speaking to them (Dr. in writing), unless she or he specifically tells the entire class that they don't want to be addressed as doctor. So, begin your email accordingly:
Dr. Smith, OR Mr. Smith,
2. Try ending your initial email with:
Respectfully,
Leo
3. After the first email, it won't hurt to add Dr./Mr. Smith in each reply email. You probably don't need it, but taking the extra couple of seconds to type their name shows a higher level of professionalism and respect.
Dr. Smith, Mr. Smith,
4. In your reply email, again, use that respectfully ending.
Respectfully,
Leo
5. Again, correct grammar and spelling are very important. Howe wood u feal if somewon sent u an email with bad speling and gramur?
What many students don't understand is how the little things can go so far and make a difference in their classes. Showing respect to the person who has dedicated their life to teaching others is the least you can do for them.
Good luck!
p.s.
I'm not a teacher, but I have a tremendous amount of respect for them. I'm a former academic advisor at a university who loved helping students!
Upvotes: 2 <issue_comment>username_7: In American culture/usage: If you're not sure of the instructor's degree (which might be the case if you're in an introductory course taught by a graduate student), it's always correct to address the instructor as "Professor X." "Dr. X" certainly won't offend someone who doesn't hold a Ph.D. (they might enjoy the elevation in status), but it's better to know the correct form of address, which you can often infer from a department's faculty directory.
I was the director of a program for many years, and so often saw students not my own who needed advice or help. Some -- again, students who had never met me before -- would come into my office and address me as "Elise," not even giving me the courtesy of my last name, much less Dr. or Professor. These students needed help in the art of addressing faculty.
I had a colleague who, when a student came to her door and addressed her inappropriately, would make them go out in the hall and come in again with the correct way of addressing her. I myself never managed to do that, but I had to admire her insistence on the proper form of address.
As far as emails are concerned, I agree with the others that you can follow suit, but it never hurts to keep one step up in terms of formality.
Upvotes: 2 <issue_comment>username_8: I actually learned over time that a lot of professors like an informal way of communication, especially when a relationship exists. Scientists like to see themselves as a community, and as a student you are on the way to becoming part of that community. In that sense, I have found many professors to be very informal themselves.
I also think that it is accepted in communication between scientists to omit the academic title in the salutation.
That being said, an initial "Dear Mr. Atwood" should be totally sufficient to avoid being rude.
Upvotes: -1 <issue_comment>username_9: It is always better to be more polite than necessary and to use conventions than to not be polite enough. Personally, I expect a minimum of politeness and respect from students, and I also appreciate when students show more respect. From my experience, expectations vary depending on the professor, so it is safer to observe what other students around you are doing and behave similarly towards a given professor. Besides, if you are an international student, depending on the country, it may be good to get familiar with the local culture: some cultures have different expectations about what should or should not be done. Lastly, if you become more familiar with a professor, such as your supervisor, you may eventually use more informal communication. And if you are not sure what the professor expects, you can always ask the professor directly. But most likely, remember that your research advisor is probably not your friend/brother/family member, and should not be treated in a similar way.
Upvotes: 0 <issue_comment>username_10: You should never use "letter conventions" in an email, whoever you are emailing. An email is not a letter!
Upvotes: 0
2014/11/07 | <issue_start>username_0: I use pdf files, generated with Beamer/LaTeX, and whatever pdf viewer is available on the computer I am presenting on. While I like the Beamer/LaTeX/PDF combination better than PowerPoint-type applications, I am not particularly happy with its ability to embed media (sounds and videos). Further, common PDF viewers (e.g., Acrobat, Evince, and Okular) do not provide any type of "presenter" view with a clock and notes on one screen and the slides on a projector. The presenter view in advanced viewers like [impressive](http://impressive.sourceforge.net/) is pretty limited. I am thinking of switching to an HTML5 based system (possibly [reveal.js](http://lab.hakim.se/reveal-js/#/)), but wanted to know what the drawbacks of HTML5 based presentations are.<issue_comment>username_1: From a web developer perspective, the only issue I can see is compatibility. If the provided computer does not provide an HTML5-ready browser, the presentation *may not* work. Unfortunately, having been in labs and lectures with institution-provided computers, there is no guarantee that an IT department has provided the newest browser. While I'd like to think that people, especially at institutions, are keeping their machines up to date, we have to assume that some machines have been forgotten or have not been maintained.
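Given that uncertainty, one cheap sanity check is to open a throwaway probe page on the podium machine before relying on embedded media. This is only an illustrative sketch (not part of the original answer): browsers that implement the HTML5 media elements expose `canPlayType` on them, while legacy browsers such as IE8 create only a generic element without it.

```html
<!doctype html>
<html>
  <body>
    <p>HTML5 media probe:</p>
    <script>
      // An HTML5-capable browser gives <video> a canPlayType method;
      // IE8 and other legacy browsers return a generic element without it.
      var v = document.createElement('video');
      document.write(v.canPlayType ? 'HTML5 video supported'
                                   : 'No HTML5 video (legacy browser)');
    </script>
  </body>
</html>
```

Save it as, say, `probe.html` on the same flash drive as the talk and open it first; if the legacy message appears, fall back to a PDF export or to a browser you bring yourself.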
[This HTML5 readiness graph](http://html5readiness.com/) demonstrates that IE8, [which still has a huge market share](http://arstechnica.com/information-technology/2014/07/windows-8-x-internet-explorer-both-flatline-in-june/), is woefully unprepared for any HTML5 feature. IE9 has some functionality over IE8 and appears to cover the audio/video portions you require, but only IE10 appears to be HTML5-ready as of last year.
I tried the reveal.js demo on IE8 just now. The rotating transitions don't work at all (it acts like Powerpoint basically), but it was usable. Since there are no videos or audio, I can't test those, but a standard presentation would at least be doable even in that browser.
It is unlikely that any IE9-or-older browsers will ever reach full HTML5 support due to the security risks and time drain required, so any attempt to present in an IE8 browser is likely to be bare-bones and equivalent to viewing a PowerPoint.
However, this can be largely mitigated with the use of a PortableApp web browser such as [Portable Google Chrome](http://portableapps.com/apps/internet/google_chrome_portable). A small footprint that can be carried around in a flash drive with the presentation.
From a student perspective, `reveal.js` might appear to be more confusing if you share the files, as it probably isn't just a `foo.ppt` file, but a source document, the JS, and so on. You will probably need to provide a read-me for being able to view the presentation. I'm sure everyone here knows how to open and view a PowerPoint, but some of us might stumble a bit with a more complex solution that may depend on multiple files and folders. That, and the student would need an HTML5 browser to get all the features working.
You will probably also need to provide a direct PDF copy of the slides, because I didn't see any noticeable way to print the presentation unless the stylesheet natively supports that from the print menu.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Overall HTML5 Presentations are great, but that's not to say that they'll be better than the alternatives in every respect, or that you'll never have a bad time.
One challenge I've experienced is when I need to collaborate with colleagues on the presentation -- many lay people are intimidated by HTML or Markdown-based tools. There aren't a lot of cloud collaboration apps that will let you upload plain HTML/Markdown and then let your friends add, edit, or comment on your work. You may end up having to do a lot of converting back and forth between HTML and Office formats if you work with non-techie collaborators.
Similarly, people are often confused about how to open an HTML presentation when you send them an index.html file and a folder full of assets. You can remedy this by hosting things yourself in a Dropbox folder, but that isn't always awesome.
I use Reveal.js often, but I think it's fair to say that I spend more time developing an HTML presentation than I would in Apple Keynote. I usually write out my ideas in Markdown first, then convert that to HTML and edit it further within the Reveal.js codeset (from my code editor).
If your presentations are mostly made up of text and un-altered images, Markdown/HTML can be very fast. If you like to futz with the colors, fonts, and modify your images, that's faster in a presentation software like Keynote. Even simple things like cropping images, writing over images, etc. would require you to do them in an external graphics editor before incorporating them into your HTML presentation.
So basically the biggest problems with HTML presentations are working with other people and focusing too much on the visuals. Other than that, it's great.
Upvotes: 2 |
2014/11/07 | 787 | 3,502 | <issue_start>username_0: Being early in my academic career, I don't yet get many paper reviews. Until recently, I had reviewed two papers in two years, and rejected one invitation to review. My last review was completed some 10 months ago.
Two weeks ago, I agreed to review a paper. Since then, I have received two more invitations. In total, in the past two weeks, I have received as many invitations to review as in the two *years* prior to that. All are from different editors, but the latest request is from a journal for which I reviewed 2 weeks ago, and comes 2 *days* after submitting a review to a different journal.
Although I have not calculated the probability, I guess it is unlikely that the sudden string of reviews is a coincidence. I have the impression that accepting and quickly submitting a helpful review has *caused* the additional review invitations. Which brings me to the question.
Do editors within the same journal typically share with each other names of people who have written helpful reviews in the (recent) past? How about editors of different journals, that may or may not have the same publisher?<issue_comment>username_1: The journal editor (in chief) may not, but boards of editors have overlap and word does often spread that "X@Y is prompt, conscientious and fair."
Or it could be random entropy.
I wouldn't give it much thought. At best, you are being thought of as a good reviewer. At worst, it's a random quirk. Choose the interpretation that makes you happiest.
--
p.s. If you're feeling overwhelmed, it's entirely ok to tell a journal that you have to decline. Respond as soon as you can so that they can move on. Quick declinations (and referrals to potential other reviewers when possible) are also signs of good, conscientious colleagues. And I'm a strong believer that the good karma from these unremunerated acts of conscientiousness will help in the long run.
Upvotes: 4 <issue_comment>username_2: In addition to possible sharing by editors, if the journals are by the same publisher or reviewing system, they may be sharing a database of reviewers and their expertise. Some journal management software lets you search for reviewers with related expertise, and also allows editors to give ratings to their reviewers. Thus, if you give a good review, the software may begin recommending you to editors in the same "family" of publications.
Between being a good reviewer and general growing notability in your field, within the next couple of years you may find yourself getting *many* review requests. It's important to set a boundary for yourself of how much time you want to invest in professional service, so that you can strike an appropriate balance between service and the rest of your responsibilities.
Upvotes: 3 <issue_comment>username_3: I guess it depends on the platform that manages the submission and review, but within the ScholarOne system as an editor or an associate editor there are additional filters that help you see when was the last time someone was invited for review, how many he has accepted/rejected, what is the average review turnaround time etc.
There are also two additional stats, the timeliness of the review and the relevance of the review, that allow editors to essentially rate reviewers (a rather simple 1-2-3 system) for a given review and which appear next to each reviewer. High averages, a good turnaround time and matching keywords in your profile make you a good candidate for a reviewer.
Upvotes: 2 |
2014/11/07 | 790 | 3,307 | <issue_start>username_0: I am a final year honours student (UK) who has come across a little bit of a predicament. About eight years ago when I was a teenager, I was undertaking a University course that I did not complete because of personal related issues. There was a lecturer that I did not get on very well with at the time because of this who I knew both from that and also from my previous school days. We had an argument once or twice; I was fairly young at the time.
When I started this new course a few years ago (which is at a different university), I committed to achieving as much as possible, and I have managed to get a 'first' overall in terms of averages for years one and two. However, in my final year it turns out that one of the examiners for my final year project is the same lecturer that I described above in the first paragraph (even though it is a different establishment).
So the question really is, should I have a reason to worry? Would under-performance as a teenager result in an unsettling experience or a bias of marks when it comes to the day when the viva is to be presented? My friends tell me that my worries are irrational as eight years is a long time and teenagers often under perform, but when I started to see if anyone else had this problem, I could not find any existing questions, and everyone on this community looked to be helpful in providing advice. If you could share any enlightening thoughts, it would be very much appreciated!<issue_comment>username_1: Anybody who has taught and *likes* teaching knows that one of the biggest rewards is seeing your students grow. From your description, it sounds like your only problems in your previous encounter were age-typical lack of focus and brattiness. If you are now a much more mature and polished student, then more likely than not, if the lecturer remembers you at all, it will be to your *advantage* when they compare with the kid they taught so many years ago.
Upvotes: 2 <issue_comment>username_2: I think your anxiety might be causing you to over-inflate the significance of the historical incidents (unless, of course, the nature of the arguments was deeply personal, for example). Eight years is a long time, and that person will have seen a lot of students come and go in the intervening period. It's perfectly possible that they won't even remember you; if they do, you might even find that you have a good laugh together at (a) how bratty *you* were back then, and (b) how pompous *they* were back then.
Upvotes: 1 <issue_comment>username_3: When I was in my first year of university, as an unruly teen, our department had a welcome barbeque for all the students, with free sausages and beer. As the event progressed it got pretty rowdy, and I got so drunk with my friends that ---on a dare--- I moonwalked through the faculty area with my bum exposed, making the beeping noise of a truck backing up. My friends thought it was hilarious. Anyway, those faculty taught me throughout my undergraduate degree, and some of them later ended up being on my supervisory panel for my PhD candidature years later. One of them recently gave me a positive reference for an academic position. So given that they had the forbearance to ignore *that*, I don't think you have anything to worry about.
Upvotes: 1 |
2014/11/07 | 563 | 2,462 | <issue_start>username_0: A chapter in my dissertation has not been published in an archival conference. I am writing a paper based on the chapter for a conference that does double-blind paper review.
The paper will include pretty much the entire chapter, which presents a method, and will perform additional analysis of the method. The analysis on its own, without the method, is not enough to merit a paper. In the interest of full disclosure, I should cite my dissertation. How do I do that without revealing my identity?
Is citing my dissertation without name and institution, just the title, appropriate or not?
I know there were similar questions recently, but none asking about the dissertation. The dissertation is different because, although it is a publication (a literature search will return a hit), it does not count as one, and it is considered OK to publish chapters from it in conferences and journals. The field is Computer Science.<issue_comment>username_1: Your goal is to publish. The conference apparently requires that review be double blind. I do not see any reason why you should care if the reviewer figures out who you are; simply cite your own work in the normal way. Then you will have complied with the conference's requirements because you have not explicitly identified yourself.
In my experience reviewers do not like incomplete citations.
Upvotes: 2 <issue_comment>username_2: The challenge here seems to be to ensure that if the reviewers *do* stumble across your thesis, then the failure mode will be penetrating blinding rather than accusations of plagiarism.
It is my belief that with an "extract" paper like this, the thesis should be cited in any case. In most cases, there will be some connection to other portions of the thesis that could motivate such a citation (e.g., a motivation or an application). I also think that it is good to explicitly acknowledge the relationship to the thesis, e.g., "This manuscript is based on work also presented in [cite]", though the customs of your field may differ.
Then you can appropriately blind the citation to the thesis, e.g., "Ph.D. thesis, blinded for review." This makes the relationship clear without violating blinding. At that point, you are preserving blinding to the best of your ability, and while a reviewer can certainly try to penetrate blinding if they want, you certainly won't run into any problems with misunderstanding about plagiarism.
Upvotes: 5 [selected_answer] |
2014/11/08 | 590 | 2,383 | <issue_start>username_0: I knew some college students in the U.S. who were <18 years old.
In which countries must college professors be cleared (background-checked, certified, fingerprinted, etc., like U.S. K-12 teachers) to work with minors?<issue_comment>username_1: Any special requirements or background checks regarding working with minors are a matter of employer policy or local law, and there are no standard rules across different jurisdictions. In particular, there's no way to give a definitive answer without knowing the exact circumstances.
As a general rule, though, these sorts of background checks are rarely applied to college faculty. Even in locations with unusually strict laws, there are often exceptions for college professors, despite the fact that some students may be under 18. For example, the [University of Sydney policy](http://sydney.edu.au/policies/showdoc.aspx?recnum=PDOC2011/229&RendNum=0) explicitly says "Where University staff or affiliates have direct contact with University students under the age of 18, this is not regarded as child-related work under the Act."
Upvotes: 4 [selected_answer]<issue_comment>username_2: I'm not aware of any countries where background checks are required *specifically* to work with minors. There are places where background checks occur as a general condition of employment, but it's just because *everybody* gets a background check, and not just because they could work with minors.
As AM points out, in general, minors enrolled in universities are treated like everybody else, so there's generally no reason why there would be *special* background checks.
Upvotes: 0 <issue_comment>username_3: A quick google gave me several US universities that have policies on the issue. For example [Georgetown's policy](http://protectionofminors.georgetown.edu/policy) says staff involved in programs involving minors should do some training and that supervisors of those programs and those that regularly spend time alone with minors have to have a criminal records check.
I think the situation in the UK is similar, with staff in one-to-one contact or on programs aimed at minors having to undergo a DBS/CRB check. See [Imperial's policy](https://workspace.imperial.ac.uk/secretariat/public/ChildProtectionPolicyFeb%202010%29.pdf).
I'm not sure but I expect the situation in at least western Europe is similar.
Upvotes: 0 |
2014/11/08 | 1,747 | 7,397 | <issue_start>username_0: Are there any formal degrees that offer training in tools for detecting plagiarism? Or on understanding fair-use, proper attribution of sources, and avoiding plagiarism?
Are there career options which focus on exposing plagiarism?<issue_comment>username_1: Computer Science has the tools for it. The task is to find local correspondences in a large database, allowing for some differences, while rejecting random hits and properly indicated quotes. As an example of a more classical problem in Computer Science, the first part reminds me of [Multiple Sequence Alignment.](https://en.wikipedia.org/wiki/Multiple_sequence_alignment)
Turnitin uses some Machine Learning under the hood to refine the scores; and the people working there certainly have a career in plagiarism detection.
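To make the "local correspondences" idea concrete, here is a minimal sketch of one common building block, word n-gram ("shingle") overlap. This is an illustrative toy, not Turnitin's actual (proprietary) pipeline:

```python
import re

def shingles(text, n=5):
    """Return the set of lowercase word n-grams ('shingles') in a text."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(document, source, n=5):
    """Fraction of the document's shingles that also occur in the source."""
    d, s = shingles(document, n), shingles(source, n)
    return len(d & s) / len(d) if d else 0.0
```

Long shingles shared between a submission and a source are unlikely to arise by chance, so a high score flags candidate passages; properly quoted material would still have to be filtered out in a later step.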
Upvotes: 2 <issue_comment>username_2: Dr. <NAME> (PhD in Communication Sciences from University of Vienna) has essentially made a career out of detecting plagiarism, mainly in doctoral and habilitation theses from important public persons in the german-speaking area. You can find some information about him [online](http://plagiarismreports.com). He has written multiple well-selling books on the topic, sells plagiarism checks, and regularly appears in public discussions.
However, note that the life of somebody who frequently and *very* publicly raises strong allegations against professors, politicians, and other degree-holders with substantial influence is not necessarily a fun one. As you can imagine, a lot of dirt gets unearthed and thrown in his face routinely (sometimes justified, sometimes more as part of counter-campaigning by the persons he attacked).
In summary, I think he fills an important societal niche in our area, but I *really* wouldn't want his career.
Upvotes: 3 <issue_comment>username_3: The practical aspects of detecting plagiarism in text probably fall under the purview of Computer Science as @username_1 discussed in [this answer](https://academia.stackexchange.com/a/31397/22520), but developing these kinds of tools and actually applying them to detect malfeasance are very different things.
[**Investigative Journalism**](http://en.wikipedia.org/wiki/Investigative_journalism) is the professional career path that pertains most directly to the issue of detecting and exposing plagiarism. Many universities offer degrees and fellowships in journalism with exactly this focus:
* [University of Vienna](http://sowi.univie.ac.at/en/departments/science-and-technology-studies/)
* [Carnegie Mellon University](http://www.cmu.edu/hss/english/graduate/ma-pw/ij/index.html)
* [City University of London](http://www.city.ac.uk/courses/postgraduate/investigative-journalism)
* [University of Strathclyde](http://www.strath.ac.uk/humanities/courses/journalism/courses/investigativejournalism/)
* [Boston University](http://www.strath.ac.uk/humanities/courses/journalism/courses/investigativejournalism/)
* [Harvard University](http://ethics.harvard.edu/)
* [Northeastern University](http://www.northeastern.edu/camd/journalism/academics/hands-on-courses/investigative-reporting/)
Upvotes: 3 [selected_answer]<issue_comment>username_4: Suggestion: If you broaden your focus to include related fields -- recognizing authorship by writing style, natural language processing to recognize similar content, recognizing significant quotes, and other approaches which might be useful for recognizing reused content -- rather than focusing specifically on plagiarism, you're MUCH more likely to find a match than if you insist on pre-selecting this one specific application of those techniques. And you're much more likely to learn about, and be involved in developing, the cutting edge of technology than if you're tightly application-focused.
Think long and hard about exactly what you want to do, exactly what you need to learn in order to do it, and how to go about learning that. If you jump too quickly to trying to implement a solution, you're likely to solve it poorly if at all.
(That's assuming you actually do want to do research in this area. If you just want to write and sell a product, that's a different topic and academia may not be the place to work on it.)
Upvotes: 0 <issue_comment>username_5: I think it depends on what you mean, what you want to do, and why you'd be interested.
Different vendors offer training on how to use their plagiarism-detection software. Fair use issues are tackled in library training and in education programs. The Illinois Online Network/U of Illinois Springfield offers a course on copyright issues for online educators - <http://www.ion.uillinois.edu/courses/catalog/> . If you are interested in just understanding those issues, then look for courses like that and consider education or library science, depending on what you'd want to do. If there is a specific sphere like online ed, you could look into those types of programs.
If you'd want to teach others about those things, then look at courses like this, library workshops, and staff positions in your spheres of interest, and check the backgrounds and education of those currently employed in those positions. You could also look at academic support centers & centers for teaching and see who is giving workshops or services related to your area of interest. We often talk about plagiarism and fair use in the sphere of education, which is why looking into a degree in one of many education specializations might give you some of this training. However, it will depend on what you really want to do.
For instance, you could also look into copyright law and fair use policies and then go into the policy side with a law or ed policy or higher ed degree. You could also look into a rhetoric, English or professional comm program.
Though the OP doesn't seem to mention it, if you'd be interested in researching detection software and possibly developing or testing it, then you could look at a different set of degree programs. For testing such software and looking at how users interact with it, you could look at human-computer interaction programs. Such programs may allow a degree of development and design as well.
Or to develop programs you could go in a number of directions. If you are interested from a purely tech side, you could try computer science and related disciplines. However, something like computational linguistics with a focus on natural language processing or some other tech/linguistics combination would give you the theory, application and tech components to create, test, evaluate or implement plagiarism software from a variety of angles. If that is something you'd be interested in then perhaps try something like applied linguistics and technology at Iowa State - <http://www.engl.iastate.edu/teslapplied-linguistics/> or similar programs. You could also broaden the idea of detecting plagiarism to detecting linguistic fingerprints and look into forensic linguistics.
potential areas: forensic linguistics, applied linguistics, computational linguistics, english, rhetoric, professional communication, computer science, human computer interaction, higher ed, education, educational policy, law....
What you do with the degrees will vary widely depending on the direction you go in. It could be your own company, an admin uni position, teaching, support staff, library,developer, working for a tech company, an ed company,...
Upvotes: 0 |
2014/11/08 | 1,363 | 5,948 | <issue_start>username_0: It might be possible here in India too (I haven't researched it), but in one movie I saw, the guy was studying mathematics at MIT while his future plan was to study medicine at Harvard. I kept wondering how he could do that.
Here in India, at 11th class, we usually have to choose either Mathematics or Biology, and then our eligibility changes accordingly.<issue_comment>username_1: At least in the USA, yes. The US education system allows the individual student a great deal of freedom in the choice of field(s) of study. You can do your undergraduate degree in one subject and your graduate degree in another.
Of course, to get into graduate school in a particular subject, you have to have sufficient background. Thus it is relatively uncommon for someone to get, say, a degree in history but decide to go to graduate school in physics instead. It is possible, however. In some cases people pursue jobs in one area and gradually develop an interest in another topic, perhaps gaining research experience in the private sector or taking classes informally in order to get the background they need for grad school.
Also, getting an undergraduate degree in a particular subject in the US does not mean you only study that subject. You can take a wide variety of classes outside your nominal area of focus, and in some cases thereby get enough experience to apply to grad school in another subject. Also, you can "double major", completing more than one official course of study. Thus someone may study multiple subjects in undergrad, and decide on one to continue in grad school.
Switching fields between undergrad and grad is not that uncommon, especially if the fields are closely related (e.g., math BS followed by physics PhD). I've personally known quite a few people who have switched fields from undergrad to grad, sometimes with a long detour outside of school. For instance, one fellow I know got undergrad degrees in political science and Asian studies, spent more than a decade as a corporate executive, and eventually went back to get a PhD in linguistics.
Upvotes: 1 <issue_comment>username_2: Some universities require students to have a similar bachelor's degree, some others do not require a similar degree and have no specific policy on this, while others need a related or nearby bachelor's field. For instance, for a master's degree in engineering, someone with a bachelor's in applied mathematics or physics may also be able to study a master's in an engineering field.
The answer to your question varies across fields, countries, and education systems. My general advice would be to check the minimum requirements of the university you want to apply to, to see whether its admissions office has any regulations on having a bachelor's degree related to the master's. If they do not provide information on this, I recommend you contact them by email and ask your questions.
Upvotes: 2 <issue_comment>username_3: >
> Can people in western countries do graduation and post graduation in two completely different fields?
>
>
>
In principle yes, however: speaking from my alma mater (central european large public university, unrestricted access to most programmes, i.e., no need to convince an admission committee), the main hurdles to do a graduate programme based on a different undergrad degree boil down to two factors:
* **Formal requirements**, as discussed by the other answers. In my university, this was really a design decision that each degree programme individually could decide. Many (especially humanities programmes) are by design very open to all comers. Others, especially technical fields, require either *some* technical degree or even one of a very few, closely related degrees. Exceptions could be made on a case-by-case basis, usually with the obligation of additional fundamental course work.
* **Practical issues**. Even if you are allowed to, say, do an electrical engineering MSc based on a business informatics BSc, *you are still expected to have the electrical engineering knowledge of a good EE BSc graduate* right from the start. If your entire EE education was 2 credits in "Engineering for Computer Scientists", you will likely be in *really* bad shape and absolutely nobody will help you or feel bad for you. The consensus opinion will be that you had to know what you get into, and that you would now need to see how to handle this yourself.
To emphasise the second point again: I have seen a few cases where people tried to argue that, as their degree was formally sufficient for the graduate programme they enrolled in, we needed to accommodate their very basic subject skills. This line of argumentation *never* works, and it should not. If you do a graduate degree in any subject, you are expected to have *at least* above-average knowledge of the things that you learn in the undergrad major of this subject. *How you do it is mostly up to you.*
>
> in one movie I have seen the guy is studying mathematics in MIT but his future plan was to study medicine at Harvard.
>
>
>
This would specifically *not* be possible here. Meds is, due to its sensitive nature, highly regulated by law, and there is as far as I know no sidestepping the formal study progression here. The other direction (Meds -> Maths) *may* work, but I am not sure.
Upvotes: 0 <issue_comment>username_4: In the US, medical school does not require any specific major. The specific requirements are one year each of biology, physics, and English, two years of chemistry (including organic chemistry), and a standardized exam (the MCAT). While it's most common for medical students to major in a field like biology, it's entirely possible for someone majoring in a completely different thing to complete the premed requirements and learn enough to do well on the MCAT, and a fair number of medical students in the US come from outside the sciences entirely.
Upvotes: 1 |
2014/11/08 | 256 | 1,057 | <issue_start>username_0: I exchanged emails with a professor on Thursday, and he asked me if I was available to call him at a specific time next Monday. I replied to the email to confirm the time but got no reply from him. Therefore, I resent the email on Friday to re-confirm that he knows I will make the call. However, again no reply. So I am wondering whether it is appropriate for me to call him at the time he indicated next Monday even without his reply?<issue_comment>username_1: Sure. You have booked a time, so make the call. If you want to be sure, ask him over the phone if it is still a good time.
Academics are busy and slow responders, so since you had already confirmed, your emails may have been given low priority.
Upvotes: 4 <issue_comment>username_2: Since he has asked you to call him by phone, call the provided phone number during office hours.
Maybe he knew he wouldn't have access to his email on the following days, and that is why he asked for a telephone conversation instead. It may also be why he is not responding to your emails.
Upvotes: 2 |
2014/11/08 | 566 | 2,310 | <issue_start>username_0: I'm getting my degree in physics this year (in June, presumably), and while working on it I was simultaneously pursuing a degree in mathematics. (By degree I mean completing undergraduate studies in both of those fields; I'm from Spain and I'm not sure about the equivalence...).
Now, I have dedicated myself to physics full-time, and by the time I finish physics (June, as I said) I will have completed approximately 50% of my degree in maths. Independently of the maths degree, next year I will apply to some post-graduate studies outside my country, and I would like to know whether I should apply with my physics degree alone, or whether it would make any significant (and positive) difference to my application to say that I have also been studying half a degree in maths and to include a transcript of what I've studied in that degree.
The problem is what I commented on in BPND's answer, plus the fact that transcripts cost 30€ plus 80€ for the sworn translation, and if I can keep expenses down, that's a plus. My question is whether including it will make a positive difference. The average grades are 9.24 (GPA 3.9) and 9.2 (GPA 3.82) in physics and maths respectively, out of 10.
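For what it's worth, the kind of credit-weighted conversion behind numbers like these can be sketched in a few lines. The linear mapping from the 10-point scale to the 4.0 scale used here is only an illustrative assumption; real conversion services often use their own (nonlinear) tables:

```python
def gpa_4point(courses):
    """Credit-weighted average on a 4.0 scale from grades out of 10.

    `courses` is a list of (grade_out_of_10, credits) pairs. The
    grade / 10 * 4 mapping is an assumption for illustration only.
    """
    total_credits = sum(credits for _, credits in courses)
    weighted = sum(grade / 10 * 4 * credits for grade, credits in courses)
    return weighted / total_credits

# Hypothetical transcript: (grade out of 10, ECTS credits) per course
transcript = [(9.5, 6), (9.0, 6), (8.8, 4)]
print(round(gpa_4point(transcript), 2))
```

Weighting by credits matters because a high grade in a large course should count more than the same grade in a small seminar.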
The GPAs were calculated with some online converter I found, using all the subjects and the credits for each one.<issue_comment>username_1: Many universities will demand a full transcript of your academic history, so, in that case, you will need to attach both (or a combined) transcript(s) to your application.
Even if they only ask for the transcript of your main studies, you might want to attach the transcript for the math lectures and exercises. Especially in this case, where math acts perfectly as a supporting discipline for a phyiscs major.
Upvotes: 4 [selected_answer]<issue_comment>username_2: **Answer:** You should apply with **both** of your degrees unless you did poorly during your math studies. However, the honest applicant would apply with both no matter what, as both of these degrees are part of your academic history.
**Reason:** Academic institutions want to see you have accomplished a lot. The more you appear to have done, the more likely you will be to get in, as long as you did well during your math studies. Furthermore, as was stated previously, universities often require a full transcript.
Upvotes: 1 |
2014/11/08 | 2,143 | 8,571 | <issue_start>username_0: For committing a crime considered very severe in the U.S. and in some countries (hence the 5-year minimum sentence), but not as severe (but may still lead to imprisonment) in many other countries? He does not have any record of academic dishonesty or anything like that; in fact, he had a solid record of publications and was still building on it before he got arrested. Can he still publish papers that are taken seriously, and obtain academic positions, in the U.S. or in some other countries, once out of jail?
**UPDATE**: I only know what he pleaded guilty to according to the news. It is not something noble like an "honest crime". I had confused conviction with indictment, so now I think the crime is most likely real and hardly disputable. I want to add that he is a first-time offender, so I don't know whether his prison sentence can be converted to probation, and whether that matters to whoever may hire him in the future.
@xLeitix's comment:
>
> I cannot think of one crime which results in a five year sentence where I would hire that person.
>
>
>
Maybe the crime of [failing to notify the town of L'Aquila of a 2009 earthquake that killed at least 309 people](http://motherboard.vice.com/read/geologists-who-didnt-predict-an-earthquake-arent-killers-italian-court-rules)? Fortunately for those scientists, the initial court ruling was overturned about 2 years later.<issue_comment>username_1: A five-year prison sentence would damage an academic career in at least two major ways. The obvious way it would damage it is from the possible stain on a person's record. In the U.S., at least, being a convicted felon can be very difficult to recover from, no matter what the crime, due to general societal prejudice. Add to that the fact that most crimes indicate either dishonesty or being a threat to others, and you've got a serious problem. It does, however, strongly depend on the crime. For example, if the crime is an "honest crime" that derives from one's intellectual inclinations (e.g., [the notorious example of <NAME>](http://en.wikipedia.org/wiki/Timothy_Leary), [the sad case of <NAME>](http://en.wikipedia.org/wiki/Aaron_Swartz), or [the youthful recklessness of <NAME>](http://en.wikipedia.org/wiki/Robert_Tappan_Morris)) then it may be "forgiven" from an academic point of view.
The other, perhaps less obvious, source of career damage is from the interruption in ongoing work. Typically, any working scientist has a number of simultaneous multi-year projects at different stages: preliminary work, proposal, execution, publication. Each of these fuels the others, e.g., papers from more mature projects help support proposals for new projects, pieces of ongoing work in project execution include preliminary work that leads to new proposals, etc. With any major career gap, this "pipeline" can empty, and there is often difficulty in restarting it. This can be a significant problem even for academics who take family leave; a multi-year gap for prison would be much bigger to overcome. Either, however, can be overcome with time and help from supportive colleagues.
Upvotes: 7 [selected_answer]<issue_comment>username_2: I would think having a felony on your record is poison for *any* career. In academia, it may be even worse because, in addition to the general stigma of being a convicted criminal, you will need to fight through the following issues:
* You have a career gap that is *very* hard to overlook or paint in a positive light.
* Job searches in academia are always competitive, so that even smallish taints on your resume can become major issues. Having a major taint such as a felony will make it very hard to succeed on the professorial job market.
* Some jobs may be unavailable to you anyway, for instance those that require security clearance.
* You may even objectively be less qualified after coming out of prison. 5 years of (presumably) low intellectual stimulation are a long time, and many technology fields move quickly.
So, yes, to be honest, I think that the career of this person would be pretty much "totally ruined". And, no, I don't think it matters much *why* he was convicted - I do not think that there are crimes that can lead to 5-year sentences that would not be considered a big deal by any search committee.
Upvotes: 5 <issue_comment>username_3: If the he/she can come back stronger from it more disciplined it could be a blessing in disguise, wisdom comes at different stages for different people.
Upvotes: -1 <issue_comment>username_4: In short, no.
And if you have time...
When I was young I was charged with number of crimes in US. I took plea bargain and ended up serving few years in California state prison. Although now I think I made mistakes back then, I do not feel ashamed at all of what I have done because I believe the cause was just. It took me about six years after release from prison to be able to do research in another country. I feel pretty content with my current position.
What I would recommend to your friend is not to think about the distinct future but rather get himself mentally-and physically-ready for what's to come during incarceration. I was in level-4 yard for 1 year and the rest in level-3, and the magnitude of violence one must face is probably nothing like you have ever encountered. Be polite, social, observant, and extremely violent when the appropriate moment comes. Most likely your friend will do time in much more comfortable places. I heard it's pretty peaceful in other places such as Federal prisons, other state prisons, or lower level prisons in California, but I don't think I can speak for what I have not experienced. There will be plenty of time to think and even read if someone send in books, however, so it could be a great opportunity depends on how you see it. I actually had pretty descent time in there(although I wouldn't purposely get incarcerated again). I had physics background so I never really had a chance to extensively study philosophy even though I had always wanted to. Prison gave me time to thoroughly study most of the classical works I was interested in, and I believe such experience gave me quite a boost when moving onto other field later on.
Everything must perish eventually. Isn't it already amazing to be able to entertain intellectual matters even for a short moment such as human lifespan?
Upvotes: 4 <issue_comment>username_5: I know a case where someone spent several months in jail (mostly awaiting trial) after college, but was convicted only of misdemeanors (not felonies) after plea bargains, and subsequently moved to another state and got a doctorate. (AFAIK, grad schools, unlike employers, do not have background checks, or they didn't back then.) The person garnered a decent publication record during the PhD, and then moved to another country for an academic career (which, last I heard, is going reasonably well).
Upvotes: 3 <issue_comment>username_6: One thing that there was no mention of here is that society does change, and some things which were considered totally "wrong" before can be easily overlooked today, and it also goes the other way around. So it is very possible, that although today this particular crime seems to be rather destructive to a person's career, it may be less important by the time one regains the freedom.
Of course, this doesn't help with the simple fact of not being engaged in the field for some period of time.
Upvotes: 2 <issue_comment>username_7: I'm studying Philosophy and literature and in this field it's not rare to find people who has been to prison or has been condemned for some reason (the first and more famous case is Socrates). Dostoyevsky has been to prison, even G. Pascoli, has been convicted for taking part to a socialist demonstration (and he was quite a quiet person).. But I guess that if someone enacts behaviours that damage the academic society or harass other members of the society then his life will not be that easy. However I presume that if the research has a strong content then it still has its possibilities. Although I know about some researchers who didn't had at all an academic career till someone else discovered their papers. So I would suggest that if that person has something to say, then he should try saying it, and writing it (intellectual connections are important but the content is available if you know how to search for it and do a complete research). That person should even try fixing it's problems with the law. We all do what we can and we all do mistakes.
Upvotes: 2 |
2014/11/08 | 674 | 2,678 | <issue_start>username_0: I have co-authored one paper with Professor A, PhD student B and PhD student C. B left the best impression on me as the most friendly, productive and hard-working. Now, I want to do a joint paper again. Same as last time, the subject is not in A's area of expertise (nor B's or C's), though it's highly related, but I am hesitant to work with A, as he tends to rush and push to publication before the paper is perfect (or very nearly so), and he is usually busy, so I doubt he would spend much time on it anyway. C is OK, but probably can't contribute much to the paper. Besides, I think two authors may suffice and be optimal.
Now, A is B's adviser (maybe C's too) and was my undergraduate mentor. I don't want to cause any hard feelings for A and C if they find out that they, but not B, were left out of this joint venture. In the end, I did present my draft to A and gave a glimpse of it in a presentation where A, B and C were present. Another reason why I chose to work with B, who is a PhD student, is (this may sound silly) that I think this will actually help her career, as opposed to working with a professor or working alone, as I don't intend to pursue an academic career and am just doing this for fun. Plus, the second author can act as a serious peer reviewer, so this will benefit my paper as well.
How should I go about this?
1. all the work and ideas in your paper are yours (and B's).
2. your preferences on who to work with are based on academic and research skills. Making decisions based on their race, gender, ethnicity, nationality, etc. is discriminatory and hence unacceptable.
I feel it is dishonest to add A and C to the list of authors in your paper just to keep them happy.
Upvotes: 4 [selected_answer]<issue_comment>username_2: It indeed is perfectly fine to collaborate with only B on a specific topic/publication.
What you might want to keep in mind is that, depending on the field you are working in, it might be common to include the supervisor of a PhD student in the list of authors, as long as that supervisor contributed -even little- by advising his PhD student also on this topic. As you wrote "my undergraduate mentor", I assume you are a PhD student yourself. So if you have your own supervisor as last author for example, it could become a problem not to include A in general. I agree with not adding A as an author just to make him happy.
Keeping C out of that paper does not seem to be a problem, as long as you don't use research done by him. But you would want to communicate that to B as well.
Upvotes: 2 |
2014/11/08 | 746 | 2,894 | <issue_start>username_0: I want to introduce the findings of a case study in my report, but it has four authors and I would like to name them when introducing it.
What would be the best way to do this, or is the below perfectly acceptable?
>
> <NAME>, <NAME>, <NAME> and <NAME> provide an interesting insight in their case study...
>
>
><issue_comment>username_1: For more than two authors, it's generally the norm to say something like "As Andria, *et al*. (2014) discuss, ..."
Upvotes: 0 <issue_comment>username_2: How to handle this depends on your field. If your field has a notion of first author, then "[name of first author] et al." is appropriate. In fields like math that use alphabetical ordering, I'd strongly recommend naming all the authors, since "et al." could be read as diminishing the credit later authors get. I would not be happy if my name disappeared into an "et al."
The sentence "[last name of author 1], [last name of author 2], [last name of author 3] and [last name of author 4] provide an interesting insight in their case study..." sounds fine to me. (I.e., the same sentence as in the question, except that I assume "<NAME>, <NAME>, <NAME> and <NAME>" are just stand-ins for the actual names, and not actually a proposal for how to format the names.)
Upvotes: 3 [selected_answer]<issue_comment>username_3: This is completely dependent on your citation style. Two examples:
* For APA, if there are only two authors, cite both each time; if there are between three and five, cite them all the first time then cite as "First et al., 2014" in subsequent citations; if there are six or more, use "et al." every time.
* For IEEE, use of "et al." begins at three authors: with three or more, you always cite as "First Author *et al.*".
As you can see there are wild variations. Check which style you're supposed to use, and check how multiple authors should be cited with that style. As a general rule of thumb, maybe use "et al." when there are too many authors.
As another user pointed out it's also dependent on your field, and it's even possible that author names shouldn't be cited in every reference (just a numeric reference like [42]) -- I just checked a math journal at random and this was like that.
PS: If you're writing something in LaTeX and are using biblatex, I want to advertise the commands `\textcite`, `\parencite` and `\footcite`. They are able to automate most of this (for example with the APA style, they correctly detect which citation is the first).
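A minimal sketch of how that looks in practice (the bibliography key `andria2014` and the file name `refs.bib` are made-up placeholders, and `style=apa` assumes the `biblatex-apa` package is installed):

```latex
% Assumes biblatex with the biblatex-apa style; key and file names are placeholders.
\documentclass{article}
\usepackage[style=apa]{biblatex}
\addbibresource{refs.bib}
\begin{document}
% Textual citation: full author list on first use (APA, 3-5 authors),
% abbreviated to "First-author et al. (2014)" on later uses.
\textcite{andria2014} provide an interesting insight in their case study.
% Parenthetical citation: "(First-author et al., 2014)" after the first use.
This insight has been discussed before \parencite{andria2014}.
\printbibliography
\end{document}
```

With an author-year style like this, biblatex tracks whether a citation is the first occurrence and switches between the full and abbreviated author lists automatically, so you don't have to manage the "et al." rule by hand.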
Upvotes: 2 <issue_comment>username_4: The question is not about citation, as with a works cited page, but whether or not it is appropriate to introduce *all* authors when first stating the title of the work. ie "In *such and such article* by *author A, B, C, and D...."*
I believe the question is: Must all authors A-D be noted or is it acceptable with less? In this case I'd say yes it is just fine as is.
Upvotes: 0 |
2014/11/08 | 3,541 | 15,238 | <issue_start>username_0: I recently came across a post (that I won't disclose because it contains the professor's last name):
Attendance in this course was regularly low, so the professor used a one-question exam to punish students who weren't attending class. Those who weren't in class would obviously have failed this exam, and those who were in class (presumably) would all have gotten A's. The final remark on the note on the door was "*maybe we'll do this again some time*".
Let's assume a few factors about the course and this one-question exam, since we don't actually know how they're implemented:
1. The course has an attendance policy (3 days max missed before your grade starts dropping, for example)
2. This one-question exam has an impact equivalent to a homework assignment (could drop your overall grade by 1% if you get a 0).
3. By the language of the original post, this is *not* something the students would have expected to happen; this means that the policy would **not** be outlined in the syllabus.
What if the course *didn't* have an attendance policy and the professor is simply annoyed that students aren't attending the class? What if this one-question exam was only worth a point or two instead of a whole homework grade?
**Is it fair (ethical, if you will) to use this kind of tactic to impose additional punishment on students who don't attend lecture?**
***Clarification***: Students who are absent (if intentional, extraneous circumstances aside) deserve to lose the points for the work they missed. However, this tactic is being used to *additionally* punish students solely for being absent; this is also why I posed different assumptions about attendance policy.<issue_comment>username_1: I think that this is unprofessional, and is leaving the institution open to all sorts of action (up to and including legal proceedings, if it impacts on a student's progression, for example). For example, what if a student is unavoidably absent?
If it's only "worth a point or two", then I don't think it's worth antagonising people in this way. The sarcastic note left on the door simply underlines how poorly thought-out this whole thing was...
Upvotes: 6 [selected_answer]<issue_comment>username_2: Note that at many Universities the "grade distribution" has to be announced in advance, and changing it later in the term needs often to be approved by chair.
This closes the door for this type of idea, and opens the ground for appeals by students.
On another hand, while the exams dates for final exams and midterm exams need to be announced in advance, we are often not forced to do the same for work which is worth a small percentage of the mark. Even if I don't use it, I have the flexibility of having few Quizzes during the term, without the date being announced in advance, as long as they are not worth too much and, most importantly, as long as I announce this at the beginning of the semester.
And I had colleagues which announced at the beginning of the semester that there will be two surprise quizzes during the term(of course technically they were not surprises anymore), each questions worth 1-2.5% percent. And the students know in advance that by missing a class they might lose on those percentages...
Upvotes: 4 <issue_comment>username_3: In my experience, many professors build a little wiggle room into their syllabi for this reason. For instance, professors may assign 10% of the course grade to "class participation". It is then up to them how they assess this. Some take attendance every day; some take attendance only on a few days; some use this sort of "fake quiz" approach.
I know of one professor who included "Quizzes" as a category of points on the syllabus, and specifically announced that these quizzes would be the kind you describe: unannounced, trivially easy quizzes designed solely to check for attendance. She would have such a quiz on days when she noticed low attendance.
I think the degree to which it is ethical or allowable under policy guidelines depends on how transparent the process is and how large the grade penalty is. If, for instance, the syllabus allots 10% of the grade for attendance/participation, I think it's totally legitimate for the professor to assess this via trivial pop quizzes, as long as the penalty for missing them all is no more than 10% of the overall grade. Taking the points out of the "exam" portion of the grade would be less defensible, but in practice I don't think it would cause a major stir unless the penalty was large.
Upvotes: 3 <issue_comment>username_4: Having a daily quiz that counts for meaningful points can be a very effective way of getting students to pay attention to the course and learn the material day by day rather than trying to cram before an exam. This is different from simply requiring attendance, because students have to attend class *and* have to learn the material to pass the quiz. This provides students with frequent feedback throughout the course. It also fits well with the general principle that grades should be based on demonstrated student learning and not on arbitrary factors.
There are certainly situations where students are unavoidably absent from class. Thus it's necessary to have some kind of policy for "excused absences." For example, you might accept excuses provided by medical professionals or the college's Dean of Students. You might also give students a reasonable number of "free passes."
Having such quizzes as part of the course design from the beginning is an entirely reasonable thing to do. On the other hand, introducing these quizzes half way through the course as a way to punish students for not attending class is difficult to justify and will likely be seen as unfair by the students.
Upvotes: 3 <issue_comment>username_5: Given the third assumption listed in the question (that this policy was not outlined in the syllabus,) then I would say **no, it is not ethical.** Regardless of how much of the final grade it is worth, assigning grades contrary to the grade distribution in the syllabus just because the professor doesn't like that some people were absent is not ethical (and likely violates school policy.)
On the other hand, instructors are usually allowed to create policies that penalize students for not showing up, such as having a portion of the course grade be due to unannounced quizzes or even having quizzes every class period. However, these policies *must be announced ahead of time.* While I personally find such policies annoying and not particularly useful in most scenarios, as long as the expectation is given to the students up front that not coming to class might (or will) impact their grade and by how much, then I don't see an ethical problem with it (and it shouldn't violate policy, either.)
The main issue here is that, whatever policy the instructor wants to use, it needs to be decided up front and included in the syllabus, not created on-the-fly in violation of what someone would reasonably expect by reading the syllabus.
Upvotes: 2 <issue_comment>username_6: At my institution we have a strict "no-absence" policy. If you can't attend a lecture for whatever reason, you must apply for a leave to the dean. Failing to do so may result in failing the respective module and since many are only offered every other year this may result in a decisive prolonguation of your studies. I personally know cases where people quit due to such issues.
Anyway, neither the dean nor the secretary can really have an eye on every single student all the time. So there is need for methods as you describe to compulse students to attend lectures and at the same time punish those who didn't. Since I find the quiz you describe, aiming only at those who were absent, highly questionable, to say the least, I quickly give you an overview of how we handle it at my institution.
There are several approaches at our institution:
Some lecturers simply don't care. It's your task to pass the exam, not theirs. If you miss many lectures, you lack a lot of material which will surely be in the exam. If you had good reasons for your absence (illness or the like), the others will easily share their material with you (besides that in such cases you told the secretary and he then informed the professor, so they knew in advance). If you, however, were absent with no reasons (laziness, sleeping late, whatever...), others might refuse to help you out.
Others let students do a quick oral resumee of the last lecture. First to compulse students who were absent for good (or not-so-good) reasons to catch up the material as the professor will continue where he ended the last time. Secondly it quickly shows if the students understood the material and if not, the lecturer can give additional help.
My own method is a quick three-questions-quiz at the beginning of each lecture. If you don't answer at all, you fail, three failed quizzes lower your grade by one point (we have a system of 1 = worst to 6 = best grade, so one point is already quite a high loss). However, if you didn't understand a topic and therefore can't answer my question, you can write down what you didn't understand and then still pass it. Like this I always have an overview what the students understand best and what not. By the number of people not being able to answer a question I can also determine the quality of my lecture. If a certain number (I say: more than 2) hasn't understood a certain topic, then it's most probably my fault and not theirs.
These methods have the big advantage of being fair to everybody. It's not a punishment, but a motivation to attend every lecture and to catch up material you missed.
Beside that the grading of a module is transparent and clear to everybody from the very beginning. We, the lecturers, have any freedom we want in how we handle it, but we have to announce it at the beginning and cannot change anymore later.
Upvotes: 1 <issue_comment>username_7: What is it that we want to achieve/check by having an exam? In my world it is what the student is capable of doing/what the student know.
How the student has achieved the information is (in my world) irrelevant.
So the only fair exam is the one that tests the skills/knowledge the students have to the subject. That cannot be tested by asking just one (silly/irrelevant) question.
So to answer the question: No it is not fair to ask a question like that! The professor will have to ask the questions about the subject. If the students that have not attended his class knows the answers they should pass. If they dont know the answers - they should fail. And that goes for the students who has attended the class as well!
Upvotes: -1 <issue_comment>username_8: IMHO it is not fair to impose penalties for non-attendance unless there is a clearly-signposted policy that attendance is required. I don't know how this varies across the world, but in some UK universities lectures are technically optional - the point is to learn the material described in the syllabus, and if a student can achieve that better with private study and textbooks than by attending the lectures, that is up to them.
(most students would be foolish to rely exclusively on this approach, but that is their problem, and doing badly in the final exam would be their punishment if it has not worked)
Upvotes: 2 <issue_comment>username_9: **To answer the actual question:** I don't feel the professor's approach is ethical, because he seems to be caring more about *attendance* rather than the learning experience his students are getting. I explain more on this below.
---
I think @NeilKirk asks a very good question in the comments of the question.
>
> "Why is attendance so low in the first place?"
>
>
>
In general, with the exception of the occasional sickness or special circumstance, the answer is that `in the students' opinion, it is not worth their time to attend.` This could be for a number of reasons:
* **The class is covering information they really don't care about.** (*one reason some teachers try asking their students what they want to get out of a class*). If this is an elective class, they should probably be taking something else instead. If this is the situation though, it's probably a required class. So, why is it a required class? Why *should* they care? Telling them how this class is valuable will solve some of this issue. *I'm assuming the value from taking this class isn't only the earned credits and grade.*
+ **Shortcuts are available**
I had this as its own section, but it falls into the category that the students don't really want to learn. If there are ways to get a good grade without going to class and without learning anything, some students will take it.
* **Going to class doesn't get them anything more than they could do on their own** Maybe they see the value of the information, but they just don't get anything out of being in class. Why go, if you learn just as much as if you stayed home and read a chapter of the book? Some people naturally learn better by themselves, and there's not much that can be done about that, but if this is a problem for most, or more than a few, of the students then the professor should really be considering changing the way they teach their material - the students just aren't learning anything more from the professor than they could from just having the resources the professor provides.
+ **Incorrect personal judgements**
*While the judgements being made are typically by the students, many of these could also be caused by the professor and all attempts to help the students avoid them should be made.* The students may have judged the class content or difficulty incorrectly, making them think they that they will easily be able to make up what they miss in class. They may have judged their own abilities or knowledge incorrectly. They may have thought getting some sleep was more important than class that day. etc..
The last bullet is what the professor in question seems to think is happening. He is trying to punish those who make "incorrect" decisions by affecting the student's grades. It is one way to get what he wants (*attendance up*), but I personally doubt the problem is mostly caused by this. If it is, then I think there are better ways to solve the problem. That said, I could consider this an ethical solution, if the students were warned in some way and if the *real* problem was then attempted to be corrected.
Upvotes: 2 <issue_comment>username_10: **Fair**? Absolutely not.
If you inform students beforehand that class attendance will count for a certain percentage of the grade, then yes this is absolutely fair. But you asked *"What if the course didn't have an attendance policy"*
Well, in that case: How exactly are the students supposed to know that failure to attend will affect their grades negatively? It is both **ethically** and probably **legally** wrong to deduct marks from grades based on arbitrary reasons (a professor not liking attendance levels).
Most higher-level education classes specify the criteria for getting grades. This is to enable the students to work systematically towards getting a certain grade.
Upvotes: 2 |
2014/11/09 | 795 | 3,457 | <issue_start>username_0: I am an undergraduate-level student planning to submit a research paper for publication. The second author is a PhD student who has helped me with much of the editing (including rewriting my entire paper into a concise and expressive form, and also the numerical simulation part). I think he has helped me a lot. My friend suggests buying him a gift card at the end to express my gratitude. I am wondering whether this would be a good idea or not.
Instead, I would recommend offering something like taking them out for a fancy lunch, which invests your time and gives the two of you a chance to do some informal bonding and building of your professional relationship. Unless, of course you don't actually want to spend time alone with this colleague (many possible good reasons for that), in which case I would still recommend looking for some way to express gratitude that shows an investment of time and caring beyond just money.
Upvotes: 4 [selected_answer]<issue_comment>username_2: If I was the person who helped, I would indeed be very positively surprised to receive anything at all from you (unless it's something offensive). You don't have an obligation to do it and so most people would appreciate whatever time or money you spend into expressing your gratitude.
That said, for the choice I would try to avoid "objects" which are usually rather useless and tend to end up gathering dust somewhere. Food and/or drink are good choices in my opinion. My mother in law knows I like beer and I love it when she gets me four or five different styles when she makes me a present.
Upvotes: 0 <issue_comment>username_3: I would probably kindly reject such a gift, since it is essentially giving money and I do not want to receive money for this type of thing. I would be fine with a bottle of wine or a similar thing. It is not money and the "price" part of it is not that important (I, after all, don't even know the price).
Another choice is to invite the person for whatever you think (or know) they like: sushi, good burger, NHL match, ... Just don't "overshoot it", you're younger and they need not be exactly interested in coming for an NHL match with you, because they may plan to go there with someone else or whatever. Use some sort of common sense for this.
Last but not least (and I consider that the best option), invite them for a beer. In my country (Czechia), you even say "*Thanks, I owe you a beer*" when someone does something good for you (mostly like in your case: spending their time on your project with no reward expected), and quite often the people really end up having a beer or two. I think you can invite them for a beer in most countries in the world.
Upvotes: 2 <issue_comment>username_4: Another important issue is: do not overdo it. A present of value, say, 50 or 100$, can not only make the recipient uncomfortable, but it might also be against the regulations of the university. There are often anti-corruption regulations that prohibit university employees from receiving gifts, excluding very small token ones.
Upvotes: 0 |
2014/11/09 | 5,163 | 21,634 | <issue_start>username_0: What is the idea behind giving a student a grade? It might sound like a funny question but I'm serious.
I ask because in my previous question about [disputing a mark](https://academia.stackexchange.com/questions/30569/when-and-how-should-you-talk-to-a-professor-if-you-think-you-deserve-a-different), it's hard for me to decide how many marks and how much effort I should put into having a mark changed. If the purpose of going to school is to learn (something presumably you don't already know) then how do marks fit into the equation?
Given some of the answers and comments on the question I'm curious, how did this mentality that marks are non-negotiable arise? There seems to be the belief the prof has a totalitarian rule over the students. This doesn't make sense, especially considering how commercialized some schools have become. In any other area of business if a client pays (e.g. a student pays tuition) and is dissatisfied or has a concern about a service, then the company would work with them and either explain or change some part of the contract. How is it with teachers they get away with simply saying "that's the right mark"? (I know that's rather a facetious example but hopefully the point is clear). I'm certainly not suggesting one can or should be able to buy grades: rather I meant in business there seems to be a certain level of diplomacy which doesn't exist between students and teachers. For example in the question linked to it was mentioned that even if the wording to a question is vague, if there had been examples in class of a similar question then it should be known what is being asked for. This wouldn't happen in business. If a client said they wanted x, they're not going to pay for y; conversely if they had asked for x and wanted y, they're not going to sue the company.
It may be relevant to note that the school I attend costs quite a bit of money (and it's a public university).
After reading this several months later, I hope my tone didn't come across as too harsh.<issue_comment>username_1: Grades do many things, and there is no general agreement about which of those things is the most important. A few things that grades do are:
* Grades motivate students to learn. Without grades, many students would not learn nearly as much, because the mere presence of grades encourages students to study.
* Grades tell others whether students have mastered the material of the class (e.g. which students can count a class towards graduation, and which need to re-take the class).
* Grades tell others how students compare with each other (e.g. to help determine which students are accepted to competitive graduate programs).
Even when they disagree about why grades are assigned, most professors **do** want to grade fairly, and will listen to student comments and consider them seriously. But, as I described in [another post](https://academia.stackexchange.com/a/30577/16122), the situation may be more complex than is apparent from a student perspective. Moreover, many universities have a system for "grade appeals" through which students can formally dispute grades. So, far from being "non-negotiable", grades are usually subject to review by the professor's superiors at the university.
>
> In any other area of business if a client pays (e.g. a student pays tuition) and is dissatisfied or has a concern about a service, then the company would work with them and either explain or change some part of the contract.
>
>
>
This is not really the case. Imagine two hypothetical scenarios.
1. *I buy an annual membership in a local warehouse club (e.g. Costco or Sam's Club), and then tell them I don't like their bananas, and I want them to buy some other kind.* They are likely to ignore me, unless many other people make a similar request. They may well just tell me to go buy bananas somewhere else if I don't like the ones they offer. There are various opinions about whether universities can afford to take a similar position with their students.
2. *I hire a professional opera singer, but then I tell her that I want her to sing pop songs, and by the way can she also lose a little weight and learn how to dance better?* She's just as likely to just tell me "no" as she is to work with me to figure out which pop songs I prefer. She has an uncommon skill that is sufficiently in demand to keep her employed. There are various opinions about whether professors are in a similar situation.
The applicability of #1 and #2 to universities can certainly be questioned. These examples are just meant to show that it is not universally true that a paying customer can negotiate freely with the company or person being paid. This goes against the idea that this sort of negotiation should "also" apply at universities.
Upvotes: 4 <issue_comment>username_2: A professor who I used to work with once gave me an explanation that I found quite useful for understanding the purpose and philosophy of grading. Universities, he said, must always struggle with a tension between two educational goals, illumination and certification.
* **Illumination** means the intellectual development of the student, bringing them a deeper understanding of a subject, its relation to the world, and the deeper issues it may touch on. From the perspective of illumination, marks are intended to be feedback to students that helps them realize weaknesses in their understanding so that they can fix them.
* **Certification** means evaluating a set of skills acquired by the students against an objective standard, to attempt to measure their fitness for certain tasks or professions. From the perspective of certification, marks are intended to be objective judgement of the fitness of the student for carrying out tasks requiring the skills taught in a class.
These two are often in tension with one another because certification pushes teaching toward rote practice and standardized testing and grading, while illumination pushes toward more open-ended exploration and interactive formats which can deliver much more benefit for apt students but are often very subjective. Most classes try to deliver both, to varying degrees of success, though some classes may almost entirely hew to a single side of the balance.
You need to decide what you're after from the classes that you take. From your "pay for service" tone, it sounds like you want the career value that comes from certification. But certification isn't valuable if the standard can be easily negotiated, and so professors have to set a standard and stick by it. Sometimes they are even forced to by regulations. From the perspective of certification, trying to negotiate for a better grade is trying to cheat the system and reduce the value of everybody's grades.
Moreover, one of the "meta-skills" that is always being certified is the ability to figure out what somebody wants from you. If you're out in industry and you deliver the wrong thing because you misinterpreted your client's needs and didn't ask for clarification, it will be difficult to argue that you should be given more partial credit.
If, on the other hand, you're after illumination, then grades are less important to begin with. In that case, it's more important to understand why you got the grade that you got, so that you can improve your understanding of the material. If you want illumination and you aren't getting it, you need to switch courses, majors, or institutions.
Upvotes: 7 [selected_answer]<issue_comment>username_3: Based on my experiences I realized that most students make more effort if the course is graded rather than its result is just marked by Pass or Fail. Consequently they learn more. Of course there are also exceptions: the situations where they make effort regardless of the grade.
Upvotes: 3 <issue_comment>username_4: This is not so much an answer as an extended comment on 'the right mark' from someone who has to allocate marks.
On the whole I would not say with certainty that a mark I have given is 'the right mark' (well, unless it's 0), only that it's close enough. For most questions that are useful for actually teaching students something, it's very difficult to be completely objective. Even if you start with a detailed mark scheme, the way you decide whether something slides across the yes/no divide will change over the course of a large pile of scripts. I also often suspect that some students get the marks more by luck than judgement. For this reason I think it's best to have worked in to the grade system a way of checking borderline cases.
Not discussing marks also acts a defense mechanism, particularly in cultures where students are focused more on marks than learning. I believe it is not unusual for a large section of a class to try and argue for more marks. This is frustrating just in terms of being time-consuming for anyone, and disappointing for people who really care about the students learning. It can also be particularly difficult when students understand so little of the topic that they cannot be shown why they are wrong.
Another thing I've come across, particularly in the context of north American large classes, is 'I'm really close to the grade boundary so it's not fair to give me the lower grade'. When there are hundreds of students, and grade boundaries are 5% apart, being 0.8% below a boundary can mean there are 5 or more students closer to the grade boundary!
In the other direction, if you really feel your grade is substantially wrong, do ask about it. It can be a mistake, particularly if the marker has not noticed part of your work, or added up the marks wrong.
Upvotes: 3 <issue_comment>username_5: First off, I would say that the primary importance of a grade is to be able to certify that you actually mastered the material. Simply knowing "X attended Y university and studied Z" tells me next to nothing about how much X knows about Z without the additional information provided by grades. This is not to say that GPA is the end-all-be-all of the hiring process, but it does make a significant difference in my ability to determine how successful the candidate was in work similar to what I'm wanting him or her to do if hired. This is especially true for candidates who are recent graduates and don't have an extensive work experience to point to. It can also be helpful in cases where a candidate can't give much in the way of specifics about prior work projects, such as if those projects were classified or otherwise secretive in nature.
Second, regarding this portion of the question:
>
> In any other area of business if a client pays (e.g. a student pays tuition) and is dissatisfied or has a concern about a service, then the company would work with them and either explain or change some part of the contract.
>
>
>
In order to serve their above-stated purpose, it is essential that grades be objective. If you can buy a grade, it is worth absolutely nothing because it does not serve the purpose of a grade. In order for the degrees granted by an institution to be valuable, the grading process must be held to high standards of ethics and objectivity. Having those standards violated is an enormous black mark on the reputation of the institution. Furthermore, when discovered, it will likely result in the termination of those involved. For a recent example, see [UNC Chapel Hill academics-athletics scandal](http://en.wikipedia.org/wiki/University_of_North_Carolina_at_Chapel_Hill_academics-athletics_scandal).
As far as the 'customer' analogy is concerned, consider that you are not the only customer of the institution. If the institution allows you to get a degree and a good GPA without actually mastering the material, that makes not only your degree worthless, but also the degrees of everyone else in that program. Since it can be reasonably assumed that most of the other students and alumni don't want their degrees to be useless, the institution is, in fact, acting in accordance with what their 'customers' want by maintaining the integrity of the grading process.
Upvotes: 4 <issue_comment>username_6: >
> Given some of the answers and comments on the question I'm curious, how did this mentality that marks are non-negotiable arise? There seems to be the belief the prof has a totalitarian rule over the students. This doesn't make sense, especially considering how commercialized some schools have become. In any other area of business if a client pays (e.g. a student pays tuition) and is dissatisfied or has a concern about a service, then the company would work with them and either explain or change some part of the contract.
>
>
>
I don't think "non-negotiable" is the right word. Marks can be questioned and even challenged. However such challenges are (outside of the movie *Clueless*) not negotiations, because that implies a business transaction in which the student is offering something in return.
[Also, professors do not have "totalitarian rule" over the students. We don't have any "rule" over the students. We can only ask them to do certain limited things and they get to decide whether to do them or not. It is of course very common for students to drop or exchange a class because they are not happy with some aspect of how it is being run. This is really the antithesis of totalitarian rule.]
Grading is not a business transaction. You seem to think (or at least be willing to argue) that modern academia is a business transaction in which the student is the client and the instructor is the service provider. Well, there is some truth to that, but it also has severe limitations. (By the way, I have found that most businesses whose services I enlist as a paying client have severe limitations on how they are willing to work with me or (especially) change part of the contract in response to my complaints. The threat of losing my business does something in some cases and very little in others.)
It is worth thinking about what services a university is actually providing, and to whom. If all you wanted from your university was to teach you courses and give you a grade at the end, you could enroll in internet classes at little (or no) cost. Most universities -- especially expensive ones -- are also **certifying competence** and **providing prestige** to their graduates. That is why you are paying them the big bucks. This only works if the grades themselves are *not* negotiable in the sense you mean. At a very elite university there will be considerable resources available for the student and steps taken to try to ensure their success, and the average GPA may be higher than at some other universities. But I taught calculus at Harvard for several years, and every time there were some students who got D's and F's. A B- in calculus at Harvard is a discouraging grade -- literally; it is meant to signal to the student to seriously shape up or not continue studying math -- but it does certify some amount of calculus knowledge. To get an A in calculus at Harvard you must indeed be very good at the subject: Harvard wouldn't be a top American university if it gave top grades to students who had not mastered the material.
Sometimes it helps to make the situation more extreme. If you think that "the student should always be right", perform this thought experiment: I will offer you the opportunity to take the **COSAT**s, a consumer-oriented variant of the SATs. Every student who takes my exam will pass. In fact, every student who gets less than the 50th percentile will have their score reported as "satisfactory". And that's just for basic members. Silver members will be allowed to answer again the questions that they got wrong and will have their exams rescored. Gold members will be offered the same service together with additional instructional materials that will include complete and comprehensive answers to all exam questions. Platinum members get online access to the materials while taking the exam, in a patented "one-click: correct!" format. Of course the COSATs will cost money -- so do the SATs! But actually basic membership is cheaper than the SATs and silver, gold and platinum membership is surprisingly competitively priced. Are you interested?
Upvotes: 4 <issue_comment>username_7: >
> If the purpose of going to school is to learn [...] then how do marks fit into the equation?
>
>
>
Grades are a "cheap" way of assessing what a student has learned on a specific topic. Imperfect and cheap as they are, grades are nonetheless one of the best tools that we (as humans) were able to conceive for assessing learning without introducing more serious problems (experiments have been made, but with doubtful success).
But let me stress a few facts about grades and grading which sometimes students overlook:
* Sometimes students are discouraged by bad grades because they feel deflated as persons. Grades, however, do not assess personal or moral qualities: better grades do not make better people. As I said, a grade is just a measure of what a student has learned or understood on a specific topic and, also, of his ability to convey this learning to other people (don't underestimate this part!). *But no more than that.*
* Grades are not proportional to the amount of study done. This might seem unfair: I've studied so much, I deserve a better grade! Rather, grades are roughly proportional to the quality of your study. If you study a lot and you don't get good grades, probably you have to change your approach.
* Grades are the result of a measurement and, as such, are subject to uncertainty. There is no such thing as a right grade as there is no such thing as an exact measurement. The amount of uncertainty depends on many factors, e.g. the type of exam, the grader, the neatness of students' papers etc. A professor should try to keep this uncertainty to a minimum, but it will be never negligible.
* Grades can't be negotiated in the sense of "hey prof, here I should've got 3 more marks", but you can say "I think that this piece of solution, which was marked as wrong, might be right because of this and that etc.". Then, if the professor recognizes that you're right, he or she will upgrade your marks on the basis of their rubric; if they think you're wrong anyway, they will explain where your argument fails.
Upvotes: 4 <issue_comment>username_8: >
> This doesn't make sense, especially considering how commercialized some schools have become.
>
>
>
Commercialization doesn't go hand in hand with quality. At least in my region, private universities are infamous for very poor quality, and yes, there you can negotiate marks until you are happy, or at least you can expect not to fail as long as you keep paying.
>
> In any other area of business if a client pays (e.g. a student pays tuition) and is dissatisfied or has a concern about a service, then the company would work with them and either explain or change some part of the contract.
>
>
>
Not at all. You can, for example, be dissatisfied with mobile network quality, but all you will hear is 'we're sorry, but we can't guarantee 100% coverage nor 100% availability in peak hours'.
But going to the main point:
>
> What is the idea behind giving a student a grade?
>
>
>
You can consider it a service which certifies the quality and quantity with which you coped with the given tasks. You are given a syllabus and a limited time. The marks show how much of the material you were able to process. That makes them very valuable: a student with many 5 marks has usually coped (almost) fully with the expectations, a student with 4s had some issues, and a student with 3s has coped in a mediocre way, though well enough not to fail.
>
> It may be relevant to note that where I go to school, costs quite a bit of money.
>
>
>
If it were relevant, it would mean you are buying your certification, which would make it of mediocre quality, because the whole point of the exams and the certification would be lost (which is the case in my country, where private universities are considered mediocre for exactly that reason).
Upvotes: 2 <issue_comment>username_9: While I generally agree with [the answer by @username_2](https://academia.stackexchange.com/questions/31445/what-is-the-purpose-of-having-grades/31448#31448), I also wanted to note what is probably the single most important part of the reason we grade (which I don't see mentioned).
The ultimate point of education is to create individuals capable of producing work of sufficient, *consistent* quality. The ability to self-evaluate and recalibrate towards the desired quality is the essential component of consistency (at least for mere mortals).
While the instructive and gate-keeping functions of grades are important, the feedback grades convey helps us create an internal model of what qualifies as "good" and "bad" and "mediocre" work in our fields. If we don't build models (and learn to apply them to our own work), we'll forever be at the mercy of hand-holders and gate-keepers to tell us what we've done wrong.
Upvotes: 3 <issue_comment>username_10: Historically, grades are relatively recent in the university system. The first universities in the modern sense sprouted up in the High Middle Ages (circa 1200 and afterwards). They used disputations (formal, logical, oral debates) as a means of assessment. Grading began in the 1800s when professors wanted a quicker way of assessing progress; those professors using grades were considered lazy.
cf. "[History of Grading Systems](https://www.ehow.com/about_5103640_history-grading-systems.html)" by <NAME>.
Upvotes: 3 |
2014/11/09 | 9,297 | 33,963 | <issue_start>username_0: I am an undergraduate student in the hard sciences, and am thinking of pursuing a research career after graduation. I am also quite politically involved on the left, and have political articles published online. I am critical of many governments (including my own) and corporations, and of many of the applications of technology in fields I am interested in.
My politics are very important to me, and I could not give them up. That said, I often worry about how they could be detrimental to my career, especially before I'm already established. I worry, for instance, that grad school admissions would Google my name and find my views and affiliations, and that this would negatively impact my chances to get in. How realistic is this fear?<issue_comment>username_1: It will very much depend on the country you intend to work in.
In Britain, academia has long been a home for radical thought: for example, the foundation of University College London in the early 19th Century was an open act of defiance against the establishment at the time.
It's far from universally true, but academia does strive to reward merit for merit's sake; so people will strive to assess you on the basis of your research. Having said that, recruitment decisions are made by us humans, with all our foibles, prejudices and weaknesses.
There are political movements that get associated with large-scale crimes against humanity, including mass murder: Khmer Rouge, Stalinism, Fascism. If you were involved with those, that would indeed be likely to be devastating to a career in academia.
Upvotes: 4 <issue_comment>username_2: The recent case of <NAME> (who was offered a tenured position at the University of Illinois and then had the offer pulled after there were complaints about some of his political remarks on twitter) shows that being politically outspoken can have an effect on one's academic career in the United States. Although Salaita doesn't work in the physical or life sciences, I think the same thing could very easily have happened to (for example) a chemist who had made the same statements.
Unlike most professionals working for large corporations, faculty are public figures. If an employee of a large corporation makes political statements, it's very clear that these don't represent the views of the corporation and it isn't likely to cause a significant problem to the employer (although individuals within the organization might well object to the statements and respond in ways that might be unfair.) When a faculty member makes political statements the press is likely to pick these statements up and amplify them and the administration of a university is more likely to be concerned about the effect of these statements on the reputation of the university. For example, your political statements might cause a wealthy donor to stop a planned donation to the university. This is especially true when the university is a public university that depends on the state and federal governments for financial support.
A growing problem in the sciences is that even making non-political statements about scientific issues (evolution, natural resources, global climate change, etc.) can result in politically motivated attacks against a researcher. If your research interests happen to be in one of these areas and you're also politically active, you're even more likely to become the target of such an attack.
Upvotes: 4 <issue_comment>username_3: I will direct my answer at an American academic career in the sciences.
You certainly have a right to make your political opinions known. Unfortunately the internet makes it possible (indeed, trivial) to search through every public remark someone has ever made, and this means that everyone can be held to a higher level of scrutiny in this regard than in the past. As others have said, there are situations in which remarks people have made have had a negative impact on their academic career. Most academics believe that "academic freedom" should ensure immunity from retribution for a range of such political remarks...but certainly not all of them.
In fact, it is not entirely clear what constitutes a *political* remark as opposed to something else. For instance, one of <NAME>'s tweets was
>
> Israel: transforming 'antisemitism' from something horrible to something honorable since 1948.
>
>
>
This is taken from [this article](http://www.chicagotribune.com/news/opinion/commentary/ct-steven-salaita-tenure-jews-twitter-tweets-unive-20140929-story.html) in which he explains the context. The additional context he provides convinced me to view it is as a truly political remark that should come under the protections of "free speech" and "academic freedom". However, without that context....in that it contains the claim that antisemitism is honorable, it looks pretty bad. If my colleague had posted this and asked me to defend his right to keep it there, I would on the contrary ask him to take it down.
Here are some thoughts on how to be politically active in a way which is not to the detriment of one's academic career:
* Don't post on twitter.
Really, "don't post on twitter" seems close to being universally good advice. [**Added**: <NAME>agrees. I can see that if you want to post academic/scientific content *only* then the twitter effect would be at least non-negative. I cannot agree that twitter has had a significant effect in disseminating work in my field, especially in comparison to so many other electronic media. But maybe it is different for others.] But the combination of telling the whole world and strict character limits make it anathema to academic discourse or even, I would advise, to discourse by academics. In general, young people need to use social media carefully: political *articles* are different, but one's off-hand political, social and religious comments are best not shown to the entire world.
* Make sure that your political remarks can only be construed as political.
This is the moral from the above example. Political remarks advocate policy, support or criticize governments, or support or criticize political figures on political matters. Salaita's tweet (intentionally, and even rather cleverly) plays on the distinction between criticizing the Israeli government and criticizing Jewish people. But don't play with that. Don't criticize or denigrate any ethnic group. When you want to criticize a group of people aligned around a certain practice, make sure you are criticizing the practice, not the people *as people*. For instance, if you are pro-life, don't (publicly) paint women who get abortions as immoral or unclean: that's not a political statement. Don't (publicly) criticize conservative politicians who are against gay rights by saying that they must either have terrible sex lives, be latently gay or both.
* Try to have a clear separation between your political activities and your academic ones.
The OP says that he has political articles published online. Sounds fine to me. I would think at least three times about incorporating these articles into your science classes. As a general rule, I feel free to discuss politics and religion in my (mathematics) courses because I feel these should not be taboo topics among human beings, but I always characterize them as digressions from the class, I never push a position, and in fact I try not to enunciate my own stance or position in a classroom environment. If someone wants to hear how I really feel about Islam or the midterm elections, they can talk to me after class.
This "clear separation" should work just as well in the sciences as it does in math. Academics in certain other fields might have more trouble with this: e.g. women's studies.
* Hiring committees doing significant digging into candidates' extra-academic life is the exception rather than the rule.
The OP specifically mentions graduate admissions and googling. I have done lots of graduate admissions and I cannot specifically remember ever having googled an applicant (and I often google their letter writers or their home institution). For faculty and such: sure, sometimes I get curious, but I don't feel like such googling is part of the faculty search process. If I found some political activity about a faculty candidate through the job search, it would have to be extremely significant or specifically problematic in some way for me to bring it up to my colleagues at all. To give two examples of googling academics: I have learned for instance that someone had been a union organizer and someone else was a leader of a campus pro-life organization and had run for political office. If these people applied for jobs I would keep this information to myself.
* On average, it is a little safer to be on the left than on the right.
The majority of American academics that I know are not very politically active but are considerably left of the center of American politics. This applies to me. If I learned that a potential job candidate was very active in Tea Party politics, I would take a moment to steel myself not to let this affect my decision. If I learned that a potential job candidate had been active in Obama's campaign, I would think "Well, that will make for a fun story sometime." I have colleagues whose political views are very different from my own, including one whom I respect the most, because of his great personal integrity and selflessness. But I still have to think and act a little more carefully around this colleague because of this; often I cut off a "humorous remark" just before it leaves my mouth because I remember that he will not be amused, and I don't want to make him uncomfortable.
I put this point last because it is purely contingent on "local phenomena", but I think it would be naive to expect exactly the same academic reception for political activism on the two sides of the spectrum. On the other hand, at some state universities the local politics can be *very different* from the politics of the university and the faculty. This is really beyond the scope of my answer, so I'll just say: surely it is best if state employees who run the university system know as little about a faculty member's political activity as possible, at least until tenure.
Upvotes: 6 [selected_answer]<issue_comment>username_4: I agree with the answers so far, but let me elaborate on one sentence that could be a little worrisome:
>
> I am critical of many governments (including my own) and corporations, and of many of the applications of technology in fields I am interested in.
>
>
>
Criticizing governments and corporations is one thing, but criticizing specific applications of technology could come across as far more insulting to anyone personally involved in those applications (whether through their own research or as a consultant to corporations or government agencies). It won't endear you to your colleagues if you denounce their work, particularly on moral grounds. This is not to say you shouldn't do it if you feel strongly, but you should try very hard to be as reasonable and fair as possible, and in any case you should recognize that you may be cutting off certain options. For example, if you loudly announce that people with DARPA grants are immoral because they are serving the military-industrial complex, then you shouldn't expect anyone with a DARPA grant to hire you as their postdoc, even if you would be supported by a different (and less objectionable) grant.
I don't think this is the sort of thing that's likely to derail your entire career, but it's safest to keep the list of people you personally offend as short as you can.
Upvotes: 4 <issue_comment>username_5: Likely, very detrimental.
Ask yourself this. Suppose you become a scientist, and you are considering citing a paper authored by someone you know stridently espouses ultra-conservative views, entirely antithetical to your own. If you could cite another paper instead of hers, all things being equal, would you? Would it cause you to hesitate before citing her paper? Would you think to yourself, 'Ugh, I cannot stand this person; I wish someone else had published similar research'?
In my opinion, this is common sense. However, there is also a bevy of evidence that publishing research contra, for example, feminist theory will harm the author's career. Consider the following quotations from the [Florida State Law Journal](http://www.law.fsu.edu/journals/lawreview/downloads/304/kelly.pdf).
>
> Perhaps the most physically and personally intimidating behavior was
> directed at <NAME>, who had first brought the issue to the
> public's attention. Steinmetz appeared on such shows as the Today
> Show and Phil Donahue. Her work was reported in various newspapers
> and magazines, including a full-page story in Time magazine. Yet,
> while Steinmetz's work received some support, the public attack
> against Steinmetz and her family evidenced the public's
> overwhelming rejection of her work. Verbal threats were launched
> against her and her children - at home and in public. Threatening
> phone calls were made to Steinmetz and the sponsors of her speaking
> engagements in order to prevent Steinmetz from further publicizing her
> work. On one occasion, a bomb threat was called into an ACLU meeting
> at which Steinmetz was scheduled to speak. Professionally, Steinmetz
> was also threatened. In an attempt to prevent her from receiving
> tenure, every female faculty member at the University of Delaware was
> lobbied by individuals calling on behalf of the women's rights
> movement... Other social scientists committed to the study of husband
> abuse and family violence were similarly mistreated. Such tactics seem
> to have proven effective. Both researchers who were involved in the
> early projects and even those who might have become involved admit
> that they now choose to give the topic of battered men "wide berth".
>
>
>
Further consider the following list of feminist methods of suppressing research, as published in the [Journal of Criminal Policy](http://pubpages.unh.edu/~mas2/V74-gender-symmetry-with-gramham-Kevan-Method%208-.pdf)
>
> * Method 1. Suppress Evidence
> * Method 2. Avoid Obtaining Data Inconsistent with the Patriarchal Dominance Theory
> * Method 3. Cite Only Studies That Show Male Perpetration
> * Method 4. Conclude That Results Support Feminist Beliefs When They Do Not
> * Method 5. Create "Evidence" by Citation
> * Method 6. Obstruct Publication of Articles and Obstruct Funding of Research That Might Contradict the Idea that Male Dominance Is the
> Cause of PV
> * Method 7. Harass, Threaten, and Penalize Researchers Who Produce Evidence That Contradicts Feminist Beliefs.
>
>
>
Moreover, just the other day I learned that a pharmaceutical corporation decided that funding an endowed chair at the U of T was no longer a "priority" when the U of T decided to award the chair to <NAME>, author of the book *Pharmageddon*.
Of course politics will affect your academic career.
Upvotes: -1 <issue_comment>username_6: Academia (in the Western countries that I know of) is the realm of **political correctness**. In this regard, as long as you don't deviate too much from the spectrum of opinions it defines in your region, the risk is small. In your case, if your opinions are just the regular college-student, mainstream leftism *a la* 'corporations are evil', you will be just fine.
In addition, your opinions will most likely evolve and get more nuanced by the time you'll be looking for an academic position.
This being said, if I were involved in the hiring process and your opinions included denial or gross exaggeration of scientific facts, or were based on crackpot science (say you were a 'climatoskeptic', or vociferously anti-GMO or anti-nuclear energy, or you think vaccines are a corporate conspiracy that will give your child autism), I would wonder about your scientific sanity. But people with these opinions *get hired nonetheless*.
Now, there are cases where your political opinions, if made public, would close doors, but these are typically cases where said opinions would prevent you from wanting the job in the first place. Say you were supportive of an 'animal welfare' activist group that ['released' lab animals, wasting years of hard work and money](http://www.nature.com/news/animal-rights-activists-wreak-havoc-in-milan-laboratory-1.12847); you'd be unlikely to get hired in a biology lab.
Upvotes: 2 <issue_comment>username_7: A recent scandal at a Swiss university clearly shows that political involvement can be detrimental even to a career in the natural sciences.
As the whole story I am going to tell is highly political (and I actually try to remain neutral), I add a large number of sources at the end, unfortunately mostly in German, as the whole incident occurred in the German-speaking part of Switzerland. I personally do not name the institutions or people involved; the articles, however, do so.
Switzerland has several state universities which are, in theory, under the administration of a deanery independent from the government. However, a recent scandal at one of these universities calls the extent of this independence into question.
Background: The new primary school syllabus contains material from a scientifically highly questionable ideology. Two professors, heads of the medical and biological faculties, strongly opposed this, referring to recent scientific research, with the support of professors from other natural science faculties.
One of the leading professors of this opposition was soon afterwards charged with academic misconduct and forced to retire from his position (or fired, I am not sure anymore), although all accusations turned out to be false. Soon after the media discovered this, the university blamed that professor's strongest supporter for having fabricated the whole charade and fired her for it, with no evidence at all. Besides it being highly questionable why she, as such a strong supporter of the first professor's standpoint, should suddenly be the one plotting to end his career, courts soon found her not guilty of all charges (and, in return, her claims for compensation are now pending).
Public pressure forced the dean to retire after the whole story became public. However, the person the evidence points to as the main plotter remains unscathed, as she enjoys certain immunities connected with her position.
There have been similar stories in recent years, which I will not go into here.
The following articles, unfortunately all in German, illustrate the evolution of the whole scandal. It casts a really poor light on the current state of academic freedom, at least in Switzerland.
<http://www.weltwoche.ch/die-weltwoche/themenschwerpunkte/fall-moergeli.html>
<http://www.nzz.ch/zuerich/moergeli-affaere-condrau-flurin-uni-zuerich-medizinhistorisches-institut-1.18381044>
<http://www.nzz.ch/zuerich/affaere-moergeli-1.18325659>
<http://www.tagesanzeiger.ch/dossiers/schweiz/dossier2.html?dossier_id=1640>
Upvotes: 0 <issue_comment>username_8: If you consider Political Science to be a scientific career, I can cite an example in which political involvement would have been detrimental.
A political scientist named <NAME> wrote a book called *Home Style* in which he observed Congressmen in their home districts. At the time, research was mostly done in Washington. To accomplish this, Fenno had to contact several Congressmen on both sides of the aisle and ask permission to follow them around in their home districts. He would have access to some of their most intimate moments with their family, constituents, supporters, etc., so they needed to trust that he wasn't out to smear them.
Congressmen might have been less likely to accept his request if he had been politically active (which he was not). A Republican might fear that he might be trying to dig up dirt on them, or vice versa.
So contrary to popular belief, being a political scientist is not a good idea if your goal is the advancement of a particular ideology, but I digress.
A takeaway from this for non political scientists is that people might be less willing to work with you if they believe you have a highly partisan agenda.
Upvotes: 1 <issue_comment>username_9: Let's look at the current political climate in academia:
>
> Over the
> years, I have watched a growing intolerance at universities in this
> country – not intolerance along racial or ethnic or gender lines –
> there, we have made laudable progress. Rather, a kind of intellectual
> intolerance, a political one-sidedness, that is the antithesis of what
> universities should stand for. It manifests itself in many ways: in
> the intellectual monocultures that have taken over certain
> disciplines; in the demands to disinvite speakers and outlaw groups
> whose views we find offensive; in constant calls for the university
> itself to take political stands. We decry certain news outlets as echo
> chambers, while we fail to notice the echo chamber we’ve built around
> ourselves.
>
>
> [source](https://news.stanford.edu/2017/02/21/the-threat-from-within/)
>
>
>
>
> More than nine in 10 UK universities are restrictive of free speech,
> according to a new report that raises concerns over the issue of
> censorship on campuses.
>
>
> Analysis by Spiked magazine, supported by the Joseph Rowntree Reform
> Trust, suggested campus censorship had increased steadily over the
> past three years – with a growing number of institutions actively
> clamping down on ideas, literature and guest speakers that are not in
> keeping with their own values.
>
>
> The Free Speech University Rankings (FSUR), drawn from examining the
> policies and bans of 115 universities and students’ unions, found
> almost two thirds (63.5 per cent) were “severely” restrictive of free
> speech, with more than 30 per cent given an “amber” warning.
>
>
> [source](https://www.independent.co.uk/student/news/nine-10-uk-universities-free-speech-restrict-rankings-joseph-rowntree-cardiff-ediburgh-newcastle-a7577381.html)
>
>
>
>
> Higher education’s suppression of speech is well-publicized. But in an
> odder and less well-known twist, campuses are increasingly co-opting
> the language of free speech and using it to justify censorship. One
> example: The designated “free speech zones” that exist on roughly 1 in
> 10 U.S. college campuses, according to a report released last month by
> the Foundation for Individual Rights in Education.
>
>
> The very existence of a “free speech zone” suggests that students’
> expression is limited elsewhere on campus. And even in the “free”
> zones, administrators often restrict who can speak, when and for how
> long.
>
>
> Dozens of universities have also used the language of free speech to
> justify trendy “Language Matters” or “Inclusive Language” campaigns.
> The point of these programs is to condition students to wince away
> from words and phrases deemed offensive, instead using politically
> correct substitutes.
>
>
> [source](https://www.wsj.com/articles/censorship-is-free-speech-it-must-be-the-class-of-1984-1485478244)
>
>
>
>
> A related survey question, which has been asked most years since 1967,
> inquired whether “colleges have the right to ban extreme speakers from
> campus.”
>
>
> About 43 percent of freshmen said they agreed. That’s nearly twice as
> high as the average share saying this in the 1960s, 1970s and 1980s.
> It was surpassed only once, just barely, in 2004. But in general,
> support for banning speakers from campuses has trended upward over
> time.
>
>
> Recent incidents suggest students (and sometimes their professors) may
> have rather expansive views of what constitutes an “extreme speaker.”
> Among those disinvited or forced to withdraw from campus speaking
> engagements in the past few years are feminism critic <NAME>,
> former secretary of state Condoleezza Rice, International Monetary
> Fund Managing Director <NAME> and <NAME>, now the
> Indian prime minister.
>
>
> [source](https://www.washingtonpost.com/opinions/liberal-but-not-tolerant-on-the-nations-college-campuses/2016/02/11/0f79e8e8-d101-11e5-88cd-753e80cd29ad_story.html?utm_term=.9ca4bae92c94)
>
>
>
>
> Harvard revoked offers to at least 10 applicants based upon their
> digital footprint. What is more troubling is that Harvard has lobbied
> for years against a social media privacy law for applicants that would
> ban colleges in Massachusetts from being able to request applicants
> verify their digital accounts and activities which may indicate their
> political or personal opinions.
>
>
> Harvard along with other prestigious colleges have a long documented
> history discriminating against students based on religion and other
> personal attributes. A recent lawsuit is claiming Harvard for years
> has discriminated against Asians. The evidence so far demonstrates the
> troubling ways Harvard uses personal non-academic information to
> reject applicants.
>
>
> The bottom line is that if a college applicant visits websites that
> discuss hot button political issues such as the president, or far left
> or far right lawmakers, the First Amendment or Second Amendment
> rights, abortion, affirmative action, gay marriage, immigration, etc.
> its highly possible they may be denied admission to the most
> prestigious colleges in the United States. Why? Because an increasing
> number of college admissions officials are going to great lengths to
> collect their applicant’s personal political opinions.
>
>
> [source](http://www.shearsocialmedia.com/2018/08/top-college-tried-to-reject-applicant-who-followed-alex-jones-on-twitter.html)
>
>
>
>
> A Pensacola student who sparked controversy Tuesday by wearing a
> Confederate uniform to the site of a violent clash between white
> nationalists and counterprotesters has been kicked out of Pensacola
> Christian College, according to a North Carolina media outlet.
>
>
> WXII News 12 reported that <NAME>, who reportedly splits
> time living in Pensacola and North Carolina, learned Thursday that PCC
> staff had decided to terminate his enrollment.
>
>
> [...]
>
>
> Video from Tuesday showed Armentrout — wearing a Confederate uniform
> and carrying a Confederate flag — standing and saluting a statue of
> <NAME> at Charlottesville's Emancipation Park. He was
> surrounded by a crowd that chanted "terrorist go home." Armentrout
> stood in a motionless salute until he was peaceably escorted away from
> the scene by police.
>
>
> Armentrout later told the News Journal he made the trip to Virginia
> because the KKK, Neo-Nazis and other groups are destroying the history
> of his ancestors and he wants to share "the true history" of the
> American South. He said Neo-Nazis have wrongly "latched on" to
> Confederate history.
>
>
> [source](https://eu.pnj.com/story/news/2017/08/18/confederate-demonstrator-kicked-out-pensacola-christian-college/579978001/)
>
>
>
>
> A University of Pennsylvania Law School professor has been removed
> from teaching mandatory first-year courses after making derogatory
> remarks about the academic performance of black students.
>
>
> During an interview last fall, professor <NAME> said that black
> students at Penn Law never graduated in the top quarter of their
> class. "Here is a very inconvenient fact Glenn, I don't think I've
> ever seen a black student graduate in the top quarter of the class and
> rarely, rarely in the top half," Wax told Brown University professor
> <NAME> in a video of the interview that recently gained
> attention.
>
>
> [source](https://edition.cnn.com/2018/03/16/us/penn-removes-professor-for-racial-remarks-trnd/index.html)
>
>
>
>
> Shepherd is a graduate student and teaching assistant. Her sin was to
> show a first-year communications class a video snippet from TV Ontario
> of two professors debating grammar.
>
>
> [...]
>
>
> All of which is to say that when Shepherd ran her five-minute TVO clip
> featuring pronoun traditionalist <NAME> debating another
> professor, she unleashed a storm.
>
>
> [...]
>
>
> The teaching assistant was hauled before a three-person panel made up
> of her supervisor and boss, <NAME>, another professor named
> <NAME>, and <NAME>, Laurier’s acting manager of gendered
> violence prevention and support.
>
>
> The trio interrogated her for more than 40 minutes.
>
>
> Shepherd had the wit to record the proceedings. It makes for
> depressing listening.
>
>
> [source](https://www.thestar.com/opinion/star-columnists/2017/11/24/the-problematic-case-of-the-wilfrid-laurier-ta-who-dared-to-air-a-debate-on-grammar.html)
>
>
>
>
> Finkelstein was not denied tenure because of any shortcomings in
> scholarship or teaching. <NAME> had earlier described
> Finkelstein's book Beyond Chutzpah as "a very careful scholarly book"
> and "the best compendium that now exists of human rights violations in
> Israel" (Goodman, "<NAME>uses"). The late <NAME>, widely
> recognized as the founder of Holocaust studies, said of Finkelstein,
> "his place in the whole history of writing history is assured," and
> praised his "acuity of vision and analytical power." (Goodman, "It
> Takes").
>
>
> There can be little doubt that Finkelstein was fired because of his
> criticisms of Israel's human rights violations against the Palestinian
> people, and for his fact-based criticisms of the Israel lobby. Raul
> Hilberg warned at the time, "I have a sinking feeling about the damage
> this will do to academic freedom" (Grossman). Even the DePaul
> administration tacitly conceded that his firing was politically
> motivated when it acknowledged Finkelstein as a "prolific scholar and
> outstanding teacher'' in a later legal settlement (Finkelstein, "Joint
> Statement").
>
>
> [source](http://www.worksanddays.net/2008-9/File14.Klein_011309_FINAL.pdf)
>
>
>
etc. etc. etc.
Although [this problem has existed for decades](https://rads.stackoverflow.com/amzn/click/0945999763), expressing an opinion that is "politically incorrect" has never been as dangerous within an academic context as it is today. Also, the range of speech that qualifies as "politically correct" is becoming [ever more narrow](https://www.studyinternational.com/news/offence-students-2017/). Especially (but not only) for people on the right of the political spectrum, expressing any political opinion whatsoever has become simply too risky. Many people received a grading penalty, were kicked out of college, lost their jobs, failed to obtain tenure or were otherwise punished by expressing opinions too controversial for the current political climate. And often this involved opinions expressed on social media or other non-academic contexts.
As a consequence, 54% of students report self-censoring in the classroom at some point since the beginning of college, according to [a survey by the Foundation for Individual Rights in Education](https://www.thefire.org/new-survey-majority-of-college-students-self-censor-support-disinvitations-dont-know-hate-speech-is-protected-by-first-amendment/). A similar survey by [Hamilton college's student newspaper](https://spec.hamilton.edu/free-speech-survey-support-for-discourse-strong-but-self-censorship-endures-ca7c152c902f) also looked into the political affiliations of respondents and demonstrated a striking difference between responses from conservatives and liberals. No less than 84% of conservatives indicated that “the political climate on campus prevents them from saying what they believe”, whereas only 21% of liberals reported self-censorship.
Why Liberals & Conservatives experience censorship so differently:
>
> The idea of a balanced argument at my undergraduate university [in the
> US] was ‘neoliberal’ versus ‘radically liberal’. We spoke of the
> importance of diversity, but political diversity was never considered.
> I thirsted for a deeper understanding of why half of Americans could
> hold opinions that were only met with dismissive ridicule or barely
> acknowledged. What I wanted was a wide exposure to different ideas and
> arguments, whether or not I agreed with them.
>
>
> In the US, if someone disagrees with you politically, they disengage
> from you and refuse to get to know you on a personal level. So I have
> often kept quiet among my peers, only revealing my true thoughts to
> those who have ‘come out’ to me in the same way that Madeleine
> describes. This has been compounded by the fact that my undergraduate
> degree was in gender studies, a famously radically liberal discipline.
> I am proud that I do not conform to the stereotype of a gender studies
> student.
>
>
> I wish to remain anonymous not because I am ashamed of my views, but
> because I want to be an academic and fear assumptions might be made
> about my politics. Academia is so liberal that, though I am
> politically neutral or centrist, others might regard me as being
> conservative and not want to hire me. Nevertheless, I look forward to
> working towards a future where academics have intellectual freedom in
> the form of open discussion, not anonymous letters.
>
>
> [source](https://heterodoxacademy.org/the-problems-of-campus-culture-presumption-and-self-censorship/)
>
>
>
So if you're a Liberal and your political views are sufficiently aligned with those of the academic establishment, you may not have anything to worry about. For everyone else, however, it is best to keep your opinions to yourself and not advertise them in any way if you want to pursue an academic career.
Upvotes: -1 |
2014/11/09 | 780 | 3,255 | <issue_start>username_0: I am an undergraduate and am new to paper writing.
I have been writing papers with professor A at another university since I was in high school. We are writing a paper together for an upcoming conference (which is pretty big).
I have also been writing a paper with professor B at my current institution for the same conference.
I am also writing papers with some graduate students for the same conference.
But - in total, I will be submitting about 5 different papers to the same conference (all with different people!).
I don't know if this is acceptable in academia. That is, will there be any sort of consequences for submitting a lot of papers to the same conference?<issue_comment>username_1: Yes, it is acceptable, particularly since the author lists will all be different. Go for it!
One word of warning: The papers themselves must have clearly different content. If you submit five nearly identical papers, the editors are likely to accept at most the best one and reject the rest. That will annoy your other co-authors mightily.
Upvotes: 5 [selected_answer]<issue_comment>username_2: It's not a problem as such - for example, I see in the proceedings of a recent very large conference an author that shows up on 9 papers ([LREC2014, Núria Bel](http://www.lrec-conf.org/proceedings/lrec2014/authors.html#B)), the only question is if the individual papers are strong and novel enough.
"Salami publishing" is frowned upon, and if there is significant overlap in the topics it might be more useful and more likely to get accepted if you combine two smaller papers in a single better one.
Upvotes: 3 <issue_comment>username_3: There's a big difference between a large national conference and a smaller regional workshop. When you have 5,000 to 10,000 talks plus posters at a meeting, with 50 parallel sessions, there's likely not a problem with the number of abstracts on which you're an author. If it's a small meeting of 100 to 200 attendees, with only a single session at a time, you probably can't submit more than one or two talks.
There is also a difference between being an *author* and being the *presenter.* Many big conferences do not have limits on authorship, but *do* restrict the number of presentations any one person can give as the "first author" (or, depending on the conference, "presenting author"). For example, the APS has the rule that the first author should be the presenter, and there's a limit of one contributed (and one invited) talk per meeting.
Upvotes: 2 <issue_comment>username_4: It's not a problem per se. Many professors with larger groups submit multiple papers to the same major conferences every year.
However, the fact that you are an *undergrad* and, as you say, *new to writing papers* and you are still handing in *five* papers simultaneously to this conference sounds concerning to me. Make sure that:
* the papers are all individually good quality - even *writing* 5 good papers would take me multiple months of work, and I have plenty of experience. And that's not even talking about the time required for doing the research that the papers talk about.
* the papers are actually about different research, not just the same basic idea sliced up differently.
Upvotes: 4 |
2014/11/09 | 1,536 | 6,943 | <issue_start>username_0: As a software developer just starting out in research (working for a lab), I have an idea for a software application meant to address a particular need: helping users query data using novel methodologies (navigation languages and autocomplete methods).
Anyway, I know that ultimately I want to do science and not engineering. By that I mean that I don't want to build a tool (although it could help prove my idea); rather, I want to investigate (that's really what research is about, isn't it?) how navigational and autocomplete methodologies are important for querying data (for example).
So I'm a bit troubled wondering how I can transform this application idea into a more scientific research project. Should I look at the novel parts of the application (such as the autocomplete functionalities) and investigate how that might make querying better for users? Is that even research?
I guess overall I'm puzzled on how to make the **idea** of my software application stand on its own. How do I make my software idea contribute to the current body of human knowledge? Does software even count as knowledge? I guess I'm trying to convert the idea of my software application into a piece of knowledge. Any help/clarification?<issue_comment>username_1: Your first step should be an extremely thorough search of the scientific literature, in order to explore what's already been done in the area covered by your application - that is, assess the originality of "the idea" and its theoretical underpinning.
Upvotes: 2 <issue_comment>username_2: For software to "contribute to human knowledge", it needs to *advance* human knowledge -- a new algorithm, a new human interaction technique, a new approach to coding (often embodied in a new language tuned for that purpose, for clarity, though almost all such can be implemented in older languages with a bit of work)...
If you have a really new approach to performing or using autocomplete, that probably counts. If you're just using autocomplete in your program in a place where it's obvious to an experienced practitioner that autocomplete would be appropriate, it probably doesn't. You could do some legitimate research on measuring exactly how much difference which kinds of autocomplete help which users -- but that's human factors engineering, not software engineering per se.
Programming is just a tool. If you use it to conduct research, you're doing research. If you aren't, you aren't. Writing may be a good analogy -- you need to be able to write well to communicate, but "writing well" is usually not the creative act unless you're someone like e. e. cummings who can create a new way to approach writing itself. Deciding what to communicate and how, or finding ways to measure the advantages and disadvantages of varying approaches to communication, is usually where human knowledge is advanced.
Upvotes: 2 <issue_comment>username_3: Your software idea *may* be able to become a piece of research if you can come up with a few things:
* **Research question.** Ask a question relating to your software idea - for example, "How can we do X?" "What is a better design for X?" etc. Check the literature to see what has been said about this question (and related questions) by others.
* **Research result.** What is the actual, novel contribution of your work? Is it a new technique that hasn't been done before? A rule of thumb for designing certain kinds of applications? A much better way to do a certain kind of task (for some reasonable definition of "better")?
* **Validation of research results.** What kind of convincing evidence do you have that your result is sound? Depending on the type of result you claim, your evidence may be in the form of performance benchmarks of your technique relative to state of the art, user studies from users of your application, or something else entirely.
The best way to get a better understanding of what constitutes a research question, research result, or evidence, is to **read a lot of papers in your field of interest.**
Upvotes: 4 [selected_answer]<issue_comment>username_4: Your desire to do research is commendable. But although you are a seasoned engineer, you are still an undergraduate at research, and you need to slow your pace. Although you have a head start relative to your peers, you still need to develop some research maturity, which takes time or a mentor who can give you a push in the right direction.
I was also a programmer first and became a researcher later, so I understand where you are coming from. But like me at first, you do not really "get" it. Autocomplete is not research. Period. A tool that shows a nice graph of semantic data is not research. Period. It is a DEMO; you can submit it to a demo track of a conference or a smaller workshop, and that is it. But even then, unless the tool does something unusual it will get rejected. Unless you want to build the new [Virtuoso](http://virtuoso.openlinksw.com/dataspace/doc/dav/wiki/Main/) or the new [Neo4j](http://neo4j.com/), your tool is not research. Period. And developing a GUI tool is not something I would easily recommend, because making a GUI tool that is good enough to show to others takes a lot of time. That is why developing such GUI tools is usually reserved for MSc thesis projects and students like you, and is not something a PhD student undertakes on his own. Of course there are always exceptions, but this is what I have seen.
On the other hand, developing a new, better index for autocomplete than e.g. a trie is research. But even then, building the autocomplete module is not proof. You need experiments, a related-work section, a literature review, proofs, complexity analysis, and knowledge of data structures, which you may have but probably do not.
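To make the distinction concrete, here is roughly the kind of textbook baseline meant above: a minimal trie-based autocomplete sketch. This is illustrative Python only; the class names and word list are invented for the example. Building something like this is engineering, while the research would lie in designing a measurably better index and backing it with experiments and analysis.

```python
# Minimal trie-based autocomplete -- the textbook baseline a "new, better
# index" would have to beat in experiments. All names and data are illustrative.

class TrieNode:
    def __init__(self):
        self.children = {}    # maps a character to the child TrieNode
        self.is_word = False  # True if a stored word ends at this node

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """Return all stored words that start with `prefix`, sorted."""
        node = self.root
        for ch in prefix:                 # walk down to the prefix node
            if ch not in node.children:
                return []
            node = node.children[ch]
        results, stack = [], [(node, prefix)]
        while stack:                      # collect every word below that node
            cur, path = stack.pop()
            if cur.is_word:
                results.append(path)
            for ch, child in cur.children.items():
                stack.append((child, path + ch))
        return sorted(results)

trie = Trie()
for w in ["theory", "theorem", "thesis", "trie"]:
    trie.insert(w)
print(trie.complete("the"))  # ['theorem', 'theory', 'thesis']
```

A paper-worthy contribution would compare such a baseline against the proposed index on realistic workloads (query latency, memory, update cost), not merely demonstrate that completion works.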
Conclusively, you are now a good programmer. But that does not automatically make you a good CS student. You need to build a theoretical background to formalize research questions. And that I am afraid requires time and/or guidance.
Upvotes: 3 <issue_comment>username_5: I would start with a quality blog post, with references to other approaches and justification of your claims. If you can accomplish that, this piece of software *might* be a candidate for a paper.
In a journal paper you need to have something novel and concrete to show, to support it with evidence, and to relate it to other research. It cannot be "it is a great app, because I think so, my friends like it, and it got 10k likes". It should be more like "the new algorithm computes X with 7% less error...", "we introduce a new statistical model for classification of words based on Y...", or "75% of users accomplish goal Z with autocomplete vs. 53% who...".
Software engineering and scientific research (which topic? algorithms, statistics, linguistics, psychology...) are different skills.
Upvotes: 1 |
2014/11/09 | 427 | 1,930 | <issue_start>username_0: I have a paper that was presented at an NRC workshop and was not peer reviewed. Moreover, although this work is accessible at an NRC website, it is clear from reading the literature that search engines do not discover the paper and no one is aware of its existence.
Is it ethical to submit this paper to a peer-reviewed journal for publication?<issue_comment>username_1: Different journals have different standards for what counts as prior publication. For example, most computer science journals happily accept "extended journal versions" of existing papers that are intended to supersede the prior publication, as long as there is at least 30% new content and the relationship to the prior paper is made explicit. Some high-ranked biology journals, on the other hand, are so obsessed with "novelty" that they will consider a submission improper even if only an extended abstract has previously appeared. Check the policy of the journal(s) that you are considering: either it will be listed clearly online, or the editorial staff should be able to give you a quick answer about their policies.
Upvotes: 2 <issue_comment>username_2: A lot of journals state their policies for conference papers on their websites. It is mostly like the following:
* [Journal of Machine Learning Research](http://jmlr.csail.mit.edu/author-info.html)
We will consider research that has been published at workshops or conferences. In these cases, we expect the JMLR submission to go into greater depth and extend the published results in a substantive way.
Some of them give a numerical threshold for new content, such as 30% new material.
Find a suitable journal which accepts such submissions. Clearly state that this is an improved version of your workshop paper. Improve your paper as appropriate and submit.
As long as reviewers and editors are aware that your submission is an extension of a workshop/conference paper, this should not be an issue.
Upvotes: 2 |
2014/11/09 | 1,919 | 5,789 | <issue_start>username_0: Is there any research/study/survey that looked at what percentage of papers submitted to a conference or journal have been previously rejected in the same or another venue?
I am mostly interested in the computer science field (machine learning) and English-speaking venues, but I am curious about other fields and languages as well.<issue_comment>username_1: I haven't found anything in computer science, but this has been well-studied in other fields.
For example, one study [1] surveyed authors from 923 scientific journals from the biological sciences in 2006-2008 and found that
>
> about 75% of published articles were submitted first to the journal that would publish them
>
>
>
(implying that 25% of published articles were rejected by another venue before finding their ultimate home).
A more common approach found in the literature is to follow up on the fate of rejected manuscripts from a particular journal (as opposed to the original target venue of published manuscripts).
For example, a study of manuscripts rejected by the British Journal of Surgery [2] found:
>
> From the 926 manuscripts rejected by BJS, 609 (65.8 per cent) were published in 198 different journals with a mean(s.d.) time lapse of 13.8(6.5) months. Some 165 manuscripts (27.1 per cent) were published in general surgical journals, 250 (41.1 per cent) in subspecialty surgical journals and 194 (31.9 per cent) in non-surgical journals. The mean(s.d.) impact factor of the journals was 2.0(1.1). Only 14 manuscripts (2.3 per cent) were published in journals with a higher impact factor than that of BJS.
>
>
>
This trend is not especially new. Studies from decades ago also show large numbers of rejected papers being accepted somewhere, eventually. For example:
A study of 350 manuscripts rejected by the Annals of Internal Medicine, a general medical journal, during 1993 and 1994 [3] found:
>
> Of 350 rejected manuscripts, 240 (69%, 95% confidence interval [CI]: 64% to 73%) were eventually published after a mean of 552 days (95% CI: 479 to 544 days, range 121 to 1,792 days).
>
>
>
A study of papers submitted to the American Journal of Roentgenology in 1986 [4] found:
>
> At least 82% of the major papers and 70% of the case reports that are submitted to AJR are eventually published, either in AJR or elsewhere
>
>
>
An interesting study I came across measured the reverse phenomenon: published articles that are subsequently rejected [5].
>
> As test materials we selected 12 already published research articles by investigators from prestigious and highly productive American psychology departments, one article from each of 12 highly regarded and widely read American psychology journals with high rejection rates (80%) and nonblind refereeing practices.
>
>
> With fictitious names and institutions substituted for the original ones (e.g., Tri-Valley Center for Human Potential), the altered manuscripts were formally resubmitted to the journals that had originally refereed and published them 18 to 32 months earlier. Of the sample of 38 editors and reviewers, only three (8%) detected the resubmissions. This result allowed nine of the 12 articles to continue through the review process to receive an actual evaluation: eight of the nine were rejected.
>
>
>
---
[1] <NAME>., <NAME>, <NAME>, <NAME>, <NAME>, and <NAME>. "Flows of research manuscripts among scientific journals reveal hidden submission patterns." Science 338, no. 6110 (2012): 1065-1069. DOI: [10.1126/science.1227833](https://dx.doi.org/10.1126/science.1227833)
[2] Wijnhoven, <NAME>., and <NAME>. "Fate of manuscripts declined by the British Journal of Surgery." British Journal of Surgery 97, no. 3 (2010): 450-454. DOI: [10.1002/bjs.6880](https://dx.doi.org/10.1002/bjs.6880)
[3] <NAME>, <NAME>, and <NAME>. "The fate of manuscripts rejected by a general medical journal." The American Journal of Medicine 109, no. 2 (2000): 131-135. DOI: [10.1016/S0002-9343(00)00450-2](http://dx.doi.org/10.1016/S0002-9343(00)00450-2)
[4] Chew, <NAME>. "Fate of manuscripts rejected for publication in the AJR." AJR. American Journal of Roentgenology 156, no. 3 (1991): 627-632. DOI: [10.2214/ajr.156.3.1899764](http://dx.doi.org/10.2214/ajr.156.3.1899764)
[5] Peters, <NAME>., and <NAME>. "Peer-review practices of psychological journals: The fate of published articles, submitted again." Behavioral and Brain Sciences 5, no. 02 (1982): 187-195. DOI: [10.1017/S0140525X00011183](http://dx.doi.org/10.1017/S0140525X00011183)
Upvotes: 3 <issue_comment>username_2: For atmospheric science, I surveyed a number of journals. The results are published here:
<NAME>, 2010: Rejection Rates for Journals Publishing in the Atmospheric Sciences. Bull. Amer. Meteor. Soc., 91, 231–243.
doi: <http://dx.doi.org/10.1175/2009BAMS2908.1>
Upvotes: 2 <issue_comment>username_3: I suspect the numbers for journals in CS vary widely, as do the numbers for specific conferences. But I will say that most CS systems conferences, as an example, have acceptance rates in the 15-25% range. As someone commented above, most of these papers don't get submitted once and die after rejection. Some get submitted multiple times, get rejected each time, and eventually the authors give up. But I imagine a pretty high fraction get published in the same or a different conference a year or two later. I know of some cases, including a paper of mine, where something rejected one time got revised and selected as best paper in a later instance of the same conference.
So I guess it's a question of why this was asked [some time ago]. If it's to have assurances that one shouldn't give up hope after a rejection, rest assured!
Upvotes: 0 |
2014/11/09 | 2,618 | 10,805 | <issue_start>username_0: I am taking a course with a major final project. I was looking for a topic when my co-advisor noticed this. He suggested that I do a project related to my master's thesis, and I agreed since I didn't know what I was going to do. During a brief meeting with the course instructor, he suggested that a colleague of mine and I split a certain design in half. The course instructor asked if we were going to team up and I said no (I am not a team player). I submitted a proposal saying what I would do and that, as a stretch goal, I would complete the whole design on my own.
I ended up producing a very smart design in one week and my co-advisor was very impressed. He said that one part of my design could be patented. The whole design could make it to top conferences. Now he is saying that both of us, my colleague and I, should work together to finish the whole design. I still have 3 weeks to go and I have almost finished his part too. I feel very mad now and I don't want to give him any credit that he didn't deserve (his part is much simpler than mine).
He is asking me to show him my design and I am not comfortable with that, since he, on two occasions, performed "unethical actions" during these projects. He could easily claim that this is his work as well.
On the other hand, I don't want to start a fight with my co-advisor, who suggested a paper that helped me in my design. He also knows my design and likes my colleague much more than me. He could easily tell him to do this and that. (I feel like I made a mistake showing my co-advisor my design.)
What would be a smart move in this situation? What words should I use to explain this to my advisor? I just want to protect myself and get the credit for the work I did.<issue_comment>username_1: Communication is needed: you should communicate your feelings to your co-advisor rather than second-guessing things and increasing your frustration. You may lack some aspects of the picture that your advisors see. So I would suggest the following:
1. Prepare everything as close to the final product (manuscript) as you can at this point. Add your name as sole author (unless anyone else deserves co-authorship at this stage).
2. Present the work to your co-advisor for discussion, point out what "little" is left to do, and discuss what remains to be done. This can then lead to an understanding of what others could contribute that is not already accomplished in your work.
3. Once the situation is a little clearer, and depending on the outcome, state how you would like to see the distribution of co-authorship and take any discussion that follows.
Hopefully this will put you in a clearer position when considering taking on additional collaborators.
Please also check the tag [authors](/questions/tagged/authors "show questions tagged 'authors'") or search for *contributorship* here on Academia to get input on what should be involved in adding names to a manuscript ([here is a link](https://academia.stackexchange.com/a/23822/4394) to one example).
Upvotes: 3 <issue_comment>username_2: Please do not take this personally, because I do not know you or your abilities. I am only guessing from what you say, but consider that a complete Internet stranger like me gets a negative vibe from your words. And this is not a good thing. In detail:
>
> I still have 3 weeks to go and I have almost finished his part too.
>
>
>
You never do that. Would you like the other party to do exactly this to your part of the design? And perhaps even do it better than you? No, you would not. You can offer suggestions / improvements on his design **after he finishes**, and only in a way that does not offend / belittle him. You have a task - he has a task. Do your part and stick to it.
>
> I feel very mad now and I don't want to give him any credit that he
> didn't deserve.
>
>
>
You should be mad at yourself, because you are a lousy team player. Programming skills and intelligence can only get you up to a point. If you do not play well with others, you will usually be the first to get the boot. And the sad part is that, in that case, no one will miss you. Consider this at your next cooperation.
>
> He is asking me to show him my design
>
>
>
How does your colleague know that you finished his part of the design too? It is obvious that not only did you do something wrong (doing his part of the design), but you probably bragged about it. That is totally immature, childish and unprofessional.
>
> He likes my colleague much more than me
>
>
>
I wonder why. And why do you care who he likes most?
>
> I feel like I made a mistake showing him, my co-advisor, my design.
>
>
>
Of course you made a mistake. You wanted to brag. You could actually have used the time you spent on your colleague's design to improve your own design. Or do you simply believe that your design does not need any improvement? If you believe that, you
are seriously mistaken, because everything can be improved. So, focus on improving your design and checking for errors that have escaped your and your advisor's eyes.
I believe from your previous posts that you must be an undergraduate (I may be mistaken). If you are and you want to go to grad school, please humble down. Some of the things you are suggesting sound pretty paranoid: you thought of hiding part of the work from your co-advisor so that he would not share it with his "beloved" student, whom you seem to antagonize; your design (which you finished in a week) will be patented and could make it to top conferences.
You do understand that all of this sounds a bit strange.
Also, grow up. All of you (you, your two advisors and your colleague) have a common task / goal. You all are going to be co-authors if the project comes out. Understand that you will not get more credit by taking over other people's work; you will just burn some bridges. So, work towards the project's goals and not toward your personal goals.
On your next project, make clear to everyone that you do not want to cooperate, because you want to do everything on your own. However, this is not a good long-term policy; sooner or later, cooperation is key to good research output.
Upvotes: 3 <issue_comment>username_3: While most research projects involve collaboration, I will assume that this was a student project where there was a legitimate option to either work alone or with a partner. You chose to work alone. Honestly, if someone had a history of fabricating results, I would also not risk publishing or presenting with them.
I see 5 issues to resolve/consider:
1. One issue seems to be where your academic project (where you had the option of working alone) ends and where the publication, presentation or patent begins. The co-supervisor may think that you have already fulfilled the personal academic project with your design so far and that to bring it to the point of publication/presentation, additional steps need to be taken. To fulfill these additional steps, they may think it is best to bring in the other student.
You need to honestly ask yourself if the other student may have something to add (a different skill set, etc.). The co-supervisor may like the other student and just want to do him a favor, but he might just be thinking of what is best for the project.
2. Another potential issue is what the other student has been told. Perhaps the other student worked on his portion of the design for his individual project and the idea is now to combine them. Even if this is not the case, the fact that you chose to complete the other student's contribution, even after being told that was his job, does not help your case. For future reference, you should have addressed this issue immediately - making your case for completing the whole project alone before moving forward. Right now, it could look like you knowingly completed the other student's work to force him out of the project. Although, based on what you say, I can see why you would do this, I am not sure it was the most mature approach.
3. The patent issue. Depending on your location (or maybe it is an international law), you may have a legitimate claim to the design. I am not an expert in intellectual property law, so please consult an intellectual property lawyer before proceeding. But, legally, you already at least co-own the intellectual property rights to your design. See [this legal blog post](http://legalblogwatch.typepad.com/legal_blog_watch/2011/01/who-owns-student-created-intellectual-property.html) for an example of student intellectual property. The law overrules academic norms,seniority etc. Although it may be unusual for a student to come up with a successful patentable design, no one here knows whether this is what you have. You may be in the small percentage of students who do, in fact have something of value. Please realize that you need to think this through, read through the legal literature and consult a professional if possible. You might take a risk (reputation wise) asserting a legal claim, but if you are really sure it will pay off, maybe it is worth it. But please be sensible and humble here. In the mean time, document, photograph, time-stamp, send e-mails to yourself...these steps won't hurt and could help if a disagreement arises later.
4. The role of the co-advisor. If ultimately presented/published, this person may be assuming they are the senior author/PI. The PI would be an author and thus, would have a legitimate say in your work. You say you want him to 'back off', but if they consider themselves the PI on a collaborative project, they are not overstepping their bounds.
5. The unethical behavior of the other student is another issue and one you need to consider. If you know this is 100% true and you exhaust your other options, you might need to bring this up. I'll call the other student John. Say "I am in an uncomfortable situation that I think I need to bring to your attention. John fabricated the results of his mid-term project. I know this because [present your evidence]. I am concerned about the long-term implications of publishing work with an unethical collaborator. As you know, this could impact all of us. So, as uncomfortable as this is, I need to bring it to your attention and ask for your help." But, check your schools rules-you may have been required to report this as soon as you knew, so be careful. Depending on the rules and culture, this may be a last resort.
But, the most important thing you can do is try to get a clear idea of expectations and roles. Perhaps you could ask the course instructor to help you. He gave you the option of working along and might be able to sit down with all of you and sort things out. Calmly ask for help, stay away from accusations and speculation and see what he says.
Upvotes: 2 |
2014/11/09 | 1,099 | 4,661 | <issue_start>username_0: What do you do when you have a conjecture, and you run experiments that confirm your conjecture, but you are unable to provide a formal proof (perhaps because it's too complicated)? Do you name them as conjectures or observations or... what?
This is in the context of a CS theory paper.<issue_comment>username_1: In mathematics and TCS (which is really a branch of mathematics), if you don't have a proof, you don't have a theorem. (You write "experiments", which I will assume means "computer calculations". Please let me know if this is not the case.) Doing some computer calculations can be interesting and even sometimes publishable, but it does not constitute any kind of proof, formal or otherwise. (**Added**: Well, unless it does, of course. You can prove a theorem by *reducing* it to a finite calculation and doing that calculation by hand or by computer or some of both. You can't prove a theorem which pertains to infinitely many cases by doing finitely many of them and claiming "and so on".) Also, although the word "confirm" is often used in this way in empirical science, in mathematics to "confirm a conjecture" means to prove it.
I see two possible questions here:
1. How do I write up computational evidence for a result that I cannot prove in a paper?
2. Can I publish a paper in which I do not prove my conjecture but only have computational evidence towards it?
The first question is more straightforward. You state the conjecture -- i.e., the statement that you think is a theorem but can't yet prove. Some discussion of the provenance of the conjecture is probably a good idea but is not strictly necessary. However, if you got the conjecture from somewhere else you must indicate that. Then you document the calculations you made. Finally, you probably want to make some remarks about why the calculations make you confident in your conjecture (if that is the case). Here sometimes informal reasoning can be helpful: e.g. if your conjecture is that for two sequences of integers a\_n and b\_n that a\_n and b\_n are always congruent modulo 691, then if you check this for the first 100,000 terms then in some naive sense the probability that this happened by accident is (1/691)^{100,000}, which is vanishingly small.
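The congruence check described above is easy to script. The sketch below is illustrative only: the sequences `a(n)` and `b(n)` are invented stand-ins (chosen so that they agree modulo 691 by construction), and you would substitute the actual sequences from your own conjecture.

```python
# Sketch of gathering computational evidence for a conjectured congruence
# a(n) ≡ b(n) (mod 691). The sequences below are hypothetical placeholders.

MOD = 691

def a(n):
    return 3 ** n + 5 * n            # stand-in for your first sequence

def b(n):
    return 3 ** n + 5 * n + MOD * n  # stand-in: agrees with a(n) mod 691

def count_agreements(n_terms, mod=MOD):
    """Number of initial terms n = 1..n_terms with a(n) ≡ b(n) (mod m)."""
    return sum((a(n) - b(n)) % mod == 0 for n in range(1, n_terms + 1))

checked = 1000
print(count_agreements(checked))  # 1000: every checked term is consistent
# Agreement on finitely many terms is evidence for the conjecture, not a proof.
```

Documenting exactly such a script and its output in the paper is what "document the calculations you made" amounts to in practice.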
The second question is much more complicated. It can be hard to publish papers in which you do not prove a theorem but "only" give computer evidence...but not as hard as it used to be. Mathematics is slowly becoming more enlightened about the merits of computer calculations. I would say though that you need to understand the field much better to be able to predict whether a paper primarily containing computations would be publishable than to publish a more "theoretical" paper: many, many referees and journals will say "no theorem, no proof, no paper", so you should expect to work much harder to sell your work.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The important thing is to be honest and clear. In any proof-oriented subject (including theoretical CS), you should carefully distinguish theorems you have proved from conjectures you believe but have not proved. It's reasonable to give evidence in favor of your conjectures (such as your experiments) or to discuss possible proof techniques that might work, as long as you are clear about what you have or haven't done.
What makes this awkward is that sometimes beginners are tempted to be a little unclear in dishonest ways. Suppose there's something you are pretty sure you could do if you had more time, and it's embarrassing to admit that you haven't yet been able to work out the details. It can be tempting to write something vague like "These techniques apply to case X as well" and rationalize it by saying it's not technically a lie, since you never actually said you applied them to complete the proof. Nevertheless, it's unethical since it misleads readers into thinking you've done more than you have.
Even if you don't feel this temptation yourself, it's important to avoid even the appearance of impropriety, so it's best to be extra careful about anything near the borderline of what you have or haven't proved.
>
> Do you name them as conjectures or observations or what?
>
>
>
Conjecture sounds like the appropriate name here. Observation might make sense if this terminology is commonly used in your subfield, but it sounds potentially problematic to me. It sounds a little too much like something you could prove but are omitting the details for, rather than something you have been unable to prove (so if you use that terminology, you should be careful to make this clear).
Upvotes: 3 |
2014/11/10 | 804 | 3,520 | <issue_start>username_0: I am applying for a university scholarship program, and as part of the application I have to write some essays but I don't understand what they want exactly in the first essay. The question is written like this:
>
> **Household information and Statement of Need**
>
>
> Describe the challenges you have faced in your path to education until
> this point. Please include the following:
>
>
> * Who has supported your academic achievements until now? (financially and/or other)
> * Which challenges did you overcome during your secondary education?
>
>
>
So my question is: when they ask "Which challenges did you overcome during your secondary education?", are they only asking about financial issues/challenges, or also about other kinds of challenges?
Is it OK if I talk about family problems (non-financial) that have negatively affected some moments of my education?
I would also like to ask if anyone could give me a link to an article or some other online resource with tips for writing a good essay; I have already googled but I can't find anything very useful.<issue_comment>username_1: It's hard to know for certain without knowing more about the particular scholarship that you are applying for, but many scholarships are designed to very specifically target particularly disadvantaged students and try to turn them into success stories.
If you, for example, faced discrimination and prejudice, or other institutional barriers, that would likely be an excellent thing to talk about, as this is the sort of problem that many scholarships are designed to help mitigate. If it is a more personal thing, e.g., you had an older brother who you just didn't get along with, that may not be as compelling a narrative.
Upvotes: 2 <issue_comment>username_2: They're leaving the question open-ended intentionally. They will use your answer to judge whether you're a good fit for the school's culture.
Generally, when you see questions like this in admissions essays or in interviews, the asker wants you to give an honest and complete answer. You can discuss any kind of challenge here, be it personal, financial, or strictly academic, but only bring a challenge up if the way you responded to it either taught you something or reflects a positive character trait.
The goal for your essay should be to show the reader why you deserve the scholarship or why you are most likely to use that money in a better way than another recipient might. Because of this, keep in mind the *kind* of scholarship it is. If it's a minority scholarship, for example, remember that the scholarship is, at its core, designed to help underprivileged kids have the opportunity to go to school, and those kinds of scholarships are *especially* relevant to bright kids with decent test scores who come from places with limited academic resources or families with low income. That kind of situation might give you all sorts of stuff to talk about, from disrespectful kids and xenophobic teachers to insufficient access to technology and school supplies. If you can show that you faced some or all of these kinds of challenges but still got a 29 on your ACT (because you worked hard and applied yourself, naturally), the Minority Office might be more impressed with your application. Review the mission statement of the department that will be receiving the essay before writing it, and always tailor essays to other departments before you send them out.
Upvotes: 3 |
2014/11/10 | 1,270 | 4,123 | <issue_start>username_0: *Background: I'm writing a master's thesis with APA citations.*
In one paragraph, I cite two unrelated pieces of information that happen to come from different chapters of the same textbook. It's obvious from the chapter titles where in the book the second piece of information is located, but it's not obvious where the first piece is located. For the reader's sake, I'd like to be able to write something like:
>
> Here's what I'm doing. Here's an interesting fact **(Trout, chapter 2, 1946)** and this is what it implies in the context of my research. I then fit this interesting model, described by e.g. **Trout, chapter 8 (1946)**.
>
>
> References:
>
> <NAME>. (1946). *Ice-9 and its Applications*. Ilium, NY: Slaughterhouse Press.
>
>
>
I've never seen this done before and I couldn't find anything like it in the Purdue OWL APA style guide. Another idea would be to cite each chapter separately, as in:
>
> Here's what I'm doing. Here's an interesting fact **(Trout, 1946a)** and this is what it implies in the context of my research. I then fit this interesting model, described by e.g. **Trout (1946b)**.
>
>
> References:
>
> <NAME>. (1946a). Why ice is nice. In *Ice-9 and its Applications*. Ilium, NY: Slaughterhouse Press.
>
> <NAME>. (1946b). Containment methods. In *Ice-9 and its Applications*. Ilium, NY: Slaughterhouse Press.
>
>
>
How should I cite this? Is there an established convention for this? Am I worrying too much?<issue_comment>username_1: If the chapters are part of a unified work (e.g., a textbook or monograph), then it's appropriate to use one citation, and to say the chapter in the text:
>
> Here's what I'm doing. Chapter 2 of (Trout, 1946) presents an interesting fact. I
> then fit the interesting model, described in Chapter 8 of (Trout, 1946).
>
>
> References: Trout, Kilgore. (1946). Ice-9 and its Applications. Ilium,
> NY: Slaughterhouse Press.
>
>
>
I'm not quite sure of where APA puts the parentheses; my point is about handling chapters as prose.
On the other hand, if the chapters are separate pieces of a collection (e.g., contributed texts in a "recent results in..." book), then each should have an independent entry in the bibliography.
Upvotes: 2 <issue_comment>username_2: I would suggest the following:
>
> Here's what I'm doing. Here's an interesting fact (Trout, 1946, Ch. 2) and this is what it implies in the context of my research. I then fit this interesting model, described by e.g. Trout (1946, Ch. 8).
>
>
> References:
>
>
> <NAME>. (1946). Ice-9 and its Applications. Ilium, NY: Slaughterhouse Press.
>
>
>
However, you should consider whether citing pages would point the reader more directly to the fact than an entire chapter. Only you can judge this, but it is rare that a *fact* needs an entire chapter to be stated. If you cite a theory or some larger concept, the chapter may be an appropriate entity to cite. If pages are better suited, your citations would look like, for example:
>
> Here's what I'm doing. Here's an interesting fact (Trout, 1946, p 56) and this is what it implies in the context of my research. I then fit this interesting model, described by e.g. Trout (1946, Ch. 8).
>
>
>
Upvotes: 2 [selected_answer]<issue_comment>username_3: Chapters and page numbers are not included in in-text citations or the reference list for monographs in APA style. The [APA style guide](http://www.apastyle.org/) is comprehensive and definitive and my understanding is that deviations are not allowed, even if they are helpful.
Both of your examples deviate from APA style. The APA-compliant way is:
>
> Here's what I'm doing. Here's an interesting fact (Trout, 1946) and this is what it implies in the context of my research. I then fit this interesting model, described by e.g. Trout (1946).
>
>
> References:
>
>
> <NAME>. (1946). *Ice-9 and its Applications*. Ilium, NY: Slaughterhouse Press.
>
>
>
You need to decide if you want to give the reader all useful information or stick strictly to APA style.
Upvotes: 1 |
2014/11/10 | 425 | 1,766 | <issue_start>username_0: Is there such a thing as an interdisciplinary PhD, where the student chooses the fields, courses, etc.? If so, which U.S. universities offer such a thing?<issue_comment>username_1: In general, every Ph.D. is officially within some formal department, or other program, and a Ph.D. student will need to do at least whatever courses that program requires. Some such programs, however, are extremely interdisciplinary by their nature: some nice examples are MIT's [Engineering Systems Division](http://esd.mit.edu/) and [Media Lab](http://media.mit.edu/), where participants have the opportunity to take a wide variety of different courses connected to different disciplines. Every program has some (often relatively loose) expectations about what a student will do, however, so you're unlikely to find anything where you just get to pick whatever you feel like.
Beyond a certain point, however, a Ph.D. is not about coursework. Once you start to focus on your research, then you can do anything that you and your advisor agree is appropriate...
Upvotes: 2 <issue_comment>username_2: For a while I was a graduate student in the math department of Portland State University. At Portland State, the Ph.D. program in math *requires* students to have an "allied area" (field of study other than mathematics). The student's thesis must be related to this allied area, the student must take about 25% percent of their courses in the allied area, and the student must pass a qualifying exam in the allied area. Thus the whole program is designed to be interdisciplinary from the get-go.
(Side note: The actual name of degree is "Ph.D. in Mathematical Sciences", and it definitely differs from the traditional math Ph.D.)
Upvotes: 4 [selected_answer] |
2014/11/11 | 435 | 1,839 | <issue_start>username_0: In my statement of purpose, I am using certain arguments by scholars of my field (IR). Will it be advisable to use referencing, or might it be to my disadvantage as an unnecessary attempt to boast on my knowledge in that field. |
2014/11/11 | 771 | 3,413 | <issue_start>username_0: I am planning to apply for tenure track academic positions, and I already got 3 letters from people who I have worked with during my PhD, including my advisor.
Do you think I should seek out letters from my former supervisor (master's) and a few people I have published papers with during my master's program? Does it add value to the application?
Also, I have worked with a few fellow students who are now assistant professors in other institutions. Does it make sense to ask for letters from them?<issue_comment>username_1: Start by looking at the type of academic position. What skills does the position demand? Who can best speak to those areas of strength? Do you know anyone outside of your academic department who will write a positive letter for you? For some additional guidance, consider this document: <http://careers.washington.edu/sites/default/files/all/editors/docs/gradstudents/Academic_Jobs_-_Letters_of_Recommendation.pdf>
Upvotes: 1 <issue_comment>username_2: Short answer: Extra letters do not improve an application, but could undermine it.
Elaboration: One piece of advice often given in career center presentations or online articles about post-doc/TT applications is that the application packet should only contain the documents (and the number of documents) requested. If the announcement asks for 3 letters of reference, it means the hiring committee expects 3 letters, not 2 or 4. This is a simple yet often overlooked fundamental criterion of a successful application.
Assuming you are planning to supply only the requested number of letters and the issue is **which** letters to include, I recommend sticking with the traditional approach of letters from the individuals you have worked with in a subordinate capacity during your PhD studies. This typically includes your dissertation advisor/committee chair, perhaps another committee member, or (if different) a PI on a grant you worked on, whether in or outside of your department (e.g. an affiliated research center).
I would advise against letters from very junior faculty at other institutions (e.g. your recent peers) as they carry relatively little clout and the hiring committee might get the wrong idea if they suspect your choice of using them as reference may have been forced, to some extent, by circumstances that prevent letters from more reputable/senior colleagues (in other words they may come across as less convincing and perhaps even suspicion-provoking reference choices - and you don't need that).
Generally, I would advise including letters from the individuals (faculty) you have worked with most recently. The dissertation advisor is an unavoidable choice and a must. Beyond that, if you worked with other faculty who were PIs or partners you have collaborated/co-authored with, choose the individuals you have worked with on the most recent projects/publications, consistent with the chronology of your academic appointments/experience in the CV.
It is also a well known and accepted (if not publicly advertised) practice to pre-write letters of recommendation to save your references' time. Whether you have done this already or not, good to keep in mind. Just ask your reference in a matter-of-fact way if they prefer to author the letter or could use a summary draft (or at the very least, a current copy of your CV).
Good luck with your apps! Let us know how it goes!
Upvotes: 3 |
2014/11/11 | 1,910 | 7,652 | <issue_start>username_0: Diploma mills sell degrees at any level, ranging from bachelor to PhD. The way I see it, the sole purpose of getting milled diplomas is to deceive others. As such, I expect the consequences of getting caught to be sufficiently severe.
I wonder if there are known cases of people getting prosecuted for using such fake degrees to land a certain job. I know there are plenty of cases where said people got fired, for instance [here](http://chronicle.com/article/Psst-Wanna-Buy-a-PhD-/24239).
I am mainly interested in the consequences of obtaining fake PhDs and subsequently using the supposedly obtained titles. In Belgium and the Netherlands, for instance, the formal title of 'Doctor' is legally protected, so unjustified use *could* lead to prosecution. I assume this is also the case in some other countries (examples are most welcome), since we have a lot of questions pertaining '*can I call myself X in country Y after obtaining Z*'.<issue_comment>username_1: See here for one case:
<http://www.employeescreen.com/iqblog/fake-degree-leads-to-arrest/>
And here's the official announcement:
<http://www.nyc.gov/html/doi/downloads/pdf/pr46feraca05_26_2010.pdf>
Apparently, in the US, one can be charged with "criminal possession of a forged instrument" and "offering a false instrument for filing".
However, this case concerns a fake degree from a real institution. As noted below, the Diploma Mill case is much more complex. See here for a discussion of the legal issues:
<https://journals.law.stanford.edu/sites/default/files/stanford-law-policy-review/print/2010/01/gollin_21_stan._l._poly_rev._1.pdf>
Upvotes: 3 <issue_comment>username_2: You should probably just read the [wikipedia article on Diploma Mills in the US](https://en.wikipedia.org/wiki/Diploma_mills_in_the_United_States). Here's my short summary.
There are a few legally protected titles in some states. For instance, you can't call yourself a Doctor, Lawyer, or Professional Engineer in Michigan without having passed the relevant licensing tests and have obtained a degree from an accredited educational program.
This essentially forms a chain of trust. For a profession engineer, the state requires you obtain an NCEES license, they perform testing and also require an educational degree from an accredited institution, and they only trust a handful of accrediting agencies.
One [accreditation](http://ope.ed.gov/accreditation/) list is maintained by the US Department of Education.
For instance, the University of Michigan is accredited by an organization that the US Department of Education trusts - the North Central Association of Colleges and Schools, The Higher Learning Commission.
Thus there is a chain of trust.
Diploma mills come in two types, accredited and unaccredited. The accredited diploma mills get their accreditation from fake or otherwise invalid accreditation agencies.
The unaccredited diploma mills simply call themselves schools and claim authority to hand out degrees.
There are no federal laws that would unambiguously prohibit diploma mills, and the terms "university", "college", etc are not protected so anyone can use them for any purpose.
Some states have fairly tough laws that prevent diploma mills from claiming that state as their home, requiring accreditation from an institution recognized by the US Department of Education, for instance, before being able to award educational degrees.
This is not universal, though, so you end up with diploma mills setting up in states that do not have such protections. This helped initially, but then the internet became very popular, and diploma mills started extending their reach more aggressively outside their states.
In states where such mills are illegal, sometimes the degrees and use of them to promote yourself is also illegal. However it appears that [these laws may be unconstitutional](https://en.wikipedia.org/wiki/Diploma_mills_in_the_United_States#cite_note-13).
### Conclusion
* If you award fake diplomas, you can be prosecuted in some states.
* If you promote yourself using a fake diploma you are unlikely to be prosecuted, but there are laws under which you could be prosecuted.
* If you use legally protected terms requiring license in your state, such as doctor, lawyer, professional engineer, etc, you may be prosecuted under state laws.
Upvotes: 3 <issue_comment>username_3: First off- I'm not a lawyer.
My understanding is that in the US you generally cannot be prosecuted for claiming to have a PhD when you do not, or when that PhD was conferred by a diploma mill. Both of these actions are protected by the first amendment. However, you can be prosecuted when you *lie for the purpose of personal gain*.
Federal law and all 50 states have some notion of the crime of *fraud*. Fraud is generally defined to be *deception for the purpose of personal gain*. When you can prove deception you can build a criminal case, but where you can't prove a deception occurred it is much harder. This is why it's easy to find cases where someone fabricated credentials and was caught (e.g. claimed that Harvard gave them a degree when they in fact did not, and a simple call to Harvard verifies this). However, if you have a PhD from a diploma mill you can honestly say that you have a PhD, it just happens to be that the degree is worthless.
This is complicated by the fact that some people apparently believe (or can plausibly claim) that their diploma mill degrees are somewhat legitimate. There are a host of phoney rationales that these organizations use to mitigate peoples' sense of moral hazard. For example, "Your accumulated life experiences equate to a substantial amount of degree credit." It sounds reasonable to some people- especially since it's something that one would want to believe. Thus, someone may honestly believe that they have a Ph.D. and are an expert through the benefit of accumulated life experience.
I would compare this situation to the *Stolen Valor* laws in the US that attempt to punish people who falsely claim to have served in the military. The original law punished the act of falsely claiming you had military service, but was [struck down by the supreme court](https://en.wikipedia.org/wiki/United_States_v._Alvarez#Supreme_Court.27s_decision) on the basis that the first amendment protects speech even if it is a lie. (Some specific types of speech are excluded, such as lying under oath.) As a result, a revised version of the law was created that specifically punishes lying about military service *for the purpose of personal gain*.
Note that some states have passed additional laws specifically concerning or criminalizing diploma mills, or criminalizing the use of diploma mill degrees in certain contexts such as job applications. However, the majority of states have not.
What this all adds up to is that employers have to be diligent about tracking down and verifying people's credentials and asking the right questions. Consider the difference in specificity and potential for deception between the statements "I have a PhD." and "I have six years of postgraduate study in computer systems engineering that culminated in a dissertation." There are basic questions that could reveal a diploma mill applicant even if the interviewer didn't have an expert background in the topic. The questions "How long did you study for your PhD" or "How long were you a full-time student at your graduate university" or "Describe the degree requirements at your graduate institution" have a set of expected responses that can be followed-up upon if something sounds out of the ordinary.
Upvotes: 2 |
2014/11/11 | 761 | 3,320 | <issue_start>username_0: I have a tendency to overcategorize things when writing a paper. Currently I’m working on my thesis and I’m concerned with my level of subchaptering. Right now I’m already down to x.x.x.x and I fear going down a level deeper would look bad. I think it adds to the overall clarity of the paper to categorize everything and would allow for more efficient lookup later on, but it’s not clear to me to what degree this should be done. Are there any general best practices? Are there any rules of thumb you use?
My faculty does not have any clear policy regarding this.<issue_comment>username_1: Many, if not most, journals specify a maximum of three levels. In a book where the chapter is the top level, four may be ok considering each chapter can be as extensive as a paper. The problem of having too many levels is that the headings disrupt the reading. A good sign of this is when you end up having one heading per paragraph. If that is the case, you can probably remove the lowest level of headings and try to make the resulting segments of text flow by inserting bridges that makes paragraphs into a coherent text.
In my experience having read numerous student reports, theses and articles, three is a good goal. It is rare that a fourth level adds much in terms of structure apart from perhaps helping the writer. In fact, I often recommend students to keep a more detailed list of headings to enable them to see the structure of what they are writing but under the pretext that only a maximum of three should remain in their final version. I stand by that recommendation.
Upvotes: 6 [selected_answer]<issue_comment>username_2: I find that a good heuristic for structural depth is visual and conceptual navigation. If you think of a paper as a collection of (reasonable length) paragraphs, then for ease of navigation its structure should generally be a balanced tree with roughly 2-6 subunits at each level. Bigger than that, and it starts being hard to navigate, smaller and it starts feeling unorganized or pointlessly subdivided.
It is also important for each level of the structure of a paper to be a relatively even partition (possibly excepting the introduction and conclusion, which may be much smaller). If you find yourself with some sections much bigger or much smaller than others, then you may want to rethink your structure (e.g. should "Results" and "Discussion" be separate, or combined into "Results & Discussion").
Put the balancing and branch restriction heuristics together, and you've got a natural control on depth. One exception: some journals require a particular set of section headers, which may force the top layer of your tree to be unbalanced; you can still apply the heuristics for subsections and beyond, however.
Upvotes: 3 <issue_comment>username_3: If you think that a deep structure adds to clarity, then it is ok to use it. I think, a thesis (PhD) should be fine with x.x.x.x.x. If your supervisor finds otherwise, he should tell you about this - or ask him.
A **good supervisor** will not just write: *max structure is x.x.x.x*. Instead, he would give you recommendations on where to change what and why - based on his opinion and maybe his experience (which is only reliable if he has written more than a master thesis a year before you).
Upvotes: 0 |
2014/11/11 | 5,835 | 24,783 | <issue_start>username_0: I have been studying hard for the past year and a half (it's my second year of Computer Science) while not getting remotely close to the desired results. I passed my first year with a 7.2 GPA (5 (failing grade) - 6 - 7 - 8 - 9 - 10 (best)) which is very low given that I want to master in applied maths or an area revolving around that.
I'm doing 6 hours a day Monday to Thursday, 10 hours a day Friday and Saturday, and on Sunday I try to rest a little. I rest every 30-45 minutes for 5-15 minutes. Now, I'm isolating myself, sacrificing my social life, among other things. I'm feeling angry at myself every time I get the exam results or when a deadline passes and I haven't finished the lab/project.
I even started eating healthier because I thought unhealthy food was the problem.
What can a student in this situation do to earn better grades?
P.S: I do like what I study, no matter how angry I get, I know I will wake up the next day and head to the library/school.<issue_comment>username_1: Don't panic. The fact that you're still enthusiastic about your studies suggests to me that you can likely fix the problem, whatever it is.
If you're having trouble with just one or two of your subjects, then there's probably a gap in your background knowledge. Try to figure out where that gap is. Right now it may feel like everything in those subjects is difficult, but I suspect if you look hard, you'll find out there's just one or two small gaps in your knowledge, and that's something you can remedy. Perhaps you can take a lighter course load next semester to give you time to focus on the areas you're having difficulty. Once you figure out what the gaps are, you could either take the appropriate course, or teach yourself. This strategy may delay your graduation by a semester, but it could be worth it.
If you're having trouble with most of your subjects, then I suspect you're not studying very efficiently. You may be working hard, but not using your time well. Unfortunately, I don't have specific advice on how to improve your study habits, but there are lots of resources available for this kind of thing, and others here may be able to point you to those resources. Again, it might be worthwhile to take a lighter course load while you practice your new study habits.
Talk to your student advisor about your problem. If you think it's a gap in your foundational knowledge, t may be worthwhile to pick one problem that you found particularly difficult, and ask the instructor (during office hours, not in class) to go through it in gory detail with you to help you figure out where your knowledge gaps are.
Also, there is probably a student centre or something like that at your school, where you can get advice on how to improve your study habits.
Upvotes: 5 <issue_comment>username_2: Go slow. You can't improve your grades overnight, and if you try too hard to do, you'll do more harm than good. Different people learn at different rates, so if you don't get something straight away, don't fret. *Nobody* gets everything the first time. That might sound obvious the first time you hear it, but every time you fail, you need to remind yourself that you *have* to fail in order to succeed. Your first and largest hurdle is learning not to be afraid of failure. Contrary to popular belief, your failures will not haunt you for the rest of your life. :)
Don't spend too much of your focus on assignments and study, either. De-focusing, in fact, is a valuable research and problem-solving technique. Take a break every thirty minutes or hour to grab a snack, talk to a friend, or play a game. Return to the task that had you stumped when your mind has had an opportunity to relax a bit. This will give you a fresh perspective on the problem and will make it far more likely that the thought process that ultimately gets you to a solution will stick with you.
Figure out your learning style. Some people learn best by writing down everything they hear. Others learn great just by listening. Still some other people need to actually build things and hold them in their hands to see how those things work. Start by figuring out if you're a *visual* learner (you learn best by watching others do things), an *auditory* learner (you learn best by listening to instructions and discussion), a *lexical* learner (you learn best by writing things down and taking copious notes), or a *physical* learner (you learn best by doing things yourself). When you've figured this out, remember to employ your learning method as much as possible throughout your education, and make your strongest effort to "study" things in the way that works best for you (regardless of whether that means listening well in class and not studying at all or writing down every gosh darn thing you hear). If you need to write down everything, I highly recommend investing in a small audio recording device you can use in your classes, unless you're a super fast writer.
Speaking of recording classes, learn to use your resources, also. Your teachers or professors are there, for the most part, to help you, so never feel bad about taking advantage of office hours or after-class help sessions. Whenever you are struggling to understand something, ask for help! Sometimes someone else explaining things differently can have a big impact on your ability to understand.
Do everything that you do with the mind that you'll have to do it again someday. When you try something new or encounter new material, don't just learn what to do. Learn *how* to do it. This practice doesn't require OCD studying. Just get in the habit of wondering why things happen. If you ask enough questions, answers will come back to you. They have a natural way of that.
Also, read! Pick up reading as a habit, and do it for fun. Reading regularly will make the kind of reading you have to do for effective studying *much* easier. You'll feel less exhausted after studying, and you'll retain much more of what you do study. Best of all, books are cheap from Amazon or local bargain book stores and make for an outstanding way to kill some time.
After reading for awhile, maybe you can get yourself to start writing, too. Even if you're just writing in journals, the purposeful employment of language implicitly forces you to think about things like syntax, word choice, and tone. Writing is a really great way to engage problem-solving and analytical skills without actually doing any overtly structured problem solving or analysis.
Play games! Sudoku, solitaire, puzzle games, "code" games, video games--anything that gets your brain juices flowing. Games that involve problem-solving and strategy can stimulate parts of your brain that you actively use during studying and test-taking. Besides solitaire and Sudoku, take a gander at Zendo, Mastermind, and chess. If you're into video games, good news--basically every mainstream video game is designed to stimulate your mind (because, incidentally, that feeling causes gamers to play the game more). If you go the video game route, just be careful not to play too much. :)
Lastly, be patient, but don't let opportunities for good discussion pass by. To learn to love learning, you have to experience a kind of learning that is super engaging for you. It comes when it comes, but if you don't put yourself out there, you'll never see it. Be involved in classroom discussions, and when a topic comes up that interests you, share your thoughts on it. Eventually, when you've learned how to make connections between things you would used to have thought unrelated, you might make a comment that starts a totally new perspective on a topic for a whole class, and that's a really cool feeling. You've probably also heard before that the most effective way to learn is by teaching others, right? Well, guess what classroom discussion is all about? Put yourself out there. Discuss!
Obviously you can't do all these things right this very minute, so I refer you at this point back to the first two words in this whole mess of verbiage: go slow. Rushing yourself is the surest way to get nowhere, so make a long term plan describing what you want to accomplish within the next twelve months and daily chip away at it. Just remember above all other things that you can't reinvent your learning style overnight. :)
Upvotes: 3 <issue_comment>username_3: I am in my third year, and I am on course to get a Math with CS minor degree.
When I don't understand something, feel stuck, or get crap grades, I take a step back and ask the following questions:
* Was I careless?
* Do I lack fluency?
* Can I explain the material?
* Am I answering without sufficient proof?
* Do I know my definitions?
* Am I mindlessly practicing?
Each of these questions comes from experience.
**Careless** is easy. Slow down, think through the wording of questions, check your answers for plausibility. Unfortunately, this is rarely the true culprit.
**Fluency** is often key. Don't give in to extremism (concept-only or mechanical-only learning). Fully understand practice problems, and then practice writing out solutions with clarity and succinctness. You cannot gain fluency by mindlessly repeating problems, copying and pasting code, compiling until it finally works, checking answers in the back when you're half done, etc. But you cannot win by learning only concepts! You **must** be fluent writing out solutions. **Teaching others is perfect.**
Which leads to **explaining** nicely. Try explaining problems to yourself in the shower, and their solutions. Try writing out very neat and tidy solutions, diagrams, and other tools for deeper intuition. If you can explain something to someone else, you will use and grow these a **lot**.
As you move forward in your studies, you will be asked not to simply provide answers, but answers that you can **prove** are correct (and in the case of CSC, often demonstrate have certain running times). This means you must know the background material so you can draw on definitions, previous results, and similar proofs.
So **know your definitions**. If you cannot say in one sentence or less what a function is, a set is, a graph, a cyclic graph, a residual graph, or whatever terminology and level you are at, then you will have major problems.
And **mindlessness** will kill you. You cannot memorize definitions flashcard style and expect to succeed. You have to write them down over and over in your attempts to solve problems. You need to be fluent reading and writing the material of your major. Think about how your fluency in your native tongue came about. Understanding others, and *then* making yourself understood. It does not come from standing in the mirror mouthing 5 words of the day over and over again. I hope you see what I mean.
I have no proof for this, but I believe that some of my study sessions are 10x more productive than others. These are not the sessions where everything clicks! Those are the product of many efficient study sessions. No, efficiency comes when I turn on my mind, I slow down, and I work the really hard problems methodically.
Finally, make sure you read *How to Solve It* by **Polya** (or at the very least, read a summary of this work).
Upvotes: 5 <issue_comment>username_4: Many of the people I knew who majored in Computer Science put in a lot more than 44 hours a week on their classwork and projects; how much time do your classmates put in on their studies?
If you're spending 44 hours a week on your studies why do you feel that you are sacrificing your social life? People who work 40 hours a week at a job don't usually complain about not having time for themselves. (If you are spending 44 hours a week studying on top of having a job to support yourself, you may need to think about whether you're capable of a full-time job and a full-time course of study at the same time. Many people are not.)
As others have suggested, forming a study group may help you; you can learn from those who have mastered concepts you are struggling with, and learn even more by teaching what you **think** you know well to those who are just learning. Going to a professor or tutor for extra help might also be useful.
What parts of computer science are the most fun for you? Algorithms? Programming? Hardware? Theory? Would it help to do a personal side project that mostly just uses the fun stuff, to help encourage you to get more practice? (For example, program a little video game to relax.)
Do you have as much maths background as your classmates? There may be concepts you struggle with that they learned in other classes, like Boolean logic.
Finally, you may benefit from going back over your old coursework to see what you missed, but now have the framework to understand better. If you have graded homework or exams from your first classes, can you now easily see what you missed at the time? If not, you may benefit from going back to study those elementary concepts until they are second nature.
Upvotes: 4 <issue_comment>username_5: **Use SQ3R to Focus Your Readings**
To help your memory and focus on material that you learn from reading, try using the [SQ3R](http://en.wikipedia.org/wiki/SQ3R) method. See [this article](http://www.ucc.vt.edu/academic_support_students/online_study_skills_workshops/SQ3R_improving_reading_comprehension/index.html) from Virginia Tech to further learn how to use it. Several other strategies exist to help you to think more carefully about how you think of the material while you study are listed in this [Wikipedia](http://en.wikipedia.org/wiki/Study_skills) article, but be selective and strategic in which ones you use and when.
**Use SRS for Repetitive Practice and Review**
To practice skills that demand repeated practice, such as math, try using [SRS](http://en.wikipedia.org/wiki/Spaced_repetition). This is essentially flash cards, but controlled by algorithms based on memory research. Many SRS tools allow for cards with graphics, audio, and LaTeX, HTML, and CSS. You could add math problems from your textbook to the software and the algorithms in the software will help you to spend more time on the difficult ones, less time on the easy problems. Study with such tools daily, but do not use them in excess. As SRS takes considerable time to setup, as you will likely need to build custom study materials, so use your holiday time to get started. "[The 20 Rules of Formulating Knowledge in Learning](http://www.supermemo.com/articles/20rules.htm)" is essential reading.
**Avoid Mindless Practice**
Many students get into a routine of solving their textbook's problems by just following their teacher's steps or by memorizing information by rote. Think carefully about how you think about what you are learning. If you are studying math, avoid just pushing the numbers around as your teacher showed you and spend time exploring the real meaning of the problem through visualization. Spend time solving your math problems using [concrete, representational, and abstract methods](http://fcit.usf.edu/mathvids/strategies/cra.html) so that you do not merely build the capacity to solve problems on paper, but can visualize what that math actually represents:
* Concrete - use some physical items or the physical space to solve the problem.
* Representational - sketch the physical items or situation on paper to solve the problem.
* Abstract - use the mathematical language your instructor or textbook taught to solve the problem.
**Use Support Services**
Determine what academic support services are available. For example, your school might have a writing center to help you with your writing. Your school might have a tutoring center to help you with your math. Some departments may also have a meeting area where you can meet other students who are working on their homework, where you can join a study group and potentially meet your teachers or their teaching assistants, to get help outside of class. In the US, some schools offer 1-credit courses or free seminars to introduce these available services. Do not attempt to do everything on your own.
**Maintain a Fixed, Sufficient Sleep Routine**
Staying up late to get in a few hours more work done can cost you more hours the next day. Establish a fixed sleeping schedule where you wake up and go to sleep at the same time each day, even on weekends. Make sure you are getting a good amount of rest each night so that you can be very focused the next day. Break this schedule only at strategic times, e.g. to work on an important project, but not before an exam.
Upvotes: 3 <issue_comment>username_6: First things first, you gotta know what's dragging your grade to the nether regions. Are you turning in everything but scoring low, or not turning in everything? I know plenty of fellow students who would do amazingly well if they turned in everything, and I know the other case as well. Are you taking good notes, or not understanding, or some other issue entirely?
I've asked plenty of questions, but these are an important part of knowing how to improve the grade. There are some tricks that can help you improve but you gotta know/tell the problem first.
I'd normally say I'm done here but I found some handy tools recently that may be worth checking out.
Organization: Trello.com is a neat organizer you can use for sorting assignments, tracking due dates, and project ideas. What I did with it involved making a homework "board", which contains stacks of cards. I then made one stack for each class, and then one card per assignment. You can drag the cards with assignment names and info around, as well as rearrange the card stacks. What I do is sort the cards by due date and delete them when the assignment is done.
Studying: Studyblue is a neat Web app that not only does flashcards, but also shares them, searches for similar decks, and allows you to borrow decks from classmates. It also tracks your progress and learns which ones you know, and it helps me to study.
Upvotes: 0 <issue_comment>username_7: Points:
1. Think about what you have learned; do not just learn. More of your time should be spent thinking: about where this kind of knowledge could be used, and how to use it. Learning alone is not enough; thinking will help you understand deeply what you have learned, which gives you a better chance of getting higher scores in examinations.
2. Do not isolate yourself; you need the fresh air of exchanging ideas with others. Try to talk about what you are learning with other students. Other people's experiences will inspire you and help you get high scores.
Upvotes: 2 <issue_comment>username_8: Personal experiences to improve study efficiency:
1. Practice your ability to concentrate for a longer time. Remove sources of distraction (phone, PC, people) and remain vigilant to prevent your mind from drifting. Given enough practice, you will find yourself focusing for more than 3 hours without noticing the flow of time, which is longer than a normal exam session.
2. If you are attending the lecture, make sure you understand the content beforehand, validate and reinforce your understanding during the lecture. Given enough practice, you will start skipping lectures because productivity is higher when you read books on your own.
3. When something doesn't add up, dig deeper instead of just memorizing. Understanding the details helps reinforce the memory and speeds up re-acquaintance. Besides, your later courses depend on the previous ones, and gaps in knowledge will eventually come back to haunt you.
4. A healthier lifestyle helps a lot: regularize your sleeping schedule and get some exercise.
5. All of the above are time-consuming and require persistence; expect to study for >60 hours a week if your goal is 9/10. The good news is, you might find yourself getting smarter, doing things you weren't able to do before, like remembering numbers you didn't even try to memorize, and reading and learning much faster.
Be reminded that, except for extraordinary people, a high GPA comes with costs. Isolation and the sacrifice of social life have their consequences. Knowing when to stop is quite important.
Upvotes: 2 <issue_comment>username_9: You don't say exactly what you're doing in all those hours, so I don't know whether this applies to you, but in my experience a lot of students don't know how to study effectively.
I highly recommend [this article in the New York Times](http://www.nytimes.com/2010/09/07/health/views/07mind.html?hpw&_r=0) for an overview of what recent psychological research has to say on the subject. In a nutshell, one of the major messages is that for studying to be effective, it should be *active*. Time spent re-reading books and notes is time wasted. Instead you should be actually *doing* the things that your field is about — solving problems, writing code, etc.
I sometimes encourage my students to think of it this way. If you're training for a sport, you spend your time doing the things that go on in a game/match. You don't read about how to do those things, or watch videos of someone else doing them. You practice doing them yourself.
Upvotes: 2 <issue_comment>username_10: There is a lot of info missing about your particular situation, so I'll give some different answers based on different assumptions about it.
1. You might not have a natural talent for this. That doesn't need to be a problem in your case, however, since you seem very motivated and work hard. Working hard will make you smarter and more capable in time, but it won't work in the short term. Also, getting top marks isn't crucial if you've got the motivation. E.g., I happened to have talent and got good grades without working as hard as many of my peers. However, I failed to get my Master's due to lack of motivation. I know several fellow students with shitty grades who still managed to get their Master's because they stayed motivated and kept working hard. Seriously, I'd rather have had shitty grades and gotten my Master's degree than have gotten good grades and then dropped out on the final project because I had zero motivation.
2. You might have a very inefficient study technique. Research this. It makes quite a big difference. E.g. reading the same thing over and over again takes long time and is inefficient. Active work like doing practice assignments or trying to teach somebody else what you read is more efficient.
I might mention a story about a fellow student. He used to get stuck on assignments and got frustrated with himself. He was failing a lot of assignments. He finally went to a coach, who got him to stress down. The coach told him not to stress out if he didn't understand something right away; it isn't normal to understand everything immediately. Just take your time. There was obviously a lot more to it, but this student saw a dramatic improvement. He got absolutely top marks. I noticed when discussing problems with him that he was not any smarter than me. But I noticed when reading or preparing for tests that he was way more focused than me. He could really stay in the zone. The coach had taught him how to do that.
Upvotes: 3 <issue_comment>username_11: Lots of good material is already there in the other answers.
There's a lot of focus on how to improve the "act of studying", which of course may be the problem, and can always benefit from attention.
There are some other things to add though, which will be useful to add into your mix:
1) Do hobby activities that relate in some way to your study.
In Comp Sci, this is so easy. You can make web sites, program Raspberry Pi, make little games ... being involved in actually doing stuff that relates to what you are learning can go far towards making it "click".
You always learn better when doing things that *use* the learning, particularly if you are using it in an enjoyable way.
2) Participate in the learning community. Head on over to Stack Overflow and see if you can **answer** questions. That's right, you're learning, and a great way to learn is to answer other people's beginner questions. On Stack Overflow, there are hundreds of very basic questions per day that a student of Comp Sci should be able to answer.
3) Get a learning buddy, someone who is doing well and who would be willing to have you along. Study in their room, in the library, nearby and have coffee and talk about the work.
Finally, I agree that 40 hours per week is nowhere near too much to be spending on your Uni work and related activities (hobby programming, recreational reading about your art, etc.). At University, your learning is your life. Just do it, it's over soon and you can party for the rest of your life. Which is not to say "don't party" - just don't look resentfully at the people working 9-5 and partying the rest of the time.
Upvotes: 3 |
2014/11/11 | 825 | 2,904 | <issue_start>username_0: I am currently at a conference in the US
and I will be reimbursed by my adviser. I was wondering what is a reasonable per diem spending for meals? Of course I could ask my adviser directly, but I feel bad doing so, because I am afraid my adviser will think that I am trying to spend the maximum amount possible. I don't want to blow my adviser's budget, but I also don't feel that I need to save every last penny in terms of my meal spending. I would like to have maybe one nice meal a day (~$20) but otherwise eat cheaper meals (< $10).
What types of spending guidelines would help me not to upset my adviser?<issue_comment>username_1: Ask your advisor. Your university is likely to have a maximum allowable per diem that is based on the city that the conference is in. It varies from university to university, but mine just pays the per diem for that city regardless of how much you actually spend. Many universities do it this way. It saves time and money processing expenses on a meal-by-meal and receipt-by-receipt basis. If your university does this, then it doesn't matter how much you spend on yourself.
Also, given that the per diem is likely capped in the $40-$50/day range anyway, you are unlikely to blow your advisor's travel budget on food even if you hit the max every day.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Here's the guidelines [according to SIAM](http://www.siam.org/meetings/pdf/travel_guidelines.pdf):
>
> There are two options for meal reimbursement. You must select an option and use it for the entire trip. The options are:
>
>
> 1. Full Reimbursement - For full reimbursement, detailed receipts are required
> whether the meals are paid for in cash or credit card. Detailed
> receipts for meals showing the food and beverages ordered are
> required. If the meals are included on the hotel bill as room service,
> a detailed receipt is still required. Since SIAM receives funding from
> government agencies it is mandatory that we receive the detailed
> receipts so that unallowable costs can be segregated for government
> funding purposes. If a detailed receipt/receipts are not provided to
> support a meal item on the expense report, the meal(s) will be
> deducted from the expense report and not reimbursed.
> 2. Per Diem - Costs vary according to the area of the country; there are no fixed per-diem rates. The U.S. General Services Administration (GSA) updates the per-diem by city periodically. If using per diem, the rate for the
> conference city being travelled to should be used. Current per-diem rates
> are available at:
>
>
> * Domestic ~ <http://www.gsa.gov/portal/category/21287>
> * Outside US ~ <http://aoprals.state.gov/web920/per_diem.asp>
>
>
>
If you use the per-diem rates, you can comfortably afford a nice meal every day, no matter which city in the US you are travelling to.
Upvotes: 3 |
2014/11/11 | 527 | 2,158 | <issue_start>username_0: I'm applying to doctoral programs right now and I'm having dinner tomorrow night with a professor and three of his grad students. I'm very hopeful to join this lab. I've spoken with the professor before and would like to focus my attention on the students. I've never done anything like this before.
Do you have any suggestions for questions I should ask and how I should act? I wanted to offer to pay for dinner or at least drinks since they are going to this trouble, but would that seem like I'm trying to schmooze my way in? Would it be weird for me to write notes on what they say at dinner?<issue_comment>username_1: To add to Massimo's comment on not taking notes: yeah, I think that would seem weird. If the conversation is about science, and a particularly interesting point comes up or you want to take note of someone's name to look up a paper, make a note on your phone.
Offering to pay might be polite, though if the prof's paying through his department or a grant I wouldn't insist. I think it's safe to say that if the grad students don't pull out their wallets, neither should you.
Just act normally. Be yourself - it is not a formal occasion. The prof will have invited his students along for two reasons:
1. Because dinner with just the two of you could be exceedingly awkward
2. So that everyone can get an impression of how you'll fit in with the group.
Just make the usual small talk with his grad students to (hopefully) find some common ground, talk a bit about their projects, and make sure to have a good yarn about non-work-related things too. Your scientific prowess will not be assessed at the dinner table!
Upvotes: 4 <issue_comment>username_2: All of what @username_1 said except that I disagree about taking notes using your phone. It's just very hard to look professional doing so and too easy to get (or look) distracted.
Carry a little reporter's notebook and pen and pull that out if you need to jot a name down. You'll get more prop points if the notebook looks well worn.
Also, even if you think you have a good alcohol tolerance, I would drink no, or close to no, alcohol.
Upvotes: 2 |
2014/11/11 | 1,350 | 5,917 | <issue_start>username_0: I had submitted to a reputed computer vision journal. Both the reviewers (there were only two) marked it as "reject" with comments that alluded to the fact that they had actually "missed the point". It was interesting that the reviewers missed the point on two major levels, first on the actual aims and scope of that journal and second the actual scope of the conducted research. With 2 reviewers recommending "reject", the Editor decided to "reject" the article.
Even though there was no avenue for rebuttal, I drafted a long email with my rebuttal highlighting where the reviewers had missed the point of my article. The Editor replied saying that he himself was very much surprised by the reviewers' decision, because from his reading of the paper he actually saw value in it. On top of that, my rebuttal was very convincing. However, he does not feel comfortable reversing the decision because both reviewers had recommended "reject"; hence he is going to refer this to the Editor-in-Chief to get his opinion.
Now, what are my chances of publishing this article of mine with this journal?
**Update:** The editor-in-chief got back with a list of possible items for major-revision that they would like to see. The major-revision deadline was quite short (just three weeks), they agreed to postpone the deadline on my request. I tried my best to address the items of major-revision and the article was finally accepted. Perseverance pays! :-)<issue_comment>username_1: Chances are slim.
It is possible that the editor would ask for a third reviewer to look at the paper. If that reviewer is extremely positive, they might reconsider. Someone has to be passionate about the paper in order to get the paper accepted.
However, since both reviewers missed the main points there is a good chance that the presentation is not clear enough. So you could consider significantly rewriting the paper and then resubmitting to the journal as a new submission (it is best to coordinate this with the editor). This might prompt the editor to decide to evaluate it as a new manuscript.
Upvotes: 3 <issue_comment>username_2: First of all, your situation is not at all uncommon. On the contrary, most academics who submit sufficiently many papers find themselves in it from time to time.
The fact that your rebuttal email resulted in an editor's writing back that he saw value in your paper is already worth something: it gives corroboration that your paper has value. In general it is doubtful that an editor would write that only to be polite, because in doing so he is opening himself and the journal up to further rebuttal from you. The fact that he is passing your complaint on to the editor-in-chief is further evidence that he takes it seriously.
>
> Now, what are my chances of publishing this article of mine with this journal?
>
>
>
If you're asking for a straight-up prediction: that's hard to say. *In general* the chance that a paper that gets multiple negative referee reports is eventually published is very small. However, it is also relatively unusual for an editor to directly communicate disagreement with the referee reports to an author. The chances depend on how egregiously off-base the referee reports were. If the editors truly agree that the referees "missed the point on...the actual aims and scope of that journal" then they are going to feel like wronged parties along with you and the chance that they will at least solicit another referee report seems pretty solid. (On the other hand, if that is the case one wonders why they didn't notice it before you brought it to their attention.) If a third referee report disagrees wildly with the first two, then perhaps the editors will be inclined to accept the paper (or seek yet further reports).
Nevertheless, unfortunately my guess at the most likely outcome is that the editors will convey their sympathies to you and wish you the best of luck elsewhere. In my experience editors just do not have enough incentive to overrule referees in this situation. From a hard-nosed perspective they may be right: if your paper truly is valuable and the referees are wrong, then you can resubmit to another journal of similar quality. That outcome is in the long run almost as good for you and only detrimental to them if your paper is not just publishable in the journal but outstandingly strong beyond the sort of papers they usually accept. On the other hand if your paper is flawed and they publish it anyway then they are throwing away all the advantages of peer review.
In general, it is a rare referee report that doesn't tell you something that could improve your paper. If a referee wildly misses the point (which again, is not at all uncommon) it is not necessarily your fault...but nevertheless maybe you could rewrite the paper to make it easier to get the point. Sometimes authors work for months or years on very subtle things and then expect readers to appreciate these subtleties upon a much more casual reading. The fact that two different referees missed the point still does not imply that their comments have any legitimacy, but it does make it more probable. If two people miss the point of your work *in the same way*, then I would certainly take a crack at rewriting the paper to avoid that particular misunderstanding.
All in all, it would be safe to at least start thinking about how you could (perhaps relatively quickly and easily) modify your paper for resubmission. I would expect the editorial deliberations on this to be rather quick: if you don't hear back from the editors within, say, two weeks, then it would be appropriate to inquire politely on the status of your paper. I would not advise you to resubmit to the same journal unless you know you'll get new referees: people who have missed the point once are not your best bet for appreciating the new version.
Upvotes: 5 [selected_answer] |
2014/11/11 | 2,185 | 7,936 | <issue_start>username_0: With an eye to finding the reasons behind [high journal subscription costs](https://academia.stackexchange.com/q/29923/452): do journals / publishers make outrageous margins, or are prices truly justified by the costs to run journals? In other words, how does their budget look like?<issue_comment>username_1: Many of the commercial academic publication companies have [massive profit margins](http://libraries.mit.edu/scholarly/mit-open-access/open-access-at-mit/mit-open-access-policy/publishers-and-the-mit-faculty-open-access-policy/elsevier-fact-sheet/). In the age of the internet, profit is by far the main reason for high costs---and many of the other costs are essentially just ways of enhancing profit, such as marketing expenses. The actual costs of running a journal can be very low, given that many (including high cost journals) depend entirely on volunteer service by academics for editing and reviewing. Some disciplines have dealt with this by embracing electronic publishing: see for example, the [Journal of Machine Learning Research](http://www.jmlr.org/) which is high impact and entirely free (in a related discussion, JeffE independently provided [a link to an excellent article analyzing its finances](http://blogs.law.harvard.edu/pamphlet/2012/03/06/an-efficient-journal/))
Upvotes: 2 <issue_comment>username_2: **tl;dr: it is a distorted market. Open-access models involve cross-subsidy. Subscription models offer inflated profit margins to large publishers** (and some small, very specialist publishers).
Those market distortions mean that suppliers' budgets will conceal as much as they reveal.
You'll find a little sympathy for the devil, here. Things are rarely pure, never simple, sometimes ugly.
Market prices?
--------------
As with most questions of the form "why does this thing have this price?" the answer is: supply and demand. It's that price because that's the price at which buyers and sellers agree on a quantity of supply.
But is it a fair market?
### Is there an oligopoly?
There are very low barriers to entry: pretty much anyone with a computer and web connection can start a journal and get an ISSN for it.
Some profit margins at the big publishers (Elsevier, Taylor & Francis) do suggest oligopolistic pricing - 25-35% margins look suspicious. While any sensible profit margin analysis would also look at returns on capital employed, none of the critiques of the large publishers that I've seen have done so. There are claims that Elsevier and Taylor & Francis, and possibly others, have used bundling (selling large numbers of titles as a group subscription) to inflate their margins. Some universities tackle this by subscribing journal-by-journal.
So yes, there are indications of oligopolistic practices that inflate prices and margins.
### Is there a monopsony or oligopsony?
Well, those high margins would suggest not, and indeed, there are many, many hundreds of competing universities each paying publishers on their own behalf, so the demand side looks competitive. No, there is no oligopsony.
---
### Business models
There are three primary models of publishing business.
The **first model** is the open-access charity model, where everything is begged, borrowed or donated. So the costs are still there - and it's typically academics and universities paying - but the costs are concealed: effectively, the donors of resources are cross-subsidising the publishing, perhaps as an investment in reputation and/or impact. This model is very rare, because it's practically impossible to build a reputation from zero without any investment. Journals that I'm aware of that use this model inherit reputation from predecessors that used one of the other two business models.
A **second model** - open access, author pays - is also along the lines of academia pays, readers from industry and commerce get the research for free, just like the open-access charity model. A large number of low-quality predatory journals are able to use this model, because of the pressure on many academics to "publish or perish".
The **third model**, the traditional journal subscription model, is that only organisations and institutions who access the papers pay. This is the only model where academia does not have to subsidise industrial and commercial access to research. Some low-quality predatory journals use this model, though less successfully than the open-access, author-pays model, because the subscription model has much less demand-side pressure. But this model is also how large-scale scientific publishing got established, and it's how the reputations of most academics, editorial boards, weighty titles, and many conferences got established. This is the least fashionable and most successful model.
Upvotes: 3 <issue_comment>username_3: >
> Do journals / publishers make outrageous margins,
>
>
>
Yes, e.g. from this [post in Nature](http://www.nature.com/news/open-access-the-true-cost-of-science-publishing-1.12676) (ironic publishing venue for this kind of article...):
>
> Elsevier's reported margins are 37%, but financial analysts estimate them at 40–50% for the STM publishing division before tax. (Nature says that it will not disclose information on margins.)
>
>
>
From [Paywall The Movie Trailer](https://vimeo.com/217495703):
[](https://i.stack.imgur.com/4t8Zm.png)
Upvotes: 2 <issue_comment>username_4: Three of the four biggest academic publishers are publicly held (Informa, Wiley, and Elsevier) which means their profit margins can be searched for in their annual reports. Links to their websites: [Informa](https://informa.com/investors/annual-reports/), [Wiley](https://www.wiley.com/en-us/investors), [Elsevier](https://www.relx.com/investors/annual-reports/). Note that Informa and Elsevier especially are multi-faceted businesses so the profit margin they report might include their other businesses. All three companies also publish books, which has a different profit margin compared to journals (book margins are usually lower).
As of the time of writing (2018):
[Informa's academic publishing division](https://informa.com/Documents/Investor%20Relations/2018/2017_Informa_Annual_Report.pdf) reported revenue of 530.0m GBP and operating profit of 154.1m GBP for an operating profit margin of 29% (note operating profit doesn't include taxes).
[Wiley](https://s3.amazonaws.com/wiley-ecomm-prod-content/Q418_Earnings_Slides_Final.pdf) reported journal revenue of $901.5m and a "contribution to profit" of $275.5m (this includes an unknown contribution to profit from Atypon, which is a publishing platform). Taking Atypon to contribute $0, this is a profit margin of 30.56%.
[Elsevier](https://www.relx.com/%7E/media/Files/R/RELX-Group/documents/reports/annual-reports/relx2017-annual-report.pdf) didn't break down their academic division's revenue, so the reported 2,478m GBP figure includes numbers from Scopus, ScienceDirect, ClinicalKey, and a lot of other stuff. Notably they said 19% of their revenue was from print with the other 81% from electronic sources. Total reported adjusted operating profit was 913m GBP or 36.84% adjusted profit margin. Caveat: adjusted operating profit is non-GAAP. Based on page 186 of that same report, their real profit is significantly lower.
Something else that might be interesting: OMICS, a publisher widely-held to be predatory, reported [$11.6m in revenue and $1.2m in profit](https://www.bloomberg.com/news/features/2017-08-29/medical-journals-have-a-fake-news-problem) in 2016, for a profit margin of 10.34%.
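As a sanity check, each margin quoted above is just the reported profit divided by the reported revenue. A quick sketch reproducing the arithmetic, with the figures copied from the reports cited above (in millions of GBP or USD, as reported):

```python
# Recompute the profit margins quoted above from the reported figures.
figures = {
    # name: (profit, revenue), in millions (GBP or USD, as reported)
    "Informa (academic division, 2017)": (154.1, 530.0),
    "Wiley (journals, FY2018)": (275.5, 901.5),
    "Elsevier/RELX (adjusted, 2017)": (913.0, 2478.0),
    "OMICS (2016)": (1.2, 11.6),
}
for name, (profit, revenue) in figures.items():
    print(f"{name}: {100 * profit / revenue:.2f}%")
# Informa (academic division, 2017): 29.08%
# Wiley (journals, FY2018): 30.56%
# Elsevier/RELX (adjusted, 2017): 36.84%
# OMICS (2016): 10.34%
```

Note the caveats above still apply: these are operating or adjusted figures on differently scoped revenue bases, so the percentages are comparable only loosely.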
**Small update**: [here](https://scholarlykitchen.sspnet.org/2020/08/10/guest-post-mdpis-remarkable-growth/) is an analysis of the fifth-biggest publisher MDPI's profit margins: about 1-6% per paper, after tax. The APC is about $1500, and the cost per paper is about $1400.
Upvotes: 3 |
2014/11/12 | 888 | 3,868 | <issue_start>username_0: I am applying to grad school and I asked one of my professors to write a letter of recommendation on my behalf. He happily agreed. He submitted a recommendation and forwarded the confirmation email to me so that I can have look.
On the recommendation form there are some questions that ask the professor to select things like top 1%, 5%, or 10% in terms of writing, organization, maturity, etc.
He chose top 1% in all of them. In another question, "What is the group you are comparing the applicant with?", he wrote "He was at least in the top 1% compared to all students in the last 5 years".
I told this to a friend and he was surprised and suggested that the admission committee might not take his recommendation seriously. Another one of my referees showed me his recommendation and it was very similar.
My concern is how admission committees look at recommendations that seem too good to be true? The two professors really know me very well and they are the best options I got.<issue_comment>username_1: Saying that you gave the first proof of Fermat's Last Theorem is too good to be true. (I.e., it is literally false, since the result was resolved 20 years ago by someone else.) Saying that you are more talented at physics than Einstein and Feynman combined is technically possible but strains credulity to the point that it would have a strong negative effect if not backed up by some truly remarkable facts. However, saying that you are in the top 1% compared to all students in the last five years is obviously *not* too good to be true: it must be true of at least one student. (These questions are often muddied by not being precise enough about the cohort being compared, and you should know that admissions committees interpret them with a grain of salt.)
When I was involved with PhD admissions in the UGA math department, each year I saw several applications in which the recommender gave the applicant top marks in every category. When this happened I didn't say "Ridiculous!" but instead looked carefully at the rest of the application. It may be that I conclude that the recommender is a bit naive and/or hasn't seen as good students as I have...but that still might mean that the student's application is quite strong. In general top marks are **good things**, not **red flags**.
To my mind the fact that two of your recommenders showed you the letters is much more of a red flag than the top ratings. The strongest letters of recommendation often contain confidential information that would not be suitable for the candidate to read (e.g. comparisons to other named people). If such information doesn't appear then there is nothing inherently wrong with the practice, but nevertheless it does not inspire my confidence.
I guess if you are looking at the recommendation letters you have a chance to evaluate their suitability (which you can use in a future year; it is awkward and perhaps even ethically suspect to withdraw a recommendation letter after reading it). At least in US graduate applications, good letters are about a lot more than the slightly silly ratings. They also contain several paragraphs of text, usually occupying at least the better part of a page. If someone gives you absolutely top marks and then writes little or nothing to back them up, they look quite lazy. Though that does not specifically reflect on you, it certainly doesn't help your application either.
Upvotes: 6 [selected_answer]<issue_comment>username_2: "The two professors really know me very well and they are the best options I got."
Then you are one lucky individual. And, "don't look a gift horse in the mouth."
Try to find out from them why they think this, so you aren't blindsided at an interview, or by writing something in an essay that contradicts them. Then you will understand how people see your strengths.
Upvotes: 0 |
2014/11/12 | 695 | 2,561 | <issue_start>I have seen [some sites](http://www.campusexplorer.com/college-advice-tips/64C6D277/What-Is-the-Difference-Between-a-Thesis-and-a-Dissertation/) that define a dissertation as the document written to fulfill the requirements of a doctoral degree.
[Others](https://www.englishforums.com/English/DifferenceBetweenThesisDissertation/bkrxg/post.htm) hold that a thesis is a document written in fulfillment of any degree (Bachelor's, Master's, or Doctoral), while a dissertation is a more general name for a document in which someone presents findings.
I am curious if there is any more rigorous definition which distinguishes the two, but my more immediate question is this:
**I am writing the document to fulfill a doctoral degree. Within the text of the document do I refer to it as a "Dissertation" or a "Thesis"?**
For example: "A more thorough review of this analysis is presented in Chapter 5 of this \_\_\_\_\_\_\_\_."
Maybe the fact that the [[thesis]](https://academia.stackexchange.com/tags/thesis/synonyms) is the tag used for all of these documents is an indication of the answer?
I've also seen this question on this site, but it doesn't seem to answer my question:
[What are the main differences between undergraduate, master's, and doctoral theses?](https://academia.stackexchange.com/questions/7252/what-are-the-main-differences-between-undergraduate-masters-and-doctoral-thes)<issue_comment>username_1: "Work" is just as good as either. There's no need for precision or rigor here. All three would be acceptable.
Upvotes: 3 <issue_comment>username_2: Follow the guidelines of your university's thesis office (or dissertation office, or whatever they call it). They'll probably have a format guide that specifies how to refer to the document, or if not, you can contact someone at the office and ask.
If they really don't tell you which one to use, you can probably use either, but it helps to be consistent.
Upvotes: 2 <issue_comment>username_3: To lift from the definition provided [here](https://www.englishforums.com/English/DifferenceBetweenThesisDissertation/bkrxg/post.htm#331513):
>
> A Thesis is a scholarly written document of a smaller study on a
> particular topic in consistent with every details of Research
> Methodology. It's written usually for obtaining a Masters Degree.
>
>
> A Dissertation is a scholarly document of a larger study on a
> particular topic in consistent with every details of research
> Methodology. It's written usually to obtain a Doctoral Degree.
>
>
>
Upvotes: -1 |
2014/11/12 | 494 | 2,181 | <issue_start>At the end of each semester, usually a month before final exams, my school (in the US) distributes teaching evaluations. Who will read these? Do people only see these after the final grades are posted?<issue_comment>username_1: While this may vary significantly from institution to institution, my understanding is that, in the US at least, evaluations are at least read by:
1. The instructor, who needs to get feedback on their teaching efficacy
2. Others in the department who are evaluating the instructor, e.g., as part of tenure and promotion review, as part of peer mentoring, or as part of a department's own ongoing self-management.
They may also end up being read by other administrators (internal or external, e.g., certification authorities) who are monitoring a department's teaching quality, and may be distributed to students to help them decide which classes to take. In these latter cases, it is likely that a summary will be distributed rather than the raw evaluations.
Timing with respect to finals and grading varies. In all cases, however, institutions tend to take pains to preserve student privacy so that unless somebody leaves a clearly identifying mark in the comments it should be unable to affect their grades either way.
Upvotes: 3 <issue_comment>username_2: In the university I attended, evaluations were in two parts. A numerical ranking (1-5) on some university-wide (and sometimes department-wide) questions was available to the department. A second set of open-answer questions was available to the professor only.
Other universities have different policies, some as open as making all the evaluations available (in an anonymized fashion) on the university website. Often a department secretary (or possibly an assistant to the Chair of the department) is tasked with handling these evaluations. They (or your advisor) should be able to tell you how your university handles them.
It is nearly universal (in the US at least) that the evaluations are not given to the professor until after final grades are turned in. This avoids the appearance of retaliation for a bad review (or reward for a good one).
Upvotes: 2 |
2014/11/12 | 642 | 2,769 | <issue_start>username_0: I am enrolled in an on-line graduate course (US public university). I do not live near the campus, so I cannot attend office hours (which aren't even offered).
For 3 months, I've been working on a research paper for the course. At regular deadlines, I've submitted my progress electronically; however, the instructor's feedback was sparse in each case:
* No feedback on whether my chosen topic was okay.
* No replies to my public posts/E-mails about some concerns I had in approaching the topic.
* Only minimal markings, e.g. pointed out a few grammar/MLA mistakes, no comments focused on my ideas, organization, etc.
I do not even know if the instructor read my work. The project is worth a significant portion of the course grade, but the instructor has given so little input that I have no idea if the paper is good or terrible. What can I do?<issue_comment>username_1: First of all, I must say that I find it possible that your instructor simply doesn't care enough. The reason for that (or the other way around) might be that the online course isn't serious enough and that the instructor will just give you some passing grade if you hand in virtually anything. While this might not be true for US universities, in Europe online courses and degrees are considered pretty unprofessional, at least in my experience.
Now to your question of what you can do in your situation. Based on the information you provided, I would try to publish your paper somewhere (if not in a journal, then at a conference). The peer review will most certainly give you feedback on your paper. Furthermore, you attend a graduate course in pursuit of a graduate degree, so publishing will become a requirement for you in the near future. Finally, the peer review will indicate quality to your instructor and give you some certainty that your work will be graded higher.
Upvotes: -1 <issue_comment>username_2: I see three likely paths for you to take, depending on the quality of the program that you are enrolled in.
1. If you haven't already done so, explicitly ask the professor for more concrete feedback on your work. You might simply be having a communications problem, where the professor assumes "no news is good news."
2. If the professor does not respond constructively, escalate to somebody higher up, like a dean. Approach delicately, as a concerned student asking for help, rather than making demands.
3. If you don't get a satisfactory response from the dean, then it may be that the online program you are enrolled in is crap, and you should not bother investing your time and money in it. Some online programs are quite serious and good, while others are essentially just for-profit scams, and yours might well be one of them.
Upvotes: 2 |
2014/11/12 | 543 | 2,468 | <issue_start>Today I suddenly found that my email server requests a read receipt every time I send an email. Since I have recently been exchanging messages with a potential PhD supervisor, I wonder if this will make him feel that I am rude and annoying. I have already turned this feature off!<issue_comment>username_1: I do not think that this is rude. If it even mildly irritated him, he might have mentioned it to you at some point. Even if he didn't, he still corresponds with you, so it probably didn't bother him that much. Regardless, if you disabled the feature, he should not be getting those any more.
Upvotes: 3 <issue_comment>username_2: Unless you have reason to suspect that emails are not being delivered successfully, email receipts are pretty much useless. Here's why:
1. Not all email clients support read receipts - Mail on OS X doesn't, for one prominent example. If some of the previous emails you sent requested a receipt, and it wasn't returned, obviously either the prof's email client doesn't support it or he didn't bother clicking that button.
2. Unless the receipt is returned, you can never be sure if the email was read or not.
3. Unless the email is replied to immediately - which is unlikely if the reply will take some effort - it can easily be forgotten about. Receiving an email receipt is no guarantee that you'll get a reply.
If you have no response to an email after a reasonable period of time ("reasonable" can vary between 3 days and a month or more, depending on what is required), just send a quick reminder email.
Upvotes: 4 <issue_comment>username_3: Unfortunately, this is highly subjective. I think everybody can understand the benefits of a read receipt; this is why all modern messaging platforms (WhatsApp, Telegram, etc.) implement this feature automatically. I cannot count how many times I have faced the dilemma of "should I send the email again?", because emails sometimes get lost even when they are not in the spam folder. For me, this communication issue should be solved somehow, and I would not consider a person rude for trying to solve it with the means they have. In my personal opinion, it is not a bad practice; unfortunately, many people prefer to lose or not answer emails rather than give the writer a clear answer. I think that a messaging platform would be a much better tool for person-to-person communication in the future.
Upvotes: -1 |
2014/11/13 | 691 | 2,966 | <issue_start>username_0: I am finishing my Master's in mathematics in Germany and I'd like to apply for a PhD in Europe, preferably in the UK. Most departments recommend that students get in touch with potential supervisors prior to submitting a formal application. I am a little nervous about it and I would appreciate advice regarding the following:
(1) Some people say it's advantageous to mention interest in specific papers published by a given professor. But I'm not sure how applicable this is to mathematics. To be honest, I haven't read a single paper by most of the people I'd like to apply to. (Reading and understanding a math paper takes a long time, so I think it's rather normal.) Is it OK just to say, for example, "I've seen you have published a lot of papers on non-linear PDEs, which is an area I'd like to do research in", or does this sound too generic?
(2) Is it OK to mention that my Master thesis supervisor or lecturer at my university recommended a given professor to me as a potential PhD advisor (they know each other), or does this sound somewhat awkward/patronising?
(3) How long should my email be? Is about 300 words too long?
(4) What should a first email accomplish? Should I just introduce myself and express interest? Or should I ask some specific questions about a potential research project straightaway?
I will really appreciate your advice, especially from academic mathematicians. I think one of the problems is that I find it a bit hard to see how the situation looks from the perspective of the potential supervisor. Do they get hundreds of such emails every year and just get annoyed when they get another one? Do they want the applicants to be very specific from the start, or is it better to first introduce oneself and see if they are at all interested before asking more specific questions about a research project, etc.?<issue_comment>username_1: (1) Either is okay. Keep in mind you do not have to read an entire paper to determine if it is interesting.
(2) Definitely do that, assuming your supervisor will recommend you.
(3) It should be readable in just a couple of minutes since professors are busy.
(4) Express interest and qualifications.
(not a mathematician, nor in Europe)
Upvotes: 1 <issue_comment>username_2: I am a mathematician in the UK. If someone doing a Master's degree in Germany wants to do a PhD with me, I would like them to send me an email of five or six lines, giving a brief indication of what is in their Master's thesis, and a very broad indication of what they would like to do in their PhD (perhaps "chromatic homotopy theory" or "something to do with operads").
I do not expect that applicants will have read any of my papers, although that sometimes happens. If your supervisor suggested that you should apply to me, then I might find that interesting, but it would not be significant; I would wait for more detailed comments in the supervisor's reference letter.
Upvotes: 3 |
2014/11/13 | 788 | 3,155 | <issue_start>username_0: American military veterans funded under the GI Bill (and possibly under other VA administrated programs) can only receive funding for classes which are required for the degree. I can find lots of university websites that mention this requirement ([here](http://veteranscenter.utah.edu/gi-bill.php)'s a good example), but I was unable to determine where this rule comes from or why it is there. In particular, is it required by the [legislation itself](http://en.wikipedia.org/wiki/Post-9/11_Veterans_Educational_Assistance_Act_of_2008) or just VA rules? When was the rule imposed and is there any information on why?<issue_comment>username_1: It looks to me like it is more or less required by the [legislation](http://www.law.cornell.edu/uscode/text/38/part-III/chapter-33), although I'm not a lawyer so I may not be interpreting it right. Educational assistance is [authorized](http://www.law.cornell.edu/uscode/text/38/3313#a) for pursuing a "program of education" rather than just taking individual courses:
>
> The Secretary shall pay to each individual entitled to educational assistance under this chapter who is pursuing an approved program of education...
>
>
>
A program of education is [defined](http://www.law.cornell.edu/uscode/text/38/3452#b) as:
>
> The term “program of education” means any curriculum or any combination of unit courses or subjects pursued at an educational institution which is generally accepted as necessary to fulfill requirements for the attainment of a predetermined and identified educational, professional, or vocational objective.
>
>
>
Strictly speaking, this doesn't mean the classes must be required for an academic degree per se, and other sorts of credentials or licenses could count. However, it does not seem to allow taking isolated classes for their own sake, but rather just as required for the overall program.
I don't know the history or why the legislation was set up this way. I'd guess that it's because the goal is to help veterans achieve qualifications that will further their careers, not to educate them for the sake of education. One possibility is that nobody really thought hard about the issue while drafting the legislation, so it wasn't a conscious decision. Another is that it was intended to save money by avoiding paying for frivolous or unnecessary courses. A third possibility is that it was intended to help veterans by putting pressure on them to follow a set degree program (rather than squandering their benefits on isolated courses that might never fit together to complete any degree).
Upvotes: 3 <issue_comment>username_2: The purpose of the GI Bill program is for veterans to get a degree or acquire a new skill for employment after active duty, not to flounder around in college for 6 years for a Bachelor's degree. If there is a class you really want to take that's not a requirement, you can always change your major, then change it back. There are different GI Bills with different rules depending on when you served in the military. I would talk to the VA or visit the VA GI Bill web site; most questions can be answered there.
Upvotes: 2 |
2014/11/13 | 619 | 2,325 | <issue_start>username_0: I plan to do a PhD in Germany. I have read some material that said that:
>
> "The most important formal qualification for being able to do a
> doctorate in Germany is a very good higher education degree that is
> recognised in Germany"
>
>
>
"What is" (or maybe I should ask "how good is") a very good higher education degree?<issue_comment>username_1: Basically, you need to have a master's degree. If that master's degree comes from:
* A German university
* A university in any member country of the [Bologna Process](http://en.wikipedia.org/wiki/Bologna_Process)
* A few other countries considered "equivalent" but not participating in the Bologna Process, such as the US, Canada, Australia, and Japan
then it is almost always automatically accepted. On the other hand, if your degree is from another source (such as India, China, Iran, Africa, etc.), or is from a German *Fachhochschule*, then the degree must be certified by the university as being at the same level as a German master's degree before you can be admitted. Also, after your admission, you may be required to take some additional courses to establish your candidacy (although this is usually on the order of two to three courses during your first two years).
Upvotes: 4 <issue_comment>username_2: The German grade scale at universities is:
* very good (1)
* good (2)
* satisfactory (3)
* sufficient (4)
* fail (5)
Numerical grades are commonly considered to the first decimal digit. I'd interpret the requirement for a "very good" degree as an average of 1.5 or better. This seems quite strict to me though: my department has the informal rule that we are willing to admit PhD students with an average of 2.5 or better, i.e., a "good" degree. In order to convert grades from international grading systems, the so-called "Bavarian formula" is often used, see e.g. <http://www.uni-oldenburg.de/studium/pruefungen/anrechnungen/umrechnung-auslaendischer-noten/> .
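For reference, the "Bavarian formula" in its commonly cited (modified) form is x = 1 + 3 (Nmax - Nd) / (Nmax - Nmin), where Nmax and Nmin are the best and the lowest passing grades of the foreign system and Nd is the grade actually obtained. A small sketch (the function name and example values are my own):

```python
def bavarian_formula(n_max, n_min, n_d):
    """Map a foreign grade onto the German scale (1.0 = best, 4.0 = lowest pass).

    n_max: best achievable grade in the foreign system
    n_min: lowest passing grade in the foreign system
    n_d:   grade actually obtained
    """
    return 1 + 3 * (n_max - n_d) / (n_max - n_min)

# Example: a 3.0 on a US-style 4.0 scale, assuming 2.0 is the lowest pass,
# maps to a German 2.5, i.e. between "good" and "satisfactory".
print(bavarian_formula(n_max=4.0, n_min=2.0, n_d=3.0))  # → 2.5
```

Individual universities apply their own variants and rounding rules, so treat this only as a first orientation.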
Recognition of your university degree as equivalent to a German degree is another important factor. The [anabin database](http://anabin.kmk.org) would be a good place to start researching the situation for your specific case, if you understand a bit of German. (I really don't understand why this isn't provided in English.) @username_1's answer also discusses this aspect.
Upvotes: 3 |
2014/11/13 | 709 | 3,219 | <issue_start>In my research group, we sometimes use our funding to buy things required for our research. They can be pretty cheap (e.g. books) or quite expensive (e.g. machines).
When the funding for the project is over, I wonder who will own those things.<issue_comment>username_1: The funding agreement or regulations by the sponsor will usually clarify this. In most cases, it can be expected that the legal institution where the project is run buys these things and also owns them after the project. In some cases, the sponsor may own expensive equipment itself, so that it can go with the PI when changing affiliations.
I've also seen regulations where the sponsor only pays the depreciation of long-lasting equipment during the runtime of the project, so the institution would have to cover for the remaining costs.
Upvotes: 4 <issue_comment>username_2: In the US, at least, there is a generally clear distinction between two classes of things bought with funding:
* "Capital expenditure" items are individually identified in a project budget, e.g., "36-node computing cluster" or "Materials for building prototype robot".
* 'Overhead' or 'Material and Supplies' items are more routine items that are not individually identified, but are considered part of the routine cost of business, e.g., office supplies, laptops, laboratory reagents.
Capital expenditure items are typically owned by the funder, and their disposition at the end of the project is at the discretion of the funder. When this is the US government, it is called GFE - Government Furnished Equipment. In most cases, the funder lets it stay at the institution (effectively giving it to them), but not always. An institution might even be required to give the equipment to another researcher.
Overhead items are generally owned by the institution that is executing the project, and are often owned by the institution in general, rather than being associated with the particular project. These typically further break down into two sub-categories: tracked, and untracked. Tracked is things like laptops, that many institutions still consider expensive enough to keep track of who they give them to and (at least theoretically) ask for them back eventually. Untracked is things like paper and staplers, which are below the threshold where the institution cares. Again, typically all of it technically belongs to the institution, but in practice many institutions will let somebody keep a low-value 'personalized' item such as an old laptop.
The exact definitions of which things go in which categories depend on the institution, the funder, and the particular contract, but this covers most of the typical cases. Make sure you check particular regulations and customs with any particular institution before taking anything, though!
Upvotes: 6 [selected_answer]<issue_comment>username_3: Research contracts are generally to institutions, not to individuals, so major equipment belongs to institutions. Often, though, the big equipment leaves with the investigator if they change jobs. The idea is that in the normal course of business, gear will move both in and out of the university, and things work out over time.
Upvotes: 0 |
2014/11/13 | 248 | 1,033 | <issue_start>username_0: I recently asked two of my professors to write me recommendation letters and they promptly and kindly accepted to do that. It is now two weeks since then. I want to send “thank you” notes to them. Is it enough to send e-mails? I have heard that it is more polite to send a written letter by US post, but it is a little bit weird when we are all in the same building! Isn’t it? Can I send them some kind of gift on a special occasion?<issue_comment>username_1: Don't worry about it too much, just say thank you. In person or over email are both fine: getting a good recommendation letter is a big deal for you, but writing recommendation letters for good students is a part of normal routine for a professor.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Recommenders also like to hear the results of the process. So one way to thank them (in addition to a written or emailed thank-you note) is to let them know which programs/jobs/schools you got into on account of their recommendation.
Upvotes: 3 |
2014/11/13 | 1,680 | 6,868 | <issue_start>username_0: I am told, by both a professor at my US undergraduate school and here at Stack Academia (for example, at [Implications of being accepted without funding to a computer science PhD in the United States?](https://academia.stackexchange.com/questions/18755/implications-of-being-accepted-without-funding)), that I should not accept an offer from a US graduate school unless I am offered full funding. The reasoning is that if they actually wanted me, they would be paying me. I agree with this.
So my question is, what would be the difference between going to a school with funding and without funding, besides the money? Would I be treated as a second class citizen? And if I was offered full funding, perhaps from a TAship, what would happen if I wanted to pay for myself anyway?
Existing answers, such as at [Will self funding a PhD hurt employment chances?](https://academia.stackexchange.com/questions/8034/will-self-funding-a-phd-hurt-employment-chances), seem to focus on the money and the chance that the applicant is not strong enough for the program. While I also agree with these, I am interested in learning about external factors too.<issue_comment>username_1: Confession: I've tried this myself TWICE in the field of mathematics, so what I say comes from my own experience.
You need not worry about the existence of a caste-system among graduate students. You will not be treated any differently than any other student if you are accepted into a program and not funded. The main question is whether or not you will get into the program in the first place if you have no *external* funding sources...
In science-related graduate schools, it is quite often the case that students will not be accepted into the program *unless* they have some sort of support (i.e. department assistantship, scholarship/fellowship, etc...). Students who try to do it all *on their own* often find themselves under even more pressure than a funded student. On top of trying to pass extremely difficult courses and pursue original, cutting edge research, they may find themselves also working multiple unrelated jobs that barely make ends meet for rent, much less tuition and all other debts incurred along the way. Often, unfunded students succumbs to financial pressures and drop out to pursue more financially stable opportunities.
Students dropping out of graduate programs also make their host departments' statistics ***look bad*** in the eyes of their superiors (i.e. deans, university president, provosts, etc..) and can lead to diminished support for those graduate programs. Since universities don't want to hurt their own reputations (or lose state/donated funding), they tend to be selective of their graduate students. And I believe this is a major reason why self-funded students are often not even allowed in graduate programs: statistically speaking, their success rate is likely too low to merit taking a chance.
Of course there are exceptions (e.g. having wealthy parents, pursuing non-science graduate programs, education doctorate degrees often earned by people who work full time as teachers), but it is certainly a **red flag** if a student willingly tries to pursue a graduate degree in the sciences without any source of funding.
My advice to you: If you're offered funding, ***take it!***. If you are accepted into a graduate program and are *not offered funding* and *don't have any other source of funding apart from yourself*, then ***don't try to do it all on your own***. The sheer cost of graduate school, combined with the uncertainty of you graduating from the program, along with the nightmare of trying to pay off student loan debt for the rest of your life (even bankruptcy will not save you from student loan debt); ***it's just not worth it to you***.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I started a STEM master's program self-funded because I was changing fields. I had no education or experience in the new field so could not easily be offered a teaching or research assistantship. Once I finished the first year, I was given a teaching assistantship which covered the rest of my degree.
In general, I don't think graduate students really care if you have funding or not. If you are unfunded or poorly funded, you may feel jealous of fellowship holders or of teaching or research assistants with funding.
Upvotes: 0 <issue_comment>username_3: My path through math graduate school has been as follows: 1 year unfunded in Ph.D. program at school A, 1 year funded in Ph.D. program at school A, move to different school and spend 4 years funded in the Ph.D. program at school B. So I've spent time in Ph.D. programs both as a fully funded "regular" math grad student, and as an unfunded math grad student.
Being an unfunded grad student had a couple of potentially negative effects (I won't discuss the obvious financial burden):
1. I felt a bit disconnected from the math department. The funded grad students had shared offices, which naturally led to them getting to know each other. As an unfunded student, I had no office or access to any shared department space (break room/kitchen area, etc.)
2. My goals were slightly skewed from what they should have been. I went in to my first year as an unfunded student with this feeling that I needed to do "better" than "everyone" else. The reason for this was that I wanted to secure funding for the subsequent years. I basically met this goal, but in retrospect, doing better than my fellow students in all of my courses was not the best goal to have for a first year of graduate school.
To help remedy point 1, I worked as a paper grader in the math department my first year, while unfunded. This helped me to get to know a few professors better, which I'm sure didn't hurt when I applied for funding the second year.
As for being treated differently: I don't think I was ever treated badly or differently just because I was not funded. There is a range of different types of funding amongst students: Departmental TA, faculty-funded research assistantship, department fellowship (with no teaching, say), NSF fellowship (in the U.S.), unfunded, partial TA (tuition waver, fewer duties, lower pay) etc. Some or all of these might exist in any given department, and some may be considered "more prestigious" than others, but in my experience these differences don't lead to a class system among the graduate students.
Upvotes: 3 <issue_comment>username_4: I tried this. Do this **only** if you do **not** have to get a job to support yourself during school. If you do have to get a job, you will probably only burn yourself out and fail. If you do not need to work to support yourself (you're rich; your parents are rich; whatever), go ahead.
Of course, a T.A. or an R.A. is a type of job, but it's a lot different than a real job.
Upvotes: 2 |
2014/11/13 | 873 | 3,723 | <issue_start>username_0: As we reach the end of the semester, the students in my class are being asked to do their evaluation of my teaching, as is common at most universities. As with most evaluations that I know of, I have been given an opportunity to provide open-ended prompts for the students to respond to.
I would like to craft open-ended questions for the students that will help me to improve my class/teaching style. This class was a large (~100 students) lecture, so I would like to focus the questions on how to improve myself in teaching large classes. Has any research been published showing which questions (or which types of questions) generate answers that are most effective at helping teachers to improve at teaching large lectures? If no research, is there any anecdotal evidence of "most helpful" questions?<issue_comment>username_1: I will provide a no-answer (which someone can delete if need be).
It seems you are primarily looking for feedback on your teaching in order to improve. In so doing you are looking for questions to have the students provide that input. Although I admire your trust in students, it is a bit like the blind leading the deaf. The likelihood that you will receive in-depth feedback on how to improve seems small. I am sure you will get lots of pointers about many details so it is not pointless but you should probably consider one or a few other approaches in parallel.
First, consider looking into a university pedagogics course. Hopefully your university provides such courses for teachers, with practical experience as the foundation of the teaching. Second, ask colleagues to attend your class(es) and provide feedback from their point of view. Third, have a colleague videotape a lecture from the back of the room (communicating with the first rows is not difficult, but the back of the room is different).
Upvotes: -1 <issue_comment>username_2: I asked [a similar question](https://academia.stackexchange.com/q/18816/5962) in April. In my case, I was specifically interested in a very short survey that I could ask students to fill out after every single class so I could make adjustments to lectures, class organization, and readings as the course progressed.
As I detailed in [my answer to my own question two quarters later](https://academia.stackexchange.com/a/29138/5962), I went ahead and used a series of four open-ended questions very successfully. Since late September when I left that answer, I have used those four questions very successfully in another class as well.
Upvotes: 2 <issue_comment>username_3: Actually, based on my experience, it is better to ask students the questions at the end of the **next** semester.
During the semester, an average student usually concentrates on passing the course, rather than thinking about the outcomes of the course.
I think one realizes what the course gave him/her after about one semester.
And this is the time that they use their knowledge of your course to understand or pass another course.
Of course, the questions are highly dependent on what you want to improve.
However, it is always more clear what to improve when the students have a chance to use the course outcomes without any expectation of the grade.
If a student says "I wish you had underlined the importance of Unit 6; then I could have understood XX201 better", this is good feedback, whereas
"This course is sooooo hard." is not.
As for the questions, this is what I ask my students after one semester. Not as a questionnaire, but face-to-face:
1. Do you use what I've taught you for this semester?
2. Are there any redundant topics that I've covered?
3. How would you study if you were to take the course this semester?
Upvotes: 3

2014/11/13
<issue_start>username_0: How best to present long equations in two-column papers?
I've tried splitting them in two or more lines along operators, but that still looks a bit weird to me, especially when parentheses have to be carried along across the lines. Also, I've considered stretching them across both columns, but that seems only an acceptable solution if the equation is of outstanding importance, e.g. the final result and not some middle section of a proof.<issue_comment>username_1: When I have had occasion to deal with obnoxiously large equations, I find that there are four strategies that do well for me. In order of readability, they are:
1. Shrink the font: if you are allowed (and many venues do allow this), you can usually shrink the font on an equation a few points without affecting readability.
2. Map separable terms of the equation to new variables, which can be given their own independent definition lines. This can really help readability in a complex equation as well.
--- *The line of desperation* ---
3. Break the equation across two lines: this works up to about 1.6 lines worth of smaller-font equation. When combined with adjusting font size, you can often adjust where the break occurs to make it look reasonable.
4. Move the equations to a full-width figure, where you can play all of the same games.
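For strategy 3, here is a minimal LaTeX sketch of how such a break is commonly typeset (this assumes the standard `amsmath` package; the equations themselves are just illustrations):

```latex
\usepackage{amsmath} % in the preamble

% multline: first line flush left, last line flush right,
% with the break placed before a binary operator.
\begin{multline}
  f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 \\
       + a_4 x^4 + a_5 x^5 + a_6 x^6
\end{multline}

% split: aligns the continuation on a chosen operator and
% gives the whole display a single equation number.
\begin{equation}
\begin{split}
  g(x) &= \bigl(b_0 + b_1 x\bigr)\bigl(c_0 + c_1 x\bigr) \\
       &\quad + \bigl(d_0 + d_1 x\bigr)e^{-x}
\end{split}
\end{equation}
```

Note that delimiters spanning the break can be sized manually with `\bigl`/`\bigr` (rather than `\left`/`\right`, which cannot cross a line break), addressing the problem of parentheses carried across lines.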
Upvotes: 4 [selected_answer]<issue_comment>username_2: What a programmer would do is break the formula into sub-functions along boundaries that reflect the way the formula itself breaks up into individual concepts, define those, and then define the top-level formula in terms of those. I can't see any reason that wouldn't work here, at least to some degree...
(This is like @username_1's suggestion to define new variables, but taking it one step farther to point out that when several of those variables are of the same form, you can define a parameterized function rather than having to spell out every one.)
In my experience, what a pure mathematician would do is the same thing, but they'd call their functions operators and assign them symbols rather than names. :-)
Upvotes: 1 <issue_comment>username_3: I suggest you check chapter 3.3.4 in the book [Mathematics into Type](https://www.ams.org/publications/authors/mit-2.pdf) published by the *American Mathematical Society* (AMS).
The book sets out specific rules for breaking equations and for aligning them after the break. The rules are too complex to reproduce well here, so anyone interested should download the book using the link above for reference.
Upvotes: 3

2014/11/13
<issue_start>username_0: Is it unethical to refer to solutions to assignment questions that have been asked at other universities before? The questions are the same word for word.
EDIT:
A lot of the answers are very helpful. Most of them revolve around introspection and the classical definition of plagiarism. I agree with that. I know I am digressing from the original topic and may seem to be justifying copying, but at the end of the day I have to pass the course. I spend hours trying to learn all the background concepts and then try to apply them to the assignment questions. That's what I have done till now. I don't always get them correct. Many of my classmates get the solutions online, rephrase the wording, and then submit it. It's a no-brainer: they end up getting more marks than me. Sometimes (often, in fact) it feels really bad. Am I just being stupid in that case? Should I follow their approach when I am not able to get some questions, or just to verify my answers?<issue_comment>username_1: Yes, it is unethical. Copying the answer from a solutions manual is considered plagiarism, even if it's from another university's website. The questions and solutions are likely part of a question bank belonging to either an educational group or the textbook publisher.
Additionally, there is no guarantee the answer key is right. For example, one answerer to another plagiarism-related question [said this](https://academia.stackexchange.com/a/30012/22013).
>
> Then I went to Yahoo Answers, made a bunch of fake accounts, and posted tantalizingly wrong answers to all of my own HW questions. I have told all subsequent students not to google the HW answers because there are wrong solutions out there.
>
>
>
I'm not too sure about whether or not checking your answers once you've done the work yourself is unethical, however. That's a gray area for me that someone with more experience in academic misconduct might be able to help cover.
Upvotes: 5 <issue_comment>username_2: If you get a solution from another school (or a previous year, as questions are often reused on problem sets), it's no different than getting a solution from another student in the same section who happened to finish the problem first. In other words, it is unequivocally cheating, unless there is an explicit policy to the contrary.
Upvotes: 4 <issue_comment>username_3: Ask yourself: would you be comfortable telling your professor that you got the answers from a website? Do you think she or he would think you did a good job with your homework if you copied it from a website?
The problem is much less about whether there are rules (and there are, no doubt) than about the intended purpose of homework, which is to help students learn. If you don't learn from your homework, you're not doing it right.
Upvotes: 3 <issue_comment>username_4: I do not view it as generally unethical to **refer to** these solutions. This situation is more complex than I think some other answers have admitted. Here is a list of claims:
1. In an ideal world, the point of homework is for the student to learn the material.
2. In a perfectly ideal world, we would not need to grade homework, because students would do it on their own to master the material. They might refer to other people's solutions to see if theirs are correct, and that would be fine.
3. Experience shows this world is not perfect. Students will often skip ungraded homework, and their learning and exam grades will suffer.
4. So instructors assign homework for a grade. But this isn't because the grade is really important: it's because we want the students to do the homework and learn the material!
5. Some students then get the idea that the grade is the real goal of the homework, and simply copy their assignment from others. Professors often find this unacceptable.
One important point that others have answered is that, if you are going to turn in the homework, what you turn in should reflect your own understanding of the assignment. But, equally importantly, **it is important to let yourself struggle with problems for a while before looking up the answer.** That is the only way to really learn how to solve problems.
Most professors accept that the internet exists - we know you can look up other people's answers. It used to be that fraternities had giant files of old homework and exam answers for this purpose (maybe they still do). And students study in groups all the time - research shows study groups can dramatically increase learning. So getting help is not a bad thing.
But you don't want to get help too quickly. **Make a genuine effort to answer the problems yourself first.** If you find that you are looking up the answers to all the problems (even the easiest ones), then something is off - try going for more tutoring, or studying more before doing the homework.
If you find that you occasionally need to look up one of the most difficult problems, that's perfectly normal (but it still wouldn't excuse directly copying the solution into your homework, of course).
Of course, the usual caveats apply: some professors may specifically tell you not to collaborate with anyone or use any other resources. But most professors know that students usually collaborate with each other on homework (e.g. study groups) and know that students can look up answers using other resources. We have no problem with that, as long as each student's submission reflects their own understanding in the end.
Upvotes: 6 [selected_answer]<issue_comment>username_5: The value of university is the learning. So the point of homework is not to solve the task but to learn how to solve the task. If you take a shortcut, not only is it unethical, but you have also cheated yourself out of your actual goal!
What your classmates do is irrelevant; they won't be there with you in your career when you need to call upon these skills.
So the question becomes more obvious: did this additional material help your understanding where it was lacking, or did it make the question so much easier that you lose the benefit of working out how to solve the problem yourself? You know the honest answer to that.
Upvotes: 0 <issue_comment>username_6: Either the policy is, "Do your homework however you like, and the teacher will grade it to let you know if you got the right answer," or the policy is "Homework is a graded assessment that is used as part of your overall course grade. Your homework is subject to the honor code / academic integrity rules / ... just as if it were an exam."
If the former, it's up to you to decide what helps you learn. If it's the latter, you're cheating.
Upvotes: 0 <issue_comment>username_7: If you copied it from another classmate, is it cheating?
--------------------------------------------------------
***Of course it is*.**
----------------------
Some other student wrote the code and you're copying it, **so you are also cheating.**
If you are unable to solve it yourself, you need to seek help from the professor.
***A personal case in point:***
My CompSci teacher gave me an F once, for allowing someone else to copy **MY** code.
He simply wrote on my printout:
"Did copying from X help **you** learn anything?"
I explained to my teacher the circumstances. She had missed classes due to a death in the family. I tried to explain the assignment to her, but it didn't sink in. So, I shared a hard copy of my code, as I had expected her to read my code, and try to understand how it worked. Instead she typed it back in verbatim. So, she learned nothing beyond how to also get an F on a coding lab assignment.
**My F did not get changed, and I agreed with him on his decision.**
It certainly taught me a worthwhile lesson.
Hopefully it will help you too, without an F.
Upvotes: 3 <issue_comment>username_8: Ideally, your professor should have a policy about this. For example, here is mine. (It gets adapted a bit for each course, based on things like whether or not there is a textbook, or whether the course has TAs.)
>
> **Homework Policy:** You are welcome to consult each other provided (1) you list all people and sources who aided you, or whom you aided, and (2) you write up the solutions independently, in your own language. If you seek help from mathematicians/math students outside the course, you should be seeking general advice, not specific solutions, and must disclose this help. I am, of course, glad to provide help!
>
>
> I don't intend for you to need to consult sources (books, papers, websites) outside your notes and textbook. If you do consult such, you should be looking for better/other understanding of the definitions and concepts, not solutions to the problems.
>
>
> You MAY NOT post homework problems to internet fora seeking solutions. Although I participate in some such fora, I feel that they have a major tendency to be too explicit in their help; you can read further thoughts of mine [here](https://math.stackexchange.com/questions/190453/is-it-morally-right-and-pedagogically-right-to-google-answers-to-homework/190558#190558). You may post questions asking for clarifications and alternate perspectives on concepts and results we have covered.
>
>
>
If your professor does not have a policy, your university probably has a default one.
Upvotes: 2

2014/11/14
<issue_start>username_0: Can you please let me know how the following TOEFL scores are usually judged in graduate (mainly master's) admissions if (a) the minimum requirement is 100, or (b) the minimum total requirement is 80 and the minimum speaking requirement is 24?
Reading & Listening: 29
Speaking & Writing: 20
I probably would not have the chance to provide this explanation to the admissions board; however, it may be worth mentioning here that I communicate very well in my native language and can speak better English without exam pressure. I have learned English on my own and have not been able to practice speaking/writing enough.<issue_comment>username_1: This largely depends on the school. Some schools treat this as a strict requirement; at others, a good profile may compensate for lower TOEFL scores. In either case, the department secretary/chair (or the graduate studies office) is the one who has the answer.
Upvotes: 0 <issue_comment>username_2: Speaking ability is considered very important; the TOEFL is used to determine if you can speak well enough to teach as a TA at US universities. Keep in mind teaching is also high pressure, like taking an exam.
These scores might do better with degree programs which do not require the graduate student to teach.
<http://www.ets.org/toefl/ibt/scores/understand>
Upvotes: 2 <issue_comment>username_3: Besides the aspects of TA roles, there are other issues with both speaking and writing. On speaking: will you be able to participate properly in class discussions, and will you be able to ask enough questions when stuck on a topic?
On writing: most assessment will be in written form, and the inability to communicate clearly in written English can cost you a significant number of marks.
Besides this, you need to think about your ability to listen to spoken English. Listening and reading are separate skills, and the inability to consume spoken English can entirely undermine your studies.
Based upon my experience with master students (we increased our requirements at some point, computer science is actually remarkably dependent on good English), you really do not want to skimp on the English requirement. Poor English tends to result in poor or failing results, despite the student's inherent capabilities.
Finally, most programmes **want** to admit foreign (profit making) students. Language requirements tend to be absolute minimums and going below them tends to be inadvisable from both sides (student and university).
Upvotes: 0

2014/11/14
<issue_start>username_0: I am in the process of submitting thesis corrections and attempting to add sufficient reference information for our school's Thesis Office. They are requesting that I add in the **editors** for Conference Proceedings.
Occasionally, I can find the explicit listing of editors for a conference (I am looking at Control Systems conferences, such as HSCC, ACC, ICRA, etc). With institutional access, I can find the Proceedings on sites such as IEEE Xplore, ScienceDirect, etc., and some of the sites list the editorial boards directly on the site. Others, however, may not list this information, and I try to look at the electronic material available for the proceedings, but cannot find a reliable method to find the editors to add to my reference.
May I ask if anyone has suggestions for reliably finding the Editorial Board? Should I just contact the Program Committees?<issue_comment>username_1: The "editors" for a conference, such as they exist, would typically be the program chairs (if that position exists), or the general chairs (if it does not). This should be listed on the conference web site or the front matter of its proceedings.
Upvotes: 3 [selected_answer]<issue_comment>username_2: One possible way is to look at the indexing engines where the conference is indexed. Some databases give you a direct BibTeX or XML file for citing the conference proceedings; such a file should contain the editor/chair information. [Here](http://dblp.uni-trier.de/rec/bibtex/conf/aaai/2014) is an example.
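To illustrate, a `@proceedings` BibTeX entry typically carries this information in the `editor` field. The entry below is a made-up sketch (key, names, and title are placeholders), not a real record:

```bibtex
@proceedings{example-conf-2014,
  editor    = {Jane Doe and John Smith}, % program/general chairs acting as editors
  title     = {Proceedings of the Example Conference on Control Systems},
  publisher = {IEEE},
  year      = {2014}
}
```

If the database's exported entry lacks the `editor` field, it can be added by hand once the chairs are identified from the proceedings front matter.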
Upvotes: 1 <issue_comment>username_3: The editors should be the authors of the proceedings. For proceedings published by Springer, e.g., in LNCS, they are called "volume editors". ACM lists the General and Program chairs of the conference, without calling them "editors". On IEEE, you can sometimes see something like a "title page" or "PC Credits" among the first documents in IEEE Xplore, but I could not find such a document for ICRA.
So, to summarize, there would be no official "editors" for IEEE conference proceedings like ICRA.
Upvotes: 2

2014/11/14
<issue_start>username_0: I'm applying for a master's program, and one of the required documents is a motivational letter, in which I should briefly explain the reasoning behind my choice of program.
I'm using a LaTeX template which has a nice place to put a logo, and I'm thinking about putting the university logo there. Is this allowed? Is it common? Is it a good idea?
Any help would be greatly appreciated.<issue_comment>username_1: No, you should not put the university logo in the letter. You are **not** representing your university in an official capacity in a statement of purpose, and therefore using the university's logo would be inappropriate, as you would be suggesting an official imprimatur for your work.
Upvotes: 6 [selected_answer]<issue_comment>username_2: In fact, using the university's official logo without their permission may be a trademark violation, if they want to make a stink about it. Either get permission (unlikely in this case), or don't.
Upvotes: 1

2014/11/14
<issue_start>username_0: I've read a lot on what to include and not to include in a personal statement. I have some short, specific questions in that regard.
* At my core, I think I want a PhD more than anything because I want to deeply challenge myself. Obviously, I have a passion for my field as well, but if someone asked me why I was doing it I'd say it's because of the need to push myself. Is this something to say in a personal statement, or am I better off leaving that out?
* One school I'm applying to is a top-10 school, and it's a bit of a reach for me. Is it a bad idea to say that I've always wanted to get into that school specifically? Will that imply that I'm applying for the wrong reasons? This school is right next to where I grew up, and I've always wanted to study/research there.
The personal statement is the only thing holding up my applications. I feel like I'm afraid to say a lot of things because I don't know if they'll be perceived in a positive or negative light.
Thanks for any help.<issue_comment>username_1: I doubt these sorts of statements matter very much. They are basically fluff and would presumably occupy just a few sentences in your statement of purpose. However, I'd be inclined to omit both of them, to avoid potentially giving the wrong impression:
>
> At my core, I think I want a PhD more than anything because I want to deeply challenge myself. Obviously, I have a passion for my field as well, but if someone asked me why I was doing it I'd say it's because of the need to push myself. Is this something to say in a personal statement, or am I better off leaving that out?
>
>
>
The difficulty is that there are many ways to challenge yourself. You might run a marathon, master a foreign language, or raise lots of money for your favorite charity. Because of the many ways to satisfy a desire for challenge, this motivation doesn't necessarily lead to stability over time. Once you have completed your Ph.D., your academic ambitions may be satisfied and you may feel more attracted to a different challenge. From your advisor's perspective, that would be a suboptimal outcome. The purpose of a Ph.D. is preparation for a scholarly career, not checking off "get Ph.D." from a list of unrelated challenges.
Wanting to push yourself certainly isn't a bad thing. If you didn't enjoy a challenge, then getting a Ph.D. would be a bad idea. However, a desire for challenge is not in itself a very compelling reason to go to grad school, so I'd instead highlight the reasons you chose this particular challenge.
>
> One school I'm applying is a top 10 school, and it's a bit of a reach for me. Is it a bad idea to say that I've always wanted to get into that school specifically? Will that imply that I'm applying for the wrong reasons? This school is right next to where I grew up, and I've always wanted to study/research there.
>
>
>
This won't help you get admitted: fulfilling your childhood dreams is not one of the admissions committee's goals, nor is it relevant to whether you'd make a good grad student. It could come across as off putting (for example, if you seem too interested in the university's overall fame), so it would be best omitted.
Upvotes: 2 <issue_comment>username_2: That you want to push yourself is a potentially good quality. If you want a prospective university or employer to see this quality, go ahead and include it, but think about how this can be translated into something that will also benefit them. What makes a person who pushes themselves a good candidate? Sell it from that angle.
Upvotes: 0

2014/11/14
<issue_start>username_0: I have a general inquiry regarding the impact of past misconduct in different courses of study.
To provide an example: if a student takes a course and is removed because of poor performance (or even if they are not removed and have had a warning) could this be used against him/her in a different course in the future? Or does the code of conduct for each university specify that each course constitutes a different "chance". What about when it is not about removal, and is about academic misconduct or general behavior?
In cases where the gap between both courses is quite high (say a decade), would it be safe to presume that misconduct as (for example) a teenager could not be used without good reason in the future, or could it be waived altogether?<issue_comment>username_1: Are you talking about academic misconduct (cheating) or simply poor performance?
With respect to cheating, policies vary from institution to institution. However, in my experience it is common in the US to have a central authority responsible for recording and reviewing cases of academic dishonesty.
For example, on my campus faculty are required to report incidents of academic dishonesty (cheating, plagiarism, forged signatures on registration documents, etc.) to the Associate Vice President for Academic affairs. Individual faculty can punish students by assignment of failing grades, up to and including failing the course. There is a formal process by which students can appeal these decisions. However, if a student has repeatedly violated the policy they can be punished further by suspension or expulsion from the institution.
A huge advantage of this approach is that if this were left up to individual instructors a student could cheat repeatedly in multiple classes and never suffer any punishment worse than a failing grade in an individual class. In the other direction, it provides students with due process if they think that a faculty member has acted improperly.
A suspension or expulsion for disciplinary reasons will typically remain on the student's transcript forever.
If you're talking about poor academic performance (low grades), that can be a very different issue.
It is certainly the case that poor academic performance in the past (having "flunked out" of a program) can affect your chances of being admitted to another program even many years later. The application process nearly always requires the student to provide transcripts from all colleges that they have attended.
In the US, a student who has "flunked out" typically can apply for readmission to the university after some period of time. Many institutions have policies that explicitly disregard academic failures (and the associated low GPA) that happened in the distant past e.g. a student who flunked out 5 years ago or more may be readmitted and start with a new GPA. However, under such policies the student's transcript will still typically show the earlier academic failure.
Upvotes: 2 <issue_comment>username_2: My UK university centrally tracks all academic offences, both major and minor, and I think this is pretty common. We are not allowed to release information about academic offences to employers or other universities. If the new course is at a new university, it is up to the applicant to inform the university. There may be a question on the application and the applicant must answer it truthfully. If the new course is at the same university, it depends on how well they track things, in this case, how well they tracked things in the past.
Past occurrences of academic misconduct are only used in deciding penalties for current occurrences of academic misconduct. Hopefully, a student would not commit academic misconduct again. If they did, the board may have discretion to ignore some past events if a large amount of time has elapsed. This is going to be specific to each board. Even if the board is allowed, discretion, they may not exercise it.
Upvotes: 2

2014/11/15
<issue_start>username_0: At the moment I am involved in neurobiology research trying to assess the feasibility of using emotional responses to elicit distinct EEG (brain wave) patterns from the brain using static images. Distinct brain waves can be used as control signals to perform distinct actions traditionally done using a remote controller (i.e. flipping through the channels).
Conventionally, people have tried using different images, such as a flower or a tiger, a spaceship or a musician, in an attempt to trigger different responses. The team believes that these input stimuli are too "lite" to produce consistent and reliable results. We wish to investigate whether pornographic images, or images depicting violence or death, will produce even stronger responses. Nothing too wild, but it definitely involves things that people do not talk about while doing this kind of research.
Since there is a dearth of publications on this technique for eliciting EEG responses, the best way for us to meet the deadline is to go ahead with these "non-traditional" trials. On the one hand, I think my project supervisor will be shocked that we have even thought about using this method, or he will reject it outright, claiming an ethical issue; but we are hopeful that a small breakthrough may be reached if the team follows through with this experiment.
What should the team do in this case? Should we go through with our experiment and jeopardize our reputation or should we just give up on this train of thought and risk failure by continuing with the methods that are likely not to yield any useful results?<issue_comment>username_1: First, do I understand correctly that a large part of your question is whether you should avoid discussing this significant and controversial change in research methodology with your project supervisor *because* you think that he would reject it outright for ethical reasons? If so, what am I missing that makes this even conceivably a good idea? Most supervisors will have a huge problem with that behavior *whether it yields breakthrough results or not*.
Even assuming that you have the primary authorization to run the project, your idea of, under deadline pressure, just taking a quick shot at showing project participants images of pornography, violence and death with the goal of getting suitably vigorous brainwave activity again sounds almost too-bad-to-be-true. It is in the nature of disturbing images that people who are shown them may be...disturbed. That's not a side-effect: that is an essential part of the effect you're trying to produce. People can react in unpredictable ways to being shown such imagery: if it triggers, say, depressive or violent behavior in a subject, then....yikes, you could be in so much trouble. Compound that with not running it past your supervisor: yikes squared.
Once more: **talk to your supervisor**. It is really distressing to me that you see a *possible* ethical objection and are asking other people on the internet whether or not it can be brushed aside if the results are nice. This is doubly wrong-headed: on the one hand, you don't actually know if your methodology would be ethically objectionable: scientific research involving pornographic or violent images is not *inherently* ethically objectionable; it's just potentially sensitive and needs to be handled with extreme care and professionalism. But in case what you're suggesting turns out to be truly ethically objectionable according to the standards of your discipline: **of course** you're going to have severe difficulties publishing such work, and publishing it could do you more harm than good. Exchanging ethical integrity for better research results is a terrible proposition....right? Is that really news to you?
Finally:
>
> Should we go through with our experiment and jeopardize our reputation or should we just give up on this train of thought and risk failure by continuing with the methods that are likely not to yield any useful results?
>
>
>
Meaningful research inherently carries the risk of failure: that is not a peculiarity of your situation. But you present a dichotomy of research failure versus showing pornographic and/or violent images. I'll take door number three: maybe it's just my own prejudice -- none of my academic successes have inherently involved violence, pornography or death -- but I will suggest that another solution may be lurking out there somewhere.
Upvotes: 5 <issue_comment>username_2: All research involving EEG on human subjects will require [IRB](http://en.wikipedia.org/wiki/Institutional_review_board), or equivalent ethical review board, approval in order to be published in a reputable journal. Studies involving pornography, violence, and death are not that uncommon and your IRB will have procedures in place to deal with these types of studies. They will likely require you to provide clear information about what the subjects will see during the experiment if it does not impact your hypothesis. They may require you to prescreen subjects for past experiences that may make the images more salient, again if it doesn't interfere with the hypothesis. Finally, if the images are disturbing, they will likely require you to have a support mechanism included in your debrief. Often, if the subjects are limited to students, understanding how to get emergency access to a psychologist is enough. In extreme cases they may require a psychologist to be on site.
NEVER do research without ethical approval. There is nothing in your proposal that sounds so shocking that it should cause a supervisor to think less of you. He/she may not want to go down that road on scientific or personal moral grounds, but you should still feel comfortable raising the idea with your supervisor.
Upvotes: 6 [selected_answer]<issue_comment>username_3: I'm going to approach this from a different angle as I think your premise is flawed.
Graphic images will only produce a more "energetic" response if the subject's threshold is below the image's level, so you will have to do a lot of pre-screening. This will, by definition, prepare the subject for what they are about to see and lessen the impact, thus negating exactly what you are trying to accomplish.
Consider these (extreme, but that's what you want) examples:
**Image**: teen girl getting railed by 4 very well endowed men.
**Subject A**: Average soccer mom. Reaction: vomits on EEG machine.
**Subject B**: Retired porn producer. Reaction: None. Been there, filmed that.
**Image**: Messy truck vs. motorbike accident scene.
**Subject A**: Average soccer mom. Reaction: vomits on EEG machine.
**Subject B**: Ambulance attendant / policeman. Reaction: None. Saw worse than that yesterday.
**Image**: Flowers.
**Subject A**: Average soccer mom. Reaction: None.
**Subject B**: Trauma surgeon. Reaction: Can't breathe. Hyper-allergic to pollen, image triggers psychosomatic reaction as he almost died from the real thing last week.
**Image**: Clown from Poltergeist.
**Subject A**: Average soccer mom. Reaction: Smiles. She likes horror movies, and that's a classic.
**Subject B**: Battle-hardened soldier. Reaction: Panic attack - watched movie on a sleepover when he was 9 and now has coulrophobia. Can barely handle McDonalds.
In summary, a neutral image familiar to everyone will produce a consistent response. Going too far off the well-trodden path will become increasingly unpredictable but will not always result in *stronger* results.
Upvotes: 3 |
2014/11/13 | 932 | 3,908 | <issue_start>username_0: Is it okay to mention in my statement of purpose for admission to grad school in math, that my undergraduate major is engineering because my parents wanted me to do it?
I initially wrote my statement without including my motivation for my undergraduate degree, but people said it seemed too abrupt. However, without it, I'm not sure how to explain why, despite being interested in math, I still chose engineering as a major.
**EDIT**: Thank you for your responses. I should mention that, after my undergrad degree, I have managed to enrol in one of the top master's programs in math in my country and I'm doing well there, and also that there was not really any scope to learn pure math at my undergrad university.<issue_comment>username_1: Do not say:
>
> "I did engineering undergraduate because my parents made me."
>
>
>
Why?
Because you are applying for graduate school. The admissions committee wants to see adults. Referring to your parents (at all) makes it seem like you are not independent -- that your parents run your life. Whether or not they do, it does not benefit you to give such an impression. You don't want to put the idea in their heads, whether or not it is true now (or then).
I suggest, if you really must mention it at all, something along the lines of:
>
> "My time as a Engineering undergraduate has made clear to me my true desires. While engineering is a interesting and worthy subject, my passion is for mathematical side I saw during my studies."
>
>
>
Upvotes: 5 [selected_answer]<issue_comment>username_2: If you change subjects, consider what you did get out of your first degree. It has equipped you with skills, even if you didn't enjoy it. You go into your new field with a different background where you can bring to bear perspectives others do not have. Don't look at the negative, look at the fact that you realised your true vocation but you do have other skills that you picked up on the way, which are never worthless. Academe is about the ability to process existing knowledge and apply/adapt it to create new knowledge; if you can do that in a field you didn't enjoy, you can do it in a field you do. That your parents pressured you is bad, but we all make mistakes in life. You will be assessed on your ability to learn from your mistakes, not the fact you have made them.
Upvotes: 2 <issue_comment>username_3: You are right that some people will see a first degree in engineering and wonder something like "if this applicant likes (and is good at) math enough to do grad work in it, why not do the undergrad in it too?" Where you are wrong is in thinking that answering that question will help your application at all. (The specific answer of "my parents made me" will probably hurt your application, but I will say that in my opinion there is no answer that will help.)
What will help? Sentences that rebut the worries or doubts they may have about you:
* engineers don't learn enough math to do well in grad school
* people who don't know what they want at 18 can never pursue their true dream
* people who change majors don't have the commitment and passion we need in this field
So, focus on what, as a math-loving person, you got from your undergrad work. Point to the courses you did well in, the electives you took, the projects you worked on, that helped you understand that graduate-level math was right for you. Talk about how committed and passionate you now are about math - and don't worry about whether teenage-you was committed and passionate about engineering. Admissions committees are aware that undergrad choices are made for a variety of reasons, including not knowing much about specific undergrad programs and not having much freedom of choice. Talk about your purpose now, not who you were four years ago. That four-years-younger person isn't applying to grad school; today-you is.
Upvotes: 4 |
2014/11/15 | 1,061 | 4,612 | <issue_start>username_0: I am working in academia in a position that is similar to an instructor. The job is great, but I am working under a short-term contract system with last-minute renewal notice, so I am trying to move to a more stable position.
Since the positions I am looking for are mostly in other countries, my interviews are held by video-conference. Some allow Skype interviews, but others ask for a "professional" video-conference, with high-quality video, and good and stable internet connections. Basically, they want me to use the video-conference room in my current institution.
I was wondering how people usually organize this kind of interview. My question is two-fold:
**Questions**
1. How do institutions feel about letting their employees use their video-conference resources for job prospecting?
2. How do researchers who are not currently working for an institution find the resources to participate in this kind of interview? High-quality webcams are expensive.<issue_comment>username_1: Regarding institutions, I have gone through the regulations in mine (the Swedish equivalent of a national laboratory), and they don't specify any usage limitations. This makes sense: after all, most of the time the equipment sits unused, and you don't wear it out by making a call. I would assume that someone who makes such heavy use of it that it disrupts others will be asked by the administration to explain it. But a few interviews will certainly not be that disruptive.
YMMV, but if the regulations don't forbid it, you are probably fine.
Regarding the second one, an anecdote: the only online interview I had for PhD positions was done audio-only over Skype. The professor didn't have a working video setup.
An institution *demanding* to interview you over HD video sounds suspicious.
Upvotes: 1 <issue_comment>username_2: Since I do much of my work by video link, I have some experience with the problems of conducting business over video-conference.
For permission, in universities it will generally be fine as long as you aren't disrupting the work of others. If there is any rule regulating video, you should be aware of it (e.g., some non-university research institutions don't allow video-conference at all, and make very certain their researchers know).
For quality, I would guess that their actual requirement is not HD, but that they want to be able to read your expression as you talk. Here, the primary limitation is not your camera, but your network connection. The built-in camera for your computer or any cheap webcam will generally produce much higher quality video than you can transmit effectively. Things to do to ensure a good connection:
* Use a wired link (e.g., Ethernet) rather than wireless if possible, as your connection is likely to be better and more stable.
* Use headphones and a microphone (cheap earbuds with a built-in mike will do): the headphones will prevent echo from your speakers, and the microphone ensures consistent pickup of your voice.
* Different software provides different quality tradeoffs optimal for different connections:
+ Professional videoconferencing hardware (e.g., PolyCom) is nice if your institution has a room and you can get help using it. Its learning curve is a pain, and it's not any better quality than...
+ High-end videoconferencing software (I've had good experience with [BlueJeans](http://bluejeans.com/)), provides a fantastic quality connection but has brutal demands for bandwidth and processor power. If I use it for more than ~2 hours on my laptop, it overheats. Note that you don't have to buy this software: generally, only the meeting organizer needs to, and you can connect via a web link that they send you.
+ Skype is good for mid-grade connections: it provides nice video when given a consistently good link, but degrades badly if the link is inconsistent.
+ Google Hangouts is good for low-grade connections: its video quality is never particularly good, but it will get *something* through.
The most important thing is the high-bandwidth link: with a good enough link, Skype is generally sufficient within a continent. For very long distance connections (where the limitation is lag and undersea cables) you may want to use higher grade software. If you can't get access to a place that gives a strong connection, arrange the connection for a time when you are likely to have little competition with others for bandwidth. Early morning is generally best: during the day work activities consume bandwidth, and in the evening people are watching video.
Upvotes: 4 [selected_answer] |
2014/11/15 | 2,069 | 9,092 | <issue_start>username_0: A [recent question](https://academia.stackexchange.com/questions/31764/what-to-do-with-teachers-who-think-their-subject-is-extra-special-and-gives-st) asked how students should deal with professors who "think their class is extra special". Putting aside the question of how good students are at judging what an appropriate workload is: how should other professors handle this?
For instance, I've had students whose previous class routinely ran late, causing them to be late for mine. I once taught a class where several students had a professor immediately before me who decided, in clear violation of university policy, that their midterm exam would take place during a double length class period, which meant that would miss my entire class (which was in fact also my midterm). (This is slightly less crazy than it sounds, because it was a summer course.)
Let's stick to the case where the other professor is actually violating university policy. Students are usually very reluctant to try to enforce such policies on their professors, and I don't think I could in good faith encourage them to: even when there are written policies about these things, it's often unclear how a student would go about enforcing them, it's not clear that the relevant chair, dean, or provost actually would enforce them, and there's real potential for negative repercussions to the student.
I could be inflexible, but this seems unfair to the students, who are then stuck between two inflexible authority figures. Even if I'm right and the other is wrong, students shouldn't bear the brunt of that.
Finally, I could seek to enforce the policy on the other professor, but as mentioned, it's not clear how one does that, especially if the other professor is in a different department. (And since it's often relatively senior professors doing this sort of thing, at that point *I* worry about negative repercussions.)<issue_comment>username_1: Where I did my undergrad, students' complaints were common, and now the dean has a zero tolerance policy towards extending compulsory lessons or exams beyond the allocated time. It takes them only one student complaint to get involved.
The most immediate action they can take is simply to not allow the booking of the room, which is quite effective in itself, and to talk to the professor in question. In case of repeated transgressions, they can attempt other actions, like giving that subject to another department in the following years.
So, the best way you can solve this is to get the people in power involved, and show that the students don't need to go through many hoops to get their rights enforced.
If you want to ensure enforcement, you need the students, because they are the only ones who will surely know when there is a collision. Make sure they know there are channels open for them that are anonymous and swift.
Upvotes: 4 <issue_comment>username_2: >
> Finally, I could seek to enforce the policy on the other professor, but as mentioned, it's not clear how one does that, especially if the other professor is in a different department. (And since it's often relatively senior professors doing this sort of thing, at that point I worry about negative repercussions.)
>
>
>
I see no alternative to "enforcing the policy on the other professor". Seniority does not confer the right to violate university policy. In my opinion you should not let hypothetical concerns about your career or your tenure case stop you from standing up for your students in a situation in which policy is clearly on your side: assistant professors are university faculty, not captives who hope to be rewarded in the future for their docile behavior.
The way to bring it up is to communicate with the other professor as soon as possible. I would recommend speaking in person or over the phone, as email renders trivial a large variety of passive-aggressive behavior: e.g., they might not respond at all, leaving you to wonder how long to wait. If you talk to someone face-to-face they have to either be reasonable or display their unreasonableness directly to you. How do you look another faculty member in the face and say "I'm sorry that students will be missing your midterm, but it is critical that my midterm last double its scheduled length"? You should come to the meeting knowing the relevant policy cold. You should bring printed copies of the policy, but only take them out if things are not going your way.
You should continue talking to this person until you have conveyed that their proposal is against the policy, is specifically detrimental to your course, and that students are being caught in the middle. If they agree to that or at least acknowledge receipt of the information and still are intransigent, then you should end the conversation, calmly, by saying that you will have to take the situation up with the administration.
I would then bring the matter to your department head and see what is suggested. If the faculty member is in a different department then it may be in order for the two department heads to have a discussion. If the department head does not take ownership of the issue you should ask whether he [I happen to know that the head of the OP's department is a "he"] wants you to take up the issue with the higher administration. If not, then as an assistant professor this may be the place to drop it, but again you should communicate clearly that policy is being violated and students are suffering. Or you could take it up with the higher administration: I might have done that as an assistant professor. (As a tenured associate professor I would probably do it now, and would not worry about it jeopardizing my future promotion or dealings in the department. On the other hand, the egregious behavior you described would probably not even be attempted at my large public university.) Tenure and promotion is not a docility contest, and "He reported a rule violation" is not a point against someone's case. I think honestly the issue is mostly one of your own peace of mind, so act accordingly.
You certainly have my sympathies: it sounds like the other professor is being both selfish and unreasonable. It's hard to deal with unreasonable people -- you just can't reason with them! -- and if a situation arises in which it is primarily a battle of wills, then the unreasonable person tends to take the outside track. The fact that you care about the students and the other guy apparently really does not could indeed make you blink first. You may for instance end up having to give a makeup exam to some of the students. If so you should clearly document every time you do that and have the individual students vouch for you as well.
Upvotes: 6 [selected_answer]<issue_comment>username_3: Not sure whether student councils are really so unimportant at other institutions, but once again I would advice you to address a student representative from that class (or otherwise from that year). Point out the specific rules they are violating and they should know who to address and how to get the rules enforced, after all, that's a huge part of what the student council does (assuming it's in the students best interest, which this does sound like, keep in mind to always highlight the advantages for the students if the policy is enforced). The advantage is that you are not 'attacking' a colleague and the student council will never mention your name. On top of that - if played well - it will even put you on good terms with the student council (after all, by pointing out the specific policies that were violated you saved them work), which might be useful if you ever make an unintentional mistake somewhere.
Upvotes: 2 <issue_comment>username_4: Henry,
IMO you best bet is to ask the other professor to help you out with an issue. Explain that students are complaining that the "other professor" runs over.
The professor will then either engage with you in resolving the problem or will try not to engage. Either way summarize the discussion back to the professor in an email and try to use the mutually agreeable solution. You may have to try this a few times but if within one or two meetings you don't resolve the problem you can then escalate it to "upper management" with documentation of what has been tried.
If none of this works you could in clear conscious ask at the beginning of a course if your students have "that other" professor before you ad suggest that they not keep both.
Once you show the way other brave souls may follow suit and the peer pressure could evoke a behavior change where your voice could not.
Upvotes: 2 <issue_comment>username_5: As a student, I have e-mailed a professor to point out the difficulty caused by his overrun habit. I was much older than most of the class, which had two consequences:
1. I needed the entire scheduled 10 minutes to get to my next class.
2. I had enough experience to know that sending the e-mail would not affect my grade etc.
Even so, I would have welcomed support from the professor teaching the second class.
Upvotes: 2 |
2014/11/15 | 951 | 3,651 | <issue_start>username_0: I noticed that some teachers announce the grade breakdown in the first class or in the syllabus, while some don't. What's the best practice? If giving the grade breakdown at the beginning is preferable, is it better to use an absolute or relative breakdown?
[Example of grade breakdown announced in the syllabus](http://web.mit.edu/6.005/www/fa14/general/) (absolute breakdown):
>
> Letter grades are determined at the end of the semester. The default
> cutoffs are: a final average of 90 and above is an A, 80 and above is
> a B, 70 and above is a C. These boundaries may be adjusted downwards
> if necessary because of the difficulty of the assignments or quizzes,
> but the boundaries will never be adjusted upwards, so a final average
> of 90 is guaranteed to be an A. The boundary adjustment is done
> heuristically, and there are no grade quotas, no grade targets, and no
> centering of the class on a particular grade boundary.
>
>
>
[Example of grade breakdown announced during the first class](http://ocw.mit.edu/courses/mathematics/18-404j-theory-of-computation-fall-2006/) (relative breakdown):
>
> The first half get A, the second half get B (except in case of failure to try to do the homework or show up at the exams).
>
>
>
I am especially interested in computer science education in the US.<issue_comment>username_1: I think it depends on the situation (assuming there is no official department or university policy on the matter).
For example, I am a young (math) teacher teaching a single course that has 25 students in it, at a college that I have never taught at before. I figured it was unlikely that I would be able to write exams that effectively separated the A's from the (A-)'s (say) based on some numerical scale that I set ahead of time. So I chose to *not* put a grade breakdown on the syllabus.
After each exam, I look at the performance of the students, and I give them an individual grade update containing what I call "a good estimate" of their letter grade thus far in the course. This prevents them from just remaining in the dark all term long with regards to their grade.
I have taught sections of courses at bigger schools where the grade scale is set ahead of time and is the same for all 800 or so students enrolled in the course. This makes sense, as the content of the courses is the same year after year, the exams are similar every year, and in general everything involving the course is somehow standardized.
I would say that, in general, it's fine to not announce a grade scale ahead of time if you don't have to. Just be prepared to have *something* to say about grades, because students will likely want to know. Sometimes I give myself some wiggle room on the syllabus by saying "your grade will not be lower than the following...", so they know that a 90 (or whatever) will guarantee them an A, but they might also earn an A with a score of less than a 90.
Upvotes: 2 <issue_comment>username_2: There will surely be a strong correlation if the decision to be ambiguous on Day 1 about what precise performance will result in an A (sort of an "I'll know it when I see it" nod) is compared to the number of student grade appeals at the finale.
I have taught in arguably the most quantitative of departments at a number of schools, and the policies have always revolved around a fixed basket of accrued points to be earned. Want an A? Then get an A level of points on the assignments throughout this semester.
Few students protest their grade when they come up short. Fewer still try for a formal appeal. None have come close to winning it.
Upvotes: 1 |
2014/11/16 | 912 | 3,888 | <issue_start>username_0: Currently I am working on my Final Year Project. I have worked with the current supervisor for 4 months. In the first month, everything went smooth. Whenever I email a question to her, she will answer it within 3 days. However, as times goes by, she started to ignore my question. This happens from 2nd month onwards till now. I don't know whether she hates my for asking so many questions or simply don't have time.
What should I do to deal with this kind of situation as my Final Year Project's grade depends on her. If I don't have a good relationship with her, I think my grade will suffer.<issue_comment>username_1: Email is a terrible way to supervise work for both parties involved. It's difficult to ask a good question over email and it's difficult to address a misconception over email. From your question, there's no way of knowing which of several types of problems you are experiencing with your advisor. The answer to all of them, however, is to have regularly scheduled meetings that are face-to-face if possible, or over a video link if you are doing distance learning.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I would add that you may want to figure out what mode of communication your supervisor prefers in general, and with regard to advising students on final projects in particular. Pushing for a meeting might work, but it could also add to the supervisor's load if s/he is already overburdened with meetings with advisees. I would recommend to stop by her office, explain that you are looking for some additional feedback from her as you work on completing the project, and ask what she would prefer as as plan for communicating about it. Chances are it could be a mix of email and face-to-face.
Also, you could significantly increase your chances of hearing from the advisor by email if you adapt to her emailing style. People tend to have preferences in terms of how they communicate by email. Some like long, drawn-out emails with lots of detail. Others are absolute minimalists, writing barely a line in response to an inquiry of any length. A good rule of thumb is that if one writes short emails, one also prefers to receive/read short emails.
I do not believe I ever met someone who asked me to write them longer emails! (except my mom perhaps ;) So reviewing your past communication might suggest adjustments you could make in your emailing style, so it is "easy on the eyes" for the advisor. This small adaptation can pay big dividends in the long-term, as you teach yourself to consider your conversation partners' preferences and adapt to them. They will subconsciously perceive correspondence from you more favorably, which in turn will increase the chance of quicker and more positive communication.
This might seem trivial, but many people never intentionally learn good emailing practices. They just assume that if they get responses, their emails must be good enough. However, it does not take much effort to advance from 'good enough' to 'very good', but it could make a difference at critical times in your work or career.
A couple resources:
[Effective E-mail Communication](http://writingcenter.unc.edu/handouts/effective-e-mail-communication/) - guide from the UNC-Chapel Hill Writing Center
[Writing Effective Emails](http://www.mindtools.com/CommSkll/EmailCommunication.htm)
Good luck!
Upvotes: 2 <issue_comment>username_3: 1. they all have their works and their tasks. they are so busy .
2. it difficult for a adviser to just answer your emails and ignore his/her tasks.
3. also the same time, as your project goes on he/she need to bit google-search to help you. this a bit google-search or even thinking about your issue in your in your perspective is few time tackle, but they are really busier to **READ, think, search , TYPE and send** your answer.
So try to see him/her.
Upvotes: 0 |
2014/11/16 | 633 | 2,876 | <issue_start>username_0: After having a paper published, I submitted the preprint to arXiv, but with a different title.
Now, Google Scholar has identified the arXiv version as another paper, and by merging the two articles using the *merge* button, I cannot make the arXiv version appear next to the published paper when someone searches for it.
I guess that Google looks for the title of a paper (in the PDF, not the file name), and decides if this is the same paper. Anyway, is there any way to make Google Scholar understand that the arXiv version is the same paper and show the PDF in the search results?<issue_comment>username_1: In my experience, when you use 'merge', Scholar will not return multiple listings in the main result. Instead, it asks you which version is the 'better' version and shows that version preferentially. If a person clicks on the 'all N versions' button, however, the alternate version should appear in that list.
If you want to have a more explicit statement of the two articles and their relationship, you cannot force the search engine to do it for you---and even if you could, it wouldn't necessarily remain that way next year, since Google is always tweaking their systems. Instead, you should put this on your personal webpage, which (if hosted by your institution) will likely end up high in Scholar's returns in any case.
Upvotes: 3 <issue_comment>username_2: This may not be the answer you're looking for, but one solution would be to update the arXiv paper so that its title agrees with the published paper. (You can change an arXiv paper's title by submitting a revision.) Maybe you prefer the title from the arXiv, but using different titles causes enough hassle and confusion that I can't really believe it's worthwhile.
It's not just a matter of convincing Google Scholar. If they supply a link to a paper with a different title, some users will assume it's an error without looking closely enough to detect that it's really the same paper. The same issue will occur whenever anyone runs across the arXiv paper, since the first heuristic most people use to decide whether two papers are the same is comparing the title and authors.
If you really want to use a different title, you should take every opportunity to clarify the relationship between the papers. For example, the arXiv abstract page and the first page of the article should explicitly state that it's the same as the published paper (and give the citation). But even if you do that, readers will be confused and perhaps a little annoyed. They'll naturally wonder why it has a different title if it's the same paper otherwise, and they may wonder what else you have changed compared with the published version. If there are nontrivial changes, then you should warn the reader, while if there aren't, then it's not good to let readers wonder about that.
Upvotes: 5 |
2014/11/16 | 1,455 | 6,363 | <issue_start>username_0: I understand the concept of (peer-)reviewing as helpful to guarantee a good quality result. Clearly it makes sense that journal articles are reviewed by someone before publication. Yet, what I am still unclear about is who the reviewers are? The focus of this question is not [who can peer-review articles](https://academia.stackexchange.com/questions/19025/who-can-peer-review-articles), as I am not interested in who qualifies for being a reviewer, but rather about **how to find out about the actual people having been involved in the review process?**
It disappoints me to not be furnished with a list of the reviewers as it would help me tell if the article is likely to be well-reviewed or not. In academia, where reputation is paramount, it would seem imperfect if the people behind the reviews are kept secret. Yet I have not yet encountered a list of reviewers for a specific article and the best place to put this information seems to be with the article itself.
Another worthwhile information connected to it would be **the number of reviewers**. After all the more people investing time into a review of some contribtion the higher I assume to be the chances that flaws and problems become corrected and again the more interesting the contribution may become. Since unfortunately there is an excess of publications from people needing to make a career and reading through all of those articles constitutes an obstacle more than an accelaration of the scientific progress.<issue_comment>username_1: >
> how to find out about the actual people having been involved the review?
>
>
>
By and large, you can't. This is guaranteed by the anonymous (or "blind") peer review process used today by most publishers. I guess the main reason for blind reviews are that publishers fear that well-known professors will not be judged harshly by more junior researchers for fear of repercussions.
There are individual publishers out there that share your frustrations with the model, though - most importantly, [PLOS One](http://www.plosone.org) and [PeerJ](https://peerj.com) have recently started to experiment with a semi-open review model, where reviewers can choose whether to reveal themselves to the authors.
Upvotes: 6 [selected_answer]<issue_comment>username_2: Although @username_1's answer is spot-on, I think you are missing the main point. Anonymity is directly linked to any democratic process; likewise, your election vote is anonymous. In this sense, a review is just a vote of confidence in the reviewed article, and it therefore has to be, and remain, anonymous to allow a more objective opinion.
Another point is that anonymity in reviews not only protects the reviewers against repercussions but also protects against nepotism and mutual exchange of favors. Moreover, anonymity ensures that all reviews are treated (almost) equally. A short, favorable review from a professor (who might have said ACCEPT simply because he personally knows the authors) can count for less than an informed, in-depth review, even if that review comes from a PhD student. So, although the editor knows the reviewers, one reviewer cannot dismiss another's review based on their respective status. In this sense, anonymity also protects the reviewed, since if the paper is actually good, it is more likely to be judged on its merits than on the authors' public relations.
Upvotes: 5 <issue_comment>username_3: In terms of conflicts and deliberate sabotage, it is the **responsibility of the editor** assigned to your paper to moderate this. The reviewers and authors are known to the editor, and the editor has the ultimate power to accept or reject a paper. If there appears to be a conflict of interest, or a reviewer is unduly harsh, or misunderstands the content, the editors have the power to overrule the reviewer, and the editor-in-charge has the power to overrule assistant editors.
You will see this in some journals where each paper has a name under "communicated by". It lets the reader know that the named editor is responsible for the review process.
Finally, the number of reviewers *is* disclosed. All journals I've ever worked with have a policy of providing reviewer comments unedited to the authors. Just count the number of reviews you receive. The editors should have no problems telling you how many reviewers were involved; I've frequently received e-mails saying "I got 2 reviews back and am waiting on one".
Upvotes: 3 <issue_comment>username_4: *The short answer is "you can't". Unless somehow required by local law, editors will not reveal the reviewers' names. This answer focuses on the reasons behind this.*
So far, two benefits of blind peer reviewing have been mentioned:
* It (maybe) avoids a situation where junior researchers are afraid to criticize senior researchers.
* It (maybe) avoids the issue of exchange of favors, where reviewers help each other by giving overly positive reviews. This is similar to a benefit of secret ballot voting, which helps prevent "trading" votes with another person, because the secret ballot makes it impossible to tell whether the other person actually voted the way they agreed to vote (as long as the vote isn't unanimous).
I see two other benefits:
* Many research areas have a small number of researchers. The anonymity of peer review (maybe) helps to avoid personalizing the peer reviews. My own subfield of mathematics has under 100 researchers in the world who could realistically referee my papers, and only maybe 25 who could claim to be experts in the specific area. I know many of these 25, and they know me. So we are often asked to referee papers for authors whom we know - there are not that many experts to do the reviews, after all. In small research fields like mine, the inevitable disputes over rejected papers could otherwise be toxic to the common good.
* The editor is responsible for choosing appropriate reviewers. Keeping them anonymous to the author cuts off an avenue of appeal where, instead of responding to the content of the reviews, the author instead just tries to impeach the reviewers. Of course, the author can already tell the editor "I don't think the reviewer understands the field". But they can't directly refer to the reviewer's identity when doing so - they have to look at the actual review.
Upvotes: 3 |
2014/11/16 | 2,122 | 9,392 | <issue_start>username_0: Sometimes when I have been struggling on a problem set for a while, I'll post a question on StackExchange, openly acknowledge that it's homework, and ask for hints (not the full solution). Typically people give good advice and help me think about the problem in several different ways, and I end up learning a lot by asking the question.
Now the thing is, we are probably not supposed to ask the Internet for homework help. But in my classes it is perfectly acceptable to go to TA office hours, where most of them will tell me the entire answer instead of giving hints. Often I've seen TAs present the solution on the blackboard in front of about 20 students (because all the students need help on the same question). We're also encouraged to "collaborate" with other students, who will usually tell you the entire answer instead of giving minimal hints to help you along.
Is it unethical to ask for homework help on StackExchange given that I learn a lot more than I would using officially sanctioned methods?
(If it matters, I post on math.stackexchange, and this is for classes like real analysis and abstract algebra.)<issue_comment>username_1: This might depend on your university's policy on cheating and plagiarism.
At our university (though the precise details may vary from course to course), you are allowed to use StackExchange to help you understand concepts, but you are not supposed to use it to help you solve assignment questions.
We tell the students that if there is any risk that they may have read an answer or some code or whatever that might have influenced their answer, then they need to cite it. With a proper citation, they cannot be accused of plagiarism. We may, however, ask the student to answer extra questions in such a case, in order to demonstrate their knowledge.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The ethical solution here depends completely on the context. If you're using that information to complete an ungraded assignment, there is absolutely no ethical gray area. Any information source (tutor/book/the internet/study group) is an entirely valid resource.
However, if you're turning it in for a grade, the gray area immediately becomes black, as you are passing off someone else's work as your own.
I would question the structure of a class that forces you into a situation where you're stuck solely with the textbook, but I have seen it. Talk to the professor; but if there's no grade involved, seeking knowledge isn't an ethical violation.
Upvotes: 2 <issue_comment>username_3: >
> Is it unethical to ask for homework help on StackExchange given that I learn a lot more than I would using officially sanctioned methods?
>
>
>
Deliberating over the ethicality of the situation is frivolous. You know that StackExchange helps you to learn, so use it. Do not allow your school to restrict your ability to learn. If its policies prevented you from learning, the fault would clearly lie with the school, not with you.
Upvotes: 2 <issue_comment>username_4: Asking for help is ethical, asking for solutions isn't.
The entire goal of homework is to increase your understanding of new material you've been presented with, usually by applying the theory in practical examples. Discussing material with others and "thinking out loud" are time-proven practices for grasping new concepts and techniques, so it's perfectly ethical to work through your homework in whatever way most efficiently helps you understand the exercises. As you say, your university explicitly supports two classic methods to help you with difficult assignments: TA assistance and collaboration.
Regardless of the method you employ, *as long as your goal is to increase your understanding*, and not to get out of doing the work at all, you're ethically in the clear.
Three caveats to this:
* Graded homework: it's my belief that grading homework is a way to push students to keep up with coursework and to ensure that their understanding of the material is at the level required to eventually complete the course. Seeking help, no matter the source, should not be frowned upon here, since you're still accomplishing the majority of the work (i.e. learning it) yourself. Ethical.
* Assignments: coursework that is a sufficiently large part of your grade as to move beyond simple homework and into assignment territory is different. Seeking assistance about the general concepts involved is ethical, asking help on the details of the assignment itself most likely isn't.
* Legality: as some of the other answers and the comments mention, whether or not your university allows you to seek help from online resources is an altogether different question and it's sensible to check your university and course regulations on this or ask for confirmation from the professor or TA.
Upvotes: 3 <issue_comment>username_5: Pretty much what everyone else said. I don't think there is an ethical issue, unless your code of conduct forbids it, as long as you make it clear that it is homework and that you need some guidance rather than the whole solution. However, keep in mind that what your professor is looking for in terms of answers may not be what the internet comes up with. Often the professor is simply wrong, but good luck trying to get them to accept that (this happens especially in lower-level courses). In higher-level courses such as yours I wouldn't worry too much about it; it's not as if one problem solution from the internet is going to make or break your grade.
Upvotes: 1 <issue_comment>username_6: Personally, I don't feel there's anything unethical about it. Sure, you might be breaking your university's honor code, but I think the only ethical conflict there is that you are breaking a code you at least implicitly agreed to, not in the act itself. That is, your ethics or ethical principles in general do not necessarily align with the honor code, and I would say in this case they most certainly don't.
Namely, I think that the stock standard honor code imposed by universities is a dinosaur that needs to be revised, and I think many professors are recognizing that. For example, one of mine mentioned explicitly that he realizes that students use the internet for solutions, and that we should just reference the source when doing so. On one of our homework problems, he also gave us a hint in his office hours and then just added "or just look up a proof on the internet", saying either way would ultimately be fine.
In addition, I do not see a difference between looking something up on the internet and consulting a physical book in your library. I assume no one would take issue with the latter, would they? After all, the goal is to learn the material.
Plus, research shows immediate feedback is necessary for learning, and by not asking for help on homework (be it your friends, your professor, or "the internet") you're only hurting yourself. You probably have to at least attempt the problem in earnest to get something out of it, but if you can't get it, you gain nothing by puzzling over it without success. On the other hand, if you do stumble upon a solution on the internet, you might find a new trick or a new way of thinking about it, since you'll have to interpret it on your own. Often, answers here are also given by people with a different background, so you don't get the answer served on a platter, but, instead, you have to really look into it and interpret it so that it fits with *your* specific background and the tools you are allowed to use.
So, basically, I don't think that there is anything, and I mean anything, unethical about asking for help on SE in and of itself. It just might be against the honor code. And as I mentioned earlier, the only unethical thing then is that you're breaking an agreement, which I think is an obsolete one anyway.
Upvotes: 2 <issue_comment>username_7: Asking for a direct answer would be unethical. Asking questions to gain knowledge and/or understanding should never be unethical. The goal of any class is to learn the material. Different people learn differently. It's perfectly understandable for anyone to be stuck or confused on different points. When you need help, you need to ask someone who knows more than you; they can help you understand. Strictly speaking, in terms of ethics and knowledge transfer, you're fine.
Past knowledge transfer, it can get grey. Basically, don't plagiarize, and cite appropriately. Course and college policies become grey because things like Stack Exchange aren't fully integrated into the education system yet, so they don't always have a clear or accepted fit. Because of that, it may be wrong per those policies. Whether such policies are themselves ethical is a whole other debate. However, I believe that if you follow the point of the policies (typically: are you cheating or plagiarizing?), you are fine. As long as you aren't going down those paths, you should be fine.
Think of Stack Exchange as your instructor. Could you ask these types of questions of your instructor? If you would have no ethical problem asking the same questions of your instructor, then you have nothing to worry about. Is asking questions on Stack Exchange really any different from asking a friend or colleague? Is it unreasonable to use a friend or colleague as a source of information? No on both counts. Use every tool to your advantage to learn.
Upvotes: 1 |
2014/11/16 | 327 | 1,436 | <issue_start>username_0: Online course providers like edX and coursera offer free courses, but you can only take/enroll the courses at specific times of the year (not any time). Why is that? If you just want to learn something (not for a degree in accredited institutions), it would be much easier if you could do it online anytime.<issue_comment>username_1: Actually for both of them some courses are fully autonomous "anytime" courses, while others are restricted to being offered at particular times. So far as I can tell, the time-restricted courses derive two benefits from being time-restricted:
1. It creates a "cohort" of students learning similar things and doing similar assignments at the same time, who can then help one another in the associated forums.
2. Resources needed to support the course (e.g., TA monitoring, course material updates) can track the progress of the students through the course, increasing the return on resources and minimizing disruption to students.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You can find two types of courses.
1. Self-paced: when you select this type of course, you can enroll in and take it at any time of the year.
2. Live courses, by contrast, only let you enroll two or three times a year, because they are conducted for a large number of students at the same time. This is the classic MOOC (Massive Open Online Course) model.
Upvotes: -1 |
2014/11/16 | 513 | 2,343 | <issue_start>username_0: I have been working on a particular topic for some years and I have 6-7 publications on different aspects of the same topic.
My supervisor wants me to write a summary journal paper that would incorporate the work of all these publications. Indeed, such a publication would highlight the contributions of the work and make it easier for an interested reader to be guided through my work. Yet I am wondering whether such a publication would be ethical, considering that it would contain no extra unpublished content.<issue_comment>username_1: If you cite the earlier papers and the specific contribution of this summary paper is clearly stated (so that it is not implied that it contributes new research), there's nothing unethical about this.
Furthermore, you're adding value with this new paper, not just trying to rack up publications without added value.
*Dishonesty* is unethical. If you are truthful in your claims of novelty and contribution, there's nothing dishonest about this.
Upvotes: 3 <issue_comment>username_2: What you are describing is one form of [review article](http://en.wikipedia.org/wiki/Review_article). Although review articles more typically describe a more general state of the art on a problem, they can also be used to tie together and summarize a collection of linked papers into one coherent entry point.
Doing this well *will* require creating quite a lot of new content---it just won't be new technical results. Rather, the content created in such a paper is the distillation of a much larger body of work into a single coherent picture. This can be quite valuable for readers, because trying to reconstruct the picture of a body of work that is evolving over time and scattered across papers can be painful and difficult; additional detail about particular points can then be obtained by following the citations to the source articles. Note also that if you write this article well there will be little risk of self-plagiarism, because you'll need to rewrite everything pretty much from scratch to fit into the new and more compressed arrangement of ideas.
So, in summary: it's not only ethical, it is legitimate new work and can be highly valuable, just so long as you make the nature of the article extremely clear and include all of the relevant citations.
Upvotes: 6 [selected_answer] |
2014/11/16 | 688 | 3,191 | <issue_start>username_0: in my relatively short academic career, I've gathered that academic engineering seems to be more about creating ideas than actually iteratively improving a product using engineering methods. In my field, biomedical engineering, I worry that this is not enough because many biomedical ideas require immense funding that usually comes from large companies willing to wait a decade for profits, and thus good ideas are shelved (obviously many layers to this).
Many questions address how academics could make money, or start up companies, but my question is whether it is actually our responsibility to do so in the biomedical and biotech fields?
edit:
my assumption is that starting a business is the only way to get a product to customers. Answers that provide alternative strategies to achieve this ultimate goal are welcome :)<issue_comment>username_1: The NIH seems to take the opposite view from you. The NIH provides funding for the training of a large number of biomedical engineers. The individual NRSA mechanism (F31 and F32) provides some of the most prestigious funding for PhD students and postdocs. This funding comes with a payback obligation such that if you leave academia for industry within a few years of receiving NRSA funding, you can be required to pay back the funding. The NIH is in essence saying "do not leave academia" to the best biomedical engineers it trains. If the NIH thought more people should be starting up companies, I believe it would drastically increase the funding to the SBIR mechanism and rework the payback mechanism to encourage individuals to leave instead of staying.
Upvotes: 3 <issue_comment>username_2: I would strongly agree that it is ethically important for beneficial research (biomedical or otherwise) to be transitioned from the laboratory out into the world where it can benefit people. It is not obvious to me, however, that leaving academia to found a startup is necessarily the best way in which to accomplish this, particularly for biomedical work.
There are two reasons that I see it this way:
1. The skills necessary to be a good academic researcher and the skills necessary to found a company are very different, and different again from the skills necessary to bring a safe and reliable product to a large market.
2. One of the reasons it takes so long to transition biomedical research is the difficulty of ensuring safety, given our current state of knowledge. Yes, there are many other problems with market structure and regulatory frameworks, but fundamentally it is a lot more dangerous to put a drug or a medical device in somebody's body than to deploy an app on their smartphone, and a lot more difficult to evaluate safety than with a piece of consumer electronics. One of the values that established companies bring to the table is experience with navigating these problems.
So I think there is a strong ethical responsibility to attempt to move one's research into application, but the right way to do that for a particular case may often not be a startup, but instead to seek out tech transfer relationships with other academics, entrepreneurs, companies and even funding agencies.
Upvotes: 1 |
2014/11/17 | 595 | 2,486 | <issue_start>username_0: I am doing my master's and I am considering doing my PhD at a different institution. I asked my advisor for a reference and he was fine with it. My co-advisor, however, insisted on me staying. I tried to talk to him multiple times, so that I could leave on good terms before I started applying, but he was very reluctant about my leaving. My discussions with him were not productive: every time, he starts suggesting projects for me and tries to convince me to stay, and the discussion gets heated and ends that way when I say I want to keep my options open. I have to say that he is a very good advisor, but that reason is not enough for me to stay.
I decided to apply without letting him know, and I did. My advisor is now asking me to let my co-advisor know that I am applying, saying that I owe him that. I am sure he will be very angry and another heated argument will start. What should I do in such a situation?<issue_comment>username_1: You will finish your Master's before you leave, right? In that case, there shouldn't be any obligation for you to stay. Just tell your co-advisor that you're applying to a different institution.
It's your life and your decision. There's nothing to argue about. Honestly, if you tell him in person (or it comes up in a later meeting) and it starts to turn into an argument, just tell him you don't want to argue about it. If he continues, **don't argue. Just get up and leave.**
A calm discussion is fine if he wants to convince you to stay - let him present his counter-offer calmly if he wants to. My advice for dealing with anger is no different than dealing with arguments with anyone else, either personal or professional - if their anger is controlling the situation, just leave.
It's a shame that an otherwise-good advisor is so "clingy", but don't let that get in the way of what you want to do. If either advisor has an iota of professionalism, they won't let their personal wishes affect your reference letters. Perhaps it's for the better - having a volatile and possessive supervisor overseeing your PhD could lead to more problems later.
Upvotes: 3 <issue_comment>username_2: Tell your co-advisor. If he doesn't react well, take your advisor with you and have a second meeting. Maybe they can work it out colleague to colleague rather than professor to student. Withholding a good letter of recommendation because you want a promising student to stay and work with you is extremely selfish and borderline misconduct.
Upvotes: 4 |