2013/05/25 | <issue_start>username_0: I don't necessarily consider it a hardship to teach calculus or the like to students whose preparation in prerequisites is weak, but I am offended by the practice of making it a personal policy to treat learning the material ONLY as a price paid to get a grade to put on one's resume, rather than as the thing they're there for.
* I'm wondering how to identify instances of such behavior quickly when they occur.
* How can one identify institutions that tolerate or encourage my position as outlined in my first paragraph above, and those that are hostile toward it? I think the latter---where that hostility may exist---often include respectable institutions in which lots of students want to get degrees in law or business or the like.<issue_comment>username_1: I don't think there's an easy way to identify this behavior without students directly approaching you and making it clear through questions like the ever-popular "Will this be on the exam?" and grade-grubbing for every possible point. Without obvious signals, it's not really clear who's in it for a grade and who's there to learn—and it would be imprudent to try to prognosticate. The results may surprise you!
As far as an institutional perspective goes, I again don't know if there's a way to really lay things at the feet of the institution for "encouraging" such behavior. It can vary a lot from department to department, and even from faculty member to faculty member. However, one thing to look at is how seriously the department you're interested in takes its teaching duties. Is teaching something people are doing their best to get out of, or are they trying to do the best job they can with it? Does the department encourage "out-of-the-box" thinking in how to teach classes, or is it just something to get over and done with each semester?
But a lot of it also relies on your attitude. If you make it clear to the students that you're serious about them *learning*, rather than just *regurgitating* for the purpose of an exam, the students who stick with you will probably be more motivated than if they don't think you are invested in their learning.
Upvotes: 3 <issue_comment>username_2: I understand that you are offended by students who do not take your subject as seriously as you do, but that is just the way some students are.
There are students who are really interested and there are those who are not. In the case of calculus, if a student is required to take it for their non-math major then you will certainly have students who just want the grade. One way around this is to only teach elective classes but that will not work for everyone (I'm not sure it would work for any teacher).
Still, your questions are clear. How do you find the students who don't really care? I find that they usually bubble to the surface quite quickly. I tend to be quite interactive with my students, asking lots of questions. I also give them additional 'required' reading. Even the required reading doesn't get read by the students who just want to pass and be done with it. So, those students who actually read the material and can answer it meaningfully in class are the students you are looking for.
Now, as for identifying institutions: I have read a lot and talked to a lot of teachers, and one thing EVERY serious teacher wants is to teach a class of highly motivated students who care deeply about their studies. This is simply unrealistic, and I have never heard of a teacher who actually achieves it.
I think the better question to ask is: **How can I motivate a deep love of my subject in my students?** By stimulating their interest, you will naturally end up with what you want. However, this is not easy, and it takes a lot of time and effort. But you seem like a serious teacher, so perhaps you can make the investment. Certainly the results, if you succeed, would likely be very rewarding for you.
My perspective is that a great teacher has no bad students. By this, I mean that a great teacher is able to motivate their students to *want* to learn the material. By this measure I am not a great teacher, but I keep trying.
Upvotes: 3
2013/05/27 | <issue_start>username_0: I have just finished my Bachelor's in computer science and received an offer for a software engineering position. Some people told me that a master's degree is very beneficial for long-term career development. Unfortunately, I do not think I can get into any good master's program because of my terrible undergrad GPA and lack of academic references. A postgraduate certificate program requires nothing.
My question is, what is your point of view on a postgraduate certificate in software engineering? Is it just a joke compared to a master's degree? What about starting salary?<issue_comment>username_1: In the short term, the postgraduate certificate will help you find a better job with a higher salary, because the certificate shows that you have the skills printed on it. However, whatever technology you learn while earning the certificate could become obsolete in a few years.
In the long term, a master's degree proves that you know more fundamentals than a bachelor's alone. It may not help that much when looking for a job with a better salary; some employers will think you don't immediately have the skills they want. However, you'll learn those needed skills faster and better because you know more fundamentals.
If you want to find a job right now, you want to have certificates. If you want to be an excellent software engineer in the future, you should get a master's degree.
Upvotes: 2 <issue_comment>username_2: I agree with @scaaahu that having a Master's shows that you have a strong understanding of important concepts. As you probably know, there are commonalities between programming languages, such as input/output, conditionals, and loops; the difference lies in how each implements them. By knowing one language (e.g., Java), you can learn another (e.g., C#) efficiently and in a shorter time.
It is also true that some employers might think you're overqualified because you have a Master's, but others will recognize the added value your advanced knowledge could bring to their company, and the compensation may even reflect that.
Upvotes: 0
2013/05/27 | <issue_start>username_0: I am preparing a CS conference presentation and am wondering how to handle the references. I am considering three different possibilities:
1. Ignore them!
2. Just list them at the end of the presentation
3. List them *and* cite them within the presentation.
I chose the first option since anyone interested can go and check the whole set of references in the actual paper.
Does this mean I am not crediting others for their work? How is this usually handled at CS conferences?<issue_comment>username_1: I don't know if there is a specific convention within the CS community, but what most established seniors in my field seem to do is note down the reference at the bottom of the slide where they refer to someone's results/figures.
I think this is a better approach than listing them all at the end, because the audience gets the reference together with the content; that way you don't have to puzzle out which reference goes with which content six months after you attended the presentation.
If the people you are referring to are people you have had collaborations or communication with, it would not hurt to have them listed in a "thanks to" or more formally "acknowledgements" slide.
Hope it helps
Upvotes: 4 <issue_comment>username_2: If the slides you're using are going to have "independent life,"—in other words, if you're going to make them available separately from the conference paper (on your website, for instance), then the citations should be included as part of the presentation. I would follow [username_1](https://academia.stackexchange.com/a/10245/18238)'s example and place the citations on the same slide as where it's needed; this will save the reader from having to flip back and forth between different parts of the presentation or between the presentation and the paper.
Not including the citations is a bad idea, because it means you are potentially failing to give people the credit they deserve for ideas that were originally theirs. The fact that it's "just" a conference presentation doesn't mean that the rules of crediting people for their work should be ignored. (Citing the work of others is also the right thing to do from the perspective of "playing nice with others." Taking credit for other people's work can make them leerier of working with you.)
Upvotes: 6 [selected_answer]<issue_comment>username_3: Not including citations would be a very bad idea; aside from the reasons given above, there is a risk that someone would claim that you are plagiarising their work - even though you aren't. I have seen this happen before.
Perhaps place an in-slide (akin to in-text) reference on each slide and a slide at the end with the references, or if possible, make a clear citation to the main reference used on the slides where necessary.
Upvotes: 2 <issue_comment>username_4: Applied mathematician here; my solution is putting them on the same slide as the material. I use formats such as [Someone '99], [Lin WW, '00] (initials are almost mandatory for some common surnames), [Doe *et al*, book '04], [P and SomeoneElse, preprint '12] (my name is always abbreviated to an initial, which is a common convention). I find it a good compromise between clarity and shortness: I don't need to include a full sentence, but only the names in brackets.
You can use a different color or font to differentiate them visually from the text --- preferably something light but readable, a color that does not attract much attention.
I use them sparingly nevertheless --- overall I typically have fewer than 10 such citations in a 15-20 slide talk.
This makes immediately clear whether I think that a theorem is new/mine or not. Its original authors could be in the audience, so I think it's important to acknowledge them properly.
If your slides are already so cramped that these citations won't fit, then you have a much bigger problem. :)
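If you use beamer for your slides, this bracketed style can be wrapped in a small helper macro. Here is a minimal sketch; the macro name `\slidecite` and the grey `\footnotesize` styling are just one possible choice, not a standard:

```latex
% Minimal beamer sketch: a helper macro for short, unobtrusive
% in-slide citations such as [Someone '99] next to a statement.
\documentclass{beamer}
% beamer loads xcolor, so \textcolor is available.
\newcommand{\slidecite}[1]{{\footnotesize\textcolor{gray}{[#1]}}}

\begin{document}
\begin{frame}{Known results}
  The problem is solvable in polynomial time \slidecite{Someone '99},
  with the constants improved later \slidecite{Lin WW, '00}.
\end{frame}
\end{document}
```

Defining a macro keeps the styling consistent across slides and makes it easy to change the color or size in one place later.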
Upvotes: 3 <issue_comment>username_3: As a policy, it is a far better idea to always add a relevant citation, in a small font, below every figure, formula, quotation, etc., that is not yours and which you are building upon. I do this even in lectures, whose slides students always receive afterwards.
The cost of adding a citation in small font is really small, but by not doing it you *risk* exposing yourself to unnecessary troubles because you *might*:
* give the impression of being careless or oblivious about the work of others
* enrage the occasional professor attending your lecture, when s/he sees her/his work is not acknowledged
* create unnecessary tensions with colleagues
* be accused of plagiarism
Do yourself a favor: cite even in presentations.
Upvotes: 3 <issue_comment>username_5: I'll first discuss the advantages and disadvantages for each of your options on how to handle citations:
1. Ignore them!
* pro
+ This technique saves time and space.
+ Most often, the citations go unnoticed during talks (and I have been criticized once or twice for showing any citations on the slides in the first place).
* contra
+ You make way for the criticism that you neglect to give credit to other authors.
+ If your slides are ever accessible outside of your talk, having the citations somewhere comes in handy.
2. Just list them at the end of the presentation
* pro
+ The slide need not be shown during your normal talk, but can be considered a part of your "backup slides" that you show only upon request. Thus, both people who do not like to see citations during a talk, as well as people who expect certain citation information, will be happy.
+ Citations that are referred to several times during the talk have to be listed just once (hence the reader does not get confused and wonder whether they have already seen that citation).
+ The citations can be written using a font size that is readable (in a projection!) rather than crammed into another slide in a tiny unreadable font.
+ It does not matter how many extra slides you fill with citations, so you can even include rather elaborate info (a full list of authors rather than just the first one and *et al.*, the DOI, direct links, ...).
* contra
+ Readers have to switch back and forth between pages/slides while reading slides with citations (though the same is valid for a paper and it doesn't seem to bother anyone there).
3. List them and cite them within the presentation.
* pro
+ Citations are immediately available while reading the slide that refers to them.
* contra
+ Space is scarce on slides, which means that the citations have to be written with a tiny font, probably too tiny to be legible during the talk.
+ As you need to save space, you will tend to use the shortest possible citation format, such as *1st author et al.* rather than *1st author, 2nd author, 3rd author, 4th author*, thereby arguably *reducing* the credit you give.
+ The citation clutters the slide (which should in general only contain the most important keywords/key statements rather than all details the presenter talks about) and thereby draws attention away from the contents of your slide (e.g. how a concept presented in related work works, understanding of which is required for the next slides).
+ The citations either disrupt the reading flow on the slide (when in between slide contents), or they gather at the very bottom as footnotes (where, depending on the room the talk is given in, they can only be seen by the first few rows of the audience, anyway).
**To conclude, I vastly prefer technique 2, *Just list them at the end of the presentation* over all others.**
That leaves the question whether or not to include citation references (*[1]*, *[2]*, ...) within your slides. This depends mainly on the purpose of your references:
* If whatever information you are presenting is **self-contained**, such as a concept fully explained with a single concise graphic, the reference needs to be there mainly for the sake of giving credit. In that case, you can go the way of some books by not including a citation reference on the slide (thereby reducing unnecessary clutter) and instead only relying on a backreference on the citation slide (*bottom-left image on slide 16*).
* If the information you are presenting is a **summary** of someone else's work (for example when presenting only a conclusion or statement without presenting the proof it is based upon), or even an explicit **pointer to more information**, *do* include a citation reference right next to the information, both to signify *that* there is more to be found about your statement and making finding the additional information convenient.
Upvotes: 3
2013/05/27 | <issue_start>username_0: While researching a topic area I have come across a number of papers that claim to improve on the state of the art and have been published at respected outlets (e.g. CVPR, ICIP). These papers are often written in a way that obscures some of the details, and their methods can be lacking in detail. Upon contacting these authors for more information and asking if they would kindly make their source code available, they stop replying or decline the request.
**Why are computer science researchers reluctant to share their code?**
I would have expected that disseminating your source code would have positive effects for the author, e.g., greater recognition and visibility within the community and more citations. What am I missing?
**For the future, what are some better ways to approach fellow researchers that will result in greater success at getting a copy of their source code?**<issue_comment>username_1: I am not a CS researcher per se, but I am writing Android code for my research in Atmospheric Physics, so my view is somewhat limited. However, I can say from my own experience that much of the code that I am developing and testing is part of a greater project that the team I am part of is developing. It is a mix of the rules I am bound by and the need to keep a portion of code under wraps for the time being.
Upvotes: 2 <issue_comment>username_2: In sharing code there are several issues:
* The first issue is copyright, since some CS research/projects are funded by industrial partners or funding organizations that discourage sharing sensitive information such as algorithms, code, or software when publishing in public periodicals.
* Indeed, there are papers based on certain data (collected from code execution) that unfortunately have been manually modified by the authors. If they shared the code, catching their mistakes/errors/modifications would become very easy, leading to the failure of their MS/PhD or research project, which is undesirable to them.
* In CS research, and especially publication, developing code, particularly lengthy, complex code, is a non-trivial task, and in most cases the code is considered a money-making and paper-generating asset. By sharing the code publicly, the authors reveal their work in great detail, which may diminish their contribution to future research; they may no longer be the only ones who can build on that particular research and code, and get the credit for it. In many cases, master's students pick an algorithm or method, slightly change it, and submit a thesis and paper based on it, which may contradict the findings and claims of the original author. Remember [<NAME>](http://en.wikipedia.org/wiki/Thomas_Herndon), a graduate student who criticized the findings of two eminent Harvard economists ([here](http://en.wikipedia.org/wiki/Growth_in_a_Time_of_Debt) is the link). If code in CS were revealed, the consequences could likewise be catastrophic (there might not be too many such cases, but when it happens it is catastrophic).
* Code is a vital asset that most researchers rely on to conduct experiments and research. If you have the code, you can simply play with it and modify it to generate a new set of findings that might be more valuable than the initial findings. Without the original author's involvement, no credit goes to them.
However, Elsevier recently introduced a new feature using COLLAGE called [Executable Papers](https://www.journals.elsevier.com/computer-networks/news/introducing-executable-papers), currently available for the [Computers & Graphics](http://www.sciencedirect.com/science/journal/00978493) journal, through which code and data are made available and researchers can modify the code and input values to experiment with them.
Hope it helps.
---
Upvotes: 3 <issue_comment>username_3: Stephen, I have just the same experience as you do, and my explanation is that the benefit/cost ratio is too low.
Packaging a piece of software so that it is usable by another person is difficult - often even more difficult than writing it in the first place. It requires, among other things:
* writing documentation and installation instructions,
* making sure the code is runnable on a variety of computers and operating systems (I code on Ubuntu, but you may code on Windows, so I have to get a Windows virtual machine to make sure it works there too),
* answering maintenance questions of the form "why do I get this and that compilation error when I compile your program on the new version of Ubuntu" (go figure; maybe the new version of Ubuntu dropped some library required by the code - who knows),
* taking care of 3rd-party dependencies (my code may work fine, but it depends on some 3rd-party jar file whose author decided to remove it from the web).
Additionally, I should be available to answer questions and fix bugs, several years after I graduate, when I already work full-time in another place, and have small kids.
And all this, without getting any special payment or academic credit for all that effort.
One possible solution I recently thought of is, to create a new journal, **Journal of Reproducible Computer Science**, that will accept only publications whose experiments can be repeated easily. Here are some of my thoughts about such a journal:
Submitted papers must have a detailed **reproduction** section, with (at least) the following sub-sections:
- *pre-requisites* - what systems, 3rd-party software, etc., are required to repeat the experiment;
- *instructions* - detailed instructions on how to repeat the experiment.
- *licenses* - either open-source or closed-source license, but must allow free usage for research purposes.
The review process requires each of 3 different reviewers, from different backgrounds, to go through this section, using different computers and operating systems.
After the review process, if the paper is accepted for publication, there will be another **pre-publication step**, which will last for a year. During this step, the paper will be available to all the readers, and they will have the option to repeat the experiment and also contact the author in case there are any problems. Only after this year, the paper will be finally published.
This journal will enable researchers to get credit for the difficult and important work of making their code usable to others.
EDIT: I now see that someone already thought about this! <https://www.scienceexchange.com/reproducibility>
"Science Exchange, PLOS ONE, figshare, and Mendeley have launched the Reproducibility Initiative to address this problem. It’s time to start rewarding the people who take the extra time to do the most careful and reproducible work. Current academic incentives place an emphasis on novelty, which comes at the expense of rigor. Studies submitted to the Initiative join a pool of research, which will be selectively replicated as funding becomes available. The Initiative operates on an opt-in basis because we believe that the scientific consensus on the most robust, as opposed to simply the most cited, work is a valuable signal to help identify high quality reproducible findings that can be reliably built upon to advance scientific understanding."
Upvotes: 5 <issue_comment>username_4: [This article](https://sinews.siam.org/Details-Page/top-ten-reasons-to-not-share-your-code-and-why-you-should-anyway) in SIAM News sheds some light on the first question, so it might be worth a look. It argues, for a mathematical audience, why researchers *ought* to publish their source code, and lists many of the reasons you might hear why researchers do not share their source code. It does so by a clever analogy, one that compares the sharing of mathematical proofs to the sharing of source code. Take a look; it has quite an extensive list of reasons why researchers might prefer not to share their source code (as well as some responses arguing that those reasons are not good ones).
Here's a citation:
Top Ten Reasons To Not Share Your Code (and why you should anyway). <NAME>. SIAM News, April 1, 2013.
Upvotes: 4 <issue_comment>username_5: **Why researchers might be reluctant to share their code:** In my experience, there are two common reasons why some/many researchers do not share their code.
First, the code may give the researchers an important advantage for follow-on work. It may help them get a step ahead of other researchers and publish follow-on research faster. If the researchers have plans to do follow-on research, keeping their code secret gives them a competitive advantage and helps them avoid getting scooped by someone else. (This may be good, or it may be bad; I'm not taking a position on that.)
Second, a lot of research code is, well, research-quality. The researchers probably thought it was good enough to test the paper's hypotheses, but that's all. It may have many known problems; it may not have any documentation; it might be tricky to use; it might compile on only one platform; and so forth. All of these may make it hard for someone else to use. Or, it may take a bunch of work to explain to someone else how to use the code. Also, the code might be a prototype, but not production-quality. It's not unusual to take shortcuts while coding: shortcuts that don't affect the research results and are fine in the context of a research paper, but that would be unacceptable for deployed production-quality code. Some people are perfectionists, and don't like the idea of sharing code with known weaknesses or where they took shortcuts; they don't want to be embarrassed when others see the code.
The second reason is probably the more important one; it is very common.
**How to approach researchers:** My suggestion is to re-focus your interactions with those researchers. What are your real goals? Your real goals are to understand their algorithms better. So, start from that perspective, and act accordingly. If there are some parts in the paper that are hard to follow or ambiguous, start by reading and re-reading their paper, to see if there are some details you might have missed. Think hard about how to fill in any missing gaps. Make a serious effort on your own, first.
If you are at a research level, and you've put in a serious effort to understand, and you still don't understand ... email the authors and ask them for clarification on the specific point(s) that you think are unclear. Don't bother authors unnecessarily -- but if you show interest in their work and have a good question, many authors are happy to respond. They're just grateful that someone is reading their papers and interested enough in their work to study their work carefully and ask insightful questions.
But do make sure you are asking good questions. Don't be lazy and ask the authors to clear up something that you could have figured out on your own with more thought. Authors can sense that, and will write you off as a pest, not a valued colleague.
**Very important:** Please understand that my answer explaining why researchers might not share their code is intended as a *descriptive* answer, not a *prescriptive* answer. I am emphatically not making any judgements about whether their reasons are good ones, or whether researchers are right (or wrong) to think this way. I'm not taking a position on whether researchers *should* share their code or not; I'm just describing how some researchers *do* behave. What they *ought* to do is an entirely different ball of wax.
The original poster asked for help understanding why many researchers do not share their code, and that's what I'm responding to. Arguments about whether these reasons are good ones are subjective and off-topic for this question; if you want to have that debate, post a separate question.
And please, I urge you to use some empathy here. Regardless of whether you think researchers are right or wrong not to share their code in these circumstances, please understand that many researchers *do* have reasons that feel valid and appropriate to them. Try to understand their mindset before reflexively criticizing them. I'm not trying to say that their reasons are necessarily right and good for the field. I'm just saying that, if you want to persuade people to change their practices, it's important to first understand the motivations and structural forces that have influenced their current actions, before you launch into trying to browbeat them into acting differently.
---
Appendix: I definitely second username_4's recommendation to read the article in SIAM News that he cites. It is informative.
Upvotes: 6 [selected_answer]
2013/05/27 | <issue_start>username_0: Elsevier has recently launched a tool called [Journal Finder](http://journalfinder.elsevier.com/) by which researchers can use a paper's title, abstract, and field of study to find a suitable journal for their manuscript. Here is a sample
 in which the editorial time is 12 weeks. They surely maintain database for their own, but is there any other source we can do this for other non-Elsevier journals, like IEEE or ACM? I know WoS provides some information about it, but WoS's database accuracy is not yet clear to me since I have seen lots of inaccurate information in WoS reports (for instance, number of "review article" published is often inaccurate).<issue_comment>username_1: Once a year the Notices of AMS publish the backlog of mathematics research journals containing *inter alia* the data you are interested in.
The 2012 one is here:
<http://www.ams.org/notices/201210/rtx121001473p.pdf>
and the 2013 is here :
<http://www.ams.org/notices/201310/rnoti-p1390.pdf>
and both do list some journals in informatics including the non-Elsevier ones (e.g. the Springer's Acta Informatica).
Upvotes: 3 <issue_comment>username_2: Please see the following link [Here](http://libguides.framingham.edu/content.php?pid=481207&sid=3943382). However, it is more about psychology and other non-engineering journals. I wish there were such a repository for CS.
Upvotes: 0 <issue_comment>username_3: Elsevier has recently launched a new toolbox including lots of useful information about journals. If you visit any Elsevier journal's homepage (I assume it works for all Elsevier journals), you will see the following box there,

Click on it and select the 'Speed' link, and it takes you to another page like [here](http://journalinsights.elsevier.com/journals/1084-8045/speed) (an example for the JNCA journal). The following information lets you know the latest turnaround time of this particular journal.

Hope other journals start similar approach.
Thanks and hope you find this post useful.
Upvotes: 2
2013/05/27 | <issue_start>username_0: Often I learn about conferences, even in my own country, when it is too late to submit papers. Is there a mailing list, or another way to get information about conferences in a specific location and domain?
EDIT: I am mainly interested in computer science and game theory.<issue_comment>username_1: There is a wiki devoted to call for papers (<http://www.wikicfp.com/cfp/>).
Googling can help you find earlier editions of conferences and so forth. Make a list of the ones relevant for you and when they occur – each edition of a conference will occur at the same time of year. Keep this list on your wall, perhaps sorted by month of conference (or better, month of deadline). Google will help find the current edition.
Upvotes: 3 <issue_comment>username_2: You can use <http://www.conferencealerts.com/> to locate academic conferences in your desired country. These listings are sorted by a **topic or country** which you can select on the first page. Once chosen you will get a list of the various conferences **organized by month**. Moreover, you have the option of subscribing and thus receiving **email announcements** about those events.
Upvotes: 3 [selected_answer]<issue_comment>username_3: You can also try [www.tjdb.org/CFP](http://www.tjdb.org/CFP)
It lists all upcoming events (journals, conferences, seminars, workshops, sessions). Subscribe to its RSS feed with a particular keyword; you can also post new academic events.
Upvotes: 1 <issue_comment>username_4: I find it helpful to keep a spreadsheet of all the relevant conferences, with columns for the next submission deadline, conference URL, organisation, and organisation URL.
When I find out about a conference too late to submit, I still add it to my spreadsheet. I try to find out if the conference is annual, every two years, or what. I put down a rough guess for the next submission deadline based on the current deadline. That way I'm prepared for the following year.
Also, when I read a paper in my field, I always note where it was published (because the journal or conference might be suitable for my own work). If it's a conference, I add it to my spreadsheet with as much info as I can find.
This is in addition to looking for appropriate mailing lists, as the other answers have covered.
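If you prefer a script to a spreadsheet, the same bookkeeping can be sketched in a few lines of Python. All conference names, URLs, and deadlines below are made up for illustration:

```python
from datetime import date

# Hypothetical conference list -- names, URLs, and deadlines are invented.
conferences = [
    {"name": "ExampleConf A", "deadline": date(2013, 9, 15), "url": "http://example.org/a"},
    {"name": "ExampleConf B", "deadline": date(2013, 6, 1), "url": "http://example.org/b"},
    {"name": "ExampleConf C", "deadline": date(2013, 11, 30), "url": "http://example.org/c"},
]

def upcoming(confs, today):
    """Return conferences whose submission deadline is still ahead, soonest first."""
    open_calls = [c for c in confs if c["deadline"] >= today]
    return sorted(open_calls, key=lambda c: c["deadline"])

# Print the open calls for papers as of a given day.
for c in upcoming(conferences, today=date(2013, 5, 27)):
    print(c["deadline"].isoformat(), c["name"], c["url"])
```

For annual conferences whose next call is not yet announced, you can add a guessed deadline one year after the current one, as suggested above, and correct it once the real call appears.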
Upvotes: 1
2013/05/28 | <issue_start>username_0: I've recently come to accept that I'm transgender (MtF, male-to-female). I won't go into the exact details of this as it's personal, but I have a "girl mode" where I identify as a female. I'm also a researcher/lecturer at a respectable university. I'm yet to reveal this side of myself at work, but it's not impossible that my colleagues or students will run into me in girl mode e.g. at the mall.
>
> **Question**: What are typical experiences of openly transgender academics?
>
>
>
I'd be particularly interested in examples of successful academics who are openly transgender.
I know there are both legal protections and university policies which prohibit discrimination based on being transgender, but no policy can make people like you.
(*Update*: As things developed, I gave an answer to my question [below](https://academia.stackexchange.com/a/41850/8469).)<issue_comment>username_1: There are two transgender academics in biology at Stanford University (both of whom, I believe, transitioned during their tenure there). [<NAME>](http://en.wikipedia.org/wiki/Ben_Barres) has written extensively about his experiences (female to male). [<NAME>](http://www.stanford.edu/~rough/) is another example (MtF).
Upvotes: 5 <issue_comment>username_2: <NAME>, Professor of English and Department Chair, Colby College, Maine. Jenny wrote "She's Not There" and other great books. My heroine (happily married to a woman).
Dr. <NAME>. Lynn invented both VLSI and superscalar architectures. Professor Emeritus, University of Michigan. My heroine (happily married to a man).
Dr. <NAME>, University of Chicago, Professor of Economics. Very famous economist from Milton Friedman's old school.
Want more?? You are in the SAFEST profession to DO the transition!! Read Jenny's books and Lynn's website and you'll learn how to do it, step by step.
Upvotes: 6 [selected_answer]<issue_comment>username_3: In my undergraduate institution there was a transgender logician graduate student. He taught me category theory. As far as I remember, none of us ever had any problems with his regular mini-skirt outfit. In my graduate institution there was a graduate student who changed gender and made it known to everybody (faculty and students). Again, there were no issues that I've heard of; the relationship of that person with their supervisor remained very good (as far as I can tell), and the person went on to a very good job later on. Keep in mind that I did my undergraduate and graduate degrees in fairly "liberal" places. Overall my experience was that if you let people know, but then move on with your regular life, it doesn't become an issue. Also, it is not unlikely that some of your colleagues already "know". I guess there is always an amount of risk involved with full disclosure. Keep in mind that people shouldn't judge you on what you do in your private time. Another factor is whether you already have tenure; and if you don't, and things go badly, whether you will attribute that to your transition.
Upvotes: 2 <issue_comment>username_4: To add a few more examples, you can have a look at the Wikipedia list of [Transgender and transsexual scientists](http://en.wikipedia.org/wiki/Category:Transgender_and_transsexual_scientists):
* [<NAME>](http://en.wikipedia.org/wiki/Ben_Barres)
* [<NAME>](http://en.wikipedia.org/wiki/Angela_Clayton)
* [<NAME>](http://en.wikipedia.org/wiki/Lynn_Conway)
* [<NAME>](http://en.wikipedia.org/wiki/Kate_Craig-Wood)
* [<NAME>](http://en.wikipedia.org/wiki/Alan_L._Hart)
* [<NAME>](http://en.wikipedia.org/wiki/Caitl%C3%ADn_R._Kiernan)
* [<NAME>](http://en.wikipedia.org/wiki/Anne_Lawrence)
* [<NAME>](http://en.wikipedia.org/wiki/Alexia_Massalin)
* [<NAME>](http://en.wikipedia.org/wiki/Deirdre_McCloskey)
* [<NAME>](http://en.wikipedia.org/wiki/Christa_Muth)
* [<NAME>](http://en.wikipedia.org/wiki/Rachael_Padman)
* [<NAME>](http://en.wikipedia.org/wiki/Joan_Roughgarden)
* [<NAME>](http://en.wikipedia.org/wiki/Julia_Serano)
* [<NAME>](http://en.wikipedia.org/wiki/Sophie_Wilson)
There is also a list of [LGBT scientists](http://en.wikipedia.org/wiki/Category:LGBT_scientists_by_nationality).
Upvotes: 4 <issue_comment>username_5: (Original answer March 2015)
I guess it's time to answer my own question. Not long after posting the original question I began living exclusively as a woman (barring some short family-related interruptions).
I'll list some themes that applied to me that I think would apply generally:
* *Work interruption*: As much as I tried to avoid it, transitioning interrupted my work. I spent a lot of time learning how to behave, e.g. to minimize the likelihood of being attacked, to know how to react when people ask intrusive questions, and so on. Medically, you're sent to doctors, psychologists, psychiatrists, etc., who make you jump through all sorts of hoops over a long period of time. The various surgeries trans people get can take you out of action for weeks to months.
* *Comments at university*: There were some "hiccups" transitioning at university (smart people can make some really dumb comments). However, they were a drop in the ocean compared to the hate I received from elsewhere.
* *University accounts*: IT were totally unprepared for my transition. E.g. my request for a Rebecca Stones email was refused and a subsequent email about the matter was ignored. So my students referred to me by name X but emailed me under name Y, which I found humiliating.
* *Bathrooms*: There were a few surprised looks initially when sharing the ladies room with my female students and other female academics. I never heard any complaints about it, and it didn't seem to be a big deal.
* *Applying for jobs*: I have noticed no significant difference when applying for academic jobs in my area.
* *Academic record*: It takes time to get your academic history records updated, which makes applying for jobs beforehand awkward. During this time, I took up an adjunct position where administration requested a record of my PhD; I refused to supply this but indicated that several professors at the university witnessed me obtaining my PhD. They refused to accept this, which resulted in a prolonged exchange of emails. Eventually, I cited the university's privacy policy, after which they backed off.
* *Prior publications*: I had to come to terms with having publications under my dead name. I absolutely hate hearing or seeing that word refer to me. I had to decide whether or not to include those publications on my CV and cite them in my publications (which could out me as transgender). For my CV, I include these publications, but I list only the author surnames (I still feel uncomfortable with the thought that whoever reads my CV will Google these publications). I cite them in my publications, as citing them doesn't directly imply that that name belongs to me. I'm hoping I can bury these publications in new publications over time.
* *Travel*: As an academic, I travel a lot. Consequently, I have to keep in mind (a) attitudes towards transgender people when going through immigration (you're not at your most "passable" after a long flight), and (b) local attitudes towards and laws regarding transgender people. E.g. I want to go to a conference in Chile later in the year, so I Googled "transgender Chile" and upon reading things like "[had her face disfigured with a blowtorch](http://en.wikipedia.org/wiki/LGBT_rights_in_Chile)" was a bit unsettled.
(Update August 2015)
I attended the conference in Chile ([SIGIR 2015](http://www.sigir2015.org/)) and I'm pleased that my face was not disfigured by a blowtorch. Here's a photo of the women's support group (I'm in there somewhere):
[](https://i.stack.imgur.com/GRdaum.jpg)
I'd also like to add:
+ Travelling can also interfere with (a) access to medicine, (b) access to [non-prejudiced] medical help, (c) other procedures, e.g. hair removal, and (d) the ability to buy clothes that fit.
+ Sometimes conference accommodation will be organized by gender (with or without the participants' prior knowledge). This opens the possibility of being humiliated in front of colleagues (e.g. if the accommodation staff have their own opinion on your gender), and the possibility of sharing accommodation with someone who is uncomfortable with you.
* *Reacquainting with colleagues*: I have a lot of international contacts, many of whom are still unaware of my transition. From my experience, it's much better to meet in person than email someone an explanation (when all sorts of weird ideas about who I am can arise). This is hard to do when your contacts are spread globally.
* *Transgender students*: Transgender students seem to significantly appreciate that *someone* employed by the university is transgender. Having a transgender member of staff goes against the stereotype that it's just undergraduate activists out to cause trouble who are transgender.
* *Online university resources*: From my experience, university LGBT webpages cater almost exclusively to LGB students, with no usable information for transgender people.
(Update August 2015)
* *Male-dominated area*: I'm currently in computer science, which is kind of a "boys' club". Outside of academia, being in a male-dominated career is sometimes used to discredit transgender women (among other things). This results in a tug towards more female-friendly career paths, both outside of academia (e.g. nursing) and inside of academia (e.g. biology).
* *Female role models*: It has become important to me to see and interact with successful female academics, especially those in my area. (I was very encouraged by the healthy female presence at the SIGIR conference.) Interestingly, it seems that *I'm* meant to be a female role model nowadays.
* *Female co-authors*: I'm not sure if this is just a co-incidence, but I've recently found myself with a rapidly growing number of female co-authors.
* *International lifestyle*: I'm currently a postdoc in China. While I speak enough Chinese to "get by", and while my colleagues here are friendly, I feel isolated and lonely, and this is exacerbated by the lack of access to a transgender community. (See also this question: [Is feeling lonely and uncomfortable in my (foreign) country of study a valid reason to drop out of a PhD?](https://academia.stackexchange.com/q/35352/8469))
* *Unwelcome attention towards my gender*: Generally, I don't mind if people know that I'm transgender, as long as (a) this is not the only thing about me they consider and (b) they are not mean to me or my friends and colleagues as a result. Thus far, to my knowledge, unwelcome attention has been negligible at university. However, it's unclear if this trend will continue (after all, we only need a one-in-a-thousand to cause trouble). Things I'm concerned about:
+ Maybe some transgender person somewhere does something evil, making headlines. How would a university react to the resultant anti-transgender backlash?
+ What if a student makes a complaint because of my gender? What if the student is well-prepared, having extensively read online anti-transgender literature? What if the student makes a religious objection?
+ If I become successful and notable, hate material will probably be written directly about me. Here are some examples (and it doesn't take much Googling to provide more examples):
>
> Boylan, a member of the all-male, all-white, all-heterosexual, all-middle-aged transgender leadership... ([ref.](https://gendertrender.wordpress.com/tag/mens-sexual-rights/))
>
>
> "Lynn" Conway, computer geek and head honcho of the raging autogynephiliacs who tried to destroy <NAME>. ([ref.](https://www.reddit.com/r/GenderCritical/comments/3d5udv/great_advice_from_umichigan_for_mtts_youre_even/))
>
>
> <NAME> isn’t a woman: wishing can’t make it so, not even wishing and flashing scalpels. Neither is <NAME>. ([ref.](https://westhunt.wordpress.com/2013/05/08/transsexuals/))
>
>
>
* *Unwelcome attention from men*: After meals, I like to go for a walk to get some exercise. Consequently, I have now had four unwelcome sexual encounters within walking distance from my office, during the day, and with other people around at the time. I also had a male staff member in another faculty ask for "random sex". I'm afraid to tell others about these incidents, fearing they might think I encouraged them (esp. if they attribute it to being transgender). I also simply don't have time to waste sobbing about each one; they're much too frequent.
(In today's encounter, an elderly gentleman approached me and asked for the time. I found my phone and gave him the time. I also discovered his penis was hanging out of his pants. He indicated towards some nearby bushes and said "gēn wǒ péngyou wáer", which translates to "play with my friend". I walked straight back to the office, nearly in tears. It's thirty minutes after the incident now, and I need to discuss a paper with a student.)
* *Paper cited during talk*: Recently, one of my co-authors referenced our joint paper at a conference talk while I was in the audience. I was not impressed that my dead name initials were on display, forcing me to sit there with that on the screen for all to see, and I was quite fearful that someone might ask if there was a connection between the two names. If I had known that was going to happen, I would not have attended the talk.
(Update February 2017)
* *Surgery*: I'm not sure if I should say this, but I ended up having "bottom surgery" back at the end of 2015. It was the most physically painful experience of my life, by a long way, and the pain didn't go away until about two and a half months later. *So much pain.* During this time, my [spoons](https://en.wikipedia.org/wiki/Spoon_theory) were limited. E.g. I'd go to the office to prove I still work there (I mostly just stared at the walls), and I'd need to spend the next day recovering. I would bleed a lot, and this resulted in some additional accommodation expenses. I would not be able to stand for longer than around 5 minutes without it being too painful (requiring a wheelchair at the airport).
I managed to go to the [2015 CMS meeting](https://cms.math.ca/Events/winter15/) shortly after surgery, where [<NAME>](https://math.stackexchange.com/users/1277/yuval-filmus) (who I knew through math.SE) sat next to me just after surgery, although I don't know if he realized any of this. (And I remember being a bit snippy with [<NAME>](http://www.math.mun.ca/~dapike/).)
The painkillers caused hallucinations, e.g. I'd be walking and suddenly "There's no floor! I'm falling!! Oh wait, there is a floor." I became afraid of stairs. This was happening while I was at work, although I tried to conceal it. I also sent out a few "interesting" emails.
More than a year later, I have no regrets about having surgery. It has resulted in me having a lot more confidence. If someone denies my gender, they are simply being unreasonable (although, nobody ever does). I'm now far less afraid that someone is going to find out about my past.
* *Bodily maintenance* (dilation) is both time-consuming and painful, particularly at the start. It's embarrassing but it nevertheless must be done, including while sharing accommodation with other women. I got used to making "chit chat" while doing this. Airport security sometimes inspect your dilators (which need to be in carry-on baggage) which can be embarrassing and worrisome (esp. when travelling through non-transgender-friendly countries).
* *Conversion to Islam*: Perhaps contrary to popular expectations, Islam is a relatively transgender-friendly religion. Often the attitude is that someone's gender is innate and decided by God, and being transgender is considered along the lines of a birth defect. Of course, being Muslim results in its own complications (e.g. hijab). I haven't had any problems praying in the female prayer areas (including on-campus ones).
* *A publication about transgenderism*: Perhaps this is not a typical experience, but I published a paper about transgender bathroom usage ([here](https://dx.doi.org/10.1007/s12147-016-9181-6)), which was mentioned in the [Washington Post](https://www.washingtonpost.com/news/wonk/wp/2016/12/21/the-people-mostly-likely-to-care-about-who-uses-womens-restrooms-arent-women/?utm_term=.8710be8cc743).
Upvotes: 7 |
2013/05/28 | 727 | 2,927 | <issue_start>username_0: I am an undergrad from India at a good university (not an IIT or NIT).
Our GPA is calculated on a percentage scale, and mine is 73-75%,
which is a very good score under our system.
My GRE score is 1600.
I have an internship at a major company.
I am thinking of writing a research paper, but I have no research experience.
Do I have a chance at top schools like UC Berkeley, Michigan, or even Carnegie Mellon?<issue_comment>username_1: Top PhD programs in Computer Science are very selective. Just having good or even excellent grades is not enough. At a minimum, you need very strong recommendation letters and preferably some undergraduate research experience. I would suggest applying not only to top schools but also to second-tier schools.
Upvotes: 2 <issue_comment>username_2: **Maybe.** There's really no way to tell without looking at your complete application.
If you applied to my department's research MS program, your good grades and GRE scores would *probably* attract enough attention that someone would actually review your application. Most of our MS applications are rejected without review. But that's as far as grades and test scores will get you.
Since you claim not to come from an IIT or NIT, there might be some question about how good your university is. If we've never admitted someone from your school before, we don't know how to calibrate your grades or recommendation letters. Every year we seem to get applications from two or three new schools in India that even our Indian faculty and graduate students don't recognize. We do sometimes admit one or two *truly outstanding* students from unknown schools as a way of gathering data.
Having an internship is definitely good, especially if you did something creative and independent, and not just "My boss told me to implement this thing, so I did."
But the admission decision would really depend on your research statement (or "statement of purpose") and your recommendation letters. As I [have](https://academia.stackexchange.com/a/2722/65) [written](https://academia.stackexchange.com/a/7601/65) [many](https://academia.stackexchange.com/a/974/65) [times](https://academia.stackexchange.com/a/9877/65) [before](https://academia.stackexchange.com/a/5349/65), graduate admissions committees at top departments are looking for **potential for research excellence**. My committee would judge that potential from the content of your statements and letters, and from the credibility of your letter-writers. Research *experience* is certainly helpful, but it's not necessary, especially for an MS.
**This is really a question for your letter writers.** Ask them directly if they can write strong and substantial letters *that focus on your research potential*. If they look uncomfortable (or they don't know what "research potential" means), you should probably aim a bit lower.
Upvotes: 2 [selected_answer] |
2013/05/28 | 585 | 2,612 | <issue_start>username_0: In this application cycle, I got a PhD offer from Nanyang Technological University in Singapore and will study in the field of programming languages. However, I recently received an email from my supervisor saying that he will soon leave NTU. After searching the faculty list of NTU, I found there is no other professor who works in programming languages.
I don't want to compromise on the topic I will work on during my PhD. Maybe it would be a good choice for me to find a matching supervisor at some other university.
But I have to go to NTU to do my PhD because I have no alternative now that this application cycle has ended.
What I want to know is whether it is possible to transfer my PhD from Singapore to the US if I re-apply to some American universities after my current supervisor leaves.
Any advice would be much appreciated, as I don't really know what to do.<issue_comment>username_1: It certainly is possible, even across countries. Some of my students are following me on my move from Belgium to Sweden. To ensure that this was possible, I needed to enquire with the administration on both the source and target of the move. There were some restrictions on both sides – for instance, if a student is too close to finishing, both sides are reluctant to let the move happen.
Upvotes: 2 <issue_comment>username_2: In general, it is difficult to *transfer* between universities in different countries *unless* one is moving with the thesis advisor. This is in part because of funding rules: usually, money in one country cannot be used to fund graduate students working or studying in another country. (In the US, for instance, graduate fellowships are normally valid only at US universities.)
If you were to attempt to transfer on your own, the most likely scenario is that you would be expected to start the PhD over; depending on the department, they may not recognize coursework completed at your old school, or at best may choose to give you placement out of the equivalent courses, but still expect you to complete additional electives. It would be even more challenging to move your project over, if funding isn't available to work on the same project.
Since you are just starting, perhaps it would be possible to complete a master's degree in Singapore, and then try to transfer to another university for the doctoral studies. (I'm not sure how doctoral programs in CS handle an international master's. In my department, though, students with master's from abroad were still expected to complete the "core" coursework requirements.)
Upvotes: 3 |
2013/05/28 | 1,435 | 6,063 | <issue_start>username_0: This is a follow-up to [Why are CS researchers reluctant to share code and what techniques can I use to encourage sharing?](https://academia.stackexchange.com/q/10247/285). That question specifically asked how one can succeed in obtaining researchers' source code.
As discussed in the answers to that question, the reasons largely boil down to competitive advantage and people thinking their code is not good enough. The former issue is difficult to address. However, one could try to address the latter issue, making the reasonable assumption that this behavior stems from the surrounding academic culture. There may be additional aspects of the academic culture that discourage code sharing, and which do not relate to competitive advantage.
So, one could instead ask the general question: what concrete actions can one take to change this culture? Or, to put it a little differently, how can I help change the academic world so that more researchers are willing to share their source code?<issue_comment>username_1: In the long term, this will only happen if you change the culture (just as you say). How do you change the culture of a field? Very slowly, and only with enormous effort. You talk to other researchers. You articulate your values, and seek allies who share your values. You patiently make the case to your fellow colleagues, perhaps by writing opinion pieces. You don't harangue or attack them; instead, you gently lay out the reasons why it is good for science and good for them to share their code. Remember, in all likelihood you all share the same common values: the love of and dedication to science and the pursuit of truth.
You lead, by acting as a model for how you would like others to behave. You do what *you* think is right, and set a good example.
You try to persuade referees and program committees to value and reward researchers who do share their source code. Recognize the amount of extra effort this takes, and (if you believe it is valuable) reward it accordingly: bump up your rating of their paper correspondingly, and make the case for why others should do so. When you write letters of reference or evaluation for another researcher, if they share their source code, give them kudos and explain why the hiring or promotion committee should view this positively.
Ultimately, this is not something that a single individual can change. Only the entire community, acting as a whole, can make this kind of change. Individuals can catalyze and facilitate that change, but like any other kind of reform, it takes extraordinary patience and effort, as well as buy-in from your colleagues. It's not unusual for this kind of change to take a generation or two. But keep your chin up: remember, you're doing this because you believe it is good for science and good for your field!
Upvotes: 3 <issue_comment>username_2: IMO talk of "cultural" impediments is overstated. Academics are as rational agents as any, and currently the academic system (mainly publish/get grants or perish) provides little **incentive** for publishing code or making analyses entirely reproducible. Some fields have started initiatives to encourage this by making the public release of data and analysis either mandatory or strongly recommended for publication (e.g. [The Journal of Applied Economics](http://www.ucema.edu.ar/journal-applied-economics)) or for funding (e.g. [NIJ](http://www.nij.gov/) grants frequently have stipulations to post the data to [ICPSR](http://www.icpsr.umich.edu/icpsrweb/landing.jsp)).
Greater awareness of the technical computing skills necessary for reproducible analysis will help (see [Koeneker & Zeileis, 2009](http://www.econ.uiuc.edu/~roger/research/repro/JAE-RER.pdf)), but on its own it won't spur greater compliance, even if the negatives already discussed are largely mitigated. It will still be more work to publish the code than not to publish it. When it helps your tenure case, then it will become more commonplace.
Upvotes: 2 <issue_comment>username_3: Researchers are unwilling to share their code because it's a lot of extra work -- for which there is little or no recompense. When I write code to simulate an experiment or an algorithm to verify numerically the result of a calculation, it is not production-ready code that can be easily run in another environment: at the very least another researcher needs Matlab or Mathematica, they need whatever special toolboxes or auxiliary code I am using, they need the data files in the right place on the hard drive, and they need to understand how to program themselves so that when some small glitch arises, they can deal with it. When I try to run my own code from a year ago, it almost always needs some massaging: perhaps a needed file has been moved from one directory to another, perhaps a toolbox has had its code base "updated" and no longer works exactly the same way.
So -- here's what usually happens. Someone contacts me and wants to try out my code. I warn them of all the above issues, but they insist that they know what they are doing and will get back to me with any problems. I spend 2 or 3 hours preparing things, checking stuff out so that they will have an easier time, explaining to them how to put things together so that it will all work. I mail it to them. And I never hear back. It happened again last month. So -- how likely am I to "share" code in the future? Just a little bit less likely than last month.
Now to the question: how can you get researchers to share their code? First, when you ask, follow through -- don't "take the code and disappear." Second, try to get the researcher interested in *why* you want the code and what you might do with it. Third, the burden is on you to take research-style code (poorly commented, bad error checking, disorganized structure) and make it do something. Fourth, return something: when you do make some progress, let the researcher know. Fifth, don't ask for impossible things, such as "can you compile the code for my machine (that's different from yours)?" (Hint: no.)
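On the researcher's side, a little of this pain can be headed off by shipping a tiny "preflight" script alongside the code, so the requester finds out up front which packages or data files are missing instead of hitting glitches mid-run. A rough sketch in Python — the required package and file names are hypothetical placeholders:

```python
import importlib.util
import pathlib

# Hypothetical requirements for some shared research script.
REQUIRED_MODULES = ["csv", "json"]    # stand-ins; list the code's real imports here
REQUIRED_FILES = ["data/trials.csv"]  # data files expected relative to the code

def preflight(modules, files, root="."):
    """Return a list of human-readable problems; an empty list means good to go."""
    problems = []
    for mod in modules:
        # find_spec returns None when a top-level module cannot be imported
        if importlib.util.find_spec(mod) is None:
            problems.append(f"missing Python package: {mod}")
    for rel in files:
        if not (pathlib.Path(root) / rel).is_file():
            problems.append(f"missing data file: {rel}")
    return problems

for problem in preflight(REQUIRED_MODULES, REQUIRED_FILES):
    print("preflight:", problem)
```

It doesn't make the research code any prettier, but it converts "it crashed somewhere" emails into a checklist the requester can act on themselves.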
Upvotes: 2 |
2013/05/28 | 1,747 | 7,148 | <issue_start>username_0: This thought came to mind when I was reading [Points of view: Elements of visual style](http://www.nature.com/nmeth/journal/v10/n5/full/nmeth.2444.html).
While there are a lot of questions on software and strategies for making and editing figures, I am more curious about what to consider (design rules) when laying out figures. Factors that come to mind include typography, color scheme, and proportion.<issue_comment>username_1: There are many aspects of designing figures/illustrations that are covered extremely well by the books written by [Edward Tufte](http://www.edwardtufte.com/tufte/). I would strongly recommend his *The Visual Display of Quantitative Information* as a starting point.
Some basic notions are to strive for simplicity and clarity. This may seem very obvious, but there are many pitfalls, and Tufte provides good examples and ways to think about even simple graphics. Considering design issues such as fonts, line weight, and color, and thinking about what it is that should be conveyed in a graphic, helps one understand how to design efficient graphics. Tufte gives a definition of graphical excellence: an illustration that provides the reader with as many thoughts as possible, in as short a time as possible, using as little ink as possible, constitutes such excellence.
Obviously it is impossible to provide a full answer here, particularly since it would repeat what is already in print. My personal view is that the book mentioned above is a good foundation from which to start critically viewing and discussing the illustrations you encounter. Discussing with peers can be very useful. At the same time, there is also much tradition in graphics, so new ideas may not necessarily fall into fertile soil right away.
Upvotes: 5 <issue_comment>username_2: There are many books on this subject, as [username_1](https://academia.stackexchange.com/a/10285/495) points out, as it is something of an Art/Science in its own right.
However, if you don't feel like buying and reading one or more books on the subject, here are a few basic things that I consider both important, and easy to follow:
* **Consistency**: While people can generally argue for hours over the merits of one colour scheme over the other, what is more important is that you use a consistent one for all your figures. Same goes for fonts and line styles. Also, if you're going to use blue squares to represent some *thing* in a figure, be sure to use blue squares for nothing else throughout your entire paper.
* **Conciseness**: Try to reduce each figure to making a single point, i.e. try to think in terms of "what is this figure trying to say". It's tempting to pack more and more data into a single figure, but in the end this will usually dilute the message. One figure, one statement.
* **Clarity**: Once you've decided what it is that your figure will say, remove anything from it that does not contribute to this single statement. E.g. do you really need every data point/curve you've drawn? If your statement refers to part of a flow chart or class diagram, do you really need all the other, less relevant boxes/labels/methods there too? Also, if the salient feature of your figure is not immediately clear, don't shy away from adding an arrow or something to highlight it.
* **Completeness**: Holding the balance to clarity's minimalism, make sure there is also nothing *missing* from the figure which is needed for the statement you want to make. Figure axes labels are a favourite.
Funny how they all start with "C". This is *not* intentional.
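For the consistency point in particular, most plotting libraries let you centralize these choices rather than styling each figure by hand. A sketch of the idea with matplotlib — the specific fonts, sizes, and colours below are arbitrary placeholders, not a recommendation:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# One shared style, applied before any figure is drawn, so every plot
# in the paper comes out with the same fonts, line weights, and colours.
HOUSE_STYLE = {
    "font.family": "serif",
    "font.size": 9,            # e.g. match the journal's body text
    "axes.linewidth": 0.8,
    "lines.linewidth": 1.2,
    "axes.prop_cycle": plt.cycler(color=["#1b6ca8", "#c8431e", "#3a7d44"]),
}
matplotlib.rcParams.update(HOUSE_STYLE)

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0.0, 1.1, 3.9], label="method A")
ax.plot([0, 1, 2], [0.0, 2.1, 2.9], label="method B")
ax.legend()
```

Because the colour cycle is defined once, "method A" gets the same colour in every figure — which is exactly the kind of consistency readers notice without being able to say why.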
Upvotes: 6 [selected_answer]<issue_comment>username_3: I rather like the framework presented by <NAME> in ["The Back of a Napkin"](http://www.danroam.com/the-back-of-the-napkin/) as a guide: S (simple/elaborate), Q (quality/quantity), V (vision/execution), I (individual/comparison), Δ (change/as-is), and the who/what (portrait), how much (chart), where (map), when (timeline), how (flowchart), why (plot).
In general, these two dimensions (technically 3 dimensions but some aren't used) are presented in a matrix to demonstrate the basic forms of diagrams you can use for communicating concepts. The book is a little more high-level than many books on this topic for scientific presentation but I think it's important to be able to select the right picture for your message.
Once you get down to nitty-gritty details, then <NAME>'s ["Writing for Computer Science"](http://rads.stackoverflow.com/amzn/click/1852338024) has a number of really good specific pieces of advice for how to lay out diagrams and tables, including how to typeset them so that people remember how to find them and how to make it so that tables don't look all crowded. The book's title says "computer science" but much of the advice is reasonably general to most quantitative research fields.
Upvotes: 3 <issue_comment>username_4: [The manual for the TikZ LaTeX package](http://mirrors.ctan.org/graphics/pgf/base/doc/pgfmanual.pdf) contains a very good, 6.5-page section with very reasonable (tool-independent) guidelines (and examples). Just to whet your appetite (and give an idea about the kind of advice contained there, and also to make this answer usable by itself – though please do read the linked document, there's much more to it than what I quote below!) let me quote the list of subsections and short quotations.
* Planning the Time Needed for the Creation of Graphics (*As a general rule, assume that a graphic will need as much time to create as would a text of the same length.*)
* Workflow for Creating a Graphic (*In a good journal paper there is typically not a single sentence that has survived unmodified from the first draft. Creating a graphics follows the same pattern.*)
* Linking Graphics With the Main Text (*Stand-alone figures should have a caption that should make them “understandable by themselves.”*)
* Consistency Between Graphics and Text (*Do not scale graphics. [...] Use the same font(s) both in graphics and the body text.*)
* Labels in Graphics (*In addition to using the same fonts in text and graphics, you should also use the same notation.*)
* Plots and Charts (*The first question you should ask yourself when creating a plot is: Are there enough data points to merit a plot? If the answer is “not really,” use a table.*)
* Attention and Distraction (*When you design a graphic, you should eliminate everything that will “distract the eye.”*)
Upvotes: 2 <issue_comment>username_5: Tufte has been mentioned already, but since you're asking for principles, this one deserves to be emphasized:
>
> Maximize the ratio of data-ink to total ink
> That is, remove anything that doesn't express data, or look for clever ways to make things express data.
>
>
>
Here's an example:

The standard bar chart from any plotting program has a thick border around it and usually some kind of grid. Here, the border has been removed, since it doesn't express any data, and the grid has not only been removed: Tufte has actually managed to *express* a grid, by removing ink.
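As a rough sketch of this principle in matplotlib (the data here are invented; the point is that the frame and tick marks are removed, and the grid is drawn in white *on top of* the bars, i.e. expressed by removing ink):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D"]
values = [3.1, 4.7, 2.4, 5.0]  # made-up data, purely for illustration

fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(labels, values, color="0.35")

for spine in ax.spines.values():
    spine.set_visible(False)              # the frame expresses no data
ax.tick_params(left=False, bottom=False)  # neither do the tick marks

ax.set_axisbelow(False)                          # draw the grid above the bars...
ax.grid(axis="y", color="white", linewidth=1.2)  # ...in white, i.e. by erasing ink

fig.savefig("bars.png", dpi=150)
```

The same effect can be achieved in most plotting tools; the specific calls above are just matplotlib's way of doing it.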
Upvotes: 3
2013/05/28 <issue_start>As can be appreciated from this [list of journals with varying preprint policies](http://en.wikipedia.org/wiki/List_of_academic_journals_by_preprint_policy), certain journals consider a preprint to be "prior publication". In other fields, like Chemistry, there is a strong policy against preprints.
I'm curious what the reasons for such policies are, whether there are other reasons, and whether they hold weight.<issue_comment>username_1: There are commercial reasons for journals to be the only place where the article can be obtained (advertising on the site or in the print journal). So simply violating their policy (if stated clearly) is one valid rejection reason.
(I personally disagree with this reason but such is life)
Upvotes: 2 <issue_comment>username_2: As a chemist, I'm very well aware of this.
Here's what the [ACS Journal Editors' Policy on Preprints](http://pubs.acs.org/meetingpreprints/preprints.html) says about the disadvantages of preprint servers:
>
> The disadvantages of preprint servers include: the potential for flooding the literature with trivial and repetitious publications, thus making extraction of reliable and valuable information more difficult; absence of peer review; possible premature disclosure with inadequate experimental details or supporting data; premature claims of priority; potential lack of proper references and credit to prior work; abuse of multiple revisions or updates; possible lack of duration and long term archiving.
>
>
>
Personally, I find the two concerns about
* "premature claims of priority" and
* "abuse of multiple revisions or updates"
the most relevant points.
* "flooding literature with trivial publications" is IMHO an issue with and without preprint servers,
* "repetitious publications" for me fall into the same category, as do
* "inadequate experimental details or supporting data".
* "absence of peer review" is clearly visible with papers from preprint servers - which is IMHO an advantage over journals where the peer review is uncritical.
* "long term archiving" depends IMHO more on the responsible organization behind the server (I'm not any more concerned that arXiv could shut down than e.g. Langmuir, Analyst or Analytical and Bioanalytical Chemistry)
There have been "experiments" with preprint servers for chemistry some 10 years ago [[1]](http://www.iupac.org/publications/ci/2002/2404/preprint.html) but AFAIK they did not develop the momentum e.g. arXiv has, and they seem to have died meanwhile.
See also: [<NAME>: The Role of Electronic Preprints in Chemical
Communication: Analysis of Citation, Usage, and Acceptance in the Journal Literature, Journal of the American Society for Information Science and Technology 54.5 (2003): 362-371.](http://www.asist.org/Publications/JASIS/Best_Jasist/2004Brown.pdf)
(the discussed server seems to be down - or at least I can't get a response).
---
Personal point of view on the problem
-------------------------------------
The possibility to publish a manuscript on a preprint server *before* submitting it to a journal is not as important to me personally as the possibility to make the final contents of the paper publicly accessible.
Thus I can live quite well with not being allowed to submit manuscripts that are already available on preprint servers, as long as I'm allowed to also publish the manuscript (preferably the final version after peer review) after I have submitted it to the journal:
* either at a preprint server [(arXiv)](http://arxiv.org/abs/1301.0264), or
* on institutional, [personal or project web pages](http://softclassval.r-forge.r-project.org/2011/2011-07-01-ABC-Glioma-paper.html)
* (preferably [both](http://softclassval.r-forge.r-project.org/2013/2013-01-03-ChemomIntellLabSystTheorypaper.html), of course)
Upvotes: 4 <issue_comment>username_3: One key issue to keep in mind when comparing different fields is the scale of money involved. For example, according to their [financial statement](http://portal.acs.org/portal/PublicWebSite/about/aboutacs/financial/CNBP_032302), in 2012 the American Chemical Society received $421 million in revenue for electronic services, including both journals and the Chemical Abstracts Service. That's a staggering amount of money for a scholarly society. (For comparison, the American Mathematical Society's [2011 revenue](http://portal.acs.org/portal/PublicWebSite/about/aboutacs/financial/CNBP_032302) from Math Reviews and journals was $15.5 million.) The ACS is the gatekeeper for publications and data that are worth a fortune to industry, so they have a powerful incentive to maintain that control. It's no coincidence that they are much less friendly towards open access, the arXiv, etc. than corresponding groups in mathematics or physics are.
Upvotes: 3 <issue_comment>username_4: As said in a comment, one reason not to allow preprint publication alongside journal publication is to preserve the incentive to subscribe. To add a note to this point, let me remark that most of the preprint-friendly publishers (a category that includes Elsevier and Springer: they don't do *everything* wrong) do not allow the final journal-template version of the paper to be deposited in an open repository. In other words, most publishers forbid open distribution of published papers in some way; they just draw the line at different points. Of course, drawing the line after or before the preprint version of the article makes the most important difference.
Another reason in some fields, alluded to in the question, comes from the fairly general policy that journals aim to publish novel work. In all fields, this notably means that you are not allowed to submit to a journal a paper that has already been published. In some fields, notably the humanities (at least in France), this extends to journals refusing to publish articles already available as preprints. As far as I understand, preprints are then really seen as publications, in the sense that they are no longer novel. Of course, they are not considered publications in the same way as journal articles in CVs...
Concerning the weight of these reasons, it feels to me like tradition has a lot to do with it. Some traditions are easier to sustain in some fields than in others; e.g. Chemistry can ask both for reader-side subscription charges and author-side page or color-figure charges, as the field has some money, notably due to its experimental nature; such a tradition would be more difficult to sustain in the humanities, where money is much scarcer. As another example, it might be that the strong weight of book publishing in the humanities is related to this "preprint is prior publication" point of view: it is more common for publishers not to allow books to be made freely available, and a field where books bear at least as much importance as articles for idea dissemination seems more likely to adopt the same policy for articles.
Upvotes: 2
2013/05/29 <issue_start>While trying to select journals to which I can submit my present and future work, I noticed the time between the date the authors send the article to the journal and the date the revised article is submitted. Usually this is somewhere between 6 months and more than 12 months.
Although I am focusing on the Impact Factor (IF) to select journals, I think that waiting 12 months for feedback that might include "add more variable analysis" or "this article will be more valuable if you also test models x, y and z" can have a huge impact on your work. It will make you stop what you are currently doing and possibly spend up to 2 months making the changes. This can make a big mess of your work if you have deadlines to respect; however, this happens to everyone in academia.
My question is: besides IF-based journal selection, how can we identify a journal of sufficient quality with a lower IF? What alternative criteria should one use to select a journal? Should we choose newly created journals?<issue_comment>username_1: Sadly, the published times between when a paper is submitted, revised and published should be taken with a pinch of salt. Some journals these days have changed the review process so that there is no longer a "revise and resubmit" option; instead the paper is rejected but with an encouragement to resubmit a revised manuscript as a new paper. This means the "submitted" date on the published paper is the date of the submission of the *accepted* version, not the initial submission of the paper. I personally think this is unethical and no longer review for journals that have this policy, as I think it is unfair on the authors: firstly because it deceives them into thinking the journal has a more rapid review process than it actually does, but more importantly because it deprives the authors of priority on their discovery.
So as well as IF, I would say that the review process is a factor to consider.
A good way to choose a journal is to see where the leading figures in your field publish their papers.
Upvotes: 3 <issue_comment>username_2: I largely prefer to talk with people about what they think of the various journals, see who is on the editorial board, and look at what they publish to determine which journals I consider good. IF can be very biased even at the scale of a journal (Chaos, Solitons and Fractals had a pretty decent IF...).
Other criteria include the price and politics of the publisher (e.g. see the Cost of Knowledge pledge and the blog posts around it), the quality of the publisher's work (do you have to check proofs in three days, with no indication of what has been changed in your paper?), dissemination (is the journal subscribed to by a lot of libraries? Is it open access? Is it read by many people?), and editorial standards (these range from "any editor does what she wants and takes decisions alone" to "the whole editorial board must approve a paper for it to be published, and the name of the handling editor appears on the paper"). All of these may be difficult to determine, which is why talking with other people in your field about journals is important.
Upvotes: 5 [selected_answer]<issue_comment>username_3: Many good suggestions have been given in the answers by username_2 and username_1. I would like to add to these answers by including the following.
Most journals are fairly specialized. The impact factor (IF) tells us how often a given paper published in a specific journal is cited on average. So although it may be important to publish in high-IF journals, it is also (I would say more) important to make sure the paper is seen and read. A high-IF journal ensures that this happens to some extent. But sometimes one's subdiscipline is poorly represented in such a journal, and it may turn out that a lower-ranked journal is the major outlet for papers in the discipline. Knowing where people publish their papers is therefore a good guide to the palette of possibilities.
Therefore, take a careful look at where papers you refer to are published and try to assess where your "audience" is likely to look. Such journals are also likely to provide very good insightful reviews. So when you look at journals try to look at your possibilities from all directions and assess the best journal based on all of the suggestions made in the answers here.
Upvotes: 3
2013/05/29 <issue_start>If I want to go to graduate school to study physics and am a math major, how bad is it to have recommendation letters mostly from math professors? The problem is that I have taken just four physics courses, and after being picky about professors, I might not end up with all physics professors. Further, the math professors know me better.
So, will it hurt if my recommenders are more mathematicians than physicists?<issue_comment>username_1: The answer to your question is no, it won't hurt.
How you prepare your application, however, will likely depend on the type of research you want to do while working on your PhD. For your case, if you want to pursue theoretical research, then I think recommendation letters from mathematicians will be a good asset.
If, on the other hand, you want to work in experimental physics, then I still don't think it would hurt, but I would work extra hard at highlighting relevant skills and experiences, such as experience working in a machine shop or with electronics, in other parts of your application.
You still want your references to be good judges of academic performance, however. By academic, I mean that they should work in academia or be experts in research, critical thinking, and other important academic traits in your field. A life-long high school teacher, for example, may not be a good choice.
To summarize, having letters of recommendation from academics outside of your major field should not hurt your chances of getting into graduate school. The letters should speak to your character, work ethic, and natural abilities, not to the skills you possess. Those can be highlighted in your application.
Qualifications: I'm a senior (sixth-year) graduate student in a physics-related field and frequently assist my advisor in taking on new graduate students. We've taken on people with engineering, physics, materials science, and **mathematics** backgrounds.
Upvotes: 2 <issue_comment>username_2: To point out one pitfall in username_1's answer, there is one situation in which getting letters from out-of-discipline people *can* affect an applicant's chances: if the letter writers do not directly support the candidate's application to the specific discipline. In other words, if you're a mathematician applying to physics programs, your math professors should be explaining why you will be a great *physics* graduate student, not a great *math* graduate student. (Or why you will be a great graduate student in *any* department.) "Dissonance" between the letters of recommendation and where you're applying could make some reviewers question if you're seriously interested in applying to physics departments, which could hurt your chances (albeit perhaps only slightly).
Upvotes: 3 <issue_comment>username_3: I would like to speak to this issue in terms of applying to ***interdisciplinary*** programs like information science, HCI, communication, etc. It matters little which home department a professor belongs to, as long as they can point out two things: 1. your potential as a researcher in *that* particular discipline you are applying to, and 2. how they (the recommenders) can vouch for you in the context of (1).
For instance, in my case, my recommenders came from civil engineering, statistics, and computer science, and I was applying to an information science department. I don't know what their recommendations contained, but they could very well certify that I was doing ***interdisciplinary*** work corresponding to most areas of information science during my master's degree.
Upvotes: 1
2013/05/29 | 1,134 | 4,789 | <issue_start>username_0: Faculty tend to fall into these groups:
1. early adopters
2. hesitant but willing
3. refusing to adopt b/c no time
4. refusing to adopt b/c not technically inclined
I would like to know what strategies I can use to bring the **3rd & 4th group** on-board.
Context
-------
A new learning management system has been adopted where instructors are encouraged to post their slides/lecture materials online.<issue_comment>username_1: First of all, you need to make it exceedingly easy for the Luddites to put their material online. If it is hard to do, or if it takes a particular skill other than "go to this website and click a few links", then you're going to have a hard time ever convincing them to get on board. If they absolutely need to be trained, you may have to have a mandatory training session where they get their first set of lectures or whatever online (but don't expect any other forthcoming material to be posted without more prodding--then again, they may see the light if it is easy enough and they can see the results online).
I'd suggest one or more of the following four suggestions, in order of preference:
1. Ask for student volunteers to help these faculty put their material online. Whether these are TAs for the class, or paid/hired students not already assigned, or strictly volunteers is up to you and budgeting concerns. You may have to continue this process for subsequent classes if the faculty aren't willing to learn how to do it themselves.
2. Automate the system (which goes back to my original comment). If paper is involved, have someone set up a scanner to handle loose-leaf material, or use a photocopier with scan-to-file capability. If it is just soft-copy document uploading, this can be automated with a drop box on a shared drive -- just drag materials into the box and it ends up online. This won't make for a particularly organized system, but at least the material will be online.
3. Wait them out and let attrition work its magic. You may always have reluctant faculty, but as older Luddites retire you should find this less of a problem. If there are only a few faculty that don't want to come onboard, this is definitely the easiest method, and you're really not losing too much by waiting.
4. Make posting the material mandatory. I can almost guarantee this won't be possible for tenured faculty, but maybe you can provide some incentive rather than simply encouraging them to put the material online.
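The drop-box idea in point 2 can be sketched in a few lines of Python (the folder names are placeholders; in practice this would run as a scheduled job or be hooked up to a file-watching tool):

```python
import shutil
from pathlib import Path

def sync(drop_box: Path, public: Path) -> list[str]:
    """Publish any file dropped into drop_box that isn't online yet."""
    public.mkdir(parents=True, exist_ok=True)
    copied = []
    for f in sorted(drop_box.iterdir()):
        target = public / f.name
        if f.is_file() and not target.exists():
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(f.name)
    return copied

# Example: a lecturer drags "lecture01.pdf" into the drop folder
drop = Path("drop_box")
drop.mkdir(exist_ok=True)
(drop / "lecture01.pdf").write_bytes(b"%PDF-1.4 dummy")
published = sync(drop, Path("public_html"))
```

This won't produce an organized course page by itself, but it reduces "putting material online" to dragging a file into a folder.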
I don't think you'll get very far with a simple plea for coming into the 21st Century -- if they are refusing to adopt, they probably feel they are too busy, or don't like the whole idea of it.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The question has a hidden assumption: that the technology has no problems and it's merely the faculty that need to be convinced. Having used a few different learning management systems (Canvas, Moodle, Blackboard, homegrown stuff), I can tell you that this is rarely true.
The people resistant to change are probably resisting because they've seen different incarnations of this technology come and go, and find it annoying to have to keep learning a new system *that doesn't present any significant advantages over what they're doing*. The *significant* part is important. There's a cost to making a change, so the new system can't just be as good.
So I'll add to Chris's excellent suggestions as follows:
* make it seamless not just to import, but to **export** easily. In the world of online software, it's important not to have things be gated. I want assurances that if your new system goes away tomorrow, to be replaced by the next new system, that I can easily transfer material from the old system to the new with a few clicks.
* demonstrate why this new system isn't going away in a year to be replaced by something else. How you do that is up to you and depends on the system you're pushing.
Bottom line: the perceived attitude in the question is that the faculty are at fault for not adopting new technology, but the truth is that most new tech is crappy and short-lived, and it's natural to want to wait things out. So you have to convince people that the new approach is not crappy and will last.
Upvotes: 4 <issue_comment>username_3: I can think of three strategies, which complement each other:
* Win Group 2 over first.
* If there are external reasons for using the technology, make them clear. For example, you might not persuade faculty that using this software is intrinsically a good idea, but you *might* be able to persuade them that making the dean happy is reason enough.
* Take the time to listen to their objections. They will be more willing to listen to you if they feel that you have listened to them, understood them, and respected them.
Upvotes: 2
2013/05/30 <issue_start>I have read and been told that research is the single most important factor for applying for PhD programs in STEM fields. But I also hear that GPA and GRE scores are the first cutting point for adcoms.
What is actually more important, grades or research? Will committees look at applicants with low GPAs?
In my scenario, I have a ~3.4 GPA overall, ~3.7 in Major (CS). This is not stellar. But what I do have is 1 first-author conference publication (Best Paper Award at conference) and 1 first-author journal publication as a Junior, with more other work/papers in progress. And my GREs are 158V/170Q/4.5W.
I'm very interested in top schools, but I'm worried my GPA will hold me back.
Will admissions throw away my application at sight of my GPA? Or will they take the time to review my whole application?<issue_comment>username_1: Graduate admissions committees should, in principle, be able to review all of the applications they receive in full; this is not like undergraduate admissions, where a small team may be responsible for 10,000 or 20,000 applications. That said, some of the larger graduate departments may receive several hundred applications per year, and it may be necessary to do a preliminary screening before deciding which applications will be examined in further detail. However, what gets through such a screen can vary strongly from school to school and department to department. For instance, if you're at a school whose alumni regularly go on to graduate schools and have a track record of success, that can also be a "plus" factor. If you're near the top of your class, that can also mitigate "weak" grades somewhat (because it indicates that your school resists grade inflation).
I would hope that graduate admissions committees flag applications with publications listed, but it depends on whether the database reports that summarize applications can actually screen for the presence of publications.
Your specific case, however, is unfortunately in the "no man's land"—not a clear "read no matter what," but also not an automatic "throw away," either. It is probable that you will have a tough time if you look only at "top 5" or "top 10" departments, but you should be able to get considered by many good programs.
Upvotes: 4 [selected_answer]<issue_comment>username_2: Your grades aren't stellar, but are good enough, alongside your strong GRE scores, to make the "first cut." (One of my own graduate advisers told me that it gets worrisome below 3.3, but you've cleared that hurdle.)
Your publication record is outstanding and ought to get you in during the later screening.
This is a case of one area being very strong and the other, "not too bad."
Upvotes: 0 <issue_comment>username_3: >
> What is actually more important, grades or research?
>
>
>
As JeffE said, research experience is most important. This is because it directly relates to the responsibilities of a PhD student.
Upvotes: 0
2013/05/30 | 1,189 | 4,660 | <issue_start>username_0: I'm doing an astrophysics thesis with a lot of programming in Python. I'm currently using gnuplot for my plots, but I wonder if this is actually looking quite professional. Are there other options which look better and are still easy to use?
Here's an example of a figure in my thesis:

The vertical line I got with the following command:
```
set arrow from 4861.3,-1200 to 4861.3,2000 nohead lc 2
```
I know it reaches out of the figure, but I use this command in all of my scripts, and I only know at the end what the upper and lower boundaries of the y-axis will be. Every peak is different.<issue_comment>username_1: If you do the programming in Python, you could also do the plotting in Python with [matplotlib](http://matplotlib.org). With a little adjustment of the plotting parameters, it is possible to produce publication-quality plots with this software.
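For instance, a minimal sketch of a spectrum plot with a vertical marker line like the one in the question (the data here are synthetic and the axis labels are assumptions; `axvline` is clipped to the axes automatically, so the final y-limits don't matter):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # non-interactive backend; omit when working interactively
import matplotlib.pyplot as plt

# Synthetic stand-in for the question's data file; with a real two-column
# file you could use: wavelength, flux = np.loadtxt("tkrs.txt", unpack=True)
wavelength = np.linspace(4700.0, 5000.0, 500)
flux = 1800.0 * np.exp(-0.5 * ((wavelength - 4861.3) / 5.0) ** 2) - 200.0

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(wavelength, flux, color="black", linewidth=0.8)
ax.axvline(4861.3, color="red", linestyle="--")  # spans the full y-range, whatever it ends up being
ax.set_xlabel("Wavelength (Å)")
ax.set_ylabel("Flux")
fig.savefig("spectrum.pdf")  # vector output scales cleanly in a thesis
```

Unlike the `set arrow` command in the question, `axvline` never overshoots the frame.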
Alternatively, if you need fancy annotations etc., I can recommend the [pgfplots](http://pgfplots.sourceforge.net) package for tikz/LaTeX. You could export data from your Python program to a csv file, and then use that as the data source for plotting with pgfplots. If you are also using TeX for the main text, it allows you to produce graphics which nicely fit the formatting of the text.
Upvotes: 5 [selected_answer]<issue_comment>username_2: The choice of plotting software will depend on several factors. First, it is important to state that there are many options, from GNUPlot, through commercial plotting packages such as Grapher and Origin, and the plotting capabilities of Matlab and R, to specific packages such as pgfplots (LaTeX) or graphics packages accompanying programming languages (e.g. [PSPlot](http://www.nova.edu/ocean/psplot/)). What you choose will depend on other factors such as what your peers use, what you may have become familiar with, and perhaps what you can afford (thinking of open source vs. commercial).
My personal experience has been that there is no single piece of software that can do everything, and so for me the key has been to identify what I need done and to minimize the number of programs I need to accomplish it. This has led me to choices that are not common among my peers and has also left me to find my own solutions, not drawing so much from other people's experience (thank heavens for sites such as Stack Exchange).
So stick to what you know as long as it can do what you want but always keep an eye out for new solutions and try to figure out what others are doing that may impress you.
Upvotes: 2 <issue_comment>username_3: It is difficult to produce professional-looking output from Gnuplot (even harder than it is from Matlab, which I also wouldn't recommend).
Since you're already using Python, [matplotlib](http://matplotlib.org/) is the obvious choice. You can even make a decent attempt at producing [full figures](http://neuroscience.telenczuk.pl/?p=331), not just one panel.
Typically astrophysics doesn't have much reference to astronomy these days. However, if your thesis does, you might also want to check out [APLpy](http://aplpy.github.io/) which adds on to matplotlib.
Upvotes: 3 <issue_comment>username_4: For the plot you've shown, I have a couple of suggestions to make it look more "professional". (Maybe you're already doing some of these in the actual document.)
* Use a vector-based file format; your lines will look smoother, up to the resolution of your printer. I suggest `set terminal pdf`.
* Set a meaningful title for the curve: `plot "tkrs.txt" using ... title "Dilithium flux density"`. Or omit it completely: `plot "tkrs.txt" using ... notitle`.
* If plotting a mathematical function (rather than a dataset), `set samples 5000` or something similarly high for your final output.
* Try to choose a font for the labels that matches the paper's text as closely as possible. See `help term pdf` for more info, and other interesting tweaks. You can also use non-ASCII characters (e.g. `Å` instead of `angstrom`).
* Set the plot to a size and aspect ratio that fits nicely on the page, preferably so that your word processor (or LaTeX) doesn't have to rescale it.
* For your vertical line, you could cheat and use a parametric function, so that it will be clipped to the boundaries of the plot. It's a little tricky because if you make the line extra long, by default the plot will be rescaled to fit all of it. But there is a way to avoid this:
```
set xrange [ ] writeback
set yrange [ ] writeback
plot "tkrs.txt" ...
set parametric
set trange [-1200:10000]
set xrange restore
set yrange restore
replot 4861.3, t notitle  # x is constant while y follows t, giving a vertical line
```
Upvotes: 3
2013/05/31 <issue_start>I'm a high school senior about to graduate in 1 month. I have a strong passion for math, and I want to be a mathematician. What is the best path to getting into a top grad school? How many REUs should I try to do? Any publications? How about graduate-level courses? Do you need a 4.0 in undergrad? I'm also self-studying as much math as I can, from Artin's Algebra, Munkres' Topology, and baby Rudin. How much math should I know by the time I apply for grad school? I would greatly appreciate your feedback!<issue_comment>username_1: Here are my suggestions, having just finished a year of graduate school in math. These are therefore mostly anecdotal and should be taken lightly!
**REUs:** Try to do as many as you can! You get to meet other people who like math, learn new stuff, practice struggling with research, travel a bit, and get some cash to boot. They also, of course, look good on applications.
**Publications:** I don't personally have any publications. I wrote a few papers during my REUs and projects, but they were only published on the REU websites. So they're not necessary to get in. However, I did have a great deal of trouble getting acceptances. Maybe a publication would have helped, but I think it's very rare for an undergraduate to actually publish a paper.
**Graduate Courses:** I took several of these as an undergraduate. I enjoyed them, but realize now that I should have taken them a little more seriously! I've forgotten a great deal of what I saw in them. However, I have noticed that I'm quite strong in the area I took graduate courses in compared to my peers. So they definitely give you an edge! However, don't become too obsessed with loading up with graduate courses. Three of them is quite a lot of work, if you give them justice. Since most graduate courses are graded very lightly, you can make high marks in them without putting forth as much effort as you would in an undergraduate course! (At least, this was how it worked at my undergraduate institution.)
That said, keep in mind that some of your time in college should be spent having fun, too. Don't become a math robot just yet! You have time for that in grad school. :)
**Reading Textbooks:** The fact you're already reading the "core" undergraduate books before even entering the university puts you far, far ahead of the curve. Many people won't learn those things until sophomore or junior year. I certainly didn't. Make sure you're doing the bulk of the exercises in those books, especially Rudin. Try to prove statements you come across without looking at their proofs. I feel that this is where most of the learning happens. You can easily read things and not understand them, so just watch out! Other than that, finish those books and then you should be set to take the advanced undergraduate/first year graduate courses at your institution.
**The Math Subject GRE:** I hate this thing and did very poorly on it. You'll blow the math portion of the general GRE out of the water. It's easy stuff for any math major. However, if you don't spend a little time reviewing, you can really mess up the subject test, since it's timed and covers things you might not have thought about for several years. The topics are almost completely disjoint from what a student taking graduate courses has been doing. Look at a few practice tests, identify what sorts of questions are asked, and train yourself to quickly answer such questions. Speed is key. You don't want the subject GRE to be a weak point on your application, especially since it's an easily prevented weak point.
So, do REUs, take graduate courses, don't sweat not having publications (but if you can get some do), and study for the subject GRE!
Upvotes: 5 [selected_answer]<issue_comment>username_2: Let me add one point to Zach's answer.
Your primary goal should be to **collect strong references**. Admission committees in top departments are looking for evidence of research potential. Aside from actual published research, the most compelling evidence is a strong letter from a well-known active researcher, who writes about your research potential in specific and credible detail, based on direct personal interaction. To get those letters, you need to engage with professors as a *potential colleague*, not just as a *student*.
Yes, take advanced and graduate-level math classes, but don't *just* sit in the back and quietly get As. Ask (and answer) intelligent questions in class and in office hours. Don't limit your questions (in office hours) to course material. Ask about undergraduate research opportunities (*not* just REUs) with the explicit goal of peer-reviewed publishable research (*not* just reports on some REU website).
Keep in mind that becoming a published mathematician can take several years of effort, and there's no guarantee that you'll get there before you graduate. But the sooner you start, the closer you'll get, and the more chances you'll have to impress the faculty you work with. **So start *now*.**
The first three profs you ask may tell you that publishing mathematics research as an undergraduate is impossible, because you need to take five more years of classes before you can even understand the problem. They're wrong; ask a fourth.
Upvotes: 4 <issue_comment>username_3: **Some advice first:** Ultimately you may change your mind about what you want to do in your academic life (and you probably will, which is to be expected), so what I would do is make sure I have a back-up plan. By "back-up plan" I mean whatever field you may choose, make sure it coincides with another existing field (in some obvious, explicit way). That way, if you do end up hating, say, analytic number theory and you want to specialize in algebraic geometry instead, the path would be continuous, and you wouldn't face too many rough patches. Since you're passionate about math, I'm assuming you have a very, very rough idea of what you may want to specialize in grad school (applied math seems out the window). If not, that's okay. You've got years to think about that. If the answer is yes, then in addition to the core classes (abstract algebra, point-set topology, and real analysis are the norm), you should self-study additional books that pertain to your field of interest. If you like number theory, start with <NAME>'s book. If you like geometry, start with Pedoe's book. And so on and so forth. In the meanwhile I'll give you an outline of what I think is the "Ideal" student's path towards a top grad school (for these purposes, I'm going to say all of the schools in the top 10 programs). This will be completely subjective of course, so disclaimer: don't get on my case. I'm also going to completely ignore the general education requirements and just focus on the math part. Since I'm not a believer in REUs I won't mention anything about that. I don't think they're a good reflection of research ability because not too many of them are exceptional (i.e., no first authorship. No grad school is going to believe you made progress on the Hodge conjecture if you say that on your personal statement. You'd only be BS'ing yourself). Moving on :-)
**Freshman year**: Get your core classes over with. You'll probably be able to finish half of them in this time assuming you declare your major this early and you petition your academic adviser to take more than what the norm is (and if you can prove your competency in the subjects you've listed). They may be able to waive the prerequisites and make an exception for you. In the meanwhile, don't be a stool in class and just get A's. Ask and answer questions regarding your coursework. Go beyond that and take advantage of your professors' office hours. Don't worry if you think you're intruding: it's part of the job description, and they'll probably like the enthusiasm. Talk to them about doing research (it's definitely ***not*** too early) in an area of interest. Provide your relevant mathematical background so that they may guide you. Ask them what other professors you should talk to if you want to do undergrad research in X, Y, or Z. Collect as much information and ask as many questions as possible. In the meanwhile, maintain a consistent average (3.8+ would probably be favorable). If all goes well, you should be able to get started (or at least have a topic and someone to work with) by the beginning of your sophomore year. If you wanna have fun, you should take the Putnam exam (I don't know how relevant scores are to grad admissions) and see how well you do. If you somehow become a Putnam Fellow, that's going to significantly increase your chances (for bureaucracy reasons probably since the Putnam exam and research mathematics require different tool sets). Still, you can only take this exam a maximum of 4 times in your undergrad years (once for every year). So I'd take advantage of that. If your university has an honors program that results in a senior thesis, I'd take advantage of that as well. Sometimes very exceptional theses get published in journals and that'd be a good credential to have.
**Sophomore year**: By the end of this year you should be well done with your core requirements (assuming you're persistent and keep petitioning to take more and more credit hours). More importantly, at the end of this year you should have a very rough idea of what you may want to study in grad school. But it should be more concise than whatever you're thinking about now. At this point you should also think about starting at least 2 more research projects (the reason is, you need at most 3 letters of recommendation for places like Princeton). If you can get 3 letters of rec. from professors who know you well (i.e., the ones you worked with in your 3 research projects) that'll look very favorable on your application. Once again, take the Putnam. Maintain your GPA as close to a 4.0 as possible. You know, the usual.
**Junior year**: Now would be a good time to start taking grad-level classes. Try to align these classes with your area of interest. If you like algebraic geometry, take a class on that and see how you like it. You should know the drill by now. If you're still working on these projects (and you probably will be) don't lose focus. If a publication seems like a pipe dream, that's okay. ***Publications are not expected of undergrads***. But it would look great of course. If you have not already, you should think about which grad schools you may want to apply to. Again, maintain that high GPA and take the Putnam to ease your sure-to-be-troubled mind. By the end of junior year, you should start studying for the GRE and the GRE Subject test. If you happen to have done extremely well on the Putnam (i.e., became a Putnam Fellow or scored in the top 100) you'll probably find the studying part trivial, as you've had much practice with problem solving. Of course don't get cocky: you should still review anyway.
**Senior year**: Take the GRE and the GRE Subject test. Kick ass at them. Try to finish up your 3 research projects if you have not already and prepare to submit them to a peer-reviewed journal. The process may take time so don't be discouraged if you're still not finished by the time you apply. Be sure to do well on your senior thesis as well assuming your university offers them. Get those amazing letters of rec. from the 3 - 4 year relationship you've established with your professors. Go overload on those grad classes (or self-study) and work at optimum. Get acquainted with your potential field. After all is said and done, write up your CV, do your applications, and send them in. After that, take a sigh of relief, put up your feet, and relax. Take the Putnam for the final time. Enjoy your remaining months in college before you're off doing your PhD.
If all goes well, you'll get published, you'll get praised, and you'll get into your schools of choice.
Upvotes: 3 <issue_comment>username_4: Warning: I am not coming from a US system.
What I would miss is *specialization*. Do you want to focus on a topic in mathematical research? Shape your reading and background around it.
Upvotes: 0 |
2013/05/31 | 914 | 3,569 | <issue_start>username_0: Back in my university days, I got into the habit of saving papers I had to read to my hard disk. At first I did this simply to organize them more conveniently, and have quicker access to them.
However, my reasons for saving those papers to disk gradually changed as graduation approached. In the end, I saved ***a lot*** of papers, because at one point I realized that I had unrestricted, unlimited, free access to an absolutely fantastic source of richness, and that soon after graduation, I'd lose all of that.
Years later, I now frequently end up in discussions, or get asked questions, or otherwise end up at a point where common access to one of those papers really helps to progress the discussion. I often just send that paper around without thinking twice about it, more because I believe that's how science should work than anything else.
I believe most of those papers are however *not* in the public domain, meaning that people not associated with a university or other institution that has access can *not* access the paper without some payment to its official publisher.
So is any of this legal? If not, what are the possible repercussions for me personally, and for the people I sent it to?
I realize this is a touchy issue, and there are many initiatives to open scientific publications up for the general public. A related question would be: do these initiatives (like [all of these](http://science.okfn.org/tools-for-open-science/)) exist partly because of this reason?<issue_comment>username_1: >
> So is any of this legal?
>
>
>
Unless your sharing falls under [fair use](http://en.wikipedia.org/wiki/Fair_use) in your particular country (and with the Internet, sharing online is a tricky business anyway), you should not share the papers.
If you want to share the papers, linking to the original source is the best option, and [Google Scholar](http://scholar.google.com/) or other online repositories can get you pretty far with a lot of material. If you can't find an online source, you are limited to providing a citation and hoping that whoever wants to find the article can use their library to source it.
>
> If not, what are the possible repercussions for me personally, and for the people I sent it to?
>
>
>
Practically? Unless you are sharing the files openly with a broad audience (i.e., so they are available online), no one is going to track you down to sue you. I would avoid openly posting or linking to copies of articles that are behind a paywall, but emailing them to people you are collaborating with is unlikely to cause you any trouble.
Upvotes: 4 [selected_answer]<issue_comment>username_2: As your user page indicates that you are in Germany, you may be interested in [§53 UrhG](http://www.gesetze-im-internet.de/urhg/__53.html):
>
> (2) Zulässig ist, einzelne Vervielfältigungsstücke eines Werkes herzustellen oder herstellen zu lassen
>
>
> 1. zum eigenen wissenschaftlichen Gebrauch, wenn und soweit die Vervielfältigung zu diesem Zweck geboten ist und sie keinen gewerblichen Zwecken dient,
>
>
>
(Rough translation:
*(2) it is permitted to make or let be made single copies of a work
1. for personal scientific use, if and as far as copying is needed for this reason and is not for commercial reasons,*
)
which is a kind of continental European fair use policy.
Working in science, I think that keeping the papers I read in a personal private archive is necessary - if only to be able to answer specific claims or questions about the papers I cite in my own papers, presentations, and so on.
Upvotes: 3 |
2013/05/31 | 643 | 2,624 | <issue_start>username_0: I think I must be stranger than usual sometimes: I always seem to take on topics that are somewhat more difficult than the mainstream, in very trying conditions - but the outcomes are very much worth it (not for me per se, but for the potential benefits to everyone).
I try to instill this in my students by 'gently' mentoring them and encouraging them to push their own limits a little further each time. I also teach them that 'failure' is just another step towards success. For the most part, my students take on the challenges, and the work, ideas, and enthusiasm that come from them frankly humble me.
However, I am always looking to learn new techniques to encourage my students to push, and even exceed, their limits.
What strategies are most effective for encouraging students (and colleagues for that matter) to take on the difficult topics?<issue_comment>username_1: As a reflection on my own experience as a teacher, I am convinced that motivation is not a static, unchangeable property of particular students, but it is a multifaceted concept, a variable state of mind, created through the interaction of the student with the subject matter in a particular environment (teacher, group, topic, etc). A good teacher arouses the motivation indirectly, by creating the right environment for learning. The students' effort ensues almost magically.
See this interesting article:
Linnenbrink, <NAME>., and <NAME>. "Motivation as an enabler for academic success." School Psychology Review 31.3 (2002): 313-327.
Upvotes: 4 [selected_answer]<issue_comment>username_2: The most important aspect is the teacher's attitude. It sounds as though your students are already benefiting from yours.
As @Cinco says above,
>
> motivation is not a static, unchangeable property of particular students... A good teacher arouses the motivation indirectly, by creating the right environment for learning.
>
>
>
Based on my experiences with some awesome professors, I will say DO
* hold your students to high academic standards.
* assume that the students are capable of doing more than they are currently doing.
* show them how the subject you are teaching is relevant to them.
* be enthusiastic.
* recognize their great work, and always expect more and better in the future.
Perhaps most importantly, *have fun!* Enjoyment is contagious, and a good professor can make it fun to learn about anything. A *great* professor can make it fun to struggle and push against the limits of our abilities.
(Sorry, this is rather subjective...it is hard to quantify what makes a great teacher!)
Upvotes: 3 |
2013/05/31 | 658 | 2,828 | <issue_start>username_0: Paper-based exams are not fully representative of knowledge, and it is good to consider students' oral presentations as a factor in the final grade to some extent. This is somewhat the case in graduate courses, where the number of students is smaller, but how can one follow this strategy in a crowded classroom of inexperienced undergraduate students? I mean a classroom of 50+ students who do not have experience in scientific discussion!<issue_comment>username_1: I've had classes of this size where I have each student do an individual presentation. It is **very** time consuming but I also feel it can be **very** worthwhile.
If you give each student 10 minutes to present some information and you have 50 students then you will have 12-13 hours for presentations (allowing 15 minutes total per student including Q&A, changing students, etc.) If you teach in 2-hour sessions then it will consume 6-7 sessions. If you have 30-32 sessions per semester it is doable but it also removes a significant chunk of time from lecturing.
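As a sanity check, the session arithmetic above can be sketched in a few lines (the numbers are just the ones from this example: 50 students, 15-minute slots including Q&A and changeover, 2-hour class sessions):

```python
# Rough planning arithmetic for in-class presentations. The defaults
# mirror the example above: 15 minutes per student (talk + Q&A +
# changeover) and 2-hour class sessions.
import math

def presentation_load(n_students, minutes_per_slot=15, session_minutes=120):
    """Return (total_hours, sessions_consumed) for individual presentations."""
    total_minutes = n_students * minutes_per_slot
    return total_minutes / 60, math.ceil(total_minutes / session_minutes)

hours, sessions = presentation_load(50)
print(hours, sessions)  # 12.5 hours, i.e. 7 two-hour sessions when rounded up
```

With roughly 30 sessions per semester, this confirms the estimate above: presentations consume about a fifth to a quarter of the course's contact time.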
In my case, I lectured for several weeks (giving the students time to do their research and giving them the foundations they needed for their presentations) and then had the students give their presentations. Then I continued lecturing with other assessments later on. The module was not about presentation skills but I do feel that in each subject, we need to teach the students some general skills (structuring an argument, how to format text, how to research, giving a presentation, etc.) in addition to the module content.
The students ended up understanding the material quite well when judged by their presentations and I found many of them quite eager to learn how they could improve their presentation skills.
In the end, I was happy with the overall results and plan to do it again.
Upvotes: 2 [selected_answer]<issue_comment>username_2: I would like to add to username_1's answer. If you have **teaching assistants**, please do make good use of them in this aspect.
As a TA, I have had excellent experiences in mentoring undergraduates to prepare research presentations, final papers, projects, and proposals for these papers and projects. Of course, we were generally in a class of 120-160 students, so that speaks to a larger classroom scale than what you are suggesting. There were usually 3 graduate TAs, and we each had about 50 students to mentor. We found that we could devote significant amounts of time to each student when we met them on a one-on-one basis. Of course, the professor also met them one-on-one, and there were a couple of rounds of iteration on their final presentations - which was very, very useful for the professors, the TAs, and the students. As mentioned previously, it was a time sink, but very well worth it.
Upvotes: 2 |
2013/05/31 | 583 | 2,479 | <issue_start>username_0: Consider the case of a new researcher who does not know all the possible journals well and has no access to human advice (from colleagues). (Please do not comment that human advice is the best option - that is out of the scope of the question!)
Or one journal rejected the paper and the author is looking for outside-the-box candidates.
---
**Possible examples:**
1. E.g., in medicine, one can use the ETBLAST tool by submitting the full text of the article: <http://etest.vbi.vt.edu/etblast3/>. Some journals even require the top 3 similar papers (found by ETBLAST) to be pasted into a form during submission (making sure the authors have addressed the related literature well).
2. Or the Elsevier JournalFinder service: <http://journalfinder.elsevier.com/><issue_comment>username_1: You need to know your field. Part of knowing your field is reading lots of papers. This should give you an idea of what kinds of papers each of the major journals publishes, and which journal may be best suited to you. If you're a good researcher, you ought to already know the main journals that publish research in your area. If you don't already know that, then maybe you need to spend more time reading published work first.
In particular, the best way to figure out if a particular journal is a good fit for your paper is to read a bunch of papers published at that journal. That will give you a pretty good sense. You can use the "call for papers" as a further sanity check, but nothing substitutes for reading what else they have published.
There is no shortcut. If you are knowledgeable about your field, a random webpage is not going to know your field better than you do. If you are not knowledgeable about your field, then the first thing you need to do is to fix that.
Upvotes: 4 <issue_comment>username_2: Presumably you have included references in your article! Just check which journals you tend to cite most (or at least twice each in your article). Those must be good and relevant enough (since you cite the work published there in the context of your own work). Then check their guidelines to be sure (possible length constraints, level of detail, etc.)
If you do not want to go to the most prestigious journals because you fear your work is not groundbreaking enough, or because you want an easy ride, or, as you said, because they rejected your article already, here is a relevant rule-of-thumb: do not publish in a journal which you would never cite for whatever reason.
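The "check which journals you cite most" heuristic is easy to automate over your own reference list; a minimal sketch (the journal names below are made-up placeholders, not recommendations):

```python
# Tally the journals in your own bibliography; venues you cite at least
# twice are natural candidates for submission. The journal names are
# made-up placeholders for illustration.
from collections import Counter

cited_journals = [
    "Journal of Placeholder Studies",
    "Annals of Example Research",
    "Journal of Placeholder Studies",
    "Obscure Venue Letters",
    "Annals of Example Research",
    "Journal of Placeholder Studies",
]

counts = Counter(cited_journals)
# Keep venues cited at least twice, most-cited first.
candidates = [journal for journal, n in counts.most_common() if n >= 2]
print(candidates)
```

In practice you would extract the journal fields from your `.bib` file rather than list them by hand, but the shortlisting logic is the same.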
Upvotes: 2 |
2013/05/31 | 910 | 4,012 | <issue_start>username_0: At most scientific conferences and talks I have attended, the speakers generally present black text on white backgrounds, which I personally find rather dull. Is there any reason / explicit convention which should stop one from presenting light coloured text on a dark background?<issue_comment>username_1: Conferences tend to take place indoors in dimly-lit rooms. Dimly-lit rooms are a good place to fall asleep. Using a white background helps avoid this. Also, dark text/lines on a white background are easier to read (I've seen more than one study showing this).
If you are going to be showing a lot of astronomy or fluorescence biology images, where the image is mostly black with some interesting colorful things in it, you probably want a black or dark background for at least those slides; at some point you should just switch over. Also, if it is essential to understanding for the viewer to discriminate many shades of color, it's easier with a black background because you can use a wide variety of discriminable pastel colors that would be washed out to invisibility on a white background. You can't even use saturated yellow or cyan on a white background and expect it to be seen.
But for most scientific presentations there are good reasons for a light colored background.
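The "washed out to invisibility" effect mentioned above can be quantified with the WCAG contrast-ratio formula, a web-accessibility heuristic applied here to slide colors purely as an illustration:

```python
# WCAG 2.x contrast ratio between two sRGB colors, as a rough way to
# quantify why saturated yellow text is unreadable on a white background.
def _linear(c):
    """Convert an 8-bit sRGB channel to linear light."""
    c /= 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    """WCAG relative luminance of an (r, g, b) tuple of 0-255 values."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(fg, bg):
    """Contrast ratio (1.0 to 21.0) between foreground and background."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(contrast((0, 0, 0), (255, 255, 255)))      # black on white: 21.0
print(contrast((255, 255, 0), (255, 255, 255)))  # yellow on white: ~1.07
```

WCAG recommends a ratio of at least 4.5:1 for normal text; saturated yellow on white falls far below that, which matches the advice above about avoiding yellow or cyan on light backgrounds.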
Upvotes: 5 [selected_answer]<issue_comment>username_2: In HCI/information science many conference presentations, especially in the best conferences like CHI or CSCW, tend to have nice colored backgrounds. I would even argue that in certain sub-fields of HCI or information science, just having a vanilla black or white colored background tends to be the exception rather than the norm.
[This](http://jeffhuang.com/best_paper_awards.html#chi) is a brilliant compilation of best paper awards in many sub-disciplines of computer science. A simple google search will often yield the conference presentations and slides of them. Under CHI, you will find that very often, the slides are rather innovative when it comes to the color scheme.
Upvotes: 1 <issue_comment>username_3: Most private companies have their specific color schemes and logos that must be shown on every slide. That's just the way it goes in industry. So it's just a matter of hitting the right conference :). I am pretty sure that at communication science conventions, you won't see any single white background presentation (or black background, for that matter).
Ideally, the color scheme should be a nice touch to your presentation, not a decisive point. The content should play the more important role, and the font size is arguably more important than the color it shows in: if nobody can read your font 6, what's the point of the slide? On some ways to make your presentations more effective without using multiple nested hierarchies of bullet points, see [this multiple award winning one](http://abcnews.go.com/images/us/how_to_win_in_anbar_v4.pdf).
Upvotes: 1 <issue_comment>username_4: Whether light foreground on dark background or vice versa is easier to see depends on the light conditions of the presentation room.
Projectors are still not very powerful: the white of a projector will usually not stand out even against comparatively dim room illumination.
If it is really dark, a bright foreground on a dark background allows more shades of colour and brightness to be shown. Because projectors are not too powerful, the risk of an uncomfortably bright foreground is not very high.
If the room is not really dark, a bright foreground on a dark background may be very hard to see, as the eyes adapt to the overall light conditions and few projectors are powerful enough to make white text on a black background (= grey because of the surrounding illumination) stand out enough to be easily readable.
Note how projectors becoming more powerful allowed a transition from white-on-black in really dark rooms (which were needed because the white was not that bright) to black-on-white in rooms with a dimmed overall illumination.
Upvotes: 1 |
2013/05/31 | 1,154 | 5,094 | <issue_start>username_0: I am a year away from completing an undergrad degree in Computer Science and I find myself increasingly interested in more advanced subjects from pure mathematics that sometimes lie outside the scope of my major, and I think I would enjoy doing graduate work in pure math. But I feel like the amount of formal education I've had the opportunity to have in my undergrad isn't quite sufficient for graduate work in math.
If I were to go and buy a book or two on some subject I'd like to learn more about that I can't take a course on (say, Abstract Algebra or Number Theory), and spend this summer getting comfortable with it, is it likely that a graduate admission committee take my efforts into account?
My situation is worded specifically for a CS --> Pure Math trajectory, but it's part of a more general question. Do graduate school admissions take into account personal study, or do they only care about formal university education?<issue_comment>username_1: I have limited knowledge about this and I speak from experience as a graduate student who has gone through a similar situation *(from the outside)* as well as having been the student representative on our graduate admissions committee *(on the inside)*. Obviously, a professor or academic with much more experience than I have can attest to this in better ways.
In **general**, formal education is given preference. Grades in relevant courses are given more importance than others. Note that there is a lot of competition for slots in a PhD program. Self-study is a very fuzzy area and there is little scope for the graduate admissions committee to evaluate it. There exist, of course, two mutually non-exclusive exceptions to this general norm:
1. A respected recommender in the area attests to the fact that you did engage in significant amounts of self study and that has contributed to your overall knowledge.
2. You do self study. Then you do research based on it. Then you publish in a non trivial journal or conference. That automatically attests to your knowledge in certain ways.
Otherwise, ask yourself this: why would graduate admissions committees believe you, especially when there are usually bound to be a few more well-qualified applicants with good scores in relevant courses?
Upvotes: 5 [selected_answer]<issue_comment>username_2: If possible, I would recommend first seeking out a math professor at your university, preferably someone you know and who holds you in high regard. If you are home for the summer, e-mail would suffice, but an in person meeting would work much better.
Anyway, explain your plans, and:
* Ask the professor's advice about your self-study plan (maybe the prof has different suggestions)
* Ask if he or she would be willing to evaluate your progress at the end of the summer (e.g., you show up to his/her office and take an informal oral exam), and to write you a rec letter if you've done a good job.
Good luck!
Upvotes: 2 <issue_comment>username_3: Another option for the special case you describe is to take the math subject GRE after your self studies. This would validate your knowledge of the undergraduate curriculum in pure maths. They might take into account that you did not take any of the courses (you probably won't get the same score you would have gotten as a math major, because you don't know real analysis, topology, etc.)
Upvotes: 0 <issue_comment>username_4: The other answers are of course reasonable, but my own viewpoint, perhaps an outlier, is that energetic/extensive self-study is a much more positive indicator of motivation, self-discipline, and genuine interest than course attendance with good grades.
Yes, there is the obvious point mentioned in other answers: how to measure or certify self-study? The GRE subject test is very iffy, in a variety of aspects. Thus, the ideal circumstance is coursework *and* self-study, to have certifiable conventional ("passive") education as well as demonstrating initiative and interest. Further, very often the available undergrad curriculum really doesn't prepare people for grad school, so I'd strongly recommend substantial self-study in any case. Perhaps best under the aegis of math faculty who can provide some certification in letters of recommendation.
The thing that *might* be missing from self-study, if that's all one has as mathematics background, is the "regular drill" on basic reflexes that, in any case, routine coursework does cultivate. If one has to stop and think too much, the slowdown/cognitive-load can make routine things effectively impossible.
Also, beware: the usual first year or two of math grad school includes "routine" coursework and exams that presume a "standard" undergrad background, including routine drill on a fairly standard body of material. With an extremely thin or idiosyncratic background, one must play a lot of catch-up. This can have several bad effects: it may give the impression of incapability, make you tired and discouraged, or stifle natural interest. So "brace yourself" if that's the path you find yourself on.
Upvotes: 3 |
2013/05/31 | 1,755 | 7,056 | <issue_start>username_0: I have some literature written by Chinese authors whom I would like to attribute in my bibliography with their names in Chinese characters (汉字). The problem is that, as the literature itself is not in Chinese, their names appear only in their romanized form (Pīnyīn or Wade–Giles).
What resources can I use to find out the correct characters for their names, e.g. library catalogues with both forms given, author lists, etc.?
I am not only looking for Chinese names but Japanese ones as well.
**UPDATE:**
Examples include:
* <NAME> / <NAME> (1982): *Wörterbuch der chinesischen Redensarten. Chinesisch–deutsch; Tetragramme des modernen Chinesisch* [= 漢語成語]. Berlin, New York: de Gruyter.
* <NAME> (1964): A dictionary of Chinese idiomatic phrases. Hong Kong: Eton Press.
* <NAME> (1896): Some Japanized Chinese Proverbs. In: *The Journal of American Folklore* 9, 33, pp. 132–138.<issue_comment>username_1: I don't know much about other languages. I'll answer the question for the Chinese case since I am a native Chinese speaker.
The literature you have is not in Chinese. The best way is to contact the authors. They will give you the correct answer.
As far as I know, there are no such reliable resources at the moment. There is no unique and consistent way of transliterating between English and Chinese names. As you already know, Pīnyīn and Wade–Giles are two of the romanization systems. I don't think there is a one-to-one correspondence. There are other issues, such as traditional versus simplified Chinese characters. You won't know the correct answer unless the author tells you.
If you have no way to contact them, try their colleagues or others who might know. Don't use the Chinese name unless you are sure. Use whatever form of the name appears in the literature if you are not sure.
Upvotes: 2 <issue_comment>username_2: Transliteration between major East Asian languages and Latin alphabet orthography is not a bijection, which means that the exact same spelling of an Asian name in Latin alphabet can correspond to many different spellings in their native language and vice versa. It's somewhat similar to how you can't tell if it's "principal" or "principle" by just hearing the sound /prɪnsəpəl/. In this example, pronunciations and spellings are not a bijection.
This problem is particularly severe for certain names such as typical Chinese names, so some journals allow Asian authors to write their names in their native languages to mitigate the difficulty in identifying a researcher. For instance, see this editorial by American Physical Society: [Which Wei Wang?](http://pra.aps.org/PhysRevLett.99.230001)
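The many-to-one nature of romanization can be illustrated with a toy lookup table. The character spellings below are just a few common possibilities I've chosen for illustration — not an exhaustive or authoritative mapping:

```python
# Toy illustration: one tone-less pinyin romanization can correspond to
# many distinct Chinese-character names. The entries are a small,
# hand-picked sample of common names, not an authoritative database.

ROMANIZED_TO_HANZI = {
    "Wei Wang": ["王伟", "王薇", "汪伟"],  # the "Which Wei Wang?" problem
    "Li Zhang": ["张力", "张丽", "张莉"],  # 力/丽/莉 are all romanized "Li"
}


def candidate_spellings(romanized: str) -> list[str]:
    """Return known character spellings for a romanized name (often several)."""
    return ROMANIZED_TO_HANZI.get(romanized, [])


if __name__ == "__main__":
    for name in ROMANIZED_TO_HANZI:
        print(name, "->", ", ".join(candidate_spellings(name)))
```

Since the romanized form alone cannot pick out a unique entry, any real lookup would need extra keys (affiliation, co-authors, publication titles) to disambiguate — which is exactly why asking the author remains the safest route.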
I was born, raised, and educated all the way up to my Ph.D. in Japan. But I can't tell how my namesakes in Latin alphabet would spell their names in Japanese because they may not be namesakes in our native language. The exact spelling in Latin alphabet may not always mean the same pronunciation in Japanese either.
So, there is no easy resource to resolve names in Latin alphabet back to their original spellings. Sometimes you might be able to make a fairly reliable educated guess if you're as proficient in the Asian languages as native speakers. But you'd run into the Asian version of Steven vs. Stephen and Erica vs. Erika. So, I'd recommend you ask the person(s) directly unless you have definite evidence such as a copy of a recent paper written by them in their native language.
Upvotes: 2 <issue_comment>username_2: >
> I am aware of this
>
>
>
If you're already aware of the transliteration problem but still asking the question, then I assume you have other information about the authors to help identify them, e.g., affiliations. If that's the case, I'm curious why you don't use the most useful searchable database, namely <https://www.google.com/> Depending on how much information you have (and possibly how proficient in the target languages), you'll eventually be able to identify them in their native languages unless you're talking about very obscure or older-than-the-internet authors. Since the authors you want to identify wrote something in foreign languages, I think chances are they have personal or official websites that have their names both in Latin alphabet and in their native languages. If they don't, you may find some pages that help you identify them in their native languages.
I don't get why you think a database on literature is particularly useful for that purpose either. You can use any available resource. For instance, if the authors you want to identify are active Japanese researchers, you can search them in one of the researcher databases found here: <http://read.jst.go.jp/> (in Japanese) <http://read.jst.go.jp/index_e.html> (in English). (The English version is the first hit on google for "japanese researcher database" by the way. You'd run into these sites very frequently if you google Japanese researchers, too.)
Anyway, as an example, assume you want to know, say, the Japanese spelling of my former Ph.D. supervisor <NAME> at Nagoya University. And you want to avoid navigating the internet in Japanese as much as possible while searching. Then, you go to [the English version of Read & Researchmap](http://researchmap.jp/?lang=english), click "Researcher Search" to get to [the researcher search page](http://researchmap.jp/search/), and do the usual search with the information you have (i.e., the name is spelled <NAME> and he's at Nagoya University). You'll be directed to [his information in English](http://researchmap.jp/read0164386). Then you switch to [the original Japanese page](http://researchmap.jp/read0164386/?lang=japanese) by clicking 日本語 to check how to spell his name in Japanese.
Of course, you don't need to use the Read & Researchmap to know his name in Japanese. You can simply google him. If you know publication titles and his name in Latin alphabet, you can surely locate his personal website, where you can see how to spell his name in Japanese.
Exactly what kind of situation are you in? You talk about literature, so I assumed you wanted to cite/quote works by Chinese and/or Japanese authors. And you say those works are not in their native languages, which, I assume, means that you know more about the authors than just their transliterated names (unless you're trying to cite/quote them without reading them). Was the additional information you have not enough to identify them through google? Are libraries' databases and such on books etc. really the only kind you can identify them with, given what you already know about them? Maybe they're from the 19th century, or way too obscure for the internet to be of use? Or is this question "What resources can I use to find out the correct characters for their names?" asked as a very broad inquiry for Internet Search 101? If that's the case, it's too broad to answer, because you don't tell us what you already know about the authors and why you can't identify them through usual means like google.
Upvotes: 2 |
2013/06/01 <issue_start>username_0: Do you have to have a PhD in order to become a PI on a grant for your own funding?
What if you only have a Bachelor's or a Master's degree, and are not even working towards a PhD?<issue_comment>username_1: As <NAME> points out, who qualifies as an eligible PI varies widely between different programs. However, in general, one important distinction can usually be cited: you must be a *professional* researcher (as opposed to a student or trainee) to be eligible to submit a direct grant. Students would normally need to have their advisors submit the grant proposals on their behalf.
You need to consult the specific rules of the grant you're interested in applying for. Moreover, if you are at an institution that often applies for grants, you should check with the local grants administration office for more guidance. They may have internal policies regarding for whom they will submit grants.
It is also important to note that different countries and different agencies have varying standards. Here in Germany, for instance, you *must* have the equivalent of a doctoral degree to be eligible to be the coordinating Principal Investigator of a proposal. You can participate as a team member without a doctoral degree, but not be a principal investigator.
Upvotes: 3 <issue_comment>username_2: It very much depends on the grant in question. For example, while most grants in the United States need a PhD (and often university rules dictate more than a PhD, like not giving grant-writing privileges to adjuncts), there are often smaller grant programs that expressly allow non-PhD PIs.
For example, my university has a translational research program that has pilot project grants, to help researchers generate the preliminary data that is so important for major grant submissions. While the $5,000 and $50,000 tiers are restricted to non-adjunct faculty, there is also a $2,000 tier that merely requires a faculty mentor on the project, and is specifically targeted toward graduate student PIs.
Upvotes: 3 |
2013/06/01 <issue_start>username_0: I recently finished my master's degree and co-authored a lengthy paper (it was my idea and I took the lead on the research and writing). The paper is not for publication but provides valuable information and research for the department faculty and dean. The paper is a program development proposal that our academic department would like to implement. Our group discussed sending the paper to the Dean in conversation with her, and one of the co-authors brought it up with her again at the end of the semester and offered to send it to her individually without consulting our group.
I'm okay with her sending it; however, I felt that she acted independently and did not consider the timelines of the other co-authors for doing final edits, or offer to CC us when she sent it.
My name comes last on the paper (alphabetically) even though I wrote and researched more than the other two authors combined. I have the sense that my co-author is taking more credit than she is due on this project since her name comes first and because she acted independently.
I requested that I be provided an opportunity to review the paper and to be CC'd on any future sharing of our work with others. I think my co-author was offended.
I am interested in asking my two co-authors if it is okay to have my name as the first author on this paper (then she can send it).
Please advise if I am being petty or if this is a reasonable concern on my part. I would like for the Dean and faculty to remember my contributions. The co-author who offered to send it has already made her mark on the program in other ways and I am not sure I have stood out as much.<issue_comment>username_1: I understand why you are a little peeved that she offered to send it to the Dean without checking with you first, but I think you need to swallow your frustration on that point.
Instead, you need to figure out what your goals are. Right now you are reacting: you are letting your emotions control your reactions. Instead, you should figure out what end goal you want to achieve.
Do you want to make sure that the paper is in good enough shape, before it is sent to the Dean? Then negotiate a timeline with your co-authors. This might mean you both have to give a little: you might have to work extra hard to get your revisions in; and they might have to be willing to delay sending the paper a little bit until you've all had a chance to revise to your satisfaction. That's a perfectly reasonable request.
Do you want to be listed as first author? If so, why? You said the paper is not going to be published. So why do you care? If you think your contribution means it is appropriate for you to be first author, and you care about it, then raise the point with your co-author. But approach it with humility and gentleness. Remember that we're human; we have a tendency to overestimate our own contribution and underestimate others, so you need to correct for that. Also, definitely do *not* ask to be first author as a way of getting revenge; that is petty and beneath you. Take the high road.
And, take this as a lesson. Generally, it is best to discuss authorship early in the project, not wait until the very end. In my collaborations, there is often an understanding from the start about who is the lead on the project; the expectation is that the lead has the overall responsibility and will most likely put in the most work, and in return will likely end up as first author. These discussions about authorship are often easier to happen earlier than later.
Anyway, bottom line: Figure out what you want to achieve (what you want the end state to look like). Then, figure out how to ask for that. Set aside your emotions and your negative reactions to your co-author's initiative in moving things forward without checking with you first, and focus on what end state you want to achieve. I suspect you might find that you share pretty much the same goals as your co-author, and there's no need to get upset or strain your relations with her.
Upvotes: 3 <issue_comment>username_2: D.W.'s answer is great advice. I just want to add something for the (your) future.
What counts in the end, if you intend to continue in academia, are hard publications. Since this seems to be a soft "publication", I think you should ask yourself: can the material be rewritten and actually published? If the answer is yes, then you have to ask yourself whether you should take on the job of first author (you have already done a lot). Most often the one who takes the initiative will be spearheading the work.
Upvotes: 1 |
2013/06/01 <issue_start>username_0: My husband got a math PhD in 2009, and could not get a research post-doc in his chosen field. We had young children at that point, and he spent two years as a lecturer, applied for hundreds of jobs, and finally took a tenure-track job at a teaching university. Now, he honestly believes that a research position is impossible for him. He loves to teach, but hates the endless grind of "administrivia", the low pay, and the mental laziness of the students he's required to teach.
I have insisted on him getting counselling, but he refuses to take anyone's encouragement that other jobs are possible, saying things like, "You just don't know the academic world. If I go to the NSA, no one will hire me in an academic position. If I become an actuary (because we do need more money) then all of my time will be sucked into studying for exams, and I still won't be able to do research." In his mind, no one in our circle of loved ones has the authority or experience to give him accurate encouragement. Is a research position possible after spending time at a teaching university? Would he be able to get back into academia if he had to leave to do actuarial work/industry/something that helps pay the bills?
---
Update.
He spoke with a mentor of his from undergrad, and was given incredibly specific guidance on where to go from here. His teaching load is so heavy, (4/4 or 4/3) and the mentor gave him a few places to apply to in his field where the teaching load is more conducive to *some* research, but is not at a research-specific university. (3/3 or even 3/2!!!) He plans on speaking to his advisor, getting some *glowing* letters of recommendation, and starting a job search soon.<issue_comment>username_1: I am in a very similar situation as your husband, as a high school teacher very much wishing to be in research. So, I can empathise with your husband's and your dilemma (as it definitely affects family and friends as well).
Everyone's situation is different, but I can tell you how I cope in general. Knowing that postdocs are few and far between, I cope by continuing my own research as much as I can in the spare time I have - getting papers published and continuing to build my research profile.
Research positions are indeed possible after teaching, but as an insurance I would advise still publishing work and developing his research profile.
I hope this helps (and I hope good luck finds you and your husband).
Upvotes: 4 <issue_comment>username_2: The situation you describe is unfortunately common. It is an unfortunate reality that the number of people well qualified for research jobs is greater than the number of jobs.
A few notes:
* Hiring committees will be looking for good publications, and for recommendation letters coming from leaders in the field attesting to your husband's impact and further potential. If he continues to do good research, publish it in well-known venues, and speak about it at conferences, then he has a good chance. If not, then unfortunately he is competing with people who are.
* Most of the complaints you mention are common at research jobs as well. I teach at a large state university, where we have our share of poorly prepared students, and/or students who are just going through the motions. Indeed a Harvard professor once quipped to me that "we have remedial classes here, too".
* I do know people who have successfully moved from one teaching position to a different teaching position, and been much happier afterwards. Some departments have more motivated students, pay better, and/or do a better job of keeping the paperwork down, and your husband could look for one of these.
* I know people who have taken a variety of non-academic mathematical jobs, and for the most part they are quite happy with them!
The bottom line is that your husband can probably get a research job *if* he can sustain a very strong research program during the interim. Otherwise, there are likely to be appealing alternatives as well.
Good luck!
Upvotes: 6 <issue_comment>username_3: Yes. It is possible. Because I know this guy who had been in exactly the same situation except that he got his math Ph.D. a few years earlier than your husband, and got a job at a reputable research university. If he's somehow believing that a prestigious postdoc position is necessary, I know a person who ended up in a gratuitous position for a short while and then landed on a very prestigious postdoc job in math. In both cases, they had strong publication records and also convinced other researchers in the same fields that they are something.
Of course, there must be way more math Ph.D.'s who wanted jobs at research universities but got stuck somewhere else than those who succeeded. So, it's true that, statistically speaking, chances are very slim, especially if he himself doesn't think he can make it. But there are things he's in control of, and he can make the probability fatter. Do good research, publish it, and show others what you're made of.
Ah, I almost forgot. If he's looking for a job this year in combinatorics, information theory, coding theory or quantum information science, well, sorry, but he should wait another year. There is a talking duckling in California looking for that sort of job this year, and that duckling should get it. I know your kids are super cute, and they want their dad to be happy, too. But this duckling is even cuter. If your husband's field is different, then no problem. Kick his butt and tell him to apply to as many jobs as possible with his awesome publication list he's going to develop and strong recommendation letters he's going to get (unless his depression needs a professional help. In that case, that should be fixed first, I think). Good luck!
Upvotes: 4 <issue_comment>username_4: I've not read other opinions however here is mine,
Since your husband wants to do research in Math, he is lucky, because he does not require high end instruments to do his stuff. So he basically can do research wherever and whenever he wants, meaning, he need not be in a specific place to do his research. In the mean time he can get employed in a place where he can earn money using his skills.
So while his research is done at home, he can work elsewhere till he feels that his research has matured enough to require a lot of attention. Once his work is publicized I'm sure there will be a lot of people to come forward and assist. Some of us developers use and believe in open technology as the future, hence we publish whatever we do for the benefit of the world. He can create a blog to update the world about his research.
He can use social media to find and meet people with similar interests or even conduct teaching sessions to people whom *he* wants to teach for a fee or for free. Example Google hangouts. Using the internet will give him better exposure than the closed walls of any university.
I help my father with his automobile business and then follow my passion at home, I don't know if any of my work is worthy of being called research but bits of it I publicize is certainly helping people who wants to learn :) .
Upvotes: 0 <issue_comment>username_5: I'm going to disagree a little bit with the other answers. This is a frustrating situation. Unfortunately, as username_2 stated, the number of qualified mathematicians exceeds the number of academic research positions. This makes a difficult position.
I fear your husband might have accurately assessed the situation. While I appreciate the suggestion others made of continuing to do research on his own time, this is very difficult. If you have a full-time non-research position and a family, that doesn't leave a lot of time for research -- so it's very hard to sustain a level of research output that will be competitive with the competition. Also, others who do have a research faculty position may have students and collaborators, which further boosts their research output; your husband won't have that advantage. So, while in principle your husband could continue research on his own time to try to build a research portfolio in hopes that this leads to a tenure-track academic research position, in practice your husband is at a disadvantage. He would be climbing up a steep hill.
My suggestion would be for him to get advice from someone more senior who he respects. Is it worth his time for him to continue his research on his own time, and continue applying to hundreds of research positions each year for the next few years? Maybe, or maybe it's a waste of time.
Alternatively, perhaps he might consider other career alternatives. Rather than being entirely set on an academic research position, maybe he should consider other career paths. Even if he has his mind set on an academic research position right now, I suspect there are a number of other directions where he could be happy. Maybe he should consider the actuarial path, or consider a NSA job? Or consider Wall Street (a job in the financial sector)? Maybe he could teach himself computer programming on his own time and pursue a job in the computing industry? Perhaps there are other opportunities. This kind of change is scary and requires some courage, which is undoubtedly especially difficult when you are depressed: your support will undoubtedly be helpful to him.
A third option is to find things to love in his current teaching position. I certainly sympathize with the trials; they are real, and a drag, to be sure. On the other hand, there's a lot to love about teaching, too. You get to help young students discover the beauty of mathematics: even if it's just one out of a class of 30 students who finds a real passion, that can be very satisfying and rewarding. Unfortunately, the administrivia and the laziness of students is a constant in academia and would probably be present even if he found a research position; the trick with dealing with them is to find other things in his life that are rewarding and satisfying, and focus on them. For instance, perhaps he might enjoy doing math research in his own time, not with the goal or any illusions that it will lead to any research position, but entirely for its own sake: for the love and pleasure and beauty of it. Or maybe he might offer to set up a special enrichment seminar or program for students who do love math to learn more: maybe run a program to prepare for the Putnam exam or Math Olympiad. If he offers to do this on his own time, as an overload, I imagine his department chair would jump at the opportunity, and it might provide a chance to do something rewarding and fulfilling for him and be a great inspiration to a few students. Or maybe he might find something else in his job that is rewarding and worth doing.
Upvotes: 5 <issue_comment>username_6: >
> Is a research position possible after spending time at a teaching university?
>
>
>
Yes, it's possible. There's a mild stigma to applying from a teaching university, since it advertises lack of success getting a research job in the past. Because of the high ratio of candidates to positions, academic hiring is often risk averse and the perception that nobody else wants a candidate will worry search committees. However, this can be overcome.
One thing to keep in mind is that the past difficulties could simply have been bad luck. Even very strong candidates typically do not get many job offers, and bad luck can easily change "a few" into "zero".
It's possible to strengthen one's application by writing additional papers, but this generally requires *better* papers or more papers *per year*, since search committees will normalize for time: if you've spent longer they will expect more. However, the focus is more on the future than the past. If you have a productive few years and show signs of maintaining that productivity in the future, it can make up for fallow periods in the past. If you quit publishing, then the chances of a research job will rapidly drop to zero until you resume publishing. (There are a lot of people who would like to do research but aren't prepared to actually do it, since they aren't up on the current research literature. You can't get a research job unless you demonstrate that you aren't one of these people.)
In an ideal world, search committees would also normalize for the applicants' circumstances. For example, research productivity at a teaching university would be viewed as evidence that the candidate would be even more productive at a research university. Unfortunately, in practice these effects are often underestimated or not taken into account at all.
One possibility is that there's something wrong with your husband's application. For example, maybe his research statement is not compelling, or one of his letter writers is insufficiently supportive, or maybe one of them does not know how to write an effective letter of recommendation. (You'd think that should never happen, but some well-established mathematicians simply do not know how to write effective recommendation letters. If his thesis advisor is one of them, and the other letter writers are less energetic since they assume the advisor will make a strong case, then it could be really bad for his job search.) I'd guess that 5-10% of job applicants have something seriously wrong with their application that they seem totally unaware of. This is a low fraction, so your husband probably isn't one of them, but if possible he should discuss all aspects of his application with a trusted mentor who has guided numerous students to the sort of job your husband would like. Sadly, he may not have such a mentor, but it's good to keep an eye out for one. For example, if he strikes up a conversation at a conference with a senior mathematician who seems approachable, it's worth asking for job search advice.
>
> He thought a hiring committee would scoff at the lower output of papers done by someone in a teaching job vs. someone churning out problems in a research post-doc. He even wondered if not having "XYZ Awesome Post-Doc" on his CV would trump any research he did.
>
>
>
Lower productivity because of other duties is a serious factor here. The CV prestige issue is real but considerably less important: prestige might serve as a tie breaker but won't get anyone a job if their actual accomplishments aren't commensurate.
Letters of recommendation will be by far the most important factor. What your husband needs is really strong letters that address his circumstances, talk about how impressive his research is and why, and explicitly make the case that he belongs at a research university. Letter writers generally recycle letters from year to year with some updates to incorporate recent papers. If his letters are not updated to address the teaching/research university issue, then they will not help him. In particular, if they don't say in strong terms that he ought to be at a research university, then they'll be viewed as damning him with faint praise.
This is something he can discuss with each letter writer, along these lines: "As you know, I've been working at University X for the last couple of years. It's great to be in a tenure-track job, but I'd really like to work at a research university. To move to one, I'll need letters of recommendation that address this issue and make a strong case that I should be at a research university. Would you be comfortable writing such a letter for me? Of course I'll understand if you can't write one, since I know I'm asking a lot, but I'd rather ask someone else than waste everyone's time with an application that doesn't have the support it needs."
This is an awkward conversation, but it's much better to ask than to leave it to chance.
>
> Would he be able to get back into academia if he had to leave to do actuarial work/industry/something that helps pay the bills?
>
>
>
It's possible he could get another teaching-oriented job, although it's by no means a sure thing. It would help a lot if he could spin the other work as informing his teaching. For example, he could teach actuarial mathematics or incorporate realistic industrial applications. In that case the non-academic experience could be an advantage; otherwise applying from outside academia would put him at a moderate disadvantage.
Unfortunately, the chances of getting a research-oriented academic job from industry are low (assuming the industrial job is not at a prestigious research lab like Bell Labs). There are people who have done it, but it's not likely or easy. Few people can maintain a high-quality research program on the side with no support while holding a full-time job, and this is necessary for returning to a research university. In particular, there's virtually no chance of returning based solely on having done research in the past, without having actively continued in the meantime.
The way it typically plays out is that you reluctantly go to industry intending to maintain your research and keep applying for academic jobs. For the first year or two, you complete and write up work you began in academia, and it feels like everything is going well, but your job applications don't do any better than they had before: you've got a more substantial research track record, but your industrial position emphasizes your inability to get a job in the past. Still, you figure that accumulating more papers will eventually tip the balance in your favor. Unfortunately, the next few years don't go as well. It's hard to find the time for research, you have few people to talk to or derive inspiration from, and progress is slow. However, you're gradually getting somewhere, so you figure it will just take a little longer. A few years after that, you start to lose your resolve to do research at all. Even if your applications were successful, would you really want to take a 50% pay cut, give up your job security, and move to another city to restart your career? And it's hard to keep your focus on an incredibly time-consuming hobby that seems like it may never lead anywhere professionally. You quit applying to anything but dream jobs you're pretty sure you won't get, and eventually you give up on them as well.
The good news is that this path doesn't generally end in depression, but rather the discovery that there are plenty of fulfilling life paths outside of research universities. It's by no means a bad outcome. However, leaving academia for industry can be really stressful if you want to return to a research university, since you have to either give up or work like hell to maintain your research program.
Upvotes: 6 |
2013/06/01 <issue_start>username_0: In my lab, a few of us submitted our papers independently to a conference last week. We all had referenced each other's work in our papers. I have a concern about whether this is bad practice in academia. It also looks like a bad habit that has taken hold in our lab: the last two PhDs who graduated from our lab have a total of 15 citations for their papers, of which 13 are from within our community itself.
Is this practice of repeatedly citing from one's own micro-community acceptable? It seems unavoidable, as we are all working together, familiarizing ourselves with each other's work and trying to build on it.<issue_comment>username_1: While I am ***not*** insinuating in any way that your lab does this in a malicious and self-serving way, this reminds me of the notion of [citation cartels](http://scholarlykitchen.sspnet.org/2012/04/10/emergence-of-a-citation-cartel/)
It's quite possible that this is an artifact of the research itself, and it's quite all right to self-cite (or group-cite) a few papers, especially when one work builds upon another. However, surely there must be more citable papers in the research community outside your own lab? Surely not 86.7% of citations need come from within?
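As a rough sanity check, the in-group share can be computed directly from the figures in the question (13 internal citations out of 15 total; the function itself is just illustrative arithmetic):

```python
# Rough sanity check: what fraction of a group's citations stay in-house?
# The counts (13 of 15) come from the question above; the function is a
# trivial illustrative sketch, not a bibliometric tool.

def in_group_share(internal: int, total: int) -> float:
    """Return the percentage of citations coming from within the group."""
    if total <= 0:
        raise ValueError("total citations must be positive")
    return 100.0 * internal / total


if __name__ == "__main__":
    share = in_group_share(internal=13, total=15)
    print(f"In-group citation share: {share:.1f}%")  # In-group citation share: 86.7%
```

Any threshold for what counts as "too much" is a judgment call, but a share this high is the kind of number that would prompt a reviewer's questions.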
Speaking from my ***limited*** experience, if I were a reviewer and I saw papers with this kind of a pattern, I would certainly suggest the authors to include specific, relevant papers (which I have knowledge about) from outside their own lab in the literature review. I might of course be completely wrong and someone else with more experience in academia might have a different opinion.
Upvotes: 3 <issue_comment>username_2: It is only bad practice if the work is not particularly relevant, or if you cite your own group's papers instead of other relevant work. The point of citations is to lead the reader to other related work necessary for some aspect of the paper (result, context, etc.).
That said, it is worth examining why those papers aren't getting cited. Maybe nobody else is working in a sufficiently related area but it's critical background for the rest of what the lab is doing--then I wouldn't worry about it. But if other people *are* working in the area (e.g. to the point that you might worry that if you take too long that someone else will publish the same results you're trying to get), it can be a sign that the results either aren't contributing to the field much, that the style of presentation isn't good enough for others to really understand what you're saying, or that the lab members aren't going to enough conferences to present.
Upvotes: 4 <issue_comment>username_3: If I saw a pattern of only or mostly citing work from your own team, it would make me suspect that the authors have poor awareness of related work. How likely is it that there is no other relevant work being done by anyone else in the literature? Nothing else that has any relevance whatsoever (techniques, methods, related results, inspiration)? That's awfully hard to believe. It would make me suspect that the authors are either ignorant about the field, or are doing a poor job of citing related work.
It's not clear to me what you meant by the 13 of 15 statistic. If you mean that they wrote a paper or dissertation where 13 of the 15 papers cited in the related-work section are by people within their same group, then yes, I would tend to look down on that. It isn't *necessarily* wrong, but it would certainly make me ask some pointed questions and suspect that something is amiss. It is important to be a good scholar and to understand other work in your field.
Maybe you've heard the saying: "a week in the lab can save a day in the library". (This is not a recommendation!)
Upvotes: 2 <issue_comment>username_4: If your lab is the leader in a certain field, you would be aware of it.
If your colleagues are pure geniuses, you also would be aware of it.
In both cases you won't be here to ask about it.
In other cases, extreme repetition is not a good sign.
(Keep in mind that a little repetition is good, as it shows sustained interest in a field, but 13 in 15 is a red flag, a big bad red flag with a big bad skull on it.)
Upvotes: 1 <issue_comment>username_5: I think it really depends: is it justified, or are you just citing each other to inflate your numbers? If the citations are justified, it is fine. Some labs really do have famous works and simply build on them. I think it becomes the advisor's problem after a certain point, since it would mean he has to renew his work. That said, it doesn't look great when people's citations come *only* from their previous work.
Upvotes: 1 |
2013/06/02 | 279 | 1,079 | <issue_start>username_0: Currently, I am applying for PhD positions at United States and Canadian Universities. I will be 35 when I start my PhD in Computer Science/Machine Learning, and probably I will finish it around the age of 40.
Will I be banished from academic positions, considering possible postdoc time? And what about industry positions? Should I plan my studies towards industry or academia from a research perspective?<issue_comment>username_1: I doubt you'll get banished; then again, I am in the same boat, though I'll be finishing my PhD by the time I am 37. I think it will be a bit harder for us, but not terribly so.
Look at it this way: I presume you have a lot more practical workplace experience, and that will be to your benefit.
Upvotes: 2 <issue_comment>username_2: No.
---
Age discrimination is illegal in the US and (I assume) Canada. If you're worried, just don't put your date of birth on your CV. Most people don't.
Hiring committees just don't care how old you are. All that matters is the quality and impact of your research.
Upvotes: 4 |
2013/06/02 | 850 | 3,614 | <issue_start>username_0: When learning a new subject, I would frequently use lecture notes found somewhere in the Internet. When writing a paper (or a master thesis, as in my case, but the rules should be similar, I believe) one should give some reference for used results which are not common knowledge, if I understand correctly. This make me wonder: what do I do if I want to reference a result I found in some notes?
The natural thing to do would be to just add these notes to the bibliography. What format would be preferable for this? Note that there will generally not be much publishing information, perhaps not even a definite year and place. (A BibTeX template would be appreciated.)
Secondly, is it OK to cite such materials as a reference?<issue_comment>username_1: To my taste, citations fulfill several purposes, some of which may not be fulfillable simultaneously. So, one should be honest about where one found a result, even if the source is not widely available. Thus, cite (in the best, most usable form possible) the lecture notes. Still, yes, *accessible* sources meet another criterion, namely, helping readers reproduce/understand your results.
Edit: in light of various comments and other answers... another purpose served by spending *some* (not unlimited) time finding original sources (even while being honest about the source one actually *used* or *learned from*) is to give at least a lower bound for the age (and locale of origin) of the idea. Nevertheless, at the same time, it certainly *can* happen that a much later exposition does a much better job of explaining... after all, it benefits from hindsight.
Yet another reason to exert some effort to credit original sources is to dampen a bit a tendency that otherwise can dominate, namely, some form of "Great Man/Woman" syndrome, in which a very few people are portrayed as being responsible for nearly all good, big ideas.
Upvotes: 3 <issue_comment>username_2: You should make a good-faith effort to find and cite original source of the results (to give proper credit). You should only cite the lecture notes if (1) they *are* the original source, or (2) the original source is inaccessible, either literally (out of print or unpublished) or figuratively (written in a foreign language, with excessive generality or formality, or just badly).
Finding the original source may require significantly more scholarly diligence on your part than the author of the lecture notes, since most lecture-note authors (including myself) are fairly sloppy with references. Such is life.
Upvotes: 5 [selected_answer]<issue_comment>username_3: From a general point of view, lecture notes are [gray literature](http://en.wikipedia.org/wiki/Gray_literature), meaning they might **lack standard bibliographic metadata** (you mentioned the year and place), may be harder to track down for readers, or not long-term available. Thus, one should generally **prefer to cite conventional literature** (such as books or articles in journals) over gray literature.
For a masters thesis, it should be fine to cite gray literature, but do check with your advisor. When you do so, you might as well discuss the format he'd recommend for citation. If you found the lecture notes online, one idea would be to **cite it as online source**, where key metadata would be the URL and the date of access.
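As an illustration of such an online-source citation, a BibTeX entry along the following lines could work. The author, title, URL, and dates below are invented placeholders, not a prescribed standard, so adapt them to the notes you actually used:

```bibtex
@misc{doe_algebra_notes,
  author       = {Doe, Jane},
  title        = {Lecture Notes on Commutative Algebra},
  howpublished = {Online lecture notes,
                  \url{http://www.example.edu/~jdoe/algebra-notes.pdf}},
  year         = {2012},
  note         = {Accessed: 2013-06-03}
}
```

The `howpublished` and `note` fields are supported by the standard BibTeX styles for `@misc` entries, so this degrades gracefully even when the notes lack formal publication metadata; `\url` requires the `url` (or `hyperref`) LaTeX package.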
In contrast to a masters thesis, many publishers **discourage or forbid the citation of gray literature** for journal papers. So if you want to make a paper from the thesis and the citation is essential, you would have to find the original source.
Upvotes: 2 |
2013/06/03 | 416 | 1,771 | <issue_start>username_0: I am a hobbyist mathematician in China. I study maths by myself and make some course videos to teach commutative algebra, functional analysis and other topics on the internet, like the [Khan Academy](https://www.khanacademy.org/) and [MIT open courses](http://ocw.mit.edu).
Now I need a .edu email to get into some sites such as [ResearchGate](http://www.researchgate.net) and [arxiv](http://arxiv.org). Is there any organisation that will help me like this: I show them some material, such as videos and papers, and if they think I am no weaker than some college teachers at least, they would give me a .edu email?<issue_comment>username_1: Of course, becoming a student or staff member is one way to get a university email.
Some institutions provide alumni email.
Many universities have various unpaid affiliates. Such affiliates are sometimes eligible for a university email. However, such affiliates are often expected to contribute to the school, faculty, or university. For example, you might publish with the university as your affiliation or you might give occasional lectures or you might supervise a research student. These sorts of affiliations are typically obtained by building up a relationship with some academics in a given department and making enquiries.
Upvotes: 3 <issue_comment>username_2: As a researcher whose affiliation is a very small university in Europe, whose domain name and institutional email addresses do not end in `.edu` (nor in `.ac.uk` or any other recognizable pattern): **any website that uses email domains as filters has a fallback mechanism** (or exception handler) that you can reach if your own email address doesn't fit into the patterns they recognize. It may take some explaining, though…
Upvotes: 3 |
2013/06/03 | 1,735 | 7,479 | <issue_start>Some universities have independent `departments`, but some put departments under the supervision of a faculty. For instance, one university may have fifty independent departments, while another has 5 faculties containing departments. In the latter, the Dean of a Faculty is an intermediary between the department chairs and the university administration.
I understand that most of these structures come from university history, but how do these structures affect department performance, and why do universities prefer different structures instead of a single standard one (which should presumably be the most efficient)?
For example, what is the difference between the `Department of Computer Science` in one university and the `Department of Computer Science` in another university, where it is part of the `Faculty of Engineering`?<issue_comment>username_1: The faculty is a collection of departments. In my system we have the Faculties of Sciences, Law, Humanities, and Social Sciences. In addition, there can be Faculties of Theology, Arts, Languages, Educational Science, Medicine, Pharmacy and probably many others.
The term "faculty" has been known since the medieval University of Paris. It was a way in which major fields distinguished themselves from a general body of learning. The faculties of Philosophy, Law, Theology and Medicine can be traced back to the 13th century. As universities grew and knowledge became more specialized, departments started to form within these faculties, and we now have the system of faculties as an administrative level in many university systems. Departments, however, are relatively modern creations from the late 19th century.
Upvotes: 4 <issue_comment>username_2: The distinction between "faculty," "department," and "school" depends a lot on where you are. As Peter suggests in his answer, a faculty can be a collection of "departments." However, a faculty in Germany (for instance) consists of a number of "chairs," each of which is closer to a professorship in a department than an actual "department." Thus, the faculty is effectively halfway between the American "department" and "faculty" in its function, as it combines some of the hierarchy and responsibilities of each.
The reason for having multiple subdivisions is that there are often many university functions—including personnel and budget decisions, facilities management, teaching, and so on—that can vary widely across the entire university, but significantly less among certain departments that have a similar focus. For these departments, it makes sense to combine these duties in a central administration, rather than duplicate the effort across multiple departments.
Upvotes: 5 [selected_answer]<issue_comment>username_3: Sometimes using one word or another is seen as more prestigious or as more independent than another. In one place I was at, they made a big deal of changing their name from "Department of Computer Science" to "School of Computer Science".
Similarly, some people perceive one Faculty as more prestigious for the purposes of undergraduate recruiting. In many Universities in North America, computer science moved from "science" to "engineering" and my father perceived that computer science departments in "engineering" were more promising than those in "science" (as at the time engineers made more money and had more job opportunities than scientists). Of course, the real reason that this happens is often a question of autonomy, funding, and mutual interests behind the scenes.
In the end, the question of "what is the difference between a Department of Computer Science in one university, and a Department of Computer Science in another university that is part of a Faculty of Engineering?" is very, very difficult to answer, and I would argue that the differences depend much more on the actual department than on what faculty/college/administrative area it sits under. In North America, most departments (especially computer science departments) are relatively autonomous and do not rely heavily on their college for assistance - it's simply a bureaucratic layer through which funds pass.
Upvotes: 3 <issue_comment>username_4: In the Nigerian University system, a college has several faculties under it. A college is headed by an elected provost while an elected dean heads a faculty. Departments are under the faculties with an acting or substantive head (HOD).
Upvotes: 3 <issue_comment>username_5: In Spain, "Faculties", "Schools" and "Departments" (literally facultades, escuelas and departamentos) are legally defined in [articles 8 and 9](http://www.boe.es/buscar/act.php?id=BOE-A-2001-24515) of the "Organic Law" (ley orgánica) that governs universities. One might summarize (paraphrasing the law) the technical distinction as that faculties/schools (these are essentially equivalent) are charged with organizing teaching and academic processes, while departments are charged with realizing teaching and academic processes. What a department teaches is determined by the faculty/school in which it teaches, but how it teaches the material is determined by the department. The power to create or destroy faculties/schools resides with the regional government, while the power to create or destroy departments resides with the university. A faculty has a Dean (decano), a school has a School Director (functionally equivalent to a dean), and a department has a Departmental Director. A department can belong to several schools/faculties in the same university (for instance a mathematics department can be responsible for the teaching of math in several different engineering schools in the same university, although there are also universities in which each school has its own mathematics department). While the overall organization is similar in concept to what one finds in US universities, it is more rigid in the sense that the powers, competencies and responsibilities associated with each administrative structure are fixed by law.
Upvotes: 2 <issue_comment>username_6: Around here (Universidad Técnica Federico Santa María in Chile), we had "facultades" under a dean, some 10. For example, chemistry had chemistry (the science), chemical engineering and later (as an offshot from chemical engineering and mechanical engineering, mostly by historical reasons) materials science. Others, like electricity had just electrical engineering, and science had mathematics and physics. Around 1975 it was reorganized into three "facultades", engineering, science and business administration (the last was essentially an external institution under the umbrella of UTFSM for legal reasons relating to the right to grant professional degrees). Note that science had three departments (mathematics, physics and chemistry) while engineering had some eleven. Around 1990 the "facultades" were dissolved, and we have just departments. The Universidad Adolfo Ibáñez (essentially the business administration "facultad") had regained its independence before. Our current departments are more or less the "facultades" up to the seventies.
As you see, this is mostly an internal organization issue, which very well can change, and names vary.
Upvotes: 1 <issue_comment>username_7: Faculty is equivalent to College. For example, Faculty of Science contains several departments e.g. Department of Biology, Dept. of Chemistry, etc. This is the same as College of Arts and Science which have several departments e.g. Biology, Physics, etc.
Upvotes: 0 |
2013/06/03 | 717 | 2,952 | <issue_start>I will start my PhD in computer science at a UK university soon. Even though the university has given me partial funding, I still need to cover part of my tuition fees and living expenses. My university offers teaching assistant positions, but the pay is very limited.
I've been looking and have seen that most, if not all, funding sources are only available to UK and EU students. So I want to ask if there are any sources that international students can apply for.<issue_comment>username_1: Your best bet is probably to check if there are fellowships available for study abroad from your home country. However, depending on where you're from, there may be special programs available. For instance, there are [Commonwealth Scholarships](http://cscuk.dfid.gov.uk/apply/) available for people living in member states of the British Commonwealth.
There are some databases of UK awards for general students: [here](http://www.educationuk.org/nc?pagename=GLOBAL%2FSearchLayoutTemplate&c=Page&SearchResultType=scholarship&cid=1262432674923&activeFinderCombo=2&checkVal=1&finderSearchType=finder&finderIndex=2&SearchTerm=&Subject=Please+Select&Country+of+residence=International+%28outside+European+Union%29&Level+of+study=Postgraduate), for instance, is a list of postgraduate scholarships available to citizens of any country.
Upvotes: 3 [selected_answer]<issue_comment>username_2: You may also use websites that specifically target helping you find PhDs. The following website allows you to filter using funding and nationality as parameters:
<http://www.findaphd.com/>
Upvotes: 2 <issue_comment>username_3: One possibility would be to look for full PhD scholarships rather than your current limited partial funding. This is typically handled directly by departments of each university. You can get a good idea of those by a web search for `site:ac.uk International PhD Scholarship`.
Upvotes: 2 <issue_comment>username_4: The UK PhD student funding situation is quite dire, due to the concentration of funding into Centres for Doctoral Training (CDTs), a special type of institutional grant from the Research Councils. It means that there is no funding available in normal project grants to hire PhD students. If the university/department doesn't have a big CDT grant in your area, you're pretty much out of luck---as is your professor/lecturer in terms of being able to expand or in case of young lecturers even build a research group. There's only very limited funding based on REF results, which are then fixed for several years, and there is a brutal competition within the department for those studentships. (My department has more lecturers/professors than PhD studentships.) The next REF isn't due in many years. As already mentioned, your best bet is a grant from your own country. Indeed, students with their own grants from their home countries are the best bet for many UK lecturers to get any PhD student at all!
Upvotes: 2 |
2013/06/03 | 735 | 3,240 | <issue_start>username_0: So I wrote a seminar paper in grad school (MSc) with examples on how companies use method x to improve their security. It includes zero *original* research, no new methods and it would never ever be published in a journal, because the substance is really not there.
It gives a theoretical background on the problem itself, then current examples from two companies that I researched over the internet, and a conclusion. Thus, not only academic sources but also other sources (companies) are used, and it basically shows the **application of academic concepts**.
Other researchers could use this paper to potentially find ways to dig deeper and to address some problems specifically, but I did not point out anything like that myself.
**My question:**
Does this count as research, and does it make sense to hand this kind of paper to PhD admission departments?<issue_comment>username_1: I would say yes, it is good to show this if you are planning to apply for grad school, as a way to demonstrate your capability for conducting research, your interest, your knowledge of the topic and your dedication. It might, or might not, give a potential adviser a hint of your capability. Whether or not the paper is mature or appropriate for a peer-reviewed journal would not, in principle, be an issue for the purpose you mention.
Upvotes: 2 <issue_comment>username_2: This is akin to a *review article*, in the sense that you're interpreting and reporting on existing work, rather than creating something new. There's nothing wrong with a review article, and many scientists do write them during their careers (but usually not before they've started a doctoral program!).
That said, while I believe that it is not nearly as good as a "standard" research paper in establishing one's capabilities to do *original* research, it may have some merit. If the schools to which you are applying allow you to submit additional documentation, then you could consider sending it in, if you believe it is of sufficient quality.
Upvotes: 5 [selected_answer]<issue_comment>username_1: I would also say yes, primarily because your research and analytical methods could be considered original research - this aspect could possibly be refined and submitted as a paper. Definitely do submit your work as part of your PhD application, perhaps with a focus on the methodology that you used and an evaluation of the method.
Upvotes: 2 <issue_comment>username_3: In the Definition section of the [Research](http://en.wikipedia.org/wiki/Research) Wikipedia page,
>
> A broad definition of research is given by <NAME> - "In the broadest sense of the word, the definition of research includes any gathering of data, information and facts for the advancement of knowledge".
>
>
>
I think your paper counts as research. I would hand this kind of paper to PhD admission departments if I were you. At the very least, the paper shows that you have research potential, even if it contains no original ideas or new methods.
However, you need to make sure the paper is of good quality. If the quality is poor, it could have negative effects. You probably want some experts (like your advisor) review it before you send it out.
Upvotes: 2 |
2013/06/04 | 547 | 2,286 | <issue_start>I need specific advice on a specific area (not the main one) of my PhD research (e.g., is it a good idea to include factor x in the review and analysis of topic y?).
As far as I know, my supervisor is not an 'expert' in this specific area.
I think that someone else, who has recently done a PhD in my department and has *good connections* with my supervisor, is an expert in that area and can help.
In academic etiquette: should I make my supervisor aware that I am asking that person for advice?<issue_comment>username_1: I would approach your supervisor with a request for advice, and depending on what the problem is, (s)he may recommend you speak to some other scholarly expert in the field. If (s)he doesn't, you could yourself suggest that it might be a good idea to speak to some expert. You and your supervisor could work out a viable solution.
This has happened to me numerous times during my PhD - the nature of the project meant that other scholars had to be consulted. My supervisor had no problems with this, but I made sure that I always let him know, and in many cases, he had recommended I speak to other scholars and was also able to recommend various experts.
It is always polite to keep your supervisor in the loop.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Yes. In the same tone that you use to describe finding a helpful book in the library. Consulting experts, whether in the flesh or through their published work, is a normal and expected part of doing research.
The tone of your question suggests that you are worried that your advisor might be insulted by not being consulted first. If this is a real concern, you need a new advisor.
Upvotes: 4 <issue_comment>username_3: Yes. I am a graduate student and I have done this quite a number of times. Usually it's no problem. In fact, my advisers have always encouraged reaching out to various people in their specializations to ask for advice.
What I do is bring up the topic and then say something like "What do you feel about me reaching out to Dr. XX or Prof. YY about feedback regarding this analysis? Or this writing?"
Usually, they are very happy to agree or suggest alternatives. I recommend that you try this approach and see what comes of it.
Upvotes: 2 |
2013/06/04 | 702 | 2,863 | <issue_start>username_0: Regarding
[this question on when to ask about a manuscript's status in review](https://academia.stackexchange.com/questions/900/when-how-should-i-ask-about-a-manuscripts-status-in-review/10443#10443)
I wonder (since it is happening to me): What does it mean when a paper is "with editor" for two months after a first review?<issue_comment>username_1: There's no way to know for sure except to ask the editor.
If you mean that the editor received the first review from a referee two months ago, it may be that the editor has not yet decided what to do with the paper, based on the review. Another possibility is that the editor wants a second review, and has not yet found a referee for it.
Upvotes: 0 <issue_comment>username_2: So, after the first round of review, you were asked to revise your manuscript extensively. You made changes accordingly and resubmitted a revised version. Two months have passed since then, but it doesn't seem like the revised manuscript has been sent out for external review, because the status of your submission that you can see online hasn't changed and is still "with editor." (Edit: I know some journals use "being handled by editor" or something similar at any stage of the review process except when the manuscript is with the author. You're sure it wasn't "with editor" during the first round of review, right?)
If that's the case, it most likely means the editor is having a hard time finding referees for the second round, or maybe is simply taking their sweet time examining your resubmission. Or possibly they forgot about your submission, so you need to remind them.
I don't know what the norm is in your field. But if I were you, I'd send a short and polite email to the journal to ask what is up with your paper, probably after waiting a bit more. If it doesn't work, I'd send another polite email to someone else working for the journal. In any case, you should understand that the editor is a volunteer (unless you're submitting to a journal with fulltime editors like Nature, Science, PRL etc.), and sometimes it takes a while to handle your submission for various legitimate reasons. So be polite when asking what's going on.
Edit: And the best way to know whether two months is too long is to ask experienced researchers in your field, such as your advisor if you're a student.
Upvotes: 2 <issue_comment>username_3: It most likely means that your manuscript is undergoing a second round of review, after you made revisions to it. In most fields, two months is not an overly long time to wait for reviews, especially if you made large modifications to the manuscript, or if the editor is not sure what to do and asked for an additional referee's opinion (or an adjudicating referee, in some cases).
You can, however, write a nice email to the editor enquiring about the status of your manuscript.
Upvotes: 0 |
2013/06/04 | 646 | 2,716 | <issue_start>I am an electrical engineering (communications) undergraduate student. But my main interest is physics (and/or CS, especially its fields in common with physics; *this doesn't affect my question here*), and I want to pursue my graduate studies in physics or computer science. However, when applying, as an engineering student I'm at a disadvantage, as
* I think my major is not considered *rigorous enough* by people in those fields
* I don't have much (official) coursework in those fields, although I have studied (and am studying) even more than is expected from physics majors
* because of the time I've put into studying (and, more importantly, exploring different areas of) physics and computer science, my GPA in electrical engineering is so far not good.
Given these circumstances, I think the best, or maybe the only, way is to continue my studies in physics (and/or related areas) more seriously and complete some reasonably good-quality research projects, and in this way show my ability and qualifications for graduate studies in physics (and/or CS).
But doing so will require more time for these additional studies and projects, and I think I will have to stay one more year in undergraduate school (5 years total). Is this considered a negative point in applications to graduate school?<issue_comment>username_1: I think there are a lot of students who spend an extra year as an undergraduate, and I know of a number of very good students who have done so and will almost certainly get into very good graduate programs.
That said, I think the value of the extra year as an undergraduate depends largely on what you spend that year doing:
* Will you be taking classes that will help your application to a physics program?
* How much research will you be doing, and how will it help your application? (Is it physics-related, or EE-related?)
* How strong is your GPA in your physics classes? (This won't make up for a weak GPA in EE classes, but it can at least partially mitigate it.)
* How much will it cost you to spend an extra year before starting graduate school?
As for rigor, I don't think there's as much stigma as you might think. There's far more overlap between fields now than there used to be, and engineers do things that used to be primarily in the province of physics, chemistry, and even mathematics.
Upvotes: 2 <issue_comment>username_2: No.
---
**Nobody cares how long you took to graduate.** There is no advantage to graduating early, and there is no disadvantage to graduating late. It is *extremely* common for students to take more than four years to get an undergraduate degree, especially if they change majors, as you are effectively doing.
Upvotes: 4 [selected_answer] |
2013/06/05 | 2,542 | 10,433 | <issue_start>I have some questions regarding my personal statement for graduate school in math. I hope this is not the wrong place to ask. My questions are comprehensive and long, as the title suggests, so I have numbered and sub-numbered them.
Note that these questions are targeted at schools that do not allow me to upload a CV; otherwise, I think half of these questions would go away.
1. **Courses**
1a) Is it silly or a waste of space to talk about your math background? Or should I assume the graduate committee already has this?
1b) Some schools I have checked out actually asked me to list all my junior/senior courses along with their textbooks. I guess for those schools I don't need to describe my background? What about those that do not ask?
1c) Should I write or mention courses I have self-studied? Or is this completely irrelevant to them?
1d) Should I also bother explaining one W and one ‘bad’ mark that happened in the summer?
1e) Should I mention my math department is understaffed and I tried to take as many “hard” classes as possible? How understaffed? We have only at most four math classes at the senior level every year. We are so small that most junior/senior classes stop only at the introductory level.
For example, we only have: introductory PDE, introductory Number Theory, introductory Algebra, and Topology does not even exist at my university.
Very rarely do we get continuations of those courses. Compared with all the other areas, we have quite a lot of Analysis courses, but all of them are focused on Optimization (excluding Real Analysis; we usually have one to two Analysis classes).
We have no Calculus of Variations, no Measure Theory, almost nothing.
FYI, I had to go out of my way to bug a professor to request an extra Analysis course this year from the unit head, and even then I am short on math classes next year.
2) **TA experience**
2a) Should I talk about this? How will they even verify it? Because I have done some things that most TAs don’t do at my university – writing exam solutions. The prof I TA’d for left everything to me, except the teaching and actually writing the midterms/finals. I never had a class with him, so I am not so sure about asking him to write a letter for me.
2b) I also TA’d for another prof at another campus during one summer term (same university, but a different math department); should I mention this?
**EDIT**: **I can provide a link to my exam solutions through the prof's site. I think he will give me permission; should I include this?**
3) **Research Experience**
I have *very, very* little experience – so little that I could probably only write one or two short sentences about it. I also have no publications, but I think the prof I worked for can confirm that I did do research under him.
By the way, the “research experience” I had was a problem the prof had written by hand on a math paper and he asked me to answer the question he posed. It was not an analytic problem, it was coding, graphing, and writing a report.
4) **Area of Interest.**
4a) I already know my area of interest, I am wondering if it is a good idea to write why I got interested in the first place or is this completely irrelevant to the graduate committee?
My reasons are rather absurd: I am going into my desired area because of a textbook writer, and the textbook I read by him isn’t even in the area I am interested in, although the writer did write a book on the subject and I was simply in love with his style of writing.
I later found out about the writer’s background, and that plus some neat stuff I read on the Internet sealed the deal for me. If people think this reason is silly or “cliché” (e.g. “I liked puzzles when I was young”), then please tell me.
4b) Also, one major problem is that I can’t talk too deeply about my area of interest. I can mention specific subfields, but that's about it. For instance, if I liked Number Theory, I could mention "Analytic Number Theory" and the "Riemann Zeta function", or if I liked Differential Geometry/PDE, I could mention "Geometric Analysis".
So would it be better to omit the details if I can't comment much on the specifics of the subject, and simply write "Number Theory"?
5) **Thesis Advisor**
I can find people and mention their names easily in my personal statement. I am just curious whether I should narrow it down to only ONE person. Does it look bad that I am listing all the people I want to work with instead of writing down just one name?
6) **Scholarships/Award**
I have never liked the word 'Award', so I am going to use 'Scholarship'. Do I need to mention a scholarship I got from a professor? Again, how can this be verified? I think I could ask the prof who gave it to me (whom I did research for) to mention/confirm this?
7) **Skills**
How much will it add to my application if I tell them I can use LaTeX (an honors thesis is not required for an honors degree at my university; I asked one of my profs why and even he doesn't know) and that I have high proficiency with Mathematica, Maple, Matlab, etc.? I was going to add Photoshop, but then I realized how pointless and irrelevant that is. I can also use Python, but since I postponed my 1st-year computer science requirements until my last year, I do not think they will buy this. Also, my school teaches Java.
Thank you very much for taking the time to read this ridiculously long list of questions.
**EDIT CLARIFICATION:** I am applying to US/Canadian universities<issue_comment>username_1: Some ideas on your statements/questions:
(1a)-(1b)-(1c): It seems that your undergraduate university is not a very well-known one and/or is not considered strong in mathematics. Graduate schools may want to be sure you covered what they consider necessary undergraduate mathematics. If asked, I'd consider sending along a short syllabus of the courses you took. I'm almost sure your university must have these things.
(1d)-(1e) Don't even mess with this unless specifically asked, which I think is unlikely to happen.
The lack of any topology/measure theory (and perhaps more) courses in your university (or college...?) is a rather serious one, imo, and it may point, again, at some lack of elementary basis most mathematics depts. are supposed to have.
In fact, I think it is likely that some universities would require you to complete several courses before they consider you an actual candidate for graduate school in mathematics... are you sure that what you studied in that school of yours was "mathematics"? Perhaps it was something like "applied mathematics"?
I don't think serious graduate schools require TA experience from undergraduates. In fact, mentioning that you TA'd a course before becoming a graduate student could be considered (another) sign of a low mathematics level at your school.
To require research? From an undergraduate? I don't think there's such a university. What could be required, imo, is good skills for "hunting" down books, papers, etc. in a mathematics library and, these days, perhaps also on the web.
No need to dwell a lot on your area of interest. Perhaps mentioning some of the wide areas (analysis, topology, algebra) could be enough, though imo most decent graduate schools require graduate students to take two or more rather hefty, year-long courses in some of these areas, and only *later* do you begin to drift towards your love...
The same applies, imo, for thesis advisor.
Please do mention any scholarship/award you got that's connected to your studies. This may be rather important.
About skills: I don't think anybody will really care about it. Most probably schools will be more interested in finding out about your seriousness, love for the subject, responsibility, etc.
Good luck!
Upvotes: 3 <issue_comment>username_2: Responding to questions approximately in order:
It is not a waste of space to talk about your mathematics background. Mere course titles tell almost nothing, so it's good to explain more. Naming the *authors* of the texts used explains a lot to experienced mathematicians.
Especially if your school has a relatively weak program, and even if not, telling what self-study you've done is very important. It is all the more important as an indicator that you take initiative, are driven by curiosity about mathematics, independent of grades and structured programs.
Explaining briefly that you've had TA experience is a small plus, because almost all grad students in math are supported by TA work, so knowing in advance that you can communicate will ease the minds of admissions committee members.
Despite the contemporary pretense that undergrads "do research" between their junior and senior years, it is very rare that any sort of genuine research occurs. Sometimes, but rarely. After all, if research only takes 8-10 weeks in the summer, with almost no prior background, why does a PhD take years? :) (There *is* a good purpose served by the summer programs, though, of giving undergrads the idea that mathematics is not confined to a classroom and textbooks, as well as creating social connections with other undergrads seriously interested in math. But these situations don't really produce cutting-edge research.)
About "specific interests": of course it is vastly better to have tentative, ill-formed, and inevitably ill-informed, "interests", rather than *not*. :) I'd encourage you to tell how these interests arose, giving the admissions committee some insight into your approach to mathematics.
It's good to mention scholarships. People will not be so skeptical that you need to document it.
Computing skills are a positive, and deserve a brief mention. Again, there is little need to offer "proof".
In summary, it is a mistake to think that one's transcript explains what the admissions committee wants to hear or needs to know to make a reasonable decision. An informative personal statement makes a huge difference, especially in communicating your motivations and learning outside classrooms. Also, letters of recommendation from mathematicians well-acquainted with you, from contexts of relatively advanced mathematics (rather than elementary) are very important to give an idea of how well you'd fare with more advanced/sophisticated work. (After all, many people do well-enough in undergrad material, but find that graduate-level mathematics has a slightly different nature... of less interest...)
Upvotes: 3 |
2013/06/05 | 538 | 2,344 | <issue_start>username_0: For example, suppose I download all the content of Academia@SE, later analyze it in a data-mining paper, and submit the paper in the end. Is it OK to do so? Do I have to ask permission from the administrator of the website? And does he or she have the right to forbid my academic use? Thank you.<issue_comment>username_1: I believe you can do it with StackOverflow data, as long as you cite/attribute it properly. [This](http://blog.stackoverflow.com/2009/06/stack-overflow-creative-commons-data-dump/) article affirms it. However, I do not know whether this can be extended to the rest of StackExchange. A question to the mods or to the support team might help you clarify.
Upvotes: 2 <issue_comment>username_2: Disclaimer: I am not a lawyer. If you are seriously concerned about this issue, you should consult one; your institution probably has intellectual property lawyers on staff.
There is a general principle that "you can't copyright facts". Wherever you get your data set, you probably can legally publish any analysis of that data, without requiring anyone's permission. However, you may not be able to legally reproduce the data itself.
Of course, by standard academic ethics, you must properly cite and attribute the source of the data. And if you can't guarantee that the data will remain accessible, it could affect the reproducibility of your results and hence the quality of your paper.
Upvotes: 2 <issue_comment>username_3: Your University may have an Institutional Review Board (IRB) that reviews how you conduct experiments. This board may be known by various names (Ethics Committee, Experiment Review Board, Human Subjects Research, etc.) but they are generally the ones that you would go to to consult about whether what you are doing is within the scope of ethical behavior and good treatment of human subjects data.
As StackOverflow and associated StackExchange repository data is available under Creative Commons Attribution Share-Alike (as @piotr\_migdal linked above) and is publicly available, your IRB will probably tell you, "It's fine" and not require review. However, it depends on the IRB and the institution and the nature of the data.
There are entire research disciplines built on scraping web sites, software repositories, and social media, so don't feel bad for doing it.
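Since the CC BY-SA license's attribution requirement comes up repeatedly in these answers, a minimal sketch of that step may help. This is not legal advice, and the record fields used below (`title`, `author`, `url`) are assumptions chosen for illustration, not an official Stack Exchange schema:

```python
# Build the CC BY-SA attribution line that should accompany any excerpt
# reproduced from a scraped post. The input is a plain dict whose keys are
# hypothetical, modeled loosely on what a scraped record might contain.

def cc_by_sa_attribution(post):
    """Return a human-readable attribution string for one scraped post."""
    return (f'"{post["title"]}" by {post["author"]}, {post["url"]}, '
            'licensed under CC BY-SA')

sample = {
    "title": "Is it OK to data-mine StackExchange content?",
    "author": "username_0",
    "url": "https://academia.stackexchange.com/q/0000",
}

print(cc_by_sa_attribution(sample))
```

Keeping such an attribution string alongside every stored record also makes it trivial to cite the data source properly in the paper itself.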
Upvotes: 3 |
2013/06/05 | 924 | 3,973 | <issue_start>username_0: I have been working on a review paper. After publication, how will it add to my academic research profile? When I apply for MS or PhD admission, will it count as a publication?<issue_comment>username_1: I have ***limited experience*** with this since I am still a graduate student, but from what I understand, a review paper is also a research paper. However, unlike a piece of research, where you study the existing literature, develop research questions and hypotheses, collect data, run experiments/analysis, and make inferences which accept or reject your hypotheses, a review article is a summarization and collation of existing articles on a given, specific research topic.
There have already been some semi-formal writings on this, namely [this](http://scientopia.org/blogs/genrepair/2011/02/01/do-you-really-count-a-review-article-as-a-publication/) and [this](http://www.nature.com/nature/journal/v489/n7415/full/489177a.html). The consensus, so far, seems to be that review articles make fine additions to your publication record, but *not as fine as* articles where you actually did your own research.
Upvotes: 2 <issue_comment>username_2: A review paper is likely also known as a "survey paper", where you read (i.e. survey) related works in the field and then comment on them. Usually, a review paper should be able to contribute a small amount of knowledge in its own right to the field by providing a taxonomy of work.
Another type of paper that reviews extensively related work but isn't actually a review paper is a ["systematic review paper"](http://www.sciencedirect.com/science/article/pii/S0950584910000467) in which you usually ask a meta-question about the field.
If it appears in a refereed, peer-reviewed journal, then yes, it is a publication. In fact, if done well, these works can often have pretty high impact and can be cited very frequently. However, as already noted, since they don't usually involve substantial original research they need to be augmented with traditional research papers. If a graduate student has only survey papers or systematic review papers, I'd wonder as a search committee reviewer if this student did nothing but read related work rather than working on research.
With respect to MS or PhD applications, I'd think that the fact that you have a publication at all is already a bonus point for you. Most students who apply to these programs don't have publications.
Upvotes: 4 [selected_answer]<issue_comment>username_3: One important distinction should be made between papers in the humanities and the sciences. In the sciences, it would be much more important to have "original research" papers where new ground is broken. In the humanities, by contrast, the act of studying the existing literature and critically evaluating it may, in and of itself, be considered an act of research. (Similarly, in medicine, "meta-studies" in which the reports of various experiments are synthesized to produce overall results and recommendations may also be considered very important, although they augment direct clinical research, rather than substitute for it.)
Upvotes: 2 <issue_comment>username_4: I have little experience, because I am still an undergraduate student but from what I understand:
* *Research paper:* A paper in which results and discussion are derived from an experiment.
* *Review paper:* A paper in which results and discussion are not described.
Upvotes: -1 <issue_comment>username_5: I would describe a review paper as different from a research paper. A research paper is one's original work that may be researched scientifically or otherwise, but a review paper is where someone goes through work already done/researched and gives suggestions as per that field of research. The suggestions would be if the objective, goal, problem were met by the researcher. Whether the research is of value now or in future, solutions to the problem, what is interesting, etc.
Upvotes: -1 |
2013/06/05 | 2,864 | 12,394 | <issue_start>username_0: I've just written my first mathematical research paper. It proves some new results, which while not ground-breaking are (according to an expert in the field) at least somewhat interesting and surprising. At the moment however, I spend more of the paper developing the background material (giving standard definitions and constructions, proving standard lemmas) than proving the main theorems.
Is this a problem? The way I see it, there are several arguments for and against:
**For:**
* The background material is "standard" in the sense that anyone who works on this class of problems would know the definitions or results in some form. However, this is at most a few hundred people, while if I include the background material my paper should be comprehensible to an advanced undergraduate.
* Some of the background results are part of the folklore of the field, and I've never been able to find a proof of them in literature. While they are believable and not hard to prove, I feel someone should bother doing it. More selfishly, this is one more reason for people to cite my paper.
* I don't know of any one reference which states *all* the background material I need, so if I don't include it my readers have to chase down multiple sources and I have to use conflicting notations.
**Against:**
* It may be annoying to an expert in the field, although they could skip much of it and mainly refer to the background section for notation.
* It makes the paper longer, although even with the background the paper is not long (13 pages).
* From what I've heard, it is generally considered bad practice to restate definitions and constructions stated elsewhere and to reprove theorems available in the literature. In part this is because it gives the impression that I haven't read the literature. This is exacerbated by the fact that I only cite ~5 previous works, mostly for further reading or alternative presentations of some of the background.
I'd like advice on this from someone with experience writing such papers.<issue_comment>username_1: It's not entirely clear whether the background material is just background, or is needed as scaffolding for the results you prove. Also, if certain lemmas are "standard" why are you proving them ? surely there's some text or other paper that proves these lemmas, and which you can cite ?
In general, in a paper you provide the scaffolding needed to prove your results. It's not necessary to provide other atmospherics unless it's part of a larger discussion of related work.
Upvotes: 2 <issue_comment>username_2: As you rightly perceive, there are conflicting desiderata for "formal papers". The main tradition for refereed-journal publications is to assume one is writing for the experts, not for anyone who'd need much background or context. Yes, this makes reading such papers needlessly difficult for non-experts. Yes, the necessary background is likely scattered among several prior papers, and some occurs "nowhere", in the sense of being apocryphal... lost in the mists of time? :)
Nevertheless, the highest-status refereed journals would probably want more-discursive broader explanations edited out... and the referees and editors might interpret your inclusion of such things as "amateurish" ("It's just not done..."), to your detriment.
But the expectations or standards of refereed-journals is certainly not the only criterion, and is manifestly antithetical to the obviously desirable outcome of wider dissemination of ideas, collection and organization of otherwise-chaotic literature, and so on. Some people will tell you that somehow it's not ok to "mix" "research" and "exposition"... but this is silly, even if traditional. But, then, given the traditional predilection of refereed-journals, if you want a more discursive version of your write-up, you may have to reconcile yourself to having two versions, one for referees, one for exposition/wider-dissemination.
The for-referees version should probably *not* cite the discursive version, but, instead, do the quasi-unhelpful bit of citing the disparate standard sources. The discursive version could cite the for-referees version, as well as the standard sources, and still include your own updated presentation, bringing apocrypha to the light of day, and so on.
But, I fear no single refereed-journal-publishable version could meet the nicer-exposition requirements you (completely reasonably) would like to impose.
Upvotes: 3 <issue_comment>username_3: If there are proofs in the literature for some of your background materials, it is better you do not prove them in your paper unless your proofs are much simpler or they use a method which will be used again in your other proofs (I have seen both instances in papers written by well known mathematicians). However if you cannot find a proof for some of the easy (or well known) results in the literature, it is a great help for your readers to add your proofs. About introducing your notation and/or definitions, you can be even more generous to your readers. Although there is a downside for including lots of backgrounds, not including some of them can delay the referee process of your paper. Do not hesitate to cite more references if they help to find some of the background materials. And finally, I suggest you devote some paragraphs to explain the motivation of your work.
Good luck with your paper!
Upvotes: 3 <issue_comment>username_4: I'd recommend putting in as much explanation, context, and background as you reasonably can. In principle you could certainly put in too much, but in practice I don't think I've ever seen a mathematics paper I thought actually did have too much. If you structure your writing so experts can easily skim things they already know, then I don't see why extra explanation should trouble anyone. Tradition requires being concise, but that tradition was based on the economics of publishing. Space in printed journals used to be a scarce resource, and if you spent it on unnecessary exposition you were keeping someone else from publishing at all. Publishing still isn't free (and readers have only so much patience, as well), so you shouldn't include enormous amounts of unnecessary background, but traditional writing styles should be adjusted to account for the greater availability of space.
It's possible that a referee or editor will object that they don't like your style. You may be able to resist making any changes (it depends on how exciting your paper is and how high your own status is within the community), and if you do make changes you can still keep a lot of explanation.
If necessary, it's worth having two versions as Paul Garrett suggests. If you do, then you should make it very clear which is which and what the differences are, to avoid serious confusion if someone refers to the paper without realizing they need to specify which version.
>
> The background material is "standard" in the sense that anyone who works on this class of problems would know the definitions or results in some form. However, this is at most a few hundred people, while if I include the background material my paper should be comprehensible to an advanced undergraduate.
>
>
>
One reason to be generous with explanation is that it's easy to overestimate how much people know. Aiming at advanced undergrads is a good way to make sure grad students really can read the paper. If you aim at experts, you may write a paper only your own collaborators can comfortably read.
>
> Some of the background results are part of the folklore of the field, and I've never been able to find a proof of them in literature. While they are believable and not hard to prove, I feel someone should bother doing it.
>
>
>
This is a great reason to include a proof, although it's important to consult with experts to confirm that there really isn't an accessible proof in the literature.
>
> It makes the paper longer, although even with the background the paper is not long (13 pages).
>
>
>
You should make sure it doesn't look like padding, say with just a few pages of original content among the 13 pages.
>
> From what I've heard, it is generally considered bad practice to restate definitions and constructions states elsewhere and to reprove theorems available in literature. In part this is because it gives the impression that I haven't read the literature. This is exacerbated by the fact that I only cite ~5 previous works, mostly for further reading or alternative presentations of some of the background.
>
>
>
This does seem like a worry, and five references is not very many, even for a 13-page paper. You might not strictly need any more, but I'd recommend tracking down additional references for background. As a general rule it's best to cite generously, giving plenty of credit to other authors and offering many resources to readers.
Regarding giving the impression that you haven't read the literature, it's important to be clear about what's background from the literature (with a citation), what's folklore, and what's your own contribution. If you reprove something you say is known but don't give a citation for, it can look like you were too lazy to track it down.
Upvotes: 6 [selected_answer]<issue_comment>username_5: One factor to add is that in many mathematics papers, there are many different definitions and notations for a subject. Some of these approaches may make your results seem more or less natural. Even someone who is familiar with the subject may become confused by the way in which you present things.
One good reason for a background section is so that you can "set the scene" using the presentation that you think is best. Sometimes just setting things up in a coherent way is more valuable than the new results themselves.
Upvotes: 1 <issue_comment>username_6: Warning: I have experience with "mathematical" papers in the Computer Science community. In our field, papers are supposed to be (to some extent) self-contained; moreover, it's more the writer's responsibility to make stuff clear, than the reader's responsibility to study the stuff (this generally helps to get more readers and make your research have impact; also, readers don't have time to read all papers they'd want to, so be gentle to them).
I'd optimize background for skippability. In fact, optimize all sections for that, but especially such background.
For concreteness, I'll give an instance of what is reasonable/can be recommended in our field.
```
\section{Background}
In this section, to make this paper self-contained and to fix notation,
we summarize
the theory of representable functors % don't be as vague as
% "background on X"
which we'll use later in the rest of the paper/in Sec. YYY. % Help readers figure out
% whether they actually need this background,
% if it's only needed for part of the paper.
\subsection{Standard definitions}
In this subsection, we summarize background which is available in the
literature, though spread across different references~\citep{1, 2, 3}.
% Don't order the material necessarily by reference, but by how they
% are best presented.
% ...
\subsection{Folklore theorems}
In this subsection, we present some basic results. Although they appear to be
folklore/almost obvious/believable % what you prefer
for experts, we found no proof in the literature, so
we include the proofs for completeness.
```
You can include something that is part of the literature; to show you know the literature, you just need to also add a citation — doing otherwise can be bad practice (if it's very clear the result is not yours), and might be taken as unethical by somebody who misunderstands what you claim.
But you should decide whether to re-include the proofs though — can you lift the statement directly, adapting the notation? Or is the setting different enough that you need to adapt the proof? Or is it important that a reader knows the proof to get your paper, because you reuse similar proof ideas?
However, things in mathematics might be different; the advice should still have similar consequences, but conventions differ. Let me say that the habits of mathematicians are quite frustrating to, e.g., computer scientists: I've seen at least one respected computer scientist describe some standard references (in mathematical style) as "unreadable" or "write-only".
Upvotes: 0 |
2013/06/06 | 1,545 | 7,171 | <issue_start>username_0: I recently tried submitting a paper to a journal. It was mandatory to suggest three reviewers. Is this a norm in journal submissions? If yes, how should one choose reviewers if I do not personally know any experts in the field? I have been submitting papers to conferences and never found such conditions there.<issue_comment>username_1: I was asked to do that several times by an editor after being told (s)he couldn't find referees for my submission (to the point that I now spontaneously tell the editor upon submission that I can suggest referees if need be), but I don't know of any journal (or conference) for which this is *required*.
Anyway, you don't need to know experts personally: you are *suggesting* referees, not *forcing* them on the editor (or your work on them), and whether or not you actually know them should be irrelevant (it's even better if you don't). Read your bibliography, see which authors come up most often, or whose work form the most important basis for your submission, or who would be the most interested in reading it based on their own work, and I'm sure you'll have plenty of names to suggest.
Upvotes: 4 <issue_comment>username_2: Some journals explicitly ask about suggestions for reviewers with a submission, some will consider any suggestions that you make in the cover letter, and others (probably) will just ignore any such suggestion.
There are in fact scientific studies about the comparison between reviewers suggested by the authors, and those selected by the editor, for example [this article in BMC Medicine](http://www.biomedcentral.com/1741-7015/4/13/). The overall conclusion seems to be that reviewers suggested by authors provide reviews of equal quality than those selected by the editor. While they are more likely to suggest acceptance in the initial review, at later stages these suggestions seem to equalize.
As an author, you should have a high interest in getting over that initial review, and if you do it well, suggesting reviewers is a very good opportunity for that. I'd always suggest to make use of such an opportunity, since you probably can judge best which potential reviewers **will look favorably** at your paper. And that's of course what you want.
If you know an expert personally, that's usually a good option. It has to be handled with care though. When you're too close to a suggested reviewer, the editor will give significantly less weight to the recommendation of that reviewer if he knows about personal ties. But if you go to conferences and talk to people about your research, you could suggest them as reviewers afterwards if they have similar interests. Or look at your reference list, as suggested in the answer by username_1.
Upvotes: 3 <issue_comment>username_3: In my opinion, this is a practice that should be strongly discouraged. While on average reviewers selected by the author give fair, high quality reviews, that doesn't mean that the unscrupulous can't exploit this opportunity to select reviewers that share opinions that are far from the scientific mainstream in order to get dubious arguments into the peer-reviewed literature. This is especially the case where the paper is on a contentious topic that is only tangentially relevant to the journal, so the action editor may not be able to easily find adequate reviewers from within their own field.
As an example, there are numerous papers published on climate related issues in energy, astronomy or general physics journals, which can easily be shown to be fundamentally flawed. Where the journal asks for the author to recommend reviewers, it does raise the question of how much this contributed to the evident failure of the review process. It seems to me to be better to avoid the problem ever arising.
Ultimately if the action editors cannot identify satisfactory reviewers by themselves, the work probably doesn't belong in the journal in the first place.
Being able to specify people who *shouldn't* be used as reviewers is, of course, another matter entirely.
To answer the question directly, suggest the names of reviewers whom you consider to have the required expertise in your field and who can give you a rigorous but constructive review. Don't choose people you know personally if there is someone equally well qualified whom you don't know. I recall reading that when you receive reviews you are getting free advice from experts whose time you couldn't afford to buy, so why not attempt to get the most value from it that you possibly can?
Upvotes: 3 <issue_comment>username_4: Being editor of a journal where authors can provide preferred and non-preferred reviewers, I can provide some "inside" thoughts on the subject based on what has happened in "my" journal. Note that it is possible to suggest names for review but also provide names which are not preferred. The latter can be because of a scientific disagreement, personal issues or whatever. Such suggestions appear but not often and we usually follow the suggestions (not that we have to!).
When it comes to the preferred or suggested reviewers, I have been tempted to use them on occasion when it has been hard to identify reviewers directly, sometimes because the topic is local and it would make sense to have local input. In these cases, I cannot remember a single reasonable review that has come out of such reviewers. This can be for several reasons, but most often the suggested reviewer is a close colleague who might have an incentive to help the author. In some cases the preferred names have been very senior scientists who, I am afraid, have lost touch with the subject and provide poor and in some cases almost non-existent reviews. Of all the immediate "Accept" recommendations I get, the vast majority have come from these reviewers. So, I no longer trust these names and avoid them at all costs unless I personally know the reviewer, or know of his or her good reputation.
In addition to what I just described, I must also note that it is often the weakest manuscripts that list several suggestions. This can be recognized by the disparate review results: sometimes one accept (from the suggested reviewer) and one reject.
Now, in principle, there is nothing wrong with suggesting reviewers; I have done so myself when requested. I have then, as a matter of principle, gone for established and renowned names in the community. The problem lies in suggesting names for a purpose other than getting a fair and objective review.
It is clear that the system can be and is abused, and since I became Editor-in-Chief I have come to rely less and less on these suggestions; I now mostly look upon them with suspicion and make selections from my own understanding of the field and investigation of the subject literature. The best advice I can provide is not to avoid mentioning names, but to pick names that in your opinion can provide good, constructive critique of your work (and not just favorable reviews). A note on why you have selected names as preferred or non-preferred would also help greatly, since it puts your choice in perspective.
Upvotes: 6 [selected_answer] |
2013/06/06 | 748 | 3,129 | <issue_start>Considering the ethical and professional implications of self-plagiarism, it is very useful to check a new manuscript against old ones (which are under review and not yet published) to reduce self-plagiarism. According to a white paper by iThenticate (download from <http://www.ithenticate.com/self-plagiarism-free-white-paper>), one way to avoid self-plagiarism is to significantly paraphrase any text that needs to be reused in a new article. However, I have difficulty identifying the similarity percentage of my current work against yet-unpublished works. The online tools that I know of check a manuscript against published works. I have 3 journal papers: 1 is published, 1 is under review, and 1 is under preparation. Now, I need to check the third one to see the similarity score. Any suggestions?<issue_comment>username_1: SourceForge offers a few free tools aimed at detecting plagiarism (see <http://sourceforge.net/directory/os:linux/freshness:recently-updated/?q=plagiarism>) for various platforms.
Upvotes: 0 <issue_comment>username_2: Does the paper make an original disciplinary contribution to scholarly knowledge?
Has any reused material been rewritten from scratch?
Has the reused material been cited in relation to the paper under review?
If so, it should be fine. The first one is the big one; I assume that people with a doctorate in their discipline should know whether this has been achieved. One nostrum I use is meeting at least one of these three: a new evidence set, new theoretical tools, or a new analysis.
As an example of this nostrum, suppose Case A is an organisation demonstrating concept Q through Marxist class analysis. Then:
* Case B with Marxism demonstrating Q is novel
* Case A with Patriarchy analysis demonstrating Q is novel
* Case A with Marxism demonstrating R is novel
Upvotes: 1 <issue_comment>username_3: *First suggestion*: Never copy-paste material from earlier works, always write everything from scratch.
This prevents you from moving critical passages from one paper to the other. It is of course still possible to end up with a similar or even identical sentence by accident, particularly if you describe a method or something similar. If you end up using the same method for several papers it is also possible to simply reference your original description in subsequent papers. The [Ithenticate white paper](http://www.ithenticate.com/Portals/92785/media/ith-selfplagiarism-whitepaper.pdf) mentioned by you provides good advice. You can also visit the [COPE (Committee On Publication Ethics)](http://publicationethics.org/) web site and search for self-plagiarism.
*Second suggestion*: Use software such as *iThenticate*, *turnitin* or the like, depending on what you are looking for. But I would not blindly look at percentages of overlap; focus on where the overlaps occur. If the overlap is in the methods section, describing a series of steps in a process, it is clearly not as critical as overlap in the discussion or conclusions. If you find overlap and you can identify what it is, then go back to my first suggestion and critically review whether you need to say it again.
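For a quick local check between your own drafts, before reaching for commercial tools, a rough overlap score can be computed with Python's standard library (a crude sketch; the sample sentences are invented, and real services use far more sophisticated matching):

```python
from difflib import SequenceMatcher

def overlap_ratio(text_a, text_b):
    """Crude 0-to-1 similarity between two manuscripts."""
    # Normalise case and whitespace so layout differences don't skew the score.
    a = " ".join(text_a.lower().split())
    b = " ".join(text_b.lower().split())
    return SequenceMatcher(None, a, b).ratio()

draft = "We applied the proposed method to three benchmark data sets."
earlier = "We applied the proposed method to two benchmark data sets."
print(f"Overlap: {overlap_ratio(draft, earlier):.0%}")
```

A high score between a section of the new manuscript and an unpublished draft is a signal to paraphrase or reference, per the first suggestion.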
Upvotes: 2 |
2013/06/06 | 1,302 | 5,752 | <issue_start>username_0: My university is on a drive to unify its corporate identity (it makes me sick just typing that). This drive includes branding our lecture slides, research talks, and research posters. The branding templates are not released under an open license and utilize a copyrighted logo and proprietary fonts. I think this prevents me from releasing my talks and posters under a free and open license (e.g., [CC BY](http://creativecommons.org/licenses/by/3.0/)) which is one of the tenets of [Reproducible Research](http://www.rrplanet.com/).
Apart from creating two versions of everything, is there any way to reconcile this apparent incompatibility?
Is my understanding of licenses like [CC BY](http://creativecommons.org/licenses/by/3.0/) wrong? Can I release something (e.g., a research talk or poster) with a copyrighted logo that I don't hold the copyright on under a free and open license?<issue_comment>username_1: My suggestion is to follow your school's guidance as much as you have to, but figure out a way to remove (by script, macro, by hand, or otherwise) the branding for distribution. Have both versions available on your website. If you have a presentation that you may want to distribute, put a note at the bottom of the title slide, or at the end:
>
> A freely distributable version of this presentation is available here: [web address].
>
>
>
If your branding mechanism is a master/template slide, it is trivial to remove this for a non-branded version. I have a hard time believing that printing issues (for posters or handouts) should be a big concern for redistribution -- if it isn't digital, very few people are going to scan/copy for redistribution anyway.
Upvotes: 2 <issue_comment>username_2: My university has enforced its "corporate identity" since the 1990s. It is described in detail on a web site and available in a series of templates. Although I get a sense of tiredness when I read the material surrounding the "identity", I can also see benefits: it helps to recognize the university "products" (sorry) among other materials at a congress, etc. But the question was about reproducing material.
As I see it, I would want to have the logo on material such as posters or presentations so that people can identify my affiliation. I am free to post presentations, posters or other materials on my university site with the logos on them. If I want to put out some material that is mine, I simply would not use a university logo. An example: I have written several hopefully useful booklets on scientific writing to be used by students. This material is my initiative and is not the result of the university asking for it. These booklets are distributed for free using our web page, and I would gladly distribute them more widely if there was demand. So from this perspective I can see two different "products": one that benefits from the logo and one where I do not want it.
Now, the rules of my university say the logo is copyrighted, which means others cannot use it. This still means I can post material with the logo in public places. The problem arises if someone takes, say, my presentation and uses it as their own. Then they break the copyright and make themselves guilty of a kind of fraud by associating themselves with an organisation to which they do not belong. I still have done nothing wrong; posting material is fine and even encouraged. The copyright also prevents people from taking the university logo and adding it to their own "product", for example showing it on their web page or using it for commercial purposes.
So now, the content. You seem to indicate that you will be prevented from displaying your work without the branding. I do not think this is correct. The laws on copyright, and particularly intellectual ownership, in my country are very clear. If you have created something, it is yours. In a commercial company you may end up signing away this right by becoming employed, so that the things you develop within that company belong to them, not you. That is how research in pharmaceutical companies works, for example. My university system has made attempts to gain rights to lectures etc., but this has so far failed miserably due to the strong laws. You need to check the laws that apply to you, since I do not know how they may vary internationally; I would expect them to look fairly similar.
You mention "proprietary fonts". This means the university has selected fonts and bought them from a font foundry so that you can use and copy them for free within the university system. This does not mean others cannot use them; they must simply buy them first, so providing copies to persons outside of your university would be illegal. Since the fonts themselves are not included in Office templates or in PDFs resulting from your templates, there is nothing illegal about distributing such documents. If you were to take the fonts and produce a product that you were to sell for your own personal gain, however, you would break the law.
The bottom line, then, is that you can put your material in the Creative Commons as long as you avoid the logo (after all, who would want to use a figure with somebody else's logo in it?). I cannot see the university preventing you from doing this unless they explicitly ask you to waive your rights. Material with the logo has its place when you want to make sure your affiliation is clear. If you do not want that, then I believe you are free to post things in another style.
Finally, awareness of the laws and regulations concerning intellectual rights is important, and I strongly believe it is good to look carefully at whatever applies in each of our cases so that we can react if someone tries to infringe on such rights.
Upvotes: 4 [selected_answer] |
2013/06/06 | 849 | 3,961 | <issue_start>Doing a systematic review, either quantitative or qualitative, requires developing a well-defined protocol as a method of conducting it. This includes defining the search methods used in the identification of eligible articles, of which there are two:
1. *database search* using *search queries* (a main method)
2. searching in the references and cited-by sections of the selected articles (which are already retrieved by the first method)
Let's assume that an author (of a large review) does their best to formulate the *search queries* to best represent the review question, but later realizes that a **considerable proportion (e.g. 50% or more)** of the relevant articles are found *only by the second method* (searching in articles).
Is the second method less 'systematic' than the first one? Does it compromise how much 'systematic' the resulting review is?
If so, does the above scenario affect the validity of the *search queries* used (i.e. should they be reformulated to increase the recall of search and retrieve more of relevant articles)?
**Edit:** More extremely: if 90 out of 100 selected articles are found only by using the second method, how much does this affect the quality of the systematic review?<issue_comment>username_1: Databases are not complete, and published articles do not necessarily reference all pertinent literature, so it seems unlikely that one would capture everything relevant by using just one method.
Databases are probably relatively complete for more recent publications. I would not like to define "recent", but I see it as mostly post-1990s. My guess is that databases are centered around more widespread journals, and more local journals may not be well represented. This means that, depending on the search area, they may be more or less complete. Going further back in time, less and less will be found in databases, so if the topic has a vital history then database searches will cover only the "recent".
Reference lists may pick up more older material, but that is of course dependent on the authors' willingness to research the literature. It is possible one might pick up more esoteric references this way, but I fear the selection will be fairly random and not comprehensive.
So using both methods seems like the safest way forward to me. Depending on the subject matter, a deeper understanding of where and when things might have been published in the past may be vital in order to capture most of the relevant literature on the subject. To venture so far as to say everything will be found is difficult. Particularly during the Cold War, much was published in, for example, Russian journals that never reached the West. Many discoveries published in the West were therefore missing their Eastern counterparts and may not even have been first. Much has thus been lost through lacking translation, and that goes for many if not most languages. One must also remember that the publication scene as we see it today was not in place earlier, when internal reports and local journals may have taken up much research. To rely on just one of the two methods may therefore be inadequate.
Upvotes: 2 <issue_comment>username_2: I would add to the other answer(s) that you could validate the first approach using the second approach. If your search criteria systematically miss relevant articles *in the journals covered by the database*, that is a clear sign you need to reformulate them, because they are evidently not sufficient.
The second thing to add is that any search should also try to find unpublished articles. These are definitely not going to be in databases, but may be cited in papers (thus the importance, at least in my view, of the second approach). It might be worth adding a third approach that involves searching conference abstracts, where unpublished (especially recent) work is likely to be found.
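To make the validation idea concrete, the recall of the database query can be computed against the full set of included articles (a toy sketch with made-up identifiers):

```python
# All articles ultimately included in the review, found by either method.
included = {"smith2010", "jones2011", "lee2009", "chen2012", "wu2008"}
# The subset that the database query alone retrieved.
found_by_query = {"smith2010", "jones2011", "chen2012"}

# Recall = fraction of included articles the query found (3 of 5 here).
recall = len(found_by_query & included) / len(included)
print(f"Query recall: {recall:.0%}")
if recall < 0.5:
    print("The query misses most included articles; reformulate the search terms.")
```

If most included articles only surface through reference snowballing, this number makes the weakness of the query explicit and reportable in the protocol.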
Upvotes: 3 |
2013/06/07 | 840 | 3,477 | <issue_start>username_0: Knowing that there are some members of hiring committees here, I am hoping for some insights.
Having read [Rise of Altmetrics](http://chronicle.com/article/Rise-of-Altmetrics-Revives/139557/?cid=wc&utm_source=wc&utm_medium=en), I started to wonder what would be the best way to measure the impact of my publications. I do care for many reasons, but one of them is to impress others enough to be able to get a job/promotion/tenure.
So, my question is, do hiring committees take alternative metrics (Facebook likes, mentions in blog posts, etc.) seriously when trying to measure the impact of someone's work or are other issues like impact factor more important?
[This question](https://academia.stackexchange.com/questions/1020/what-is-a-fair-metric-for-assessing-the-citation-impact-of-journals-across-disci) is related but is more about measuring the impact of a journal, whereas my concern is measuring the impact of my own work (which might be in several journals).
[This question](https://academia.stackexchange.com/questions/4965/are-academic-indicators-h-index-impact-factor-etc-really-adopted-by-institut/4966#4966) is also related by asking how impact measurements affect job prospects and has some excellent answers but my question is specifically about 'unofficial' metrics like tweets, downloads on SlideShare, etc.
[This question](https://academia.stackexchange.com/questions/1206/how-many-people-read-an-individual-journal-article) is also related, asking how to measure readership of journal articles, but my question is whether alternative measures are taken into serious consideration by those making hiring/tenure decisions.
I know there is the h-index (with its own flaws) but that seems to measure my publications as one unit (all publications taken together, therefore measuring me overall). I'm more interested in measuring the impact publication by publication in order to show an improving trend.
On a side note, it seems that there is a general feeling that a publication in a high-impact-factor journal equals a high-impact publication. This feels a little off to me, since one might convince the editor their work is important while at the same time failing to convince their academic community of the same.<issue_comment>username_1: I served on a hiring committee in mathematics at a research university in the United States, and I don't believe that any of us paid attention to the sort of metrics you describe.
My personal inclination is that I would not recommend listing any of this information unless it is unusually notable, e.g., if someone particularly well known blogged extensively about your work. However, different countries, universities, and departments might operate differently.
Upvotes: 4 [selected_answer]<issue_comment>username_2: If altmetrics such as the number of Facebook likes, the number of mentions in blog posts, etc. were considered a measure of impact and rewarded by hiring committees, then scientists would have significant incentives to increase their altmetrics. Such altmetrics are very easily manipulated (e.g., by creating fake Facebook accounts, faking blog posts, or automating downloads), so once they start to be artificially inflated, their relevance for estimating impact would disappear.
I have not heard about altmetrics being taken seriously by hiring committees, and given the argument above it is likely that this will not happen in the future, either.
Upvotes: 1 |
2013/06/07 | 2,676 | 11,286 | <issue_start>I have been working on an idea for the last 2 years almost independently, alongside other research work. My advisor did not believe in my work much initially, so I did not get an RA for two years even after requesting one. Recently, I have been getting encouraging results on some specific examples and scenarios, with good hope of success in solving a complex problem using that idea. I have not published the work yet. Initially, my advisor was not interested in the idea, partly because the work is not in his area of expertise, and insisted that I spend my time on other research projects with a senior colleague. I pursued it out of my own interest despite the lack of RA support, but with the new results and potential benefits of the approach my advisor became extremely interested and even described the work as the next big idea in our lab meetings. I am happy about it, or maybe he says it to make me happy.
However, recently I encountered a situation which was difficult for me to comprehend. I found that my advisor had presented a perspective paper, along with many other renowned experts in the field, proposing and highlighting the approach I have been working on as a future and visionary direction in the field, alongside other important developments, at a conference. Even though I was not a co-author of that paper and my work was not cited or even acknowledged, I consoled myself with the thought that my advisor was promoting the idea; it was an advertisement of the work (of course, without any acknowledgement).
As he was not the first author of the perspective paper and there is a possibility that the first or other authors could lay claim to it, he asked me to file an updated technical report in the department before the paper is published. It looked to me like he wanted to promote himself among his colleagues with that idea without acknowledging it to me before the audience and the greater scientific public, where it matters.
I happened to attend the conference as a PhD student, and found that the presenter of the perspective paper (whom I don't know) devoted more than half of his talk to my idea, using the slides that I had shared with my advisor, and there was no acknowledgement or mention of my report or work. It was even worse to see that some of the terminology that I had planned to use was disclosed, and a few terms were misinterpreted in the explanation.
Even then, people really seemed to like the idea and the approach, and many are convinced that the idea is going to impact the field. While I saw a very drastic change in the way my advisor treated me recently, what really made me sad was when my advisor asked me to refer to this perspective paper (of which I was not a co-author) in my impending submission (on the idea).
I feel like it was unfair, but I don't know if research is done this way in academia or if it is perfectly legitimate to do something like that. I decided not to cite the perspective paper, despite the possible consequences. I just want to know how other students handle such situations effectively and whether such a thing is common practice.
Edit: I do have all email traces and even a previous publication explaining part of the idea and a recent technical report submitted to the department with the complete idea.
UPDATES
June 2014:
I have continued with the situation I described above, honestly because as a student I hardly have any options and, as many suggested, confronting it would be academic suicide. But it has impacted me severely, mostly because I believe that any good idea I bring to the table will be stolen or misrepresented, and there will be clever manipulations to take ownership of it. I take two steps forward and three steps backward. I could hardly perform to my potential. I will let you know my ordeal soon, and many thanks for your kind help and support.<issue_comment>username_1: Check the policies of your university when it comes to intellectual property, specifically for the faculty you are studying under. Make sure you fully understand the guidelines, and I mean absolutely certain.
**If** you find that there is a discrepancy, meaning that this is frowned upon, then you have your original presentation (although, to be honest, I am not sure how credible this would be as evidence).
Perhaps you could speak to your supervisor about completing a co-authored paper on the topic, to be published in a peer-reviewed journal (this is what my supervisor and I do).
Upvotes: 2 <issue_comment>username_2: The only way you're really going to be able to establish some sort of claim to recognition is if:
* Your advisors did not independently come up with the idea, and choose to change their stance and give you credit for the idea; or
* You can establish conclusively that this was *your* idea, and not your advisors'.
The best way to do this is if you have a *verifiable* documentation trail supporting your claim. This means that you have conclusive records showing that the work exists. This would include things like emails, *verified* laboratory notebooks, and other documents that can be dated and that demonstrate that you came up with the idea. The challenge, of course, will be showing that you came up with it *independently* of your advisors (which would require that you have documented proof showing that they discouraged you from working on it.)
Upvotes: 3 <issue_comment>username_3: While the other answers are good, I have an alternative approach for you.
Have you actually gone and spoken to your advisers about this? A *perspective* paper is just that. It talks about concepts and the ***next big thing*** - which very well may be your work.
I suggest that you have a nice, sit-down, frank discussion with your advisers about this and make clear what the future directions and expectations are regarding publications, collaborations and co-authorship. That would clear the air quite a bit, which, frankly, is rather hazy right now.
Upvotes: 3 <issue_comment>username_4: As far as I understand the situation, it seems to me that your advisors' behavior is borderline, even if likely on the wrong side of the border. Since it does not look like very clear and frank misbehavior, even if you are perfectly right it would be tremendously difficult to prove it beyond doubt. In case of doubt, you could end up being seen more as a troublemaker than as the one who came up with that great idea; the surrounding people to whom you could complain (department head, etc.) would hesitate *a lot* to go against tenured faculty when the misbehavior is not crystal clear, etc.
With this in mind, I would strongly advise you not to confront your advisors too aggressively. You can (and should, as advised by username_3) discuss with them the fact that you are uncomfortable with the way they presented things, with the use of your slides without permission, and so on; but **always leave them a way to discuss it calmly**. If they feel cornered, there will be little chance of the situation not degenerating into a conflict, and it would be very difficult for you to survive a conflict with your advisors professionally.
You could cite their perspective paper in a way that, in a friendly manner, makes explicit that the idea is yours. Your aim should be to get decent credit for your idea, even at the cost of letting your advisors benefit from it: think about what you have to gain or lose first, rather than about what they stand to unduly gain or what you can cost them.
If everything goes smoothly, you can get into a good position to build on your idea, and be on track for your career.
Once you're a respected tenured faculty, you should remember this episode and be supportive of young researchers.
Upvotes: 5 <issue_comment>username_1: I was in a similar situation before (in biology) about 5 years ago (and this was an ivy league school on the East coast of US) – I had paper A published and paper B (following from A) in the works, when the advisor tried to work on paper C (following from A, borderline with B). The deal is that neither B nor C would've been possible without A, and one of B or C was necessary to show the full impact and worth of A (think detailed theory paper A vs lab experiments B and C). For other reasons, it couldn't be written as a larger A+B or A+C paper, but that's beside the point. The problem here was that the advisor, being faster at churning out a paper, finished C before I finished B despite starting later, and then he insisted that we focus on polishing and submitting C before returning to B (I was 2nd author). Indeed that's what happened, and we cited C in B when B was also eventually published (thus changing the "science order" from how it was). Note: I was not actually worried about publishing C, just its appearance before B.
My issues were:
* A was my idea, my work.
* The idea for C was also mine (leading from A), but was shelved (by me) until I had the time to run the experiments for those (the assays and lab experiments weren't a small deal).
* I'm the student needing advising, not poaching of ideas.
* Publishing C before B made it look like it was the advisor's bright idea, when it was not (no, really... this didn't start with "here, work on my old unfinished idea")
The advisor's view was:
* A is already published, so it's "out there" for everyone including him.
* B and C are not exactly the same, so what's the big deal?
* In the long run, precedence differences of O(weeks) won't matter, period.
* I'm also an author and I now have 3 papers instead of 2.
In the end, I came to terms with it, and in hindsight (after 5 yrs) I should not have made such a fuss, because
* I got 3 papers instead of 2
* It was a fresh change of roles (he did do the work, and I was in an advisory role)
* In the long run, precedence differences of O(weeks) didn't matter (might be different in other fields), and since I continued publishing in the same field, it now looks like I'm the man behind the plan.
* Resentment never did anyone any good.
* The dude is a hell of a supportive advisor in all other ways, so this wasn't worth burning bridges. Maybe he genuinely didn't see things from my PoV and didn't intend to poach.
The bottom line is that what you're describing, while borderline, might not be uncommon. In particular, using graduate students' results in a presentation to a funding agency but passing them off as the PI's own "project" is very common, because despite what you might want to think, a lot of the time it's the reputation of the PI that brings in the money rather than the merit of the idea itself (i.e., a mediocre idea from a rock star PI has more chance of getting funded than a rock star idea from an unknown researcher).
However, it was absolutely wrong of them not to have included your name or acknowledged your contribution (which has never happened to me). You might want to bring that up, but you should think about whether you really want to burn bridges for a "small" reason. I say "small" in quotes because, while from a strict ethical PoV they might be in the wrong and you're justified in your anger, and it is *not* a small issue for you, in the long run the objective function of life is a multivariable function. Don't fixate on just one variable and make a decision (that you might come to regret) based on the local minimum that you're stuck in now.
Upvotes: 4 |
2013/06/07 | 1,035 | 3,907 | <issue_start>username_0: I am using Mendeley to keep track of my papers. It allows me to group the papers in folders, and I can easily export BiBTeX for a folder which I can include in a paper I am writing.
However, almost always I need to fill in all the details when I add a new paper to my library. "Search by Title" feature does not do a great job.
I think everyone around the world is creating their BibTeX (or any other format) entries by hand. Is there a community-driven (Wikipedia-like) website/service that unifies this process?
What I have in mind is something like this: people add article meta-information (title, authors, year, etc.) to a public DB; if they see a mistake they correct it, and if they see duplicates they remove one. And anyone can export entries as BibTeX, etc.
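The deduplicate-and-export mechanism described above can be sketched in a few lines of Python (purely illustrative; the entry fields and the title-based key are my own invention, not any existing service's):

```python
def dedup_key(meta):
    """Normalise the title (lowercase, alphanumeric only) to catch duplicates."""
    return "".join(ch for ch in meta["title"].lower() if ch.isalnum())

def to_bibtex(cite_key, meta):
    """Render one metadata record as a BibTeX @article entry."""
    fields = ",\n".join(f"  {k} = {{{v}}}" for k, v in meta.items())
    return f"@article{{{cite_key},\n{fields}\n}}"

db = {}
for meta in [
    {"title": "On Widgets", "author": "A. Smith", "year": "2013"},
    {"title": "On  widgets", "author": "A. Smith", "year": "2013"},  # near-duplicate
]:
    db.setdefault(dedup_key(meta), meta)  # the second copy is silently dropped

print(len(db))  # one entry survives deduplication
print(to_bibtex("smith2013", next(iter(db.values()))))
```

The hard part of the real service would of course be curation and trust, not the export format.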
The closest thing I saw is [Google Scholar](http://scholar.google.com/). But because it tries to automate nearly the whole process there are still lots of mistakes.<issue_comment>username_1: JSTOR and many publishers include BiBTeX format for articles on their webpages:
Example from Cambridge University Press (click "How to Cite This Article"): <http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=92189>
Example from JSTOR (click "Export Citation"): <http://www.jstor.org/stable/2585925>
Upvotes: 2 <issue_comment>username_2: Sadly, not yet...
Many publishers view their bibliographic metadata as commercial property and hold on to it fairly tightly.
Mendeley will release little bits of the article data they have accumulated from all their users, but they will not let anyone download *all* of it - it's valuable property that they want to keep 'in-house'.
There are a few truly community-driven projects trying to change this e.g. [Open Citations](http://opencitations.net/) & [Open Citagora](https://docs.google.com/presentation/d/19047x2Awo_H5Gkx79i68eQ6FunUeBVGV5_9pBK-A8eU/edit#slide=id.gd13c835f_0142).
These projects aim to build truly public & open databases for *anyone* to build upon & annotate. The other tools mentioned in the question aren't really public or fully open.
Upvotes: 3 <issue_comment>username_3: *This applies only to mathematics.*
Personally I get most of my BibTeX entries from [MathSciNet](http://www.ams.org/mathscinet/index.html), where one can search for a particular article and then ask for BibTeX. Here is [a related question on MathOverflow](https://mathoverflow.net/questions/20551/sources-for-bibtex-entries); some of the answers there seem transferable to other fields.
[I agree that these are not community-driven resources, but address the related question of where to obtain bibtex entries.]
Upvotes: 2 <issue_comment>username_4: I find that JabRef's import from PubMed (PubMed search -> display list of PubMed IDs -> copy to JabRef) does a very good job for the bio/medical fields.
Upvotes: 2 <issue_comment>username_5: I use [citeulike](http://www.citeulike.org/). Their smart bookmark lets me create an entry automatically from most articles on the web. Once it is in your library, you can easily export it as BibTeX (one click). I almost never have to type in BibTeX entries anymore.
However, for a number of journals that do not have DOI support, and for conference proceedings, it unfortunately comes down to editing the BibTeX entry myself. (Sometimes someone else who uses citeulike will already have done that, and you can search for the entry and copy it to your library.)
Upvotes: 2 <issue_comment>username_6: You might want to check out the "Initiative for Open Citations"
<https://i4oc.org/>
>
> The Initiative for Open Citations I4OC is a collaboration between
> scholarly publishers, researchers, and other interested parties to
> promote the unrestricted availability of scholarly citation data.
>
>
>
It seems to be growing in coverage. It could form the basis for more open search tools.
Upvotes: 3 |
2013/06/08 | 1,101 | 4,372 | <issue_start>username_0: I've just read a paper that cited the same reference in two successive sentences:
>
> This is the first sentence (xxxx 2013). This is the second sentence
> (xxxx 2013).
>
>
>
Up until now, I would have cited the reference just once, like this:
>
> This is the first sentence. This is the second sentence (xxxx 2013).
>
>
>
Which method is correct?<issue_comment>username_1: Neither is correct, it is a matter of style.
Refer to the style guide of the journal, publishing house or conference that you're writing for.
Upvotes: 3 <issue_comment>username_2: In general terms, the reference should be made where the cited information occurs. If you cite only in the second sentence, it is not clear where the information in the first originates. A similar problem occurs if you cite an entire paragraph by adding a reference at the end of the paragraph as "(Xxxx, 2013)" (I am fully aware that this is the norm in some fields).
Citing the same reference in two sentences is clearly wrong. The solution as I see it is to write the sentences so that it is clear they belong together. There are several ways to do this. One way is to avoid the passive, parenthetical reference and use the active reference, where only the year is in parentheses. As an example, you can start the first sentence by stating "Xxx (2013) states ..." and then in the second say "They furthermore ...". In this example we provide a bridge between the two sentences so that it is very clear that the same reference applies. Instead of "They" you can also use "Xxxx".
There are clearly numerous ways to bridge sentences so the form depends on what you need to say. As a result I would recommend putting the reference in the first sentence, not the second.
Upvotes: 6 <issue_comment>username_3: This is exactly what the abbreviation "Ibid." is used for:
>
> This is the first sentence (Xxxx, 2013). This is the second sentence (Ibid.).
>
>
>
It derives from the Latin word "ibidem", which means "in the same place".
Source: <https://en.wikipedia.org/wiki/Ibid>.
Edit: Disclaimer
================
Following the comment discussion below this answer, I would like to state clearly that the usage of "Ibid." is highly dependent on the field of study and the general citation style you are using. If you have never encountered this abbreviation before in your field of study, you should probably not start using it.
Upvotes: 2 <issue_comment>username_4: I think if you are writing something that refers to several sources repeatedly, you should perhaps use a different referencing system, such as the superscript numbers of the Vancouver referencing system.
I assume this is a problem more likely to be faced when writing a cohort / review paper.
Upvotes: 0 <issue_comment>username_5: What I would do in this case depends on whether you're citing two different claims/results or just two pieces of text within that paper related to the same claim/idea/point.
* If it's **two different results**, definitely cite them separately, regardless of whether the citations are closeby or not; and I would make an effort to indicate, with each citation, the exact location of the specific claim/point, so it would be *clear to the reader* that these are two distinct claims. (If you're using LaTeX, it would look something like`\cite[\S 1.2]{ThatXXXPaper}` and `\cite[Appendix B]{ThatXXXPaper}`.)
* If it's **the same result/claim/point**, and you're just citing the continuation of the text, take the advice in other answers, i.e.:
+ It may depend on the stylistic conventions in your field
+ It may depend on the stylistic conventions of the conference/journal to which you're submitting the paper, or your university's regulations if it's a thesis
+ You might want to use "ibid." (ibidem) instead of repeating the citation
+ You might be able to cite just once at the end of a paragraph (assuming that doesn't create ambiguity)
+ You might want to avoid the second citation by appropriate rephrasing as @PeterJanson [suggests](https://academia.stackexchange.com/a/10513/7319).
Upvotes: 2 <issue_comment>username_6: APA - Documentation does not need to be repeated for every idea within a single paragraph. For example, if you retrieved the information in three consecutive sentences from the same source, you can put the citation after the third sentence.
Upvotes: 2 |
2013/06/08 | 900 | 3,809 | <issue_start>username_0: I've degrees in International Economics and Business Administration.
I consider myself an entrepreneur. My past degrees certainly gave me perspective, but I feel they lacked solid practical knowledge, so they have not helped me much in my entrepreneurial career.
I was thinking of going for a computer science degree. But with a bachelor's degree in Economics/BA, will it be hard for me to go directly into a master's program (even though I have fairly good knowledge of the subject)? Restarting with a bachelor's degree in a completely different field could be a total waste of time for a 26-year-old.
Also, I'm aware that a lot of students do it the other way around: first they get a science degree, and then they do a master's in business administration. Do you think my way will be harder and more intense, since a master's in computer science can be much more challenging than a master's in business administration or a bachelor's in computer science?<issue_comment>username_1: I can think of a couple of options off the top of my head:
* Apply for a Masters in CS anyway and see how you go, you *might* get credit for your experience.
* perhaps do a postgraduate certificate/diploma in CS (if available) and use that as a stepping stone to the Masters degree that you desire.
* Perhaps consider a MBA that has a strong focus on CS (as a major).
It would also depend on whether you are intending to pursue your proposed Masters by coursework, by research, or as a mix of the two.
Upvotes: 2 <issue_comment>username_2: Another option is going for a **master's degree in business informatics**; a lot of European universities would admit you to one. I do not know if this kind of discipline is big where you live, but it is one of the biggest disciplines at many universities in Germany and Austria. A study from the University of Vienna has shown that business informatics graduates earn the most in industry and have the fewest problems getting jobs out of all graduates.
You would have to take roughly one semester's worth of extra courses in algorithms, software development, etc., but that should be neither an intellectual problem, since you already know a lot about it, nor a practical problem, because you will get a more academic and thus systematic idea of what matters in computer science.
Even if you got into a computer science master's right away, how would that help you? You seem to have many practical skills from CS, but you would likely end up doing it just for the degree, because students with a bachelor's in CS will most likely be much better prepared for the methods required in a master's program in that field. If you want pure CS, you are likely much better off starting with a bachelor's, perhaps alongside working if you are already really good at CS.
If you care about how *hard* a master's will be, that cannot be answered right away. But you have to keep in mind that economics is a social science and thus not as exact as a technical science.
Upvotes: 2 <issue_comment>username_3: As a serious question: **what do you hope to gain from the additional degree**?
You haven't really indicated *why* you want to get a master's degree in computer science, other than "it's the next step." You also haven't indicated why you think you'd be able to get into a master's degree program (do you have enough CS courses to convince a committee?).
But, most importantly, the question to ask yourself is whether or not there is anything really to gain from the two years or so spent earning a master's degree that you can't obtain via another route. If you can do that, then it's probably worth trying. If, on the other hand, it's just to improve programming skills, there are *lots* of better ways to do that than to spend two years in CS (which doesn't really do much in that regard).
Upvotes: 2 |
2013/06/08 | 1,535 | 6,220 | <issue_start>username_0: This past year I was accepted into two mid range phd programs in mathematics without funding. Here's what my profile looked like:
Domestic White Male
Unknown state school in the midwest
Majors: Mathematics and Philosophy
GPA: 3.93
GRE:
Q: 168 (97)
V: 167 (97)
W: 5.0 (92)
M: not taken
Interests: Analysis, Medical Imaging/Modeling, Mathematical Physics
Major Coursework:
Complex Analysis, Discrete Structures, Applied Math (survey course mostly in Fourier methods and PDEs), Mathematical Statistics.
Recommendations:
One from a top applied mathematician who was very late in sending them out, one from a professor from the previous year who is well regarded but has not had much contact with me and one from a young professor who was fresh out of post doc.
Other:
I'm 30 years old. Some former work experience as a co-op student in engineering at a national lab, dean's list, normal stuff like that. I had only completed two semesters at my current school, though. About 6 years ago I was an engineering student at a different institution with poor grades and I did not finish my degree. Also mid to high level (lots of national, some international competition) as an athletic coach working with kids, teens and adults. Some international experience as an athlete as well. I also speak basic German.
Here's how my profile has changed:
Current GPA: 3.94 (.01 difference! However, this does mean 6 more As and 1 more A- to counterbalance my poor record from the early-mid 2000s.)
GRE Subject Score: 660 (52)
Additional Coursework: Formal Logic, Real Analysis, Advanced Linear Algebra, Intro to Abstract Algebra. This summer I am doing a course in number theory and an independent study in Galois theory.
Recommendations: I am going to give the late professor a much longer lead time this time around. He has said that he wants to help me but is always extremely busy. I will also be asking (and almost certainly receiving) a rec from the professor I am doing my independent study with.
Interests in pure math have shifted away from analysis, more into algebra. In applied, medical applications (organ/system modeling and imaging more than bioinformatics) have taken the lead over mathematical physics.
Other changes: Medalist in school's math competition, math tutor in our honors college and privately. Taking a Spanish immersion course so I can list basic spanish on there as well.
I'll be taking the GRE subject again in October and am using saylor.org and MIT's opencourseware to review older subjects. I graduate in August and will be working for a year. I'm thinking about applying to teach at a high school or community college for that time. I also plan on taking one or two graduate courses per semester.
I plan on applying all over the place (>15 apps) compared to the 8 from last time. I would prefer to go to the west coast. Can anyone let me know if the mid level UCs (Irvine, Santa Barbara, Davis) are reasonable with what I've got now, especially since when I worked at that national lab I was technically a UC employee? Any other recommendations on schools or anything I can do over the next year to boost this?
One other thing: I have no REUs, but is a good independent study with a strong recommendation from it a decent substitute?
Thanks for taking the time to read my post.
2013/06/08 | 967 | 4,372 | <issue_start>username_0: This might sound like a silly question, but I am not a native speaker of English, so I find it sometimes difficult to write my first draft in that language. What I usually do is write my first draft in Italian, my mother tongue, and then translate it into English. Once I went to a short course on writing in English, where the lecturer advised us that it is better to write the first draft in English, even though it can be very difficult for some people.
What would be your advice in this case? Should I still stick with writing my first draft in my mother tongue and then translate it into English? I feel more productive that way, but any advice would be helpful.<issue_comment>username_1: **Personal opinion from a non-native English speaker:**
You should write it as much as possible in *English*. Start by working with bullet points of your ideas and then transfer them into full sentences once you are done. Since most academic papers are in English, it has been mentioned on this platform many times that you should get used to the language, i.e. the vocabulary and the way people reason, in your field.
Upvotes: 4 <issue_comment>username_2: For better or worse, the lingua franca of science nowadays is English. If you plan on staying in science, you can use all the English training you can get. So, I would advise writing everything in English. Do try to get feedback from someone who is good at English, preferably a native speaker, to point out any language errors you might not see yourself.
Upvotes: 3 <issue_comment>username_3: I think I understand why you're more comfortable drafting in Italian first and then translating it into English; however, I believe this is a mistake. Many terms and ideas don't translate properly from your mother tongue, and it often shows.
Go ahead and draft it in your best English, and if you have trouble expressing some ideas, make a note in Italian; you can go back and work those into your most colorful English in your final draft.
It turns out that English puts a vast variety of words at your fingertips when writing, and this allows you to be very distinctive while maintaining creativity and genuineness; that's why so many books are now written in English.
Upvotes: 3 <issue_comment>username_4: Not only do I suggest you write everything (research related) in English, but I also suggest you use English in your personal research notes, and even in your thinking process. In this way you skip the unnecessary step of translation and can therefore read, learn, speak and write faster and more easily. In fact, when I started reading English books in my undergraduate years, I realized it was better to skip translation and try to understand everything, and solve problems, in English. Since then, all my practice and my personal notes have been in English. When I am thinking about a problem or a statement, I automatically switch to English and avoid my mother tongue.
Upvotes: 3 <issue_comment>username_5: On the one hand, writing in English is better because the phrases and sentence structures you will end up using will sound more natural to the reader. On the other hand, if you are writing the first draft before having a crystal clear idea of how your argumentation will flow (i.e. if, for you, writing is also a tool for *thinking*), then writing in English might take away from this process because it's difficult.
I would certainly agree with everyone else that using English for the first draft is a good idea. But when I write, I also write an outline of the draft in a separate document. This outline summarizes each paragraph I plan to write, usually in a single sentence, so that it gives me a good overview of the structure the manuscript is going to have. I use it to combine the points I want to make with other findings I think are important to mention (usually I find that these don't match up very well on the first try!), and then I shuffle things around, add and remove items, until I have a story that will flow naturally from the questions I ask to the conclusions I make. And only then do I start writing the draft. And this outline, I would recommend writing in your native tongue. It should be easier to switch to English afterwards, when you begin writing the draft, because you'll already have a clear idea of what you want to say.
Upvotes: 2 |
2013/06/08 | 2,127 | 8,728 | <issue_start>username_0: **TL;DR**: I'm a pure math graduate student who doesn't like research mathematics. Should I continue and get the PhD because I suspect I might like teaching at a 4-year liberal arts college?
---
I am currently in a pure math PhD program at a fairly good university. I just finished my second year there, and after passing qualifying exams have been awarded a Master's.
Ever since I arrived in grad school, I have been fairly dissatisfied. I went to grad school because math in undergrad felt relevant, and I loved the feeling of leaping from logical lily-pad to logical lily-pad en route to proving something. In grad school, though, these feelings have become fewer and farther between. I feel like things have become more mechanical and more like banging my head against a wall. For the most part, I find it very difficult to motivate myself to do my work; I never look forward to getting started in the morning. I have finished required courses and qualifying exams, but the difficulty continues as I do a reading course in preparation for work with an advisor.
Overall, I have realized that research math is not for me. I have quite enjoyed my teaching experiences, which so far consist of leading recitation sessions, tutoring, and the first week of teaching a summer course. Because of the heavy emphasis on "teaching to the test" in secondary education, among other things, I suspect I would enjoy teaching at, say, a liberal arts college more than teaching at a secondary school. However, I feel pretty inexperienced in teaching, and so I don't feel certain by any means about these feelings. This is now the only reason I would want to stay in graduate school. Is this enough reason to continue for the next 3-4 years to the PhD?
According to my advisor, I would be in graduate school another 3-3.5 years for the PhD. I would love to work at a liberal arts institution in the US, but ideally one where the research load is minimal/nonexistent. The impression I got from skimming MathJobs recently was that such positions were relatively rare compared to research-intensive positions. Do you feel like I would be very likely to find such a job if I stayed for the PhD?
Any advice is much appreciated! Thank you, everyone.<issue_comment>username_1: I know of people who have completed PhDs at prestigious institutions (in computer science) just so that they could teach at a liberal arts college. It certainly seems to be one way into that profession, and an admirable profession it is indeed.
Doing a PhD requires a lot of motivation and hard work, even if you are not aiming for a high profile research career. The question you need to ask yourself is "Are the benefits of finishing the PhD worth the effort? Will you be motivated enough to complete if you are not interested in research?"
Consider also doing some pedagogic studies, so that you know the theory of how to teach and, more importantly, how to help students learn.
Upvotes: 4 <issue_comment>username_2: Given the especially difficult employment market in academia (in France, but I guess it is the same everywhere), I always advise: do a PhD for the years of doing research themselves, not for what you expect to gain from the title. As username_1 stresses, doing a PhD requires a lot of motivation and hard work; if you do not enjoy the good times enough, those can be really wasted years. If you think you won't enjoy the years of your PhD, you should seriously consider all the other opportunities you have.
By the way, I think the same applies to postdoc positions.
Upvotes: 4 <issue_comment>username_3: You should know that these days, most 4-year liberal arts colleges in the US expect their tenure-track mathematics faculty to do research.
Colleges want to be able to offer their students the opportunity to be taught by experts who are contributing to their field. There is also increasing interest in getting undergraduates involved in research, which means the faculty have to have research programs to get them involved in.
Of course, there is a wide spectrum of expectations. At the most selective liberal arts colleges, research expectations can approach those of a mid-level research university, demanding a regular output of papers published in good journals. Elsewhere there can be more flexibility, replacing a specific requirement for "research" with the broader term "scholarship"; they might require only occasional publications, and they could be projects with students, or articles about teaching.
But in general, if you want an academic job in mathematics that doesn't require you to do any research at all, you're going to restrict yourself to the least selective tiers of liberal arts colleges, or to non-tenure-track positions (and often liberal arts colleges tend to have relatively few such positions, compared to large universities).
You might have a look at [MathJobs](http://mathjobs.org) to get a sense of what jobs are out there, and what they expect. Note that there are not so many listings in summer, since this hiring cycle is mostly finished; many more will appear in the fall.
Upvotes: 6 [selected_answer]<issue_comment>username_4: Although it is generally a good rule to think that one oughtn't commit to things one doesn't want to do... and the other answers reflect this in several good ways... sometimes there *is* an "entry fee" that is unpleasant to pay.
Yes, it is true that there is an ever-greater pretense that all faculty in colleges and universities "do research", but, as one might imagine, not quite all of this is cutting-edge... In fact, the requirements of completing a PhD at most "good" places are a bit more strenuous than the "research" required at little colleges. In particular, as I gather from substantial anecdotal evidence, it is possible to be much saner/human in "small" situations, about pretension-to-research. True, it may not be wise to be "too honest", as in many professional/human situations.
That is, you might try to view "the PhD" as simply a college teaching license. Certainly if you do *not* have it you'll be at an extreme disadvantage forever... One might view it as a prolonged licensure ordeal?
And, at the same time, it is quite excellent that you have realized so clearly that you don't want to "be a researcher". This is much better than the self-conflicted delusional versions of the story. But what remains is to gain the credentials. "Cred".
This is not necessarily a recommendation to stay in your PhD program, especially since your recoil has been fairly strong (though one doesn't know how to interpret printed words' intensity...) But, sure, no one likes to take "drivers' training", and many other things. But it can be done, routinely.
The last adverb is a significant point: unless you're at an elite place, and unless you are truly severely allergic to "higher math", ... "it's not that hard" to finish the PhD. The fact that you've already done the qualifiers and such shows that it is easily within your power... if you so choose.
So, srsly, the question is about what you want your appearance to be for the rest of your life. Not that being PhD'd makes anyone a better person! But, it adds something to the ol' resume, undeniably.
And, again-at-the-end, being "too honest" about one's disinclinations is not necessarily a good thing.
Good luck, ... in figuring out complicated things.
Upvotes: 4 <issue_comment>username_5: I can't comment on the US, but in Canada there are a number of "lecturer" positions, even at research universities. Lecturers only teach; they don't do any research, and it is not necessary to get a PhD to be a lecturer (though I'm sure it doesn't hurt, and some positions do request a PhD). It seems that these positions are becoming more common, especially because the current Canadian government likes to cut costs, so funding is harder to come by. I would suggest searching for these types of positions, and hopefully that will give you an idea of what is out there, how many positions are available, and how many require PhDs.
Here's [one example](http://www.psychology.uwaterloo.ca/positions/lecturer.html) at my university that does require a PhD (in Psychology, not math, sorry).
EDIT: Having said all that, I don't think it's worth continuing your PhD if you're not enjoying it. If you don't like what you're doing, it's going to be very difficult to put the time and energy into completing your thesis. If the area of research is a problem (you're not interested in your research project, or you simply feel stuck), perhaps you can ask your supervisor for a different project, or switch supervisors, or even switch to another department, program, or university.
Upvotes: 2 |
2013/06/08 | 885 | 3,895 | <issue_start>username_0: I have 2 months where I should write the literature review for my PhD thesis. However, I don't know how to write one.
In general terms, what would be a good way to start the literature review? For instance, how can I organize the reading and divide it into topics?
I would appreciate any helpful ideas, books, or articles.<issue_comment>username_1: Depending on your topic, I recommend that you start by taking the basics from books. After that you can get more specialized information from articles or journals. Do not forget to keep a list of the references that you are obtaining, maybe with a little summary of each one.
Another piece of advice is to start writing about the information you get, and then cluster it by similar subtopics, so you avoid the formation of information islands.
Good luck!
Upvotes: 1 <issue_comment>username_2: Since you don't say what field you are in, this may not work for you - but I'll describe how I do a literature review in the biomedical sciences. I've tried to be general, so it could apply to other fields, but if you're not in science at all it might not work for you.
The first step to writing a literature review is defining your topic. What is the key question/ concept you are trying to explain or summarize? This can be difficult if you are really starting from zero on a topic, and will probably need to be refined as you work on your review. But it's important to define this at least in draft form before you start.
The second step is coming up with search terms. Begin by searching your question/ concept on Google Scholar or another large database - if it's a biomedical topic, Cochrane Reviews is a good place to start. I'm not sure if there are similar review databases for other fields. Look for a few key papers in the area, read those, and look at their reference lists. Also, record their 'key terms' (usually below the abstract) - these will help you define search terms. Pull relevant papers from their reference lists and repeat. Once you've gone through ~ 10 papers you will probably have a good sense of the type of key words that are important for your topic. Use these terms to build an improved literature search.
Third, you need to actually do the literature search. Your field probably has a database of journals - use this and your search terms to identify all relevant papers over whatever time frame you are interested in. You'll probably need to refine as you go, so you don't get swamped with papers. I prefer to use a systematic search strategy, even if I'm not doing a systematic review, since this removes some subjectivity. This means going through all results, and reading any papers whose titles or abstracts suggest they may be relevant to your question.
Fourth, you need to read and take notes. Make sure your notes are indexed by paper, so that you always have the citation of the original paper with whatever facts you note down. I find OneNote is a great way to do this, but there are other tools I'm sure.
For me, once I've read 20 or so papers, I usually have a pretty good idea of what the answer to my question/topic is, and what the nuances are. Then it's just a matter of organizing the notes that I've already taken into a rough outline of what I want to say (remembering to label all the facts/opinions with their citations). After that, you're ready to start writing, and it's not too difficult since you've already got a good sense of what to write as well as a rough draft in the form of notes.
One important caveat is that you should always re-write all your notes in your own words as you go through the outline converting it into a paper. This way, if you occasionally just copied the original author's text when taking notes, you won't end up with their words in your paper (which would be plagiarizing).
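The key invariant in the note-taking steps above — every noted fact carries the citation of the paper it came from — can be sketched as a tiny data structure. The citation keys and note text below are invented for illustration; any tool that preserves this paper-to-note link works:

```python
# Each note is stored with the citation key of its source paper,
# so facts can always be traced back while outlining and writing.
notes = [
    {"cite": "Jones1991", "text": "Public recognition motivates workers."},
    {"cite": "Smith2004", "text": "Monetary compensation matters most."},
    {"cite": "Jones1991", "text": "Field study of 200 factory workers."},
]

def notes_for(cite_key):
    """Return all notes recorded for a single paper."""
    return [n["text"] for n in notes if n["cite"] == cite_key]

print(notes_for("Jones1991"))  # two notes, both traceable to Jones (1991)
```

With notes organized this way, reordering them into an outline never loses the attribution, which is exactly what makes the final write-up easy to cite correctly.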
Upvotes: 5 [selected_answer] |
2013/06/09 | 669 | 2,979 | <issue_start>username_0: In a literature review, we look at recent publications and put what other researchers have said in this context. However, how can I develop my own argument? Should I write it in my own words, or should the argument be based on what others have said?<issue_comment>username_1: The literature review is just that, a review of the published literature - synthesised and analysed. Developing your own arguments should occur after this review, as you then have something to refer back to when developing your stance.
A study technique I use: as I am reviewing the literature for this chapter, I keep a notepad nearby and jot down parts of my argument as I proceed.
Upvotes: 2 <issue_comment>username_2: <NAME>'s answer is very accurate, however, I would add that I think it is possible to interpret what someone wrote in a different way than others have previously interpreted. However, your arguments should not be confused with the other authors' arguments.
Originality in analysis is often welcomed. However, originality in structure should be done very carefully. Common structures exist for a reason. In my field, it is common to switch between the reviewed author's points and your own:
>
> Jones (1991) claimed that the primary motivator for workers was public
> recognition while Smith (2004) believed that workers were more
> concerned with monetary compensation. I will show that both were
> correct by showing that workers really care most about their
> compensation being publicly higher than the average for their
> position.
>
>
>
Upvotes: 4 [selected_answer]<issue_comment>username_3: I would offer a slightly different answer from either of the others, both of which are excellent. Each of these answers and my own are probably (at least somewhat) discipline-specific.
Rarely, in my view, should a literature review be just that - a simple review. Instead, the best structure for papers in my field is an introduction, followed by a single section that provides theory and specific hypotheses derived from that theory, followed by empirics, etc. There is no "literature review" per se.
Instead, literature that is relevant for establishing a problem is cited in the introduction and literature relevant for building new theory, elaborating existing theory, and/or challenging existing evidence/arguments is cited in the course of making one's theoretical argument. From that argument come testable hypotheses, all of this being stated in one section (possibly with subsections, depending on the scope of the paper) between introduction and research design.
In this way, relevant literature is reviewed in the course of constructing arguments, with quotes and references both bringing credibility to your arguments and being used to demonstrate weaknesses in extant work. Thus, your review should be your own argument largely in your own words, with others' work cited and sparingly quoted where it helps you.
Upvotes: 2 |
2013/06/09 | 1,248 | 5,138 | <issue_start>username_0: At my university, we are informed of our yearly raise with a letter mid-summer. There are no scheduled meetings with the chair or anyone else to discuss performance or the raise -- as far as I can tell, we just wait and see what it is. It tends to be a small cost-of-living raise in the range of 3%.
Should a pre-tenure assistant professor actively seek to meet with the chair and ask/negotiate for a larger raise, or is this not done in academia?<issue_comment>username_1: My feeling is that it should not hurt to ask, as long as you ask in the right way.
Negotiations are always tricky but having an open conversation with 'the boss' should always be a choice.
You might want to meet the chair with another issue and bring up the salary question as an additional point, as opposed to the main reason for the meeting. Also, approaching it as a question: You're looking to better understand school norms and who better than the chair to explain them to you.
Upvotes: 3 <issue_comment>username_2: >
> Should a pre-tenure assistant professor actively seek to meet with the chair
>
>
>
**Yes!**
--------
You should *insist* on meeting with your department chair at least once a year, if not once a semester, to discuss strengths and weaknesses in your evolving tenure case. Make no mistake—your tenure case started evolving the moment you signed your offer letter. If you are doing the right things to get tenure, you need to hear that. If you are missing something important, you need to hear that as well. Symmetrically, if you think things are going well, your chair needs to hear that, and if you think things are going badly, especially if something in the department is proving to be a burden, your chair needs to hear that, too.
>
> and ask/negotiate for a larger raise
>
>
>
Oh, I suppose, if you have good pitch. You're not going to get a bigger raise just because you ask for it.
Upvotes: 4 <issue_comment>username_3: As in most fields, you only have so many negotiating "chips". You can spend these chips on whatever you want, but some things cost more chips than others. For example, a raise probably requires your chair to get approval from the dean and is therefore "expensive". Asking for some extra research/travel money is "cheap", especially if it is at the end of the fiscal year and there is extra money in the budget, since the chair likely controls that pot of money. Teaching load is often controlled by the chair, but reducing it requires someone else to pick up the slack, so it does cost some "chips". Basically, you have to decide what you want and whether you are willing to spend the chips to get it. What makes it difficult is knowing exactly how many chips something will cost.
Upvotes: 4 <issue_comment>username_4: It only makes real sense when you have a competing offer, which means you have been on the market this year, which means you are considering leaving, which means that the chair either knows this or will get to know this when you come to talk to them. It is unlikely that the chair will be excited to give a raise to the person who is going to leave, anyway. On top of that, as <NAME> put it, negotiating a salary increase means going at least one level up to ask for the extra money for the chair, and there should be a very good reason for this. Even if you just got the NSF Career award, the chair will just say, "Great, you now have the research money to support yourself -- congratulations! No salary increase, though".
Most likely, if you are going to complain about your pay being too low, you will hear a story about salary inversion, i.e., the tenured faculty in your department making less than you do, which is only justified a small fraction of the time for the deadwood faculty. So the priority will be to raise the non-deadwood inverted salaries -- these guys are here to stay -- and then address your concerns -- you are not here to stay yet, sorry -- subject to any remaining money. Having heard you complaining is not likely to make the chair want you to stay, either.
In a very rare event, you can come up with a scheme in which you bring the department more money, and then your request to see an extra cut of the pie may be legitimate. This could mean creating a new program/degree/track that is going to be the nation's first program in interdisciplinary bubble sort or bean scouting or some other BS, but then it means that you are committing to a teaching track rather than a research track. If your research is stellar, and you want to diversify into teaching, that's probably OK; but most of the time proposals like that from junior faculty will look weird and awkward.
As username_2 said, you MUST talk to your chair once in a while, and the annual review time is one of the best opportunities to do so. Your case was reviewed by the faculty not so long ago, so the memories of your file are reasonably fresh in the chair's head; it would be easier for them to connect the dots than it would be say in September when the new semester starts, with new students and new courses and all other hectic stuff happening in the department.
Upvotes: 4 |
2013/06/10 | 979 | 3,853 | <issue_start>username_0: Is it illegal to share students' grades with somebody else (e.g. their parents) without their consent in the US? What are the most common penalties for such a violation of privacy? Is jail time possible as a punishment?<issue_comment>username_1: In the US, the Family Educational Rights and Privacy Act (FERPA) protects the privacy of students. Although FERPA is a law, it is enforced as a stipulation by the Department of Education that universities must obey in order to receive funding. As such, I do not believe that violations are classed as a criminal offense and hence cannot lead to jail time. Universities which violate FERPA can lose their funding and likely have grounds to dismiss employees who violate the act. The university might even be able to seek damages against those individuals, but again, not jail time.
I would suggest that you not violate FERPA. If you intend to, or have done so (accidentally or otherwise), I suggest that you seek legal advice.
Upvotes: 5 <issue_comment>username_2: A number of people have mentioned [FERPA](https://en.wikipedia.org/wiki/FERPA). Looking at [the text of FERPA](http://www.law.cornell.edu/uscode/text/20/1232g) and at [this University web page](http://www.registrar.clemson.edu/FERPA/violation.htm) suggests what some others have said: violations of FERPA can result, ultimately, in withholding funding from the university but the law does not list criminal sanctions for either individuals or institutions for violation. That is also my understanding.
That said, there are other privacy laws or other kinds of laws that might be violated by disclosure of educational records. There are a variety of other state, local, and federal laws, plus plenty of common-law tort law, that could take effect. And besides, people sue for all kinds of things, including things that aren't even in violation of a law.
To be clear: I am not a lawyer nor a legal expert and this is not legal advice. But, as a non-lawyer that likes to believe that world has certain common-sense limits, it seems *insane* to suggest that telling a parent a grade could result in jail time. If you're worried and need a "real" answer, you should find a lawyer and ask.
Upvotes: 3 <issue_comment>username_3: This is not legal advice... If only people giving mathematical or dietary advice without credentials could be sued for failing to make a similar disclaimer! :)
There are two parts to this issue. First, is it ok to tell people students' grades? In the U.S., almost entirely "no", if the student is 18 or older. FERPA. It doesn't matter whether or not the student's parents are paying the tuition, the student's grades are privileged personal information.
It *is* acceptable to disclose student grades for (e.g.) intra-math-departmental functions, such as advising, admission committee work, and other "privileged" uses.
A traditional practice that is no longer ok is posting grades on instructors' doors, for example.
For "old" people, the idea that one is not in fact legally entitled to know the grades of the student whose tuition you're paying will seem strange. Indeed, decades ago, the grades were sent to the parents directly, in paper mail.
But, now, 18-or-over people are essentially legal adults in the U.S., and their school records (and medical records) are not automatically open to their parents.
Thus, despite intuition to the contrary, simply do not give grades to parents, ... without seeking legal advice about extenuating circumstances, such as emergencies.
Edit: but, then, "jail time"? Who knows? But maybe monetary damages if someone sues you for violation of their privacy rights. Apart from the risk of this, if we think it through, maybe kids' grades (if they're "adults") should not be divulged to anyone... So don't do it?
Upvotes: 3 |
2013/06/12 | 2,593 | 10,427 | <issue_start>username_0: I am in the second year of a research degree, and with one more year I could finish my PhD. However, I got a very good job offer at a leading company in the field, which would mean I'd have to finish my degree now (which would then be an MPhil) and start working.
I am hard pressed to make a decision, but I find it very hard to judge: what are the benefits of finishing a PhD over taking a job at a good company? Often people do a PhD to get into such a company, but what if that is not the case? Also, I feel that at this point, a job would be more challenging than a third year of a PhD, which would just be a continuation of the same thing I've been doing for two years now.
On the other hand, people tell me it would be silly to quit now that I am so close, because "a PhD is worth so much more." I would very much like some extra input on this: is a PhD 'worth more' in the end? And what are other things that I'd have to take into account to make an informed decision?
I'm only 23 years old, so I could even do a PhD later, but I'd have to start from the beginning again. Would it be an advantage or a disadvantage to have worked for a couple of years, when applying for a PhD again later?<issue_comment>username_1: Well, firstly, congratulations on the job offer - you must have seriously impressed them to be offered a position before the completion of the PhD.
So, we come to the crux of the issue. The main thing is (and you have no doubt heard this before) that the decision is up to you, and by that, I mean totally up to you. But, to be a bit more helpful, let's look at the information you provided.
You are young, and in later years, should you choose to pursue the PhD, it *could* be likely that you will be able to have some credits towards a future PhD (this would depend on the college). If I understand you correctly, you'll be able to get an MPhil? That will help with the first point (credits towards a PhD) and is certainly recognition of the study you have done so far.
So a couple of key questions you have to ask yourself are:
* Which of the job or the PhD will lead you to your goals quicker and in a more enriching way?
* How much would working at this company mean to you?
* and related to the question above, will that opportunity resurface upon completion of your PhD?
Another option, once again depending on the policies of your university, is to defer the final year of your PhD or complete it part time while you work (which is what I am doing).
Upvotes: 4 <issue_comment>username_2: I would say that it depends a LOT on the field you work in.
I can tell you my personal experience in my field: programming (although it was with the Masters degree, not with a PhD).
I started working during college, and by the time I finished it, I already had a bit of experience, and was able to get good jobs, although this kept me from finishing the Masters. So far, I had no issues at any interviews with this, I got some certifications and made some specialization courses in my free time, I learned what I wanted, and this helped me a lot.
I also have a few years of experience ahead of my former colleagues that didn't do what I did and finished their studies, but I don't regret my decision.
On the other hand, some of my colleagues managed to get great internships after finishing their studies at very big, picky companies, where I could potentially go in a year or so, but not as an intern (and a good job).
There are a few things you need to ask yourself:
* **Are you doing the PhD to get a job similar to what you have been offered, or to get a much better job in a few more years, and how big is that difference between what you have been offered now, and what you can find after the PhD ?**
* How important is the experience in your field of work ? In more theoretical fields, your PhD can greatly outweigh the experience you can get.
* Do you think that money will ever be an issue (since doing the PhD probably doesn't pay), and unfortunately you have to think of this as well.
* If you don't do the PhD now - how likely are you to resume it in let's say 5 years.
* How well is the university rated in your country ? (check <http://www.arwu.org/> )
Also, you can go ask your professor at the university and see what he has to say, he probably saw both scenarios (people who quit to get jobs, and people who didn't) - and he/she will probably have some advice for you.
This is as much as anyone can say, after all, it's your life, and your decisions.
PS: I'm 25, so not much older than you :)
Upvotes: 4 [selected_answer]<issue_comment>username_3: Is your goal to go into industry after finishing your PhD? If the answer is "yes" and the current job is perfect, my advice will usually be to go. If you goal is to go into academia and to become a professor, the job is a distraction and you should pass on it and finish the PhD.
Of course, things vary by field but my general advice, which I believe is true in most (but not all) fields, is that you need a PhD if you want to be a professor but that it is not critical for doing most other jobs. And because PhDs take up time that you could be sending getting work experience and promotions, it may even be harmful for these other pursuits.
Potential students make the wrong comparison when they judge whether or not a PhD is helpful to their career goals. It is common for people to compare themselves without a PhD *now* to themselves with a PhD *now*. Obviously, the PhD will provide a leg up.
But that's not the accurate comparison because there are opportunity costs to doing a doctorate. The true comparison is between (a) yourself with 4-6 years of experience in industry and no PhD and (b) you with 4-6 years work in a PhD program, no industry experience, and a PhD. In *most* fields, (a) is better, or at least not usually worse, than (b).
Doing a PhD includes a lot of effort that goes into navigating and succeeding in an academic environment. If you are going to be a professor, this is important training. If you are not, you are likely better off doing the things you want to end up doing somewhere more appropriate.
Upvotes: 3 <issue_comment>username_4: The question is basically, Academia or industry?
I don't think doing both is a good idea at this moment.
Given that you are 23 years old, you have probably never had a real industry job before. You will have to devote yourself to the new job if you accept the offer. You will probably have to work 50+ hours a week in the first year. It's very hard, if not impossible, to do both at the same time.
You need to ask yourself a question: what was your reason to study for a PhD in the first place? Were you interested in research? Or did you want a PhD in order to get a better industry job?
If your ultimate goal was to have a good industry job, then you just got one. Grab it.
If you wanted to do research, I can tell you, as a retiree from industry, that you are not going to have much freedom to do the research you want to do. You'll have to do whatever your employer wants you to do. You'll become a money maker.
So, we are back to the basic question. What was your career plan? If you did not know the answer when you started the PhD program, now is the time to make that decision. If you still don't know the answer after pondering for a while, then you need to think about <NAME>'s suggestion, that is, taking a leave of absence from Academia. This is not truly a good idea, just a temporary solution: it delays your decision. Your attitude will be like going there to test the waters. Chances are, you'll not devote yourself to your new job, and I am not too sure you'll succeed at the industry job that way.
My suggestion for you: figure out where you want to be 10 years from now. If the answer is Academia, continue your PhD. If it's industry, take the offer. You don't need a PhD to be a successful business person, after all.
Upvotes: 2 <issue_comment>username_5: I am in my last year of a PhD program but decided to take up a job now. To be clear and frank about my view of a doctoral degree, I never enrolled in the PhD program to have "Dr." in front of my name. I enrolled because I liked the project I would be working on and the skills I would learn from it. It was also my main opportunity to gain new skills and to test my research interests and abilities.
I always wanted to work in industry; I worked in industry for one year before my MSc. Now I have the opportunity to start working at the current market rate with my current work experience. I will have no regrets if I can't complete my PhD while working at the new job. Like most PhD students in this situation, I worried about what others would say and felt the guilt of being caught in the middle. But for something new to start, something else must end. If you think the opportunity is great, then go for it. You're the one who will live with your decision, so don't worry about what others are going to say. If your decision makes you successful in the future, the same people who ridiculed you will come to you and ask for your help.
I am lucky because my supervisor is realistic: in his opinion, it is up to me whether I get the PhD or not, since working a normal job while completing the remaining PhD work is a tough task. So he gave me full freedom to choose, as it is my life and my career, and I will decide which direction they take.
I am considering taking up a job before finishing the PhD not just because of the job offer. I have a family situation in which I need a job soon, and I think this is the right choice for me and my family. If, at the end of the journey, I get the PhD, that's all well and good, but only if the knowledge and skills I learned stay with me always.
Regarding whether a PhD will help you get a job in industry: it's tricky, and it depends on your PhD research area, as not every industry wants PhD graduates. From my interview experience, I realized that very few companies prefer PhD students over someone with a BSc or MSc. Companies worry about how a student who has spent 3 years on research work will fit into the industrial world. If the fit is poor, the company will lose all its investment in the PhD graduate when he resigns because he is unable to adjust to the industrial world.
My opinion: focus on what your rational brain decides, and also listen to what your heart wants. When both agree, go for it.
Upvotes: 2 |
2013/06/12 | 981 | 3,624 | <issue_start>username_0: In each research paper, there are a lot of things that I want to highlight for later use such as definitions, explanations and concepts... While most of them focus on the topic of the paper, there are some relating to a different or broader topics, e.g. a paper about investigating performance of a specific system may introduce different benchmarks and metrics for performance evaluation and explain why those approaches are applicable for this specific situation.
Normally, I just highlight all of them, put some notes directly into the paper, or use [Evernote](http://evernote.com/). However, when I want to look for all highlights and notes about one specific topic, I find it difficult because they are scattered across different papers and documents. So, are there any tools or techniques to effectively highlight important points and group them by topic while reading research papers?<issue_comment>username_1: This probably isn't an ideal solution, but I still find it to be the most effective in the long run:
I simply keep a log of what I read, and I specifically do this using tex. That way, you will have a searchable document for any words that you may be looking for in your notes. You can also use the `makeidx` package to add an index to your text document, which lets you "tag" sections of your documents with various keywords. I also make sure to cite what I read, even if it is for my own purposes. This is especially useful when you are writing a paper, since most of your references will be ready in `bibtex` format.
It took me a while to get used to, but I find it very nice to have a well organized research log.
Since requested in the comments, given below is what the log looks like. I don't want it to become a full latex document, but it should be sufficient enough to give you an idea. Note the index markers, which are basically "tags" that don't show up in the text.
```
% the preamble needs \usepackage{makeidx} and \makeindex for the index to work
\section{Paper \#1 Name, Authors, Date, \cite{...}}
My summary of the motivation and findings of the paper, or whatever I find interesting/useful.
May be as short as a few sentences or as long as a page, depending on how relevant it is.
\index{an important word}
\section{Paper \#2 Name, Authors, Date, \cite{...}}
Same thing here.. \index{another important word or two}
\section{Paper \#3 Name, Authors, Date, \cite{...}}
and so on..
\printindex
```
Then you can look up your page numbers in the index, which is included at the end of the document. For more information about the package, see [CTAN](https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=2&cad=rja&ved=0CDYQFjAB&url=http://www.ctan.org/pkg/makeidx&ei=rkS_UfmyE9LB4AOUqoDQBw&usg=AFQjCNFSKosvPcHUnkeA1dsO9RiBlBZ5QQ&bvm=bv.47883778,d.dmg) and [Wiki](http://en.wikibooks.org/wiki/LaTeX/Indexing).
Upvotes: 6 [selected_answer]<issue_comment>username_2: I currently use the software called [Papers](http://papersapp.com). You can use tags to arrange PDFs according to topic. Another alternative is to use "collections" since a single PDF can be placed in more than one collection.
Next, filter PDFs that match a certain "tag"/"keyword". In the example below, it's "hydrogen embrittlement":
[](https://i.stack.imgur.com/ZWhe1.png)
Lastly, for each PDF, you can get a list of all highlighted text (summarised in the right column) as follows:
[](https://i.stack.imgur.com/grUvp.jpg)
The page numbers in the right column are clickable, so it's easy to jump to the relevant place in the article.
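Whatever tool is used, the underlying operation is the same: group every highlight under its tag, regardless of which paper it came from. A rough sketch of that idea — the file names, tags, and text snippets are hypothetical, standing in for whatever a reference manager could export:

```python
from collections import defaultdict

# Highlights as (paper, tag, snippet) records.
highlights = [
    ("paper_A.pdf", "performance metrics", "Throughput benchmark results..."),
    ("paper_B.pdf", "performance metrics", "Latency under load..."),
    ("paper_B.pdf", "benchmarks", "Description of the test suite..."),
]

# Invert the per-paper view into a per-topic view.
by_tag = defaultdict(list)
for paper, tag, snippet in highlights:
    by_tag[tag].append((paper, snippet))

# All highlights on one topic, gathered across every paper at once.
for paper, snippet in by_tag["performance metrics"]:
    print(paper, "->", snippet)
```

This inversion — from "notes within a paper" to "papers within a topic" — is exactly what makes topic-based review possible once the highlights are scattered across many documents.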
Upvotes: 2 |
2013/06/12 | 581 | 2,400 | <issue_start>username_0: Well, I don't know how to confirm to myself that the words I'm writing are my own words. Now, maybe those words are based on what I have read recently, because if not there won't be anything to say. Should I, for example, just comment on what other researchers say, paraphrasing what they wrote, or can I also write as if I were speaking (in words based on what I have read)?<issue_comment>username_1: I don't know what the purpose of your writing is, so the answer really depends on that. It sounds like you're writing some kind of review of prior work (either as a survey or as part of a paper)? In that case, the goal here is not to regurgitate (in your words or via paraphrase) what others say. Rather, you should be reading what they say and thinking about it (and seeing if you're convinced by it).
Only after that should you even attempt to describe the work. And when you do so, put all reference material away. If you can't describe someone else's work without referring to it, then you don't really understand it yet. In this way you'll ensure that you use your own ideas/thoughts to express what's gone before.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I second @username_1's remarks, but/and the situation is truly more complicated, as your explanation of your question correctly indicates.
For example, complicated, sophisticated things are often so easy to mis-represent that it seems as though there's almost a unique path, a unique narrative, that is correct. And it may be non-trivial to learn what this path is. Thus, in effect, the phrasing of a high authority is not only "burned into one's retinas", but, also, seems the only correct thing to say.
So, first, do *not* try to "say something else", just for the sake of avoiding "quoting", because that seeming paraphrase may be wrong, or deficient, or ...
But, at the same time, extended passages should not be quoted, unless put in quotation marks. It's best to internalize ideas well enough to give "the standard definition" and such things, even if that ends up sounding very similar to other sources. Similarity is inevitable in many cases.
As in many scenarios, honesty is a clarifying criterion. That is, is what you write coming out of your own head, even if it resembles other sources, or are you quite literally copying? The latter is not so good.
Upvotes: 3 |
2013/06/13 | 7,863 | 29,262 | <issue_start>username_0: Some sexism is obvious, blatant, and/or deliberate. Fortunately, my understanding is that this kind of sexism is *mostly* a thing of the past.
However, female colleagues have told me that more "benign", but still harmful, sexism is still very prevalent in academia. In particular, I've heard that implicit assumptions that women are less capable, or more inclined to be teachers than researchers, are distressingly common. As a mathematician, the results are distressingly obvious: most math departments are overwhelmingly male.
What are some ways in which sexism is unknowingly perpetrated by well-meaning people, in academia in particular? And how can those of us who are well-meaning make sure to avoid doing so?<issue_comment>username_1: [NCWIT (National Center for Women & Information Technology)](http://www.ncwit.org) has an article specifically about [stereotype threat in computing](http://www.ncwit.org/resources/how-do-stereotype-threats-affect-retention-better-approaches-well-intentioned-harmful).
While that resource is student-oriented, it illustrates some examples of how even well-intentioned comments are indications of stereotyping.
Upvotes: 3 <issue_comment>username_2: >
> And how can those of us who are well-meaning make sure to avoid doing so?
>
>
>
**If you see something, say something.**
You're correct that blatant sexism is mostly gone. I'm a female computer engineer, and I have never been told that I can't do the job because I'm female. What remains is "death by a thousand paper cuts" sexism. For example, I was recently at a conference, and the small group I was in was discussing one of the presentations. An older professor in the group remarked that the presentation was particularly good because the presenter was beautiful and wearing tight clothes. Nobody called him on it: after all, he was trying to give her a compliment, mentioning it would be rude, he's older and things were different then, and so on. And it's hard to blame them, because if that were the only sexist comment I heard that day, it wouldn't be such a big deal. But shortly after that, someone jokingly said I should marry my PhD supervisor. Then someone said a new tool is so user friendly that "even your mother could use it".
Each of these incidents on its own is fairly mild: most people probably don't even realize they're saying something exclusionary. Unfortunately, if you're a woman, each one reminds you that you *don't belong*, so the cumulative effect is very discouraging. However, if you correct every person who says something mildly sexist, you get a reputation as a bra-burning feminazi. As a result, most women keep quiet, or only point out the particularly insensitive comments. It would make an enormous difference if *other men* would call out the ones who make inappropriate comments. You don't have to be dramatic or rude about it, but a quiet "I don't think that's appropriate" would take a lot of the burden off women. And naturally, you can try to be more aware of the things *you* say to make sure you aren't accidentally discouraging women. A lot of sexist ideas are so deeply ingrained that otherwise fair and open minded people will say them without a second thought (like the "even your mother" comment).
I don't want to downplay the other valuable things you can do, such as volunteering at math/science camps for girls, but if enough men followed this rule -- even part time -- 80% of the sexism I encounter would disappear. To modify <NAME>'s quote a bit,
>
> All that is necessary for sexism to triumph is for good men to do nothing.
>
>
>
June 27, 2014: After a year of comment trolls, I think it's time to add a point about men's rights activists. The key mistake that MRAs make is treating rights as a zero-sum game. The fact is, gender stereotypes hurt everyone. They hurt women who want to be engineers; they hurt men who want to be nurses. They hurt women who don't want to have children; they hurt men who want to be stay-at-home dads.
Sometimes, these stereotypes result in a disadvantage for men. For example, username_4 points out that American men have to register with the selective service system to get student loans, while American women do not. MRAs point to these cases and try to claim that feminism oppresses men. This is, at best, willfully ignorant: all these inequalities come from the same outdated beliefs and stereotypes, which feminism opposes by definition. The fact that these cases exist just indicates that gender equality is not a solved problem. (Women in the American military were not *allowed* to be in units tasked with direct combat until 2013.)
My answer discusses a woman's point of view because that's what I know and that's what the question asked about. I would be thrilled if a man in a predominantly female field added an answer discussing his experience.
Upvotes: 7 <issue_comment>username_3: >
> What are some ways in which sexism is unknowingly perpetrated by
> well-meaning people, in academia in particular?
>
>
>
I'm not qualified to answer this question myself, but I'll point you to some excellent blogs that detail this, and a tumblr that explores the more general idea of 'mansplaining' (not limited to academia alone)
* [Female Science Professor](http://science-professor.blogspot.com/)
* [The Accidental Mathematician](http://ilaba.wordpress.com/) (in particular, her post about [why she doesn't participate in Mathoverflow](http://ilaba.wordpress.com/2011/03/28/why-im-not-on-mathoverflow/) is quite interesting)
* [Academic Men Explain Things to Me](http://mansplained.tumblr.com/)
Upvotes: 5 <issue_comment>username_4: Sexism can get perpetuated unknowingly in all sorts of ways via the law. I suggest studying the book Legalizing Misandry by <NAME> and <NAME> to see how this can happen. The law also has all sorts of effects on what happens on college campuses and how sexism either gets continued or abated.
We know that at present there exist programs for women to get ahead in academia because of their gender, but extremely few, if any, comparable programs for men. Have you ever heard of a scholarship program that specifically targets boys in the humanities, where they are a minority now and have historically been a minority as well?
[There exist](http://en.wikipedia.org/wiki/Men%27s_colleges_in_the_United_States) only three non-religious male colleges in the U. S. [There exist](http://en.wikipedia.org/wiki/Women%27s_colleges_in_the_United_States) many more women's colleges.
There exist women's studies programs, but male studies programs have trouble getting a foothold in academia. Ryerson [makes](http://theeyeopener.com/2013/03/rsu-rejects-mens-group-on-campus/) for an interesting example. Sexism, in the form of misandry, has popped up before when a men's [group](http://fullcomment.nationalpost.com/2012/05/20/robyn-urback-on-shocking-anti-male-hatred-on-the-sfu-campus/) formed.
Women [make up](http://www.prb.org/Publications/Articles/2011/gender-gap-in-education.aspx) the majority of graduates in the U. S. in terms of high school degrees, associates degrees, bachelors degrees, masters degrees, and doctoral degrees. Trends also [come](http://i0.wp.com/www.avoiceformalestudents.com/wp-content/uploads/2013/09/Four-Graduation-Rates-Degrees-Associates-Bachelors-Masters-Doctorate-by-Sex-and-Percentage-United-States.jpg) as revealing.
Also, some campuses have a standard of "preponderance of the evidence" for rape cases, which in effect eliminates due process mostly for men, but not for women, and such a standard thus effectively consists of a sexist policy against men. Some other policies can unintentionally end up sexist against men as a study of Daphne Patai's Heterophobia makes clear.
Male sports teams getting cut is another way in which a policy has ended up [sexist](http://www.nytimes.com/2011/05/02/sports/02gender.html?_r=1&) [unknowingly](https://www.youtube.com/watch?v=GUvuLuLarr0).
If you want to avoid sexism in academia and elsewhere you need to care about men in general just as much as you care about women in general.
Sexism can also be perpetrated through policies that require different standards for people of different sexes.
In order for a male to get student loans in the United States he has to register with the selective service system (which is still a real entity). No female has such a requirement placed upon her in order to obtain such a loan.
Sexism can get unknowingly perpetrated by believing that an educational equity for one sex can qualify as sufficient for both sexes. The [Women's Educational Equity Act](http://www2.ed.gov/programs/equity/index.html) is one example of this.
Lifestyle, knowledge of one's educational opportunities, and health in general can all affect academic achievement as well as one's ability to achieve academically. I can't speak to the extent of things here, but one person had the [following](http://www.patheos.com/blogs/rogereolson/2013/02/discrimination-against-boys-in-education-and-elsewhere/) to say:
"As I have said here before, I have worked in higher education for thirty-one years and have never seen a poster or service announcement on any campus aimed at promoting positive lifestyles, health, educational opportunities, etc., for boys and young men. I have seen numerous ones for girls and young women."
Sexism can also get perpetrated in academia via "equity hiring" as <NAME> [makes](http://pjmedia.com/blog/academic-hiring-and-the-diversity-mandate/) clear...
"Next came the creation of a shortlist of three or four candidates for interview; some members of the department were keen to stack the list with members of the diversity groups. To this end, there was much sophistry about why a (white) male candidate’s book with a prestigious university press was really no better than — was actually perhaps a bit inferior to — a female candidate’s single article with an academic journal of no repute; or about why a (white) male candidate’s expertise in highly competitive Shakespeare studies was no better than — was actually far less original than — a female candidate’s untested, largely speculative work on an obscure seventeenth-century woman playwright. Thus were well-qualified white men kept out of the competition. Moments of levity occasionally occurred when we were forced into elaborate interpretative dances to determine if a male candidate might be black or Asian or gay, though usually the savvy candidate made that clear in his cover letter.
At the hiring stage, there was the same special pleading. Poor presentations by women candidates were praised as “provocatively unorthodox” or “strategically unconventional” while polished ones by men were criticized as “safe” or “unoriginal.” Women’s mistakes could be overlooked or seen as strengths (“I like that she was courageous enough to present on material that she is still working through”) while men’s mistakes were definitive (“I’m shocked that he could be finishing a PhD and still not know that [minor detail”]). One male candidate who had given the best demonstration class I’d ever seen was criticized by our leading feminist professor — presumably because she could find no other faults — for having never visited England to do archival work, a criticism the poverty-conscious lady would almost certainly never have made of a struggling single-mother candidate. That a man might have life circumstances preventing him from travel seemed not to have occurred to her."
That hiring practices may, more often than not, favor female candidates over male candidates is further suggested by [this](http://www.pnas.org/content/early/2015/04/08/1418878112) experimental research.
The number of answers here which only address sexism against women, without also addressing sexism against men, makes it very clear that you can NOT eliminate or reduce sexism by focusing exclusively on the needs or interests of just one sex. In fact, doing so is a way to perpetuate sexism.
This [document](http://scholarlycommons.law.wlu.edu/cgi/viewcontent.cgi?article=1381&context=crsj), starting at about page 532, suggests things go even deeper than this answer has merely outlined.
The University of York recently withdrew its intention to mark [International Men's Day](http://www.theguardian.com/education/2015/nov/17/row-after-university-of-york-cancels-international-mens-day-event).
The key to avoiding sexism lies in fully understanding the plight of all sexes and acting accordingly.
And no discussion of sexism in academia is complete without at least some indication of the issues of [trans-people](http://www.nytimes.com/2014/10/19/magazine/when-women-become-men-at-wellesley-college.html?hp&action=click&pgtype=Homepage&version=HpSum&module=second-column-region®ion=top-news&WT.nav=top-news&_r=0) in academia.
Upvotes: 3 <issue_comment>username_5: This is shaky ground, as a bizarre negative rating of [username_4's answer](https://academia.stackexchange.com/a/10602/739) shows. An entirely positive (in the sense of only trying to come up with a meaningful psychological theory; an economics term here) [speech by <NAME>](http://web.archive.org/web/20080130023006/http://www.president.harvard.edu/speeches/2005/nber.html), then President of Harvard, at an NBER conference that promoted gender equality, cost him his presidency... not because he was biased, but simply because the bloody journalists highlighting the event had no clue about math. His argument was very simple: if we look at IQ curves, we see that females have an average IQ about the same as or higher than that of males, and about 20% lower standard deviation. Mother Nature plays conservatively with females while experimenting on males, throwing their abilities around. Whichever of these experimental traits the females find attractive in males are worth retaining through natural selection. So if we look at IQ>80 or IQ>100, we see more females; but in the top 5% of the IQ distribution (115 or 120-ish on the IQ scale), the ratio is roughly 2 males : 1 female. University professors are likely to be even higher on the IQ scale, probably circa 140+, at which point the ratio may be 5:1 (although of course the quality of the normal approximation in the tails becomes VERY questionable). Other examples Summers gave were the underrepresentation of Jews in farming and agriculture, and of whites in the National Basketball Association (which may have to do with biological differences in genetic makeup for height, as well as how proteins are processed and muscles are built, between races).
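The tail-ratio arithmetic in that argument is easy to check numerically. Below is a minimal Python sketch; the parameters (a common mean of 100, a male SD of 15, and a female SD 20% smaller) are illustrative assumptions chosen to match the "same mean, about 20% lower standard deviation" premise, not figures taken from the speech itself.

```python
from statistics import NormalDist

# Illustrative assumptions only (not figures from the speech):
# both distributions share mean 100; the female SD is 20% smaller.
male = NormalDist(mu=100, sigma=15)
female = NormalDist(mu=100, sigma=12)

def tail_ratio(threshold):
    """Males per female above the given IQ threshold."""
    return (1 - male.cdf(threshold)) / (1 - female.cdf(threshold))

for t in (115, 130, 140):
    print(f"IQ > {t}: about {tail_ratio(t):.1f} males per female")
```

Even a modest difference in spread makes the ratio grow quickly in the upper tail (roughly 1.5:1 above 115 and well past 3:1 above 130 with these assumed numbers), though, as the answer notes, the normal approximation becomes questionable that far out.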
In all likelihood, conditional on having received a Ph.D. (probability theory term here), there is little difference between males and females in ability to produce meaningful research. Success in academia is dictated by [other personal traits](http://www.ncbi.nlm.nih.gov/pubmed/19070437). They are [correlated with gender](http://scholar.google.com/scholar?q=gender+and+personality+traits), although again conditioning on high IQ/having finished a Ph.D. may change that correlation structure a lot.
Economists argue for distinguishing between the equality of outcomes and equality of opportunities. Do Nordic countries show greater equality of income because they redistribute more, or because they provide better equality of opportunities? [Read here](http://onlinelibrary.wiley.com/doi/10.1111/j.1475-4991.2008.00289.x/abstract). Is stunning income inequality in Brazil the result of discrimination, differential "circumstances", or what? [Read here](http://papers.ssrn.com/sol3/papers.cfm?abstract_id=497242). Academia, as any other walk of life, should strive to create equality of opportunities; equality of outcomes may or may not follow, depending on whether the individual traits are correlated with whatever we are trying to equate (gender, hair color, height, language spoken at home, etc.). Forcing equality of outcomes will aggravate those who have benign better abilities and make people of lesser abilities lose incentives to strive to do better. (Been there in the Soviet Union, done that, trust me.)
If one claims that "women are better teachers than researchers", just ask: "Hm. That's an interesting observation. We are both scientists; can you give me some references to any peer-reviewed literature on this?" (note that I gave mine above :) ). If they can't, that's a biased accusation you can take to your superiors (chair, dean, university president). Just like drivers found guilty of driving under the influence, who must undergo humiliating driving clinics, whoever makes such claims should be made to go to the literature at the crossroads of psychology, sociology, gender studies and labor markets to dig into the issue -- who cares if they are good at math; if they have a big mouth, they should learn to control it. Join your local committee on women in faculty, or whatever the name of that could be in your university, to make sure you know the ways that the university can protect you against sexism, and make good use of what you learn there.
Upvotes: 4 <issue_comment>username_6: One of the first things to do is to realize how we are biased, even when we don't know it. See in particular [this reference](http://www.pnas.org/content/early/2012/09/14/1211286109); the abstract alone is informative:
>
> Despite efforts to recruit and retain more women, a stark gender
> disparity persists within academic science. Abundant research has
> demonstrated gender bias in many demographic groups, but has yet to
> experimentally investigate whether science faculty exhibit a bias
> against female students that could contribute to the gender disparity
> in academic science. In a randomized double-blind study (n = 127),
> science faculty from research-intensive universities rated the
> application materials of a student—who was randomly assigned either a
> male or female name—for a laboratory manager position. Faculty
> participants rated the male applicant as significantly more competent
> and hireable than the (identical) female applicant. These participants
> also selected a higher starting salary and offered more career
> mentoring to the male applicant. The gender of the faculty
> participants did not affect responses, such that female and male
> faculty were equally likely to exhibit bias against the female
> student. Mediation analyses indicated that the female student was less
> likely to be hired because she was viewed as less competent. We also
> assessed faculty participants’ preexisting subtle bias against women
> using a standard instrument and found that preexisting subtle bias
> against women played a moderating role, such that subtle bias against
> women was associated with less support for the female student, but was
> unrelated to reactions to the male student. These results suggest that
> interventions addressing faculty gender bias might advance the goal of
> increasing the participation of women in science.
>
>
>
Moss-Racusin et al. (2012). "Science faculty's subtle gender biases favor male students." PNAS 109(41): 16474–16479. Published ahead of print September 17, 2012. doi:10.1073/pnas.1211286109
Upvotes: 4 <issue_comment>username_7: One issue I've often heard cited involves having a family.
For students on a traditional timeline in academia, the years from age 22 to 36 or so are critical: you go to grad school, get a PhD, perhaps do a postdoc or two, search for and land a permanent job, and finally get tenure. You have to be constantly advancing and publishing your research, traveling to conferences and seminars, and generally impressing the community in your field. Odds are you'll have to make at least one long-distance or international move, perhaps several. Any delay or gap during this time could permanently derail your career.
Coincidentally, the years from age 22 to 36 are also the best time (biologically) for a woman to have children (assuming she wants to do so, and most do). And bearing and raising a child can certainly put a major crimp in your ability to keep doing the things listed above.
It's pretty clear that this system was designed for men: men with wives who didn't work and could take care of children full-time. It's very hard on anyone who wants to be an involved parent, and doubly so for the woman: pregnancy, childbirth, breastfeeding, and so on can't very well be delegated or shared!
Universities do tend to have standard-to-generous maternity and family leave policies, which certainly help with this in the immediate term. What's less clear is whether (possibly male or male-dominated) advisors, hiring committees, chairs, deans, and tenure committees are as understanding in the long run. If a candidate has a six-month or one-year gap in her research output, and then a slow period afterward as she gets back up to speed and learns to balance her work and family, it's hard to imagine that won't hurt her career, no matter what explanation is attached in her file. And God forbid she should want to have *two* children!
So this is a kind of sexism that's not based on any one person's behavior, but on the structure and norms of the community.
(Disclaimer: I am a childless man and have no personal experience with this issue, but I thought someone should mention it.)
Upvotes: 5 <issue_comment>username_8: I'm in a Computer Science program at a school that has a heavy focus on engineering fields. There are only a couple of girls in the program, but they are both excellent students. I've spent a bit of time studying with both of them, and one in particular has been able to help me with understanding some difficult concepts.
In a programming class that we took, the professor noted that she was an excellent student. When she went to his office after a test to discuss her grade, he began to preach on women in computer science. He told her how supportive he was, and relayed a story about sexism, which he condemned.
She expressed to me how uncomfortable it made her to be singled out like that. She has told me several times that the professors want her to be a "poster child" and "become a leader". This is all well-meaning, but she simply wants to be a (very good) student and fit in with the rest.
I am of the opinion that when a woman is in a mostly male field of study, she can have undue pressure put on her by well-meaning staff and peers. It's almost as if they are told that if they are not a leader, then they are letting other women in their field down.
Until we stop treating women like a rarity in certain fields (even if they are), they will not feel comfortable there.
Upvotes: 4 <issue_comment>username_9: I can't spot a link to this article posted yet:
<http://www.slate.com/blogs/xx_factor/2014/12/09/gender_bias_in_student_evaluations_professors_of_online_courses_who_present.html>
Someone else showed me the article, but I think the point is worth mentioning here. The original paper is available [here](http://link.springer.com/article/10.1007/s10755-014-9313-4), but the report gives the gist:
**Students gave significantly lower ratings to the same instructors** (in online teaching, where their identity was entirely concealed) **when the only difference was the gender the instructors reported to the students.**
Upvotes: 3 <issue_comment>username_10: There is a bias — against men — in the recruitment process for tenure-track positions.
From [a 2015 study](http://www.pnas.org/content/112/17/5360.abstract):
>
> Men and women faculty members from all four fields preferred female
> applicants 2:1 over identically qualified males with matching
> lifestyles (single, married, divorced), with the exception of male
> economists, who showed no gender preference. Comparing different
> lifestyles revealed that women preferred divorced mothers to married
> fathers and that men preferred mothers who took parental leaves to
> mothers who did not. Our findings, supported by real-world academic
> hiring data, suggest advantages for women launching academic science
> careers.
>
>
>
More details on this study, [from The Economist](https://www.economist.com/science-and-technology/2015/04/18/the-unfairer-sex):
>
> [](https://i.stack.imgur.com/T8kIj.png)
>
>
> Dr Williams and Dr Ceci conjured up trios of hypothetical candidates
> for tenure-track jobs in various fields. In each case two of the three
> were fantastically qualified and one, there to act as a foil, slightly
> less so. They sent the three candidates’ CVs, together with mocked-up
> interview comments about them, to 873 high-level academics in the
> departments of biology, economics, engineering and psychology at 371
> American universities. They tweaked the particulars of each trio to
> match the relevant discipline, and randomised which of the two
> outstanding candidates was referred to as “he” and which as “she”.
> Respondents were asked simply to pick the best of the three.
>
>
> As the chart shows, professors of biology, engineering and psychology
> all chose female candidates over equally qualified male ones, and did
> so by an overwhelming margin (as high as three to one in the case of
> psychology). Moreover, they made this choice regardless of whether
> they, themselves, were men or women. The sole exception to this
> pattern was economics. In this discipline male professors showed a
> slight preference for men, though females had a strong one for women.
>
>
> When Dr Williams and Dr Ceci carried out further experiments, looking
> in more detail, they found that the pattern they had discovered held
> up regardless of whether or not hypothetical candidates were married,
> had children or had taken a period of parental leave. These factors,
> often cited as damaging to women’s academic careers, seemed to weigh
> little with the professors in question.
>
>
> A criticism of the researchers’ method is that the professors knew
> they were involved in an experiment (though they did not know its
> purpose). They may therefore have chosen the female applicant simply
> because they knew they were being scrutinised and wanted to show their
> feminist credentials, knowing that they would not have to live with
> the consequences. To control for this possibility, Dr Williams and Dr
> Ceci also sent out 127 identical CVs—half purporting to be of women
> and half of men—to 127 other academics, asking them simply to rate the
> candidate. Their idea was that an absence of applicants for comparison
> would reduce any pressure to be politically correct.
>
>
> In this case, too, the women triumphed. Notional female candidates
> scored a full point higher than male ones on a ten-point scale.
> Presented with identical track-records, respondents seemed simply to
> think more highly of women.
>
>
>
Upvotes: 2 <issue_comment>username_2: <NAME>, professor at ENS Lyon and member of the French Academy of Sciences, recently gave a speech on science and research in general ([available here, in French, includes a video and a retranscription](https://www.rnbm.org/la-science-dont-je-reve-par-laure-saint-raymond/)), on the occasion of the election of new members of the Academy. Here is part of what she said, translated by myself, with some context added for people not familiar with some French-specific terminology:
>
> Creativity is stimulated by diversity. [...]
>
>
> The only true efforts currently made to expand horizons are those in favor of parity [between men and women], and it must be said that they are not always successful. First among "not so good ideas" is the imposition of quotas in all [recruitment] committees (an immediate corollary being that women are called upon much more often for administrative tasks) and the ever increasing pressure for recruitment. Within our own [French Academy of Science], incentives to elect women are numerous. It is fortunate that we do not know what debates preceded our election, but for women, the doubt of having been chosen to improve statistics remains... The balance between genders, like social diversity, cannot be decreed. We must gently get rid of prejudices (still shared by part of the scientific community), recruit people through alternative paths for [*classes prépas*](https://en.wikipedia.org/wiki/Classe_pr%C3%A9paratoire_aux_grandes_%C3%A9coles) and competitions, and maybe change our evaluation criteria: privileging originality and esthetics over quantity and technical brute force.
>
>
>
(The last sentence is more of an echo to what she had previously said in her speech. I found the whole speech very interesting and I can only recommend everyone to listen or read it, if you can speak French. I am not brave enough to translate everything – please forgive me.)
One of the key points in what I translated above is the "not so good ideas". In French we say "*fausse bonne idée*", "false good idea", to mean an idea that looks good on the surface, but is actually not so good, or even harmful, when you actually think about it. I wasn't sure how to translate it, so I hope it's clear now.
Upvotes: 0
2013/06/13 | <issue_start>username_0: Just as the title says: do you spell out Thm., Prop., Eq., Ch. and comparable abbreviations in a mathematical paper?
I suppose that, if in doubt, it is always best to stay consistent (in whatever way) throughout one's writing, at the very least. Do journals have (different) policies on this, or is there a preferred style when in doubt?<issue_comment>username_1: It is a matter of style. I would say **yes**, expand them. In my opinion, authors tend to over-abbreviate, making documents harder to read. For instance, I can't work out what you mean by Ch. (**C**onjec**h**ure?)
Upvotes: 5 [selected_answer]<issue_comment>username_2: In my experience, authors almost always spell out words like "Theorem", "Proposition", and so on. I expect that journal styles will generally require this. I can't remember the last time I saw a published paper that abbreviated them.
But if you're writing a paper, you must have read a lot of other people's published papers. Surely by now you've formed your own opinion of the consensus?
Upvotes: 4 <issue_comment>username_3: To expand on Dave's answer slightly, abbreviations should follow the guidelines of the specific venue to which you are sending a paper. If they expect no abbreviations, don't use them. If they have standard ones specified, use those as appropriate. Typically, I would only use something like "Prop." for "proposition" when it's referring to something with a specific number, *and* that's what the style guide calls for.
Other abbreviations should be used to improve readability: for instance writing out "fast Fourier transform" one hundred times during a paper can start to get more tedious than using "FFT" as an abbreviation. But shortening individual words should only be done if it makes reading the paper easier, *not* simply to shorten it.
Upvotes: 4 <issue_comment>username_4: One heuristic is to write things as you would read them out loud. In other words, don't try to save space in print unless you would use the same abbreviation in speech.
For example, I'm happy to say "i.e." or "e.g." orally in certain situations, so those abbreviations can be fine (indeed, it would sound really weird if you wrote "id est" or "exempli gratia"), but I would never refer to "Sec. 5" or "Eq. 3" when speaking. Trying to pronounce "Ch. 2" as written would be even worse. By this standard, abbreviations like NASA or FFT are OK, although they can of course be overused and they may be more cryptic than the writer intends.
This principle extends beyond abbreviations: try not to write anything that would be awkward to read aloud. (For example, mathematicians sometimes violate this by juxtaposing formulas with no words in between them.) Of course this is not an absolute rule, but following it will generally make your papers easier and more pleasant to read. The effects are admittedly small, but if you are explaining complicated or subtle ideas, you shouldn't add to the difficulties with clumsy writing.
Upvotes: 3 <issue_comment>username_5: Well, yes, in general you spell out these words. However, there are basically three types of usage, and you can draw the line between abbreviating and spelling out anywhere among them.
1. **In-text usage at the beginning of a sentence.** Theorem 3.5 clearly shows that Foo is actually a Bar of chocolate. Equation 3.17 confirms this. Equation (3.18) is irrelevant.
2. **In-text usage in the middle of a sentence.** We see in Thm. 3.5 that Foo is actually a Bar of chocolate, which is confirmed by Eq. (3.17), noting that (3.18) is irrelevant.
3. **Parenthesized usage.** We see that Foo is actually a Bar of chocolate (Thm. 3.5).
(I personally prefer no abbreviation for any references, and omitting the word "Equation" whenever possible, but that's not the point.) The point is that you should draw the line somewhere and be consistent throughout your document, and especially be consistent with "Fig." vs. "Table" etc. The exceptional things are:
* **Equations** with 5 (five!!!) possible styles: Equation (3.18), Equation 3.18, Eq. (3.18), Eq. 3.18, (3.18).
* **Bibliographic citations** which usually have a prescribed style and you should only be consistent in whether they can be a grammatical object in the sentence or not.
* **Chapters/sections**, where: first, you don't need to follow LaTeX's terminology, and second, you can use the sign "§" for them.
Upvotes: 2 <issue_comment>username_6: In a mathematics paper, Prop can only mean Proposition, unless it's something strange about the mathematics of plays or more likely propellers, in which case "prop 3" might mean the third stage prop or propeller. Same for Fig unless it's something strange about the mathematics of fruits, or Def unless it's a statistical analysis of large-intestinal flora. But Theorem is short enough already. Significantly shorter seems OK unless it's atypical or otherwise ambiguous!
Upvotes: 1
2013/06/13 | <issue_start>username_0: Some journals and conferences charge money before/after paper acceptance; similarly, there are some which don't (like Open Access journals). Does money matter in judging such journals/conferences before sending a paper? Unfortunately my organization is cost-cutting and isn't funding any such activities. Is there a list of journals which do not charge fees or charge only modest fees, so that I can publish there on my own?<issue_comment>username_1: Generally speaking, whether a journal charges money for publication or not should **not be a criterion** for judging the quality of one journal compared to another one. Use other means for that, for example checking where good papers are published, the academic reputation of the editorial board, opinions of colleagues, and, yes, even numeric factors such as the impact factor of the journal.
There are very high-quality open access journals, for example the [PLoS](http://www.plos.org) series of life science journals, but also no-quality open access journals, for example the ones produced by most of the publishers on [Beall's list](http://scholarlyoa.com/publishers/). Some very high-quality non-open access journals ask for page fees (or even submission fees, as I learnt recently), others don't.
I am not sure whether there is a specific list of journals which don't require paying fees, or an overview listing publication fees for a range of journals. There is the quite well-known [SHERPA/Romeo](http://www.sherpa.ac.uk/romeo/search.php) list, which distinguishes between different levels of self-archiving. I guess most of the green journals (highest self-archiving rating) listed there will be open-access journals and ask for publishing fees, so you could go and check the other journals in more detail, by looking at their instructions for authors.
Upvotes: 2 <issue_comment>username_2: I must disagree with the CS folks in the comments†. Whether a journal charges or not is immaterial to the quality of the journal. You can only gauge its quality by being an active researcher in that field, not by silly metrics (this is an order of magnitude worse than impact factor).
Different journals from the same publishing house might also charge different rates for publications. For instance, AIP publishes both the Phys. Rev. series (+ PRL) and the J. Acoustical Society of America. While there are no charges for the former set, JASA has "voluntary" charges for publishing.
Such "voluntary" charges are very common (some flagship and top-of-the-line IEEE journals come to mind) and serves primarily as an opt-out for researchers from low income countries in Asia/Africa/S. America. Researchers in N. America, Europe and Australia/NZ are *expected* to pay the publishing charges, since with the way grants are structured in these countries, there is almost always a specific budget earmarked for publishing costs. Of course, if a researcher has no budget or is publishing in his spare time or as a secondary interest, then they're also welcome to not pay the charges, but I've never seen anyone that's on a grant decline to pay charges.
† Opinions on this site seem to be overrun by CS folks, who generally state them as facts of life. From my years of experience in a variety of fields, I've found their customs to be more of an outlier than the norm (perhaps only mathematics has something in common).
Upvotes: 4 [selected_answer] |
2013/06/14 | 647 | 2,731 | <issue_start>username_0: Going into college I had a great record: a 3.65 GPA, a 30 on the ACT, and about 18 AP and concurrent credits in statistics, calculus, literature, and Spanish. However, trying to balance religious callings and duties during my first year of college left me very unbalanced academically, and as such I failed two major classes, one of which was a college class that I had already passed in high school and was simply taking to help acclimate myself to the college environment.
The following semester was about the same. I ended up deferring for two years to serve full time under a religious calling; I was able to do this because the university is owned and operated by my church. Going back, I am resolved to do better this time, but I am wondering if there is any possible way to wipe away my record of my first year and start fresh again, maybe by applying to a new college instead of transferring. Is that possible? Or could I petition, on account of my ADHD and Asperger's syndrome, for the first year to be overlooked?<issue_comment>username_1: I can tell you from experience that one can recover from a bad first year, though it would be up to the university what options there are regarding your GPA. Having said that, it is very unlikely that the university would wipe the results without making you repeat the subjects.
You would need documentary evidence for ADHD and Asperger's Syndrome to be considered as reasons for wiping the first year's grades. Even then, I am doubtful that a university would allow it.
A major option is to learn what you can from the first year and apply those lessons as part of your resolve to do better in coming academic years.
Upvotes: 2 <issue_comment>username_2: Another big question here is in what classes those bad grades are, relative to your major. If those are intro classes in your intended major, this is a very, very bad thing, as it will have to be explained to anyone who looks at your transcript (admissions committees or prospective employers). On the other hand, if these are classes outside of your major, while still not ideal, they can at least be viewed as temporary aberrations or difficulties in the transition from high school to university.
Upvotes: 2 <issue_comment>username_3: If you do really well from now onwards, then a couple of poor years can be overlooked, for instance if you are going to apply for PhD studies.
Also, it seems that the problem is not necessarily ADHD or Asperger's; it seems to be taking on too many different activities. So the solution seems to be prioritising your time and commitments during your degree studies and then working very hard, rather than getting the record wiped.
Upvotes: 4 [selected_answer] |
2013/06/14 | 1,071 | 4,644 | <issue_start>username_0: I have recently graduated in Computer Science and Engineering and joined an R&D firm. During the job, I enrolled for a part-time MS (Research) at the topmost institute in my country (with a reputable world rank). The only difference from the full-time course is that we get one extra year to complete the part-time course. Everything else, like the courses, class timings, labs, exams and rigor, remains the same.
I intend to join academia in the future. How valuable is a part-time degree with respect to joining academia and industry?<issue_comment>username_1: It would not make a great deal of difference at all. As you stated, everything but the time to complete it is the same. I am in the same boat, and I have found that when I apply somewhere, they are not in the slightest bit concerned about me studying part time.
Look at it this way, once you complete your research degree, you will also have a considerable amount of practical experience, that will speak volumes in your favour.
Upvotes: 3 <issue_comment>username_2: *The fact that you have done your degree either part or full time will not change your degree's value.* I have never heard of part-time degrees being treated differently from full-time degrees. In fact, it would be unusual to list a part-time or full time status on your CV so it's not clear how people would know. And I doubt many people would react differently.
Your decision to do a degree part-time or full-time has other important implications on your long term goals but they are indirect. If you want to go into industry, choosing a part-time degree might mean you have more time to build up industry experience which might be nice. If you want to go into academia — and you can afford to go to school full time — do the full time degree because work in industry will only slow you down from the pursuit of your long term goal. Being a part-time student might also mean less attention from faculty who might think you are less serious about an academic track while you are doing the degree.
Upvotes: 5 [selected_answer]<issue_comment>username_3: I got my master's degree in computer science by part time study while working full time. It was the same arrangement as in the question - same courses and other requirements as the corresponding full time program, but spread over an extra year.
My employers treated it as a valid qualification for the rest of my industry career. After that, I was accepted into a CS PhD program. It was my only formal CS qualification. My bachelor's degree was in mathematics.
If you look really closely at my resume, you can tell I was studying part time because of the overlap between university attendance and a job. There is nothing about my degree that makes any distinction between full or part time study.
Upvotes: 2 <issue_comment>username_4: While the OP has probably graduated by now, I believe the answer to this question is culture dependent. My answer will focus on the industry part of OP's question.
First and foremost, the accepted answer was partly incorrect, as it would be quite obvious to an experienced HR, from your CV, that your employment record and your degree overlapped in time. Thus one or both of them must have been part-time.
HRs from my Asian hometown care. Why?
While the OP was pursuing a degree from his/her home country, other readers coming across this question might enroll in an overseas program. In this case, a full-time degree overseas would entail multicultural exposure, which is an important employment consideration for multinational companies. Depending on the country where you get your degree, it may also imply that you can speak a foreign language.
Whereas a part-time degree from overseas would most likely imply distance learning and a lack of networking with fellow classmates in your research area. The argument stands even if the degree is from your home country, but a different city.
It does not matter if my weak "deduction" above may not necessarily be correct. As long as *some* HRs think this way, those jobs are forever lost to you. (This *"some"* becomes **"most"** in Asia.) Even if actual researchers in the industry consider you an equal, your application will go straight to the bin before ever reaching their desk.
Nonetheless, your experience at an R&D firm, *if it is of the **same** area as your research*, is much more important than whether your degree is full-time or part-time when it comes to hiring decision. The word "same" is stressed because, referring to the culture of my hometown again, unrelated experience is not considered working experience, at all.
Upvotes: 2 |
2013/06/15 | 715 | 2,918 | <issue_start>username_0: I am in the third year of my undergraduate degree now, and in the process of applying for Graduate schools and Med schools. I have a burning question about a creepy "C" in my transcript.
To summarize my story: I was recovering from a biopsy operation back then. Though I was advised to take one semester off to rest physically and mentally (I was extremely paranoid waiting for the pathology report, and luckily it came back benign), I still decided to take all the courses and the heavy research that I had started before. I performed so badly that semester that I received a C. That is like the most embarrassing element in my transcript.
Some told me that such a bad grade is a disadvantage for admissions. Do I still have a chance to make it to top grad and med schools? With extra effort (I have managed to pull up my GPA to 3.8 now, I have been on the Dean's list for some semesters, and have 3 publications and 2 poster presentations at symposia; my GRE and MCAT are good too), can I cover that ugly spot?<issue_comment>username_1: Your final GPA already shows that you have done well in your studies, and a bad grade in one semester or subject wouldn't matter much. Even if you are concerned about your grade affecting your admission chances, you can explain in your *Statement of Purpose* why you performed badly, what you learned from that, and how you managed to improve upon it. Failure is also a learning experience, and if you are applying to a sane school, the admission committee is usually intelligent enough to understand that. Moreover, describing how you improved after a bad grade would reflect your commitment and seriousness.
Upvotes: 3 <issue_comment>username_2: You have shown that you have overcome great adversity and still achieved great results - this says a lot of positives about your character - of resilience and perseverance - two attributes that are critical for any graduate studies. You had a cancer scare and still passed the subject despite the medical tests and the very justified anxiety.
Maybe, it is not an "ugly spot", but that C, and subsequent successes are a reminder of how much strength and tenacity you have shown.
Upvotes: 5 [selected_answer]<issue_comment>username_3: In your shoes, I would include a letter explaining the biopsy operation, the date of it, and its "correlation" with the "creepy C." This letter should probably come from a professor familiar with the situation, if possible, or maybe a doctor.
You've come a long way since then. You've got several publications/presentations, and a cumulative GPA of 3.8 that includes the C. Most schools would be happy to let you in. They'd wonder about the C, but would also be looking for an "excuse" to overlook it.
This is something you don't want to let "pass," but you also don't want to make a "big deal" of it. A letter or two should be just about right.
Upvotes: 2 |
2013/06/15 | 998 | 4,371 | <issue_start>username_0: This question is related to Physics research papers, particularly one that will be coming up for my ongoing PhD research (4th paper in all).
A core part of the research is developing, testing and validating relevant self-written code. So, my question is how much of the code should be included in the upcoming paper? Should it be the whole thing, or a written summary of the main components/subroutines used?<issue_comment>username_1: I would put the current version of the code on a gist, providing the inputs and the gnuplot (matplotlib/whatever) scripts needed to reproduce the graphs.
Upvotes: 3 <issue_comment>username_2: I would second @username_1's answer about having the code and some sort of walkthrough for reproducing your results online. However, your question about how much to include in the *paper* is very journal-specific. There are a number of publications specifically for computational science with formats where large chunks of code are expected, but if it's a journal whose focus is more on the science than the techniques, I would just describe in words the tricky parts of your algorithm and give a footnote reference to the website where the code lives.
Upvotes: 3 <issue_comment>username_3: If the goal of your paper is to show that your code solves a particular problem or performs a specific task, then it is your responsibility to prove to the reviewers that your code does what it claims to do. The easiest way to do this is to include the code as part of the supporting information for your paper.
However, *reproducibility*—which is what you're asking about, to a certain extent—is a not-so-simple question when it comes to showing codes do what they're supposed to. For instance, a code that runs one way under one configuration may return a slightly (or perhaps *very*) different result when run under another configuration. This doesn't mean that one result or the other is wrong—it just means that this behavior needs to be taken into account when evaluating the correctness of software.
One way to help this is to provide as much information as possible on your testing environment, so that any differences between the system on which you were working and the conditions the reviewers and future potential users of your code will have can quickly be identified. This would be included, perhaps, as a text file that accompanies the code in the supporting information.
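For instance, the key facts about the testing environment can be captured automatically with a short script (a minimal sketch in Python; the answer above doesn't prescribe any particular language or format), so that the file accompanying the code stays in sync with the machine that actually produced the results:

```python
import json
import platform
import sys

def environment_record():
    """Collect basic facts about the system on which the results were produced."""
    return {
        "python": sys.version.split()[0],
        "implementation": platform.python_implementation(),
        "os": platform.system(),
        "os_release": platform.release(),
        "machine": platform.machine(),
    }

# Print the record as JSON; redirect it into a text file (e.g. environment.txt)
# and ship that file with the code in the supporting information.
print(json.dumps(environment_record(), indent=2))
```

Running this at the end of each validation run, and archiving the output next to the results, makes it much easier to pin down configuration differences later.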
Upvotes: 5 [selected_answer]<issue_comment>username_4: Another option, in the field of astrophysics, is to submit your code to a repository and register it in the Astrophysics Source Code Library (ASCL) at <https://ascl.net>, which is indexed by the SAO/NASA Astrophysics Data System (ADS), the official compilation of literature (and now source code and datasets) in astrophysics.
If that's your case, you can look at this link to find out how to submit your code: <https://ascl.net/submissions>
The page has additional resources telling you how to cite said codes, and this is what a citation might look like:
>
> <NAME>., 2013, AstroTaverna, Astrophysics Source Code Library, record ascl:1307.007
>
With the record being held at <https://ascl.net/1307.007>, and the ADS entry at <http://adsabs.harvard.edu/abs/2013ascl.soft07007G>
Upvotes: 1 <issue_comment>username_5: Papers on physics which utilise code should primarily describe the algorithms and their relevance to the physical problem at hand, and not focus on the nuts and bolts of the code. There are several reasons for this: the printed article is the wrong place to include code, as page space is generally limited and not suited to the formatting of code; there are issues with copyright, as in most cases the ownership of the code is transferred to the journal or, if open access, put in the public domain; and generally a paper should be broad enough to be useful to a wider range of researchers than just your specialist area.
As others have suggested, you should release your code on GitHub or similar, apply a suitable licence, and simply refer to it in the paper. There are some exceptions where including code in a paper is warranted, generally when understanding the algorithm is essential for the reader or the algorithm is complicated, in which case pseudocode is acceptable, since the essential information is included without any language-specific distractions.
Upvotes: 2 |
2013/06/16 | 1,389 | 5,839 | <issue_start>username_0: It seems quite common for other teachers at my university (in Asia) to have students help with what are the normal duties of a teacher. Examples range from asking a student to help carry one of many stacks of books (because the teacher cannot carry all of them) to asking students to carry very small things (which the teacher could easily carry but the teacher would prefer not to carry anything at all and let the student do it) to having students complete marking sheets for graded assignments (where the teacher has already decided what the marks should be but prefers to have a student do the typing and printing).
It's very strange for me because I've never seen it in my own culture (which doesn't mean it doesn't happen), but I'm not Asian so perhaps I'm just not yet adjusted to the local culture.
Do most universities allow a teacher to put some of the teacher's regular duties on the shoulders of some students?
Note: This has nothing to do with teaching assistants, I'm just talking about regular students who appear willing to help (either to learn/understand more, to get in good graces with the teacher, or perhaps for other reasons I cannot see at this time).
**[EDIT]** I've modified the question slightly, away from ethics and towards how common this is, due to concerns from other users that it was too much of a discussion/opinion question.<issue_comment>username_1: [Confucius](http://en.wikipedia.org/wiki/Confucius) once said (in a rough translation), "It's a student's job to serve the teacher when the teacher needs help." (the [source](http://www.dfg.cn/big5/chtwh/ssjz/33-lunyu-weizheng.htm) in Chinese)
I think in most places, where Confucius's principle is still being followed, helping the teacher in his normal duties is considered normal.
However, helping to mark the sheets is not normal as far as I understand. At least, Confucius never graded his students. He did have comments about his students, though.
Upvotes: 4 [selected_answer]<issue_comment>username_2: I can only provide **anecdotal** evidence from the 3 European universities I have attended to date and from what I have heard from others.
It is not common at all to do this here.
It would, however, not be a problem to ask for help *occasionally* either. If a professor did this regularly, especially if it affected the grading process, problems would arise very soon, as students would not be fine with this and would intervene by talking to other professors, the director, etc.
I believe this can only work in cases where the students massively respect or depend on the professor, right? Because if a professor was known to ask students to help them outside of the classroom, most students would probably have better things to do and find an excuse when asked.
Upvotes: 2 <issue_comment>username_3: It is highly unusual for *students* to be asked to take part in teaching duties such as the ones you have described. It would be unusual to ask someone to help with "mundane" or "brute force" tasks—although you could see asking anyone to help out in such circumstances, so I wouldn't think much of being asked to move books or other materials, if the individuals just happened to be standing there at the time.
As for grading and more "official" tasks, I think it would be *highly* inappropriate to ask a student to prepare the list of grades, or to assist in grading, if the student is actively enrolled in the course. That is because you are then placing those students in a position of authority over their fellow students, and potentially creating a conflict of interest. Note that this does *not* preclude the use of "graders," as they are not also enrolled in the class at the time.
Upvotes: 3 <issue_comment>username_4: You're conflating different kinds of assistance under the title of "students helping their teacher". Let's consider the examples:
* **Carrying a stack of books to be distributed among students**: This is merely an extension of each student helping her/himself to a book, just in a more organized way. It's also a one-time activity. It's an activity the teacher is almost certainly not getting paid to do (as opposed to actually teaching, writing exams, grading etc). It's not pedagogical in any way and there's no benefit in a trained/educated teacher doing it. It can be construed as a courtesy among any two people - helping someone carry a heavy physical load; and that's especially true if the teacher is older and not as strong physically and the student is not scrawny and weak :-)
* **Carrying the teacher's stack of books to his room:** That is a sort of personal service rendered to the teacher; it can be construed as a display of loyalty and even fondness, and certainly a recognition of status differences. It is something the teacher is borderline being paid to do (it depends on how you look at it, I guess). It's something that moves the student a bit into the 'personal space' of the teacher - you get to look at what s/he is reading right now, walk the hallways with him/her with no other people close by, perhaps glance into his/her office, perhaps even have a chat with her/him. It's an often-repeatable activity.
* **Typing up grades** is something the teacher is definitely getting paid for; or rather, either him or the teaching asistants. It's, well, assistance in teaching. It's repeatable - many sheets, many homework assignments or exams. It's a task of responsibility and confidence, as you get to see personal information about other students.
The propriety and the prevalence of each of these types of activities is a *completely different question*, I would say.
From my experience, and in the order above: requested when relevant (but not often); extremely rarely or never requested; never requested, and if it is, the students can sue and win easily.
Upvotes: 1 |
2013/06/16 | 574 | 2,471 | <issue_start>username_0: I'm writing my BA thesis in computer science at a larger company and need to include in my reference list a confidential statistics report that has been made available to me.
So my question is, how does one normally cite a confidential source? I assume you would somehow include the contact details of someone at the company with access to the material?
I'm using the Vancouver reference style by the way. Cheers!
Edit: I should clarify that the confidential "report" that was made available to me really isn't a report, but rather just raw data. It doesn't have an author or even a title, only site visitor statistics, hence my confusion about how to reference it.<issue_comment>username_1: Before you do *anything* with the confidential data, you should have cleared its use with the company in question. Giving away the data in any form without their express permission could get you into a lot of trouble.
That said, if the report containing the data is an internal company technical report, it should be cited as such in a bibliography. This provides enough information for a person to track it down, although you may want to state that it is not available to the general public.
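To make this concrete, a Vancouver-style entry for such material could look something like the following (the reference number, company name, and contact person here are entirely hypothetical; adapt them to your case and check your department's style guide). For untitled raw data, a descriptive title in square brackets is a common convention:

```
12. [Site visitor statistics, internal raw data]. Example Corp; 2013.
    Unpublished internal report. Available on request from J. Doe,
    Example Corp, subject to a confidentiality agreement.
```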
Upvotes: 5 [selected_answer]<issue_comment>username_2: It may be best to just cite the source as if it were any other reference, then include in your acknowledgements and/or a footnote more details on how you acquired the data and who to contact if it is indeed available to others.
Upvotes: 2 <issue_comment>username_3: I have two answers to this.
Once, as a researcher, I had the pleasure of using some unpublished data in my forecast. (My paper was about the forecast method itself.) I wrote something along the lines of `42. <NAME>. Private communication. 1899`, also citing a report that was partially based on the same data. <NAME> was the person who gave me the data. (Thanks, "John"!)
On the other hand, as a supervisor of some BSc theses written at commercial companies, I see students confronted with internal data all the time. It is hard to cite internal data, especially in a public thesis. The current way of doing it is to perform an "expert interview": you **talk** with some higher-ups at the company, and they provide you both with data and with interpretations of the data.
Then you put the final interview in the appendix of the thesis; it is the primary information source, to which you can refer in other parts of your thesis.
Upvotes: 0 |
2013/06/16 | 467 | 2,135 | <issue_start>username_0: I have done some research on a computer science topic. The initial idea seemed good, but when I implemented it, it worked in some cases and in others it did not produce conclusive results.
How can I write a paper about this subject? I do not want to hide the bad results; I want to include them, but I am afraid that my paper might not get accepted because of that.
I was also thinking of submitting it to a less prestigious conference, but I do not see any advantage in doing so.<issue_comment>username_1: It would be good to dig a bit deeper and analyze why the idea works in some cases and not in some others. More than the fact that your idea seems not to work in some cases, a conference PC may be less than impressed by the fact that you have not worked out all its ramifications and clearly identified the structure of the problem for which it is good. In all CS research conferences and journals of some reasonable quality, it is the responsibility of the author(s) to present ideas as well as to work out their consequences in some depth -- it is in general not acceptable to present a bare idea (or an idea not thought through properly) with the expectation that the audience/readers will take it forward on their own.
Presenting more analyses would also make your submission stronger by adding some other theoretical results to the paper. You could also identify future research directions for addressing the cases not covered by your current work, which could add gravitas to your current paper as well as suggest your own future direction.
Upvotes: 3 <issue_comment>username_2: It's important to remember that a paper is supposed to make some kind of contribution. It's not merely a report on "things you did". So if you have some good results and some inconclusive results, what's the contribution ? As username_1 says, you need to work out what's going on and come up with some answers. It's entirely possible that you fail in this endeavour and end up with NO paper. That's not fun, but it's something that you must accept as a possibility.
Upvotes: 3 |
2013/06/17 | 1,376 | 5,900 | <issue_start>username_0: My field is atmospheric physics.
The irony is that I have been a school teacher for over a decade, but soon, I'll be giving a presentation of some of my findings at a conference. I think that the nerves stem from speaking about my own research in front of my peers - something that I have not done to a large audience.
The questions that will no doubt be asked fill me with anticipation in both positive and negative ways.
Aside from being prepared, making sure the presentation is seamless, getting a good night's sleep beforehand, and 'knowing my stuff' inside and out, what are some strategies to anticipate the type of questions that would likely arise from a conference presentation?<issue_comment>username_1: There are a few obvious questions to ask yourself in planning for questions:
* What are the inherent weaknesses in the current work? (Almost no research is completely "airtight," so figuring out where the weak spots are will make a difference.)
* What are the ramifications of whatever assumptions I have made? Are they logical? What happens if I strengthen, relax, or eliminate some of those assumptions? Will everything still work in the more general (or more restricted) case?
* How would I apply this work to other problems? How will it help others in the field?
And then, with respect to the presentation:
* Have I left anything out in the interests of time that would potentially interest the audience? Is the research methodology clear?
If there's anything in the last point, you may want to plan on having additional "backup" slides which highlight that info, but that aren't part of the "main" talk.
Upvotes: 5 [selected_answer]<issue_comment>username_2: One strategy to adopt when answering questions is to first repeat back the essence of the question to the questioner:
>
> If I understand correctly, you are asking ....
>
>
>
This will have two effects. Firstly, it ensures that you are actually answering the correct question. Secondly, it will buy you a little bit of time to gather your thoughts and think a bit about an answer.
Take your time when answering questions, rather than rushing to the first answer that pops into your head. In the end, it is okay to say, "I don't know" or to ask to discuss the question off-line, but the latter can seem like a bit of a cop-out. Try to answer the question, but only if the message is not getting through can you suggest to take it off-line.
But the real key is to practice your presentation extensively. If you deliver a good presentation and you know it, you will feel great and thus comfortable to answer questions.
Upvotes: 3 <issue_comment>username_3: In addition to strategies for anticipating questions, I thought it would be helpful to suggest how to cope with the nerves. I find that it sometimes helps to remind yourself of a couple of things:
1. Remember that you have been working and thinking about your specific question probably more than most people hearing the talk - they are just hearing about your work for the first time. Even if there are important and smart people in the audience, **you are the expert on your work**.
2. Personally, I find that I am similarly anxious before giving talks regarding my work. However, it somehow always plays out fine - the atmosphere is usually relaxed and the questions tend to be either simple clarifications or interesting discussions. I suspect I am not the only person who experiences this.
Upvotes: 3 <issue_comment>username_4: Most of the questions you'll receive will either 1) ask for clarification about your methodology/results, or 2) suggest avenues of future work. Questions of the first kind are usually very easy (presumably you did the research and know the answer ;) ) and the second kind can be very helpful for identifying new research directions and collaborators! If you've already thought about the proposed direction, obviously chip in any insights you might have, but if not, "that's an interesting suggestion, and something I'd be interested in looking at in future work" is all you really need to say.
There are only a few realistic ways the Q&A session can go off the rails. You might get questions like
* "How does your work compare to [Foo et al 2003]" (you have no idea who Foo is or what his method does)
* "Does your work account for (some factor you don't understand)?"
* "How might your work apply to (some area you know nothing about)?"
You can fall into the trap of feeling that you *should* know the answer, and that admitting ignorance is embarrassing... but the worst thing you can do is to bluff or make stuff up. If people "smell blood in the water" and get the impression that you are being misleading, they *will* come back with even more hostile questions. Instead, remember that you are in control of the conversation, and hold the ultimate trump:
* "Unfortunately I'm not familiar offhand with the method of Foo et al, but I'd be happy to chat with you offline about it after the session."
* "I don't know offhand, but I'd be happy to chat more offline."
* "Unfortunately I'm not too familiar with (area X), but I'd love to talk to you later about possible applications of my work there."
Upvotes: 2 <issue_comment>username_5: There is absolutely nothing wrong with not knowing the answer to a difficult question; if we know the answers to all the questions, then by definition it isn't research. Being comfortable with not knowing the answer should help with nerves. The questions are generally asked out of genuine interest, rather than as a test, so the person asking the question is not necessarily expecting you to have a good answer anyway. User168715's suggestion (+1) of saying "I don't know offhand, but I'd be happy to chat more offline" is a good one, and is a good way of exchanging ideas with others interested in the same sorts of work as yourself.
Upvotes: 2
2013/06/17 <issue_start>In the UK, a portion, and in some cases all, of the work is "second marked", where an independent marker also marks the work. In cases where the 1st and 2nd markers disagree, a 3rd marker may be used. Finally, the entire work of each student over the course of his/her studies is evaluated by an exam board with (sometimes) 2 additional independent markers. These exam board markers tend to only consider cases that are on the border of different degree classifications.
From my understanding of statistics, having all of these different markers will regress marks towards the mean. As I am currently faced with the daunting task of 2nd marking a large stack of off-topic papers, I am curious: what are the advantages of second marking?<issue_comment>username_1: Double marking has many roles, but mostly it is to ensure accuracy and fairness. The main way of achieving this, and avoiding the statistical anomalies alluded to in the comments, is to produce an effective marking scheme, so that academics with sufficient background can grade the exam and produce virtually the same grades. *Easier said than done.*
More details can be found on the Internet, for example, on [Swansea University's website](http://www.swan.ac.uk/registry/academicguide/assessmentandprogress/doublemarkingpolicy/).
Upvotes: 4 [selected_answer]<issue_comment>username_2: I will share a little experience I have had. I am sure details will differ depending on how the system of two graders is set up.
In the system I experienced, it is customary to have the person responsible for the course, plus someone external (in my case even from a different country), do the grading. The grading was completed by having a discussion between the two graders about possible deviations. In this system grades are given as a number between one and six in steps of 0.1, so very detailed.
My experience was actually quite remarkable; it concerned a masters/PhD level course. We were most often within 0.2 of each other except in one case (answer) where one had given a 1 and the other a 6. In that case it turned out the question was ambiguous and could be interpreted in different ways. The grades were basically calculated as the average of both but only after we had discussed the problems/deviations. This is, for example, how we discovered the ambiguous question formulation.
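The averaging-with-discussion scheme described above can be sketched in a few lines. (The 0.5-point discussion threshold below is my own illustrative assumption, not part of the actual system, which triggered discussion of any notable deviation.)

```python
def combine_grades(g1, g2, discussion_threshold=0.5):
    """Combine two markers' grades (1.0-6.0 in steps of 0.1).

    Returns the rounded average, plus a flag telling the markers to
    discuss the item before the grade becomes final.
    """
    for g in (g1, g2):
        if not 1.0 <= g <= 6.0:
            raise ValueError(f"grade {g} is outside the 1-6 scale")
    needs_discussion = abs(g1 - g2) > discussion_threshold
    final = round((g1 + g2) / 2, 1)  # keep the 0.1-step resolution
    return final, needs_discussion

# Typical case: markers within 0.2 of each other -> no discussion needed.
print(combine_grades(4.1, 4.3))   # (4.2, False)
# The pathological case from this answer: a 1 and a 6 -> flag for discussion
# (which is how the ambiguous question formulation was discovered).
print(combine_grades(1.0, 6.0))   # (3.5, True)
```

The point of the flag is that the average alone would hide a 1-vs-6 disagreement behind an innocuous-looking 3.5; forcing a discussion first is what surfaces ambiguous questions.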
From this, albeit minuscule, experience, I felt that the benefit of having two persons grading is that ambiguities in terms of questions and answers can be sorted out. It is also possible to discuss the appropriateness of the interpretations of answers given by students. The method also provides what I can call "legal certainty", since the grades will be based on two persons' views rather than one. Of course, the degree to which it is certain depends on the transparency of the process and to what extent the two gradings become official. The point in "my" case is that both graders have to agree, so the result is not signed by just one person.
As a grader I also appreciated the opportunity to discuss the grading, and the corrections jointly made were fair and made the process worthwhile. I would personally like to see the system used more, but fear it will be difficult from a financial point of view in many universities (and university systems).
Upvotes: 2 <issue_comment>username_3: Like the other answers, I don't think it is an issue of regression to the mean (in the sense that the mean is the mean of all the students in the module). It is an issue of finding the true quality of the student's work.
In my current university we do sampling, in that one marker will check maybe 1/3 of another marker's work. The point here is clearly not to catch every mistake a marker might make, but rather to check for signs of abuse of power. The fact that everyone marking knows that *some* of their marks will be checked by another is supposed to keep the original marker from giving inappropriately high or low grades to any students, since the marker does not know which of the marks will be checked (admins actually do the selection of the sample).
I've worked at other universities where all work is double marked and in this case my experience is similar to username_2's.
Again, as I see it, the purpose of double marking (whether full or sampling) is simply to make sure that the student is getting marked fairly. Unfair marks can happen with intent (bad marker) or by accident (marker interpreting an ambiguous question differently from the student).
Upvotes: 2
2013/06/17 <issue_start>Working on a cross-disciplinary project, in an increasingly cross-disciplinary field, I often find myself wondering whether or not I am developing skills in multiple different areas of my work. I intend this question to be as general as possible, but for the sake of clarity I will give my case as an example:
I have an MSc in applied mathematics, and have been working in biomedical research for three years now. Being branded as a *bioinformatician*, I feel very appreciated on one hand, and absolutely disregarded on the other. In many cases I am expected to learn more of the biology and develop an understanding of "the real science", while all the tools and analysis should just work. I mean, many of the seniors have absolutely no idea of the time and effort it takes to develop a software tool, maintain it and develop it further. It appears as if all that is given once and for all in engineering school; after all, programming is just programming... (please note the sarcasm here)
Be that as it may, I have been trying to improve my knowledge of and experience with the technical aspects of the work on my own: learning new algorithms, new languages, new tools... It is surprisingly hard to get accustomed to these when you are not at university anymore. Consequently, I have given up on learning Maven for my Java projects, or Perl for speeding up my day-to-day scripting.
So my question is: what are good methods for learning or developing techniques that are not immediately in the scope of your project but are still very relevant to your development as a scientist?
Follow-up question: am I mistaken in thinking that I should develop a broad set of skills in order to become as efficient and competent as possible?<issue_comment>username_1: My goodness, but this is a tough one. We all learn and maintain our skills in our own unique way, especially in industry or when operating independently, so it's difficult to say what would work for you. I'm in a similar situation, except that I'm a developer first and I've been tasked with 'just making security happen'. Much like with you and bio-med, there is an awful lot that is changing in the landscape of info-sec, and staying sharp in both domains is a real challenge.
That said, your mileage will vary, but here are some techniques that work for me.
**Find overlapping areas:**
When I started to move toward developing a new aspect of my skill set, I looked for areas where the old and the new overlap. Luckily for me, this was a pretty easy thing to do with infosec and software development. The benefit here is that it allows you to leverage your existing skills in a new area. If you can find areas where you can turn two jobs into one, you can ease your way into it, rather than just sitting down one weekend and deciding "I'm going to learn X."
**Sit down one weekend and decide "I'm going to learn X.":**
Sometimes there is no easy overlap to ease your way into a new tech or topic. In those cases I've found that a couple of days in power study mode can be a real benefit. Strap on the headphones, coffee up if that's your thing, and just read the literature & work problems. If it's a technology, then do tutorials. If it's a topic, then vacuum up as much as you can.
**Find mentors:** Maybe you can't find the all-encompassing guru of everything you want to do, but that's OK. Someday that guru will be you. In the meantime, find people with expertise in your subject areas to help you fill in the gaps.
**Keep it fun:**
If at all possible and whenever possible. If you hate what you're doing then you're not going to do it well. At least that has been my experience.
**Don't give in to the temptation to 'dumb it down':** You're a smart person. You've taken the time to learn a new skill. You've actually read the materials. You work hard to develop and maintain a system that crosses a number of subject matter domains. Don't let people off easy when they ask hard questions; give them hard answers. My approach is to ask them 'do you want an answer or a response?' If all they want is a response that's OK. I give them a short and succinct response. If they want an answer then I do my best to provide the most exhaustive and thorough explanation of what I do, that I can. When I work hard to learn a new skill I don't need to show it off but I won't let it be taken for granted either.
Well, that's what I've got. All the best to you in your endeavors. If you ever need Java help feel free to grab me on chat and I'll get you an email address.
Upvotes: 5 [selected_answer]<issue_comment>username_2: Some methods I have found:
1. Taking courses in the field you want to learn.
2. Learning from books in the field you want to learn. Best done in a learning group of people with a similar background.
3. Working with experts in the field you want to learn.
4. Taking on research projects in the field you want to learn.
Remember that your PhD is the best time to gain this knowledge - later you will be much busier with other things.
Finally, bioinformatics and computational biology is a bit of an odd field because you need to know math, biology, chemistry, physics and computer science (I won't detail all the subfields, but there are dozens). My point is that this is a HUGE field - don't expect to be an expert in every possible aspect. Instead, it is better to find a niche which you enjoy and become an expert there - but always keep your mind open to learning new things.
Upvotes: 2 <issue_comment>username_3: My understanding from the "I am not in the uni anymore" phrase is that you work in some sort of industry or semi-academic environment. If that is indeed the case, you are probably filling in time sheets for your work; if so, you should be telling your superiors, "*This feature that you requested requires 10 hours of immediate development, and another 20 hours to test it properly. How high do you want me to put it on the priority list?*" It is unfortunately a little late for you to start talking like that, as you have been used as the ultimate programmer-who-can-code-anything-without-any-problem for three years now. But you can also bring it up in the sense that "*I am getting a variety of requests that I must process one by one*", and then present the feature-value versus development-time trade-off.
What you are asking, actually, is even more advanced: you need to renegotiate so that some 5-10% of your time can be spent on professional development, so that you can continue improving as the applied mathematician on the job. Again, you can say that this process will eventually pay off, as you will do things on the projects faster. That's a tough sell, given that your time is needed for actual research.
You might want to ask something like that on the "main" StackOverflow website: how people continue training themselves on the job. In an academic setting, you as a student could just commit yourself to write your next thing (be that a research publication or a term paper) using the new tool. In a production environment, you don't have that luxury of being able to make some errors and give yourself some time to recover from them. So once again, that's tough.
Upvotes: 3 <issue_comment>username_4: I am in a very similar situation to you, doing a PhD in physics - however, there are strong connections to biology, photography and signal processing - but very specific topics therein.
A couple of methods that I use on top of what has been mentioned are:
1. Join stack exchanges, forums and discussion groups of similar topics
2. Set myself challenges to determine something new, partly relevant to my studies, but also to build my skills in that topic.
3. Attending conferences, workshops and seminars (as many as time and funds allow).
4. But one main method that seems to work particularly well is setting up 'Google alerts' of specific topics - I always have a great reading list.
Upvotes: 2
2013/06/18 <issue_start>I am going to be a senior undergraduate and am looking to really find the area of research that I would like to be engaged in during graduate school/senior year. I have submitted one conference paper as a collaborating author (currently waiting for the reviewers) in the area of Social Network Analysis (mathematical modeling) and am currently working on a conference paper in Graph Algorithms.
As you might guess, I am double majoring in Math and Computer Science and would like to pursue a graduate degree in applied math. So far (I haven't taken all the undergrad courses yet!) I have enjoyed Algorithms, Real Analysis, Graph Theory and Differential Equations. In the future I am curious to learn more about Stochastic Modeling, Mathematical Logic, Artificial Intelligence, Complex Analysis, Fractals and Abstract Algebra.
1. Where can I find current research journals about both the topics I have enjoyed and the topics I am curious to learn more about?
2. Do any journals have mobile apps (iOS, Android, or Windows) in which they can be viewed?
3. Where can I find unbiased information about the quality of journals and related data?
**EDIT:** Do any journals "stream" (RSS feed) to GNU Emacs? Or is there any type of package manager that will automatically download the latest publications? For example, I just found this ["package/program"](http://www.emacswiki.org/emacs/MitAiMemos) available in GNU Emacs. It is a list of AI publications from MIT up until 2005 (why would they stop then?)
Thanks for all the help! I am at least looking for a copy of a physical journal so I can take my eyes off the computer for a little bit! :)<issue_comment>username_1: I would agree that you should mostly be ignoring journals and focusing on conferences, as they publish the majority of new computer science research.
[Microsoft Academic Search](http://academic.research.microsoft.com/?SearchDomain=2&entitytype=3) is a good place to go to get an *approximate* listing of the top conferences for each sub-field of computer science. Other fields other than Computer Science are listed there too.
Use your school's network to access the [ACM Digital Library](http://dl.acm.org/) (this should be free through your school's library), and download the proceedings for the conferences in the past year or two. Find the papers that look interesting to you and then search for them on Google Scholar. You can print them out if you prefer hard copies.
Hope this helps!
Upvotes: 3 <issue_comment>username_2: @username_1 has answered regarding CS. I will answer regarding math.
I would start with the journals [of the AMS (pure math)](http://www.ams.org/journals) and [of SIAM (applied math)](http://siam.org/journals/). These are the pre-eminent professional societies in their fields and virtually all of their journals are top tier. In particular, you might start by browsing [the Journal of the AMS](http://www.ams.org/publications/journals/journalsframework/jams) and the [SIAM Review](http://siam.org/journals/sirev.php), the most selective journals from each society.
The journal that a paper gets published in is becoming less and less important, since most researchers find articles through search engines or social media rather than by browsing journals. The best way to keep up with new research in a particular subfield of math or CS is to subscribe to the appropriate arXiv RSS feed; for instance, for numerical analysis this is <http://arxiv.org/rss/math.NA>. This is how I usually learn about relevant new research.
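Consuming such a feed programmatically takes only a few lines. The arXiv URL above is real, but the sample XML below is a made-up stand-in so this sketch runs offline; note also that real arXiv feeds use an RSS 1.0/RDF dialect, so the tag paths there differ slightly from the plain RSS 2.0 assumed here.

```python
import xml.etree.ElementTree as ET

# In practice you would download e.g. http://arxiv.org/rss/math.NA with
# urllib.request and pass the response body to ET.fromstring(). This tiny
# RSS 2.0 sample is a made-up stand-in so the sketch is self-contained.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>math.NA updates</title>
  <item><title>Paper A</title><link>http://example.org/a</link></item>
  <item><title>Paper B</title><link>http://example.org/b</link></item>
</channel></rss>"""

def list_entries(feed_xml):
    """Return (title, link) pairs for every <item> in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

for title, link in list_entries(SAMPLE_FEED):
    print(title, "->", link)
```

Swapping `SAMPLE_FEED` for the body of an HTTP response to the feed URL gives a crude "new papers today" lister; dedicated feed readers (or Emacs packages such as elfeed) do the same thing with persistence and deduplication on top.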
Note that few mathematical conferences have proceedings, and none that I know of are considered prestigious (in CS, the situation is roughly the opposite). If you want to know which journals are the most highly regarded, talk to faculty in the field.
Journal articles are PDFs, so you can view them with any mobile app that understands PDFs. If you want to read a hard copy, either print the paper or go to your campus library.
Upvotes: 4 [selected_answer]<issue_comment>username_3: Let me put in a vote for *Mathematical Reviews* (online [version](http://www.ams.org/mr-database)). They're a few months behind the actual publication of the article, but someone who knows the field is giving you a two-paragraph (plus-or-minus) synopsis. Even low-quality papers get reviewed, and the authors-should-have-read-X comments of the reviewer are well worth it.
Your department can arrange a login for you. For that matter, just go join the American Mathematical Society; dues for grad students are trivial.
Upvotes: 2 <issue_comment>username_4: Here are some journals:
* [Journal of Combinatorial Theory](http://www.journals.elsevier.com/journal-of-combinatorial-theory-series-a/)
* [Discrete Mathematics](http://www.journals.elsevier.com/discrete-mathematics/)
* [Discrete Applied Mathematics](http://www.journals.elsevier.com/discrete-applied-mathematics/)
* [Electronic Journal of Combinatorics](http://www.combinatorics.org/)
* [Advances in Applied Mathematics](http://www.journals.elsevier.com/advances-in-applied-mathematics/)
* [Journal of the ACM](http://jacm.acm.org/)
* [SIAM Journal on Discrete Mathematics](http://epubs.siam.org/journal/sjdmec)
* [MAA Monthly](http://www.maa.org/publications/periodicals/american-mathematical-monthly)
* [Computational Methods in Engineering](https://www.springer.com/engineering/computational+intelligence+and+complexity/journal/11831)
* [Computational Intelligence and Complexity](https://www.springer.com/mathematics/journal/12532)
* [SIAM Journal on Optimization](http://www.scimagojr.com/journalsearch.php?q=26422&tip=sid&clean=0)
* [SIAM Journal on Computing](http://www.siam.org/journals/sicomp.php)
But you can search the list yourself for a more relevant journal:
<https://scholar.google.com/citations?view_op=top_venues>
This sorts them by citations.

Upvotes: 1 |
2013/06/18 <issue_start>I encounter this term "principled approach" in some [papers](http://scholar.google.com/scholar?q=principled+approach&hl=en&btnG=Search&as_sdt=1%2C5&as_sdtp=on) of computer science. Since I'm not a native speaker, I don't quite understand what this means. And I didn't find any results online.
I'm not sure if this site is appropriate for such questions. Please let me know if I posted in the wrong place.<issue_comment>username_1: (I am in mathematics, but similar language is used roughly similarly, I believe.) As a place-holder answer: a "principled" approach in science is at least *opposite* to a quick-and-dirty, or *ad hoc*, or "kludge-y" approach, the latter three synonymous expressions meaning that the priority is getting *some* result out, perhaps even finding some rationalization for the conclusion one *wants*. Obviously a non-principled approach more lends itself to corrupted (but also quick, desired, easy) results.
The "principled" approach "takes the high road", does not bias conclusions, does not rationalize-away weaknesses or flaws in methodology or information.
That is, one could hope that a "principled" approach involves *no* conflict of interest for the parties involved, and could be trusted. At its worst, "unprincipled" approaches (which no one would ever admit to, except perhaps as a mildly perverse claim to fresh unorthodoxy) produce completely untrustworthy outcomes, because those outcomes are chosen in advance, and whatever results are obtained are "interpreted" to support the original premise.
A hilarious example I witnessed was a computer science M.S. (details elided to protect privacy), on which I was an "outside examiner", in which "the goal" was to prove that two bunches of events were correlated, thus proving that the people who were promoting the one as "cause" of the other were right, and people should invest in their product. (Nevermind that correlation is not causality.) The guy failed to find any correlation in any of the first twenty or so statistical tests he applied... but he kept at it, until he found a statistical test that *did* seem to assert a slight correlation.
Of course, what he had *really* proven was that there was apparently no correlation... but, taking an "unprincipled" approach, claimed the opposite of what his own evidence showed, etc.
Upvotes: 3 <issue_comment>username_2: A 'principled approach', at least the way that I've been exposed to this term, implies due care and diligence with regard to the rigor and discipline used in the material's context. A paper that describes a principled approach would be one that presents a procedure for the execution or evaluation of a given subject matter. For example, if I picked up a paper titled 'A principled approach to algorithm selection and implementation', then I would expect the contents of that paper to clearly enumerate a system of algorithm analysis, with exhaustive supporting documentation.
Conversely, a paper which uses a principled approach would be one that follows such a detailed and rigorous methodology that the data collected from its research may be considered to be functionally without bias and with a low probability of corruption or inaccuracy.
Upvotes: 4 [selected_answer] |
2013/06/19 <issue_start>I've read at least one career advice essay that calls out asking your PhD students to call you by your first name as unprofessional.
My coworkers and I always called our PhD advisor by his first name, and a graduate student calling *any* professor by their last name, much less their own advisor, strikes my sensibilities as quaint and old-fashioned (undergraduates are a different story, of course).
What is the standard practice for this?<issue_comment>username_1: I always have my students call me by my first (given) name. Currently, I'm teaching in Asia, and the students have the local custom of calling everyone Mr. Givenname or Miss Givenname (yes, even if she is married - strange, I know). This is completely different from my native culture, but I bring my culture with me...for a reason.
I have no desire to introduce the formality of being addressed in any sort of official way. I feel it distracts from the importance of focusing on the matter of education. As long as my students do not refer to me in a rude way, I'm quite flexible. I do, however, encourage (without insisting) them to use simply my first name, without any title, rank, or any other identifier. This is true not only for my graduate students but for my undergraduate students as well.
Others in my departments, most notably Asian teachers, do prefer to have a greater level of formality. To each their own. It really does come back to culture. For me, I allow my students to follow whichever culture they prefer, but I do let them know I don't want formalities to interfere with the educational process in any way.
Upvotes: 4 <issue_comment>username_2: This is definitely a local practice. Here in Germany, it is not standard that *colleagues* call each other by their first names without specific invitation to do so. However, in other institutes, it is now standard policy that everybody refers to each other by their first name. So what is considered acceptable varies very much from location to location and group to group.
Within my own group, my undergraduate students tend to call me "Professor," while the graduate students and postdocs call me by my first name. This seems to me to be a reasonable balance—but I wouldn't really have a problem if an undergraduate who's worked for me for a while calls me by my first name.
A graduate student who isn't in my group, however, should not automatically expect to call me by my first name in an initial email. That would be rather presumptuous.
Upvotes: 5 <issue_comment>username_3: Yes, a PhD is essentially an apprenticeship in academic research, so students should be treated as colleagues in potential (it seems normal practice for an RA to refer to their supervisor by their first name). Also, I think it is a bad idea for researchers to be overly formal and deferential towards their supervisors; if you are working at the cutting edge of your subject, not all of your ideas will be good ones, and the PhD student should feel comfortable pointing out where they feel this is the case. This sort of self-skepticism (being comfortable with the idea of being wrong occasionally) is a key component of being a good scientist, and it seems to me to be difficult to communicate this by example if the student is constantly reminded of their place in the hierarchy by making them call me "<NAME>".
Upvotes: 6 <issue_comment>username_4: I recall a teacher of mine saying: "Dealing with different cultures is dealing with different expectations", and calling someone by his title or his first name is definitely related to customs.
Being a French student (of Chinese parents) myself, I have never called my teachers/professors by their first names, but things tend to change just as customs evolve. Maybe it is because of my Chinese background, which implies a strong use of titles (even for family members), explained by the importance of respect for elders in the society.
Then I got to study in Oslo for a few months, and people explicitly asked me to call them by their first names, which I later did. However, it still feels awkward for me to call someone by his first name when he is "much" older than me.
Now what I do is that I say "Monsieur" or "Madame", and use the first name if I am invited to do so.
Upvotes: 4 <issue_comment>username_5: In Japan, graduate students generally address their professors as *Lastname-sensei* or just *sensei*. Using the first-name would be unheard of. Even faculty do not address each other with their given names (unless they are foreigners).
Faculty generally address students by *lastname-kun* or *lastname-san.*
Use of given names in Japan is generally restricted to genuinely close friends and family in private situations.
Upvotes: 3 <issue_comment>username_6: To me, this question is about "how to behave professionally". As a teacher and a supervisor, I choose an intermediate way. I play with three items: first or last name; Mrs./Ms./Mr. or casual; and "you" versus "thou" (a distinction that still exists in several languages; in French, **tu** and **vous**). My aim is to show both respect and equality of treatment. **I dislike professional situations where someone calls a colleague by the first name without reciprocity**. So (as an advisor):
* Up to the Master of Science level, I call students with Ms (or Mrs)/Mr. and their last name, and I expect the students to do the same.
* I propose to my PhD students (during their PhD time) that I call them by their first name, and that they call me by my first name as well (equal footing). Yet, in French, we have a difference between "tu" and "vous" ([see "Tu and Vous"](http://www.french-linguistics.co.uk/grammar/tu_and_vous.shtml)), with shades related to "you" and "[thou](https://en.wikipedia.org/wiki/Thou)"; "vous" is regarded as more polite, and we use the "tu" to talk in everyday life.
* When they have completed their thesis, I generally propose we switch to the more casual "tu", and I leave them the choice to use it, and first names as well. I am no longer in a "supervising position" to impose choices on them.
And honestly, sometimes, I use "vous" and the last name when I meet students I have not met for a long time. **Parental education.**
Upvotes: 2 |
2013/06/19 <issue_start>I'm an undergrad student who is interested in pursuing a Masters degree in complexity science/complex systems in the U.S. I know some schools put this program under physics, math or computer science departments.
I'd like to know which schools provide such a program. Unlike physics, maths or CS programs, it is not straightforward to find such a list. Any recommendations on how to get a complete list of schools that provide a masters degree in complexity science/complex systems?<issue_comment>username_1: Try looking at speakers from conferences dealing with complex systems, e.g.:
* [European Conference on Complex Systems](http://www.eccs13.eu/)
* [NetSci](http://netsci2013.net/)
You can also search for other conferences (e.g. at <http://www.conference-service.com>) and then check the speakers. Furthermore, looking at affiliations from recent (say, the last few years) papers you like may lead to some good trails.
When it comes to webpages that serve as hubs for complex systems, try looking at:
* <http://www.complexssociety.eu>
* <http://www.network-science.org>
Some positions (including doctoral programs) and other resources are at <http://www.barabasilab.com>.
Also, some group websites are in my collections of links (Delicious: [complexity](https://delicious.com/stared/complexity) and [networks](https://delicious.com/stared/networks) or search at [my Pinboard](https://pinboard.in/search/u%3apmigdal)).
Upvotes: 3 [selected_answer]<issue_comment>username_2: Though the [Santa Fe Institute](http://www.santafe.edu/) doesn't have a Master's program, it has a [Complex Systems course](http://tuvalu.santafe.edu/events/workshops/index.php/Complex_Systems_Summer_School_2013_%28CSSS%29).
Upvotes: 0 |
2013/06/20 <issue_start>I would like to have a section titled 'Awards/Honors' in my CV and I am confused about how much detail to include for the items in this section. Here are some formats and levels of detail I am considering:
1. Harry Potter scholarship.
2. Hogwarts school of Witchcraft and Wizardry, Harry Potter scholarship.
3. Hogwarts school of Witchcraft and Wizardry, Harry Potter scholarship: awarded to one student every year for demonstrated excellence in horcrux-gathering.
For certain items, I feel like the first option does not include enough context, except for some very well-known awards (such as a Fulbright or Goldwater, neither of which I have received). For some particular names I have in mind, the second option looks pretty clunky and does not always fit on a single line. The third option is even clunkier, and might be perceived as an attempt to add fluff to inflate the CV. I am personally leaning towards the second option, but I don't know that it adds much value beyond the first.
Is there an accepted way to list awards/scholarships? Should I even be listing them at all?
**Some context.** I am a PhD student in Mathematics and I will be on the academic job market this coming year. I am currently based in the US and intend to apply to jobs here as well as internationally if I find any that are interesting. I am wondering both about the CV I keep that lives on my professional website and the one I hope to send as part of my job applications.
---
**Edited to add**: The first couple of answers suggest that one might choose the format/level of detail based on the type/prestige of award. If I were reading a CV and came across inconsistent formatting within a section, I would find it quite jarring. Is this something that only worries someone mildly-OCD such as myself (I notice things like en-dashes and em-dashes) and therefore perhaps to not be worried about, or should I be careful about such things (e.g. consistency of formatting) when creating a CV?<issue_comment>username_1: The **amount of detail** you give should be **proportional to the award's prestige**. So for example,
* If you were recognized as an "excellent" teacher, among a list of 100s at your university, that's worth mentioning, but it gets formatted as option 1.
* If you won a best student paper award, that probably gets option 2.
* If you won "best PhD thesis in your department" from a department with more than say 10 graduates each year, that might merit option 3, although it's likely to fit easily in format 2.
* If you won a prize from a professional society (awarded to very few each year), that might merit option 3.
Upvotes: 4 <issue_comment>username_2: I will disagree with username_1's answer above and state that you should only be choosing between options 2 and 3. At the very minimum, you want to indicate **who** gave you the award. Otherwise, it's not all that valuable to the reviewer of your CV, as they may not know whether an award is a big deal or not. If it's a major award, I'd default to option 3 if it's "non-obvious" how it works.
So, in general, I'd opt for option number 2, unless there's some specific information that needs to be shared, in which case I'd go for number 3.
Upvotes: 5 [selected_answer]<issue_comment>username_3: I would go so far as to give the opposite of username_1's advice: the more prestigious the award, the less detail you need to provide.
If you won the NSF fellowship or the Hertz, or got best paper at your national conference, everyone will understand what the award is and its significance, and belaboring the point has the risk of looking desperate or inexperienced.
On the other hand, if you won your department's annual [famous former faculty] Memorial Award for Excellence in Teaching, the award is still worth listing and you should include at least that much information on the CV. Assume nobody is going to take the time to Google your awards.
I would say, as my rule of thumb, that each award should fit on one line. If explaining an award's significance requires more than one line, that is a red flag that perhaps the award is not pulling its weight. Understating your accomplishments and aggressive mediocrity will both kill your chances in the job market, but in my opinion the latter is the most dangerous of the two.
Upvotes: 4 <issue_comment>username_4: You got good answers. I just want to add that it is usually a good idea to list the relevant year(s). I typically use this format:
Mickey Mouse Prize. Institute of Advanced Cartoons. 2010. Extra comments if needed.
Upvotes: 2 |
2013/06/20 | 661 | 2,929 | <issue_start>username_0: Assume that I publish many papers in many journals and someday I changed the family name (my last name). How can I edit that where my papers have been published? and is it easy process?<issue_comment>username_1: I am afraid that you can't easily change the name on a published article. Publishing is (at least theoretically) still done on print, so there is simply no way to change the physical journal once it has been sent to university libraries all over the world. Even for more important issues such as plagiarism or factually incorrect data, only an errata is issued, or the paper is marked as "retracted".
What you can do instead is advertising the double name on your webpage and CV, and ensuring that the academic databases (such as Web of Science and Scopus) correctly recognize and handle your name change, marking all of your papers as written by a single author. You will probably need to notify them using the "contact us" functions on their websites.
Several authors in the same situation choose to keep the old name also on new papers; this makes it simpler for other researchers to recognize you, at the price of using a name that you might have disowned and now consider a relic from the past. In practice, there is no requirement that your academic *nom de plume* coincides with the one that is written on your ID and that you use on legal papers, so you are free to sign your papers using a different version of it. Once you choose this route, however, it will be more practical if you consistently use the old name also when attending conferences.
If you are simply getting married, then signing your papers with both surnames is probably the easiest option. (I realize that probably you have already considered and discarded this option, but I thought it more appropriate to include it in my answer anyway.)
Upvotes: 5 <issue_comment>username_2: As others have pointed out, there is almost certainly no way to change your name in already published papers. In line with the previous answer, I would like to call your attention to initiatives like [ORCID](http://orcid.org/) and [ResearcherID](http://www.researcherid.com/Home.action?returnCode=ROUTER.Unauthorized&SrcApp=CR&Init=Yes) that aim at creating unique identifications for each researcher, so you can collect all your scientific output under a single ID, more or less independently of a particular name or spelling.
Upvotes: 4 <issue_comment>username_3: There are two strategies that I have heard of that people with renowned accomplishments tend to employ when they get married (which is, I think, the most common situation in which a person changes their last name). It certainly depends on your legal system, but in my country they either keep the old name or use a composite last name. The latter means that if a person's name was Smith and their spouse's name is Brown, they change the name to Smith-Brown.
Upvotes: 0 |
2013/06/22 | 2,863 | 11,911 | <issue_start>username_0: I apologize if this is too long a post, but I could really do with a few
pointers about my current situation.
I am 25 years old and I will complete **4 years** of my PhD in a computational applied mathematics program in the US in August 2013.
My bachelors was in pure mathematics. I had gotten interested in numerical analysis in that time and so I had applied to my current PhD program. I have been under my adviser for 3 years now (the first year at my university is spent in coursework). A PhD at my university is usually 5 years long.
Right now, I have **almost nothing** to report in the way of research, and consequently no publications , no conference submissions. I am getting increasingly nervous and frustrated about whether or not I will make it, **even** if I give myself an extra year by funding myself.
My adviser has consistently been making me work on uninteresting stuff, where most of the work involved is purely technical like writing brain-dead code, with almost zero chance for innovation.
***BREAKDOWN OF MY PHD***
After monkeying around reading research papers, in the **first year** under my adviser, he got very confident about getting an industrial project and got me working on that, in anticipation that the contract would go through. At the end of the year we found out that we did not get the project.
In the **second year**, he said he wanted to get into GPU parallel computing
and to implement a few fluid dynamics algorithms. I slogged over many manuals, spent months and months writing and debugging code, all the time thinking that
this would be used to do some simulations he was interested in and get them published. But at the end of the second year my professor completely lost interest in these numerical techniques he was making my implement.
Seeing his capricious attitude, I almost wanted to quit then and there. But I decided to just stick it out, thinking it might be 'just a phase'. Due to funding issues, he once more got me working in the **third year** on another project which essentially involved writing a lot of stupid code, and running endless benchmark tests.
I have basically ended up trying to do a PhD in mathematics without any mathematics in this PhD.
Finally, a couple of weeks back, I told him that I had had enough, and to give me some actual
problems/material to work with. After about an hour of discussion, and informing him that I was ready to fund my self if required, he finally gave me a couple of possible starting points for what I hope would actually turn out to be worthwhile research.
***MY QUESTIONS***
* I do realize it was *extremely* foolish waiting for so long before putting my foot down, and not having the courage to speak up before. My adviser is well-regarded by colleagues in his field, and maybe I was subconsciously scared of contradicting his handling of my PhD for fear of pissing him off.
But even though he has now suggested problems which do seem interesting, after having had so many negative experiences I am very skeptical about the future. How should I proceed, and what are the factors I should consider?
Frankly, I am feeling very burned out. In the way of future plans, I have been toying with the idea of dropping out, taking a break for a few months and then sitting for some entrance exams for a Masters in Economics at some good universities back in my home country. I always found economics very interesting through my undergrad, and more so these past few months while studying it as a hobby.
* Continuing would require me to stay on for an extra year till August 2015,
which leaves me **about 17 months tops** from now, before I start hunting in academic job market. This includes about 2-3 months I will have to spend doing literature review on the proposed topics and learning the requisite mathematical tools.
So if I decide to stay on, how should I restructure my study/research time and the relationship with my adviser in these 17 months so that I can make some headway?
Maybe 17 months is too short a time? Any suggestions would be really helpful !!<issue_comment>username_1: Firstly, no apology needed, your question is thorough and easy to read and understand. It sounds like you are in quite an unpleasant situation.
Don't take any advice I may give as gospel, but in answer to your questions:
You are most certainly **not** foolish to wait until now to stand up to your advisor; you have had several leads and have given the project many chances to kick into gear. You have every right and reason to feel skeptical about the current promises and project direction.
Ultimately, how you proceed is up to you (you're probably understandably sick of hearing that), but look at the following considerations (no doubt many other members will add to this):
* The new direction could well be a winner, leading to papers, conferences and most of all, fulfillment. It could also be a good one because, now, you have made your feelings clear to your supervisor.
* Could this be just another academic 'false positive'?
Perhaps outline a couple of potential papers and present them to your advisor (this is something I do). This could be an ongoing thing, alongside your research - outline potential papers.
As for the timeframe, 17 months - I would not be too worried about that - I have been able to get three papers published in less than ten months, with a 4th on the way and the 5th planned (I finish my Ph.D. at the end of the year).
I hope this helps, and I hope it all gets sorted out for you.
Upvotes: 6 [selected_answer]<issue_comment>username_2: It really helps if you take some time off and get a job in your field if you can. That way it serves to rejuvenate your mind and gives you a breather. By doing so you can hit three birds with one stone: 1) you take a break and feel better; 2) you have some money to use; 3) you become more interested in other subjects you never thought you would have liked, such as technology, fashion, the business world, different languages and cultures, etc.

I didn't say to quit; I said to take some time off to "find yourself, to rediscover yourself". Will it help? Maybe, maybe not. Perhaps it helps to look at yourself in comparison to the rest of the world. You're doing a PhD, which is one of the most prestigious and most sought-after degrees in the world. Not many people are able to do that, let alone get a bachelor's degree. Be grateful. Some people don't even have enough money to afford a day's meal or a roof over their head. Hell, be lucky you're not in combat or a war. Sometimes it helps to appreciate what you have. That may give you the motivation to keep going.

Maybe a PhD isn't for you. Maybe it is for you. Maybe you're meant to become the next Bill Gates or the next president. Who knows. My point is, no matter what happens, always keep your head up, stay confident, and don't ever give up. Take a break, but don't give up. And don't worry, you are not alone. The fact that you made it this far shows you are a winner (<NAME>, haha, just kidding). Don't give up, doc.
Upvotes: 0 <issue_comment>username_3: One thing that you need to consider is the way quitting is going to look in your CV. If I were a prospective employer and I saw that you spent 4 years in a PhD program without getting a PhD (or even publications, for that matter), that would be a huge red flag. I'd wonder if you spent those years doodling on Facebook and hanging out in cafes. I could even reason that perhaps you are just not as bright as you claim to be. Either way, that's not the kind of person I would want in my company. If you quit now, you should really find a way to preempt these kinds of concerns.
With respect to time, I can tell you from experience that 17 months is more than enough provided that (i) you have a clearly defined dissertation topic; (ii) you work hard (and here we are talking about 60-to-70 hours/week; one of the guys in my cohort wrote his entire dissertation in 12 months and his girlfriend complained that, during those 12 months it was almost like she didn't have a boyfriend at all); and (iii) you have a good supporting network of peers and mentors to keep you going in the right direction. I'd say that, at this stage, (i) and (iii) are the most important points. If you can produce a proof-of-concept paper within the next couple of months and a couple of more experienced people agree that it is a worthy project, then you've overcome the largest obstacle.
Upvotes: 2 <issue_comment>username_4: I just wanted to share my experience with you as I am going through *almost* the exact same situation.
After a promising two-year start, killing my physics classwork and getting my Masters in physics, I picked an adviser and took over a project that a graduate student, who was graduating as I was joining the group, had been working on. Like you, I spent almost all my time coding (a good deal of it CUDA programming) or dealing with certain mathematical problems. I've spent approximately 5-10% of my time on physics and feel that I've done more reproduction of others' research, albeit in a more innovative and optimized way, than answering new questions.
I chose to use my time guiding these different projects to learn job-market relevant skills. I use my status as a student to take advantage of school-specific career fairs and professional development. What I've found is that there is quite a demand for physicists (and *even more so* computational applied mathematicians) out there. It also opened my mind to the types of skills the job market is looking for.
As a result, I've found my anxiety concerning lack of research results has dropped off dramatically! The burnt out feeling I had dissipated considerably as I started seeing that the skills I was learning directly contributed to my future success. I'd highly recommend you start the job search *now* and try and pick a project that *you* enjoy that would make you even more attractive to an employer you'd enjoy working for. Also, as for restructuring your relationship, I agree with you that you should indeed take more of a lead in your own research topics. Find projects that force you to learn modern, in-demand techniques and methods, especially those YOU find interesting. It'll help you from feeling burnt out.
Don't worry about quitting the Ph.D. Contrary to other answers, none of the employers I interviewed with cared about me quitting the Ph.D. In fact, they were specifically trying to hire Masters or below. I guess if you have your heart set on academia, then quitting the Ph.D. is an issue. In summary, I'd just say start your job search now and tailor your studies towards employment: it'll help your motivation stay high, produce solid results and allow you to seamlessly hit the job market when you finish!
Another possibility: get an internship. It'll help you get your foot in the door somewhere, give you some much-needed professional experience on your resume and a much-needed change of scenery. I find that when I take a break and come back to a project, I can hit it all the harder and get over some of the bad humps.
Upvotes: 2 <issue_comment>username_5: I have been through far worse situations in my life than leaving a Ph.D., and all were hard decisions. When something does not work, it is like trying to shore up a building severely damaged by an earthquake with temporary supports. The problem is that you can never build a skyscraper that way, and you will be stuck with a building only a few floors tall for the rest of your life.
If you demolish your useless building, in the future you can build a strong skyscraper. Of course you will be homeless for some time, but you are still very young with many options. So if you can get a Master's degree instead of the Ph.D., definitely leave. I think your advisor will also look on this favorably, and he may even write good recommendation letters for you.
Upvotes: 0 |
2013/06/22 | 902 | 3,835 | <issue_start>username_0: Given both of the options below, I would like to know which of the two evaluations officials at American universities would follow.
By internal evaluation, my GPA at some of the universities is translated/scored as 2.87/4.00, barring me from all of their programs.
By WES evaluation, my GPA is translated/scored as 3.25/4.00. This option allows me to apply for all the programs at the said universities.
Which of these evaluations would the university use?
Does WES's evaluation hold any weight?
Wouldn't this be a problem for those who are not aware of such services?<issue_comment>username_1: As far as I know, American universities do not rely on "transcript evaluation services"; they ask you to convert your GPA to whatever system they happen to use, since there are so many of them out there (5-point versus 4-point systems, A+/A/A- versus A/B/C, and so on).
You could of course provide the results of such an evaluation service, but I would **not** expect it to carry any weight with admissions committees, who can accept or ignore it as they please.
Upvotes: 3 <issue_comment>username_2: Every school does admissions in a different way, so it's hard to say, but I've never heard of *anyone* using any external services to evaluate transcripts. I'd say send in your application, and let the chips fall where they may.
Although quantities like GPA, GRE scores, etc. might be used as a filter to narrow the applicant pool, what will really get you accepted are 1) strong letters, 2) successful undergrad research projects, and 3) direct contact with a potential advisor at the university. If a professor knows about you and your work and wants you as a student, that will go an incredible way towards getting you admitted no matter what your GPA.
Upvotes: 3 [selected_answer]<issue_comment>username_3: My experience differs from the others. Although I am American, my wife is Polish and she applied to a slew (by which I mean 9) of grad schools for computational linguistics. Of them, three wanted the external transcript evaluation, and they even recommended which service to use.
But they also asked for a copy of the original transcript.
I don't know what impact it had, but she ended up going to a school that didn't require such an evaluation. But there is a funny story here: nowhere on her transcript did it say that she actually finished her degree program. So when her chosen graduate school requested her "final transcript" (even though my wife graduated a year and a half ago, so they already had her final transcript), she ended up sending them an extra copy of the evaluation along with her explanation that she had, in fact, graduated.
From my point of view, the whole external evaluation process seemed overly expensive and annoying. I would advise that you just ask whatever graduate program that you're interested in whether they want applicants to have them (they don't bite, really), and hope that they don't.
Upvotes: 1 <issue_comment>username_4: Many smaller (and even some larger) academic institutions may not have resources dedicated to being familiar with all the various methods of grading and evaluation used by foreign schools. In these cases they will require the applying student to incur the added effort and expense of having an expert in such matters review the transcripts and then translate the material into a report that the institution can comprehend. They probably have a criterion for what kind of expert they will consider acceptable -- perhaps some kind of certification? -- but it is always in the student's best interest to carefully select an expert who not only meets the criterion but also really understands the way that the foreign school did business (including any common cultural practices, such as de facto lower grades for female students).
Upvotes: 0 |
2013/06/22 | 958 | 4,035 | <issue_start>username_0: I am a second-year grad student who is trying to find advisors in two people (quite brilliant scientists!) who are going to join my grad school as faculty. They are going to be on campus only rarely now but will be full-time here from next year.
* Does this situation sound very bad or scary or depressing or something wrong?
* Am I late into the game?
So I have done studies and have written up research drafts in areas related to these scientists and have been trying to get into discussions with them over emails. Both sounded quite interested in me - one of them met me for a few hours of discussion while in campus about a month ago - and the other one said "we should keep in touch and meet when I am there the next time" etc.
* But I get very scared and nervous when I don't get replies from them even a week after the last email (stating my progress in their respective subjects) - I am always wondering whether they have struck me off from their minds - did they just forget me - did they decide I am not worth it, etc.
[...I am getting sick of just the unbearable tension of the fear of having been dropped...]
Anyway, is the implicit expectation that I am going to read up on all current papers in their fields and be able to come up with a paper on my own? (...that's what I am trying to do, but clearly that's not easy!...) I don't know how "advising" is supposed to work with so little contact (...maybe there is a culture conflict, because in my previous institute one met one's professors daily, and even multiple times a day at times...)
[just a side information - may be irrelevant but still for completeness of information - I think I am way ahead of my peers in terms of depth and breadth of knowledge and speed of learning new papers and my grad school grades are all at the top..]<issue_comment>username_1: I personally would be a little *too* concerned about a graduate student who keeps trying to "hard sell" themselves *before I arrived.* Partially this is because, if I were just starting a new position, I'd be worried about a million things, including winding down my previous employment situation, preparing for a move, figuring out all the different things that have to be done in the new position, and so on. Others may very well be different, though!
Note that I don't think it's wrong to be active when you sense a good opportunity, such as working with a scientist you hold in high regard as an advisor. However, being too aggressive may be just as damaging as being too passive. Steer clear of both extremes. For instance, have the advisors in question *asked* you to send them weekly updates? Have you asked them to schedule a phone or Skype chat? Do you know if they are even "at home" or if they're on travel when they're not responding?
Advisors have their own personal styles, and your style should mesh with theirs. If it doesn't, it will likely be an unproductive and unhappy situation for both of you.
Upvotes: 3 <issue_comment>username_2: Understand that faculty, even junior faculty, can get over a hundred emails a day, and even dealing with only the most urgent of these, such as
* Bureaucracy from the department chair / funding agency program director / etc.
* Requests from existing students and collaborators
* Reminders about late paper reviews
* Conference and travel logistics
* Letter of reference requests
* Complaints about grades from undergrads
takes up a huge chunk of their time. Recruiting good students is also usually a high priority... but if you've already agreed to work together next year, and have established an outline of what you can be doing until then to prepare yourself, I wouldn't read too much into a slow response to your follow-up emails, especially if they are in the form of long reports.
The best approach is probably the direct one: ask them what you can do between now and the fall to get a head start on the research project, and what kind of updates, if any, they would like from you between now and then.
Upvotes: 2 |
2013/06/22 | 431 | 1,585 | <issue_start>username_0: Some journals like to abbreviate journal names in the papers they publish, and the AMS maintains [a list of abbreviated journal names](http://www.ams.org/msnhtml/serials.pdf) for those who need them. Is there a similar resource for conferences (in computer science)?
EDIT: to clarify, I'm not looking for acronyms (SODA, STOC, ICALP), but rather for something like "Proc. 6th Ann. ACM-SIAM Symp. Discrete Algorithms".<issue_comment>username_1: I am not aware of any *comprehensive* list of abbreviations for CS conferences. One way to see many abbreviations is through CS conference listings and ranking. For example, [here](http://core.edu.au/cms/images/downloads/conference/08sort%20acronymERA2010_conference_list.pdf) and [here](http://en.wikipedia.org/wiki/List_of_computer_science_conferences). The most obvious way is googling the name of the conference then checking the conference website.
Upvotes: 2 <issue_comment>username_2: AFAIK, there is no such resource, and even if it existed, it would not be very useful.
In general, people do not know the full names of conferences, only their acronyms. The full names tend to change slightly every now and then, while the acronyms are much more stable.
You can safely write pretty much anything that resembles the correct name, as long as you include the acronym. You can often save some space by removing useless words such as "Annual", "International", "ACM", "IEEE", etc.
For example, *"Proc. 6th Symposium on Discrete Algorithms (SODA)"* would be perfectly fine and unambiguous.
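As a rough illustration of the advice above, here is a small Python sketch that builds such a citation form from a conference's full name, acronym, and edition number. The filler-word list and the ordinal logic are my own assumptions for the example, not any official standard:

```python
def ordinal(n):
    """Return '6th', '21st', '22nd', etc. for a conference's edition number."""
    if 10 <= n % 100 <= 20:
        suffix = "th"
    else:
        suffix = {1: "st", 2: "nd", 3: "rd"}.get(n % 10, "th")
    return f"{n}{suffix}"

# Illustrative set of low-information words to drop; "acm-siam" is included
# purely as an example of a sponsor token one might remove.
FILLER = {"annual", "international", "acm", "ieee", "acm-siam"}

def cite_form(full_name, acronym, number):
    """Build an informal form like the SODA example in the answer above."""
    kept = [w for w in full_name.split() if w.lower() not in FILLER]
    return f"Proc. {ordinal(number)} {' '.join(kept)} ({acronym})"

print(cite_form("Annual ACM-SIAM Symposium on Discrete Algorithms", "SODA", 6))
# Proc. 6th Symposium on Discrete Algorithms (SODA)
```

Since the acronym is what readers actually recognize, any variant produced this way stays unambiguous even if the kept words differ slightly from the official name.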
Upvotes: 2 |
2013/06/23 | 1,045 | 4,458 | <issue_start>username_0: I am interested in the statistics of early college graduation, or more generally, the statistics of extreme ages in academic settings (high school, college, grad school, etc.). For instance, how many students who earn a college degree graduate one, two, three, or more years earlier than the typical age of 22 or so? I am willing for any statistics to be qualified in any way (percentages in country W, at university X, in state Y, or from year Z). A quick Google search does not easily reveal this information.
I estimate that less than 5% of the population graduates college two years earlier than normal, based on my acquaintances, but this sample is likely biased since I am a graduate student, so the true figure may be smaller.
EDIT: It occurs to me that my question does not ask for any opinions on whether early graduation is good, neutral, or bad. Perhaps it would be interesting to expand the question and have those who experienced graduating early give their stories or opinions.<issue_comment>username_1: At least in the USA, the number is probably *much* smaller than 5 percent.
The reason for this is that there are generally fixed lengths for education, and minimum enrollment ages at which the process can start (at least for publicly educated students, who are still the majority).
Finishing two or more years ahead of schedule means that you probably have had at least two events that belong to the following categories:
* Started education a year earlier than "normal" (perhaps because of birthday-limited enrollments)
* Skipped a grade during primary or secondary (high-school) education
* "Accelerated" college studies by reducing the expected enrollment time by either a semester or a full year (through early accumulation of credits via work in high school, or taking college placement exams, credit overloading, and other methods)
The first is the most common, but still only applies to about one-third of the population. The others are much less frequent, with the second probably pertaining to only about 1 percent of students (if that many). The third also probably is not that common, but I don't have hard numbers (but again, probably less than 5 percent of college students finish in three years or less!).
Now remember that you have to have at least *two* such events, and you can start to see why the odds are stacked against a 5 percent rate.
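To put a rough number on this reasoning, here is a back-of-envelope sketch in Python. The event probabilities (roughly 1/3, 1%, and 5%, taken from the figures mentioned above) and the assumption that the events are independent are both crude simplifications, so the result is only illustrative:

```python
from itertools import product

def prob_at_least(k, probs):
    """Exact probability that at least k of the independent events occur."""
    total = 0.0
    for outcome in product([0, 1], repeat=len(probs)):
        if sum(outcome) >= k:
            term = 1.0
            for hit, p in zip(outcome, probs):
                term *= p if hit else 1.0 - p
            total += term
    return total

# Assumed rates: early school start (~1/3), skipped grade (~1%),
# accelerated college (~5%) -- rough guesses, not measured data.
p_events = [1 / 3, 0.01, 0.05]
print(round(prob_at_least(2, p_events), 3))  # about 0.02, i.e. roughly 2%
```

Even with these generous inputs, needing *two* of the events drives the estimate well below the 5 percent figure from the question.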
Upvotes: 4 [selected_answer]<issue_comment>username_2: In the UK this number is likely to be extremely small. Within our department we circulate a list of all students under the age of 18 at the beginning of the year. This includes both our students and students sitting in on our classes. For this sample size of about 500 per year, the number under 18 is typically about 1%. Of these, the vast majority turn 18 in their first year. Further, either there is a bug in our software (possible) or the remainder turn 18 during the summer, since I have never been told about a second-year student being under 18. The sample size for second-year students is about half as big, since we do not get drop-ins to our second-year classes.
Obviously this could be biased by our department or university not attracting these students.
Upvotes: 2 <issue_comment>username_3: Specifically addressing the edit to the question regarding experiences as a younger student:
Aside from what I've already posted in my comment, I guess the only social aspect that I felt I missed out on was getting into university-sponsored senior class events where alcohol was served (I was under 21 at a US school). Most of the time, my age was not an issue socially. I was also an RA (residential advisor), so my age was not assumed to correlate with lack of leadership ability. Academically, it might have even helped me get certain positions, because I was seen as a "driven" individual with prior academic success, and all of those positions helped me get into graduate school.
Now that I'm in graduate school, the only thing I really miss is not having interesting stories to tell about cool things I did during my gap year(s). That problem is common to a lot of people in my class, though, regardless of them being a few years older than me. I don't have people that are my age in my year, but I don't think it's affected my academic success here: I've had four years of high school, four years of undergrad, and three summers of undergraduate research, like (or better than) many of my peers.
Upvotes: 2 |
2013/06/23 | 1,119 | 4,980 | <issue_start>username_0: All of the following takes place in a UK university.
I have a BSc in Physics and an MSc in Computer Science. My thesis was on applying various machine learning/statistical techniques to biological datasets. I wanted to do something similar for my PhD, however my supervisor left the university.
I am now in the first year of my PhD in Computer Science, specifically Computational Biology. My work focuses on comparing different techniques (physical/statistical/machine learning) in single cell simulations. I am finding it hard to incorporate machine learning techniques into my work as there aren't many datasets for the kind of thing my supervisor wants me to do and so the machine learning approach is proving tricky.
I desperately want a job/postdoc in a machine learning/stats environment.
1. Lots of postdocs I know switched fields after their PhD, e.g. astrophysics to machine learning, dependable systems to machine learning, biophysics to compiler design. In my case, would anyone in the ML community take me seriously? (I thought my MSc would help me out...)
2. I have taught myself a fair bit of ML and stats; is there anything else I should do to increase the likelihood of getting an ML/stats postdoc?
3. Would anyone in a stats department take me seriously as I have no maths degree?
4. Do people who change career areas have successful careers, or is this normally a red flag?<issue_comment>username_1: Some thoughts on your questions (please don't take any of this as gospel; I am in the final stages of my PhD and am looking for a postdoc myself).
Your ML MSc would more than likely benefit you in any postdoc application (to what extent would depend on the institution). Something to consider: is it possible to build/include ML principles in your current research?
One major way to get noticed in the fields that you are interested in is to get published in peer-reviewed journals and present at relevant conferences. Speak to academics involved in your field of interest, speak to your supervisor/advisor - perhaps inquire if there would be a chance of collaborative papers/conference presentations.
As for changing career paths, this is increasingly the norm - my own example is a switch from economic geology, through teaching, to atmospheric physics. One major thing here is to focus on the skills that you have developed, particularly in research.
I hope this helps.
Upvotes: 2 <issue_comment>username_2: My answer is based more on experience from computational biology, but I think it is relevant for other fields:
* Changing fields is very common in academia, especially at the PhD/postdoc transition. In many cases it is actually considered an advantage, since you can import your skills, expertise and a certain thought-process into a field in which many people do not have those skills. For example, many physicists, computer scientists and mathematicians have migrated to biology and have made significant contributions. In fact, there are even postdoctoral fellowships that specifically fund this type of field-change.
* Regarding your "will they take me seriously" questions: Since you are aiming mostly at applied ML/stats, I don't think you should be too concerned if the ML/stats theoretical community take you seriously. Many theorists tend to look down on applied science - don't worry about it, you can still have a significant impact without advancing any theory. It sounds like in the future you will either belong to the department in which you want to apply the techniques (e.g. a biology department) or will work very closely with people in those departments. In this case, you will usually be considered the ML/stats expert.
* Having said all that, of course it is your job to become an expert. Teaching yourself the theory is important, but if you are going for applied science, especially applied ML/stats, it would be a big advantage to get actual experience in using them. There is a huge difference between learning about these methods and actually implementing and using them. You will see that during your PhD you can often expand your research in directions you are more interested in. It shouldn't be too difficult to use some ML/stats creatively in some sub-projects (which could later be expanded).
Upvotes: 3 <issue_comment>username_3: It depends strongly on what you want to do after pursuing a PhD degree. More precisely, if you want to work as technical staff, then yes, it affects your career chances: it doesn't help a company that you are an expert (whom they would have to pay more than average) in a different area, while you lack a *proven* solid background in the area for which they want to hire.
However, if you decide to work in management, sales, marketing or administration (e.g. signing applications), then it doesn't matter in which field you have your PhD. Some positions simply require a PhD title, nothing more.
Upvotes: 0 |
2013/06/23 | 1,135 | 4,493 | <issue_start>username_0: I would like to pursue PhD studies in my field of Computer Science. The problem is that I have to work in my native country. I have read that this university, UNISA, is known for its online Master's and doctoral programmes.
I ask because I have seen various comments from people from the United States or Europe who wanted to enrol in these online degrees. Does anybody have experience, or know whether that university is recognized worldwide? Or would it only be a waste of money and time? If the latter, which reputable institutions offer online PhD degrees in CS?
Note that this question is not only of local interest: as I mentioned in the paragraph above, there are people from all over the world who want to take those UNISA degrees.<issue_comment>username_1: I do not have direct experience with UNISA, but universities of this kind (with massive numbers of students) do not usually have an international reputation. They normally address the local need for graduating professionals, whereas what pushes a university into the top ranks is interactive contact between staff and students, which is almost impossible to achieve in a university with 5,000 staff and 300,000 students (even in the digital world).
Thus, if you do care about your education and the reputation of your PhD degree, it is more reasonable to choose a university with international standards.
Upvotes: 1 <issue_comment>username_2: I'm a postdoc at the Computer Laboratory at the University of Cambridge, and do collaborate with some researchers at UNISA. However, this is with the Department of Decision Sciences, rather than Computer Science - I don't know the latter.
When it comes to PhD studies, the advisor generally plays a large role, too, compared to the university. I'd recommend looking at what and where your potential advisors publish, where their co-authors are from, etc., to get a rough impression of how connected and well-regarded they'll be in your chosen discipline.
Upvotes: 2 <issue_comment>username_3: I offer the following evidence from credible and official sources **against** doing a PhD in Computer Science at UNISA:
* **Low graduation rates**: 17 doctoral students graduated from the College of Science, Engineering, and Technology (which includes the PhD in Computer Science degree) in 2010, 2011, and 2012 combined. Compare this to the 99 doctoral student enrollments five years earlier in 2005, 2006, and 2007 (combined). Graduation rates are expected to be somewhat low for distance learning students, but rates this low are a bad sign. ([PDF source](http://osprey.unisa.ac.za/pg/req.pdf))
* **Inadequate supervisory capacity**: The school admits that "many [research] areas in the School of Computing have reached supervisory capacity" and says that "long waiting lists started forming due to lack of supervisors," which is another worrying sign. ([Same source](http://osprey.unisa.ac.za/pg/req.pdf))
* **Very, very low research output**: Research output is arguably the most important indicator of a reputable PhD program. On [this page](http://osprey.unisa.ac.za/supervisors.php), 6 professors in the School of Computing (which offers the PhD in Computer Science degree) are listed as having openings for PhD students. I looked up the Google Scholar and/or DBLP profiles of these 6 (for those that had them) and consulted personal pages of the rest to get a sense of their research productivity.
+ Averaged across two years (2012 and 2013), the number of publications per person per year was 0.58 (range 0-1.5).
+ Of the 7 publications I found for these professors in 2012 and 2013, 3 were in conferences or journals with the name "Africa" in the title (i.e., not international venues). So the average publication rate per person per year in non-local venues was 0.33.
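The averages in the two sub-points can be reproduced from the quoted counts with a couple of lines of Python (this is only the arithmetic behind the figures; the counts themselves are the ones given above):

```python
# Publication counts quoted in the bullet points above
total_pubs = 7   # publications found for the 6 professors in 2012-2013
professors = 6
years = 2        # 2012 and 2013

per_person_per_year = total_pubs / (professors * years)
print(round(per_person_per_year, 2))  # 0.58

# 3 of the 7 were in venues with "Africa" in the title, i.e. local venues
non_local_pubs = total_pubs - 3
print(round(non_local_pubs / (professors * years), 2))  # 0.33
```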
Upvotes: 5 <issue_comment>username_4: UNISA is internationally recognized. The small number of students passing shows by itself that it is not a piece of cake.
I did my BSc in Computer Science and, believe me, I had to work hard to pass and graduate.
Once you are there, the workload, assignments, deadlines, etc., make you forget that you are at a distance learning institution. It felt the same as when I was at a residential school.
I am currently doing my MSc in Electronic Engineering in the UK. It was those same UNISA credentials that took me there.
So no worries, please go with UNISA and you will not regret it.
username_4
Upvotes: -1 |
2013/06/24 | 2,362 | 9,846 | <issue_start>username_0: Here's a not so hypothetical situation. International student X is very talented but comes from a background where technical writing is not taught or understood very well. She writes a great thesis with a good literature review and nice results. However, the results are based on two key papers from previous students in the group. She decides to give credit to the papers in a special chapter, which she starts by saying "I need to give credit to this and that paper" and proceeds by copying paragraphs wholesale to describe what those other students did.
This was a few years back; X is now faculty at a good school and she contacts me (her past advisor) in a teary, terrified voice to let me know that she plagiarized in her thesis. I am now in a panic as well. How could I have missed it? And how could she do that?? We both risk losing our jobs, and she is at risk of losing her degree as well (which, by the way, was a very strong thesis with a good number of top journal publications).
As far as I know there's no process for revising a thesis after it's been submitted and I don't know what else to do short of turning ourselves in - which I feel morally obligated to do.
Please advise.
*Edit*
Thanks all for weighing in on this. I spent the night going through the thesis and there appear to be three more sources that are suspected of being plagiarized, all in the same wretched chapter; one is the thesis of a colleague, another is a textbook, and the third is a book I wrote a while back. So this is more serious than I thought.
She has unfortunately not used quotation marks for the material; i.e., instead of saying "[paper i] says ...," she just went on with "[paper i] says this and that."
She has *not* been accused of plagiarism by anybody. I am guessing that she has finally come to grips with good writing standards and, upon looking through her thesis, she realized that her "summary" was actually plagiarism. I have every reason to believe that she did what she did in good faith (she has proven her honesty on many occasions).<issue_comment>username_1: Take what I say below as one perspective; I am by no means an expert in how to deal with plagiarism.
I will say (as someone who has been plagiarised before) that detecting and preventing plagiarism is the responsibility of all involved. But, having said that, we are human and we make mistakes - is it just that special chapter that contains the plagiarism? How much did she copy?
I think being open and upfront is the best (and most probably, the *only*) course of action, as it would be far worse for both of you if it was detected by another academic, or worse still - the authors of the papers plagiarised. It may be best to be *honest* about both of your *mistake*, rather than being *perceived* in trying to cover it up.
Perhaps find out what options are available in terms of resubmitting the thesis, or even just the chapter in question. The original research in your former student's papers may also work in both of your favour, in that it would show no malicious intent.
Upvotes: 3 <issue_comment>username_2: Well, here there is responsibility on two persons, the advisor and the student, though the advisor bears somewhat less of it. I am pretty sure that your past student signed a non-plagiarism form or put a statement in her thesis that she was not plagiarizing anything, so she was doing that on purpose. My opinion may seem harsh, but that is how it looks.
The only solution is to tell the truth to the Dean and, as far as I know, the penalty will come sooner or later. According to your reply, that person plagiarized about 10 pages and also parts of the appendix, so in that case the only way out is to report the incident.
I do not think that she will lose her degree - remember the scandal that happened in Germany a few months ago? The worst thing that could happen is that she gets somehow "banned", for a certain amount of time, from the journals in which she has been cited. As for your case, I think it is not very probable that you will get into trouble.
Wish you the best.
Upvotes: 3 <issue_comment>username_3: Did any of the plagiarized material make it into journal papers, or was it all literature review that was never published outside of the thesis?
If some of it made it into papers, then it's important to contact the journals and publish corrections. This is more straightforward and predictable than dealing with the thesis itself. If the plagiarism is confined to background material, then I don't think retracting the papers would be necessary. Instead, I expect it would be possible to publish a correction that indicates the plagiarized portions and provides citations. This would be embarrassing and would hurt her reputation a little, but it would solve the problem as far as the papers were concerned. It would also strengthen the student's case for dealing with the thesis if she can say she voluntarily corrected the publications and did not need to retract any of them.
If none of the plagiarized material was published elsewhere, then it's trickier. Once all the original results are published in research papers, I doubt anyone will read the thesis and discover the plagiarism. Even if they do, they might take pity on the student and ignore the plagiarism. (I once ignored some mild plagiarism of my writing in the background sections of a thesis at another university. The student had already graduated, and I found no evidence of plagiarism in any of his research papers. If I knew for sure he could just file a correction to the thesis, then that would make sense, but I wouldn't want to potentially destroy his career over this mistake.) So she might well get away with it if she doesn't say anything. Still, I'd advise her to officially confess to the university. Turning herself in is likely to lead to a *much* better outcome than being caught by someone else. Plus it's the right thing to do, and it will save her from years of worrying about getting caught.
>
> We both risk losing our jobs, and she is at risk of losing her degree as well (which, by the way, was a very strong thesis with a good number of top journal publications).
>
Unless your university is extraordinarily strict, I don't think your job is in jeopardy. On the other hand, the student's degree or job might be, depending on how the university handles the situation. Based on your description, I think it would be unfair for her career to be ruined, but I can't predict what will actually happen. I hope your administration's sense of fairness is the same as mine, in which case a correction will suffice.
The hardest situation will be if she decides to remain silent. In that case you probably have an obligation to turn her in, and it would look terrible if anyone found out that you knew but didn't say anything. On the other hand, turning her in would be a tough decision. Much better for her to turn herself in voluntarily.
Upvotes: 5 <issue_comment>username_4: Um, I'm not sure it is worth bothering about. She gave credit for the results, it was in the context of describing other people's work, and it seems to only involve the language. This is not a literary topic, so I would tell her not to worry about it but to learn how to write.
Upvotes: -1 <issue_comment>username_5: First off, just to make it clear, this is plagiarism. Providing a reference and then a long string of text implies that the text is YOUR description of someone ELSE's ideas. To give full credit to someone else requires some sort of formatting distinction (typically block indentation or quotation marks). Potentially the plagiarism was accidental, but it is still plagiarism.
**Supervisor**
A doctoral dissertation is generally a single-author piece of independent work. Plagiarism in a dissertation should have little direct impact on the career of the supervisor. It might have some indirect consequences, like people questioning how you can be so unfamiliar with your student's work that you do not catch plagiarism, but I think most people would be pretty understanding about this. If the thesis was not single-author, or if the work was published with your name on it, that is a different story, since co-authorship implies you have BOTH plagiarized.
Failure to report academic misconduct (whether it is your student or not) can impact your career. At my university we do not classify a student's failure to report another student's academic misconduct as academic misconduct itself. I don't know the disciplinary process when faculty are involved. Personally, I would say that we all have a responsibility to the scientific process to report ALL cases of academic misconduct that we are aware of.
**Student**
At my university, the penalty for plagiarism by a current student is a mark of zero on the piece of work. This would mean the student would have failed her dissertation. As a department we would deem this penalty too severe and push for her to be allowed to re-submit a new dissertation that reuses the non-plagiarized material. The university would push back and ask for a completely independent dissertation. I have never experienced this with a PhD student, but it occurs regularly with our final-year undergraduates, and about 70% of the time the student is allowed to reuse the non-plagiarized material.
I don't know what would happen if the plagiarism was found after the degree was given. My guess is the University would have to retract the dissertation from the library and any electronic database. They may revoke the degree, but they could also look at other work and count those towards the dissertation.
The current university may try to fire or penalize her, but this seems harsh compared to the typical penalty for plagiarism in a dissertation, namely not getting a degree or having it delayed.
Upvotes: 4 |
2013/06/24 | 872 | 3,848 | <issue_start>username_0: I'm searching the literature at the moment to write a literature review, but I don't know: when should I stop searching and start writing?<issue_comment>username_1: You should develop an outline, in consultation with your advisor, of what is to be included in the literature review - this should be based on the research foci and priorities of your research.
What I do when writing a literature review is to actually write the review at the same time as doing the research.
When should you stop? When you have covered each of the foci and priorities to the point where you (and your advisor) are satisfied that you have synthesised the scope of each of them.
Upvotes: -1 <issue_comment>username_2: You need to be very productive in terms of knowing your area. ***Knowing*** what is happening is different from ***understanding*** it. This includes utilising every available tool that makes it easier to keep up to date with what is published. An easy way, in Computer Science, is setting alerts (both on arXiv and Google Scholar) and following pioneers in your area.
From personal experience: I spent plenty of time gathering, reading and understanding related papers/books, and came across many interesting ideas. But eventually I found that some of my ideas had already been published in others' papers. Some were exactly the same! This is the curse of the PhD. Try to publish before others do, as long as you have the required knowledge and a clear contribution. The literature review then translates into the cumulative process of publishing different papers related to your thesis problem. You stop when you do the defence (assuming you will change area thereafter).
Upvotes: 2 <issue_comment>username_3: The facile answer is **you don't stop searching the literature**. Even as the review evolves, you should be including new references *if* they are noteworthy in addressing questions within the field. This process should continue as long as you are working in the field of the problem.
Of course, from a practical standpoint, you do need to select a cutoff. There should be a reasonable point in time in which you've set the outline of the review, and decided on the main topics and questions to be discussed. At that point, it would be fair to set aside adding more references, and stick to what's been published. However, you should continue to monitor the field, and if further revisions or updates of the text are necessary, then you should include the papers published in the interim as part of your updates.
Upvotes: 4 [selected_answer]<issue_comment>username_4: 1. Start writing the review **before** (or at the same time) as you start searching the literature.
I'd suggest determining the scope of the review before you start searching the literature, and **writing it down right away**. This would answer questions like *Which problem am I looking at?* or *Why is this problem relevant now?* The answers to these questions will also guide your literature search.
2. Stop searching the literature when you stop writing.
Writing is not a linear process - it goes from a rough outline to a focussed text. As you narrow the focus, you will need to research more and more specific references. While you define your questions, search for literature dealing with those questions, and when you develop an argument, search for literature that would support (or contradict!) that argument.
My key suggestion would be to not view the literature search as shopping around randomly, but more like a visit to the grocery store with the shopping list in your hand.
Note that these suggestions apply to writing a literature review. [Staying on top of recent literature](https://academia.stackexchange.com/questions/7615/how-to-stay-on-top-of-recent-literature) or reading to get into a new field would be a different story.
Upvotes: 3 |
2013/06/24 | 705 | 2,877 | <issue_start>username_0: I am simply looking for a rough estimate of how many PhD applicants typically apply having already published a peer-reviewed paper (or papers).
Particularly, I am interested in Computer Science (or STEM fields in general).
Also, it would be interesting to know the same percentage for the admitted students.
These questions can likely only be answered by those on (or previously on) admissions committees, but all responses are welcome!<issue_comment>username_1: I'm in a CS department at a mid-ranked school in the US, and have reviewed applications for Ph.D. programs in CS for the last 6 years. I didn't compile detailed numbers, but my sense is that the number of candidates with "actual" publications (as opposed to fluff pubs) is of the order of 5%. I suspect this number is higher for top-ranked schools.
Upvotes: 5 [selected_answer]<issue_comment>username_2: In mathematics, at a place ranked 10-20 in the field, essentially *no* grad-program applicants have *real* publications.
About 1/3 may have some (as username_1 put it) "fluff-pubs" as spin-offs from summer REU programs. These are not *bad* things, by any measure, but are more indicative of the socio-economic class of the applicant than of their talent or potential. For that matter, it is sometimes quite awkward to explain to novices that their "publication" is a fluff-pub, not a real one.
Thus, in fact, there is an actual negative to fluff-pubs on an application, since they suggest a possible unfortunate rigidity or over-confidence.
(Once again, in mathematics, if it were possible to do wonderful research in a few weeks over the summer, why does it take 5 years to earn a PhD? There is a misunderstanding... though, yes, it is good to cultivate enthusiasm among talented beginners! Let's just not lie to them.)
Upvotes: 4 <issue_comment>username_3: In countries where it is common to do an MSc, many PhD applicants have either published papers or prepared/submitted manuscripts, since an MSc includes a research thesis. The level of the publication can vary, and this can also vary by field (experimental projects typically take longer, so the probability of publication is smaller).
Upvotes: 2 <issue_comment>username_4: In computer science at a top-ranked US university, I'd estimate that about half of admitted Ph.D. students have a publication while they were an undergraduate.
So, having a (good) publication is really helpful, but not an absolute necessity. What matters most is research potential, i.e., the potential to be a successful researcher. Showing that you have done good research that led to a publication is one powerful way to show that you have good research potential, but there are other ways (e.g., by doing research, getting strong letters of recommendation from folks you have worked with, excelling in academic work, doing independent work).
Upvotes: 1 |
2013/06/24 | 713 | 3,364 | <issue_start>username_0: I have often seen review papers in which the author mentions that there are more than # publications on the subject, to highlight the importance of the subject.
How are such numbers determined? While searching for certain keywords in websites such as Scopus or ScienceDirect may be useful, it does not necessarily give an accurate number, as some publications may mention the keywords but not actually deal with the subject, while others may use synonyms of the keywords.<issue_comment>username_1: First, I would argue that a precise number would be virtually impossible to obtain. This is because there is a large grey zone between work published in established journals and work "published" in more "questionable" sources. Obviously the way to obtain a number would be to use search services such as Web of Science, Scopus, etc., or reference databases. But, for example, Web of Science only covers works published in ISI-listed journals or papers referenced by ISI-listed papers, and on top of that only as far back in time as journals have submitted reference information. This means such searches will be incomplete. Hence, arriving at a number may require quite a bit of work unless one states the limitations imposed on the search, such as limiting it to Web of Science.
The choice of keyword(s) will also be important, and it is not certain that keywords are applied systematically between sources or over time.
A claim to have found "all" the literature is very questionable, and I would argue that when one makes such a claim, one must provide a picture of the limitations of the search, because there will certainly always be such limitations.
Upvotes: 2 <issue_comment>username_2: In medical/life sciences, the situation is slightly better than in other fields since indexing in PubMed is the standard for a manuscript to be considered a "real" publication. I suspect that in life sciences, stating that there are more than *x* publications on a subject means counting the number of hits found when searching for that term on PubMed.
Upvotes: 2 <issue_comment>username_3: I agree with @Peter's answer.
Moreover, I really doubt this relation of **more publications = more importance.**
Actually, I consider it very strange information in my field (Computer Science), regardless of its source.
To show the importance of a subject, refer to some main papers/findings in that subject, and show how and why it's important to the general audience of your field. For example, in Computer Science, this can be done by listing some applications/real-world scenarios of the subject.
Upvotes: 2 <issue_comment>username_4: In my field (mathematics) this can become even more questionable because there are papers that consider related problems, papers that consider problems that are essentially equivalent with a different terminology, and so on. The issue of different terminology is especially troublesome because you cannot search for consistent keywords.
Nevertheless, this is an important metric. In mathematics you typically cannot point to real-world applications (anyway, that is not the kind of importance you necessarily want for your paper). But it can provide context: it shows that the problem you are studying has been analyzed before, it gives you other results that you can compare your paper to, and so on.
Upvotes: 1 |
2013/06/25 | 718 | 3,335 | <issue_start>username_0: Nowadays most journals use electronic forms for the referees to submit their recommendations to the editor.
However, if that's not the case, how should the letter to the editor be structured?
In the referee report I have already mentioned some points that I consider should be revised. But do I have to explicitly state these points in the letter, or should I just say that the points mentioned in the referee report should be considered before publication? |
2013/06/26 | 481 | 1,737 | <issue_start>username_0: A version of the lecturer review website Rate Your Lecturer recently became active in the UK.
Do you know of any studies which consider to what extent students use this or any other review websites to guide their choice of university?<issue_comment>username_1: This is necessarily incomplete, but I do recall a few studies on the correlation between ratemyprofessor.com rankings and student evaluations. Two such studies are:
* [Hotness and Quality](http://www.insidehighered.com/news/2006/05/08/rateprof)
* [Student evaluations and RMP](http://www.insidehighered.com/news/2007/06/05/rmp)
These are disappointingly old though (2006/2007)
There's a more recent study from 2011:
* [Researchers and RMP](http://chronicle.com/article/Researchers-Rate/129820/)
As for other studies, your google is as good as mine :)
Upvotes: 3 <issue_comment>username_2: Not confirmed by genuine research, but a very strong hunch based on some decades experience: I'd anticipate that having a few crank-negative reviews among mostly-positive is tremendously beneficial, for more than one reason. First, your "supervisors" (dept head, dean, etc) are often not so naive as to think that there'd be no complaints, so it's harmless. Even better, and more significantly for your day-to-day life, the rants of a few cranks may significantly inhibit other cranks from signing up for your courses. "For the wrong reasons", but to your benefit, etc.
This would apply currently to top-50-research-schools in the U.S., I think, and I'd imagine to most other places in the U.S., since most have not committed to any quasi-automated officially validated anonymous rating system, or any other rating system for faculty teaching.
Upvotes: 2 |
2013/06/26 | 493 | 1,997 | <issue_start>After I personally contacted a professor about the possibility of a Ph.D., he asked me to send him a CV and a copy of my M.Sc. thesis in order to evaluate them.
Meanwhile, in the online application forms of other Ph.D. programmes, I have found that I need to include a CV, among other things, but there is no way to attach a copy of the thesis. So, while preparing the CV, I thought of embedding a hyperlink to the thesis.
>
> Is this acceptable, or is it counter-productive?
>
>
><issue_comment>username_1: I would advise adding the links, because I don't see what you could possibly lose by doing so.
You can add links in `LaTeX` using the `hyperref` package (load it with `\usepackage{hyperref}` in your preamble):
```
\href{link.to.thesis}{My MSc thesis title}
```
In my opinion, this is a nice way to link to everything relevant that cannot be properly addressed or otherwise included in your CV. You can, for example, provide references such as the homepage of your advisor, your department, your other software projects, etc. - all just by putting links behind their names.
Upvotes: 3 <issue_comment>username_2: Send an admissions office the information they ask for. If they give you the option to provide additional information as a text field, then you could list a link to a version of your thesis on Dropbox or on a university web site (or similar) as part of your "additional statement."
However, given the number of applications that a centralized admissions committee might receive, they are probably reluctant to get copies of master's theses and publications—it would be too much extra work to read them all.
Upvotes: 1 <issue_comment>username_3: Fill out the application form as far as possible. You could choose to include in your application a link to an online copy your MSc thesis, as you and others have suggested.
However, **as your potential professor has explicitly asked you for it**, I would send him a copy of your MSc thesis directly, as an email attachment if possible.
Upvotes: 2 |
2013/06/26 | 526 | 2,205 | <issue_start>I have completed my BSc in physics and am now doing my MS in my country (Bangladesh), but I want to do a PhD (or MS+PhD) in the USA with decent funding (RA or TA).
I know that at the PhD level funding may be available, but what about the MS?
I'm saying this because I want to apply for fall 2014, and at the time of application I may not yet have my MS results.<issue_comment>username_1: Whether a US university would accept you into a PhD program would depend on the university itself - you'll need to contact them and ask them directly.
Having said that, if you have semester results from your MS, you could state these in your application and say that the final results are pending. Ask the universities you apply to if it is alright to submit your final transcript a little bit later.
In the meantime, gather all your academic credentials that may help in your application - published papers, conference presentations (if any), references, etc.
Upvotes: 0 <issue_comment>username_2: In the US, you can apply to PhD programs without having a master's degree; this is actually the case in many fields. However, you will have to successfully complete the requirements for candidacy to a PhD program before being admitted to the doctoral phase of the program.
Funding for PhD-level programs at reputable departments (at least in science and engineering) should normally be guaranteed for some fixed term, provided you are making adequate progress and satisfy all program requirements. (What you have to do for such funding—whether it be research or TA—may vary, but that the funding will come from somewhere should be stated in advance.)
Upvotes: 2 <issue_comment>username_3: In most cases, you do not have to be an MS *graduate* in order to apply for a PhD. I was an MS *candidate* when I applied for a PhD about 4 years ago. I could only show 2 semesters' worth of results from my MS program. It works.
Also, in *most* programs around the HCI/information science space (and if I might venture to propose, also in computer science), there is very *limited* funding for MS only programs. Things might be different in physics. I will leave it to the folks who know physics better to comment.
Upvotes: 3 [selected_answer] |
2013/06/26 | 778 | 3,007 | <issue_start>username_0: This is something I have been pondering for a while. I am currently in my second year at University and my program comes with mandatory Co-op throughout the 4 years of study. I have completed 3 Co-op terms already and those have covered my expenses and tuition for each term at the university. I understand that not every Co-op will provide enough compensation every time but my parents are ready to support me where and when needed.
OSAP is a government program which funds students doing their post-secondary education in Ontario, more [here](http://en.wikipedia.org/wiki/Ontario_Student_Assistance_Program). The loans are interests free until the student has completed his education. Then the interest rate is as follows:
>
> On the provincial part of your OSAP loan, the rate is the prime rate
> of interest plus 1%. On the federal portion interest rate can be the
> prime rate of interest plus 2.5%.
>
>
>
So my question is, should I apply for OSAP and receive their funding and save what I earn from Co-op? Or is it better to not receive any OSAP funding?<issue_comment>username_1: This is entirely up to you, but, having said that - a few things to consider:
* How long would you have the OSAP debt for? Meaning, how long will it take you to pay it off? If you do get it, are there any penalties for early repayment? (just in case you get enough in your Co-op to repay).
* Look at how much savings you have and how much, if any, debts you may already have, alongside all your other expenses - how much would you have to sacrifice to make the repayments.
* You mention that your parents are ready to help if needed (very kind of them). Perhaps look at how much you earn in your Co-op, and ask your parents to cover the balance.
I am in Australia, and, if eligible, we had HECS (now called [FEE-HELP](http://studyassist.gov.au/sites/studyassist/helppayingmyfees/fee-help/)), where our tuition is paid by the government and we repay the balance as a portion of tax. It is a good system, but I am still paying mine off over a decade later.
Upvotes: 0 <issue_comment>username_2: I also considered doing this, though I didn't (more from sloth than anything else).
As I understand it, from other students in Ontario, you can pay back the entire OSAP loan in a lump sum at any time. Since the loan is interest-free, a rational investor would take out the maximum amount allowed, put it in a risk-free asset, and pay it back as late as possible, pocketing the interest.
Now, interest rates are low right now, and OSAP needs a lot of paper work. Being in debt can also be stressful for some people. If you are sure that you won't need any OSAP money, then perhaps the $100 or so you'd make per year in interest is not worth your time.
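To put a rough number on that strategy, the Python sketch below compounds the interest earned by parking an interest-free loan in a risk-free asset until repayment. The figures ($7,000 borrowed once, a 2% risk-free rate, repayment after 4 years) are hypothetical illustrations, not actual OSAP amounts.

```python
def interest_earned(principal, rate, years):
    """Interest pocketed by parking an interest-free loan at a risk-free rate."""
    return principal * ((1 + rate) ** years - 1)

# Hypothetical figures: $7,000 borrowed once, 2% risk-free, repaid after 4 years.
loan, rate, years = 7000.0, 0.02, 4
print(f"Interest pocketed: ${interest_earned(loan, rate, years):.2f}")
```

At low rates this works out to roughly $140 per year on $7,000, which is the order of magnitude behind the "$100 or so" estimate.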
Also, note that borrowing money now, and then paying it back early will often cause the provincial government to lower your future OSAP payments - since you clearly didn't need the whole amount.
Upvotes: 4 [selected_answer] |
2013/06/27 | 965 | 4,298 | <issue_start>Out of curiosity, I was wondering what set of courses undergraduate math students take if they're on track to apply for a PhD in Pure Mathematics vs. Applied Mathematics. Let me try to elaborate as best as I can. At my current university, I've known several people who have gone down the track of a PhD in Pure Math: they have taken lots of grad-level math courses and gotten into top US universities for a PhD in Pure Math, where research experience in relevant fields is a plus (I think).
But my question is for undergraduate students trying to prepare for a PhD in applied math. What sort of coursework do they go through? I mean, research experience (I believe) becomes important, while taking graduate courses in pure mathematics becomes less so. I do not know many people who went for a PhD in applied math at my school.
I would appreciate it if anyone could comment on the relevance of pure math grad courses for students who aspire to apply for a PhD in applied math. Any other comments pertaining to this are welcome ;)<issue_comment>username_1: It might be somewhat controversial for some here, but I don't think there is any one course that you should take in order to get into a particular graduate program. The reason behind my statement is that I believe the logic presented in the OP is rather backwards; one usually pursues graduate studies in a particular field that s/he is knowledgeable and interested in. In other words, you get your undergrad and Masters, and based on what you know and like, you apply to PhD programs in fields where you are competent. The other way around (deciding on a PhD program without having the undergrad and masters done, and choosing courses based on the desired PhD program) does not make much sense, in my humble opinion.
Also consider that "a PhD in pure/applied maths" is really ill-defined. I can think of a hundred different projects that would have different requirements, with regards to previous courses. I would recommend deciding on a specific subject that you find interesting e.g. "elliptic curve cryptography" or "convex optimization" etc (not just pure/applied math).. then look for announced PhD programs based on projects focused on these subjects of interest.
All that being said, I would think advanced-level courses in matrix theory (decompositions etc), functional and complex analysis, as well as optimization theory would be useful in many different graduate programs at most maths departments.
Upvotes: 3 <issue_comment>username_2: I agree with many things in username_1's answer, except that you might just get your bachelor's and apply straight to a PhD in the US (instead of getting an undergrad and master's, and then applying to a PhD). I should also say that any required course-work will vary from university to university. Many universities have some sort of 'qual' process, where you need to know certain things and pass certain tests at the start/end of your first/second (varying by university) year.
I'm a Brown PhD math student, and we have to pass our quals by the end of the first year, essentially; whereas I have a few friends at places like UChicago or Berkeley, where quals can be more immediate. From what I can tell, the subjects are almost always a subset of real analysis (and probability for applied math), complex analysis, algebra, topology, manifolds, and differential equations. I mention this because the level of testing can be very high and advanced, and if you did not do a sufficient amount of coursework in these areas, then you would probably have a very hard time passing the quals. (Really, the admissions committee would probably take that into consideration and be less warm toward your application.)
This is to say that there is a certain "core material" that many PhD programs seem to care about (although the exact material might vary from school to school). I would say that you absolutely must take coursework in analysis, topology, and complex analysis. But I suspect these are required courses in your studies.
But other than that, you should take classes that interest you, and apply to schools that have good programs in what you're interested in.
Upvotes: 3 [selected_answer] |
2013/06/27 | 768 | 3,395 | <issue_start>Some people do not list references/referees at the end of their CV, and simply state `references are available on request`. This is more convenient to me for two reasons:
1. Normally, no reference is contacted without permission, so it is not necessary to include them in the CV.
2. Depending on the purpose (application, proposal), you may want to introduce other references (more relevant to that application). This is the reason that some job applications require a separate list of references.
On the other hand, including famous persons as references shows your connections and background. In addition, the audience might wonder why you are hiding possible references!
Which one is preferred and more reasonable?<issue_comment>username_1: In my opinion, references are transient. I would expect anybody asking for my reference to do so each time a reference is requested. The reason is that I would want to know what I am providing a reference for, and also because I am keen to keep the decision to be a reference under my own control.
Hence, I see no reason to provide references in a CV in a permanent way. For each occasion on which the CV will be used, it will of course be possible to add names to the CV, but then only for one-time use. It is, after all, not complicated to edit the CV.
To add a line "references are available on request" would be a big no-no for me (this applies to all aspects of, for example, an application, not just the CV). If you have an application, it should be complete and provide all material and information you want in support unless the application makes it clear some information should be added upon request.
Upvotes: 5 [selected_answer]<issue_comment>username_2: I agree with both points you made and with the points made by username_1.
1. I would not allow my name to be used as a reference without permission - primarily to avoid getting a surprise call/email that I am not prepared for, and when I am not prepared I sound like a blubbering buffoon (which would potentially jeopardise the candidate's chances).
2. This is how my CV is organised, I only include referees that are relevant to what I am applying for (with their expressed permission).
As for famous people, for me, I would include them only if they are relevant.
Upvotes: 3 <issue_comment>username_3: If you keep a generic CV on your personal website or a job search website profile, then it can make sense to add the notice about references, though if you don't have it and you get contacted by a prospective employer, they'll still ask you anyway. I doubt they are not going to hire you because they were too shy to ask for references themselves ;-)
The main reason not to add specific references to a generic CV is that you can't tailor them. If you go for a teaching position, you might want to give the name of the dean at that private college where you lectured for a summer as a reference, whereas if you go for a research position, you want your Ph.D. supervisor as a reference as well as that important guy in some other university that you briefly collaborated with, etc.
Also, when you name references for a specific application, it gives you a chance to let your references know whom to potentially expect a call from. It will help your references (and thereby you) to talk about aspects of you that are relevant to the job you're applying for.
Upvotes: 2 |
2013/06/27 | 1,756 | 7,175 | <issue_start>Is there any hidden rule for using the words "clearly", "obviously" or similar ones in a technical paper? It can be offensive to the readers in many cases (especially in mathematical proofs), since the reader may not find it "clear" or "obvious". But does that mean that we should completely avoid the use of these words?<issue_comment>username_1: We touched on this particular subject in a "Technical Writing" course; the simple answer is that it's a power-stance. In other words, if you are a big-name professor in your field, you can use it without offending someone. Alternatively, if you are a mere PhD candidate, then you are better off avoiding not only these two words but also other forms of bold statements when you are drawing conclusions.
As I said, this is rather the short answer; I am sure those who are more into linguistics etc. might have more insight into the matter.
Upvotes: 3 <issue_comment>username_2: Seconding username_1's appraisal, but being a little more blunt: if one is in a position to get away with bullying or intimidating people by implying that it's *their* problem if one has not explained well enough ... well, I'd say it's still a jerk-y thing to do. If one is in a lower-status position, such words will often be red flags.
Or, coming to functionality versus rhetoric versus "formal proof": at best these words are functionless filler. That is, *saying* something is clear is not what makes it clear: if it is clear after these words, it was clear before. *Conceivably* a thing is clear *once noted*, and thus deserves "Observe that...". But this, too, can be abused if used outside situations where one is noting that something is "a-fortiori" true, that is, is weaker than what the argument has already demonstrated... but presumably suffices for the issues at hand.
Upvotes: 5 <issue_comment>username_3: I don't think there is a clear consensus on how to use these words.
As mentioned in some other answers, some people find them annoying or obnoxious. Others think they are a perfectly acceptable way to mention a fact for which you believe a detailed explanation is not necessary. Certainly they are quite common in published writing.
I think it is a choice that you make as part of developing your own personal writing style, and your feelings may change over time.
My only advice is: when you write that something is "obvious", make absolutely sure it is *true*! I've been embarrassed this way before.
Upvotes: 4 <issue_comment>username_4: I don't think there is a very clear rule for using such words. One possible reason for my claim is that some authors don't even use the words "clearly" or "obviously", but simply say "it follows ...". In mathematics the level of detail of a mathematical proof mostly depends on the writer's kindness to her/his readers. I have encountered many not-so-obvious claims in papers written by experts, which needed several pages of explanation and perhaps some proofs; several years later, I have found the proofs of those claims in newer papers written by other authors.
Unfortunately, there is an adage which says "brevity is a sign of genius" and it seems some people strongly believe in this adage and try to impress others by leaving not-so-obvious gaps in their works.
Personally I apply the following rules for using these words:
1. If the claim follows from previously mentioned material by applying well-known techniques in 5 minutes or so.
2. If it can be obtained by a few lines of computation, again by applying well-known techniques; then I use the word "straightforward".
3. If it easily follows from a well-known type of mathematical proof, like induction or Zorn's lemma.
4. If the proof is similar to a previous proof in the paper or in the literature. In this case I mention the source.
5. If I expect that a PhD student in the field can prove it easily.
Upvotes: 4 <issue_comment>username_5: More broadly than in mathematical proofs alone, a mark of good writing is to avoid the *superfluous*. Whether something is clear or obvious comes from the content, not the writer labelling it as such. Trimming unneeded adjectives and adverbs like those you describe should be a regular step in the proof-reading stage. See Strunk and White's [*Elements of Style*](http://rads.stackoverflow.com/amzn/click/020530902X) for a more detailed treatment.
Upvotes: 3 <issue_comment>username_6: Reading the comments and answers here, *the conclusion is* that it is usually not a good idea to use these terms. Keep in mind that it might not always be the case that something is obvious to your reader. That being said, the reason you want to use such words is probably because you want to point out/conclude/summarize your findings to the reader.
The bottom line is not to tell your readers what (you find) is obvious, but to tell them what the obvious thing is (conclude/summarize). This way they will either:
A. Confirm their own observation
or
B. Realize they haven't fully understood yet (they might re-read your article now)
Upvotes: 2 <issue_comment>username_7: I might go against most of the answers here and say **why not?**.
I am doing this right now. I am writing a paper proposing a solution for problem X by adopting the well-known mathematical model Y. Now, Y has clear axioms and definitions (for instance, the set of considered elements has to form a commutative semigroup under combination). I defined X and then defined the combination operator. Should I go further and prove it is a commutative semigroup? I believe it is clear that X forms a commutative semigroup *within my framework*. Yes, it is obvious.
Now, whether the author of these words is a student or a professor, I believe it doesn't make a difference. In the end, there is a minimum knowledge required to understand any given paper; if it's clear then it's clear, and you had better use the paper's limited space on something that is not clear enough.
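For claims like this, the axioms can at least be sanity-checked mechanically on small instances before deciding to omit the proof. The Python sketch below uses a hypothetical stand-in operator (union of frozensets, not the actual combination operator from any particular paper) and brute-forces closure, associativity, and commutativity over a finite sample.

```python
from itertools import product

def combine(a, b):
    """Hypothetical combination operator: union of frozensets."""
    return a | b

def is_commutative_semigroup(elements, op):
    """Brute-force the commutative-semigroup axioms over a finite sample."""
    closed = all(op(a, b) in elements for a, b in product(elements, repeat=2))
    assoc = all(op(op(a, b), c) == op(a, op(b, c))
                for a, b, c in product(elements, repeat=3))
    comm = all(op(a, b) == op(b, a) for a, b in product(elements, repeat=2))
    return closed and assoc and comm

# All subsets of {1, 2}: closed under union, associative, commutative.
universe = {frozenset(s) for s in [(), (1,), (2,), (1, 2)]}
print(is_commutative_semigroup(universe, combine))  # True
```

A passing check is of course no substitute for a proof, but a failing one catches a wrong "obviously" before a referee does.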
Upvotes: 1 <issue_comment>username_8: I propose *never* using these words unless your goal is to trick the reader into thoroughly checking your claim, or in an exam's trick question where you set a false premise (though these words are give-aways if not overused). If something *is* obvious there wouldn't be a need to even state it. And if you need to state something, it is *not* obvious.
*If* you think some non-trivial1 steps should be omitted so your 5-page paper doesn't bloat up to a 30-pager, then please have the decency to either briefly state the trickiest tool involved (be that induction or some specific part of [Wiles's proof of Fermat's Last Theorem](https://en.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27s_Last_Theorem)) or - even better - put the detail which you *should* have done anyway into the appendix / online supplement and refer to it.
---
1Trivial is also one of these words.
Upvotes: 2 <issue_comment>username_9: I was always taught that if you had something to say that was "clear" or "obvious" to your intended readership, then it wasn't really worth saying at all. Made a lot of sense to me, and I've never used those words in any of my technical or academic writing since.
Upvotes: 2 |
2013/06/27 | 1,795 | 7,610 | <issue_start>username_0: Back when my older brother started his PhD degree I asked him what it meant to be a doctor in something other than medicine. I don't recall the exact wording he used, but the idea he portrayed was that you take a field, a narrow and specific field, and you specialise in it to a level at which when you are done, you have become *one of the ultimate experts in that very specific field*.
For instance, if you are working with combustion physics, you might be one of the leading experts in efficient 2-cylinder, ultra-light engines made out of refined aluminium... Alternatively, if you are into neuroscience, you might be an expert on the re-absorption of a particular neurotransmitter in a particular zone of the brain following heavy exercise (or whatever, hopefully you get the point). It might be an opinionated view of a PhD, but I feel it's a common way to look at a PhD degree; ***a certification of expertise***.
Fast-forward 15 years... I am about half-way through my PhD studies in the highly interdisciplinary field of bioinformatics, where statistics, mathematical modeling, physics, molecular chemistry and programming boil together with cell biology; to top it all off, you typically have a theme spice, which in my case is cancer biology. I have a growing feeling that I am getting stretched thinner and thinner by the day: instead of becoming increasingly competent in a specific field, I become semi-competent in increasingly many fields.
That being the case, I am not sure I (or others like myself) will fit the "definition" above. I would appreciate some perspective as to how one should see highly interdisciplinary PhD studies and the development (as a scientist and a professional) that graduate studies entails. Subsequently, how should one go about profiling him/herself to future employers, seeing as there is no one natural field to pursue, but rather many different ones.<issue_comment>username_1: This may not be a complete answer, but I can empathise with you, as I am in a similar boat.
My field is an academic puree of atmospheric physics, photobiology, optics, photography, oncology, opthamology, programming and a dash of education, public information and community safety.
The steps that I take are:
* Identify the main focus/foci - this/these are the overarching main goals of your project (eg for mine, it is Atmospheric Physics and Photobiology).
* Which fields are where the applications/potential applications of your research come from? (eg for mine, they are programming, optics and community safety/education)
* Look at where you can contribute (for mine, the remainder of the list above).
That last point is something my supervisor suggested I remember in times when I felt I was being academically-spaghettified - look at the disciplines not so much as fields of study, but as areas in which you can make, and are making, a contribution.
I hope this helps.
Upvotes: 3 <issue_comment>username_2: I completely understand your feeling. I can tell you that many other people in this field feel the same. This has nothing to do with how smart or how good they are. This issue also bugged me a lot, but I can offer some insights:
1. Bioinformatics/computational biology is really huge and you cannot be an expert in all aspects. Even if you look at senior scientists in this field, I do not think there is someone who is an expert in all subfields. You simply can't master all the physics, math, CS, chemistry and biology at an expert level - even in a much longer time than a PhD.
2. You can still have a huge impact without being a super-expert in every subfield. Just look at some of the research published in top journals. The reason is that you will have knowledge and a way of thinking that people restricted to a single field may not have. From personal experience, I can say that this is a significant advantage for asking certain types of questions and coming up with certain ideas that single-field specialists won't come up with.
3. After a while you will realize that you actually are an expert. Maybe not in the sense that you know everything about everything, but you will see that you can give good advice to other people, foresee potential problems, and so on. In addition to knowing a lot about computational biology, you will become an expert in things such as quantitative modelling, applied machine learning and "big data" analysis (I hate that term), skills which are very useful in a wide range of fields.
4. The fact that you cannot become an expert in everything doesn't mean you should neglect learning. On the contrary, you should constantly try to expand your knowledge in all related fields. And yes, it can be more difficult than learning only one subject.
5. Finally, in the end you will be working on a specific problem in a given biological field with a given set of tools. That problem is what you really need to be an expert on.
Upvotes: 4 <issue_comment>username_3: In my opinion, a PhD is much more than a deep expertise in a particular field. A PhD is a certificate of *the ability to do science*. That's why your PhD is more broadly applicable than in your particular expertise and that's why some people can change topics dramatically after their PhD: from particle physics to atmospheric science, from space science to ornithology¹. In a German *Habilitation*, which is I think a step on becoming a professor, one has to write a review of a field that is not ones own.
The other day, I came across a job advertisement from the British Met Office that had the following requirements:
>
> · Proven ability to conduct scientific research, displaying initiative, independence and analytical skills.
>
>
> · Evidence of the motivation and drive to overcome obstacles in order to solve scientific problems.
>
>
> · Evidence of the ability to write software to address scientific questions.
>
>
>
A PhD in any natural science proves exactly that; in any case the first two points, and in many cases the third point, too. Of course, domain-specific knowledge is a plus, but it may not be a necessity. Therefore, I think you should profile yourself as *a scientist*.
¹Scientists performing stratospheric radar measurements discovered an odd diurnal pattern in their measurements near the Antarctic coast. It turns out a flock of birds was flying through the radar beam. One person's noise is another person's signal; said scientists are now cooperating in ornithological research.
Upvotes: 3 <issue_comment>username_4: As I commented on another board just this morning, the goal of all higher education is ultimately **to learn how to learn.** As a PhD-level scientist, you need to be able to understand, master, and solve problems in fields which you may have never seen before you started to work in them. This means that you need to have a well-developed process for assimilating information, synthesizing it, and analyzing it. You need to be able to evaluate what is useful or not, what is correct or not, and what might work and what might not.
In an interdisciplinary field, your challenge is even harder, as you are trying to assimilate potentially disparate fields of knowledge and combine them into something more than the sum of the parts. This requires learning different jargons, different attitudes, and different approaches to problem-solving and understanding the world. This will actually be even more useful, because this means that you can be pretty good at a lot of different things—which gives you an edge over someone who's outstanding at one thing, but only one thing.
Upvotes: 5 [selected_answer] |