In 2005, my paper on breaking RSA by observing a single private-key operation from a different hyperthread sharing the same L1 cache -- literally the first publication of a cryptographic attack exploiting shared caches -- was rejected from the cryptology preprint archive on the grounds that "it was about CPU architecture, not cryptography". Rejection from journals is like rejection from VCs -- it happens all the time and often not for any good reason.
(That paper has now been cited 971 times according to Google Scholar, despite never appearing in a journal.)
I was confused by the title at first, because paper rejection is incredibly common in research, but that's the point: one of the goals is to fight imposter syndrome.
It's a good initiative. Next step: everybody realizes that researchers are just random people like everybody else. Maybe that could kill any remaining imposter syndrome.
A rejection, although common, is still quite tough during your PhD, even ignoring the imposter syndrome, because in a short time you are expected to have a bunch of accepted papers, in prestigious publications if possible. It feels like a rejection slows you down, and the clock is still ticking. If we could kill some of this nefarious system, that'd be good as well.
It’s noteworthy because it’s from Terence Tao, regarded by many as the world’s greatest living mathematician.
If you read the full post he’s making the exact same point as you: it’s common and normal to get a paper rejected even if you’re Terence Tao, so don’t treat a rejection like the end of the world.
> It’s noteworthy because it’s from Terence Tao, regarded by many as the world’s greatest living mathematician.
I think it's important to post a follow-up comment clarifying that papers are reviewed following a double blind peer review process. So who the author is shouldn't be a factor.
Also, the author clarified that the paper was rejected on the grounds that the reviewer felt the topic wasn't a good fit for the journal. This has nothing to do with the quality of the paper, only with upholding editorial guidelines on the subject. Trying to file a document in the wrong section and being gently nudged to file under another section hardly matches the definition of a rejection that leads authors to question their life choices.
Just a quick point: double blind is not common for mathematics journals, at least in my area. Some TCS conferences have started it.
In many fields there are only a handful of researchers that would submit a paper in a given specialty, and a couple of them are the reviewers. At some point blinding is just futile.
> regarded by many as the world’s greatest living mathematician.
Oh?
Perelman comes to mind (as the only person who has been eligible to claim one of the Millennium prizes), although he is no longer actively practicing math AFAIK. Of Abel prize winners, Wiles proved Fermat's last theorem, and Szemeredi has a number of number-theoretic and combinatorial contributions.
Recently deceased (past ~10 years) include figures such as John Nash, Grothendieck, and Conway.
Tao is definitely one of the most well-known mathematicians, and he's still got several more decades of accomplishments ahead of him, but I don't know that he rises to "greatest living mathematician" at this point.
That said, I do appreciate that he indicates that even his papers get rejected from time to time.
Having been a child prodigy somehow gives one fame (in the wider public consciousness) beyond anything one can achieve as an adult.
Well, true.
He said “regarded by many”, which I think is probably an accurate statement.
I chose those words deliberately because I knew that if I phrased it too strongly then someone would reply to litigate the claim. Apparently I still phrased it too strongly, but hey, nothing's ever too pedantic for HN.
To add to your list, you can also find Richard Borcherds teaching math on YouTube.
> It’s noteworthy because it’s from Terence Tao, regarded by many as the world’s greatest living mathematician.
I didn't know :-)
> If you read the full post he’s making the exact same point as you
Oh yeah, that's because I did read the full post and was summarizing. I should have made this clear.
It's especially important coming from someone like Terence Tao. If one of the best and brightest mathematicians out there can get a paper declined, then it can happen to literally anyone.
I guess it is nice to know that he is also not perfect. But it’s still the case that his accomplishments outshine my own, so my imposter syndrome remains intact.
I'd counter the "like everybody": they're not. They spent a decade or more focused on honing their skills and deepening their knowledge to become experts in their subfield, and sometimes even entire fields. They are very much not random people like everybody in this context.
terence tao is suffering from imposter syndrome? if anything, imposter syndrome is suffering from terence tao ... do you maybe not know who terence tao is?
It's terence tao trying to help others with imposter syndrome. It seems quite unlikely he himself would suffer from it...
on the contrary, that's exactly what he states in the comment discussion below the thread
having a higher reputation means a higher responsibility not to crush someone with it in the sub-fields where you aren't as proficient
I don't think it's a white lie. Whether he has imposter syndrome is beside the point. It shows he has sympathy for his colleagues who might have it. Maybe he himself had it before, which would let him understand even better what it is, and even if he doesn't anymore, that would motivate him to make this point.
The point he is making is all the more convincing precisely because he is seen as very good, whether he had imposter syndrome or not.
At my college, you only need one paper, not many.
In mine, I don't think there was a hard requirement, but your PhD would be seen as weak with zero papers, and only one would be common enough I guess, though still seen as a bit weak. It's not very important for the grade, but it's important for what follows: your career, including getting a position.
Adam Grant once related an amusing rejection from a double-blind review. One of the reviewers justified the rejection with something along the lines of “The author would do well to familiarize themselves with the work of Adam Grant”
Life imitates art. In a 1986 comedy "Back to School" Rodney Dangerfield's character delegates his college assignments to various subject matter experts. His English Lit teacher berates him for it, saying that not only did he obviously cheat, but he also copied his essay from someone who's unfamiliar with the works of Kurt Vonnegut. Of course, the essay was written by Vonnegut himself, appearing in a cameo role.
Fair warning: I don't know enough about mathematics to say if this is the case here.
I hear this all the time, but it is actually a real phenomenon: well-known senior figures are rightfully cautious about over-citing their own work and/or are so familiar with it that they don't include much of it in their literature review. For everybody else in the field, it's obvious that the work of famous person X should make up a substantial chunk of the lit review, and that the paper should be explicit about how the new work builds on X's prior paradigm-shifting work. You can do a bad job of writing about your own past work for a given audience for many different reasons, and many senior academics do so all the time, making their work indistinguishable from that of graduate students --- hence the rejection.
I totally understand the case when an author doesn't sufficiently give context because they are so close to their previous work that they take it for granted that it's obvious (or, like you said, they are wary of auto-citation).
I may be misremembering, but I believe the case with Grant was that the referee was using his own work to discredit his submission. Ie "If the author was aware of the work of Adam Grant, they would understand why the submitted work is wrong."
This also happens pretty commonly. However, it's not even unreasonable! Sometimes you write a paper and you don't do a good enough of a job putting in the context of your own related work.
And sometimes the reviewer didn't read carefully and doesn't understand what you're doing.
I once wrote a paper along the lines of "look we can do X blazingly fast, which (among other things) lets us put it inside a loop and do it millions of times to do Y." A reviewer responded with "I don't understand what the point of doing X fast is if you're just going to put it in a loop and make it slow again." He also asked us to run simulations to compare our method to another paper which was doing an unrelated thing Z. The editor agreed that we could ignore his comments.
I, as a reviewer, made a similar mistake once! The author's initial version seemed to contradict one of their earlier papers but I was missing some context.
I also made this mistake! I recommended that the author read an adjacent work, which turned out to be by the very same author. He had just forgotten to include it in his work.
I’ve had it happen to me. Paper rejected because it was copying and not citing a prior message to a mailing list… the message from the mailing list was mine, and the paper was me turning it into a proper publication.
To play devil's advocate here: human memory is not flawless, and people make mistakes, so maybe Adam Grant should have read one of his previous works as a refresher. Even if not wrong, it is possible that he missed some stuff he thought he had published, but hadn't.
If, as a developer, you had the experience of looking at some terrible code, angrily searching for whoever wrote that monstrosity, only to realize that you did, that's the idea.
Yes, funny the first time.
Not so much the fifth!
I find it refreshing when researchers disclose their own failures. Science is made of negative results, errors, and rejections, though it's often characterized in a much different, unrealistic way.
By the way, even though some of you may know about it, here's the link to the Journal of Negative Results: https://www.jnr-eeb.org/index.php/jnr
I am actually quite surprised Terence Tao still gets papers rejected from math journals... but appreciate him sharing this, as hearing this from him will help newer scientists not get discouraged by a rejection.
I had the lucky opportunity to do a postdoc with one of the most famous people in my field, and I was shocked how much difference the name did make- I never had a paper rejection from top tier journals submitting with him as the corresponding author. I am fairly certain the editors would have rejected my work for not being fundamentally on an interesting enough topic to them, if not for the name. The fact that a big name is interested in something, alone can make it a "high impact subject."
> I am actually quite surprised Terence Tao still gets papers rejected from math journals
At least it indicates that the system is working somewhat properly some of the time...
I find it bewildering that it wouldn’t, actually. I would have expected that one of the earliest steps in the review process would be to black out the submitter's name and university, only to be revealed after the review is closed.
Well, the editor still sees the name of the submitter, and can also push the reviewers for an easy publication by downplaying the requirements of the journal.
Could you elaborate on this statement? It sounds like you're implying something, but it's not clear what.
I interpret it as saying that at least the system hasn't just degraded into a rubber stamp (where someone like Tao can publish anything on name alone).
I think it’s that a paper submitted by one of the most famous authors in the math field is not auto approved by the journals. That even he has to go through the normal process and gets rejected at times.
Could that also be because he reviewed the papers first and made sure they were in a suitable state to publish? Or you think it really was just the name alone, and if you had published without him they would not have been accepted?
He only skimmed them- scientists at his level are more like a CEO than the stereotype of a scientist- with multiple large labs, startups, and speaking engagements every few days. He trusted me to make sure the papers were good- and they were, but his name made the difference between getting into a good journal in the field, and a top “high impact” journal that usually does not consider the topic area popular enough to accept papers on, regardless of the quality or content of the paper. At some level, high impact journals are a popularity contest- to maintain the high citation rate, they only publish from people in large popular fields, as having more peers means more citations.
The master has failed more than the beginner has tried.
"Rejection is actually a relatively common occurrence for me, happening once or twice a year on average."
This feels like a superhuman trying to empathize with a regular person.
A non-zero failure rate is indeed often optimal because it provides valuable feedback toward finding the optimal horizon for various metrics, e.g. speed, quality, LPU[1], etc.
That said, given the labor involved in academic publishing and review, the optimal rejection rate should be quite low, i.e. find a lower cost way to pre-filter papers. OTOH, the reviewers may get value from rejected papers...
[1] least publishable unit
If you stick around in physics long enough you will submit a paper to Physical Review Letters (which is limited to about four pages) that gets rejected because it isn't of general enough interest, then you resubmit to some other section of The Physical Review and get in.
These days I read a lot of CS papers with an eye on solving the problems and personally I tend to find the short ones useless. (e.g. pay $30 for a 4-page paper because it supposedly has a good ranking function for named entity recognition except... it isn't a good ranking function)
Sure, even top mathematicians have paper rejections.
But I think the more important point is that very few people are capable of publishing papers in top math journals.
> Because of this, a perception can be created that all of one's peers are achieving either success or controversy, with one's own personal career ending up becoming the only known source of examples of "mundane" failure.
I've found similar insights when I joined a community of musicians and also discovered the Twitch / YouTube presences of musicians I listen to. Some of Dragonforce's corona streams are absolutely worth a watch.
It's easy to listen to mixed and finished albums and... despair to a degree. How could anyone learn to become that good? It must be impossible, giving up seems the only rational choice.
But in reality, people struggle and fumble along at their level. Sure enough, the level of someone playing guitar professionally for 20 years is a tad higher than mine, but that really, really perfect album take? That's the one take out of a couple dozen.
This really helped me "ground" or "calibrate" my sense of how good or how bad I am and gave me a better appreciation of how much of a marathon an instrument can be.
Academia is a paper tiger. The Internet means you don't need a publisher for your work. Ironically, this self published blog might be one of his most read works yet.
You never needed a publisher; before the Internet you could write up your findings and mail them to relevant people in your field. Quite a lot of scientists did this, actually.
What publication in a journal gives you is context, social proof, and structured placement in public archives like libraries. This remains true in the age of the Internet.
I agree with the discussion that rejection is normal and researchers should discuss it more often.
That said, I do think that "publish or perish" plays an unspoken role here. I see a lot of colleagues trying to push out "least publishable units" that might barely pass review (by definition). If you need to juice your metrics, it's a common strategy that people employ. Still, I think a lot of papers would pass peer review more easily if researchers just combined multiple results into a single longer paper. I find those papers to be easier to read since they require less boilerplate, and I imagine they would be easier to pass peer review by the virtue that they simply contain more significant results.
One of the issues is that we have grad students, and they need to publish in order to travel through the same cycle that we went through. As a more senior scientist I would be thrilled to publish one beautiful paper every two years, but then none of my students would ever learn anything or get a job.
Longer papers with more claims have more to prove, not less. I imagine they would be harder to pass peer review.
> Longer papers with more claims have more to prove, not less. I imagine they would be harder to pass peer review.
Yes, a longer paper puts more work on the peer reviewers (handful of people). But splitting one project in multiple papers puts more work on the reader (thousands of people). There is a balance to strike.
I agree with your first part but not your second. Most authors do not make outrageous claims, and I surely would reject their manuscript if they did. I've done it before and will do it again without any issue.
To me, the point of peer review is to both evaluate the science/correctness of the work, but also to ensure that this is something novel that is worth telling others about. Does the manuscript introduce something novel into the literature? That is my standard (and the standard that I was taught). I typically look for at least one of three things: new theory, new data/experiments, or an extensive review and summation of existing work. The more results the manuscript has, the more likely it is to meet this novelty requirement.
The more results a manuscript must have, the more work it is to develop the manuscript.
Peer review misses 100% of long papers that don't get submitted, where multiple non-outrageous claims were too much work to develop.
Lots of co-authors. That is one surefire way to inflate it.
I always thought that part of the upside of being tenured and extremely recognised as a leader of your field is the freedom to submit to incredibly obscure (non-predatory) journals just for fun.
In academic publishing, there is an implicit agreement between the authors and the journal to roughly match the importance of the paper to the prestige of the journal. Since there is no universal standard on either the prestige of the journal or the importance of the paper, mismatches happen regularly, and rejection is the natural result. In fact, the only way to avoid rejections is to submit a paper to a journal of lower prestige than your estimate, which is clearly not what authors want to do.
It’s not an accident - if academics underestimated the quality of their own work or overestimated that of the journal, this would increase acceptance rates.
Authors start at an attainable stretch goal, hope for a quick rejection if that’s the outcome, and work their way down the list. That’s why rejection is inevitable.
This is his main point, and I wholeheartedly agree: …a perception can be created that all of one's peers are achieving either success or controversy, with one's own personal career ending up becoming the only known source of examples of "mundane" failure. I speculate that this may be a contributor to the "impostor syndrome"…
Research is getting more and more specialized. Increasingly there may not be many potential journals for a paper, and, even if there are, the paper might be sent to the same reviewers (small sub communities).
You may have to leave a year of work on arxiv, with the expectation that the work will be rehashed and used in other published papers.
Journals are typically for-profit, and science is not, so they don't always align and we should not expect journals to serve science except incidentally.
Please note that, despite much work being done in the equality department, being famous is nowadays still a requirement for acquiring the status of impostor syndrome achiever. Persons who are not really famous do not have impostor syndrome but are just simple copycats in this respect.
So the non-famous people who claim to have impostor syndrome are actual impostors because they claim to have impostor syndrome. Honestly, that seems like a bit of a weird take but to each their own.
We can laugh at academia, but we know of similar rejection stories in nearly all domains.
AirBnB being rejected for funding, musicians like Schubert struggling their entire lives, writers like Rowling in poverty.
Rejection will always be the norm in competitive winner take all dynamics.
Whether it’s a journal, a university, a tech company… never take it personally because there’s bureaucracy, policies, etc and information lost in the operation of the whole process. Cast a wide net and believe in the value you’ve created or bring.
We often talk about how important it is to be a platform for oneself, to self-host a blog under your own domain, etc. Why isn't that the case for science papers, articles, and issues? Wasn't the whole World Wide Web invented specifically for that?
Saw the title and thought, nothing unusual in that really, then saw the domain was maths-based; it's not Terence Tao is it?! It was Terence Tao. If one of the greats can get rejected then there's no shame in you getting rejected.
Rejection Therapy ftw. https://en.m.wikipedia.org/wiki/Rejection_Therapy
Reminds me—I wish someone would make an anti-LinkedIn, where the norm is to announce only setbacks and mistakes, disappointments etc.
There was a site where people posted company failures:
Just like in academia, no one cares about negative results in professional settings.
Folks already do. They often turn them into inspirational tales.
It’s okay Terence, it happens to the best of us.
It's important to remember that a journal's reputation is built by the authors who publish there, and not vice versa.
Should we therefore also publicize everything else that lies between success and failure?
- hey honey how was work today?
- it was fine, I desk rejected terence tao, his result was a bit meh and the write up wasn't up to my standard. Then I had a bit of a quiet office hour, anyway, ...
I've had the surreal moment of attending a workshop where the main presenter (famous) is talking about their soon to-be-published work where I realize that I'm one of their reviewers (months after I wrote the review, so no impact on my score). In this case, I loved their paper and gave it high marks, and so did the other reviewers. Not surprising when I found out who the author was!!!
I have to not say a word to them as I talk to them or else I could ruin the whole peer review thing!
"Hey honey, I reviewed X work from Y famous person today"
> I have to not say a word to them as I talk to them or else I could ruin the whole peer review thing!
In what sense would it ruin peer review to reveal your role after you already wrote and submitted the review?
Why do journals exist at all? Could papers be published on something like arxiv.org (like software is on github.com)?
It could easily support links/backrefs, citations (forks), questions (discussions), tags, followers, etc.
Part of the idea is that journals help curate better publications via the peer review process. Whether or not that occurs in practice is up for some debate.
Having a curated list can be important to separate the wheat from the chaff, especially in an era with ever increasing rates of research papers.
Eliminating journals as a corporate monopoly doesn't eliminate peer review. For example, it should be easy to show the number of citations and even their specific context in other articles on the arxiv-like site. For example, if I like some app/library implementation on github, I look at their dependencies (a citation in a sense) to discover things to try.
Curated lists can also exist on the site. Look at awesome* repos on github eg https://github.com/vinta/awesome-python
Obviously, some lists can be better than others. The usual social mechanics are adequate here.
I think citation is a noisy/poor signal for peer-review. I've refereed a number of papers where I dig into the citations and find the article doesn't actually support the author's claim. Still, the vast majority of citations go unchecked.
I don't think peer-review has to be done by journals, I'm just not sure what the better solution is.
I’ve definitely encountered such cases myself (when actual cited paper didn’t support author’s claims).
Nothing prevents the site from introducing more direct peer review (published X papers on a topic -> review a paper).
Though if we compare two cases, reading a paper to leave an anonymous review vs reading a paper to cite it, the latter seems more authentic and useful (fewer perverse incentives).
I think in math, and in many other fields, it is pretty normal to post all papers on arXiv. But arXiv has a lot of incorrect papers on it (tons of P vs NP papers for example), so journals are supposed to act as a filtering mechanism. How well they succeed at it is debated.
It is naive to think that “journal paper” means correct paper. There are many incorrect papers in journals too (remember the replication crisis).
Imagine, you found a paper on arxiv-like site: there can be metadata that might help determine quality (author credentials, citations by other high-ranked papers, comments) but nothing is certain. There may be cliques that violently disagree with each other (paper clusters with incompatible theories). The medium can help with highlighting quality results (eg by choosing the default ranking algorithm for the search, introducing StackOverflow-like gamification) but it can’t and shouldn’t do science instead of practitioners.
fwiw, editorial review =/= peer review
The second post in that thread is gold:
"""
... I once almost solved a conjecture, establishing the result with an "epsilon loss" in a key parameter. We submitted to a highly reputable journal, but it was rejected on the grounds that it did not resolve the full conjecture. So we submitted elsewhere, and the paper was accepted.
The following year, we managed to finally prove the full conjecture without the epsilon loss, and decided to try submitting to the highly reputable journal again. This time, the paper was rejected for only being an epsilon improvement over the previous literature!
...
"""
While I'm not a mathematician, I think such an attitude on behalf of the journal does not encourage healthy community dynamics.
Instead of allowing the community to join forces by breaking up a larger problem into pieces, it encourages siloing and camper mentality.
I agree. This is also a lack of effort on the journal's part to set expectations of what the reviewers should be looking for in an accepted paper.
In the journal's defense though, what most likely happened is that the reviewers were different between submissions and they didn't know about the context. Ultimately, I think, this type of rejection comes down mostly to the reviewers' discretion, and it can lead to this type of situation.
I cut off the rest of the post but Tao finished it with this:
"""
... Being an editor myself, and having had to decline some decent submissions for a variety of reasons, I find it best not to take these sorts of rejections personally,
...
"""
The high standards of those academic journals sound incredible in this day and age when media is full of misinformation and irrelevant information.
The anecdote about the highly reputable journal rejecting the second of a 2-part paper which (presumably) would have been accepted as a 1-part paper is telling.
Hilarious irony:
> With hindsight, some of my past rejections have become amusing. With a coauthor, I once almost solved a conjecture, establishing the result with an "epsilon loss" in a key parameter. We submitted to a highly reputable journal, but it was rejected on the grounds that it did not resolve the full conjecture. So we submitted elsewhere, and the paper was accepted.
> The following year, we managed to finally prove the full conjecture without the epsilon loss, and decided to try submitting to the highly reputable journal again. This time, the paper was rejected for only being an epsilon improvement over the previous literature!
A lot of the replies make it seem like there is some great over-arching coordination and intent between subsequent submissions, but I’ll offer up an alternative explanation: sometimes the reviewer selection is an utter crap shoot. Just because the first set of reviewers may offer a justification for rejection, it may be completely unrelated to the rationale of a different set of reviewers. Reviewers are human and bring all kinds of biases and perspectives into the process.
It’s frustrating but the result of a somewhat haphazard process. It’s also not uncommon for conflicting comments within the same review cycle. Some of this may be attributed to a lack of clear communication by the author. But on occasion, it leads me to believe many journals don’t take a lot of time selecting appropriate reviewers and settle for the first few that agree to review.
Luck plays a lot of a role in many vaguely similar things. I regularly submit fiction and poetry for publication (with acceptance rates of 2% for fiction and 1.5% for poetry) and so much depends on things well out of my control (which is part of why I’m sanguine about those acceptance rates—given the venues I‘m submitting to, they’re not unreasonable numbers and more recent years’ stats are better than that).¹ In many cases the editors like what they read, but don’t have a place for it in the current issue or sometimes they’re just having a bad day.
⸻
1. For those who care about the full messy details I have charts and graphs at https://www.dahosek.com/2024-in-reejctions-and-acceptances/
And this is how we do science? How is that a good basis for scientific reality? Seems there should at least be transparency and oversight, or maybe the whole system is broken: open reviews on the web, not limited to a small committee, sound better.
Science is about the unknown, building testable models and getting data.
Even an AI review system could help.
This is how we don’t do papers.
Even though my pal did full Gouraud shading in pure assembly using registers only (including the SP and a dummy stack segment) - an absolute breakthrough back in 1997.
We did a 4-server P3 farm seeding 40 Mbit/s of outward traffic in 1999. I myself did a complete Perl-based binary stream unpacker - before protobuf was a thing. It's still live, handling POS terminals.
Discovered a much more effective teaching methodology which almost doubled effectiveness. Time-series compression with grammars... And many more as we keep doing new R&D.
None of it is going to be published as papers on time (if ever), because we really don’t want to suffer this process, which brings very little value afterwards for someone outside academia, or even for people in academia unless they pursue a PhD and similar positions.
I’m struggling to force myself to write an article on text2sql which is already checked and confirmed to contain a novel approach to RAG which works, but do I want to suffer such rejection humiliation? Not really…
It seems this paper ground is reserved for academics and mathematics in a certain ‘sectarian modus operandi’, and everyone else is a sucker. Sadly after a while the code is also lost…
> Half the job in science is informing (or convincing) everyone else about what you made and why it is significant.
Additionally, writing is the best way to properly think things through. If you can't write an article about your work then most likely you don't even understand it yet. Maybe there are critical errors in it. Maybe you'll find that you can further improve it. By researching and citing the relevant literature you put your work in perspective, how it relates to other results.
the point was about the paper thing, not about the joy of writing one's thoughts down or publishing in the public domain. most of my work gets published these days - code, designs, teaching, translations... actually I prefer it to keeping backups of obscure random stuff. but of course, people sure do release a lot of cool stuff on github and whatnot. one of my repos has ±300 stars and growing; thousands of papers aren't anywhere near it in actual impact.
BUT... the topic is not about releasing stuff in the wild. open source being a vehicle for research is outside the scope of the present discussion. the incentives and the barriers to writing what is called an academic paper are. wild stuff does not bring impact factor, and does not get you closer to a PhD in the practical sense.
the whole paper thing is intended for sharing purposes, yet it keeps people away very successfully. it's a system, not a welcoming one, that's all I'm saying.
“do I want to suffer such rejection humiliation? Not really”
The point of Terence Tao’s original post is that you just cannot think of rejection as humiliation. Rejection is not a catastrophe.
> Sadly after a while the code is also lost…
Get it included in the archives of Software Heritage and Internet Archive:
https://archive.softwareheritage.org/ https://wiki.archiveteam.org/index.php/Codearchiver
>Discovered a much more effective teaching methodology which almost doubled effectiveness.
Please, could you elaborate?
> And this is how we do science? How is that a good basis for scientific reality?
The journal did not go out empty, and the paper did not cease to exist.
The incentives on academics reward them for publishing in exclusive journals, and the most exclusive journals - Nature, Science, Annals of Mathematics, The BMJ, Cell, The Lancet, JAMS and so on - only publish a limited number of pages in each issue. Partly because they have print editions, and partly because their limited size is why they're exclusive.
A rejection from "Science" or "Nature" doesn't mean that your paper is wrong, or that it's fraudulent, or that it's trivial - it just means you're not in the 20 most important papers out of the 50,000 published this week.
And yes, if instead of making one big splash you make two smaller splashes, you might well find neither splash is the biggest of the week.
It is not a good way of doing science, but it is the best we have.
All the alternatives, including the ones you proposed, have their own serious downsides, which is why we kept the status quo for the past few decades.
This is much too negative. Peer review indeed misses issues with papers, but by-and-large catches the most glaring faults.
I don’t believe for one moment that the vast majority of papers in reputable conferences are wrong, if only for the simple reason that putting out incorrect research gives an easy layup for competing groups to write a follow-up paper that exposes the flaw.
It’s also a fallacy to state that papers aren’t reproducible without code. Yes code is important, but in most cases the core contribution of the research paper is not the code, but some set of ideas that together describe a novel way to approach the tackled problem.
> It's not "the best we have", it's "the best those in power will allow". Those in power do not want consequences for publishing bad research, and also don't want the reviewing load required to keep bad research out.
This is a very conspiratorial view of things. The simple and true answer is your last suggestion: doing a more thorough review takes more time than anyone has available.
Reviewers work for free. Applying the level of scrutiny you're requesting would require far more work than reviewers currently do, and maybe even something approaching the amount of work required to write the paper in the first place. The more work it takes to review an article, the less willing reviewers are to volunteer their time, and the harder it is for editors to find reviewers. The current level of scrutiny that papers get at the peer-review stage is a result of how much time reviewers can realistically volunteer.
Peer review is a very low standard. It's only an initial filter to remove the garbage and to bring papers up to some basic quality standard. The real test of a paper is whether it is cited and built upon by other scientists after publication. Many papers are published and then forgotten, or found to be flawed and not used any more.
I heard first hand accounts from multiple people of running into a different set of problems (from academia) publishing papers in corporations. Publishing is never simple or easy. If you have concrete examples, or better, generally recognized studies that show there is an objectively better way to do research, I'd very like to know that.
Because, as a PhD who knows dozens of other PhDs in both academia and industry, and who has never heard of this magic new approach to doing science, it would be quite a surprise.
So the lesson is there is not a single good way to do science (or anything really): whatever approach is retained, there will be human biases involved.
So the less brittle option might be to go through all possible approaches, but this is obviously more resource-demanding, plus we still have the issue of creating some synthesis of all the accumulated insights from the various approaches, which itself might be tackled in various ways. That's more of an indefinitely deep spiral, under that perspective.
Another perspective is to consider what the expected outcomes of the stakeholders are. A shiny academic career? An attempt to bring some enlightenment on deep cognitive patterns to the luckiest fellows who have the resources at hand to follow your high-level intellectual gymnastics? A pursuit of ways to improve the human condition through relevant and sound bodies of knowledge? There are definitely many others.
We kept that mostly due to inertia and because it's the most profitable for the journals (everybody does their work for free and they don't have to invest in new systems), not because it's the best for science and scientists.
> It is not a good way of doing science, but it is the best we have.
It may have been for some time, but there are human social dynamics in play.
If by "open" you mean that the paper is there and people just voluntarily choose to review it, rather than having some top-down coordinated assignment process, the problem is that papers by the superstars would get hundreds of reviews while papers from unknown labs would get zero.
You could of course make it double blind, but that seems hard to enforce in practice in such an open setup, and still, hyped papers in fashionable topics would get many reviews while papers that are hardcore theoretical, in an underdog domain, etc. would get zero.
Finally, it also becomes much more difficult to handle conflicts of interest, and the system is highly vulnerable to reviewer collusion.
As others have mentioned, the main problem is that open systems are more vulnerable to low-cost, coordinated external attacks.
This is less of an issue with systems where there is little monetary value attached (I don't know anyone whose mortgage is paid for by their Stack Overflow reputation). Now imagine that the future prospects of a national lab with multi-million yearly budget are tied to a system that can be (relatively easily) gamed with a Chinese or Russian bot farm for a few thousand dollars.
There are already players that are trying hard to game the current system, and it sometimes sort of works, but not quite, exactly because of how hard it is to get into the "high reputation" club (on the other hand, once you're in, you can often publish a lot of lower quality stuff just because of your reputation, so I'm not saying this is a perfect system either).
In other words, I don't think anyone reasonable is seriously against making peer review more transparent, but for better or worse, the current system (with all of its other downsides) is relatively robust to outside interference.
So, unless we (a) make "being a scientist" much more financially accessible, or (b), untangle funding from this new "open" measure of "scientific achievement", the open system would probably not be very impactful. Of course, (a) is unlikely, at least in most high-impact fields; CS was an outlier for a long time, not so much today. And (b) would mean that funding agencies would still need something else to judge your research, which would most likely still be some closed, reputation-based system.
Edit TL;DR: Describe how the open science peer-review system should be used to distribute funding among researchers while being reasonably robust to coordinated attacks. Then we can talk :)
The open internet.
i.e. trolls, brigades, spammers, bots, and all manner of uninformed voices.
> sometimes the reviewer selection is an utter crap shoot
Indeed, but when someone of Tao's caliber submits a paper, any editor would (should) make an extra effort to get the very best researchers to referee the paper.
But isn't that exactly why the submission should be anonymous to the reviewer? It's science, the paper should speak for itself. You don't want a reviewer to be biased by the previous accomplishments of the author. An absolute nobody can make groundbreaking and unexpected discoveries, and a Nobel prize winner can make stupid mistakes.
I agree, also many papers near the beginning say
> We are extending our previous work in [7]
or cite a few relevant papers
> This topic has been studied in [3-8]
Where 3 was published by group X, 5 by group Y, 7 by group Z and 4, 6 and 8 by group W. Anyone can guess the author of the paper is in group W.
Just looking at the citations, it's easy to guess the group of the author.
In many subfields, no attempt is even made to hide the submitter from the reviewers. Usually, even the reviewers can be guessed with high accuracy by the submitters.
Inherent in the editor trying to "get the very best researchers to [review] the paper" is likely to be a leak of signal. (My spouse was a scientific journal editor for years; reviewers decline to review for any number of reasons, often just being too busy and the same reviewer is often asked multiple times per year. Taking the extra effort to say "but this specific paper is from a really respected author" would be bad, but so would "but please make time to review this specific paper for reasons that I can't tell you".)
When submitting papers to high-profile journals, the expectations are very high for all authors. In most cases, the editorial team can determine from the abstract whether the paper is likely to meet their standards for acceptance.
Doesn’t that just move the source of bias from the reviewer to the coordinator? Some ‘nobody’ submitting a paper would get a crapshoot reviewer while a recognisable ‘somebody’ gets a well regarded fair reviewer.
I agree with a lot of this premise but this gave me pause:
>under this model, no paper is ever rejected for publication; papers just get trapped in an infinite revision loop
This could mean a viable paper never gets published. Most journals require that you only submit to one journal at a time. So if it didn’t meet the criteria for whatever reason (even a bad scope fit), it would never get a chance at a better fit somewhere else.
Typically, papers are reviewed by 1 to 3 reviewers. I don't think you realistically can have more than two levels -- the editor as the first line, and then one layer of reviewers.
You can't really blind the author names. First, the reviewers must be able to recognize if there is a conflict of interest, and second, especially for papers on experiments, you know from the experiment name who the authors would be.
Assuming citations follow a Zipf distribution, almost all papers would have to go through all levels.
Unfortunately, reviewers do not get a salary for this...
depending on the publication the reviewers might not even know who the authors are.
But the journal editor should.
Or maybe it doesn't matter. He got them published anyway and just lost some prestigious journal points on his career. Science/math was the winner on the day and that's the whole point of it. Maybe some of those lower ranked journals are run better and legitimately chipping away at the prestige of the top ones due to their carelessness.
Research and publication incur opportunity costs. For every manuscript that has to be reworked and submitted elsewhere, you’re losing the ability to do new research. So a researcher is left trying to balance the cost/benefit of additional time investment. Sometimes that results in a higher quality publication, sometimes it results in abandoning good (or bad) work, and sometimes it just wastes time.
foxglacier offered a very good point! If someone is as talented as Tao, perhaps this is the time to use that influence to improve the journals (like what he did here).
It's as if big journals are after some drama. Or excitement at least. Not just an important result, but a groundbreaking result in its own right. If it's a relatively small achievement that finishes a long chain of gradual progress, it better be some really famous problem, like Fermat's last theorem, Poincaré's conjecture, etc.
I wonder if it's actually optimal from the journal's selfish POV. I would expect it to want to publish articles that would be cited most widely. These should be results that are important, that is, are hubs for more potential related work, rather that impressive but self-contained results.
This is all due to the perverse incentives of modern academia prioritizing quantity over quality, flooding journals with an unending churn of low-effort garbage.
There are easily tens of thousands of researchers globally. If every one did a single paper per year, that would still be way more than journals could realistically publish.
Since it is to some extent a numbers game, yes, academics (especially newer ones looking to build reputation) will submit quantity over quality. More tickets in the lottery means more chances to win.
I'm not sure though how you change this. With so many voices shouting for attention it's hard to distinguish "quality" from the noise. And what does it even mean to prioritize "quality"? Is science limited to 10 advancements per year? 100? 1000? Should useful work in niche fields be ignored simply because the fields are niche?
Is it helpful to have academics on staff for multiple years (decades?) before they reach the standard of publishing quality?
I think perhaps the root of the problem you are describing is less one of "quantity over quality" and more one of an ever-growing "industry" where participants are competing against more and more people.
Perhaps you have better insight into this: why do you think it is appropriate for the primary incentive for professors/researchers to be the quantity of papers published? Or are you saying that it's simply unfixable and we must accept this? As far as I'm aware, the quantity of papers published has no relevance to the value of the papers, with regard to contributing to the scientific record, and focusing on quantity is a very inappropriate and misleading metric for a researcher's actual contributions. And don't downplay how much of a numbers game it is for most people. Your average professor has their entire career tied to quantity, from getting PhD candidates through in a timely manner to acquiring grants. All of it hinges on quantity.
Publishing in the sense of reviewing, editing, etc. Distribution is the easy part.
What's the compensation scheme for reviewers?
Are there any mechanisms to balance out the "race to the bottom" observed in other types of academic compensation? e.g. increase of adjunct/gig work replacing full-time professorship.
Do universities require staff to perform a certain number of reviews in academic journals?
Normally, referees are unpaid. You're just supposed to do your share of referee work. And then the publisher sells the fruits of all that work (research and refereeing) back to universities at a steep price. Academic publishing is one of the most profitable businesses on the planet! But universities and academics are fighting back. Have been for a few years, but the fight is not yet over.
More/easier/cheaper dissemination of research.
> Do universities require staff to perform a certain number of reviews in academic journals?
No. Reviewers mostly do it because it's expected of them, and they want to publish their own papers so they can get grants.
In the end, the university only cares about the grant (money), because they get a cut - somewhere between 30-70% depending on the institution/field - for "overhead"
It's like the mafia - everyone has a boss they kick up to.
My old boss (PI on an R01) explained it like this
Ideas -> Grant -> Money -> Equipment/Personnel -> Experiments -> Data -> Paper -> Submit/Review/Publish (hopefully) -> Ideas -> Grant
If you don't review, go to conferences, etc., it's much less likely your own papers will get published, and you won't get approved for grants.
Sadly there is still a bit of a "junior high popularity contest", scratch-my-back-and-I'll-scratch-yours dynamic present even in "highly respected" science journals.
I hear this from basically every scientist I've known. Even successful ones - not just the marginal ones.
The editor does though, they all know each other. They would know who's not refereeing - and word gets around.
I don't think it's a money problem. It's more like a framing issue, with some reviewers being too narrow-minded, or lacking background knowledge on the topic of the paper. It's not uncommon to have a full lab with people focussing on very different things; when you look at the details, the individual researchers' interests don't overlap much.
Typically, at least in physics (but as far as I know in all sciences), it's not compensated, and the reviewers are anonymous. Some journals try to change this, with some "reviewer coins", or Nature, which now publishes reviewer names if a paper is accepted and if the reviewer agrees. I think these are bad ideas.
Professors are expected to review by their employer, typically, and it's a (very small) part of the tenure process.
It's implicitly understood that volunteer work makes the publishing process 'work'. It's supposed to be a level playing field where money does not matter.
> Do universities require staff to perform a certain number of reviews in academic journals?
Depends on what you mean by "require". At most research universities it is a plus when reviewing tenure files, bonuses, etc. It is a sign that someone cares about your work, and the quality of the journal seeking your review matters. If it were otherwise, faculty wouldn't list the journals they have reviewed for on their CVs. If no one would ever find out about a reviewer's efforts, e.g. if the process were double blind to everyone involved, the setup wouldn't work.
There is no compensation for reviewers, and usually no compensation for editors. It’s effectively volunteer work. I agree to review a paper if it seems interesting to me and I want to effectively force myself to read it a lot more carefully than normal. It’s hard work, especially if there is a problem with the paper, because you have to dig out the problem and explain it clearly. An academic could refuse to do any reviews with essentially no formal consequences, although they’d get a reputation as a “bad citizen” of some kind.
I know from some of my peers that reviewed biology (genetics) papers, they weren’t compensated.
I was approached to review something for no compensation as well, but I was a bad fit.
Right - it's somewhat similar to code review
Sometimes one person is looking for an improvement in this area while someone else cares more about that other area
This is totally reasonable! (Ideally if they're contradicting each other you can escalate to create a policy that prevents future contradictions of that sort)
This seems reasonable?
Suppose the full result is worth 7 impact points, which is broken up into 5 points for the partial result and 2 points for the fix. The journal has a threshold of 6 points for publication.
Had the authors held the paper until they had the full result, the journal would have published it, but neither part was significant enough.
Scholarship is better off for them not having done so, because someone else might have gotten the fix, but the journal seems to have acted reasonably.
If people thought this way - internalizing this publishing point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret if and until you can prove the whole bigger result by yourself. However long that might take.
If a series of incremental results were as prestigious as holding off to bundle them, people would have reason to collaborate and complete each other's work more eagerly. Delaying an almost complete result for a year so that a journal will think it has enough impact points seems straightforwardly net bad; it slows down both progress and collaboration.
> If people thought this way - internalizing this publishing point idea - it would incentivize sitting on your incremental results, fiercely keeping them secret if and until you can prove the whole bigger result by yourself. However long that might take.
This is exactly what people think, and exactly what happens, especially in winner-takes-all situations. You end up with an interesting tension between how long you can wait to build your story, and how long until someone else publishes the same findings and takes all the credit.
A classic example in physics involves the discovery of the J/ψ particle [0]. Samuel Ting's group at MIT discovered it first (chronologically) but Ting decided he needed time to flesh out the findings, and so sat on the discovery and kept it quiet. Meanwhile, Burton Richter's group at Stanford also happened upon the discovery, but they were less inclined to be quiet. Ting found out, and (in a spirit of collaboration) both groups submitted their papers for publication at the same time, and were published in the same issue of Physical Review Letters.
They both won the Nobel 2 years later.
People talk. The field isn't that big.
They got an optimal result in that case, isn't that nice.
The reasonable thing to do here is to discourage all of your collaborators from ever submitting anything to that journal again. Work with your team, submit incremental results to journals who will accept them, and let the picky journal suffer a loss of reputation from not featuring some of the top researchers in the field.
To supply a counter viewpoint here... The opposite is the "least publishable unit" which leads to loads and loads of almost-nothing results flooding the journals and other publication outlets. It would be hard to keep up with all that if there wasn't a reasonable threshold. If anything then I find that threshold too low currently, rather than too high. The "publish or perish" principle also pushes people that way.
> versus just a quick chat,
Everybody is free to keep a blog for this kind of informal chat/brainstorming kind of communication. Paper publications should be well-written, structured, thought-through results that make it worthwhile for the reader to spend their time. Anything else belongs to a blog post.
The educational and editorial quality of papers from before 1980 or so beats just about anything published today. That is what publish or perish - impact factor - smallest publishable unit culture did.
Don't know much about publishing in maths, but in some disciplines it is clearly incentivised to create the biggest possible number of papers out of a single research project, leading automatically to incremental publishing of results. I call it atomic publishing (from Greek atomos - indivisible) since such a paper contains only one result that cannot be split up anymore.
Andrew Wiles spent 6 years working on 1 paper, and then another year working on a minor follow-up.
https://en.m.wikipedia.org/wiki/Wiles%27s_proof_of_Fermat%27...
Or cheese slicer publishing, as you are selling your cheese one slice at a time. The practice is usually frowned upon.
I thought this was called salami slicing in publication.
Science is almost all incremental results. There's far more incentive to get published now than there is to "sit on" an incremental result hoping to add to it to make a bigger splash.
Academic science discovers continuous integration.
In the software world, it's often desired to have a steady stream of small, individually reviewable commits that each deliver an incremental set of value.
Dropping a 20000 files changed bomb "Complete rewrite of linux kernel audio subsystem" is not seen as prestigious. Repeated, gradual contributions and involvement in the community is.
The big question here is if journal space is a limited resource. Obviously it was at one point.
Supposing it is, you have to trade off publishing these incremental results against publishing someone else’s complete result.
What if it had taken ten papers to get there instead of two? For a sufficiently important problem, sure, but the interesting question arises with a problem that's only barely interesting enough to publish in complete form.
The limiting factor isn’t journal space, but attention among the audience. (In theory) the journals' publishing restrictions help to filter and condense information, so the audience is maximally informed given that they will only read a fixed amount.
Journal space is not a limited resource. Premium journal space is.
That's because every researcher has a hierarchy of journals that they monitor. Prestigious journals are read by many researchers. So you're essentially competing for access to the limited attention of many researchers.
Conversely, publishing in a premium journal has more value than a regular journal. And the big scientific publishers are therefore in competition to make sure that they own the premium journals. Which they have multiple tricks to ensure.
Interestingly, their tricks only really work in science. That's because in the humanities, it is harder to establish objective opinions about quality. By contrast, everyone in science can agree that Nature generally has the best papers. So attempting to raise the price on a prestigious science journal works. Attempting to raise the price on a prestigious humanities journal results in its circulation going down, which makes it less prestigious.
Space isn't a limited resource, but prestige points are deliberately limited, as a proxy for the publications' competition for attention. We can appreciate the irony, while considering the outcome reasonable - after all, the results weren't kept out of the literature. They just got published with a label that more or less puts them lower in the search ranking for the next mathematician who looks up the topic.
Hyper-focusing on a single journal publication is going to lead to absurdities like this. A researcher is judged by the total delta of his improvements, at least by his peers and future humanity (the sum of all points, not the max).
It is easy to defend any side of the argument by inflating the "pitfalls of the other approach" ad absurdum. This is silly. Obviously, balance is the key, as always.
Instead, we should look at which side the, uh, industry currently tends to err on. And this is definitely not the "sitting on your incremental results" side. The current motto of academia is to publish more. It doesn't matter if your papers are crap, it doesn't matter if you already have significant results and are working on something big; you have to publish to keep your position. How many crappy papers you release is a KPI of academia.
I mean, I can imagine a world where it would have been a good idea. I think it's a better world, where science journals don't exist. Instead, anybody can put any crap on ~arxiv.org~ Sci-Hub and anybody can leave comments, upvote/downvote stuff, papers have actual links and all the other modern social-network mechanics, up to the point where you can have a feed of the most interesting new papers tailored specially for you. This is open-source, non-profit; 1/1000 of what universities used to pay for journal subscriptions is used to maintain the servers. Most importantly, because of some nice search screens or whatever, the paper's metadata becomes more important than the paper itself, and in the end we are able to assign a 10-word summary of what the current community consensus on the paper is: whether it proves anything, "almost proves" anything, has been disproved 10 times, 20 research teams failed to reproduce the results, or 100 people (see names in the popup) tried to read and failed to understand this gibberish. Nothing gets retracted, ever.
Then it would be great. But as things are and all these "highly reputable journals" keep being a plague of society, it is actually kinda nice that somebody encourages you to finish your stuff before publishing.
Now, should this paper of Tao's have been rejected? I don't know; I think not. Especially the second one. But it's somewhat refreshing.
Two submissions in a medium-reputation journal do not have significantly lower prestige than one in a high-reputation journal.
Gauss did something along these lines and held back mathematical progress by decades.
During his college/grad school days, he was going half nuts; ideas would come to him faster than he could write them down.
Finally one professor saw what was happening and insisted that Gauss take some time off - being German, that involved walking in the woods.
These patterns are ultimately detrimental to team/community building, however.
You see it in software as well: As a manager in calibration meetings, I have repeatedly seen how it is harder to convince a committee to promote/give a high rating to someone with a large pile of crucial but individually small projects delivered than someone with a single large project.
This is discouraging to people whose efforts seem to be unrewarded and creates bad incentives for people to hoard work and avoid sharing until one large impact, and it's disastrous when (as in most software teams) those people don't have significant autonomy over which projects they're assigned.
Hello, fellow Metamate ;)
The idea that a small number of reviewers can accurately quantify the importance of a paper as some number of "impact points," and the idea that a journal should rely on this number and an arbitrary cut off point to decide publication, are both unreasonable ideas.
The journal may have acted systematically, but the system is arbitrary and capricious. Thus, the journal did not act reasonably.
> This seems reasonable?
In some sense, but it does feel like the journal is missing the bigger picture somewhat. Say the two papers are A and B, and we have A + B = C. The journal is saying they'll publish C, but not A and B!
How many step papers before a keystone paper seems reasonable to you?
I suspect readers don’t find it as exciting to read partial result papers. Unless there is an open invitation to compete on its completion, which would have a purpose and be fun. If papers are not page turners, then the journal is going to have a hard time keeping subscribers.
On the other hand, publishing a proof of a Millennium Problem as several installments, is probably a fantastic idea. Time to absorb each contributing result. And the suspense!
Then republish the collected papers as a signed special leather limited series edition. Easton, get on this!
I meant if the editors found the paper’s problem and progress especially worthy of a competition.
> I suspect readers don’t find it as exciting to read partial result papers. Unless there is an open invitation to compete on its completion, which would have a purpose and be fun. If papers are not page turners, then the journal is going to have a hard time keeping subscribers.
Yeah I agree, a partial result is never going to be as exciting as a full solution to a major problem. Thinking on it a little more, it seems more of a shame the journal wasn't willing to publish the first part as that sounds like it was the bulk of the work towards the end result.
I quite like that he went to publish a less-than-perfect result, rather than sitting on it in the hopes of making the final improvement. That seems in the spirit of collaboration and advancing science, whereas the journal rejecting the paper because it's 98% of the problem rather than the full thing seems a shame.
Having said that, I guess as a journal editor you have to make these calls all the time, and I'm sure every author pitches their work in the best light ("There's a breakthrough just around the corner...") and I'm sure there are plenty of ideas that turn out to be dead ends.
... A and B separately.
I agree this is reasonable from the individual publisher standpoint. I once received feedback from a reviewer that I was "searching for the minimum publishable unit", and in some sense the reviewer was right -- as soon as I thought the result could be published I started working towards the publication. A publisher can reasonably resist these kinds of papers, as you're pointing out.
I think the impact to scholarship in general is less clear. Do you immediately publish once you get a "big enough" result, so that others can build off of it? Or does this needlessly clutter the field with publications? There's probably some optimal balance, but I don't think the right balance is immediately clear.
Why would publishing anything new needlessly clutter the field?
Discovering something is hard, proving it correct is hard, and writing a paper about it is hard. Why delay all this?
Playing devil's advocate: there isn’t a consensus on what is incremental vs what is derivative. In theory, the latter may not warrant publication because anyone familiar with the state of the art could connect the dots without reading about it in a publication.
Ouch. That would hurt to hear. It's like they're effectively saying, "yeah, obviously you came up with something more significant than this, which you're holding back. No one would be so incapable that this was as far as they could take the result!"
Thankfully the reviewer feedback was of such low quality in general that it had little impact on my feelings, haha. I think that’s unfortunately common. My advisor told me “leave some obvious but unimportant mistakes, so they have something to criticize, they can feel good, and move on”. I honestly think that was good advice.
If this was actually how stuff was measured, it might be defensible. I'm having trouble believing that things are actually done this objectively rather than the rejections being somewhat arbitrary. Do you think that results can really be analyzed and compared in this way? How do you know that it's 5 and 2 and not 6 and 1 or 4 and 3, and how do you determine how many points a full result is worth in total?
But proportionally, wouldn't a solution without an epsilon loss be much better than a solution with epsilon?
I am not sure exactly which conjecture the author solved, but if the epsilon difference is between an approximate solution and an exact solution, and the journal rejected the exact solution because it was "only an epsilon improvement", I might question how reputable that journal really was.
It's demonstrably (there is one demonstration right there) self-defeating and counter-productive, and so by definition not reasonable.
Each individual step along the way merely has some rationale, but rationales come in the full spectrum of quality.
Given the current incentive scheme in place it's locally reasonable, but the current incentives suck. Is the goal to score the most impact points or to advance our understanding of the field?
In my experience, it depends on the scientist. But it’s hard to know what an advance is. Like, people long searched for evidence of æther before giving up and accepting that light doesn’t need a medium to travel in. Perhaps 100 years from now people will laugh at the "Attention Is All You Need" paper that led to the LLM craze. Who knows. That’s why it’s important to give space to science. From my understanding, Lorenz worked for 5 years without publishing as a research scientist before writing his atmospheric circulation paper. That paper essentially created the field of chaos theory. Would he be able to do the same today? Maybe? Or maybe counting papers, impact factors, and all these other metrics turned science into a game instead of an intellectual pursuit. Shame we cannot ask Lorenz or Maxwell about their time as scientists. They are dead.
I don’t think that’s a useful way to think about this, especially when there’s so little information provided about it. Reviewing is a capricious process.
It actually seems reasonable for a journal that has limited space and too many submissions. What's the alternative, to accept one or two of the half proofs, and bump one or two other papers in the process?
That logic is absurd. You might as well consider the whole internet a journal and everything is already published, so there is nothing to complain about.
Just because the storage is free doesn't mean there's no cost. It costs everyone time to read: the editorial staff, people who subscribe to the journal, etc. It costs copyediting time. More content creates more work.
To be the devil's advocate: Breaking a result up into little pieces to increase your paper count ("salami-slicing") is frowned upon.
Of course this is not what Terry Tao tried to do, but it was functionally indistinguishable from it to the reviewers/editors.
Do Reddit mods also edit math journals?
Sort of. But it makes sense. They missed out the first time and don’t want to be an also-ran. If he had gone for the glory from the start it may have been different. The prestigious journals probably don’t want incremental papers.
Are you sure this wasn’t an application to the DMV or an attempt to pull a building permit?
Don't you hate it when you lose your epsilon, only to find it and it's too late?
I wonder what the conjecture was?
So it's basically like submitting an iOS app to the app store.
A similar story.
I actively blogged about my thesis and it somehow came up in one of those older-model plagiarism detectors (this was years and years ago; it might have been just some ham-fisted Google search).
The (boomer) profs convened a 'panel' without my knowledge and decided I had in fact plagiarized, and informed me I was in deep doo doo. I was pretty much ready to lose my mind, my career was over, years wasted, etc.
Luckily I was buddies with a Princeton prof. who had dealt with this sort of thing, and he guided me through the minefield. I came out fine, but my school never apologized.
Failure is often just temporary and might not even be real failure.
I wish I had an IQ that high...
If you want to become smarter in math, read and attempt to understand brutally hard math papers and textbooks. Torture yourself harder than any time before in your life. :-)
IQ means interesting questions.
The journal lost, as it would have increased their h-index and reputation significantly.
Time is always a better evaluator than anyone in any journal.
[Add: controversy] Q: If "time is an 'always better' evaluator", why do I see nobody out there writing about "compressed time"?
regards...
Is it on the arxiv? If not, please put it there.
The paper is here: http://www.daemonology.net/hyperthreading-considered-harmful...
As its author noted, the paper has done fine citation- and impact-wise.
Paper is here: https://www.daemonology.net/papers/cachemissing.pdf
Your link is the website I put up for non-experts when I announced the issue.
In this case, it's less about discoverability and more about long-term archival. Will daemonology.net continue to exist forever? Arxiv.org might perish, but I am sure the community will make sure the data is preserved.
In my experience, once teachers retire or move on, or a course gets mothballed, it's only a matter of time before course websites disappear or become non-functional.
If the course website was even on the open web to begin with. If they're in some university content management system (CMS), chances are that access is limited to students and teachers of that university and the CMS gets "cleaned" regularly by removing old and "unused" content. Let alone what will happen when the CMS is replaced by another after a couple of years.