A news report in Nature tells of yet another study concluding that Bayesian statistics are better than frequentist statistics. **Disclaimer: I don’t have time to read the actual scientific paper being reported, so the opinions that follow are about the Nature news report, not the original article.**
John Ioannidis wrote a great paper a few years ago called “Why most published research findings are false”. However, Nature quotes his response to this new article, which to me is just too simple-minded. Sure, it could have been taken out of context, but in any case it is not a message I support:
“The family of Bayesian methods has been well developed over many decades now, but somehow we are stuck to using frequentist approaches,” says physician John Ioannidis of Stanford University in California, who studies the causes of non-reproducibility. “I hope this paper has better luck in changing the world.”
I will repeat my opinion on this kind of thing: (1) frequentist statistics are neither perfect nor terrible, (2) Bayesian statistics are neither perfect nor terrible, (3) it is possible to cheat with Bayesian statistics, (4) it is possible to cheat with frequentist statistics, and in conclusion, (5) the problem is not with this or that particular statistical paradigm, but rather with researchers really wanting to find results that are interesting…and therefore making interesting conclusions however they can (whether there is any truth to those conclusions or not).
Blaming a particular statistical paradigm is just a red herring. If we want science to be more reproducible, the scientific reward system needs to shift in favour of skepticism. This will have its downsides too, because if we don’t reward scientists making bold claims, then science could become boring and may in fact fail to notice subtle but ultimately interesting results. Of course, the price of rewarding scientific boldness is many published results that are untrue.
The problem of how to encourage better scientific practice is at the intersection of the sociology of science and statistics (and methodology more generally). If you ignore one of these pieces (e.g. this recent Nature news report coming down on an entire statistical paradigm), then you will necessarily be oversimplifying the problem.
…and I forgot that R-squared statistics are weird in R when the intercept is removed. So much has already been written about this issue that you really don’t have to read on; this post is just me underlining one of my common silly mistakes so that I never repeat it. This FAQ says it all. I understand the arguments, but the problem for me is that in interactive mode I often centre the response variable before sending it to `lm`, and then automatically add the `-1` to the formula. But in my head I know there’s still an intercept. For some reason I can remember to mentally subtract a degree of freedom from the ANOVA tables, but I always forget about the R-squared. I think the solution is just to never use `-1` unless I really mean it.
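To remind myself why this bites, here’s a minimal example I cooked up (the data are fake; the point is only that when the intercept is dropped, `summary.lm` defines R-squared against the baseline “y = 0” rather than “y = mean(y)”):

```r
# Fake data with a nonzero mean and no real trend:
set.seed(1)
x <- 1:50
y <- 10 + rnorm(50)

summary(lm(y ~ x))$r.squared       # near zero, as it should be
summary(lm(y ~ x - 1))$r.squared   # much larger, despite a worse fit
```

The second model is strictly worse, yet its reported R-squared is far higher, because the total sum of squares it is compared against is `sum(y^2)` instead of `sum((y - mean(y))^2)`.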
Here’s a little function I often put at the beginning of R scripts in which I’m going to do a lot of cross validation:
loo <- function(data) lapply(1:nrow(data), function(i) data[-i,])
It usually works pretty well just as is, but sometimes needs additional bells and whistles (e.g. K-fold CV). It’s also painfully slow for big data sets, but it’s some nice sugar for most of my CV problems.
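For the K-fold case, a companion in the same spirit might look like this (the function and the name `kfold` are my sketch, not from the snippet above): it returns a list of K training sets, each omitting one randomly assigned fold.

```r
# K-fold analogue of the loo() one-liner (my sketch): assign each row
# to one of K folds at random, then drop one fold per training set.
kfold <- function(data, K = 10) {
  folds <- sample(rep(seq_len(K), length.out = nrow(data)))
  lapply(seq_len(K), function(k) data[folds != k, , drop = FALSE])
}

training_sets <- kfold(mtcars, K = 8)
length(training_sets)   # 8
```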
Three Commandments for Modelers
The principles of model development can be summarized as three important rules:

1. Lie
2. Cheat
3. Steal
These require some elaboration:
**Lie.** A good model includes incorrect assumptions. Practical models have to be simple enough that the number of parameters does not outstrip the available data. Theoretical models have to be simple enough that you can figure out what they’re doing and why. The real world, unfortunately, lacks these properties. So in order to be useful, a model must ignore some known biological details, and replace these with simpler assumptions that are literally false.
**Cheat.** More precisely, do things with data that would make a statistician nervous, such as using univariate data to fit a multivariate rate equation by multiplication of limiting factors or Liebig’s law of the minimum, and choosing between those options based on your biological knowledge or intuition. Statisticians like to let data “speak for themselves.” Modelers should do that when it is possible, but more often the data are only one input into decisions about model structure, the rest coming from the experience and subject-area knowledge of the scientists and modelers.
**Steal.** Take ideas from other modelers and models, regardless of discipline. Cutting-edge original science is often done with conventional kinds of models using conventional functional forms for rate equations—for example, compartment models abound in the study of HIV/AIDS. If somebody else has developed a sensible-looking model for a process that appears in your model, try it. If somebody else invested time and effort to estimate a parameter in a reasonable way, use it. Of course you need to be critical, and don’t hesitate to throw out what you’ve stolen if it doesn’t fit what you know about your system.
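As a toy illustration of the two options mentioned under “Cheat” (the saturating response forms here are generic choices of mine, not from the original text), here is how multiplying limiting factors differs from Liebig’s law of the minimum:

```r
# One saturating (Monod-style) response per factor; parameters are made up.
monod <- function(s, k) s / (k + s)

rate_multiplicative <- function(light, nutrient, rmax = 1) {
  # Every factor discounts the rate:
  rmax * monod(light, k = 5) * monod(nutrient, k = 2)
}

rate_liebig <- function(light, nutrient, rmax = 1) {
  # Only the single most limiting factor sets the rate:
  rmax * min(monod(light, k = 5), monod(nutrient, k = 2))
}

rate_multiplicative(10, 1)   # both factors reduce the rate
rate_liebig(10, 1)           # only the (more limiting) nutrient does
```

Both equations are fit to the same univariate response data; choosing between them is exactly the kind of decision that falls to biological knowledge rather than to the data alone.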
Repost of a very useful tip on how to get your local branch to a specific remote branch.
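For my own reference, the commands I mean are along these lines (the branch and remote names are made up, and the throwaway repos exist only so the snippet runs anywhere):

```shell
# Throwaway setup: stand-ins for a real remote and working repository.
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"
git init -q "$tmp/work"
cd "$tmp/work"
git -c user.email=me@example.com -c user.name=me commit -q --allow-empty -m init
git branch -M mybranch
git remote add origin "$tmp/remote.git"

# The tip itself: push the local branch to a *specific* remote branch
# and record it as upstream, so later `git push`/`git pull` need no arguments:
git push -q -u origin mybranch:feature-x

git branch -vv   # mybranch now tracks origin/feature-x
```

If the remote branch already exists and you only want to track it, `git branch --set-upstream-to=origin/feature-x mybranch` does the same bookkeeping without pushing.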
I know that discussions of peer review reform are getting kind of boring these days. Usually, each suggestion has pros and cons, and so it becomes difficult to use arguments alone to sort out which system will be best. What will actually happen to peer review is what always happens in any culture…the people and groups with the most conviction, energy, and resources will be able to influence the peer review process, and there will be some good and bad aspects to the results no matter who these people are. Not that debate isn’t important and useful. It’s just that it can be difficult to disentangle salesmanship from dispassionate reasoning, and at this point I’ve heard so much about this topic that it’s all starting to sound like noise.
Nevertheless, I wanted an excuse to share some new research in the foundations of mathematics, which suggests to me that peer review isn’t necessary for doing extremely influential and interesting research. A mathematician I know referred to this book as “…a major tectonic change in the bedrock that math is built on.” So it’s pretty important, I think.
The cool part about this work is that:
40 authors collaborated on GitHub to produce a 470-page Creative Commons-licensed book in six months, without the involvement of any academic publisher. The book resets the foundations of mathematics in terms that suit computer formalisation – they formalised their theory in both Coq and Agda before writing the book. Several of the authors are active on Google+ answering questions about it.
I don’t know what Coq and Agda are…but for me the really interesting thing is that it seems as though this research has managed to completely circumvent the peer review process. Unfortunately a quick Google search couldn’t verify that there was absolutely no formal peer review (does anyone know?). But if there wasn’t, then these authors have effectively bypassed peer review via collaboration. Here’s a particularly inspiring quotation from one of the authors:
But more importantly, the spirit of collaboration that pervaded our group at the Institute for Advanced Study was truly amazing. We did not fragment. We talked, shared ideas, explained things to each other, and completely forgot who did what (so much in fact that we had to put some effort into reconstruction of history lest it be forgotten forever). The result was a substantial increase in productivity. There is a lesson to be learned here (other than the fact that the Institute for Advanced Study is the world’s premier research institution), namely that mathematicians benefit from being a little less possessive about their ideas and results. I know, I know, academic careers depend on proper credit being given and so on, but really those are just the idiosyncrasies of our time. If we can get mathematicians to share half-baked ideas, not to worry who contributed what to a paper, or even who the authors are, then we will reach a new and unimagined level of productivity.
I love this…especially the ‘explained things to each other’ bit. I know I’m being a bit utopian. But I still love it. Can we do this in ecology?
Ben Bolker has recently put together some fantastic notes on this topic:
The second one’s a little technical, but really useful.
Edit: Ben has recently updated the first of these two documents.
A little while ago Radford Neal noticed some inefficiencies in R, so he started working on ways to make it more efficient. He’s reached a bit of a milestone in the last few days with the release of pqR. Check it out…it looks really cool.
PS: Julia is a newish language that’s also threatening R (and MATLAB?) with its speed, though it will probably be a while before its infrastructure grows to the point where it can really start competing with R.
Anyone who tries to write software with other people will run into version control (e.g. git, svn). I’ve recently been involved with a project that uses git, which is great but can confuse me. I just found a really simple, to-the-point, large-font guide to git basics that’s really great.