Saturday, July 21, 2012

Bonferroni correcting lots of correlations


Someone posed me this question:
Some of my research, if not all of it (:-S) will use multiple correlations. I'm now only considering those correlations that are less than .001. However, having looked at bonferroni corrections today - testing 49 correlations require an alpha level of something lower than 0.001. So essentially meaning that correlations have to be significant at .000. Am I correct on this? The calculator that I am using from the internet says that with 49 correlational tests, with an alpha level of 0.001 - there is chance of finding a significant result in approximately 5% of the time.
Some people have said to me that in personality psychology this is okay - but I personally feel wary about publishing results that could essentially be regarded as meaningless. Knowing that you probably get hammered every day for answers to stats question, I can appreciate that you might not get back to me. However - if you can, could you give me your opinion on using multiple correlations? Just seems a clunky method for finding stuff out.
It seemed like the perfect opportunity for a rant, so here goes. My views on this might differ a bit from conventional wisdom, so might not get you published, but this is my take on it:
  1. Null hypothesis significance testing (i.e. looking at p-values) is a deeply flawed process. Stats people know it's flawed, but everyone does it anyway. I won't go into the whys and wherefores of it being flawed but I touch on a few things here and to a lesser extent here. Basically, the whole idea of determining 'significance' based on an arbitrary cut-off for a p-value is stupid. Fisher didn't think it was a good idea, Neyman and Pearson didn't think it was a good idea, and the whole thing dates back to prehistoric times when we didn't have computers to compute exact p-values for us.
  2. Because of the above, Bonferroni correcting when you've done a billion tests is even more ridiculous because your alpha level will be so small that you will almost certainly make Type II errors, and lots of them. Psychologists are so scared of Type I errors that they forget about Type II errors.
  3. Correlation coefficients are effect sizes. We don't need a p-value to interpret them. The p-value adds precisely nothing of value to a correlation coefficient other than to potentially fool you into thinking that a small effect is meaningful or that a large effect is not (depending on your sample size). I don't care how small your p-value is, an r = .02 or something is crap. If your sample size is fairly big then the correlation should be a precise estimate of the population effect (bigger sample = more precise). What does add value is a confidence interval for r, because it gives you limits within which the true (population) value is likely to lie.

So, in a nutshell, I would (personally) not even bother with p-values in this situation because, at best, they add nothing of any value, and, at worst, they will mislead you. I would, however, get confidence intervals for your many correlations (and if you bootstrap the CIs, which you can in SPSS, then all the better). I would then interpret effects based on the size of r and the likely size of the population effect (which the confidence intervals tell you).
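To make that concrete, here's a minimal sketch in R (with made-up data, so the numbers mean nothing) of looking at a correlation and its 95% confidence interval rather than obsessing over the p-value; SPSS's bootstrap option will do a similar job through the menus, and the boot package in R will give you bootstrapped intervals if you want them.
# Made-up data purely for illustration: the CI around r is the informative bit, not the p
set.seed(123)
anxiety <- rnorm(100)
exam    <- 0.3 * anxiety + rnorm(100)   # built-in population correlation of roughly .29
cor.test(anxiety, exam)                 # reports r, a 95% CI for r and (if you must) a p-value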
Of course reviewers and PhD examiners might disagree with me on this, but they're wrong:-)
Ahhhhhh, that feels better.

SPSS is not dead


This blog was published recently showing that the use of R continues to grow in academia. One of the graphs (Figure 1) showed citations (using google scholar) of different statistical packages in academic papers (to which I have added annotations).
Figure 1: Citations of stats packages (from http://blog.revolutionanalytics.com/2012/04/rs-continued-growth-in-...)

At face value, this graph implies a very rapid decline in SPSS use since 2005. I sent a tongue-in-cheek tweet about this graph, and this perhaps got interpreted as meaning that I thought SPSS use was on the decline. So, I thought I'd write this blog. The thing about this graph is that it deals with citations in academic papers. The majority of people do not cite the package they use to analyse their data, so this might just reflect a decline in people stating that they used SPSS in papers. It might also be that users of software such as R are becoming more inclined to cite the package to encourage others to use it (stats package preference does, for some people, mimic the kind of religious fervour that causes untold war and misery. Most packages have their pros and cons and some people should get a grip). Also, looking at my annotations on Figure 1 you can see that the decline in SPSS is in no way matched by an upsurge in the use of R/Stata/Systat. This gap implies some mysterious ghost package that everyone is suddenly using but that is not included on this graph. Or perhaps people are just ditching SPSS for qualitative analysis, or doing it by hand :-)
If you really want to look at the decline/increase of package use then there are other metrics you could use. This article details lots of them. For example you could look at how much people talk about packages online (Figure 2).
Figure 2: online talk of stats packages (Image from http://r4stats.com/popularity)

Based on this, R seems very popular and SPSS less so. However, the trend for SPSS is completely stable between 2005 and 2010 (the period of decline in Figure 1). Discussion of R is on the increase though. Again, though, you can't really compare R and SPSS here because R is more difficult to use than SPSS (I doubt that this is simply my opinion; I reckon you could demonstrate empirically that the average user prefers the SPSS GUI to R's command interface if you could be bothered). People are, therefore, more likely to seek help on discussion groups for R than they are for SPSS. It's perhaps not an index of popularity so much as usability.
There are various other interesting metrics discussed in the aforementioned article. Perhaps the closest we can get to an answer to package popularity (but not decline in use) is survey data on what tools people use for data mining. Figure 3 shows that people most frequently report R, SPSS and SAS. Of course this is a snapshot and doesn’t tell us about usage change. However, it shows that SPSS is still up there. I’m not sure what types of people were surveyed for this figure, but I suspect it was professional statisticians/business analysts rather than academics (who would probably not describe their main purpose as data mining). This would also explain the popularity of R, which is very popular amongst people who crunch numbers for a living.
Figure 3: Data mining/analytic tools reported in use on Rexer Analytics survey during 2009 (from http://r4stats.com/popularity).
To look at the decline or not of SPSS in academia what we really need is data about campus licenses over the past few years. There were mumblings about universities switching from SPSS after IBM took over and botched the campus agreement, but I'm not sure how real those rumours were. In any case, the teething problems from the IBM takeover seem to be over (at least most people have stopped moaning about them). Of course, we can't get data on campus licenses because it's sensitive data that IBM would be silly to put in the public domain. I strongly suspect campus agreements have not declined though. If they have, IBM will be doing all that they can (and they are an enormously successful company) to restore them because campus agreements are a huge part of SPSS's business.
Also, I doubt campus agreements have declined because they would only stop for two main reasons: (1) SPSS isn't used by anyone anymore, or (2) the cost becomes prohibitive. These two reasons are related, obviously – the point at which they stop the agreement will be a function of cost and campus usage. In terms of campus usage, if you grew up using SPSS as an undergraduate or postgraduate, you're unlikely to switch software later in your academic career (unless you're a geek like me who 'enjoys' learning R). So, I suspect the demand is still there. In terms of cost, as I said, I doubt IBM are daft enough to price themselves out of the market.
So, despite my tongue-in-cheek tweet, I very much doubt that there is a mass exodus from SPSS. Why would there be? Although some people tend to be a bit snooty about SPSS, it's a very good bit of software: a lot of what it does, it does very well. There are things I don't like about it (graphs, lack of robust methods, their insistence on moving towards automated analysis), but there are things I don't like about R too. Nothing is perfect, but SPSS's user-friendly interface allows thousands of people who are terrified of stats to get into it and analyse data and, in my book, that's a very good thing.

One-Tailed Tests


I’ve been thinking about writing a blog on one-tailed tests for a while. The reason is that one of the changes I’m making in my re-write of DSUS4 is to alter the way I talk about one-tailed tests. You might wonder why I would want to alter something like that – surely if it was good enough for the third edition then it’s good enough for the fourth? Textbook writing is quite an interesting process because when I wrote the first edition I was very much younger, and to some extent the content was driven by what I saw in other textbooks. As the book has evolved over the editions, the publishers get feedback from lecturers who use the book, I get emails from people who use the book, and so, again, content gets driven a bit by what people who use the book want and expect to see. People expect to learn about one-tailed tests in an introductory statistics book and I haven’t wanted to disappoint them. However, as you get older you also get more confident about having an opinion on things. So, although I have happily entertained one-tailed tests in the past, in more recent years I have come to feel that they are one of the worst aspects of hypothesis testing and should probably be discouraged.
Yesterday I got the following question landing in my inbox, which was the perfect motivator to write this blog and explain why I’m trying to deal with one-tailed tests very differently in the new edition of DSUS:
Question: “I need some advice and thought you may be able to help. I have a one-tailed hypothesis, ego depletion will increase response times on a Stroop task. The data is parametric and I am using a related T-Test.
Before depletion the Stroop performance mean is 70.66 (12.36)
After depletion the Stroop performance mean is 61.95 (10.36)
The t-test is, t (138) = 2.07, p = .02 (one-tailed)
Although the t-test comes out significant, it goes against what I have hypothesised. That Stroop performance decreased rather than increased after depletion. So it goes in the other direction. How do I acknowledge this in a report?
I have done this so far. Is it correct?
Although the graph suggests there was a decrease in Stroop performance times after ego-depletion. Before ego-depletion (M=70.66, SD=12.36) after ego-depletion (M= 61.95, SD=10.36), a t-test showed there was a significance between Stroop performance phase one and two t (138) = 10.94, p <.001 (one-tailed).”
This question illustrates perfectly the confusion people have about one-tailed tests. The author quite rightly wants to acknowledge that the effect was in the opposite direction, but quite wrongly still wants to report the effect … and why not: effects in the opposite direction are interesting and intriguing, and any good scientist wants to explain interesting findings.
The trouble is that my answer to the question of what to do when you get a significant one-tailed p-value but the effect is in the opposite direction to what you predicted is (and I quote my re-written chapter 2 here): “if you do a one-tailed test and the results turn out to be in the opposite direction to what you predicted you must ignore them, resist all temptation to interpret them, and accept (no matter how much it pains you) the null hypothesis. If you don’t do this, then you have done a two-tailed test using a different level of significance from the one you set out to use”
[Quoting some edited highlights of the new section I wrote on one-tailed tests]:
One-tailed tests are problematic for three reasons:
  1. As the question I was sent illustrates, when scientists see interesting and unexpected findings their natural instinct is to want to explain them. Therefore, one-tailed tests are dangerous because like a nice piece of chocolate cake when you’re on a diet, they waft the smell of temptation under your nose. You know you shouldn’t eat the cake, but it smells so nice, and looks so tasty that you shovel it down your throat. Many a scientist’s throat has a one-tailed effect in the opposite direction to that predicted wedged in it, turning their face red (with embarrassment).
  2. One-tailed tests are appropriate only if a result in the opposite direction to the expected direction would result in exactly the same action as a non-significant result (Lombardi & Hurlbert, 2009; Ruxton & Neuhaeuser, 2010). This can happen, for example, if a result in the opposite direction would be theoretically meaningless or impossible to explain even if you wanted to (Kimmel, 1957). Another situation would be if, for example, you’re testing a new drug to treat depression. You predict it will be better than existing drugs. If it is not better than existing drugs (non-significant p) you would not approve the drug; however, if it was significantly worse than existing drugs (significant p but in the opposite direction) you would also not approve the drug. In both situations, the drug is not approved.
  3. One-tailed tests encourage cheating. If you do a two-tailed test and find that your p is .06, then you would conclude that your results were not significant (because .06 is bigger than the critical value of .05). Had you done this test one-tailed, however, the p you would get would be half of the two-tailed value (.03). This one-tailed value would be significant at the conventional level. Therefore, if a scientist finds a two-tailed p that is just non-significant, they might be tempted to pretend that they’d always intended to do a one-tailed test, halve the p-value to make it significant, and report that significant value (there’s a quick simulated demonstration of this halving below). Partly this problem exists because of journals’ obsession with p-values, which rewards significance. This reward might be enough of a temptation for some people to halve their p-value just to get a significant effect. This practice is cheating (for reasons explained in one of the Jane Superbrain boxes in Chapter 2 of my SPSS/SAS/R books). Of course, I’d never suggest that scientists would halve their p-values just so that they become significant, but it is interesting that two recent surveys of practice in ecology journals concluded that “all uses of one-tailed tests in the journals surveyed seemed invalid” (Lombardi & Hurlbert, 2009), and that only 1 in 17 papers using one-tailed tests were justified in doing so (Ruxton & Neuhaeuser, 2010).
For these reasons, DSUS4 is going to discourage the use of one-tailed tests unless there's a very good reason to use one (e.g., 2 above). 
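To make the halving in point 3 concrete, here's a quick simulated demo in R (the data are made up and have nothing to do with the questioner's study): when the effect is in the predicted direction, a one-tailed paired t-test simply hands you the two-tailed p cut in half.
set.seed(42)
before <- rnorm(70, mean = 70, sd = 12)            # hypothetical pre-depletion scores
after  <- before - rnorm(70, mean = 4, sd = 14)    # hypothetical post-depletion scores
two.tailed <- t.test(before, after, paired = TRUE, alternative = "two.sided")
one.tailed <- t.test(before, after, paired = TRUE, alternative = "greater")
two.tailed$p.value       # the honest two-tailed p
one.tailed$p.value       # exactly half of it, because the effect is in the predicted direction
two.tailed$p.value / 2   # identical to the line above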
PS Thanks to Shane Lindsay who, a while back now, sent me the Lombardi and Ruxton papers.

References

  • Kimmel, H. D. (1957). Three criteria for the use of one-tailed tests. Psychological Bulletin, 54(4), 351-353. doi: 10.1037/h0046737
  • Lombardi, C. M., & Hurlbert, S. H. (2009). Misprescription and misuse of one-tailed tests. Austral Ecology, 34(4), 447-468. doi: 10.1111/j.1442-9993.2009.01946.x
  • Ruxton, G. D., & Neuhaeuser, M. (2010). When should we use one-tailed hypothesis testing? Methods in Ecology and Evolution, 1(2), 114-117. doi: 10.1111/j.2041-210X.2010.00014.x

Rock makes you Racist ... Apparently


Like buses, you don’t get a blog for weeks and then two come at once. I saw today this headline: Does listening to rock make you racist? Seven minutes of Bruce Spri... in the Daily Mail online. They also included a helpful picture of Scott Weiland wearing a pseudo-nazi outfit (well, it was a black shirt, with a bit of a poor choice of peaked cap) to ‘reflect the association between rock and white people’. ‘The association between rock and white people’, bugger me, it’s as though Bad Brains, Living Colour, 24-7 Spyz, Animals as Leaders, Body Count (shall I go on?) or those collaborations between Public Enemy and Anthrax had never happened. In the world of the Daily Mail, rock makes you a racist, simple as. Now they’ve got the science to back it up. Mothers and fathers everywhere protect your children from this evil and rancid puff of Satan’s anal smoke that pervades society in the form of ‘rock music’, it will infect their brains and make them racists. I’d have thought this would be a good thing as far as the Daily Mail are concerned given this, and this, and this, and, well, every other article they publish.
Anyway, enough about the Daily Mail. The point is, this piece of research has been seized on by many a website, including the NME, who have for years been trying to find a good reason to justify looking down their self-important noses at rock and heavy metal. Now they have one: it makes us all racist. Or does it?
It’s based on Helen LaMarre’s doctoral thesis. I don’t want to get into bashing this study because I suspect like most scientists who find their studies spreading like wildfire across the internet, they at no point said that listening to Bruce Springsteen makes you a racist. It’s easy to bash any study – nothing is perfect. My issue here is with the way the study is presented by the media.
Essentially, in this study they took 148 undergrads (all Caucasian otherwise it doesn’t really make sense), and sat them in a waiting room for 7 minutes during which one of three types of music was played:
  • Mainstream rock: The White Stripes, Bon Jovi, Bruce Springsteen, Van Morrison, Foo Fighters (2 songs), Radiohead
  • Radical white power rock (i.e. racist dickhead rock): Prussian Blue (2 songs), Screwdriver, Bound for Glory, Max Resist (2 songs)
  • Top 40 Pop: Justin Timberlake (3 songs), Fergie and Akon (2 songs), Fergie (without Akon), Gwen Stefani (with Akon, who gets about a bit), Gwen Stefani (2 songs), Rihanna.
At the end of this they were asked to allocate $500,000, as percentage chunks, to four student groups based on descriptions of those groups. The descriptions depicted White American, African American, Arab American and Latino American student groups. So, for example, if you wanted to make equal allocations, then you would respond 25%, 25%, 25%, 25%. They found that when listening to pop music the allocations were fairly even (means of 24.02, 25.49, 24.02, 24.76); after rock music they allocated more to the White American student group (M = 35) compared to all of the others (all Ms around 21). After listening to right-wing music, allocations were higher to White American students (M = 39.47) than to African (M = 16.09), Arab (M = 14.58) and Latino (M = 25.58) students.
Statistically speaking these are pretty decent sized effects (huge in some cases). However, a few things to consider in making your own mind up about whether this shows that 7 minutes of Bruce Springsteen makes you a racist:
  1. Is a control group of pop music appropriate? A no music control group (just being in the waiting room) would give you a better baseline of people’s natural responses. The pop music (I’m not really familiar with it, but judging by song titles) was quite love oriented, so it’s possible that hearing songs about love etc. puts you in a good mood, and in a good mood you make more balanced allocations of the funds. I don’t know this to be true, it’s a hypothesis. However, I think a no music control group is a better baseline than any other form of music, because you can then assess whether a particular genre changes things compared to nothing at all. We could then see whether rock music affects allocations negatively, or pop music affects them positively. As it stands we just know the genres differ, but we don’t know whether pop makes you fairer or rock makes you unfair, or both.
  2. Is it the music that matters? This kind of research is very difficult to do because you’re not just manipulating the genre of music, you’re manipulating all sorts of other confounds that systematically vary with your independent variable. One example in this study is (arguably) aggression (rock is arguably more aggressive than pop, and right-wing rock is undoubtedly more aggressive than lots of other things). So here, you have a pattern of the rockier the music, the more money was allocated to White American students, but is it just because of a mood induction? Is it that the more of a negative mood you’re in, the more biased you are to the same race? (It would be an interesting finding in itself that people show a same-race bias when they’re in a bad mood, but it would undermine the conclusion that rock music per se causes a same-race bias because there are lots of things that might put you in a bad mood other than rock music. Reading the Daily Mail, for example.) The problem here is that rock wasn’t pitted against, say, hardcore hip hop, or better still perhaps some Minor Threat or Fugazi, who are very aggressive but promote very liberal themes in their lyrics. No measures of mood were taken so we don’t know whether there was a mood effect at all, and we certainly don’t know whether it’s the genre that matters, the lyrics, or the tone of the music. As I said, it’s really hard to match all of the variables that you might want to match, but the press portray the research in very simplistic terms and it’s not that simple.
  3. What about individual differences? When asked what music they listened to, the most common response was pop (the details of this questionnaire are sketchy so I’m not entirely sure what question was asked). So, in effect you’ve got a bunch of people who probably don’t listen to rock much, who are played rock in a waiting room. Some other people were played music that they ‘prefer’ (pop) and they are subsequently more fair-minded and nice than those played less familiar and less preferred music (rock). You’d really need some kind of measure of people’s preference and then look for an interaction between genre and preference. Maybe it’s simply that when you’re subjected to music that you don’t particularly like you show a same-race bias? This goes back to the mood effects problem. Again, what’s needed here is a bit more research that delves into how you’re affected by the familiarity of the music and whether it’s music you actually like: by having a wider range of genres (not just rock and pop) and different groups of people with different tastes (and from different racial backgrounds) we might be able to pick apart some of these potential confounds.
  4. The money allocation task: arguably the money allocation task magnifies the effect. You have 100% to allocate over 4 boxes, and you have to allocate exactly 100%. So, let’s imagine you’re fair-minded and allocate across the boxes as 25%, 25%, 25%, 25%. Job done. Let’s say you change your mind and decide that you want to give box 1 an extra percent: 26%, 25%, 25%, 25%. You’ve now allocated 101% and that’s not allowed. So, you’d have to remove 1% from another box to complete the task as requested. Perhaps you decide box 2 is your least favourite, so you now allocate: 26%, 24%, 25%, 25%. You have allocated 100% and you have completed the task as requested. My point is that a small preference for box 1 (you wanted to add 1%) gets doubled because to do this you have to subtract some from one or more of the other boxes: a 1% difference between box 1 and 2 is doubled to a 2% difference (the toy example after this list shows this in two lines of R). I’m not saying that this means that the results are nonsense or anything like that, but I am saying that it has probably magnified the effects reported, because a slight preference for one group will be magnified simply because to increase funds to that group you have to take them away from another.
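The toy example I promised – nothing more than the arithmetic above written out as R, but it shows how the forced 100% total turns a 1-point preference into a 2-point gap:
allocation <- c(26, 25, 25, 25)      # a 1% preference for group 1 … but this sums to 101
allocation[2] <- allocation[2] - 1   # the task forces you to take that 1% from somewhere else
allocation                           # 26 24 25 25: the 1% preference now appears as a 2% gap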
These are just a few points off the top of my head. Of course, I’m a huge rock and metal fan and I have my own biases: years of listening to Slayer have not made me a Satanist any more than years of listening to Public Enemy made me anti-white (although it did give me an enlightening new perspective on many things). I’m prepared to be proved wrong, but on the basis of this study I’m not concerned that I’ll wake up tomorrow as a raving racist. So, like I said, this blog is more about how the press portray what is actually a very complex research question in a completely idiotic way. I always like reading studies about music preferences and this, like many I have read, poses interesting questions about the effect that music has on us and how we study it. There are lots of methodological issues that arise in trying to control the appropriate confounds if you’re trying to make statements about genres of music. There are also lots of interesting questions about what aspects of music affect people (so digging below the rather arbitrary classifications of rock, pop, rap or indie) and how these characteristics interact with the personality types of people that listen to them to affect cognition and emotion.
Right, I’m off to listen to some Devin Townsend, after which I’m going to start a campaign to shut down all bad coffee outlets. Ziltoid ……..

TwitterPanic, NHST, Wizards, and the cult of significance again


****Warning, some bad language used: don't read if you're offended by that sort of thing****
I haven’t done a blog in a while, so I figured I ought to. Having joined Twitter a while back, I now find myself suffering from TwitterPanic™, an anxiety disorder (which I fully expect to be part of DSM-V) characterised by a profound fear that people will unfollow you unless you keep posting things to remind them of why it’s great to follow you. In the past few weeks I have posted a video of a bat fellating himself and a video of my cat stopping me writing my textbook. These might keep the animal ecologists happy, but most people probably follow me because they think I’m going to write interesting things about statistics, and not because they wanted to see a fellating bat. Perhaps I’m wrong, and if so please tell me, because I find it much easier to get ideas for things to put online that rhyme with stats (like bats and cats) than I do about stats itself.
Anyway, I need to get over my TwitterPanic, so I’m writing a blog that’s actually about stats. A few blogs back I discussed whether I should buy the book ‘the Cult of Statistical .... I did buy it, and read it. Well, when I say I read it, I started reading it, but if I’m honest I got a bit bored and stopped before the end. I’m the last person in the world who could ever criticise anyone for labouring points but I felt they did. To be fair to the authors I think the problem was more that they were essentially discussing things that I already knew, and it’s always difficult to keep focus when you’re not having ‘wow, I didn’t know that’ moments. I think if you’re a newbie to this debate then it’s an excellent book and easy to follow.
The Fields on Honeymoon
In the book, the authors argue the case for abandoning null hypothesis significance testing, NHST (and I agree with most of what they say – see this), but they frame the whole debate a bit like a war between them (and people like them) and ‘the sizeless scientists’ (that’s the people who practice NHST). The ‘sizeless scientists’ are depicted (possibly not intentionally) like a bunch of stubborn, self-important, bearded, cape-wearing, fuckwitted wizards who sit around in their wizardy rooms atop the tallest ivory tower in the kingdom of elephant tusks, hanging onto notions of significance testing for the sole purpose of annoying the authors with their fuckwizardry. I suspect the authors have had their research papers reviewed by these fuckwizards. I can empathise with the seeds of bile that experience might have sown in the authors’ bellies; however, I wonder whether writing things like ‘perhaps they [the sizeless scientists] don’t know what a confidence interval is’ is the first step towards thinking that the blue material with stars on that you’ve just seen would look quite fetching as a hat.
I don’t believe that people who have PhDs and do research are anything other than very clever people, and I think the vast majority want to do the right thing when it comes to stats and data analysis (am I naïve here?). The tone of most of the emails I get suggests that people are very keen indeed not to mess up their stats. So, why is NHST so pervasive? I think we can look at a few sources:
  1. Scientists in most disciplines are expected to be international experts in their discipline, which includes being theoretical leaders, research experts, and drivers of policy and practice. On top of this they’re also expected to have a PhD in applied statistics. This situation is crazy really. So, people tend to think (not unreasonably) that what they were taught in university about statistics is probably still true. They don’t have too much time to update their knowledge. NHST is appealing because it’s a very recipe-book approach to things and recipes are easy to follow.
  2. Some of the people above will be given the task of teaching research methods/statistics to undergraduates/postgraduates. Your natural instinct is to teach what you know. If you were taught NHST, then that’s what you’ll teach. You might also be doing a course that forms part of a wider curriculum and that will affect what you teach. For example, I teach second-year statistics, and by the time I get these students they have had a year of NHST, so it seems to me that it would be enormously confusing for them if I suddenly said ‘oh, all that stuff you were taught last year, well, I think it’s bollocks, learn this instead’. Instead, I weave in some arguments against NHST, but in a fairly low-key way so that I don’t send half of the year into mass confusion and panic. Statistics is confusing enough for them without me undermining a year of their hard work.
  3. Even if you wanted to remove NHST from your curriculum, you might be doing your students a great disservice because reviewers of research will likely be familiar with NHST and expect to see it. It might not be ideal that this is the case, but that is the world as we currently know it. When I write up research papers I would often love to abandon p-values but I know that if I do then I am effectively hand-carving a beautiful but knobbly stick, attaching it to my manuscript, and asking the editor if he or she would be so kind as to send the aforementioned stick to the reviewers so that they can beat my manuscript with it. If your students don’t know anything about NHST are you making their research careers more tricky to negotiate?
  4. Textbooks. As I might have mentioned a few million times, I’m updating Discovering Statistics Using SPSS (DSUS as I like to call it). This book is centred around NHST, not because I’m particularly a fan of it, but because it’s what teachers and people who adopt the book expect to see in it. If they don’t see it, they will probably use a different book. I’m aware that this might come across as me completely whoring my principles to sell my book, and perhaps I am, but I also feel that you have to appreciate where other people are coming from. If you were taught NHST, that’s what you’ve done for 10 or 20 years, and that’s what you teach because that’s what you genuinely believe is the right way to do things, then the last thing you need is a pompous little arse from Brighton telling you to change everything. It’s much better to have that pompous little arse try to stealth-brainwash you into change: yes, each edition I feel that I can do a bit more to promote approaches other than NHST. Subvert from within and all that.
So, I think the cult of significance will change, but it will take time, and rather than seeing it as a war between rival factions, perhaps we should pretend it’s Christmas Day, get out of the trenches, play a nice game of football/soccer, compliment each other on our pointy hats, and walk away with a better understanding of each other. It’d be nice if we didn’t go back to shooting each other on Boxing Day though.
The APA guidelines of over 10 years ago and the increased use of meta-analysis have, I think, had a positive impact on practice. However, we’re still in a sort of hybrid wilderness where everyone does significance tests and, if you’re lucky, people report effect sizes too. I think perhaps one day NHST will be abandoned completely, but it will take time, and by the time it has we’ll probably have found a reason why confidence intervals and effect sizes are as comedic as sticking a leech on your testicles to cure a headache.
I’ve completely lost track of what the point of this blog was now. It started off that I was going to have a rant about one-tailed tests (I’ll save that for another day) because I thought that might ease my TwitterPanic. However, I got sidetracked by thinking about the cult of significance book. I now feel a bit bad, because I might have been a bit critical of it and I don’t like it when people criticise my books so I probably shouldn’t criticise others’. I stuck a sweet wizard-hat-related honeymoon picture in to hopefully soften the authors’ attitude towards me in the unlikely event that they ever read this and decide to despise me. I then took some therapy for dealing with worrying too much about what other people think. It didn’t work. Once I’d thought about that book I remembered that I’d wanted to tell anyone who might be interested that I thought the authors had been a bit harsh on people who use NHST. I think that sidetrack was driven by a subconscious desire to use the word ‘fuckwizardry’, because it made me laugh when I thought of it and Sage will never let me put that in DSUS4. The end result is a blog about nothing, and that’s making my TwitterPanic worse …

Definitions

  • Fuckwizard: someone who does some complicated/impressive task in a fuckwitted manner but with absolute confidence that they are doing it correctly.
  • Fuckwizardry: doing a complicated or impressive task in a fuckwitted manner but with absolute confidence that you are doing it correctly

Factor Analysis for Likert/Ordinal/Non-normal Data


My friend Jeremy Miles sent me this article by Basto and Periera (2012) this morning with the subject line ‘this is kind of cool’. Last time I saw Jeremy, my wife and I gatecrashed his house in LA for 10 days to discuss writing the R book that’s about to come out. During that stay we talked about lots of things, none of which had anything to do with statistics, or R for that matter. It’s strange then that with the comforting blanket of the Atlantic ocean between us, we only ever talk about statistics, or rant at each other about statistics, or R, or SPSS, or each other.
Nevertheless, I’m always excited to see a message from Jeremy because it’s usually interesting, frequently funny, and only occasionally insulting about me. Anyway, J was right, this article was actually kind of cool (in a geeky stats sort of way). The reason the article is kind of cool is that it describes an SPSS interface for doing various cool factor analysis (FA) or principal components analysis (PCA) things in SPSS, such as analysis of correlation matrices other than those containing Pearson’s r, and parallel analysis/MAP. It pretty much addresses two questions that I get asked a lot:
  1. My data are Likert/not normal, can I do a PCA/FA on them?
  2. I’ve heard about Velicer’s minimum average partial (MAP) criterion and parallel analysis; can you do them in SPSS?
PCA/FA is not something I use, and the sum total of my knowledge is in my SPSS/SAS/R book. Some of that isn’t even my knowledge, it’s Jeremy’s, because he likes to read my PCA chapter and get annoyed about how I’ve written it. The two questions are briefly answered in the book, sort of.
The answer to question 1 is to apply the PCA to a matrix of polychoric correlations (for Likert/ordinal/skewed data) or tetrachoric correlations (for dichotomous data) rather than the matrix of Pearson’s r. This is mentioned so briefly that you might miss it on p. 650 of the SPSS book (3rd ed) and p. 772 (in the proofs at least) of the R book.
The answer to question 2 is in Jane Superbrain 17.2 in the books, in which I very briefly explain parallel analysis and point to some syntax to do it that someone else wrote, and I don’t talk about MAP at all.
I cleverly don’t elaborate on how you would compute polychoric correlations, or indeed tetrachoric ones, and certainly don’t show anyone anything about MAP. In part this is because the books are already very large, but in the case of the SPSS book it’s because SPSS won’t let you do PCA on any correlation matrix other than one containing Pearson’s r, and MAP/parallel analysis have, let’s just say, been overlooked in the software. Until now, that is.
Basto and Periera (2012) have written an interface for doing PCA on correlation matrices containing things other than Pearson’s r, and you can do MAP, parallel analysis and a host of other things. I recommend the article highly if PCA is your kind of thing.
However, the interesting thing is that underneath Basto and Periera’s interface SPSS isn’t doing anything – all of the stats are computed using R. In the third edition of the SPSS book I excitedly mentioned the R plugin for SPSS a few times. I was mainly excited because at the time I’d never used it, and I stupidly thought it was some highly intuitive interface that enabled you to access R from within SPSS without knowing anything about R. My excitement dwindled when I actually used it. It basically involves installing the plugin which may or may not work. Even if you get it working you simply type:
BEGIN PROGRAM R.
END PROGRAM.
and stick a bunch of R code in between. It seemed to me that I might as well just use R and save myself the pain of trying to locate the plugin and actually get it working (it may be better now – I haven’t tried it recently). Basto and Periera’s interface puts a user-friendly dialog box around a bunch of R commands.
I’m absolutely not knocking Basto and Periera’s interface – I think it will be incredibly useful to a huge number of people who don’t want to use R commands, and it very neatly provides a considerable amount of extra functionality to SPSS that would otherwise be unavailable. I’m merely making the point that it’s a shame that, having installed the interface, SPSS will get the credit for R’s work.
Admittedly it will be handy to have your data in SPSS, but you can do this in R with a line of code:
library(foreign)                                      # read.spss() lives in the foreign package
data<-read.spss(file.choose(), to.data.frame = TRUE)
Which opens a dialog box for you to select an SPSS file, and then puts it in a data frame that I have unimaginatively called ‘data’. Let’s imagine we opened Example1.sav from Basto and Periera’s paper. These data are now installed in an object called data.
Installing the SPSS interface involves installing R, installing the R plugin for SPSS, then installing the widget itself through the utilities menu. Not in itself hard, but by the time you have done all of this I reckon you could type and execute these commands in R:
library(polycor)                                      # hetcor() lives in the polycor package
rMatrix<-hetcor(data)$correlations
This creates a matrix (called rMatrix) containing polychoric correlations of the variables in the data frame (which, remember, was called data).
You probably also have time to run these commands:
library(nFactors)                                     # nScree() lives in the nFactors package
parallelAnalysis<-nScree(rMatrix)
parallelAnalysis
That’s your parallel analysis done on the polychoric correlation matrix (first command) and displayed on the screen (second command). The results will mimic the values in Figure 4 in Basto and Periera. If you want to generate Figure 3 from their paper as well then execute:
plotnScree(parallelAnalysis)
It's not entirely implausible that the R plugin for SPSS will still be downloading or installing at this point, so to relieve the tedium you could execute this command:
library(psych)                                        # VSS() and principal() live in the psych package
mapResults<-VSS(rMatrix, n.obs = 590, fm = "pc" )
mapResults
That’s the MAP analysis done on the polychoric correlation matrix using the VSS() function in R. n.obs is just the number of observations in the dataframe, and fm = “pc” tells it to do PCA rather than FA. The results will mimic the values in Figures 5 and 6 of  Basto and Periera.
The R plugin isn't working so you're frantically googling for a solution. While you do that, a small gerbil called Horace marches through the cat flap that you didn't realise you had, jumps on the table and types this into the R console:
PCA<-principal(rMatrix, nfactors = 4, rotate = "varimax")
print.psych(PCA, cut = 0.3, sort = TRUE)
Which will create an object called PCA, which contains the results of a PCA on your polychoric correlation matrix, extracting 4 factors and rotating using varimax (as Basto and Periera do for Example 1). You’ll get basically the same results as Figure 13.
You probably also have time to make some tea.
Like I said, my intention isn’t to diss Basto and Periera’s interface. I think it’s great, incredibly useful and opens up the doors to lots of useful things that I haven’t mentioned. My intention is instead to show you that you can do some really complex things in R with little effort (apart from the 6 month learning curve of how to use it obviously). Even a gerbil called Horace can do it. Parallel analysis was done using 33 characters: a line of code. PCA on polychoric correlations sounds like the access code to a secure unit in a mental institution, but it’s three short lines of code. Now that is pretty cool.


PS The R code here needs the following packages installed: psych, nFactors, foreign and polycor (which is where hetcor() lives).

Newspapers and 7 Core Statistical Concepts


There have been a few news stories recently about misunderstanding of statistics; especially confusing correlation with cause. For example, the Guardian ran this story/tutorial (http://t.co/AwCugVIV). Hot on the heels of that useful reminder came a story linking unemployment in the UK to immigration from outside of the EU (http://t.co/7rTcFvbH). By the time the Daily Mail had got their hands on the story the ‘link’ had turned into a ‘cause’ (http://tinyurl.com/6s2lujq) and, according to one of their columnists, was irrefutable proof that all immigrants should be shot (http://tinyurl.com/85xdfqy). Well, OK, he didn’t exactly say that but if you read between the lines I’m sure that’s what he meant. The trouble is not just that journalists and editors of newspapers (even the ones that carefully manufacture self-images of being more intellectual than other newspapers) know arse all about statistical theory, but that the 4.5 million or so readers of the Daily Mail (and others) also know arse all about statistics. Even my parents, who read the Daily Mail, know arse all about statistics and I’ve given them numerous copies of my books … that could be an association, but it’s certainly not a cause.
Anyway, all of this reminded me that when I’m trying to convince my students that statistics is a good thing to learn about, my main point is that it is a transferable skill that helps you to navigate the tricky terrain of life. After 3 years of a psychology degree (or any other degree that teaches applied statistics), you’re in the rather privileged position of being able to evaluate evidence for yourself. You don’t have to worry about whether the newspaper, or your GP, tells you not to vaccinate your child because the injection will grow them a second head, you can track down the research and evaluate the evidence for yourself.
To quote Utts, “What good is it to know how to carry out a t-test if a student can not read a newspaper article and determine that hypothesis testing has been misused?” [1, p. 78]. Utts [1] suggests seven core statistical ideas that could be described as ‘useful life skills’, which I summarized as follows [2]:
  1. When causal relationships can and cannot be inferred, including the difference between observational studies and randomized experiments;
  2. The difference between statistical significance and practical importance, especially when using large sample sizes;
  3. The difference between finding ‘no effect’ and finding no statistically significant effect, especially when sample sizes are small;
  4. Sources of bias in surveys and experiments, such as poor wording of questions, volunteer response, and socially desirable answers;
  5. The idea that coincidences and seemingly very improbable events are not uncommon because there are so many possibilities (to use a classic example, although most people would consider it an unbelievable coincidence/unlikely event to find two people in a group of 30 that share the same birthday, the probability is actually about .7, which is fairly high – you can check this with the one-liner after this list);
  6. ‘Confusion of the inverse’, in which a conditional probability in one direction is confused with the conditional probability in the other direction (for example, the prosecutor’s fallacy);
  7. Understanding that variability is natural, and that ‘normal’ is not the same as ‘average’ (for example, the average male height in the UK is 175cm; although a man of 190cm is, therefore, well above average, his height is within the normal range of male heights).
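Incidentally, the birthday ‘coincidence’ in point 5 is a one-liner to check in R:
1 - prod((365:336)/365)   # probability that at least 2 of 30 people share a birthday: about .706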

In a book chapter I wrote on teaching statistics in higher education [2], I suggest that we should try, if nothing else, to get students to leave their degree programs with these core skills. We could also think about using real-world examples (not necessarily from within our own discipline) to teach students how to apply these skills. This could have several benefits: (1) it might make the class more interesting; (2) it helps students to apply knowledge beyond the realm of their major subject; and (3) it will undermine the power that newspapers and the media in general have to sensationalize research findings, spread misinformation, and encourage lazy thinking. So, my main point is that, as teachers, we could think about these things when teaching, and students might take comfort in the fact that the stats classes they endured might have given them a useful shield to fend off the haddock of misinformation with which the media slaps their faces every day.
Right, I’m off to restructure my statistics course around those 7 key ideas ….

References

  1. Utts J. What Educated Citizens Should Know About Statistics and Probability. The American Statistician 2003;57(2):74-79.
  2. Field AP. Teaching Statistics. In: Upton D, Trapp A, editors. Teaching Psychology in Higher Education. Chichester, UK: Wiley-Blackwell; 2010.

Bias in End of Year Album Polls?



So, in rolls 2012 and out rolls another year. I like new year: it’s a time to fill yourself with optimism about the exciting things that you will do. Will this be the year that I write something more interesting than a statistics book, for example? It’s also a time of year to reflect upon all of the things that you thought you’d do last year but didn’t. That’s a bit depressing, but luckily 2011 was yesterday and today is a new year and a new set of hopes that have yet to fail to come to fruition.
It’s also the time of year that magazines publish their end-of-year polls. Two magazines that I regularly read are Metal Hammer and Classic Rock (because, in case it isn’t obvious from my metal-themed website and podcasts, I’m pretty fond of heavy metal and classic rock). The ‘album of the year’ polls in these magazines are an end-of-year treat for me: it’s an opportunity to see what highly rated albums I overlooked, to wonder at how an album that I hate has turned up in the top 5, or to pull a bemused expression at how my favourite album hasn’t made it into the top 20. At my age, it’s good to get annoyed about pointless things.
Anyway, for years I have had the feeling that these end of year polls are biased. I don’t mean in any nefarious way, but simply that reviewers who contribute to these polls tend to rate recently released albums more highly than ones released earlier in the year. Primacy and recency effects are well established in cognitive psychology: if you ask people to remember a list of things, they find it easier to recall items at the start or end of the list. Music journalists are (mostly) human so it’s only reasonable that reviewers will succumb to these effects, isn’t it?
I decided to actually take some time off this winter Solstice, and what happens when you take time off? In my case, you get bored and start to wonder whether you can test your theory that end of year polls are biased. The next thing you know, you’re creating a spreadsheet with Metal Hammer and Classic Rock’s end of year polls in it, then you’re on Wikipedia looking up other useful information about these albums, then, when you should be watching the annual re-run of Mary Poppins you find that you’re getting R to do a nonparametric bootstrap of a mediation analysis. The festive period has never been so much fun.
Anyway, I took the lists of top 20 albums from both Metal Hammer and Classic Rock magazine. I noted down their position in the poll (1 = best album of the year, 20 = 20th best album of the year), I also found out what month each album was released. From this information I could calculate how many months before the poll the album came out (0 = album came out the same month as the poll, 12 = the album came out 12 months before the poll). I called this variable Time.since.release.
My theory implies that an album’s position in the end of year list (position) will be predicted from how long before the poll the album was released. A recency effect would mean that albums released close to the end of the year (i.e. low score on Time.since.release) will be higher up the end of year poll (remember that the lower the score, the higher up the poll the album is). So, we predict a positive relationship between position and Time.since.release.
Let’s look at the scatterplot:

Both magazines show a positive relationship: albums higher up the poll (i.e. low score on position) tend to have been released more recently (i.e., low score on the number of months ago that the album was released). This effect is much more pronounced though for Metal Hammer than for Classic Rock.
To quantify the relationship between position in the poll and time since the album was released we can look at a simple correlation coefficient. Our position data are a rank, not interval/ratio, so it makes sense to use Spearman’s rho, and we have a fairly small sample so it makes sense to bootstrap the confidence interval. For Metal Hammer we get (note that because of the bootstrapping you’ll get different results if you try to replicate this) rho = .428 (bias = –0.02, SE = 0.19) with a 95% bias corrected and accelerated confidence interval that does not cross zero (BCa 95% = .0092, .7466). The confidence interval is pretty wide, but tells us that the correlation in the population is unlikely to be zero (in other words, the true relationship between position in the poll and time since release is likely to be more than no relationship at all). Also, because rho is an effect size we can interpret its size directly, and .428 is a fairly large effect. In other words, Metal Hammer reviewers tend to rank recent albums higher than albums released a long time before the poll. They show a relatively large recency effect.
What about Classic Rock? rho = .038 (bias = –0.001, SE = 0.236) with a BCa 95% CI = –.3921, .5129. The confidence interval is again wide, but this time crosses zero (in fact, zero is more or less in the middle of it). This CI tells us that the true relationship between position in the poll and time since release could be zero, in other words no relationship at all. We can again interpret the rho directly, and .038 is very small (it’s close to zero). In other words, Classic Rock reviewers do not tend to rank recent albums higher than albums released a long time before the poll. They show virtually no recency effect. This difference is interesting (especially given there is overlap between contributors to the two magazines!).
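If you want to play along at home, here's roughly how that sort of bootstrap looks in R using the boot package. The data frame below (hammer, with columns Position and Time.since.release) is a made-up stand-in for my spreadsheet, so the numbers it spits out won't match the ones above; it's the recipe that matters.
library(boot)
set.seed(2012)
# Hypothetical stand-in for a top 20 poll (not the real data)
hammer <- data.frame(Position = 1:20,
                     Time.since.release = sample(0:12, 20, replace = TRUE))
spearman <- function(d, i) cor(d$Position[i], d$Time.since.release[i], method = "spearman")
booted <- boot(hammer, spearman, R = 2000)   # 2000 bootstrap samples of rho
booted                                       # original rho, bias and standard error
boot.ci(booted, type = "bca")                # bias corrected and accelerated 95% CI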
It then occurred to me, because I spend far too much time thinking about this sort of thing, that perhaps it’s simply the case that better albums come out later in the year and this explains why Metal Hammer reviewers rate them higher. ‘Better’ is far too subjective a variable to quantify, however, it might be reasonable to assume that bands that have been around for a long time will produce good material (not always true, as Metallica’s recent turd floating in the loo-loo demonstrates). Indeed, Metal Hammer’s top 10 contained Megadeth, Anthrax, Opeth, and Machine Head: all bands who have been around for 15-20 years or more. So, in the interests of fairness to Metal Hammer reviewers, let’s see whether ‘experience’ mediates the relationship between position in the poll and time since the album’s release. Another 30 minutes on the internet and I had collated the number of studio albums produced by each band in the top 20. The number of studio albums seems like a reasonable proxy for experience (and better than years as a band, because some bands produce an album every 8 years and others every 2). So, I did a mediation analysis with a nonparametric bootstrap (thanks to the mediation package in R). The indirect effect was 0.069 with a 95% CI = –0.275, 0.537. The proportion of the effect explained by mediation was about 1%. In other words, the recency bias in the Metal Hammer end of year poll could not be explained by the number of albums that bands in the poll had produced in the past (i.e. experience). Basically, despite my best efforts to give them the benefit of the doubt, Metal Hammer critics are actually biased towards giving high ratings to more recently released albums.
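For the curious, the mediation analysis went something along these lines. Again, the data frame and variable names (including N.albums for the number of studio albums) are invented stand-ins rather than the real poll data, so treat this as a sketch of the recipe rather than a re-run of my analysis.
library(mediation)
set.seed(666)   # metal
# Invented stand-in data: 20 albums with release timing, back-catalogue size and poll position
hammer <- data.frame(Time.since.release = sample(0:12, 20, replace = TRUE),
                     N.albums = sample(1:15, 20, replace = TRUE))
hammer$Position <- rank(hammer$Time.since.release + rnorm(20, sd = 4))
med.model <- lm(N.albums ~ Time.since.release, data = hammer)              # mediator model
out.model <- lm(Position ~ Time.since.release + N.albums, data = hammer)   # outcome model
med <- mediate(med.model, out.model, treat = "Time.since.release",
               mediator = "N.albums", boot = TRUE, sims = 1000)            # nonparametric bootstrap
summary(med)   # ACME is the indirect effect, with its bootstrapped CI and proportion mediated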
These results might imply many things:
  • Classic Rock reviewers are more objective when creating their end of year polls (because they over-ride the natural tendency to remember more recent things, like albums).
  • Classic Rock reviewers are not human because they don’t show the recency effects that you expect to find in normal human beings. (An interesting possibility, but we need more data to test it ...)
  • Metal Hammer should have a ‘let’s listen to albums released before June’ party each November to remind their critics of albums released earlier in the year. (Can I come please and have some free beer?)
  • Metal Hammer should inversely weight reviewers’ ranks by the time since release so that albums released earlier in the year get weighted more heavily than recently released albums. (Obviously, I’m offering my services here …)
  • I should get a life.
Ok, that’s it. I’m sure this is of no interest to anyone other than me, but it does at least show how you can use statistics to answer pointless questions. A bit like what most of us scientists do for a large portion of our careers. Oh, and if I don’t get a ‘Spirit of the Hammer’ award for 15 years’ worth of infecting students of statistics with my heavy metal musings then there is no justice in the world. British Psychological Society awards and National Teaching Fellowships are all very well, but I need a spirit of the hammer on my CV (or at least Defender of the Faith).

Have a rockin' 2012
andy
P.P.S. My top 11 (just to be different) albums of 2011 (the exact order is a bit rushed):
  1. Opeth: Heritage
  2. Wolves in the Throne Room: Celestial Lineage
  3. Anthrax: Worship Music
  4. Von Hertzen Brothers: Stars Aligned
  5. Liturgy: Split LP (although the Oval side is shit)
  6. Ancient Ascendent: The Grim Awakening
  7. Graveyard: Hisingen Blues
  8. Foo Fighters: Wasting Light
  9. Status Quo: Quid Pro Quo
  10. Manowar: Battle Hymns MMXI
  11. Mastodon: The Hunter