News and brain candy for the philosophy community
A paper recently published online in the Journal of Computer Assisted Learning (JCAL) has generated lively discussion about how the educational use of Twitter can affect college student engagement and grades. ‘The effect of Twitter on college student engagement and grades’ by R. Junco, G. Heiberger and E. Loken was published in November last year. The paper ‘provides experimental evidence that Twitter can be used as an educational tool to help engage students and to mobilise faculty into a more active and participatory role’ (quoted from the abstract).
However, a JCAL reader, Dr Ellen Murphy, has raised some interesting issues about the paper, particularly about the language used to describe cause and effect, in a letter she wrote to the JCAL Editor, Charles Crook. Rather than being published in JCAL itself, we think the debate and correspondence between the authors, Dr Murphy and the JCAL Editor are better aired via this blog.
JCAL Editor’s response
Letter to the Editor in response to The effect of Twitter on college student engagement and grades (E. Murphy)
This letter was submitted with a view to publication in the journal. Our advice on submissions does include the possibility of such correspondence. However, in my 8-year tenure as Editor, this is the first time I have had to consider that possibility. Moreover, letters seem to have been scarce across the whole history of the journal. On the other hand, it is certainly an accepted and proper practice for journals to publish challenges to published papers. I believe it would be appropriate to do that in JCAL if the scientific credibility of a paper were convincingly questioned. In the extreme case this might result in a formal withdrawal of a published paper.
Therefore, as Editor, I am now required to judge whether and how to publicise the particular challenge contained within the correspondence arising from the paper by Junco et al. (in press).
I have discussed the case with an experienced member of the journal’s editorial board. We have reached a shared judgement as to the best form of publication. We do not think there are grounds arising from this debate to publicise it in the pages of the journal. This should be taken to mean that, after careful consideration, we do not judge that the critique of the paper is fatal: in other words, the original paper still contains useful data and a credible interpretative reflection. Of course, readers may make different interpretations of that data and those, in turn, may echo the reservations developed in the commentary published here. However, the fieldwork is clearly enough described to allow readers to do this. On the other hand, we do recognise (as does the author of the target paper) that the commentary makes useful points and we would like to see these points made in a public arena. On advice from the publisher, Wiley, we are therefore making the exchange available through their publication blog.
My own comments here will be of a general kind because, although Murphy’s commentary makes observations about a specific paper, I believe those observations apply widely. It is therefore useful to keep refreshing them within the community of researchers and practitioners in this area. I suggest that a lot hinges on the discourse of “effects”. If an academic reference point is sought for this controversy, the natural one is the debate that has been generated around Richard Clark’s seminal paper “Media will never influence learning”. Within that debate, Clark argues that media are always the “vehicles” for educational practices. Those practices amount to particular structures of interaction between people and a material or social environment. Such interactions are “mediated”. But individual instances of media are replaceable. So the interesting research questions centre on how particular media might “work” as vehicles for some educational practice. In Clark’s argument, this is largely a matter of determining optimal economy and efficiency, although others (e.g., Cobb, 1997) have extended the notion of “efficiency” to embrace broader areas of cognition.
This implies that it must always be suspect to head up empirical papers in terms of “the effects of X [some ICT] on Y [some learning]”. Yet doing so is widespread in the literature. For example, the current issue of the journal Computers and Education has an “effects of” paper title of just this kind, but also another with the (perhaps more cautious) “factors influencing” and (even more cautious) “factors related to”. Similarly, the current issue of our sister journal BJET has one paper with “effects of” and one with “influence of”. But the discourse of causality can be more subtle. So the same issue has a “fostering (practice) X by doing (ICT) Y” and an “enhancing (practice) X using (ICT) Y”. All these formulations tempt readers into the kind of simple causal reasoning that must be suspect. Even if Clark’s arguments are not accepted as the basis of this doubt, it is widely recognised in the learning sciences arena that a more systemic approach to these matters is necessary.
The commentary on this target article about Twitter appears to accept that such “effects discourses” are questionable but, perhaps because the practice is so widespread, the author argues it is not the main issue. And perhaps that is the right attitude – while being vigilant in noticing the kind of workarounds (illustrated above) that risk leading naïve readers to simpler conclusions.
However, having noted the suspect status of a simple causal mechanism in this Twitter situation, the commentary then concentrates on the shortcomings of an experimental/control group design. This does rather suggest a tolerance for neat, single-variable causal accounts. In the laboratory-clean world of some scientific activity, it is reasonable to strive for tight experimental/control design configurations in which only one thing is allowed to vary. I take it that is not what is being sought here. Apart from the many possible inequalities that the commentary identifies, I can think of many others – so many, in fact, that it is unreasonable to expect an author ever to be diligent in ruling out all the hidden differences that such between-group designs can harbour.
This tension is common in the review of papers in this area. In fact, in the present case, the original reviewers were keen to draw attention to the poor description of the control group – very much in the manner that is pursued in the present commentary. A resubmission went a long way towards improving this description. However, this effort was more in the spirit of establishing a credible “benchmark” of educational practice (rather than an experimental “control” for practice) against which an interesting intervention could be judged. My own response to the original reviewers’ comments was to encourage this fuller specification but also to urge the authors to dwell more on the processes of interaction that were afforded by the Twitter intervention. I believe that there is an appetite in the community for case studies in which this dynamic is exposed. I believe also that the way we evaluate this dynamic is helped by a benchmark that reassures us we are not dealing with (for instance) a particularly receptive student community or a particularly assertive (say, results-oriented) teacher. In the end, I felt that the authors did achieve these useful aims. I felt that they gave a sufficiently rich account of the local events that occurred when Twitter was introduced to this educational context and a usefully rich account of the wider context itself.
In short, I welcome the underlying force of this commentary on Junco et al.’s paper. There is certainly a need to be more cautious in the discourse of causality – not only when we write titles but when we interpret our results. The present authors have admitted some carelessness in this respect. I feel that they are honest in this admission, but they share such guilt with a very large number of their peers in relation to this matter. On the other hand, I think readers are increasingly sensitive to this problem and I trust that they will exercise good sense in taking a cautious (if illuminating) message from the present paper. The journal welcomes the chance to debate this important issue but stands by its faith in the paper at the focus of the present correspondence. Any intervention involving a new technology is subject to limits of evaluation (not least the Hawthorne effects implied in the commentary above), but it will always be difficult to take confident first steps in trying to understand what happens when we do exercise such new media. The present paper is a welcome, if still limited, venture into this territory.
Clark, R. E. (1983). Reconsidering research on learning from media. Review of Educational Research, 53, 445-459.
Clark, R. E. (1994). Media will never influence learning. Educational Technology Research and Development, 42(2), 21-29.
Cobb, T. (1997). Cognitive efficiency: Toward a revised theory of media. Educational Technology Research and Development, 45(4), 21-35.
Junco, R., Heiberger, G. and Loken, E. (in press). The effect of Twitter on college student engagement and grades. Journal of Computer Assisted Learning. doi: 10.1111/j.1365-2729.2010.00387.x