Breaking News! Parachute use does not reduce death or major traumatic injury when jumping from aircraft

Researchers from Harvard Medical School, UCLA, and the University of Michigan showed in a randomized controlled trial that the use of a parachute made no difference in the number of deaths or injuries among people jumping out of an airplane.

To determine whether using a parachute prevents death or major traumatic injury when jumping from an aircraft, a randomized controlled trial was carried out. On private or commercial aircraft between September 2017 and August 2018, 92 aircraft passengers aged 18 and over were screened for participation. Of those, 23 agreed to be enrolled and were randomized.

The design was simple: the intervention entailed jumping from an aircraft (airplane or helicopter) with either a parachute or an empty backpack (unblinded).

The main outcome measure was the composite of death or major traumatic injury (defined as an Injury Severity Score over 15) upon impact with the ground, assessed immediately after landing.

The results show that parachute use did not significantly reduce death or major injury (0% for parachute v 0% for control; P>0.9). This finding was consistent across multiple subgroups. Compared with individuals screened but not enrolled, participants included in the study were on aircraft at significantly lower altitude (mean of 0.6 m for participants v mean of 9146 m for non-participants; P<0.001) and lower velocity (mean of 0 km/h v mean of 800 km/h; P<0.001).

The authors conclude that parachute use did not reduce death or major traumatic injury when jumping from aircraft in this first randomized evaluation of the intervention. However, the trial was only able to enroll participants on small stationary aircraft on the ground, which calls for caution in extrapolating the results to high-altitude jumps. When beliefs regarding the effectiveness of an intervention exist in the community, randomized trials might selectively enroll individuals with a lower perceived likelihood of benefit, thus diminishing the applicability of the results to clinical practice.


Parachutes are routinely used to prevent death or major traumatic injury among individuals jumping from aircraft. However, the evidence supporting the efficacy of parachutes is weak, and guideline recommendations for their use are principally based on biological plausibility and expert opinion. Despite this widely held yet unsubstantiated belief in their efficacy, many studies of parachutes have reported injuries related to their use in both military and recreational settings, and parachutist injuries are formally recognized in the World Health Organization’s ICD-10 (international classification of diseases, 10th revision). This could raise concerns for supporters of evidence-based medicine, because numerous medical interventions believed to be useful have ultimately failed to show efficacy when subjected to properly executed randomized clinical trials.

Representative study participant jumping from aircraft with an empty backpack.
This individual did not incur death or major injury upon impact with the ground.

Here’s a link to the article.


Registered Reports in the Journal of Computer Assisted Learning


Em. prof. dr. Paul A. Kirschner, Editor-in-Chief, Open University of the Netherlands / Thomas More University of Applied Sciences, and

Dr. Jeroen Janssen, Associate Editor, Utrecht University, Netherlands

The science of learning and instruction is a rapidly evolving field with a variety of theoretical and methodological approaches. It is also sometimes burdened by two gadflies: the so-called ‘replication crisis’, in which attempts to replicate accepted theories and results often fail (for whatever reason), and ‘positive’ publication bias, in which articles that fail to confirm hypotheses and/or that don’t produce statistically significant results are seen to have a lesser chance of getting published. In an attempt to combat both of these problems, and to increase trust in research, we at the Journal of Computer Assisted Learning have chosen to explicitly invite authors to submit Registered Reports, a new format of empirical article designed to improve the transparency and reproducibility of hypothesis-driven research.

The call inviting authors to submit Registered Reports went out in February 2018. Since then, we have received a growing number of Registered Report submissions, and we are now delighted to publish our first collection of Registered Reports. To us, this shows that the learning and instruction community is responding positively to this new format.

Registered Reports differ from conventional empirical articles in that part of the review process takes place before researchers even collect and analyze data. The Introduction and Method sections (including hypotheses and all relevant materials) are peer-reviewed prior to data collection. High-quality pre-registered protocols that meet strict editorial criteria are then offered in-principle acceptance, which guarantees publication of the results, provided that the authors adhere to their pre-registered protocol.

In a blog about the introduction of Registered Reports in Developmental Science, Daniel Ansari (University of Western Ontario, Canada) and Judit Gervain (Université Paris Descartes, Paris) noted the following:

“Registered Reports (RR) place an emphasis on the adequacy of methods and analysis plan for studies deemed to be informative. They thus benefit both the submitting authors and the discipline. If the rationale and the methods of the planned studies are sound, and accepted during the review process, authors can expect their work to be published, unless they deviate from the accepted methods, even if the results are weak, null, or different than predicted. The discipline gains, because the publication process will be more transparent and is therefore more likely to curtail questionable research practices. This increases the reliability and reproducibility of data. … [Also, in] RR format interpretations are less likely to be modified post hoc to fit the initial predictions.”

Three arguments for Registered Reports

For studies with a clear hypothesis, the Registered Reports format has three key strengths compared with traditional research publishing. First, it prevents publication bias by ensuring that editorial decisions are made on the basis of the theoretical importance and methodological rigor of a study, before the research outcomes are known. Second, by requiring authors to pre-register their study methods and analysis plans in advance, it prevents common forms of research bias, including p-hacking (misuse of data analysis to find patterns that appear statistically significant) and HARKing (Hypothesizing After the Results are Known, or hindsight bias), while still welcoming unregistered analyses that are clearly labelled as exploratory. Third, because protocols are accepted before data are collected, the format provides an incentive for researchers to conduct important replication studies and other novel, resource-intensive projects (e.g., involving multi-site consortia) that would otherwise be too risky to undertake when the publishability of the outcome is contingent on the results.

It is encouraging to see the positive reaction to Registered Reports from researchers in their various roles as editors, authors, and peer reviewers. For more information and answers to frequently asked questions about Registered Reports, see:


  • Authors are invited to submit ‘Stage 1 manuscripts’, which outline the rationale and method of the proposed study or studies (see Guidelines for Authors).
  • All submitted papers will be peer-reviewed for theoretical significance and methodological quality. In-Principle Acceptance (IPA) will be given to high-quality submissions. Data collection may only commence after a Stage 1 manuscript has been accepted.
  • Once the study (or studies) is complete, the ‘Stage 2 manuscript’ will also be peer-reviewed to check that it is consistent with the pre-registered Stage 1 protocol. Editorial decisions will not be based on the perceived importance, novelty, or conclusiveness of the results.
  • The journal operates double-blind peer review as a means of tackling real or perceived bias in the review process, so authors must provide their title page as a separate file from the main document. The title page includes the complete title of the paper and the affiliation and contact details (both postal and email address) of the corresponding author.

Five tips for preparing a Registered Report for submission

The following tips have been offered by Ansari and Gervain. They are also useful for authors considering submitting a Registered Report to the Journal of Computer Assisted Learning.

  1. Carefully read the submission guidelines. They are extremely detailed and informative.
  2. Ensure that you have all the resources necessary to carry out the research you have proposed. You don’t want to receive in-principle acceptance of a Stage 1 Registered Report, only to find out that you do not have the resources to collect the data! Hopefully in the near future, funding agencies will begin to support Stage 1 manuscripts that have received IPA.
  3. Make sure that you are setting yourself a realistic timeline that takes into account variability in the time it will take to get your manuscript reviewed at both Stage 1 and Stage 2. This is especially important for graduate students and post-doctoral fellows who may have pressing deadlines for finalizing their data collection so that they can defend their thesis, move to a new position, etc.
  4. Be very precise in your hypotheses and analysis plan. Registered Reports are all about spending a lot of time thinking and discussing your research plans before you collect the data. Being as precise as possible and having a very detailed analysis plan can save you a lot of time both during peer review as well as the analysis after you have collected the data and before you prepare your Stage 2 manuscript.
  5. Make sure that your research ethics approval will allow you to share your data on an open data repository, such as the Open Science Framework. Open data are a requirement for the publication of a Registered Report.

Evidence-informed pedagogy

Em. prof. dr. Paul A. Kirschner (a, b) & Tim Surma, MSc (a)

(a) ExpertiseCentrum voor Effectief Leren (ExCEL), Thomas More University of Applied Sciences, Mechelen, Belgium

(b) kirschnerED, Educational Advice and Training

This is the opening editorial that Tim Surma and I wrote for the Autumn 2020 edition of Impact on evidence-informed pedagogy. The reason we put together a publication on this theme is simple and really straightforward: if we, as educational professionals, choose to inform the choices we make for our practice with the best available evidence, we can make meaningful, even striking, enhancements in our pedagogical practice, and thus in the efficiency, effectiveness, and success of our teaching and of children’s learning.

What is evidence-informed pedagogy?

Some educational policy makers, politicians, and teachers use the term evidence-based when they speak of instruction and teaching, while others (we, for example) use the term evidence-informed. Is there a difference, and if so, what is it? There is an, albeit sometimes subtle, distinction between evidence-based and evidence-informed in terms of practice in education. Originating in medicine but now used across numerous professions such as economics, technology, and agriculture, evidence-based practice is an approach that focuses practitioner attention on sound empirical evidence in professional decision making and action (Rousseau & Gunia, 2016). In medical research, for instance, research processes are more rigorous, more well-defined, and more controllable than in the educational sciences, which makes the outcomes more clear-cut and reliable. As Neelen and Kirschner (2020, p. 3) state:

Sackett et al (1996) see it as a three legged stool integrating three basic principles: (1) the best available research evidence bearing on whether and why a treatment works, (2) clinical expertise of the health care professional (clinical judgment and experience) to rapidly identify each patient’s unique health state and diagnosis, their individual risks and benefits of potential interventions, and (3) client preferences and values.

Here everything is clear-cut. The target population is clearly defined with respect to age, weight, disease, and so forth. The directions for use are also clear-cut, for example that the medicine should be taken on an empty stomach, one hour before eating.

Evidence-informed practice is still based on empirical evidence, but acknowledges that in real classroom practice it is harder to determine what works for whom under which circumstances. What seems to work in one classroom does not always work in another. Five-year-olds differ from fifteen-year-olds with respect to both their cognitive development and their knowledge and expertise, a lesson on concepts and definitions is different from a lesson on applications, and, to a lesser extent, a lesson in chemistry differs from a lesson in drawing. Also, what works for one teacher might not work for another because teachers differ qualitatively; subtle and not-so-subtle differences between teachers mean that the way they carry out the same thing differs both in how it is carried out and in how it is perceived by their students. And what works in a lesson today won’t necessarily work in the same lesson this afternoon, tomorrow, or in three months. The mere fact that learners differ with respect to their prior knowledge, beliefs, needs, and/or motivations to participate can change everything. Unfortunately, this entropy (i.e., lack of order or predictability) of the classroom does not allow us to predict with statistical ‘certainty’ which intervention will yield which effect and when. Even in perfect circumstances with the best-prepared lessons, some of our students might still underperform, despite the evidence brought to us by eminent cognitive and educational psychologists. While ‘evidence-based’ provides fairly hard results, ‘evidence-informed’ is less hard, but still very useful, with a higher chance of success if applied thoughtfully. That is why in this issue we advocate a pedagogy informed by evidence rather than a pedagogy based on (or dictated by?) evidence.
The challenge of going from the evidence to the design of actual pedagogical practices in the classroom calls for a deep understanding (let’s call it pedagogical knowledge) of what, why, and when something works under optimal conditions, in order, for example, to have conversations with your fellow teachers and headmasters about certain pedagogical decisions or actions.

Second, the literature presents a variety of accounts of exactly what pedagogy is. Since pedagogy has both broad and narrow definitions, we have had to make a choice and have chosen to follow Dylan Wiliam (2018) in using the broad definition. He cites Alexander (2008) in stating that pedagogy is “what one needs to know, and the skills one needs to command, in order to make and justify the many different kinds of decision of which teaching is constituted” (p. 47). It can be seen as the act and discourse of teaching (Alexander, 2004). Pedagogy therefore includes instruction but is broader, also embracing the interplay between the factors that influence teaching and learning. Both evidence-informed practice and pedagogy assume that the educational professional knows what the best options for optimal teaching and learning might be under given circumstances (knowing your repertoire as a teacher).

What do we know about teacher repertoires? We have already learned a lot about classroom practices from the abundance of quality research conducted in laboratories and schools, online and offline, and virtually anywhere and whenever teaching and learning take place. The evidence is out there. In the past few decades, researchers have designed interventions and devised general techniques – not rarely based on two-way interaction between researchers and practitioners – that work or do not work for particular learners of particular ages undertaking particular academic tasks in particular subject areas (see, for example, Roediger & Pyc, 2012). Some fundamental techniques from cognitive and educational research were derived from this substantial base of empirical research, and some of these are gaining attention as they hold the promise of being sufficiently general to be applied across a range of academic subject areas and readily implemented in classrooms across all ages (for an overview of learning strategies see Dunlosky, Rawson, Marsh, Nathan, & Willingham, 2013). Several examples of effective techniques are elaborated in this issue, as these general approaches may need domain-specific adjustments to maximise their promise as learning tools for particular domains – in itself a shining example of evidence-informed practice. Retrieval practice, the act of engaging in active recall of already learned information (see Roediger & Karpicke, 2006), is adapted to the perspective of CPD (Beauchamp, this issue); Clare Badger translates cognitive load theory (Sweller, 1988) into practical guidelines for chemistry courses; the provision and function of individual feedback (Hattie & Timperley, 2007) is tackled by Caroline Locke; and popular learning myths (Kirschner & Van Merriënboer, 2013) are challenged by both Jonathan Firth and Jennifer Zyke as well as by Lewis Baker.

Some other evidence-informed principles are much less domain-general, which is why Richard Mayer (2004) once called them theories of subject matter, and some of them are still regarded as a “unique and monumental contribution of educational psychology to the science of learning” (Mayer, 2018, p. 175). Striking examples are the theory of how people learn to read (i.e., explicit phonics instruction), how people learn a second language, and so forth. This issue also pays attention to the subject-specific uniqueness of teaching, with a focus on a selection of subjects that are less often in the spotlight of educational research, such as the arts (by Gatward) and physics (by Astolfi).

Although we may now sound endlessly self-confident about evidence informing education, we must of course temper our enthusiasm to some extent: we obviously do not know the answer to every question, for the simple reason that education does not take place in isolation. The evidence is out there, but it is neither definitive nor complete. As an example, the concept of affect (students’ experience of emotion) is gaining growing recognition as an essential component of teaching and learning, but still holds many secrets for both researchers and seasoned teachers (Mayer, 2018). Therefore, some consideration was given in this issue to educational outcomes beyond the retention of basic declarative and procedural knowledge. Several articles explore pedagogies such as playful learning in the early years (by Sarah Seleznyov), reading for pleasure (by Alice Reedy), and the crossroads between coaching and direct instruction (by Ed Cope and Chris Cushion). Given the broad range of content areas represented in this issue, our readers should not be surprised that the educational outcomes discussed here differ greatly.

The UK occupies pole position worldwide in educational undertakings supporting the implementation of structured, evidence-informed education. We think of influential research centres such as the Education Endowment Foundation, and of professional learning communities such as the Chartered College of Teaching and the ever-growing researchED community. A number of articles in this issue zoom in on this fascinating but complex interplay between research and practice. Andrew Davis elaborates on classroom research, Richard Churches and colleagues shine a light on teacher-led randomized controlled trials, and Lorne Stefanini and Jenny Griffiths address some of the challenges of implementing an evidence-informed approach to education.

This issue might be what David Daniel (2012) described as a targeted investment in translational research: with this issue CCT supports the development of pedagogical approaches with the goal of understanding how, when and under what constraints to apply best-evidence strategies in relevant educational contexts. Readers will find a multiplicity of approaches in the current issue, all aimed at revealing how to inform your pedagogy with the best available evidence. We hope that this issue can help you to make more and better evidence-informed decisions.

Enjoy, learn, and use the content to reflect upon and improve both your teaching and your students’ learning!


Alexander, R. (2004). Still no pedagogy? Principle, pragmatism and compliance in primary education. Cambridge Journal of Education, 34, 7–33.

Alexander, R. (2008). Essays on pedagogy. London: Routledge.

Black, P., & Wiliam, D. (2018). Classroom assessment and pedagogy. Assessment in Education: Principles, Policy & Practice, 25, 551-575.

Daniel, D. B. (2012). Promising principles: Translating the science of learning to educational practice. Journal of Applied Research in Memory and Cognition, 1, 251–253.

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77, 81-112.

Kirschner, P. A., & van Merriënboer, J. J. (2013). Do learners really know best? Urban legends in education. Educational Psychologist, 48, 169–183.

Mayer, R. E. (2004). Teaching of subject matter. In S. T. Fiske (Ed.), Annual Review of Psychology (Vol. 55, pp. 715–744). Palo Alto, CA: Annual Reviews.

Mayer, R. E. (2018). Educational psychology’s past and future contributions to the science of learning, science of instruction, and science of assessment. Journal of Educational Psychology, 110, 174–179.

Neelen, M., & Kirschner, P. A. (2020). Evidence-informed learning design: Creating training to improve performance. London, UK: Kogan Page.

Roediger, H. L., & Karpicke, J. D. (2006). The power of testing memory: Basic research and implications for educational practice. Perspectives on Psychological Science, 1, 181–210.

Roediger, H., & Pyc, M. (2012). Inexpensive techniques to improve education: Applying cognitive psychology to enhance educational practice. Journal of Applied Research in Memory and Cognition, 1, 242–248.

Rousseau, D. M., & Gunia, B. C. (2016). Evidence-based practice: The psychology of EBP implementation. Annual Review of Psychology, 67, 667-692.

Sackett, D. L., Rosenberg, W. M., Gray, J. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. Clinical Orthopaedics and Related Research, 455, 3–5.

Sweller, J. (1988). Cognitive load during problem solving: effects on learning. Cognitive Science, 12, 275–285.

That our XXXs are the same does not mean that we are the same

I always say that our cognitive architecture is simple and that this holds for everyone (special exceptions aside). In short, we all have a sensory memory that perceives and handles incoming information (stimuli), a working memory that does something with the information that is perceived, and a long-term memory that stores the processed information. Here is a schematic representation of our cognitive architecture:

Simplified depiction of our cognitive architecture

It is always the same people, with the same arguments, who fiercely oppose the idea that our brains are ‘built’ alike. They always use similar arguments, such as:

  • “so you think everyone is the same!”,
  • “so you’re saying that everyone learns the same way!” and/or
  • “so you mean that everyone should be taught in the same way!”

I could use many words to rebut all of this, but instead I will opt for a simple analogy.

Our anatomy as human beings holds for everyone (again, as with our cognitive architecture, special exceptions aside). Our circulatory system (heart to lungs, to heart, to arteries, to capillaries, to veins, to heart) is the same. Our skeleton (the types of bones, joints, and cartilage) and the types of muscles we have, and where they sit in our bodies (striated skeletal muscle, smooth muscle, and cardiac muscle), do not differ. We all have brains, spleens, lungs, hearts, kidneys, appendixes, stomachs, intestines, and so on. We all also have two eyes, two ears, a nose, a neck, a torso, arms, hands, fingers, legs, feet, and toes. I will stop here, except to note that women and men (anatomically speaking) naturally have a few organs and so forth that differ from each other. Maybe I am wrong, but I do not think anyone would dispute any of this.

Simplified depiction of our circulatory system
Simplified depiction of our skeleton

If I say that human anatomy is the same across people, does that mean I am saying that everyone is the same, that people do not differ from one another in all sorts of ways? Does it mean I am saying that everyone functions the same, is equally strong, runs equally fast, and is equally healthy? Does it mean I am saying that everyone should follow the same training schedules or diets? Of course not.

I hope everyone sees that it is nonsense to say that, if our physical architecture is essentially the same for all people, this would mean that all people are the same, that people do not differ, and that they should not be treated differently.

Why, then, should this be the case when I or my colleagues claim that our cognitive architecture is essentially the same for all people? Why do people suddenly become emotional and cry that this is absolutely not so and that we are all unique in our own way, wielding the straw-man arguments and examples above to make their point?

Out of faith, conviction, philosophy? I think so. Out of scientific evidence? I think not.

Heavy electronic media use in late childhood linked to lower academic performance

A new article in PLOS ONE reports an association between heavy television use and poorer reading performance in 8- to 11-year-olds, as well as between heavy computer use and poorer numeracy (the ability to work with numbers).

The researchers (Lisa K. Mundy, Louise Canterford, Monsurul Hoq, Timothy Olds, Margarita Moreno-Betancur, Susan Sawyer, Silja Kosola, and George C. Patton) studied 1,239 8- to 9-year-olds in Melbourne, Australia, using national achievement test data to measure the children’s academic performance at baseline and again after two years. They also asked the children’s parents to report on their kids’ use of electronic media. They found that watching two or more hours of television per day at the age of 8 or 9 was associated with lower reading performance, compared with peers, two years later; the difference was equivalent to losing four months of learning. Using a computer for more than one hour per day was linked to a similar loss in numeracy. The analysis showed no link between videogame use and academic performance.
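As a rough back-of-the-envelope check, the article's equivalence of a 12-point score gap to about a third of a year (four months) of learning implies an assumed growth rate of roughly 36 score points per school year. That figure is inferred from the equivalence itself, not stated in the study, so the sketch below is illustrative only:

```python
# Rough conversion of achievement-test score points to months of learning.
# POINTS_PER_YEAR is inferred from the article's own equivalence
# (12 points ~ one third of a year), not a figure reported by the study.
POINTS_PER_YEAR = 36.0

def points_to_months(point_difference: float) -> float:
    """Convert a score-point difference to months of learning (12-month year)."""
    return point_difference / POINTS_PER_YEAR * 12.0

# The 12-point reading gap corresponds to about 4 months (a third of a year):
print(points_to_months(12))              # 4.0
# The 13-point numeracy gap is slightly larger:
print(round(points_to_months(13), 1))    # 4.3
```

The conversion makes it easy to see why the reading and numeracy gaps reported below are both described as roughly a third of a year of lost learning.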

Here’s the abstract and reference (including a link to the article):


The effects of electronic media use on health have received much attention, but less is known about links with academic performance. This study prospectively examines the effect of media use on academic performance in late childhood.

Materials and methods

1239 8- to 9-year-olds and their parents were recruited to take part in a prospective, longitudinal study. Academic performance was measured with a national achievement test at baseline and again at 10–11 years of age. Parents reported on their child’s duration of electronic media use.


Results

After control for baseline reading, watching more than two hours of television per day at 8–9 years of age predicted a 12-point lower performance in reading at 10–11 years, equivalent to the loss of a third of a year of learning. Using a computer for more than one hour a day predicted a similar 12-point lower numeracy performance. Regarding cross-sectional associations (presumed to capture short-term effects) of media use on numeracy, after controlling for prior media exposure, watching more than two hours of television per day at 10–11 years was concurrently associated with a 12-point lower numeracy score, and using a computer for more than one hour per day with a 13-point lower numeracy performance. There was little evidence of concurrent effects on reading. There was no evidence of short- or long-term associations between videogame use and academic performance.


Conclusions

Cumulative television use is associated with poorer reading, and cumulative computer use with poorer numeracy. Beyond any links between heavy media use and health risks such as obesity, lack of physical activity, and poorer mental health, these findings raise the possibility of additional risks of both television and computer use for learning in mid-childhood. These findings carry implications for parents, teachers, and clinicians to consider the type and timing of media exposure in developing media plans for children.

Mundy, L. K., Canterford, L., Hoq, M., Olds, T., Moreno-Betancur, M., Sawyer, S., Kosola, S., & Patton, G. C. (2020). Electronic media use and academic performance in late childhood: A longitudinal study. PLOS ONE, 15(9), e0237908. DOI: 10.1371/journal.pone.0237908

Traditional is the new progressive

This blog post broadens and deepens my column in the September issue of Didactief.

I just read Robert Peal’s book Progressively Worse: The Burden of Bad Ideas in British Schools, in which he describes the rise of progressive education in the United Kingdom and its effects on educational quality, pupil achievement, and pupil behaviour. Suffice it to say that since the introduction of ‘progressive education’ in the 1960s, the UK has not improved on any of these three fronts.

John Dewey, his school, and his books from 1899 (The School and Society) and 1902 (The Child and the Curriculum) are often cited when it comes to progressive education (along with people like Rousseau, Pestalozzi, and Fröbel). Peal describes Dewey’s child-centred vision of the learning process:

[A] child’s learning must be ‘assimilated, not as items of information, but as organic parts of his present needs and aims’. [Dewey] opposed the didactic teacher[1], adding: ‘Ultimately, the educator can only modify stimuli’. His experimental schools aimed to teach literacy and arithmetic indirectly, by engaging young pupils in activities such as cooking or carpentry[2].

Peal then quotes from a 1934 lecture by William Bagley, professor of education at Columbia University’s Teachers College, in which Bagley describes the emotional appeal of progressivism. I find his words so wonderful in the original English that I hope readers will forgive me for leaving them untranslated:

If you wish to be applauded at an educational convention, vociferate sentimental platitudes about the sacred rights of the child, specifying particularly his right to happiness gained through freedom. You are likely to get an extra ‘hand’ if you shed a few verbal tears over the cruelty of examinations and homework, while if with eloquent condemnation you deftly bring into every other sentence one of the favourite stereotypes of abuse, such as Latin, mathematics (geometry, especially), grammar, the traditional curriculum, compartmentalization, “chunks of the subject matter” to be memorised, discipline, formal discipline, and the like, you may be fairly certain of an ovation.

Peal then describes, in detail, how from 1960 onwards progressivism became ever more firmly entrenched in educational policy, the schools, the teachers, the teacher-training colleges, the teachers' unions, politics (and legislation), the university faculties of education, and even part of the educational press (read TES: Times Educational Supplement) in the UK[3]. In other words, progressive education became the norm, with one progressive reform stacked on top of another. And after each reform, pupil achievement declined, accompanied by an increase in classroom disorder (with excesses such as verbal and physical violence between pupils and by pupils against teachers). And with every failure, the blame was shifted either to the teachers, who supposedly implemented progressive education poorly or failed to motivate the children properly, or to the environment, because 'what could you expect from children of low socioeconomic classes?'[4]. The latter is what Peal calls 'the soft bigotry of low expectations'.

While reading the book, I could not help spotting parallels between what happened in the UK and what has also taken place in the Netherlands. Here too, from the second half of the last century onwards, we have seen a series of educational reforms that were largely cast in a progressive, child-centred, knowledge-averse, anti-authoritarian mould. These changes can be found in:

  • teaching methods (e.g., realistic mathematics, whole-language reading),
  • curricula (aimed at acquiring social attitudes and vague '21st-century' skills instead of knowledge followed by skills),
  • the role, function, and task of the teacher, which changed from that of an instructor with deep knowledge and skill who teaches and forms children into that of a guide, motivator, facilitator, and so on,
  • teacher-training colleges and pabo's (Dutch primary teacher-training colleges), where educational myths and philosophies are taught instead of research-informed instruction on how children learn and how teaching can be designed, and finally
  • the transfer of power, tasks, and money to what Peal calls quangos (pronounced 'kwangos'; an acronym for quasi non-governmental organisation). A quango[5] is an organisation that carries out a public service or statutory task and is (partly) financed with public funds, but does not fall (directly) under the government and enjoys a certain degree of independence. Think of SLO, Kennisnet, and so on.

As in the UK, pupil achievement here has also declined, capped off by the report last spring that roughly a quarter of 14-15-year-olds in the Netherlands are functionally illiterate (low-literate) and half are low-numerate. Mind you, this is not about children who played truant or dropped out early, but about children who duly completed their schooling, as compulsory education requires.

And here too, a great deal of money was and is being spent on ever-new educational reforms. Just think of the more than 25 million euros spent on Onderwijs 2032 and its successor (the latter has only just begun), not to mention the basisvorming and the tweede fase.

Reading all of this, the thought occurred to me that if something like 'progressive education' took hold in the 1960s and has thus dominated educational thinking and practice for 60 years, it is time to call this phenomenon traditional. From that it perhaps follows that education which first focuses on acquiring knowledge, after which that knowledge is used to acquire skills and attitudes; where teaching is grounded in scientific evidence; where teacher-training colleges and pabo's are given the time and space to teach prospective teachers how children learn and which learning strategies can and should best be used for that; where the teacher is once again a teacher and not a guide; where children from 'lesser' backgrounds are challenged instead of merely tolerated; and so on, is actually highly innovative and progressive.

Good evidence-informed education is actually the new progressive!

With thanks to Marcel Schmeier


Bagley, W. (1935). Is subject-matter obsolete? Educational Administration & Supervision, 21, 401-412.

Dewey, J. (1899). The School and Society. Chicago, IL: University of Chicago Press.

Dewey, J. (1902). The Child and the Curriculum (No. 5). Chicago, IL: University of Chicago Press.

Dewey, J. (1938). Experience and Education. Toronto, Canada: Collier-MacMillan Canada Ltd.

Peal, R. (2014). Progressively Worse: The Burden of Bad Ideas in British Schools. London, UK: Civitas. Free to download here.

[1] In English, 'didactic teaching' has a pejorative meaning, in the sense of whole-class frontal instruction (mere chalk and talk).

[2] Note that what is often left unmentioned is that later in his life, in Experience and Education (1938), Dewey explicitly distanced himself from many of his earlier views, admitting that he had considerably underestimated the need for direct instruction from the teacher.

[3] He also sketches how people noticed that education was going downhill and that pupils had become unmanageable. He describes how all the bodies just mentioned attributed this to teachers being unable to motivate the children, or argued that more money had to be spent, or that every failure had to be ascribed to external influences such as the environment, the milieu, the family's socioeconomic status, and so on.

[4] This is called the sociological view of education: pupils' underachievement is an unfortunate but predictable, even self-evident, result of their socially determined life course, captured in the Dutch saying 'if you are born a dime, you will never become a quarter'. It reflects the conviction that someone who stands low on the social ladder will never climb it; a decidedly pessimistic and defeatist view of what a person can achieve!