Recently I came across this work from Koichi Sato at the University of Nebraska–Lincoln (his coordinates are at the bottom). I found it so clear and helpful that I asked him if I could work it into a blog, and I’m happy to say he agreed. I took the liberty of adding a few things, but don’t want to take any of the credit! Richard Mayer’s Cognitive Theory of Multimedia Learning is based on a number of assumptions, namely that there are two separate channels – auditory and visual – for processing information (Paivio, 1990); that channel capacity is limited (Sweller, 1988); and that learning is an active process of filtering, selecting, organizing, and integrating information (Baddeley & Hitch, 1974). Based upon these three assumptions, 14 principles have been developed governing the good (and poor) use of multimedia. Here’s the first.
Multimedia Principle: People learn better when words and pictures are presented together than from words alone.
You often hear that active ‘doing’ (pupil-centred) is better than ‘telling’ (teacher-centred), and that this is even more important and more ‘true’ when it comes to younger children. Unfortunately for that claim, a new study at the University of Bath shows that storytelling – the oldest form of teaching – is the most effective way of teaching primary school children about evolution. A randomised controlled trial found that children learn about evolution more effectively when engaged through stories read by the teacher (i.e., instruction) than through doing tasks to demonstrate the same concept.
The scientists investigated several different methods of teaching evolution in primary schools, to test whether a pupil-centred approach (where pupils took part in an activity) or a teacher-centred approach (where pupils were read a story by the teacher), led to a greater improvement in understanding of the topic.
Abstract of the article in the Nature journal npj Science of Learning:
Current educational discourse holds that effective pedagogy requires engagement through active student participation with subject matter relating to them. The lack of testing of lessons in series is recognized as a potential weakness in the evidence base, not least because standard parallel designs cannot capture serial interaction effects (cf. drug interactions). However, logistic issues make large-scale replicated in situ assessment of serial designs challenging. The recent introduction of evolution into the UK primary school curriculum presents a rare opportunity to overcome this. We implemented a randomised control 2 × 2 design with four inexpensive schemes of work, comparable to drug interaction trials. This involved an initial test phase (N = 1152) with replication (N = 1505), delivered by teachers, after training, in their classrooms with quantitative before-after-retention testing. Utilising the “genetics-first” approach, the schemes comprised four lessons taught in the same order. Lessons 1 (variation) and 3 (deep-time) were invariant. Lesson 2 (selection) was either student-centred or teacher-centred, with subject organism constant, while lesson 4 (homology) was either human-centred or not, with learning mode constant. All four schemes were effective in replicate, even for lower ability students. Unexpectedly, the teacher-focused/non-human centred scheme was the most successful in both test and replicate, in part owing to a replicable interaction effect but also because it enabled engagement. These results highlight the importance of testing lessons in sequence and indicate that there are many routes to effective engagement with no “one-size fits all” solution in education.
Here’s a really clear and useful blog by David Rodger-Goodwin on David Ausubel’s Advance Organisers.
The Situation – Expert teachers create and maintain, almost automatically, complex networks linking the concepts, ideas, and facts within their domain; students don’t yet have these networks.
The Solution – Popularised by the psychologist David Ausubel (1968), advance organisers provide the conceptual framework for the incorporation and retention of new information. They should be presented in advance of a new topic (or sequence of learning) and at a higher level of abstraction than the learning that follows.
You’ve often heard from me about the problem I have with using self-report, as well as teacher-report, as a measure when doing research. My reasons are simple: such reports are subjective rather than objective, and they are usually neither valid nor reliable. These two problems are compounded by the results of this study, namely that a person can only make a judgement if they really understand the concept. This, for example, is a prominent problem with the most often used measure of cognitive load: the one-item scale by Paas and colleagues. The learner is asked to judge how much mental effort a task required. Unfortunately, most people (1) have no basis on which to judge their mental effort as they do for physical effort (e.g., whether they are out of breath, are sweating, have an increased heart rate, or feel their arms or legs getting tired or cramping) and (2) confuse how much mental effort a task required with how difficult the task was, how much trouble they had carrying it out, and so forth.
The authors of this paper (Charlotte Dignath and Lara Sprenger) encountered this problem with respect to teacher assessment of self-regulated learning in their students. In the article ‘Can you only diagnose what you know? The relation between teachers’ self-regulation of learning concepts and their assessment of students’ self-regulation’ they conclude:
We conducted an examination of educators’ conceptualization of SRL, identified three patterns to classify these conceptualizations, and investigated how such conceptualization is associated with teachers’ assessment of SRL. We found that, in particular, teachers who conceptualize SRL as student autonomy and self-directedness might be at risk for using cues that are not diagnostic of SRL when attempting to identify their students’ self-regulation skills. This raises questions about the impact that teachers’ conceptualization of SRL has on their assessment accuracy for SRL, which might, in turn, affect teachers’ adaptive teaching and promoting SRL in the classroom.
The answer to the question in the article’s title is: no.
Here’s the abstract:
Self-regulation of learning (SRL) positively affects achievement and motivation. Therefore, teachers are supposed to foster students’ SRL by providing them with strategies. However, two preconditions have to be met: teachers need to diagnose their students’ SRL to take instructional decisions about promoting SRL. To this end, teachers need knowledge about SRL to know what to diagnose. Only little research has investigated teachers’ knowledge about SRL and its assessment yet. Thus, the aim of this study was to identify teachers’ conceptions about SRL, to investigate their ideas about how to diagnose their students’ SRL, and to test relationships between both. To this end, we developed two systematic coding schemes to analyze the conceptions about SRL and the ideas about assessing SRL in the classroom among a sample of 205 teachers. The coding schemes for teachers’ open answers were developed based on models about SRL and were extended by deriving codes from the empirical data and produced satisfactory interrater reliability (conceptions about SRL: κ = 0.85, SE = 0.03; ideas about assessing SRL: κ = 0.63, SE = 0.05). The results showed that many teachers did not refer to any regulation procedure at all and described SRL mainly as student autonomy and self-directedness. Only few teachers had a comprehensive conception of the entire SRL cycle. We identified three patterns of teachers’ conceptualizations of SRL: a motivation-oriented, an autonomy-oriented, and a regulation-oriented conceptualization of SRL. Regarding teachers’ ideas about assessing their students’ SRL, teachers mainly focused on cues that are not diagnostic of SRL. Yet, many teachers knew about portfolios to register SRL among students. 
Finally, our results suggest that, partly, teachers’ ideas about assessing SRL varied as a function of their SRL concept: teachers with an autonomy-oriented conceptualization of SRL were more likely to use cues that are not diagnostic of SRL, such as unsystematic observation or off-task behavior. The results provide insights into teachers’ conceptions of SRL and of its assessment. Implications for future research in the field of SRL will be drawn, in particular about how to support teachers in diagnosing and fostering SR among their students.
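A side note on those interrater reliabilities: they are Cohen’s κ values, which correct the raw agreement between two coders for the agreement you would expect by chance. A minimal sketch of the computation, with made-up coder judgements (my illustration, not the authors’ data), looks like this:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters coding the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance, computed from each
    rater's marginal category frequencies.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes from two coders classifying ten teacher answers
a = ["motivation", "autonomy", "regulation", "autonomy", "autonomy",
     "motivation", "regulation", "autonomy", "motivation", "autonomy"]
b = ["motivation", "autonomy", "regulation", "autonomy", "regulation",
     "motivation", "regulation", "autonomy", "motivation", "autonomy"]
print(round(cohens_kappa(a, b), 2))  # → 0.85
```

Note that the coders here agree on 9 of 10 items (raw agreement .90), yet κ is lower because some of that agreement would occur by chance; that correction is exactly why κ is reported instead of simple percent agreement.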
Em. prof. dr. Paul A. Kirschner, Editor-in-Chief, Open University of the Netherlands / Thomas More University of Applied Sciences, and
Dr. Jeroen Janssen, Associate Editor, Utrecht University, Netherlands
The science of learning and instruction is a rapidly evolving field, with a variety of theoretical and methodological approaches. It is also sometimes burdened by two gadflies, namely the so-called ‘replication crisis’, where attempts to replicate accepted theories and results may fail (for whatever reason), and ‘positive’ publication bias, where articles that fail to confirm hypotheses and/or don’t produce statistically significant results have a lesser chance of getting published. In an attempt to combat both of these problems and increase trust in research, we at the Journal of Computer Assisted Learning have chosen to explicitly invite authors to submit Registered Reports – a new format of empirical article designed to improve the transparency and reproducibility of hypothesis-driven research.
The call to invite authors to submit their Registered Reports went out in February 2018. Since then, we have received a growing number of Registered Reports submissions, and we are now delighted to publish our first collection of Registered Reports. To us, this proves that the learning and instruction community is responding positively to this new format.
Registered Reports differ from conventional empirical articles in that part of the review process takes place before researchers even collect and analyze data. The Introduction and Method sections (including hypotheses and all relevant materials) are peer-reviewed prior to data collection. High-quality pre-registered protocols that meet strict editorial criteria are then offered in-principle acceptance, which guarantees publication of the results, provided that the authors adhere to their pre-registered protocol.
“Registered Reports (RR) place an emphasis on the adequacy of methods and analysis plan for studies deemed to be informative. They thus benefit both the submitting authors and the discipline. If the rationale and the methods of the planned studies are sound, and accepted during the review process, authors can expect their work to be published, unless they deviate from the accepted methods, even if the results are weak, null, or different than predicted. The discipline gains, because the publication process will be more transparent and is therefore more likely to curtail questionable research practices. This increases the reliability and reproducibility of data. … [Also, in] RR format interpretations are less likely to be modified post hoc to fit the initial predictions.”
Three arguments for Registered Reports
For studies with a clear hypothesis, the Registered Reports format has three key strengths compared with traditional research publishing. First, it prevents publication bias by ensuring that editorial decisions are made on the basis of the theoretical importance and methodological rigor of a study, before research outcomes are known. Second, by requiring authors to pre-register their study methods and analysis plans in advance, it prevents common forms of research bias including p-hacking (mis-use of data analysis to find patterns that are statistically significant) and HARKing (Hypothesizing After Results are Known or hindsight bias) while still welcoming unregistered analyses that are clearly labelled as exploratory. Third, because protocols are accepted in advance of data being collected, the format provides an incentive for researchers to conduct important replication studies and other novel, resource-intensive projects (e.g. involving multi-site consortia) — projects that would otherwise be too risky to undertake where the publishability of the outcome is contingent on the results.
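To see why pre-registration is such an effective brake on p-hacking, consider a quick back-of-the-envelope simulation (my own illustration, not part of the editorial): a researcher who measures 20 independent outcomes with no real effect, and reports whichever one happens to cross α = .05, will ‘find’ at least one spurious significant result in roughly two out of three studies.

```python
import random

ALPHA = 0.05        # conventional significance threshold
N_OUTCOMES = 20     # outcome measures "tried" per study
N_STUDIES = 20_000  # simulated studies in which NO real effect exists

random.seed(1)

# Under the null hypothesis a p-value is uniform on [0, 1], so each
# test has a 5% chance of a (spurious) significant result. A study is
# "p-hacked" into significance if ANY of its 20 tests comes out below alpha.
false_positive_studies = sum(
    any(random.random() < ALPHA for _ in range(N_OUTCOMES))
    for _ in range(N_STUDIES)
)

# Analytic rate: 1 - (1 - .05)^20 ≈ 0.64, i.e. about two in three studies
print(f"Analytic rate:  {1 - (1 - ALPHA) ** N_OUTCOMES:.2f}")
print(f"Simulated rate: {false_positive_studies / N_STUDIES:.2f}")
```

A pre-registered analysis plan removes exactly this freedom: with the single outcome and test fixed in advance, the false-positive rate stays at the nominal 5%.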
It is encouraging to see the positive reaction to Registered Reports from researchers in their various roles as editors, authors and peer reviewers. For more information and answers to frequently asked questions about Registered Reports, see https://cos.io/rr/
Authors are invited to submit ‘Stage 1 manuscripts’ which outline the rationale and method of the proposed study or studies (see Guidelines for Authors).
All submitted papers will be peer-reviewed for theoretical significance and methodological quality. In-Principle Acceptance (IPA) will be given to high-quality submissions. Data collection may only commence after a Stage 1 manuscript has been accepted.
Once the study (or studies) is complete, the ‘Stage 2 manuscript’ will also be peer-reviewed, to check that it is consistent with the pre-registered Stage 1 protocol. Editorial decisions will not be based on the perceived importance, novelty or conclusiveness of the results.
The journal operates double-blind peer review as a means of tackling real or perceived bias in the review process, so authors must provide their title page as a separate file from the main document. The title page includes the complete title of the paper, plus the affiliation and contact details (both postal address and email address) of the corresponding author.
Five tips for preparing a Registered Report for submission
The following tips were offered by Ansari and Gervain. They are also useful for authors considering submitting a Registered Report to the Journal of Computer Assisted Learning.
Ensure that you have all the resources necessary to carry out the research you have proposed. You don’t want to receive in-principle acceptance of a Stage 1 Registered Report, only to find out that you do not have the resources to collect the data! Hopefully in the near future, funding agencies will begin to support Stage 1 manuscripts that have received IPA.
Make sure that you are setting yourself a realistic timeline that takes into account variability in the time it will take to get your manuscript reviewed at both Stage 1 and 2. This is especially important for graduate students and post-doctoral fellows who may have pressing deadlines for finalizing their data collection so that they can defend, move to a new position etc.
Be very precise in your hypotheses and analysis plan. Registered Reports are all about spending a lot of time thinking and discussing your research plans before you collect the data. Being as precise as possible and having a very detailed analysis plan can save you a lot of time both during peer review as well as the analysis after you have collected the data and before you prepare your Stage 2 manuscript.
Make sure that your research ethics approval will allow you to share your data on an open data repository, such as the Open Science Framework. Open data are a requirement for the publication of a Registered Report.
I always say that our cognitive architecture is simple and that this applies to everyone (special exceptions aside). In short, we all have a sensory memory that perceives/handles incoming information (stimuli), a working memory that does something with the information that is perceived, and a long-term memory that stores the processed information. Here is a schematic representation of our cognitive architecture:
It is always the same people with the same arguments who fiercely oppose the idea that our brains are ‘built’ the same way. They also always use similar arguments, such as
“so you think everyone is the same!”,
“so you’re saying everyone learns the same way!” and/or
“so you believe everyone should be taught in the same way!”
I could use many words to refute all of this, but here I’ll opt for a simple analogy.
Our anatomy as humans applies to everyone (again – as with cognitive architecture – special exceptions aside). Our circulatory system (heart to lungs, to heart, to arteries, to capillaries, to veins, to heart) is the same. Our skeleton (types of bones, joints and cartilage) and the types of muscles we have – and where they are located in our body – (striated skeletal muscles, smooth muscles and cardiac muscle) do not differ. We all have brains, spleens, lungs, hearts, kidneys, appendixes, stomachs, intestines, and so on. We also all have two eyes, two ears, a nose, a neck, a torso, arms, hands, fingers, legs, feet and toes. I’ll stop here, except to note that women and men (anatomically speaking) naturally have a few organs and so forth that differ from each other. Maybe I’m wrong, but I don’t think anyone would object to any of this.
If I say that human anatomy is the same from person to person, does that mean I’m saying everyone is the same, that people don’t differ from each other in all kinds of ways? Does it mean I’m saying that everyone functions the same, is equally strong, runs equally fast and is equally healthy? Does it mean I’m saying everyone should follow the same training schedules or diets? Of course not.
I hope everyone sees that it is nonsense to say that, if our physical architecture is essentially the same for all people, this would mean that all people are the same, that people don’t differ, and that people shouldn’t be treated differently.
Why, then, would this be the case when I or colleagues claim that our cognitive architecture is essentially the same for all people? Why do people suddenly get emotional and shout that this is absolutely not so, and that we are all unique in our own way, using the strawman arguments and examples above to make their point?
Out of faith, convictions, philosophy? I think so. Out of scientific evidence? I think not.
I read an article today in my local paper Dagblad De Limburger about a school district where: “… in the morning when the cleaning service is present and in the evening when they are back at work, all the windows are set wide open against each other [for a cross-draught]. Ventilation is also possible in between … at the request of teachers, CO2 meters were also purchased to measure the amount of carbon dioxide in the classrooms …” This set me thinking, because I remembered a number of articles about CO2 and learning.
Normal outside air contains carbon dioxide (CO2) at a level of around 400 ppm (parts per million), and an accepted standard for a classroom is 1,000 ppm, which is not hard to maintain if the room is empty. But if the classroom is full of children, the level rises dramatically, because we breathe out carbon dioxide with every breath. A classroom study in California (2013) found that CO2 levels can reach 2,200 ppm, more than twice the recommended level and three times the level normally found in an office setting. A study in Texas (2002) found CO2 levels over 3,000 ppm in 21% of the classrooms tested; levels not conducive to efficient learning!
If the windows in a classroom are closed all day, or if the room is poorly ventilated, the CO2 level in the room rises. The research I remembered was about CO2 concentrations and their adverse effect on lessons later in the day: in lessons at the start of the day – after the classroom had been able to air out all night – learning and learning results were better than in lessons later in the day. Those results were attributed to the build-up of CO2 during the day. While searching for that article (which I didn’t find), I came across the following research results from Pawel Wargocki, José Alí Porras-Salazar, Sergio Contreras-Espinoza, and William P. Bahnfleth (2020):
…reducing CO2 concentration from 2,100 ppm to 900 ppm would improve the performance of psychological tests and school tasks by 12% with respect to the speed at which the tasks are performed and by 2% with respect to errors made. For other learning outcomes and short-term sick leave, only the relationships published in the original studies were available. They were therefore used to make predictions. These relationships show that reducing the CO2 concentration from 2,300 ppm to 900 ppm would improve performance on the tests used to assess progress in learning by 5% and that reducing CO2 from 4,100 ppm to 1,000 ppm would increase daily attendance by 2.5%. These results suggest that increasing the ventilation rate in classrooms in the range from 2 L/s-person to 10 L/s-person can bring significant benefits in terms of learning performance and pupil attendance; no data are available for higher rates. The results provide a strong incentive for improving classroom air quality and can be used in cost-benefit analyses.
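Why do those ventilation rates translate into those CO2 concentrations? A simple steady-state mass balance does the trick: at equilibrium, indoor CO2 equals outdoor CO2 plus the occupants’ emission divided by the ventilation rate. The sketch below uses an assumed, textbook-style emission rate per child (my rough value, not a figure from the paper):

```python
OUTDOOR_PPM = 400        # typical outdoor CO2 level (ppm)
CO2_PER_CHILD = 0.0043   # ASSUMED CO2 emission per child, L/s (rough value)

def steady_state_ppm(ventilation_l_s_person,
                     outdoor_ppm=OUTDOOR_PPM, emission=CO2_PER_CHILD):
    """Steady-state indoor CO2 (ppm) at a given per-person ventilation rate.

    C_indoor = C_outdoor + G / Q, with emission G and ventilation Q both
    per person; the dimensionless fraction G/Q is converted to ppm.
    """
    return outdoor_ppm + emission / ventilation_l_s_person * 1_000_000

for q in (2, 6, 10):  # L/s per person, the range discussed above
    print(f"{q:>2} L/s-person -> {steady_state_ppm(q):.0f} ppm")
# →  2 L/s-person -> 2550 ppm
# →  6 L/s-person -> 1117 ppm
# → 10 L/s-person -> 830 ppm
```

With these assumptions, moving from 2 to 10 L/s-person drops steady-state CO2 from roughly 2,550 ppm to about 830 ppm – in the same ballpark as the 2,100 → 900 ppm improvement the quote describes.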
Petersen, Jensen, Pedersen, and Rasmussen (2016) found that:
Analysis of the total sample suggested the number of correct answers improved significantly in four of four performance tests: addition (6.3%), number comparison (4.8%), grammatical reasoning (3.2%), and reading and comprehension (7.4%), when the outdoor air supply rate was increased from an average of 1.7 (1.4–2.0) to 6.6 L/s per person. The increased outdoor air supply rate did not have any significant effect on the number of errors in any of the performance tests. Results from questionnaires regarding pupils’ perception of the indoor environment, reported Sick Building Syndrome symptoms, and motivation suggested that the classroom air was perceived as more still, and that pupils experienced less pain in the eyes, in the recirculation condition compared to the fresh air condition.
Finally, though I could go on for pages, research at Harvard University by Allen and colleagues (2016) found “statistically significant declines” in cognitive function scores when CO2 concentrations were increased to 950 ppm, which is “common in indoor spaces”. The study found even larger declines when CO2 was raised to 1,400 ppm.
In other words, every cloud – no matter how dark – can also have a silver lining.
Allen, J. G., MacNaughton, P., Satish, U., Santanam, S., Vallarino, J. & Spengler, J. D. (2016). Associations of cognitive function scores with carbon dioxide, ventilation, and volatile organic compound exposures in office workers: A controlled exposure study of green and conventional office environments. Environmental Health Perspectives 124, 6. https://doi.org/10.1289/ehp.1510037
Corsi, R. L., Torres, V. M., Sanders, M., & Kinney, K. A. (2002). Carbon dioxide levels and dynamics in elementary schools: Results of the TESIAS study. In Indoor Air 2002: Proceedings of the 9th International Conference on Indoor Air Quality and Climate, Monterey, CA (pp. 74–79). Espoo, Finland: ISIAQ.
Mendell, M. J., & Heath, G. A. (2005). Do indoor pollutants and thermal conditions in schools influence student performance? A critical review of the literature. Indoor Air, 15, 27–32.
Petersen, S., Jensen, K. L., Pedersen, A. L., & Rasmussen, H. S. (2016). The effect of increased classroom ventilation rate indicated by reduced CO2 concentration on the performance of schoolwork by children. Indoor air, 26, 366–379. https://doi.org/10.1111/ina.12210
Wargocki, P., Porras-Salazar, J. A., Contreras-Espinoza, S., & Bahnfleth, W. (2020). The relationships between classroom air quality and children’s performance in school. Building and Environment, 173, 106749.
Found on CogSciSci: a really interesting blog by Rob King (@Ironic_Bonding) on spaced and massed practice in teaching learners how to drive, along with some worrying ramifications for road safety. He then writes about applying this to the chemistry classroom, setting his blog up with a kind of graphic organiser.
It was previously claimed that the font Sans Forgetica, a typeface created by a multidisciplinary team of designers and behavioural scientists at RMIT University, could enhance people’s memory for information. However, after carrying out numerous experiments, researchers have found that the font does not enhance memory.
Researcher Dr Kimberley Wade, from the Department of Psychology at the University of Warwick, comments:
“After conducting four peer-reviewed experiments into Sans Forgetica and comparing it to Arial, we can confidently say that Sans Forgetica promotes a feeling of disfluency, but does not boost memory like it is claimed to.
“In fact, it seems like although Sans Forgetica is novel and hard to read, its effects might well end there.”
A discussion of the research can be found here. Unfortunately for many, the article itself is behind a paywall. Here’s the abstract:
Scientists working at the intersection of cognitive psychology and education have developed theoretically-grounded methods to help people learn. One important yet counterintuitive finding is that making information harder to learn – that is, creating desirable difficulties – benefits learners. Some studies suggest that simply presenting information in a difficult-to-read font could serve as a desirable difficulty and therefore promote learning. To address this possibility, we examined the extent to which Sans Forgetica, a newly developed font, improves memory performance – as the creators of the font claim. Across four experiments, we set out to replicate unpublished findings by the font’s creators. Subjects read information in Sans Forgetica or Arial, and rated how difficult the information was to read (Experiment 1) or attempted to recall the information (Experiments 2–4). Although subjects rated Sans Forgetica as being more difficult to read than Arial, Sans Forgetica led to equivalent memory performance, and sometimes even impaired it. These findings suggest that although Sans Forgetica promotes a feeling of disfluency, it does not create a desirable difficulty or benefit memory.